According to Feedspot, rCUDA is one of the Top 100 Influencers, Blogs, Podcasts & YouTubers in Spain. We thank Feedspot for this great news!

The rCUDA Team is glad to present its live demo at SC19 in the Mellanox booth. Please come to the booth and learn about the new features that will be included in the next release of rCUDA.

During this year, the rCUDA Team has presented the rCUDA technology in Lugano (Switzerland), Perth (Australia), and Leicester (UK). You can access the recordings of each talk here: Lugano, Perth, Leicester.

The rCUDA Team is glad to announce that support for the Slurm job scheduler has been completed and will be included in the next release of rCUDA. With this support, jobs submitted to the Slurm queues can use the remote GPUs provided by rCUDA in a transparent way, without modifying either the applications or Slurm itself.
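To illustrate what "transparent" means in practice, below is a minimal sketch of an ordinary CUDA program of the kind a Slurm job could submit; nothing rCUDA- or Slurm-specific appears in the source, and the kernel, sizes, and comments about scheduling are purely illustrative assumptions, not part of the rCUDA or Slurm interfaces.

// ordinary_job.cu -- an unmodified CUDA application (illustrative sketch)
#include <cstdio>
#include <cuda_runtime.h>

__global__ void axpy(float a, const float *x, float *y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x = nullptr, *y = nullptr;
    // Standard CUDA runtime calls; when the job runs through rCUDA, these calls
    // are serviced by a GPU in another node (assumed to be the one the
    // rCUDA-aware Slurm assigns), with no change to this source code.
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));
    cudaMemset(x, 0, n * sizeof(float));
    cudaMemset(y, 0, n * sizeof(float));
    axpy<<<(n + 255) / 256, 256>>>(2.0f, x, y, n);
    cudaDeviceSynchronize();
    printf("done\n");
    cudaFree(x);
    cudaFree(y);
    return 0;
}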

The rCUDA Team has been working hard for months to get the new rCUDA version ready. We expect to release it within a few months. In the meantime, we can already share some initial results. The figures next to this text show the performance of the new rCUDA version when moving data located in pageable memory to/from a V100 GPU. The figure labeled "V100 H2D pageable" depicts the bandwidth attained when moving data to the GPU. The traditional case (CUDA used locally in the same node as the GPU, shown in green) is compared with the previous rCUDA version and with the new rCUDA version, both using a GPU located in a different node. The new rCUDA version provides twice the bandwidth of both the previous version and the original CUDA case. Similar results are achieved when moving data from the GPU back to the host (figure labeled "V100 D2H pageable").
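For readers who want to reproduce this kind of measurement on their own systems, the following is a minimal sketch of a pageable-memory copy benchmark like the one behind the figures; the buffer size and iteration count are assumptions, not the values used for the published results.

// pageable_bw.cu -- pageable H2D/D2H bandwidth probe (illustrative sketch)
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 256 << 20;   // 256 MiB per transfer (assumed size)
    const int    iters = 10;          // assumed repetition count

    char *host = (char *)malloc(bytes);   // pageable host memory (plain malloc, not cudaMallocHost)
    char *dev  = nullptr;
    cudaMalloc(&dev, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    float ms = 0.0f;

    // Host-to-device, as in the "V100 H2D pageable" figure
    cudaEventRecord(start);
    for (int i = 0; i < iters; ++i)
        cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms, start, stop);
    printf("H2D pageable: %.2f GB/s\n", (bytes / 1e9) * iters / (ms / 1e3));

    // Device-to-host, as in the "V100 D2H pageable" figure
    cudaEventRecord(start);
    for (int i = 0; i < iters; ++i)
        cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms, start, stop);
    printf("D2H pageable: %.2f GB/s\n", (bytes / 1e9) * iters / (ms / 1e3));

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(dev);
    free(host);
    return 0;
}

Because the host buffer is pageable, the runtime (or rCUDA) must stage the data through internal pinned buffers, which is exactly where the new rCUDA version's pipelining pays off in the figures above.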
