Multi-GPU Selection Optimization
When an application uses only some of the GPUs on a server, which GPUs are selected can significantly affect application performance.
The following general rules apply; actual results should be verified by testing.
- If there is frequent communication between GPUs, you are advised to select GPUs attached to the same CPU.
- If there is frequent GPU-CPU communication, you are advised to select GPUs evenly distributed across the two CPUs.
Figure 1 and Figure 2 show how to select two GPUs in the preceding scenarios.
You can run the nvidia-smi topo -m and lscpu | grep NUMA commands to query the connections between GPUs and CPUs. As shown in the following figures, GPU 0 is attached to CPU 0, and GPU 1 is attached to CPU 1.
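The two selection rules above can be sketched as a small helper that picks GPU indices from a GPU-to-NUMA-node mapping. The mapping below is hypothetical; on a real server it would be derived from the output of nvidia-smi topo -m and lscpu | grep NUMA.

```python
from collections import defaultdict

# Hypothetical topology: GPUs 0-1 under CPU (NUMA node) 0, GPUs 2-3 under CPU 1.
gpu_numa = {0: 0, 1: 0, 2: 1, 3: 1}

def group_by_node(mapping):
    """Group GPU indices by the NUMA node they are attached to."""
    by_node = defaultdict(list)
    for gpu, node in sorted(mapping.items()):
        by_node[node].append(gpu)
    return by_node

def same_node_gpus(mapping, count):
    """Pick GPUs under one CPU (for frequent GPU-GPU communication)."""
    for gpus in group_by_node(mapping).values():
        if len(gpus) >= count:
            return gpus[:count]
    return None

def spread_gpus(mapping, count):
    """Pick GPUs evenly across CPUs (for frequent GPU-CPU communication)."""
    by_node = group_by_node(mapping)
    nodes = sorted(by_node)
    chosen, i = [], 0
    while len(chosen) < count:
        node = nodes[i % len(nodes)]
        if by_node[node]:
            chosen.append(by_node[node].pop(0))
        i += 1
    return chosen

print(same_node_gpus(gpu_numa, 2))  # [0, 1] - both under CPU 0
print(spread_gpus(gpu_numa, 2))     # [0, 2] - one per CPU
```

With this hypothetical topology, selecting two GPUs for GPU-GPU communication yields GPUs 0 and 1 (same CPU), while selecting for GPU-CPU communication yields GPUs 0 and 2 (one per CPU), matching Figure 1 and Figure 2.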


GPU selection methods:
- export CUDA_VISIBLE_DEVICES=0,2 # Specify the GPUs to use via the environment variable.
- mpirun -np 8 -x CUDA_VISIBLE_DEVICES=0,2 args # Use the mpirun -x option to export the environment variable to all launched processes.
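The same selection can also be made from inside a program, as long as the variable is set before the CUDA runtime (or a framework such as PyTorch or TensorFlow) is initialized. A minimal sketch:

```python
import os

# Restrict this process to physical GPUs 0 and 2. This must happen
# before any CUDA-using library is imported or initialized.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,2"

# Within the process, the visible GPUs are renumbered from 0:
# physical GPU 0 becomes device 0, and physical GPU 2 becomes device 1.
print(os.environ["CUDA_VISIBLE_DEVICES"])  # 0,2
```

Note that frameworks then see only the selected GPUs, renumbered starting from 0, so application code should refer to device 0 and device 1 rather than the physical indices.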
Parent topic: Multi-GPU Optimization

