Running and Verification
Single Node
- Use PuTTY to log in to the server as the root user.
- Create a working directory and upload the test case file.
mkdir -p path/to/CASE
- Decompress the test case file.
tar xvf water_GMX50_bare.tar.gz
- Go to the directory generated after the decompression.
cd water-cut1.0_GMX50_bare/0768
- Create a topol.tpr file.
gmx_mpi grompp -f pme.mdp
- Check whether a topol.tpr file is generated.
ll topol.tpr
-rw-r--r-- 1 root root 18448672 Jan 11 16:41 topol.tpr
- Start the run.
mpirun --allow-run-as-root --mca btl ^openib -np 24 -x OMP_NUM_THREADS=4 gmx_mpi mdrun -dlb yes -v -nsteps 10000 -resethway -noconfout -pin on -ntomp 4 -s topol.tpr
-np indicates the number of MPI processes. If hyper-threading is not enabled, the number of MPI processes multiplied by the number of OMP threads must be less than or equal to the number of CPU cores. In this command, 24 MPI processes x 4 OMP threads = 96, which must not exceed the number of CPU cores in the environment.
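The core-count rule above can be checked before launching. The sketch below assumes the `-np 24` and `OMP_NUM_THREADS=4` values from the command and uses `nproc` to read the core count of the current node:

```shell
# Sketch: verify that MPI processes x OMP threads fits within the core count.
# NP and OMP_THREADS mirror the -np and OMP_NUM_THREADS values used above.
NP=24
OMP_THREADS=4
CORES=$(nproc)                     # cores visible to the OS on this node
REQUIRED=$((NP * OMP_THREADS))     # 24 x 4 = 96
if [ "$REQUIRED" -le "$CORES" ]; then
    echo "OK: $REQUIRED of $CORES cores used"
else
    echo "Oversubscribed: $REQUIRED > $CORES cores"
fi
```

If the environment has hyper-threading enabled, `nproc` reports logical cores, so interpret the result accordingly.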
View the value of ns/day under Performance in the md.log file. A larger value indicates higher performance.
The following is an example of the test result.
Part of the total run time spent waiting due to load imbalance: 1.1%
Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 0 % Y 0 %
Average PME mesh/force load: 1.033
Part of the total run time spent waiting due to PP/PME imbalance: 2.1 %

               Core t (s)   Wall t (s)        (%)
       Time:    14806.100      154.231     9600.0
                 (ns/day)    (hour/ns)
Performance:        5.603        4.283

GROMACS reminds you: "Come on boys, Let's push it hard" (P.J. Harvey)
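Rather than opening md.log by hand, the ns/day value can be extracted on the command line. This is a sketch that assumes the standard GROMACS log layout, where the line beginning with `Performance:` carries ns/day as its first number:

```shell
# Hypothetical md.log excerpt for illustration only (values from the
# example above); a real run writes this block at the end of md.log.
printf '                 (ns/day)    (hour/ns)\nPerformance:        5.603        4.283\n' > md.log

# Print the ns/day figure: the second field of the Performance line.
awk '/^Performance:/ {print $2}' md.log    # -> 5.603
```

On a real run, point the `awk` command at the md.log produced by `gmx_mpi mdrun`.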
Dual Nodes
- Use PuTTY to log in to the server as the root user.
- Create a working directory and upload the test case file.
mkdir -p path/to/CASE
- Decompress the test case file.
tar xvf water_GMX50_bare.tar.gz
- Go to the directory generated after the decompression.
cd water-cut1.0_GMX50_bare/0768
- Create a host file.
vi host
hostname1
hostname2
hostname1 and hostname2 are the host names of the two nodes, one per line.
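As an alternative to editing with vi, the host file can be written non-interactively. This sketch keeps the placeholder names hostname1 and hostname2; substitute your actual node names:

```shell
# Sketch: create the host file in one step (hostname1/hostname2 are
# placeholders for the real node names in your cluster).
cat > host <<'EOF'
hostname1
hostname2
EOF

cat host    # confirm one host name per line
```

mpirun reads this file via --hostfile and distributes the ranks across the listed nodes.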
- Start the run.
mpirun --allow-run-as-root --hostfile host --mca btl ^openib -np 48 -N 24 -x OMP_NUM_THREADS=4 -x PATH=$PATH -x LD_LIBRARY_PATH=$LD_LIBRARY_PATH gmx_mpi mdrun -dlb yes -v -nsteps 10000 -resethway -noconfout -pin on -ntomp 4 -s topol.tpr
Table 1 Parameter description

| Parameter | Description |
| --- | --- |
| --hostfile | Node file to be used. |
| -np | Total number of MPI processes. |
| -N | Number of processes to run on each node. |
- If hyper-threading is not enabled, the number of MPI processes multiplied by the number of OMP threads must be less than or equal to the total number of CPU cores. In this example, -np is 48 across the two nodes, -N places 24 processes on each node, OMP_NUM_THREADS is 4, so 192 CPU cores are used (2 x 24 x 4).
- The test cases are stored in the shared directory. Ensure that the PATH and LD_LIBRARY_PATH variables are the same as those set in Configuring the Compilation Environment.
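The dual-node core budget from the note above can be sketched as simple shell arithmetic (the node, process, and thread counts are the assumed values from this example):

```shell
# Sketch: dual-node core budget using the example's assumed values.
NODES=2          # two servers listed in the host file
PER_NODE=24      # -N: processes per node
OMP_THREADS=4    # OMP_NUM_THREADS per process

echo "total MPI ranks: $((NODES * PER_NODE))"                # matches -np 48
echo "CPU cores used:  $((NODES * PER_NODE * OMP_THREADS))"  # 192
```

Adjust these values together: changing -N or OMP_NUM_THREADS without recomputing -np will oversubscribe or underuse the nodes.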
View the value of ns/day under Performance in the md.log file. A larger value indicates higher performance.
The following is an example of the test result.
Part of the total run time spent waiting due to load imbalance: 1.3%
Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 0 % Y 0 %
Average PME mesh/force load: 1.017
Part of the total run time spent waiting due to PP/PME imbalance: 2.2 %

               Core t (s)   Wall t (s)        (%)
       Time:    13667.657       71.196    19197.2
                 (ns/day)    (hour/ns)
Performance:       11.603        1.965

GROMACS reminds you: "Come on boys, Let's push it hard" (P.J. Harvey)