I have a hybrid MPI/OpenMP code, compiled with Intel 2017 and run with Intel MPI 2017, on a Linux cluster under SLURM. The code has a simple OMP master region which prints hello from the master thread, then exits the parallel region and prints the number of ranks, which rank this is, and the host name for the node.

When I run it, I get a bunch of bizarre warnings about GOMP_CPU_AFFINITY, and about invalid OS proc IDs for the procs listed in GOMP_CPU_AFFINITY, like this:

Hello_parallel.f:
MPI startup(): Multi-threaded optimized library
MPI startup(): shm and tmi data transfer modes
MPI startup(): Rank    Pid      Node name             Pin cpu
MPI startup(): 0       81689    kit002.localdomain
OMP: Warning #123: Ignoring invalid OS proc ID 1.
OMP: Warning #181: OMP_PLACES: ignored because GOMP_CPU_AFFINITY has been defined
OMP: Warning #181: OMP_PROC_BIND: ignored because GOMP_CPU_AFFINITY has been defined
Number of tasks= 1  My rank= 0  My name=kit002.localdomain

Why this is a problem: I'm using OMP env vars (OMP_PLACES, OMP_PROC_BIND) to control affinity, and the warnings show they are being ignored because GOMP_CPU_AFFINITY has been defined. When I run "mpirun env" I see that GOMP_CPU_AFFINITY has been set for me. Why? It appears Intel MPI is setting GOMP_CPU_AFFINITY itself. Why, and how do I prevent this?
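For reference, the program is essentially the textbook hybrid hello world. The original Hello_parallel.f isn't reproduced here, so the following is only a minimal sketch of the kind of code described above; the variable names, the use of MPI_INIT_THREAD, and the exact print format are illustrative assumptions:

      program hello_parallel
c     Hybrid MPI/OpenMP hello world, fixed-form Fortran (a sketch,
c     not the original source).
      include 'mpif.h'
      integer ierr, rank, ntasks, namelen, provided
      character*(MPI_MAX_PROCESSOR_NAME) name

c     Ask for FUNNELED support: only the master thread calls MPI.
      call MPI_INIT_THREAD(MPI_THREAD_FUNNELED, provided, ierr)

c     OMP master region: hello from the master thread only.
!$omp parallel
!$omp master
      print *, 'Hello from the master thread'
!$omp end master
!$omp end parallel

c     After the parallel region: task count, this rank, host name.
      call MPI_COMM_SIZE(MPI_COMM_WORLD, ntasks, ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      call MPI_GET_PROCESSOR_NAME(name, namelen, ierr)
      print *, 'Number of tasks=', ntasks, ' My rank=', rank,
     &         ' My name=', name(1:namelen)

      call MPI_FINALIZE(ierr)
      end

With the Intel 2017 toolchain this would be built with something like "mpiifort -qopenmp hello_parallel.f" (again an assumption; the post doesn't show the actual build line).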