The module torch.distributed.launch is deprecated and going to be removed in future.Migrate to torch.distributed.run
WARNING:torch.distributed.run:--use_env is deprecated and will be removed in future releases. Please read local_rank from `os.environ('LOCAL_RANK')` instead.
INFO:torch.distributed.launcher.api:Starting elastic_operator with launch configs:
  entrypoint : pretrain_bert.py
  min_nodes : 4
  max_nodes : 4
  nproc_per_node : 8
  run_id : none
  rdzv_backend : static
  rdzv_endpoint : 198.18.8.30:6000
  rdzv_configs : {'rank': 3, 'timeout': 900}
  max_restarts : 3
  monitor_interval : 5
  log_dir : None
  metrics_cfg : {}
INFO:torch.distributed.elastic.agent.server.local_elastic_agent:log directory set to: /tmp/torchelastic_u1xq3jei/none_j1ltjjfz
INFO:torch.distributed.elastic.agent.server.api:[default] starting workers for entrypoint: python
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous'ing worker group
/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/utils/store.py:52: FutureWarning: This is an experimental API and will be changed in future.
warnings.warn(
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous complete for workers. Result:
  restart_count=0
  master_addr=198.18.8.30
  master_port=6000
  group_rank=3
  group_world_size=4
  local_ranks=[0, 1, 2, 3, 4, 5, 6, 7]
  role_ranks=[24, 25, 26, 27, 28, 29, 30, 31]
  global_ranks=[24, 25, 26, 27, 28, 29, 30, 31]
  role_world_sizes=[32, 32, 32, 32, 32, 32, 32, 32]
  global_world_sizes=[32, 32, 32, 32, 32, 32, 32, 32]
INFO:torch.distributed.elastic.agent.server.api:[default] Starting worker group
INFO:torch.distributed.elastic.multiprocessing:Setting worker0 reply file to: /tmp/torchelastic_u1xq3jei/none_j1ltjjfz/attempt_0/0/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker1 reply file to: /tmp/torchelastic_u1xq3jei/none_j1ltjjfz/attempt_0/1/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker2 reply file to: /tmp/torchelastic_u1xq3jei/none_j1ltjjfz/attempt_0/2/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker3 reply file to: /tmp/torchelastic_u1xq3jei/none_j1ltjjfz/attempt_0/3/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker4 reply file to: /tmp/torchelastic_u1xq3jei/none_j1ltjjfz/attempt_0/4/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker5 reply file to: /tmp/torchelastic_u1xq3jei/none_j1ltjjfz/attempt_0/5/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker6 reply file to: /tmp/torchelastic_u1xq3jei/none_j1ltjjfz/attempt_0/6/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker7 reply file to: /tmp/torchelastic_u1xq3jei/none_j1ltjjfz/attempt_0/7/error.json
[W ProcessGroupNCCL.cpp:1671] Rank 29 using best-guess GPU 5 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.
[W ProcessGroupNCCL.cpp:1671] Rank 28 using best-guess GPU 4 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.
[W ProcessGroupNCCL.cpp:1671] Rank 24 using best-guess GPU 0 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.
[W ProcessGroupNCCL.cpp:1671] Rank 27 using best-guess GPU 3 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.
[W ProcessGroupNCCL.cpp:1671] Rank 30 using best-guess GPU 6 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.
[W ProcessGroupNCCL.cpp:1671] Rank 26 using best-guess GPU 2 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.
[W ProcessGroupNCCL.cpp:1671] Rank 31 using best-guess GPU 7 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.
[W ProcessGroupNCCL.cpp:1671] Rank 25 using best-guess GPU 1 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.
iv-ybpu7pvmiu5m57lh5kdd:3384:3384 [4] NCCL INFO Bootstrap : Using eth0:192.168.11.230<0>
iv-ybpu7pvmiu5m57lh5kdd:3383:3383 [3] NCCL INFO Bootstrap : Using eth0:192.168.11.230<0>
iv-ybpu7pvmiu5m57lh5kdd:3386:3386 [6] NCCL INFO Bootstrap : Using eth0:192.168.11.230<0>
iv-ybpu7pvmiu5m57lh5kdd:3387:3387 [7] NCCL INFO Bootstrap : Using eth0:192.168.11.230<0>
iv-ybpu7pvmiu5m57lh5kdd:3382:3382 [2] NCCL INFO Bootstrap : Using eth0:192.168.11.230<0>
iv-ybpu7pvmiu5m57lh5kdd:3380:3380 [0] NCCL INFO Bootstrap : Using eth0:192.168.11.230<0>
iv-ybpu7pvmiu5m57lh5kdd:3385:3385 [5] NCCL INFO Bootstrap : Using eth0:192.168.11.230<0>
iv-ybpu7pvmiu5m57lh5kdd:3386:3386 [6] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
iv-ybpu7pvmiu5m57lh5kdd:3383:3383 [3] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
iv-ybpu7pvmiu5m57lh5kdd:3387:3387 [7] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
iv-ybpu7pvmiu5m57lh5kdd:3384:3384 [4] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
iv-ybpu7pvmiu5m57lh5kdd:3383:3383 [3] NCCL INFO P2P plugin IBext
iv-ybpu7pvmiu5m57lh5kdd:3386:3386 [6] NCCL INFO P2P plugin IBext
iv-ybpu7pvmiu5m57lh5kdd:3387:3387 [7] NCCL INFO P2P plugin IBext
iv-ybpu7pvmiu5m57lh5kdd:3384:3384 [4] NCCL INFO P2P plugin IBext
iv-ybpu7pvmiu5m57lh5kdd:3387:3387 [7] NCCL INFO NCCL_IB_PCI_RELAXED_ORDERING set by environment to 1.
iv-ybpu7pvmiu5m57lh5kdd:3386:3386 [6] NCCL INFO NCCL_IB_PCI_RELAXED_ORDERING set by environment to 1.
iv-ybpu7pvmiu5m57lh5kdd:3383:3383 [3] NCCL INFO NCCL_IB_PCI_RELAXED_ORDERING set by environment to 1.
iv-ybpu7pvmiu5m57lh5kdd:3384:3384 [4] NCCL INFO NCCL_IB_PCI_RELAXED_ORDERING set by environment to 1.
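The two launcher warnings above are actionable. The deprecation notice says to move from `torch.distributed.launch --use_env` to `torch.distributed.run` (for the configs printed here that would be roughly `python -m torch.distributed.run --nnodes=4 --nproc_per_node=8 --node_rank=3 --master_addr=198.18.8.30 --master_port=6000 pretrain_bert.py ...`, with `local_rank` read from the environment instead of a `--local_rank` argument), and the `ProcessGroupNCCL.cpp:1671` message asks for an explicit device in `barrier()`. Below is a minimal worker-side sketch of both changes; the `init_distributed` helper and the bare `backend="nccl"` setup are illustrative assumptions, not Megatron-LM's actual initialization code:

```python
import os

import torch
import torch.distributed as dist


def init_distributed():
    # torch.distributed.run / torchrun exports LOCAL_RANK, so the worker no
    # longer needs the --local_rank argument injected by the deprecated
    # torch.distributed.launch --use_env flow.
    local_rank = int(os.environ["LOCAL_RANK"])

    # Bind this process to its GPU before the first collective so NCCL does
    # not have to fall back to the "best-guess GPU" behaviour warned about
    # above.
    torch.cuda.set_device(local_rank)

    # RANK, WORLD_SIZE, MASTER_ADDR and MASTER_PORT are also provided by the
    # launcher, so the default env:// init method is enough here.
    dist.init_process_group(backend="nccl")

    # Passing device_ids pins the barrier to this rank's GPU, which is what
    # the ProcessGroupNCCL.cpp:1671 warning recommends.
    dist.barrier(device_ids=[local_rank])
    return local_rank


if __name__ == "__main__":
    init_distributed()
```

Setting the CUDA device before the first collective (and passing `device_ids` to `barrier()`) is what the warning is asking for; without it, NCCL guesses the rank-to-GPU mapping and can hang if the guess is wrong.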
iv-ybpu7pvmiu5m57lh5kdd:3380:3380 [0] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so iv-ybpu7pvmiu5m57lh5kdd:3380:3380 [0] NCCL INFO P2P plugin IBext iv-ybpu7pvmiu5m57lh5kdd:3380:3380 [0] NCCL INFO NCCL_IB_PCI_RELAXED_ORDERING set by environment to 1. iv-ybpu7pvmiu5m57lh5kdd:3382:3382 [2] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so iv-ybpu7pvmiu5m57lh5kdd:3385:3385 [5] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so iv-ybpu7pvmiu5m57lh5kdd:3382:3382 [2] NCCL INFO P2P plugin IBext iv-ybpu7pvmiu5m57lh5kdd:3385:3385 [5] NCCL INFO P2P plugin IBext iv-ybpu7pvmiu5m57lh5kdd:3382:3382 [2] NCCL INFO NCCL_IB_PCI_RELAXED_ORDERING set by environment to 1. iv-ybpu7pvmiu5m57lh5kdd:3385:3385 [5] NCCL INFO NCCL_IB_PCI_RELAXED_ORDERING set by environment to 1. iv-ybpu7pvmiu5m57lh5kdd:3381:3381 [1] NCCL INFO Bootstrap : Using eth0:192.168.11.230<0> iv-ybpu7pvmiu5m57lh5kdd:3381:3381 [1] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so iv-ybpu7pvmiu5m57lh5kdd:3381:3381 [1] NCCL INFO P2P plugin IBext iv-ybpu7pvmiu5m57lh5kdd:3381:3381 [1] NCCL INFO NCCL_IB_PCI_RELAXED_ORDERING set by environment to 1. iv-ybpu7pvmiu5m57lh5kdd:3383:3383 [3] NCCL INFO NET/IB : Using [0]mlx5_1:1/RoCE ; OOB eth0:192.168.11.230<0> iv-ybpu7pvmiu5m57lh5kdd:3383:3383 [3] NCCL INFO Using network IBext iv-ybpu7pvmiu5m57lh5kdd:3386:3386 [6] NCCL INFO NET/IB : Using [0]mlx5_1:1/RoCE ; OOB eth0:192.168.11.230<0> iv-ybpu7pvmiu5m57lh5kdd:3386:3386 [6] NCCL INFO Using network IBext iv-ybpu7pvmiu5m57lh5kdd:3380:3380 [0] NCCL INFO NET/IB : Using [0]mlx5_1:1/RoCE ; OOB eth0:192.168.11.230<0> iv-ybpu7pvmiu5m57lh5kdd:3380:3380 [0] NCCL INFO Using network IBext iv-ybpu7pvmiu5m57lh5kdd:3384:3384 [4] NCCL INFO NET/IB : Using [0]mlx5_1:1/RoCE ; OOB eth0:192.168.11.230<0> iv-ybpu7pvmiu5m57lh5kdd:3384:3384 [4] NCCL INFO Using network IBext iv-ybpu7pvmiu5m57lh5kdd:3382:3382 [2] NCCL INFO NET/IB : Using [0]mlx5_1:1/RoCE ; OOB eth0:192.168.11.230<0> iv-ybpu7pvmiu5m57lh5kdd:3382:3382 [2] NCCL INFO Using network IBext iv-ybpu7pvmiu5m57lh5kdd:3385:3385 [5] NCCL INFO NET/IB : Using [0]mlx5_1:1/RoCE ; OOB eth0:192.168.11.230<0> iv-ybpu7pvmiu5m57lh5kdd:3385:3385 [5] NCCL INFO Using network IBext iv-ybpu7pvmiu5m57lh5kdd:3387:3387 [7] NCCL INFO NET/IB : Using [0]mlx5_1:1/RoCE ; OOB eth0:192.168.11.230<0> iv-ybpu7pvmiu5m57lh5kdd:3387:3387 [7] NCCL INFO Using network IBext iv-ybpu7pvmiu5m57lh5kdd:3381:3381 [1] NCCL INFO NET/IB : Using [0]mlx5_1:1/RoCE ; OOB eth0:192.168.11.230<0> iv-ybpu7pvmiu5m57lh5kdd:3381:3381 [1] NCCL INFO Using network IBext iv-ybpu7pvmiu5m57lh5kdd:3385:3582 [5] NCCL INFO NCCL_IB_GID_INDEX set by environment to 3. iv-ybpu7pvmiu5m57lh5kdd:3386:3578 [6] NCCL INFO NCCL_IB_GID_INDEX set by environment to 3. iv-ybpu7pvmiu5m57lh5kdd:3381:3584 [1] NCCL INFO NCCL_IB_GID_INDEX set by environment to 3. iv-ybpu7pvmiu5m57lh5kdd:3387:3583 [7] NCCL INFO NCCL_IB_GID_INDEX set by environment to 3. iv-ybpu7pvmiu5m57lh5kdd:3380:3579 [0] NCCL INFO NCCL_IB_GID_INDEX set by environment to 3. iv-ybpu7pvmiu5m57lh5kdd:3383:3577 [3] NCCL INFO NCCL_IB_GID_INDEX set by environment to 3. iv-ybpu7pvmiu5m57lh5kdd:3382:3580 [2] NCCL INFO NCCL_IB_GID_INDEX set by environment to 3. iv-ybpu7pvmiu5m57lh5kdd:3384:3581 [4] NCCL INFO NCCL_IB_GID_INDEX set by environment to 3. iv-ybpu7pvmiu5m57lh5kdd:3380:3579 [0] NCCL INFO NCCL_IB_TIMEOUT set by environment to 23. iv-ybpu7pvmiu5m57lh5kdd:3387:3583 [7] NCCL INFO NCCL_IB_TIMEOUT set by environment to 23. 
iv-ybpu7pvmiu5m57lh5kdd:3387:3583 [7] NCCL INFO NCCL_IB_RETRY_CNT set by environment to 7. iv-ybpu7pvmiu5m57lh5kdd:3380:3579 [0] NCCL INFO NCCL_IB_RETRY_CNT set by environment to 7. iv-ybpu7pvmiu5m57lh5kdd:3383:3577 [3] NCCL INFO NCCL_IB_TIMEOUT set by environment to 23. iv-ybpu7pvmiu5m57lh5kdd:3386:3578 [6] NCCL INFO NCCL_IB_TIMEOUT set by environment to 23. iv-ybpu7pvmiu5m57lh5kdd:3383:3577 [3] NCCL INFO NCCL_IB_RETRY_CNT set by environment to 7. iv-ybpu7pvmiu5m57lh5kdd:3386:3578 [6] NCCL INFO NCCL_IB_RETRY_CNT set by environment to 7. iv-ybpu7pvmiu5m57lh5kdd:3384:3581 [4] NCCL INFO NCCL_IB_TIMEOUT set by environment to 23. iv-ybpu7pvmiu5m57lh5kdd:3382:3580 [2] NCCL INFO NCCL_IB_TIMEOUT set by environment to 23. iv-ybpu7pvmiu5m57lh5kdd:3384:3581 [4] NCCL INFO NCCL_IB_RETRY_CNT set by environment to 7. iv-ybpu7pvmiu5m57lh5kdd:3381:3584 [1] NCCL INFO NCCL_IB_TIMEOUT set by environment to 23. iv-ybpu7pvmiu5m57lh5kdd:3382:3580 [2] NCCL INFO NCCL_IB_RETRY_CNT set by environment to 7. iv-ybpu7pvmiu5m57lh5kdd:3381:3584 [1] NCCL INFO NCCL_IB_RETRY_CNT set by environment to 7. iv-ybpu7pvmiu5m57lh5kdd:3385:3582 [5] NCCL INFO NCCL_IB_TIMEOUT set by environment to 23. iv-ybpu7pvmiu5m57lh5kdd:3385:3582 [5] NCCL INFO NCCL_IB_RETRY_CNT set by environment to 7. iv-ybpu7pvmiu5m57lh5kdd:3387:3583 [7] NCCL INFO Trees [0] 24/-1/-1->31->29 [1] 24/-1/-1->31->29 iv-ybpu7pvmiu5m57lh5kdd:3387:3583 [7] NCCL INFO Setting affinity for GPU 7 to 0fffff,fffffc00,00000000 iv-ybpu7pvmiu5m57lh5kdd:3385:3582 [5] NCCL INFO Trees [0] 31/-1/-1->29->28 [1] 31/-1/-1->29->28 iv-ybpu7pvmiu5m57lh5kdd:3383:3577 [3] NCCL INFO Trees [0] 25/-1/-1->27->26 [1] 25/-1/-1->27->26 iv-ybpu7pvmiu5m57lh5kdd:3386:3578 [6] NCCL INFO Trees [0] -1/-1/-1->30->25 [1] -1/-1/-1->30->25 iv-ybpu7pvmiu5m57lh5kdd:3383:3577 [3] NCCL INFO Setting affinity for GPU 3 to 03ff,ffffffff iv-ybpu7pvmiu5m57lh5kdd:3385:3582 [5] NCCL INFO Setting affinity for GPU 5 to 0fffff,fffffc00,00000000 iv-ybpu7pvmiu5m57lh5kdd:3386:3578 [6] NCCL INFO Setting affinity for GPU 6 to 0fffff,fffffc00,00000000 iv-ybpu7pvmiu5m57lh5kdd:3380:3579 [0] NCCL INFO Trees [0] 26/-1/-1->24->31 [1] 26/-1/-1->24->31 iv-ybpu7pvmiu5m57lh5kdd:3381:3584 [1] NCCL INFO Trees [0] 30/-1/-1->25->27 [1] 30/-1/-1->25->27 iv-ybpu7pvmiu5m57lh5kdd:3380:3579 [0] NCCL INFO Setting affinity for GPU 0 to 03ff,ffffffff iv-ybpu7pvmiu5m57lh5kdd:3382:3580 [2] NCCL INFO Trees [0] 27/-1/-1->26->24 [1] 27/-1/-1->26->24 iv-ybpu7pvmiu5m57lh5kdd:3381:3584 [1] NCCL INFO Setting affinity for GPU 1 to 03ff,ffffffff iv-ybpu7pvmiu5m57lh5kdd:3382:3580 [2] NCCL INFO Setting affinity for GPU 2 to 03ff,ffffffff iv-ybpu7pvmiu5m57lh5kdd:3384:3581 [4] NCCL INFO Trees [0] 29/-1/-1->28->20 [1] 29/12/-1->28->-1 iv-ybpu7pvmiu5m57lh5kdd:3384:3581 [4] NCCL INFO Setting affinity for GPU 4 to 0fffff,fffffc00,00000000 iv-ybpu7pvmiu5m57lh5kdd:3386:3578 [6] NCCL INFO Channel 00 : 30[6b010] -> 31[6b020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3386:3578 [6] NCCL INFO Channel 01 : 30[6b010] -> 31[6b020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3384:3581 [4] NCCL INFO Channel 00 : 28[69010] -> 30[6b010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3380:3579 [0] NCCL INFO Channel 00 : 24[65010] -> 25[65020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3382:3580 [2] NCCL INFO Channel 00 : 26[67010] -> 29[69020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3380:3579 [0] NCCL INFO Channel 01 : 24[65010] -> 25[65020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3384:3581 [4] NCCL INFO Channel 01 : 28[69010] -> 30[6b010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3382:3580 [2] NCCL INFO Channel 01 : 
26[67010] -> 29[69020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3387:3583 [7] NCCL INFO Channel 00 : 31[6b020] -> 24[65010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3387:3583 [7] NCCL INFO Channel 01 : 31[6b020] -> 24[65010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3385:3582 [5] NCCL INFO Channel 00 : 29[69020] -> 4[69010] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3385:3582 [5] NCCL INFO Channel 01 : 29[69020] -> 4[69010] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3386:3578 [6] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3381:3584 [1] NCCL INFO Channel 00 : 25[65020] -> 27[67020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3384:3581 [4] NCCL INFO Channel 00 : 21[69020] -> 28[69010] [receive] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3381:3584 [1] NCCL INFO Channel 01 : 25[65020] -> 27[67020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3384:3581 [4] NCCL INFO Channel 01 : 21[69020] -> 28[69010] [receive] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3387:3583 [7] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3380:3579 [0] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3381:3584 [1] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3380:3579 [0] NCCL INFO Channel 00 : 24[65010] -> 26[67010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3383:3577 [3] NCCL INFO Channel 00 : 27[67020] -> 26[67010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3381:3584 [1] NCCL INFO Channel 00 : 25[65020] -> 30[6b010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3385:3582 [5] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3384:3581 [4] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3383:3577 [3] NCCL INFO Channel 01 : 27[67020] -> 26[67010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3381:3584 [1] NCCL INFO Channel 01 : 25[65020] -> 30[6b010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3380:3579 [0] NCCL INFO Channel 01 : 24[65010] -> 26[67010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3384:3581 [4] NCCL INFO Channel 00 : 28[69010] -> 29[69020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3383:3577 [3] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3382:3580 [2] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3384:3581 [4] NCCL INFO Channel 01 : 28[69010] -> 29[69020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3382:3580 [2] NCCL INFO Channel 00 : 26[67010] -> 27[67020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3386:3578 [6] NCCL INFO Channel 00 : 30[6b010] -> 25[65020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3386:3578 [6] NCCL INFO Channel 01 : 30[6b010] -> 25[65020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3382:3580 [2] NCCL INFO Channel 01 : 26[67010] -> 27[67020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3385:3582 [5] NCCL INFO Channel 00 : 29[69020] -> 31[6b020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3384:3581 [4] NCCL INFO Channel 00 : 20[69010] -> 28[69010] [receive] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3386:3578 [6] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3386:3578 [6] NCCL INFO threadThresholds 8/8/64 | 256/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3386:3578 [6] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3385:3582 [5] NCCL INFO Channel 01 : 29[69020] -> 31[6b020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3383:3577 [3] NCCL INFO Channel 00 : 27[67020] -> 25[65020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3383:3577 [3] NCCL INFO Channel 01 : 27[67020] -> 25[65020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3381:3584 [1] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3381:3584 [1] NCCL INFO threadThresholds 8/8/64 | 256/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3381:3584 [1] NCCL INFO 2 
coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3383:3577 [3] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3383:3577 [3] NCCL INFO threadThresholds 8/8/64 | 256/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3383:3577 [3] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3382:3580 [2] NCCL INFO Channel 00 : 26[67010] -> 24[65010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3380:3579 [0] NCCL INFO Channel 00 : 24[65010] -> 31[6b020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3382:3580 [2] NCCL INFO Channel 01 : 26[67010] -> 24[65010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3380:3579 [0] NCCL INFO Channel 01 : 24[65010] -> 31[6b020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3381:3584 [1] NCCL INFO Channel 01 : 25[65020] -> 28[69010] via P2P/indirect/30[6b010] iv-ybpu7pvmiu5m57lh5kdd:3383:3577 [3] NCCL INFO Channel 00 : 27[67020] -> 29[69020] via P2P/indirect/28[69010] iv-ybpu7pvmiu5m57lh5kdd:3387:3583 [7] NCCL INFO Channel 00 : 31[6b020] -> 29[69020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3387:3583 [7] NCCL INFO Channel 01 : 31[6b020] -> 29[69020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3387:3583 [7] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3387:3583 [7] NCCL INFO threadThresholds 8/8/64 | 256/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3387:3583 [7] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3382:3580 [2] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3382:3580 [2] NCCL INFO threadThresholds 8/8/64 | 256/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3382:3580 [2] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3380:3579 [0] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3380:3579 [0] NCCL INFO threadThresholds 8/8/64 | 256/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3380:3579 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3385:3582 [5] NCCL INFO Channel 00 : 29[69020] -> 28[69010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3382:3580 [2] NCCL INFO Channel 00 : 26[67010] -> 28[69010] via P2P/indirect/29[69020] iv-ybpu7pvmiu5m57lh5kdd:3380:3579 [0] NCCL INFO Channel 00 : 24[65010] -> 28[69010] via P2P/indirect/27[67020] iv-ybpu7pvmiu5m57lh5kdd:3385:3582 [5] NCCL INFO Channel 01 : 29[69020] -> 28[69010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3384:3581 [4] NCCL INFO Channel 01 : 12[69010] -> 28[69010] [receive] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3384:3581 [4] NCCL INFO Channel 01 : 28[69010] -> 12[69010] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3384:3581 [4] NCCL INFO Channel 00 : 28[69010] -> 20[69010] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3385:3582 [5] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3385:3582 [5] NCCL INFO threadThresholds 8/8/64 | 256/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3385:3582 [5] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3384:3581 [4] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3384:3581 [4] NCCL INFO threadThresholds 8/8/64 | 256/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3384:3581 [4] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3383:3577 [3] NCCL INFO Channel 01 : 27[67020] -> 30[6b010] via P2P/indirect/25[65020] iv-ybpu7pvmiu5m57lh5kdd:3382:3580 [2] NCCL INFO Channel 00 : 26[67010] -> 30[6b010] via P2P/indirect/25[65020] iv-ybpu7pvmiu5m57lh5kdd:3383:3577 [3] NCCL INFO Channel 00 : 27[67020] -> 31[6b020] via P2P/indirect/24[65010] 
iv-ybpu7pvmiu5m57lh5kdd:3381:3584 [1] NCCL INFO Channel 00 : 25[65020] -> 29[69020] via P2P/indirect/30[6b010] iv-ybpu7pvmiu5m57lh5kdd:3382:3580 [2] NCCL INFO Channel 01 : 26[67010] -> 31[6b020] via P2P/indirect/24[65010] iv-ybpu7pvmiu5m57lh5kdd:3384:3581 [4] NCCL INFO Channel 00 : 28[69010] -> 24[65010] via P2P/indirect/31[6b020] iv-ybpu7pvmiu5m57lh5kdd:3381:3584 [1] NCCL INFO Channel 00 : 25[65020] -> 31[6b020] via P2P/indirect/24[65010] iv-ybpu7pvmiu5m57lh5kdd:3380:3579 [0] NCCL INFO Channel 01 : 24[65010] -> 29[69020] via P2P/indirect/31[6b020] iv-ybpu7pvmiu5m57lh5kdd:3380:3579 [0] NCCL INFO Channel 00 : 24[65010] -> 30[6b010] via P2P/indirect/25[65020] iv-ybpu7pvmiu5m57lh5kdd:3385:3582 [5] NCCL INFO Channel 01 : 29[69020] -> 24[65010] via P2P/indirect/31[6b020] iv-ybpu7pvmiu5m57lh5kdd:3387:3583 [7] NCCL INFO Channel 00 : 31[6b020] -> 25[65020] via P2P/indirect/30[6b010] iv-ybpu7pvmiu5m57lh5kdd:3386:3578 [6] NCCL INFO Channel 00 : 30[6b010] -> 24[65010] via P2P/indirect/31[6b020] iv-ybpu7pvmiu5m57lh5kdd:3387:3583 [7] NCCL INFO Channel 01 : 31[6b020] -> 26[67010] via P2P/indirect/24[65010] iv-ybpu7pvmiu5m57lh5kdd:3386:3578 [6] NCCL INFO Channel 00 : 30[6b010] -> 26[67010] via P2P/indirect/29[69020] iv-ybpu7pvmiu5m57lh5kdd:3387:3583 [7] NCCL INFO Channel 00 : 31[6b020] -> 27[67020] via P2P/indirect/24[65010] iv-ybpu7pvmiu5m57lh5kdd:3386:3578 [6] NCCL INFO Channel 01 : 30[6b010] -> 27[67020] via P2P/indirect/25[65020] iv-ybpu7pvmiu5m57lh5kdd:3385:3582 [5] NCCL INFO Channel 00 : 29[69020] -> 25[65020] via P2P/indirect/30[6b010] iv-ybpu7pvmiu5m57lh5kdd:3385:3582 [5] NCCL INFO Channel 00 : 29[69020] -> 27[67020] via P2P/indirect/26[67010] iv-ybpu7pvmiu5m57lh5kdd:3384:3581 [4] NCCL INFO Channel 01 : 28[69010] -> 25[65020] via P2P/indirect/30[6b010] iv-ybpu7pvmiu5m57lh5kdd:3384:3581 [4] NCCL INFO Channel 00 : 28[69010] -> 26[67010] via P2P/indirect/27[67020] iv-ybpu7pvmiu5m57lh5kdd:3386:3578 [6] NCCL INFO comm 0x7f89cc008fb0 rank 30 nranks 32 cudaDev 6 busId 6b010 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3380:3579 [0] NCCL INFO comm 0x7f7d90008fb0 rank 24 nranks 32 cudaDev 0 busId 65010 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3383:3577 [3] NCCL INFO comm 0x7f5e60008fb0 rank 27 nranks 32 cudaDev 3 busId 67020 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3387:3583 [7] NCCL INFO comm 0x7f23ac008fb0 rank 31 nranks 32 cudaDev 7 busId 6b020 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3384:3581 [4] NCCL INFO comm 0x7f0b6c008fb0 rank 28 nranks 32 cudaDev 4 busId 69010 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3382:3580 [2] NCCL INFO comm 0x7f9660008fb0 rank 26 nranks 32 cudaDev 2 busId 67010 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3381:3584 [1] NCCL INFO comm 0x7fbf20008fb0 rank 25 nranks 32 cudaDev 1 busId 65020 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3385:3582 [5] NCCL INFO comm 0x7f6f0c008fb0 rank 29 nranks 32 cudaDev 5 busId 69020 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3384:3634 [4] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1 iv-ybpu7pvmiu5m57lh5kdd:3384:3634 [4] NCCL INFO Setting affinity for GPU 4 to 0fffff,fffffc00,00000000 iv-ybpu7pvmiu5m57lh5kdd:3383:3635 [3] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1 iv-ybpu7pvmiu5m57lh5kdd:3383:3635 [3] NCCL INFO Setting affinity for GPU 3 to 03ff,ffffffff iv-ybpu7pvmiu5m57lh5kdd:3384:3634 [4] NCCL INFO Channel 00 : 0[69010] -> 1[69010] [receive] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3384:3634 [4] NCCL INFO Channel 01 : 0[69010] -> 1[69010] [receive] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3386:3636 [6] NCCL INFO Trees [0] 
-1/-1/-1->1->0 [1] 0/-1/-1->1->-1 iv-ybpu7pvmiu5m57lh5kdd:3386:3636 [6] NCCL INFO Setting affinity for GPU 6 to 0fffff,fffffc00,00000000 iv-ybpu7pvmiu5m57lh5kdd:3384:3634 [4] NCCL INFO Channel 00 : 1[69010] -> 0[69010] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3387:3639 [7] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1 iv-ybpu7pvmiu5m57lh5kdd:3387:3639 [7] NCCL INFO Setting affinity for GPU 7 to 0fffff,fffffc00,00000000 iv-ybpu7pvmiu5m57lh5kdd:3384:3634 [4] NCCL INFO Channel 01 : 1[69010] -> 0[69010] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3383:3635 [3] NCCL INFO Channel 00 : 0[67020] -> 1[67020] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3385:3641 [5] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1 iv-ybpu7pvmiu5m57lh5kdd:3385:3641 [5] NCCL INFO Setting affinity for GPU 5 to 0fffff,fffffc00,00000000 iv-ybpu7pvmiu5m57lh5kdd:3380:3644 [0] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1 iv-ybpu7pvmiu5m57lh5kdd:3380:3644 [0] NCCL INFO Setting affinity for GPU 0 to 03ff,ffffffff iv-ybpu7pvmiu5m57lh5kdd:3382:3646 [2] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1 iv-ybpu7pvmiu5m57lh5kdd:3382:3646 [2] NCCL INFO Setting affinity for GPU 2 to 03ff,ffffffff iv-ybpu7pvmiu5m57lh5kdd:3386:3636 [6] NCCL INFO Channel 00 : 0[6b010] -> 1[6b010] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3381:3648 [1] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1 iv-ybpu7pvmiu5m57lh5kdd:3381:3648 [1] NCCL INFO Setting affinity for GPU 1 to 03ff,ffffffff iv-ybpu7pvmiu5m57lh5kdd:3385:3641 [5] NCCL INFO Channel 00 : 0[69020] -> 1[69020] [receive] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3385:3641 [5] NCCL INFO Channel 01 : 0[69020] -> 1[69020] [receive] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3385:3641 [5] NCCL INFO Channel 00 : 1[69020] -> 0[69020] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3387:3639 [7] NCCL INFO Channel 00 : 0[6b020] -> 1[6b020] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3383:3635 [3] NCCL INFO Channel 01 : 0[67020] -> 1[67020] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3385:3641 [5] NCCL INFO Channel 01 : 1[69020] -> 0[69020] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3386:3636 [6] NCCL INFO Channel 01 : 0[6b010] -> 1[6b010] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3387:3639 [7] NCCL INFO Channel 01 : 0[6b020] -> 1[6b020] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3382:3646 [2] NCCL INFO Channel 00 : 0[67010] -> 1[67010] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3381:3648 [1] NCCL INFO Channel 00 : 0[65020] -> 1[65020] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3380:3644 [0] NCCL INFO Channel 00 : 0[65010] -> 1[65010] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3383:3635 [3] NCCL INFO Channel 00 : 1[67020] -> 0[67020] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3386:3636 [6] NCCL INFO Channel 00 : 1[6b010] -> 0[6b010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3387:3639 [7] NCCL INFO Channel 00 : 1[6b020] -> 0[6b020] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3382:3646 [2] NCCL INFO Channel 01 : 0[67010] -> 1[67010] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3381:3648 [1] NCCL INFO Channel 01 : 0[65020] -> 1[65020] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3380:3644 [0] NCCL INFO Channel 01 : 0[65010] -> 1[65010] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3383:3635 [3] NCCL INFO Channel 01 : 1[67020] -> 0[67020] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3386:3636 [6] NCCL INFO Channel 01 : 1[6b010] -> 0[6b010] 
[send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3387:3639 [7] NCCL INFO Channel 01 : 1[6b020] -> 0[6b020] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3382:3646 [2] NCCL INFO Channel 00 : 1[67010] -> 0[67010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3381:3648 [1] NCCL INFO Channel 00 : 1[65020] -> 0[65020] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3380:3644 [0] NCCL INFO Channel 00 : 1[65010] -> 0[65010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3383:3635 [3] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3383:3635 [3] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3383:3635 [3] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3383:3635 [3] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3383:3635 [3] NCCL INFO comm 0x7f5e40008fb0 rank 1 nranks 2 cudaDev 3 busId 67020 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3386:3636 [6] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3386:3636 [6] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3386:3636 [6] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3386:3636 [6] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3386:3636 [6] NCCL INFO comm 0x7f89a8008fb0 rank 1 nranks 2 cudaDev 6 busId 6b010 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3382:3646 [2] NCCL INFO Channel 01 : 1[67010] -> 0[67010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3381:3648 [1] NCCL INFO Channel 01 : 1[65020] -> 0[65020] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3380:3644 [0] NCCL INFO Channel 01 : 1[65010] -> 0[65010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3387:3639 [7] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3387:3639 [7] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3387:3639 [7] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3387:3639 [7] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3387:3639 [7] NCCL INFO comm 0x7f2388008fb0 rank 1 nranks 2 cudaDev 7 busId 6b020 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3382:3646 [2] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3382:3646 [2] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3382:3646 [2] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3382:3646 [2] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3382:3646 [2] NCCL INFO comm 0x7f9640008fb0 rank 1 nranks 2 cudaDev 2 busId 67010 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3381:3648 [1] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3381:3648 [1] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3381:3648 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3381:3648 [1] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3381:3648 [1] NCCL INFO comm 0x7fbefc008fb0 rank 1 nranks 2 cudaDev 1 busId 65020 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3380:3644 [0] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3380:3644 [0] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3380:3644 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3380:3644 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3380:3644 [0] NCCL INFO comm 0x7f7d70008fb0 rank 1 nranks 2 cudaDev 0 busId 65010 - Init COMPLETE > number of parameters on (tensor, pipeline) model parallel rank 
(1, 3): 50802050 > number of parameters on (tensor, pipeline) model parallel rank (0, 3): 50802050 iv-ybpu7pvmiu5m57lh5kdd:3384:3634 [4] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3384:3634 [4] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3384:3634 [4] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3384:3634 [4] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3384:3634 [4] NCCL INFO comm 0x7f0b2c008fb0 rank 1 nranks 2 cudaDev 4 busId 69010 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3385:3641 [5] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3385:3641 [5] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3385:3641 [5] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3385:3641 [5] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3385:3641 [5] NCCL INFO comm 0x7f6ee0008fb0 rank 1 nranks 2 cudaDev 5 busId 69020 - Init COMPLETE NCCL version 2.10.3+cuda11.4 iv-ybpu7pvmiu5m57lh5kdd:3382:3660 [2] NCCL INFO Trees [0] 3/-1/-1->1->0 [1] 3/-1/-1->1->0 iv-ybpu7pvmiu5m57lh5kdd:3384:3662 [4] NCCL INFO Trees [0] -1/-1/-1->2->3 [1] -1/-1/-1->2->3 iv-ybpu7pvmiu5m57lh5kdd:3382:3660 [2] NCCL INFO Setting affinity for GPU 2 to 03ff,ffffffff iv-ybpu7pvmiu5m57lh5kdd:3384:3662 [4] NCCL INFO Setting affinity for GPU 4 to 0fffff,fffffc00,00000000 iv-ybpu7pvmiu5m57lh5kdd:3386:3661 [6] NCCL INFO Trees [0] 2/-1/-1->3->1 [1] 2/-1/-1->3->1 iv-ybpu7pvmiu5m57lh5kdd:3386:3661 [6] NCCL INFO Setting affinity for GPU 6 to 0fffff,fffffc00,00000000 iv-ybpu7pvmiu5m57lh5kdd:3380:3659 [0] NCCL INFO Channel 00/02 : 0 1 3 2 iv-ybpu7pvmiu5m57lh5kdd:3380:3659 [0] NCCL INFO Channel 01/02 : 0 1 3 2 iv-ybpu7pvmiu5m57lh5kdd:3380:3659 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1 iv-ybpu7pvmiu5m57lh5kdd:3380:3659 [0] NCCL INFO Setting affinity for GPU 0 to 03ff,ffffffff iv-ybpu7pvmiu5m57lh5kdd:3380:3659 [0] NCCL INFO Channel 00 : 0[65010] -> 1[67010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3384:3662 [4] NCCL INFO Channel 00 : 2[69010] -> 0[65010] via direct shared memory iv-ybpu7pvmiu5m57lh5kdd:3380:3659 [0] NCCL INFO Channel 01 : 0[65010] -> 1[67010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3384:3662 [4] NCCL INFO Channel 01 : 2[69010] -> 0[65010] via direct shared memory iv-ybpu7pvmiu5m57lh5kdd:3382:3660 [2] NCCL INFO Channel 00 : 1[67010] -> 3[6b010] via direct shared memory iv-ybpu7pvmiu5m57lh5kdd:3382:3660 [2] NCCL INFO Channel 01 : 1[67010] -> 3[6b010] via direct shared memory iv-ybpu7pvmiu5m57lh5kdd:3386:3661 [6] NCCL INFO Channel 00 : 3[6b010] -> 2[69010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3386:3661 [6] NCCL INFO Channel 01 : 3[6b010] -> 2[69010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3382:3660 [2] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3380:3659 [0] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3386:3661 [6] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3384:3662 [4] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3384:3662 [4] NCCL INFO Channel 00 : 2[69010] -> 3[6b010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3384:3662 [4] NCCL INFO Channel 01 : 2[69010] -> 3[6b010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3384:3662 [4] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3384:3662 [4] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3384:3662 [4] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3386:3661 [6] NCCL INFO Channel 00 : 3[6b010] -> 1[67010] via 
direct shared memory iv-ybpu7pvmiu5m57lh5kdd:3386:3661 [6] NCCL INFO Channel 01 : 3[6b010] -> 1[67010] via direct shared memory iv-ybpu7pvmiu5m57lh5kdd:3382:3660 [2] NCCL INFO Channel 00 : 1[67010] -> 0[65010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3382:3660 [2] NCCL INFO Channel 01 : 1[67010] -> 0[65010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3386:3661 [6] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3386:3661 [6] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3386:3661 [6] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3382:3660 [2] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3382:3660 [2] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3382:3660 [2] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3380:3659 [0] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3380:3659 [0] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3380:3659 [0] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3386:3661 [6] NCCL INFO comm 0x7f89a80d3010 rank 3 nranks 4 cudaDev 6 busId 6b010 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3384:3662 [4] NCCL INFO comm 0x7f0b2c0be010 rank 2 nranks 4 cudaDev 4 busId 69010 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3382:3660 [2] NCCL INFO comm 0x7f96400f6010 rank 1 nranks 4 cudaDev 2 busId 67010 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3380:3659 [0] NCCL INFO comm 0x7f7d48008fb0 rank 0 nranks 4 cudaDev 0 busId 65010 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3380:3380 [0] NCCL INFO Launch mode Parallel iv-ybpu7pvmiu5m57lh5kdd:3386:3674 [6] NCCL INFO Trees [0] -1/-1/-1->3->2 [1] 1/-1/-1->3->-1 iv-ybpu7pvmiu5m57lh5kdd:3386:3674 [6] NCCL INFO Setting affinity for GPU 6 to 0fffff,fffffc00,00000000 iv-ybpu7pvmiu5m57lh5kdd:3380:3672 [0] NCCL INFO Trees [0] -1/-1/-1->3->2 [1] 1/-1/-1->3->-1 iv-ybpu7pvmiu5m57lh5kdd:3380:3672 [0] NCCL INFO Setting affinity for GPU 0 to 03ff,ffffffff iv-ybpu7pvmiu5m57lh5kdd:3384:3671 [4] NCCL INFO Trees [0] -1/-1/-1->3->2 [1] 1/-1/-1->3->-1 iv-ybpu7pvmiu5m57lh5kdd:3384:3671 [4] NCCL INFO Setting affinity for GPU 4 to 0fffff,fffffc00,00000000 iv-ybpu7pvmiu5m57lh5kdd:3386:3674 [6] NCCL INFO Channel 00 : 2[6b010] -> 3[6b010] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3382:3673 [2] NCCL INFO Trees [0] -1/-1/-1->3->2 [1] 1/-1/-1->3->-1 iv-ybpu7pvmiu5m57lh5kdd:3382:3673 [2] NCCL INFO Setting affinity for GPU 2 to 03ff,ffffffff iv-ybpu7pvmiu5m57lh5kdd:3384:3671 [4] NCCL INFO Channel 00 : 2[69010] -> 3[69010] [receive] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3384:3671 [4] NCCL INFO Channel 01 : 2[69010] -> 3[69010] [receive] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3384:3671 [4] NCCL INFO Channel 00 : 3[69010] -> 0[69010] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3380:3672 [0] NCCL INFO Channel 00 : 2[65010] -> 3[65010] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3384:3671 [4] NCCL INFO Channel 01 : 3[69010] -> 0[69010] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3386:3674 [6] NCCL INFO Channel 01 : 2[6b010] -> 3[6b010] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3386:3674 [6] NCCL INFO Channel 00 : 3[6b010] -> 0[6b010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3382:3673 [2] NCCL INFO Channel 00 : 2[67010] -> 3[67010] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3380:3672 [0] NCCL INFO Channel 01 : 2[65010] -> 3[65010] [receive] via NET/IBext/0 
iv-ybpu7pvmiu5m57lh5kdd:3386:3674 [6] NCCL INFO Channel 01 : 3[6b010] -> 0[6b010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3382:3673 [2] NCCL INFO Channel 01 : 2[67010] -> 3[67010] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3380:3672 [0] NCCL INFO Channel 00 : 3[65010] -> 0[65010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3382:3673 [2] NCCL INFO Channel 00 : 3[67010] -> 0[67010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3380:3672 [0] NCCL INFO Channel 01 : 3[65010] -> 0[65010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3382:3673 [2] NCCL INFO Channel 01 : 3[67010] -> 0[67010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3386:3674 [6] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3380:3672 [0] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3382:3673 [2] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3386:3674 [6] NCCL INFO Channel 01 : 1[6b010] -> 3[6b010] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3380:3672 [0] NCCL INFO Channel 01 : 1[65010] -> 3[65010] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3382:3673 [2] NCCL INFO Channel 01 : 1[67010] -> 3[67010] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3386:3674 [6] NCCL INFO Channel 01 : 3[6b010] -> 1[6b010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3382:3673 [2] NCCL INFO Channel 01 : 3[67010] -> 1[67010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3380:3672 [0] NCCL INFO Channel 01 : 3[65010] -> 1[65010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3384:3671 [4] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3384:3671 [4] NCCL INFO Channel 01 : 1[69010] -> 3[69010] [receive] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3384:3671 [4] NCCL INFO Channel 01 : 3[69010] -> 1[69010] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3380:3672 [0] NCCL INFO Channel 00 : 3[65010] -> 2[65010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3386:3674 [6] NCCL INFO Channel 00 : 3[6b010] -> 2[6b010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3382:3673 [2] NCCL INFO Channel 00 : 3[67010] -> 2[67010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3386:3674 [6] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3386:3674 [6] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3386:3674 [6] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3386:3674 [6] NCCL INFO comm 0x7f8970008fb0 rank 3 nranks 4 cudaDev 6 busId 6b010 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3380:3672 [0] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3380:3672 [0] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3380:3672 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3380:3672 [0] NCCL INFO comm 0x7f7d48154010 rank 3 nranks 4 cudaDev 0 busId 65010 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3382:3673 [2] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3382:3673 [2] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3382:3673 [2] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3382:3673 [2] NCCL INFO comm 0x7f9640147000 rank 3 nranks 4 cudaDev 2 busId 67010 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3384:3671 [4] NCCL INFO Channel 00 : 3[69010] -> 2[69010] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3384:3671 [4] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3384:3671 [4] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3384:3671 
[4] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3384:3671 [4] NCCL INFO comm 0x7f0b2c11a660 rank 3 nranks 4 cudaDev 4 busId 69010 - Init COMPLETE NCCL version 2.10.3+cuda11.4 NCCL version 2.10.3+cuda11.4 NCCL version 2.10.3+cuda11.4 iv-ybpu7pvmiu5m57lh5kdd:3382:3686 [2] NCCL INFO Channel 00/04 : 0 1 iv-ybpu7pvmiu5m57lh5kdd:3383:3688 [3] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0 [2] -1/-1/-1->1->0 [3] -1/-1/-1->1->0 iv-ybpu7pvmiu5m57lh5kdd:3382:3686 [2] NCCL INFO Channel 01/04 : 0 1 iv-ybpu7pvmiu5m57lh5kdd:3383:3688 [3] NCCL INFO Setting affinity for GPU 3 to 03ff,ffffffff iv-ybpu7pvmiu5m57lh5kdd:3382:3686 [2] NCCL INFO Channel 02/04 : 0 1 iv-ybpu7pvmiu5m57lh5kdd:3382:3686 [2] NCCL INFO Channel 03/04 : 0 1 iv-ybpu7pvmiu5m57lh5kdd:3382:3686 [2] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1 [2] 1/-1/-1->0->-1 [3] 1/-1/-1->0->-1 iv-ybpu7pvmiu5m57lh5kdd:3382:3686 [2] NCCL INFO Setting affinity for GPU 2 to 03ff,ffffffff iv-ybpu7pvmiu5m57lh5kdd:3386:3684 [6] NCCL INFO Channel 00/02 : 0 1 iv-ybpu7pvmiu5m57lh5kdd:3387:3687 [7] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0 iv-ybpu7pvmiu5m57lh5kdd:3386:3684 [6] NCCL INFO Channel 01/02 : 0 1 iv-ybpu7pvmiu5m57lh5kdd:3387:3687 [7] NCCL INFO Setting affinity for GPU 7 to 0fffff,fffffc00,00000000 iv-ybpu7pvmiu5m57lh5kdd:3386:3684 [6] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1 iv-ybpu7pvmiu5m57lh5kdd:3386:3684 [6] NCCL INFO Setting affinity for GPU 6 to 0fffff,fffffc00,00000000 iv-ybpu7pvmiu5m57lh5kdd:3380:3692 [0] NCCL INFO Channel 00/02 : 0 1 iv-ybpu7pvmiu5m57lh5kdd:3381:3696 [1] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0 iv-ybpu7pvmiu5m57lh5kdd:3380:3692 [0] NCCL INFO Channel 01/02 : 0 1 iv-ybpu7pvmiu5m57lh5kdd:3381:3696 [1] NCCL INFO Setting affinity for GPU 1 to 03ff,ffffffff iv-ybpu7pvmiu5m57lh5kdd:3380:3692 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1 iv-ybpu7pvmiu5m57lh5kdd:3380:3692 [0] NCCL INFO Setting affinity for GPU 0 to 03ff,ffffffff iv-ybpu7pvmiu5m57lh5kdd:3385:3693 [5] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0 [2] -1/-1/-1->1->0 [3] -1/-1/-1->1->0 iv-ybpu7pvmiu5m57lh5kdd:3384:3690 [4] NCCL INFO Channel 00/04 : 0 1 iv-ybpu7pvmiu5m57lh5kdd:3384:3690 [4] NCCL INFO Channel 01/04 : 0 1 iv-ybpu7pvmiu5m57lh5kdd:3385:3693 [5] NCCL INFO Setting affinity for GPU 5 to 0fffff,fffffc00,00000000 iv-ybpu7pvmiu5m57lh5kdd:3384:3690 [4] NCCL INFO Channel 02/04 : 0 1 iv-ybpu7pvmiu5m57lh5kdd:3384:3690 [4] NCCL INFO Channel 03/04 : 0 1 iv-ybpu7pvmiu5m57lh5kdd:3384:3690 [4] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1 [2] 1/-1/-1->0->-1 [3] 1/-1/-1->0->-1 iv-ybpu7pvmiu5m57lh5kdd:3384:3690 [4] NCCL INFO Setting affinity for GPU 4 to 0fffff,fffffc00,00000000 iv-ybpu7pvmiu5m57lh5kdd:3386:3684 [6] NCCL INFO Channel 00 : 0[6b010] -> 1[6b020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3387:3687 [7] NCCL INFO Channel 00 : 1[6b020] -> 0[6b010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3386:3684 [6] NCCL INFO Channel 01 : 0[6b010] -> 1[6b020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3380:3692 [0] NCCL INFO Channel 00 : 0[65010] -> 1[65020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3381:3696 [1] NCCL INFO Channel 00 : 1[65020] -> 0[65010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3387:3687 [7] NCCL INFO Channel 01 : 1[6b020] -> 0[6b010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3383:3688 [3] NCCL INFO Channel 00 : 1[67020] -> 0[67010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3382:3686 [2] NCCL INFO Channel 00 : 0[67010] -> 1[67020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3380:3692 [0] 
NCCL INFO Channel 01 : 0[65010] -> 1[65020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3381:3696 [1] NCCL INFO Channel 01 : 1[65020] -> 0[65010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3382:3686 [2] NCCL INFO Channel 01 : 0[67010] -> 1[67020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3383:3688 [3] NCCL INFO Channel 01 : 1[67020] -> 0[67010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3382:3686 [2] NCCL INFO Channel 02 : 0[67010] -> 1[67020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3383:3688 [3] NCCL INFO Channel 02 : 1[67020] -> 0[67010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3387:3687 [7] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3387:3687 [7] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3387:3687 [7] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3387:3687 [7] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3386:3684 [6] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3386:3684 [6] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3386:3684 [6] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3386:3684 [6] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3386:3684 [6] NCCL INFO comm 0x7f896c008fb0 rank 0 nranks 2 cudaDev 6 busId 6b010 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3387:3687 [7] NCCL INFO comm 0x7f2360008fb0 rank 1 nranks 2 cudaDev 7 busId 6b020 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3386:3386 [6] NCCL INFO Launch mode Parallel iv-ybpu7pvmiu5m57lh5kdd:3382:3686 [2] NCCL INFO Channel 03 : 0[67010] -> 1[67020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3383:3688 [3] NCCL INFO Channel 03 : 1[67020] -> 0[67010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3381:3696 [1] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3381:3696 [1] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3381:3696 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3381:3696 [1] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3380:3692 [0] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3380:3692 [0] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3380:3692 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3380:3692 [0] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3380:3692 [0] NCCL INFO comm 0x7f7d2c008fb0 rank 0 nranks 2 cudaDev 0 busId 65010 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3381:3696 [1] NCCL INFO comm 0x7fbed4008fb0 rank 1 nranks 2 cudaDev 1 busId 65020 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3380:3380 [0] NCCL INFO Launch mode Parallel iv-ybpu7pvmiu5m57lh5kdd:3384:3690 [4] NCCL INFO Channel 00 : 0[69010] -> 1[69020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3385:3693 [5] NCCL INFO Channel 00 : 1[69020] -> 0[69010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3384:3690 [4] NCCL INFO Channel 01 : 0[69010] -> 1[69020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3385:3693 [5] NCCL INFO Channel 01 : 1[69020] -> 0[69010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3384:3690 [4] NCCL INFO Channel 02 : 0[69010] -> 1[69020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3385:3693 [5] NCCL INFO Channel 02 : 1[69020] -> 0[69010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3383:3688 [3] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3383:3688 [3] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3383:3688 [3] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3383:3688 [3] NCCL INFO 4 coll channels, 4 p2p channels, 4 p2p 
channels per peer iv-ybpu7pvmiu5m57lh5kdd:3382:3686 [2] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3382:3686 [2] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3382:3686 [2] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3382:3686 [2] NCCL INFO 4 coll channels, 4 p2p channels, 4 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3384:3690 [4] NCCL INFO Channel 03 : 0[69010] -> 1[69020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3382:3686 [2] NCCL INFO comm 0x7f95f8008fb0 rank 0 nranks 2 cudaDev 2 busId 67010 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3383:3688 [3] NCCL INFO comm 0x7f5e18008fb0 rank 1 nranks 2 cudaDev 3 busId 67020 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3385:3693 [5] NCCL INFO Channel 03 : 1[69020] -> 0[69010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3382:3382 [2] NCCL INFO Launch mode Parallel iv-ybpu7pvmiu5m57lh5kdd:3385:3693 [5] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3385:3693 [5] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3385:3693 [5] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3385:3693 [5] NCCL INFO 4 coll channels, 4 p2p channels, 4 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3384:3690 [4] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3384:3690 [4] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3384:3690 [4] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3384:3690 [4] NCCL INFO 4 coll channels, 4 p2p channels, 4 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3384:3690 [4] NCCL INFO comm 0x7f0af0008fb0 rank 0 nranks 2 cudaDev 4 busId 69010 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3385:3693 [5] NCCL INFO comm 0x7f6ebc008fb0 rank 1 nranks 2 cudaDev 5 busId 69020 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3384:3384 [4] NCCL INFO Launch mode Parallel [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. 
(function pthreadpool)
time (ms) | model-and-optimizer-setup: 572.68 | train/valid/test-data-iterators-setup: 1344.69
iv-ybpu7pvmiu5m57lh5kdd:3381:4841 [1] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1
iv-ybpu7pvmiu5m57lh5kdd:3381:4841 [1] NCCL INFO Setting affinity for GPU 1 to 03ff,ffffffff
iv-ybpu7pvmiu5m57lh5kdd:3380:4842 [0] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1
iv-ybpu7pvmiu5m57lh5kdd:3380:4842 [0] NCCL INFO Setting affinity for GPU 0 to 03ff,ffffffff
iv-ybpu7pvmiu5m57lh5kdd:3381:4841 [1] NCCL INFO Channel 00 : 0[65020] -> 1[65020] [receive] via NET/IBext/0
iv-ybpu7pvmiu5m57lh5kdd:3380:4842 [0] NCCL INFO Channel 00 : 0[65010] -> 1[65010] [receive] via NET/IBext/0
iv-ybpu7pvmiu5m57lh5kdd:3381:4841 [1] NCCL INFO Channel 01 : 0[65020] -> 1[65020] [receive] via NET/IBext/0
iv-ybpu7pvmiu5m57lh5kdd:3380:4842 [0] NCCL INFO Channel 01 : 0[65010] -> 1[65010] [receive] via NET/IBext/0
iv-ybpu7pvmiu5m57lh5kdd:3381:4841 [1] NCCL INFO Channel 00 : 1[65020] -> 0[65020] [send] via NET/IBext/0
iv-ybpu7pvmiu5m57lh5kdd:3380:4842 [0] NCCL INFO Channel 00 : 1[65010] -> 0[65010] [send] via NET/IBext/0
iv-ybpu7pvmiu5m57lh5kdd:3381:4841 [1] NCCL INFO Channel 01 : 1[65020] -> 0[65020] [send] via NET/IBext/0
iv-ybpu7pvmiu5m57lh5kdd:3380:4842 [0] NCCL INFO Channel 01 : 1[65010] -> 0[65010] [send] via NET/IBext/0
iv-ybpu7pvmiu5m57lh5kdd:3381:4841 [1] NCCL INFO Connected all rings
iv-ybpu7pvmiu5m57lh5kdd:3381:4841 [1] NCCL INFO Connected all trees
iv-ybpu7pvmiu5m57lh5kdd:3381:4841 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
iv-ybpu7pvmiu5m57lh5kdd:3381:4841 [1] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
iv-ybpu7pvmiu5m57lh5kdd:3381:4841 [1] NCCL INFO comm 0x7fbed0008fb0 rank 1 nranks 2 cudaDev 1 busId 65020 - Init COMPLETE
iv-ybpu7pvmiu5m57lh5kdd:3380:4842 [0] NCCL INFO Connected all rings
iv-ybpu7pvmiu5m57lh5kdd:3380:4842 [0] NCCL INFO Connected all trees
iv-ybpu7pvmiu5m57lh5kdd:3380:4842 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
iv-ybpu7pvmiu5m57lh5kdd:3380:4842 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
iv-ybpu7pvmiu5m57lh5kdd:3380:4842 [0] NCCL INFO comm 0x7f7cf0008fb0 rank 1 nranks 2 cudaDev 0 busId 65010 - Init COMPLETE
/dataset/xyn/Megatron-LM/megatron/model/transformer.py:536: UserWarning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (Triggered internally at /opt/pytorch/pytorch/aten/src/ATen/core/LegacyTypeDispatch.h:74.)
output = bias_dropout_add_func(
/dataset/xyn/Megatron-LM/megatron/model/transformer.py:536: UserWarning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (Triggered internally at /opt/pytorch/pytorch/aten/src/ATen/core/LegacyTypeDispatch.h:74.)
output = bias_dropout_add_func( NCCL version 2.10.3+cuda11.4 iv-ybpu7pvmiu5m57lh5kdd:3382:4849 [2] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1 iv-ybpu7pvmiu5m57lh5kdd:3382:4849 [2] NCCL INFO Setting affinity for GPU 2 to 03ff,ffffffff iv-ybpu7pvmiu5m57lh5kdd:3383:4850 [3] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1 iv-ybpu7pvmiu5m57lh5kdd:3383:4850 [3] NCCL INFO Setting affinity for GPU 3 to 03ff,ffffffff iv-ybpu7pvmiu5m57lh5kdd:3386:4854 [6] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1 iv-ybpu7pvmiu5m57lh5kdd:3386:4854 [6] NCCL INFO Setting affinity for GPU 6 to 0fffff,fffffc00,00000000 iv-ybpu7pvmiu5m57lh5kdd:3382:4849 [2] NCCL INFO Channel 00 : 0[67010] -> 1[67010] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3387:4853 [7] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1 iv-ybpu7pvmiu5m57lh5kdd:3387:4853 [7] NCCL INFO Setting affinity for GPU 7 to 0fffff,fffffc00,00000000 iv-ybpu7pvmiu5m57lh5kdd:3384:4858 [4] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1 iv-ybpu7pvmiu5m57lh5kdd:3384:4858 [4] NCCL INFO Setting affinity for GPU 4 to 0fffff,fffffc00,00000000 iv-ybpu7pvmiu5m57lh5kdd:3385:4857 [5] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1 iv-ybpu7pvmiu5m57lh5kdd:3385:4857 [5] NCCL INFO Setting affinity for GPU 5 to 0fffff,fffffc00,00000000 iv-ybpu7pvmiu5m57lh5kdd:3383:4850 [3] NCCL INFO Channel 00 : 0[67020] -> 1[67020] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3382:4849 [2] NCCL INFO Channel 01 : 0[67010] -> 1[67010] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3384:4858 [4] NCCL INFO Channel 00 : 0[69010] -> 1[69010] [receive] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3386:4854 [6] NCCL INFO Channel 00 : 0[6b010] -> 1[6b010] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3385:4857 [5] NCCL INFO Channel 00 : 0[69020] -> 1[69020] [receive] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3385:4857 [5] NCCL INFO Channel 01 : 0[69020] -> 1[69020] [receive] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3384:4858 [4] NCCL INFO Channel 01 : 0[69010] -> 1[69010] [receive] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3385:4857 [5] NCCL INFO Channel 00 : 1[69020] -> 0[69020] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3387:4853 [7] NCCL INFO Channel 00 : 0[6b020] -> 1[6b020] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3384:4858 [4] NCCL INFO Channel 00 : 1[69010] -> 0[69010] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3385:4857 [5] NCCL INFO Channel 01 : 1[69020] -> 0[69020] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3384:4858 [4] NCCL INFO Channel 01 : 1[69010] -> 0[69010] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3383:4850 [3] NCCL INFO Channel 01 : 0[67020] -> 1[67020] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3382:4849 [2] NCCL INFO Channel 00 : 1[67010] -> 0[67010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3386:4854 [6] NCCL INFO Channel 01 : 0[6b010] -> 1[6b010] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3382:4849 [2] NCCL INFO Channel 01 : 1[67010] -> 0[67010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3386:4854 [6] NCCL INFO Channel 00 : 1[6b010] -> 0[6b010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3387:4853 [7] NCCL INFO Channel 01 : 0[6b020] -> 1[6b020] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3383:4850 [3] NCCL INFO Channel 00 : 1[67020] -> 0[67020] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3382:4849 [2] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3382:4849 [2] NCCL INFO Connected all trees 
iv-ybpu7pvmiu5m57lh5kdd:3382:4849 [2] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3382:4849 [2] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3382:4849 [2] NCCL INFO comm 0x7f95b8008fb0 rank 1 nranks 2 cudaDev 2 busId 67010 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3386:4854 [6] NCCL INFO Channel 01 : 1[6b010] -> 0[6b010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3387:4853 [7] NCCL INFO Channel 00 : 1[6b020] -> 0[6b020] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3383:4850 [3] NCCL INFO Channel 01 : 1[67020] -> 0[67020] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3386:4854 [6] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3386:4854 [6] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3386:4854 [6] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3386:4854 [6] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3386:4854 [6] NCCL INFO comm 0x7f891c008fb0 rank 1 nranks 2 cudaDev 6 busId 6b010 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3387:4853 [7] NCCL INFO Channel 01 : 1[6b020] -> 0[6b020] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3383:4850 [3] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3383:4850 [3] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3383:4850 [3] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3383:4850 [3] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3383:4850 [3] NCCL INFO comm 0x7f5e14008fb0 rank 1 nranks 2 cudaDev 3 busId 67020 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3387:4853 [7] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3387:4853 [7] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3387:4853 [7] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3387:4853 [7] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3387:4853 [7] NCCL INFO comm 0x7f235c008fb0 rank 1 nranks 2 cudaDev 7 busId 6b020 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3385:4857 [5] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3385:4857 [5] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3385:4857 [5] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3385:4857 [5] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3385:4857 [5] NCCL INFO comm 0x7f6eb8008fb0 rank 1 nranks 2 cudaDev 5 busId 69020 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3384:4858 [4] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3384:4858 [4] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3384:4858 [4] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3384:4858 [4] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3384:4858 [4] NCCL INFO comm 0x7f0aac008fb0 rank 1 nranks 2 cudaDev 4 busId 69010 - Init COMPLETE /dataset/xyn/Megatron-LM/megatron/model/transformer.py:536: UserWarning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. 
(Triggered internally at /opt/pytorch/pytorch/aten/src/ATen/core/LegacyTypeDispatch.h:74.) output = bias_dropout_add_func( /dataset/xyn/Megatron-LM/megatron/model/transformer.py:536: UserWarning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (Triggered internally at /opt/pytorch/pytorch/aten/src/ATen/core/LegacyTypeDispatch.h:74.) output = bias_dropout_add_func( /dataset/xyn/Megatron-LM/megatron/model/transformer.py:536: UserWarning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (Triggered internally at /opt/pytorch/pytorch/aten/src/ATen/core/LegacyTypeDispatch.h:74.) output = bias_dropout_add_func( /dataset/xyn/Megatron-LM/megatron/model/transformer.py:536: UserWarning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (Triggered internally at /opt/pytorch/pytorch/aten/src/ATen/core/LegacyTypeDispatch.h:74.) output = bias_dropout_add_func( /dataset/xyn/Megatron-LM/megatron/model/transformer.py:536: UserWarning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (Triggered internally at /opt/pytorch/pytorch/aten/src/ATen/core/LegacyTypeDispatch.h:74.) output = bias_dropout_add_func( /dataset/xyn/Megatron-LM/megatron/model/transformer.py:536: UserWarning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (Triggered internally at /opt/pytorch/pytorch/aten/src/ATen/core/LegacyTypeDispatch.h:74.) 
output = bias_dropout_add_func( iv-ybpu7pvmiu5m57lh5kdd:3387:4882 [7] NCCL INFO Trees [0] 2/-1/-1->3->1 [1] 2/-1/-1->3->1 iv-ybpu7pvmiu5m57lh5kdd:3385:4881 [5] NCCL INFO Trees [0] -1/-1/-1->2->3 [1] -1/-1/-1->2->3 iv-ybpu7pvmiu5m57lh5kdd:3387:4882 [7] NCCL INFO Setting affinity for GPU 7 to 0fffff,fffffc00,00000000 iv-ybpu7pvmiu5m57lh5kdd:3385:4881 [5] NCCL INFO Setting affinity for GPU 5 to 0fffff,fffffc00,00000000 iv-ybpu7pvmiu5m57lh5kdd:3381:4848 [1] NCCL INFO Channel 00/02 : 0 1 3 2 iv-ybpu7pvmiu5m57lh5kdd:3381:4848 [1] NCCL INFO Channel 01/02 : 0 1 3 2 iv-ybpu7pvmiu5m57lh5kdd:3381:4848 [1] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1 iv-ybpu7pvmiu5m57lh5kdd:3383:4880 [3] NCCL INFO Trees [0] 3/-1/-1->1->0 [1] 3/-1/-1->1->0 iv-ybpu7pvmiu5m57lh5kdd:3381:4848 [1] NCCL INFO Setting affinity for GPU 1 to 03ff,ffffffff iv-ybpu7pvmiu5m57lh5kdd:3383:4880 [3] NCCL INFO Setting affinity for GPU 3 to 03ff,ffffffff iv-ybpu7pvmiu5m57lh5kdd:3381:4848 [1] NCCL INFO Channel 00 : 0[65020] -> 1[67020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3385:4881 [5] NCCL INFO Channel 00 : 2[69020] -> 0[65020] via direct shared memory iv-ybpu7pvmiu5m57lh5kdd:3381:4848 [1] NCCL INFO Channel 01 : 0[65020] -> 1[67020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3385:4881 [5] NCCL INFO Channel 01 : 2[69020] -> 0[65020] via direct shared memory iv-ybpu7pvmiu5m57lh5kdd:3383:4880 [3] NCCL INFO Channel 00 : 1[67020] -> 3[6b020] via direct shared memory iv-ybpu7pvmiu5m57lh5kdd:3383:4880 [3] NCCL INFO Channel 01 : 1[67020] -> 3[6b020] via direct shared memory iv-ybpu7pvmiu5m57lh5kdd:3387:4882 [7] NCCL INFO Channel 00 : 3[6b020] -> 2[69020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3387:4882 [7] NCCL INFO Channel 01 : 3[6b020] -> 2[69020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3383:4880 [3] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3381:4848 [1] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3385:4881 [5] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3387:4882 [7] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3385:4881 [5] NCCL INFO Channel 00 : 2[69020] -> 3[6b020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3385:4881 [5] NCCL INFO Channel 01 : 2[69020] -> 3[6b020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3385:4881 [5] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3385:4881 [5] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3385:4881 [5] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3387:4882 [7] NCCL INFO Channel 00 : 3[6b020] -> 1[67020] via direct shared memory iv-ybpu7pvmiu5m57lh5kdd:3387:4882 [7] NCCL INFO Channel 01 : 3[6b020] -> 1[67020] via direct shared memory iv-ybpu7pvmiu5m57lh5kdd:3387:4882 [7] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3387:4882 [7] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3387:4882 [7] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3383:4880 [3] NCCL INFO Channel 00 : 1[67020] -> 0[65020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3383:4880 [3] NCCL INFO Channel 01 : 1[67020] -> 0[65020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3381:4848 [1] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3381:4848 [1] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3381:4848 [1] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3383:4880 [3] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3383:4880 [3] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/512 
iv-ybpu7pvmiu5m57lh5kdd:3383:4880 [3] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3385:4881 [5] NCCL INFO comm 0x7f6eb80be010 rank 2 nranks 4 cudaDev 5 busId 69020 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3387:4882 [7] NCCL INFO comm 0x7f235c0c8010 rank 3 nranks 4 cudaDev 7 busId 6b020 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3381:4848 [1] NCCL INFO comm 0x7fbbbc008fb0 rank 0 nranks 4 cudaDev 1 busId 65020 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3383:4880 [3] NCCL INFO comm 0x7f5e140eb010 rank 1 nranks 4 cudaDev 3 busId 67020 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3381:3381 [1] NCCL INFO Launch mode Parallel iv-ybpu7pvmiu5m57lh5kdd:3382:4982 [2] NCCL INFO Trees [0] 7/-1/-1->6->4 [1] 7/2/-1->6->-1 iv-ybpu7pvmiu5m57lh5kdd:3383:4985 [3] NCCL INFO Trees [0] -1/-1/-1->7->6 [1] -1/-1/-1->7->6 iv-ybpu7pvmiu5m57lh5kdd:3382:4982 [2] NCCL INFO Setting affinity for GPU 2 to 03ff,ffffffff iv-ybpu7pvmiu5m57lh5kdd:3383:4985 [3] NCCL INFO Setting affinity for GPU 3 to 03ff,ffffffff iv-ybpu7pvmiu5m57lh5kdd:3386:4981 [6] NCCL INFO Trees [0] 7/-1/-1->6->4 [1] 7/2/-1->6->-1 iv-ybpu7pvmiu5m57lh5kdd:3386:4981 [6] NCCL INFO Setting affinity for GPU 6 to 0fffff,fffffc00,00000000 iv-ybpu7pvmiu5m57lh5kdd:3387:4988 [7] NCCL INFO Trees [0] -1/-1/-1->7->6 [1] -1/-1/-1->7->6 iv-ybpu7pvmiu5m57lh5kdd:3387:4988 [7] NCCL INFO Setting affinity for GPU 7 to 0fffff,fffffc00,00000000 iv-ybpu7pvmiu5m57lh5kdd:3382:4982 [2] NCCL INFO Channel 00 : 5[67020] -> 6[67010] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3384:4984 [4] NCCL INFO Trees [0] 7/-1/-1->6->4 [1] 7/2/-1->6->-1 iv-ybpu7pvmiu5m57lh5kdd:3384:4984 [4] NCCL INFO Setting affinity for GPU 4 to 0fffff,fffffc00,00000000 iv-ybpu7pvmiu5m57lh5kdd:3385:4986 [5] NCCL INFO Trees [0] -1/-1/-1->7->6 [1] -1/-1/-1->7->6 iv-ybpu7pvmiu5m57lh5kdd:3385:4986 [5] NCCL INFO Setting affinity for GPU 5 to 0fffff,fffffc00,00000000 iv-ybpu7pvmiu5m57lh5kdd:3380:4983 [0] NCCL INFO Trees [0] 7/-1/-1->6->4 [1] 7/2/-1->6->-1 iv-ybpu7pvmiu5m57lh5kdd:3381:4987 [1] NCCL INFO Trees [0] -1/-1/-1->7->6 [1] -1/-1/-1->7->6 iv-ybpu7pvmiu5m57lh5kdd:3380:4983 [0] NCCL INFO Setting affinity for GPU 0 to 03ff,ffffffff iv-ybpu7pvmiu5m57lh5kdd:3381:4987 [1] NCCL INFO Setting affinity for GPU 1 to 03ff,ffffffff iv-ybpu7pvmiu5m57lh5kdd:3384:4984 [4] NCCL INFO Channel 00 : 5[69020] -> 6[69010] [receive] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3385:4986 [5] NCCL INFO Channel 00 : 7[69020] -> 0[69010] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3382:4982 [2] NCCL INFO Channel 01 : 5[67020] -> 6[67010] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3384:4984 [4] NCCL INFO Channel 01 : 5[69020] -> 6[69010] [receive] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3386:4981 [6] NCCL INFO Channel 00 : 5[6b020] -> 6[6b010] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3382:4982 [2] NCCL INFO Channel 00 : 6[67010] -> 7[67020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3384:4984 [4] NCCL INFO Channel 00 : 6[69010] -> 7[69020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3383:4985 [3] NCCL INFO Channel 00 : 7[67020] -> 0[67010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3387:4988 [7] NCCL INFO Channel 00 : 7[6b020] -> 0[6b010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3385:4986 [5] NCCL INFO Channel 01 : 7[69020] -> 0[69010] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3382:4982 [2] NCCL INFO Channel 01 : 6[67010] -> 7[67020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3384:4984 [4] NCCL INFO Channel 01 : 6[69010] -> 7[69020] via P2P/IPC 
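The UserWarning repeated above for each rank is raised from inside Megatron-LM's fused bias-dropout-add path (megatron/model/transformer.py:536, the "output = bias_dropout_add_func(" call). It is a PyTorch deprecation notice aimed at kernel implementations and does not indicate a problem with this run. For reference, the fused function computes roughly the following; this is a sketch based on the Megatron-LM source, with illustrative shapes, and the variable names other than bias_dropout_add are assumptions:

import torch
import torch.nn.functional as F

def bias_dropout_add(x, bias, residual, prob, training):
    # Roughly what Megatron-LM's bias_dropout_add_func computes:
    # dropout(x + bias), added back onto the residual stream.
    out = F.dropout(x + bias, p=prob, training=training)
    return residual + out

# Illustrative shapes only: [sequence, micro-batch, hidden].
h = torch.randn(512, 4, 1024)
bias = torch.randn(1024)
residual = torch.randn(512, 4, 1024)
output = bias_dropout_add(h, bias, residual, prob=0.1, training=True)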
iv-ybpu7pvmiu5m57lh5kdd:3380:4983 [0] NCCL INFO Channel 00 : 5[65020] -> 6[65010] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3386:4981 [6] NCCL INFO Channel 01 : 5[6b020] -> 6[6b010] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3387:4988 [7] NCCL INFO Channel 01 : 7[6b020] -> 0[6b010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3383:4985 [3] NCCL INFO Channel 01 : 7[67020] -> 0[67010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3381:4987 [1] NCCL INFO Channel 00 : 7[65020] -> 0[65010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3380:4983 [0] NCCL INFO Channel 01 : 5[65020] -> 6[65010] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3386:4981 [6] NCCL INFO Channel 00 : 6[6b010] -> 7[6b020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3381:4987 [1] NCCL INFO Channel 01 : 7[65020] -> 0[65010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3380:4983 [0] NCCL INFO Channel 00 : 6[65010] -> 7[65020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3383:4985 [3] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3386:4981 [6] NCCL INFO Channel 01 : 6[6b010] -> 7[6b020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3380:4983 [0] NCCL INFO Channel 01 : 6[65010] -> 7[65020] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3383:4985 [3] NCCL INFO Channel 00 : 7[67020] -> 6[67010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3382:4982 [2] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3383:4985 [3] NCCL INFO Channel 01 : 7[67020] -> 6[67010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3384:4984 [4] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3382:4982 [2] NCCL INFO Channel 00 : 4[67010] -> 6[67010] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3387:4988 [7] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3384:4984 [4] NCCL INFO Channel 00 : 4[69010] -> 6[69010] [receive] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3381:4987 [1] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3387:4988 [7] NCCL INFO Channel 00 : 7[6b020] -> 6[6b010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3381:4987 [1] NCCL INFO Channel 00 : 7[65020] -> 6[65010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3387:4988 [7] NCCL INFO Channel 01 : 7[6b020] -> 6[6b010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3381:4987 [1] NCCL INFO Channel 01 : 7[65020] -> 6[65010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3382:4982 [2] NCCL INFO Channel 01 : 2[67010] -> 6[67010] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3382:4982 [2] NCCL INFO Channel 01 : 6[67010] -> 2[67010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3386:4981 [6] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3386:4981 [6] NCCL INFO Channel 00 : 4[6b010] -> 6[6b010] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3380:4983 [0] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3380:4983 [0] NCCL INFO Channel 00 : 4[65010] -> 6[65010] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3384:4984 [4] NCCL INFO Channel 01 : 2[69010] -> 6[69010] [receive] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3382:4982 [2] NCCL INFO Channel 00 : 6[67010] -> 4[67010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3384:4984 [4] NCCL INFO Channel 01 : 6[69010] -> 2[69010] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3382:4982 [2] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3382:4982 [2] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3382:4982 [2] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3383:4985 [3] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3383:4985 [3] NCCL INFO threadThresholds 8/8/64 | 
64/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3383:4985 [3] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3382:4982 [2] NCCL INFO comm 0x7f9164008fb0 rank 6 nranks 8 cudaDev 2 busId 67010 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3383:4985 [3] NCCL INFO comm 0x7f5af4008fb0 rank 7 nranks 8 cudaDev 3 busId 67020 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3386:4981 [6] NCCL INFO Channel 01 : 2[6b010] -> 6[6b010] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3386:4981 [6] NCCL INFO Channel 01 : 6[6b010] -> 2[6b010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3380:4983 [0] NCCL INFO Channel 01 : 2[65010] -> 6[65010] [receive] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3380:4983 [0] NCCL INFO Channel 01 : 6[65010] -> 2[65010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3386:4981 [6] NCCL INFO Channel 00 : 6[6b010] -> 4[6b010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3380:4983 [0] NCCL INFO Channel 00 : 6[65010] -> 4[65010] [send] via NET/IBext/0 iv-ybpu7pvmiu5m57lh5kdd:3386:4981 [6] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3386:4981 [6] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3386:4981 [6] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3387:4988 [7] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3387:4988 [7] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3387:4988 [7] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3386:4981 [6] NCCL INFO comm 0x7f84d4008fb0 rank 6 nranks 8 cudaDev 6 busId 6b010 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3387:4988 [7] NCCL INFO comm 0x7f2028008fb0 rank 7 nranks 8 cudaDev 7 busId 6b020 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3385:4986 [5] NCCL INFO Connected all rings iv-ybpu7pvmiu5m57lh5kdd:3380:4983 [0] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3380:4983 [0] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3380:4983 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3381:4987 [1] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3385:4986 [5] NCCL INFO Channel 00 : 7[69020] -> 6[69010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3381:4987 [1] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3381:4987 [1] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3380:4983 [0] NCCL INFO comm 0x7f789c008fb0 rank 6 nranks 8 cudaDev 0 busId 65010 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3381:4987 [1] NCCL INFO comm 0x7fbb9c008fb0 rank 7 nranks 8 cudaDev 1 busId 65020 - Init COMPLETE iv-ybpu7pvmiu5m57lh5kdd:3385:4986 [5] NCCL INFO Channel 01 : 7[69020] -> 6[69010] via P2P/IPC iv-ybpu7pvmiu5m57lh5kdd:3384:4984 [4] NCCL INFO Channel 00 : 6[69010] -> 4[69010] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmiu5m57lh5kdd:3384:4984 [4] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3384:4984 [4] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3384:4984 [4] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3385:4986 [5] NCCL INFO Connected all trees iv-ybpu7pvmiu5m57lh5kdd:3385:4986 [5] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/512 iv-ybpu7pvmiu5m57lh5kdd:3385:4986 [5] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmiu5m57lh5kdd:3384:4984 [4] NCCL INFO comm 0x7f069c008fb0 rank 6 nranks 8 cudaDev 4 busId 69010 
- Init COMPLETE
iv-ybpu7pvmiu5m57lh5kdd:3385:4986 [5] NCCL INFO comm 0x7f6a24008fb0 rank 7 nranks 8 cudaDev 5 busId 69020 - Init COMPLETE
iteration 100/ 220 | consumed samples: 614400 | elapsed time per iteration (ms): 14120.0 | learning rate: 8.586E-07 | tpt: 435.1 samples/s | global batch size: 6144 | lm loss: 9.568258E+00 | sop loss: 6.980194E-01 | loss scale: 262144.0 | grad norm: 1.889 | number of skipped iterations: 15 | number of nan iterations: 0 |
time (ms) | forward-compute: 3148.62 | forward-recv: 1181.51 | backward-compute: 6818.10 | backward-send: 31.57 | backward-send-forward-recv: 502.71 | backward-params-all-reduce: 25.87 | backward-embedding-all-reduce: 2393.06 | optimizer-copy-to-main-grad: 1.30 | optimizer-unscale-and-check-inf: 8.62 | optimizer-clip-main-grad: 1.84 | optimizer-copy-main-to-model-params: 0.81 | optimizer: 14.58 | batch-generator: 27.07
[Rank 25] (after 100 iterations) memory (MB) | allocated: 1355.98095703125 | max allocated: 8735.56982421875 | reserved: 17502.0 | max reserved: 17502.0
[Rank 24] (after 100 iterations) memory (MB) | allocated: 1355.98095703125 | max allocated: 8735.56982421875 | reserved: 17310.0 | max reserved: 17310.0
timestamp, name, driver_version, utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB]
2022/07/06 00:26:44.258, Tesla V100-SXM2-32GB, 470.57.02, 100 %, 0 %, 32510 MiB, 13886 MiB, 18624 MiB
2022/07/06 00:26:44.261, Tesla V100-SXM2-32GB, 470.57.02, 100 %, 0 %, 32510 MiB, 13690 MiB, 18820 MiB
2022/07/06 00:26:44.264, Tesla V100-SXM2-32GB, 470.57.02, 80 %, 3 %, 32510 MiB, 13910 MiB, 18600 MiB
2022/07/06 00:26:44.266, Tesla V100-SXM2-32GB, 470.57.02, 93 %, 3 %, 32510 MiB, 15056 MiB, 17454 MiB
2022/07/06 00:26:44.267, Tesla V100-SXM2-32GB, 470.57.02, 100 %, 0 %, 32510 MiB, 14818 MiB, 17692 MiB
timestamp, name, driver_version, utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB]
2022/07/06 00:26:44.269, Tesla V100-SXM2-32GB, 470.57.02, 100 %, 0 %, 32510 MiB, 13186 MiB, 19324 MiB
timestamp, name, driver_version, utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB]
2022/07/06 00:26:44.271, Tesla V100-SXM2-32GB, 470.57.02, 100 %, 0 %, 32510 MiB, 13886 MiB, 18624 MiB
2022/07/06 00:26:44.272, Tesla V100-SXM2-32GB, 470.57.02, 48 %, 3 %, 32510 MiB, 13890 MiB, 18620 MiB
timestamp, name, driver_version, utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB]
2022/07/06 00:26:44.273, Tesla V100-SXM2-32GB, 470.57.02, 100 %, 0 %, 32510 MiB, 13886 MiB, 18624 MiB
2022/07/06 00:26:44.275, Tesla V100-SXM2-32GB, 470.57.02, 100 %, 0 %, 32510 MiB, 13690 MiB, 18820 MiB
2022/07/06 00:26:44.275, Tesla V100-SXM2-32GB, 470.57.02, 86 %, 3 %, 32510 MiB, 13308 MiB, 19202 MiB
timestamp, name, driver_version, utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB]
2022/07/06 00:26:44.276, Tesla V100-SXM2-32GB, 470.57.02, 100 %, 0 %, 32510 MiB, 13886 MiB, 18624 MiB
2022/07/06 00:26:44.276, Tesla V100-SXM2-32GB, 470.57.02, 100 %, 0 %, 32510 MiB, 13690 MiB, 18820 MiB
timestamp, name, driver_version, utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB]
timestamp, name, driver_version, utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB]
2022/07/06 00:26:44.278, Tesla V100-SXM2-32GB, 470.57.02, 80 %, 3 %, 32510 MiB, 13910 MiB, 18600 MiB
2022/07/06 00:26:44.278, Tesla V100-SXM2-32GB, 470.57.02, 100 %, 0 %, 32510 MiB, 13886 MiB, 18624 MiB
2022/07/06 00:26:44.279, Tesla V100-SXM2-32GB, 470.57.02, 24 %, 3 %, 32510 MiB, 13690 MiB, 18820 MiB
2022/07/06 00:26:44.279, Tesla V100-SXM2-32GB, 470.57.02, 80 %, 3 %, 32510 MiB, 13910 MiB, 18600 MiB
2022/07/06 00:26:44.280, Tesla V100-SXM2-32GB, 470.57.02, 100 %, 0 %, 32510 MiB, 13886 MiB, 18624 MiB
2022/07/06 00:26:44.282, Tesla V100-SXM2-32GB, 470.57.02, 93 %, 3 %, 32510 MiB, 15056 MiB, 17454 MiB
2022/07/06 00:26:44.282, Tesla V100-SXM2-32GB, 470.57.02, 100 %, 0 %, 32510 MiB, 13886 MiB, 18624 MiB
2022/07/06 00:26:44.283, Tesla V100-SXM2-32GB, 470.57.02, 24 %, 3 %, 32510 MiB, 13690 MiB, 18820 MiB
2022/07/06 00:26:44.284, Tesla V100-SXM2-32GB, 470.57.02, 80 %, 3 %, 32510 MiB, 13910 MiB, 18600 MiB
2022/07/06 00:26:44.284, Tesla V100-SXM2-32GB, 470.57.02, 93 %, 3 %, 32510 MiB, 15056 MiB, 17454 MiB
2022/07/06 00:26:44.285, Tesla V100-SXM2-32GB, 470.57.02, 24 %, 3 %, 32510 MiB, 13690 MiB, 18820 MiB
2022/07/06 00:26:44.287, Tesla V100-SXM2-32GB, 470.57.02, 20 %, 3 %, 32510 MiB, 14818 MiB, 17692 MiB
2022/07/06 00:26:44.288, Tesla V100-SXM2-32GB, 470.57.02, 24 %, 3 %, 32510 MiB, 13690 MiB, 18820 MiB
2022/07/06 00:26:44.289, Tesla V100-SXM2-32GB, 470.57.02, 80 %, 3 %, 32510 MiB, 13910 MiB, 18600 MiB
2022/07/06 00:26:44.289, Tesla V100-SXM2-32GB, 470.57.02, 93 %, 3 %, 32510 MiB, 15056 MiB, 17454 MiB
2022/07/06 00:26:44.290, Tesla V100-SXM2-32GB, 470.57.02, 20 %, 3 %, 32510 MiB, 14818 MiB, 17692 MiB
timestamp, name, driver_version, utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB]
2022/07/06 00:26:44.291, Tesla V100-SXM2-32GB, 470.57.02, 80 %, 3 %, 32510 MiB, 13910 MiB, 18600 MiB
2022/07/06 00:26:44.293, Tesla V100-SXM2-32GB, 470.57.02, 28 %, 3 %, 32510 MiB, 13186 MiB, 19324 MiB
2022/07/06 00:26:44.293, Tesla V100-SXM2-32GB, 470.57.02, 80 %, 3 %, 32510 MiB, 13910 MiB, 18600 MiB
2022/07/06 00:26:44.294, Tesla V100-SXM2-32GB, 470.57.02, 93 %, 3 %, 32510 MiB, 15056 MiB, 17454 MiB
2022/07/06 00:26:44.294, Tesla V100-SXM2-32GB, 470.57.02, 20 %, 3 %, 32510 MiB, 14818 MiB, 17692 MiB
2022/07/06 00:26:44.294, Tesla V100-SXM2-32GB, 470.57.02, 28 %, 3 %, 32510 MiB, 13186 MiB, 19324 MiB
2022/07/06 00:26:44.295, Tesla V100-SXM2-32GB, 470.57.02, 12 %, 3 %, 32510 MiB, 13886 MiB, 18624 MiB
2022/07/06 00:26:44.296, Tesla V100-SXM2-32GB, 470.57.02, 93 %, 3 %, 32510 MiB, 15056 MiB, 17454 MiB
2022/07/06 00:26:44.300, Tesla V100-SXM2-32GB, 470.57.02, 48 %, 3 %, 32510 MiB, 13890 MiB, 18620 MiB
2022/07/06 00:26:44.300, Tesla V100-SXM2-32GB, 470.57.02, 93 %, 3 %, 32510 MiB, 15056 MiB, 17454 MiB
2022/07/06 00:26:44.301, Tesla V100-SXM2-32GB, 470.57.02, 20 %, 3 %, 32510 MiB, 14818 MiB, 17692 MiB
2022/07/06 00:26:44.302, Tesla V100-SXM2-32GB, 470.57.02, 28 %, 3 %, 32510 MiB, 13186 MiB, 19324 MiB
2022/07/06 00:26:44.302, Tesla V100-SXM2-32GB, 470.57.02, 48 %, 3 %, 32510 MiB, 13890 MiB, 18620 MiB
2022/07/06 00:26:44.303, Tesla V100-SXM2-32GB, 470.57.02, 24 %, 3 %, 32510 MiB, 13690 MiB, 18820 MiB
2022/07/06 00:26:44.304, Tesla V100-SXM2-32GB, 470.57.02, 20 %, 3 %, 32510 MiB, 14818 MiB, 17692 MiB
2022/07/06 00:26:44.306, Tesla V100-SXM2-32GB, 470.57.02, 86 %, 3 %, 32510 MiB, 13308 MiB, 19202 MiB
2022/07/06 00:26:44.306, Tesla V100-SXM2-32GB, 470.57.02, 20 %, 3 %, 32510 MiB, 14818 MiB, 17692 MiB
2022/07/06 00:26:44.307, Tesla V100-SXM2-32GB, 470.57.02, 28 %, 3 %, 32510 MiB, 13186 MiB, 19324 MiB
2022/07/06 00:26:44.308, Tesla V100-SXM2-32GB, 470.57.02, 48 %, 3 %, 32510 MiB, 13890 MiB, 18620 MiB
2022/07/06 00:26:44.308, Tesla V100-SXM2-32GB, 470.57.02, 86 %, 3 %, 32510 MiB, 13308 MiB, 19202 MiB
2022/07/06 00:26:44.309, Tesla V100-SXM2-32GB, 470.57.02, 80 %, 3 %, 32510 MiB, 13910 MiB, 18600 MiB
2022/07/06 00:26:44.309, Tesla V100-SXM2-32GB, 470.57.02, 28 %, 3 %, 32510 MiB, 13186 MiB, 19324 MiB
2022/07/06 00:26:44.312, Tesla V100-SXM2-32GB, 470.57.02, 28 %, 3 %, 32510 MiB, 13186 MiB, 19324 MiB
2022/07/06 00:26:44.313, Tesla V100-SXM2-32GB, 470.57.02, 48 %, 3 %, 32510 MiB, 13890 MiB, 18620 MiB
2022/07/06 00:26:44.314, Tesla V100-SXM2-32GB, 470.57.02, 86 %, 3 %, 32510 MiB, 13308 MiB, 19202 MiB
2022/07/06 00:26:44.315, Tesla V100-SXM2-32GB, 470.57.02, 93 %, 3 %, 32510 MiB, 15056 MiB, 17454 MiB
2022/07/06 00:26:44.316, Tesla V100-SXM2-32GB, 470.57.02, 48 %, 3 %, 32510 MiB, 13890 MiB, 18620 MiB
2022/07/06 00:26:44.317, Tesla V100-SXM2-32GB, 470.57.02, 48 %, 3 %, 32510 MiB, 13890 MiB, 18620 MiB
2022/07/06 00:26:44.318, Tesla V100-SXM2-32GB, 470.57.02, 86 %, 3 %, 32510 MiB, 13308 MiB, 19202 MiB
2022/07/06 00:26:44.320, Tesla V100-SXM2-32GB, 470.57.02, 20 %, 3 %, 32510 MiB, 14818 MiB, 17692 MiB
2022/07/06 00:26:44.321, Tesla V100-SXM2-32GB, 470.57.02, 86 %, 3 %, 32510 MiB, 13308 MiB, 19202 MiB
2022/07/06 00:26:44.323, Tesla V100-SXM2-32GB, 470.57.02, 86 %, 3 %, 32510 MiB, 13308 MiB, 19202 MiB
2022/07/06 00:26:44.325, Tesla V100-SXM2-32GB, 470.57.02, 28 %, 3 %, 32510 MiB, 13186 MiB, 19324 MiB
2022/07/06 00:26:44.329, Tesla V100-SXM2-32GB, 470.57.02, 48 %, 3 %, 32510 MiB, 13890 MiB, 18620 MiB
2022/07/06 00:26:44.337, Tesla V100-SXM2-32GB, 470.57.02, 86 %, 3 %, 32510 MiB, 13308 MiB, 19202 MiB
iteration 200/ 220 | consumed samples: 1228800 | elapsed time per iteration (ms): 13983.5 | learning rate: 1.869E-06 | tpt: 439.4 samples/s | global batch size: 6144 | lm loss: 8.904323E+00 | sop loss: 6.933971E-01 | loss scale: 262144.0 | grad norm: 1.693 | number of skipped iterations: 0 | number of nan iterations: 0 |
time (ms) | forward-compute: 3122.23 | forward-recv: 1088.75 | backward-compute: 6814.63 | backward-send: 31.43 | backward-send-forward-recv: 494.22 | backward-params-all-reduce: 26.13 | backward-embedding-all-reduce: 2392.23 | optimizer-copy-to-main-grad: 1.28 | optimizer-unscale-and-check-inf: 1.61 | optimizer-clip-main-grad: 2.12 | optimizer-copy-main-to-model-params: 0.95 | optimizer: 8.27 | batch-generator: 25.13
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
validation loss at the end of training for val data | lm loss value: 8.597045E+00 | lm loss PPL: 5.415632E+03 | sop loss value: 6.920559E-01 | sop loss PPL: 1.997819E+00 |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
validation loss at the end of training for test data | lm loss value: 8.566530E+00 | lm loss PPL: 5.252872E+03 | sop loss value: 6.935420E-01 | sop loss PPL: 2.000790E+00 |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
INFO:torch.distributed.elastic.agent.server.api:[default] worker group successfully finished.
Waiting 300 seconds for other agents to finish.
INFO:torch.distributed.elastic.agent.server.api:Local worker group finished (SUCCEEDED). Waiting 300 seconds for other agents to finish
/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/utils/store.py:70: FutureWarning: This is an experimental API and will be changed in future.
warnings.warn(
INFO:torch.distributed.elastic.agent.server.api:Done waiting for other agents. Elapsed: 0.07415962219238281 seconds
{"name": "torchelastic.worker.status.SUCCEEDED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 24, "group_rank": 3, "worker_id": "3380", "role": "default", "hostname": "iv-ybpu7pvmiu5m57lh5kdd", "state": "SUCCEEDED", "total_run_time": 3274, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 4, \"entry_point\": \"python\", \"local_rank\": [0], \"role_rank\": [24], \"role_world_size\": [32]}", "agent_restarts": 0}}
{"name": "torchelastic.worker.status.SUCCEEDED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 25, "group_rank": 3, "worker_id": "3381", "role": "default", "hostname": "iv-ybpu7pvmiu5m57lh5kdd", "state": "SUCCEEDED", "total_run_time": 3274, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 4, \"entry_point\": \"python\", \"local_rank\": [1], \"role_rank\": [25], \"role_world_size\": [32]}", "agent_restarts": 0}}
{"name": "torchelastic.worker.status.SUCCEEDED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 26, "group_rank": 3, "worker_id": "3382", "role": "default", "hostname": "iv-ybpu7pvmiu5m57lh5kdd", "state": "SUCCEEDED", "total_run_time": 3274, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 4, \"entry_point\": \"python\", \"local_rank\": [2], \"role_rank\": [26], \"role_world_size\": [32]}", "agent_restarts": 0}}
{"name": "torchelastic.worker.status.SUCCEEDED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 27, "group_rank": 3, "worker_id": "3383", "role": "default", "hostname": "iv-ybpu7pvmiu5m57lh5kdd", "state": "SUCCEEDED", "total_run_time": 3274, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 4, \"entry_point\": \"python\", \"local_rank\": [3], \"role_rank\": [27], \"role_world_size\": [32]}", "agent_restarts": 0}}
{"name": "torchelastic.worker.status.SUCCEEDED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 28, "group_rank": 3, "worker_id": "3384", "role": "default", "hostname": "iv-ybpu7pvmiu5m57lh5kdd", "state": "SUCCEEDED", "total_run_time": 3274, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 4, \"entry_point\": \"python\", \"local_rank\": [4], \"role_rank\": [28], \"role_world_size\": [32]}", "agent_restarts": 0}}
{"name": "torchelastic.worker.status.SUCCEEDED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 29, "group_rank": 3, "worker_id": "3385", "role": "default", "hostname": "iv-ybpu7pvmiu5m57lh5kdd", "state": "SUCCEEDED", "total_run_time": 3274, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 4, \"entry_point\": \"python\", \"local_rank\": [5], \"role_rank\": [29], \"role_world_size\": [32]}", "agent_restarts": 0}}
{"name": "torchelastic.worker.status.SUCCEEDED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 30, "group_rank": 3, "worker_id": "3386", "role": "default", "hostname": "iv-ybpu7pvmiu5m57lh5kdd", "state": "SUCCEEDED", "total_run_time": 3274, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 4, \"entry_point\": \"python\", \"local_rank\": [6], \"role_rank\": [30], \"role_world_size\": [32]}", "agent_restarts": 0}}
{"name": "torchelastic.worker.status.SUCCEEDED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 31, "group_rank": 3, "worker_id": "3387", "role": "default", "hostname": "iv-ybpu7pvmiu5m57lh5kdd", "state": "SUCCEEDED", "total_run_time": 3274, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 4, \"entry_point\": \"python\", \"local_rank\": [7], \"role_rank\": [31], \"role_world_size\": [32]}", "agent_restarts": 0}}
{"name": "torchelastic.worker.status.SUCCEEDED", "source": "AGENT", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": null, "group_rank": 3, "worker_id": null, "role": "default", "hostname": "iv-ybpu7pvmiu5m57lh5kdd", "state": "SUCCEEDED", "total_run_time": 3274, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 4, \"entry_point\": \"python\"}", "agent_restarts": 0}}
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************