The module torch.distributed.launch is deprecated and going to be removed in future. Migrate to torch.distributed.run
WARNING:torch.distributed.run:--use_env is deprecated and will be removed in future releases. Please read local_rank from `os.environ('LOCAL_RANK')` instead.
INFO:torch.distributed.launcher.api:Starting elastic_operator with launch configs:
  entrypoint       : pretrain_bert.py
  min_nodes        : 2
  max_nodes        : 2
  nproc_per_node   : 8
  run_id           : none
  rdzv_backend     : static
  rdzv_endpoint    : 198.18.8.14:6000
  rdzv_configs     : {'rank': 1, 'timeout': 900}
  max_restarts     : 3
  monitor_interval : 5
  log_dir          : None
  metrics_cfg      : {}
INFO:torch.distributed.elastic.agent.server.local_elastic_agent:log directory set to: /tmp/torchelastic_xs5ocqwv/none_7bwri3cf
INFO:torch.distributed.elastic.agent.server.api:[default] starting workers for entrypoint: python
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous'ing worker group
/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/utils/store.py:52: FutureWarning: This is an experimental API and will be changed in future.
  warnings.warn(
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous complete for workers.
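The two deprecation warnings above point at a concrete migration: launch with `torch.distributed.run` (torchrun) instead of `torch.distributed.launch`, and read the local rank from the `LOCAL_RANK` environment variable rather than the `--local_rank` argument. (The warning's `os.environ('LOCAL_RANK')` needs bracket indexing in practice, since `os.environ` is a mapping, not a callable.) A minimal sketch of the env-driven rank bookkeeping this launcher sets up; the helper names are ours, not part of torch:

```python
import os

def local_rank_from_env(default: int = 0) -> int:
    # torch.distributed.run exports LOCAL_RANK to each worker process;
    # os.environ is indexed with brackets, not called like a function.
    return int(os.environ.get("LOCAL_RANK", default))

def global_rank(group_rank: int, nproc_per_node: int, local_rank: int) -> int:
    # With nproc_per_node=8, the node with group_rank=1 hosts global
    # ranks 8..15 -- exactly the layout the rendezvous result reports.
    return group_rank * nproc_per_node + local_rank
```

For this run (min_nodes=max_nodes=2, nproc_per_node=8), the second node's eight workers map to global ranks 8 through 15.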
Result:
  restart_count=0
  master_addr=198.18.8.14
  master_port=6000
  group_rank=1
  group_world_size=2
  local_ranks=[0, 1, 2, 3, 4, 5, 6, 7]
  role_ranks=[8, 9, 10, 11, 12, 13, 14, 15]
  global_ranks=[8, 9, 10, 11, 12, 13, 14, 15]
  role_world_sizes=[16, 16, 16, 16, 16, 16, 16, 16]
  global_world_sizes=[16, 16, 16, 16, 16, 16, 16, 16]
INFO:torch.distributed.elastic.agent.server.api:[default] Starting worker group
INFO:torch.distributed.elastic.multiprocessing:Setting worker0 reply file to: /tmp/torchelastic_xs5ocqwv/none_7bwri3cf/attempt_0/0/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker1 reply file to: /tmp/torchelastic_xs5ocqwv/none_7bwri3cf/attempt_0/1/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker2 reply file to: /tmp/torchelastic_xs5ocqwv/none_7bwri3cf/attempt_0/2/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker3 reply file to: /tmp/torchelastic_xs5ocqwv/none_7bwri3cf/attempt_0/3/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker4 reply file to: /tmp/torchelastic_xs5ocqwv/none_7bwri3cf/attempt_0/4/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker5 reply file to: /tmp/torchelastic_xs5ocqwv/none_7bwri3cf/attempt_0/5/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker6 reply file to: /tmp/torchelastic_xs5ocqwv/none_7bwri3cf/attempt_0/6/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker7 reply file to: /tmp/torchelastic_xs5ocqwv/none_7bwri3cf/attempt_0/7/error.json
[W ProcessGroupNCCL.cpp:1671] Rank 10 using best-guess GPU 2 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device.
[W ProcessGroupNCCL.cpp:1671] Rank 14 using best-guess GPU 6 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device.
[W ProcessGroupNCCL.cpp:1671] Rank 13 using best-guess GPU 5 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device.
[W ProcessGroupNCCL.cpp:1671] Rank 8 using best-guess GPU 0 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device.
[W ProcessGroupNCCL.cpp:1671] Rank 9 using best-guess GPU 1 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device.
[W ProcessGroupNCCL.cpp:1671] Rank 12 using best-guess GPU 4 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device.
[W ProcessGroupNCCL.cpp:1671] Rank 11 using best-guess GPU 3 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device.
[W ProcessGroupNCCL.cpp:1671] Rank 15 using best-guess GPU 7 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device.
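The repeated `[W ProcessGroupNCCL.cpp:1671]` warnings above come from calling `barrier()` before NCCL has learned which GPU each rank owns, so it guesses from the local rank. Pinning each process to its GPU and passing `device_ids` removes the guess (and the potential hang the warning describes). A sketch under the assumption that `LOCAL_RANK` is set as in this launch; this is not taken from the training script itself:

```python
import os

def barrier_device_ids():
    # The device list NCCL should use for this process's barrier:
    # one entry, the GPU index matching this worker's local rank.
    return [int(os.environ.get("LOCAL_RANK", "0"))]

def pinned_barrier():
    # torch is imported lazily here only so the helper above stays
    # importable on CPU-only machines; a real training script would
    # import at module top.
    import torch
    import torch.distributed as dist

    local_rank = barrier_device_ids()[0]
    torch.cuda.set_device(local_rank)      # fix the rank -> GPU mapping
    dist.barrier(device_ids=[local_rank])  # no "best-guess GPU" warning
```

Calling `torch.cuda.set_device` early (right after reading `LOCAL_RANK`) is usually enough on its own; `device_ids` makes the mapping explicit per the warning's suggestion.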
iv-2udaavw4l02thdv8lcrl:6808:6808 [0] NCCL INFO Bootstrap : Using eth0:192.168.11.42<0>
iv-2udaavw4l02thdv8lcrl:6808:6808 [0] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
iv-2udaavw4l02thdv8lcrl:6808:6808 [0] NCCL INFO P2P plugin IBext
iv-2udaavw4l02thdv8lcrl:6808:6808 [0] NCCL INFO NCCL_IB_PCI_RELAXED_ORDERING set by environment to 1.
iv-2udaavw4l02thdv8lcrl:6815:6815 [7] NCCL INFO Bootstrap : Using eth0:192.168.11.42<0>
iv-2udaavw4l02thdv8lcrl:6809:6809 [1] NCCL INFO Bootstrap : Using eth0:192.168.11.42<0>
iv-2udaavw4l02thdv8lcrl:6814:6814 [6] NCCL INFO Bootstrap : Using eth0:192.168.11.42<0>
iv-2udaavw4l02thdv8lcrl:6815:6815 [7] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
iv-2udaavw4l02thdv8lcrl:6815:6815 [7] NCCL INFO P2P plugin IBext
iv-2udaavw4l02thdv8lcrl:6815:6815 [7] NCCL INFO NCCL_IB_PCI_RELAXED_ORDERING set by environment to 1.
iv-2udaavw4l02thdv8lcrl:6809:6809 [1] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
iv-2udaavw4l02thdv8lcrl:6809:6809 [1] NCCL INFO P2P plugin IBext
iv-2udaavw4l02thdv8lcrl:6809:6809 [1] NCCL INFO NCCL_IB_PCI_RELAXED_ORDERING set by environment to 1.
iv-2udaavw4l02thdv8lcrl:6810:6810 [2] NCCL INFO Bootstrap : Using eth0:192.168.11.42<0>
iv-2udaavw4l02thdv8lcrl:6814:6814 [6] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
iv-2udaavw4l02thdv8lcrl:6814:6814 [6] NCCL INFO P2P plugin IBext
iv-2udaavw4l02thdv8lcrl:6814:6814 [6] NCCL INFO NCCL_IB_PCI_RELAXED_ORDERING set by environment to 1.
iv-2udaavw4l02thdv8lcrl:6810:6810 [2] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
iv-2udaavw4l02thdv8lcrl:6810:6810 [2] NCCL INFO P2P plugin IBext
iv-2udaavw4l02thdv8lcrl:6810:6810 [2] NCCL INFO NCCL_IB_PCI_RELAXED_ORDERING set by environment to 1.
iv-2udaavw4l02thdv8lcrl:6812:6812 [4] NCCL INFO Bootstrap : Using eth0:192.168.11.42<0>
iv-2udaavw4l02thdv8lcrl:6808:6808 [0] NCCL INFO NET/IB : Using [0]mlx5_1:1/RoCE ; OOB eth0:192.168.11.42<0>
iv-2udaavw4l02thdv8lcrl:6808:6808 [0] NCCL INFO Using network IBext
iv-2udaavw4l02thdv8lcrl:6812:6812 [4] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
iv-2udaavw4l02thdv8lcrl:6812:6812 [4] NCCL INFO P2P plugin IBext
iv-2udaavw4l02thdv8lcrl:6812:6812 [4] NCCL INFO NCCL_IB_PCI_RELAXED_ORDERING set by environment to 1.
iv-2udaavw4l02thdv8lcrl:6815:6815 [7] NCCL INFO NET/IB : Using [0]mlx5_1:1/RoCE ; OOB eth0:192.168.11.42<0>
iv-2udaavw4l02thdv8lcrl:6815:6815 [7] NCCL INFO Using network IBext
iv-2udaavw4l02thdv8lcrl:6809:6809 [1] NCCL INFO NET/IB : Using [0]mlx5_1:1/RoCE ; OOB eth0:192.168.11.42<0>
iv-2udaavw4l02thdv8lcrl:6809:6809 [1] NCCL INFO Using network IBext
iv-2udaavw4l02thdv8lcrl:6814:6814 [6] NCCL INFO NET/IB : Using [0]mlx5_1:1/RoCE ; OOB eth0:192.168.11.42<0>
iv-2udaavw4l02thdv8lcrl:6814:6814 [6] NCCL INFO Using network IBext
iv-2udaavw4l02thdv8lcrl:6810:6810 [2] NCCL INFO NET/IB : Using [0]mlx5_1:1/RoCE ; OOB eth0:192.168.11.42<0>
iv-2udaavw4l02thdv8lcrl:6810:6810 [2] NCCL INFO Using network IBext
iv-2udaavw4l02thdv8lcrl:6812:6812 [4] NCCL INFO NET/IB : Using [0]mlx5_1:1/RoCE ; OOB eth0:192.168.11.42<0>
iv-2udaavw4l02thdv8lcrl:6812:6812 [4] NCCL INFO Using network IBext
iv-2udaavw4l02thdv8lcrl:6813:6813 [5] NCCL INFO Bootstrap : Using eth0:192.168.11.42<0>
iv-2udaavw4l02thdv8lcrl:6811:6811 [3] NCCL INFO Bootstrap : Using eth0:192.168.11.42<0>
iv-2udaavw4l02thdv8lcrl:6813:6813 [5] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
iv-2udaavw4l02thdv8lcrl:6811:6811 [3] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
iv-2udaavw4l02thdv8lcrl:6813:6813 [5] NCCL INFO P2P plugin IBext
iv-2udaavw4l02thdv8lcrl:6811:6811 [3] NCCL INFO P2P plugin IBext
iv-2udaavw4l02thdv8lcrl:6813:6813 [5] NCCL INFO NCCL_IB_PCI_RELAXED_ORDERING set by environment to 1.
iv-2udaavw4l02thdv8lcrl:6811:6811 [3] NCCL INFO NCCL_IB_PCI_RELAXED_ORDERING set by environment to 1.
iv-2udaavw4l02thdv8lcrl:6813:6813 [5] NCCL INFO NET/IB : Using [0]mlx5_1:1/RoCE ; OOB eth0:192.168.11.42<0>
iv-2udaavw4l02thdv8lcrl:6811:6811 [3] NCCL INFO NET/IB : Using [0]mlx5_1:1/RoCE ; OOB eth0:192.168.11.42<0>
iv-2udaavw4l02thdv8lcrl:6811:6811 [3] NCCL INFO Using network IBext
iv-2udaavw4l02thdv8lcrl:6813:6813 [5] NCCL INFO Using network IBext
iv-2udaavw4l02thdv8lcrl:6815:6995 [7] NCCL INFO NCCL_IB_GID_INDEX set by environment to 3.
iv-2udaavw4l02thdv8lcrl:6809:6997 [1] NCCL INFO NCCL_IB_GID_INDEX set by environment to 3.
iv-2udaavw4l02thdv8lcrl:6808:6991 [0] NCCL INFO NCCL_IB_GID_INDEX set by environment to 3.
iv-2udaavw4l02thdv8lcrl:6812:7000 [4] NCCL INFO NCCL_IB_GID_INDEX set by environment to 3.
iv-2udaavw4l02thdv8lcrl:6811:7004 [3] NCCL INFO NCCL_IB_GID_INDEX set by environment to 3.
iv-2udaavw4l02thdv8lcrl:6814:6998 [6] NCCL INFO NCCL_IB_GID_INDEX set by environment to 3.
iv-2udaavw4l02thdv8lcrl:6813:7003 [5] NCCL INFO NCCL_IB_GID_INDEX set by environment to 3.
iv-2udaavw4l02thdv8lcrl:6810:6999 [2] NCCL INFO NCCL_IB_GID_INDEX set by environment to 3.
iv-2udaavw4l02thdv8lcrl:6809:6997 [1] NCCL INFO NCCL_IB_TIMEOUT set by environment to 23.
iv-2udaavw4l02thdv8lcrl:6809:6997 [1] NCCL INFO NCCL_IB_RETRY_CNT set by environment to 7.
iv-2udaavw4l02thdv8lcrl:6815:6995 [7] NCCL INFO NCCL_IB_TIMEOUT set by environment to 23.
iv-2udaavw4l02thdv8lcrl:6815:6995 [7] NCCL INFO NCCL_IB_RETRY_CNT set by environment to 7.
iv-2udaavw4l02thdv8lcrl:6808:6991 [0] NCCL INFO NCCL_IB_TIMEOUT set by environment to 23.
iv-2udaavw4l02thdv8lcrl:6812:7000 [4] NCCL INFO NCCL_IB_TIMEOUT set by environment to 23.
iv-2udaavw4l02thdv8lcrl:6808:6991 [0] NCCL INFO NCCL_IB_RETRY_CNT set by environment to 7.
iv-2udaavw4l02thdv8lcrl:6812:7000 [4] NCCL INFO NCCL_IB_RETRY_CNT set by environment to 7.
iv-2udaavw4l02thdv8lcrl:6813:7003 [5] NCCL INFO NCCL_IB_TIMEOUT set by environment to 23.
iv-2udaavw4l02thdv8lcrl:6814:6998 [6] NCCL INFO NCCL_IB_TIMEOUT set by environment to 23.
iv-2udaavw4l02thdv8lcrl:6813:7003 [5] NCCL INFO NCCL_IB_RETRY_CNT set by environment to 7.
iv-2udaavw4l02thdv8lcrl:6814:6998 [6] NCCL INFO NCCL_IB_RETRY_CNT set by environment to 7.
iv-2udaavw4l02thdv8lcrl:6810:6999 [2] NCCL INFO NCCL_IB_TIMEOUT set by environment to 23.
iv-2udaavw4l02thdv8lcrl:6810:6999 [2] NCCL INFO NCCL_IB_RETRY_CNT set by environment to 7.
iv-2udaavw4l02thdv8lcrl:6811:7004 [3] NCCL INFO NCCL_IB_TIMEOUT set by environment to 23.
iv-2udaavw4l02thdv8lcrl:6811:7004 [3] NCCL INFO NCCL_IB_RETRY_CNT set by environment to 7.
iv-2udaavw4l02thdv8lcrl:6808:6991 [0] NCCL INFO Trees [0] 10/-1/-1->8->15 [1] 10/-1/-1->8->15
iv-2udaavw4l02thdv8lcrl:6809:6997 [1] NCCL INFO Trees [0] 14/-1/-1->9->11 [1] 14/-1/-1->9->11
iv-2udaavw4l02thdv8lcrl:6809:6997 [1] NCCL INFO Setting affinity for GPU 1 to 03ff,ffffffff
iv-2udaavw4l02thdv8lcrl:6808:6991 [0] NCCL INFO Setting affinity for GPU 0 to 03ff,ffffffff
iv-2udaavw4l02thdv8lcrl:6810:6999 [2] NCCL INFO Trees [0] 11/-1/-1->10->8 [1] 11/-1/-1->10->8
iv-2udaavw4l02thdv8lcrl:6811:7004 [3] NCCL INFO Trees [0] 9/-1/-1->11->10 [1] 9/-1/-1->11->10
iv-2udaavw4l02thdv8lcrl:6810:6999 [2] NCCL INFO Setting affinity for GPU 2 to 03ff,ffffffff
iv-2udaavw4l02thdv8lcrl:6811:7004 [3] NCCL INFO Setting affinity for GPU 3 to 03ff,ffffffff
iv-2udaavw4l02thdv8lcrl:6812:7000 [4] NCCL INFO Trees [0] 13/-1/-1->12->4 [1] 13/4/-1->12->-1
iv-2udaavw4l02thdv8lcrl:6812:7000 [4] NCCL INFO Setting affinity for GPU 4 to 0fffff,fffffc00,00000000
iv-2udaavw4l02thdv8lcrl:6815:6995 [7] NCCL INFO Trees [0] 8/-1/-1->15->13 [1] 8/-1/-1->15->13
iv-2udaavw4l02thdv8lcrl:6814:6998 [6] NCCL INFO Trees [0] -1/-1/-1->14->9 [1] -1/-1/-1->14->9
iv-2udaavw4l02thdv8lcrl:6813:7003 [5] NCCL INFO Trees [0] 15/-1/-1->13->12 [1] 15/-1/-1->13->12
iv-2udaavw4l02thdv8lcrl:6815:6995 [7] NCCL INFO Setting affinity for GPU 7 to 0fffff,fffffc00,00000000
iv-2udaavw4l02thdv8lcrl:6814:6998 [6] NCCL INFO Setting affinity for GPU 6 to 0fffff,fffffc00,00000000
iv-2udaavw4l02thdv8lcrl:6813:7003 [5] NCCL INFO Setting affinity for GPU 5 to 0fffff,fffffc00,00000000
iv-2udaavw4l02thdv8lcrl:6808:6991 [0] NCCL INFO Channel 00 : 8[65010] -> 9[65020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6810:6999 [2] NCCL INFO Channel 00 : 10[67010] -> 13[69020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6812:7000 [4] NCCL INFO Channel 00 : 12[69010] -> 14[6b010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6814:6998 [6] NCCL INFO Channel 00 : 14[6b010] -> 15[6b020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6808:6991 [0] NCCL INFO Channel 01 : 8[65010] -> 9[65020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6810:6999 [2] NCCL INFO Channel 01 : 10[67010] -> 13[69020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6812:7000 [4] NCCL INFO Channel 01 : 12[69010] -> 14[6b010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6814:6998 [6] NCCL INFO Channel 01 : 14[6b010] -> 15[6b020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6813:7003 [5] NCCL INFO Channel 00 : 13[69020] -> 4[69010] [send] via NET/IBext/0/GDRDMA
iv-2udaavw4l02thdv8lcrl:6815:6995 [7] NCCL INFO Channel 00 : 15[6b020] -> 8[65010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6809:6997 [1] NCCL INFO Channel 00 : 9[65020] -> 11[67020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6813:7003 [5] NCCL INFO Channel 01 : 13[69020] -> 4[69010] [send] via NET/IBext/0/GDRDMA
iv-2udaavw4l02thdv8lcrl:6815:6995 [7] NCCL INFO Channel 01 : 15[6b020] -> 8[65010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6809:6997 [1] NCCL INFO Channel 01 : 9[65020] -> 11[67020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6814:6998 [6] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6815:6995 [7] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6808:6991 [0] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6809:6997 [1] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6812:7000 [4] NCCL INFO Channel 00 : 5[69020] -> 12[69010] [receive] via NET/IBext/0/GDRDMA
iv-2udaavw4l02thdv8lcrl:6808:6991 [0] NCCL INFO Channel 00 : 8[65010] -> 10[67010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6811:7004 [3] NCCL INFO Channel 00 : 11[67020] -> 10[67010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6809:6997 [1] NCCL INFO Channel 00 : 9[65020] -> 14[6b010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6812:7000 [4] NCCL INFO Channel 01 : 5[69020] -> 12[69010] [receive] via NET/IBext/0/GDRDMA
iv-2udaavw4l02thdv8lcrl:6808:6991 [0] NCCL INFO Channel 01 : 8[65010] -> 10[67010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6811:7004 [3] NCCL INFO Channel 01 : 11[67020] -> 10[67010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6809:6997 [1] NCCL INFO Channel 01 : 9[65020] -> 14[6b010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6811:7004 [3] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6810:6999 [2] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6810:6999 [2] NCCL INFO Channel 00 : 10[67010] -> 11[67020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6814:6998 [6] NCCL INFO Channel 00 : 14[6b010] -> 9[65020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6810:6999 [2] NCCL INFO Channel 01 : 10[67010] -> 11[67020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6814:6998 [6] NCCL INFO Channel 01 : 14[6b010] -> 9[65020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6813:7003 [5] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6814:6998 [6] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6814:6998 [6] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6814:6998 [6] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6812:7000 [4] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6811:7004 [3] NCCL INFO Channel 00 : 11[67020] -> 9[65020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6812:7000 [4] NCCL INFO Channel 00 : 12[69010] -> 13[69020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6811:7004 [3] NCCL INFO Channel 01 : 11[67020] -> 9[65020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6812:7000 [4] NCCL INFO Channel 01 : 12[69010] -> 13[69020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6808:6991 [0] NCCL INFO Channel 00 : 8[65010] -> 15[6b020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6810:6999 [2] NCCL INFO Channel 00 : 10[67010] -> 8[65010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6812:7000 [4] NCCL INFO Channel 00 : 4[69010] -> 12[69010] [receive] via NET/IBext/0/GDRDMA
iv-2udaavw4l02thdv8lcrl:6813:7003 [5] NCCL INFO Channel 00 : 13[69020] -> 15[6b020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6811:7004 [3] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6811:7004 [3] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6811:7004 [3] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6809:6997 [1] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6809:6997 [1] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6809:6997 [1] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6808:6991 [0] NCCL INFO Channel 01 : 8[65010] -> 15[6b020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6810:6999 [2] NCCL INFO Channel 01 : 10[67010] -> 8[65010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6813:7003 [5] NCCL INFO Channel 01 : 13[69020] -> 15[6b020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6809:6997 [1] NCCL INFO Channel 01 : 9[65020] -> 12[69010] via P2P/indirect/14[6b010]
iv-2udaavw4l02thdv8lcrl:6812:7000 [4] NCCL INFO Channel 01 : 4[69010] -> 12[69010] [receive] via NET/IBext/0/GDRDMA
iv-2udaavw4l02thdv8lcrl:6811:7004 [3] NCCL INFO Channel 00 : 11[67020] -> 13[69020] via P2P/indirect/12[69010]
iv-2udaavw4l02thdv8lcrl:6812:7000 [4] NCCL INFO Channel 00 : 12[69010] -> 4[69010] [send] via NET/IBext/0/GDRDMA
iv-2udaavw4l02thdv8lcrl:6812:7000 [4] NCCL INFO Channel 01 : 12[69010] -> 4[69010] [send] via NET/IBext/0/GDRDMA
iv-2udaavw4l02thdv8lcrl:6815:6995 [7] NCCL INFO Channel 00 : 15[6b020] -> 13[69020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6815:6995 [7] NCCL INFO Channel 01 : 15[6b020] -> 13[69020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6810:6999 [2] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6810:6999 [2] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6810:6999 [2] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6808:6991 [0] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6808:6991 [0] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6808:6991 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6815:6995 [7] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6815:6995 [7] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6815:6995 [7] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6808:6991 [0] NCCL INFO Channel 00 : 8[65010] -> 12[69010] via P2P/indirect/11[67020]
iv-2udaavw4l02thdv8lcrl:6810:6999 [2] NCCL INFO Channel 00 : 10[67010] -> 12[69010] via P2P/indirect/13[69020]
iv-2udaavw4l02thdv8lcrl:6813:7003 [5] NCCL INFO Channel 00 : 13[69020] -> 12[69010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6813:7003 [5] NCCL INFO Channel 01 : 13[69020] -> 12[69010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6813:7003 [5] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6813:7003 [5] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6813:7003 [5] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6812:7000 [4] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6812:7000 [4] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6812:7000 [4] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6811:7004 [3] NCCL INFO Channel 01 : 11[67020] -> 14[6b010] via P2P/indirect/9[65020]
iv-2udaavw4l02thdv8lcrl:6810:6999 [2] NCCL INFO Channel 00 : 10[67010] -> 14[6b010] via P2P/indirect/9[65020]
iv-2udaavw4l02thdv8lcrl:6811:7004 [3] NCCL INFO Channel 00 : 11[67020] -> 15[6b020] via P2P/indirect/8[65010]
iv-2udaavw4l02thdv8lcrl:6809:6997 [1] NCCL INFO Channel 00 : 9[65020] -> 13[69020] via P2P/indirect/14[6b010]
iv-2udaavw4l02thdv8lcrl:6810:6999 [2] NCCL INFO Channel 01 : 10[67010] -> 15[6b020] via P2P/indirect/8[65010]
iv-2udaavw4l02thdv8lcrl:6809:6997 [1] NCCL INFO Channel 00 : 9[65020] -> 15[6b020] via P2P/indirect/8[65010]
iv-2udaavw4l02thdv8lcrl:6812:7000 [4] NCCL INFO Channel 00 : 12[69010] -> 8[65010] via P2P/indirect/15[6b020]
iv-2udaavw4l02thdv8lcrl:6808:6991 [0] NCCL INFO Channel 01 : 8[65010] -> 13[69020] via P2P/indirect/15[6b020]
iv-2udaavw4l02thdv8lcrl:6808:6991 [0] NCCL INFO Channel 00 : 8[65010] -> 14[6b010] via P2P/indirect/9[65020]
iv-2udaavw4l02thdv8lcrl:6813:7003 [5] NCCL INFO Channel 01 : 13[69020] -> 8[65010] via P2P/indirect/15[6b020]
iv-2udaavw4l02thdv8lcrl:6815:6995 [7] NCCL INFO Channel 00 : 15[6b020] -> 9[65020] via P2P/indirect/14[6b010]
iv-2udaavw4l02thdv8lcrl:6814:6998 [6] NCCL INFO Channel 00 : 14[6b010] -> 8[65010] via P2P/indirect/15[6b020]
iv-2udaavw4l02thdv8lcrl:6815:6995 [7] NCCL INFO Channel 01 : 15[6b020] -> 10[67010] via P2P/indirect/8[65010]
iv-2udaavw4l02thdv8lcrl:6814:6998 [6] NCCL INFO Channel 00 : 14[6b010] -> 10[67010] via P2P/indirect/13[69020]
iv-2udaavw4l02thdv8lcrl:6815:6995 [7] NCCL INFO Channel 00 : 15[6b020] -> 11[67020] via P2P/indirect/8[65010]
iv-2udaavw4l02thdv8lcrl:6814:6998 [6] NCCL INFO Channel 01 : 14[6b010] -> 11[67020] via P2P/indirect/9[65020]
iv-2udaavw4l02thdv8lcrl:6813:7003 [5] NCCL INFO Channel 00 : 13[69020] -> 9[65020] via P2P/indirect/14[6b010]
iv-2udaavw4l02thdv8lcrl:6812:7000 [4] NCCL INFO Channel 01 : 12[69010] -> 9[65020] via P2P/indirect/14[6b010]
iv-2udaavw4l02thdv8lcrl:6813:7003 [5] NCCL INFO Channel 00 : 13[69020] -> 11[67020] via P2P/indirect/10[67010]
iv-2udaavw4l02thdv8lcrl:6812:7000 [4] NCCL INFO Channel 00 : 12[69010] -> 10[67010] via P2P/indirect/11[67020]
iv-2udaavw4l02thdv8lcrl:6813:7003 [5] NCCL INFO comm 0x7eff08008fb0 rank 13 nranks 16 cudaDev 5 busId 69020 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6814:6998 [6] NCCL INFO comm 0x7f9158008fb0 rank 14 nranks 16 cudaDev 6 busId 6b010 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6811:7004 [3] NCCL INFO comm 0x7f47a4008fb0 rank 11 nranks 16 cudaDev 3 busId 67020 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6812:7000 [4] NCCL INFO comm 0x7f7244008fb0 rank 12 nranks 16 cudaDev 4 busId 69010 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6809:6997 [1] NCCL INFO comm 0x7fa350008fb0 rank 9 nranks 16 cudaDev 1 busId 65020 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6808:6991 [0] NCCL INFO comm 0x7fc850008fb0 rank 8 nranks 16 cudaDev 0 busId 65010 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6815:6995 [7] NCCL INFO comm 0x7f49d0008fb0 rank 15 nranks 16 cudaDev 7 busId 6b020 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6810:6999 [2] NCCL INFO comm 0x7fdb90008fb0 rank 10 nranks 16 cudaDev 2 busId 67010 - Init COMPLETE
 > number of parameters on (tensor, pipeline) model parallel rank (1, 2): 37807104
 > number of parameters on (tensor, pipeline) model parallel rank (0, 2): 37807104
iv-2udaavw4l02thdv8lcrl:6812:7067 [4] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1
iv-2udaavw4l02thdv8lcrl:6812:7067 [4] NCCL INFO Setting affinity for GPU 4 to 0fffff,fffffc00,00000000
iv-2udaavw4l02thdv8lcrl:6815:7064 [7] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1
iv-2udaavw4l02thdv8lcrl:6815:7064 [7] NCCL INFO Setting affinity for GPU 7 to 0fffff,fffffc00,00000000
iv-2udaavw4l02thdv8lcrl:6814:7066 [6] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1
iv-2udaavw4l02thdv8lcrl:6814:7066 [6] NCCL INFO Setting affinity for GPU 6 to 0fffff,fffffc00,00000000
iv-2udaavw4l02thdv8lcrl:6813:7070 [5] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1
iv-2udaavw4l02thdv8lcrl:6813:7070 [5] NCCL INFO Setting affinity for GPU 5 to 0fffff,fffffc00,00000000
iv-2udaavw4l02thdv8lcrl:6812:7067 [4] NCCL INFO Channel 00 : 0[65010] -> 1[69010] [receive] via NET/IBext/0/GDRDMA
iv-2udaavw4l02thdv8lcrl:6812:7067 [4] NCCL INFO Channel 01 : 0[65010] -> 1[69010] [receive] via NET/IBext/0/GDRDMA
iv-2udaavw4l02thdv8lcrl:6812:7067 [4] NCCL INFO Channel 00 : 1[69010] -> 0[65010] [send] via NET/IBext/0/GDRDMA
iv-2udaavw4l02thdv8lcrl:6813:7070 [5] NCCL INFO Channel 00 : 0[65020] -> 1[69020] [receive] via NET/IBext/0/GDRDMA
iv-2udaavw4l02thdv8lcrl:6813:7070 [5] NCCL INFO Channel 01 : 0[65020] -> 1[69020] [receive] via NET/IBext/0/GDRDMA
iv-2udaavw4l02thdv8lcrl:6813:7070 [5] NCCL INFO Channel 00 : 1[69020] -> 0[65020] [send] via NET/IBext/0/GDRDMA
iv-2udaavw4l02thdv8lcrl:6812:7067 [4] NCCL INFO Channel 01 : 1[69010] -> 0[65010] [send] via NET/IBext/0/GDRDMA
iv-2udaavw4l02thdv8lcrl:6813:7070 [5] NCCL INFO Channel 01 : 1[69020] -> 0[65020] [send] via NET/IBext/0/GDRDMA
iv-2udaavw4l02thdv8lcrl:6815:7064 [7] NCCL INFO Channel 00 : 0[67020] -> 1[6b020] [receive] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6814:7066 [6] NCCL INFO Channel 00 : 0[67010] -> 1[6b010] [receive] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6815:7064 [7] NCCL INFO Channel 01 : 0[67020] -> 1[6b020] [receive] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6814:7066 [6] NCCL INFO Channel 01 : 0[67010] -> 1[6b010] [receive] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6815:7064 [7] NCCL INFO Channel 00 : 1[6b020] -> 0[67020] [send] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6814:7066 [6] NCCL INFO Channel 00 : 1[6b010] -> 0[67010] [send] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6815:7064 [7] NCCL INFO Channel 01 : 1[6b020] -> 0[67020] [send] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6814:7066 [6] NCCL INFO Channel 01 : 1[6b010] -> 0[67010] [send] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6815:7064 [7] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6815:7064 [7] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6815:7064 [7] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6815:7064 [7] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6815:7064 [7] NCCL INFO comm 0x7f49ac008fb0 rank 1 nranks 2 cudaDev 7 busId 6b020 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6814:7066 [6] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6814:7066 [6] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6814:7066 [6] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6814:7066 [6] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6814:7066 [6] NCCL INFO comm 0x7f912c008fb0 rank 1 nranks 2 cudaDev 6 busId 6b010 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6813:7070 [5] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6813:7070 [5] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6813:7070 [5] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6813:7070 [5] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6813:7070 [5] NCCL INFO comm 0x7efee4008fb0 rank 1 nranks 2 cudaDev 5 busId 69020 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6812:7067 [4] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6812:7067 [4] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6812:7067 [4] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6812:7067 [4] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6812:7067 [4] NCCL INFO comm 0x7f7228008fb0 rank 1 nranks 2 cudaDev 4 busId 69010 - Init COMPLETE
 > number of parameters on (tensor, pipeline) model parallel rank (1, 3): 50802050
 > number of parameters on (tensor, pipeline) model parallel rank (0, 3): 50802050
NCCL version 2.10.3+cuda11.4
NCCL version 2.10.3+cuda11.4
iv-2udaavw4l02thdv8lcrl:6814:7080 [6] NCCL INFO Trees [0] 0/-1/-1->1->-1 [1] 0/-1/-1->1->-1 [2] 0/-1/-1->1->-1 [3] 0/-1/-1->1->-1
iv-2udaavw4l02thdv8lcrl:6812:7079 [4] NCCL INFO Channel 00/04 : 0 1
iv-2udaavw4l02thdv8lcrl:6812:7079 [4] NCCL INFO Channel 01/04 : 0 1
iv-2udaavw4l02thdv8lcrl:6814:7080 [6] NCCL INFO Setting affinity for GPU 6 to 0fffff,fffffc00,00000000
iv-2udaavw4l02thdv8lcrl:6812:7079 [4] NCCL INFO Channel 02/04 : 0 1
iv-2udaavw4l02thdv8lcrl:6812:7079 [4] NCCL INFO Channel 03/04 : 0 1
iv-2udaavw4l02thdv8lcrl:6812:7079 [4] NCCL INFO Trees [0] -1/-1/-1->0->1 [1] -1/-1/-1->0->1 [2] -1/-1/-1->0->1 [3] -1/-1/-1->0->1
iv-2udaavw4l02thdv8lcrl:6812:7079 [4] NCCL INFO Setting affinity for GPU 4 to 0fffff,fffffc00,00000000
iv-2udaavw4l02thdv8lcrl:6808:7077 [0] NCCL INFO Channel 00/04 : 0 1
iv-2udaavw4l02thdv8lcrl:6810:7081 [2] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0 [2] -1/-1/-1->1->0 [3] -1/-1/-1->1->0
iv-2udaavw4l02thdv8lcrl:6808:7077 [0] NCCL INFO Channel 01/04 : 0 1
iv-2udaavw4l02thdv8lcrl:6810:7081 [2] NCCL INFO Setting affinity for GPU 2 to 03ff,ffffffff
iv-2udaavw4l02thdv8lcrl:6808:7077 [0] NCCL INFO Channel 02/04 : 0 1
iv-2udaavw4l02thdv8lcrl:6808:7077 [0] NCCL INFO Channel 03/04 : 0 1
iv-2udaavw4l02thdv8lcrl:6808:7077 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1 [2] 1/-1/-1->0->-1 [3] 1/-1/-1->0->-1
iv-2udaavw4l02thdv8lcrl:6808:7077 [0] NCCL INFO Setting affinity for GPU 0 to 03ff,ffffffff
iv-2udaavw4l02thdv8lcrl:6814:7080 [6] NCCL INFO Channel 00 : 1[6b010] -> 0[69010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6812:7079 [4] NCCL INFO Channel 00 : 0[69010] -> 1[6b010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6814:7080 [6] NCCL INFO Channel 01 : 1[6b010] -> 0[69010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6812:7079 [4] NCCL INFO Channel 01 : 0[69010] -> 1[6b010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6814:7080 [6] NCCL INFO Channel 02 : 1[6b010] -> 0[69010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6812:7079 [4] NCCL INFO Channel 02 : 0[69010] -> 1[6b010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6814:7080 [6] NCCL INFO Channel 03 : 1[6b010] -> 0[69010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6812:7079 [4] NCCL INFO Channel 03 : 0[69010] -> 1[6b010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6810:7081 [2] NCCL INFO Channel 00 : 1[67010] -> 0[65010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6808:7077 [0] NCCL INFO Channel 00 : 0[65010] -> 1[67010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6810:7081 [2] NCCL INFO Channel 01 : 1[67010] -> 0[65010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6808:7077 [0] NCCL INFO Channel 01 : 0[65010] -> 1[67010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6810:7081 [2] NCCL INFO Channel 02 : 1[67010] -> 0[65010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6814:7080 [6] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6814:7080 [6] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6814:7080 [6] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6814:7080 [6] NCCL INFO 4 coll channels, 4 p2p channels, 4 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6808:7077 [0] NCCL INFO Channel 02 : 0[65010] -> 1[67010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6812:7079 [4] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6812:7079 [4] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6812:7079 [4] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6812:7079 [4] NCCL INFO 4 coll channels, 4 p2p channels, 4 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6810:7081 [2] NCCL INFO Channel 03 : 1[67010] -> 0[65010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6814:7080 [6] NCCL INFO comm 0x7f912c0d3010 rank 1 nranks 2 cudaDev 6 busId 6b010 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6812:7079 [4] NCCL INFO comm 0x7f71f8008fb0 rank 0 nranks 2 cudaDev 4 busId 69010 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6808:7077 [0] NCCL INFO Channel 03 : 0[65010] -> 1[67010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6812:6812 [4] NCCL INFO Launch mode Parallel
iv-2udaavw4l02thdv8lcrl:6810:7081 [2] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6810:7081 [2] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6810:7081 [2] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6810:7081 [2] NCCL INFO 4 coll channels, 4 p2p channels, 4 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6808:7077 [0] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6808:7077 [0] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6808:7077 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6808:7077 [0] NCCL INFO 4 coll channels, 4 p2p channels, 4 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6810:7081 [2] NCCL INFO comm 0x7fdb54008fb0 rank 1 nranks 2 cudaDev 2 busId 67010 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6808:7077 [0] NCCL INFO comm 0x7fc81c008fb0 rank 0 nranks 2 cudaDev 0 busId 65010 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6808:6808 [0] NCCL INFO Launch mode Parallel
iv-2udaavw4l02thdv8lcrl:6810:7093 [2] NCCL INFO Trees [0] -1/-1/-1->2->3 [1] -1/-1/-1->2->3
iv-2udaavw4l02thdv8lcrl:6814:7092 [6] NCCL INFO Trees [0] 2/-1/-1->3->1 [1] 2/1/-1->3->-1
iv-2udaavw4l02thdv8lcrl:6810:7093 [2] NCCL INFO Setting affinity for GPU 2 to 03ff,ffffffff
iv-2udaavw4l02thdv8lcrl:6814:7092 [6] NCCL INFO Setting affinity for GPU 6 to 0fffff,fffffc00,00000000
iv-2udaavw4l02thdv8lcrl:6808:7090 [0] NCCL INFO Trees [0] -1/-1/-1->2->3 [1] -1/-1/-1->2->3
iv-2udaavw4l02thdv8lcrl:6808:7090 [0] NCCL INFO Setting affinity for GPU 0 to 03ff,ffffffff
iv-2udaavw4l02thdv8lcrl:6812:7091 [4] NCCL INFO Trees [0] 2/-1/-1->3->1 [1] 2/1/-1->3->-1
iv-2udaavw4l02thdv8lcrl:6812:7091 [4] NCCL INFO Setting affinity for GPU 4 to 0fffff,fffffc00,00000000
iv-2udaavw4l02thdv8lcrl:6810:7093 [2] NCCL INFO Channel 00 : 1[6b010] -> 2[67010] [receive] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6808:7090 [0] NCCL INFO Channel 00 : 1[69010] -> 2[65010] [receive] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6810:7093 [2] NCCL INFO Channel 01 : 1[6b010] -> 2[67010] [receive] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6810:7093 [2] NCCL INFO Channel 00 : 2[67010] -> 3[6b010] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6810:7093 [2] NCCL INFO Channel 01 : 2[67010] -> 3[6b010] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6808:7090 [0] NCCL INFO Channel
01 : 1[69010] -> 2[65010] [receive] via NET/IBext/0 iv-2udaavw4l02thdv8lcrl:6808:7090 [0] NCCL INFO Channel 00 : 2[65010] -> 3[69010] via direct shared memory iv-2udaavw4l02thdv8lcrl:6808:7090 [0] NCCL INFO Channel 01 : 2[65010] -> 3[69010] via direct shared memory iv-2udaavw4l02thdv8lcrl:6814:7092 [6] NCCL INFO Channel 00 : 3[6b010] -> 0[67010] [send] via NET/IBext/0 iv-2udaavw4l02thdv8lcrl:6814:7092 [6] NCCL INFO Channel 01 : 3[6b010] -> 0[67010] [send] via NET/IBext/0 iv-2udaavw4l02thdv8lcrl:6812:7091 [4] NCCL INFO Channel 00 : 3[69010] -> 0[65010] [send] via NET/IBext/0 iv-2udaavw4l02thdv8lcrl:6814:7092 [6] NCCL INFO Connected all rings iv-2udaavw4l02thdv8lcrl:6812:7091 [4] NCCL INFO Channel 01 : 3[69010] -> 0[65010] [send] via NET/IBext/0 iv-2udaavw4l02thdv8lcrl:6814:7092 [6] NCCL INFO Channel 00 : 1[6b010] -> 3[6b010] [receive] via NET/IBext/0 iv-2udaavw4l02thdv8lcrl:6810:7093 [2] NCCL INFO Connected all rings iv-2udaavw4l02thdv8lcrl:6814:7092 [6] NCCL INFO Channel 01 : 1[6b010] -> 3[6b010] [receive] via NET/IBext/0 iv-2udaavw4l02thdv8lcrl:6812:7091 [4] NCCL INFO Connected all rings iv-2udaavw4l02thdv8lcrl:6812:7091 [4] NCCL INFO Channel 00 : 1[69010] -> 3[69010] [receive] via NET/IBext/0/GDRDMA iv-2udaavw4l02thdv8lcrl:6812:7091 [4] NCCL INFO Channel 01 : 1[69010] -> 3[69010] [receive] via NET/IBext/0/GDRDMA iv-2udaavw4l02thdv8lcrl:6814:7092 [6] NCCL INFO Channel 00 : 3[6b010] -> 1[6b010] [send] via NET/IBext/0 iv-2udaavw4l02thdv8lcrl:6808:7090 [0] NCCL INFO Connected all rings iv-2udaavw4l02thdv8lcrl:6812:7091 [4] NCCL INFO Channel 00 : 3[69010] -> 1[69010] [send] via NET/IBext/0 iv-2udaavw4l02thdv8lcrl:6814:7092 [6] NCCL INFO Channel 01 : 3[6b010] -> 1[6b010] [send] via NET/IBext/0 iv-2udaavw4l02thdv8lcrl:6812:7091 [4] NCCL INFO Channel 01 : 3[69010] -> 1[69010] [send] via NET/IBext/0 iv-2udaavw4l02thdv8lcrl:6814:7092 [6] NCCL INFO Channel 00 : 3[6b010] -> 2[67010] via direct shared memory iv-2udaavw4l02thdv8lcrl:6814:7092 [6] NCCL INFO Channel 01 : 
3[6b010] -> 2[67010] via direct shared memory iv-2udaavw4l02thdv8lcrl:6810:7093 [2] NCCL INFO Connected all trees iv-2udaavw4l02thdv8lcrl:6810:7093 [2] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/512 iv-2udaavw4l02thdv8lcrl:6810:7093 [2] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-2udaavw4l02thdv8lcrl:6814:7092 [6] NCCL INFO Connected all trees iv-2udaavw4l02thdv8lcrl:6814:7092 [6] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/512 iv-2udaavw4l02thdv8lcrl:6814:7092 [6] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-2udaavw4l02thdv8lcrl:6814:7092 [6] NCCL INFO comm 0x7f912c2b53a0 rank 3 nranks 4 cudaDev 6 busId 6b010 - Init COMPLETE iv-2udaavw4l02thdv8lcrl:6810:7093 [2] NCCL INFO comm 0x7fdb540de4b0 rank 2 nranks 4 cudaDev 2 busId 67010 - Init COMPLETE iv-2udaavw4l02thdv8lcrl:6812:7091 [4] NCCL INFO Channel 00 : 3[69010] -> 2[65010] via direct shared memory iv-2udaavw4l02thdv8lcrl:6812:7091 [4] NCCL INFO Channel 01 : 3[69010] -> 2[65010] via direct shared memory iv-2udaavw4l02thdv8lcrl:6808:7090 [0] NCCL INFO Connected all trees iv-2udaavw4l02thdv8lcrl:6808:7090 [0] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/512 iv-2udaavw4l02thdv8lcrl:6808:7090 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-2udaavw4l02thdv8lcrl:6812:7091 [4] NCCL INFO Connected all trees iv-2udaavw4l02thdv8lcrl:6812:7091 [4] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/512 iv-2udaavw4l02thdv8lcrl:6812:7091 [4] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-2udaavw4l02thdv8lcrl:6808:7090 [0] NCCL INFO comm 0x7fc81c0de690 rank 2 nranks 4 cudaDev 0 busId 65010 - Init COMPLETE iv-2udaavw4l02thdv8lcrl:6812:7091 [4] NCCL INFO comm 0x7f71f80ad5d0 rank 3 nranks 4 cudaDev 4 busId 69010 - Init COMPLETE NCCL version 2.10.3+cuda11.4 NCCL version 2.10.3+cuda11.4 iv-2udaavw4l02thdv8lcrl:6809:7107 [1] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0 iv-2udaavw4l02thdv8lcrl:6808:7105 [0] 
NCCL INFO Channel 00/02 : 0 1 iv-2udaavw4l02thdv8lcrl:6809:7107 [1] NCCL INFO Setting affinity for GPU 1 to 03ff,ffffffff iv-2udaavw4l02thdv8lcrl:6808:7105 [0] NCCL INFO Channel 01/02 : 0 1 iv-2udaavw4l02thdv8lcrl:6808:7105 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1 iv-2udaavw4l02thdv8lcrl:6808:7105 [0] NCCL INFO Setting affinity for GPU 0 to 03ff,ffffffff iv-2udaavw4l02thdv8lcrl:6810:7109 [2] NCCL INFO Channel 00/04 : 0 1 iv-2udaavw4l02thdv8lcrl:6811:7111 [3] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0 [2] -1/-1/-1->1->0 [3] -1/-1/-1->1->0 iv-2udaavw4l02thdv8lcrl:6810:7109 [2] NCCL INFO Channel 01/04 : 0 1 iv-2udaavw4l02thdv8lcrl:6811:7111 [3] NCCL INFO Setting affinity for GPU 3 to 03ff,ffffffff iv-2udaavw4l02thdv8lcrl:6810:7109 [2] NCCL INFO Channel 02/04 : 0 1 iv-2udaavw4l02thdv8lcrl:6810:7109 [2] NCCL INFO Channel 03/04 : 0 1 iv-2udaavw4l02thdv8lcrl:6810:7109 [2] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1 [2] 1/-1/-1->0->-1 [3] 1/-1/-1->0->-1 iv-2udaavw4l02thdv8lcrl:6810:7109 [2] NCCL INFO Setting affinity for GPU 2 to 03ff,ffffffff iv-2udaavw4l02thdv8lcrl:6813:7115 [5] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0 [2] -1/-1/-1->1->0 [3] -1/-1/-1->1->0 iv-2udaavw4l02thdv8lcrl:6812:7112 [4] NCCL INFO Channel 00/04 : 0 1 iv-2udaavw4l02thdv8lcrl:6813:7115 [5] NCCL INFO Setting affinity for GPU 5 to 0fffff,fffffc00,00000000 iv-2udaavw4l02thdv8lcrl:6812:7112 [4] NCCL INFO Channel 01/04 : 0 1 iv-2udaavw4l02thdv8lcrl:6812:7112 [4] NCCL INFO Channel 02/04 : 0 1 iv-2udaavw4l02thdv8lcrl:6812:7112 [4] NCCL INFO Channel 03/04 : 0 1 iv-2udaavw4l02thdv8lcrl:6812:7112 [4] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1 [2] 1/-1/-1->0->-1 [3] 1/-1/-1->0->-1 iv-2udaavw4l02thdv8lcrl:6812:7112 [4] NCCL INFO Setting affinity for GPU 4 to 0fffff,fffffc00,00000000 iv-2udaavw4l02thdv8lcrl:6809:7107 [1] NCCL INFO Channel 00 : 1[65020] -> 0[65010] via P2P/IPC iv-2udaavw4l02thdv8lcrl:6808:7105 [0] NCCL INFO Channel 00 : 0[65010] -> 
1[65020] via P2P/IPC iv-2udaavw4l02thdv8lcrl:6815:7106 [7] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0 iv-2udaavw4l02thdv8lcrl:6815:7106 [7] NCCL INFO Setting affinity for GPU 7 to 0fffff,fffffc00,00000000 iv-2udaavw4l02thdv8lcrl:6814:7103 [6] NCCL INFO Channel 00/02 : 0 1 iv-2udaavw4l02thdv8lcrl:6814:7103 [6] NCCL INFO Channel 01/02 : 0 1 iv-2udaavw4l02thdv8lcrl:6814:7103 [6] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1 iv-2udaavw4l02thdv8lcrl:6814:7103 [6] NCCL INFO Setting affinity for GPU 6 to 0fffff,fffffc00,00000000 iv-2udaavw4l02thdv8lcrl:6809:7107 [1] NCCL INFO Channel 01 : 1[65020] -> 0[65010] via P2P/IPC iv-2udaavw4l02thdv8lcrl:6808:7105 [0] NCCL INFO Channel 01 : 0[65010] -> 1[65020] via P2P/IPC iv-2udaavw4l02thdv8lcrl:6809:7107 [1] NCCL INFO Connected all rings iv-2udaavw4l02thdv8lcrl:6809:7107 [1] NCCL INFO Connected all trees iv-2udaavw4l02thdv8lcrl:6809:7107 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-2udaavw4l02thdv8lcrl:6809:7107 [1] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer iv-2udaavw4l02thdv8lcrl:6808:7105 [0] NCCL INFO Connected all rings iv-2udaavw4l02thdv8lcrl:6808:7105 [0] NCCL INFO Connected all trees iv-2udaavw4l02thdv8lcrl:6808:7105 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-2udaavw4l02thdv8lcrl:6808:7105 [0] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer iv-2udaavw4l02thdv8lcrl:6808:7105 [0] NCCL INFO comm 0x7fc814008fb0 rank 0 nranks 2 cudaDev 0 busId 65010 - Init COMPLETE iv-2udaavw4l02thdv8lcrl:6809:7107 [1] NCCL INFO comm 0x7fa31c008fb0 rank 1 nranks 2 cudaDev 1 busId 65020 - Init COMPLETE iv-2udaavw4l02thdv8lcrl:6808:6808 [0] NCCL INFO Launch mode Parallel iv-2udaavw4l02thdv8lcrl:6811:7111 [3] NCCL INFO Channel 00 : 1[67020] -> 0[67010] via P2P/IPC iv-2udaavw4l02thdv8lcrl:6810:7109 [2] NCCL INFO Channel 00 : 0[67010] -> 1[67020] via P2P/IPC iv-2udaavw4l02thdv8lcrl:6815:7106 [7] NCCL INFO Channel 00 : 1[6b020] -> 0[6b010] via P2P/IPC 
iv-2udaavw4l02thdv8lcrl:6813:7115 [5] NCCL INFO Channel 00 : 1[69020] -> 0[69010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6814:7103 [6] NCCL INFO Channel 00 : 0[6b010] -> 1[6b020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6812:7112 [4] NCCL INFO Channel 00 : 0[69010] -> 1[69020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6811:7111 [3] NCCL INFO Channel 01 : 1[67020] -> 0[67010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6810:7109 [2] NCCL INFO Channel 01 : 0[67010] -> 1[67020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6815:7106 [7] NCCL INFO Channel 01 : 1[6b020] -> 0[6b010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6813:7115 [5] NCCL INFO Channel 01 : 1[69020] -> 0[69010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6811:7111 [3] NCCL INFO Channel 02 : 1[67020] -> 0[67010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6814:7103 [6] NCCL INFO Channel 01 : 0[6b010] -> 1[6b020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6812:7112 [4] NCCL INFO Channel 01 : 0[69010] -> 1[69020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6810:7109 [2] NCCL INFO Channel 02 : 0[67010] -> 1[67020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6813:7115 [5] NCCL INFO Channel 02 : 1[69020] -> 0[69010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6811:7111 [3] NCCL INFO Channel 03 : 1[67020] -> 0[67010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6812:7112 [4] NCCL INFO Channel 02 : 0[69010] -> 1[69020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6810:7109 [2] NCCL INFO Channel 03 : 0[67010] -> 1[67020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6813:7115 [5] NCCL INFO Channel 03 : 1[69020] -> 0[69010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6812:7112 [4] NCCL INFO Channel 03 : 0[69010] -> 1[69020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6815:7106 [7] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6815:7106 [7] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6815:7106 [7] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6815:7106 [7] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6814:7103 [6] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6814:7103 [6] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6814:7103 [6] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6814:7103 [6] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6814:7103 [6] NCCL INFO comm 0x7f90f0008fb0 rank 0 nranks 2 cudaDev 6 busId 6b010 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6815:7106 [7] NCCL INFO comm 0x7f4984008fb0 rank 1 nranks 2 cudaDev 7 busId 6b020 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6814:6814 [6] NCCL INFO Launch mode Parallel
iv-2udaavw4l02thdv8lcrl:6810:7109 [2] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6810:7109 [2] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6810:7109 [2] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6810:7109 [2] NCCL INFO 4 coll channels, 4 p2p channels, 4 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6811:7111 [3] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6811:7111 [3] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6811:7111 [3] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6811:7111 [3] NCCL INFO 4 coll channels, 4 p2p channels, 4 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6810:7109 [2] NCCL INFO comm 0x7fdb44008fb0 rank 0 nranks 2 cudaDev 2 busId 67010 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6811:7111 [3] NCCL INFO comm 0x7f4780008fb0 rank 1 nranks 2 cudaDev 3 busId 67020 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6810:6810 [2] NCCL INFO Launch mode Parallel
iv-2udaavw4l02thdv8lcrl:6813:7115 [5] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6813:7115 [5] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6813:7115 [5] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6813:7115 [5] NCCL INFO 4 coll channels, 4 p2p channels, 4 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6812:7112 [4] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6812:7112 [4] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6812:7112 [4] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6812:7112 [4] NCCL INFO 4 coll channels, 4 p2p channels, 4 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6813:7115 [5] NCCL INFO comm 0x7efebc008fb0 rank 1 nranks 2 cudaDev 5 busId 69020 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6812:7112 [4] NCCL INFO comm 0x7f71e4008fb0 rank 0 nranks 2 cudaDev 4 busId 69010 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6812:6812 [4] NCCL INFO Launch mode Parallel
[W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
(the preceding warning is repeated 15 more times)
time (ms) | model-and-optimizer-setup: 399.14 | train/valid/test-data-iterators-setup: 936.89
iv-2udaavw4l02thdv8lcrl:6811:8247 [3] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1
iv-2udaavw4l02thdv8lcrl:6811:8247 [3] NCCL INFO Setting affinity for GPU 3 to 03ff,ffffffff
iv-2udaavw4l02thdv8lcrl:6810:8248 [2] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1
iv-2udaavw4l02thdv8lcrl:6810:8248 [2] NCCL INFO Setting affinity for GPU 2 to 03ff,ffffffff
iv-2udaavw4l02thdv8lcrl:6809:8251 [1] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1
iv-2udaavw4l02thdv8lcrl:6809:8251 [1] NCCL INFO Setting affinity for GPU 1 to 03ff,ffffffff
iv-2udaavw4l02thdv8lcrl:6808:8252 [0] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1
iv-2udaavw4l02thdv8lcrl:6808:8252 [0] NCCL INFO Setting affinity for GPU 0 to 03ff,ffffffff
iv-2udaavw4l02thdv8lcrl:6811:8247 [3] NCCL INFO Channel 00 : 0[6b020] -> 1[67020] [receive] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6810:8248 [2] NCCL INFO Channel 00 : 0[6b010] -> 1[67010] [receive] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6809:8251 [1] NCCL INFO Channel 00 : 0[69020] -> 1[65020] [receive] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6808:8252 [0] NCCL INFO Channel 00 : 0[69010] -> 1[65010] [receive] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6810:8248 [2] NCCL INFO Channel 01 : 0[6b010] -> 1[67010] [receive] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6811:8247 [3] NCCL INFO Channel 01 : 0[6b020] -> 1[67020] [receive] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6809:8251 [1] NCCL INFO Channel 01 : 0[69020] -> 1[65020] [receive] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6808:8252 [0] NCCL INFO Channel 01 : 0[69010] -> 1[65010] [receive] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6810:8248 [2] NCCL INFO Channel 00 : 1[67010] -> 0[6b010] [send] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6811:8247 [3] NCCL INFO Channel 00 : 1[67020] -> 0[6b020] [send] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6809:8251 [1] NCCL INFO Channel 00 : 1[65020] -> 0[69020] [send] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6808:8252 [0] NCCL INFO Channel 00 : 1[65010] -> 0[69010] [send] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6810:8248 [2] NCCL INFO Channel 01 : 1[67010] -> 0[6b010] [send] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6811:8247 [3] NCCL INFO Channel 01 : 1[67020] -> 0[6b020] [send] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6809:8251 [1] NCCL INFO Channel 01 : 1[65020] -> 0[69020] [send] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6808:8252 [0] NCCL INFO Channel 01 : 1[65010] -> 0[69010] [send] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6810:8248 [2] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6810:8248 [2] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6810:8248 [2] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6810:8248 [2] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6810:8248 [2] NCCL INFO comm 0x7fdb04008fb0 rank 1 nranks 2 cudaDev 2 busId 67010 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6811:8247 [3] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6811:8247 [3] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6811:8247 [3] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6811:8247 [3] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6811:8247 [3] NCCL INFO comm 0x7f477c008fb0 rank 1 nranks 2 cudaDev 3 busId 67020 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6809:8251 [1] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6809:8251 [1] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6809:8251 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6809:8251 [1] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6809:8251 [1] NCCL INFO comm 0x7fa318008fb0 rank 1 nranks 2 cudaDev 1 busId 65020 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6808:8252 [0] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6808:8252 [0] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6808:8252 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6808:8252 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6808:8252 [0] NCCL INFO comm 0x7fc7cc008fb0 rank 1 nranks 2 cudaDev 0 busId 65010 - Init COMPLETE
/dataset/workspace/Megatron-LM/megatron/model/transformer.py:536: UserWarning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (Triggered internally at /opt/pytorch/pytorch/aten/src/ATen/core/LegacyTypeDispatch.h:74.)
  output = bias_dropout_add_func(
(the preceding warning and its source line are repeated 3 more times)
NCCL version 2.10.3+cuda11.4
NCCL version 2.10.3+cuda11.4
iv-2udaavw4l02thdv8lcrl:6810:8263 [2] NCCL INFO Channel 00/02 : 0 1
iv-2udaavw4l02thdv8lcrl:6814:8264 [6] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0
iv-2udaavw4l02thdv8lcrl:6810:8263 [2] NCCL INFO Channel 01/02 : 0 1
iv-2udaavw4l02thdv8lcrl:6814:8264 [6] NCCL INFO Setting affinity for GPU 6 to 0fffff,fffffc00,00000000
iv-2udaavw4l02thdv8lcrl:6810:8263 [2] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1
iv-2udaavw4l02thdv8lcrl:6810:8263 [2] NCCL INFO Setting affinity for GPU 2 to 03ff,ffffffff
iv-2udaavw4l02thdv8lcrl:6811:8260 [3] NCCL INFO Channel 00/02 : 0 1
iv-2udaavw4l02thdv8lcrl:6811:8260 [3] NCCL INFO Channel 01/02 : 0 1
iv-2udaavw4l02thdv8lcrl:6811:8260 [3] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1
iv-2udaavw4l02thdv8lcrl:6815:8262 [7] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0
iv-2udaavw4l02thdv8lcrl:6811:8260 [3] NCCL INFO Setting affinity for GPU 3 to 03ff,ffffffff
iv-2udaavw4l02thdv8lcrl:6815:8262 [7] NCCL INFO Setting affinity for GPU 7 to 0fffff,fffffc00,00000000
iv-2udaavw4l02thdv8lcrl:6812:8274 [4] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0
iv-2udaavw4l02thdv8lcrl:6808:8273 [0] NCCL INFO Channel 00/02 : 0 1
iv-2udaavw4l02thdv8lcrl:6808:8273 [0] NCCL INFO Channel 01/02 : 0 1
iv-2udaavw4l02thdv8lcrl:6812:8274 [4] NCCL INFO Setting affinity for GPU 4 to 0fffff,fffffc00,00000000
iv-2udaavw4l02thdv8lcrl:6808:8273 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1
iv-2udaavw4l02thdv8lcrl:6808:8273 [0] NCCL INFO Setting affinity for GPU 0 to 03ff,ffffffff
iv-2udaavw4l02thdv8lcrl:6809:8270 [1] NCCL INFO Channel 00/02 : 0 1
iv-2udaavw4l02thdv8lcrl:6809:8270 [1] NCCL INFO Channel 01/02 : 0 1
iv-2udaavw4l02thdv8lcrl:6809:8270 [1] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1
iv-2udaavw4l02thdv8lcrl:6809:8270 [1] NCCL INFO Setting affinity for GPU 1 to 03ff,ffffffff
iv-2udaavw4l02thdv8lcrl:6813:8271 [5] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0
iv-2udaavw4l02thdv8lcrl:6813:8271 [5] NCCL INFO Setting affinity for GPU 5 to 0fffff,fffffc00,00000000
iv-2udaavw4l02thdv8lcrl:6810:8263 [2] NCCL INFO Channel 00 : 0[67010] -> 1[6b010] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6814:8264 [6] NCCL INFO Channel 00 : 1[6b010] -> 0[67010] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6810:8263 [2] NCCL INFO Channel 01 : 0[67010] -> 1[6b010] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6814:8264 [6] NCCL INFO Channel 01 : 1[6b010] -> 0[67010] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6815:8262 [7] NCCL INFO Channel 00 : 1[6b020] -> 0[67020] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6811:8260 [3] NCCL INFO Channel 00 : 0[67020] -> 1[6b020] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6815:8262 [7] NCCL INFO Channel 01 : 1[6b020] -> 0[67020] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6811:8260 [3] NCCL INFO Channel 01 : 0[67020] -> 1[6b020] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6814:8264 [6] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6814:8264 [6] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6814:8264 [6] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6814:8264 [6] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6810:8263 [2] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6810:8263 [2] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6810:8263 [2] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6810:8263 [2] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6810:8263 [2] NCCL INFO comm 0x7fd9ec008fb0 rank 0 nranks 2 cudaDev 2 busId 67010 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6814:8264 [6] NCCL INFO comm 0x7f90b0008fb0 rank 1 nranks 2 cudaDev 6 busId 6b010 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6810:6810 [2] NCCL INFO Launch mode Parallel
iv-2udaavw4l02thdv8lcrl:6808:8273 [0] NCCL INFO Channel 00 : 0[65010] -> 1[69010] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6812:8274 [4] NCCL INFO Channel 00 : 1[69010] -> 0[65010] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6809:8270 [1] NCCL INFO Channel 00 : 0[65020] -> 1[69020] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6808:8273 [0] NCCL INFO Channel 01 : 0[65010] -> 1[69010] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6812:8274 [4] NCCL INFO Channel 01 : 1[69010] -> 0[65010] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6813:8271 [5] NCCL INFO Channel 00 : 1[69020] -> 0[65020] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6809:8270 [1] NCCL INFO Channel 01 : 0[65020] -> 1[69020] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6813:8271 [5] NCCL INFO Channel 01 : 1[69020] -> 0[65020] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6815:8262 [7] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6815:8262 [7] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6815:8262 [7] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6815:8262 [7] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6811:8260 [3] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6811:8260 [3] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6811:8260 [3] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6811:8260 [3] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6815:8262 [7] NCCL INFO comm 0x7f4980008fb0 rank 1 nranks 2 cudaDev 7 busId 6b020 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6811:8260 [3] NCCL INFO comm 0x7f466c008fb0 rank 0 nranks 2 cudaDev 3 busId 67020 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6811:6811 [3] NCCL INFO Launch mode Parallel
iv-2udaavw4l02thdv8lcrl:6808:8273 [0] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6808:8273 [0] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6808:8273 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6808:8273 [0] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6813:8271 [5] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6813:8271 [5] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6813:8271 [5] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6813:8271 [5] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6809:8270 [1] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6809:8270 [1] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6809:8270 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6809:8270 [1] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6813:8271 [5] NCCL INFO comm 0x7efeb8008fb0 rank 1 nranks 2 cudaDev 5 busId 69020 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6809:8270 [1] NCCL INFO comm 0x7fa1fc008fb0 rank 0 nranks 2 cudaDev 1 busId 65020 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6809:6809 [1] NCCL INFO Launch mode Parallel
iv-2udaavw4l02thdv8lcrl:6812:8274 [4] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6812:8274 [4] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6812:8274 [4] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6812:8274 [4] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6808:8273 [0] NCCL INFO comm 0x7fc6bc008fb0 rank 0 nranks 2 cudaDev 0 busId 65010 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6812:8274 [4] NCCL INFO comm 0x7f71b0008fb0 rank 1 nranks 2 cudaDev 4 busId 69010 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6808:6808 [0] NCCL INFO Launch mode Parallel
/dataset/workspace/Megatron-LM/megatron/model/transformer.py:536: UserWarning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (Triggered internally at /opt/pytorch/pytorch/aten/src/ATen/core/LegacyTypeDispatch.h:74.)
  output = bias_dropout_add_func(
(the preceding warning and its source line are repeated 3 more times)
NCCL version 2.10.3+cuda11.4
iv-2udaavw4l02thdv8lcrl:6815:8302 [7] NCCL INFO Trees [0] 0/-1/-1->1->-1 [1] 0/-1/-1->1->-1 [2] 0/-1/-1->1->-1 [3] 0/-1/-1->1->-1
iv-2udaavw4l02thdv8lcrl:6813:8301 [5] NCCL INFO Channel 00/04 : 0 1
iv-2udaavw4l02thdv8lcrl:6813:8301 [5] NCCL INFO Channel 01/04 : 0 1
iv-2udaavw4l02thdv8lcrl:6813:8301 [5] NCCL INFO Channel 02/04 : 0 1
iv-2udaavw4l02thdv8lcrl:6815:8302 [7] NCCL INFO Setting affinity for GPU 7 to 0fffff,fffffc00,00000000
iv-2udaavw4l02thdv8lcrl:6813:8301 [5] NCCL INFO Channel 03/04 : 0 1
iv-2udaavw4l02thdv8lcrl:6813:8301 [5] NCCL INFO Trees [0] -1/-1/-1->0->1 [1] -1/-1/-1->0->1 [2] -1/-1/-1->0->1 [3] -1/-1/-1->0->1
iv-2udaavw4l02thdv8lcrl:6813:8301 [5] NCCL INFO Setting affinity for GPU 5 to 0fffff,fffffc00,00000000
iv-2udaavw4l02thdv8lcrl:6815:8302 [7] NCCL INFO Channel 00 : 1[6b020] -> 0[69020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6813:8301 [5] NCCL INFO Channel 00 : 0[69020] -> 1[6b020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6815:8302 [7] NCCL INFO Channel 01 : 1[6b020] -> 0[69020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6813:8301 [5] NCCL INFO Channel 01 : 0[69020] -> 1[6b020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6815:8302 [7] NCCL INFO Channel 02 : 1[6b020] -> 0[69020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6813:8301 [5] NCCL INFO Channel 02 : 0[69020] -> 1[6b020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6815:8302 [7] NCCL INFO Channel 03 : 1[6b020] -> 0[69020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6813:8301 [5] NCCL INFO Channel 03 : 0[69020] -> 1[6b020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6813:8301 [5] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6813:8301 [5] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6813:8301 [5] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6813:8301 [5] NCCL INFO 4 coll channels, 4 p2p channels, 4 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6815:8302 [7] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6815:8302 [7] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6815:8302 [7] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6815:8302 [7] NCCL INFO 4 coll channels, 4 p2p channels, 4 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6813:8301 [5] NCCL INFO comm 0x7efcb4008fb0 rank 0 nranks 2 cudaDev 5 busId 69020 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6815:8302 [7] NCCL INFO comm 0x7f49800db490 rank 1 nranks 2 cudaDev 7 busId 6b020 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6813:6813 [5] NCCL INFO Launch mode Parallel
iv-2udaavw4l02thdv8lcrl:6809:8385 [1] NCCL INFO Channel 00/04 : 0 1
iv-2udaavw4l02thdv8lcrl:6811:8386 [3] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0 [2] -1/-1/-1->1->0 [3] -1/-1/-1->1->0
iv-2udaavw4l02thdv8lcrl:6809:8385 [1] NCCL INFO Channel 01/04 : 0 1
iv-2udaavw4l02thdv8lcrl:6811:8386 [3] NCCL INFO Setting affinity for GPU 3 to 03ff,ffffffff
iv-2udaavw4l02thdv8lcrl:6809:8385 [1] NCCL INFO Channel 02/04 : 0 1
iv-2udaavw4l02thdv8lcrl:6809:8385 [1] NCCL INFO Channel 03/04 : 0 1
iv-2udaavw4l02thdv8lcrl:6809:8385 [1] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1 [2] 1/-1/-1->0->-1 [3] 1/-1/-1->0->-1
iv-2udaavw4l02thdv8lcrl:6809:8385 [1] NCCL INFO Setting affinity for GPU 1 to 03ff,ffffffff
iv-2udaavw4l02thdv8lcrl:6809:8385 [1] NCCL INFO Channel 00 : 0[65020] -> 1[67020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6811:8386 [3] NCCL INFO Channel 00 : 1[67020] -> 0[65020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6809:8385 [1] NCCL INFO Channel 01 : 0[65020] -> 1[67020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6811:8386 [3] NCCL INFO Channel 01 : 1[67020] -> 0[65020] via P2P/IPC iv-2udaavw4l02thdv8lcrl:6809:8385 [1] NCCL INFO Channel 02 : 0[65020] -> 1[67020] via P2P/IPC iv-2udaavw4l02thdv8lcrl:6811:8386 [3] NCCL INFO Channel 02 : 1[67020] -> 0[65020] via P2P/IPC iv-2udaavw4l02thdv8lcrl:6809:8385 [1] NCCL INFO Channel 03 : 0[65020] -> 1[67020] via P2P/IPC iv-2udaavw4l02thdv8lcrl:6811:8386 [3] NCCL INFO Channel 03 : 1[67020] -> 0[65020] via P2P/IPC iv-2udaavw4l02thdv8lcrl:6809:8385 [1] NCCL INFO Connected all rings iv-2udaavw4l02thdv8lcrl:6809:8385 [1] NCCL INFO Connected all trees iv-2udaavw4l02thdv8lcrl:6809:8385 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-2udaavw4l02thdv8lcrl:6809:8385 [1] NCCL INFO 4 coll channels, 4 p2p channels, 4 p2p channels per peer iv-2udaavw4l02thdv8lcrl:6811:8386 [3] NCCL INFO Connected all rings iv-2udaavw4l02thdv8lcrl:6811:8386 [3] NCCL INFO Connected all trees iv-2udaavw4l02thdv8lcrl:6811:8386 [3] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-2udaavw4l02thdv8lcrl:6811:8386 [3] NCCL INFO 4 coll channels, 4 p2p channels, 4 p2p channels per peer iv-2udaavw4l02thdv8lcrl:6809:8385 [1] NCCL INFO comm 0x7fa090008fb0 rank 0 nranks 2 cudaDev 1 busId 65020 - Init COMPLETE iv-2udaavw4l02thdv8lcrl:6811:8386 [3] NCCL INFO comm 0x7f44f8008fb0 rank 1 nranks 2 cudaDev 3 busId 67020 - Init COMPLETE iv-2udaavw4l02thdv8lcrl:6809:6809 [1] NCCL INFO Launch mode Parallel iv-2udaavw4l02thdv8lcrl:6808:8393 [0] NCCL INFO Trees [0] 5/-1/-1->4->7 [1] 5/-1/-1->4->7 iv-2udaavw4l02thdv8lcrl:6808:8393 [0] NCCL INFO Setting affinity for GPU 0 to 03ff,ffffffff iv-2udaavw4l02thdv8lcrl:6809:8392 [1] NCCL INFO Trees [0] -1/-1/-1->5->4 [1] -1/-1/-1->5->4 iv-2udaavw4l02thdv8lcrl:6813:8398 [5] NCCL INFO Trees [0] 4/-1/-1->7->6 [1] 4/-1/-1->7->6 iv-2udaavw4l02thdv8lcrl:6812:8395 [4] NCCL INFO Trees [0] 7/-1/-1->6->2 [1] 7/2/-1->6->-1 iv-2udaavw4l02thdv8lcrl:6809:8392 [1] NCCL INFO Setting affinity for GPU 1 to 03ff,ffffffff 
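The per-rank `NCCL INFO` ring/tree/channel messages above only appear when NCCL debug logging is turned on. A minimal sketch of how that is typically enabled (environment variables documented by NCCL; set before the first collective or before `init_process_group`):

```python
import os

# Enable verbose NCCL logging, as seen in the output above.
os.environ.setdefault("NCCL_DEBUG", "INFO")
# Optionally narrow the output to specific subsystems, e.g. only init messages.
os.environ.setdefault("NCCL_DEBUG_SUBSYS", "INIT")

print(os.environ["NCCL_DEBUG"])
```

The same variables can equally be exported in the shell that launches the job.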
iv-2udaavw4l02thdv8lcrl:6812:8395 [4] NCCL INFO Setting affinity for GPU 4 to 0fffff,fffffc00,00000000
iv-2udaavw4l02thdv8lcrl:6813:8398 [5] NCCL INFO Setting affinity for GPU 5 to 0fffff,fffffc00,00000000
iv-2udaavw4l02thdv8lcrl:6810:8394 [2] NCCL INFO Trees [0] 5/-1/-1->4->6 [1] 5/-1/-1->4->6
iv-2udaavw4l02thdv8lcrl:6810:8394 [2] NCCL INFO Setting affinity for GPU 2 to 03ff,ffffffff
iv-2udaavw4l02thdv8lcrl:6811:8391 [3] NCCL INFO Trees [0] 7/-1/-1->5->4 [1] 7/-1/-1->5->4
iv-2udaavw4l02thdv8lcrl:6814:8396 [6] NCCL INFO Trees [0] 4/-1/-1->6->2 [1] 4/2/-1->6->-1
iv-2udaavw4l02thdv8lcrl:6811:8391 [3] NCCL INFO Setting affinity for GPU 3 to 03ff,ffffffff
iv-2udaavw4l02thdv8lcrl:6814:8396 [6] NCCL INFO Setting affinity for GPU 6 to 0fffff,fffffc00,00000000
iv-2udaavw4l02thdv8lcrl:6815:8399 [7] NCCL INFO Trees [0] -1/-1/-1->7->5 [1] -1/-1/-1->7->5
iv-2udaavw4l02thdv8lcrl:6815:8399 [7] NCCL INFO Setting affinity for GPU 7 to 0fffff,fffffc00,00000000
iv-2udaavw4l02thdv8lcrl:6813:8398 [5] NCCL INFO Channel 00 : 7[69020] -> 0[65010] [send] via NET/IBext/0/GDRDMA
iv-2udaavw4l02thdv8lcrl:6809:8392 [1] NCCL INFO Channel 00 : 5[65020] -> 6[69010] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6809:8392 [1] NCCL INFO Channel 01 : 5[65020] -> 6[69010] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6813:8398 [5] NCCL INFO Channel 01 : 7[69020] -> 0[65010] [send] via NET/IBext/0/GDRDMA
iv-2udaavw4l02thdv8lcrl:6808:8393 [0] NCCL INFO Channel 00 : 3[69020] -> 4[65010] [receive] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6811:8391 [3] NCCL INFO Channel 00 : 5[67020] -> 6[6b010] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6811:8391 [3] NCCL INFO Channel 01 : 5[67020] -> 6[6b010] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6810:8394 [2] NCCL INFO Channel 00 : 3[6b020] -> 4[67010] [receive] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6815:8399 [7] NCCL INFO Channel 00 : 7[6b020] -> 0[67010] [send] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6808:8393 [0] NCCL INFO Channel 01 : 3[69020] -> 4[65010] [receive] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6808:8393 [0] NCCL INFO Channel 00 : 4[65010] -> 5[65020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6808:8393 [0] NCCL INFO Channel 01 : 4[65010] -> 5[65020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6810:8394 [2] NCCL INFO Channel 01 : 3[6b020] -> 4[67010] [receive] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6810:8394 [2] NCCL INFO Channel 00 : 4[67010] -> 5[67020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6810:8394 [2] NCCL INFO Channel 01 : 4[67010] -> 5[67020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6812:8395 [4] NCCL INFO Channel 00 : 6[69010] -> 7[69020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6814:8396 [6] NCCL INFO Channel 00 : 6[6b010] -> 7[6b020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6815:8399 [7] NCCL INFO Channel 01 : 7[6b020] -> 0[67010] [send] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6814:8396 [6] NCCL INFO Channel 01 : 6[6b010] -> 7[6b020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6812:8395 [4] NCCL INFO Channel 01 : 6[69010] -> 7[69020] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6810:8394 [2] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6815:8399 [7] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6810:8394 [2] NCCL INFO Channel 00 : 4[67010] -> 6[6b010] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6814:8396 [6] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6808:8393 [0] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6812:8395 [4] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6811:8391 [3] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6810:8394 [2] NCCL INFO Channel 01 : 4[67010] -> 6[6b010] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6809:8392 [1] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6812:8395 [4] NCCL INFO Channel 00 : 2[69010] -> 6[69010] [receive] via NET/IBext/0/GDRDMA
iv-2udaavw4l02thdv8lcrl:6808:8393 [0] NCCL INFO Channel 00 : 4[65010] -> 7[69020] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6811:8391 [3] NCCL INFO Channel 00 : 5[67020] -> 7[6b020] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6809:8392 [1] NCCL INFO Channel 00 : 5[65020] -> 4[65010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6812:8395 [4] NCCL INFO Channel 01 : 2[69010] -> 6[69010] [receive] via NET/IBext/0/GDRDMA
iv-2udaavw4l02thdv8lcrl:6808:8393 [0] NCCL INFO Channel 01 : 4[65010] -> 7[69020] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6811:8391 [3] NCCL INFO Channel 01 : 5[67020] -> 7[6b020] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6809:8392 [1] NCCL INFO Channel 01 : 5[65020] -> 4[65010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6813:8398 [5] NCCL INFO Connected all rings
iv-2udaavw4l02thdv8lcrl:6812:8395 [4] NCCL INFO Channel 00 : 6[69010] -> 2[69010] [send] via NET/IBext/0/GDRDMA
iv-2udaavw4l02thdv8lcrl:6812:8395 [4] NCCL INFO Channel 01 : 6[69010] -> 2[69010] [send] via NET/IBext/0/GDRDMA
iv-2udaavw4l02thdv8lcrl:6815:8399 [7] NCCL INFO Channel 00 : 7[6b020] -> 5[67020] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6815:8399 [7] NCCL INFO Channel 01 : 7[6b020] -> 5[67020] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6814:8396 [6] NCCL INFO Channel 00 : 2[6b010] -> 6[6b010] [receive] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6814:8396 [6] NCCL INFO Channel 01 : 2[6b010] -> 6[6b010] [receive] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6814:8396 [6] NCCL INFO Channel 00 : 6[6b010] -> 2[6b010] [send] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6813:8398 [5] NCCL INFO Channel 00 : 7[69020] -> 4[65010] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6813:8398 [5] NCCL INFO Channel 01 : 7[69020] -> 4[65010] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6814:8396 [6] NCCL INFO Channel 01 : 6[6b010] -> 2[6b010] [send] via NET/IBext/0
iv-2udaavw4l02thdv8lcrl:6815:8399 [7] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6815:8399 [7] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6815:8399 [7] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6811:8391 [3] NCCL INFO Channel 00 : 5[67020] -> 4[67010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6811:8391 [3] NCCL INFO Channel 01 : 5[67020] -> 4[67010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6814:8396 [6] NCCL INFO Channel 00 : 6[6b010] -> 4[67010] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6814:8396 [6] NCCL INFO Channel 01 : 6[6b010] -> 4[67010] via direct shared memory
iv-2udaavw4l02thdv8lcrl:6814:8396 [6] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6814:8396 [6] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6814:8396 [6] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6813:8398 [5] NCCL INFO Channel 00 : 7[69020] -> 6[69010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6808:8393 [0] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6808:8393 [0] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6808:8393 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6809:8392 [1] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6809:8392 [1] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6809:8392 [1] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6813:8398 [5] NCCL INFO Channel 01 : 7[69020] -> 6[69010] via P2P/IPC
iv-2udaavw4l02thdv8lcrl:6810:8394 [2] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6810:8394 [2] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6810:8394 [2] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6811:8391 [3] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6811:8391 [3] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6811:8391 [3] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6812:8395 [4] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6812:8395 [4] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6812:8395 [4] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6813:8398 [5] NCCL INFO Connected all trees
iv-2udaavw4l02thdv8lcrl:6813:8398 [5] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/512
iv-2udaavw4l02thdv8lcrl:6813:8398 [5] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
iv-2udaavw4l02thdv8lcrl:6810:8394 [2] NCCL INFO comm 0x7fd880008fb0 rank 4 nranks 8 cudaDev 2 busId 67010 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6814:8396 [6] NCCL INFO comm 0x7f8de4008fb0 rank 6 nranks 8 cudaDev 6 busId 6b010 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6811:8391 [3] NCCL INFO comm 0x7f44e8008fb0 rank 5 nranks 8 cudaDev 3 busId 67020 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6815:8399 [7] NCCL INFO comm 0x7f4748008fb0 rank 7 nranks 8 cudaDev 7 busId 6b020 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6808:8393 [0] NCCL INFO comm 0x7fc558008fb0 rank 4 nranks 8 cudaDev 0 busId 65010 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6812:8395 [4] NCCL INFO comm 0x7f6e94008fb0 rank 6 nranks 8 cudaDev 4 busId 69010 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6809:8392 [1] NCCL INFO comm 0x7fa088008fb0 rank 5 nranks 8 cudaDev 1 busId 65020 - Init COMPLETE
iv-2udaavw4l02thdv8lcrl:6813:8398 [5] NCCL INFO comm 0x7efc8c008fb0 rank 7 nranks 8 cudaDev 5 busId 69020 - Init COMPLETE
[Rank 9] (after 100 iterations) memory (MB) | allocated: 979.1171875 | max allocated: 5475.4501953125 | reserved: 9444.0 | max reserved: 9444.0
[Rank 8] (after 100 iterations) memory (MB) | allocated: 979.1171875 | max allocated: 5475.4501953125 | reserved: 9956.0 | max reserved: 9956.0
[Rank 13] (after 100 iterations) memory (MB) | allocated: 1228.98095703125 | max allocated: 6083.9453125 | reserved: 11556.0 | max reserved: 11556.0
iteration 100/ 210 | consumed samples: 204800 | elapsed time per iteration (ms): 8863.6 | tpt: 231.1 samples/s | global batch size: 2048 | lm loss: 9.572460E+00 | sop loss: 6.981737E-01 | loss scale: 262144.0 | grad norm: 2.802 | number of skipped iterations: 15 | number of nan iterations: 0 |
[Rank 12] (after 100 iterations) memory (MB) | allocated: 1228.98095703125 | max allocated: 6083.9453125 | reserved: 12068.0 | max reserved: 12068.0
time (ms) | forward-compute: 2118.48 | forward-recv: 660.37 | backward-compute: 4515.72 | backward-send: 10.83 | backward-send-forward-recv: 142.63 | backward-params-all-reduce: 2.95 | backward-embedding-all-reduce: 1394.32 | optimizer-copy-to-main-grad: 1.30 | optimizer-unscale-and-check-inf: 8.50 | optimizer-clip-main-grad: 1.96 | optimizer-copy-main-to-model-params: 0.81 | optimizer: 14.59 | batch-generator: 26.05
timestamp, name, driver_version, utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB]
2022/06/16 09:34:53.293, Tesla V100-SXM2-32GB, 470.57.02, 100 %, 0 %, 32510 MiB, 21218 MiB, 11292 MiB
2022/06/16 09:34:53.293, Tesla V100-SXM2-32GB, 470.57.02, 68 %, 2 %, 32510 MiB, 21724 MiB, 10786 MiB
2022/06/16 09:34:53.294, Tesla V100-SXM2-32GB, 470.57.02, 48 %, 2 %, 32510 MiB, 21626 MiB, 10884 MiB
2022/06/16 09:34:53.295, Tesla V100-SXM2-32GB, 470.57.02, 47 %, 2 %, 32510 MiB, 21746 MiB, 10764 MiB
2022/06/16 09:34:53.297, Tesla V100-SXM2-32GB, 470.57.02, 35 %, 3 %, 32510 MiB, 18992 MiB, 13518 MiB
2022/06/16 09:34:53.298, Tesla V100-SXM2-32GB, 470.57.02, 100 %, 0 %, 32510 MiB, 21218 MiB, 11292 MiB
2022/06/16 09:34:53.301, Tesla V100-SXM2-32GB, 470.57.02, 100 %, 0 %, 32510 MiB, 19532 MiB, 12978 MiB
2022/06/16 09:34:53.301, Tesla V100-SXM2-32GB, 470.57.02, 31 %, 2 %, 32510 MiB, 21218 MiB, 11292 MiB
2022/06/16 09:34:53.302, Tesla V100-SXM2-32GB, 470.57.02, 68 %, 2 %, 32510 MiB, 21724 MiB, 10786 MiB
2022/06/16 09:34:53.304, Tesla V100-SXM2-32GB, 470.57.02, 100 %, 0 %, 32510 MiB, 20280 MiB, 12230 MiB
2022/06/16 09:34:53.305, Tesla V100-SXM2-32GB, 470.57.02, 68 %, 2 %, 32510 MiB, 21724 MiB, 10786 MiB
2022/06/16 09:34:53.305, Tesla V100-SXM2-32GB, 470.57.02, 31 %, 2 %, 32510 MiB, 21218 MiB, 11292 MiB
2022/06/16 09:34:53.306, Tesla V100-SXM2-32GB, 470.57.02, 48 %, 2 %, 32510 MiB, 21626 MiB, 10884 MiB
2022/06/16 09:34:53.307, Tesla V100-SXM2-32GB, 470.57.02, 76 %, 3 %, 32510 MiB, 19106 MiB, 13404 MiB
2022/06/16 09:34:53.308, Tesla V100-SXM2-32GB, 470.57.02, 48 %, 2 %, 32510 MiB, 21626 MiB, 10884 MiB
2022/06/16 09:34:53.308, Tesla V100-SXM2-32GB, 470.57.02, 68 %, 2 %, 32510 MiB, 21724 MiB, 10786 MiB
2022/06/16 09:34:53.309, Tesla V100-SXM2-32GB, 470.57.02, 47 %, 2 %, 32510 MiB, 21746 MiB, 10764 MiB
2022/06/16 09:34:53.308, Tesla V100-SXM2-32GB, 470.57.02, 31 %, 2 %, 32510 MiB, 21218 MiB, 11292 MiB
2022/06/16 09:34:53.309, Tesla V100-SXM2-32GB, 470.57.02, 31 %, 2 %, 32510 MiB, 21218 MiB, 11292 MiB
2022/06/16 09:34:53.310, Tesla V100-SXM2-32GB, 470.57.02, 31 %, 2 %, 32510 MiB, 21218 MiB, 11292 MiB
2022/06/16 09:34:53.310, Tesla V100-SXM2-32GB, 470.57.02, 31 %, 2 %, 32510 MiB, 21218 MiB, 11292 MiB
2022/06/16 09:34:53.313, Tesla V100-SXM2-32GB, 470.57.02, 47 %, 2 %, 32510 MiB, 21746 MiB, 10764 MiB
2022/06/16 09:34:53.314, Tesla V100-SXM2-32GB, 470.57.02, 48 %, 2 %, 32510 MiB, 21626 MiB, 10884 MiB
2022/06/16 09:34:53.314, Tesla V100-SXM2-32GB, 470.57.02, 35 %, 3 %, 32510 MiB, 18992 MiB, 13518 MiB
2022/06/16 09:34:53.315, Tesla V100-SXM2-32GB, 470.57.02, 68 %, 2 %, 32510 MiB, 21724 MiB, 10786 MiB
2022/06/16 09:34:53.316, Tesla V100-SXM2-32GB, 470.57.02, 68 %, 2 %, 32510 MiB, 21724 MiB, 10786 MiB
2022/06/16 09:34:53.316, Tesla V100-SXM2-32GB, 470.57.02, 68 %, 2 %, 32510 MiB, 21724 MiB, 10786 MiB
2022/06/16 09:34:53.317, Tesla V100-SXM2-32GB, 470.57.02, 68 %, 2 %, 32510 MiB, 21724 MiB, 10786 MiB
2022/06/16 09:34:53.319, Tesla V100-SXM2-32GB, 470.57.02, 35 %, 3 %, 32510 MiB, 18992 MiB, 13518 MiB
2022/06/16 09:34:53.320, Tesla V100-SXM2-32GB, 470.57.02, 47 %, 2 %, 32510 MiB, 21746 MiB, 10764 MiB
2022/06/16 09:34:53.320, Tesla V100-SXM2-32GB, 470.57.02, 29 %, 3 %, 32510 MiB, 19532 MiB, 12978 MiB
2022/06/16 09:34:53.321, Tesla V100-SXM2-32GB, 470.57.02, 48 %, 2 %, 32510 MiB, 21626 MiB, 10884 MiB
2022/06/16 09:34:53.322, Tesla V100-SXM2-32GB, 470.57.02, 48 %, 2 %, 32510 MiB, 21626 MiB, 10884 MiB
2022/06/16 09:34:53.322, Tesla V100-SXM2-32GB, 470.57.02, 48 %, 2 %, 32510 MiB, 21626 MiB, 10884 MiB
2022/06/16 09:34:53.323, Tesla V100-SXM2-32GB, 470.57.02, 48 %, 2 %, 32510 MiB, 21626 MiB, 10884 MiB
2022/06/16 09:34:53.325, Tesla V100-SXM2-32GB, 470.57.02, 29 %, 3 %, 32510 MiB, 19532 MiB, 12978 MiB
2022/06/16 09:34:53.326, Tesla V100-SXM2-32GB, 470.57.02, 35 %, 3 %, 32510 MiB, 18992 MiB, 13518 MiB
2022/06/16 09:34:53.326, Tesla V100-SXM2-32GB, 470.57.02, 100 %, 0 %, 32510 MiB, 20280 MiB, 12230 MiB
2022/06/16 09:34:53.327, Tesla V100-SXM2-32GB, 470.57.02, 47 %, 2 %, 32510 MiB, 21746 MiB, 10764 MiB
2022/06/16 09:34:53.328, Tesla V100-SXM2-32GB, 470.57.02, 47 %, 2 %, 32510 MiB, 21746 MiB, 10764 MiB
2022/06/16 09:34:53.328, Tesla V100-SXM2-32GB, 470.57.02, 47 %, 2 %, 32510 MiB, 21746 MiB, 10764 MiB
2022/06/16 09:34:53.329, Tesla V100-SXM2-32GB, 470.57.02, 47 %, 2 %, 32510 MiB, 21746 MiB, 10764 MiB
2022/06/16 09:34:53.331, Tesla V100-SXM2-32GB, 470.57.02, 100 %, 0 %, 32510 MiB, 20280 MiB, 12230 MiB
2022/06/16 09:34:53.332, Tesla V100-SXM2-32GB, 470.57.02, 29 %, 3 %, 32510 MiB, 19532 MiB, 12978 MiB
2022/06/16 09:34:53.332, Tesla V100-SXM2-32GB, 470.57.02, 76 %, 3 %, 32510 MiB, 19106 MiB, 13404 MiB
2022/06/16 09:34:53.333, Tesla V100-SXM2-32GB, 470.57.02, 35 %, 3 %, 32510 MiB, 18992 MiB, 13518 MiB
2022/06/16 09:34:53.334, Tesla V100-SXM2-32GB, 470.57.02, 35 %, 3 %, 32510 MiB, 18992 MiB, 13518 MiB
2022/06/16 09:34:53.334, Tesla V100-SXM2-32GB, 470.57.02, 35 %, 3 %, 32510 MiB, 18992 MiB, 13518 MiB
2022/06/16 09:34:53.337, Tesla V100-SXM2-32GB, 470.57.02, 35 %, 3 %, 32510 MiB, 18992 MiB, 13518 MiB
2022/06/16 09:34:53.339, Tesla V100-SXM2-32GB, 470.57.02, 76 %, 3 %, 32510 MiB, 19106 MiB, 13404 MiB
2022/06/16 09:34:53.340, Tesla V100-SXM2-32GB, 470.57.02, 7 %, 3 %, 32510 MiB, 20280 MiB, 12230 MiB
2022/06/16 09:34:53.341, Tesla V100-SXM2-32GB, 470.57.02, 29 %, 3 %, 32510 MiB, 19532 MiB, 12978 MiB
2022/06/16 09:34:53.342, Tesla V100-SXM2-32GB, 470.57.02, 29 %, 3 %, 32510 MiB, 19532 MiB, 12978 MiB
2022/06/16 09:34:53.342, Tesla V100-SXM2-32GB, 470.57.02, 29 %, 3 %, 32510 MiB, 19532 MiB, 12978 MiB
2022/06/16 09:34:53.343, Tesla V100-SXM2-32GB, 470.57.02, 29 %, 3 %, 32510 MiB, 19532 MiB, 12978 MiB
2022/06/16 09:34:53.346, Tesla V100-SXM2-32GB, 470.57.02, 76 %, 3 %, 32510 MiB, 19106 MiB, 13404 MiB
2022/06/16 09:34:53.347, Tesla V100-SXM2-32GB, 470.57.02, 7 %, 3 %, 32510 MiB, 20280 MiB, 12230 MiB
2022/06/16 09:34:53.348, Tesla V100-SXM2-32GB, 470.57.02, 7 %, 3 %, 32510 MiB, 20280 MiB, 12230 MiB
2022/06/16 09:34:53.348, Tesla V100-SXM2-32GB, 470.57.02, 7 %, 3 %, 32510 MiB, 20280 MiB, 12230 MiB
2022/06/16 09:34:53.348, Tesla V100-SXM2-32GB, 470.57.02, 7 %, 3 %, 32510 MiB, 20280 MiB, 12230 MiB
2022/06/16 09:34:53.351, Tesla V100-SXM2-32GB, 470.57.02, 76 %, 3 %, 32510 MiB, 19106 MiB, 13404 MiB
2022/06/16 09:34:53.352, Tesla V100-SXM2-32GB, 470.57.02, 76 %, 3 %, 32510 MiB, 19106 MiB, 13404 MiB
2022/06/16 09:34:53.353, Tesla V100-SXM2-32GB, 470.57.02, 76 %, 3 %, 32510 MiB, 19106 MiB, 13404 MiB
2022/06/16 09:34:53.353, Tesla V100-SXM2-32GB, 470.57.02, 76 %, 3 %, 32510 MiB, 19106 MiB, 13404 MiB
iteration 200/ 210 | consumed samples: 409600 | elapsed time per iteration (ms): 8765.2 | tpt: 233.7 samples/s | global batch size: 2048 | lm loss: 8.917925E+00 | sop loss: 6.941690E-01 | loss scale: 262144.0 | grad norm: 3.933 | number of skipped iterations: 0 | number of nan iterations: 0 |
time (ms) | forward-compute: 2100.25 | forward-recv: 593.83 | backward-compute: 4515.73 | backward-send: 10.93 | backward-send-forward-recv: 132.42 | backward-params-all-reduce: 2.97 | backward-embedding-all-reduce: 1394.04 | optimizer-copy-to-main-grad: 1.80 | optimizer-unscale-and-check-inf: 1.37 | optimizer-clip-main-grad: 2.18 | optimizer-copy-main-to-model-params: 1.00 | optimizer: 8.85 | batch-generator: 23.15
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
validation loss at the end of training for val data | lm loss value: 8.696914E+00 | lm loss PPL: 5.984414E+03 | sop loss value: 6.920051E-01 | sop loss PPL: 1.997717E+00 |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
validation loss at the end of training for test data | lm loss value: 8.660933E+00 | lm loss PPL: 5.772916E+03 | sop loss value: 6.939546E-01 | sop loss PPL: 2.001615E+00 |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
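The GPU utilization snapshots interleaved above are CSV output from concurrent `nvidia-smi --query-gpu=... --format=csv` invocations, one per worker, which is why the header row repeats. A hypothetical helper (not part of Megatron-LM) to parse such interleaved snapshots into records, skipping repeated headers:

```python
import csv
import io

def parse_gpu_snapshots(text):
    """Parse interleaved nvidia-smi CSV output; repeated header rows are skipped."""
    rows = []
    header = None
    for record in csv.reader(io.StringIO(text)):
        if not record:
            continue
        fields = [f.strip() for f in record]
        if fields[0] == "timestamp":  # header row, possibly repeated per process
            header = fields
            continue
        rows.append(dict(zip(header, fields)))
    return rows

sample = """timestamp, name, driver_version, utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB]
2022/06/16 09:34:53.293, Tesla V100-SXM2-32GB, 470.57.02, 100 %, 0 %, 32510 MiB, 21218 MiB, 11292 MiB
timestamp, name, driver_version, utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB]
2022/06/16 09:34:53.297, Tesla V100-SXM2-32GB, 470.57.02, 35 %, 3 %, 32510 MiB, 18992 MiB, 13518 MiB"""

snapshots = parse_gpu_snapshots(sample)
print(len(snapshots))                       # 2
print(snapshots[0]["utilization.gpu [%]"])  # 100 %
```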
INFO:torch.distributed.elastic.agent.server.api:[default] worker group successfully finished. Waiting 300 seconds for other agents to finish.
INFO:torch.distributed.elastic.agent.server.api:Local worker group finished (SUCCEEDED). Waiting 300 seconds for other agents to finish
/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/utils/store.py:70: FutureWarning: This is an experimental API and will be changed in future.
  warnings.warn(
INFO:torch.distributed.elastic.agent.server.api:Done waiting for other agents. Elapsed: 0.006617069244384766 seconds
{"name": "torchelastic.worker.status.SUCCEEDED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 8, "group_rank": 1, "worker_id": "6808", "role": "default", "hostname": "iv-2udaavw4l02thdv8lcrl", "state": "SUCCEEDED", "total_run_time": 1972, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 2, \"entry_point\": \"python\", \"local_rank\": [0], \"role_rank\": [8], \"role_world_size\": [16]}", "agent_restarts": 0}}
{"name": "torchelastic.worker.status.SUCCEEDED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 9, "group_rank": 1, "worker_id": "6809", "role": "default", "hostname": "iv-2udaavw4l02thdv8lcrl", "state": "SUCCEEDED", "total_run_time": 1972, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 2, \"entry_point\": \"python\", \"local_rank\": [1], \"role_rank\": [9], \"role_world_size\": [16]}", "agent_restarts": 0}}
{"name": "torchelastic.worker.status.SUCCEEDED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 10, "group_rank": 1, "worker_id": "6810", "role": "default", "hostname": "iv-2udaavw4l02thdv8lcrl", "state": "SUCCEEDED", "total_run_time": 1972, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 2, \"entry_point\": \"python\", \"local_rank\": [2], \"role_rank\": [10], \"role_world_size\": [16]}", "agent_restarts": 0}}
{"name": "torchelastic.worker.status.SUCCEEDED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 11, "group_rank": 1, "worker_id": "6811", "role": "default", "hostname": "iv-2udaavw4l02thdv8lcrl", "state": "SUCCEEDED", "total_run_time": 1972, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 2, \"entry_point\": \"python\", \"local_rank\": [3], \"role_rank\": [11], \"role_world_size\": [16]}", "agent_restarts": 0}}
{"name": "torchelastic.worker.status.SUCCEEDED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 12, "group_rank": 1, "worker_id": "6812", "role": "default", "hostname": "iv-2udaavw4l02thdv8lcrl", "state": "SUCCEEDED", "total_run_time": 1972, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 2, \"entry_point\": \"python\", \"local_rank\": [4], \"role_rank\": [12], \"role_world_size\": [16]}", "agent_restarts": 0}}
{"name": "torchelastic.worker.status.SUCCEEDED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 13, "group_rank": 1, "worker_id": "6813", "role": "default", "hostname": "iv-2udaavw4l02thdv8lcrl", "state": "SUCCEEDED", "total_run_time": 1972, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 2, \"entry_point\": \"python\", \"local_rank\": [5], \"role_rank\": [13], \"role_world_size\": [16]}", "agent_restarts": 0}}
{"name": "torchelastic.worker.status.SUCCEEDED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 14, "group_rank": 1, "worker_id": "6814", "role": "default", "hostname": "iv-2udaavw4l02thdv8lcrl", "state": "SUCCEEDED", "total_run_time": 1972, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 2, \"entry_point\": \"python\", \"local_rank\": [6], \"role_rank\": [14], \"role_world_size\": [16]}", "agent_restarts": 0}}
{"name": "torchelastic.worker.status.SUCCEEDED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 15, "group_rank": 1, "worker_id": "6815", "role": "default", "hostname": "iv-2udaavw4l02thdv8lcrl", "state": "SUCCEEDED", "total_run_time": 1972, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 2, \"entry_point\": \"python\", \"local_rank\": [7], \"role_rank\": [15], \"role_world_size\": [16]}", "agent_restarts": 0}}
{"name": "torchelastic.worker.status.SUCCEEDED", "source": "AGENT", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": null, "group_rank": 1, "worker_id": null, "role": "default", "hostname": "iv-2udaavw4l02thdv8lcrl", "state": "SUCCEEDED", "total_run_time": 1972, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 2, \"entry_point\": \"python\"}", "agent_restarts": 0}}
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
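The torchelastic status records above show each worker's local_rank and global rank as assigned by the launcher. With torch.distributed.run (which replaces the deprecated torch.distributed.launch `--use_env` flow), a worker reads these coordinates from environment variables instead of a `--local_rank` argument. A minimal sketch (the fallback defaults are only for running outside a launcher):

```python
import os

# Environment contract provided by torch.distributed.run / torchrun:
local_rank = int(os.environ.get("LOCAL_RANK", "0"))  # rank within this node
global_rank = int(os.environ.get("RANK", "0"))       # rank across all nodes
world_size = int(os.environ.get("WORLD_SIZE", "1"))  # total number of workers

print(local_rank, global_rank, world_size)
```

The worker would then pass these to `torch.distributed.init_process_group` (or let it read them from the same environment).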