The module torch.distributed.launch is deprecated and going to be removed in future. Migrate to torch.distributed.run
WARNING:torch.distributed.run:--use_env is deprecated and will be removed in future releases. Please read local_rank from `os.environ('LOCAL_RANK')` instead.
INFO:torch.distributed.launcher.api:Starting elastic_operator with launch configs:
  entrypoint       : pretrain_bert.py
  min_nodes        : 2
  max_nodes        : 2
  nproc_per_node   : 8
  run_id           : none
  rdzv_backend     : static
  rdzv_endpoint    : 198.18.8.14:6000
  rdzv_configs     : {'rank': 1, 'timeout': 900}
  max_restarts     : 3
  monitor_interval : 5
  log_dir          : None
  metrics_cfg      : {}
INFO:torch.distributed.elastic.agent.server.local_elastic_agent:log directory set to: /tmp/torchelastic_dmb4lz8k/none_ts_jearx
INFO:torch.distributed.elastic.agent.server.api:[default] starting workers for entrypoint: python
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous'ing worker group
/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/utils/store.py:52: FutureWarning: This is an experimental API and will be changed in future.
  warnings.warn(
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous complete for workers. Result:
  restart_count=0
  master_addr=198.18.8.14
  master_port=6000
  group_rank=1
  group_world_size=2
  local_ranks=[0, 1, 2, 3, 4, 5, 6, 7]
  role_ranks=[8, 9, 10, 11, 12, 13, 14, 15]
  global_ranks=[8, 9, 10, 11, 12, 13, 14, 15]
  role_world_sizes=[16, 16, 16, 16, 16, 16, 16, 16]
  global_world_sizes=[16, 16, 16, 16, 16, 16, 16, 16]
INFO:torch.distributed.elastic.agent.server.api:[default] Starting worker group
INFO:torch.distributed.elastic.multiprocessing:Setting worker0 reply file to: /tmp/torchelastic_dmb4lz8k/none_ts_jearx/attempt_0/0/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker1 reply file to: /tmp/torchelastic_dmb4lz8k/none_ts_jearx/attempt_0/1/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker2 reply file to: /tmp/torchelastic_dmb4lz8k/none_ts_jearx/attempt_0/2/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker3 reply file to: /tmp/torchelastic_dmb4lz8k/none_ts_jearx/attempt_0/3/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker4 reply file to: /tmp/torchelastic_dmb4lz8k/none_ts_jearx/attempt_0/4/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker5 reply file to: /tmp/torchelastic_dmb4lz8k/none_ts_jearx/attempt_0/5/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker6 reply file to: /tmp/torchelastic_dmb4lz8k/none_ts_jearx/attempt_0/6/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker7 reply file to: /tmp/torchelastic_dmb4lz8k/none_ts_jearx/attempt_0/7/error.json
[W ProcessGroupNCCL.cpp:1671] Rank 10 using best-guess GPU 2 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device.
[W ProcessGroupNCCL.cpp:1671] Rank 14 using best-guess GPU 6 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device.
[W ProcessGroupNCCL.cpp:1671] Rank 12 using best-guess GPU 4 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device.
[W ProcessGroupNCCL.cpp:1671] Rank 11 using best-guess GPU 3 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device.
[W ProcessGroupNCCL.cpp:1671] Rank 15 using best-guess GPU 7 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device.
[W ProcessGroupNCCL.cpp:1671] Rank 13 using best-guess GPU 5 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device.
[W ProcessGroupNCCL.cpp:1671] Rank 8 using best-guess GPU 0 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device.
[W ProcessGroupNCCL.cpp:1671] Rank 9 using best-guess GPU 1 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device.
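The two warnings above each name a concrete remedy: launch through torch.distributed.run (so the launcher exports LOCAL_RANK, RANK, WORLD_SIZE, MASTER_ADDR and MASTER_PORT instead of passing --local_rank), and give barrier() an explicit device_ids so NCCL does not have to guess the rank-to-GPU mapping. The sketch below illustrates that setup; init_distributed is a hypothetical helper written for this note, not code from pretrain_bert.py, and it assumes the script is started via torch.distributed.run as the warning suggests.

```python
import os

import torch
import torch.distributed as dist


def init_distributed() -> int:
    # Hypothetical helper. Assumes a launch along the lines of:
    #   python -m torch.distributed.run --nnodes=2 --node_rank=1 --nproc_per_node=8 \
    #       --master_addr=198.18.8.14 --master_port=6000 pretrain_bert.py
    # which exports LOCAL_RANK / RANK / WORLD_SIZE / MASTER_ADDR / MASTER_PORT.
    local_rank = int(os.environ["LOCAL_RANK"])  # replaces the old --local_rank argument

    # Pin this process to its GPU before any collective runs, so NCCL knows the
    # rank-to-device mapping instead of falling back to a best-guess GPU.
    torch.cuda.set_device(local_rank)

    dist.init_process_group(backend="nccl")  # picks up the env:// rendezvous variables

    # Passing device_ids makes the barrier use this rank's GPU explicitly,
    # which is the fix the ProcessGroupNCCL warning asks for.
    dist.barrier(device_ids=[local_rank])
    return local_rank
```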
iv-ybpu7pvmis5m57pm6ny1:26828:26828 [5] NCCL INFO Bootstrap : Using eth0:192.168.11.142<0>
iv-ybpu7pvmis5m57pm6ny1:26826:26826 [3] NCCL INFO Bootstrap : Using eth0:192.168.11.142<0>
iv-ybpu7pvmis5m57pm6ny1:26825:26825 [2] NCCL INFO Bootstrap : Using eth0:192.168.11.142<0>
iv-ybpu7pvmis5m57pm6ny1:26828:26828 [5] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
iv-ybpu7pvmis5m57pm6ny1:26828:26828 [5] NCCL INFO P2P plugin IBext
iv-ybpu7pvmis5m57pm6ny1:26828:26828 [5] NCCL INFO NCCL_IB_PCI_RELAXED_ORDERING set by environment to 1.
iv-ybpu7pvmis5m57pm6ny1:26827:26827 [4] NCCL INFO Bootstrap : Using eth0:192.168.11.142<0>
iv-ybpu7pvmis5m57pm6ny1:26823:26823 [0] NCCL INFO Bootstrap : Using eth0:192.168.11.142<0>
iv-ybpu7pvmis5m57pm6ny1:26826:26826 [3] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
iv-ybpu7pvmis5m57pm6ny1:26826:26826 [3] NCCL INFO P2P plugin IBext
iv-ybpu7pvmis5m57pm6ny1:26826:26826 [3] NCCL INFO NCCL_IB_PCI_RELAXED_ORDERING set by environment to 1.
iv-ybpu7pvmis5m57pm6ny1:26830:26830 [7] NCCL INFO Bootstrap : Using eth0:192.168.11.142<0>
iv-ybpu7pvmis5m57pm6ny1:26825:26825 [2] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
iv-ybpu7pvmis5m57pm6ny1:26825:26825 [2] NCCL INFO P2P plugin IBext
iv-ybpu7pvmis5m57pm6ny1:26825:26825 [2] NCCL INFO NCCL_IB_PCI_RELAXED_ORDERING set by environment to 1.
iv-ybpu7pvmis5m57pm6ny1:26824:26824 [1] NCCL INFO Bootstrap : Using eth0:192.168.11.142<0>
iv-ybpu7pvmis5m57pm6ny1:26829:26829 [6] NCCL INFO Bootstrap : Using eth0:192.168.11.142<0>
iv-ybpu7pvmis5m57pm6ny1:26827:26827 [4] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
iv-ybpu7pvmis5m57pm6ny1:26827:26827 [4] NCCL INFO P2P plugin IBext
iv-ybpu7pvmis5m57pm6ny1:26827:26827 [4] NCCL INFO NCCL_IB_PCI_RELAXED_ORDERING set by environment to 1.
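The NCCL_IB_PCI_RELAXED_ORDERING lines above, together with the NCCL_IB_GID_INDEX, NCCL_IB_TIMEOUT and NCCL_IB_RETRY_CNT lines that follow, are RDMA/RoCE settings that NCCL reads from the environment when the communicator is created. They are normally exported in the shell or job script before launch; the snippet below is only an illustrative way to pin the same values from Python, assuming it runs before init_process_group() triggers NCCL initialization in the worker process.

```python
import os

# Values as they appear in this log; NCCL consults these environment variables
# at communicator setup, so they must be set before the process group exists.
_NCCL_IB_ENV = {
    "NCCL_IB_PCI_RELAXED_ORDERING": "1",
    "NCCL_IB_GID_INDEX": "3",
    "NCCL_IB_TIMEOUT": "23",
    "NCCL_IB_RETRY_CNT": "7",
}

for _key, _value in _NCCL_IB_ENV.items():
    # setdefault keeps any value already exported by the launcher or job script.
    os.environ.setdefault(_key, _value)
```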
iv-ybpu7pvmis5m57pm6ny1:26823:26823 [0] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so iv-ybpu7pvmis5m57pm6ny1:26823:26823 [0] NCCL INFO P2P plugin IBext iv-ybpu7pvmis5m57pm6ny1:26823:26823 [0] NCCL INFO NCCL_IB_PCI_RELAXED_ORDERING set by environment to 1. iv-ybpu7pvmis5m57pm6ny1:26830:26830 [7] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so iv-ybpu7pvmis5m57pm6ny1:26830:26830 [7] NCCL INFO P2P plugin IBext iv-ybpu7pvmis5m57pm6ny1:26830:26830 [7] NCCL INFO NCCL_IB_PCI_RELAXED_ORDERING set by environment to 1. iv-ybpu7pvmis5m57pm6ny1:26824:26824 [1] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so iv-ybpu7pvmis5m57pm6ny1:26824:26824 [1] NCCL INFO P2P plugin IBext iv-ybpu7pvmis5m57pm6ny1:26824:26824 [1] NCCL INFO NCCL_IB_PCI_RELAXED_ORDERING set by environment to 1. iv-ybpu7pvmis5m57pm6ny1:26829:26829 [6] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so iv-ybpu7pvmis5m57pm6ny1:26829:26829 [6] NCCL INFO P2P plugin IBext iv-ybpu7pvmis5m57pm6ny1:26829:26829 [6] NCCL INFO NCCL_IB_PCI_RELAXED_ORDERING set by environment to 1. iv-ybpu7pvmis5m57pm6ny1:26828:26828 [5] NCCL INFO NET/IB : Using [0]mlx5_1:1/RoCE ; OOB eth0:192.168.11.142<0> iv-ybpu7pvmis5m57pm6ny1:26828:26828 [5] NCCL INFO Using network IBext iv-ybpu7pvmis5m57pm6ny1:26830:26830 [7] NCCL INFO NET/IB : Using [0]mlx5_1:1/RoCE ; OOB eth0:192.168.11.142<0> iv-ybpu7pvmis5m57pm6ny1:26830:26830 [7] NCCL INFO Using network IBext iv-ybpu7pvmis5m57pm6ny1:26826:26826 [3] NCCL INFO NET/IB : Using [0]mlx5_1:1/RoCE ; OOB eth0:192.168.11.142<0> iv-ybpu7pvmis5m57pm6ny1:26826:26826 [3] NCCL INFO Using network IBext iv-ybpu7pvmis5m57pm6ny1:26825:26825 [2] NCCL INFO NET/IB : Using [0]mlx5_1:1/RoCE ; OOB eth0:192.168.11.142<0> iv-ybpu7pvmis5m57pm6ny1:26825:26825 [2] NCCL INFO Using network IBext iv-ybpu7pvmis5m57pm6ny1:26824:26824 [1] NCCL INFO NET/IB : Using [0]mlx5_1:1/RoCE ; OOB eth0:192.168.11.142<0> iv-ybpu7pvmis5m57pm6ny1:26824:26824 [1] NCCL INFO Using network IBext iv-ybpu7pvmis5m57pm6ny1:26829:26829 [6] NCCL INFO NET/IB : Using [0]mlx5_1:1/RoCE ; OOB eth0:192.168.11.142<0> iv-ybpu7pvmis5m57pm6ny1:26829:26829 [6] NCCL INFO Using network IBext iv-ybpu7pvmis5m57pm6ny1:26827:26827 [4] NCCL INFO NET/IB : Using [0]mlx5_1:1/RoCE ; OOB eth0:192.168.11.142<0> iv-ybpu7pvmis5m57pm6ny1:26823:26823 [0] NCCL INFO NET/IB : Using [0]mlx5_1:1/RoCE ; OOB eth0:192.168.11.142<0> iv-ybpu7pvmis5m57pm6ny1:26823:26823 [0] NCCL INFO Using network IBext iv-ybpu7pvmis5m57pm6ny1:26827:26827 [4] NCCL INFO Using network IBext iv-ybpu7pvmis5m57pm6ny1:26824:27039 [1] NCCL INFO NCCL_IB_GID_INDEX set by environment to 3. iv-ybpu7pvmis5m57pm6ny1:26830:27037 [7] NCCL INFO NCCL_IB_GID_INDEX set by environment to 3. iv-ybpu7pvmis5m57pm6ny1:26829:27041 [6] NCCL INFO NCCL_IB_GID_INDEX set by environment to 3. iv-ybpu7pvmis5m57pm6ny1:26825:27040 [2] NCCL INFO NCCL_IB_GID_INDEX set by environment to 3. iv-ybpu7pvmis5m57pm6ny1:26828:27036 [5] NCCL INFO NCCL_IB_GID_INDEX set by environment to 3. iv-ybpu7pvmis5m57pm6ny1:26823:27043 [0] NCCL INFO NCCL_IB_GID_INDEX set by environment to 3. iv-ybpu7pvmis5m57pm6ny1:26826:27038 [3] NCCL INFO NCCL_IB_GID_INDEX set by environment to 3. iv-ybpu7pvmis5m57pm6ny1:26827:27042 [4] NCCL INFO NCCL_IB_GID_INDEX set by environment to 3. iv-ybpu7pvmis5m57pm6ny1:26824:27039 [1] NCCL INFO NCCL_IB_TIMEOUT set by environment to 23. iv-ybpu7pvmis5m57pm6ny1:26824:27039 [1] NCCL INFO NCCL_IB_RETRY_CNT set by environment to 7. 
iv-ybpu7pvmis5m57pm6ny1:26829:27041 [6] NCCL INFO NCCL_IB_TIMEOUT set by environment to 23. iv-ybpu7pvmis5m57pm6ny1:26829:27041 [6] NCCL INFO NCCL_IB_RETRY_CNT set by environment to 7. iv-ybpu7pvmis5m57pm6ny1:26825:27040 [2] NCCL INFO NCCL_IB_TIMEOUT set by environment to 23. iv-ybpu7pvmis5m57pm6ny1:26825:27040 [2] NCCL INFO NCCL_IB_RETRY_CNT set by environment to 7. iv-ybpu7pvmis5m57pm6ny1:26830:27037 [7] NCCL INFO NCCL_IB_TIMEOUT set by environment to 23. iv-ybpu7pvmis5m57pm6ny1:26828:27036 [5] NCCL INFO NCCL_IB_TIMEOUT set by environment to 23. iv-ybpu7pvmis5m57pm6ny1:26826:27038 [3] NCCL INFO NCCL_IB_TIMEOUT set by environment to 23. iv-ybpu7pvmis5m57pm6ny1:26830:27037 [7] NCCL INFO NCCL_IB_RETRY_CNT set by environment to 7. iv-ybpu7pvmis5m57pm6ny1:26826:27038 [3] NCCL INFO NCCL_IB_RETRY_CNT set by environment to 7. iv-ybpu7pvmis5m57pm6ny1:26828:27036 [5] NCCL INFO NCCL_IB_RETRY_CNT set by environment to 7. iv-ybpu7pvmis5m57pm6ny1:26823:27043 [0] NCCL INFO NCCL_IB_TIMEOUT set by environment to 23. iv-ybpu7pvmis5m57pm6ny1:26823:27043 [0] NCCL INFO NCCL_IB_RETRY_CNT set by environment to 7. iv-ybpu7pvmis5m57pm6ny1:26827:27042 [4] NCCL INFO NCCL_IB_TIMEOUT set by environment to 23. iv-ybpu7pvmis5m57pm6ny1:26827:27042 [4] NCCL INFO NCCL_IB_RETRY_CNT set by environment to 7. iv-ybpu7pvmis5m57pm6ny1:26825:27040 [2] NCCL INFO Trees [0] 11/-1/-1->10->8 [1] 11/-1/-1->10->8 iv-ybpu7pvmis5m57pm6ny1:26825:27040 [2] NCCL INFO Setting affinity for GPU 2 to 03ff,ffffffff iv-ybpu7pvmis5m57pm6ny1:26826:27038 [3] NCCL INFO Trees [0] 9/-1/-1->11->10 [1] 9/-1/-1->11->10 iv-ybpu7pvmis5m57pm6ny1:26826:27038 [3] NCCL INFO Setting affinity for GPU 3 to 03ff,ffffffff iv-ybpu7pvmis5m57pm6ny1:26827:27042 [4] NCCL INFO Trees [0] 13/-1/-1->12->4 [1] 13/4/-1->12->-1 iv-ybpu7pvmis5m57pm6ny1:26827:27042 [4] NCCL INFO Setting affinity for GPU 4 to 0fffff,fffffc00,00000000 iv-ybpu7pvmis5m57pm6ny1:26828:27036 [5] NCCL INFO Trees [0] 15/-1/-1->13->12 [1] 15/-1/-1->13->12 iv-ybpu7pvmis5m57pm6ny1:26829:27041 [6] NCCL INFO Trees [0] -1/-1/-1->14->9 [1] -1/-1/-1->14->9 iv-ybpu7pvmis5m57pm6ny1:26830:27037 [7] NCCL INFO Trees [0] 8/-1/-1->15->13 [1] 8/-1/-1->15->13 iv-ybpu7pvmis5m57pm6ny1:26828:27036 [5] NCCL INFO Setting affinity for GPU 5 to 0fffff,fffffc00,00000000 iv-ybpu7pvmis5m57pm6ny1:26829:27041 [6] NCCL INFO Setting affinity for GPU 6 to 0fffff,fffffc00,00000000 iv-ybpu7pvmis5m57pm6ny1:26830:27037 [7] NCCL INFO Setting affinity for GPU 7 to 0fffff,fffffc00,00000000 iv-ybpu7pvmis5m57pm6ny1:26823:27043 [0] NCCL INFO Trees [0] 10/-1/-1->8->15 [1] 10/-1/-1->8->15 iv-ybpu7pvmis5m57pm6ny1:26824:27039 [1] NCCL INFO Trees [0] 14/-1/-1->9->11 [1] 14/-1/-1->9->11 iv-ybpu7pvmis5m57pm6ny1:26823:27043 [0] NCCL INFO Setting affinity for GPU 0 to 03ff,ffffffff iv-ybpu7pvmis5m57pm6ny1:26824:27039 [1] NCCL INFO Setting affinity for GPU 1 to 03ff,ffffffff iv-ybpu7pvmis5m57pm6ny1:26825:27040 [2] NCCL INFO Channel 00 : 10[67010] -> 13[69020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26825:27040 [2] NCCL INFO Channel 01 : 10[67010] -> 13[69020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26829:27041 [6] NCCL INFO Channel 00 : 14[6b010] -> 15[6b020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26827:27042 [4] NCCL INFO Channel 00 : 12[69010] -> 14[6b010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26823:27043 [0] NCCL INFO Channel 00 : 8[65010] -> 9[65020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26829:27041 [6] NCCL INFO Channel 01 : 14[6b010] -> 15[6b020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26827:27042 [4] NCCL INFO Channel 01 : 12[69010] -> 14[6b010] via P2P/IPC 
iv-ybpu7pvmis5m57pm6ny1:26823:27043 [0] NCCL INFO Channel 01 : 8[65010] -> 9[65020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26828:27036 [5] NCCL INFO Channel 00 : 13[69020] -> 4[69010] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmis5m57pm6ny1:26830:27037 [7] NCCL INFO Channel 00 : 15[6b020] -> 8[65010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26828:27036 [5] NCCL INFO Channel 01 : 13[69020] -> 4[69010] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmis5m57pm6ny1:26824:27039 [1] NCCL INFO Channel 00 : 9[65020] -> 11[67020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26830:27037 [7] NCCL INFO Channel 01 : 15[6b020] -> 8[65010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26824:27039 [1] NCCL INFO Channel 01 : 9[65020] -> 11[67020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26829:27041 [6] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26830:27037 [7] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26823:27043 [0] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26824:27039 [1] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26827:27042 [4] NCCL INFO Channel 00 : 5[69020] -> 12[69010] [receive] via NET/IBext/0/GDRDMA iv-ybpu7pvmis5m57pm6ny1:26823:27043 [0] NCCL INFO Channel 00 : 8[65010] -> 10[67010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26824:27039 [1] NCCL INFO Channel 00 : 9[65020] -> 14[6b010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26826:27038 [3] NCCL INFO Channel 00 : 11[67020] -> 10[67010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26827:27042 [4] NCCL INFO Channel 01 : 5[69020] -> 12[69010] [receive] via NET/IBext/0/GDRDMA iv-ybpu7pvmis5m57pm6ny1:26823:27043 [0] NCCL INFO Channel 01 : 8[65010] -> 10[67010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26826:27038 [3] NCCL INFO Channel 01 : 11[67020] -> 10[67010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26824:27039 [1] NCCL INFO Channel 01 : 9[65020] -> 14[6b010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26826:27038 [3] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26825:27040 [2] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26825:27040 [2] NCCL INFO Channel 00 : 10[67010] -> 11[67020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26829:27041 [6] NCCL INFO Channel 00 : 14[6b010] -> 9[65020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26825:27040 [2] NCCL INFO Channel 01 : 10[67010] -> 11[67020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26829:27041 [6] NCCL INFO Channel 01 : 14[6b010] -> 9[65020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26828:27036 [5] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26827:27042 [4] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26829:27041 [6] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26829:27041 [6] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26829:27041 [6] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26827:27042 [4] NCCL INFO Channel 00 : 12[69010] -> 13[69020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26826:27038 [3] NCCL INFO Channel 00 : 11[67020] -> 9[65020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26827:27042 [4] NCCL INFO Channel 01 : 12[69010] -> 13[69020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26826:27038 [3] NCCL INFO Channel 01 : 11[67020] -> 9[65020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26827:27042 [4] NCCL INFO Channel 00 : 4[69010] -> 12[69010] [receive] via NET/IBext/0/GDRDMA iv-ybpu7pvmis5m57pm6ny1:26828:27036 [5] NCCL INFO Channel 00 : 13[69020] -> 15[6b020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26825:27040 [2] NCCL INFO Channel 00 : 10[67010] -> 8[65010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26823:27043 [0] NCCL INFO Channel 00 : 8[65010] -> 15[6b020] via 
P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26827:27042 [4] NCCL INFO Channel 01 : 4[69010] -> 12[69010] [receive] via NET/IBext/0/GDRDMA iv-ybpu7pvmis5m57pm6ny1:26824:27039 [1] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26824:27039 [1] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26824:27039 [1] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26826:27038 [3] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26826:27038 [3] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26826:27038 [3] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26828:27036 [5] NCCL INFO Channel 01 : 13[69020] -> 15[6b020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26823:27043 [0] NCCL INFO Channel 01 : 8[65010] -> 15[6b020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26825:27040 [2] NCCL INFO Channel 01 : 10[67010] -> 8[65010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26824:27039 [1] NCCL INFO Channel 01 : 9[65020] -> 12[69010] via P2P/indirect/14[6b010] iv-ybpu7pvmis5m57pm6ny1:26827:27042 [4] NCCL INFO Channel 00 : 12[69010] -> 4[69010] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmis5m57pm6ny1:26826:27038 [3] NCCL INFO Channel 00 : 11[67020] -> 13[69020] via P2P/indirect/12[69010] iv-ybpu7pvmis5m57pm6ny1:26827:27042 [4] NCCL INFO Channel 01 : 12[69010] -> 4[69010] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmis5m57pm6ny1:26830:27037 [7] NCCL INFO Channel 00 : 15[6b020] -> 13[69020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26830:27037 [7] NCCL INFO Channel 01 : 15[6b020] -> 13[69020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26823:27043 [0] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26823:27043 [0] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26823:27043 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26825:27040 [2] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26825:27040 [2] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26825:27040 [2] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26830:27037 [7] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26830:27037 [7] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26830:27037 [7] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26823:27043 [0] NCCL INFO Channel 00 : 8[65010] -> 12[69010] via P2P/indirect/11[67020] iv-ybpu7pvmis5m57pm6ny1:26825:27040 [2] NCCL INFO Channel 00 : 10[67010] -> 12[69010] via P2P/indirect/13[69020] iv-ybpu7pvmis5m57pm6ny1:26828:27036 [5] NCCL INFO Channel 00 : 13[69020] -> 12[69010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26828:27036 [5] NCCL INFO Channel 01 : 13[69020] -> 12[69010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26828:27036 [5] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26828:27036 [5] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26828:27036 [5] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26827:27042 [4] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26827:27042 [4] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26827:27042 [4] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26826:27038 [3] NCCL INFO Channel 01 : 11[67020] -> 14[6b010] via P2P/indirect/9[65020] iv-ybpu7pvmis5m57pm6ny1:26825:27040 [2] NCCL 
INFO Channel 00 : 10[67010] -> 14[6b010] via P2P/indirect/9[65020] iv-ybpu7pvmis5m57pm6ny1:26826:27038 [3] NCCL INFO Channel 00 : 11[67020] -> 15[6b020] via P2P/indirect/8[65010] iv-ybpu7pvmis5m57pm6ny1:26824:27039 [1] NCCL INFO Channel 00 : 9[65020] -> 13[69020] via P2P/indirect/14[6b010] iv-ybpu7pvmis5m57pm6ny1:26825:27040 [2] NCCL INFO Channel 01 : 10[67010] -> 15[6b020] via P2P/indirect/8[65010] iv-ybpu7pvmis5m57pm6ny1:26827:27042 [4] NCCL INFO Channel 00 : 12[69010] -> 8[65010] via P2P/indirect/15[6b020] iv-ybpu7pvmis5m57pm6ny1:26824:27039 [1] NCCL INFO Channel 00 : 9[65020] -> 15[6b020] via P2P/indirect/8[65010] iv-ybpu7pvmis5m57pm6ny1:26823:27043 [0] NCCL INFO Channel 01 : 8[65010] -> 13[69020] via P2P/indirect/15[6b020] iv-ybpu7pvmis5m57pm6ny1:26823:27043 [0] NCCL INFO Channel 00 : 8[65010] -> 14[6b010] via P2P/indirect/9[65020] iv-ybpu7pvmis5m57pm6ny1:26828:27036 [5] NCCL INFO Channel 01 : 13[69020] -> 8[65010] via P2P/indirect/15[6b020] iv-ybpu7pvmis5m57pm6ny1:26830:27037 [7] NCCL INFO Channel 00 : 15[6b020] -> 9[65020] via P2P/indirect/14[6b010] iv-ybpu7pvmis5m57pm6ny1:26829:27041 [6] NCCL INFO Channel 00 : 14[6b010] -> 8[65010] via P2P/indirect/15[6b020] iv-ybpu7pvmis5m57pm6ny1:26830:27037 [7] NCCL INFO Channel 01 : 15[6b020] -> 10[67010] via P2P/indirect/8[65010] iv-ybpu7pvmis5m57pm6ny1:26829:27041 [6] NCCL INFO Channel 00 : 14[6b010] -> 10[67010] via P2P/indirect/13[69020] iv-ybpu7pvmis5m57pm6ny1:26830:27037 [7] NCCL INFO Channel 00 : 15[6b020] -> 11[67020] via P2P/indirect/8[65010] iv-ybpu7pvmis5m57pm6ny1:26829:27041 [6] NCCL INFO Channel 01 : 14[6b010] -> 11[67020] via P2P/indirect/9[65020] iv-ybpu7pvmis5m57pm6ny1:26828:27036 [5] NCCL INFO Channel 00 : 13[69020] -> 9[65020] via P2P/indirect/14[6b010] iv-ybpu7pvmis5m57pm6ny1:26828:27036 [5] NCCL INFO Channel 00 : 13[69020] -> 11[67020] via P2P/indirect/10[67010] iv-ybpu7pvmis5m57pm6ny1:26827:27042 [4] NCCL INFO Channel 01 : 12[69010] -> 9[65020] via P2P/indirect/14[6b010] iv-ybpu7pvmis5m57pm6ny1:26827:27042 [4] NCCL INFO Channel 00 : 12[69010] -> 10[67010] via P2P/indirect/11[67020] iv-ybpu7pvmis5m57pm6ny1:26828:27036 [5] NCCL INFO comm 0x7f0d3c008fb0 rank 13 nranks 16 cudaDev 5 busId 69020 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26826:27038 [3] NCCL INFO comm 0x7fc13c008fb0 rank 11 nranks 16 cudaDev 3 busId 67020 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26830:27037 [7] NCCL INFO comm 0x7f6f24008fb0 rank 15 nranks 16 cudaDev 7 busId 6b020 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26824:27039 [1] NCCL INFO comm 0x7f9cc8008fb0 rank 9 nranks 16 cudaDev 1 busId 65020 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26825:27040 [2] NCCL INFO comm 0x7f6830008fb0 rank 10 nranks 16 cudaDev 2 busId 67010 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26829:27041 [6] NCCL INFO comm 0x7f9da8008fb0 rank 14 nranks 16 cudaDev 6 busId 6b010 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26823:27043 [0] NCCL INFO comm 0x7f98d0008fb0 rank 8 nranks 16 cudaDev 0 busId 65010 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26827:27042 [4] NCCL INFO comm 0x7f2be4008fb0 rank 12 nranks 16 cudaDev 4 busId 69010 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26827:27091 [4] NCCL INFO Trees [0] 7/-1/-1->6->2 [1] 7/2/-1->6->-1 iv-ybpu7pvmis5m57pm6ny1:26829:27092 [6] NCCL INFO Trees [0] 4/-1/-1->7->6 [1] 4/-1/-1->7->6 iv-ybpu7pvmis5m57pm6ny1:26829:27092 [6] NCCL INFO Setting affinity for GPU 6 to 0fffff,fffffc00,00000000 iv-ybpu7pvmis5m57pm6ny1:26827:27091 [4] NCCL INFO Setting affinity for GPU 4 to 0fffff,fffffc00,00000000 iv-ybpu7pvmis5m57pm6ny1:26823:27090 [0] NCCL INFO Trees [0] 
5/-1/-1->4->7 [1] 5/-1/-1->4->7 iv-ybpu7pvmis5m57pm6ny1:26825:27089 [2] NCCL INFO Trees [0] -1/-1/-1->5->4 [1] -1/-1/-1->5->4 iv-ybpu7pvmis5m57pm6ny1:26823:27090 [0] NCCL INFO Setting affinity for GPU 0 to 03ff,ffffffff iv-ybpu7pvmis5m57pm6ny1:26825:27089 [2] NCCL INFO Setting affinity for GPU 2 to 03ff,ffffffff iv-ybpu7pvmis5m57pm6ny1:26827:27091 [4] NCCL INFO Channel 00 : 6[69010] -> 0[65010] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmis5m57pm6ny1:26823:27090 [0] NCCL INFO Channel 00 : 4[65010] -> 5[67010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26823:27090 [0] NCCL INFO Channel 01 : 4[65010] -> 5[67010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26827:27091 [4] NCCL INFO Channel 01 : 6[69010] -> 0[65010] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmis5m57pm6ny1:26825:27089 [2] NCCL INFO Channel 00 : 5[67010] -> 7[6b010] via direct shared memory iv-ybpu7pvmis5m57pm6ny1:26825:27089 [2] NCCL INFO Channel 01 : 5[67010] -> 7[6b010] via direct shared memory iv-ybpu7pvmis5m57pm6ny1:26823:27090 [0] NCCL INFO Channel 00 : 2[69010] -> 4[65010] [receive] via NET/IBext/0 iv-ybpu7pvmis5m57pm6ny1:26823:27090 [0] NCCL INFO Channel 01 : 2[69010] -> 4[65010] [receive] via NET/IBext/0 iv-ybpu7pvmis5m57pm6ny1:26825:27089 [2] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26829:27092 [6] NCCL INFO Channel 00 : 7[6b010] -> 6[69010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26825:27089 [2] NCCL INFO Channel 00 : 5[67010] -> 4[65010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26829:27092 [6] NCCL INFO Channel 01 : 7[6b010] -> 6[69010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26825:27089 [2] NCCL INFO Channel 01 : 5[67010] -> 4[65010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26823:27090 [0] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26823:27090 [0] NCCL INFO Channel 00 : 4[65010] -> 7[6b010] via direct shared memory iv-ybpu7pvmis5m57pm6ny1:26823:27090 [0] NCCL INFO Channel 01 : 4[65010] -> 7[6b010] via direct shared memory iv-ybpu7pvmis5m57pm6ny1:26827:27091 [4] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26829:27092 [6] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26827:27091 [4] NCCL INFO Channel 00 : 6[69010] -> 7[6b010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26827:27091 [4] NCCL INFO Channel 01 : 6[69010] -> 7[6b010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26827:27091 [4] NCCL INFO Channel 00 : 2[69010] -> 6[69010] [receive] via NET/IBext/0/GDRDMA iv-ybpu7pvmis5m57pm6ny1:26827:27091 [4] NCCL INFO Channel 01 : 2[69010] -> 6[69010] [receive] via NET/IBext/0/GDRDMA iv-ybpu7pvmis5m57pm6ny1:26827:27091 [4] NCCL INFO Channel 00 : 6[69010] -> 2[69010] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmis5m57pm6ny1:26827:27091 [4] NCCL INFO Channel 01 : 6[69010] -> 2[69010] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmis5m57pm6ny1:26829:27092 [6] NCCL INFO Channel 00 : 7[6b010] -> 4[65010] via direct shared memory iv-ybpu7pvmis5m57pm6ny1:26829:27092 [6] NCCL INFO Channel 01 : 7[6b010] -> 4[65010] via direct shared memory iv-ybpu7pvmis5m57pm6ny1:26829:27092 [6] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26829:27092 [6] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26829:27092 [6] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26823:27090 [0] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26823:27090 [0] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26823:27090 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26825:27089 [2] NCCL INFO Connected all trees 
iv-ybpu7pvmis5m57pm6ny1:26825:27089 [2] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26825:27089 [2] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26827:27091 [4] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26827:27091 [4] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26827:27091 [4] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26825:27089 [2] NCCL INFO comm 0x7f67b0008fb0 rank 5 nranks 8 cudaDev 2 busId 67010 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26823:27090 [0] NCCL INFO comm 0x7f984c008fb0 rank 4 nranks 8 cudaDev 0 busId 65010 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26827:27091 [4] NCCL INFO comm 0x7f2b58008fb0 rank 6 nranks 8 cudaDev 4 busId 69010 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26829:27092 [6] NCCL INFO comm 0x7f9d18008fb0 rank 7 nranks 8 cudaDev 6 busId 6b010 - Init COMPLETE NCCL version 2.10.3+cuda11.4 NCCL version 2.10.3+cuda11.4 NCCL version 2.10.3+cuda11.4 NCCL version 2.10.3+cuda11.4 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 00/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 01/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 02/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 03/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 04/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 05/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 06/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 07/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 08/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 09/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 10/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 11/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 12/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 13/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 14/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 15/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 16/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 17/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 18/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 19/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 20/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 21/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 22/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 23/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 24/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 25/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 26/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 27/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 28/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 29/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 30/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Channel 31/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Trees [0] -1/-1/-1->0->-1 [1] -1/-1/-1->0->-1 [2] -1/-1/-1->0->-1 [3] -1/-1/-1->0->-1 [4] -1/-1/-1->0->-1 [5] -1/-1/-1->0->-1 [6] -1/-1/-1->0->-1 [7] -1/-1/-1->0->-1 [8] -1/-1/-1->0->-1 [9] -1/-1/-1->0->-1 [10] -1/-1/-1->0->-1 [11] 
-1/-1/-1->0->-1 [12] -1/-1/-1->0->-1 [13] -1/-1/-1->0->-1 [14] -1/-1/-1->0->-1 [15] -1/-1/-1->0->-1 [16] -1/-1/-1->0->-1 [17] -1/-1/-1->0->-1 [18] -1/-1/-1->0->-1 [19] -1/-1/-1->0->-1 [20] -1/-1/-1->0->-1 [21] -1/-1/-1->0->-1 [22] -1/-1/-1->0->-1 [23] -1/-1/-1->0->-1 [24] -1/-1/-1->0->-1 [25] -1/-1/-1->0->-1 [26] -1/-1/-1->0->-1 [27] -1/-1/-1->0->-1 [28] -1/-1/-1->0->-1 [29] -1/-1/-1->0->-1 [30] -1/-1/-1->0->-1 [31] -1/-1/-1->0->-1 iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Setting affinity for GPU 2 to 03ff,ffffffff iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 00/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 01/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 02/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 03/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 04/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 05/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 06/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 07/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 08/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 09/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 10/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 11/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 12/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 13/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 14/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 15/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 16/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 17/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 18/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 19/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 20/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 21/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 22/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 23/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 24/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 25/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 26/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 27/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 28/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 29/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 30/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Channel 31/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Trees [0] -1/-1/-1->0->-1 [1] -1/-1/-1->0->-1 [2] -1/-1/-1->0->-1 [3] -1/-1/-1->0->-1 [4] -1/-1/-1->0->-1 [5] -1/-1/-1->0->-1 [6] -1/-1/-1->0->-1 [7] -1/-1/-1->0->-1 [8] -1/-1/-1->0->-1 [9] -1/-1/-1->0->-1 [10] -1/-1/-1->0->-1 [11] -1/-1/-1->0->-1 [12] -1/-1/-1->0->-1 [13] -1/-1/-1->0->-1 [14] -1/-1/-1->0->-1 [15] -1/-1/-1->0->-1 [16] -1/-1/-1->0->-1 [17] -1/-1/-1->0->-1 [18] -1/-1/-1->0->-1 [19] -1/-1/-1->0->-1 [20] -1/-1/-1->0->-1 [21] -1/-1/-1->0->-1 [22] -1/-1/-1->0->-1 [23] -1/-1/-1->0->-1 [24] -1/-1/-1->0->-1 [25] -1/-1/-1->0->-1 [26] -1/-1/-1->0->-1 [27] -1/-1/-1->0->-1 [28] -1/-1/-1->0->-1 [29] -1/-1/-1->0->-1 [30] -1/-1/-1->0->-1 [31] -1/-1/-1->0->-1 iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Setting affinity for GPU 4 to 0fffff,fffffc00,00000000 
iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 00/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 01/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 02/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 03/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 04/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 05/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 06/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 07/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 08/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 09/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 10/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 11/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 12/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 13/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 14/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 15/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 16/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 17/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 18/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 19/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 20/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 21/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 22/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 23/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 24/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 25/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 26/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 27/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 28/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 29/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 30/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Channel 31/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Trees [0] -1/-1/-1->0->-1 [1] -1/-1/-1->0->-1 [2] -1/-1/-1->0->-1 [3] -1/-1/-1->0->-1 [4] -1/-1/-1->0->-1 [5] -1/-1/-1->0->-1 [6] -1/-1/-1->0->-1 [7] -1/-1/-1->0->-1 [8] -1/-1/-1->0->-1 [9] -1/-1/-1->0->-1 [10] -1/-1/-1->0->-1 [11] -1/-1/-1->0->-1 [12] -1/-1/-1->0->-1 [13] -1/-1/-1->0->-1 [14] -1/-1/-1->0->-1 [15] -1/-1/-1->0->-1 [16] -1/-1/-1->0->-1 [17] -1/-1/-1->0->-1 [18] -1/-1/-1->0->-1 [19] -1/-1/-1->0->-1 [20] -1/-1/-1->0->-1 [21] -1/-1/-1->0->-1 [22] -1/-1/-1->0->-1 [23] -1/-1/-1->0->-1 [24] -1/-1/-1->0->-1 [25] -1/-1/-1->0->-1 [26] -1/-1/-1->0->-1 [27] -1/-1/-1->0->-1 [28] -1/-1/-1->0->-1 [29] -1/-1/-1->0->-1 [30] -1/-1/-1->0->-1 [31] -1/-1/-1->0->-1 iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Setting affinity for GPU 6 to 0fffff,fffffc00,00000000 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 00/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 01/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 02/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 03/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 04/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 05/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 06/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 07/32 : 0 
iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 08/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 09/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 10/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 11/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 12/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 13/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 14/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 15/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 16/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 17/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 18/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 19/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 20/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 21/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 22/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 23/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 24/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 25/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 26/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 27/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 28/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 29/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 30/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Channel 31/32 : 0 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Trees [0] -1/-1/-1->0->-1 [1] -1/-1/-1->0->-1 [2] -1/-1/-1->0->-1 [3] -1/-1/-1->0->-1 [4] -1/-1/-1->0->-1 [5] -1/-1/-1->0->-1 [6] -1/-1/-1->0->-1 [7] -1/-1/-1->0->-1 [8] -1/-1/-1->0->-1 [9] -1/-1/-1->0->-1 [10] -1/-1/-1->0->-1 [11] -1/-1/-1->0->-1 [12] -1/-1/-1->0->-1 [13] -1/-1/-1->0->-1 [14] -1/-1/-1->0->-1 [15] -1/-1/-1->0->-1 [16] -1/-1/-1->0->-1 [17] -1/-1/-1->0->-1 [18] -1/-1/-1->0->-1 [19] -1/-1/-1->0->-1 [20] -1/-1/-1->0->-1 [21] -1/-1/-1->0->-1 [22] -1/-1/-1->0->-1 [23] -1/-1/-1->0->-1 [24] -1/-1/-1->0->-1 [25] -1/-1/-1->0->-1 [26] -1/-1/-1->0->-1 [27] -1/-1/-1->0->-1 [28] -1/-1/-1->0->-1 [29] -1/-1/-1->0->-1 [30] -1/-1/-1->0->-1 [31] -1/-1/-1->0->-1 iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Setting affinity for GPU 0 to 03ff,ffffffff iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO 32 coll channels, 32 p2p channels, 32 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26829:27104 [6] NCCL INFO comm 0x7f9d14008fb0 rank 0 nranks 1 cudaDev 6 busId 6b010 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO 32 coll channels, 32 p2p channels, 32 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26827:27107 [4] NCCL INFO comm 0x7f2b54008fb0 rank 0 nranks 1 cudaDev 4 busId 69010 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO 32 coll channels, 32 p2p channels, 32 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26825:27103 [2] NCCL INFO comm 0x7f67ac008fb0 rank 0 nranks 1 cudaDev 2 busId 67010 - 
Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO 32 coll channels, 32 p2p channels, 32 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26823:27109 [0] NCCL INFO comm 0x7f9840008fb0 rank 0 nranks 1 cudaDev 0 busId 65010 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26829:27118 [6] NCCL INFO Channel 00/02 : 0 1 iv-ybpu7pvmis5m57pm6ny1:26830:27122 [7] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0 iv-ybpu7pvmis5m57pm6ny1:26829:27118 [6] NCCL INFO Channel 01/02 : 0 1 iv-ybpu7pvmis5m57pm6ny1:26830:27122 [7] NCCL INFO Setting affinity for GPU 7 to 0fffff,fffffc00,00000000 iv-ybpu7pvmis5m57pm6ny1:26829:27118 [6] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1 iv-ybpu7pvmis5m57pm6ny1:26829:27118 [6] NCCL INFO Setting affinity for GPU 6 to 0fffff,fffffc00,00000000 iv-ybpu7pvmis5m57pm6ny1:26825:27123 [2] NCCL INFO Channel 00/04 : 0 1 iv-ybpu7pvmis5m57pm6ny1:26826:27125 [3] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0 [2] -1/-1/-1->1->0 [3] -1/-1/-1->1->0 iv-ybpu7pvmis5m57pm6ny1:26825:27123 [2] NCCL INFO Channel 01/04 : 0 1 iv-ybpu7pvmis5m57pm6ny1:26825:27123 [2] NCCL INFO Channel 02/04 : 0 1 iv-ybpu7pvmis5m57pm6ny1:26826:27125 [3] NCCL INFO Setting affinity for GPU 3 to 03ff,ffffffff iv-ybpu7pvmis5m57pm6ny1:26825:27123 [2] NCCL INFO Channel 03/04 : 0 1 iv-ybpu7pvmis5m57pm6ny1:26825:27123 [2] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1 [2] 1/-1/-1->0->-1 [3] 1/-1/-1->0->-1 iv-ybpu7pvmis5m57pm6ny1:26825:27123 [2] NCCL INFO Setting affinity for GPU 2 to 03ff,ffffffff iv-ybpu7pvmis5m57pm6ny1:26823:27120 [0] NCCL INFO Channel 00/02 : 0 1 iv-ybpu7pvmis5m57pm6ny1:26824:27124 [1] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0 iv-ybpu7pvmis5m57pm6ny1:26823:27120 [0] NCCL INFO Channel 01/02 : 0 1 iv-ybpu7pvmis5m57pm6ny1:26824:27124 [1] NCCL INFO Setting affinity for GPU 1 to 03ff,ffffffff iv-ybpu7pvmis5m57pm6ny1:26823:27120 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1 iv-ybpu7pvmis5m57pm6ny1:26823:27120 [0] NCCL INFO Setting affinity for GPU 0 to 03ff,ffffffff iv-ybpu7pvmis5m57pm6ny1:26830:27122 [7] NCCL INFO Channel 00 : 1[6b020] -> 0[6b010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26829:27118 [6] NCCL INFO Channel 00 : 0[6b010] -> 1[6b020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26827:27129 [4] NCCL INFO Channel 00/04 : 0 1 iv-ybpu7pvmis5m57pm6ny1:26827:27129 [4] NCCL INFO Channel 01/04 : 0 1 iv-ybpu7pvmis5m57pm6ny1:26827:27129 [4] NCCL INFO Channel 02/04 : 0 1 iv-ybpu7pvmis5m57pm6ny1:26827:27129 [4] NCCL INFO Channel 03/04 : 0 1 iv-ybpu7pvmis5m57pm6ny1:26827:27129 [4] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1 [2] 1/-1/-1->0->-1 [3] 1/-1/-1->0->-1 iv-ybpu7pvmis5m57pm6ny1:26828:27132 [5] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0 [2] -1/-1/-1->1->0 [3] -1/-1/-1->1->0 iv-ybpu7pvmis5m57pm6ny1:26827:27129 [4] NCCL INFO Setting affinity for GPU 4 to 0fffff,fffffc00,00000000 iv-ybpu7pvmis5m57pm6ny1:26828:27132 [5] NCCL INFO Setting affinity for GPU 5 to 0fffff,fffffc00,00000000 iv-ybpu7pvmis5m57pm6ny1:26830:27122 [7] NCCL INFO Channel 01 : 1[6b020] -> 0[6b010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26829:27118 [6] NCCL INFO Channel 01 : 0[6b010] -> 1[6b020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26823:27120 [0] NCCL INFO Channel 00 : 0[65010] -> 1[65020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26824:27124 [1] NCCL INFO Channel 00 : 1[65020] -> 0[65010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26823:27120 [0] NCCL INFO 
Channel 01 : 0[65010] -> 1[65020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26830:27122 [7] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26830:27122 [7] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26830:27122 [7] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26830:27122 [7] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26829:27118 [6] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26829:27118 [6] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26829:27118 [6] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26829:27118 [6] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26830:27122 [7] NCCL INFO comm 0x7f6e98008fb0 rank 1 nranks 2 cudaDev 7 busId 6b020 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26829:27118 [6] NCCL INFO comm 0x7f9d08008fb0 rank 0 nranks 2 cudaDev 6 busId 6b010 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26829:26829 [6] NCCL INFO Launch mode Parallel iv-ybpu7pvmis5m57pm6ny1:26824:27124 [1] NCCL INFO Channel 01 : 1[65020] -> 0[65010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26825:27123 [2] NCCL INFO Channel 00 : 0[67010] -> 1[67020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26826:27125 [3] NCCL INFO Channel 00 : 1[67020] -> 0[67010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26825:27123 [2] NCCL INFO Channel 01 : 0[67010] -> 1[67020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26826:27125 [3] NCCL INFO Channel 01 : 1[67020] -> 0[67010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26825:27123 [2] NCCL INFO Channel 02 : 0[67010] -> 1[67020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26826:27125 [3] NCCL INFO Channel 02 : 1[67020] -> 0[67010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26823:27120 [0] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26823:27120 [0] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26823:27120 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26823:27120 [0] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26824:27124 [1] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26824:27124 [1] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26824:27124 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26824:27124 [1] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26823:27120 [0] NCCL INFO comm 0x7f9838008fb0 rank 0 nranks 2 cudaDev 0 busId 65010 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26824:27124 [1] NCCL INFO comm 0x7f9c34008fb0 rank 1 nranks 2 cudaDev 1 busId 65020 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26823:26823 [0] NCCL INFO Launch mode Parallel iv-ybpu7pvmis5m57pm6ny1:26825:27123 [2] NCCL INFO Channel 03 : 0[67010] -> 1[67020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26826:27125 [3] NCCL INFO Channel 03 : 1[67020] -> 0[67010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26828:27132 [5] NCCL INFO Channel 00 : 1[69020] -> 0[69010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26827:27129 [4] NCCL INFO Channel 00 : 0[69010] -> 1[69020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26828:27132 [5] NCCL INFO Channel 01 : 1[69020] -> 0[69010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26827:27129 [4] NCCL INFO Channel 01 : 0[69010] -> 1[69020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26828:27132 [5] NCCL INFO Channel 02 : 1[69020] -> 0[69010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26827:27129 [4] NCCL INFO Channel 02 : 0[69010] -> 1[69020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26828:27132 [5] NCCL INFO Channel 03 : 
1[69020] -> 0[69010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26827:27129 [4] NCCL INFO Channel 03 : 0[69010] -> 1[69020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26825:27123 [2] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26825:27123 [2] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26825:27123 [2] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26825:27123 [2] NCCL INFO 4 coll channels, 4 p2p channels, 4 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26826:27125 [3] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26826:27125 [3] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26826:27125 [3] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26826:27125 [3] NCCL INFO 4 coll channels, 4 p2p channels, 4 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26825:27123 [2] NCCL INFO comm 0x7f67a0008fb0 rank 0 nranks 2 cudaDev 2 busId 67010 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26826:27125 [3] NCCL INFO comm 0x7fc0ac008fb0 rank 1 nranks 2 cudaDev 3 busId 67020 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26825:26825 [2] NCCL INFO Launch mode Parallel iv-ybpu7pvmis5m57pm6ny1:26828:27132 [5] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26828:27132 [5] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26828:27132 [5] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26828:27132 [5] NCCL INFO 4 coll channels, 4 p2p channels, 4 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26827:27129 [4] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26827:27129 [4] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26827:27129 [4] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26827:27129 [4] NCCL INFO 4 coll channels, 4 p2p channels, 4 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26828:27132 [5] NCCL INFO comm 0x7f0cb0008fb0 rank 1 nranks 2 cudaDev 5 busId 69020 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26827:27129 [4] NCCL INFO comm 0x7f2b48008fb0 rank 0 nranks 2 cudaDev 4 busId 69010 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26827:26827 [4] NCCL INFO Launch mode Parallel [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. 
(function pthreadpool) [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:99] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) time (ms) | model-and-optimizer-setup: 218.26 | train/valid/test-data-iterators-setup: 1088.13 /dataset/workspace/Megatron-LM/megatron/model/transformer.py:536: UserWarning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (Triggered internally at /opt/pytorch/pytorch/aten/src/ATen/core/LegacyTypeDispatch.h:74.) output = bias_dropout_add_func( /dataset/workspace/Megatron-LM/megatron/model/transformer.py:536: UserWarning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (Triggered internally at /opt/pytorch/pytorch/aten/src/ATen/core/LegacyTypeDispatch.h:74.) output = bias_dropout_add_func( /dataset/workspace/Megatron-LM/megatron/model/transformer.py:536: UserWarning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (Triggered internally at /opt/pytorch/pytorch/aten/src/ATen/core/LegacyTypeDispatch.h:74.) output = bias_dropout_add_func( /dataset/workspace/Megatron-LM/megatron/model/transformer.py:536: UserWarning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (Triggered internally at /opt/pytorch/pytorch/aten/src/ATen/core/LegacyTypeDispatch.h:74.) output = bias_dropout_add_func( /dataset/workspace/Megatron-LM/megatron/model/transformer.py:536: UserWarning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (Triggered internally at /opt/pytorch/pytorch/aten/src/ATen/core/LegacyTypeDispatch.h:74.) 
output = bias_dropout_add_func( /dataset/workspace/Megatron-LM/megatron/model/transformer.py:536: UserWarning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (Triggered internally at /opt/pytorch/pytorch/aten/src/ATen/core/LegacyTypeDispatch.h:74.) output = bias_dropout_add_func( /dataset/workspace/Megatron-LM/megatron/model/transformer.py:536: UserWarning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (Triggered internally at /opt/pytorch/pytorch/aten/src/ATen/core/LegacyTypeDispatch.h:74.) output = bias_dropout_add_func( /dataset/workspace/Megatron-LM/megatron/model/transformer.py:536: UserWarning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (Triggered internally at /opt/pytorch/pytorch/aten/src/ATen/core/LegacyTypeDispatch.h:74.) 
output = bias_dropout_add_func( iv-ybpu7pvmis5m57pm6ny1:26824:28263 [1] NCCL INFO Trees [0] 5/-1/-1->4->7 [1] 5/-1/-1->4->7 iv-ybpu7pvmis5m57pm6ny1:26826:28262 [3] NCCL INFO Trees [0] -1/-1/-1->5->4 [1] -1/-1/-1->5->4 iv-ybpu7pvmis5m57pm6ny1:26830:28265 [7] NCCL INFO Trees [0] 4/-1/-1->7->6 [1] 4/-1/-1->7->6 iv-ybpu7pvmis5m57pm6ny1:26824:28263 [1] NCCL INFO Setting affinity for GPU 1 to 03ff,ffffffff iv-ybpu7pvmis5m57pm6ny1:26828:28264 [5] NCCL INFO Trees [0] 7/-1/-1->6->2 [1] 7/2/-1->6->-1 iv-ybpu7pvmis5m57pm6ny1:26830:28265 [7] NCCL INFO Setting affinity for GPU 7 to 0fffff,fffffc00,00000000 iv-ybpu7pvmis5m57pm6ny1:26826:28262 [3] NCCL INFO Setting affinity for GPU 3 to 03ff,ffffffff iv-ybpu7pvmis5m57pm6ny1:26828:28264 [5] NCCL INFO Setting affinity for GPU 5 to 0fffff,fffffc00,00000000 iv-ybpu7pvmis5m57pm6ny1:26828:28264 [5] NCCL INFO Channel 00 : 6[69020] -> 0[65020] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmis5m57pm6ny1:26824:28263 [1] NCCL INFO Channel 00 : 4[65020] -> 5[67020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26824:28263 [1] NCCL INFO Channel 01 : 4[65020] -> 5[67020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26828:28264 [5] NCCL INFO Channel 01 : 6[69020] -> 0[65020] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmis5m57pm6ny1:26826:28262 [3] NCCL INFO Channel 00 : 5[67020] -> 7[6b020] via direct shared memory iv-ybpu7pvmis5m57pm6ny1:26826:28262 [3] NCCL INFO Channel 01 : 5[67020] -> 7[6b020] via direct shared memory iv-ybpu7pvmis5m57pm6ny1:26824:28263 [1] NCCL INFO Channel 00 : 2[69020] -> 4[65020] [receive] via NET/IBext/0 iv-ybpu7pvmis5m57pm6ny1:26824:28263 [1] NCCL INFO Channel 01 : 2[69020] -> 4[65020] [receive] via NET/IBext/0 iv-ybpu7pvmis5m57pm6ny1:26826:28262 [3] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26826:28262 [3] NCCL INFO Channel 00 : 5[67020] -> 4[65020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26830:28265 [7] NCCL INFO Channel 00 : 7[6b020] -> 6[69020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26826:28262 [3] NCCL INFO Channel 01 : 5[67020] -> 4[65020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26830:28265 [7] NCCL INFO Channel 01 : 7[6b020] -> 6[69020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26824:28263 [1] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26824:28263 [1] NCCL INFO Channel 00 : 4[65020] -> 7[6b020] via direct shared memory iv-ybpu7pvmis5m57pm6ny1:26824:28263 [1] NCCL INFO Channel 01 : 4[65020] -> 7[6b020] via direct shared memory iv-ybpu7pvmis5m57pm6ny1:26828:28264 [5] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26830:28265 [7] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26828:28264 [5] NCCL INFO Channel 00 : 6[69020] -> 7[6b020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26828:28264 [5] NCCL INFO Channel 01 : 6[69020] -> 7[6b020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26828:28264 [5] NCCL INFO Channel 00 : 2[69020] -> 6[69020] [receive] via NET/IBext/0/GDRDMA iv-ybpu7pvmis5m57pm6ny1:26828:28264 [5] NCCL INFO Channel 01 : 2[69020] -> 6[69020] [receive] via NET/IBext/0/GDRDMA iv-ybpu7pvmis5m57pm6ny1:26828:28264 [5] NCCL INFO Channel 00 : 6[69020] -> 2[69020] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmis5m57pm6ny1:26828:28264 [5] NCCL INFO Channel 01 : 6[69020] -> 2[69020] [send] via NET/IBext/0/GDRDMA iv-ybpu7pvmis5m57pm6ny1:26830:28265 [7] NCCL INFO Channel 00 : 7[6b020] -> 4[65020] via direct shared memory iv-ybpu7pvmis5m57pm6ny1:26830:28265 [7] NCCL INFO Channel 01 : 7[6b020] -> 4[65020] via direct shared memory iv-ybpu7pvmis5m57pm6ny1:26830:28265 [7] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26830:28265 [7] NCCL INFO threadThresholds 
8/8/64 | 64/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26830:28265 [7] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26824:28263 [1] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26824:28263 [1] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26824:28263 [1] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26826:28262 [3] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26826:28262 [3] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26826:28262 [3] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26828:28264 [5] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26828:28264 [5] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26828:28264 [5] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26828:28264 [5] NCCL INFO comm 0x7f0968008fb0 rank 6 nranks 8 cudaDev 5 busId 69020 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26824:28263 [1] NCCL INFO comm 0x7f98cc008fb0 rank 4 nranks 8 cudaDev 1 busId 65020 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26826:28262 [3] NCCL INFO comm 0x7fbd50008fb0 rank 5 nranks 8 cudaDev 3 busId 67020 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26830:28265 [7] NCCL INFO comm 0x7f6b50008fb0 rank 7 nranks 8 cudaDev 7 busId 6b020 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26827:28404 [4] NCCL INFO Channel 00/04 : 0 1 iv-ybpu7pvmis5m57pm6ny1:26827:28404 [4] NCCL INFO Channel 01/04 : 0 1 iv-ybpu7pvmis5m57pm6ny1:26828:28409 [5] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0 [2] -1/-1/-1->1->0 [3] -1/-1/-1->1->0 iv-ybpu7pvmis5m57pm6ny1:26827:28404 [4] NCCL INFO Channel 02/04 : 0 1 iv-ybpu7pvmis5m57pm6ny1:26827:28404 [4] NCCL INFO Channel 03/04 : 0 1 iv-ybpu7pvmis5m57pm6ny1:26828:28409 [5] NCCL INFO Setting affinity for GPU 5 to 0fffff,fffffc00,00000000 iv-ybpu7pvmis5m57pm6ny1:26827:28404 [4] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1 [2] 1/-1/-1->0->-1 [3] 1/-1/-1->0->-1 iv-ybpu7pvmis5m57pm6ny1:26827:28404 [4] NCCL INFO Setting affinity for GPU 4 to 0fffff,fffffc00,00000000 iv-ybpu7pvmis5m57pm6ny1:26829:28406 [6] NCCL INFO Channel 00/02 : 0 1 iv-ybpu7pvmis5m57pm6ny1:26829:28406 [6] NCCL INFO Channel 01/02 : 0 1 iv-ybpu7pvmis5m57pm6ny1:26829:28406 [6] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1 iv-ybpu7pvmis5m57pm6ny1:26830:28414 [7] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0 iv-ybpu7pvmis5m57pm6ny1:26829:28406 [6] NCCL INFO Setting affinity for GPU 6 to 0fffff,fffffc00,00000000 iv-ybpu7pvmis5m57pm6ny1:26830:28414 [7] NCCL INFO Setting affinity for GPU 7 to 0fffff,fffffc00,00000000 iv-ybpu7pvmis5m57pm6ny1:26826:28411 [3] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0 [2] -1/-1/-1->1->0 [3] -1/-1/-1->1->0 iv-ybpu7pvmis5m57pm6ny1:26825:28408 [2] NCCL INFO Channel 00/04 : 0 1 iv-ybpu7pvmis5m57pm6ny1:26825:28408 [2] NCCL INFO Channel 01/04 : 0 1 iv-ybpu7pvmis5m57pm6ny1:26826:28411 [3] NCCL INFO Setting affinity for GPU 3 to 03ff,ffffffff iv-ybpu7pvmis5m57pm6ny1:26825:28408 [2] NCCL INFO Channel 02/04 : 0 1 iv-ybpu7pvmis5m57pm6ny1:26825:28408 [2] NCCL INFO Channel 03/04 : 0 1 iv-ybpu7pvmis5m57pm6ny1:26825:28408 [2] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1 [2] 1/-1/-1->0->-1 [3] 1/-1/-1->0->-1 iv-ybpu7pvmis5m57pm6ny1:26825:28408 [2] NCCL INFO Setting affinity for GPU 2 to 03ff,ffffffff iv-ybpu7pvmis5m57pm6ny1:26823:28412 [0] NCCL INFO Channel 00/02 : 0 1 
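The interleaved "NCCL INFO" lines above and below come from NCCL's debug logging: they show how ranks, rings/trees and channels were wired up (P2P/IPC inside the node, NET/IBext with GDRDMA across nodes). This output only appears when NCCL debug logging is switched on, typically via the NCCL_DEBUG environment variable. A minimal sketch, assuming you control the environment before the process group and communicators are created:

    import os

    # NCCL only emits the "NCCL INFO ..." topology/channel lines when debug
    # logging is enabled; this must be set before NCCL is initialized.
    os.environ["NCCL_DEBUG"] = "INFO"
    # Optionally narrow the output to specific subsystems (e.g. INIT, NET, COLL).
    os.environ["NCCL_DEBUG_SUBSYS"] = "INIT,NET"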
iv-ybpu7pvmis5m57pm6ny1:26824:28413 [1] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0 iv-ybpu7pvmis5m57pm6ny1:26823:28412 [0] NCCL INFO Channel 01/02 : 0 1 iv-ybpu7pvmis5m57pm6ny1:26824:28413 [1] NCCL INFO Setting affinity for GPU 1 to 03ff,ffffffff iv-ybpu7pvmis5m57pm6ny1:26823:28412 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1 iv-ybpu7pvmis5m57pm6ny1:26823:28412 [0] NCCL INFO Setting affinity for GPU 0 to 03ff,ffffffff iv-ybpu7pvmis5m57pm6ny1:26830:28414 [7] NCCL INFO Channel 00 : 1[6b020] -> 0[6b010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26829:28406 [6] NCCL INFO Channel 00 : 0[6b010] -> 1[6b020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26830:28414 [7] NCCL INFO Channel 01 : 1[6b020] -> 0[6b010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26828:28409 [5] NCCL INFO Channel 00 : 1[69020] -> 0[69010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26829:28406 [6] NCCL INFO Channel 01 : 0[6b010] -> 1[6b020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26827:28404 [4] NCCL INFO Channel 00 : 0[69010] -> 1[69020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26828:28409 [5] NCCL INFO Channel 01 : 1[69020] -> 0[69010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26824:28413 [1] NCCL INFO Channel 00 : 1[65020] -> 0[65010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26823:28412 [0] NCCL INFO Channel 00 : 0[65010] -> 1[65020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26827:28404 [4] NCCL INFO Channel 01 : 0[69010] -> 1[69020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26828:28409 [5] NCCL INFO Channel 02 : 1[69020] -> 0[69010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26826:28411 [3] NCCL INFO Channel 00 : 1[67020] -> 0[67010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26825:28408 [2] NCCL INFO Channel 00 : 0[67010] -> 1[67020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26824:28413 [1] NCCL INFO Channel 01 : 1[65020] -> 0[65010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26830:28414 [7] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26830:28414 [7] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26830:28414 [7] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26830:28414 [7] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26829:28406 [6] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26829:28406 [6] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26829:28406 [6] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26829:28406 [6] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26830:28414 [7] NCCL INFO comm 0x7f6a1c008fb0 rank 1 nranks 2 cudaDev 7 busId 6b020 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26829:28406 [6] NCCL INFO comm 0x7f9870008fb0 rank 0 nranks 2 cudaDev 6 busId 6b010 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26823:28412 [0] NCCL INFO Channel 01 : 0[65010] -> 1[65020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26829:26829 [6] NCCL INFO Launch mode Parallel iv-ybpu7pvmis5m57pm6ny1:26827:28404 [4] NCCL INFO Channel 02 : 0[69010] -> 1[69020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26828:28409 [5] NCCL INFO Channel 03 : 1[69020] -> 0[69010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26826:28411 [3] NCCL INFO Channel 01 : 1[67020] -> 0[67010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26825:28408 [2] NCCL INFO Channel 01 : 0[67010] -> 1[67020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26827:28404 [4] NCCL INFO Channel 03 : 0[69010] -> 1[69020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26826:28411 [3] NCCL INFO Channel 02 : 1[67020] -> 0[67010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26825:28408 [2] NCCL INFO Channel 02 : 0[67010] -> 1[67020] via 
P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26824:28413 [1] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26824:28413 [1] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26824:28413 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26824:28413 [1] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26826:28411 [3] NCCL INFO Channel 03 : 1[67020] -> 0[67010] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26823:28412 [0] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26823:28412 [0] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26823:28412 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26823:28412 [0] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26825:28408 [2] NCCL INFO Channel 03 : 0[67010] -> 1[67020] via P2P/IPC iv-ybpu7pvmis5m57pm6ny1:26824:28413 [1] NCCL INFO comm 0x7f97ac008fb0 rank 1 nranks 2 cudaDev 1 busId 65020 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26823:28412 [0] NCCL INFO comm 0x7f9388008fb0 rank 0 nranks 2 cudaDev 0 busId 65010 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26823:26823 [0] NCCL INFO Launch mode Parallel iv-ybpu7pvmis5m57pm6ny1:26828:28409 [5] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26828:28409 [5] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26828:28409 [5] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26828:28409 [5] NCCL INFO 4 coll channels, 4 p2p channels, 4 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26827:28404 [4] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26827:28404 [4] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26827:28404 [4] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26827:28404 [4] NCCL INFO 4 coll channels, 4 p2p channels, 4 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26828:28409 [5] NCCL INFO comm 0x7f0944008fb0 rank 1 nranks 2 cudaDev 5 busId 69020 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26827:28404 [4] NCCL INFO comm 0x7f2690008fb0 rank 0 nranks 2 cudaDev 4 busId 69010 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26827:26827 [4] NCCL INFO Launch mode Parallel iv-ybpu7pvmis5m57pm6ny1:26826:28411 [3] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26826:28411 [3] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26826:28411 [3] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26826:28411 [3] NCCL INFO 4 coll channels, 4 p2p channels, 4 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26825:28408 [2] NCCL INFO Connected all rings iv-ybpu7pvmis5m57pm6ny1:26825:28408 [2] NCCL INFO Connected all trees iv-ybpu7pvmis5m57pm6ny1:26825:28408 [2] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 iv-ybpu7pvmis5m57pm6ny1:26825:28408 [2] NCCL INFO 4 coll channels, 4 p2p channels, 4 p2p channels per peer iv-ybpu7pvmis5m57pm6ny1:26826:28411 [3] NCCL INFO comm 0x7fbc40008fb0 rank 1 nranks 2 cudaDev 3 busId 67020 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26825:28408 [2] NCCL INFO comm 0x7f6300008fb0 rank 0 nranks 2 cudaDev 2 busId 67010 - Init COMPLETE iv-ybpu7pvmis5m57pm6ny1:26825:26825 [2] NCCL INFO Launch mode Parallel iteration 100/ 210 | consumed samples: 819200 | elapsed time per iteration (ms): 23872.7 | tpt: 343.2 samples/s | global batch size: 8192 | lm loss: 9.532099E+00 | sop loss: 7.046157E-01 | loss scale: 262144.0 | grad norm: 2.182 | number of skipped iterations: 15 | number of nan iterations: 0 | time (ms) | forward-compute: 7071.10 | 
backward-compute: 16672.71 | backward-params-all-reduce: 101.27 | backward-embedding-all-reduce: 0.04 | optimizer-copy-to-main-grad: 3.21 | optimizer-unscale-and-check-inf: 7.82 | optimizer-clip-main-grad: 3.56 | optimizer-copy-main-to-model-params: 2.57 | optimizer: 23.72 | batch-generator: 44.88 timestamp, name, driver_version, utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB] 2022/06/15 10:01:43.730, Tesla V100-SXM2-32GB, 470.57.02, 66 %, 10 %, 32510 MiB, 11018 MiB, 21492 MiB 2022/06/15 10:01:43.731, Tesla V100-SXM2-32GB, 470.57.02, 69 %, 10 %, 32510 MiB, 11268 MiB, 21242 MiB timestamp, name, driver_version, utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB] 2022/06/15 10:01:43.732, Tesla V100-SXM2-32GB, 470.57.02, 51 %, 10 %, 32510 MiB, 11146 MiB, 21364 MiB 2022/06/15 10:01:43.733, Tesla V100-SXM2-32GB, 470.57.02, 66 %, 10 %, 32510 MiB, 11018 MiB, 21492 MiB 2022/06/15 10:01:43.736, Tesla V100-SXM2-32GB, 470.57.02, 40 %, 10 %, 32510 MiB, 10754 MiB, 21756 MiB 2022/06/15 10:01:43.738, Tesla V100-SXM2-32GB, 470.57.02, 69 %, 10 %, 32510 MiB, 11268 MiB, 21242 MiB 2022/06/15 10:01:43.740, Tesla V100-SXM2-32GB, 470.57.02, 27 %, 10 %, 32510 MiB, 10792 MiB, 21718 MiB timestamp, name, driver_version, utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB] 2022/06/15 10:01:43.740, Tesla V100-SXM2-32GB, 470.57.02, 51 %, 10 %, 32510 MiB, 11146 MiB, 21364 MiB 2022/06/15 10:01:43.741, Tesla V100-SXM2-32GB, 470.57.02, 45 %, 11 %, 32510 MiB, 10908 MiB, 21602 MiB timestamp, name, driver_version, utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB] 2022/06/15 10:01:43.742, Tesla V100-SXM2-32GB, 470.57.02, 66 %, 10 %, 32510 MiB, 11018 MiB, 21492 MiB 2022/06/15 10:01:43.742, Tesla V100-SXM2-32GB, 470.57.02, 40 %, 10 %, 32510 MiB, 10754 MiB, 21756 MiB timestamp, name, driver_version, utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB] timestamp, name, driver_version, utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB] 2022/06/15 10:01:43.743, Tesla V100-SXM2-32GB, 470.57.02, 99 %, 24 %, 32510 MiB, 11022 MiB, 21488 MiB timestamp, name, driver_version, utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB] 2022/06/15 10:01:43.743, Tesla V100-SXM2-32GB, 470.57.02, 66 %, 10 %, 32510 MiB, 11018 MiB, 21492 MiB 2022/06/15 10:01:43.744, Tesla V100-SXM2-32GB, 470.57.02, 69 %, 10 %, 32510 MiB, 11268 MiB, 21242 MiB 2022/06/15 10:01:43.744, Tesla V100-SXM2-32GB, 470.57.02, 27 %, 10 %, 32510 MiB, 10792 MiB, 21718 MiB 2022/06/15 10:01:43.745, Tesla V100-SXM2-32GB, 470.57.02, 66 %, 10 %, 32510 MiB, 11018 MiB, 21492 MiB 2022/06/15 10:01:43.745, Tesla V100-SXM2-32GB, 470.57.02, 66 %, 10 %, 32510 MiB, 11018 MiB, 21492 MiB 2022/06/15 10:01:43.746, Tesla V100-SXM2-32GB, 470.57.02, 99 %, 23 %, 32510 MiB, 11014 MiB, 21496 MiB 2022/06/15 10:01:43.746, Tesla V100-SXM2-32GB, 470.57.02, 66 %, 10 %, 32510 MiB, 11018 MiB, 21492 MiB 2022/06/15 10:01:43.749, Tesla V100-SXM2-32GB, 470.57.02, 69 %, 10 %, 32510 MiB, 11268 MiB, 21242 MiB 2022/06/15 10:01:43.749, Tesla V100-SXM2-32GB, 470.57.02, 51 %, 10 %, 32510 MiB, 11146 MiB, 21364 MiB 2022/06/15 10:01:43.749, Tesla V100-SXM2-32GB, 470.57.02, 45 %, 11 %, 32510 MiB, 10908 MiB, 21602 MiB 2022/06/15 10:01:43.751, Tesla V100-SXM2-32GB, 470.57.02, 69 %, 10 
%, 32510 MiB, 11268 MiB, 21242 MiB 2022/06/15 10:01:43.752, Tesla V100-SXM2-32GB, 470.57.02, 69 %, 10 %, 32510 MiB, 11268 MiB, 21242 MiB 2022/06/15 10:01:43.753, Tesla V100-SXM2-32GB, 470.57.02, 69 %, 10 %, 32510 MiB, 11268 MiB, 21242 MiB timestamp, name, driver_version, utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB] 2022/06/15 10:01:43.755, Tesla V100-SXM2-32GB, 470.57.02, 51 %, 10 %, 32510 MiB, 11146 MiB, 21364 MiB 2022/06/15 10:01:43.755, Tesla V100-SXM2-32GB, 470.57.02, 40 %, 10 %, 32510 MiB, 10754 MiB, 21756 MiB 2022/06/15 10:01:43.755, Tesla V100-SXM2-32GB, 470.57.02, 14 %, 9 %, 32510 MiB, 11022 MiB, 21488 MiB 2022/06/15 10:01:43.757, Tesla V100-SXM2-32GB, 470.57.02, 51 %, 10 %, 32510 MiB, 11146 MiB, 21364 MiB 2022/06/15 10:01:43.757, Tesla V100-SXM2-32GB, 470.57.02, 51 %, 10 %, 32510 MiB, 11146 MiB, 21364 MiB 2022/06/15 10:01:43.758, Tesla V100-SXM2-32GB, 470.57.02, 51 %, 10 %, 32510 MiB, 11146 MiB, 21364 MiB 2022/06/15 10:01:43.759, Tesla V100-SXM2-32GB, 470.57.02, 66 %, 10 %, 32510 MiB, 11018 MiB, 21492 MiB 2022/06/15 10:01:43.760, Tesla V100-SXM2-32GB, 470.57.02, 40 %, 10 %, 32510 MiB, 10754 MiB, 21756 MiB 2022/06/15 10:01:43.760, Tesla V100-SXM2-32GB, 470.57.02, 27 %, 10 %, 32510 MiB, 10792 MiB, 21718 MiB 2022/06/15 10:01:43.761, Tesla V100-SXM2-32GB, 470.57.02, 10 %, 9 %, 32510 MiB, 11014 MiB, 21496 MiB 2022/06/15 10:01:43.762, Tesla V100-SXM2-32GB, 470.57.02, 40 %, 10 %, 32510 MiB, 10754 MiB, 21756 MiB 2022/06/15 10:01:43.763, Tesla V100-SXM2-32GB, 470.57.02, 40 %, 10 %, 32510 MiB, 10754 MiB, 21756 MiB 2022/06/15 10:01:43.764, Tesla V100-SXM2-32GB, 470.57.02, 40 %, 10 %, 32510 MiB, 10754 MiB, 21756 MiB 2022/06/15 10:01:43.766, Tesla V100-SXM2-32GB, 470.57.02, 69 %, 10 %, 32510 MiB, 11268 MiB, 21242 MiB 2022/06/15 10:01:43.766, Tesla V100-SXM2-32GB, 470.57.02, 27 %, 10 %, 32510 MiB, 10792 MiB, 21718 MiB 2022/06/15 10:01:43.766, Tesla V100-SXM2-32GB, 470.57.02, 45 %, 11 %, 32510 MiB, 10908 MiB, 21602 MiB 2022/06/15 10:01:43.768, Tesla V100-SXM2-32GB, 470.57.02, 27 %, 10 %, 32510 MiB, 10792 MiB, 21718 MiB 2022/06/15 10:01:43.769, Tesla V100-SXM2-32GB, 470.57.02, 27 %, 10 %, 32510 MiB, 10792 MiB, 21718 MiB 2022/06/15 10:01:43.770, Tesla V100-SXM2-32GB, 470.57.02, 27 %, 10 %, 32510 MiB, 10792 MiB, 21718 MiB 2022/06/15 10:01:43.772, Tesla V100-SXM2-32GB, 470.57.02, 51 %, 10 %, 32510 MiB, 11146 MiB, 21364 MiB 2022/06/15 10:01:43.772, Tesla V100-SXM2-32GB, 470.57.02, 45 %, 11 %, 32510 MiB, 10908 MiB, 21602 MiB 2022/06/15 10:01:43.772, Tesla V100-SXM2-32GB, 470.57.02, 14 %, 9 %, 32510 MiB, 11022 MiB, 21488 MiB 2022/06/15 10:01:43.776, Tesla V100-SXM2-32GB, 470.57.02, 45 %, 11 %, 32510 MiB, 10908 MiB, 21602 MiB 2022/06/15 10:01:43.777, Tesla V100-SXM2-32GB, 470.57.02, 45 %, 11 %, 32510 MiB, 10908 MiB, 21602 MiB 2022/06/15 10:01:43.778, Tesla V100-SXM2-32GB, 470.57.02, 45 %, 11 %, 32510 MiB, 10908 MiB, 21602 MiB 2022/06/15 10:01:43.780, Tesla V100-SXM2-32GB, 470.57.02, 40 %, 10 %, 32510 MiB, 10754 MiB, 21756 MiB 2022/06/15 10:01:43.780, Tesla V100-SXM2-32GB, 470.57.02, 14 %, 9 %, 32510 MiB, 11022 MiB, 21488 MiB 2022/06/15 10:01:43.780, Tesla V100-SXM2-32GB, 470.57.02, 10 %, 9 %, 32510 MiB, 11014 MiB, 21496 MiB 2022/06/15 10:01:43.782, Tesla V100-SXM2-32GB, 470.57.02, 14 %, 9 %, 32510 MiB, 11022 MiB, 21488 MiB 2022/06/15 10:01:43.782, Tesla V100-SXM2-32GB, 470.57.02, 14 %, 9 %, 32510 MiB, 11022 MiB, 21488 MiB 2022/06/15 10:01:43.783, Tesla V100-SXM2-32GB, 470.57.02, 14 %, 9 %, 32510 MiB, 11022 MiB, 21488 MiB 2022/06/15 10:01:43.785, 
Tesla V100-SXM2-32GB, 470.57.02, 27 %, 10 %, 32510 MiB, 10792 MiB, 21718 MiB 2022/06/15 10:01:43.785, Tesla V100-SXM2-32GB, 470.57.02, 10 %, 9 %, 32510 MiB, 11014 MiB, 21496 MiB 2022/06/15 10:01:43.787, Tesla V100-SXM2-32GB, 470.57.02, 10 %, 9 %, 32510 MiB, 11014 MiB, 21496 MiB 2022/06/15 10:01:43.787, Tesla V100-SXM2-32GB, 470.57.02, 10 %, 9 %, 32510 MiB, 11014 MiB, 21496 MiB 2022/06/15 10:01:43.788, Tesla V100-SXM2-32GB, 470.57.02, 10 %, 9 %, 32510 MiB, 11014 MiB, 21496 MiB 2022/06/15 10:01:43.791, Tesla V100-SXM2-32GB, 470.57.02, 45 %, 11 %, 32510 MiB, 10908 MiB, 21602 MiB 2022/06/15 10:01:43.795, Tesla V100-SXM2-32GB, 470.57.02, 14 %, 9 %, 32510 MiB, 11022 MiB, 21488 MiB 2022/06/15 10:01:43.798, Tesla V100-SXM2-32GB, 470.57.02, 10 %, 9 %, 32510 MiB, 11014 MiB, 21496 MiB iteration 200/ 210 | consumed samples: 1638400 | elapsed time per iteration (ms): 23825.2 | tpt: 343.8 samples/s | global batch size: 8192 | lm loss: 8.902217E+00 | sop loss: 6.934728E-01 | loss scale: 262144.0 | grad norm: 2.953 | number of skipped iterations: 0 | number of nan iterations: 0 | time (ms) | forward-compute: 7019.06 | backward-compute: 16679.83 | backward-params-all-reduce: 101.01 | backward-embedding-all-reduce: 0.03 | optimizer-copy-to-main-grad: 3.14 | optimizer-unscale-and-check-inf: 2.54 | optimizer-clip-main-grad: 4.20 | optimizer-copy-main-to-model-params: 3.01 | optimizer: 19.89 | batch-generator: 16.39 ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ validation loss at the end of training for val data | lm loss value: 8.648677E+00 | lm loss PPL: 5.702596E+03 | sop loss value: 6.923253E-01 | sop loss PPL: 1.998357E+00 | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- validation loss at the end of training for test data | lm loss value: 8.620791E+00 | lm loss PPL: 5.545774E+03 | sop loss value: 6.930708E-01 | sop loss PPL: 1.999847E+00 | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- INFO:torch.distributed.elastic.agent.server.api:[default] worker group successfully finished. Waiting 300 seconds for other agents to finish. INFO:torch.distributed.elastic.agent.server.api:Local worker group finished (SUCCEEDED). Waiting 300 seconds for other agents to finish /opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/utils/store.py:70: FutureWarning: This is an experimental API and will be changed in future. warnings.warn( INFO:torch.distributed.elastic.agent.server.api:Done waiting for other agents. 
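The repeated "timestamp, name, driver_version, ..." header and the per-GPU rows interleaved with the training output above are CSV results of nvidia-smi queries run alongside training; the header repeats because several processes apparently sampled the GPUs concurrently. A minimal sketch of producing the same sampling (the field list is taken from the header in the log; the polling loop and 1-second interval are assumptions):

    import subprocess, time

    # Fields copied from the CSV header seen in the log above.
    QUERY = ("timestamp,name,driver_version,utilization.gpu,utilization.memory,"
             "memory.total,memory.free,memory.used")

    def sample_gpus() -> str:
        # Runs one nvidia-smi query and returns its CSV output (header + one row per GPU).
        result = subprocess.run(
            ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    if __name__ == "__main__":
        for _ in range(3):          # take three samples, one per second
            print(sample_gpus(), end="")
            time.sleep(1)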
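As a quick cross-check of the training numbers above: the reported "tpt" is the global batch size divided by the per-iteration time, and the reported perplexities are exp(loss). A small sketch re-deriving them from the iteration-200 and validation lines (standard definitions, shown only as a sanity check):

    import math

    # Values copied from the log lines above.
    global_batch_size = 8192
    elapsed_ms_per_iter = 23825.2                       # iteration 200/210
    print(global_batch_size / (elapsed_ms_per_iter / 1000.0))  # ~343.8 samples/s ("tpt")

    lm_loss_val = 8.648677                              # validation lm loss value
    sop_loss_val = 0.6923253                            # validation sop loss value
    print(math.exp(lm_loss_val))                        # ~5702.6, matches lm loss PPL 5.702596E+03
    print(math.exp(sop_loss_val))                       # ~1.9984, matches sop loss PPL 1.998357E+00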
Elapsed: 0.009713888168334961 seconds {"name": "torchelastic.worker.status.SUCCEEDED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 8, "group_rank": 1, "worker_id": "26823", "role": "default", "hostname": "iv-ybpu7pvmis5m57pm6ny1", "state": "SUCCEEDED", "total_run_time": 5155, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 2, \"entry_point\": \"python\", \"local_rank\": [0], \"role_rank\": [8], \"role_world_size\": [16]}", "agent_restarts": 0}} {"name": "torchelastic.worker.status.SUCCEEDED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 9, "group_rank": 1, "worker_id": "26824", "role": "default", "hostname": "iv-ybpu7pvmis5m57pm6ny1", "state": "SUCCEEDED", "total_run_time": 5155, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 2, \"entry_point\": \"python\", \"local_rank\": [1], \"role_rank\": [9], \"role_world_size\": [16]}", "agent_restarts": 0}} {"name": "torchelastic.worker.status.SUCCEEDED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 10, "group_rank": 1, "worker_id": "26825", "role": "default", "hostname": "iv-ybpu7pvmis5m57pm6ny1", "state": "SUCCEEDED", "total_run_time": 5155, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 2, \"entry_point\": \"python\", \"local_rank\": [2], \"role_rank\": [10], \"role_world_size\": [16]}", "agent_restarts": 0}} {"name": "torchelastic.worker.status.SUCCEEDED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 11, "group_rank": 1, "worker_id": "26826", "role": "default", "hostname": "iv-ybpu7pvmis5m57pm6ny1", "state": "SUCCEEDED", "total_run_time": 5155, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 2, \"entry_point\": \"python\", \"local_rank\": [3], \"role_rank\": [11], \"role_world_size\": [16]}", "agent_restarts": 0}} {"name": "torchelastic.worker.status.SUCCEEDED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 12, "group_rank": 1, "worker_id": "26827", "role": "default", "hostname": "iv-ybpu7pvmis5m57pm6ny1", "state": "SUCCEEDED", "total_run_time": 5155, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 2, \"entry_point\": \"python\", \"local_rank\": [4], \"role_rank\": [12], \"role_world_size\": [16]}", "agent_restarts": 0}} {"name": "torchelastic.worker.status.SUCCEEDED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 13, "group_rank": 1, "worker_id": "26828", "role": "default", "hostname": "iv-ybpu7pvmis5m57pm6ny1", "state": "SUCCEEDED", "total_run_time": 5155, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 2, \"entry_point\": \"python\", \"local_rank\": [5], \"role_rank\": [13], \"role_world_size\": [16]}", "agent_restarts": 0}} {"name": "torchelastic.worker.status.SUCCEEDED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 14, "group_rank": 1, "worker_id": "26829", "role": "default", "hostname": "iv-ybpu7pvmis5m57pm6ny1", "state": "SUCCEEDED", "total_run_time": 5155, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 2, \"entry_point\": \"python\", \"local_rank\": [6], \"role_rank\": [14], \"role_world_size\": [16]}", "agent_restarts": 0}} {"name": "torchelastic.worker.status.SUCCEEDED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 
15, "group_rank": 1, "worker_id": "26830", "role": "default", "hostname": "iv-ybpu7pvmis5m57pm6ny1", "state": "SUCCEEDED", "total_run_time": 5155, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 2, \"entry_point\": \"python\", \"local_rank\": [7], \"role_rank\": [15], \"role_world_size\": [16]}", "agent_restarts": 0}} {"name": "torchelastic.worker.status.SUCCEEDED", "source": "AGENT", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": null, "group_rank": 1, "worker_id": null, "role": "default", "hostname": "iv-ybpu7pvmis5m57pm6ny1", "state": "SUCCEEDED", "total_run_time": 5155, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 2, \"entry_point\": \"python\"}", "agent_restarts": 0}} ***************************************** Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. *****************************************