loaded library: /usr/lib/x86_64-linux-gnu/libibverbs.so.1
*****************************************
Setting the OMP_NUM_THREADS environment variable to 1 by default for each process to avoid overloading your system; please further tune the variable for optimal performance in your application as needed.
*****************************************
loaded library: /usr/lib/x86_64-linux-gnu/libibverbs.so.1
loaded library: /usr/lib/x86_64-linux-gnu/libibverbs.so.1
loaded library: /usr/lib/x86_64-linux-gnu/libibverbs.so.1
loaded library: /usr/lib/x86_64-linux-gnu/libibverbs.so.1
loaded library: /usr/lib/x86_64-linux-gnu/libibverbs.so.1
loaded library: /usr/lib/x86_64-linux-gnu/libibverbs.so.1
loaded library: /usr/lib/x86_64-linux-gnu/libibverbs.so.1
loaded library: /usr/lib/x86_64-linux-gnu/libibverbs.so.1
------------------------ arguments ------------------------
batches_per_epoch ............................... 625
channel_last .................................... False
ddp ............................................. True
exit_num ........................................ 300
fuse_bn_add_relu ................................ False
fuse_bn_relu .................................... False
gpu_stat_file ................................... None
grad_clipping ................................... 0.0
graph ........................................... False
label_smoothing ................................. 0.1
learning_rate ................................... 2.048
legacy_init ..................................... False
load_path ....................................... None
lr_decay_type ................................... cosine
metric_local .................................... True
metric_train_acc ................................ True
momentum ........................................ 0.875
nccl_fusion_max_ops ............................. 24
nccl_fusion_threshold_mb ........................ 16
num_classes ..................................... 1000
num_devices_per_node ............................ 8
num_epochs ...................................... 1
num_nodes ....................................... 1
ofrecord_part_num ............................... 256
ofrecord_path ................................... /dataset/79846248
print_interval .................................. 100
print_timestamp ................................. False
samples_per_epoch ............................... 1281167
save_init ....................................... False
save_path ....................................... None
scale_grad ...................................... False
skip_eval ....................................... True
synthetic_data .................................. False
total_batches ................................... -1
train_batch_size ................................ 256
train_global_batch_size ......................... 2048
use_fp16 ........................................ False
use_gpu_decode .................................. False
val_batch_size .................................. 50
val_batches_per_epoch ........................... 125
val_global_batch_size ........................... 400
val_samples_per_epoch ........................... 50000
warmup_epochs ................................... 5
weight_decay .................................... 3.0517578125e-05
zero_init_residual .............................. True
-------------------- end of arguments ---------------------
***** Model Init *****
***** Model Init Finish, time elapsed: 2.75265 s *****
W20220408 10:09:22.273742 7595 cudnn_conv_util.cpp:102] Currently available algorithm (algo=0, require memory=0, idx=1) meeting requirements (max_workspace_size=1073741824, determinism=0) is not fastest. Fastest algorithm (3) requires memory 1520566288
W20220408 10:09:22.277673 7501 cudnn_conv_util.cpp:102] Currently available algorithm (algo=0, require memory=0, idx=1) meeting requirements (max_workspace_size=1073741824, determinism=0) is not fastest. Fastest algorithm (3) requires memory 1520566288
W20220408 10:09:22.281666 7589 cudnn_conv_util.cpp:102] Currently available algorithm (algo=0, require memory=0, idx=1) meeting requirements (max_workspace_size=1073741824, determinism=0) is not fastest. Fastest algorithm (3) requires memory 1520566288
W20220408 10:09:22.282675 7336 cudnn_conv_util.cpp:102] Currently available algorithm (algo=0, require memory=0, idx=1) meeting requirements (max_workspace_size=1073741824, determinism=0) is not fastest. Fastest algorithm (3) requires memory 1520566288
W20220408 10:09:22.282765 7386 cudnn_conv_util.cpp:102] Currently available algorithm (algo=0, require memory=0, idx=1) meeting requirements (max_workspace_size=1073741824, determinism=0) is not fastest. Fastest algorithm (3) requires memory 1520566288
W20220408 10:09:22.278416 7688 cudnn_conv_util.cpp:102] Currently available algorithm (algo=0, require memory=0, idx=1) meeting requirements (max_workspace_size=1073741824, determinism=0) is not fastest. Fastest algorithm (3) requires memory 1520566288
W20220408 10:09:22.285899 7790 cudnn_conv_util.cpp:102] Currently available algorithm (algo=0, require memory=0, idx=1) meeting requirements (max_workspace_size=1073741824, determinism=0) is not fastest. Fastest algorithm (3) requires memory 1520566288
W20220408 10:09:22.284435 7206 cudnn_conv_util.cpp:102] Currently available algorithm (algo=0, require memory=0, idx=1) meeting requirements (max_workspace_size=1073741824, determinism=0) is not fastest.
Fastest algorithm (3) requires memory 1520566288
[rank:0] [train], epoch: 0/1, iter: 100/625, loss: 0.86730, lr: 0.000000, top1: 0.00227, throughput: 291.74 | 2022-04-08 10:10:43.406
[rank:5] [train], epoch: 0/1, iter: 100/625, loss: 0.86721, lr: 0.000000, top1: 0.00117, throughput: 291.57 | 2022-04-08 10:10:43.433
[rank:3] [train], epoch: 0/1, iter: 100/625, loss: 0.86745, lr: 0.000000, top1: 0.00125, throughput: 291.68 | 2022-04-08 10:10:43.452
[rank:4] [train], epoch: 0/1, iter: 100/625, loss: 0.86745, lr: 0.000000, top1: 0.00129, throughput: 291.67 | 2022-04-08 10:10:43.452
[rank:6] [train], epoch: 0/1, iter: 100/625, loss: 0.86705, lr: 0.000000, top1: 0.00137, throughput: 291.58 | 2022-04-08 10:10:43.462
[rank:7] [train], epoch: 0/1, iter: 100/625, loss: 0.86732, lr: 0.000000, top1: 0.00133, throughput: 291.63 | 2022-04-08 10:10:43.463
[rank:2] [train], epoch: 0/1, iter: 100/625, loss: 0.86738, lr: 0.000000, top1: 0.00168, throughput: 291.60 | 2022-04-08 10:10:43.465
[rank:1] [train], epoch: 0/1, iter: 100/625, loss: 0.86707, lr: 0.000000, top1: 0.00141, throughput: 291.46 | 2022-04-08 10:10:43.473
timestamp, name, driver_version, utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB]
2022/04/08 10:10:43.595, Tesla V100-SXM2-32GB, 470.57.02, 72 %, 24 %, 32510 MiB, 5184 MiB, 27326 MiB
2022/04/08 10:10:43.600, Tesla V100-SXM2-32GB, 470.57.02, 29 %, 21 %, 32510 MiB, 5204 MiB, 27306 MiB
2022/04/08 10:10:43.603, Tesla V100-SXM2-32GB, 470.57.02, 17 %, 8 %, 32510 MiB, 5280 MiB, 27230 MiB
2022/04/08 10:10:43.609, Tesla V100-SXM2-32GB, 470.57.02, 21 %, 2 %, 32510 MiB, 5256 MiB, 27254 MiB
2022/04/08 10:10:43.616, Tesla V100-SXM2-32GB, 470.57.02, 41 %, 19 %, 32510 MiB, 5264 MiB, 27246 MiB
2022/04/08 10:10:43.620, Tesla V100-SXM2-32GB, 470.57.02, 40 %, 11 %, 32510 MiB, 5196 MiB, 27314 MiB
2022/04/08 10:10:43.622, Tesla V100-SXM2-32GB, 470.57.02, 40 %, 31 %, 32510 MiB, 5140 MiB, 27370 MiB
2022/04/08 10:10:43.628, Tesla V100-SXM2-32GB, 470.57.02, 50 %, 37 %, 32510 MiB, 5220 MiB, 27290 MiB
[rank:7] [train], epoch: 0/1, iter: 200/625, loss: 0.86729, lr: 0.000000, top1: 0.00117, throughput: 317.76 | 2022-04-08 10:12:04.027
[rank:4] [train], epoch: 0/1, iter: 200/625, loss: 0.86710, lr: 0.000000, top1: 0.00148, throughput: 317.68 | 2022-04-08 10:12:04.037
[rank:5] [train], epoch: 0/1, iter: 200/625, loss: 0.86733, lr: 0.000000, top1: 0.00156, throughput: 317.59 | 2022-04-08 10:12:04.041
[rank:1] [train], epoch: 0/1, iter: 200/625, loss: 0.86732, lr: 0.000000, top1: 0.00137, throughput: 317.66 | 2022-04-08 10:12:04.061
[rank:6] [train], epoch: 0/1, iter: 200/625, loss: 0.86730, lr: 0.000000, top1: 0.00133, throughput: 317.47 | 2022-04-08 10:12:04.100
[rank:0] [train], epoch: 0/1, iter: 200/625, loss: 0.86723, lr: 0.000000, top1: 0.00137, throughput: 317.21 | 2022-04-08 10:12:04.109
[rank:2] [train], epoch: 0/1, iter: 200/625, loss: 0.86745, lr: 0.000000, top1: 0.00156, throughput: 317.43 | 2022-04-08 10:12:04.112
[rank:3] [train], epoch: 0/1, iter: 200/625, loss: 0.86744, lr: 0.000000, top1: 0.00187, throughput: 317.24 | 2022-04-08 10:12:04.148
[rank:3] [train], epoch: 0/1, iter: 300/625, loss: 0.86734, lr: 0.000000, top1: 0.00187, throughput: 320.94 | 2022-04-08 10:13:23.914
[rank:2] [train], epoch: 0/1, iter: 300/625, loss: 0.86750, lr: 0.000000, top1: 0.00145, throughput: 320.79 | 2022-04-08 10:13:23.916
[rank:6] [train], epoch: 0/1, iter: 300/625, loss: 0.86703, lr: 0.000000, top1: 0.00117, throughput: 320.64 | 2022-04-08 10:13:23.941
[rank:7] [train], epoch: 0/1, iter: 300/625, loss: 0.86731, lr: 0.000000, top1: 0.00199, throughput: 320.30 | 2022-04-08 10:13:23.952
[rank:5] [train], epoch: 0/1, iter: 300/625, loss: 0.86709, lr: 0.000000, top1: 0.00164, throughput: 320.29 | 2022-04-08 10:13:23.969
[rank:1] [train], epoch: 0/1, iter: 300/625, loss: 0.86755, lr: 0.000000, top1: 0.00160, throughput: 320.36 | 2022-04-08 10:13:23.972
[rank:4] [train], epoch: 0/1, iter: 300/625, loss: 0.86747, lr: 0.000000, top1: 0.00141, throughput: 320.19 | 2022-04-08 10:13:23.988
[rank:0] [train], epoch: 0/1, iter: 300/625, loss: 0.86736, lr: 0.000000, top1: 0.00145, throughput: 320.46 | 2022-04-08 10:13:23.994
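The argument dump and the per-rank throughput figures above can be cross-checked with a little arithmetic. This is a standalone sketch, not part of the training script; the variable names simply mirror the argument names in the log.

```python
# Values copied from the arguments section of the log above.
num_nodes = 1
num_devices_per_node = 8
train_batch_size = 256          # per-device batch size
train_global_batch_size = 2048  # as reported in the arguments
samples_per_epoch = 1281167     # ImageNet-1k training set size

# Global batch = per-device batch x devices per node x nodes.
world_size = num_nodes * num_devices_per_node
assert train_batch_size * world_size == train_global_batch_size

# batches_per_epoch follows from the sample count and the global batch.
batches_per_epoch = samples_per_epoch // train_global_batch_size
print(batches_per_epoch)  # 625, matching the iter: x/625 counters

# The logged throughput is per rank; the aggregate is roughly 8x that.
per_rank_throughput = 320.46    # rank 0 at iter 300
print(round(per_rank_throughput * world_size, 1))
```

This also explains why the run stops at iter 300 rather than 625: exit_num is set to 300.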
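The cuDNN warnings above report that the fastest convolution algorithm was skipped because its workspace would exceed the configured limit. The numbers in the warning bear this out (pure arithmetic on the logged values, no cuDNN involved):

```python
# Values copied from the cuDNN warning lines above.
max_workspace_size = 1073741824      # configured limit, exactly 1 GiB
fastest_algo_workspace = 1520566288  # bytes required by algorithm 3

GiB = 1 << 30
print(max_workspace_size / GiB)                 # 1.0
print(round(fastest_algo_workspace / GiB, 2))   # 1.42

# The fastest algorithm needs ~1.42 GiB, above the 1 GiB cap, so cuDNN
# falls back to algo=0, which requires no extra workspace.
assert fastest_algo_workspace > max_workspace_size
```

Raising the workspace limit by roughly 0.42 GiB per process would let cuDNN pick algorithm 3, at the cost of extra device memory on each of the 8 ranks.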
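The GPU table above looks like CSV output from an nvidia-smi query; the column names match what `nvidia-smi --query-gpu=... --format=csv` produces, though the exact invocation used by the script is not shown in the log and is an assumption here. A small parser for such rows, checking that free plus used memory accounts for the whole card:

```python
import csv
import io

# Header and one data row copied verbatim from the log above.
sample = """timestamp, name, driver_version, utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB]
2022/04/08 10:10:43.595, Tesla V100-SXM2-32GB, 470.57.02, 72 %, 24 %, 32510 MiB, 5184 MiB, 27326 MiB"""

# skipinitialspace strips the space after each comma so the column
# names in the header can be used as dict keys directly.
reader = csv.DictReader(io.StringIO(sample), skipinitialspace=True)
for row in reader:
    total = int(row["memory.total [MiB]"].split()[0])
    free = int(row["memory.free [MiB]"].split()[0])
    used = int(row["memory.used [MiB]"].split()[0])
    # Free and used memory should sum to the card's total.
    assert free + used == total
    print(row["name"], row["utilization.gpu [%]"], f"{used}/{total} MiB")
```

Each rank appears to launch its own nvidia-smi snapshot at the print interval, which is why the raw log contains eight interleaved copies of this table; only the first complete snapshot is kept above.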