loaded library: /usr/lib/x86_64-linux-gnu/libibverbs.so.1
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
loaded library: /usr/lib/x86_64-linux-gnu/libibverbs.so.1
loaded library: /usr/lib/x86_64-linux-gnu/libibverbs.so.1
loaded library: /usr/lib/x86_64-linux-gnu/libibverbs.so.1
loaded library: /usr/lib/x86_64-linux-gnu/libibverbs.so.1
loaded library: /usr/lib/x86_64-linux-gnu/libibverbs.so.1
loaded library: /usr/lib/x86_64-linux-gnu/libibverbs.so.1
loaded library: /usr/lib/x86_64-linux-gnu/libibverbs.so.1
loaded library: /usr/lib/x86_64-linux-gnu/libibverbs.so.1
------------------------ arguments ------------------------
batches_per_epoch ............................... 625
channel_last .................................... False
ddp ............................................. True
exit_num ........................................ 300
fuse_bn_add_relu ................................ False
fuse_bn_relu .................................... False
gpu_stat_file ................................... None
grad_clipping ................................... 0.0
graph ........................................... False
label_smoothing ................................. 0.1
learning_rate ................................... 2.048
legacy_init ..................................... False
load_path ....................................... None
lr_decay_type ................................... cosine
metric_local .................................... True
metric_train_acc ................................ True
momentum ........................................ 0.875
nccl_fusion_max_ops ............................. 24
nccl_fusion_threshold_mb ........................ 16
num_classes ..................................... 1000
num_devices_per_node ............................ 8
num_epochs ...................................... 1
num_nodes ....................................... 1
ofrecord_part_num ............................... 256
ofrecord_path ................................... /dataset/79846248
print_interval .................................. 100
print_timestamp ................................. False
samples_per_epoch ............................... 1281167
save_init ....................................... False
save_path ....................................... None
scale_grad ...................................... False
skip_eval ....................................... True
synthetic_data .................................. False
total_batches ................................... -1
train_batch_size ................................ 256
train_global_batch_size ......................... 2048
use_fp16 ........................................ False
use_gpu_decode .................................. False
val_batch_size .................................. 50
val_batches_per_epoch ........................... 125
val_global_batch_size ........................... 400
val_samples_per_epoch ........................... 50000
warmup_epochs ................................... 5
weight_decay .................................... 3.0517578125e-05
zero_init_residual .............................. True
-------------------- end of arguments ---------------------
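The batch and epoch settings in the argument dump are mutually consistent; a minimal sketch in plain Python (values copied from the dump, assuming the usual data-parallel convention that the global batch is the per-device batch times the number of devices) shows how they relate:

```python
# Consistency check of the batch/epoch arguments above.
train_batch_size = 256            # per-device (per-rank) batch size
num_devices_per_node = 8
num_nodes = 1
samples_per_epoch = 1_281_167     # ImageNet-1k training set
val_global_batch_size = 400
val_samples_per_epoch = 50_000

train_global_batch_size = train_batch_size * num_devices_per_node * num_nodes
print(train_global_batch_size)                         # 2048 -> train_global_batch_size
print(samples_per_epoch // train_global_batch_size)    # 625  -> batches_per_epoch
print(val_samples_per_epoch // val_global_batch_size)  # 125  -> val_batches_per_epoch
```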
***** Model Init *****
***** Model Init Finish, time escapled: 2.80848 s *****
W20220407 18:17:56.872377 7579 cudnn_conv_util.cpp:102] Currently available alogrithm (algo=0, require memory=0, idx=1) meeting requirments (max_workspace_size=1073741824, determinism=0) is not fastest. Fastest algorithm (3) requires memory 1520566288
W20220407 18:17:56.877652 7387 cudnn_conv_util.cpp:102] Currently available alogrithm (algo=0, require memory=0, idx=1) meeting requirments (max_workspace_size=1073741824, determinism=0) is not fastest. Fastest algorithm (3) requires memory 1520566288
W20220407 18:17:56.879238 7354 cudnn_conv_util.cpp:102] Currently available alogrithm (algo=0, require memory=0, idx=1) meeting requirments (max_workspace_size=1073741824, determinism=0) is not fastest. Fastest algorithm (3) requires memory 1520566288
W20220407 18:17:56.880179 7389 cudnn_conv_util.cpp:102] Currently available alogrithm (algo=0, require memory=0, idx=1) meeting requirments (max_workspace_size=1073741824, determinism=0) is not fastest. Fastest algorithm (3) requires memory 1520566288
W20220407 18:17:56.884121 7681 cudnn_conv_util.cpp:102] Currently available alogrithm (algo=0, require memory=0, idx=1) meeting requirments (max_workspace_size=1073741824, determinism=0) is not fastest. Fastest algorithm (3) requires memory 1520566288
W20220407 18:17:56.884260 7477 cudnn_conv_util.cpp:102] Currently available alogrithm (algo=0, require memory=0, idx=1) meeting requirments (max_workspace_size=1073741824, determinism=0) is not fastest. Fastest algorithm (3) requires memory 1520566288
W20220407 18:17:56.879853 7783 cudnn_conv_util.cpp:102] Currently available alogrithm (algo=0, require memory=0, idx=1) meeting requirments (max_workspace_size=1073741824, determinism=0) is not fastest. Fastest algorithm (3) requires memory 1520566288
W20220407 18:17:56.881675 7407 cudnn_conv_util.cpp:102] Currently available alogrithm (algo=0, require memory=0, idx=1) meeting requirments (max_workspace_size=1073741824, determinism=0) is not fastest. Fastest algorithm (3) requires memory 1520566288
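Each rank emits the same cuDNN warning: the fastest convolution algorithm needs more scratch memory than the configured workspace cap, so a slower algorithm is selected. A plain-Python reading of the numbers in that warning:

```python
# Numbers taken verbatim from the cuDNN warning above.
max_workspace_size = 1_073_741_824       # bytes; exactly 1 GiB workspace cap
fastest_algo_requirement = 1_520_566_288  # bytes needed by the fastest algorithm (3)

print(max_workspace_size / 2**30)         # 1.00 GiB allowed
print(fastest_algo_requirement / 2**30)   # ~1.42 GiB required
print(fastest_algo_requirement > max_workspace_size)  # True -> falls back to algo 0
```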
[rank:2] [train], epoch: 0/1, iter: 100/625, loss: 0.86760, lr: 0.000000, top1: 0.00086, throughput: 293.06 | 2022-04-07 18:19:17.667
[rank:4] [train], epoch: 0/1, iter: 100/625, loss: 0.86737, lr: 0.000000, top1: 0.00129, throughput: 293.25 | 2022-04-07 18:19:17.681
[rank:3] [train], epoch: 0/1, iter: 100/625, loss: 0.86728, lr: 0.000000, top1: 0.00109, throughput: 292.83 | 2022-04-07 18:19:17.692
[rank:5] [train], epoch: 0/1, iter: 100/625, loss: 0.86760, lr: 0.000000, top1: 0.00109, throughput: 293.10 | 2022-04-07 18:19:17.716
[rank:1] [train], epoch: 0/1, iter: 100/625, loss: 0.86710, lr: 0.000000, top1: 0.00137, throughput: 292.84 | 2022-04-07 18:19:17.727
[rank:6] [train], epoch: 0/1, iter: 100/625, loss: 0.86774, lr: 0.000000, top1: 0.00102, throughput: 292.86 | 2022-04-07 18:19:17.727
[rank:0] [train], epoch: 0/1, iter: 100/625, loss: 0.86738, lr: 0.000000, top1: 0.00129, throughput: 292.71 | 2022-04-07 18:19:17.736
[rank:7] [train], epoch: 0/1, iter: 100/625, loss: 0.86722, lr: 0.000000, top1: 0.00117, throughput: 293.59 | 2022-04-07 18:19:17.752
GPU status snapshot at iter 100 (8 x Tesla V100-SXM2-32GB, driver 470.57.02; one row per device):
timestamp, name, driver_version, utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB]
2022/04/07 18:19:18.035, Tesla V100-SXM2-32GB, 470.57.02, 99 %, 71 %, 32510 MiB, 5184 MiB, 27326 MiB
2022/04/07 18:19:18.041, Tesla V100-SXM2-32GB, 470.57.02, 99 %, 80 %, 32510 MiB, 5204 MiB, 27306 MiB
2022/04/07 18:19:18.065, Tesla V100-SXM2-32GB, 470.57.02, 99 %, 69 %, 32510 MiB, 5280 MiB, 27230 MiB
2022/04/07 18:19:18.077, Tesla V100-SXM2-32GB, 470.57.02, 99 %, 73 %, 32510 MiB, 5256 MiB, 27254 MiB
2022/04/07 18:19:18.093, Tesla V100-SXM2-32GB, 470.57.02, 99 %, 74 %, 32510 MiB, 5264 MiB, 27246 MiB
2022/04/07 18:19:18.103, Tesla V100-SXM2-32GB, 470.57.02, 99 %, 84 %, 32510 MiB, 5196 MiB, 27314 MiB
2022/04/07 18:19:18.118, Tesla V100-SXM2-32GB, 470.57.02, 99 %, 80 %, 32510 MiB, 5140 MiB, 27370 MiB
2022/04/07 18:19:18.124, Tesla V100-SXM2-32GB, 470.57.02, 99 %, 72 %, 32510 MiB, 5220 MiB, 27290 MiB
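The snapshot above has the column layout of an nvidia-smi CSV query. A sketch of how such a snapshot could be collected from Python (an illustration only, not necessarily how the benchmark's GPU-stat logging is implemented):

```python
import subprocess

# Query the same fields that appear in the snapshot above; one CSV row per GPU.
fields = ("timestamp,name,driver_version,utilization.gpu,utilization.memory,"
          "memory.total,memory.free,memory.used")
out = subprocess.run(
    ["nvidia-smi", f"--query-gpu={fields}", "--format=csv"],
    capture_output=True, text=True, check=True,
).stdout
print(out)
```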
[rank:1] [train], epoch: 0/1, iter: 200/625, loss: 0.86781, lr: 0.000000, top1: 0.00133, throughput: 318.20 | 2022-04-07 18:20:38.181
[rank:6] [train], epoch: 0/1, iter: 200/625, loss: 0.86713, lr: 0.000000, top1: 0.00145, throughput: 318.08 | 2022-04-07 18:20:38.211
[rank:0] [train], epoch: 0/1, iter: 200/625, loss: 0.86758, lr: 0.000000, top1: 0.00148, throughput: 318.05 | 2022-04-07 18:20:38.226
[rank:5] [train], epoch: 0/1, iter: 200/625, loss: 0.86757, lr: 0.000000, top1: 0.00125, throughput: 317.96 | 2022-04-07 18:20:38.229
[rank:7] [train], epoch: 0/1, iter: 200/625, loss: 0.86750, lr: 0.000000, top1: 0.00125, throughput: 318.09 | 2022-04-07 18:20:38.229
[rank:2] [train], epoch: 0/1, iter: 200/625, loss: 0.86747, lr: 0.000000, top1: 0.00176, throughput: 317.71 | 2022-04-07 18:20:38.244
[rank:3] [train], epoch: 0/1, iter: 200/625, loss: 0.86721, lr: 0.000000, top1: 0.00145, throughput: 317.79 | 2022-04-07 18:20:38.248
[rank:4] [train], epoch: 0/1, iter: 200/625, loss: 0.86742, lr: 0.000000, top1: 0.00129, throughput: 317.65 | 2022-04-07 18:20:38.274
[rank:2] [train], epoch: 0/1, iter: 300/625, loss: 0.86710, lr: 0.000000, top1: 0.00141, throughput: 319.79 | 2022-04-07 18:21:58.296
[rank:1] [train], epoch: 0/1, iter: 300/625, loss: 0.86717, lr: 0.000000, top1: 0.00109, throughput: 319.50 | 2022-04-07 18:21:58.305
[rank:4] [train], epoch: 0/1, iter: 300/625, loss: 0.86738, lr: 0.000000, top1: 0.00113, throughput: 319.72 | 2022-04-07 18:21:58.345
[rank:3] [train], epoch: 0/1, iter: 300/625, loss: 0.86706, lr: 0.000000, top1: 0.00125, throughput: 319.60 | 2022-04-07 18:21:58.348
[rank:7] [train], epoch: 0/1, iter: 300/625, loss: 0.86757, lr: 0.000000, top1: 0.00098, throughput: 319.49 | 2022-04-07 18:21:58.358
[rank:5] [train], epoch: 0/1, iter: 300/625, loss: 0.86734, lr: 0.000000, top1: 0.00125, throughput: 319.44 | 2022-04-07 18:21:58.369
[rank:0] [train], epoch: 0/1, iter: 300/625, loss: 0.86740, lr: 0.000000, top1: 0.00113, throughput: 319.41 | 2022-04-07 18:21:58.374
[rank:6] [train], epoch: 0/1, iter: 300/625, loss: 0.86736, lr: 0.000000, top1: 0.00109, throughput: 319.70 | 2022-04-07 18:21:58.286
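The logged throughput appears to be images per second per rank: with train_batch_size=256 and print_interval=100, a rank processing 319.7 img/s needs about 256 * 100 / 319.7 ~= 80 s per interval, which matches the gap between the iter-200 and iter-300 timestamps. Under that assumption, a rough aggregate over the 8 GPUs at iter 300:

```python
# Per-rank throughput values copied from the iter-300 log lines above.
per_rank_throughput = [319.79, 319.50, 319.72, 319.60, 319.49, 319.44, 319.41, 319.70]

print(sum(per_rank_throughput))                              # ~2556.7 img/s across 8 GPUs
print(sum(per_rank_throughput) / len(per_rank_throughput))   # ~319.6 img/s per GPU
```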