[03/11 01:56:03 libai]: Rank of current process: 0. World size: 8
[03/11 01:56:03 libai]: Command line arguments: Namespace(config_file='configs/swin_imagenet.py', eval_only=False, fast_dev_run=False, opts=['model.cfg.hidden_dropout_prob=0.1', 'model.cfg.attention_probs_dropout_prob=0.1', 'model.cfg.bias_dropout_fusion=true', 'model.cfg.hidden_layers=12', 'model.cfg.hidden_size=768', 'model.cfg.num_attention_heads=12', 'model.cfg.intermediate_size=3072', 'model.cfg.ffn_hidden_size=3072', 'model.cfg.head_size=64', 'graph.enabled=true', 'train.dist.pipeline_num_layers=12', 'train.train_micro_batch_size=64', 'train.global_batch_size=2048', 'train.dist.tensor_parallel_size=1', 'train.dist.pipeline_parallel_size=2', 'train.amp.enabled=true', 'train.activation_checkpoint.enabled=true', 'train.num_accumulation_steps=8', 'train.evaluation.enabled=false', 'train.train_iter=220', 'train.train_epoch=0', 'train.log_period=100', 'train.zero_optimization.enabled=true', 'train.zero_optimization.stage=2', 'train.load_weight=', 'train.output_dir=test_logs/oneflow-28/NVIDIA_GeForce_RTX_3080_Ti/1ea2bb7/LibAI_swin_imagenet_graph_nl12_nah12_hs768_FP16_actrue_DP4_MP1_PP2_zerotrue_stage2_mbs64_gbs2048_acc8_1n8g'], resume=False)
[03/11 01:56:04 libai]: Contents of args.config_file=configs/swin_imagenet.py:
from libai.config import LazyCall
from .common.models.swin.swin_tiny_patch4_window7_224 import model
from .common.models.graph import graph
from .common.train import train
from .common.optim import optim
from .common.data.imagenet import dataloader

from flowvision.data import Mixup
from flowvision.loss.cross_entropy import SoftTargetCrossEntropy

# Refine data path to imagenet
dataloader.train.dataset[0].root = "/ssd/dataset/ImageNet/extract"
dataloader.test[0].dataset.root = "/ssd/dataset/ImageNet/extract"

# Add Mixup Func
dataloader.train.mixup_func = LazyCall(Mixup)(
    mixup_alpha=0.8,
    cutmix_alpha=1.0,
    prob=1.0,
    switch_prob=0.5,
    mode="batch",
    num_classes=1000,
)

# Refine model cfg for vit training on imagenet
model.cfg.num_classes = 1000
model.cfg.loss_func = SoftTargetCrossEntropy()

# Refine optimizer cfg for vit model
optim.lr = 1e-3
optim.eps = 1e-8
optim.weight_decay = 0.05
optim.params.clip_grad_max_norm = None
optim.params.clip_grad_norm_type = None

# Refine train cfg for vit model
train.train_micro_batch_size = 128
train.test_micro_batch_size = 128
train.train_epoch = 300
train.warmup_ratio = 20 / 300
train.eval_period = 1562
train.log_period = 100

# Scheduler
train.scheduler.warmup_factor = 0.001
train.scheduler.alpha = 0.01
train.scheduler.warmup_method = "linear"

# Set fp16 ON
train.amp.enabled = True

[03/11 01:56:04 libai]: Full config saved to test_logs/oneflow-28/NVIDIA_GeForce_RTX_3080_Ti/1ea2bb7/LibAI_swin_imagenet_graph_nl12_nah12_hs768_FP16_actrue_DP4_MP1_PP2_zerotrue_stage2_mbs64_gbs2048_acc8_1n8g/config.yaml
[03/11 01:56:04 lb.engine.default]: > compiling dataset index builder ...
make: Entering directory '/ssd/home/ouyangyu/libai_week_test/libai/libai/data/data_utils'
make: Nothing to be done for 'default'.
make: Leaving directory '/ssd/home/ouyangyu/libai_week_test/libai/libai/data/data_utils'
[03/11 01:56:04 lb.engine.default]: >>> done with dataset index builder. Compilation time: 0.056 seconds
[03/11 01:56:04 lb.engine.default]: >>> done with compiling. Compilation time: 0.058 seconds
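The parallel layout and batch-size settings in the arguments above are self-consistent; the following minimal sketch (not part of the log; the rounding used for warmup_iter is an assumption) reproduces the derived values.

# Sanity-check the parallelism / batch-size arithmetic implied by the opts above.
world_size = 8
tensor_parallel = 1        # train.dist.tensor_parallel_size
pipeline_parallel = 2      # train.dist.pipeline_parallel_size
data_parallel = world_size // (tensor_parallel * pipeline_parallel)  # 4, i.e. "DP4_MP1_PP2"

micro_batch = 64           # train.train_micro_batch_size
accumulation = 8           # train.num_accumulation_steps
global_batch = micro_batch * accumulation * data_parallel            # 64 * 8 * 4 = 2048

train_iter, warmup_ratio = 220, 20 / 300
warmup_iter = round(train_iter * warmup_ratio)                       # 14.67 -> 15, matching the auto-scaled value below

print(data_parallel, global_batch, warmup_iter)                      # 4 2048 15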
[03/11 01:56:04 lb.engine.default]: Prepare training, validating, testing set
[03/11 01:56:07 lb.engine.default]: Prepare testing set
[03/11 01:56:17 lb.engine.default]: Auto-scaling the config to train.train_iter=220, train.warmup_iter=15
[03/11 01:56:17 libai]: > Start building model...
W20230311 01:56:19.654317 3224376 eager_local_op_interpreter.cpp:256] Casting a local tensor to a global tensor with Broadcast sbp will modify the data of input! If you want to keep the input local tensor unchanged, please set the arg copy to True.
[03/11 01:56:21 lb.engine.default]: Model: SwinTransformer(
  (patch_embed): PatchEmbed(
    (proj): Conv2d(3, 96, kernel_size=(4, 4), stride=(4, 4))
    (norm): LayerNorm((96,), eps=1e-05, elementwise_affine=True)
  )
  (pos_drop): Dropout(p=0.0, inplace=False)
  (layers): ModuleList(
    (0): BasicLayer(
      (blocks): ModuleList(
        (0): SwinTransformerBlock(
          (norm1): LayerNorm((96,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear1D(in_features=96, out_features=288, bias=True, parallel=data)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear1D(in_features=96, out_features=96, bias=True, parallel=data)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): Identity()
          (norm2): LayerNorm((96,), eps=1e-05, elementwise_affine=True)
          (mlp): MLP(
            bias_gelu_fusion=True, bias_dropout_fusion=True, dropout=0.0
            (dense_h_to_4h): Linear1D(in_features=96, out_features=384, bias=True, parallel=col)
            (dense_4h_to_h): Linear1D(in_features=384, out_features=96, bias=True, parallel=row)
          )
        )
        (1): SwinTransformerBlock(
          (norm1): LayerNorm((96,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear1D(in_features=96, out_features=288, bias=True, parallel=data)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear1D(in_features=96, out_features=96, bias=True, parallel=data)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): DropPath()
          (norm2): LayerNorm((96,), eps=1e-05, elementwise_affine=True)
          (mlp): MLP(
            bias_gelu_fusion=True, bias_dropout_fusion=True, dropout=0.0
            (dense_h_to_4h): Linear1D(in_features=96, out_features=384, bias=True, parallel=col)
            (dense_4h_to_h): Linear1D(in_features=384, out_features=96, bias=True, parallel=row)
          )
        )
      )
      (downsample): PatchMerging(
        (reduction): Linear1D(in_features=384, out_features=192, bias=False, parallel=data)
        (norm): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
      )
    )
    (1): BasicLayer(
      (blocks): ModuleList(
        (0): SwinTransformerBlock(
          (norm1): LayerNorm((192,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear1D(in_features=192, out_features=576, bias=True, parallel=data)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear1D(in_features=192, out_features=192, bias=True, parallel=data)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): DropPath()
          (norm2): LayerNorm((192,), eps=1e-05, elementwise_affine=True)
          (mlp): MLP(
            bias_gelu_fusion=True, bias_dropout_fusion=True, dropout=0.0
            (dense_h_to_4h): Linear1D(in_features=192, out_features=768, bias=True, parallel=col)
            (dense_4h_to_h): Linear1D(in_features=768, out_features=192, bias=True, parallel=row)
          )
        )
        (1): SwinTransformerBlock(
          (norm1): LayerNorm((192,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear1D(in_features=192, out_features=576, bias=True, parallel=data)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear1D(in_features=192, out_features=192, bias=True, parallel=data)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): DropPath()
          (norm2): LayerNorm((192,), eps=1e-05, elementwise_affine=True)
          (mlp): MLP(
            bias_gelu_fusion=True, bias_dropout_fusion=True, dropout=0.0
            (dense_h_to_4h): Linear1D(in_features=192, out_features=768, bias=True, parallel=col)
            (dense_4h_to_h): Linear1D(in_features=768, out_features=192, bias=True, parallel=row)
          )
        )
      )
      (downsample): PatchMerging(
        (reduction): Linear1D(in_features=768, out_features=384, bias=False, parallel=data)
        (norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
      )
    )
    (2): BasicLayer(
      (blocks): ModuleList(
        (0): SwinTransformerBlock(
          (norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear1D(in_features=384, out_features=1152, bias=True, parallel=data)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear1D(in_features=384, out_features=384, bias=True, parallel=data)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): DropPath()
          (norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (mlp): MLP(
            bias_gelu_fusion=True, bias_dropout_fusion=True, dropout=0.0
            (dense_h_to_4h): Linear1D(in_features=384, out_features=1536, bias=True, parallel=col)
            (dense_4h_to_h): Linear1D(in_features=1536, out_features=384, bias=True, parallel=row)
          )
        )
        (1): SwinTransformerBlock(
          (norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear1D(in_features=384, out_features=1152, bias=True, parallel=data)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear1D(in_features=384, out_features=384, bias=True, parallel=data)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): DropPath()
          (norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (mlp): MLP(
            bias_gelu_fusion=True, bias_dropout_fusion=True, dropout=0.0
            (dense_h_to_4h): Linear1D(in_features=384, out_features=1536, bias=True, parallel=col)
            (dense_4h_to_h): Linear1D(in_features=1536, out_features=384, bias=True, parallel=row)
          )
        )
        (2): SwinTransformerBlock(
          (norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear1D(in_features=384, out_features=1152, bias=True, parallel=data)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear1D(in_features=384, out_features=384, bias=True, parallel=data)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): DropPath()
          (norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (mlp): MLP(
            bias_gelu_fusion=True, bias_dropout_fusion=True, dropout=0.0
            (dense_h_to_4h): Linear1D(in_features=384, out_features=1536, bias=True, parallel=col)
            (dense_4h_to_h): Linear1D(in_features=1536, out_features=384, bias=True, parallel=row)
          )
        )
        (3): SwinTransformerBlock(
          (norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear1D(in_features=384, out_features=1152, bias=True, parallel=data)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear1D(in_features=384, out_features=384, bias=True, parallel=data)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): DropPath()
          (norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (mlp): MLP(
            bias_gelu_fusion=True, bias_dropout_fusion=True, dropout=0.0
            (dense_h_to_4h): Linear1D(in_features=384, out_features=1536, bias=True, parallel=col)
            (dense_4h_to_h): Linear1D(in_features=1536, out_features=384, bias=True, parallel=row)
          )
        )
        (4): SwinTransformerBlock(
          (norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear1D(in_features=384, out_features=1152, bias=True, parallel=data)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear1D(in_features=384, out_features=384, bias=True, parallel=data)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): DropPath()
          (norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (mlp): MLP(
            bias_gelu_fusion=True, bias_dropout_fusion=True, dropout=0.0
            (dense_h_to_4h): Linear1D(in_features=384, out_features=1536, bias=True, parallel=col)
            (dense_4h_to_h): Linear1D(in_features=1536, out_features=384, bias=True, parallel=row)
          )
        )
        (5): SwinTransformerBlock(
          (norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear1D(in_features=384, out_features=1152, bias=True, parallel=data)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear1D(in_features=384, out_features=384, bias=True, parallel=data)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): DropPath()
          (norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (mlp): MLP(
            bias_gelu_fusion=True, bias_dropout_fusion=True, dropout=0.0
            (dense_h_to_4h): Linear1D(in_features=384, out_features=1536, bias=True, parallel=col)
            (dense_4h_to_h): Linear1D(in_features=1536, out_features=384, bias=True, parallel=row)
          )
        )
      )
      (downsample): PatchMerging(
        (reduction): Linear1D(in_features=1536, out_features=768, bias=False, parallel=data)
        (norm): LayerNorm((1536,), eps=1e-05, elementwise_affine=True)
      )
    )
    (3): BasicLayer(
      (blocks): ModuleList(
        (0): SwinTransformerBlock(
          (norm1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear1D(in_features=768, out_features=2304, bias=True, parallel=data)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear1D(in_features=768, out_features=768, bias=True, parallel=data)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): DropPath()
          (norm2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
          (mlp): MLP(
            bias_gelu_fusion=True, bias_dropout_fusion=True, dropout=0.0
            (dense_h_to_4h): Linear1D(in_features=768, out_features=3072, bias=True, parallel=col)
            (dense_4h_to_h): Linear1D(in_features=3072, out_features=768, bias=True, parallel=row)
          )
        )
        (1): SwinTransformerBlock(
          (norm1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear1D(in_features=768, out_features=2304, bias=True, parallel=data)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear1D(in_features=768, out_features=768, bias=True, parallel=data)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): DropPath()
          (norm2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
          (mlp): MLP(
            bias_gelu_fusion=True, bias_dropout_fusion=True, dropout=0.0
            (dense_h_to_4h): Linear1D(in_features=768, out_features=3072, bias=True, parallel=col)
            (dense_4h_to_h): Linear1D(in_features=3072, out_features=768, bias=True, parallel=row)
          )
        )
      )
    )
  )
  (norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
  (avgpool): AdaptiveAvgPool1d()
  (head): Linear1D(in_features=768, out_features=1000, bias=True, parallel=data)
  (loss_func): SoftTargetCrossEntropy()
)
[03/11 01:56:21 libai]: >>> done with building model. Building time: 4.119 seconds
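The per-stage sizes in the printout above follow directly from the Swin-Tiny hyper-parameters visible in it (embed_dim 96, depths 2/2/6/2); a minimal sketch, assuming the usual 3x qkv and 4x MLP expansion, reproduces them.

# Reproduce the per-stage feature sizes shown in the SwinTransformer printout above.
embed_dim, depths = 96, (2, 2, 6, 2)
for stage, depth in enumerate(depths):
    dim = embed_dim * 2 ** stage      # 96, 192, 384, 768
    qkv_out = 3 * dim                 # 288, 576, 1152, 2304 (WindowAttention.qkv)
    mlp_hidden = 4 * dim              # 384, 768, 1536, 3072 (MLP dense_h_to_4h)
    print(f"stage {stage}: {depth} blocks, dim={dim}, qkv_out={qkv_out}, mlp_hidden={mlp_hidden}")
    if stage < len(depths) - 1:       # PatchMerging after every stage except the last
        print(f"  downsample reduction: {4 * dim} -> {2 * dim}")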
[03/11 01:56:21 lb.engine.trainer]: Starting training from iteration 0
[03/11 01:56:24 lb.models.utils.graph_base]: Start compiling the train graph which may take some time. Please wait for a moment ...
W20230311 01:56:36.539301 3224381 insert_nccl_logical_op_pass.cpp:1088] In Graph: GraphBase_0 Placement: cuda-@0:0-@1:1-@2:2-@3:3 the total_op_num = 1125 and has 2 different nccl stream which is possible to trigger cuda stream kernel launch upper limit. So the nccl logical kernel will from async to sync exec, which may affect performance.
W20230311 01:56:36.825002 3224383 insert_nccl_logical_op_pass.cpp:1088] In Graph: GraphBase_0 Placement: cuda-@0:0-@1:1-@2:2-@3:3 the total_op_num = 1125 and has 2 different nccl stream which is possible to trigger cuda stream kernel launch upper limit. So the nccl logical kernel will from async to sync exec, which may affect performance.
W20230311 01:56:36.849117 3224376 insert_nccl_logical_op_pass.cpp:1088] In Graph: GraphBase_0 Placement: cuda-@0:0-@1:1-@2:2-@3:3 the total_op_num = 1125 and has 2 different nccl stream which is possible to trigger cuda stream kernel launch upper limit. So the nccl logical kernel will from async to sync exec, which may affect performance.
W20230311 01:56:36.855331 3224377 insert_nccl_logical_op_pass.cpp:1088] In Graph: GraphBase_0 Placement: cuda-@0:0-@1:1-@2:2-@3:3 the total_op_num = 1125 and has 2 different nccl stream which is possible to trigger cuda stream kernel launch upper limit. So the nccl logical kernel will from async to sync exec, which may affect performance.
W20230311 01:56:36.861972 3224385 insert_nccl_logical_op_pass.cpp:1088] In Graph: GraphBase_0 Placement: cuda-@0:0-@1:1-@2:2-@3:3 the total_op_num = 1125 and has 2 different nccl stream which is possible to trigger cuda stream kernel launch upper limit. So the nccl logical kernel will from async to sync exec, which may affect performance.
W20230311 01:56:36.883067 3224380 insert_nccl_logical_op_pass.cpp:1088] In Graph: GraphBase_0 Placement: cuda-@0:0-@1:1-@2:2-@3:3 the total_op_num = 1125 and has 2 different nccl stream which is possible to trigger cuda stream kernel launch upper limit. So the nccl logical kernel will from async to sync exec, which may affect performance.
W20230311 01:56:36.954288 3224379 insert_nccl_logical_op_pass.cpp:1088] In Graph: GraphBase_0 Placement: cuda-@0:0-@1:1-@2:2-@3:3 the total_op_num = 1125 and has 2 different nccl stream which is possible to trigger cuda stream kernel launch upper limit. So the nccl logical kernel will from async to sync exec, which may affect performance.
W20230311 01:56:37.432719 3224378 insert_nccl_logical_op_pass.cpp:1088] In Graph: GraphBase_0 Placement: cuda-@0:0-@1:1-@2:2-@3:3 the total_op_num = 1125 and has 2 different nccl stream which is possible to trigger cuda stream kernel launch upper limit. So the nccl logical kernel will from async to sync exec, which may affect performance.
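The per-GPU utilization snapshots that follow were collected with nvidia-smi's CSV query interface; the exact monitoring command is not recorded in the log, but a minimal sketch of an equivalent query (field names as in the header below) would be:

# Query the same CSV columns that appear in the monitoring output below.
import subprocess
fields = ("timestamp,name,driver_version,utilization.gpu,utilization.memory,"
          "memory.total,memory.free,memory.used")
print(subprocess.check_output(
    ["nvidia-smi", f"--query-gpu={fields}", "--format=csv"], text=True))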
timestamp, name, driver_version, utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB]
2023/03/11 02:02:51.168, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 21 %, 0 %, 12288 MiB, 7067 MiB, 4986 MiB
2023/03/11 02:02:51.173, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 100 %, 0 %, 12288 MiB, 7127 MiB, 4926 MiB
2023/03/11 02:02:51.173, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 21 %, 0 %, 12288 MiB, 7067 MiB, 4986 MiB
2023/03/11 02:02:51.178, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 100 %, 0 %, 12288 MiB, 7127 MiB, 4926 MiB
2023/03/11 02:02:51.179, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 100 %, 0 %, 12288 MiB, 7127 MiB, 4926 MiB
2023/03/11 02:02:51.183, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 98 %, 1 %, 12288 MiB, 7127 MiB, 4926 MiB
2023/03/11 02:02:51.184, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 100 %, 0 %, 12288 MiB, 7127 MiB, 4926 MiB
2023/03/11 02:02:51.186, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 100 %, 0 %, 12288 MiB, 9643 MiB, 2410 MiB
2023/03/11 02:02:51.186, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 98 %, 1 %, 12288 MiB, 7127 MiB, 4926 MiB
2023/03/11 02:02:51.187, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 99 %, 0 %, 12288 MiB, 9639 MiB, 2414 MiB
2023/03/11 02:02:51.188, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 100 %, 0 %, 12288 MiB, 9643 MiB, 2410 MiB
2023/03/11 02:02:51.190, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 21 %, 0 %, 12288 MiB, 7067 MiB, 4986 MiB
2023/03/11 02:02:51.190, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 21 %, 0 %, 12288 MiB, 7067 MiB, 4986 MiB
2023/03/11 02:02:51.192, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 100 %, 0 %, 12288 MiB, 9639 MiB, 2414 MiB
2023/03/11 02:02:51.194, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 99 %, 0 %, 12288 MiB, 9639 MiB, 2414 MiB
2023/03/11 02:02:51.197, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 100 %, 0 %, 12288 MiB, 7127 MiB, 4926 MiB
2023/03/11 02:02:51.197, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 100 %, 0 %, 12288 MiB, 7127 MiB, 4926 MiB
2023/03/11 02:02:51.201, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 87 %, 0 %, 12288 MiB, 9639 MiB, 2414 MiB
2023/03/11 02:02:51.202, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 100 %, 0 %, 12288 MiB, 9639 MiB, 2414 MiB
2023/03/11 02:02:51.203, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 21 %, 0 %, 12288 MiB, 7067 MiB, 4986 MiB
2023/03/11 02:02:51.205, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 100 %, 0 %, 12288 MiB, 7127 MiB, 4926 MiB
2023/03/11 02:02:51.205, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 100 %, 0 %, 12288 MiB, 7127 MiB, 4926 MiB
2023/03/11 02:02:51.210, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 21 %, 0 %, 12288 MiB, 7067 MiB, 4986 MiB
2023/03/11 02:02:51.210, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 21 %, 0 %, 12288 MiB, 7067 MiB, 4986 MiB
2023/03/11 02:02:51.211, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 87 %, 0 %, 12288 MiB, 9639 MiB, 2414 MiB
2023/03/11 02:02:51.212, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 100 %, 0 %, 12288 MiB, 7127 MiB, 4926 MiB
2023/03/11 02:02:51.214, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 98 %, 1 %, 12288 MiB, 7127 MiB, 4926 MiB
2023/03/11 02:02:51.214, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 98 %, 1 %, 12288 MiB, 7127 MiB, 4926 MiB
2023/03/11 02:02:51.216, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 100 %, 0 %, 12288 MiB, 7127 MiB, 4926 MiB
2023/03/11 02:02:51.216, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 100 %, 0 %, 12288 MiB, 7127 MiB, 4926 MiB
2023/03/11 02:02:51.218, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 100 %, 0 %, 12288 MiB, 7127 MiB, 4926 MiB
2023/03/11 02:02:51.220, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 100 %, 0 %, 12288 MiB, 9643 MiB, 2410 MiB
2023/03/11 02:02:51.221, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 100 %, 0 %, 12288 MiB, 9643 MiB, 2410 MiB
2023/03/11 02:02:51.222, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 100 %, 0 %, 12288 MiB, 7127 MiB, 4926 MiB
2023/03/11 02:02:51.224, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 100 %, 0 %, 12288 MiB, 7127 MiB, 4926 MiB
2023/03/11 02:02:51.226, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 98 %, 1 %, 12288 MiB, 7127 MiB, 4926 MiB
2023/03/11 02:02:51.227, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 99 %, 0 %, 12288 MiB, 9639 MiB, 2414 MiB
2023/03/11 02:02:51.228, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 99 %, 0 %, 12288 MiB, 9639 MiB, 2414 MiB
2023/03/11 02:02:51.230, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 98 %, 1 %, 12288 MiB, 7127 MiB, 4926 MiB
2023/03/11 02:02:51.230, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 98 %, 1 %, 12288 MiB, 7127 MiB, 4926 MiB
2023/03/11 02:02:51.231, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 100 %, 0 %, 12288 MiB, 9643 MiB, 2410 MiB
2023/03/11 02:02:51.235, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 100 %, 0 %, 12288 MiB, 9639 MiB, 2414 MiB
2023/03/11 02:02:51.233, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 100 %, 0 %, 12288 MiB, 9639 MiB, 2414 MiB
2023/03/11 02:02:51.237, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 100 %, 0 %, 12288 MiB, 9643 MiB, 2410 MiB
2023/03/11 02:02:51.238, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 100 %, 0 %, 12288 MiB, 9643 MiB, 2410 MiB
2023/03/11 02:02:51.239, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 99 %, 0 %, 12288 MiB, 9639 MiB, 2414 MiB
2023/03/11 02:02:51.241, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 84 %, 3 %, 12288 MiB, 9639 MiB, 2414 MiB
2023/03/11 02:02:51.241, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 84 %, 3 %, 12288 MiB, 9639 MiB, 2414 MiB
2023/03/11 02:02:51.244, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 99 %, 0 %, 12288 MiB, 9639 MiB, 2414 MiB
2023/03/11 02:02:51.245, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 99 %, 0 %, 12288 MiB, 9639 MiB, 2414 MiB
2023/03/11 02:02:51.265, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 100 %, 0 %, 12288 MiB, 9639 MiB, 2414 MiB
2023/03/11 02:02:51.270, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 100 %, 0 %, 12288 MiB, 9639 MiB, 2414 MiB
2023/03/11 02:02:51.271, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 100 %, 0 %, 12288 MiB, 9639 MiB, 2414 MiB
2023/03/11 02:02:51.274, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 84 %, 3 %, 12288 MiB, 9639 MiB, 2414 MiB
2023/03/11 02:02:51.277, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 84 %, 3 %, 12288 MiB, 9639 MiB, 2414 MiB
2023/03/11 02:02:51.277, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 84 %, 3 %, 12288 MiB, 9639 MiB, 2414 MiB
timestamp, name, driver_version, utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB]
2023/03/11 02:02:52.997, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 41 %, 17 %, 12288 MiB, 7067 MiB, 4986 MiB
2023/03/11 02:02:52.999, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 69 %, 24 %, 12288 MiB, 7127 MiB, 4926 MiB
2023/03/11 02:02:53.000, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 47 %, 17 %, 12288 MiB, 7127 MiB, 4926 MiB
2023/03/11 02:02:53.001, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 84 %, 40 %, 12288 MiB, 7127 MiB, 4926 MiB
2023/03/11 02:02:53.002, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 45 %, 1 %, 12288 MiB, 9643 MiB, 2410 MiB
2023/03/11 02:02:53.004, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 0 %, 0 %, 12288 MiB, 9639 MiB, 2414 MiB
2023/03/11 02:02:53.004, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 15 %, 0 %, 12288 MiB, 9639 MiB, 2414 MiB
2023/03/11 02:02:53.007, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 51 %, 5 %, 12288 MiB, 9639 MiB, 2414 MiB
[03/11 02:02:56 lb.utils.events]: eta: 0:07:23 iteration: 99/220 consumed_samples: 204800 total_loss: 6.889 time: 3.7326 s/iter data_time: 0.8329 s/iter total_throughput: 548.69 samples/s lr: 5.82e-04
[03/11 02:09:06 lb.utils.events]: eta: 0:01:13 iteration: 199/220 consumed_samples: 409600 total_loss: 6.859 time: 3.7159 s/iter data_time: 1.1499 s/iter total_throughput: 551.14 samples/s lr: 3.21e-05
[03/11 02:10:18 lb.utils.events]: eta: 0:00:00 iteration: 219/220 consumed_samples: 450560 total_loss: 6.851 time: 3.7060 s/iter data_time: 1.0440 s/iter total_throughput: 552.61 samples/s lr: 1.01e-05
[03/11 02:10:18 lb.engine.hooks]: Overall training speed: 218 iterations in 0:13:27 (3.7061 s / it)
[03/11 02:10:18 lb.engine.hooks]: Total training time: 0:13:28 (0:00:00 on hooks)
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
oneflow-version(git_commit)=0.9.1.dev20230309+cu117
oneflow-commit(git_commit)=1ea2bb7
oneflow-libai(git_commit)=50a973dc5de635b8613ad7666c073c763e238850
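As a cross-check on the reported training statistics, throughput and consumed_samples follow from the global batch size of 2048; a minimal sketch of the arithmetic (not part of the log):

# Verify the reported throughput / consumed_samples against global_batch_size=2048.
global_batch = 2048
for iteration, sec_per_iter in [(99, 3.7326), (199, 3.7159), (219, 3.7060)]:
    throughput = global_batch / sec_per_iter   # ~548.7, ~551.1, ~552.6 samples/s
    consumed = (iteration + 1) * global_batch  # 204800, 409600, 450560
    print(iteration, round(throughput, 2), consumed)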