[03/05 09:39:00 libai]: Rank of current process: 0. World size: 8
[03/05 09:39:00 libai]: Command line arguments: Namespace(config_file='configs/gpt2_pretrain.py', eval_only=False, fast_dev_run=False, opts=['model.cfg.hidden_dropout_prob=0.1', 'model.cfg.attention_probs_dropout_prob=0.1', 'model.cfg.bias_dropout_fusion=true', 'model.cfg.hidden_layers=24', 'model.cfg.hidden_size=1024', 'model.cfg.num_attention_heads=16', 'model.cfg.intermediate_size=4096', 'model.cfg.ffn_hidden_size=4096', 'model.cfg.head_size=64', 'graph.enabled=true', 'train.dist.pipeline_num_layers=24', 'train.train_micro_batch_size=8', 'train.global_batch_size=128', 'train.dist.tensor_parallel_size=2', 'train.dist.pipeline_parallel_size=2', 'train.amp.enabled=true', 'train.activation_checkpoint.enabled=true', 'train.num_accumulation_steps=8', 'train.evaluation.enabled=false', 'train.train_iter=220', 'train.train_epoch=0', 'train.log_period=100', 'train.zero_optimization.enabled=true', 'train.zero_optimization.stage=2', 'train.load_weight=', 'train.output_dir=test_logs/oneflow-28/NVIDIA_GeForce_RTX_3080_Ti/7d07caf/LibAI_gpt2_pretrain_graph_nl24_nah16_hs1024_FP16_actrue_DP2_MP2_PP2_zerotrue_stage2_mbs8_gbs128_acc8_1n8g'], resume=False)
[03/05 09:39:00 libai]: Contents of args.config_file=configs/gpt2_pretrain.py:

from libai.config import LazyCall
from libai.evaluation import PPLEvaluator
from .common.models.gpt import pretrain_model as model
from .common.train import train
from .common.optim import optim
from .common.data.gpt_dataset import dataloader, tokenization
from .common.models.graph import graph

vocab_file = "./data_test/gpt_data/gpt2-vocab.json"
merge_files = "./data_test/gpt_data/gpt2-merges.txt"
data_prefix = "./data_test/gpt_data/loss_compara_content_sentence"

tokenization.tokenizer.vocab_file = vocab_file
tokenization.tokenizer.merges_file = merge_files
dataloader.train.dataset[0].data_prefix = data_prefix
dataloader.train.dataset[0].indexed_dataset.data_prefix = data_prefix

# GPT-2 model config
model.cfg.embedding_dropout_prob = 0.1
model.cfg.attention_dropout_prob = 0.1
model.cfg.num_attention_heads = 16
model.cfg.hidden_size = 384
model.cfg.ffn_hidden_size = 1536
model.cfg.hidden_layers = 6
model.cfg.max_seq_length = 1024

train.input_placement_device = "cpu"
train.dist.pipeline_num_layers = model.cfg.hidden_layers

for ds in dataloader.train.dataset:
    ds.max_seq_length = model.cfg.max_seq_length

optim.lr = 1.5e-4
train.train_micro_batch_size = 4
train.amp.enabled = True
train.evaluation.evaluator = LazyCall(PPLEvaluator)()
train.output_dir = "./output/gpt2_output"

[03/05 09:39:00 libai]: Full config saved to test_logs/oneflow-28/NVIDIA_GeForce_RTX_3080_Ti/7d07caf/LibAI_gpt2_pretrain_graph_nl24_nah16_hs1024_FP16_actrue_DP2_MP2_PP2_zerotrue_stage2_mbs8_gbs128_acc8_1n8g/config.yaml
[03/05 09:39:00 lb.engine.default]: > compiling dataset index builder ...
make: Entering directory '/ssd/home/ouyangyu/libai_week_test/libai/libai/data/data_utils'
make: Nothing to be done for 'default'.
make: Leaving directory '/ssd/home/ouyangyu/libai_week_test/libai/libai/data/data_utils'
[03/05 09:39:01 lb.engine.default]: >>> done with dataset index builder. Compilation time: 0.055 seconds
[03/05 09:39:01 lb.engine.default]: >>> done with compiling. Compilation time: 0.057 seconds
[03/05 09:39:01 lb.engine.default]: Prepare training, validating, testing set
[03/05 09:39:01 lb.data.data_utils.indexed_dataset]: building dataset index ...
[03/05 09:39:01 lb.data.data_utils.indexed_dataset]: warming up index mmap file...
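Note that the values in opts take precedence over the config file, so this run trains the 24-layer, 1024-hidden model named in the output directory, not the 6-layer, 384-hidden one set in configs/gpt2_pretrain.py. The overrides also pin an 8-GPU 3D-parallel layout. A minimal consistency check of that arithmetic, in plain Python with every value copied from the opts above:

# Consistency check for the parallel layout encoded in the command-line opts.
# All values are copied from the Namespace above; nothing is queried at runtime.
world_size = 8                      # 1 node x 8 GPUs ("1n8g" in the output dir)
tensor_parallel_size = 2            # train.dist.tensor_parallel_size
pipeline_parallel_size = 2          # train.dist.pipeline_parallel_size

# Data-parallel size is whatever is left of the world after TP and PP.
data_parallel_size = world_size // (tensor_parallel_size * pipeline_parallel_size)
assert data_parallel_size == 2      # the "DP2" in the output-dir name

micro_batch_size = 8                # train.train_micro_batch_size
num_accumulation_steps = 8          # train.num_accumulation_steps
global_batch_size = micro_batch_size * data_parallel_size * num_accumulation_steps
assert global_batch_size == 128     # matches train.global_batch_size=128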
[03/05 09:39:01 lb.data.data_utils.indexed_dataset]: reading sizes...
[03/05 09:39:01 lb.data.data_utils.indexed_dataset]: reading pointers...
[03/05 09:39:01 lb.data.data_utils.indexed_dataset]: reading document index...
[03/05 09:39:01 lb.data.data_utils.indexed_dataset]: warming up data mmap file...
[03/05 09:39:01 lb.data.data_utils.indexed_dataset]: creating numpy buffer of mmap...
[03/05 09:39:01 lb.data.data_utils.indexed_dataset]: creating memory view of numpy buffer...
[03/05 09:39:01 lb.data.data_utils.indexed_dataset]: Finished creating indexed dataset in 0.075562 seconds
[03/05 09:39:01 lb.data.data_utils.indexed_dataset]: indexed dataset stats:
[03/05 09:39:01 lb.data.data_utils.indexed_dataset]: number of documents: 50000
[03/05 09:39:01 lb.data.data_utils.indexed_dataset]: number of sentences: 1249934
[03/05 09:39:01 lb.data.datasets.gpt_dataset]: > loading doc-idx mapping from ./data_test/gpt_data/loss_compara_content_sentence_gpt-2_indexmap_28160ns_1024sl_1234s_doc_idx.npy
[03/05 09:39:01 lb.data.datasets.gpt_dataset]: > loading sample-idx mapping from ./data_test/gpt_data/loss_compara_content_sentence_gpt-2_indexmap_28160ns_1024sl_1234s_sample_idx.npy
[03/05 09:39:01 lb.data.datasets.gpt_dataset]: > loading shuffle-idx mapping from ./data_test/gpt_data/loss_compara_content_sentence_gpt-2_indexmap_28160ns_1024sl_1234s_shuffle_idx.npy
[03/05 09:39:01 lb.data.datasets.gpt_dataset]: loaded indexed file in 0.012 seconds
[03/05 09:39:01 lb.data.datasets.gpt_dataset]: total number of samples: 57333
[03/05 09:39:01 lb.data.datasets.gpt_dataset]: total number of epochs: 1
[03/05 09:39:01 lb.data.datasets.gpt_dataset]: > loading doc-idx mapping from ./data_test/gpt_data/loss_compara_content_sentence_gpt-2_indexmap_64ns_1024sl_1234s_doc_idx.npy
[03/05 09:39:01 lb.data.datasets.gpt_dataset]: > loading sample-idx mapping from ./data_test/gpt_data/loss_compara_content_sentence_gpt-2_indexmap_64ns_1024sl_1234s_sample_idx.npy
[03/05 09:39:01 lb.data.datasets.gpt_dataset]: > loading shuffle-idx mapping from ./data_test/gpt_data/loss_compara_content_sentence_gpt-2_indexmap_64ns_1024sl_1234s_shuffle_idx.npy
[03/05 09:39:01 lb.data.datasets.gpt_dataset]: loaded indexed file in 0.001 seconds
[03/05 09:39:01 lb.data.datasets.gpt_dataset]: total number of samples: 57333
[03/05 09:39:01 lb.data.datasets.gpt_dataset]: total number of epochs: 1
[03/05 09:39:01 lb.data.datasets.gpt_dataset]: > loading doc-idx mapping from ./data_test/gpt_data/loss_compara_content_sentence_gpt-2_indexmap_64ns_1024sl_1234s_doc_idx.npy
[03/05 09:39:01 lb.data.datasets.gpt_dataset]: > loading sample-idx mapping from ./data_test/gpt_data/loss_compara_content_sentence_gpt-2_indexmap_64ns_1024sl_1234s_sample_idx.npy
[03/05 09:39:01 lb.data.datasets.gpt_dataset]: > loading shuffle-idx mapping from ./data_test/gpt_data/loss_compara_content_sentence_gpt-2_indexmap_64ns_1024sl_1234s_shuffle_idx.npy
[03/05 09:39:01 lb.data.datasets.gpt_dataset]: loaded indexed file in 0.001 seconds
[03/05 09:39:01 lb.data.datasets.gpt_dataset]: total number of samples: 57333
[03/05 09:39:01 lb.data.datasets.gpt_dataset]: total number of epochs: 1
[03/05 09:39:10 lb.engine.default]: Auto-scaling the config to train.train_iter=220, train.warmup_iter=0
[03/05 09:39:10 libai]: > Start building model...
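The "28160ns" in the train index-map filenames is the number of samples the run will consume: 220 iterations at a global batch of 128. Since the indexed dataset yields 57333 samples of length 1024 per epoch, a single pass covers it, which matches "total number of epochs: 1" above. A quick check, with all numbers taken from the log:

import math

train_iter = 220            # train.train_iter (after auto-scaling)
global_batch_size = 128     # train.global_batch_size
required_samples = train_iter * global_batch_size
assert required_samples == 28160        # the "28160ns" in the index-map filenames

samples_per_epoch = 57333   # "total number of samples" reported above
epochs_needed = math.ceil(required_samples / samples_per_epoch)
assert epochs_needed == 1               # "total number of epochs: 1"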
[03/05 09:39:12 lb.engine.default]: Model: GPTForPreTraining(
  (GPT_model): GPTModel(
    (embeddings): GPTEmbedding(
      (token_embeddings): VocabEmbedding(num_embeddings=50432, embedding_dim=1024)
      (position_embeddings): Embedding(num_embeddings=1024, embedding_dim=1024)
      (dropout): Dropout(p=0.1, inplace=False)
    )
    (transformer): Transformer(
      (layers): ModuleList(
        (0): TransformerLayer(
          (drop_path): Identity()
          (input_layernorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (self_attention): MultiheadAttention(
            hidden_size=1024, num_heads=16, is_cross_attention=False
            (dropout): Dropout(p=0.1, inplace=False)
            (query_key_value): Linear1D(in_features=1024, out_features=3072, bias=True, parallel=col)
            (dense): Linear1D(in_features=1024, out_features=1024, bias=True, parallel=row)
          )
          (post_attention_layernorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (mlp): MLP(
            bias_gelu_fusion=True, bias_dropout_fusion=True, dropout=0
            (dense_h_to_4h): Linear1D(in_features=1024, out_features=4096, bias=True, parallel=col)
            (dense_4h_to_h): Linear1D(in_features=4096, out_features=1024, bias=True, parallel=row)
          )
        )
        (1)-(23): [23 more TransformerLayer blocks, identical to (0); duplicates omitted]
      )
      (layernorm_f): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
    )
    (lm_head): LMLogits()
  )
  (loss_func): GPTLoss(
    (lm_loss): ParallelCrossEntropyLoss()
  )
)
[03/05 09:39:12 libai]: >>> done with building model. Building time: 1.995 seconds
WARNING [03/05 09:39:12 lb.scheduler.lr_scheduler]: warmup iters equals to zero, return CosineLR
[03/05 09:39:12 lb.engine.trainer]: Starting training from iteration 0
W20230305 09:39:12.651253 1882053 eager_local_op_interpreter.cpp:256] Casting a local tensor to a global tensor with Broadcast sbp will modify the data of input! If you want to keep the input local tensor unchanged, please set the arg copy to True.
[03/05 09:39:12 lb.models.utils.graph_base]: Start compiling the train graph which may take some time. Please wait for a moment ...
W20230305 09:39:36.583539 1882057 insert_nccl_logical_op_pass.cpp:1150] In Graph: GraphBase_0 Placement: cuda-@0:0-@1:1-@2:2-@3:3 the total_op_num = 1032 and has 2 different nccl stream which is possible to trigger cuda stream kernel launch upper limit. So the nccl logical kernel will from async to sync exec, which may affect performance.
[the same insert_nccl_logical_op_pass warning is emitted once per rank, 8 times in total; the 7 duplicate copies are omitted]
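The printed shapes correspond to a roughly 355M-parameter model (GPT-2 medium scale, with the vocabulary padded to 50432 for even tensor-parallel splits). A rough count from the module tree above; this is a sketch, and it assumes LMLogits reuses the token-embedding matrix (weight tying), so the head adds no weights of its own:

# Rough parameter count from the module shapes printed above.
# Assumption: LMLogits ties its projection to the token embeddings,
# so the output head contributes no extra parameters.
vocab, hidden, ffn, layers, max_pos = 50432, 1024, 4096, 24, 1024

def linear(n_in, n_out, bias=True):
    return n_in * n_out + (n_out if bias else 0)

per_layer = (
    linear(hidden, 3 * hidden)      # query_key_value (1024 -> 3072)
    + linear(hidden, hidden)        # attention output dense
    + linear(hidden, ffn)           # dense_h_to_4h
    + linear(ffn, hidden)           # dense_4h_to_h
    + 2 * 2 * hidden                # two LayerNorms (weight + bias each)
)
total = (
    vocab * hidden                  # token embeddings
    + max_pos * hidden              # position embeddings
    + layers * per_layer
    + 2 * hidden                    # final layernorm_f
)
print(f"{total:,}")                 # 355,002,368 -> ~355M parameters

# With tensor_parallel_size=2, each col-parallel Linear1D splits out_features
# (e.g. 3072 -> 1536 per rank) and each row-parallel one splits in_features,
# so every rank holds roughly half of each layer's weights.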
[interleaved nvidia-smi output from 8 concurrent per-rank queries at 09:48:23 omitted; the 09:48:28 sample below shows one row per GPU]

timestamp, name, driver_version, utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB]
2023/03/05 09:48:28.284, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 95 %, 18 %, 12288 MiB, 7315 MiB, 4738 MiB
2023/03/05 09:48:28.285, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 97 %, 20 %, 12288 MiB, 7389 MiB, 4664 MiB
2023/03/05 09:48:28.286, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 99 %, 21 %, 12288 MiB, 7389 MiB, 4664 MiB
2023/03/05 09:48:28.287, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 98 %, 21 %, 12288 MiB, 7389 MiB, 4664 MiB
2023/03/05 09:48:28.288, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 0 %, 0 %, 12288 MiB, 7481 MiB, 4572 MiB
2023/03/05 09:48:28.288, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 1 %, 1 %, 12288 MiB, 7501 MiB, 4552 MiB
2023/03/05 09:48:28.290, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 1 %, 1 %, 12288 MiB, 7501 MiB, 4552 MiB
2023/03/05 09:48:28.290, NVIDIA GeForce RTX 3080 Ti, 515.65.01, 0 %, 0 %, 12288 MiB, 7481 MiB, 4572 MiB

[03/05 09:48:33 lb.utils.events]: eta: 0:10:28 iteration: 99/220 consumed_samples: 12800 total_loss: 7.262 time: 5.2420 s/iter data_time: 0.0064 s/iter total_throughput: 24.42 samples/s lr: 8.74e-05
[03/05 09:57:17 lb.utils.events]: eta: 0:01:44 iteration: 199/220 consumed_samples: 25600 total_loss: 6.954 time: 5.2419 s/iter data_time: 0.0061 s/iter total_throughput: 24.42 samples/s lr: 4.81e-06
[03/05 09:59:02 lb.utils.events]: eta: 0:00:00 iteration: 219/220 consumed_samples: 28160 total_loss: 6.702 time: 5.2417 s/iter data_time: 0.0057 s/iter total_throughput: 24.42 samples/s lr: 1.51e-06
[03/05 09:59:02 lb.engine.hooks]: Overall training speed: 218 iterations in 0:19:02 (5.2417 s / it)
[03/05 09:59:02 lb.engine.hooks]: Total training time: 0:19:02 (0:00:00 on hooks)
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
oneflow-version(git_commit)=0.9.1.dev20230304+cu117
oneflow-commit(git_commit)=7d07caf
oneflow-libai(git_commit)=50a973dc5de635b8613ad7666c073c763e238850
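The logged throughput is internally consistent: at 5.2417 s per iteration and a 128-sample global batch, the run moves 128 / 5.2417 ≈ 24.42 samples/s, and the 218 timed iterations take about 19 minutes, matching the summary lines above. A quick check in Python, with the numbers copied from the log:

sec_per_iter = 5.2417       # "time: 5.2417 s/iter" from the final events line
global_batch_size = 128

throughput = global_batch_size / sec_per_iter
print(f"{throughput:.2f} samples/s")    # 24.42, as logged

timed_iters = 218                       # per "218 iterations in 0:19:02"
total_sec = timed_iters * sec_per_iter  # ~1142.7 s, i.e. about 0:19:02
print(f"{total_sec / 60:.1f} min")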