loaded library: /usr/lib/x86_64-linux-gnu/libibverbs.so.1
loaded library: /usr/lib/x86_64-linux-gnu/libibverbs.so.1
loaded library: /usr/lib/x86_64-linux-gnu/libibverbs.so.1
loaded library: /usr/lib/x86_64-linux-gnu/libibverbs.so.1
loaded library: /usr/lib/x86_64-linux-gnu/libibverbs.so.1
[07/05 09:03:08 libai]: Rank of current process: 0. World size: 4
[07/05 09:03:08 libai]: Command line arguments: Namespace(config_file='configs/gpt2_nl24_nah16_hs1024.py', eval_only=False, fast_dev_run=False, opts=['model.cfg.num_layers=24', 'train.dist.pipeline_num_layers=24', 'train.train_micro_batch_size=32', 'train.global_batch_size=256', 'train.dist.tensor_parallel_size=1', 'train.dist.pipeline_parallel_size=4', 'train.amp.enabled=true', 'train.activation_checkpoint.enabled=true', 'train.train_iter=220', 'train.log_period=100', 'train.output_dir=test_logs/01b1d32/1n4g/LibAI_gpt2_nl24_nah16_hs1024_FP16_actrue_mp1_pp4_mb32_gb256_1n4g_20220705_090306115011489'], resume=False)
[07/05 09:03:08 libai]: Contents of args.config_file=configs/gpt2_nl24_nah16_hs1024.py:

from libai.config import LazyCall
from libai.evaluation import PPLEvaluator
from .common.models.gpt import pretrain_model as model
from .common.train import train
from .common.optim import optim
from .common.data.gpt_dataset import dataloader, tokenization
from .common.models.graph import graph

# vocab_file = "/workspace/dataset/gpt2-vocab.json"
# merges_file = "/workspace/dataset/gpt2-merges.txt"
# data_prefix = "/workspace/dataset/loss_compara_content_sentence"
vocab_file = "/dataset/source/dataset/gpt2-vocab.json"
merges_file = "/dataset/source/dataset/gpt2-merges.txt"
data_prefix = "/dataset/source/dataset/loss_compara_content_sentence"

tokenization.tokenizer.vocab_file = vocab_file
tokenization.tokenizer.merges_file = merges_file
dataloader.train.dataset[0].data_prefix = data_prefix
dataloader.train.dataset[0].indexed_dataset.data_prefix = data_prefix
# dataloader.train.num_workers = 4

# GPT-2 model config
model.cfg.embedding_dropout_prob = 0.1
model.cfg.attention_dropout_prob = 0.1
model.cfg.num_attention_heads = 16
model.cfg.hidden_size = 1024
model.cfg.ffn_hidden_size = 4096
# model.cfg.num_layers = 24
model.cfg.max_seq_length = 1024
# model.cfg.initializer_range = 0.006
# model.cfg.bias_dropout_fusion = True
# model.cfg.bias_gelu_fusion = True
# model.cfg.scale_mask_softmax_fusion = True

train.input_placement_device = "cpu"

for ds in dataloader.train.dataset:
    ds.max_seq_length = model.cfg.max_seq_length

optim.lr = 1.5e-4

# train.dist.pipeline_num_layers = model.cfg.num_layers
train.test_micro_batch_size = 4
train.evaluation.evaluator = LazyCall(PPLEvaluator)()
train.evaluation.enabled = False
train.evaluation.eval_iter = 30

[07/05 09:03:08 libai]: Full config saved to test_logs/01b1d32/1n4g/LibAI_gpt2_nl24_nah16_hs1024_FP16_actrue_mp1_pp4_mb32_gb256_1n4g_20220705_090306115011489/config.yaml
[07/05 09:03:08 lb.engine.default]: > compiling dataset index builder ...
make: Entering directory '/dataset/xyn/libai_bench/libai/libai/data/data_utils'
make: Nothing to be done for 'default'.
make: Leaving directory '/dataset/xyn/libai_bench/libai/libai/data/data_utils'
[07/05 09:03:08 lb.engine.default]: >>> done with dataset index builder. Compilation time: 0.041 seconds
[07/05 09:03:08 lb.engine.default]: >>> done with compiling. Compilation time: 0.042 seconds
[07/05 09:03:09 lb.engine.default]: Prepare training, validating, testing set
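The opts list in the command line above is applied on top of this lazy config before training starts. As a rough illustration (a hypothetical sketch, not LibAI's actual implementation), dotted key=value overrides such as model.cfg.num_layers=24 can be folded into a nested config like this:

    import ast

    def apply_overrides(cfg: dict, opts: list) -> dict:
        # Each opt looks like "model.cfg.num_layers=24": walk the dotted
        # path, creating intermediate dicts, and set the leaf value.
        for opt in opts:
            key, _, raw = opt.partition("=")
            *parents, leaf = key.split(".")
            node = cfg
            for p in parents:
                node = node.setdefault(p, {})
            try:
                node[leaf] = ast.literal_eval(raw)  # "24" -> 24
            except (ValueError, SyntaxError):
                node[leaf] = raw                    # "true" stays a string here
        return cfg

    cfg = {"model": {"cfg": {"num_layers": 12}}}
    apply_overrides(cfg, ["model.cfg.num_layers=24", "train.dist.pipeline_parallel_size=4"])
    print(cfg["model"]["cfg"]["num_layers"])  # 24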
[07/05 09:03:09 lb.data.data_utils.indexed_dataset]: building dataset index ...
[07/05 09:03:09 lb.data.data_utils.indexed_dataset]: warming up index mmap file...
[07/05 09:03:09 lb.data.data_utils.indexed_dataset]: reading sizes...
[07/05 09:03:09 lb.data.data_utils.indexed_dataset]: reading pointers...
[07/05 09:03:09 lb.data.data_utils.indexed_dataset]: reading document index...
[07/05 09:03:09 lb.data.data_utils.indexed_dataset]: warming up data mmap file...
[07/05 09:03:09 lb.data.data_utils.indexed_dataset]: creating numpy buffer of mmap...
[07/05 09:03:09 lb.data.data_utils.indexed_dataset]: creating memory view of numpy buffer...
[07/05 09:03:09 lb.data.data_utils.indexed_dataset]: Finished creating indexed dataset in 0.091799 seconds
[07/05 09:03:09 lb.data.data_utils.indexed_dataset]: indexed dataset stats:
[07/05 09:03:09 lb.data.data_utils.indexed_dataset]: number of documents: 50000
[07/05 09:03:09 lb.data.data_utils.indexed_dataset]: number of sentences: 1249934
[07/05 09:03:09 lb.data.datasets.gpt_dataset]: > loading doc-idx mapping from /dataset/source/dataset/loss_compara_content_sentence_gpt-2_indexmap_56320ns_1024sl_1234s_doc_idx.npy
[07/05 09:03:09 lb.data.datasets.gpt_dataset]: > loading sample-idx mapping from /dataset/source/dataset/loss_compara_content_sentence_gpt-2_indexmap_56320ns_1024sl_1234s_sample_idx.npy
[07/05 09:03:09 lb.data.datasets.gpt_dataset]: > loading shuffle-idx mapping from /dataset/source/dataset/loss_compara_content_sentence_gpt-2_indexmap_56320ns_1024sl_1234s_shuffle_idx.npy
[07/05 09:03:09 lb.data.datasets.gpt_dataset]: loaded indexed file in 0.008 seconds
[07/05 09:03:09 lb.data.datasets.gpt_dataset]: total number of samples: 57333
[07/05 09:03:09 lb.data.datasets.gpt_dataset]: total number of epochs: 1
[07/05 09:03:09 lb.data.datasets.gpt_dataset]: > loading doc-idx mapping from /dataset/source/dataset/loss_compara_content_sentence_gpt-2_indexmap_4ns_1024sl_1234s_doc_idx.npy
[07/05 09:03:09 lb.data.datasets.gpt_dataset]: > loading sample-idx mapping from /dataset/source/dataset/loss_compara_content_sentence_gpt-2_indexmap_4ns_1024sl_1234s_sample_idx.npy
[07/05 09:03:09 lb.data.datasets.gpt_dataset]: > loading shuffle-idx mapping from /dataset/source/dataset/loss_compara_content_sentence_gpt-2_indexmap_4ns_1024sl_1234s_shuffle_idx.npy
[07/05 09:03:09 lb.data.datasets.gpt_dataset]: loaded indexed file in 0.003 seconds
[07/05 09:03:09 lb.data.datasets.gpt_dataset]: total number of samples: 57333
[07/05 09:03:09 lb.data.datasets.gpt_dataset]: total number of epochs: 1
[07/05 09:03:09 lb.data.datasets.gpt_dataset]: > loading doc-idx mapping from /dataset/source/dataset/loss_compara_content_sentence_gpt-2_indexmap_4ns_1024sl_1234s_doc_idx.npy
[07/05 09:03:09 lb.data.datasets.gpt_dataset]: > loading sample-idx mapping from /dataset/source/dataset/loss_compara_content_sentence_gpt-2_indexmap_4ns_1024sl_1234s_sample_idx.npy
[07/05 09:03:09 lb.data.datasets.gpt_dataset]: > loading shuffle-idx mapping from /dataset/source/dataset/loss_compara_content_sentence_gpt-2_indexmap_4ns_1024sl_1234s_shuffle_idx.npy
[07/05 09:03:09 lb.data.datasets.gpt_dataset]: loaded indexed file in 0.002 seconds
[07/05 09:03:09 lb.data.datasets.gpt_dataset]: total number of samples: 57333
[07/05 09:03:09 lb.data.datasets.gpt_dataset]: total number of epochs: 1
[07/05 09:03:10 lb.engine.default]: Auto-scaling the config to train.train_iter=220, train.warmup_iter=0
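A quick cross-check of the numbers in the mapping file names against the auto-scaled train_iter: the 56320ns suffix is exactly the number of samples the run consumes, and one epoch of the dataset (57333 samples) covers it.

    # Figures taken from the log above; the derivation itself is an assumption.
    train_iter = 220
    global_batch_size = 256
    required_samples = train_iter * global_batch_size   # 56320, the "56320ns" in the map names
    samples_per_epoch = 57333                           # reported by lb.data.datasets.gpt_dataset
    assert required_samples <= samples_per_epoch        # hence "total number of epochs: 1"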
[07/05 09:03:12 lb.engine.default]: Model: GPTForPreTraining(
  (GPT_model): GPTModel(
    (embeddings): GPTEmbedding(
      (token_embeddings): VocabEmbedding(num_embeddings=50304, embedding_dim=1024)
      (position_embeddings): Embedding(num_embeddings=1024, embedding_dim=1024)
      (dropout): Dropout(p=0.1, inplace=False)
    )
    (transformer): Transformer(
      (layers): ModuleList(
        (0-23): 24 x TransformerLayer(
          (drop_path): Identity()
          (input_layernorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (self_attention): MultiheadAttention(
            hidden_size=1024, num_heads=16, is_cross_attention=False
            (dropout): Dropout(p=0.1, inplace=False)
            (query_key_value): Linear1D(in_features=1024, out_features=3072, bias=True, parallel=col)
            (dense): Linear1D(in_features=1024, out_features=1024, bias=True, parallel=row)
          )
          (post_attention_layernorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (mlp): MLP(
            bias_gelu_fusion=True, bias_dropout_fusion=True, dropout=0
            (dense_h_to_4h): Linear1D(in_features=1024, out_features=4096, bias=True, parallel=col)
            (dense_4h_to_h): Linear1D(in_features=4096, out_features=1024, bias=True, parallel=row)
          )
        )
      )
      (layernorm_f): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
    )
    (lm_head): LMLogits()
  )
  (loss_func): GPTLoss(
    (lm_loss): ParallelCrossEntropyLoss()
  )
)
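Since all 24 TransformerLayer blocks are identical, the parameter count can be read directly off the repr. A back-of-the-envelope check (assuming the LMLogits head reuses the token-embedding weights, as in standard GPT-2):

    # Sizes read from the printed model; a tied lm_head adds no parameters.
    V, P, H, F, L = 50304, 1024, 1024, 4096, 24
    embeddings = V * H + P * H                  # token + position tables
    per_layer = (
        2 * 2 * H                               # two LayerNorms (weight + bias)
        + H * 3 * H + 3 * H                     # query_key_value, column-parallel
        + H * H + H                             # attention output dense, row-parallel
        + H * F + F                             # dense_h_to_4h
        + F * H + H                             # dense_4h_to_h
    )
    total = embeddings + L * per_layer + 2 * H  # plus the final layernorm_f
    print(f"{total / 1e6:.1f}M parameters")     # ~354.9M, GPT-2 medium scale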
WARNING [07/05 09:03:12 lb.scheduler.lr_scheduler]: warmup iters equal zero, returning CosineLR
[07/05 09:03:22 lb.engine.trainer]: Starting training from iteration 0
[07/05 09:03:22 lb.models.utils.graph_base]: Start compiling the train graph which may take some time. Please wait for a moment ...
timestamp, name, driver_version, utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB]
2022/07/05 09:13:40.549, Tesla V100-SXM2-32GB, 470.57.02, 100 %, 64 %, 32510 MiB, 16498 MiB, 16012 MiB
2022/07/05 09:13:40.551, Tesla V100-SXM2-32GB, 470.57.02, 100 %, 60 %, 32510 MiB, 17126 MiB, 15384 MiB
2022/07/05 09:13:40.552, Tesla V100-SXM2-32GB, 470.57.02, 100 %, 63 %, 32510 MiB, 17146 MiB, 15364 MiB
2022/07/05 09:13:40.553, Tesla V100-SXM2-32GB, 470.57.02, 100 %, 67 %, 32510 MiB, 16626 MiB, 15884 MiB
2022/07/05 09:13:40.554, Tesla V100-SXM2-32GB, 470.57.02, 0 %, 0 %, 32510 MiB, 32507 MiB, 3 MiB
2022/07/05 09:13:40.554, Tesla V100-SXM2-32GB, 470.57.02, 0 %, 0 %, 32510 MiB, 32507 MiB, 3 MiB
2022/07/05 09:13:40.559, Tesla V100-SXM2-32GB, 470.57.02, 0 %, 0 %, 32510 MiB, 32507 MiB, 3 MiB
2022/07/05 09:13:40.560, Tesla V100-SXM2-32GB, 470.57.02, 0 %, 0 %, 32510 MiB, 32507 MiB, 3 MiB
timestamp, name, driver_version, utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB]
2022/07/05 09:13:46.646, Tesla V100-SXM2-32GB, 470.57.02, 100 %, 63 %, 32510 MiB, 16498 MiB, 16012 MiB
2022/07/05 09:13:46.647, Tesla V100-SXM2-32GB, 470.57.02, 100 %, 64 %, 32510 MiB, 17126 MiB, 15384 MiB
2022/07/05 09:13:46.647, Tesla V100-SXM2-32GB, 470.57.02, 100 %, 64 %, 32510 MiB, 17146 MiB, 15364 MiB
2022/07/05 09:13:46.648, Tesla V100-SXM2-32GB, 470.57.02, 100 %, 73 %, 32510 MiB, 16626 MiB, 15884 MiB
2022/07/05 09:13:46.649, Tesla V100-SXM2-32GB, 470.57.02, 0 %, 0 %, 32510 MiB, 32507 MiB, 3 MiB
2022/07/05 09:13:46.649, Tesla V100-SXM2-32GB, 470.57.02, 0 %, 0 %, 32510 MiB, 32507 MiB, 3 MiB
2022/07/05 09:13:46.650, Tesla V100-SXM2-32GB, 470.57.02, 0 %, 0 %, 32510 MiB, 32507 MiB, 3 MiB
2022/07/05 09:13:46.650, Tesla V100-SXM2-32GB, 470.57.02, 0 %, 0 %, 32510 MiB, 32507 MiB, 3 MiB
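The memory pattern above matches the parallel layout: with tensor_parallel_size=1 and pipeline_parallel_size=4, four of the node's eight V100s each hold one pipeline stage (roughly 15-16 GiB used with AMP and activation checkpointing enabled), while the other four stay idle. A small sketch of the schedule shape implied by the batch settings (assuming a uniform layer split across stages; the actual micro-batch scheduling is not shown in this log):

    num_layers = 24
    pipeline_parallel_size = 4
    layers_per_stage = num_layers // pipeline_parallel_size      # 6

    global_batch_size = 256
    micro_batch_size = 32
    data_parallel_size = 1                                       # 4 GPUs / (tensor=1 * pipeline=4)
    num_micro_batches = global_batch_size // (micro_batch_size * data_parallel_size)
    print(layers_per_stage, num_micro_batches)                   # 6 layers/stage, 8 micro-batches/iter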
[07/05 09:13:52 lb.utils.events]: eta: 0:12:10  iteration: 99/220  consumed_samples: 25600  total_loss: 7.298  time: 6.0968 s/iter  data_time: 0.0049 s/iter  total_throughput: 41.99 samples/s  lr: 8.74e-05
[07/05 09:24:04 lb.utils.events]: eta: 0:02:02  iteration: 199/220  consumed_samples: 51200  total_loss: 7.044  time: 6.1077 s/iter  data_time: 0.0048 s/iter  total_throughput: 41.91 samples/s  lr: 4.81e-06
[07/05 09:26:06 lb.utils.events]: eta: 0:00:00  iteration: 219/220  consumed_samples: 56320  total_loss: 6.826  time: 6.1091 s/iter  data_time: 0.0049 s/iter  total_throughput: 41.90 samples/s  lr: 1.51e-06
[07/05 09:26:06 lb.engine.hooks]: Overall training speed: 218 iterations in 0:22:11 (6.1091 s / it)
[07/05 09:26:06 lb.engine.hooks]: Total training time: 0:22:11 (0:00:00 on hooks)
*****************************************
Setting the OMP_NUM_THREADS environment variable to 1 for each process by default, to avoid overloading your system; please tune the variable further for optimal performance in your application as needed.
*****************************************
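A closing sanity check: the reported throughput and total training time follow directly from the iteration timing (values taken from the log; the arithmetic is just a consistency check).

    global_batch_size = 256
    sec_per_iter = 6.1091
    print(f"{global_batch_size / sec_per_iter:.2f} samples/s")   # ~41.90, matches total_throughput

    measured_iters = 218                                         # iterations timed by the hook
    total_sec = measured_iters * sec_per_iter                    # ~1331.8 s
    print(f"{int(total_sec // 3600)}:{int(total_sec % 3600 // 60):02d}:{int(total_sec % 60):02d}")  # 0:22:11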