hanasim committed on
Commit 9c6d605
1 Parent(s): 933d0d0

Model save

README.md ADDED
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- common_voice_16_0
metrics:
- wer
model-index:
- name: breeze-dsw-tiny-id
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: common_voice_16_0
      type: common_voice_16_0
      config: id
      split: test
      args: id
    metrics:
    - name: Wer
      type: wer
      value: 45.243352654338025
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# breeze-dsw-tiny-id

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the common_voice_16_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7109
- Wer: 45.2434
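
A minimal inference sketch, assuming the checkpoint is published as `hanasim/breeze-dsw-tiny-id` (adjust the repository id if it lives elsewhere; the audio path is a placeholder):

```python
from transformers import pipeline

# Assumed repository id for this fine-tuned Whisper tiny checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="hanasim/breeze-dsw-tiny-id",
)

# "sample_id.wav" is a placeholder for any Indonesian audio clip;
# the pipeline decodes and resamples it to 16 kHz for Whisper.
result = asr(
    "sample_id.wav",
    generate_kwargs={"language": "indonesian", "task": "transcribe"},
)
print(result["text"])
```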

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
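
The training log in this commit points at `mozilla-foundation/common_voice_16_0` (config `id`, `train`/`test` splits, streamed, with `sentence` as the text column). A rough sketch of loading the data that way; the dataset is gated, so a logged-in Hub account that has accepted its terms is assumed:

```python
from datasets import Audio, load_dataset

# Streaming load of the Indonesian config, mirroring the training command.
train = load_dataset(
    "mozilla-foundation/common_voice_16_0", "id", split="train", streaming=True
)
test = load_dataset(
    "mozilla-foundation/common_voice_16_0", "id", split="test", streaming=True
)

# Whisper expects 16 kHz audio; the transcript lives in the "sentence" column.
train = train.cast_column("audio", Audio(sampling_rate=16_000))
print(next(iter(train))["sentence"])
```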

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
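
These values correspond to a standard `Seq2SeqTrainingArguments` setup. A sketch of how they map onto it, with the `output_dir` assumed and the eval/save cadence of 200 steps taken from the accompanying training log rather than this list:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./breeze-dsw-tiny-id",   # assumed local path
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    learning_rate=1e-5,
    warmup_steps=500,
    max_steps=2000,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                           # "Native AMP" mixed precision
    evaluation_strategy="steps",
    eval_steps=200,                      # from the training log below
    save_steps=200,
    predict_with_generate=True,
    metric_for_best_model="wer",
    greater_is_better=False,
    load_best_model_at_end=True,
)
# The Adam betas/epsilon in the list above are the Trainer defaults.
```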

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.99          | 0.1   | 100  | 0.8486          | 54.1724 |
| 0.7896        | 1.04  | 200  | 0.7578          | 48.3393 |
| 0.4164        | 1.14  | 300  | 0.7388          | 49.2594 |
| 0.5456        | 2.09  | 400  | 0.7178          | 46.1266 |
| 0.476         | 3.03  | 500  | 0.7109          | 45.2434 |
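
The `Wer` column is word error rate expressed as a percentage. A minimal sketch of computing it with the `evaluate` library (the strings below are placeholders, not actual model output):

```python
import evaluate

wer_metric = evaluate.load("wer")

# Placeholder transcription and reference, purely for illustration.
predictions = ["selamat pagi dunia"]
references = ["selamat pagi semua dunia"]

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}%")  # percentage, as reported in the table above
```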

### Framework versions

- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0
breeze-dsw-tiny-id.log CHANGED
@@ -1,17 +1,17 @@
1
- [2024-01-12 16:30:04,760] [INFO] [real_accelerator.py:161:get_accelerator] Setting ds_accelerator to cuda (auto detect)
2
- [2024-01-12 16:30:05,517] [WARNING] [runner.py:202:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
3
- [2024-01-12 16:30:05,517] [INFO] [runner.py:571:main] cmd = /home/sp-operator/hf_env/bin/python3 -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMF19 --master_addr=127.0.0.1 --master_port=29500 --enable_each_rank_log=None /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/run_speech_recognition_seq2seq_streaming.py --deepspeed=/cosmos/home/sp-operator/ai/training/models/huggingface/scripts/ds_config.json --model_name_or_path=openai/whisper-tiny --dataset_name=mozilla-foundation/common_voice_16_0 --dataset_config_name=id --language=Indonesian --train_split_name=train --eval_split_name=test --model_index_name=Breeze DSW Indonesian - tiny --max_steps=1000 --output_dir=/cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id --per_device_train_batch_size=32 --per_device_eval_batch_size=16 --logging_steps=25 --learning_rate=1e-5 --warmup_steps=500 --evaluation_strategy=steps --eval_steps=100 --save_strategy=steps --save_steps=100 --generation_max_length=225 --length_column_name=input_length --max_duration_in_seconds=30 --text_column_name=sentence --freeze_feature_encoder=False --report_to=tensorboard --metric_for_best_model=wer --greater_is_better=False --load_best_model_at_end --gradient_checkpointing --fp16 --overwrite_output_dir --do_train --do_eval --predict_with_generate --do_normalize_eval --streaming --use_auth_token --push_to_hub
4
- [2024-01-12 16:30:07,172] [INFO] [real_accelerator.py:161:get_accelerator] Setting ds_accelerator to cuda (auto detect)
5
- [2024-01-12 16:30:07,952] [INFO] [launch.py:145:main] WORLD INFO DICT: {'localhost': [0]}
6
- [2024-01-12 16:30:07,952] [INFO] [launch.py:151:main] nnodes=1, num_local_procs=1, node_rank=0
7
- [2024-01-12 16:30:07,952] [INFO] [launch.py:162:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0]})
8
- [2024-01-12 16:30:07,952] [INFO] [launch.py:163:main] dist_world_size=1
9
- [2024-01-12 16:30:07,952] [INFO] [launch.py:165:main] Setting CUDA_VISIBLE_DEVICES=0
10
- [2024-01-12 16:30:10,956] [INFO] [real_accelerator.py:161:get_accelerator] Setting ds_accelerator to cuda (auto detect)
11
- [2024-01-12 16:30:11,158] [INFO] [comm.py:637:init_distributed] cdb=None
12
- [2024-01-12 16:30:11,158] [INFO] [comm.py:668:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
13
- 01/12/2024 16:30:11 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1distributed training: True, 16-bits training: True
14
- 01/12/2024 16:30:11 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(
15
  _n_gpu=1,
16
  adafactor=False,
17
  adam_beta1=0.9,
@@ -39,7 +39,7 @@ do_predict=False,
39
  do_train=True,
40
  eval_accumulation_steps=None,
41
  eval_delay=0,
42
- eval_steps=100,
43
  evaluation_strategy=steps,
44
  fp16=True,
45
  fp16_backend=auto,
@@ -78,7 +78,7 @@ local_rank=0,
78
  log_level=passive,
79
  log_level_replica=warning,
80
  log_on_each_node=True,
81
- logging_dir=/cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/runs/Jan12_16-30-10_knight,
82
  logging_first_step=False,
83
  logging_nan_inf_filter=True,
84
  logging_steps=25,
@@ -86,7 +86,7 @@ logging_strategy=steps,
86
  lr_scheduler_kwargs={},
87
  lr_scheduler_type=linear,
88
  max_grad_norm=1.0,
89
- max_steps=1000,
90
  metric_for_best_model=wer,
91
  mp_parameters=,
92
  neftune_noise_alpha=None,
@@ -95,7 +95,7 @@ num_train_epochs=3.0,
95
  optim=adamw_torch,
96
  optim_args=None,
97
  output_dir=/cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id,
98
- overwrite_output_dir=True,
99
  past_index=-1,
100
  per_device_eval_batch_size=16,
101
  per_device_train_batch_size=32,
@@ -113,7 +113,7 @@ run_name=/cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../bree
113
  save_on_each_node=False,
114
  save_only_model=False,
115
  save_safetensors=True,
116
- save_steps=100,
117
  save_strategy=steps,
118
  save_total_limit=None,
119
  seed=42,
@@ -135,7 +135,7 @@ warmup_ratio=0.0,
135
  warmup_steps=500,
136
  weight_decay=0.0,
137
  )
138
- 01/12/2024 16:30:11 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(
139
  _n_gpu=1,
140
  adafactor=False,
141
  adam_beta1=0.9,
@@ -163,7 +163,7 @@ do_predict=False,
163
  do_train=True,
164
  eval_accumulation_steps=None,
165
  eval_delay=0,
166
- eval_steps=100,
167
  evaluation_strategy=steps,
168
  fp16=True,
169
  fp16_backend=auto,
@@ -202,7 +202,7 @@ local_rank=0,
202
  log_level=passive,
203
  log_level_replica=warning,
204
  log_on_each_node=True,
205
- logging_dir=/cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/runs/Jan12_16-30-10_knight,
206
  logging_first_step=False,
207
  logging_nan_inf_filter=True,
208
  logging_steps=25,
@@ -210,7 +210,7 @@ logging_strategy=steps,
210
  lr_scheduler_kwargs={},
211
  lr_scheduler_type=linear,
212
  max_grad_norm=1.0,
213
- max_steps=1000,
214
  metric_for_best_model=wer,
215
  mp_parameters=,
216
  neftune_noise_alpha=None,
@@ -219,7 +219,7 @@ num_train_epochs=3.0,
219
  optim=adamw_torch,
220
  optim_args=None,
221
  output_dir=/cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id,
222
- overwrite_output_dir=True,
223
  past_index=-1,
224
  per_device_eval_batch_size=16,
225
  per_device_train_batch_size=32,
@@ -237,7 +237,7 @@ run_name=/cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../bree
237
  save_on_each_node=False,
238
  save_only_model=False,
239
  save_safetensors=True,
240
- save_steps=100,
241
  save_strategy=steps,
242
  save_total_limit=None,
243
  seed=42,
@@ -259,41 +259,42 @@ warmup_ratio=0.0,
259
  warmup_steps=500,
260
  weight_decay=0.0,
261
  )
262
- [2024-01-12 16:30:24,445] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed info: version=0.12.6, git-hash=unknown, git-branch=unknown
263
- [2024-01-12 16:30:25,203] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False
 
264
  Installed CUDA version 12.3 does not match the version torch was compiled with 12.1 but since the APIs are compatible, accepting this combination
265
  [1/4] /usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=cpu_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/home/sp-operator/hf_env/lib/python3.10/site-packages/deepspeed/ops/csrc/includes -I/usr/local/cuda/include -isystem /home/sp-operator/hf_env/lib/python3.10/site-packages/torch/include -isystem /home/sp-operator/hf_env/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /home/sp-operator/hf_env/lib/python3.10/site-packages/torch/include/TH -isystem /home/sp-operator/hf_env/lib/python3.10/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /usr/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_61,code=compute_61 -gencode=arch=compute_61,code=sm_61 --compiler-options '-fPIC' -O3 --use_fast_math -std=c++17 -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_HALF2_OPERATORS__ -gencode=arch=compute_61,code=sm_61 -gencode=arch=compute_61,code=compute_61 -c /home/sp-operator/hf_env/lib/python3.10/site-packages/deepspeed/ops/csrc/common/custom_cuda_kernel.cu -o custom_cuda_kernel.cuda.o
266
  [2/4] c++ -MMD -MF cpu_adam.o.d -DTORCH_EXTENSION_NAME=cpu_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/home/sp-operator/hf_env/lib/python3.10/site-packages/deepspeed/ops/csrc/includes -I/usr/local/cuda/include -isystem /home/sp-operator/hf_env/lib/python3.10/site-packages/torch/include -isystem /home/sp-operator/hf_env/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /home/sp-operator/hf_env/lib/python3.10/site-packages/torch/include/TH -isystem /home/sp-operator/hf_env/lib/python3.10/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /usr/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++17 -O3 -std=c++17 -g -Wno-reorder -L/usr/local/cuda/lib64 -lcudart -lcublas -g -march=native -fopenmp -D__AVX256__ -D__ENABLE_CUDA__ -c /home/sp-operator/hf_env/lib/python3.10/site-packages/deepspeed/ops/csrc/adam/cpu_adam.cpp -o cpu_adam.o
267
  [3/4] c++ -MMD -MF cpu_adam_impl.o.d -DTORCH_EXTENSION_NAME=cpu_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/home/sp-operator/hf_env/lib/python3.10/site-packages/deepspeed/ops/csrc/includes -I/usr/local/cuda/include -isystem /home/sp-operator/hf_env/lib/python3.10/site-packages/torch/include -isystem /home/sp-operator/hf_env/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /home/sp-operator/hf_env/lib/python3.10/site-packages/torch/include/TH -isystem /home/sp-operator/hf_env/lib/python3.10/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /usr/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++17 -O3 -std=c++17 -g -Wno-reorder -L/usr/local/cuda/lib64 -lcudart -lcublas -g -march=native -fopenmp -D__AVX256__ -D__ENABLE_CUDA__ -c /home/sp-operator/hf_env/lib/python3.10/site-packages/deepspeed/ops/csrc/adam/cpu_adam_impl.cpp -o cpu_adam_impl.o
268
  [4/4] c++ cpu_adam.o cpu_adam_impl.o custom_cuda_kernel.cuda.o -shared -lcurand -L/home/sp-operator/hf_env/lib/python3.10/site-packages/torch/lib -lc10 -lc10_cuda -ltorch_cpu -ltorch_cuda -ltorch -ltorch_python -L/usr/local/cuda/lib64 -lcudart -o cpu_adam.so
269
- Time to load cpu_adam op: 27.245380878448486 seconds
270
  Adam Optimizer #0 is created with AVX2 arithmetic capability.
271
  Config: alpha=0.000010, betas=(0.900000, 0.999000), weight_decay=0.000000, adam_w=1
272
- [2024-01-12 16:30:54,187] [INFO] [logging.py:96:log_dist] [Rank 0] Using DeepSpeed Optimizer param name adamw as basic optimizer
273
- [2024-01-12 16:30:54,187] [INFO] [logging.py:96:log_dist] [Rank 0] Removing param_group that has no 'params' in the basic Optimizer
274
- [2024-01-12 16:30:54,191] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Basic Optimizer = DeepSpeedCPUAdam
275
- [2024-01-12 16:30:54,191] [INFO] [utils.py:56:is_zero_supported_optimizer] Checking ZeRO support for optimizer=DeepSpeedCPUAdam type=<class 'deepspeed.ops.adam.cpu_adam.DeepSpeedCPUAdam'>
276
- [2024-01-12 16:30:54,191] [INFO] [logging.py:96:log_dist] [Rank 0] Creating torch.float16 ZeRO stage 2 optimizer
277
- [2024-01-12 16:30:54,192] [INFO] [stage_1_and_2.py:148:__init__] Reduce bucket size 200000000
278
- [2024-01-12 16:30:54,192] [INFO] [stage_1_and_2.py:149:__init__] Allgather bucket size 200000000
279
- [2024-01-12 16:30:54,192] [INFO] [stage_1_and_2.py:150:__init__] CPU Offload: True
280
- [2024-01-12 16:30:54,192] [INFO] [stage_1_and_2.py:151:__init__] Round robin gradient partitioning: False
281
- [2024-01-12 16:30:54,469] [INFO] [utils.py:791:see_memory_usage] Before initializing optimizer states
282
- [2024-01-12 16:30:54,470] [INFO] [utils.py:792:see_memory_usage] MA 0.11 GB Max_MA 0.11 GB CA 0.13 GB Max_CA 0 GB
283
- [2024-01-12 16:30:54,470] [INFO] [utils.py:799:see_memory_usage] CPU Virtual Memory: used = 3.57 GB, percent = 5.7%
284
- [2024-01-12 16:30:54,736] [INFO] [utils.py:791:see_memory_usage] After initializing optimizer states
285
- [2024-01-12 16:30:54,737] [INFO] [utils.py:792:see_memory_usage] MA 0.11 GB Max_MA 0.11 GB CA 0.13 GB Max_CA 0 GB
286
- [2024-01-12 16:30:54,737] [INFO] [utils.py:799:see_memory_usage] CPU Virtual Memory: used = 4.1 GB, percent = 6.5%
287
- [2024-01-12 16:30:54,737] [INFO] [stage_1_and_2.py:516:__init__] optimizer state initialized
288
- [2024-01-12 16:30:54,814] [INFO] [utils.py:791:see_memory_usage] After initializing ZeRO optimizer
289
- [2024-01-12 16:30:54,815] [INFO] [utils.py:792:see_memory_usage] MA 0.11 GB Max_MA 0.11 GB CA 0.13 GB Max_CA 0 GB
290
- [2024-01-12 16:30:54,815] [INFO] [utils.py:799:see_memory_usage] CPU Virtual Memory: used = 4.1 GB, percent = 6.5%
291
- [2024-01-12 16:30:54,819] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Final Optimizer = adamw
292
- [2024-01-12 16:30:54,819] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed using configured LR scheduler = WarmupDecayLR
293
- [2024-01-12 16:30:54,819] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed LR Scheduler = <deepspeed.runtime.lr_schedules.WarmupDecayLR object at 0x7fbfbe6be260>
294
- [2024-01-12 16:30:54,819] [INFO] [logging.py:96:log_dist] [Rank 0] step=0, skipped=0, lr=[1e-05], mom=[[0.9, 0.999]]
295
- [2024-01-12 16:30:54,820] [INFO] [config.py:984:print] DeepSpeedEngine configuration:
296
- [2024-01-12 16:30:54,820] [INFO] [config.py:988:print] activation_checkpointing_config {
297
  "partition_activations": false,
298
  "contiguous_memory_optimization": false,
299
  "cpu_checkpointing": false,
@@ -301,10 +302,10 @@ Config: alpha=0.000010, betas=(0.900000, 0.999000), weight_decay=0.000000, adam_
301
  "synchronize_checkpoint_boundary": false,
302
  "profile": false
303
  }
304
- [2024-01-12 16:30:54,820] [INFO] [config.py:988:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True}
305
- [2024-01-12 16:30:54,820] [INFO] [config.py:988:print] amp_enabled .................. False
306
- [2024-01-12 16:30:54,820] [INFO] [config.py:988:print] amp_params ................... False
307
- [2024-01-12 16:30:54,820] [INFO] [config.py:988:print] autotuning_config ............ {
308
  "enabled": false,
309
  "start_step": null,
310
  "end_step": null,
@@ -329,31 +330,31 @@ Config: alpha=0.000010, betas=(0.900000, 0.999000), weight_decay=0.000000, adam_
329
  "min_train_micro_batch_size_per_gpu": 1,
330
  "num_tuning_micro_batch_sizes": 3
331
  }
332
- [2024-01-12 16:30:54,820] [INFO] [config.py:988:print] bfloat16_enabled ............. False
333
- [2024-01-12 16:30:54,820] [INFO] [config.py:988:print] checkpoint_parallel_write_pipeline False
334
- [2024-01-12 16:30:54,820] [INFO] [config.py:988:print] checkpoint_tag_validation_enabled True
335
- [2024-01-12 16:30:54,820] [INFO] [config.py:988:print] checkpoint_tag_validation_fail False
336
- [2024-01-12 16:30:54,820] [INFO] [config.py:988:print] comms_config ................. <deepspeed.comm.config.DeepSpeedCommsConfig object at 0x7fbfe623d510>
337
- [2024-01-12 16:30:54,820] [INFO] [config.py:988:print] communication_data_type ...... None
338
- [2024-01-12 16:30:54,820] [INFO] [config.py:988:print] compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}}
339
- [2024-01-12 16:30:54,820] [INFO] [config.py:988:print] curriculum_enabled_legacy .... False
340
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] curriculum_params_legacy ..... False
341
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] data_efficiency_config ....... {'enabled': False, 'seed': 1234, 'data_sampling': {'enabled': False, 'num_epochs': 1000, 'num_workers': 0, 'curriculum_learning': {'enabled': False}}, 'data_routing': {'enabled': False, 'random_ltd': {'enabled': False, 'layer_token_lr_schedule': {'enabled': False}}}}
342
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] data_efficiency_enabled ...... False
343
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] dataloader_drop_last ......... False
344
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] disable_allgather ............ False
345
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] dump_state ................... False
346
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] dynamic_loss_scale_args ...... {'init_scale': 65536, 'scale_window': 1000, 'delayed_shift': 2, 'consecutive_hysteresis': False, 'min_scale': 1}
347
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] eigenvalue_enabled ........... False
348
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] eigenvalue_gas_boundary_resolution 1
349
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] eigenvalue_layer_name ........ bert.encoder.layer
350
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] eigenvalue_layer_num ......... 0
351
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] eigenvalue_max_iter .......... 100
352
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] eigenvalue_stability ......... 1e-06
353
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] eigenvalue_tol ............... 0.01
354
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] eigenvalue_verbose ........... False
355
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] elasticity_enabled ........... False
356
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] flops_profiler_config ........ {
357
  "enabled": false,
358
  "recompute_fwd_factor": 0.0,
359
  "profile_step": 1,
@@ -362,24 +363,24 @@ Config: alpha=0.000010, betas=(0.900000, 0.999000), weight_decay=0.000000, adam_
362
  "detailed": true,
363
  "output_file": null
364
  }
365
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] fp16_auto_cast ............... False
366
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] fp16_enabled ................. True
367
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] fp16_master_weights_and_gradients False
368
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] global_rank .................. 0
369
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] grad_accum_dtype ............. None
370
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] gradient_accumulation_steps .. 1
371
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] gradient_clipping ............ 1.0
372
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] gradient_predivide_factor .... 1.0
373
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] graph_harvesting ............. False
374
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] hybrid_engine ................ enabled=False max_out_tokens=512 inference_tp_size=1 release_inference_cache=False pin_parameters=True tp_gather_partition_size=8
375
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] initial_dynamic_scale ........ 65536
376
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] load_universal_checkpoint .... False
377
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] loss_scale ................... 0
378
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] memory_breakdown ............. False
379
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] mics_hierarchial_params_gather False
380
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] mics_shard_size .............. -1
381
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] monitor_config ............... tensorboard=TensorBoardConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') wandb=WandbConfig(enabled=False, group=None, team=None, project='deepspeed') csv_monitor=CSVConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') enabled=False
382
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] nebula_config ................ {
383
  "enabled": false,
384
  "persistent_storage_path": null,
385
  "persistent_time_interval": 100,
@@ -387,32 +388,32 @@ Config: alpha=0.000010, betas=(0.900000, 0.999000), weight_decay=0.000000, adam_
387
  "enable_nebula_load": true,
388
  "load_path": null
389
  }
390
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] optimizer_legacy_fusion ...... False
391
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] optimizer_name ............... adamw
392
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] optimizer_params ............. {'lr': 1e-05, 'betas': [0.9, 0.999], 'eps': 1e-08, 'weight_decay': 0.0}
393
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0, 'pipe_partitioned': True, 'grad_partitioned': True}
394
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] pld_enabled .................. False
395
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] pld_params ................... False
396
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] prescale_gradients ........... False
397
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] scheduler_name ............... WarmupDecayLR
398
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] scheduler_params ............. {'last_batch_iteration': -1, 'total_num_steps': 1000, 'warmup_min_lr': 0, 'warmup_max_lr': 1e-05, 'warmup_num_steps': 500}
399
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] seq_parallel_communication_data_type torch.float32
400
- [2024-01-12 16:30:54,821] [INFO] [config.py:988:print] sparse_attention ............. None
401
- [2024-01-12 16:30:54,822] [INFO] [config.py:988:print] sparse_gradients_enabled ..... False
402
- [2024-01-12 16:30:54,822] [INFO] [config.py:988:print] steps_per_print .............. inf
403
- [2024-01-12 16:30:54,822] [INFO] [config.py:988:print] train_batch_size ............. 32
404
- [2024-01-12 16:30:54,822] [INFO] [config.py:988:print] train_micro_batch_size_per_gpu 32
405
- [2024-01-12 16:30:54,822] [INFO] [config.py:988:print] use_data_before_expert_parallel_ False
406
- [2024-01-12 16:30:54,822] [INFO] [config.py:988:print] use_node_local_storage ....... False
407
- [2024-01-12 16:30:54,822] [INFO] [config.py:988:print] wall_clock_breakdown ......... False
408
- [2024-01-12 16:30:54,822] [INFO] [config.py:988:print] weight_quantization_config ... None
409
- [2024-01-12 16:30:54,822] [INFO] [config.py:988:print] world_size ................... 1
410
- [2024-01-12 16:30:54,822] [INFO] [config.py:988:print] zero_allow_untested_optimizer False
411
- [2024-01-12 16:30:54,822] [INFO] [config.py:988:print] zero_config .................. stage=2 contiguous_gradients=True reduce_scatter=True reduce_bucket_size=200000000 use_multi_rank_bucket_allreduce=True allgather_partitions=True allgather_bucket_size=200000000 overlap_comm=True load_from_fp32_weights=True elastic_checkpoint=False offload_param=None offload_optimizer=DeepSpeedZeroOffloadOptimizerConfig(device='cpu', nvme_path=None, buffer_count=4, pin_memory=True, pipeline=False, pipeline_read=False, pipeline_write=False, fast_init=False, ratio=1.0) sub_group_size=1,000,000,000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=50,000,000 param_persistence_threshold=100,000 model_persistence_threshold=sys.maxsize max_live_parameters=1,000,000,000 max_reuse_distance=1,000,000,000 gather_16bit_weights_on_model_save=False stage3_gather_fp16_weights_on_model_save=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False zero_hpz_partition_size=1 zero_quantized_weights=False zero_quantized_nontrainable_weights=False zero_quantized_gradients=False mics_shard_size=-1 mics_hierarchical_params_gather=False memory_efficient_linear=True pipeline_loading_checkpoint=False override_module_apply=True
412
- [2024-01-12 16:30:54,822] [INFO] [config.py:988:print] zero_enabled ................. True
413
- [2024-01-12 16:30:54,822] [INFO] [config.py:988:print] zero_force_ds_cpu_optimizer .. True
414
- [2024-01-12 16:30:54,822] [INFO] [config.py:988:print] zero_optimization_stage ...... 2
415
- [2024-01-12 16:30:54,822] [INFO] [config.py:974:print_user_config] json = {
416
  "fp16": {
417
  "enabled": true,
418
  "loss_scale": 0,
@@ -434,7 +435,7 @@ Config: alpha=0.000010, betas=(0.900000, 0.999000), weight_decay=0.000000, adam_
434
  "type": "WarmupDecayLR",
435
  "params": {
436
  "last_batch_iteration": -1,
437
- "total_num_steps": 1000,
438
  "warmup_min_lr": 0,
439
  "warmup_max_lr": 1e-05,
440
  "warmup_num_steps": 500
@@ -462,72 +463,20 @@ Config: alpha=0.000010, betas=(0.900000, 0.999000), weight_decay=0.000000, adam_
462
  "enabled": false
463
  }
464
  }
465
- [2024-01-12 16:31:18,456] [INFO] [loss_scaler.py:190:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 65536, but hysteresis is 2. Reducing hysteresis to 1
466
- [2024-01-12 16:31:30,969] [INFO] [loss_scaler.py:183:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 65536, reducing to 32768
467
- [2024-01-12 16:31:43,512] [INFO] [loss_scaler.py:183:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 32768, reducing to 16384
468
- [2024-01-12 16:31:56,986] [INFO] [loss_scaler.py:183:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 16384, reducing to 8192
469
- {'loss': 1.7847, 'learning_rate': 4.898977360288234e-06, 'epoch': 0.03}
470
- {'loss': 1.3566, 'learning_rate': 6.160712527409633e-06, 'epoch': 0.05}
471
- {'loss': 1.1407, 'learning_rate': 6.85912902234906e-06, 'epoch': 0.07}
472
- {'loss': 0.99, 'learning_rate': 7.344547104469332e-06, 'epoch': 0.1}
473
- {'eval_loss': 0.8486328125, 'eval_wer': 54.17241696568221, 'eval_runtime': 1294.216, 'eval_samples_per_second': 2.814, 'eval_steps_per_second': 0.176, 'epoch': 0.1}
474
- [2024-01-12 17:14:22,872] [INFO] [logging.py:96:log_dist] [Rank 0] [Torch] Checkpoint global_step100 is about to be saved!
475
- [2024-01-12 17:14:22,879] [INFO] [logging.py:96:log_dist] [Rank 0] Saving model checkpoint: /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/tmp-checkpoint-100/global_step100/mp_rank_00_model_states.pt
476
- [2024-01-12 17:14:22,879] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/tmp-checkpoint-100/global_step100/mp_rank_00_model_states.pt...
477
- [2024-01-12 17:14:23,774] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/tmp-checkpoint-100/global_step100/mp_rank_00_model_states.pt.
478
- [2024-01-12 17:14:23,782] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/tmp-checkpoint-100/global_step100/zero_pp_rank_0_mp_rank_00_optim_states.pt...
479
- [2024-01-12 17:14:28,538] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/tmp-checkpoint-100/global_step100/zero_pp_rank_0_mp_rank_00_optim_states.pt.
480
- [2024-01-12 17:14:28,695] [INFO] [engine.py:3431:_save_zero_checkpoint] zero checkpoint saved /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/tmp-checkpoint-100/global_step100/zero_pp_rank_0_mp_rank_00_optim_states.pt
481
- [2024-01-12 17:14:28,695] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step100 is ready now!
482
- {'loss': 0.765, 'learning_rate': 7.716963756434345e-06, 'epoch': 0.12}
483
- {'loss': 0.5849, 'learning_rate': 8.019180844200955e-06, 'epoch': 0.15}
484
- {'loss': 0.7834, 'learning_rate': 8.27351214279797e-06, 'epoch': 1.02}
485
- {'loss': 0.7896, 'learning_rate': 8.49307723936858e-06, 'epoch': 1.04}
486
- {'eval_loss': 0.7578125, 'eval_wer': 48.339313644309506, 'eval_runtime': 1161.7981, 'eval_samples_per_second': 3.135, 'eval_steps_per_second': 0.196, 'epoch': 1.04}
487
- [2024-01-12 17:55:30,694] [INFO] [logging.py:96:log_dist] [Rank 0] [Torch] Checkpoint global_step200 is about to be saved!
488
- [2024-01-12 17:55:30,700] [INFO] [logging.py:96:log_dist] [Rank 0] Saving model checkpoint: /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/tmp-checkpoint-200/global_step200/mp_rank_00_model_states.pt
489
- [2024-01-12 17:55:30,701] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/tmp-checkpoint-200/global_step200/mp_rank_00_model_states.pt...
490
- [2024-01-12 17:55:31,582] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/tmp-checkpoint-200/global_step200/mp_rank_00_model_states.pt.
491
- [2024-01-12 17:55:31,591] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/tmp-checkpoint-200/global_step200/zero_pp_rank_0_mp_rank_00_optim_states.pt...
492
- [2024-01-12 17:55:36,176] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/tmp-checkpoint-200/global_step200/zero_pp_rank_0_mp_rank_00_optim_states.pt.
493
- [2024-01-12 17:55:36,323] [INFO] [engine.py:3431:_save_zero_checkpoint] zero checkpoint saved /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/tmp-checkpoint-200/global_step200/zero_pp_rank_0_mp_rank_00_optim_states.pt
494
- [2024-01-12 17:55:36,323] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step200 is ready now!
495
- {'loss': 0.8413, 'learning_rate': 8.686247975778677e-06, 'epoch': 1.07}
496
- {'loss': 0.68, 'learning_rate': 8.858694625217149e-06, 'epoch': 1.09}
497
- {'loss': 0.5851, 'learning_rate': 9.014436199608479e-06, 'epoch': 1.12}
498
- {'loss': 0.4164, 'learning_rate': 9.156425255148058e-06, 'epoch': 1.14}
499
- {'eval_loss': 0.73876953125, 'eval_wer': 49.25936148679732, 'eval_runtime': 1191.6636, 'eval_samples_per_second': 3.056, 'eval_steps_per_second': 0.191, 'epoch': 1.14}
500
- [2024-01-12 18:37:19,231] [INFO] [logging.py:96:log_dist] [Rank 0] [Torch] Checkpoint global_step300 is about to be saved!
501
- [2024-01-12 18:37:19,238] [INFO] [logging.py:96:log_dist] [Rank 0] Saving model checkpoint: /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/tmp-checkpoint-300/global_step300/mp_rank_00_model_states.pt
502
- [2024-01-12 18:37:19,238] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/tmp-checkpoint-300/global_step300/mp_rank_00_model_states.pt...
503
- [2024-01-12 18:37:20,104] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/tmp-checkpoint-300/global_step300/mp_rank_00_model_states.pt.
504
- [2024-01-12 18:37:20,112] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/tmp-checkpoint-300/global_step300/zero_pp_rank_0_mp_rank_00_optim_states.pt...
505
- [2024-01-12 18:37:24,766] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/tmp-checkpoint-300/global_step300/zero_pp_rank_0_mp_rank_00_optim_states.pt.
506
- [2024-01-12 18:37:24,902] [INFO] [engine.py:3431:_save_zero_checkpoint] zero checkpoint saved /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/tmp-checkpoint-300/global_step300/zero_pp_rank_0_mp_rank_00_optim_states.pt
507
- [2024-01-12 18:37:24,903] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step300 is ready now!
508
- {'loss': 0.5102, 'learning_rate': 9.28689473531776e-06, 'epoch': 2.01}
509
- {'loss': 0.602, 'learning_rate': 9.407574351377137e-06, 'epoch': 2.04}
510
- {'loss': 0.6399, 'learning_rate': 9.519831289296397e-06, 'epoch': 2.06}
511
- {'loss': 0.5456, 'learning_rate': 9.624764935335318e-06, 'epoch': 2.09}
512
- {'eval_loss': 0.7177734375, 'eval_wer': 46.126598583126324, 'eval_runtime': 1169.8926, 'eval_samples_per_second': 3.113, 'eval_steps_per_second': 0.195, 'epoch': 2.09}
513
- [2024-01-12 19:18:32,645] [INFO] [logging.py:96:log_dist] [Rank 0] [Torch] Checkpoint global_step400 is about to be saved!
514
- [2024-01-12 19:18:32,655] [INFO] [logging.py:96:log_dist] [Rank 0] Saving model checkpoint: /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/tmp-checkpoint-400/global_step400/mp_rank_00_model_states.pt
515
- [2024-01-12 19:18:32,655] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/tmp-checkpoint-400/global_step400/mp_rank_00_model_states.pt...
516
- [2024-01-12 19:18:33,633] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/tmp-checkpoint-400/global_step400/mp_rank_00_model_states.pt.
517
- [2024-01-12 19:18:33,641] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/tmp-checkpoint-400/global_step400/zero_pp_rank_0_mp_rank_00_optim_states.pt...
518
- [2024-01-12 19:18:38,426] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/tmp-checkpoint-400/global_step400/zero_pp_rank_0_mp_rank_00_optim_states.pt.
519
- [2024-01-12 19:18:38,469] [INFO] [engine.py:3431:_save_zero_checkpoint] zero checkpoint saved /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/tmp-checkpoint-400/global_step400/zero_pp_rank_0_mp_rank_00_optim_states.pt
520
- [2024-01-12 19:18:38,470] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step400 is ready now!
521
- {'loss': 0.48, 'learning_rate': 9.723272550712454e-06, 'epoch': 2.11}
522
- {'loss': 0.3325, 'learning_rate': 9.816095971633122e-06, 'epoch': 2.14}
523
- {'loss': 0.3389, 'learning_rate': 9.90385555539545e-06, 'epoch': 3.01}
524
- {'loss': 0.476, 'learning_rate': 9.987075336738768e-06, 'epoch': 3.03}
525
- {'eval_loss': 0.7109375, 'eval_wer': 45.243352654338025, 'eval_runtime': 1126.9534, 'eval_samples_per_second': 3.232, 'eval_steps_per_second': 0.202, 'epoch': 3.03}
526
- [2024-01-12 20:00:05,991] [INFO] [logging.py:96:log_dist] [Rank 0] [Torch] Checkpoint global_step500 is about to be saved!
527
- [2024-01-12 20:00:06,003] [INFO] [logging.py:96:log_dist] [Rank 0] Saving model checkpoint: /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/tmp-checkpoint-500/global_step500/mp_rank_00_model_states.pt
528
- [2024-01-12 20:00:06,003] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/tmp-checkpoint-500/global_step500/mp_rank_00_model_states.pt...
529
- [2024-01-12 20:00:06,891] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/tmp-checkpoint-500/global_step500/mp_rank_00_model_states.pt.
530
- [2024-01-12 20:00:06,900] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/tmp-checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_optim_states.pt...
531
- [2024-01-12 20:00:11,513] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/tmp-checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_optim_states.pt.
532
- [2024-01-12 20:00:11,652] [INFO] [engine.py:3431:_save_zero_checkpoint] zero checkpoint saved /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/tmp-checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_optim_states.pt
533
- [2024-01-12 20:00:11,653] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step500 is ready now!
 
1
+ [2024-01-12 23:20:02,304] [INFO] [real_accelerator.py:161:get_accelerator] Setting ds_accelerator to cuda (auto detect)
2
+ [2024-01-12 23:20:03,384] [WARNING] [runner.py:202:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
3
+ [2024-01-12 23:20:03,384] [INFO] [runner.py:571:main] cmd = /home/sp-operator/hf_env/bin/python3 -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMF19 --master_addr=127.0.0.1 --master_port=29500 --enable_each_rank_log=None /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/run_speech_recognition_seq2seq_streaming.py --deepspeed=/cosmos/home/sp-operator/ai/training/models/huggingface/scripts/ds_config.json --model_name_or_path=openai/whisper-tiny --dataset_name=mozilla-foundation/common_voice_16_0 --dataset_config_name=id --language=Indonesian --train_split_name=train --eval_split_name=test --model_index_name=Breeze DSW Indonesian - tiny --max_steps=2000 --output_dir=/cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id --per_device_train_batch_size=32 --per_device_eval_batch_size=16 --logging_steps=25 --learning_rate=1e-5 --warmup_steps=500 --evaluation_strategy=steps --eval_steps=200 --save_strategy=steps --save_steps=200 --generation_max_length=225 --length_column_name=input_length --max_duration_in_seconds=30 --text_column_name=sentence --freeze_feature_encoder=False --report_to=tensorboard --metric_for_best_model=wer --greater_is_better=False --load_best_model_at_end --gradient_checkpointing --fp16 --do_train --do_eval --predict_with_generate --do_normalize_eval --streaming --use_auth_token --push_to_hub
4
+ [2024-01-12 23:20:05,084] [INFO] [real_accelerator.py:161:get_accelerator] Setting ds_accelerator to cuda (auto detect)
5
+ [2024-01-12 23:20:05,852] [INFO] [launch.py:145:main] WORLD INFO DICT: {'localhost': [0]}
6
+ [2024-01-12 23:20:05,852] [INFO] [launch.py:151:main] nnodes=1, num_local_procs=1, node_rank=0
7
+ [2024-01-12 23:20:05,852] [INFO] [launch.py:162:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0]})
8
+ [2024-01-12 23:20:05,852] [INFO] [launch.py:163:main] dist_world_size=1
9
+ [2024-01-12 23:20:05,852] [INFO] [launch.py:165:main] Setting CUDA_VISIBLE_DEVICES=0
10
+ [2024-01-12 23:20:08,880] [INFO] [real_accelerator.py:161:get_accelerator] Setting ds_accelerator to cuda (auto detect)
11
+ [2024-01-12 23:20:09,073] [INFO] [comm.py:637:init_distributed] cdb=None
12
+ [2024-01-12 23:20:09,073] [INFO] [comm.py:668:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
13
+ 01/12/2024 23:20:09 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1distributed training: True, 16-bits training: True
14
+ 01/12/2024 23:20:09 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(
15
  _n_gpu=1,
16
  adafactor=False,
17
  adam_beta1=0.9,
 
39
  do_train=True,
40
  eval_accumulation_steps=None,
41
  eval_delay=0,
42
+ eval_steps=200,
43
  evaluation_strategy=steps,
44
  fp16=True,
45
  fp16_backend=auto,
 
78
  log_level=passive,
79
  log_level_replica=warning,
80
  log_on_each_node=True,
81
+ logging_dir=/cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/runs/Jan12_23-20-08_knight,
82
  logging_first_step=False,
83
  logging_nan_inf_filter=True,
84
  logging_steps=25,
 
86
  lr_scheduler_kwargs={},
87
  lr_scheduler_type=linear,
88
  max_grad_norm=1.0,
89
+ max_steps=2000,
90
  metric_for_best_model=wer,
91
  mp_parameters=,
92
  neftune_noise_alpha=None,
 
95
  optim=adamw_torch,
96
  optim_args=None,
97
  output_dir=/cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id,
98
+ overwrite_output_dir=False,
99
  past_index=-1,
100
  per_device_eval_batch_size=16,
101
  per_device_train_batch_size=32,
 
113
  save_on_each_node=False,
114
  save_only_model=False,
115
  save_safetensors=True,
116
+ save_steps=200,
117
  save_strategy=steps,
118
  save_total_limit=None,
119
  seed=42,
 
135
  warmup_steps=500,
136
  weight_decay=0.0,
137
  )
138
+ 01/12/2024 23:20:09 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(
139
  _n_gpu=1,
140
  adafactor=False,
141
  adam_beta1=0.9,
 
163
  do_train=True,
164
  eval_accumulation_steps=None,
165
  eval_delay=0,
166
+ eval_steps=200,
167
  evaluation_strategy=steps,
168
  fp16=True,
169
  fp16_backend=auto,
 
202
  log_level=passive,
203
  log_level_replica=warning,
204
  log_on_each_node=True,
205
+ logging_dir=/cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/runs/Jan12_23-20-08_knight,
206
  logging_first_step=False,
207
  logging_nan_inf_filter=True,
208
  logging_steps=25,
 
210
  lr_scheduler_kwargs={},
211
  lr_scheduler_type=linear,
212
  max_grad_norm=1.0,
213
+ max_steps=2000,
214
  metric_for_best_model=wer,
215
  mp_parameters=,
216
  neftune_noise_alpha=None,
 
219
  optim=adamw_torch,
220
  optim_args=None,
221
  output_dir=/cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id,
222
+ overwrite_output_dir=False,
223
  past_index=-1,
224
  per_device_eval_batch_size=16,
225
  per_device_train_batch_size=32,
 
237
  save_on_each_node=False,
238
  save_only_model=False,
239
  save_safetensors=True,
240
+ save_steps=200,
241
  save_strategy=steps,
242
  save_total_limit=None,
243
  seed=42,
 
259
  warmup_steps=500,
260
  weight_decay=0.0,
261
  )
262
+ 01/12/2024 23:20:09 - INFO - __main__ - Checkpoint detected, resuming training at /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/checkpoint-500. To avoid this behavior, change the `--output_dir` or add `--overwrite_output_dir` to train from scratch.
263
+ [2024-01-12 23:20:24,568] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed info: version=0.12.6, git-hash=unknown, git-branch=unknown
264
+ [2024-01-12 23:20:25,411] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False
265
  Installed CUDA version 12.3 does not match the version torch was compiled with 12.1 but since the APIs are compatible, accepting this combination
266
  [1/4] /usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=cpu_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/home/sp-operator/hf_env/lib/python3.10/site-packages/deepspeed/ops/csrc/includes -I/usr/local/cuda/include -isystem /home/sp-operator/hf_env/lib/python3.10/site-packages/torch/include -isystem /home/sp-operator/hf_env/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /home/sp-operator/hf_env/lib/python3.10/site-packages/torch/include/TH -isystem /home/sp-operator/hf_env/lib/python3.10/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /usr/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_61,code=compute_61 -gencode=arch=compute_61,code=sm_61 --compiler-options '-fPIC' -O3 --use_fast_math -std=c++17 -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_HALF2_OPERATORS__ -gencode=arch=compute_61,code=sm_61 -gencode=arch=compute_61,code=compute_61 -c /home/sp-operator/hf_env/lib/python3.10/site-packages/deepspeed/ops/csrc/common/custom_cuda_kernel.cu -o custom_cuda_kernel.cuda.o
267
  [2/4] c++ -MMD -MF cpu_adam.o.d -DTORCH_EXTENSION_NAME=cpu_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/home/sp-operator/hf_env/lib/python3.10/site-packages/deepspeed/ops/csrc/includes -I/usr/local/cuda/include -isystem /home/sp-operator/hf_env/lib/python3.10/site-packages/torch/include -isystem /home/sp-operator/hf_env/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /home/sp-operator/hf_env/lib/python3.10/site-packages/torch/include/TH -isystem /home/sp-operator/hf_env/lib/python3.10/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /usr/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++17 -O3 -std=c++17 -g -Wno-reorder -L/usr/local/cuda/lib64 -lcudart -lcublas -g -march=native -fopenmp -D__AVX256__ -D__ENABLE_CUDA__ -c /home/sp-operator/hf_env/lib/python3.10/site-packages/deepspeed/ops/csrc/adam/cpu_adam.cpp -o cpu_adam.o
268
  [3/4] c++ -MMD -MF cpu_adam_impl.o.d -DTORCH_EXTENSION_NAME=cpu_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/home/sp-operator/hf_env/lib/python3.10/site-packages/deepspeed/ops/csrc/includes -I/usr/local/cuda/include -isystem /home/sp-operator/hf_env/lib/python3.10/site-packages/torch/include -isystem /home/sp-operator/hf_env/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /home/sp-operator/hf_env/lib/python3.10/site-packages/torch/include/TH -isystem /home/sp-operator/hf_env/lib/python3.10/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /usr/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++17 -O3 -std=c++17 -g -Wno-reorder -L/usr/local/cuda/lib64 -lcudart -lcublas -g -march=native -fopenmp -D__AVX256__ -D__ENABLE_CUDA__ -c /home/sp-operator/hf_env/lib/python3.10/site-packages/deepspeed/ops/csrc/adam/cpu_adam_impl.cpp -o cpu_adam_impl.o
269
  [4/4] c++ cpu_adam.o cpu_adam_impl.o custom_cuda_kernel.cuda.o -shared -lcurand -L/home/sp-operator/hf_env/lib/python3.10/site-packages/torch/lib -lc10 -lc10_cuda -ltorch_cpu -ltorch_cuda -ltorch -ltorch_python -L/usr/local/cuda/lib64 -lcudart -o cpu_adam.so
270
+ Time to load cpu_adam op: 27.198795795440674 seconds
271
  Adam Optimizer #0 is created with AVX2 arithmetic capability.
272
  Config: alpha=0.000010, betas=(0.900000, 0.999000), weight_decay=0.000000, adam_w=1
273
+ [2024-01-12 23:20:54,305] [INFO] [logging.py:96:log_dist] [Rank 0] Using DeepSpeed Optimizer param name adamw as basic optimizer
274
+ [2024-01-12 23:20:54,305] [INFO] [logging.py:96:log_dist] [Rank 0] Removing param_group that has no 'params' in the basic Optimizer
275
+ [2024-01-12 23:20:54,310] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Basic Optimizer = DeepSpeedCPUAdam
276
+ [2024-01-12 23:20:54,310] [INFO] [utils.py:56:is_zero_supported_optimizer] Checking ZeRO support for optimizer=DeepSpeedCPUAdam type=<class 'deepspeed.ops.adam.cpu_adam.DeepSpeedCPUAdam'>
277
+ [2024-01-12 23:20:54,310] [INFO] [logging.py:96:log_dist] [Rank 0] Creating torch.float16 ZeRO stage 2 optimizer
278
+ [2024-01-12 23:20:54,310] [INFO] [stage_1_and_2.py:148:__init__] Reduce bucket size 200000000
279
+ [2024-01-12 23:20:54,310] [INFO] [stage_1_and_2.py:149:__init__] Allgather bucket size 200000000
280
+ [2024-01-12 23:20:54,310] [INFO] [stage_1_and_2.py:150:__init__] CPU Offload: True
281
+ [2024-01-12 23:20:54,311] [INFO] [stage_1_and_2.py:151:__init__] Round robin gradient partitioning: False
282
+ [2024-01-12 23:20:54,579] [INFO] [utils.py:791:see_memory_usage] Before initializing optimizer states
283
+ [2024-01-12 23:20:54,580] [INFO] [utils.py:792:see_memory_usage] MA 0.11 GB Max_MA 0.11 GB CA 0.13 GB Max_CA 0 GB
284
+ [2024-01-12 23:20:54,580] [INFO] [utils.py:799:see_memory_usage] CPU Virtual Memory: used = 3.56 GB, percent = 5.7%
285
+ [2024-01-12 23:20:54,841] [INFO] [utils.py:791:see_memory_usage] After initializing optimizer states
286
+ [2024-01-12 23:20:54,841] [INFO] [utils.py:792:see_memory_usage] MA 0.11 GB Max_MA 0.11 GB CA 0.13 GB Max_CA 0 GB
287
+ [2024-01-12 23:20:54,841] [INFO] [utils.py:799:see_memory_usage] CPU Virtual Memory: used = 4.09 GB, percent = 6.5%
288
+ [2024-01-12 23:20:54,841] [INFO] [stage_1_and_2.py:516:__init__] optimizer state initialized
289
+ [2024-01-12 23:20:54,919] [INFO] [utils.py:791:see_memory_usage] After initializing ZeRO optimizer
290
+ [2024-01-12 23:20:54,920] [INFO] [utils.py:792:see_memory_usage] MA 0.11 GB Max_MA 0.11 GB CA 0.13 GB Max_CA 0 GB
291
+ [2024-01-12 23:20:54,920] [INFO] [utils.py:799:see_memory_usage] CPU Virtual Memory: used = 4.09 GB, percent = 6.5%
292
+ [2024-01-12 23:20:54,924] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Final Optimizer = adamw
293
+ [2024-01-12 23:20:54,924] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed using configured LR scheduler = WarmupDecayLR
294
+ [2024-01-12 23:20:54,924] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed LR Scheduler = <deepspeed.runtime.lr_schedules.WarmupDecayLR object at 0x7f0d093ad6f0>
295
+ [2024-01-12 23:20:54,924] [INFO] [logging.py:96:log_dist] [Rank 0] step=0, skipped=0, lr=[1e-05], mom=[[0.9, 0.999]]
296
+ [2024-01-12 23:20:54,925] [INFO] [config.py:984:print] DeepSpeedEngine configuration:
297
+ [2024-01-12 23:20:54,925] [INFO] [config.py:988:print] activation_checkpointing_config {
298
  "partition_activations": false,
299
  "contiguous_memory_optimization": false,
300
  "cpu_checkpointing": false,
 
302
  "synchronize_checkpoint_boundary": false,
303
  "profile": false
304
  }
305
+ [2024-01-12 23:20:54,925] [INFO] [config.py:988:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True}
306
+ [2024-01-12 23:20:54,925] [INFO] [config.py:988:print] amp_enabled .................. False
+ [2024-01-12 23:20:54,926] [INFO] [config.py:988:print] amp_params ................... False
+ [2024-01-12 23:20:54,926] [INFO] [config.py:988:print] autotuning_config ............ {
  "enabled": false,
  "start_step": null,
  "end_step": null,
  [...]
  "min_train_micro_batch_size_per_gpu": 1,
  "num_tuning_micro_batch_sizes": 3
  }
+ [2024-01-12 23:20:54,926] [INFO] [config.py:988:print] bfloat16_enabled ............. False
+ [2024-01-12 23:20:54,926] [INFO] [config.py:988:print] checkpoint_parallel_write_pipeline False
+ [2024-01-12 23:20:54,926] [INFO] [config.py:988:print] checkpoint_tag_validation_enabled True
+ [2024-01-12 23:20:54,926] [INFO] [config.py:988:print] checkpoint_tag_validation_fail False
+ [2024-01-12 23:20:54,926] [INFO] [config.py:988:print] comms_config ................. <deepspeed.comm.config.DeepSpeedCommsConfig object at 0x7f0d08915ea0>
+ [2024-01-12 23:20:54,926] [INFO] [config.py:988:print] communication_data_type ...... None
+ [2024-01-12 23:20:54,926] [INFO] [config.py:988:print] compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}}
+ [2024-01-12 23:20:54,926] [INFO] [config.py:988:print] curriculum_enabled_legacy .... False
+ [2024-01-12 23:20:54,926] [INFO] [config.py:988:print] curriculum_params_legacy ..... False
+ [2024-01-12 23:20:54,926] [INFO] [config.py:988:print] data_efficiency_config ....... {'enabled': False, 'seed': 1234, 'data_sampling': {'enabled': False, 'num_epochs': 1000, 'num_workers': 0, 'curriculum_learning': {'enabled': False}}, 'data_routing': {'enabled': False, 'random_ltd': {'enabled': False, 'layer_token_lr_schedule': {'enabled': False}}}}
+ [2024-01-12 23:20:54,926] [INFO] [config.py:988:print] data_efficiency_enabled ...... False
+ [2024-01-12 23:20:54,926] [INFO] [config.py:988:print] dataloader_drop_last ......... False
+ [2024-01-12 23:20:54,926] [INFO] [config.py:988:print] disable_allgather ............ False
+ [2024-01-12 23:20:54,926] [INFO] [config.py:988:print] dump_state ................... False
+ [2024-01-12 23:20:54,926] [INFO] [config.py:988:print] dynamic_loss_scale_args ...... {'init_scale': 65536, 'scale_window': 1000, 'delayed_shift': 2, 'consecutive_hysteresis': False, 'min_scale': 1}
+ [2024-01-12 23:20:54,927] [INFO] [config.py:988:print] eigenvalue_enabled ........... False
+ [2024-01-12 23:20:54,927] [INFO] [config.py:988:print] eigenvalue_gas_boundary_resolution 1
+ [2024-01-12 23:20:54,927] [INFO] [config.py:988:print] eigenvalue_layer_name ........ bert.encoder.layer
+ [2024-01-12 23:20:54,927] [INFO] [config.py:988:print] eigenvalue_layer_num ......... 0
+ [2024-01-12 23:20:54,927] [INFO] [config.py:988:print] eigenvalue_max_iter .......... 100
+ [2024-01-12 23:20:54,927] [INFO] [config.py:988:print] eigenvalue_stability ......... 1e-06
+ [2024-01-12 23:20:54,927] [INFO] [config.py:988:print] eigenvalue_tol ............... 0.01
+ [2024-01-12 23:20:54,927] [INFO] [config.py:988:print] eigenvalue_verbose ........... False
+ [2024-01-12 23:20:54,927] [INFO] [config.py:988:print] elasticity_enabled ........... False
+ [2024-01-12 23:20:54,927] [INFO] [config.py:988:print] flops_profiler_config ........ {
  "enabled": false,
  "recompute_fwd_factor": 0.0,
  "profile_step": 1,
  [...]
  "detailed": true,
  "output_file": null
  }
+ [2024-01-12 23:20:54,927] [INFO] [config.py:988:print] fp16_auto_cast ............... False
+ [2024-01-12 23:20:54,927] [INFO] [config.py:988:print] fp16_enabled ................. True
+ [2024-01-12 23:20:54,927] [INFO] [config.py:988:print] fp16_master_weights_and_gradients False
+ [2024-01-12 23:20:54,927] [INFO] [config.py:988:print] global_rank .................. 0
+ [2024-01-12 23:20:54,927] [INFO] [config.py:988:print] grad_accum_dtype ............. None
+ [2024-01-12 23:20:54,927] [INFO] [config.py:988:print] gradient_accumulation_steps .. 1
+ [2024-01-12 23:20:54,927] [INFO] [config.py:988:print] gradient_clipping ............ 1.0
+ [2024-01-12 23:20:54,927] [INFO] [config.py:988:print] gradient_predivide_factor .... 1.0
+ [2024-01-12 23:20:54,927] [INFO] [config.py:988:print] graph_harvesting ............. False
+ [2024-01-12 23:20:54,927] [INFO] [config.py:988:print] hybrid_engine ................ enabled=False max_out_tokens=512 inference_tp_size=1 release_inference_cache=False pin_parameters=True tp_gather_partition_size=8
+ [2024-01-12 23:20:54,927] [INFO] [config.py:988:print] initial_dynamic_scale ........ 65536
+ [2024-01-12 23:20:54,927] [INFO] [config.py:988:print] load_universal_checkpoint .... False
+ [2024-01-12 23:20:54,927] [INFO] [config.py:988:print] loss_scale ................... 0
+ [2024-01-12 23:20:54,927] [INFO] [config.py:988:print] memory_breakdown ............. False
+ [2024-01-12 23:20:54,927] [INFO] [config.py:988:print] mics_hierarchial_params_gather False
+ [2024-01-12 23:20:54,927] [INFO] [config.py:988:print] mics_shard_size .............. -1
+ [2024-01-12 23:20:54,928] [INFO] [config.py:988:print] monitor_config ............... tensorboard=TensorBoardConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') wandb=WandbConfig(enabled=False, group=None, team=None, project='deepspeed') csv_monitor=CSVConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') enabled=False
+ [2024-01-12 23:20:54,928] [INFO] [config.py:988:print] nebula_config ................ {
  "enabled": false,
  "persistent_storage_path": null,
  "persistent_time_interval": 100,
  [...]
  "enable_nebula_load": true,
  "load_path": null
  }
+ [2024-01-12 23:20:54,928] [INFO] [config.py:988:print] optimizer_legacy_fusion ...... False
+ [2024-01-12 23:20:54,928] [INFO] [config.py:988:print] optimizer_name ............... adamw
+ [2024-01-12 23:20:54,928] [INFO] [config.py:988:print] optimizer_params ............. {'lr': 1e-05, 'betas': [0.9, 0.999], 'eps': 1e-08, 'weight_decay': 0.0}
+ [2024-01-12 23:20:54,928] [INFO] [config.py:988:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0, 'pipe_partitioned': True, 'grad_partitioned': True}
+ [2024-01-12 23:20:54,928] [INFO] [config.py:988:print] pld_enabled .................. False
+ [2024-01-12 23:20:54,928] [INFO] [config.py:988:print] pld_params ................... False
+ [2024-01-12 23:20:54,928] [INFO] [config.py:988:print] prescale_gradients ........... False
+ [2024-01-12 23:20:54,928] [INFO] [config.py:988:print] scheduler_name ............... WarmupDecayLR
+ [2024-01-12 23:20:54,928] [INFO] [config.py:988:print] scheduler_params ............. {'last_batch_iteration': -1, 'total_num_steps': 2000, 'warmup_min_lr': 0, 'warmup_max_lr': 1e-05, 'warmup_num_steps': 500}
+ [2024-01-12 23:20:54,928] [INFO] [config.py:988:print] seq_parallel_communication_data_type torch.float32
+ [2024-01-12 23:20:54,928] [INFO] [config.py:988:print] sparse_attention ............. None
+ [2024-01-12 23:20:54,928] [INFO] [config.py:988:print] sparse_gradients_enabled ..... False
+ [2024-01-12 23:20:54,928] [INFO] [config.py:988:print] steps_per_print .............. inf
+ [2024-01-12 23:20:54,928] [INFO] [config.py:988:print] train_batch_size ............. 32
+ [2024-01-12 23:20:54,928] [INFO] [config.py:988:print] train_micro_batch_size_per_gpu 32
+ [2024-01-12 23:20:54,928] [INFO] [config.py:988:print] use_data_before_expert_parallel_ False
+ [2024-01-12 23:20:54,928] [INFO] [config.py:988:print] use_node_local_storage ....... False
+ [2024-01-12 23:20:54,928] [INFO] [config.py:988:print] wall_clock_breakdown ......... False
+ [2024-01-12 23:20:54,928] [INFO] [config.py:988:print] weight_quantization_config ... None
+ [2024-01-12 23:20:54,928] [INFO] [config.py:988:print] world_size ................... 1
+ [2024-01-12 23:20:54,928] [INFO] [config.py:988:print] zero_allow_untested_optimizer False
+ [2024-01-12 23:20:54,929] [INFO] [config.py:988:print] zero_config .................. stage=2 contiguous_gradients=True reduce_scatter=True reduce_bucket_size=200000000 use_multi_rank_bucket_allreduce=True allgather_partitions=True allgather_bucket_size=200000000 overlap_comm=True load_from_fp32_weights=True elastic_checkpoint=False offload_param=None offload_optimizer=DeepSpeedZeroOffloadOptimizerConfig(device='cpu', nvme_path=None, buffer_count=4, pin_memory=True, pipeline=False, pipeline_read=False, pipeline_write=False, fast_init=False, ratio=1.0) sub_group_size=1,000,000,000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=50,000,000 param_persistence_threshold=100,000 model_persistence_threshold=sys.maxsize max_live_parameters=1,000,000,000 max_reuse_distance=1,000,000,000 gather_16bit_weights_on_model_save=False stage3_gather_fp16_weights_on_model_save=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False zero_hpz_partition_size=1 zero_quantized_weights=False zero_quantized_nontrainable_weights=False zero_quantized_gradients=False mics_shard_size=-1 mics_hierarchical_params_gather=False memory_efficient_linear=True pipeline_loading_checkpoint=False override_module_apply=True
+ [2024-01-12 23:20:54,929] [INFO] [config.py:988:print] zero_enabled ................. True
+ [2024-01-12 23:20:54,929] [INFO] [config.py:988:print] zero_force_ds_cpu_optimizer .. True
+ [2024-01-12 23:20:54,929] [INFO] [config.py:988:print] zero_optimization_stage ...... 2
+ [2024-01-12 23:20:54,929] [INFO] [config.py:974:print_user_config] json = {
  "fp16": {
  "enabled": true,
  "loss_scale": 0,
  [...]
  "type": "WarmupDecayLR",
  "params": {
  "last_batch_iteration": -1,
+ "total_num_steps": 2.000000e+03,
  "warmup_min_lr": 0,
  "warmup_max_lr": 1e-05,
  "warmup_num_steps": 500
  [...]
  "enabled": false
  }
  }
+ [2024-01-12 23:20:54,947] [INFO] [torch_checkpoint_engine.py:27:load] [Torch] Loading checkpoint from /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/checkpoint-500/global_step500/mp_rank_00_model_states.pt...
+ [2024-01-12 23:20:56,196] [INFO] [torch_checkpoint_engine.py:29:load] [Torch] Loaded checkpoint from /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/checkpoint-500/global_step500/mp_rank_00_model_states.pt.
+ [2024-01-12 23:20:56,219] [INFO] [torch_checkpoint_engine.py:27:load] [Torch] Loading checkpoint from /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/checkpoint-500/global_step500/mp_rank_00_model_states.pt...
+ [2024-01-12 23:20:56,262] [INFO] [torch_checkpoint_engine.py:29:load] [Torch] Loaded checkpoint from /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/checkpoint-500/global_step500/mp_rank_00_model_states.pt.
+ [2024-01-12 23:20:56,313] [INFO] [torch_checkpoint_engine.py:27:load] [Torch] Loading checkpoint from /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_optim_states.pt...
+ [2024-01-12 23:21:01,178] [INFO] [torch_checkpoint_engine.py:29:load] [Torch] Loaded checkpoint from /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_optim_states.pt.
+ [2024-01-12 23:21:01,178] [INFO] [engine.py:2988:_get_all_zero_checkpoint_state_dicts] successfully read 1 ZeRO state_dicts for rank 0
+ [2024-01-12 23:21:01,212] [INFO] [engine.py:2920:_load_zero_checkpoint] loading 1 zero partition checkpoints for rank 0
+ [2024-01-12 23:22:01,846] [INFO] [torch_checkpoint_engine.py:27:load] [Torch] Loading checkpoint from /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/checkpoint-500/global_step500/mp_rank_00_model_states.pt...
+ [2024-01-12 23:22:02,649] [INFO] [torch_checkpoint_engine.py:29:load] [Torch] Loaded checkpoint from /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/checkpoint-500/global_step500/mp_rank_00_model_states.pt.
+ [2024-01-12 23:22:02,658] [INFO] [torch_checkpoint_engine.py:27:load] [Torch] Loading checkpoint from /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/checkpoint-500/global_step500/mp_rank_00_model_states.pt...
+ [2024-01-12 23:22:02,684] [INFO] [torch_checkpoint_engine.py:29:load] [Torch] Loaded checkpoint from /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/checkpoint-500/global_step500/mp_rank_00_model_states.pt.
+ [2024-01-12 23:22:02,762] [INFO] [torch_checkpoint_engine.py:27:load] [Torch] Loading checkpoint from /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_optim_states.pt...
+ [2024-01-12 23:22:07,278] [INFO] [torch_checkpoint_engine.py:29:load] [Torch] Loaded checkpoint from /cosmos/home/sp-operator/ai/training/models/huggingface/scripts/../breeze-dsw-tiny-id/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_optim_states.pt.
+ [2024-01-12 23:22:07,279] [INFO] [engine.py:2988:_get_all_zero_checkpoint_state_dicts] successfully read 1 ZeRO state_dicts for rank 0
+ [2024-01-12 23:22:07,313] [INFO] [engine.py:2920:_load_zero_checkpoint] loading 1 zero partition checkpoints for rank 0
+ {'train_runtime': 66.1004, 'train_samples_per_second': 968.225, 'train_steps_per_second': 30.257, 'train_loss': 0.0, 'epoch': 3.03}
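
The DeepSpeed user config echoed by `config.py:print_user_config` above is only partially expanded in this diff. Below is a minimal, hand-written sketch (not part of this commit) of a config that matches the values actually logged: fp16 with dynamic loss scaling, AdamW at lr 1e-05, a WarmupDecayLR schedule over 2000 total steps with 500 warmup steps, ZeRO stage 2 with the optimizer state offloaded to CPU, gradient clipping of 1.0, and a micro-batch of 32. Any key not directly visible in the log (for example `initial_scale_power`, which corresponds to the logged init_scale of 65536) is an assumption about how such a file is typically written, and the real file may instead use "auto" placeholders filled in by the HF Trainer.

```python
# Sketch of a DeepSpeed config consistent with the values echoed above (not the
# actual ds_config.json from this run; literal values are taken from the log).
import json

ds_config = {
    "fp16": {"enabled": True, "loss_scale": 0, "initial_scale_power": 16},
    "optimizer": {
        "type": "AdamW",
        "params": {"lr": 1e-05, "betas": [0.9, 0.999], "eps": 1e-08, "weight_decay": 0.0},
    },
    "scheduler": {
        "type": "WarmupDecayLR",
        "params": {
            "last_batch_iteration": -1,
            "total_num_steps": 2000,
            "warmup_min_lr": 0,
            "warmup_max_lr": 1e-05,
            "warmup_num_steps": 500,
        },
    },
    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
    },
    "gradient_clipping": 1.0,
    "train_micro_batch_size_per_gpu": 32,
    "gradient_accumulation_steps": 1,
}

# Write the config so it can be passed to the launcher via a --deepspeed flag.
with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```
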
generation_config.json ADDED
@@ -0,0 +1,247 @@
+ {
+   "alignment_heads": [
+     [
+       2,
+       2
+     ],
+     [
+       3,
+       0
+     ],
+     [
+       3,
+       2
+     ],
+     [
+       3,
+       3
+     ],
+     [
+       3,
+       4
+     ],
+     [
+       3,
+       5
+     ]
+   ],
+   "begin_suppress_tokens": [
+     220,
+     50257
+   ],
+   "bos_token_id": 50257,
+   "decoder_start_token_id": 50258,
+   "eos_token_id": 50257,
+   "forced_decoder_ids": [
+     [
+       1,
+       null
+     ],
+     [
+       2,
+       50359
+     ]
+   ],
+   "is_multilingual": true,
+   "lang_to_id": {
+     "<|af|>": 50327,
+     "<|am|>": 50334,
+     "<|ar|>": 50272,
+     "<|as|>": 50350,
+     "<|az|>": 50304,
+     "<|ba|>": 50355,
+     "<|be|>": 50330,
+     "<|bg|>": 50292,
+     "<|bn|>": 50302,
+     "<|bo|>": 50347,
+     "<|br|>": 50309,
+     "<|bs|>": 50315,
+     "<|ca|>": 50270,
+     "<|cs|>": 50283,
+     "<|cy|>": 50297,
+     "<|da|>": 50285,
+     "<|de|>": 50261,
+     "<|el|>": 50281,
+     "<|en|>": 50259,
+     "<|es|>": 50262,
+     "<|et|>": 50307,
+     "<|eu|>": 50310,
+     "<|fa|>": 50300,
+     "<|fi|>": 50277,
+     "<|fo|>": 50338,
+     "<|fr|>": 50265,
+     "<|gl|>": 50319,
+     "<|gu|>": 50333,
+     "<|haw|>": 50352,
+     "<|ha|>": 50354,
+     "<|he|>": 50279,
+     "<|hi|>": 50276,
+     "<|hr|>": 50291,
+     "<|ht|>": 50339,
+     "<|hu|>": 50286,
+     "<|hy|>": 50312,
+     "<|id|>": 50275,
+     "<|is|>": 50311,
+     "<|it|>": 50274,
+     "<|ja|>": 50266,
+     "<|jw|>": 50356,
+     "<|ka|>": 50329,
+     "<|kk|>": 50316,
+     "<|km|>": 50323,
+     "<|kn|>": 50306,
+     "<|ko|>": 50264,
+     "<|la|>": 50294,
+     "<|lb|>": 50345,
+     "<|ln|>": 50353,
+     "<|lo|>": 50336,
+     "<|lt|>": 50293,
+     "<|lv|>": 50301,
+     "<|mg|>": 50349,
+     "<|mi|>": 50295,
+     "<|mk|>": 50308,
+     "<|ml|>": 50296,
+     "<|mn|>": 50314,
+     "<|mr|>": 50320,
+     "<|ms|>": 50282,
+     "<|mt|>": 50343,
+     "<|my|>": 50346,
+     "<|ne|>": 50313,
+     "<|nl|>": 50271,
+     "<|nn|>": 50342,
+     "<|no|>": 50288,
+     "<|oc|>": 50328,
+     "<|pa|>": 50321,
+     "<|pl|>": 50269,
+     "<|ps|>": 50340,
+     "<|pt|>": 50267,
+     "<|ro|>": 50284,
+     "<|ru|>": 50263,
+     "<|sa|>": 50344,
+     "<|sd|>": 50332,
+     "<|si|>": 50322,
+     "<|sk|>": 50298,
+     "<|sl|>": 50305,
+     "<|sn|>": 50324,
+     "<|so|>": 50326,
+     "<|sq|>": 50317,
+     "<|sr|>": 50303,
+     "<|su|>": 50357,
+     "<|sv|>": 50273,
+     "<|sw|>": 50318,
+     "<|ta|>": 50287,
+     "<|te|>": 50299,
+     "<|tg|>": 50331,
+     "<|th|>": 50289,
+     "<|tk|>": 50341,
+     "<|tl|>": 50348,
+     "<|tr|>": 50268,
+     "<|tt|>": 50351,
+     "<|uk|>": 50280,
+     "<|ur|>": 50290,
+     "<|uz|>": 50337,
+     "<|vi|>": 50278,
+     "<|yi|>": 50335,
+     "<|yo|>": 50325,
+     "<|zh|>": 50260
+   },
+   "max_initial_timestamp_index": 1,
+   "max_length": 448,
+   "no_timestamps_token_id": 50363,
+   "pad_token_id": 50257,
+   "return_timestamps": false,
+   "suppress_tokens": [
+     1,
+     2,
+     7,
+     8,
+     9,
+     10,
+     14,
+     25,
+     26,
+     27,
+     28,
+     29,
+     31,
+     58,
+     59,
+     60,
+     61,
+     62,
+     63,
+     90,
+     91,
+     92,
+     93,
+     359,
+     503,
+     522,
+     542,
+     873,
+     893,
+     902,
+     918,
+     922,
+     931,
+     1350,
+     1853,
+     1982,
+     2460,
+     2627,
+     3246,
+     3253,
+     3268,
+     3536,
+     3846,
+     3961,
+     4183,
+     4667,
+     6585,
+     6647,
+     7273,
+     9061,
+     9383,
+     10428,
+     10929,
+     11938,
+     12033,
+     12331,
+     12562,
+     13793,
+     14157,
+     14635,
+     15265,
+     15618,
+     16553,
+     16604,
+     18362,
+     18956,
+     20075,
+     21675,
+     22520,
+     26130,
+     26161,
+     26435,
+     28279,
+     29464,
+     31650,
+     32302,
+     32470,
+     36865,
+     42863,
+     47425,
+     49870,
+     50254,
+     50258,
+     50358,
+     50359,
+     50360,
+     50361,
+     50362
+   ],
+   "task_to_id": {
+     "transcribe": 50359,
+     "translate": 50358
+   },
+   "transformers_version": "4.37.0.dev0"
+ }
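
The generation_config.json added above drives decoding at inference time. Below is a minimal usage sketch (not part of this commit); the Hub repo id "hanasim/breeze-dsw-tiny-id" and the audio file name are assumptions for illustration. Passing `language="id"` and `task="transcribe"` resolves to the `"<|id|>": 50275` entry in `lang_to_id` and the `"transcribe": 50359` entry in `task_to_id`, while `max_length: 448` bounds the decoded token sequence.

```python
# Sketch only: transcribe Indonesian audio with the fine-tuned checkpoint,
# letting transformers pick up the saved generation_config.json.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="hanasim/breeze-dsw-tiny-id",  # assumed Hub repo id for this checkpoint
)

# "language" and "task" are mapped to the lang_to_id / task_to_id entries above.
result = asr("sample.wav", generate_kwargs={"language": "id", "task": "transcribe"})
print(result["text"])
```
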
runs/Jan12_16-30-10_knight/events.out.tfevents.1705073454.knight.2969.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:47a10f2b390f8fcf2d8432d9a833e6e89e5082aeb6216492ad977e96b4cbd615
- size 10146
+ oid sha256:2ba9e233371b3e102c4903c003d3b8439a3a019599bfacc858d99690184902e0
+ size 10774
runs/Jan12_23-20-08_knight/events.out.tfevents.1705098061.knight.5818.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6ebe12d70f80901f82ef94e5b29b2a2326f06d8ff421d77d2bbcef91d8b4f272
+ size 5792
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3fd618dd6c112a73e5aab8aeb42b07a9a09d43debf0d38a038d883287d3dc201
+ oid sha256:46f603b46ca246f27404536a46a7e47f47604598223fce33b450057908051757
  size 6648
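
The runs/*.tfevents and training_args.bin entries above are Git LFS pointer files (spec v1): the repository stores only the blob's sha256 oid and byte size, and the binary itself lives in LFS storage. Below is a minimal sketch (not part of this commit), assuming the blob has already been downloaded locally, of checking a file against the pointer fields shown.

```python
# Sketch: verify a downloaded LFS blob against the oid/size from its pointer file.
import hashlib
from pathlib import Path


def verify_lfs_blob(blob_path: str, expected_oid: str, expected_size: int) -> bool:
    data = Path(blob_path).read_bytes()
    size_ok = len(data) == expected_size
    hash_ok = hashlib.sha256(data).hexdigest() == expected_oid
    return size_ok and hash_ok


# Example: the new training_args.bin pointer lists size 6648 and the sha256 oid above.
print(verify_lfs_blob(
    "training_args.bin",
    "46f603b46ca246f27404536a46a7e47f47604598223fce33b450057908051757",
    6648,
))
```
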