Dataset Viewer
Auto-converted to Parquet
Column: text (string, lengths 0 to 1.16k)
[2025-02-15 12:10:48,461] torch.distributed.run: [WARNING]
[2025-02-15 12:10:48,461] torch.distributed.run: [WARNING] *****************************************
[2025-02-15 12:10:48,461] torch.distributed.run: [WARNING] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
[2025-02-15 12:10:48,461] torch.distributed.run: [WARNING] *****************************************
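The warning above is torchrun applying its default of OMP_NUM_THREADS=1 to each worker; the launcher only does this when the variable is unset, so tuning it means exporting a value before launching or setting it at the top of the training script before torch is imported. A minimal sketch, assuming a hypothetical value of 4 threads per worker (not a value taken from this log):

```python
import os

# Hypothetical example: pick an explicit OpenMP thread count per worker instead
# of torchrun's default of 1. This must run before torch is imported, because
# the OpenMP runtime reads the variable when it initializes. "4" is an
# illustrative choice; a common starting point is physical cores per node
# divided by the number of workers per node.
os.environ["OMP_NUM_THREADS"] = "4"

import torch  # noqa: E402  (imported after the variable is set so it takes effect)

print("OMP_NUM_THREADS =", os.environ["OMP_NUM_THREADS"])
print("torch intra-op threads:", torch.get_num_threads())
```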
[2025-02-15 12:10:48,760] torch.distributed.elastic.agent.server.api: [WARNING] Received Signals.SIGINT death signal, shutting down workers
[2025-02-15 12:10:48,760] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 6598 closing signal SIGINT
[2025-02-15 12:10:48,760] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 6599 closing signal SIGINT
[2025-02-15 12:10:48,997] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 6598 closing signal SIGTERM
[2025-02-15 12:10:48,997] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 6599 closing signal SIGTERM
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py", line 727, in run
    result = self._invoke_run(role)
  File "/opt/conda/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py", line 868, in _invoke_run
    time.sleep(monitor_interval)
  File "/opt/conda/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/api.py", line 62, in _terminate_process_handler
    raise SignalException(f"Process {os.getpid()} got signal: {sigval}", sigval=sigval)
torch.distributed.elastic.multiprocessing.api.SignalException: Process 6595 got signal: 2
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/opt/conda/bin/torchrun", line 33, in <module>
    sys.exit(load_entry_point('torch==2.2.1', 'console_scripts', 'torchrun')())
  File "/opt/conda/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
    return f(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/distributed/run.py", line 812, in main
    run(args)
  File "/opt/conda/lib/python3.10/site-packages/torch/distributed/run.py", line 803, in run
    elastic_launch(
  File "/opt/conda/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 135, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/opt/conda/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 259, in launch_agent
    result = agent.run()
  File "/opt/conda/lib/python3.10/site-packages/torch/distributed/elastic/metrics/api.py", line 123, in wrapper
    result = f(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py", line 734, in run
    self._shutdown(e.sigval)
  File "/opt/conda/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/local_elastic_agent.py", line 311, in _shutdown
    self._pcontext.close(death_sig)
  File "/opt/conda/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/api.py", line 318, in close
    self._close(death_sig=death_sig, timeout=timeout)
  File "/opt/conda/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/api.py", line 706, in _close
    handler.proc.wait(time_to_wait)
  File "/opt/conda/lib/python3.10/subprocess.py", line 1209, in wait
    return self._wait(timeout=timeout)
  File "/opt/conda/lib/python3.10/subprocess.py", line 1953, in _wait
    time.sleep(delay)
  File "/opt/conda/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/api.py", line 62, in _terminate_process_handler
    raise SignalException(f"Process {os.getpid()} got signal: {sigval}", sigval=sigval)
torch.distributed.elastic.multiprocessing.api.SignalException: Process 6595 got signal: 2
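Both tracebacks above are the shutdown path of an interrupted launch: signal 2 is SIGINT (Ctrl-C), the elastic agent's handler converts it into a SignalException, and the second traceback appears because another SIGINT arrives while the agent is still waiting for its workers (PIDs 6598 and 6599) to exit. A simplified sketch of that handler pattern, not the actual torch implementation:

```python
import os
import signal
import time


class SignalException(Exception):
    """Raised when the launcher receives a termination signal."""

    def __init__(self, msg: str, sigval: signal.Signals) -> None:
        super().__init__(msg)
        self.sigval = sigval


def _terminate_process_handler(signum, frame):
    # Turn the asynchronous signal into an exception so the main loop can
    # unwind, signal its child processes, and exit with a traceback like the
    # one above.
    sigval = signal.Signals(signum)
    raise SignalException(f"Process {os.getpid()} got signal: {sigval}", sigval=sigval)


if __name__ == "__main__":
    signal.signal(signal.SIGINT, _terminate_process_handler)
    signal.signal(signal.SIGTERM, _terminate_process_handler)
    try:
        while True:  # stands in for the agent's monitoring loop
            time.sleep(1.0)
    except SignalException as exc:
        print(f"shutting down workers after {exc.sigval.name}")
```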
[2025-02-15 12:11:09,060] torch.distributed.run: [WARNING]
[2025-02-15 12:11:09,060] torch.distributed.run: [WARNING] *****************************************
[2025-02-15 12:11:09,060] torch.distributed.run: [WARNING] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
[2025-02-15 12:11:09,060] torch.distributed.run: [WARNING] *****************************************
[2025-02-15 12:11:11,416] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-02-15 12:11:11,431] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
petrel_client is not installed. If you read data locally instead of from ceph, ignore it.
Replace train sampler!!
petrel_client is not installed. Using PIL to load images.
petrel_client is not installed. If you read data locally instead of from ceph, ignore it.
Replace train sampler!!
petrel_client is not installed. Using PIL to load images.
[2025-02-15 12:11:14,256] [INFO] [comm.py:637:init_distributed] cdb=None
[2025-02-15 12:11:14,256] [INFO] [comm.py:668:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
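The two comm.py lines above are DeepSpeed bringing up torch.distributed with the NCCL backend; the Hugging Face Trainer triggers this automatically, but in a standalone script it corresponds roughly to the call below (a sketch that assumes the usual torchrun environment variables, not code from this run):

```python
import deepspeed

# Initialize torch.distributed via DeepSpeed's comm module if no process
# group exists yet; NCCL is the backend reported in the log above. Rank and
# world-size information is read from the environment set up by torchrun.
deepspeed.init_distributed(dist_backend="nccl")
```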
02/15/2025 12:11:14 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: True, 16-bits training: False
02/15/2025 12:11:14 - INFO - __main__ - Training/evaluation parameters TrainingArguments(
_n_gpu=1,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
auto_find_batch_size=False,
bf16=True,
bf16_full_eval=False,
data_seed=None,
dataloader_drop_last=False,
dataloader_num_workers=2,
dataloader_persistent_workers=False,
dataloader_pin_memory=True,
ddp_backend=None,
ddp_broadcast_buffers=None,
ddp_bucket_cap_mb=None,
ddp_find_unused_parameters=None,
ddp_timeout=1800,
debug=[],
deepspeed=zero_stage1_config.json,
disable_tqdm=False,
dispatch_batches=None,
do_eval=False,
do_predict=False,
do_train=True,
eval_accumulation_steps=None,
eval_delay=0,
eval_steps=None,
evaluation_strategy=no,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
fsdp=[],
End of preview.
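The preview cuts off partway through the TrainingArguments dump. The visible fields (do_train=True, bf16=True, dataloader_num_workers=2, evaluation_strategy=no, deepspeed=zero_stage1_config.json) describe a Hugging Face transformers.TrainingArguments set up for DeepSpeed ZeRO stage 1. A hedged sketch of such a configuration follows; the config dict and output directory are illustrative assumptions, not the contents of the actual zero_stage1_config.json:

```python
from transformers import TrainingArguments

# Illustrative ZeRO stage-1 config in the shape DeepSpeed expects; the real
# zero_stage1_config.json used in this run is not shown in the log.
ds_config = {
    "zero_optimization": {"stage": 1},
    "bf16": {"enabled": True},
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}

# Values mirror fields visible in the dump above; output_dir is an assumed
# placeholder.
training_args = TrainingArguments(
    output_dir="output",
    do_train=True,
    bf16=True,
    dataloader_num_workers=2,
    evaluation_strategy="no",
    deepspeed=ds_config,  # accepts a dict or a path such as "zero_stage1_config.json"
)
```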

No dataset card yet

Downloads last month: 101