Issue getting the dataset

#1
by brando - opened
The Ultimate Data Centric Alliance org

Exception has occurred: DatasetGenerationCastError
An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 1 new columns ({'source'})

This happened while the json dataset builder was generating data using

hf://datasets/UDACA/Code-Mixed-Dataset/combined_dataset.jsonl (at revision 4ea34026bf6a4b7cda65782406b7e32484a43ce0)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
datasets.table.CastError: Couldn't cast
text: string
source: string
to
{'text': Value(dtype='string', id=None)}
because column names don't match

During handling of the above exception, another exception occurred:

File "/lfs/skampere1/0/brando9/SHED-Shapley-Based-Automated-Dataset-Refinement/experiments/2024/11_nov/download_zipfit_data_src.py", line 15, in download_train_split
dataset = load_dataset(dataset_name, split="train")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lfs/skampere1/0/brando9/SHED-Shapley-Based-Automated-Dataset-Refinement/experiments/2024/11_nov/download_zipfit_data_src.py", line 40, in <module>
download_train_split(dataset_name)
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 1 new columns ({'source'})

This happened while the json dataset builder was generating data using

hf://datasets/UDACA/Code-Mixed-Dataset/combined_dataset.jsonl (at revision 4ea34026bf6a4b7cda65782406b7e32484a43ce0)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
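The error says every data file must share the same columns, but some rows carry only `text` while others add a `source` column. One possible workaround (a minimal stdlib sketch, not the dataset maintainers' official fix — the column names are taken from the error message above, and `normalize_jsonl_rows` is a hypothetical helper) is to pad each row to the union of all columns before loading, so the JSON builder infers a single consistent schema:

```python
import json

def normalize_jsonl_rows(rows):
    """Pad every row to the union of all keys so the schema is uniform.

    `rows` is a list of dicts parsed from a JSONL file; columns missing
    from a given row are filled with None so every row ends up with
    identical keys.
    """
    all_keys = set()
    for row in rows:
        all_keys.update(row)
    return [{key: row.get(key) for key in sorted(all_keys)} for row in rows]

# Synthetic rows mimicking the mismatch in combined_dataset.jsonl:
# some rows have only 'text', others also have 'source'.
raw = [
    {"text": "def add(a, b): return a + b"},
    {"text": "print('hi')", "source": "github"},
]
for row in normalize_jsonl_rows(raw):
    print(json.dumps(row))
```

Writing the normalized rows back out as JSONL (one `json.dumps` per line) and loading that file should avoid the cast error, at the cost of `None` values in the `source` column for rows that never had one.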

from datasets import load_dataset

def download_train_split(dataset_name, save_path=None):
    """
    Downloads only the 'train' split of a Hugging Face dataset.

    Args:
        dataset_name (str): The name of the dataset on Hugging Face (e.g., "UDACA/Code-Mixed-Dataset").
        save_path (str, optional): Custom path to save the train split. Defaults to None, 
                                   which uses the Hugging Face default cache directory.
    """
    print(f"Downloading 'train' split of dataset: {dataset_name}...")
    
    # Load only the 'train' split
    dataset = load_dataset(dataset_name, split="train")
    
    # Save to custom path or Hugging Face cache
    if save_path:
        # Ensure the directory exists
        import os
        os.makedirs(save_path, exist_ok=True)
        
        # Save as a JSONL file
        file_path = f"{save_path}/train.jsonl"
        dataset.to_json(file_path, lines=True)
        print(f"'train' split saved to: {file_path}")
    else:
        print("'train' split downloaded to the default Hugging Face cache directory.")
    
    print("Download complete!")

if __name__ == "__main__":
    # Specify the dataset name
    dataset_name = "UDACA/Code-Mixed-Dataset"
    
    # Optional: Specify a custom save path
    # save_path = "./code_mixed_dataset"  # Change to your desired directory or set to None for default location
    
    # Download the train split
    download_train_split(dataset_name)
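The script above still triggers the cast error, because `load_dataset` infers one schema for the whole file. The error message's other suggestion is to separate mismatched rows into different configurations; a minimal sketch of the first step (grouping rows by their exact column set so each group can be written to its own file — `split_by_schema` is a hypothetical helper, not part of the `datasets` library) could look like:

```python
import json
from collections import defaultdict

def split_by_schema(jsonl_lines):
    """Group JSONL rows by their exact set of columns.

    Returns a dict mapping a sorted tuple of column names to the rows
    that carry exactly those columns, so each group can be saved as a
    separate file and registered as its own configuration.
    """
    groups = defaultdict(list)
    for line in jsonl_lines:
        row = json.loads(line)
        groups[tuple(sorted(row))].append(row)
    return dict(groups)

# Two synthetic lines with the schemas seen in the traceback.
lines = [
    '{"text": "x = 1"}',
    '{"text": "y = 2", "source": "stack"}',
]
for columns, rows in split_by_schema(lines).items():
    print(columns, len(rows))
```

Each resulting group could then be written to its own `.jsonl` file and wired up per the manual-configuration docs linked in the error message.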

