Usage (Huggingface Hub -- Recommended)
Replace `bal-Arab` with your specific language.
```python
from huggingface_hub import snapshot_download

folder = snapshot_download(
    "cis-lmu/glotcc-v1",
    repo_type="dataset",
    local_dir="./path/to/glotcc-v1/",
    # Replace "v1.0/bal-Arab/*" with the path for any other language available in the dataset
    allow_patterns="v1.0/bal-Arab/*"
)
```
For faster downloads, make sure to `pip install huggingface_hub[hf_transfer]` and set the environment variable `HF_HUB_ENABLE_HF_TRANSFER=1`.
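If you prefer to configure this from within Python, the variable can also be set before `huggingface_hub` is imported; a minimal sketch, assuming `hf_transfer` is installed:

```python
import os

# Must be set before huggingface_hub is imported, since the flag is
# typically read when the library is first loaded
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import snapshot_download

folder = snapshot_download(
    "cis-lmu/glotcc-v1",
    repo_type="dataset",
    local_dir="./path/to/glotcc-v1/",
    allow_patterns="v1.0/bal-Arab/*",
)
```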
Then you can load it with any library that supports Parquet files, such as Pandas:
```python
import pandas as pd

# Load the dataset from a Parquet file
# Replace the file path with the path to the desired language's Parquet file
dataset = pd.read_parquet('./path/to/glotcc-v1/v1.0/bal-Arab/bal-Arab_0.parquet')

# Print the first 5 rows of the dataset
print(dataset.head())
```
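A language may be split across several Parquet shards (`bal-Arab_0.parquet`, `bal-Arab_1.parquet`, ...). A minimal sketch for loading every downloaded shard into one DataFrame, assuming that naming pattern:

```python
import glob
import pandas as pd

# Collect all Parquet shards downloaded for the language
files = sorted(glob.glob('./path/to/glotcc-v1/v1.0/bal-Arab/*.parquet'))

# Read each shard and concatenate into a single DataFrame
dataset = pd.concat((pd.read_parquet(f) for f in files), ignore_index=True)

print(f"{len(dataset)} rows loaded from {len(files)} files")
```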
Usage (Huggingface datasets)
```python
from datasets import load_dataset

# Replace "bal-Arab" with the name of any other language available in the dataset
dataset = load_dataset("cis-lmu/glotcc-v1", name="bal-Arab", split="train")

# Print the first row of data
print(dataset[0])
```
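The returned object is a regular `datasets.Dataset`, so the usual inspection helpers apply; for example (the column names depend on the dataset's schema):

```python
# Number of rows and the available columns
print(dataset.num_rows)
print(dataset.column_names)

# Convert a small slice to pandas for quick inspection
print(dataset.select(range(5)).to_pandas())
```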
Usage (Huggingface datasets -- streaming=True)
```python
from datasets import load_dataset

# Replace "bal-Arab" with the name of any other language available in the dataset
fw = load_dataset("cis-lmu/glotcc-v1", name="bal-Arab", split="train", streaming=True)

# Create an iterator from the streaming dataset
iterator = iter(fw)

# Print the next item from the iterator
print(next(iterator))
```
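To read more than one record without materializing the whole split, `itertools.islice` works on the same streaming dataset; a minimal sketch:

```python
from itertools import islice

# Stream only the first 10 records of the split
for row in islice(fw, 10):
    print(row)
```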
Usage (direct download)
If you prefer not to use the Hugging Face datasets library or the hub client, you can download the files directly. For example, to download the first file of `bal-Arab`:
```
!wget https://huggingface.co./datasets/cis-lmu/GlotCC-V1/resolve/main/v1.0/bal-Arab/bal-Arab_0.parquet
```
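A single file can also be fetched programmatically; a minimal sketch using `hf_hub_download` from `huggingface_hub` (equivalent to the `wget` call above, but stored in the local cache):

```python
from huggingface_hub import hf_hub_download

# Download a single Parquet shard from the dataset repository
path = hf_hub_download(
    repo_id="cis-lmu/glotcc-v1",
    repo_type="dataset",
    filename="v1.0/bal-Arab/bal-Arab_0.parquet",
)
print(path)  # local path of the cached file
```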