---
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 1929035
    num_examples: 5000
  - name: validation
    num_bytes: 1926717
    num_examples: 5000
  - name: test
    num_bytes: 1926477
    num_examples: 5000
  download_size: 5840409
  dataset_size: 5782229
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---
# Random ASCII Dataset
This dataset contains random sequences of printable ASCII characters, split into `train`, `validation`, and `test` sets. Each sequence consists of pseudo-randomly generated "words" of varying lengths, separated by spaces to mimic the structure of natural-language text.
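A minimal sketch of the generation idea (the full script used to build this dataset appears at the end of this card):
```python
import random
import string

def random_word(length: int, pool: str = string.printable) -> str:
    # A "word" is `length` characters drawn uniformly at random from the pool.
    return "".join(random.choice(pool) for _ in range(length))

# Join a few random-length words with spaces to mimic a sentence.
print(" ".join(random_word(random.randint(3, 10)) for _ in range(5)))
```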
## Dataset Details
- **Splits**: Train, Validation, and Test
- **Number of sequences**:
- Train: 5000 sequences
- Validation: 5000 sequences
- Test: 5000 sequences
- **Sequence length**: generated with a target of 512 characters; in practice each sequence averages roughly 380 characters, since the generator joins `sequence_length // 10` words of 3–10 characters (see the generation code below)
- **Character pool**: All printable ASCII characters, including letters, digits, punctuation, and whitespace.
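The character pool corresponds to Python's `string.printable` constant, the same constant used in the generation script at the end of this card:
```python
import string

# string.printable = digits + letters + punctuation + whitespace (100 characters total)
print(len(string.printable))
print(repr(string.printable))
```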
## Sample Usage
To load this dataset in Python, you can use the Hugging Face `datasets` library:
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("brando/random-ascii-dataset")

# Access the train, validation, and test splits
train_data = dataset["train"]
val_data = dataset["validation"]
test_data = dataset["test"]

# Print a sample
print(train_data[0]["text"])
```
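As a quick sanity check on the actual sequence lengths (a sketch; it assumes the splits have been loaded as above):
```python
# Character-length statistics for the train split
lengths = [len(example["text"]) for example in train_data]
print(f"min={min(lengths)}, max={max(lengths)}, mean={sum(lengths) / len(lengths):.1f}")
```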
## Example Data
Below are examples of random sequences generated in this dataset:
```python
# Example 1:
"!Q4$^V3w L@#12 Vd&$%4B+ (k#yFw! [7*9z"

# Example 2:
"T^&3xR f$xH&ty ^23M* qW@# Lm5&"

# Example 3:
"b7$W %&6Zn!!R xT&8N z#G m93T +%^0"
```
## License
This dataset is released under the Apache License 2.0. You are free to use, modify, and distribute it under the terms of that license.
## Citation
```bibtex
@misc{miranda2021ultimateutils,
  title={Ultimate Utils - the Ultimate Utils Library for Machine Learning and Artificial Intelligence},
  author={Brando Miranda},
  year={2021},
  url={https://github.com/brando90/ultimate-utils},
  note={Available at: \url{https://www.ideals.illinois.edu/handle/2142/112797}},
  abstract={Ultimate Utils is a comprehensive library providing utility functions and tools to facilitate efficient machine learning and AI research, including efficient tensor manipulations and gradient handling with methods such as `detach()` for creating gradient-free tensors.}
}
```
## Code That Generated the Dataset
```python
# ref: https://chatgpt.com/c/671ff56a-563c-8001-afd5-94632fe63d67
import os
import random
import string

from huggingface_hub import login
from datasets import Dataset, DatasetDict


def load_token(file_path: str) -> str:
    """Load the Hugging Face API token from a specified file path."""
    with open(os.path.expanduser(file_path)) as f:
        return f.read().strip()


def login_to_huggingface(token: str) -> None:
    """Authenticate with the Hugging Face Hub using a token."""
    login(token=token)
    print("Login successful")


def generate_random_word(length: int, character_pool: str) -> str:
    """Generate a random word of the specified length from a character pool."""
    return "".join(random.choice(character_pool) for _ in range(length))


def generate_random_sentence(sequence_length: int, character_pool: str) -> str:
    """Generate a random sentence of approximately sequence_length characters from "words" of random lengths."""
    words = [
        generate_random_word(random.randint(3, 10), character_pool)
        for _ in range(sequence_length // 10)  # Estimate number of words to fit length
    ]
    sentence = " ".join(words)
    # print(f"Generated sentence length: {len(sentence)}\a")  # Print length and sound alert
    return sentence


def create_random_text_dataset(num_sequences: int, sequence_length: int, character_pool: str) -> Dataset:
    """Create a dataset with random text sequences."""
    data = {
        "text": [generate_random_sentence(sequence_length, character_pool) for _ in range(num_sequences)]
    }
    return Dataset.from_dict(data)


def main() -> None:
    """Generate, inspect, and upload the dataset with train, validation, and test splits."""
    # Step 1: Load token and log in
    key_file_path: str = "/lfs/skampere1/0/brando9/keys/brandos_hf_token.txt"
    token: str = load_token(key_file_path)
    login_to_huggingface(token)

    # Step 2: Dataset parameters
    num_sequences_train: int = 5000
    num_sequences_val: int = 5000
    num_sequences_test: int = 5000
    sequence_length: int = 512
    character_pool: str = string.printable  # All printable ASCII characters (letters, digits, punctuation, whitespace)

    # Step 3: Create datasets for each split
    train_dataset = create_random_text_dataset(num_sequences_train, sequence_length, character_pool)
    val_dataset = create_random_text_dataset(num_sequences_val, sequence_length, character_pool)
    test_dataset = create_random_text_dataset(num_sequences_test, sequence_length, character_pool)

    # Step 4: Combine into a DatasetDict with train, validation, and test splits
    dataset_dict = DatasetDict({
        "train": train_dataset,
        "validation": val_dataset,
        "test": test_dataset
    })

    # Step 5: Print a sample of the train dataset for verification
    print("Sample of train dataset:", train_dataset[:5])

    # Step 6: Push the dataset to the Hugging Face Hub
    dataset_name: str = "brando/random-ascii-dataset"
    dataset_dict.push_to_hub(dataset_name)
    print(f"Dataset uploaded to https://huggingface.co./datasets/{dataset_name}")


# Run the main function
if __name__ == "__main__":
    main()
```
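To reproduce the data locally without pushing to the Hub, the helper functions above can be reused directly (a sketch; the login steps are skipped and the parameters are illustrative):
```python
import string

# Build a small local sample with the same generator (10 sequences instead of 5000).
sample_dataset = create_random_text_dataset(
    num_sequences=10,
    sequence_length=512,
    character_pool=string.printable,
)
print(sample_dataset)
print(sample_dataset[0]["text"][:80])
```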