The Multi-species dataset was constructed by parsing the genomes available on [NCBI](https://www.ncbi.nlm.nih.gov/) and arbitrarily selecting only one species from each genus. Plant and virus genomes were not taken into account, as their regulatory elements differ from those of interest in the paper's tasks. The resulting collection of genomes was downsampled to a total of 850 species, into which several genomes that are heavily studied in the literature were incorporated. The collection represents 174B nucleotides, resulting in roughly 29B tokens. The distribution of each genomic class in the dataset is displayed below (a sketch of the genus-level selection follows the table):

| Class                | Number of species | Number of nucleotides (B) |
| -------------------- | ----------------- | ------------------------- |
| Bacteria             | 667               | 17.1                      |
| …                    | …                 | …                         |
| Protozoa             | 10                | 0.5                       |
| Mammalian Vertebrate | 31                | 69.8                      |
| Other Vertebrate     | 57                | 63.4                      |
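
A minimal sketch of the genus-level selection and downsampling described above, assuming the assemblies have already been parsed from NCBI into `(genus, species, path)` records; the record format, function name, and sampling strategy are illustrative assumptions, not the authors' pipeline:

```python
import random

# Hypothetical input: (genus, species, assembly_path) tuples parsed from
# NCBI, with plant and virus genomes already filtered out.
def select_one_species_per_genus(assemblies, n_target=850, seed=0):
    rng = random.Random(seed)
    by_genus = {}
    for genus, species, path in assemblies:
        # Keep an arbitrary single species per genus (here: the first seen).
        by_genus.setdefault(genus, (species, path))
    selected = list(by_genus.values())
    # Downsample to the target species count. (The paper also retains
    # several heavily studied genomes; that step is omitted in this sketch.)
    if len(selected) > n_target:
        selected = rng.sample(selected, n_target)
    return selected
```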
### Supported Tasks and Leaderboards
This dataset has been used as a pre-training corpus for the Nucleotide Transformer models. Depending on the configuration used, each sequence is 6,200 or 12,200 base pairs long (i.e., a 6,000 or 12,000 bp window plus the two 100 bp overlaps described next). If the dataset is iterated without being shuffled, the first 100 nucleotides of a sequence are the same as the last 100 base pairs of the previous sequence, and the last 100 nucleotides are the same as the first 100 base pairs of the next sequence. During training, this makes it possible to randomly select a nucleotide among the first 200 nucleotides of the sequence and start the tokenization from that nucleotide. That way, the whole chromosome is covered and the model sees different tokens for a given sequence at each epoch.
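
A minimal sketch of this random-offset trick, assuming non-overlapping 6-mer tokens (the roughly 6:1 nucleotide-to-token ratio quoted above suggests 6-mers, but the tokenizer details are an assumption here, not taken from this card):

```python
import random

K = 6  # assumed k-mer size, inferred from the ~174B nt / ~29B token ratio

def tokenize_with_random_offset(sequence: str, k: int = K) -> list[str]:
    """Start tokenizing at a random position within the first 200
    nucleotides, so the same stored sequence yields a different token
    stream at each epoch, while the 100 nt overlaps between consecutive
    sequences keep the whole chromosome covered."""
    offset = random.randrange(200)
    shifted = sequence[offset:]
    # Split into non-overlapping k-mers; a trailing remainder shorter
    # than k nucleotides is simply dropped in this sketch.
    return [shifted[i:i + k] for i in range(0, len(shifted) - k + 1, k)]

# Example: a toy 6,200 nt sequence yields roughly 1,000 six-mer tokens,
# starting from a different offset on each call.
toy_sequence = "".join(random.choice("ACGT") for _ in range(6_200))
print(len(tokenize_with_random_offset(toy_sequence)))
```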