---
task_categories:
  - feature-extraction
language:
  - en
tags:
  - physics
  - astrophysics
  - high energy physics
  - science
pretty_name: Astro-HEP Corpus
size_categories:
  - 100K<n<1M
---

# Dataset Card for Astro-HEP Corpus

The Astro-HEP Corpus consists of approximately 21.8 million paragraphs extracted from more than 600,000 scholarly articles related to astrophysics, high energy physics, or both. All articles were published on the open-access repository arXiv.org between 1986 and 2022 (inclusive).

The final dataset has the following columns:

| Column     | Description |
|------------|-------------|
| Text       | Full text of the paragraph |
| Characters | Number of Unicode characters in the paragraph |
| Subwords   | Number of BERT subwords in the paragraph |
| arXiv ID   | Identifier of the parent article provided by arXiv |
| Year       | Year of the first publication of the parent article |
| Month      | Month of the first publication of the parent article |
| Day        | Day of the first publication of the parent article |
| Position   | Position of the paragraph in the sequence of paragraphs in the article |
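As a rough sketch of how the columns might be consumed, the snippet below filters paragraph records by Year and Position using plain Python dictionaries that mirror the schema above. The record values are invented for illustration; the dataset itself can be loaded with, e.g., the Hugging Face `datasets` library.

```python
# Toy records mirroring the corpus schema (all values invented for illustration).
records = [
    {"Text": "We study inflationary models ... [CIT]",
     "Characters": 39, "Subwords": 12,
     "arXiv ID": "astro-ph/9901001", "Year": 1999, "Month": 1, "Day": 5,
     "Position": 0},
    {"Text": "FORMULA describes the evolution of the scale factor.",
     "Characters": 52, "Subwords": 11,
     "arXiv ID": "hep-th/0203101", "Year": 2002, "Month": 3, "Day": 14,
     "Position": 3},
]

def opening_paragraphs(rows, since_year):
    """Return the first paragraph (Position == 0) of articles published
    in `since_year` or later."""
    return [r for r in rows if r["Position"] == 0 and r["Year"] >= since_year]

selected = opening_paragraphs(records, since_year=1990)
print([r["arXiv ID"] for r in selected])  # -> ['astro-ph/9901001']
```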

The corpus served as training data for the Astro-HEP-BERT model. For further insights into the corpus, the model, and the underlying research project (Network Epistemology in Practice), please refer to this paper [link coming soon].

## Construction

The articles were selected using the original arXiv metadata file and the original arXiv taxonomy, which comprises four primary categories for high energy physics (hep-ex, hep-lat, hep-ph, and hep-th) and one primary category for astrophysics (astro-ph), the latter of which includes six subcategories (astro-ph.CO, astro-ph.EP, astro-ph.GA, astro-ph.HE, astro-ph.IM, and astro-ph.SR). Pandoc was used to extract plain text from the original LaTeX files sourced from arXiv.org. In addition, all in-text citations were replaced with the marker "[CIT]", and all multiline mathematical expressions were replaced with "FORMULA". Mathematical expressions in inline math mode (e.g., "$...$") remained unaltered. No specialized parser was needed to split the plain-text articles into paragraphs: owing to the conventions of the LaTeX markup language and the parsing already performed by Pandoc, all paragraphs could be recovered by straightforward newline splitting. Additional cleaning was performed to remove noisy paragraphs (see here [link coming soon]).
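A minimal sketch of the marker-replacement and paragraph-splitting steps is given below. The regular expressions are simplified stand-ins, not the project's actual pipeline (the real extraction runs through Pandoc and handles many more LaTeX constructs); the sketch assumes citations appear as `\cite`-style commands and display math as `$$...$$` blocks.

```python
import re

def preprocess(latex_text: str) -> list[str]:
    """Replace citations and display math with markers, then split into
    paragraphs. A simplified stand-in for the Pandoc-based pipeline."""
    # Replace in-text citations (\cite, \citet, \citep) with the "[CIT]" marker.
    text = re.sub(r"\\cite[tp]?\{[^}]*\}", "[CIT]", latex_text)
    # Replace $$...$$ display math with the "FORMULA" marker.
    text = re.sub(r"\$\$.*?\$\$", "FORMULA", text, flags=re.DOTALL)
    # Inline math ($...$) is deliberately left untouched.
    # Split paragraphs on blank lines, dropping empty fragments.
    return [p.strip() for p in re.split(r"\n\s*\n", text) if p.strip()]

sample = (
    "Dark matter halos \\citep{nfw1996} dominate.\n\n"
    "$$\\rho(r) = \\rho_0 / (r/r_s)$$\n\n"
    "The inline profile $\\rho(r)$ is kept."
)
print(preprocess(sample))
```

Note that the non-greedy `.*?` keeps each `$$...$$` replacement local to one display block, so inline `$...$` expressions survive intact, as in the corpus.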