---
task_categories:
- feature-extraction
language:
- en
tags:
- physics
- astrophysics
- high energy physics
- science
pretty_name: Astro-HEP Corpus
size_categories:
- 100K<n<1M
---

# Astro-HEP Corpus

The Astro-HEP Corpus is a collection of paragraphs extracted from astrophysics and high-energy-physics articles published on arXiv.org. The final dataset has the following columns:

|Column|Description|
|:----------:|:-:|
|*Text*|Full text of the paragraph|
|*Characters*|Number of Unicode characters in the paragraph|
|*Subwords*|Number of BERT subwords in the paragraph|
|*arXiv ID*|Identifier of the parent article provided by arXiv|
|*Year*|Year of the first publication of the parent article|
|*Month*|Month of the first publication of the parent article|
|*Day*|Day of the first publication of the parent article|
|*Position*|Position in the sequence of paragraphs in the article|

The corpus served as training data for the Astro-HEP-BERT model. For further insights into the corpus, the model, and the underlying research project (Network Epistemology in Practice), please refer to this paper [link coming soon].

## Construction

The articles were selected using the original arXiv metadata file and the original arXiv taxonomy, which comprises four primary categories for high energy physics (hep-ex, hep-lat, hep-ph, and hep-th) and one primary category for astrophysics (astro-ph). The latter includes six subcategories (astro-ph.CO, astro-ph.EP, astro-ph.GA, astro-ph.HE, astro-ph.IM, and astro-ph.SR).

Pandoc was used to extract plain text from the original LaTeX files sourced from arXiv.org. In addition, all in-text citations were replaced with the marker "[CIT]", and all multiline mathematical expressions were replaced with "FORMULA". Mathematical expressions in inline math mode (e.g. "$...$") remained unaltered.

No specialized parser was needed to split the plain-text versions of the articles into paragraphs: owing to the requirements of the LaTeX markup language and the parsing already performed by Pandoc, all paragraphs could be obtained by straightforward newline splitting.
Additional cleaning was performed to remove noisy paragraphs (see here [link coming soon]).
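The substitution and splitting steps above can be sketched in a few lines of Python. This is a minimal illustration, not the project's actual code: the exact patterns used to detect citations and display math are assumptions, and the real pipeline operated on Pandoc output with additional cleaning.

```python
import re

def paragraphs_from_text(text: str) -> list[str]:
    """Sketch of the described preprocessing: mask display math and
    citations, then split into paragraphs on blank lines."""
    # Replace multiline/display math with the FORMULA marker.
    # (Hypothetical pattern; the card does not give the exact regex.)
    text = re.sub(r"\$\$.*?\$\$", "FORMULA", text, flags=re.DOTALL)
    # Replace LaTeX-style citation commands with the [CIT] marker
    # (again a hypothetical pattern, for illustration only).
    text = re.sub(r"\\cite[tp]?\*?\{[^}]*\}", "[CIT]", text)
    # After Pandoc conversion, paragraphs are separated by blank lines,
    # so a straightforward split suffices; inline math ($...$) is untouched.
    return [p.strip() for p in re.split(r"\n\s*\n", text) if p.strip()]
```

For example, a text containing a `\cite{...}` command, a `$$...$$` block, and an inline `$x$` expression would yield paragraphs with `[CIT]` and `FORMULA` markers while the inline math survives unchanged.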