---
language:
- en
license: cc0-1.0
size_categories:
- 10K<n<100K
---

# NEPATEC 1.0

## Uses

NEPATEC 1.0 is a one-of-a-kind dataset for the environmental review/permitting domain. It can support a variety of scientific studies, including (1) training LLMs for domain adaptation and (2) applying NLP models, including LLMs, to exploratory data analytics such as spatio-temporal trend analysis across project types, locations, and agencies, among other dimensions. Such studies can offer valuable insights into NEPA processes and help inform future environmental reviews.

## Usage

To download and use the data with the Hugging Face `datasets` library, run the following code:

```
from datasets import load_dataset

data = load_dataset("PolicyAI/NEPATEC1.0")
```

## Dataset Structure

The dataset is a list of EIS (Environmental Impact Statement) project-level dictionaries. Each dictionary has the following structure:

```
{
  "Project Title": Title of the EIS project,
  "State": A set of states the EIS targets,
  "Agency": A set of agencies the EIS is associated with,
  "EPA Comment Letter Date": A set of public comment dates associated with the EIS documents,
  "Federal Register Date": A set of registration dates associated with the EIS documents,
  "Documents": A list of documents associated with the EIS, along with their textual data and extracted named entities
}
```

Each document associated with an EIS project data point is itself a dictionary with the following structure:

```
{
  "Metadata": A dictionary containing the document title,
  "Pages": A list of dictionaries, one per page of the document, containing the textual data for that page as well as the extracted named entities
}
```

The named entities are also stored as dictionaries with the following structure:

```
{
  "text": The text of the named entity,
  "label": The label assigned to the named entity,
  "score": Confidence score that the text belongs to the given label
}
```

## Dataset Creation

The NEPATEC 1.0 dataset was scraped from the United States Environmental Protection Agency data website by issuing an empty search, which returned all the documents in the database. The collection involved two steps:

1. Metadata collection: The EPA website provides an option to download metadata for all the documents returned by the search.
2. Document scraping: In this step we scraped and downloaded all the documents retrieved by the search.

### Data Collection and Processing

After downloading the data, one of the major issues we faced was merging documents by project name: multiple projects shared the same name, and some projects appeared under slightly different names. To solve this, we followed a two-step merging process (a sketch appears below):

1. Duplicate merging: merging titles, and their corresponding metadata, that have identical names
2. Fuzzy merging: merging similar titles, and their corresponding metadata, using fuzzy matching

We used PyMuPDF to parse textual and image data from the downloaded documents, splitting the parsed text by page. We then extracted named entities from the page-wise text using the GLiNER toolkit. Because the GLiNER model we used accepts roughly 400 tokens per input, we split the page-wise text into batches of 150 words and passed these batches through the GLiNER pipeline (see the second sketch below).
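The two-step title merging can be sketched in a few lines of Python. This is a minimal illustration rather than the exact pipeline: it assumes the `rapidfuzz` library and a similarity threshold of 90, neither of which is specified in this card.

```
from rapidfuzz import fuzz

def merge_titles(titles, threshold=90):
    """Group project titles, merging exact duplicates (step 1) and
    near-duplicates found by fuzzy matching (step 2)."""
    groups = {}  # canonical title -> all titles merged into it
    for title in titles:
        # Exact duplicates score 100, so one pass covers both steps.
        match = next(
            (m for m in groups if fuzz.token_sort_ratio(title, m) >= threshold),
            None,
        )
        if match is None:
            groups[title] = [title]      # new canonical title
        else:
            groups[match].append(title)  # merge into an existing project
    return groups
```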
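The named-entity extraction step can be sketched similarly. The checkpoint name and label set below are placeholders for illustration; the card does not state which GLiNER model or entity labels were actually used.

```
from gliner import GLiNER

model = GLiNER.from_pretrained("urchade/gliner_base")  # checkpoint assumed
labels = ["location", "organization", "date"]          # hypothetical label set

def batch_words(text, size=150):
    # Split page text into 150-word batches, since the GLiNER model
    # accepts roughly 400 tokens per input.
    words = text.split()
    for i in range(0, len(words), size):
        yield " ".join(words[i:i + size])

def extract_entities(page_text):
    entities = []
    for batch in batch_words(page_text):
        # predict_entities returns dicts with "text", "label", and "score",
        # matching the named-entity structure described above.
        entities.extend(model.predict_entities(batch, labels, threshold=0.5))
    return entities
```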
## Limitations

Because of the process used to merge projects with similar titles, we had to drop over 7,000 documents, and their corresponding projects, from the dataset. The NEPATEC 1.0 dataset therefore does not contain all of the documents available on the EPA website.

## Acknowledgement

This work was supported by the Office of Policy, U.S. Department of Energy, and Pacific Northwest National Laboratory, which is operated by Battelle Memorial Institute for the U.S. Department of Energy under Contract DE-AC05-76RL01830. This dataset card has been cleared by PNNL for public release as PNNL-36100. The NEPATEC 1.0 dataset has been cleared by PNNL for public release as PNNL-SA-199568.