dataset (string, 45 values) | id (string, 17–64 chars) | messages (list, length 2) |
---|---|---|
science.healthver_entailment | science.healthver_entailment.2378 | [
{
"content": "Output a JSON object with \"verdict\" and \"evidence\" fields based on the claim and abstract below.\n\nClaim: People with chronic conditions like diabetes and lung and heart disease are at greater risk of being hospitalized due to the coronavirus - but more than one in five patients who end up in intensive care had no such health problems.\n\nAbstract:\nObjective: To undertake a review and critical appraisal of published/preprint reports that offer methods of determining the effects of hypertension, diabetes, stroke, cancer, kidney issues, and high-cholesterol on COVID-19 disease severity.\nData sources: Google Scholar, PubMed, COVID-19 Open Research Dataset: a resource of over 128,000 scholarly articles, including over 59,000 articles with full text related to COVID-19, SARS-CoV-2, and coronaviruses.\nMethods: A search was conducted by two authors independently on the freely available COVID-19 Open Research Dataset (CORD-19).\nWe developed an automated search engine to screen a total of 59,000 articles in a few seconds.\nThe search engine was built using a retrieval function that ranks a set of documents based on the query terms appearing in each document regardless of their proximity within the document.\nFiltering of the articles was then undertaken using keywords and questions, e.g. \"Effects of diabetes on COVID/normal coronavirus/SARS-CoV-2/nCoV/COVID-19 disease severity, mortality?\".\nThe search terms were repeated for all the comorbidities considered in this paper.\nAdditional articles were retrieved by searching via Google Scholar and PubMed.\nFindings: A total of 54 articles were considered for a full review.\nIt was observed that diabetes, hypertension, and cholesterol levels possess an apparent relation to COVID-19 severity.\nOther comorbidities, such as cancer, kidney disease, and stroke, must be further evaluated to determine a strong relationship to the virus.\nReports associating cancer, kidney disease, and stroke with COVID-19 should be carefully interpreted, not only because of the size of the samples, but also because patients could be old, have a history of smoking, or have any other clinical condition suggesting that these factors might be associated with the poor COVID-19 outcomes rather than the comorbidity itself.\nSuch reports could lead many oncologists and physicians to change their treatment strategies without solid evidence and recommendations.\nFurther research regarding this relationship and its clinical management is warranted.\nAdditionally, treatment options must be examined further to provide optimal treatment and ensure better outcomes for patients suffering from these comorbidities.\nIt should be noted that, whether definitive measurements exist or not, the care of patients as well as the research involved should be largely prioritized to tackle this deadly pandemic.",
"role": "user"
},
{
"content": "{\"verdict\": \"NEI\", \"evidence\": []}",
"role": "assistant"
}
] |
science.medmentions_ner | science.medmentions_ner.1456 | [
{
"content": "Extract all unique entities from this biomedical abstract. Only choose entities from these types: [\"Virus\", \"MedicalDevice\", \"BodySubstance\", \"SpatialConcept\", \"BiologicFunction\", \"ResearchActivity\", \"AnatomicalStructure\", \"ProfessionalOrOccupationalGroup\", \"HealthCareActivity\", \"BiomedicalOccupationOrDiscipline\", \"Bacterium\", \"Chemical\", \"InjuryOrPoisoning\", \"IntellectualProduct\", \"Finding\", \"ClinicalAttribute\", \"Organization\", \"Eukaryote\", \"Food\", \"BodySystem\", \"PopulationGroup\"].\n\nOutput a JSON object where keys are entity types and values are lists of extracted entities. If no entities found for a type, use empty list [].\n\nAbstract: Exposure to organophosphate (OP) pesticides and health conditions in agricultural and non-agricultural workers from Maule, Chile The objective was to evaluate the characteristics of exposure to OP pesticides and health status in Chilean farm workers from the Maule Region. An occupational health questionnaire was administered in 207 agricultural and non-agricultural workers. For the group of agricultural workers, we asked about specific occupational exposure history and symptoms of OP pesticide poisoning. The main health problem of the exposed group was previous OP pesticide poisoning (p < 0.001). Fifty-six percent of agricultural workers reported symptoms consistent with acute OP pesticide poisoning. The use of respiratory personal protective equipment and younger age were protective against these symptoms, and number of years of OP pesticide exposure was positively associated with reporting symptoms of poisoning. Of the pesticide applicators 47 % reported using chlorpyrifos. The regulations regarding use and application of pesticides should be strengthened, as should training and intervention with workers to improve the use of personal protective equipment.",
"role": "user"
},
{
"content": "{\"AnatomicalStructure\": [], \"Bacterium\": [], \"BiologicFunction\": [], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [\"organophosphate (OP) pesticides\", \"OP pesticide\", \"pesticides\", \"OP pesticides\", \"chlorpyrifos\"], \"ClinicalAttribute\": [], \"Eukaryote\": [], \"Finding\": [\"occupational exposure history\", \"positively\", \"symptoms\", \"health conditions\"], \"Food\": [], \"HealthCareActivity\": [\"intervention\"], \"InjuryOrPoisoning\": [\"poisoning\"], \"IntellectualProduct\": [\"occupational health questionnaire\"], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [\"workers\", \"exposed group\"], \"ProfessionalOrOccupationalGroup\": [\"farm workers\"], \"ResearchActivity\": [], \"SpatialConcept\": [\"Chile\"], \"Virus\": []}",
"role": "assistant"
}
] |
science.bioasq_yesno_qa | science.bioasq_yesno_qa.977 | [
{
"content": "Based on the following biomedical documents, answer the question with either yes or no.\n\nQuestion: Is TIM-3 a target for cancer immunotherapy in NSCLC?\n\nContext: Our results imply that implementing combined treatment on CIK cells before transfusion via antibodies targeting PD-L1, LAG-3, TIM-3, and CEACAM-1 might improve the efficiency of CIK therapy for NSCLC patients.\nFurthermore, TIM-3 and CEACAM1 were strongly expressed simultaneously during long-term CIK culture and showed a significant and mutually positive correlation.\nIn present study, we detected the dynamic expression of eight major checkpoint molecules (CTLA-4, PD-1, PD-L1, TIM- 3, CEACAM-1, LAG-3, TIGIT and BTLA) on CIK cells from NSCLC patients.\nAgents targeting other immune inhibitory (e.g., Tim-3) or immune stimulating (e.g., CD137) receptors on T cells and other approaches such as adoptive cell transfer are tested for clinical efficacy in melanoma as well.\nWe found immune activation coexistent with elevation of multiple targetable immune checkpoint molecules, including PD-L1, PD-L2, PD-1, TIM-3, B7-H3, BTLA, and CTLA-4, along with increases in tumor infiltration by CD4(+)Foxp3(+) regulatory T cells in lung adenocarcinomas that displayed an EMT phenotype\nCytometric profiling identified an immunologically \"hot\" cluster with abundant CD8+ T cells expressing high levels of PD-1 and TIM-3 and an immunologically \"cold\" cluster with lower relative abundance of CD8+ T cells and expression of inhibitory markers\nInterestingly, CD161+ CD4+ T cells highly express OX40 co-stimulatory receptor, less frequently 4-1BB, and display an activated but not completely exhausted PD-1-positive Tim-3-negative phenotype.\n. Furthermore, overexpression of targetable immune checkpoints, such as CTLA-4 and TIM-3 were associated with EMT in both NSCLCs.",
"role": "user"
},
{
"content": "yes",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.29053 | [
{
"content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nWe present a method for grouping the synonyms of a lemma according to its dictionary senses. The senses are defined by a large machine readable dictionary for French, the TLFi (Tr 'esor de la langue fran c aise informatis 'e) and the synonyms are given by 5 synonym dictionaries (also for French). To evaluate the proposed method, we manually constructed a gold standard where for each (word, definition) pair and given the set of synonyms defined for that word by the 5 synonym dictionaries, 4 lexicographers specified the set of synonyms they judge adequate. While inter-annotator agreement ranges on that task from 67 to at best 88 depending on the annotator pair and on the synonym dictionary being considered, the automatic procedure we propose scores a precision of 67 and a recall of 71 . The proposed method is compared with related work namely, word sense disambiguation, synonym lexicon acquisition and WordNet construction.\n\nCited Abstract(s):\n[1]: Veronis (2004) has recently proposed an innovative unsupervised algorithm for word sense disambiguation based on small-world graphs called HyperLex. This paper explores two sides of the algorithm. First, we extend Veronis' work by optimizing the free parameters (on a set of words which is different to the target set). Second, given that the empirical comparison among unsupervised systems (and with respect to supervised systems) is seldom made, we used hand-tagged corpora to map the induced senses to a standard lexicon (WordNet) and a publicly available gold standard (Senseval 3 English Lexical Sample). Our results for nouns show that thanks to the optimization of parameters and the mapping method, HyperLex obtains results close to supervised systems using the same kind of bag-of-words features. Given the information loss inherent in any mapping step and the fact that the parameters were tuned for another set of words, these are very interesting results.",
"role": "user"
},
{
"content": "It would in principle be possible to use an unsupervised approach and attempt to disambiguate synonyms on the basis of raw corpora. Such approaches however are not based on a fixed list of senses where the senses for a target word are a closed list coming from a dictionary. Instead they induce word senses directly from the corpus by using clustering techniques, which group together similar examples. To associate synonyms with definitions, it would therefore be necessary to define an additional mapping between corpus induced word senses and dictionary definitions. As noted in [1] , such a mapping usually introduces noise and information loss however.",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.22042 | [
{
"content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nThe threat of hardware Trojans has been widely recognized by academia, industry, and government agencies. A Trojan can compromise security of a system in spite of cryptographic protection. The damage caused by a Trojan may not be limited to a business or reputation, but could have a severe impact on public safety, national economy, or national security. An extremely stealthy way of implementing hardware Trojans has been presented by at CHES'2012. Their work have shown that it is possible to inject a Trojan in a random number generator compliant with FIPS 140-2 and NIST SP800-90 standards by exploiting non-zero aliasing probability of Logic Built-In-Self-Test (LBIST). In this paper, we present two methods for modifying LBIST to prevent such an attack. The first method makes test patterns dependent on a configurable key which is programed into a chip after the manufacturing stage. The second method uses a remote test management system which can execute LBIST using a different set of test patterns at each test cycle.\n\nCited Abstract(s):\n[1]: Many semiconductor chips used in a wide range of applications require protection against physical attacks or tamper resistance. These attacks assume that a direct access to the chip is possible with either establishing electrical connections to signal wires or at least doing some measurements. The importance of protection against physical attacks is dictated by the amount of valuable and sensitive information stored on the chip. This could be secret data or company secrets and intellectual property (IP), electronic money for service access, or banking smartcards. The security in chips serves to deter prospective attackers from performing unauthorized access and benefiting from it. There are many areas that rely on tamper resistance of silicon chips. One of the first was car industry with theft protection and car alarms. Then in the early 1990s service providers such as PayTV, satellite TV, and utility companies realized that their service can be stolen if the access and payment cards are not properly protected. From the late 1990s home entertainment companies realized that their game consoles became the target of dishonest users who wanted to run illegal copies of the games. These days many device manufacturers from computer peripherals and mobile phones to printers and computers are worried about possible IP theft by third parties – either competitors or subcontractors. All the above challenges force hardware engineers to find secure solutions – either better protected off-the-shelf chips or their own custom chips. As in most cases it is impractical to block direct access to the device and its components, protection against physical attacks became the essential part of the system design. These days we have a continuous battle between the manufacturers who invent new security solutions learning their lessons from previous mistakes and the hacker community which is constantly trying to break the protection in various devices. Both sides are also constantly improving their knowledge and experience. In this endless war, the front line shifts forward and backward regularly. Deep down, the problem concerns both economics and law. On the one hand, when dishonest people try to steal property, there will be a demand to increase security. 
On the other, reverse engineering was always part of technological progress, helping to design compatible products and improve existing ones. The dividing line between legal (reverse engineering) and illegal (piracy) is difficult.",
"role": "user"
},
{
"content": "To perform a visual inspection, a target chip is first depackaged. Then, the layers of the chip are removed one-by-one by polishing or etching and, for each exposed layer, images are taken with a high-resolution imaging method such as Scanning Electron Microscopy (SEM) or Focused Ion Beam (FIM) [1] . These images are then compared to the corresponding images of a golden chip to detect possible differences. A golden chip is a chip which is known to have the correct functionality complying with the specification, without any malicious modifications.",
"role": "assistant"
}
] |
science.bioasq_yesno_qa | science.bioasq_yesno_qa.1179 | [
{
"content": "Based on the following biomedical documents, answer the question with either yes or no.\n\nQuestion: Is there a genetic cause of craniostenosis?\n\nContext: Apert syndrome - acrocephalosyndactyly - is a rare autosomal dominant disorder representing 1:65 000 cases of living newborns. Characteristic malformations of the Apert syndrome are early craniostenosis, microviscerocranium and II-V finger syndactyly of hand and toes with proximal phalanx of the bilateral thumb \"in delta\".\nA 3-year-old child with tertiary trisomy (14 (+14q--), daughter of a mother with a balanced reciprocal translocation [46,XX,t(14;16)(q11;q24) is presented. Craniostenosis and developmental retardation were the primary presenting features in this patient.\nSaethre-Chotzen syndrome is an autosomal dominant disease characterized by craniosynostosis, ptosis, and limb and external ear abnormalities\n. For 98 patients (15%) a syndrome is associated. Third part of them has Apert syndrome, an other third part has Crouzon syndrome, and for the last third more exceptional acrocephalosyndactyly syndrome (Saethre-Chotzen, Pfeiffer) or others atypical associations, sometimes not yet described, but with an autosomal dominant inheritance.\nCoronal craniostenosis seems to be a dominant autosomal character, when scaphocephaly is more often sporadic; for both, an autosomal dominant inheritance is not excluded for some pedigrees.\ngenetic origins are not completely clear although mutations in the genes that code for fibroblast growth factor receptors have been described; depending upon the gene involved, the type of mutation and the embryological period in which the mutation itself occurs, a type of craniosynostosis arises that may involve one or more cranial sutures. The\nSaethre-Chotzen syndrome is an autosomal dominant disease characterized by craniosynostosis, ptosis, and limb and external ear abnormalities.\nIdentification and analysis of the genetic causes in nine unrelated probands with syndromic craniosynostosis.\nTo identify and analyze causative genetic variants in nine unrelated probands mainly manifested as syndromic craniosynostosis, we reviewed the relevant medical information of the patients and performed the whole exome sequencing, further verified with Sanger sequencing and parental background.\n[Genetic counseling in craniostenosis. Results of a prospective study performed with a group of studies on craniofacial malformations].\nConstitutional 11q interstitial deletion syndrome presents with congenital anomalies including microcephaly with craniostenosis, minor dysmorphic features, vitreoretinopathy, and renal anomalies.\nA suckling baby with microcephaly, craniostenosis, downward slanting palpebral fissues, malformed ears, cerebral, cardiac and intestinal malformation, and partial 6q25 leads to 6qter trisomy is presented.\nIts genetic origins are not completely clear although mutations in the genes that code for fibroblast growth factor receptors have been described; depending upon the gene involved, the type of mutation and the embryological period in which the mutation itself occurs, a type of craniosynostosis arises that may involve one or more cranial sutures.\nthrough which skull growth abnormalities are seen. 
It is becoming clearer that in most patients with craniosynostosis, there is regional imbalance of skull growth, which co-exists with a variety of other equally important factors, such as genetic defects, raised intracranial pressure, venous hypertension, and other brain parenchymal a\nRecent genetic studies have identified several novel genes and pathways that cause nonsyndromic craniosynostosis, providing genetic evidence linking the causes of syndromic and nonsyndromic craniosynostoses, and allowing for genotype-based prediction of risk of recurrence in some nonsyndromic families.",
"role": "user"
},
{
"content": "yes",
"role": "assistant"
}
] |
science.bioasq_factoid_qa | science.bioasq_factoid_qa.845 | [
{
"content": "Answer the following biomedical question based on the given context. Provide only the answer without additional explanation.\n\nQuestion: Which algorithm has been developed for finding conserved non-coding elements (CNEs) in genomes?\n\nContext: CNEFinder: finding conserved non-coding elements in genomes.\nWe fill this gap by presenting CNEFinder; a tool for identifying CNEs between two given DNA sequences with user-defined criteria. The results presented here show the tool's ability of identifying CNEs accurately and efficiently.\nCNEFinder: finding conserved non-coding elements in genomes.Free software under the terms of the GNU GPL (https://github.com/lorrainea/CNEFinder).\nCurrently, there are no tools published in the literature for systematically identifying CNEs in genomes.<br><b>Results</b>: We fill this gap by presenting CNEFinder; a tool for identifying CNEs between two given DNA sequences with user-defined criteria.\nWe fill this gap by presenting CNEFinder; a tool for identifying CNEs between two given DNA sequences with user-defined criteria.",
"role": "user"
},
{
"content": "CNEFinder",
"role": "assistant"
}
] |
science.scicite_classification | science.scicite_classification.5695 | [
{
"content": "Classify the intent of this citation sentence by choosing from: [\"Method\", \"Background\", \"Result\"].\n\nMethod: Cites for methodology\nBackground: Cites for background information\nResult: Cites for findings\n\nCitation: In mammalian cells, cyclin D–CDK4/6 and cyclin E–CDK2 complexes are required to promote cell cycle entrance from quiescence, progression through the G1 phase and transition from G1 into S phase in response tomitogenic stimulation (Sherr, 1995; Malumbres and Barbacid, 2001; Chulu and Liu, 2009).",
"role": "user"
},
{
"content": "Background",
"role": "assistant"
}
] |
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.22276 | [
{
"content": "Classify the topics of this COVID-19 literature paragraph by selecting from these categories:\n\n- Epidemic Forecasting: Predictions about COVID-19 spread\n- Treatment: Therapeutic methods or medication\n- Prevention: Measures to prevent spread\n- Mechanism: Biological mechanisms of the virus\n- Case Report: Report of a COVID-19 case\n- Transmission: How COVID-19 is transmitted\n- Diagnosis: Diagnosing COVID-19 in individuals\n\nYour output should be a valid JSON array with topics in double quotes and separated by commas. No additional text should be included.\n\nArticle:\n\nThe rapid spread of COVID-19 global outbreak will inevitably pose great challenges for public health and medical care, especially for major concerns of public health such as HIV. In the light of the COVID-19 pandemic, UNAIDS is urging countries to remain steadfast in their HIV prevention efforts. Few studies of COVID-19 have been focused on marginalized populations, especially those with HIV infection. This study aims to review the studies and researches of COVID-19 among people living with HIV, covering the impacts of COVID-19 on their disease progression, HIV-related medical services, HIV epidemics, and measures on relieving the above-mentioned impacts, which might provide references for targeted scientific research and prevention practice of COVID-19 in HIV population, and optimizing prevention and treatment services in the context of this pandemic.",
"role": "user"
},
{
"content": "[\"Prevention\"]",
"role": "assistant"
}
] |
science.medmentions_ner | science.medmentions_ner.29 | [
{
"content": "Extract all unique entities from this biomedical abstract. Only choose entities from these types: [\"Virus\", \"MedicalDevice\", \"BodySubstance\", \"SpatialConcept\", \"BiologicFunction\", \"ResearchActivity\", \"AnatomicalStructure\", \"ProfessionalOrOccupationalGroup\", \"HealthCareActivity\", \"BiomedicalOccupationOrDiscipline\", \"Bacterium\", \"Chemical\", \"InjuryOrPoisoning\", \"IntellectualProduct\", \"Finding\", \"ClinicalAttribute\", \"Organization\", \"Eukaryote\", \"Food\", \"BodySystem\", \"PopulationGroup\"].\n\nOutput a JSON object where keys are entity types and values are lists of extracted entities. If no entities found for a type, use empty list [].\n\nAbstract: Usefulness of Embolization for Iatrogenic Dural Arteriovenous Fistula Associated with Recurrent Chronic Subdural Hematoma: A Case Report and Literature Review Refractory chronic subdural hematomas due to iatrogenic dural arteriovenous fistulas (dAVFs) are difficult to treat. We report our experience and propose a guideline on basis of a literature review for the usefulness of embolization of middle meningeal artery (MMA) for the treatment of the same. We report a case with right hemiparesis and aphasia 1 month after a fall from a bicycle. Computed tomography scan of the head showed left chronic subdural hematoma, which was evacuated by burr-hole drainage. The postoperative course was complicated by reaccumulation within short period of time. On superselective digital subtraction angiography of MMA, iatrogenic dAVF was found on left side. We embolized successfully it using n-butyl cyanoacrylate after a third irrigation. No reaccumulation found in the postoperative period or at last follow-up. We propose treatment protocol based on our experience and literature review. Refractory chronic subdural hematoma with reaccumulation within a short interval should be subjected to digital subtraction angiography of the MMA. Embolization of ipsilateral MMA is safe, effective, and a useful option for the treatment of iatrogenic dAVF and resolution of hematoma.",
"role": "user"
},
{
"content": "{\"AnatomicalStructure\": [\"middle meningeal artery\", \"dAVFs\", \"MMA\", \"dAVF\", \"Dural Arteriovenous Fistula\", \"dural arteriovenous fistulas\"], \"Bacterium\": [], \"BiologicFunction\": [\"Iatrogenic\", \"aphasia\", \"chronic subdural hematomas\", \"Chronic Subdural Hematoma\", \"hematoma\", \"chronic subdural hematoma\", \"iatrogenic\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [\"n-butyl cyanoacrylate\"], \"ClinicalAttribute\": [], \"Eukaryote\": [], \"Finding\": [\"reaccumulation\", \"fall\", \"right hemiparesis\"], \"Food\": [], \"HealthCareActivity\": [\"superselective digital subtraction angiography\", \"digital subtraction angiography\", \"embolization\", \"embolized\", \"treatment\", \"Computed tomography scan\", \"treatment protocol\", \"Embolization\", \"follow-up\", \"irrigation\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [\"Case Report\", \"literature review\", \"Literature Review\"], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [], \"SpatialConcept\": [\"ipsilateral\", \"left\", \"left side\", \"head\"], \"Virus\": []}",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.40 | [
{
"content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nThis paper presents an algorithm for tagging words whose part-of-speech properties are unknown. Unlike previous work, the algorithm categorizes word tokens in context instead of word types. The algorithm is evaluated on the Brown Corpus.\n\nCited Abstract(s):\n[1]: This paper presents a method for inducing the parts of speech of a language and part-of-speech labels for individual words from a large text corpus. Vector representations for the part-of-speech of a word are formed from entries of its near lexical neighbors. A dimensionality reduction creates a space representing the syntactic categories of unambiguous words. A neural net trained on these spatial representations classifies individual contexts of occurrence of ambiguous words. The method classifies both ambiguous and unambiguous words correctly with high accuracy.",
"role": "user"
},
{
"content": "In a previous paper [1] , we trained a neural network to disambiguate part-of-speech using context; however, no information about the word that is to be categorized was used. This scheme fails for cases like The soldiers rarely come home.'' vs. The soldiers will come home.'' where the context is identical and information about the lexical item in question ( rarely'' vs. will'') is needed in combination with context for correct classification. In this paper, we will compare two tagging algorithms, one based on classifying word types, and one based on classifying words-plus-context.",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_arxiv | science.scientific_papers_summarization_single_doc_arxiv.186326 | [
{
"content": "Write an abstract for the research paper below. Your response should only include the abstract and no additional text.\n\nArticle text:\npositron emission tomography ( pet ) [ 1 ] is currently one of the most perspective techniques in the field of medical imaging .\npet is based on the fact that the electron and positron annihilate and their mass is converted to energy in the form of two gamma quanta flying in the opposite directions .\nthe two gamma quanta registered in coincidence define a line referred to as line of response ( lor ) .\nthe image of distribution of the radionuclide is obtained from the high statistics sample of reconstructed lors . in time of flight pet ( tof - pet ) systems\n[ 2 , 3 ] , the applied detectors measure the difference in the arrival time of the two gamma rays which enables to shorten significantly a range along the lor used for the reconstruction of the image .\ncurrently all commercial pet devices use inorganic scintillator materials , usually lso or lyso crystals , as radiation detectors .\nthese are characterized by relatively long rise- and decay times , of the order of tens of nanoseconds .\nthe j - pet collaboration investigates a possibility of construction of a pet scanner from plastic scintillators which would allow for simultaneous imaging of the whole human body .\nthe j - pet chamber is built out from long strips forming the cylinder [ 4 , 5 ] .\nlight signals from each strip are converted into electrical signals by two photomultipliers ( pm ) placed at opposite edges .\nit should be noted that the better the time resolutions of the detection system the better is the quality of a reconstructed images . in this paper\nwe will investigate the tof resolution of a novel j - pet scanner and we will show a simple method to improve the results by applying the compressive sensing theory . in the following\nwe define the time resolution and present shortly the method of signal normalization based on compressive sensing theory .\nthen we describe an experimental setup used for signal registration and present results of improving tof resolution in 30 cm long plastic scintillator strip , read out on both sides by the hamamatsu r4998 photomultipliers .\nsignals from the photomultipliers were sampled in 50 ps steps using the lecroy signal data analyzer 6000a .\nin the following we will be interested in determination of moment of interaction of gamma quantum with a strip . the interaction moment ( @xmath0 )\nis given by : @xmath1 where @xmath2 and @xmath3 are the arrival times to the pm1(2 ) , respectively , @xmath4 is the length of whole strip , and @xmath5 is the effective speed of the light in used scintillator . in the recent work [ 6 ] ,\nthe speed of the light in the scintillator was estimated to 12.6 cm / ns . in order to determine the resolution ( standard deviation ) of thit determination an indirect method based on the estimation of the resolution of time difference ( @xmath6 )\nwill be provided .\nwe assume for the sake of simplicity that @xmath5 in eq .\n1 is known exactly . 
since the time difference @xmath7\n, we have @xmath8 and the resolution of @xmath0 based on eq .\n[ thit ] may be expressed as : @xmath9 which implies that the resolution of the determination of interaction moment ( @xmath0 ) is twice better than the resolution of the time difference @xmath6 .\na necessary data to carry out the research have been acquired by a single module of the j - pet detector [ 6 ] .\nthe scheme of experimental setup is presented in fig .\nthe 30 cm long strip was connected on two sides to the photomultipliers ( pms ) .\nthe radioactive @xmath10 source was moved from the first to the second end in steps of 6 mm . at each position ,\nabout 10 000 pairs of signals from pm1 and pm2 were registered in coincidence with reference detector .\nthe signals were sampled using the scope with a probing interval of 50 ps .\nexamples of two signals registered at pm1 and pm2 are shown in fig .\n2 with blue ( red ) colors , respectively , for the case when the scintillator was irradiated at distance of 7 cm from pm2 ( 23 cm from pm1 ) . , scaledwidth=70.0% ] in the final , multi - modular devices with hundreds of photomultipliers probing with scopes will not be possible . therefore , a multi - threshold sampling method to generate samples of a pet event waveform with respect to four user - defined amplitudes was proposed .\nan electronic system for probing these signals in a voltage domain was developed and successfully tested [ 7 ] .\nbased on the signals registered via scope , we simulate a four - level measurement with sampling in the voltage domain at 50 , 100 , 200 and 300 mv , indicated by four black horizontal lines in fig .\nit should be stressed that due to the time walk effect the resolution determined when applying the lowest threshold ( 50 mv ) is better with respect to the resolution obtained at the highest level ( 300 mv ) . therefore the simplest way to define the start of each pulse\n, times @xmath2 and @xmath3 , is to use the information from registration time at the lowest amplitude level , marked with vertical dashed lines in fig .\n2 . from this one may easily estimate the resolution of @xmath6 and therefore the resolution of thit ( see eq .\n3 ) . , scaledwidth=70.0% ] since the shape and amplitude of signals are predominantly related with the hit position , further improvement of the resolution may be provided by the analysis of the full time signals . according to the theory of compressive sensing ( cs ) [ 8 , 9 ] , a signal that is sparse in some domain can be recovered based on far fewer samples than required by the nyquist sampling theorem . in recent articles\n[ 10 , 11 , 12 ] we have proposed a novel signal recovery scheme based on the compress sensing method and the statistical analysis that fits to the signal processing scenario in j - pet devices . under this theory\nonly a recovery of a sparse or compressible signals is possible . in articles\n[ 10 , 11 ] the sparse representation of signals was provided by the principal component analysis ( pca ) decomposition .\nwe will not describe all the steps of signal processing here , but we just state that the recovery of full time signals based on eight samples is very accurate . 
for further details about signal recovery scheme in the j - pet framework the interested reader\nis referred to ref .\n[ 10 , 11 , 12 ] .\nthe application of cs theory enable to take an advantage from fully sampled signals and open an area for completely new algorithms for the estimation of the values of times @xmath2 and @xmath3 ( see fig .\n2 ) . in the following we will use the recovered signals to provide the signal normalization . due to the low detection efficiency of the plastic scintillators ( and interaction of gamma quanta predominantly via compton effect ) , low number of photons reach the photomultipliers and the charge , as well as amplitude , of signals is subject to a large variations .\nhowever , the shapes of the signals are highly related with the position of the interaction . in order to improve the time resolution , a method of signals normalization\nis proposed which permits to decrease the smearing of signals charge .\nthe procedure of signal normalization is as follows .\nconsider @xmath11 data sets representing charges of signals at pm1 and pm2 gathered for @xmath11 positions .\nthe mean values of the charges at positions along the strip ( @xmath12 ) at pm1 , pm2 will be denoted by @xmath13 and @xmath14 , respectively . furthermore , the standard deviation of charges at each position along the strip ( @xmath12 ) at pm1 , pm2 will be denoted by @xmath15 and @xmath16 , respectively .\nsuppose that new pair of signals @xmath17 and @xmath18 have been recovered based on time samples registered at four amplitudes levels at pm1 and pm2 , respectively .\nthe charges of the signals , @xmath19 and @xmath20 , are calculated as an integrations of the @xmath17 and @xmath18 functions , respectively .\nthe proposed normalization procedure qualifies a new measurement , represented by @xmath19 and @xmath20 , to the @xmath21 position : @xmath22 next , the recovered signals @xmath17 and @xmath18 are normalized according to the formula : @xmath23 where @xmath24 and @xmath25 denote the normalized signals .\nin the first step of the analysis the charge distributions of signals were investigated .\nexperimental results based on the signals registered along the scintillator strip are presented in fig .\n3 . mean values of charges at pm1(2 ) are marked with solid blue ( red ) curves .\nas expected , the curves are symmetrical with respect to the center of the scintillator strip ( position of 15 cm ) .\nthe distributions of @xmath26+/-@xmath27 and @xmath28+/- @xmath29 at pm1(2 ) along the strip are marked with dashed blue ( red ) curves . from fig .\n3 one may observe that @xmath27 and @xmath29 have the same trend as @xmath26 and @xmath28 , respectively , and are in the range from about 6 to 20 pc .\n, scaledwidth=70.0% ] figure 4 shows an example of the normalization of signals registered at pm1 at fixed position 5 cm from pm1 .\nthe three corresponding signals registered at second photomultiplier ( pm2 ) are not shown here , but were used during the normalization process to estimate the position of irradiation ( see eq .\nthe left part of fig .\n4 shows a three randomly selected raw signals registered via scope .\nthe right part of fig .\n4 presents the same signals after the normalization procedure provided according to the description in sec .\n. the same colors of the signals on the left and right part in fig . 4 indicate a corresponding pair of signals before and after normalization . 
as it is seen from left part of fig .\n4 the shapes of the three signals are similar but the charges differ .\nhowever , an estimated positions according to eq . 4 were found to be very close , and therefore the charges of the normalized signals were also very similar ( see right part of fig .\n4 ) . [ cols=\"^,^ \" , ] the normalization method was verified using signals from all the irradiation positions .\neach pair of signals was recovered via compressed sensing method based on the information from four amplitude levels ( see fig .\n2 ) and next normalized according to the description in sec . 2.4 . in order to compare the time resolutions before and after normalization process ,\nthe normalized signals , @xmath24 and @xmath25 , were sampled in the voltage domain at four levels from 50 to 300 mv ( see fig .\n2 ) . for each position and at each level\nthe distribution of time difference was calculated . in fig .\n5 and 6 the resulting resolutions ( @xmath6 ) are presented as a function of irradiated position , determined when applying the lowest threshold ( 50 mv ) and the highest threshold ( 300 mv ) , respectively . in fig . 5 and 6 ,\nthe resolutions obtained based on the raw and normalized signals are indicated with blue and red squares , respectively .\nas expected , due to the time walk effect the resolution determined when applying threshold at 50 mv is better with respect to the resolution obtained at 300 mv\n. however , the influence of time walk effect is more visible in the case of raw signals . from fig . 5 and 6 one can infer that the time resolution is almost independent of the position of irradiation .\nan average resolution of the time difference along the strip at the lowest tested amplitude level ( fig .\n5 ) was determined to be 172 ps and 160 ps for the raw and normalized signals , respectively .\nthis corresponds to the improvement of the resolution of the moment of the interaction from 86 ps to about 80 ps .\n, scaledwidth=70.0% ] , scaledwidth=70.0% ]\nin this paper the concept of signal normalization in a novel j - pet scanner was introduced .\nj - pet device is based on plastic scintillators and therefore is a promising solution in view of the tof resolution . in a related works [ 10 , 11 ] it was shown that compressive sensing theory can be successfully applied to the problem of signal recovery in a j - pet scanner .\nthe information from fully recovered signals was utilized in order to provide the normalization of the signals .\nit was shown that with fully recovered signals a better time resolution of j - pet scanner is achieved ; the resolution of the moment of the interaction was improved from about 86 ps to 80 ps .\nit should be stressed that different approaches for utilizing the recovered information from compressive sensing theory may be considered and the studies are in progress .\nwe acknowledge technical and administrative support of t. gucwa - ry , a. heczko , m. kajetanowicz , g. konopka - cupia , j. majewski , w. migda , a. misiak , and the financial support by the polish national center for development and research through grant innotech - k1/in1/64/159174/ncbr/12 , the foundation for polish science through mpd programme , the eu and mshe grant no .\npoig.02.03.00 - 161 00 - 013/09 , doctus - the lesser poland phd scholarship fund .\n99 j. l. humm et al . , from pet detectors to pet scanners eur .\nimaging 30 ( 2003 ) 1574 j. s. karp et al .\n, benefit of time - of - flight in pet : experimental and clinical results j. nucl . 
med .\n49 ( 2008 ) 462 m. conti , state of the art and challenges of time - of - flight pet phys . med . 25 ( 2009 ) 1 p. moskal et al . , novel detector systems for the positron emission tomography , bio - algorithms and med - systems 7 ( 2011 ) 73 ; [ arxiv:1305.5187 [ physics.med-ph ] ] .\np. moskal et al . , a novel tof - pet detector based on organic scintillators radiotheraphy and oncology 110 ( 2014 ) s69 .\np. moskal et al .\n, nuclear instruments and methods in physics research section a 764 ( 2014 ) 317 .\nm. palka et al . , a novel method based solely on field programmable gate array ( fpga ) units enabling measurement of time and charge of analog signals in positron emission tomography ( pet ) bio - algorithms and med - systems 10 ( 2014 ) 41 ; [ arxiv:1311.6127 [ physics.ins-det ] ] .\ne. candes , j. romberg , t. tao , ieee transaction on information theory 52 ( 2006 ) 489 .\nd. donoho , ieee transaction on information theory 52 ( 2006 ) 1289 .\nl. raczyski et al . , acta phys .\nb proceed .\nsuppl . 6 ( 2013 ) 1121 , [ arxiv:1310.1612 [ physics.med-ph ] ] .\nl. raczyski et al . , nuclear instruments and methods in physics research section a , 786 ( 2015 ) 105 . l. raczyski et al . , nuclear instruments and methods in physics research section a , 764 ( 2014 ) 186 .",
"role": "user"
},
{
"content": "nowadays , in positron emission tomography ( pet ) systems , a time of flight information is used to improve the image reconstruction process . in time of flight pet ( tof - pet ) , fast detectors are able to measure the difference in the arrival time of the two gamma rays , with the precision enabling to shorten significantly a range along the line - of - response ( lor ) where the annihilation occurred . in the new concept , called j - pet scanner , gamma rays are detected in plastic scintillators . in a single strip of j - pet system , time values are obtained by probing signals in the amplitude domain . owing to compress sensing theory , information about the shape and amplitude of the signals is recovered . in this paper \n we demonstrate that based on the acquired signals parameters , a better signal normalization may be provided in order to improve the tof resolution . \n the procedure was tested using large sample of data registered by a dedicated detection setup enabling sampling of signals with 50 ps intervals . experimental setup provided irradiation of a chosen position in the plastic scintillator strip with annihilation gamma quanta . ` \n compressed sensing ` , ` positron emission tomography ` , ` time - of - flight `",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.49140 | [
{
"content": "Write an abstract for the research paper below.\n\nPaper text:\nzinc sulfate is evaluated in cases of pneumonia . there are reports indicating its effectivenss in improving pneumonia .\nother studies did not find efficacy of zinc sulfate in recovering pneumonia . to elucidate the effect of zinc sulfate in acute lower respiratory tract infection , we arranged this study .\nthis was a double blind pilot study on 50 patients aged 2 to 23 months .\nacute bronchiolitis was diagnosed clinically and radiologically as first episode of wheezing in previously healthy infant .\npatients were randomly divided into two groups : case group took oral zinc sulfate 1% at a rate of 1cc per kg for infants under one year old and 10cc ( 20 mg of elemental zinc ) for patients older than one year old .\ninclusion criteria were : age 2 months to 23 months and first episode of wheezing attack .\nexclusion criteria were : previous episodes of wheezing , congenital heart disease , severe malnutrition , taking zinc sulfate in the last month prior to admission and pneumonia .\nclinical manifestations including tachypnea defined as respiratory rate above 50 per minute at rest in infants below 1 year old and respiratory rate above 40 per minute at rest in children above 1 year old , wheezing defined clinically as high pitched , musical inspiratory lung sound , cyanosis defined as blue color of mucous membranes and skin , subcostal and intercostal retraction defined as skin inward movement between or below ribs during inspiration , nasal flaring defined as nasal openings enlargement during inspiration , and fever defined as axillary temperature above 37c , were evaluated at admission , 24 , 48 and 72 hours after begin of treatment .\nrecovery from acute bronchiolitis was considered when tachypnea , subcostal and intercostal retractions , wheezing or cyanosis were completely improved .\ndata was analyzed with spss version 16.0 . for quantitative variables t - test and for\nthere were no significant differences in clinical features regarding cyanosis , tachypnea , intercostal and subcostal retractions , and wheezing in the 2 groups before starting therapy ( p>0.05 ) .\nmean age in case group was 168.3108.6 days and in control group 169.1690.39 days ( p=0.9 ) .\ncase group consisted of 18 males and 7 females and control group had 15 males and 10 females ( p=0.5 ) .\n20 patients were fed with breast milk and 5 patients with breast milk and formula or other feedings . in control group\nbaseline data of case and control groups clinical findings before treating and 24 , 48 , 72 hours after treating is shown in table 2 .\nfig 1 shows that 24 hours after treating patients in control group showed complete recovery of tachypnea , cyanosis , intercostal and subcostal retractions and wheezing in contrary to patients of case group ( p=0.04 ) .\nbut there was no statistically significant differences in recovery of respiratory distress in days 2 and 3 in the two groups ( p=0.9 for each one ) . 
\n \nclinical response of tachypnea , cyanosis , wheezing , intercostal and subcostal retractions in the 2 groups after starting treatment ( clinical response was considered when all of above clinical features were disappeared ) \n comparison of clinical manifestations between the two groups at admission and after starting therapy\nthis study showed that zinc sulfate had no effect in improving clinical manifestations of acute bronchiolitis .\nbrooks et al studied effect of zinc on children less than 2 years old who suffered from pneumonia .\npatients were divided into 2 groups : case group took 20 mg of elemental zinc and control group received placebo during hospitalization . in case group improvement of severity of disease including tachypnea , chest retraction , and hypoxia was accelerated .\nmahalanabis and co - workers reported that severity of disease was decreased in those patients who took zinc .\nthis may be attributed to the role of zinc in reducing inflammation of lower respiratory tract resulting in improvement of respiratory distress .\nit is suggested that zinc through its antioxidant effect may improve symptoms of respiratory tract infection . in another study by bose et al on patients aged 2 months to 23 months with severe pneumonia , case group took zinc at a rate of 20 mg daily and\nit was detected that no significant differences were shown in severity of disease including respiratory rate , chest retractions , feeding disturbances and also in duration of hospital stay . also according to chang et al zinc had no significant benefit in clinical recovery with regard to decrement of fever duration , respiratory rate or hospital stay in cases aged less than 11 years with acute lower respiratory tract infection .\nthese findings may be due to serum zinc level of both groups . if the rate of serum zinc is within normal range before treating , it is possible that zinc administration does not improve clinical manifestations and hospitalization .\nunfortunate - ly we had not evaluated serum zinc level of patients at admission , so that the initial serum zinc level of our case group might have been normal .\nwe do not have any explanation why respiratory distress symptoms were improved significantly 24 hours after therapy in the control group .\nthe lack of evaluation of serum zinc level in both groups before starting treatment can be seen as a limitation of our study .",
"role": "user"
},
{
"content": "objectiveevaluating the effect of zinc sulfate in improving the clinical manifestations of acute bronchiolitis in children younger than 2 years.methodsthis was a double blind pilot trial on 50 patients aged 2 to 23 months at ghaem and dr . \n sheikh hospitals in mashhad from january 2008 to march 2009 . \n patients were randomly divided into two groups : a case group received oral zinc sulfate and to the control group was given placebo.findingsmean age of case group was 168.0108.6 days and control group 169.290.4 days ( p=0.98 ) with male predominance in both groups . at \n first there was no statistically significant difference between the two groups in reducing the symptoms . \n but 24 hours after treating , improvement of some important manifestations including tachypnea , subcostal and intercostal retraction , wheezing and cyanosis revealed statistically significant difference in control group in comparison with case group ( p=0.04).conclusionzinc sulfate has no benefit in improving clinical manifestations of acute bronchiolitis .",
"role": "assistant"
}
] |
science.chemsum_single_document_summarization | science.chemsum_single_document_summarization.17718 | [
{
"content": "Write an abstract for this chemistry paper.\n\nTitle:\nAsymmetric Hydrogenation of Unfunctionalized Tetrasubstituted Acyclic Olefins\n\nPaper sections:\nIntroduction\n<p>Over the past decades, the asymmetric hydrogenation of C=C, C=O, and C=N double bonds has matured to become one of the premier methods to install stereocenters in organic molecules, with several industrial processes relying on its chemical prowess.[1] Moreover, the awarding of half of the 2001 Nobel Prize in Chemistry for asymmetric hydrogenation is testament to its transformational impact on organic synthesis.[2] Among the most challenging substrates to date are unfunctionalized olefins, as they lack both nearby heteroatoms acting as transient coordinating groups to the metal center and polarity effects activating the double bond.[3] Whereas the reduction of 1,1-disubstituted and trisubstituted alkenes affords products with a single stereocenter, the asymmetric hydrogenation of tetrasubstituted olefins can generate two vicinal stereocenters and thus give rise to products of higher stereochemical complexity. Owing to the aforementioned reasons and the challenges associated with their steric encumbrance, unfunctionalized tetrasubstituted acyclic olefins arguably represent the most challenging substrate class for asymmetric hydrogenation.[4,5]</p><p>After an initial report by Buchwald on the asymmetric hydrogenation of unfunctionalized tetrasubstituted olefins using a highly electrophilic but sensitive ansa-zirconocene catalyst,[6] Pfaltz's application of iridium/phosphinyl oxazoline (PHOX) complexes[7] paved the way for the development of several related catalysts.[8,9] In all cases, the investigated substrates were restricted to simple cyclic 2,3-disubstituted indenes, 3,4-disubstituted 1,2-dihydronaphthalenes and, if acyclic, trimethylstyrenes (Figure 1). In most cases, the iridium catalysts show variable enantioselectivity and little functional group tolerance. Thus, while considerable progress has been made for functionalized analogues,[5,10] the asymmetric hydrogenation of unfunctionalized tetrasubstituted acyclic olefins has remained an open challenge.</p><p>Recently, we disclosed efficient methods for the stereoselective synthesis of tetrasubstituted, acyclic olefins,[11] whose enantioselective reduction would give rise to intriguing structural motifs inaccessible with state-of-the-art synthetic methods. In this article, we report the first generally applicable, highly enantio- and diastereoselective hydrogenation of such unfunctionalized tetrasubstituted acyclic olefins.</p>\n\nResults and Discussion\n<p>(E)-2,3-Diphenylbut-2-ene ((E)-2a) was chosen as a representative starting point for our study (Figure 2). An initial parallel screening of a subset of 34 iridium catalysts in our catalyst library (see Figure S1 and Table S1 in the Supporting Information) led to rapid hit identification. The best-performing PHOX-type complex (S)-1a[7a] afforded the desired product (2R,3R)-3a in 80% conversion and a diastereomeric ratio (dr) of 98:2 under relatively mild conditions (2 mol% catalyst, 50 bar H2, PhCl, 30°C, 24 h), albeit with a poor enantiomeric ratio (er) of 80:20. 
The modular synthetic approach to catalysts bearing such phosphinomethyl-oxazoline ligands allowed for efficient structure optimization and a secondary serial screening of 15 variants (see Figure S2 and Table S2 in the Supporting Information) identified (R)-1o as the optimal catalyst, now affording the desired product (2S,3S)-3a with >99% conversion, 98:2 dr and substantially improved enantioselectivity (98:2 er).</p><p>Catalyst (R)-1o proved highly successful in the hydrogenation of a broad range of substrates (Figure 3a). Both electron-donating and electron-withdrawing substituents could be incorporated on the (E)-1,2-dimethyl-1,2-diaryl olefin core ((E)-2a–f,h). Only the most electron-deficient alkene (E)-2g bearing two trifluoromethyl groups on the aryl substituents could not be hydrogenated successfully. The enantiomeric ratio of the major diastereomer remained high (>99:1–97:3 er), whereas the minor diastereomer showed low enantiomeric purity throughout (63:37–50:50 er). Crucially, several important functional groups such as Cl (2i), TMS (2j), Bpin (2k), NHCOCF3 (2l), OAc (2m), CO2Me (2n), OTIPS (2o) and OBn (2p and 2q) were compatible and did not affect the efficacy of this transformation.</p><p>The substrate scope is not restricted to the (E)-1,2-dimethyl-1,2-diaryl substructure: several other (E)-1,2-dialkyl-1,2-diaryl olefins ((E)-2r–y), but also 1,1-dialkyl-2,2-diaryl olefins (2z and 2aa) and very challenging alkyl-triaryl olefins (2ae–ai) were all hydrogenated with excellent yield, er, and dr. Importantly, catalyst (R)-1o provided optimal results for all tested substrates, thus obviating the need to build up large catalyst libraries often required for other iridium-based catalyst systems.[7a,8a] Low enantioselectivity was observed only for extremely challenging substrates, that is, in which the Re- and Si-faces are differentiated only by the electronic properties of the aromatic substituents ((Z)-2c and 2ac) or the length of an alkyl substituent (2ab) as well as for trialkyl-aryl substrate 2ad. In addition, both enantiomers of the catalyst are readily available. Thus, the application of (R)-1o and (S)-1o in the asymmetric hydrogenation of 1,1-dialkyl-2,2-diaryl olefins (E)-2aj and (Z)-2aj allowed access to all four stereoisomers of the product 3aj with outstanding stereoselectivities (Figure 3b).</p><p>The reaction was preparatively useful and readily scalable: the asymmetric hydrogenation of Bpin-containing olefin (E)-2k occurred with virtually the same efficiency on a gram scale (Figure 3c). This allowed for the preparation of (2S,3S)-3k and its use in different post-hydrogenation chemical modifications of interest to medicinal chemists,[13] highlighting the utility of this asymmetric hydrogenation protocol to introduce vicinal stereocenters in organic synthesis.</p><p>Although catalysts similar to (R)-1o, for example catalyst (S)-1j bearing a phenyl group instead of the 3,5-bis-tert-butylphenyl motif, have previously been reported by the Pfaltz group,[7a] the introduction of two tert-butyl groups has a substantial effect on the activity and selectivity of the catalyst, which is particularly pronounced for challenging substrates. For example, the hydrogenation of (Z)-2aa with (R)-1o occurs in 98% yield with 95:5 dr and 99:1 er. In contrast, its phenyl analogue (S)-1j achieves only 17% conversion with 91:9 dr and 97:3 er under otherwise identical conditions. 
Furthermore, employing analogue (S)-1c bearing diphenylphosphine and isopropyl oxazoline donors failed to give any of the desired product, whereas it was the catalyst of choice for cyclic tetrasubstituted olefins and trimethylstyrenes (see Table S3 in the Supporting Information for additional examples). These results clearly highlight the challenges associated with the hydrogenation of acyclic, tetrasubstituted olefins and the substantial improvement achieved by introducing a 3,5-bis-tert-butylphenyl motif on the oxazoline donor.</p><p>To shed light on the origin of diastereoselectivity, we conducted the hydrogenation of (E)-2d under D2 atmosphere and observed incomplete deuterium incorporation at the benzylic positions as well as partial deuteration of the formerly allylic positions, indicating the possibility of a competing β-hydride elimination/reinsertion pathway (Figure 4a). This observation is particularly relevant for the formation of the minor diastereomer, in which only 55% D incorporation was observed at both benzylic positions. Although only a small kinetic isotope effect (KIE) was observed when the reaction was performed under D2 vs. H2 (kH/kD = 1.3), perdeuteration of the methyl groups in (E)-2a led to a considerable KIE of 2.0 in a competition experiment (Figure 4b). The increased KIE is potentially caused by the multiplicative effect of deuterium atoms,[14] underlining the non-innocence of the allylic hydrogens in the catalytic cycle. However, hydrogenation of the isomerized olefins 4a favored the formation of meso-3a, thus showing that non-metal-bound 4a is not an intermediate in the main catalytic cycle (Figure 4c).</p><p>On the basis of previous studies on Ir-catalyzed hydrogenations, our experimental observations, and DFT calculations for the asymmetric hydrogenation of model substrate (E)-2a with catalyst (R)-1o, we propose a plausible mechanistic pathway (Figure 5) with a main IrIII/IrV catalytic cycle in which the resting state of the catalyst is the IrIII dihydride species Ir-I.[15,16] Endergonic coordination of (E)-2a to Ir-I leads to the formation of Ir-II. Subsequent endergonic coordination of Ir-II and H2, followed by simultaneous hydride transfer (migratory insertion) and oxidative addition of H2 generates IrV intermediate Ir-III. The enantioselectivity is dictated by the relative barriers of the hydride transfer/oxidative addition step in accordance with the Curtin–Hammett principle. For this particular olefin–catalyst pairing, the experimental er (98:2) is in excellent agreement with the calculated relative energy difference of 2.3 kcal mol−1 (97:3 er). In a previous report, Bayer et al. suggested that the stereochemical outcome of a similar reaction is influenced by a key C–H/π dispersive interaction between the substrate aryl ring and the oxazoline moiety.[15e] Comparing TS1 and TS1′ (Figure 5), it is clear that there is a much more efficient overlap of the aryl ring with the oxazoline C–H moiety in TS1. Furthermore, to accommodate this interaction in TS1′, the 3,5-bis-tert-butylphenyl group must rotate away from the substrate, which causes a distortion in the ligand backbone due primarily to steric clash with the adjacent cyclohexane substituent. 
These powerful interactions are particularly prominent in the hydrogenation of tetrasubstituted olefins where the substrate cannot orient itself to place a vinyl hydrogen proximal to the oxazoline ring.</p><p>Reductive elimination of intermediate Ir-III affords the hydrogenated product (2S,3S)-3a and regenerates the IrIII dihydride catalyst Ir-I. Intermediate Ir-III may also undergo a competitive β-hydride elimination and simultaneous reductive elimination of H2 to give intermediate Ir-IV. The calculated energy barrier corresponds to a 98:2 preference for reductive elimination over the β-hydride elimination. Complex Ir-IV undergoes either H2 coordination and oxidative addition/migratory insertion to produce Ir-V, which reductively eliminates leading to formation of product (2S,3S)-3a, or exchange of Re- and Si-bound olefin complexes by decomplexation/recomplexation to arrive at Ir-IV′.[15c] From Ir-IV′, a related sequence produces the minor diastereomer meso-3a.</p><p>Interestingly, performing the reaction with (E)-2a under 2 bar of H2 (instead of 50 bar) still leads to full conversion and high er (99:1) but with significantly eroded dr (78:22). Calculations on an alternative IrIII/IrI catalytic cycle (see the Supporting Information),[18] which was only +1.1 kcal mol−1 higher in energy than the proposed IrIII/IrV cycle, exhibited high er (≫99:1) but an attenuated difference in energy between reductive elimination and β-hydride elimination (0.5 kcal mol−1). Both mechanisms include intermediate Ir-II (Figure 6). However, while H2 coordination is required in the IrIII/IrV cycle, this intermediate undergoes direct hydride transfer (migratory insertion) in the IrIII/IrI cycle to give IrIII intermediate Ir-VI, which can reductively eliminate to give product and IrI species Ir-VII. Given these results, it is likely that a H2 pressure-dependent shift in mechanism from IrIII/IrV at high pressure to IrIII/IrI at low pressure is operative, accounting for the reduced dr at lower pressure.</p>\n\nConclusion\n<p>Asymmetric hydrogenation of olefins has been a heavily studied area for decades. However, despite recent advances, a general method for highly selective asymmetric hydrogenation of unfunctionalized tetrasubstituted acyclic olefins has remained elusive. In this report, we have developed a catalyst which allows for the first time to successfully hydrogenate these challenging substrates and we have also demonstrated its generality, scalability, and selectivity. We believe that this advance will have an important impact in catalysis and synthesis as it allows rapid access to complex structural motifs inaccessible with other state-of-the-art methods.</p>\n\nExperimental Section\n<p>A representative procedure for the asymmetric hydrogenation of unfunctionalized tetrasubstituted acyclic olefins is as follows. In a glove box, (E)-2a (20.8 mg, 100 μmol) and (R)-1o (3.3 mg, 2.00 μmol) are added to a 30 mL autoclave with a glass inset. PhCl (1.0 mL, degassed under Ar) is added to the mixture and the autoclave is sealed and removed from the glove box. The autoclave is connected to a H2 line, flushed with H2 (3 × 20 bar), and sealed under H2 (50 bar). The autoclave is placed in a preheated metal block and the reaction is stirred at 30°C for 24 h. The pressure is released and the solvent is removed under reduced pressure. The crude reaction mixture is dissolved in TBME (1 mL) and filtered through a pad of silica gel with TBME (5 mL). 
The solvent is removed under reduced pressure and the product is analyzed by 1H NMR, GC, and chiral HPLC. Full experimental details and characterization of the compounds are given in the Supporting Information.</p>",
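As an aside, the Curtin–Hammett analysis quoted in the Results (a calculated relative barrier of 2.3 kcal mol−1 corresponding to roughly 97:3 er) can be checked with a few lines of arithmetic. The sketch below is illustrative only; the 30°C reaction temperature and simple Boltzmann weighting of the competing transition states are the only assumptions.

```python
# Back-of-the-envelope check: convert a relative transition-state barrier into an
# enantiomeric ratio via Boltzmann weighting (Curtin-Hammett). Assumes T = 303.15 K
# (the 30 C hydrogenation temperature) and the gas constant in kcal units.
import math

R = 1.987e-3  # gas constant, kcal mol^-1 K^-1

def er_from_ddg(ddg_kcal, temp_k=303.15):
    """Return (major, minor) percentages from a ddG between diastereomeric TSs."""
    ratio = math.exp(ddg_kcal / (R * temp_k))  # rate ratio k_major / k_minor
    major = 100.0 * ratio / (1.0 + ratio)
    return major, 100.0 - major

major, minor = er_from_ddg(2.3)
print(f"2.3 kcal/mol -> er = {major:.1f}:{minor:.1f}")  # ~97.9:2.1, i.e. ~98:2
```

This reproduces the reported 97:3–98:2 range to within rounding of the computed energies.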
"role": "user"
},
{
"content": "Asymmetric hydrogenation has evolved as one of the most powerful tools to construct stereocenters. However, the asymmetric hydrogenation of unfunctionalized tetrasubstituted acyclic olefins remains the pinnacle of asymmetric synthesis and an unsolved challenge. We report herein the discovery of an iridium catalyst for the first, generally applicable, highly enantio- and diastereoselective hydrogenation of such olefins and the mechanistic insights of the reaction. The power of this chemistry is demonstrated by the successful hydrogenation of a wide variety of electronically and sterically diverse olefins in excellent yield and high enantio- and diastereoselectivity.",
"role": "assistant"
}
] |
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.52616 | [
{
"content": "Write a literature review chapter based on these paper abstracts.\nLiterature Review Title: A Review of Virtual Sensing Algorithms for Active Noise Control \nChapter Title: The remote moving microphone technique \n1. Abstract of Cited Paper (BIB001): Local active noise control systems generate a zone of quiet at the physical error sensor using secondary sources to cancel the acoustic pressure at the sensor location. The resulting zone of quiet is generally limited in size and as such, placement of the physical error sensor at the location of desired attenuation is required, which is often inconvenient. Virtual acoustic sensors overcome this by projecting the zone of quiet away from the physical error sensor to a remote location. While virtual acoustic sensors have shown potential to improve the performance of local active noise control systems, it is, however, likely that the desired location of maximum attenuation is not spatially fixed. The work described here presents a virtual sensing method capable of tracking a desired location in a modally dense three‐dimensional sound field. The developed algorithm has been experimentally verified in a three‐dimensional enclosure and the experimental results are presented.\n\nLiterature Review:",
"role": "user"
},
{
"content": "The remote moving microphone technique BIB001 uses the remote microphone technique to obtain estimates of the virtual error signals at the moving virtual locations. In this section it is assumed that there are L secondary sources, M p physical sensors and M v moving virtual sensors. The time-variant locations of the M v moving virtual microphones are contained in matrix x v (n) of size 3 × M v , defined as where each of the moving virtual locations, x vmv (n), are defined by three spatial co-ordinates with respect to a reference frame and are given by It is assumed here that the M v moving virtual locations, x v (n), are known at every time step. In practice, the moving virtual locations could be measured using a 3D head tracking system based on camera vision or on ultrasonic position sensing . The remote moving microphone technique is used to compute estimates of the virtual error signals, e v (n), at the moving virtual locations, x v (n). A block diagram of the remote moving virtual sensing algorithm is given in Fig. 10 . In this moving virtual sensing algorithm, the remote microphone technique is first used to obtain estimates of the virtual error signals,ẽ v (n), atM v spatially fixed virtual microphone locations,x v . It is assumed here that the moving virtual locations, x v (n), are confined to a threedimensional region and that the spatially fixed virtual microphone locations,x v , are therefore located within this region. The vector of theM v spatially fixed virtual microphone locations is given bȳ where each of the spatially fixed virtual locations,x vmv , are defined by three spatial co-ordinates with respect to a reference frame and are given bȳ The virtual error signals,ẽ v (n), at the spatially fixed virtual locations,x v , are calculated using the remote microphone technique as described in Section 2.2. The remote microphone technique requires a preliminary identification stage in which the secondary transfer matrices,G pu of size M p × L andG vu of sizeM v × L, are modelled as matrices of FIR or IIR filters. TheM v × M p sized matrix of primary transfer functions at the spatially fixed virtual locations from the physical locations,M, is also estimated as a matrix of FIR or IIR filters during this preliminary identification stage. Estimates of the primary disturbances,d p (n), at the M p physical error sensors are first calculated usingd Estimates of the primary disturbances,d p (n), at the M p physical error sensors are first calculated usingd Next, estimates of the primary disturbances,d v (n), at the spatially fixed virtual locations,x v , are obtained usingd Estimates,ẽ v (n), of the total virtual error signals at the spatially fixed virtual locations,x v , are calculated asẽ As shown in Fig. 10 , estimates,ẽ v (n), of the virtual error signals at the moving virtual locations, x v (n), are now obtained by spatially interpolating the virtual error signals,ẽ v (n), at the spatially fixed virtual locations,x v . The performance of the remote moving microphone technique has been experimentally investigated in an acoustic duct, at an acoustic resonance BIB001 . In the acoustic duct, the virtual microphone moved sinusoidally between a virtual distance of v = 0.02 m and 0.12 m with a period of 10 s. Minimising the moving virtual error signal using a feedforward control approach achieved greater than 34 dB of attenuation at the moving virtual location. 
This is 20 dB of attenuation greater than that achieved by minimising the error signal at a fixed physical microphone at v = 0 m or a fixed virtual microphone at v = 0.02 m. Moreau et al. BIB002 then extended the remote moving virtual microphone technique to generate a virtual microphone capable of tracking the ear of a rotating artificial head inside a three-dimensional cavity. For Next, estimates of the primary disturbances,d v (n), at the spatially fixed virtual locations,x v , are obtained usingd Estimates,ẽ v (n), of the total virtual error signals at the spatially fixed virtual locations,x v , are calculated asẽ As shown in Fig. 10 , estimates,ẽ v (n), of the virtual error signals at the moving virtual locations, x v (n), are now obtained by spatially interpolating the virtual error signals,ẽ v (n), at the spatially fixed virtual locations,x v . The performance of the remote moving microphone technique has been experimentally investigated in an acoustic duct, at an acoustic resonance BIB001 . In the acoustic duct, the virtual microphone moved sinusoidally between a virtual distance of v = 0.02 m and 0.12 m with a period of 10 s. Minimising the moving virtual error signal using a feedforward control approach achieved greater than 34 dB of attenuation at the moving virtual location. This is 20 dB of attenuation greater than that achieved by minimising the error signal at a fixed physical microphone at v = 0 m or a fixed virtual microphone at v = 0.02 m. Moreau et al. BIB002 then extended the remote moving virtual microphone technique to generate a virtual microphone capable of tracking the ear of a rotating artificial head inside a three-dimensional cavity. For ±45 • head rotations with a period of 10 s, between 30 dB and 40 dB of attenuation was experimentally achieved at the ear of the rotating artificial head at an acoustic resonance.",
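The estimation chain reviewed above reduces to three matrix operations followed by a spatial interpolation; the sketch below restates it for a single time step. All array shapes, the single-tap stand-ins for the FIR/IIR filter matrices, and the one-dimensional linear interpolation are simplifying assumptions for illustration, not details taken from BIB001 or BIB002.

```python
# Sketch of the remote moving microphone estimation for one time step.
# Assumed shapes: e_p (M_p,) physical error signals, u (L,) secondary source signals,
# G_pu (M_p, L) and G_vu (Mv_fixed, L) identified secondary-path gains (single-tap
# stand-ins for the FIR/IIR filter matrices), M (Mv_fixed, M_p) primary transfer matrix.
import numpy as np

def remote_moving_estimate(e_p, u, G_pu, G_vu, M, x_fixed, x_moving):
    d_p_hat = e_p - G_pu @ u        # primary disturbance at the physical sensors
    d_v_hat = M @ d_p_hat           # primary disturbance at the fixed virtual points
    e_v_fixed = d_v_hat + G_vu @ u  # total error at the fixed virtual points
    # Spatial interpolation to the moving virtual location (linear and 1-D here
    # for brevity; the cited work interpolates in three dimensions).
    return np.interp(x_moving, x_fixed, e_v_fixed)

# Toy usage with random but consistently shaped data:
rng = np.random.default_rng(0)
M_p, L, Mv_fixed = 4, 2, 6
x_fixed = np.linspace(0.02, 0.12, Mv_fixed)  # fixed virtual positions (m)
e_v = remote_moving_estimate(rng.normal(size=M_p), rng.normal(size=L),
                             rng.normal(size=(M_p, L)), rng.normal(size=(Mv_fixed, L)),
                             rng.normal(size=(Mv_fixed, M_p)), x_fixed, x_moving=0.07)
print(e_v)
```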
"role": "assistant"
}
] |
science.scicite_classification | science.scicite_classification.5832 | [
{
"content": "Classify the intent of this citation sentence by choosing from: [\"Method\", \"Background\", \"Result\"].\n\nMethod: Cites for methodology\nBackground: Cites for background information\nResult: Cites for findings\n\nCitation: These data are consistent with the previously demonstrated defect of ubc9-1 mutants in the degradation of Clb2 and Clb5 (Seufert et al., 1995).",
"role": "user"
},
{
"content": "Result",
"role": "assistant"
}
] |
science.acl_arc_intent_classification | science.acl_arc_intent_classification.1514 | [
{
"content": "Classify the intent of this citation from an NLP research paper. \n\nYour answer must be one word from: [\"Background\", \"Extends\", \"Uses\", \"Motivation\", \"CompareOrContrast\", \"FutureWork\"]. No other text allowed.\n\nSection Title:\nexperiments\n\nContext before:\nAs an alternative, we rely on PubMed to retrieve an initial set of hits that we then postprocess in greater detailâthis is the standard pipeline architecture commonly employed in other question-answering systems (Voorhees and Tice 1999; Hirschman and Gaizauskas 2001). However, we do not have access to the computational resources necessary to apply knowledge extractors to the 15 million plus citations in the MEDLINE database and directly index their results. Ideally, we would like to match structured representations derived from the question with those derived from MEDLINE citations (taking into consideration other EBMrelevant factors).\n\nCitation:\nAs an alternative , we rely on PubMed to retrieve an initial set of hits that we then postprocess in greater detail -- this is the standard pipeline architecture commonly employed in other question-answering systems ( Voorhees and Tice 1999 ; Hirschman and Gaizauskas 2001 ) .\n\nContext after:\nThe architecture of our system is shown in Figure 1. The query formulator is responsible for converting a clinical question (in the form of a query frame) into a PubMed search query. Presently, these queries are already encoded in our test collection (see Section 6). PubMed returns an initial list of MEDLINE citations, which is then analyzed by our knowledge extractors (see Section 5).",
"role": "user"
},
{
"content": "CompareOrContrast",
"role": "assistant"
}
] |
science.healthver_entailment | science.healthver_entailment.2764 | [
{
"content": "Output a JSON object with \"verdict\" and \"evidence\" fields based on the claim and abstract below.\n\nClaim: Social distancing should be practiced in combination with other everyday preventive actions to reduce the spread of COVID-19, including wearing masks, avoiding touching your face with unwashed hands, and frequently washing your hands with soap and water for at least 20 seconds.\n\nAbstract:\nBackground: The Greek authorities implemented the strong social distancing measures within the first few weeks after the first confirmed case of the virus to curtail the COVID-19 growth rate.\nObjectives: To estimate the effect of the two-stage strong social distancing measures, the closure of all non-essential shopping centers and businesses on March 16 and the shelter in place orders (SIPOs) on March 23 on the COVID-19 growth rate in Greece Methods: We obtained data on COVID-19 cases in Greece from February 26th through May 4th from publicly available sources.\nAn interrupted time-series regression analysis was used to estimate the effect of the measures on the exponential growth of confirmed COVID-19 cases, controlling for the number of daily testing, and weekly fixed-effects.\nResults: The growth rate of the COVID-19 cases in the pre-policies implementation period was positive as expected (p=0.003).\nBased on the estimates of the interrupted time-series, our results indicate that the SIPO on March 23 significantly slowed the growth rate of COVID-19 in Greece (p=0.04).\nHowever, we did not find evidence on the effectiveness of standalone and partial measures such as the non-essential business closures implemented on March 16 on the COVID-19 spread reduction.\nDiscussion: The combined social distancing measures implemented by the Greek authorities within the first few weeks after the first confirmed case of the virus reduced the COVID-19 growth rate.\nThese findings provide evidence and highlight the effectiveness of these measures to flatten the curve and to slow the spread of the virus.",
"role": "user"
},
{
"content": "{\"verdict\": \"NEI\", \"evidence\": []}",
"role": "assistant"
}
] |
science.healthver_entailment | science.healthver_entailment.1157 | [
{
"content": "Output a JSON object with \"verdict\" and \"evidence\" fields based on the claim and abstract below.\n\nClaim: Depending on the material, the coronavirus can last on surfaces like countertops and doorknobs anywhere from several hours to days.\n\nAbstract:\nThe aim in this study was to assess the effectiveness of a quaternary ammonium chloride (QAC) surfactant in reducing surface staphylococcal contamination in a routinely operating medical ward occupied by patients who had tested positive for methicillin-resistant Staphylococcus aureus (MRSA).\nThe QAC being tested is an antibacterial film that is sprayed onto a surface and can remain active for up to 8 h. A field experimental study was designed with the QAC plus daily hypochlorite cleaning as the experimental group and hypochlorite cleaning alone as the control group.\nThe method of swabbing on moistened surfaces was used for sampling.\nIt was found that 83% and 77% of the bedside surfaces of MRSA-positive and MRSA-negative patients respectively were contaminated with staphylococci at 08:00 hours, and that the staphylococcal concentrations increased by 80% at 1200 h over a 4-hour period with routine ward and clinical activities.\nIrrespective of the MRSA status of the patients, high-touch surfaces around the bed-units within the studied medical ward were heavily contaminated (ranged 1 to 276 cfu/cm(2) amongst the sites with positive culture) with staphylococcal bacteria including MRSA, despite the implementation of daily hypochlorite wiping.\nHowever, the contamination rate dropped significantly from 78% to 11% after the application of the QAC polymer.\nIn the experimental group, the mean staphylococcal concentration of bedside surfaces was significantly (p < 0.0001) reduced from 4.4 ± 8.7 cfu/cm(2) at 08:00 hours to 0.07 ± 0.26 cfu/cm(2) at 12:00 hours by the QAC polymer.\nThe results of this study support the view that, in addition to hypochlorite wiping, the tested QAC surfactant is a potential environmental decontamination strategy for preventing the transmission of clinically important pathogens in medical wards.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"It was found that 83% and 77% of the bedside surfaces of MRSA-positive and MRSA-negative patients respectively were contaminated with staphylococci at 08:00 hours, and that the staphylococcal concentrations increased by 80% at 1200 h over a 4-hour period with routine ward and clinical activities.\"]}",
"role": "assistant"
}
] |
science.healthver_entailment | science.healthver_entailment.1790 | [
{
"content": "Output a JSON object with \"verdict\" and \"evidence\" fields based on the claim and abstract below.\n\nClaim: In a mad dash to discover effective treatment for the novel coronavirus, doctors and scientists are testing existing antivirals, antimalarials, monoclonal antibodies, and other medications against COVID-19\n\nAbstract:\nOBJECTIVE Coronavirus disease 2019 (COVID-19) is a fatal and fast-spreading viral infection.\nTo date, the number of COVID-19 patients worldwide has crossed over six million with over three hundred and seventy thousand deaths (according to the data from World Health Organization; updated on 2 June 2020).\nAlthough COVID-19 can be rapidly diagnosed, efficient clinical treatment of COVID-19 remains unavailable, resulting in high fatality.\nSome clinical trials have identified vitamin C (VC) as a potent compound pneumonia management.\nIn addition, glycyrrhizic acid (GA) is clinically as an anti-inflammatory medicine against pneumonia-induced inflammatory stress.\nWe hypothesized that the combination of VC and GA is a potential option for treating COVID-19.\nMETHODS The aim of this study was to determine pharmacological targets and molecular mechanisms of VC + GA treatment for COVID-19, using bioinformational network pharmacology.\nRESULTS We uncovered optimal targets, biological processes and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways of VC + GA against COVID-19.\nOur findings suggested that combinatorial VC and GA treatment for COVID-19 was associated with elevation of immunity and suppression of inflammatory stress, including activation of the T cell receptor signaling pathway, regulation of Fc gamma R-mediated phagocytosis, ErbB signaling pathway and vascular endothelial growth factor signaling pathway.\nWe also identified 17 core targets of VC + GA, which suggest as antimicrobial function.\nCONCLUSIONS For the first time, our study uncovered the pharmacological mechanism underlying combined VC and GA treatment for COVID-19.\nThese results should benefit efforts to address the most pressing problem currently facing the world.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"Although COVID-19 can be rapidly diagnosed, efficient clinical treatment of COVID-19 remains unavailable, resulting in high fatality.\"]}",
"role": "assistant"
}
] |
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.5251 | [
{
"content": "Classify the topics of this COVID-19 literature paragraph by selecting from these categories:\n\n- Epidemic Forecasting: Predictions about COVID-19 spread\n- Treatment: Therapeutic methods or medication\n- Prevention: Measures to prevent spread\n- Mechanism: Biological mechanisms of the virus\n- Case Report: Report of a COVID-19 case\n- Transmission: How COVID-19 is transmitted\n- Diagnosis: Diagnosing COVID-19 in individuals\n\nYour output should be a valid JSON array with topics in double quotes and separated by commas. No additional text should be included.\n\nArticle:\n\nCurrently, there are no treatment options available for the deadly contagious disease, coronavirus disease 2019 (COVID-19). Drug repurposing is a process of identifying new uses for approved or investigational drugs and it is considered as a very effective strategy for drug discovery as it involves less time and cost to find a therapeutic agent in comparison to the de novo drug discovery process. The present review will focus on the repurposing efficacy of the currently used drugs against COVID-19 and their mechanisms of action, pharmacokinetics, dosing, safety, and their future perspective. Relevant articles with experimental studies conducted in-silico, in-vitro, in-vivo, clinical trials in humans, case reports, and news archives were selected for the review. Number of drugs such as remdesivir, favipiravir, ribavirin, lopinavir, ritonavir, darunavir, arbidol, chloroquine, hydroxychloroquine, tocilizumab and interferons have shown inhibitory effects against the SARS-CoV2 in-vitro as well as in clinical conditions. These drugs either act through virus-related targets such as RNA genome, polypeptide packing and uptake pathways or target host-related pathways involving angiotensin-converting enzyme-2 (ACE2) receptors and inflammatory pathways. Using the basic knowledge of viral pathogenesis and pharmacodynamics of drugs as well as using computational tools, many drugs are currently in pipeline to be repurposed. In the current scenario, repositioning of the drugs could be considered the new avenue for the treatment of COVID-19.",
"role": "user"
},
{
"content": "[\"Treatment\"]",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.52525 | [
{
"content": "Write an abstract for the research paper below.\n\nPaper text:\ndyslipidemia is a leading cause of cardiovascular disease and mortality worldwide . according to data from statistics korea , the leading cause of death was cancer in 2015 , with the second and third ranks occupied by heart disease and cerebrovascular disease , respectively .\natherosclerosis is a preceding pathologic condition in the development of both heart and cerebrovascular diseases , and dyslipidemia is the greatest risk factor for atherosclerosis .\ntherefore , there is no doubt that dyslipidemia is a major risk factor for cardiovascular disease and mortality in koreans .\nthe statistics regarding the prevalence of dyslipidemia in korean adults in this article were obtained from the korean nutrition and health examination survey ( knhanes ) conducted from 2010 to 2015 .\nthe prevalence of hypercholesterolemia in koreans 30 years old and over was 19.5% in 2015 ( table 1 ) .\nhypercholesterolemia was defined as a serum total cholesterol level 240 mg / dl or the use of lipid - lowering medications .\nthe prevalence of hypercholesterolemia increased in an age - dependent manner ; in particular , it increased approximately 3-fold in women in their 50s and older .\ntrends in the prevalence of hypercholesterolemia over the last 5 years are shown in table 2 .\nmoreover , it increased from 8.0% in 2005 to 13.5% in 2010 ; it then remained steady for the following 5 years , but increased by 3.3% points in 2015 . according to data from the national health and nutrition examination surveys ( nhanes ) in the united states ,\nthe prevalence of hypercholesterolemia in adults aged 20 and over increased from 22.8% in the 1988 to 1994 surveys to 27.8% in the 2011 to 2014 surveys ( table 3 ) .\nconsidering the difference in the age of the population between the knhanes ( 30 and over ) and nhanes ( 20 and over ) , its prevalence in the united states is quite high .\nhowever , the percent of the population with high cholesterol declined from 20.8% in the 1988 to 1994 surveys to 11.9% in the 2011 to 2014 surveys .\nthe mean serum total cholesterol level also declined during the same period in the united states .\nthis means that the increasing pattern of hypercholesterolemia in united states is mainly due to an increasing percentage of the population taking lipid - lowering medications .\nin contrast , the mean serum total cholesterol level did not decline in korea during roughly the same period ( from 183.90.64 mg / dl in 1998 to 185.10.64 mg / dl in 2010 ) , even though the percentage of the population taking lipid - lowering medications increased over the past 10 years in korea ( from 2.8% in 2003 to 12.9% in 2013 ) .\nthe prevalence of hypertriglyceridemia in adults aged 30 and over was 16.8% in 2015 , and the estimated population with hypertriglyceridemia was approximately 5.8 million ( table 4 ) .\nmen had a 2-fold higher prevalence of hypertriglyceridemia than was observed in women ( 23.9% vs. 10.4% ) . among participants under 50 years of age , this difference was much larger , with a 3-fold higher prevalence in men ( 27.5% vs. 
7.9% ) .\nthis means that one out of every three or four korean men aged 30 to 49 has hypertriglyceridemia .\nalthough little change was observed in the prevalence of hypertriglyceridemia in the most recent 5 years ( from 16.6 % in 2010 to 16.8% in 2015 ) , a large increase has taken place over the last 20 years , since the prevalence of hypertriglyceridemia in 1998 was only 10.2% , and a much lower level was also observed in men ( 14.9% ) ( table 5 ) .\nsimilar trends regarding sex and age differences in hypertriglyceridemia can be seen in the nhanes .\neven though the definition for hypertriglyceridemia is different in the nhanes , where it is defined as a triglyceride level of 150 mg / dl or more , approximately one - fourth of adults aged 20 and over had hypertriglyceridemia in the 2009 to 2012 surveys .\na sex difference was seen in adults aged 20 to 39 ( 25.1% in men vs. 14.7% in women ) and 40 to 59 ( 34.9% in men vs. 23.0% women ) , but not in adults aged 60 and over . however , the prevalence of hypertriglyceridemia declined from 33.3% in the 2001 to 2004 surveys to 25.1% in the 2009 to 2012 surveys .\nthis declining pattern was seen in both sexes and all age groups , unlike what was observed in koreans .\nalthough low density lipoprotein cholesterol is the primary target for lipid control in the prevention of cardiovascular diseases , high triglycerides combined with low levels of high density lipoprotein cholesterol is a typical profile of atherogenic dyslipidemia .\nhigh triglycerides in males may occur due to alcohol consumption and unhealthy lifestyles in those age groups .\nfor example , high - risk alcohol consumption , defined as an average consumption of ≥ 7 drinks in men ( ≥ 5 drinks in women ) on ≥ 2 occasions per week , was most common in males 30 to 49 years of age , and was 4- to 5-fold more common than in women of the same age group .\nfurthermore , the proportion of high - fat diets was approximately twice as high among men as among women in respondents 30 to 49 years of age ( 15.7% vs. 8.9% ) . therefore , active screening for hypertriglyceridemia and managing hypertriglyceridemia through lifestyle modifications , including reductions in alcohol consumption , are urgent and necessary for korean men , especially those aged 30 to 49 .\nawareness of hypercholesterolemia in koreans was found to be greater among women than among men ( 62.4% vs. 51.4% ) ( table 6 ) .\nit increased with age ; the level of awareness in participants 30 to 49 years of age ( 32.1% in men and 32.6% in women ) was less than half of that observed among respondents > 65 years old ( 77.5% in men and 78.0% in women ) .\nfurthermore , the rate of treatment , which was defined as the proportion of the population taking a lipid - lowering medication ≥ 20 days a month , was only 45.5% in the population with hypercholesterolemia . due to this unacceptably low rate of treatment , the rate of control in those with hypercholesterolemia , which was defined as the proportion of the population with hypercholesterolemia with a serum total cholesterol level < 200 mg / dl , was also low , at 39.7% .\nhowever , the rate of control in the population treated with lipid - lowering medications was 84.3% , which means that when it is appropriately diagnosed and treated , hypercholesterolemia is easy to control .\ntable 7 shows the management status of hypercholesterolemia during the most recent 10 years ; the awareness rate in 2005 was only 24.0% , and the treatment rate was 17.3% . 
since then ,\nthrough the fourth to sixth stages of the knhanes , the rate of awareness and treatment has increased by 10% points at every stage .\nthus , all parameters regarding management have improved by 20% to 30% points compared with those of the third knhanes conducted in 2005 .\nthe total cardiovascular disease mortality rate declined over the 30-year period from 1983 to 2012 , but the mortality rate due to ischemic heart disease continuously increased .\nas discussed above , the prevalence of hypercholesterolemia has increased in both the united states and korea , but with different underlying patterns .\ntherefore , to reduce the burden of cardiovascular disease in korea , increasing the rate of treatment among those with hypercholesterolemia is essential .\nthe prevalence of hypercholesterolemia in koreans 30 years old and over was 19.5% in 2015 , meaning that one - fifth of adults had hypercholesterolemia .\nalthough the management of hypercholesterolemia has improved over the most recent 10 years , this improvement has not proven sufficient to reduce the burden of cardiovascular disease , so it remains necessary to improve awareness and treatment among adults aged 30 to 49 .\nfor hypertriglyceridemia , regular check - ups and active management are urgent in korean men aged 30 to 49 . for women , active screening in the perimenopausal period with the goal of preventing hypercholesterolemia and",
"role": "user"
},
{
"content": "the prevalence of hypercholesterolemia in koreans 30 years old and over was 19.5% in 2015 according to the korean nutrition and health examination survey , which means that one - fifth of adults had hypercholesterolemia . \n the prevalence of hypertriglyceridemia in adults 30 years of age and older was 16.8% in 2015 , and men had a 2-fold higher prevalence of hypertriglyceridemia than women ( 23.9% vs. 10.4% ) . \n the awareness of hypercholesterolemia in koreans was higher in women than among men ( 62.4% vs. 51.4% ) . \n it increased with age ; the level of awareness in participants 30 to 49 years of age ( 32.1% in men and 32.6% in women ) was less than half of that observed among respondents 65 years old ( 77.5% in men and 78.0% in women ) . \n regular check - ups for dyslipidemia and the active management thereof are urgent in korean men aged 30 to 49 . in women , \n the perimenopausal period is crucial for the prevention and management of metabolic syndrome , including dyslipidemia . \n overall , improvements in awareness and treatment in the age group of 30 to 49 years in both men and women remain necessary .",
"role": "assistant"
}
] |
science.chemdner_ner | science.chemdner_ner.190 | [
{
"content": "Find all [\"Chemical\"] entities in this text. Output a JSON object.\n\nText:\nRepair efficiency and PUVA therapeutic response variation in patients with vitiligo. Vitiligo is an acquired depigmentation disorder affecting 0.1% to >8.8% in Indian population. Psoralen and ultraviolet A radiation (PUVA) is a gold standard treatment for vitiligo, however, response is still empirical. In order to investigate whether drug response variation is influenced by the repair ability of PUVA treated vitiligo subjects, single cell gel electrophoresis (SCGE) for genotoxicity and serum malonaldehyde (MDA) for cytotoxicity were performed on 107 subjects (77 cases and 30 healthy controls) in South Indian population. In vitro repair ability was assessed by considering the residual damage. A significant difference was observed between the patients and controls with regard to their mean values of DNA damage and MDA levels (p<0.05). On categorization to fast and slow responders based on the time of response, patients exhibited a significant deviation in residual DNA damage, suggestive of variation with respect to DNA repair efficiency (p<0.05). This is the first study to our knowledge with respect to PUVA drug response variation in vitiligo in relation to DNA repair. Large systematic studies on DNA repair may help in a better understanding of the mechanisms involved in the PUVA drug response variation.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"malonaldehyde\", \"Psoralen\", \"MDA\"]}",
"role": "assistant"
}
] |
science.chemsum_single_document_summarization | science.chemsum_single_document_summarization.103659 | [
{
"content": "Write an abstract for this chemistry paper.\n\nTitle:\nAn assay to measure the affinity of proteins for microtubules by quantitative fluorescent microscopy\n\nPaper sections:\n\n<p>The biochemical analysis of microtubule-binding proteins has dramatically expanded in recent years as kinetochore proteins have been identified and produced in recombinant form. The kinetochore is the complex of proteins that connects chromosomes to the microtubules of the mitotic spindle. Kinetochores alter the dynamics of the spindle microtubules and maintain their attachment as the MTs polymerize and depolymerize under their grip. Models to account for this behavior have been developed but are difficult to test without knowing the affinity of the kinetochore proteins for MTs. Microtubule affinity has been measured by assays in which the amount of a MT binding protein co-sedimenting with MTs is quantified by SDS-PAGE and Coomassie blue staining or Western blots[1; 2; 3; 4; 5; 6]. These assays are unable to distinguish non-specific aggregation, from trapping by the microtubules, from true binding, any of which causes sedimentation of protein. We developed an assay that overcomes these difficulties and used it to measure the affinity of the Dam1 complex for microtubules[7]. The amount of GFP-labeled Dam1 complex bound to MTs was quantified by fluorescence microscopy. Using this assay, we measured a 30-fold higher affinity for microtubules than previously measured [6]. Here we characterize the assay and provide evidence of the increased sensitivity of our method over the current one widely used.</p><p>The assays were performed as described previously except as noted in the Figure legends [7]. A detailed protocol and video can be found at faculty.washington.edu/tdavis. Briefly, in a 250 μL assay, Dam1 complex (0–15 nM) was added to bovine brain taxol-stabilized MTs (2.5 nM, 1:41 Alexa-568-labeled tubulin/unlabeled tubulin) in BRB80 (80 mM PIPES buffer (pH 6.8), 1 mM EGTA, 1 mM MgCl2) containing 10 μM taxol. (Note, we started with PIPES in the acid form instead of K-PIPES to minimize the amount of K+ in the buffer.) The final buffer composition was 87.5% BRB80 and 12.5% Dam1 complex protein buffer (50 mM sodium phosphate buffer, pH 6.8, containing 350 mM NaCl). The mixture was incubated for 10 minutes at 21°C. The reaction was fixed by addition of 750 μL of 2% glutaraldehyde in BRB80 and incubated for 15 minutes at 21°C. A custom spacer (Ellard Instrumentation, Monroe, WA) was placed in a centrifuge tube (11 × 34 mm) and a polylysine coated coverslip (5 mm diameter) was placed on top of the spacer. A 1 mL cushion of 15% glycerol in Phosphate Buffered Saline (PBS) was added on top of the coverslip. An aliquot of the reaction mixture was layered onto the cushion. The MTs were pelleted onto the coverslip by centrifugation in a Beckman TLS55 rotor at 135,000 × g for 10 minutes at 21°C. The slides were imaged on a DeltaVision microscopy system (Applied Precision, Issaquah, WA). Ten sections (0.3 μm) were taken of 10 consecutive panels (512 × 512) for each slide using a TRITC filter set (555/28×, 617/73m) to detect Alexa-tubulin and then an FITC filter set (490/20×, 555/28m) to detect Dad1-GFP. (Dad1 is the component of the Dam1 complex labeled with GFP). The average pixel intensity for the second section of the FITC stack for each panel from a slide was averaged to give the observed fluorescence for the bound Dam1 complex. Background was measured on the slide in which MTs were incubated with no Dam1 complex. 
Values obtained on different days were corrected for lamp intensity using the photosensor value.</p><p>A standard curve was made to convert observed fluorescence to nM Dam1 complex bound as described [7]. Briefly, increasing concentrations of Dam1 complex (0 nM – 30 nM) were bound to saturating amounts of MTs (50 nM) such that all the Dam1 complex in the assay was bound to MTs. The MTs were pelleted and the fluorescence was measured as described above. An immunoblot of the supernatant detected no Dam1. A plot of fluorescence vs. Dam1 concentration fit a straight line with an R2 value of at least 0.99 (demonstrating that we were indeed at saturating conditions). Concentration of Dam1 bound to MTs (in nM) was calculated as the observed GFP fluorescence pelleting with MTs divided by the slope of that line. Concentration of free Dam1 complex was calculated as the concentration of total Dam1 complex added to the assay minus the concentration bound.</p><p>Our assay offers an important advantage over previously described assays because examination of the protein bound to microtubules by microscopy allowed for a clear distinction between binding and non-specific pelleting. Recombinant protein complexes are sensitive to buffering conditions, which must be optimized to prevent aggregation (Figure 1).</p><p>Fixation of the microtubules prior to imaging is required. We tested if fixation altered the results of the assay. In our normal protocol we dilute the binding assay into fixative and then separate the bound Dam1 complex from the unbound by sedimentation through a glycerol cushion. We reasoned that if the fixation process was driving the binding reaction then we would obtain different results if we fixed after separating the bound Dam1 complex from the unbound by sedimentation. Instead, we found that fixation of the bound Dam1 complex before sedimentation gave the same result as fixation after sedimentation (Figure 2A). Therefore, fixation was not changing the amount of Dam1 complex bound to microtubules. In fact we found that the results were generally insensitive to fixation conditions. Changing the concentration of glutaraldehyde from 0.5 to 9%, the duration of fixation from 2 to 20 minutes and the dilution by the fixative from 30% to 500% had little effect on the amount of binding detected (Figure 2 B, C, D). Binding is complete by 1 minute (Figure 2E). Figure 2F shows the binding of the Dam1 complex compared to a negative control (GFP). The negative control shows no binding. The binding curve for the Dam1 complex could be fit to the McGhee and VonHippel model for binding as described [7]. This fit gives a Kd of 6.5 nM with positive cooperativity of 2.3.</p><p>Because 80–90% of the tubulin polymerizes, a small amount of free tubulin is present in each assay. We tested whether free tubulin competed with the binding of Dam1 complex to polymerized tubulin. We measured binding in the presence of increasing concentrations of unpolymerized tubulin (Figure 2G). The free tubulin decreased binding when present at 500 nM, but did not have an effect at lower concentrations of free tubulin. Therefore, the low amount of free tubulin in our standard assay (0.25 – 0.3 nM) is not competing with polymerized tubulin for the Dam1 complex. However, assays that measure binding at high concentrations of taxol-stabilized microtubules (e.g. 4 μM total tubulin with about 0.4 μM free) might be affected by the presence of the unpolymerized tubulin [6; 8]. 
This may partially explain the difference between the Kd we measured [7] and the Kd measured previously by Westermann and coworkers (200 nM) [6].</p><p>Binding decreases with the addition of salt, and is nearly undetectable when 150 mM NaCl is present in the binding buffer in addition to the 120 mM K+ in BRB80 (Figure 2H). Because the binding of Dam1 complex to microtubules is sensitive to the salt concentration of the binding buffer, careful reporting of the buffer conditions for each binding assay is crucial for comparison of binding constants measured in different labs.</p><p>In vitro microtubule binding techniques provide key insights into the binding affinities of individual protein complexes, uncovering potential functions of individual kinetochore complexes. In our fluorescence microtubule binding assay we rigorously tested several parameters to ensure the careful measurement of dissociation constants, providing visualization as well as a greater sensitivity in measuring binding. The detailed description of our protocol provides a useable resource in revealing both qualitative and quantitative variations in binding affinities that might be missed in a more standard pelleting assay.</p><p>Visualization of microtubule binding can distinguish between non-specific aggregation and microtubule binding.</p><p>A. Dam1 complex bound to microtubules. Microtubule binding assay was performed as described except that the buffer conditions were BRB80 containing 33 mM NaCl, 3 mM HEPES, and 3 mM sodium phosphate. Green is Dam1 complex tagged with GFP. In this assay 10 nM Dam1 complex was incubated for 5 minutes with taxol-stabilized microtubules and then 2.5 nM unlabeled Ndc80 complex was added.</p><p>B. Aggregates of the Dam1 complex. The Dam1 complex can form aggregates in the binding assay. In this assay, the order of addition was reversed. 2.5 nM unlabeled Ndc80 complex was incubated with taxol-stabilized microtubules for 5 minutes, and then 10 nM Dam1 complex was added.</p><p>Characterization of the microtubule binding assay.</p><p>The microtubule binding assays were performed and quantified as described in the text except as noted below.</p><p>A. Fixation prior to sedimentation gives the same result as fixation after sedimentation. After: Dam1 complex (2.5 or 5 nM) was incubated with taxol-stabilized microtubules (2.5 nM) for 10 minutes. The solution was not fixed. Instead, the bound Dam1 complex was separated from unbound by sedimentation through a 15% glycerol cushion made in BRB80 buffer. The supernatant and cushion were removed by aspiration and the coverslip was dipped in 2% glutaraldehyde before mounting on the slide. Before: Dam1 complex (2.5 or 5 nM) was incubated with taxol-stabilized microtubules for 10 minutes and then the solution was fixed before sedimentation through the glycerol cushion. The error bars represent standard error of the mean (SEM) for 4 replicates.</p><p>B. 
The effect of glutaraldehyde concentration on measuring MT binding</p><p>Dam1 complex (5 nM) was incubated with taxol-stabilized MTs (2.5 nM) for 10 minutes. An aliquot of glutaraldehyde (750 μl) was added to each assay tube (250 μl) and the samples were incubated for 15 min. The final concentrations of glutaraldehyde were as shown. The error bars represent the SEM for 4 replicates.</p><p>C. The effect of fixation time on measuring MT binding</p><p>Dam1 complex (5 nM) was added to taxol-stabilized MTs (3.2 nM) and incubated for 10 minutes. Samples were fixed for the given times. The error bars represent the SEM for 3 replicates.</p><p>D. The effect of fixation volume on measuring MT binding</p><p>Dam1 complex (5 nM) was added to taxol-stabilized MTs (2.5 nM) and incubated for 10 minutes. Different volumes of glutaraldehyde were added to each assay to yield the given ratio of volume of glutaraldehyde/volume of assay. The final concentration of glutaraldehyde was constant (1.5%). Error bars represent the SEM for 3–7 replicates.</p><p>E. Time course of Dam1 complex binding to MTs.</p><p>Dam1 complex (10 nM) was added to taxol-stabilized microtubules (2.5 nM). At the given times, glutaraldehyde was added to end the reaction. The error bars represent the SEM for 2–8 replicates.</p><p>F. Non-specific binding was not detected</p><p>Dam1 complex (filled circles) or GFP (open circles) (0.5–15 nM) were assayed. The data for the Dam1 complex are reproduced for comparison from [7]. They are plotted as a Hill plot [9] and the dotted line shows the fit to the McGhee and VonHippel model as described [7]. For the Dam1 complex the error bars represent the SEM for 8–11 replicates. The assays for binding GFP were done in quadruplicate. The SEM for the GFP data were smaller than the size of the symbols and are not shown.</p><p>G. Dam1 complex binding to MTs is decreased by free tubulin.</p><p>Dam1 complex binding to MTs was measured in the presence of the given quantities of unpolymerized tubulin (in nM). Unpolymerized tubulin in BRB80 containing 1 mM GTP was diluted 1/100 in BRB80 containing 10 μM taxol before addition to the assay. The error bars represent the SEM for 2–5 replicates.</p><p>H. Dam1 complex binding to MTs is sensitive to salt.</p><p>Dam1 complex (7.5 nM) was added to taxol-stabilized microtubules (2.5 nM) in BRB80 buffer containing the given concentration of NaCl. (Note that BRB80 is 120 mM K+). Error bars represent SEM for 3 replicates.</p>",
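The quantification underlying the assay comes down to a standard-curve conversion plus a mass balance; a minimal sketch is given below. The slope and fluorescence values are placeholders for illustration, not data from the paper.

```python
# Sketch of the quantification described above: convert background-corrected GFP
# fluorescence to bound Dam1 complex (nM) via the standard-curve slope, then get the
# free concentration by mass balance. All numbers below are illustrative placeholders.
def bound_nM(fluorescence: float, background: float, slope: float) -> float:
    """Bound complex (nM) = corrected fluorescence / standard-curve slope."""
    return (fluorescence - background) / slope

def free_nM(total_nM: float, bound: float) -> float:
    """Free complex (nM) = total added - bound (mass balance)."""
    return total_nM - bound

slope = 120.0  # fluorescence units per nM, from the saturating standard curve
b = bound_nM(fluorescence=550.0, background=70.0, slope=slope)
print(f"bound = {b:.2f} nM, free = {free_nM(5.0, b):.2f} nM")
```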
"role": "user"
},
{
"content": "We report a fluorescence-based assay for measuring the affinity of microtubule binding proteins for microtubules. The affinity of any fluorescently tagged protein for taxol-stabilized microtubules can be measured with this assay. We describe the assay and provide a detailed protocol. Using this assay we find that the affinity of the Dam1 complex for microtubules is decreased by the presence of free unpolymerized tubulin and is sensitive to the salt concentration in the binding buffer. These effects likely account for the previous differences in binding affinities reported.",
"role": "assistant"
}
] |
science.qasper_extractive_qa | science.qasper_extractive_qa.668.0 | [
{
"content": "Read the following paper excerpts and answer the question. Output a JSON object with \"answer\" and \"evidence\" fields. The answer must be extracted directly from the paper text.\n\nIf the question cannot be answered from the given text, respond with \"null\".\n\nPaper: A Deep Learning Architecture for De-identification of Patient Notes: Implementation and Evaluation\n\nDe-identification is the process of removing 18 protected health information (PHI) from clinical notes in order for the text to be considered not individually identifiable. Recent advances in natural language processing (NLP) has allowed for the use of deep learning techniques for the task of de-identification. In this paper, we present a deep learning architecture that builds on the latest NLP advances by incorporating deep contextualized word embeddings and variational drop out Bi-LSTMs. We test this architecture on two gold standard datasets and show that the architecture achieves state-of-the-art performance on both data sets while also converging faster than other systems without the use of dictionaries or other knowledge sources.\n\nUnnamed section\n[block] 1 5mm *\n2pt*22pt\n[block] 2.1 5mm\n[block] 2.1.1 5mm\n\nRule-based Systems\nRule-based systems make heavy use of pattern matching such as dictionaries (or gazetteers), regular expressions and other patterns. BIBREF2 Systems such as the ones described in BIBREF5 , BIBREF6 do not require the use any labeled data. Hence, they are considered as unsupervised learning systems. Advantages of such systems include their ease of use, ease of adding new patterns and easy interpretability. However, these methods suffer from a lack of robustness with regards to the input. For example, different casings of the same word could be misinterpreted as an unknown word. Furthermore, typographical errors are almost always present in most documents and rule-based systems often cannot correctly handle these types of inaccuracies present in the data. Critically, these systems cannot handle context which could render a medical text unreadable. For example, a diagnosis of “Lou Gehring disease” could be misidentified by such a system as a PHI of type Name. The system might replace the tokens “Lou” and “Gehring” with randomized names rendering the text meaningless if enough of these tokens were replaced.\n\nDeep Learning Systems\nWith the disadvantages of the past three approaches to building a de-identification system in mind, the current state-of-the-art systems employ deep learning techniques to achieve better results than the best hybrid systems while also not requiring the time-consuming process of feature engineering. Deep learning is a subset of machine learning that uses multiple layers of Artificial Neural Networks (ANNs), which has been very succesful at most Natural Language Processing (NLP) tasks. Recent advances in the field of deep learning and NLP especially in regards to named entity recognition have allowed systems such as the one by Dernoncourt et al. BIBREF9 to achieve better results on the 2014 i2b2 de-identification challenge data set than the winning hybrid system proposed by Yang et al. BIBREF10 . 
The advances in NLP and deep learning which have allowed for this performance are detailed below.\nANNs cannot take words as inputs and require numeric inputs; therefore, past approaches to using ANNs for NLP have been to employ a bag-of-words (BoW) representation of words, where a dictionary is built of all known words and each word in a sentence is assigned a unique vector that is inputted into the ANN. A drawback of such a technique is that words with similar meanings are represented completely differently. As a solution to this problem, a technique called word embeddings has been used. Word embeddings gained popularity when Mikolov et al. BIBREF12 used ANNs to generate a distributed vector representation of a word based on the usage of the word in a text corpus. This way of representing words allowed for similar words to be represented using vectors of similar values while also allowing for complex operations such as the famous example v(king) − v(man) + v(woman) ≈ v(queen), where v(w) represents the vector for a particular word w.\nWhile pre-trained word embeddings such as the widely used GloVe BIBREF12 embeddings are revolutionary and powerful, such representations only capture one context representation, namely the one of the training corpus they were derived from. This shortcoming has led to the very recent development of context-dependent representations such as the ones developed by BIBREF1 , BIBREF13 , which can capture different features of a word.\nThe Embeddings from Language Models (ELMo) from the system by Peters et al. BIBREF1 are used by the architecture in this paper to achieve state-of-the-art results. The ELMo representations, learned by combining Bi-LSTMs with a language modeling objective, capture context-dependent aspects at the higher-level LSTM while the lower-level LSTM captures aspects of syntax. Moreover, the outputs of the different layers of the system can be used independently or averaged to output embeddings that significantly improve some existing models for solving NLP problems. These results drive our motivation to include the ELMo representations in our architecture.\nThe use of ANNs for many machine learning tasks has gained popularity in recent years. Recently, a variant of recurrent neural networks (RNN) called Bi-directional Long Short-Term Memory (Bi-LSTM) networks has been successfully employed, especially in the realm of NER.\nIn fact, several Bi-LSTM architectures have been proposed to tackle the problem of NER: LSTM-CRF, LSTM-CNNs-CRF and LSTM-CNNs BIBREF9 . The current best performing system on the i2b2 dataset is in fact a system based on LSTM-CRF BIBREF9 .\n\nBi-LSTM Layer\nThe Bi-LSTM layer is composed of two LSTM layers, which are a variant of the Bidirectional RNNs. In short, the Bi-LSTM layer contains two independent LSTMs in which one network is fed input in the normal time direction while the other network is fed input in the reverse time direction. The outputs of the two networks can then be combined using either summation, multiplication, concatenation or averaging. Our architecture uses simple concatenation to combine the outputs of the two networks.\nOur architecture for the Bi-LSTM layer is similar to the ones used by BIBREF16 , BIBREF17 , BIBREF18 with each LSTM containing 100 hidden units. To ensure that the neural networks do not overfit, we use a variant of the popular dropout technique called variational dropout BIBREF19 to regularize our neural networks.
Variational dropout differs from the traditional naïve dropout technique by having the same dropout mask for the inputs, outputs and the recurrent layers BIBREF19 . This is in contrast to the traditional technique of applying a different dropout mask for each of the input and output layers. BIBREF20 shows that variational dropout applied to the output and recurrent units performs significantly better than naïve dropout or no dropout for NER tasks. As such, we apply a dropout probability of 0.5 for both the output and the recurrent units in our architecture.\n\nEvaluation Metrics\nFor de-identification tasks, the three metrics we will use to evaluate the performance of our architecture are Precision, Recall and INLINEFORM0 score as defined below. We will compute both the binary INLINEFORM1 score and the three metrics for each PHI type for both data sets. Note that the binary INLINEFORM2 score measures whether or not a token was identified as PHI, as opposed to correctly predicting the right PHI type. For de-identification, we place more importance on identifying whether a token is a PHI instance, with correctly predicting the exact PHI type as a secondary objective. INLINEFORM3 INLINEFORM4 \nNotice that a high recall is paramount given the risk of accidentally disclosing sensitive patient information if not all PHI are detected and removed from the document or replaced by fake data. A high precision is also desired to preserve the integrity of the documents, as a large number of false positives might obscure the meaning of the text or even distort it. As the harmonic mean of precision and recall, the INLINEFORM0 score gives an overall measure for model performance that is frequently employed in the NLP literature.\nAs a benchmark, we will use the results of the systems by Burckhardt et al. BIBREF22 , Liu et al. BIBREF18 , Dernoncourt et al. BIBREF9 and Yang et al. BIBREF10 on the i2b2 dataset and the performance of Burckhardt et al. on the nursing corpus. Note that Burckhardt et al. used the entire data set for their results as it is an unsupervised learning system while we had to split our data set into 60% training data and 40% testing data.\n\nResults\nWe evaluated the architecture on both the i2b2-PHI categories and the HIPAA-PHI categories for the i2b2 data set based on token-level labels. Note that the HIPAA categories are a superset of the i2b2-PHI categories. We also ran the analysis 5+ times to give us a range of maximum scores for the different data sets.\nTable TABREF25 gives us a summary of how our architecture performed against other systems on the binary INLINEFORM0 score metrics, while Table TABREF26 and Table TABREF27 summarize the performance of our architecture against other systems on HIPAA-PHI categories and i2b2-PHI categories respectively. Table TABREF28 presents a summary of the performance on the nursing note corpus while also contrasting the performances achieved by the deidentify system.\n\nDiscussion and Error Analysis\nAs we can see in Table TABREF26 , with the exception of ID, our architecture performs considerably better than the systems by Liu et al. and Yang et al. Dernoncourt et al. did not provide exact figures for the HIPAA-PHI categories, so we have excluded them from our analysis. Furthermore, Table TABREF25 shows that our architecture performs similarly to the best scores achieved by Dernoncourt et al., with our architecture slightly edging out Dernoncourt et al. on the precision metric. 
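[Editorial illustration, not part of the paper: the precision, recall and F1 definitions referenced above through the INLINEFORM placeholders are the standard ones, and the binary scores discussed in this section can be computed in a few lines of Python. The function name and tag inventory below are our own assumptions, not the authors' evaluation code.]

```python
# Minimal sketch (hypothetical names): token-level binary PHI metrics.
def binary_phi_metrics(gold_tags, pred_tags, non_phi="O"):
    """Precision, recall and F1 over binary PHI-vs-non-PHI decisions per token."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_tags, pred_tags):
        gold_is_phi = gold != non_phi
        pred_is_phi = pred != non_phi
        if gold_is_phi and pred_is_phi:
            tp += 1                      # PHI found, regardless of exact type
        elif pred_is_phi:
            fp += 1                      # spurious PHI prediction
        elif gold_is_phi:
            fn += 1                      # missed PHI token (the costly case)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# A mislabelled type ("DATE" vs. "NAME") still counts as a binary true positive:
gold = ["O", "NAME", "NAME", "O", "DATE"]
pred = ["O", "NAME", "DATE", "O", "O"]
print(binary_phi_metrics(gold, pred))  # -> (1.0, 0.666..., 0.8)
```

As the text notes, recall is the safety-critical quantity here: every false negative is a potential disclosure of patient information.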
For the nursing corpus, our system, while not performing as well as on the i2b2 data set, managed to best the scores achieved by the deidentify system while also achieving a binary INLINEFORM0 score of over 0.812. It is important to note that deidentify was an unsupervised learning system; it did not require the use of a train-valid-test split and therefore used the whole data set for its performance numbers. The results of our architecture are assessed using a 60%/40% train/test split.\nOur architecture noticeably converges faster than NeuroNER, which was trained for 100 epochs, and the system by Liu et al. BIBREF18 , which was trained for 80 epochs. Different runs of training our architecture on the i2b2 dataset converge at around 23 INLINEFORM0 4 epochs. A possible explanation is that our architecture uses the Adam optimizer, whereas the NeuroNER system uses the Stochastic Gradient Descent (SGD) optimizer. In fact, Reimers et al. BIBREF20 show that the SGD optimizer performed considerably worse than the Adam optimizer for different NLP tasks.\nFurthermore, we also do not see any noticeable improvements from using the PubMed database trained word embeddings BIBREF14 instead of the general text trained GloVe word embeddings. In fact, we consistently saw better INLINEFORM0 scores using the GloVe embeddings. This could be due to the fact that our use case was for identifying general labels such as Names, Phones, Locations etc. instead of bio-medical specific terms such as diseases, which are far better represented in the PubMed corpus.\n\nError Analysis\nWe will mainly focus on the two PHI categories Profession and ID for our error analysis on the i2b2 data set. It is interesting to note that the best performing models on the i2b2 data set by Dernoncourt et al. BIBREF9 experienced similar lower performances on the same two categories. However, we note the performances by Dernoncourt et al. were achieved using a “combination of n-gram, morphological, orthographic and gazetteer features” BIBREF9 while our architecture uses only POS tagging as an external feature. Dernoncourt et al. posit that the lower performance on the Profession category might be due to the close embeddings of the Profession tokens to other PHI tokens, which we can confirm on our architecture as well. Furthermore, our experiments show that the Profession PHI performs considerably better with the PubMed embedded model than with the GloVe embedded model. This could be due to the fact that PubMed embeddings were trained on the PubMed database, which is a database of medical literature. GloVe, on the other hand, was trained on a general database, which means the PubMed embeddings for Profession tokens might not be as close to other tokens as is the case for the GloVe embeddings.\nFor the ID PHI, our analysis shows that some of the errors were due to tokenization errors. For example, a “:” was counted as a PHI token, which our architecture correctly predicted as not a PHI token. Since our architecture is not custom tailored to detect sophisticated ID patterns such as the systems in BIBREF9 , BIBREF10 , we have failed to detect some ID PHIs such as “265-01-73”, a medical record number, which our architecture predicted as a phone number due to the format of the number. Such errors could easily be mitigated by the use of simple regular expressions.\nWe can see that our architecture outperforms the deidentify system by a considerable margin on most categories as measured by the INLINEFORM0 score. 
For example, the authors of deidentify note that Date PHIs have considerably low precision values, while our architecture achieves a precision of greater than 0.915 for the Date PHI. However, Burckhardt et al. BIBREF22 achieve an impressive precision of 0.899 and recall of 1.0 for the Phone PHI while our architecture only manages 0.778 and 0.583 respectively. Our analysis of this category shows that this is mainly due to a difference in tokenization: stand-alone numbers are being classified as not a PHI.\nWe tried to use the model that we trained on the i2b2 data set to predict the categories of the nursing data set. However, due to differences in the text structure, the actual text and the format, we achieved less than random performance on the nursing data set. This brings up an important point about the transferability of such models.\n\nAblation Analysis\nOur ablation analysis shows us that the layers of our model add to the overall performance. Figure FIGREF33 shows the binary INLINEFORM0 scores on the i2b2 data set with each bar being a feature toggled off. For example, the “No Char Embd” bar shows the performance of the model with no character embeddings and everything else the same as our best model.\nWe can see a noticeable change in the performance if we do not include the ELMo embeddings versus no GloVe embeddings. The slight decrease in performance when we use no GloVe embeddings shows us that this is a feature we might choose to exclude if computation time is limited. Furthermore, we can see the impact of having no variational dropout and only using naïve dropout, which shows that variational dropout is better at regularizing our neural network.\n\nConclusion\nIn this study, we show that our deep learning architecture, which incorporates the latest developments in contextual word embeddings and NLP, achieves state-of-the-art performance on two widely available gold standard de-identification data sets while also achieving similar performance as the best system available in fewer epochs. Our architecture also significantly improves over the performance of the hybrid system deidentify on the nursing data set.\nThis architecture could be integrated into a client-ready system such as the deidentify system. However, as mentioned in Section SECREF8 , the use of a dictionary (or gazetteer) might help improve the model even further, especially with regard to the Location and Profession PHI types. Such a hybrid system would be highly beneficial to practitioners who need to de-identify patient data on a daily basis.\n\nTable 2: Summary of architecture parameters\n\nFigure 1: Deep Learning Architecture\n\nTable 5: HIPAA-PHI result\n\nTable 6: Scores broken down by i2b2 PHI categories\n\nQuestion: What is their baseline?",
"role": "user"
},
{
"content": "{\"answer\": [\"Burckhardt et al. BIBREF22\", \"Liu et al. BIBREF18\", \"Dernoncourt et al. BIBREF9\", \"Yang et al. BIBREF10\"], \"evidence\": [\"As a benchmark, we will use the results of the systems by Burckhardt et al. BIBREF22 , Liu et al. BIBREF18 , Dernoncourt et al. BIBREF9 and Yang et al. BIBREF10 on the i2b2 dataset and the performance of Burckhardt et al. on the nursing corpus. Note that Burckhardt et al. used the entire data set for their results as it is an unsupervised learning system while we had to split our data set into 60% training data and 40% testing data.\"]}",
"role": "assistant"
}
] |
science.data_reco_mcq_mc | science.data_reco_mcq_mc.2298 | [
{
"content": "Based on the dataset descriptions below, which datasets would be suitable for the following research question? Your response should be a JSON array.\n\nQuestion: We propose a simple but effective framework for unsupervised landmark detection. We build upon an existing landmark detector\n\nKeywords: landmark detection\n\nDataset descriptions:\nMPII: The MPII Human Pose Dataset for single person pose estimation is composed of about 25K images of which 15K are training samples, 3K are validation samples and 7K are testing samples (which labels are withheld by the authors). The images are taken from YouTube videos covering 410 different human activities and the poses are manually annotated with up to 16 body joints.\nAFW: AFW (Annotated Faces in the Wild) is a face detection dataset that contains 205 images with 468 faces. Each face image is labeled with at most 6 landmarks with visibility labels, as well as a bounding box.\nCelebA: CelebFaces Attributes dataset contains 202,599 face images of the size 178×218 from 10,177 celebrities, each annotated with 40 binary labels indicating facial attributes like hair color, gender and age.\nAFLW: The Annotated Facial Landmarks in the Wild (AFLW) is a large-scale collection of annotated face images gathered from Flickr, exhibiting a large variety in appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions. In total about 25K faces are annotated with up to 21 landmarks per image.\nWildDeepfake: WildDeepfake is a dataset for real-world deepfakes detection which consists of 7,314 face sequences extracted from 707 deepfake videos that are collected completely from the internet. WildDeepfake is a small dataset that can be used, in addition to existing datasets, to develop more effective detectors against real-world deepfakes.\nGoogle Landmarks Dataset v2: This is the second version of the Google Landmarks dataset (GLDv2), which contains images annotated with labels representing human-made and natural landmarks. The dataset can be used for landmark recognition and retrieval experiments.\n\nOptions: A: MPII B: AFW C: CelebA D: AFLW E: WildDeepfake F: Google Landmarks Dataset v2",
"role": "user"
},
{
"content": "[\"MPII\", \"CelebA\", \"AFLW\"]",
"role": "assistant"
}
] |
science.cdr_ner | science.cdr_ner.425 | [
{
"content": "Find all entities of these types in the text: [\"Chemical\", \"Disease\"].\n\nOutput a JSON object.\n\nText:\nWhen drugs disappear from the patient: elimination of intravenous medication by hemodiafiltration. Twenty-three hours after heart transplantation, life-threatening acute right heart failure was diagnosed in a patient requiring continuous venovenous hemodiafiltration (CVVHDF). Increasing doses of catecholamines, sedatives, and muscle relaxants administered through a central venous catheter were ineffective. However, a bolus of epinephrine injected through an alternative catheter provoked a hypertensive crisis. Thus, interference with the central venous infusion by the dialysis catheter was suspected. The catheters were changed, and hemodynamics stabilized at lower catecholamine doses. When the effects of IV drugs are inadequate in patients receiving CVVHDF, interference with adjacent catheters resulting in elimination of the drug by CVVHDF should be suspected.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"epinephrine\", \"catecholamines\", \"catecholamine\"], \"Disease\": [\"right heart failure\", \"hypertensive\"]}",
"role": "assistant"
}
] |
science.healthver_entailment | science.healthver_entailment.1019 | [
{
"content": "Output a JSON object with \"verdict\" and \"evidence\" fields based on the claim and abstract below.\n\nClaim: Coronavirus: No proof yet that pets can get COVID-19 from owners, but human to animal spread has happened before.\n\nAbstract:\nLittle information on the SARS-CoV-2 virus in animals is available to date.\nWhereas no one husbandry animal case has been reported to date, which would have significant implications in food safety, companion animals play a role in COVID-19 epidemiology that opens up new questions.\nThere is evidence that SARS-CoV-2 can infect felines, dogs and minks, and there is evidence of human-to-animal infection.\nLikewise, the S protein nucleotide sequence of the SARS-CoV-2 virus isolated in domestic animals and humans is identical, and the replication of the SARS-CoV-2 in cats is efficient.\nBesides, the epidemiological evidence for this current pandemic indicates that the spillover to humans was associated with close contact between man and exotic animals, very probably in Chinese wet markets, thus there is a growing general consensus that the exotic animal markets, should be strictly regulated.\nThe examination of these findings and the particular role of animals in COVID-19 should be carefully analyzed in order to establish preparation and containment measures.\nAnimal management and epidemiological surveillance must be also considered for COVID-19 control, and it can open up new questions regarding COVID-19 epidemiology and the role that animals play in it.",
"role": "user"
},
{
"content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"There is evidence that SARS-CoV-2 can infect felines, dogs and minks, and there is evidence of human-to-animal infection.\", \"Likewise, the S protein nucleotide sequence of the SARS-CoV-2 virus isolated in domestic animals and humans is identical, and the replication of the SARS-CoV-2 in cats is efficient.\"]}",
"role": "assistant"
}
] |
science.qasper_extractive_qa | science.qasper_extractive_qa.878.0 | [
{
"content": "Read the following paper excerpts and answer the question. Output a JSON object with \"answer\" and \"evidence\" fields. The answer must be extracted directly from the paper text.\n\nIf the question cannot be answered from the given text, respond with \"null\".\n\nPaper: Back to the Future -- Sequential Alignment of Text Representations\n\nLanguage evolves over time in many ways relevant to natural language processing tasks. For example, recent occurrences of tokens 'BERT' and 'ELMO' in publications refer to neural network architectures rather than persons. This type of temporal signal is typically overlooked, but is important if one aims to deploy a machine learning model over an extended period of time. In particular, language evolution causes data drift between time-steps in sequential decision-making tasks. Examples of such tasks include prediction of paper acceptance for yearly conferences (regular intervals) or author stance prediction for rumours on Twitter (irregular intervals). Inspired by successes in computer vision, we tackle data drift by sequentially aligning learned representations. We evaluate on three challenging tasks varying in terms of time-scales, linguistic units, and domains. These tasks show our method outperforming several strong baselines, including using all available data. We argue that, due to its low computational expense, sequential alignment is a practical solution to dealing with language evolution.\n\nSubspace Alignment ::: Semi-supervised subspace alignment\nIn semi-supervised alignment, one performs subspace alignment per class. As such, at least 1 target label per class needs to be available. However, even then, with only 1 target label per class, we would only be able to find 1 principal component. To allow for the estimation of more components, we provisionally label all target samples using a 1-nearest-neighbour classifier, starting from the given target labels. Using pseudo-labelled target samples, we estimate $d$ components.\nNow, the optimal linear transformation matrix for each class can be found with an equivalent procedure as in Equation DISPLAY_FORM4:\nAfterwards, we transform the source samples of each class $X_{t-1}^k$ through the projection onto class-specific components $C_{{\\cal S},k}$ and the optimal transformation: $X_{t-1}^{k} C_{{\\cal S}, k} M_{k}^{*}$. Additionally, we centre each transformed source class on the corresponding target class. Figure FIGREF2 (right middle) shows the source documents transformed through semi-supervised alignment. Now, the classes match the classes of the target data.\n\nSubspace Alignment ::: Considering Sample Similarities between Classes\nSince intermediary spaces, such as $t^{\\prime }_{1,2}$ and $t^{\\prime }_{2,3}$, do not necessarily share the same semantic properties, we add a step to the semi-supervised alignment procedure. Given that the initial unaligned spaces do encode similarities between instances, we run the $k$-means clustering algorithm ($k=5$) to give us some course-grained indication of instance similarities in the original embedding space. This cluster ID is passed to SSA, resulting in an alignment which both attempts to match classes across time steps, in addition to instance similarities. 
Hence, even though $t^{\\prime }_{1,2}$ and $t^{\\prime }_{2,3}$ are not necessarily semantically coherent, an alignment to $t^{\\prime \\prime }_{1,2,3}$ is made possible.\n\nExperimental Setup\nIn the past year, several approaches to pre-training representations on language modelling based on transformer architectures BIBREF7 have been proposed. These models essentially use a multi-head self-attention mechanism in order to learn representations which are able to attend directly to any part of a sequence. Recent work has shown that such contextualised representations pre-trained on language modelling tasks offer highly versatile representations which can be fine-tuned on seemingly any given task BIBREF1, BIBREF0, BIBREF8, BIBREF9. In line with the recommendations from experiments on fine-tuning representations BIBREF10, we use a frozen BERT to extract a consistent task-agnostic representation. Using a frozen BERT with subsequent subspace alignment allows us to avoid re-training a neural network each time-step while still working in an embedding learned by a neural language model. It also allows us to test the effectiveness of SSA without the confounding influence of representation updates.\n\nExperimental Setup ::: Three Tasks.\nWe consider three tasks representing a broad selection of natural language understanding scenarios: paper acceptance prediction based on the PeerRead data set BIBREF2, Named Entity Recognition (NER) based on the Broad Twitter Corpus BIBREF3, and author stance prediction based on the RumEval-19 data set BIBREF6. These tasks were chosen so as to represent i) different textual domains, across ii) differing time scales, and iii) operating at varying levels of linguistic granularity. As we are dealing with dynamical learning, the vast majority of NLP data sets can unfortunately not be used since they do not include time stamps.\n\nPaper Acceptance Prediction\nThe PeerRead data set contains papers from ten years of arXiv history, as well as papers and reviews from major AI and NLP conferences BIBREF2. From the perspective of evaluating our method, the arXiv sub-set of this data set offers the possibility of evaluating our method while adapting to ten years of history. This is furthermore the only subset of the data annotated with both timestamps and with a relatively balanced accept/reject annotation. As arXiv naturally contains both accepted and rejected papers, this acceptance status has been assigned based on BIBREF11 who match arXiv submissions to bibliographic entries in DBLP, and additionally defining acceptance as having been accepted to major conferences, and not to workshops. This results in a data set of nearly 12,000 papers, from which we use the raw abstract text as input to our system. The first three years were filtered out due to containing very few papers. We use the standard train/test splits supplied with the data set.\nBIBREF2 show that it is possible to predict paper acceptance status at major conferences at above baseline levels. Our intuition in applying SSA to this problem, is that the topic of a paper is likely to bias acceptance to certain conferences across time. For instance, it is plausible that the likelihood of a neural paper being accepted to an NLP conference before and after 2013 differs wildly. 
Hence, we expect that our model will, to some extent, represent the topic of an article, and that this will lend itself nicely to SSA.\n\nPaper Acceptance Prediction ::: Model\nWe use the pre-trained bert-base-uncased model as the base for our paper acceptance prediction model. Following the approach of BIBREF0, we take the final hidden state (i.e., the output of the transformer) corresponding to the special [CLS] token of an input sequence to be our representation of a paper, as this has aggregated information through the sequence (Figure FIGREF14). This gives us a $d$-dimensional representation of each document, where $d=786$. In all of the experiments for this task, we train an SVM with an RBF kernel on these representations, either with or without SSA depending on the setting.\n\nNamed Entity Recognition\nThe Broad Twitter Corpus contains tweets annotated with named entities, collected between the years 2009 and 2014 BIBREF3. However, as only a handful of tweets are collected before 2012, we focus our analysis on the final three years of this period (i.e. two test years). The corpus includes diverse data, annotated in part via crowdsourcing and in part by experts. The inventory of tags in their tag scheme is relatively small, including Person, Location, and Organisation. To the best of our knowledge no one has evaluated on this corpus either in general or per year, and so we cannot compare with previous work.\nIn the case of NER, we expect the adaptation step of our model to capture the fact that named entities may change their meaning across time (e.g. the example with ”Bert” and ”BERT” in Figure FIGREF1). This is related to work showing temporal drift of topics BIBREF12.\n\nNamed Entity Recognition ::: Model\nSince casing is typically an important feature in NER, we use the pre-trained bert-base-cased model as our base for NER. For each token, we extract its contextualised representation from BERT, before applying SSA. As BIBREF0 achieve state-of-the-art results without conditioning the predicted tag sequence on surrounding tags (as would be the case with a CRF, for example), we also opt for this simpler architecture. The resulting contextualised representations are therefore passed to an MLP with a single hidden layer (200 hidden units, ReLU activation), before predicting NER tags. We train the MLP over 5 epochs using the Adam optimiser BIBREF13.\n\nNamed Entity Recognition ::: Experiments & Results\nAs with previous experiments, we compare unsupervised and semi-supervised subspace alignment with baselines corresponding to using all data, data from the same year as the evaluation year, and data from the previous year. For each year, we divide the data into 80/10/10 splits for training, development, and test. Results on the two test years 2013 and 2014 are shown in Table TABREF17. In the case of NER, we do not observe any positive results for unsupervised subspace alignment. In the case of semi-supervised alignment, however, we find increased performance as compared to training on the previous year, and compared to training on all data. This shows that learning an alignment from just a few data points can help the model to generalise from past data. However, unlike our previous experiments, results are somewhat better when given access to the entire set of training data from the test year itself in the case of NER. 
The fact that training on only 2013 and evaluating on the same year does not work well can be explained by the fact that the amount of data available for 2013 is only 10% of that for 2012. The identical results for the unbounded extension arise because aligning from a single time step renders it irrelevant.\n\nSDQC Stance Classification ::: Model\nFor this task, we use the same modelling approach as described for paper acceptance prediction. This method is also suitable here, since we simply require a condensed representation of a few sentences on which to base our temporal adaptation and predictions. In the last iteration of the task, the winning system used hand-crafted features to achieve a high performance BIBREF14. Including these would complicate SSA, so we opt for this simpler architecture instead. We use the shorter time-scale of approximately weeks rather than years as rumours can change rapidly BIBREF15.\n\nAnalysis and Discussion ::: Example of Aligning Tweets\nFinally, we set up the following simplified experiment to investigate the effect of alignment on SDQC data. First, we consider the rumour charliehebdo, where we picked the following tweet:\npx\nSupport:France: 10 people dead after shooting at HQ of satirical weekly newspaper #CharlieHebdo, according to witnesses <URL>\npx\nIt has been labeled as supporting the veracity of the rumour. We will consider the scenario where we use this tweet and others involving the charliehebdo incident to predict author stance in the rumour germanwings-crash. Before alignment, the following 2 germanwings-crash tweets are among the nearest neighbours in the embedding space:\npx\nQuery: @USER @USER if they had, it’s likely the descent rate would’ve been steeper and the speed not reduce, no ?\npx\nComment: @USER Praying for the families and friends of those involved in crash. I'm so sorry for your loss.\nThe second tweet is semantically similar (both are on the topic of tragedy), but the other is unrelated. Note that the news agency tweet differs from the comment and query tweets in that it stems from a reputable source, mentions details and includes a reference. After alignment, the charliehebdo tweet has the following 2 nearest neighbours:\npx\nSupport: “@USER: 148 passengers were on board #GermanWings Airbus A320 which has crashed in the southern French Alps <URL>”\npx\nSupport: Report: Co-Pilot Locked Out Of Cockpit Before Fatal Plane Crash <URL> #Germanwings <URL>\nNow, both neighbours are of the support class. This example shows that semi-supervised alignment maps source tweets from one class close to target tweets of the same class.\n\nAnalysis and Discussion ::: Related Work\nThe temporal nature of data can have a significant impact on natural language processing tasks. For instance, BIBREF16 compare a number of approaches to diachronic word embeddings, and detection of semantic shifts across time. For example, such representations can be used to uncover changes of word meanings, or senses of new words altogether BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21. Other work has investigated changes in the usage of parts of speech across time BIBREF22. BIBREF23 investigate the changing meanings and associations of words across time, from the perspective of language change. By learning time-aware embeddings, they are able to outperform standard word representation learning algorithms, and can discover, e.g., equivalent technologies through time. 
BIBREF24 show that lexical features can change their polarity across time, which can have a significant impact in sentiment analysis. BIBREF12 show that associating topics with continuous distributions of timestamps yields substantial improvements in terms of topic prediction and interpretation of trends. Temporal effects in NLP have also been studied in the context of scientific journals, for instance in the context of emerging themes and viewpoints BIBREF25, BIBREF26, and in terms of topic modelling on news corpora across time BIBREF27. Finally, in the context of rumour stance classification, BIBREF28 show that temporal information as a feature in addition to textual content offers an improvement in results. While this previous work has highlighted the extent to which language change across time is relevant for NLP, we present a concrete approach to taking advantage of this change. Nonetheless, these results could inspire more specialized forms of sequential adaptation for specific tasks.\nUnsupervised subspace alignment has been used in computer vision to adapt between various types of representations of objects, such as high-definition photos, online retail images and illustrations BIBREF5, BIBREF29. Alignment is not restricted to linear transformations, but can be made non-linear through kernelisation BIBREF30. An extension to semi-supervised alignment has been done for images BIBREF31, but not in the context of classification of text embeddings or domain adaptation on a sequential basis.\n\nFigure 2: Example of subspace alignment procedure. (Left) Source data, (left middle) unsupervised alignment of the source data, (right middle) semi-supervised alignment of the source data, (right) target data.\n\nTable 1: Paper acceptance prediction (accuracy) on the PeerRead data set (Kang et al. 2018). The abbreviated columns represent Unsupervised, Semi-supervised, Unsupervised Unbounded, Semi-supervised Unbounded, and Semi-supervised Unbounded with Clustering.\n\nFigure 3: Paper acceptance model (BERT and SSA).\n\nTable 3: F1 score in SDQC task of RumourEval-2019 (Gorrell et al. 2018)\n\nQuestion: What are three challenging tasks authors evaluated their sequentially aligned representations?",
"role": "user"
},
{
"content": "{\"answer\": [\"paper acceptance prediction\", \"Named Entity Recognition (NER)\", \"author stance prediction\"], \"evidence\": [\"We consider three tasks representing a broad selection of natural language understanding scenarios: paper acceptance prediction based on the PeerRead data set BIBREF2, Named Entity Recognition (NER) based on the Broad Twitter Corpus BIBREF3, and author stance prediction based on the RumEval-19 data set BIBREF6. These tasks were chosen so as to represent i) different textual domains, across ii) differing time scales, and iii) operating at varying levels of linguistic granularity. As we are dealing with dynamical learning, the vast majority of NLP data sets can unfortunately not be used since they do not include time stamps.\"]}",
"role": "assistant"
}
] |
science.chemdner_ner | science.chemdner_ner.247 | [
{
"content": "Find all [\"Chemical\"] entities in this text. Output a JSON object.\n\nText:\nComparative antioxidant effects of lycopene, apo-10'-lycopenoic acid and apo-14'-lycopenoic acid in human macrophages exposed to H2O2 and cigarette smoke extract. Much of the beneficial effects of tomato lycopene in the prevention of chronic diseases has been attributed to its antioxidant properties, which could be mediated by its metabolites and/or oxidation products. However, the biological functions of these lycopene derivatives remain still unknown. In the present study, we evaluated and compared the antioxidant efficacy of the lycopene eccentric cleavage products apo-10'-lycopenoic acid and apo-14'-lycopenoic acid in counteracting the oxidative effects of H(2)O(2) and cigarette smoke extract (CSE) in THP-1 macrophages. Both apo-10'-lycopenoic acid and apo-14'-lycopenoic acid were able to inhibit spontaneous and H(2)O(2)-induced ROS production in a dose-dependent manner. Such an effect was accompanied by an inhibition of MAPK phosphorylation, by NF-κB inactivation, and by inhibition of hsp-70 and hsp-90 expressions. Both apo-lycopenoic acids also decreased CSE-induced ROS production, 8-OHdG formation and reduced the increase in NOX-4 and COX-2 expressions caused by CSE. However, in both the models of oxidative stress, apo-14'-lycopenoic acid was much more potent as an antioxidant than apo-10'-lycopenoic acid, showing antioxidant properties similar to lycopene. These data strongly suggest that apo-lycopenoic acids, and particularly apo-14'-lycopenoic acid, may mediate some of the antioxidant functions of lycopene in cells.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"apo-14\\u0027-lycopenoic acid\", \"apo-10\\u0027-lycopenoic acid\", \"H(2)O(2)\", \"8-OHdG\", \"apo-lycopenoic acids\", \"lycopene\", \"H2O2\"]}",
"role": "assistant"
}
] |
science.data_reco_mcq_sc | science.data_reco_mcq_sc.3783 | [
{
"content": "Which dataset would be most relevant for this research query?\n\nQuery: A method for the estimation of the contact force and contact location is proposed.\n\nKeyphrases: contact force estimation\n\nDataset descriptions:\nGRAB: GRAB is a dataset of full-body motions interacting and grasping 3D objects. It contains accurate finger and facial motions as well as the contact between the objects and body.\nContactPose: ContactPose is a dataset of hand-object contact paired with hand pose, object pose, and RGB-D images. ContactPose has 2306 unique grasps of 25 household objects grasped with 2 functional intents by 50 participants, and more than 2.9 M RGB-D grasp images.\nMuJoCo: MuJoCo (multi-joint dynamics with contact) is a physics engine used to implement environments to benchmark Reinforcement Learning methods.\nContactDB: ContactDB is a dataset of contact maps for household objects that captures the rich hand-object contact that occurs during grasping, enabled by use of a thermal camera. ContactDB includes 3,750 3D meshes of 50 household objects textured with contact maps and 375K frames of synchronized RGB-D+thermal images.\n\nOptions: A: GRAB B: ContactPose C: MuJoCo D: ContactDB",
"role": "user"
},
{
"content": "MuJoCo",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.3706 | [
{
"content": "Find entities of these types in the text: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nOutput a JSON object with entity types as keys and lists of found entities as values.\n\nText:\nMetabolic and ventilatory responses during very low level exercise . 1 . Nine male and six female healthy subjects were studied during supine bicycle exercise at workloads of 12 and 37 W ; pedalling rates varied between 30 and 50 cycles/min at each workload . Measurements were made of oxygen consumption ( VO2 ) , carbon dioxide production ( VCO2 ) , minute ventilation ( VE ) , tidal volume ( VT ) , respiratory frequency ( fR ) , inspiratory and expiratory time ( TI , TE ) and mean inspiratory flow ( VT/TI ) using a non-invasive canopy-computer-spirometer system . 2 . At rest , males had greater values of VE , VT , TI , inspiratory duty cycle ( TI/TTOT ) , VCO2 and VO2 , and a lower fR , than females . 3 . At the lower workload , VO2 , VCO2 , VE , VT and VT/TI increased linearly with increasing pedalling rate , whereas at the higher workload there was a decrease in VO2 and little or no change in ventilatory parameters from 30 to 50 cycles/min except for an increase in fR in females . 4 . While performing supine exercise , there was an effect of pedalling rate on ventilatory and metabolic parameters at the low workload ( 12 W ) which diminished at the higher workload ( 37 W ) . An increase in pedalling rate appears to enhance efficiency at these low workloads . 5 . Differences between the sexes during exercise generally include : ( a ) a higher breathing frequency , ( b ) a greater mechanical efficiency , and ( c ) lower ventilatory equivalents of O2 and CO2 ( VE/VO2 and VE/VCO2 ) during the higher workload in females than males .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"supine bicycle exercise\", \"very low level exercise\", \"supine exercise\"], \"Outcome\": [\"VCO2\", \"inspiratory duty cycle\", \"ventilatory parameters\", \"VO2\", \"fR\", \"VT\", \"lower ventilatory equivalents of O2 and CO2 ( VE/VO2 and VE/VCO2 )\", \"VT/TI\", \"( TI/TTOT )\", \"pedalling rate on ventilatory and metabolic parameters\", \"a greater mechanical efficiency\", \"respiratory frequency ( fR ) ,\", \"higher breathing frequency\", \"pedalling rate\", \"lower fR\", \"mean inspiratory flow ( VT/TI )\", \"TI\", \"oxygen consumption ( VO2 ) , carbon dioxide production ( VCO2 ) , minute ventilation ( VE ) , tidal volume ( VT )\", \"inspiratory and expiratory time ( TI , TE )\", \"VE\"], \"Participant\": [\"Nine male and six female healthy subjects were studied during supine bicycle exercise at workloads of 12 and 37 W ; pedalling rates varied between 30 and 50 cycles/min at each workload .\"]}",
"role": "assistant"
}
] |
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.16781 | [
{
"content": "Write a literature review chapter based on these paper abstracts.\nLiterature Review Title: A Survey on Resiliency Techniques in Cloud Computing Infrastructures and Applications \nChapter Title: D. Resiliency in Cloud Integrated Infrastructure \n1. Abstract of Cited Paper (BIB001): For optical grid/cloud scenarios, the dimensioning problem comprises not only deciding on the network dimensions (i.e., link bandwidths), but also choosing appropriate locations to install server infrastructure (i.e., data centers), as well as determining the amount of required server resources (for storage and/or processing). Given that users of such grid/cloud systems in general do not care about the exact physical locations of the server resources, a degree of freedom arises in choosing for each of their requests the most appropriate server location. We will exploit this anycast routing principle (i.e., source of traffic is given, but destination can be chosen rather freely) also to provide resilience: traffic may be relocated to alternate destinations in case of network/server failures. In this study, we propose to jointly optimize the link dimensioning and the location of the servers in an optical grid/cloud, where the anycast principle is applied for resiliency against either link or server node failures. While the data center location problem has some resemblance with either the classical p-center or k-means location problems, the anycast principle makes it much more difficult due to the requirement of link disjoint paths for ensuring grid resiliency. \n2. Abstract of Cited Paper (BIB002): In this paper we address the problem of dimensioning infrastructure, comprising both network and server resources, for large-scale decentralized distributed systems such as grids or clouds. We will provide an overview of our work in this area, and in particular focus on how to design the resulting grid/cloud to be resilient against network link and/or server site failures. To this end, we will exploit relocation: under failure conditions, a request may be sent to an alternate destination than the one under failure-free conditions. We will provide a comprehensive overview of related work in this area, and focus in some detail on our own most recent work. The latter comprises a case study where traffic has a known origin, but we assume a degree of freedom as to where its end up being processed, which is typically the case for e.g., grid applications of the bag-of-tasks (BoT) type or for providing cloud services. In particular, we will provide in this paper a new integer linear programming (ILP) formulation to solve the resilient grid/cloud dimensioning problem using failure-dependent backup routes. Our algorithm will simultaneously decide on server and network capacity. We find that in the anycast routing problem we address, the benefit of using failure-dependent (FD) rerouting is limited compared to failure-independent (FID) backup routing. We confirm our earlier findings in terms of network capacity savings achieved by relocation compared to not exploiting relocation (order of 6–10% in the current case studies). \n3. Abstract of Cited Paper (BIB003): Survivability against disasters-both natural and deliberate attacks, and spanning large geographical areas-is becoming a major challenge in communication networks. Cloud services delivered by datacenter networks yield new opportunities to provide protection against disasters. 
Cloud services require a network substrate with high capacity, low latency, high availability, and low cost, which can be delivered by optical networks. In such networks, path protection against network failures is generally ensured by providing a backup path to the same destination (i.e., a datacenter), which is link-disjoint to the primary path. This protection fails to protect against disasters covering an area which disrupts both primary and backup paths. Also, protection against destination (datacenter) node failure is not ensured by a generic protection scheme. Moreover, content/service protection is a fundamental problem in a datacenter network, as the failure of a datacenter should not cause the disappearance of a specific content/service from the network. So content placement, routing, and protection of paths and content should be addressed together. In this work, we propose an integrated Integer Linear Program (ILP) to design an optical datacenter network, which solves the above-mentioned problems simultaneously. We show that our disaster protection scheme exploiting anycasting provides more protection, but uses less capacity than dedicated single-link failure protection. We show that a reasonable number of datacenters and selective content replicas with intelligent network design can provide survivability to disasters while supporting user demands. We also propose ILP relaxations and heuristics to solve the problem for large networks. \n4. Abstract of Cited Paper (BIB004): Cloud-computing services are provided to consumers through a network of servers and network equipment. Cloud-network (CN) providers virtualize resources [e.g., virtual machine (VM) and virtual network (VN)] for efficient and secure resource allocation. Disasters are one of the worst threats for CNs as they can cause massive disruptions and CN disconnection. A disaster may also induce post-disaster correlated, cascading failures which can disconnect more CNs. Survivable virtual-network embedding (SVNE) approaches have been studied to protect VNs against single physical-link/-node and dual physical-link failures in communication infrastructure, but massive disruptions due to a disaster and their consequences can make SVNE approaches insufficient to guarantee cloud-computing survivability. In this work, we study the problem of survivable CN mapping from disaster. We consider risk assessment, VM backup location, and post-disaster survivability to reduce the risk of failure and probability of CN disconnection and the penalty paid by operators due to loss of capacity. We formulate the proposed approach as an integer linear program and study two scenarios: a natural disaster, e.g., earthquake and a human-made disaster, e.g., weapons-of-mass-destruction attack. Our illustrative examples show that our approach reduces the risk of CN disconnection and penalty up to 90 % compared with a baseline CN mapping approach and increases the CN survivability up to 100 % in both scenarios. \n5. Abstract of Cited Paper (BIB005): We address the dimensioning of infrastructure, comprising both network and server resources, for large-scale decentralized distributed systems such as grids or clouds. We design the resulting grid/cloud to be resilient against network link or server failures. To this end, we exploit relocation: Under failure conditions, a grid job or cloud virtual machine may be served at an alternate destination (i.e., different from the one under failure-free conditions). 
We thus consider grid/cloud requests to have a known origin, but assume a degree of freedom as to where they end up being served, which is the case for grid applications of the bag-of-tasks (BoT) type or hosted virtual machines in the cloud case. We present a generic methodology based on integer linear programming (ILP) that: 1) chooses a given number of sites in a given network topology where to install server infrastructure; and 2) determines the amount of both network and server capacity to cater for both the failure-free scenario and failures of links or nodes. For the latter, we consider either failure-independent (FID) or failure-dependent (FD) recovery. Case studies on European-scale networks show that relocation allows considerable reduction of the total amount of network and server resources, especially in sparse topologies and for higher numbers of server sites. Adopting a failure-dependent backup routing strategy does lead to lower resource dimensions, but only when we adopt relocation (especially for a high number of server sites): Without exploiting relocation, potential savings of FD versus FID are not meaningful. \n6. Abstract of Cited Paper (BIB006): Cloud services delivered by high-capacity optical datacenter networks are subject to disasters which may cause large-area failures, leading to huge data loss. Survivable service provisioning is crucial to minimize the effects of network/datacenter failures and maintain critical services in case of a disaster. We propose a novel disaster-aware service-provisioning scheme that multiplexes service over multiple paths destined to multiple servers/datacenters with manycasting. Our scheme maintains some bandwidth (i.e., degraded service) after a disaster failure vs. no service at all. We formulate this problem into a mathematical model which turns out to be an Integer Linear Program (ILP), and we provide heuristic optimization approaches as ILP is intractable for large problem instances. Numerical examples show that exploiting manycasting by intelligently selecting destinations in a risk-aware manner for service provisioning offers high level of survivability against link and node failures that may be caused by disasters and post-disaster failures at no extra cost compared to the other survivable schemes. \n7. Abstract of Cited Paper (BIB007): The rapid increase in network traffic with new bandwidth-hungry applications such as cloud computing and telemedicine makes disaster survivability a crucial concern as the data (and revenue) loss caused by large-scale correlated cascading failures can be very high. To alleviate their impact, new measures should be taken since the nature of the network changes dramatically as available resources decrease during disasters. We develop a metric, called degraded-service tolerance, which can reduce protection cost and network disruption, and support maximal carried traffic in case of disasters. Degraded-service-tolerant connections can be admitted and recovered with reduced bandwidth under resource crunch. Our scheme re-assigns resources among connections by leveraging their degraded-service tolerance. A case study shows how our proposal can be applied to boost network performance during the resource crunch following a disaster. \n8. Abstract of Cited Paper (BIB008): We optimize the disaster backup in multi-datacenter (multi-DC) cloud systems and design disaster-aware algorithms to realize rapid backup with the objective of minimizing the backup window for all the DCs in the network. 
A mixed integer linear programming (MILP) model is first formulated to optimize the backup processes of all production DCs jointly. We then develop three heuristics that use the one-step or two-step approaches for the selection of backup DCs and the calculation of backup routing paths. Simulation results show that the Two-Step algorithm can achieve the shortest backup window with the lowest operation complexity. \n9. Abstract of Cited Paper (BIB009): In case of large-scale disasters, cloud networks may be vulnerable to data loss. We propose a rapid data-evacuation strategy to move maximum amount of data from disaster regions using survived resources under strict time constraints.\n\nLiterature Review:",
"role": "user"
},
{
"content": "Cloud providers connect computation and communication elements in an integrated infrastructure to offer their services to tenants and customers. This integration results in a socalled Cloud Oriented Federated Computing and Networking System (CFCNS) BIB001 . Then, cloud functionalities (as seen in Section II-B) are enabled by virtualizing the CFCNS infrastructure, i.e., slicing the CFNS resource, thanks to virtualized servers (i.e., VMs and VSs) that are accessed using virtualized networks (i.e., VN, VON, and SDN). In the following, we will call the integration of virtual servers and virtual networks as a Cloud Network (CN) BIB005 , i.e., the CN is a virtualized slice of the CFCNS infrastructure (see Glossary in Section VIII). In such an integrated system, failures in network elements as well as in servers might induce failure in cascades that affect other elements of the CFCNS, given their strict interdependence. So, resiliency techniques specifically designed for communication network or server systems in isolation can be insufficient to ensure the resiliency of a CFCNS. An efficient coordination and integration of these resiliency approaches is necessary. In continuation to the previous classifications, we categorize approaches for CFCNS resiliency as those related to: (i) the physical layer or (ii) the virtualization layer. Surveyed approaches for resiliency in cloud integrated infrastructures are organized according to the classification in Fig. 10 Resource degradation is a temporary reduction of the server or bandwidth capacity enforced to distribute surviving resources to all customers. Three approaches using one or both of these techniques are: routing aware placement (R-Aware) BIB002 , failure independent rerouting (FID) BIB003 , BIB006 , and degradationaware provisioning disaster recovery (DAP) BIB007 . The three approaches minimize cost and RT by using: connection relocation (in FID), two types of routing protection (in R-Aware), and two types of backup routing with gradual degradation in case of multiple failures (in DAP). An extension of DAP for gradual capacity degradation to recover connections after a disaster is suggested in BIB008 . Disaster resilient data center dimensioning (DRDD) in BIB004 also adds content replication and placement in the capacity dimensioning. c) Content evacuation for recovery. In presence of predictable failures (e.g., an upcoming hurricane), an alert is typically issued and the cloud provider might have time to \"evacuate\" data from the data center locations under risk towards other safe data center facilities. Two approaches implementing this concept are disaster backup in multi-data center (DB-multi-DC) BIB009 and rapid data-evacuation strategy (DR-RAPID-EVA) BIB010 . DB-multi-DC uses a two-step algorithm to solve the fast-backup evacuation of data center contents. DR-RAPID-EVA exploits the time made available by the disaster alert and the extent of the forecast damage as input to run a fast reactive evacuation algo- Type of approach \"DES\" design of cloud infrastructure, \"RUN\" run-time, \"PLA\" planning of capacity, and \"R/P\" Run-time or planning or both. Element used by the approach, \"C\" content or cloud computing data, \"R\" routing or provisioning for communication, \"DC\" entire data center, \"CN\" whole cloud network, \"VN\" virtual network, \"VM\" virtual machine. 
Strategies used by the approach, \"VA\" vulnerability aware, \"CD\" capacity dimensioning, \"Place\" content/DC placement, \"AnyCast\" routing, \"DR\" disaster resiliency, \"VBN\" virtual backup node mapping, \"CR\" central controller replication functionality, \"AM\" address mobility functionality, \"VM survivability\" general VM survivability functionality, which can be any or combination of \"VM migration\" virtual machine migration, \"VM replication\" virtual machine replication, \"VM chekpointing\" virtual machine checkpointing, \"VN survivability\" VN survivability functionality, which can be \"Cut\" Cut-disjoint, \"PC\" pre-configured capacity. Methods \"PR\" is protection, \"RE\" recovery, \"FR\" forecasting, \"RM\" removal, \"RE/PR\" either recovery or protection or both.",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.102478 | [
{
"content": "Write an abstract for the research paper below.\n\nPaper text:\ngeneral anaesthesia is associated with lower oxygen tension due to increased alveolar - arterial oxygen gradient and occasionally oxygen desaturation leading to hypoxaemia .\nthe other mechanical causes of hypoxaemia are equipment failure , endotracheal tube blockade , endobronchial or oesophageal intubation and clinical causes are aspiration , hypoventilation , or pulmonary embolism especially in patients with trauma .\nintraoperative atelectasis of a bronchopulmonary segment , entire lobe , or lung due to mucus plug as a cause of hypoxaemia is uncommon and may be potentially fatal .\nwe report a case of intraoperative right lung collapse due to mucus plug in a patient undergoing left laparoscopic nephrectomy .\na 46-year - old male , chronic smoker , weighing 55 kg was diagnosed to have bilateral obstructive uropathy due to renal stone disease .\nhe was posted for left laparoscopic nephrectomy for left - sided non - functioning kidney .\nstenting and cystolitholapaxy under spinal anaesthesia and right - sided percutaneous nephrolithotomy ( pcnl ) and ureterorenoscopy ( urs ) before 2 months .\npreoperative blood investigations were within normal limits with serum creatinine of 1.12 mg / dl .\nx - ray of the chest was also reported as normal [ figure 1 ] .\ntrachea was intubated with an 8.5 mm cuffed endotracheal tube ( et ) after intubation , secretory sounds were felt in the bag with 100% oxygen saturation . when the patient was put in the right lateral kidney bridge position ,\non auscultation , breath sounds were reduced in the right supraclavicular , mammary , and scapular areas .\nsince there was no improvement in the oxygen saturation , he was put in the supine position . on examination respiratory movement\non auscultation , breath sounds were absent in right supraclavicular , mammary areas and bronchial breathing was present with crepitations in the right inframammary and infra - axillary regions .\nportable chest radiograph was suggestive of right upper lobe lung collapse [ figure 2 ] .\npreparation for fibreoptic bronchoscopy was done . 
in the mean time , postural drainage for the right upper lobe in the reverse trendelenberg position with head end of the operation table elevated by 18 inches was attempted and yielded a thick yellowish green mucus plug .\na sudden improvement was seen in the respiratory movement in the supraclavicular and mammary area with normal vesicular breathing and 100% oxygen saturation .\nthe repeat chest radiograph taken after 10 min showed complete expansion of the right upper lobe [ figure 3 ] .\nsurgery was postponed and the patient was extubated and shifted to the postanaesthetic care unit .\nretrospective examination of the preoperative chest radiograph which was reported normal , showed multiple retrocardiac cystic shadows .\nhigh resolution computed tomography ( hrct ) thorax was advised and it showed cystic bronchiectasis in the left lower lobe [ figure 4 ] .\na successful operation was done after 15 days with preoperative chest physiotherapy , incentive spirometry , antibiotic coverage , and intraoperative use of humidification and avoidance of anti - sialogogues .\nacute pulmonary collapse is common in the postoperative and intensive care unit ( icu ) setting but is a rare event after induction of anaesthesia and repositioning .\nthe most common cause is endobronchial intubation or blockage of tube by secretions , blood , or herniated cuff .\nour report highlights the precipitous onset of hypoxaemia associated with considerable pulmonary collapse due to mucus plug .\nnormally , general anaesthesia is associated with lower oxygen tensions , and occasionally oxygen desaturation leading to hypoxaemia can occur .\nmechanisms for hypoxaemia include ventilation perfusion ( v / q ) mismatch , hypoventilation , diffusion impairment , and physiologic or anatomic shunt . during anaesthesia and neuromuscular relaxation\n, ventilation is redistributed to the nondependent regions of the lungs due to atelectasis and airway closure in the dependent lung regions , resulting in increased alveolar - arterial oxygen gradients and hypoxaemia . in the lateral position , gravity - related preferential perfusion to the dependent lung worsens this v / q mismatch and hypoxaemia . in our patient , this was not the cause as hypoxaemia developed acutely with physical findings . in our patient , the initial clinical presentation included sudden onset of hypoxaemia on assuming a lateral position so initial diagnosis was blockage of et tube / endobronchial intubation as there was reduction or absence of breath sounds on right lung .\net blockage was ruled out by passing of the suction catheter which aspirated few secretions and endobronchial intubation was ruled out by auscultation . due to persistent hypoxaemia\n, the patient was turned to the supine position which revealed diminished movement on right upper chest .\nimmediate x - ray confirmed right upper lobe collapse . to find out the cause of collapse , fiberoptic bronchoscopy was planned but before that with postural drainage mucus plug was aspirated and the patient improved clinically and radiologically . 
to know the cause of mucus plug\n, we reviewed the chest x - ray taken on table and compared it with previous preoperative x - ray which was reported normal , which actually showed cystic cavity with air - fluid level in the left lower lobe which was concealed behind cardiac shadow .\non giving right lateral position , secretions from this cavity must have shifted to right upper lobe bronchus causing right upper lobe collapse .\nsignificant lung collapse is usually associated with haemodynamic instability due to mediastinal shift and decreased venous return . however\n, our patient remained haemodynamically stable . the probable reason could be that in the setting of proximal airway obstruction the elevated airway pressure is not transmitted to the extraparenchymal intrathoracic structures like great vessels , so venous return and cardiac output remain preserved as it has been shown previously .\nwhile lung collapse and intraoperative hypoxaemia have been reported previously , few involve mucous plugging and atelectasis . in reviewing the literature\n, we found five reports describing mucous plugging , all associated with orthopaedic surgery following trauma and all diagnosed with chest radiographs . in these patients ,\nthe combination of recent trauma and surgery was the contributing factor to intraoperative atelectasis and lung collapse from mucous obstruction .\nfour other cases of mucous plugging have been reported in quadriplegic patients initially diagnosed with pulmonary embolism . however , the chest radiograph was inconclusive , and diagnosis was made in these cases only by the use of ventilation scintiscans .\nthis report highlights a rare but important cause of intraoperative hypoxaemia . though written normal ,\nchest x - ray showed signs of bronchiectasis , but on reviewing the chest x - ray we understood there was missed diagnosis . if this was identified earlier thorough toileting before positioning could have prevented this complication .\nalthough intraoperative fibreoptic bronchoscopy is a gold standard to aid diagnosis and management , a trial of postural drainage may be life - saving .",
"role": "user"
},
{
"content": "desaturation after induction of anaesthesia and repositioning due to mucus plug causing atelectasis ( lung collapse ) is a rare event . \n we present a case of intraoperative right lung collapse due to mucus plug in a patient undergoing left laparoscopic nephrectomy . \n hypoxaemia occurred after the induction of anaesthesia and repositioning . \n x - ray chest revealed right lung collapse and surgery was subsequently postponed . \n lung re - expanded after postural drainage and suction . \n postoperatively patient was diagnosed to have retrocardiac bronchiectasis . \n after preoperative preparation with postural drainage , chest physiotherapy , and antibiotics , the patient underwent surgery uneventfully .",
"role": "assistant"
}
] |
science.mslr2022_cochrane_multidoc_summarization | science.mslr2022_cochrane_multidoc_summarization.801 | [
{
"content": "Based on the RCT reports below, write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article.\n\n\n\nTitle 1:\nEffect of insulating existing houses on health inequality: cluster randomised study in the community.\nAbstract 1:\nTo determine whether insulating existing houses increases indoor temperatures and improves occupants' health and wellbeing.\n Community based, cluster, single blinded randomised study.\n Seven low income communities in New Zealand.\n 1350 households containing 4407 participants.\n Installation of a standard retrofit insulation package.\n Indoor temperature and relative humidity, energy consumption, self reported health, wheezing, days off school and work, visits to general practitioners, and admissions to hospital.\n Insulation was associated with a small increase in bedroom temperatures during the winter (0.5 degrees C) and decreased relative humidity (-2.3%), despite energy consumption in insulated houses being 81% of that in uninsulated houses. Bedroom temperatures were below 10 degrees C for 1.7 fewer hours each day in insulated homes than in uninsulated ones. These changes were associated with reduced odds in the insulated homes of fair or poor self rated health (adjusted odds ratio 0.50, 95% confidence interval 0.38 to 0.68), self reports of wheezing in the past three months (0.57, 0.47 to 0.70), self reports of children taking a day off school (0.49, 0.31 to 0.80), and self reports of adults taking a day off work (0.62, 0.46 to 0.83). Visits to general practitioners were less often reported by occupants of insulated homes (0.73, 0.62 to 0.87). Hospital admissions for respiratory conditions were also reduced (0.53, 0.22 to 1.29), but this reduction was not statistically significant (P=0.16).\n Insulating existing houses led to a significantly warmer, drier indoor environment and resulted in improved self rated health, self reported wheezing, days off school and work, and visits to general practitioners as well as a trend for fewer hospital admissions for respiratory conditions.\n\n\nTitle 2:\nThe remediation of mold damaged school--a three-year follow-up study on teachers' health.\nAbstract 2:\nThe health effects in teachers of a mold-damaged school before and during an extensive remediation process were assessed. Health data were collected with self-administered questionnaires from teachers (n=31) working in a moisture and mold damaged school and from the reference group of teachers (n=13) working in a non-damaged school. The questionnaire study was repeated three times. Spirometry was measured in 33 individuals in the spring 1997 and repeated in the spring 1999 and 2000. In the damaged school, a cluster of eight asthma cases was identified, the prevalence of asthma being 26%. Before the remediation, the number of sinusitis episodes was higher (p=0.040) and the mean duration of sick leaves longer (p=0.015) among the study group than in the reference group. A higher prevalence of hoarseness and perceived poor quality of indoor air were reported. During the follow-up, no new asthma cases appeared. After the remediation, bronchitis, conjunctivitis, symptoms of allergic rhinitis and the sum of respiratory infection episodes decreased significantly. Some of the asthmatics had low values in the spirometry but no changes in the lung function were observed at the group level. 
The remediation of the mold damage had beneficial effects on teachers' health.\n\n\nTitle 3:\nAllergic respiratory disease and fungal remediation in a building in a subtropical climate.\nAbstract 3:\nAn outbreak of allergic respiratory disease occurred in a new building that was characterized from initial occupancy by the presence of extensive visible mold (especially Aspergillus versicolor) on interior surfaces. Epidemiological study of the occupants of both the affected building and a comparison neighboring structure indicated high rates of respiratory and other symptoms among persons working in the affected building. Subsequent clinical evaluations of some persons occupying the building for up to five years identified several cases of building-related allergic respiratory disease, including asthma and hypersensitivity pneumonitis. Based on these findings, the building was evacuated before remediation began. The mycological goal of the three-year building restoration project was to reduce concentration of non-phylloplane fungi such as A. versicolor to the lowest feasible level. All visibly colonized materials in the building were discarded and all fine dust on interior surfaces was removed by vacuuming and/or damp wiping. A medical surveillance program utilizing serial self-reported questionnaires and readily available clinical evaluations was designed to monitor the health of building occupants after re-entry. Symptom rates just prior to building reentry were substantially lower than those found before evacuation and have remained unchanged after re-occupancy was completed. No new or recrudescent cases of illness are known to have occurred after building re-entry.\n\n\nTitle 4:\nReduction in asthma morbidity in children as a result of home remediation aimed at moisture sources.\nAbstract 4:\nHome dampness and the presence of mold and allergens have been associated with asthma morbidity. We examined changes in asthma morbidity in children as a result of home remediation aimed at moisture sources.\n In this prospective, randomized controlled trial, symptomatic, asthmatic children (n = 62), 2-17 years of age, living in a home with indoor mold, received an asthma intervention including an action plan, education, and individualized problem solving. The remediation group also received household repairs, including reduction of water infiltration, removal of water-damaged building materials, and heating/ventilation/air-conditioning alterations. The control group received only home cleaning information. We measured children's total and allergen-specific serum immuno-globulin E, peripheral blood eosinophil counts, and urinary cotinine. Environmental dust samples were analyzed for dust mite, cockroach, rodent urinary protein, endotoxin, and fungi. The follow-up period was 1 year.\n Children in both groups showed improvement in asthma symptomatic days during the preremediation portion of the study. The remediation group had a significant decrease in symptom days (p = 0.003, as randomized; p = 0.004, intent to treat) after remodeling, whereas these parameters in the control group did not significantly change. In the postremediation period, the remediation group had a lower rate of exacerbations compared with control asthmatics (as treated: 1 of 29 vs. 11 of 33, respectively, p = 0. 
003; intent to treat: 28.1% and 10.0%, respectively, p = 0.11).\n Construction remediation aimed at the root cause of moisture sources and combined with a medical/behavioral intervention significantly reduces symptom days and health care use for asthmatic children who live in homes with a documented mold problem.\n\n\nTitle 5:\nImproved health after intervention in a school with moisture problems.\nAbstract 5:\nIn a school with floor moisture problems, the personnel had complaints consistent with the sick-building syndrome (SBS). Interventive measures including the laying of a ventilated floor were undertaken to eliminate the emissions. To examine if the intervention resulted in positive health effects, 34 personnel and 336 pupils were interviewed just before the intervention and also 7 months after. Also were interviewed 21 personnel and 224 pupils at an adjacent school serving as a control. Compared with the control school, the problem school showed more complaints, more general symptoms and more symptoms from the eyes, airways and skin, both among the personnel and the pupils. In the post-intervention examinations, the excess of symptoms among the personnel had almost disappeared. Among the pupils, the frequency of eye irritation was reduced but a general improvement of the other symptoms was not as obvious. However, after adjustment for a recent common cold, atopy and stress among the pupils, only one symptom (\"stuffy nose\") remained significantly elevated. In conclusion, the intervention was followed by positive health effects, supporting the hypothesis that emissions from building material had contributed to the excess of symptoms. A recent common cold was highly related to the symptoms and should be considered in future SBS studies.\n\n\nTitle 6:\nRespiratory morbidity among children following renovation of a water-damaged school.\nAbstract 6:\nThe authors sought to determine whether exposure to molds, resulting from moisture damage in a school, was associated with increased respiratory symptoms and morbidity among schoolchildren and whether the renovation of this building resulted in a decrease in prevalence of respiratory symptoms and morbidity. The study was a follow-up (1-y interval) of children between the ages of 7 and 12 y from two elementary schools in a Finnish suburb. In addition to a questionnaire completed by the parents, the authors assessed the respiratory health of children by examining the health records of a local health center. In the cross-sectional study, the prevalence of symptoms and infections were higher in the exposed group, as were visits to a physician and use of antibiotics. The school was renovated, after which all prevalence decreased and no significant differences remained, except for visits to a physician (according to questionnaire responses). Therefore, moisture damage and exposure to molds increased the indoor air problems of schools and affected the respiratory health of children.\n\n\nTitle 7:\n\"The walls were so damp and cold\" fuel poverty and ill health in Northern Ireland: results from a housing intervention.\nAbstract 7:\nThis article reports the findings from an evaluation of a fuel poverty programme in the Armagh and Dungannon Health Action Zone in Northern Ireland. Focusing on a rural community, it adds to the debate surrounding the hidden nature of rural fuel poverty. As part of the programme, energy efficiency measures, including some central heating systems, were installed in 54 homes. 
Surveys were conducted both pre and post intervention and analysed to assess any changes. The programme demonstrated that energy efficiency intervention can lead to improvements in health and well being, increased comfort levels in the home and a reduction in the use of health services, therefore having potential cost savings for the NHS. Some households, however, remain in fuel poverty after having full central heating installed, reflecting the significant contribution of low income on the production of fuel poverty. The article concludes by suggesting that interventions in this area require commitment from multiple sectors of society, including health professionals and local communities.",
"role": "user"
},
{
"content": "We found moderate to very low-quality evidence that repairing mould-damaged houses and offices decreases asthma-related symptoms and respiratory infections compared to no intervention in adults. There is very low-quality evidence that although repairing schools did not significantly change respiratory symptoms in staff or children, pupils' visits to physicians due to a common cold were less frequent after remediation of the school. Better research, preferably with a cRCT design and with more validated outcome measures, is needed.",
"role": "assistant"
}
] |
science.chemprot_re | science.chemprot_re.399 | [
{
"content": "Identify relationships between chemical compounds and genes in this abstract. Use only these relation types: ['DOWNREGULATOR', 'SUBSTRATE', 'INDIRECT-REGULATOR', 'PART-OF', 'MODULATOR-ACTIVATOR', 'REGULATOR', 'INHIBITOR', 'COFACTOR', 'UPREGULATOR', 'ACTIVATOR', 'ANTAGONIST', 'NOT', 'INDIRECT-DOWNREGULATOR', 'SUBSTRATE_PRODUCT-OF', 'INDIRECT-UPREGULATOR', 'AGONIST', 'PRODUCT-OF', 'MODULATOR', 'DIRECT-REGULATOR', 'UNDEFINED', 'AGONIST-INHIBITOR', 'AGONIST-ACTIVATOR', 'MODULATOR-INHIBITOR'].\n\nFormat your output as a JSON array with entries like [\"<Entity_A>\", \"<RELATION_A_B>\", \"<Entity_B>\"].\n\nAbstract:\nEffect of fenoterol-induced constitutive beta (2) -adrenoceptor activity on contractile receptor function in airway smooth muscle. In the present study, we investigated the effect of fenoterol-induced constitutive beta (2) -adrenoceptor activity on muscarinic receptor agonist- and histamine-induced bovine tracheal smooth muscle contractions. Bovine tracheal smooth muscle strips were incubated with 10 microM fenoterol or vehicle for various periods of time (5, 30 min, 18 h) at 37 degrees C. After extensive washout (3 h, 37 degrees C), isometric contractions were measured to the full muscarinic receptor agonist methacholine, the partial muscarinic receptor agonist 4- (m-chlorophenyl-carbamoyloxy) -2-butynyltrimethylammonium (McN-A-343) and histamine. Fenoterol treatment significantly reduced the sensitivity (pEC (50)) to methacholine in a time-dependent manner, without affecting maximal contraction (E (max)). Fenoterol treatment similarly reduced the pEC (50) of McN-A-343 and histamine; however, E (max) values were also reduced, to approximately 70% of control after 18-h treatment. The inverse agonist timolol, having no effect on control preparations, consistently restored the reduced pEC (50) and E (max) values of the contractile agonists. Remarkably, in the presence of timolol the pEC (50) values of McN-A-343 and histamine in fenoterol-treated airways were significantly enhanced compared to controls. In conclusion, fenoterol-induced constitutive beta (2) -adrenoceptor activity reduces muscarinic receptor agonist- and histamine-induced contractions of bovine tracheal smooth muscle, which can be reversed by the inverse agonist timolol. Moreover, after beta (2) -adrenoceptor agonist treatment, inverse agonism by beta-adrenoceptor antagonists may cause enhanced airway reactivity to contractile mediators.",
"role": "user"
},
{
"content": "[[\"histamine\", \"AGONIST\", \"muscarinic receptor\"], [\"methacholine\", \"AGONIST\", \"muscarinic receptor\"], [\"4- (m-chlorophenyl-carbamoyloxy) -2-butynyltrimethylammonium\", \"AGONIST\", \"muscarinic receptor\"], [\"timolol\", \"AGONIST-INHIBITOR\", \"beta (2) -adrenoceptor\"]]",
"role": "assistant"
}
] |
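The record above fixes a strict output contract: each prediction is a three-element [entity, relation, entity] array whose middle element must come from the fixed relation vocabulary given in the prompt. A minimal validation sketch of that contract (the ALLOWED set is copied verbatim from the prompt; the function and variable names are hypothetical, not part of the dataset):

```python
import json

# Relation vocabulary copied verbatim from the prompt in the record above.
ALLOWED = {
    'DOWNREGULATOR', 'SUBSTRATE', 'INDIRECT-REGULATOR', 'PART-OF',
    'MODULATOR-ACTIVATOR', 'REGULATOR', 'INHIBITOR', 'COFACTOR',
    'UPREGULATOR', 'ACTIVATOR', 'ANTAGONIST', 'NOT',
    'INDIRECT-DOWNREGULATOR', 'SUBSTRATE_PRODUCT-OF',
    'INDIRECT-UPREGULATOR', 'AGONIST', 'PRODUCT-OF', 'MODULATOR',
    'DIRECT-REGULATOR', 'UNDEFINED', 'AGONIST-INHIBITOR',
    'AGONIST-ACTIVATOR', 'MODULATOR-INHIBITOR',
}

def validate_triples(raw: str) -> list:
    """Parse a model answer and keep only well-formed relation triples."""
    triples = json.loads(raw)  # expected: a JSON array of 3-element arrays
    return [t for t in triples
            if isinstance(t, list) and len(t) == 3
            and all(isinstance(x, str) for x in t)
            and t[1] in ALLOWED]

# Checked against one triple from the gold answer in the record above:
gold = '[["timolol", "AGONIST-INHIBITOR", "beta (2) -adrenoceptor"]]'
print(validate_triples(gold))  # the triple passes through unchanged
```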
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.15137 | [
{
"content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nSingle image defogging is a classical and challenging problem in computer vision. Existing methods towards this problem mainly include handcrafted priors based methods that rely on the use of the atmospheric degradation model and learning based approaches that require paired fog-fogfree training example images. In practice, however, prior-based methods are prone to failure due to their own limitations and paired training data are extremely difficult to acquire. Inspired by the principle of CycleGAN network, we have developed an end-to-end learning system that uses unpaired fog and fogfree training images, adversarial discriminators and cycle consistency losses to automatically construct a fog removal system. Similar to CycleGAN, our system has two transformation paths; one maps fog images to a fogfree image domain and the other maps fogfree images to a fog image domain. Instead of one stage mapping, our system uses a two stage mapping strategy in each transformation path to enhance the effectiveness of fog removal. Furthermore, we make explicit use of prior knowledge in the networks by embedding the atmospheric degradation principle and a sky prior for mapping fogfree images to the fog images domain. In addition, we also contribute the first real world nature fog-fogfree image dataset for defogging research. Our multiple real fog images dataset (MRFID) contains images of 200 natural outdoor scenes. For each scene, there are one clear image and corresponding four foggy images of different fog densities manually selected from a sequence of images taken by a fixed camera over the course of one year. Qualitative and quantitative comparison against several state-of-the-art methods on both synthetic and real world images demonstrate that our approach is effective and performs favorably for recovering a clear image from a foggy image.\n\nCited Abstract(s):\n[1]: In this paper, we propose a novel learning-based approach for single image dehazing. The proposed approach is mostly inspired by the observation that the color of the objects fades gradually along with the increment of the scene depth. We regard the RGB values of the pixels within the image as the important feature, and use the back propagation neural network to mine the internal link between color and depth from the training samples, which consists of the hazy images and their corresponding ground truth depth map. With the trained neural network, we can easily restore the depth information as well as the scene radiance from the hazy image. Experimental results show that the proposed approach is able to produce a high-quality haze-free image with the single hazy image and achieve the real-time requirement.\n\n[2]: Haze is one of the major factors that degrade outdoor images. Removing haze from a single image is known to be severely ill-posed, and assumptions made in previous methods do not hold in many situations. In this paper, we systematically investigate different haze-relevant features in a learning framework to identify the best feature combination for image dehazing. We show that the dark-channel feature is the most informative one for this task, which confirms the observation of [8] from a learning perspective, while other haze-relevant features also contribute significantly in a complementary way. 
We also find that surprisingly, the synthetic hazy image patches we use for feature investigation serve well as training data for realworld images, which allows us to train specific models for specific applications. Experiment results demonstrate that the proposed algorithm outperforms state-of-the-art methods on both synthetic and real-world datasets.\n\n[3]: Single image haze removal is a challenging ill-posed problem. Existing methods use various constraints priors to get plausible dehazing solutions. The key to achieve haze removal is to estimate a medium transmission map for an input hazy image. In this paper, we propose a trainable end-to-end system called DehazeNet, for medium transmission estimation. DehazeNet takes a hazy image as input, and outputs its medium transmission map that is subsequently used to recover a haze-free image via atmospheric scattering model. DehazeNet adopts convolutional neural network-based deep architecture, whose layers are specially designed to embody the established assumptions priors in image dehazing. Specifically, the layers of Maxout units are used for feature extraction, which can generate almost all haze-relevant features. We also propose a novel nonlinear activation function in DehazeNet, called bilateral rectified linear unit, which is able to improve the quality of recovered haze-free image. We establish connections between the components of the proposed DehazeNet and those used in existing methods. Experiments on benchmark images show that DehazeNet achieves superior performance over existing methods, yet keeps efficient and easy to use.",
"role": "user"
},
{
"content": "Recently, some learning-based methods have drawn significant attention in the defogging reseach community. Tang [2] proposed a method by using the random forest methods to train dark primary colors and other multiple color features to improve the estimation accuracy of transmittance. Mai [1] found that the RGB color feature of the haze image had a strong linear relationship with the depth of the scene, and established the intrinsic relation between the color feature and the scene depth through the back propagation neural network to effectively restore the scene depth. Cai [3] proposed a concept of a dehazing network that used a convolutional neural network to train the color characteristics (such as dark primary colors, color fading, maximum contrast, etc.) of foggy images and to optimize the transmission. All of those methods can achieve better defogging effect. However, they still have to estimate the transmission map and atmospheric light first, and then remove the fog with the atmospheric degradation model. Thus, the artifacts could not be avoided in the final defogged results when the transmission or atmospheric light is wrongly estimated.",
"role": "assistant"
}
] |
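The related-work passage above repeatedly invokes the atmospheric degradation (scattering) model, usually written I(x) = J(x) t(x) + A (1 - t(x)), where I is the observed foggy image, J the clear scene radiance, t the transmission map, and A the global atmospheric light. A minimal NumPy round-trip sketch, assuming t and A are known (the function names and toy values are illustrative only, not from any cited paper):

```python
import numpy as np

def add_fog(J: np.ndarray, t: np.ndarray, A: float = 0.9) -> np.ndarray:
    """Apply I = J * t + A * (1 - t); J in [0, 1], t broadcast over channels."""
    return J * t[..., None] + A * (1.0 - t[..., None])

def defog(I: np.ndarray, t: np.ndarray, A: float = 0.9,
          t_min: float = 0.1) -> np.ndarray:
    """Invert the model: J = (I - A) / max(t, t_min) + A."""
    t_clamped = np.maximum(t, t_min)[..., None]  # floor t to avoid blowing up noise
    return np.clip((I - A) / t_clamped + A, 0.0, 1.0)

# Round trip on a toy 2x2 RGB image with uniform transmission 0.5.
J = np.random.rand(2, 2, 3)
t = np.full((2, 2), 0.5)
I = add_fog(J, t)
print(np.allclose(defog(I, t), J))  # True: exact inversion while t >= t_min
```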
science.chemsum_single_document_summarization | science.chemsum_single_document_summarization.47945 | [
{
"content": "Write an abstract for this chemistry paper.\n\nTitle:\nProgress in the Total Synthesis of Rocaglamide\n\nPaper sections:\n1. Introduction\n<p>During the past few years, indigenous to southeast Asia, the plant genus Aglia includes several species that produce a range of cyclopenta[b]tetrahydrobenzofuran containing metabolites [1–3], including rocaglamide (1), isolated from the roots and stems of Aglia elliptifolia by King et al. [4]. King's initial report indicated that rocaglamide showed significant in vivo activity in P388 lymphocytic leukemia-infected mice [4]. Since then, rocaglamide and related compounds have shown cytostatic, and cytotoxic activity against a variety of human cancer cell lines, with IC50 values in the range 1.0–6.0 ng/mL [5–9], has attracted more attention in recent years because of its insecticidal and growth inhibitory activity [10–15]. In order to be useful as drugs, a constant supply of such compounds in a large quantity is required. However, their natural abundance in the plant is quite low, and large-scale isolation from natural sources may not be feasible. Chemical synthesis, either total- or semisynthesis, is an option to produce this type of compounds. Both the structural complexity of rocaglamide and its significant activity make it an attractive synthetic target. Stereoselective synthesis of the dense substitution pattern of these targets is a formidable synthetic challenge: the molecules bear five contiguous stereocenters and cis aryl groups on adjacent carbons. In past years of effort, only a handful of completed total syntheses have been reported, evidence of the difficulties associated with the synthesis of rocaglate natural products. In the present work, several total synthetic approaches of rocaglamide will be reviewed (Scheme 1).</p>\n\n2.1. Synthesis of Di-Epi-Rocaglamide\n<p>An earlier attempt to synthesize rocaglamide (1) by Kraus and Sy in 1989 resulted in the synthesis of the di-epi analog of rocaglamide (6), as shown in Scheme 2 [16]. Michael addition of benzofuranone 2 to cinnamonitrile 3 gave keto-nitrile 4 in a 5 : 1 diastereomeric ratio. The major isomer was used to prepare 5 via an SmI2-mediated cyclization, followed by the introduction of the dimethylcarboxamido group in six steps to give 6. Although the investigators did not succeed in synthesizing rocaglamide (1), this approach was the first to utilize pinacolic coupling to generate the cyclopenta[b]benzofuran skeleton. Intramolecular pinacolic coupling later became a routine methodology used by other groups in the synthesis of rocaglamide and rocaglate derivatives [17–20]. 2,3-Di-epi-rocaglamide (6) is an interesting compound that can be used in the SAR study of rocaglamide derivatives.</p>\n\n2.2. Synthesis of Racemic (±)-Rocaglamide\n<p>Davey and Taylor [21] were the first research group to utilize benzofuranone 2 as the precursor in the synthesis of cyclopenta[b]benzofuran skeleton. Treatment of benzofuranone 2 with NaH followed by iododithiane 7 gave the C-alkylated product 8, which, through a direct 1,3-dithiane lithiation and an intramolecular carbonyl addition gave the cyclized product 9 as shown in Scheme 3. In order to complete the synthesis of rocaglamide (1), all that remained were dithiane hydrolysis, introduction of the C-2 dimethylcarboxamide group, and carbonyl reduction. 
However, as reported in a follow-up publication, hydrolysis of dithiane 9 failed to give rise to the required β-phenyl isomer, which has the right stereochemistry for rocaglamide type compounds [18]. Under different reaction conditions, only the α-phenyl isomer was obtained in very low yield, and attempts to invert the stereochemistry also failed [18].</p><p>Taylor et al. [17, 18] then followed an alternative synthetic strategy utilizing the intramolecular keto-aldehyde pinacolic coupling as outlined in Scheme 4. Michael addition of the benzofuranone 2 to cinnamaldehyde (11), followed by SmI2-mediated intramolecular pinacolic coupling of ketoaldehyde 12, gave diols 13a and 13b, which could be separated by chromatography. Swern oxidation of diol 13b yielded ketone 14, which was converted into β-keto ester 15 using the CS2-based procedure as utilized by Kraus and Sy [16]. The keto ester 15 was then converted into ketoamide 16, followed by a stereoselective reduction with Me4NBH(OAc)3 to give (±)-rocaglamide (1).</p><p>In 2001, Dobler et al. modified Taylor's method [18, 20] to give a higher overall yield of rocaglamide (40%) in a fewer number of steps, as outlined in Scheme 5 [22]. Following Taylor's scheme, aldehyde was synthesized in 57% yield [18, 20]. Dobler et al. then proceeded with an umpolung sequence, where aldehyde 18b was subjected to treatment with TMSCN to give the cyanohydrin 19 in a quantitative yield, followed by a deprotection to give ketone 14 [22]. Compared to Taylor's scheme, Dobler's method is compatible with substituents sensitive to reduction. For the introduction of the dimethylcarboxamide group, Dobler et al. utilized Styles reagent to convert the ketone 14 directly to ketomide 16 [22]. In the final reduction step, both the groups of Taylor and Dobler used Me4NBH(OAc)3 to give an 81% yield (Taylor) [18] and a 95% yield (Dobler) [22] of racemic (±)-rocaglamide (1) (the yield is calculated from ketomide 16).</p><p>In 2008, Qin's group synthesized (±)-rocaglamide and its 2,3-di-epi analogue by introducing the strategy of intramolecular reductive coupling to construct the cyclopenta[b]benzofuran skeleton [23], the shortest and most efficient synthetic method hitherto was now established to rocaglamide 1 and its 2,3-di-epi analogue in racemic form, as outlined in Scheme 6. Michael addition of the benzofuranone (2) to methyl α-formylcinnaminate (20), followed by SmI2-mediated intramolecular keto-ester coupling, gave β-keto ester 15, amination of the ester intermediate, and reduction of carbonyl with Me4NBH(OAc)3 to give (±)-rocaglamide (1).</p><p>In 2009, Frontier group reported the total synthesis of aglafolin, rocagloic acid, and rocaglamide using Nazarov cyclization initiated by peracid oxidation (Scheme 7) [24]. Alkylation of benzofuranone (2) using vinyl magnesium bromide was followed by osmylation and periodate cleavage of the resulting 3-vinyl benzofuran to give aldehyde 22. Alkylation with phenylacetylene and protection of the resultant propargyl alcohol with ethyl iodide and p-methoxybenzyl chloride gave propargyl ethers 23a and 23b, respectively. Deprotonation at the propargylic position of 23 with tertbutyllithium gave rise to an allenyl anion, which was trapped with tri-n-butyltin chloride to give stannyl alkoxyallene 24 [25]. Treatment of 24 with excess m-CPBA gave 25, and treatment of 25b with excess DDQ gave diosphenol 26 in excellent yield. 
Enol 26 was converted to triflate and then subjected to palladium-mediated carbonylation to install the final C-C linkage and produced 27. Hydrogenation of 27 over PtO2 gave 15 as a single diastereomer. Templated reduction of the ketone afforded the natural product aglafolin and saponification followed by amide formation furnished (±)-rocaglamide (1).</p>\n\n3. Biomimetic Cycloaddition Approaches\n<p>Trost et al. were successful in the enantioselective synthesis of (−)-rocaglamide (1), by utilizing a novel DDQ- (2,3-dichloro-5,6-dicyano-1,4-benzoquinone) mediated oxidative cyclization to generate the dihydrobenzofuran ring [26]. Some important steps for the formation of key intermediates are shown in Scheme 8. Trost et al. employed a Pd-catalyzed asymmetric [3+2] cycloaddition of TMM [(trimethylsilyl) methyl] precursor (30) and oxazepinedione (31) to give cyclopentanone (32), followed by condensation with dimethylphloroglucinol to give the adduct (33). A DDQ-mediated oxidative cyclization gave the dihydrobenzofuran ring (34), and the adjustment of the stereochemistry proceeded through the enone 35. Amidation and desylation of (35) gave (36) and 37, and reduction of 37 yielded (−)-rocaglamide (1), which was identical to the natural product as shown by its chromatographic, spectroscopic, and physical properties. This synthetic method gave the enantiomerically pure rocaglamide and consisted of 17 steps with <6% overall yield [26].</p><p>In 2004, Gerard group introduced a biomimetic approach to the rocaglates employing photogeneration of oxidopyryliums derived from 3-hydroxyflavones (Scheme 9) [27]. [3+2] cycloaddition of photoirradiation (uranium filter) of kaempferol derivative 3-hydroxyflavone 38 and methyl cinnamate 39 (MeOH, °C) afforded the aglain 41, as well as benzo[b]cyclobutapyran-8-one 42 (33% and 17%, resp.) after purification on SiO2. Basic conditions (NaOMe, MeOH) were used to perform α-ketol rearrangement of both 41 and 42 which afforded a mixture of endo and exo cycloadducts 15 in which the endo isomer was obtained as a mixture of keto-enol tautomers. Reduction of 15 afforded (±)-methyl rocaglate 43 (51%) and the corresponding exo stereoisomer 44 (27%).</p><p>Subsequently, Gerard group completed the asymmetric synthesis of the rocaglamides by enantioselective photocycloaddition mediated by chiral Brønsted acids [28] (Scheme 10). The approach involves enantioselective [3+2] photocyclo-addition promoted by chiral Brønsted acids (TADDOLs) to afford an aglain precursor followed by a ketol shift/reduction sequence to the rocaglate core, and the highest enantioselectivity of (−)-rocaglamide was 89% ee.</p>\n\n4. Other Synthetic Approaches\n<p>Bruce et al. synthesized the analogue of (±)-rocaglamide (1) by ten steps reactions from cyclopentanone, as shown in Scheme 11 [29]. A key feature of this route is a highly efficient intramolecular condensation reaction which cleanly leads to the tricyclic skeleton. In 2008, Giese and Moser [30] carried out stereoselective synthesis of the rocaglamide skeleton via a silyl vinylketene formation [4+1] annulation sequence (Scheme 12), and this novel approach affords the ABC ring system where the adjacent phenyl and aryl substituents of the C ring have the required cis relationship.</p><p>To summarize, in past years of effort, the synthetic methods of the rocaglamide have been developed rapidly, but valuable approaches are still few. 
At present, only intramolecular cyclization and biomimetic cycloaddition are effective, practically applied approaches for the synthesis of rocaglamide. It is essential to perfect the asymmetric Michael cycloaddition by exploiting molecular rigidity in the intramolecular cyclization approach, and to improve the overall synthetic yield as well as the regio- and stereoselectivity of the cycloaddition reaction in the biomimetic cycloaddition approach. In the future, we believe more novel and effective approaches for the synthesis of rocaglamide will be developed.</p>",
"role": "user"
},
{
"content": "The first cyclopenta[b]benzofuran derivative, rocaglamide, from Aglaia elliptifolia, was found to exhibit considerable insecticidal activities and excellent potential as a therapeutic agent candidate in cancer chemotherapy; the genus Aglaia has been subjected to further investigation. Both the structural complexity of rocaglamide and its significant activity make it an attractive synthetic target. Stereoselective synthesis of the dense substitution pattern of these targets is a formidable synthetic challenge: the molecules bear five contiguous stereocenters and cis aryl groups on adjacent carbons. In past years of effort, only a handful of completed total syntheses have been reported, evidence of the difficulties associated with the synthesis of rocaglate natural products. The advance on total synthesis of rocaglamide was mainly reviewed from intramolecular cyclization and biomimetic cycloaddition approach.",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_arxiv | science.scientific_papers_summarization_single_doc_arxiv.102297 | [
{
"content": "You will be presented with the full text of a scientific research paper. Write an abstract summarizing the key points. Your response should contain only the abstract without any other text.\n\nArticle text:\nthe high energy resolution and signal - to - noise provided by current x - ray telescopes has led to reports of narrow absorption lines in the spectra of black hole systems .\nthese features , assumed to arise from iron ( @xmath2 ) can be highly redshifted ( e.g. turner 2002 , 2004 ; dadina 2005 ; nandra 1999 , 2007 ; longinotti 2007 ; pounds 2003a , 2005 ) or blueshifted ( e.g. nandra 2007 ; dadina 2005 ; pounds 2003b ; reeves 2009 ) .\nmany of these features are transient and their actual existence have been called into question on statistical grounds ( e.g. vaughan & uttley 2008 ) .\nothers are persistent in that they have been identified in subsequent observations or with other instruments .\nthese features are normally unresolved with current ccds , constraining their widths to be less than a few hundred electon volts .\nthe most common explanation is that these features originate in high - velocity outflows and inflows ( see cappi 2006 for a review ) in some cases reaching speeds in excess of @xmath3 ( e.g. reeves 2009 ; tombesi 2010 ) .\nan alternative explanation for the redshifted features are resonant absorption lines due to highly ionised iron that arise naturally in the accretion disc ( ruszkowski & fabian 2000 ) .\na hot and diffuse plasma located above the accretion disc will imprint resonant absorption lines on the reflection spectrum as it emerges from the disc .\nthe medium is rotating with the accretion disc thus experiences the same dynamical effects as material in the disc . here\nwe show that both redshifted and blueshifted features can occur and potentially shift features to apparently high velocities without the need to invoke high - velocity outflows .\nthis is a proof of concept work where we examine the possibility that resonant absorption by highly ionised iron could account for the narrow features over a wide range of energies by making use of the velocities already present in the disc .\ndetailed computational modelling is left to future work .\nit is certainly plausible that narrow absorption features close to @xmath4 are due to highly ionised outflows .\nhowever , we consider that alternative explanations for the wide range of velocities reported need to be investigated . in the following section we present the general picture and motivation for the study . in section 3\nwe describe the potential features that would appear depending on the geometry of the system .\nwe discuss our results in section 4 .\nwe adopt the standard accretion disc picture that has developed over recent years .\nthe optically thick and geometrically thin accretion disc is illuminated by a primary power law emitter that is located on the spin axis of the black hole ( fig .\n[ fig : res ] left panel ) .\nwe note that the location of the primary emitter is arbitrary and does not need to be on the spin axis of the black hole , however from recent x - ray studies ( e.g. wilkins & fabian 2011 ) and gravitational microlensing observations of agns ( e.g. chartas 2009 ) we know the primary emitter must be compact and centrally concentrated . located above the accretion disc is an optically thin , highly ionised plasma that is corotating with the disc .\nthe reflection spectrum ( i.e. 
reflection continuum and fluorescent lines , ross & fabian 2005 ) emitted from the disc will cross through the plasma and may be subject to resonant absorption as it emerges from the disc .\nsince the absorbing material is corotating with the disc the resonant absorption features are subject to the same kinematic and gravitational effects influencing the reflection spectrum .\nvarious lines - of - sight could influence the significance of the resonant features as the optical depth through the absorbing medium changes ( fig .\n[ fig : res ] left panel ) .\nthis will alter the depth of the feature as is shown in fig .\n[ fig : res ] ( right panel ) . as a simple test to examine if such a model could reproduce narrow absorption features in the appropriate energy band we considered the observed energies of the red and blue peaks of an intrinsically narrow absorption profile ( gaussian ) at the rest energy of h - like iron ( ly@xmath5 , @xmath6 ) .\nthe line is broadened by doppler and relativistic effects using the kerrconv model in xspec ( brenneman & reynolds 2006 ) for a black hole spin parameter of @xmath7 ( fig .\n[ fig : lines ] ) . depending on the inclination of the disc , features attributed to narrow absorption could appear anywhere from @xmath8 to over @xmath9 .\nthe energy range could expand to even lower energies if one considers absorption by ( @xmath10 ) or higher energies if one considers ly@xmath11 ( @xmath12 ) .\nin this section we consider various geometries of the model in sect .\n[ sect : picture ] and examine the appearance of the absorption imprinted on the reflection spectrum .\nthe direct component of the primary emitter viewed by the observer is not absorbed by the plasma .\nwe assume the hot atmosphere is highly ionised and consider resonant absorption only from hydrogen - like ( ly@xmath5 ) and helium - like ( ) iron .\nthe scattering is treated as simple line absorption , that is once the photon is scattered out of the line of sight it is neglected . when considering two lines , for simplicity we treat both features as equally strong .\nwe adopt a modest agn spectrum for our trials .\nthe power law photon index is @xmath13 .\nthe disc is ionised with @xmath14 and solar iron abundances ( morrison & mccammon 1983 ) .\nthe inner and outer disc edges are set at @xmath15 ( @xmath16 gmc@xmath17 ) and @xmath18 , respectively , and the emissivity index for the disc is @xmath19 . for each case\nconsidered , we simulate the appearance the spectrum could have when viewed during a @xmath20 exposure with the pn detector .\nthe @xmath0 flux is @xmath21 .\nthe simulated spectra and models are shown in the agn rest - frame .\nthe first case examined builds on the situation in fig .\n[ fig : res ] .\nwe consider an accretion disc that is blanketed by the ionised plasma .\nthe plasma has a line optical depth of @xmath22 at all radii and is irradiated with the same emissivity profile as the disc .\n[ fig : blanket ] depicts the spectrum for a plasma made up of h - like iron ( left panel ) and both h- and he - like iron ( right panel ) resonant absorption at different line - of - sight inclinations . as seen in the left panel of fig .\n[ fig : blanket ] such features can be shifted and appear significantly broad and deep as they blend with the iron absorption edge at 7.1 kev . in the case of absorption\nviewed at 60 degrees inclination , the feature can be redshifted well over @xmath23 in the rest frame . 
in fig .\n[ fig : simblanket ] we demonstrate how such a feature may appear in an pn observation after one has presumable modelled the reflection spectrum correctly .\nif absorption were attributed to and , the situation would resemble the right panel of fig .\n[ fig : blanket ] .\nthe lines would be blended due to relativistic broadening of each and would not be distinguished as two features . in this and section [\nsect : xvar ] we still adopt the lamppost model described above , but consider the situation in which the absorption is enhanced in a certain region above the disc effectively forming an annulus of absorbing material .\nthis could originate from a hot spot above the disc illuminating a certain region , or anisotropies in disc leading to increased concentration of hot plasma , or enhanced structure in the disc that amplifies the reflection continuum . in the simulations shown in fig .\n[ fig : ringi ] , the absorption is localised between @xmath24 from the black hole and viewed at different inclinations . depending on the inclination , significant absorption lines can be seen between @xmath25 in the rest - frame .\ndetectable features are predicted in the pn simulation ( fig .\n[ fig : simringi ] ) .\nthe final situation examined corresponds to an absorbing annulus at various distances from the black hole . at distances less than @xmath26\nthe absorption profile is rather blurred ( see also section [ sect : ivar ] ) , but as the distance increases and general relativistic effects are less dominant , doppler effects prevail and the line profiles begins to take on the inverted disc line appearance . in fig .\n[ fig : ringd ] , we examine a disc inclined 30 degrees with absorption coming from different distances .\nthe double - peaked profile becomes apparent at distances beyond @xmath27 .\nthe blue edge of the profile is driven by the inclination of the disc and is shifted to higher energies as the disc inclination increases ( see fig .\n[ fig : lines ] ) . the simulation in fig .\n[ fig : simringd ] depicts absorption from a ring between @xmath28 .\nthe single feature could easily mimic multiple absorption features .\nwe examine the possibility that narrow absorption features regularly observed in the spectra of agn and attributed to fast , ionised inflows and outflows , could arise naturally from resonant absorption of the reflection spectrum making use of velocities already present in the disc .\nwe consider various geometries ( e.g. rings and blankets ) for the hot plasma that is corotating with the accretion disc and subject to the same kinematic and relativistic effects as the reflection spectrum . in all cases ,\nwe demonstrate that absorption features could be easily detected between @xmath29 in typical observations .\nwe note that the observed energy range could be significantly expanded by adding the contribution of other transitions ( e.g. ly@xmath11 at @xmath12 ) .\nnotable from the simulations in section 3 is that the absorption features are not genuinely narrow , but have significant width .\nhowever , the ccd resolution provide by current instruments limits our ability to distinguish features narrower than a few hundred ev . 
calorimeter observations with _ astro - h _ and _ athena _ will likely be capable of resolving such features and thereby distinguishing models .\nthe work here is highly simplified and we only consider resonant absorption by and .\nmore complicated situations , in which elemental abundances are non - solar ( or variable ) and/or there exists greater diversity in composition of the plasma are easy to envision . line of sight effects are also treated simply here .\nif there is a composition gradient , in the atmosphere then long sight lines through the hot plasma ( e.g. low inclinations ) can produces very complicated spectra with many features ( e.g. the spectrum of the x - ray binary gro j165540 , miller 2008 ) .\nlong sight lines will also mean that different species experience different blurring effects and a specific species will be associated with a specific distance from the black hole .\nthe purpose of this work was to demonstrate a concept leaving more complex situations for further study .\nlcg thanks colleagues at ioa , cambridge for their hospitality .\nacf thanks the royal society for support .\nwe thank the referee for providing comments that helped to clarify the paper .",
"role": "user"
},
{
"content": "narrow absorption lines seen in the @xmath0 spectra of active galaxies and galactic black holes are normally attributed to iron in high velocity outflows or inflows . \n we consider the possibility that such features could arise naturally in the accretion disc . \n resonant absorption by highly ionised iron ( e.g. and ) in an optically - thin plasma that is located above the disc and rotating with it could reproduce narrow features in the reflection component of the spectrum as it emerges from the disc . depending on the inclination of the disc and the exact geometry of the hot plasma ( e.g. does it blanket the disc or a ring ) apparently narrow absorption features \n could be detected between @xmath1 . \n such an explanation requires no high velocity outflow / inflow and is consistent with a reflection - based interpretation for accreting black holes systems . \n [ firstpage ] accretion , accretion discs black hole physics relativistic processes \n line : formation , identification galaxies : active x - ray : galaxies",
"role": "assistant"
}
] |
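A rough numerical check on the energy range claimed in the record above: a Fe XXVI Ly-alpha line at 6.97 keV, emitted from a ring corotating with the disc, is shifted by orbital Doppler motion and gravitational redshift. The sketch below uses a crude Schwarzschild, flat-space-Doppler, no-light-bending approximation (far simpler than the kerrconv model the paper actually uses), so the numbers are indicative only:

```python
import math

E_REST = 6.97  # keV, Fe XXVI Ly-alpha rest-frame energy

def ring_peak_energies(r_over_rg: float, incl_deg: float) -> tuple:
    """Red/blue peak energies (keV) for a narrow line from a ring at radius r
    (in gravitational radii r_g = GM/c^2). Valid only for r > 3 r_g, where
    circular orbits have a real time-dilation factor in Schwarzschild geometry."""
    beta = math.sqrt(1.0 / r_over_rg)           # Keplerian orbital speed / c
    g_time = math.sqrt(1.0 - 3.0 / r_over_rg)   # gravitational + transverse Doppler
    sin_i = math.sin(math.radians(incl_deg))
    red = E_REST * g_time / (1.0 + beta * sin_i)   # receding side of the ring
    blue = E_REST * g_time / (1.0 - beta * sin_i)  # approaching side
    return red, blue

for r in (6.0, 20.0, 100.0):
    lo, hi = ring_peak_energies(r, incl_deg=60.0)
    print(f"r = {r:5.1f} r_g: line spans ~{lo:.2f}-{hi:.2f} keV")
# At r = 6 r_g this gives roughly 3.6-7.6 keV, broadly consistent with the
# several-keV shifts the record attributes to disc kinematics alone.
```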
science.scitldr_aic | science.scitldr_aic.1546 | [
{
"content": "Summarize the paper:\n\nPaper text:\n\nContinuous Bag of Words (CBOW) is a powerful text embedding method.\nDue to its strong capabilities to encode word content, CBOW embeddings perform well on a wide range of downstream tasks while being efficient to compute.\nHowever, CBOW is not capable of capturing the word order.\nThe reason is that the computation of CBOW's word embeddings is commutative, i.e., embeddings of XYZ and ZYX are the same.\nIn order to address this shortcoming, we propose a\n\nlearning algorithm for the Continuous Matrix Space Model, which we call Continual Multiplication of Words (CMOW).\nOur algorithm is an adaptation of word2vec, so that it can be trained on large quantities of unlabeled text.\nWe empirically show that CMOW better captures linguistic properties, but it is inferior to CBOW in memorizing word content.\nMotivated by these findings, we propose a hybrid model that combines the strengths of CBOW and CMOW.\nOur results show that the hybrid CBOW-CMOW-model retains CBOW's strong ability to memorize word content while at the same time substantially improving its ability to encode other linguistic information by 8%.\nAs a result, the hybrid also performs better on 8 out of 11 supervised downstream tasks with an average improvement of 1.2%.\nWord embeddings are perceived as one of the most impactful contributions from unsupervised representation learning to natural language processing from the past few years BID10 .\nWord embeddings are learned once on a large-scale stream of words.\nA key benefit is that these pre-computed vectors can be re-used almost universally in many different downstream applications.\nRecently, there has been increasing interest in learning universal sentence embeddings.\nBID24 have shown that the best encoding architectures are based on recurrent neural networks (RNNs) BID5 BID25 or the Transformer architecture BID2 .\nThese techniques are, however, substantially more expensive to train and apply than word embeddings BID14 BID2 .\nTheir usefulness is therefore limited when fast processing of large volumes of data is critical.More efficient encoding techniques are typically based on aggregated word embeddings such as Continuous Bag of Words (CBOW), which is a mere summation of the word vectors BID19 .\nDespite CBOW's simplicity, it attains strong results on many downstream tasks.\nUsing sophisticated weighting schemes, the performance of aggregated word embeddings can be further increased BID0 , coming even close to strong LSTM baselines BID26 BID13 such as InferSent BID5 .\nThis raises the question how much benefit recurrent encoders actually provide over simple word embedding based methods BID32 .\nIn their analysis, BID13 suggest that the main difference may be the ability to encode word order.\nIn this paper, we propose an intuitive method to enhance aggregated word embeddings by word order awareness.The major drawback of these CBOW-like approaches is that they are solely based on addition.\nHowever, addition is not all you need.\nSince it is a commutative operation, the aforementioned methods are not able to capture any notion of word order.\nHowever, word order information is crucial for some tasks, e.g., sentiment analysis BID13 .\nFor instance, the following two sentences yield the exact same embedding in an addition-based word embedding aggregation technique: \"The movie was not awful, it was rather great.\" and \"The movie was not great, it was rather awful.\"\nA classifier based on the CBOW embedding of these 
sentences would inevitably fail to distinguish the two different meanings (Goldberg, 2017, p. 151). To alleviate this drawback, BID28 propose to model each word as a matrix rather than a vector, and to compose multiple word embeddings via matrix multiplication rather than addition. This so-called Compositional Matrix Space Model (CMSM) of language has powerful theoretical properties that subsume properties from vector-based models and symbolic approaches. The most obvious advantage is the non-commutativity of matrix multiplication as opposed to addition, which results in order-aware encodings. In contrast to vector-based word embeddings, there is so far no solution to effectively train the parameters of word matrices on large-scale unlabeled data. Training schemes from previous work were specifically designed for sentiment analysis BID34 BID1. Those require complex, multi-stage initialization, which indicates the difficulty of training CMSMs. We show that CMSMs can be trained in a similar way to the well-known CBOW model of word2vec BID19. We make two simple yet critical changes to the initialization strategy and training objective of CBOW. Hence, we present the first unsupervised training scheme for CMSMs, which we call Continual Multiplication Of Words (CMOW). We evaluate our model's capability to capture linguistic properties in the encoded text. We find that CMOW and CBOW have properties that are complementary. On the one hand, CBOW yields much stronger results at the word content memorization task. CMOW, on the other hand, offers an advantage in all other linguistic probing tasks, often by a wide margin. Thus, we propose a hybrid model to jointly learn the word vectors of CBOW and the word matrices for CMOW. Our experimental results confirm the effectiveness of our hybrid CBOW-CMOW approach. At comparable embedding size, CBOW-CMOW retains CBOW's ability to memorize word content while at the same time improving the performance on the linguistic probing tasks by 8%. CBOW-CMOW outperforms CBOW at 8 out of 11 supervised downstream tasks, scoring only 0.6% lower on the tasks where CBOW is slightly better. On average, the hybrid model improves the performance over CBOW by 1.2% on supervised downstream tasks, and by 0.5% on the unsupervised tasks. In summary, our contributions are: (1) For the first time, we present an unsupervised, efficient training scheme for the Compositional Matrix Space Model. Key elements of our scheme are an initialization strategy and a training objective that are specifically designed for training CMSMs. (2) We quantitatively demonstrate that the strengths of the resulting embedding model are complementary to classical CBOW embeddings. (3) We successfully combine both approaches into a hybrid model that is superior to its individual parts. After giving a brief overview of the related work, we formally introduce CBOW, CMOW, and the hybrid model in Section 3. We describe our experimental setup and present the results in Section 4. The results are discussed in Section 5, before we conclude.

Our CMOW model produces sentence embeddings that are approximately at the level of fastSent BID14. Thus, CMOW is a reasonable choice as a sentence encoder. Essential to the success of our training scheme for the CMOW model are two changes to the original word2vec training. First, our initialization strategy improved the downstream performance by 2.8% compared to Glorot initialization. Secondly, by choosing the target word of the objective at random, the performance of CMOW on downstream tasks improved by 20.8% on average. Hence, our novel training scheme is the first that provides an effective way to obtain parameters for the Compositional Matrix Space Model of language from unlabeled, large-scale datasets. Regarding the probing tasks, we observe that CMOW embeddings better encode the linguistic properties of sentences than CBOW. CMOW gets reasonably close to CBOW on some downstream tasks. However, CMOW does not in general supersede CBOW embeddings. This can be explained by the fact that CBOW is stronger at word content memorization, which is known to highly correlate with the performance on most downstream tasks. Yet, CMOW has an increased performance on the TREC question type classification task (88.0 compared to 85.6). The rationale is that this particular TREC task belongs to a class of downstream tasks that require capturing other linguistic properties apart from Word Content. Due to joint training, our hybrid model learns to pick up the best features from CBOW and CMOW simultaneously. It enables both models to focus on their respective strengths. This can best be seen by observing that H-CMOW almost completely loses its ability to memorize word content. In return, H-CMOW has more capacity to learn other properties, as seen in the increase in performance at BShift and others. A complementary behavior can be observed for H-CBOW, whose scores on Word Content are increased. Consequently, with an 8% improvement on average, the hybrid model is substantially more linguistically informed than CBOW. This transfers to an overall performance improvement of 1.2% on average over 11 supervised downstream tasks, with large improvements on sentiment analysis tasks (SST2, SST5), question classification (TREC), and the sentence representation benchmark (STS-B). The improvements on these tasks are expected because they arguably depend on word order information. On the other tasks, the differences are small. Again, this can be explained by the fact that most tasks in the SentEval framework mainly depend on word content memorization, where the hybrid model does not improve upon CBOW. Please note that the models in our study do not represent the state of the art for sentence embeddings. BID24 show that better scores are achieved by LSTMs and Transformer models, but also by averaging word embeddings from fastText BID21. These embeddings were trained on the CBOW objective, and are thus very similar to our models. However, they are trained on large corpora (600B tokens vs 3B in our study), use large vocabularies (2M vs 30k in our study), and incorporate numerous tricks to further enhance the quality of their models: word subsampling, subword information, phrase representation, n-gram representations, position-dependent weighting, and corpus de-duplication. In the present study, we focus on comparing CBOW, CMOW, and the hybrid model in a scenario where we have full control over the independent variables. To single out the effect of the independent variables better, we keep our models relatively simple. Our analysis yields interesting insights into what our models learn when trained separately or jointly, which we consider more valuable in the long term for the research field of text representation learning.

We offer an efficient order-aware extension to embedding algorithms from the bag-of-words family. Our 784-dimensional CMOW embeddings can be computed at the same rate as CBOW embeddings. We empirically measured 71k sentences per second for CMOW vs. 61k for CBOW in our experiments. This is because of the fast implementation of matrix multiplication on GPUs. It allows us to encode sentences approximately 5 times faster than using a simple Elman RNN of the same size (12k per second). Our matrix embedding approach also offers valuable theoretical advantages over RNNs and other autoregressive models. Matrix multiplication is associative, such that only log2(n) sequential steps are necessary to encode a sequence of size n. Besides parallelization, dynamic programming techniques can also be employed to further reduce the number of matrix multiplication steps, e.g., by pre-computing frequent bigrams. We therefore expect our matrix embedding approach to be specifically well suited for large-scale, time-sensitive text encoding applications. Our hybrid model serves as a blueprint for using CMOW in conjunction with other existing embedding techniques such as fastText BID21.

We have presented the first efficient, unsupervised learning scheme for the word-order-aware Compositional Matrix Space Model. We showed that the resulting sentence embeddings capture linguistic features that are complementary to CBOW embeddings. We thereupon presented a hybrid model with CBOW that is able to combine the complementary strengths of both models to yield an improved downstream task performance, in particular on tasks that depend on word order information. Thus, our model narrows the gap in terms of representational power between simple word-embedding-based sentence encoders and highly non-linear recurrent sentence encoders. We made the code for this paper available at https://github.com/florianmai/word2mat.",
"role": "user"
},
{
"content": "We present a novel training scheme for efficiently obtaining order-aware sentence representations.",
"role": "assistant"
}
] |
science.mslr2022_cochrane_multidoc_summarization | science.mslr2022_cochrane_multidoc_summarization.1306 | [
{
"content": "Based on the RCT reports below, write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article.\n\n\n\nTitle 1:\nPostural tremor of Parkinson's disease.\nAbstract 1:\nPrevious studies have reported the resting tremor (RT) of Parkinson's disease to occur at frequencies between 3-7 Hz and to be characterised by an alternating pattern of electromyographic (EMG) bursting activity between opposing muscles. A postural tremor (PT), of higher frequency (> 6 Hz) and with a synchronous pattern of EMG activity, has also been previously described in Parkinson's disease. We investigated the electrophysiological and pharmacological properties of both the RT and PT of 11 patients with Parkinson's disease and 10 patients with essential tremor in a double-blind, placebo-controlled study of L-Dopa/benserazide and propranolol. Tremor amplitude and frequency were assessed via bidirectional accelerometry, and the pattern of activation of the antagonist muscles of the forearm was determined with use of surface EMG. In the Parkinson's disease group studied, the frequency, EMG pattern of bursts, and response to L-Dopa were similar for the two tremors (median improvement of RT by 70% and PT by 61%). Despite some overlap between the Parkinson's disease and essential tremor groups in the electrophysiology of the tremor, there was no such dramatic pharmacological response in the latter group. These results suggest that the RT and PT of Parkinson's disease share a common pathophysiology and are distinct from essential tremor.\n\n\nTitle 2:\nLetter: Propranolol in Parkinson's disease.\nAbstract 2:\nnan",
"role": "user"
},
{
"content": "In view of this lack of evidence, it is impossible to determine whether beta-blocker therapy is effective and safe for the treatment of tremor in Parkinson's disease. The high frequency of bradycardia in one trial raises some concerns about the prescription of beta-blockers to normotensive elderly patients but the study was too small for the true degree of risk to be calculated.",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.19036 | [
{
"content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nDeep neural networks are facing a potential security threat from adversarial examples, inputs that look normal but cause an incorrect classification by the deep neural network. For example, the proposed threat could result in hand-written digits on a scanned check being incorrectly classified but looking normal when humans see them. This research assesses the extent to which adversarial examples pose a security threat, when one considers the normal image acquisition process. This process is mimicked by simulating the transformations that normally occur in acquiring the image in a real world application, such as using a scanner to acquire digits for a check amount or using a camera in an autonomous car. These small transformations negate the effect of the carefully crafted perturbations of adversarial examples, resulting in a correct classification by the deep neural network. Thus just acquiring the image decreases the potential impact of the proposed security threat. We also show that the already widely used process of averaging over multiple crops neutralizes most adversarial examples. Normal preprocessing, such as text binarization, almost completely neutralizes adversarial examples. This is the first paper to show that for text driven classification, adversarial examples are an academic curiosity, not a security threat.\n\nCited Abstract(s):\n[1]: Deep learning algorithms have been shown to perform extremely well on many classical machine learning problems. However, recent studies have shown that deep learning, like other machine learning techniques, is vulnerable to adversarial samples: inputs crafted to force a deep neural network (DNN) to provide adversary-selected outputs. Such attacks can seriously undermine the security of the system supported by the DNN, sometimes with devastating consequences. For example, autonomous vehicles can be crashed, illicit or illegal content can bypass content filters, or biometric authentication systems can be manipulated to allow improper access. In this work, we introduce a defensive mechanism called defensive distillation to reduce the effectiveness of adversarial samples on DNNs. We analytically investigate the generalizability and robustness properties granted by the use of defensive distillation when training DNNs. We also empirically study the effectiveness of our defense mechanisms on two DNNs placed in adversarial settings. The study shows that defensive distillation can reduce effectiveness of sample creation from 95 to less than 0.5 on a studied DNN. Such dramatic gains can be explained by the fact that distillation leads gradients used in adversarial sample creation to be reduced by a factor of 10^30. We also find that distillation increases the average minimum number of features that need to be modified to create adversarial samples by about 800 on one of the DNNs we tested.\n\n[2]: We show that adversarial examples, i.e., the visually imperceptible perturbations that result in Convolutional Neural Networks (CNNs) fail, can be alleviated with a mechanism based on foveations---applying the CNN in different image regions. 
To see this, first, we report results in ImageNet that lead to a revision of the hypothesis that adversarial perturbations are a consequence of CNNs acting as a linear classifier: CNNs act locally linearly to changes in the image regions with objects recognized by the CNN, and in other regions the CNN may act non-linearly. Then, we corroborate that when the neural responses are linear, applying the foveation mechanism to the adversarial example tends to significantly reduce the effect of the perturbation. This is because, hypothetically, the CNNs for ImageNet are robust to changes of scale and translation of the object produced by the foveation, but this property does not generalize to transformations of the perturbation. As a result, the accuracy after a foveation is almost the same as the accuracy of the CNN without the adversarial perturbation, even if the adversarial perturbation is calculated taking into account a foveation.\n\n[3]: We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.",
"role": "user"
},
{
"content": "A different technique for increasing a DCNN's ability to handle and correctly classify adversarial examples was put forward by [2] . This technique uses a transformation of the image that selects a region in which the convolutional neural network (CNN) is applied, denoted a foveation, discarding the information from the other regions'' as the input to the DCNN. This technique enabled the DCNN to correctly classify over 70 The previous state of the art [3] takes crops of the original input and uses the average prediction over the classified crops which mimics natural perturbations. Crops like the ones used in GoogLeNet predate the discovery of adversarial examples and are used to improve accuracy of DCNNs. The improvement of accuracy in the DCNN also also helped to mitigate classification errors caused by adversarial examples. By taking crops of images, the accuracy of the DCNN when classifying adversarial examples is greatly increased. The networks tested by [1] did not use this previous state-of-the-art. This research aims to assess the level of threat imposed by adversarial examples to DCNNs as suggested by [1] .",
"role": "assistant"
}
] |
science.mslr2022_ms2_multidoc_summarization | science.mslr2022_ms2_multidoc_summarization.11154 | [
{
"content": "Based on the RCT reports below, write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article.\n\n\n\nTitle 1:\nSubjective effects and tolerability of the South American psychoactive beverage Ayahuasca in healthy volunteers\nAbstract 1:\nAbstract . Rationale : Ayahuasca is a South American psychoactive beverage that contains the naturally occurring psychedelic agent N , N-dimethyltryptamine ( DMT ) . This \" tea \" has been used for centuries in religious and medicinal context s in the rain forest areas of South America and is presently gaining the attention of psychedelic users in North America and Europe . Objectives : In the present study , the psychological effects and tolerability of ayahuasca were assessed . Methods : Three increasing doses of encapsulated freeze-dried ayahuasca ( 0.5 , 0.75 , and 1.0 mg DMT/kg body weight ) were administered to six healthy male volunteers with prior experience in the use of this tea , in a single-blind crossover placebo-controlled clinical trial . Results : Ayahuasca produced significant dose-dependent increases in five of the six subscales of the Hallucinogen Rating Scale , in the LSD , MBG , and A scales of the Addiction Research Center Inventory , and in the \" liking \" , \" good effects \" and \" high \" visual analogue scales . Psychological effects were first noted after 30–60 min , peaked between 60–120 min , and were resolved by 240 min . The tea was well tolerated from a cardiovascular point of view , with a trend toward increase for systolic blood pressure . Modified physical sensations and nausea were the most frequently reported somatic-dysphoric effects . The overall experience was regarded as pleasant and satisfactory by five of the six volunteers , while one volunteer experienced an intensely dysphoric reaction with transient disorientation and anxiety at the medium dose and voluntarily withdrew from the study . Conclusions : Ayahuasca can be described as inducing changes in the perceptual , affective , cognitive , and somatic spheres , with a combination of stimulatory and visual psychoactive effects of longer duration and milder intensity than those previously reported for intravenously administered DMT\n\n\nTitle 2:\nPharmacology of lemongrass (Cymbopogon citratus Stapf). III. Assessment of eventual toxic, hypnotic and anxiolytic effects on humans.\nAbstract 2:\nA herbal tea ( called an abafado in Brazil ) prepared from the dried leaves of lemongrass was administered to healthy volunteers . Following a single dose or 2 weeks of daily oral administration , the abafado produced no changes in serum glucose , urea , creatinine , cholesterol , triglycerides , lipids , total bilirubin , indirect bilirubin , GOT , GPT , alkaline phosphatase , total protein , albumin , LDH and CPK . Urine analysis ( proteins , glucose , ketones , bilirubins , occult blood and urobilinogen ) as well as EEG and EKG showed no abnormalities . There were slight elevations of direct bilirubin and of amylase in some of the volunteers , but without any clinical manifestation . These results taken together indicate that lemongrass as used in Brazilian folk medicine is not toxic for humans . The eventual hypnotic effect of lemongrass was investigated in 50 volunteers who ingested sample s of lemongrass and a placebo under double-blind conditions . The parameters ( i.e. 
sleep induction , sleep quality , dream recall and rewakening ) did not show any effect of lemongrass as compared to the placebo . Eighteen subjects with high scores of trait-anxiety were su bmi tted to an anxiety-inducing test following taking lemongrass or placebo under double-blind conditions . Their anxiety levels were similar , indicating that the abafado of the plant does not have anxiolytic properties . It is concluded that lemongrass , one of the most popular Brazilian herbal medicines , used for its alleged CNS-depressant effects , is atoxic but lacks hypnotic or anxiolytic properties\n\n\nTitle 3:\nPassionflower in the treatment of generalized anxiety: a pilot double-blind randomized controlled trial with oxazepam.\nAbstract 3:\nOBJECTIVE Passionflower ( Passiflora incarnata ) is a folk remedy for anxiety . A double-blind r and omized trial compared the efficacy of Passiflora incarnata extract with oxazepam in the treatment of generalized anxiety disorder . METHODS The study was performed on 36 out- patients diagnosed with GAD using DSM IV criteria . Patients were allocated in a r and om fashion : 18 to the Passiflora extract 45 drops/day plus placebo tablet group , and 18 to oxazepam 30 mg/day plus placebo drops for a 4-week trial . RESULTS Passiflora extract and oxazepam were effective in the treatment of generalized anxiety disorder . No significant difference was observed between the two protocol s at the end of trial . Oxazepam showed a rapid onset of action . On the other h and , significantly more problems relating to impairment of job performance were encountered with subjects on oxazepam . CONCLUSION The results suggest that Passiflora extract is an effective drug for the management of generalized anxiety disorder , and the low incidence of impairment of job performance with Passiflora extract compared to oxazepam is an advantage . A large-scale trial is justified\n\n\nTitle 4:\nEffect of valepotriates (valerian extract) in generalized anxiety disorder: a randomized placebo-controlled pilot study.\nAbstract 4:\nThe aim of the present study was to carry out a controlled pilot study on the putative anxiolytic effect of valepotriates . Thirty-six out patients with generalized anxiety disorder ( DSM III-R ) , after a 2-week wash-out , were r and omized to one of the following three treatments for 4 weeks ( n = 12 per group ) : valepotriates ( mean daily dose : 81.3 mg ) , diazepam ( mean daily dose : 6.5 mg ) , or placebo . A parallel , double-blind , flexible-dose , placebo-controlled design was employed . No significant difference was observed among the three groups at baseline or in the change from baseline on the Hamilton anxiety scale ( HAM-A ) or in the trait part of the state-trait anxiety inventory ( STAI-trait ) . Moreover , the three groups presented a significant reduction in the total HAM-A scores . On the other h and , only the diazepam and valepotriates groups showed a significant reduction in the psychic factor of HAM-A. The diazepam group also presented a significant reduction of the STAI-trait . Although the principal analysis ( HAM-A between group comparison ) found negative results ( probably due to the small sample size in each group ) , the preliminary data obtained in the present study suggest that the valepotriates may have a potential anxiolytic effect on the psychic symptoms of anxiety . However , since the number of subjects per group was very small , the present results must be viewed as preliminary . 
Thus , further studies addressing this issue are warranted\n\n\nTitle 5:\nA double-blind randomised trial to investigate three different concentrations of a standardised fresh plant extract obtained from the shoot tips of Hypericum perforatum L.\nAbstract 5:\nThe aim of this r and omised double-blind multi-centre parallel group comparative study was to investigate the efficacy and tolerability of a new st and ardised fresh-plant extract obtained from the shoot tips of St. John 's wort ( Hypericum perforatum L. ) in the treatment of mild to moderate depression . 348 out- patients ( 259 female , 89 male ) with mild to moderate depression were recruted by 12 psychiatrc specialty practice s and 26 general practice s. The patients took during 6 weeks 3 times a day 1 tablet of a Hypericum preparation st and ardised to either 0.17 mg ( 114 patients ) , 0.33 mg ( 115 patients ) , or 1 mg ( 119 patients ) total hypericin per day ( Hyperiforce ) . The main outcome measure was the Hamilton Psychiatric Rating Scale for Depression ; additional measures were the Hospital Anxiety and Depression Scale and the Clinical Global Impression . At the end of treatment , a reduction in the average Hamilton Depression score from an initial 16 - 17 to 8 - 9 , i.e. a relative reduction of about 50 % , was observed in all groups ( 280 patients , par protocol analysis ) . The response rates were 62 % , 65 % and 68 % , respectively ( 348 patients , intention to treat analysis ) . Overall , the intergroup comparison revealed no significant differences . Tolerability was excellent , with mild adverse reactions probably causally related to the treatment occurring in only 7 of the 348 patients ( 2 % ) . This Hypericum preparation is effective in all three doses and is well tolerated\n\n\nTitle 6:\nAn investigation into the efficacy of Scutellaria lateriflora in healthy volunteers.\nAbstract 6:\nScutellaria lateriflora is an herbal medicine with long-st and ing traditional use as a relaxing nervine . There has been controversy in the literature with regards to its efficacy , and this study was design ed to clarify its effectiveness in reducing anxiety , one of the phytotherapeutic indications . A double blind , placebo-controlled study of healthy subjects demonstrated noteworthy anxiolytic effects . The use of phytomedicines for the treatment of anxiety is review ed , as is the published literature on S. lateriflora and its putative toxicity\n\n\nTitle 7:\nDouble-blind, randomised, placebo-controlled study to evaluate the efficacy and safety of a fixed combination containing two plant extracts (Crataegus oxyacantha and Eschscholtzia californica) and magnesium in mild-to-moderate anxiety disorders\nAbstract 7:\nSUMMARY Objective : To assess the clinical efficacy of a neurotonic component containing fixed quantities of two plant extracts ( Crataegus oxyacantha and Eschscholtzia californica ) and magnesium versus placebo in mild-to-moderate anxiety disorders with associated functional disturbances , under usual general practice prescription conditions . Research design and methods : A total of 264 patients ( 81 % female ; mean age : 44.6 years ) presenting with generalised anxiety ( DSM-III-R ) of mild-to-moderate intensity ( total Hamilton anxiety scale score between 16 and 28 ) were included in a double-blind , r and omised , placebo-controlled trial . Patients were r and omly assigned to two groups : 130 received the study drug ( Sympathyl ) , and 134 a placebo ( two tablets twice daily for 3 months ) . 
Efficacy and safety data were recorded before first administration and 7 , 14 , 30 , 60 and 90 days after start of treatment . Sympathyl is produced and marketed by Laboratoire Innotech International , Arcueil , France . Main outcome measures : Efficacy was assessed by ( a ) change in Hamilton anxiety scale total and somatic scores ; ( b ) change in patient self- assessment ; ( c ) number and percentage of responsive subjects ( reduction of at least 50 % in Hamilton or self- assessment score ) ; and ( d ) the physician 's clinical global impression . Tolerance was assessed by undesirable events spontaneously reported by the patients over the study period . Results : Total and somatic Hamilton scale scores and subjective patient-rated anxiety fell during treatment , indicating clinical improvement . The decrease was greater in the study drug than in the placebo group . End of treatment clinical improvement , as measured by the mean difference between final and pre-treatment scores , was , for the study drug and placebo groups respectively : −10.6 and −8.9 on the total anxiety score ( p = 0.005 ) ; −6.5 and −5.7 on the somatic score ( p = 0.054 ) ; and −38.5 and −29.2 for subjectively assessed anxiety ( p = 0.005 ) . The risk/benefit ratio as judged by the investigating physicians was also significantly greater in the study drug than in the placebo group . In all , 15 patients ( 11.5 % ) in the study drug group and 13 patients ( 9.7 % ) in the placebo group experienced 22 and 15 adverse events , respectively . Undesirable events were mainly mild or moderate digestive or psychopathological disorders . Conclusions : The preparation containing fixed quantities of Crataegus oxyacantha , Eschscholtzia californica , and magnesium proved safe and more effective than placebo in treating mild-tomoderate anxiety disorders\n\n\nTitle 8:\nAcute effects of the Paulinia cupana, \"Guaraná\" on the cognition of normal volunteers.\nAbstract 8:\nThe authors studied the acute effects of \" Guaraná \" , when compared to caffeine and placebo , ( double blind study ) on cognition , anxiety and sleep in 30 normal volunteers . Although results were negative it can not be concluded that \" Guaraná \" is harmless . Other studies shall be undertaken , administering \" Guaraná \" on a long term basis , as popularly procl aim ed\n\n\nTitle 9:\nThe chronic effects of an extract of Bacopa monniera (Brahmi) on cognitive function in healthy human subjects\nAbstract 9:\nAbstract . Rationale : Extracts of Bacopa monniera have been reported to exert cognitive enhancing effects in animals . However , the effects on human cognition are inconclusive . Objective : The current study examined the chronic effects of an extract of B. monniera ( Keenmind ) on cognitive function in healthy human subjects . Methods : The study was a double-blind placebo-controlled independent-group design in which subjects were r and omly allocated to one of two treatment conditions , B. monniera ( 300 mg ) or placebo . Neuropsychological testing was conducted pre-(baseline ) and at 5 and 12 weeks post drug administration . Results : B. monniera significantly improved speed of visual information processing measured by the IT task , learning rate and memory consolidation measured by the AVLT ( P<0.05 ) , and state anxiety ( P<0.001 ) compared to placebo , with maximal effects evident after 12 weeks . Conclusions : These findings suggest that B. 
monniera may improve higher order cognitive processes that are critically dependent on the input of information from our environment such as learning and memory",
"role": "user"
},
{
"content": "It was concluded that there is a lack of rigorous studies in this area and that only kava has been shown beyond reasonable doubt to have anxiolytic effects in humans",
"role": "assistant"
}
] |
science.data_reco_mcq_mc | science.data_reco_mcq_mc.1009 | [
{
"content": "Based on the dataset descriptions below, which datasets would be suitable for the following research question? Your response should be a JSON array.\n\nQuestion: I want to generate a street-view image from an aerial image.\n\nKeywords: cross-view image synthesis images\n\nDataset descriptions:\nImageNet: The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nSVT: The Street View Text (SVT) dataset was harvested from Google Street View. Image text in this data exhibits high variability and often has low resolution.\nDayton: The Dayton dataset is a dataset for ground-to-aerial (or aerial-to-ground) image translation, or cross-view image synthesis. It contains images of road views and aerial views of roads.\nWHU: Created for MVS tasks and is a large-scale multi-view aerial dataset generated from a highly accurate 3D digital surface model produced from thousands of real aerial images with precise camera parameters.\nUAVA: The UAVA,<i>UAV-Assistant</i>, dataset is specifically designed for fostering applications which consider UAVs and humans as cooperative agents. We employ a real-world 3D scanned dataset (<a href=\"https://niessner.github.io/Matterport/\">Matterport3D</a>), physically-based rendering, a gamified simulator for realistic drone navigation trajectory collection, to generate realistic multimodal data both from the user’s exocentric view of the drone, as well as the drone’s egocentric view.\nOkutama-Action: A new video dataset for aerial view concurrent human action detection. It consists of 43 minute-long fully-annotated sequences with 12 action classes.\n\nOptions: A: ImageNet B: SVT C: Dayton D: WHU E: UAVA F: Okutama-Action",
"role": "user"
},
{
"content": "[\"ImageNet\", \"Dayton\"]",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.8454 | [
{
"content": "Summarize the article in an uncomplicated manner.\n\nTitle: Longitudinal transcriptomic characterization of the immune response to acute hepatitis C virus infection in patients with spontaneous viral clearance\nArticle:\nAbstract:\nMost individuals exposed to hepatitis C virus( HCV) become persistently infected while a minority spontaneously eliminate the virus. Although early immune events influence infection outcome, the cellular composition, molecular effectors, and timeframe of the host response active shortly after viral exposure remain incompletely understood. Employing specimens collected from people who inject drugs( PWID) with high risk of HCV exposure, we utilized RNA-Seq and blood transcriptome module( BTM) analysis to characterize immune function in peripheral blood mononuclear cells( PBMC) before, during, and after acute HCV infection resulting in spontaneous resolution. Our results provide a detailed description of innate immune programs active in peripheral blood during acute HCV infection, which include prominent type I interferon and inflammatory signatures. Innate immune gene expression rapidly returns to pre-infection levels upon viral clearance. Comparative analyses using peripheral blood gene expression profiles from other viral and vaccine studies demonstrate similarities in the immune responses to acute HCV and flaviviruses. Of note, both acute dengue virus( DENV) infection and acute HCV infection elicit similar innate antiviral signatures. However, while transient in DENV infection, this signature was sustained for many weeks in the response to HCV. These results represent the first longitudinal transcriptomic characterization of human immune function in PBMC during acute HCV infection and identify several dynamically regulated features of the complex response to natural HCV exposure.\nIntroduction:\nDespite the recent breakthrough of highly effective direct acting antiviral therapies, hepatitis C virus( HCV) remains a significant public health threat. New infections, especially among people who inject drugs( PWID), are likely to increase in the absence of a prophylactic vaccine[1]. Effective vaccine development is hampered by our limited understanding of how protective immunity is established in the acute stages of natural infections. Acute HCV infection has two dichotomous outcomes, spontaneous resolution( ~25% of infections) or chronic infection( ~75% of infections)[2]. Immune functions following viral exposure remain incompletely understood due in part to the limited availability of paired pre-infection and longitudinal acute infection research samples from recently exposed, largely asymptomatic individuals. Previous work has established roles for innate and adaptive immunity in the host response to acute HCV. Genetic polymorphisms at the IFNL3/4 locus, NK cell activity, and dendritic cell function influence infection outcomes[3, 4]. Effective adaptive immunity is also essential for HCV clearance. HCV-specific CD4 and CD8 T cell responses are induced in most acutely infected individuals irrespective of outcome. However, failure to sustain CD4 T cell responses is associated with viral persistence, which in turn leads to CD8 T cell dysregulation and exhaustion[3, 4]. The role of B cells in acute HCV is less clear. Although not consistent across all studies, anti-HCV neutralizing antibodies have been associated with spontaneous clearance during primary and secondary infections, suggesting that they may contribute to long-term protective immunity[5]. 
Kinetics and crosstalk between these innate and adaptive responses remain incompletely defined. Systems-level transcriptomic methods have emerged as powerful tools for profiling human immune responses[6]. Examining peripheral blood transcriptome data has provided integrated maps of host response dynamics following vaccination or infection, and the associated interplay of innate and adaptive immune components[7–10]. Studies of the responses to yellow fever and influenza vaccines have identified shared gene expression signatures associated with strong antibody responses[7, 9–12]. Related studies of influenza[13] and hepatitis B virus( HBV) vaccines[14] have underscored the role of baseline inflammation and host factors in determining the outcome of vaccination. Although challenging due to logistical demands and interindividual variation, analogous methods have also been successful in characterizing the human immune response to “real world” acute infections by pathogens such as dengue virus( DENV)[15]. We reasoned that similar transcriptomic approaches, with the potential to extract large amounts of data from relatively limited sample material would be useful in characterizing the response to acute HCV. The first microarray studies of acute HCV infection performed on serial liver biopsies from a limited number of chimpanzees showed that innate immune responses are rapidly induced in the liver irrespective of infection outcome[16, 17]. Spontaneous HCV clearance was associated with upregulation of genes linked to CD4 T cells and lymphocyte migration to the liver[16, 17]. More recent transcriptomic studies of HCV-specific T cells indicate that metabolic dysregulation during acute infection may influence the outcome of the antiviral T cell response[18]. Transcriptome analysis in human livers demonstrated elevated IFNγ–stimulated gene expression in acute infection, but elevated IFNα–stimulated gene expression during chronic infection[19]. Additional microarray studies described elevated interferon stimulated gene( ISG) expression in peripheral immune cells during chronic HCV[20]. Despite our understanding of certain aspects of the host response to HCV, the composition and dynamics of the early antiviral response in acute infection have not been fully defined. Here, we characterize the host response to acute HCV through transcriptional profiling of peripheral blood mononuclear cells( PBMC). We performed RNA-Seq on longitudinal samples collected before, during and after acute HCV infections that resulted in spontaneous resolution or chronic infection. Our analysis provides a detailed characterization of the inflammatory and ISG signatures active in the spontaneous resolution of acute HCV, and identifies similarities with responses to flavivirus vaccines and infections.\nDiscussion:\nHere we present a detailed transcriptomic characterization of early events in the human immune response to acute HCV infection in individuals that progress to spontaneous resolution. We detected wide-ranging changes in PBMC gene expression patterns, including those consistent with pronounced innate antiviral programs and inflammatory mediators. The patterns observed in the PBMC response to acute HCV shared many features with effective immune responses to YFV and acute DENV infection. To our knowledge, this study represents the first longitudinal transcriptomic investigation of the PBMC response to acute HCV in humans. 
Studying the early immune response to HCV in humans is challenging, as infected individuals are not usually recognized until they progress to chronic infection. Even if research subjects are identified during acute stages, obtaining pre-infection samples for comparison is often impossible. We addressed these difficulties in a rare longitudinal study of HCV-naïve PWID[21, 34], through which we obtained acute infection and corresponding pre-infection baseline samples. Given this “real world” setting, we observed considerable variability in datasets from different patients. Differences in gene expression patterns could be due to complex factors associated with the PWID population enrolled in this study, including drug use, unstable socioeconomic conditions and exposures to additional( non-HCV) infections. Furthermore, the time of HCV infection is necessarily estimated and the samples grouped for each time point are not perfectly synchronized. Problems in assigning datasets to distinct analysis groups were particularly pronounced in samples from individuals who eventually progressed to chronic infection. Although reasons for greater consistency of expression patterns in the Resolution group remain unclear, much of the variation in the Chronic group samples appears unrelated to HCV infection timepoint( S1 Fig). We do not believe this heterogeneity is related to infection outcome. These issues might be overcome with a larger sample size, which was unfortunately not possible in the present study. This study’s original goal was to identify disparities in the initial response to infection that contribute to the differential outcomes of spontaneous infection versus chronicity. Indeed, recently reported analyses of HCV-specific T cells suggest that differences in metabolic networks engaged early in infection contribute to viral clearance[18]. In our datasets, the heterogeneous gene expression patterns measured in the Chronic group precluded a direct comparison to spontaneous resolution at this sample size. However, this longitudinal study did enable a detailed characterization of the peripheral immune response to acute HCV infection that results in spontaneous resolution. Within the Resolution group, we structured our analysis to evaluate changes relative to patient-specific Pre-infection samples and we observed several consistent and concordant transcriptomic patterns across most subjects analyzed. With the range of estimated infection dates, these results suggest that certain immune functions are active to some extent for at least several weeks during acute HCV. Our results define a pronounced innate antiviral gene expression program active in PBMC during the Early acute stage of HCV infection. This response persisted through the Late acute time point in individuals with detectable HCV viremia. These observations are consistent with transcriptomic data from the chimpanzee model demonstrating rapid induction of ISGs in the liver during early acute HCV[16, 17]. Furthermore, we observed overlap in the PBMC ISG signature identified here and ISG expression patterns in acute HCV liver biopsy samples from infected patients and chimpanzees[19, 25]. This suggests that the innate antiviral response against acute HCV is not restricted to the liver and that the peripheral blood response corresponds to some extent with that at the site of infection. This response returned to baseline levels following viral clearance, suggesting a dependence on viral RNA. 
We also observed a decrease in gene signatures associated with B cells. Such changes could reflect a decrease in B cell transcriptional activity, a decrease in the relative frequency of B cells, or both. Based in part on trends observed in flow cytometry analysis of a limited number of samples, we speculate that these patterns result from a diminished fraction of B cells within PBMC during acute HCV viremia. Although it is possible that this pattern could simply be a consequence of a corresponding relative increase in another cell type( e. g. monocytes, as suggested by BTM enrichment), we did not detect significant decreases in BTMs associated with other cell types( e. g. T cells, NK cells). As both RNA-Seq( as applied here to bulk PBMC samples) and flow cytometry are relative quantification methods, we cannot ascertain if these changes correspond to a decrease in the absolute frequency of B cells in peripheral blood. However, several recent studies have described mechanisms of virus- and IFN- mediated B cell dysregulation[35–37]. Future studies including sample collection focused on absolute immune cell quantification will be required to further explore this observation in the context of acute HCV infection. Our comparative analyses revealed notable similarities between host responses to acute HCV and to effective vaccines. More specifically, we observed many shared features with the response to YFV, a live attenuated flavivirus that may be thought of as approximating an acute viral infection. As yellow fever virus is a related hepatotropic flavivirus, these similarities are not entirely surprising. In addition, the immune response during Early acute HCV infection shared many features with the immune response to acute DENV infection. Although challenging to make a formal comparison due to discrepancies in study design and timing, these qualitative results suggest that at least at some point during acute HCV, a response similar to that elicited by DENV is induced, likely reflecting the core innate antiviral programs activated in response to RNA virus infection. However, in acute HCV, ISG expression remains elevated long after initial infection and is maintained for many weeks of detectable viremia. This pattern is consistent with models in which adaptive immunity to HCV is delayed as compared to flavivirus infections, despite apparently similar robust IFN responses[16, 38, 39]. Even in the case of eventual spontaneous resolution, HCV outpaces these innate antiviral effectors, resulting in prolonged high-level viremia[38, 39]. Eventually, HCV-specific CD4 and CD8 T cells arise to eliminate the virus, after which innate antiviral signatures rapidly normalize. Failure to prime effective adaptive immunity results in chronic infection that is associated with persistent ISG expression( reviewed in[3, 4]). This longitudinal study provides an initial assessment of the peripheral immune response to acute HCV at a systems level. Follow-up studies with a larger cohort and sufficient sample availability for complementary experimental methods( e. g. comprehensive flow cytometry analysis) will be required to validate these findings beyond the patients described here and determine the impact of these dynamically regulated immune processes on the differential outcome of acute HCV infection.",
"role": "user"
},
{
"content": "Hepatitis C virus ( HCV ) is a leading cause of liver disease , with an estimated 71 million people infected worldwide . Following exposure , a subset of individuals spontaneously clears the virus while a majority progress to chronic infection . The immune functions active during the initial , acute infection by HCV are not well understood , due in part to difficulties associated with studying the early stages of this disease in humans . Most individuals acutely infected with HCV are asymptomatic and do not initially seek medical care , which presents challenges in conducting prospective longitudinal studies . Here , by employing specimens from a rare research cohort of individuals sampled before infection , during infection , and after spontaneous viral clearance , we use RNA-Seq to characterize the early immune processes active in acute HCV . Our analysis identified a robust innate antiviral gene signature that corresponds with HCV viremia . In addition , comparisons to other immune transcriptomics datasets demonstrated that the immune response to acute HCV shares many features with responses to flaviviruses . These results offer a detailed longitudinal description of immune function active during the spontaneous resolution of acute HCV infection and provide insight into the early events that may contribute to viral clearance .",
"role": "assistant"
}
] |
science.craftchem_ner | science.craftchem_ner.3912 | [
{
"content": "Extract all chemical entities (ChEBI) from this biomedical text. Output as JSON array.\n\nText: Tothisend,thePGC-1alpha-/-micewereexercisedonamotorizedtreadmillapparatususingarun-to-exhaustionformat.",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
] |
science.chemdner_ner | science.chemdner_ner.2851 | [
{
"content": "Find all [\"Chemical\"] entities in this text. Output a JSON object.\n\nText:\nNew Analysis of the ν3 and ν4 Bands of HNO3 in the 7.6 μm Region. A new line position analysis of the nu3 and nu4 bands of nitric acid (HNO3) at 1326.186 and 1303.072 cm-1 together with its associated interacting bands is presented. The 3^1 and 4^1 energy levels were obtained from an extended analysis of high resolution Fourier transform spectra recorded at Giessen in the 7.6 µm region. The energy levels of 3^1 and 4^1 upper states of nitric acid are strongly interacting with those of the 9^3, 6^2, 5^19^1 and 7^18^1 dark states centered at 1288.899, 1289.46, 1341.05 and 1343.78 cm-1, respectively. Informations on these perturbing dark states were achieved through previous partial investigations of hot bands in high resolution Fourier transform spectra recorded at 22 µm in Giessen (for 3nu9-2nu9 and 3nu9-nu5), at 12 µm in Denver (for 3nu9-nu9), and at 11 µm in Orsay (for nu5+nu9-nu9). The energy levels calculation accounts for the various Fermi, anharmonic, A-type, B-type and C-type Coriolis resonances which couple together the {6^2,9^3,4^1,3^1,5^19^1,7^18^1} interacting energy levels. For nitric acid the nu9 mode (OH torsion relative to the -NO2 moiety) is a large amplitude motion. The theoretical model used in this work accounts also for large amplitude effects in the 93 dark state which lead to a splitting of the 93 energy levels of about 0.060 cm-1. In this way the existence of \"torsional\" splittings for several nu4 perturbed lines was explained by the occurrence of local A-type and B-type Coriolis resonances coupling the 4^1 energy levels with those of 93. Because four dark bands had to be accounted for in the model, the results of the energy levels calculations are reasonable, although not perfect. However a very significant improvement was achieved in term of understanding the 7.6 µm absorbing bands of nitric acid as compared to the analysis of the nu3 and nu4 bands performed several years ago [Perrin, A.; Lado-Bordowski, O.; Valentin, A. Mol. Phys. 1989, 67, 249-267]. Finally the present analysis also features, for the first time, the nu3+nu9-nu9 hot band located at 1331.09 cm-1.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"OH\", \"nitric acid\", \"NO2\", \"HNO3\"]}",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.480 | [
{
"content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nThe use of background knowledge remains largely unexploited in many text classification tasks. In this work, we explore word taxonomies as means for constructing new semantic features, which may improve the performance and robustness of the learned classifiers. We propose tax2vec, a parallel algorithm for constructing taxonomy based features, and demonstrate its use on six short-text classification problems, including gender, age and personality type prediction, drug effectiveness and side effect prediction, and news topic prediction. The experimental results indicate that the interpretable features constructed using tax2vec can notably improve the performance of classifiers; the constructed features, in combination with fast, linear classifiers tested against strong baselines, such as hierarchical attention neural networks, achieved comparable or better classification results on short documents. Further, tax2vec can also serve for extraction of corpus-specific keywords. Finally, we investigated the semantic space of potential features where we observe a similarity with the well known Zipf's law.\n\nCited Abstract(s):\n[1]: We summarise various ways of performing dimensionality reduction on high-dimensional microarray data. Many different feature selection and feature extraction methods exist and they are being widely used. All these methods aim to remove redundant and irrelevant features so that classification of new instances will be more accurate. A popular source of data is microarrays, a biological platform for gathering gene expressions. Analysing microarrays can be difficult due to the size of the data they provide. In addition the complicated relations among the different genes make analysis more difficult and removing excess features can improve the quality of the results. We present some of the most popular methods for selecting significant features and provide a comparison between them. Their advantages and disadvantages are outlined in order to provide a clearer idea of when to use each one of them for saving computational time and resources.\n\n[2]: Plenty of feature selection methods are available in literature due to the availability of data with hundreds of variables leading to data with very high dimension. Feature selection methods provides us a way of reducing computation time, improving prediction performance, and a better understanding of the data in machine learning or pattern recognition applications. In this paper we provide an overview of some of the methods present in literature. The objective is to provide a generic introduction to variable elimination which can be applied to a wide array of machine learning problems. We focus on Filter, Wrapper and Embedded methods. We also apply some of the feature selection techniques on standard datasets to demonstrate the applicability of feature selection techniques.",
"role": "user"
},
{
"content": "Feature selection thus filters out the (unnecessary) features, with the aim of yielding a compact, information-rich representation of the unstructured input. There exist many approaches to feature selection. They can be based on the individual feature's information content, correlation, significance etc. [2] . Feature selection is for example relevant in biological data sets, where e.g., only a handful of the key gene markers are of interest, and can be identified by assessing the impact of individual features on the target space [1] .",
"role": "assistant"
}
] |
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.83025 | [
{
"content": "Write a literature review chapter based on these paper abstracts.\nLiterature Review Title: Recent developments on the stability of systems with aperiodic sampling: An overview \nChapter Title: Input/Output stability approach \n1. Abstract of Cited Paper (BIB001): This paper will very briefly review the history of the relationship between modern optimal control and robust control. The latter is commonly viewed as having arisen in reaction to certain perceived inadequacies of the former. More recently, the distinction has effectively disappeared. Once-controversial notions of robust control have become thoroughly mainstream, and optimal control methods permeate robust control theory. This has been especially true in H-infinity theory, the primary focus of this paper. \n2. Abstract of Cited Paper (BIB002): This paper introduces a unified approach to robustness analysis with respect to nonlinearities, time variations, and uncertain parameters. From an original idea by Yakubovich (1967), the approach has been developed under a combination of influences from the Western and Russian traditions of control theory. It is shown how a complex system can be described, using integral quadratic constraints (IQC) for its elementary components. A stability theorem for systems described by IQCs is presented that covers classical passivity/dissipativity arguments but simplifies the use of multipliers and the treatment of causality. A systematic computational approach is described, and relations to other methods of stability analysis are discussed. Last, but not least, the paper contains a summarizing list of IQCs for important types of system components. \n3. Abstract of Cited Paper (BIB003): Stability robustness of systems with uncertain time delays is considered. A new delay-dependent state-space stability criterion is formulated in the form of an easily checked LMI condition. Two applications of the main result are presented, one with only time-delay uncertainty and one with both delay and parametric uncertainty. Copyright © 2001 John Wiley & Sons, Ltd. \n4. Abstract of Cited Paper (BIB004): Preliminaries.- Examples.- Stability sets and regions.- Reducible discrete delays and LTIs.- Liapunov's second method and LMIs.- Robustness issues in closed-loop.- Applications. \n5. Abstract of Cited Paper (BIB005): Preface, Notations 1.Introduction to Time-Delay Systems I.Frequency-Domain Approach 2.Systems with Commensurate Delays 3.Systems withIncommensurate Delays 4.Robust Stability Analysis II.Time Domain Approach 5.Systems with Single Delay 6.Robust Stability Analysis 7.Systems with Multiple and Distributed Delays III.Input-Output Approach 8.Input-output stability A.Matrix Facts B.LMI and Quadratic Integral Inequalities Bibliography Index \n6. Abstract of Cited Paper (BIB006): This paper considers the problem of checking stability of linear feedback systems with time-varying but bounded delays. Simple but powerful criteria of stability are presented for both continuous-time and discrete-time systems. Using these criteria, stability can be checked in a closed loop Bode plot. This makes it easy to design the system for robustness. \n7. Abstract of Cited Paper (BIB007): Stability and L2 (l2)-gain of linear (continuous-time and discrete-time) systems with uncertain bounded time-varying delays are analyzed under the assumption that the nominal delay values are not equal to zero. The delay derivatives (in the continuous-time) are not assumed to be less than q< 1. 
An input–output approach is applied by introducing a new input–output model, which leads to effective frequency domain and time domain criteria. The new method significantly improves the existing results for delays with derivatives not greater than 1, which were treated in the past as fast-varying delays (without any constraints on the delay derivatives). New bounded real lemmas (BRLs) are derived for systems with state and objective vector delays and norm-bounded uncertainties. Numerical examples illustrate the efficiency of the new method. © 2006 Elsevier B.V. All rights reserved. \n8. Abstract of Cited Paper (BIB008): Stability of systems in the presence of bounded uncertain time-varying delays in the feedback loop is studied. The delay parameter is assumed to be an unknown time-varying function for which the upper bounds on the magnitude and the variation are given. The stability problem is treated in the integral quadratic constraint (IQC) framework. Criteria for verifying robust stability are formulated as feasibility problems over a set of frequency-dependent linear matrix inequalities. The criteria can be equivalently formulated as semi-definite programs (SDP) using Kalman-Yakubovich-Popov lemma. As such, checking robust stability can be performed in a computationally efficient fashion. \n9. Abstract of Cited Paper (BIB009): Abstract This paper presents a survey of the most significant results on robust control theory. In particular, we study the modeling of uncertain systems, robust stability analysis for systems with unstructured uncertainty, robustness analysis for systems with structured uncertainty, and robust control system design including H ∞ control methods. The paper also presents some more recent results on deterministic and probabilistic methods for systems with uncertainty.\n\nLiterature Review:",
"role": "user"
},
{
"content": "In this subsection we present several methods that study sampled-data systems from a robust control point of view. The main idea of the Input/Output stability approach is to consider the sampling error as a perturbation with respect to a nominal continuous-time control-loop. Classical robust control tools are used in order to assess the stability of the sampled-data systems BIB001 BIB002 BIB003 BIB010 . Some of the presented methods are reminiscent from the Input/Output stability approach used for the analysis of time delay systems BIB004 BIB005 BIB006 BIB007 BIB008 BIB009 , and have been further developed independently of the time delay approach.",
"role": "assistant"
}
] |
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.3580 | [
{
"content": "Classify the topics of this COVID-19 literature paragraph by selecting from these categories:\n\n- Epidemic Forecasting: Predictions about COVID-19 spread\n- Treatment: Therapeutic methods or medication\n- Prevention: Measures to prevent spread\n- Mechanism: Biological mechanisms of the virus\n- Case Report: Report of a COVID-19 case\n- Transmission: How COVID-19 is transmitted\n- Diagnosis: Diagnosing COVID-19 in individuals\n\nYour output should be a valid JSON array with topics in double quotes and separated by commas. No additional text should be included.\n\nArticle:\n\nOBJECTIVE: To study the clinical characteristics of patients in Zhejiang province, China, infected with the 2019 severe acute respiratory syndrome coronavirus 2 (SARS-Cov-2) responsible for coronavirus disease 2019 (covid-2019). DESIGN: Retrospective case series. SETTING: Seven hospitals in Zhejiang province, China. PARTICIPANTS: 62 patients admitted to hospital with laboratory confirmed SARS-Cov-2 infection. Data were collected from 10 January 2020 to 26 January 2020. MAIN OUTCOME MEASURES: Clinical data, collected using a standardised case report form, such as temperature, history of exposure, incubation period. If information was not clear, the working group in Hangzhou contacted the doctor responsible for treating the patient for clarification. RESULTS: Of the 62 patients studied (median age 41 years), only one was admitted to an intensive care unit, and no patients died during the study. According to research, none of the infected patients in Zhejiang province were ever exposed to the Huanan seafood market, the original source of the virus; all studied cases were infected by human to human transmission. The most common symptoms at onset of illness were fever in 48 (77%) patients, cough in 50 (81%), expectoration in 35 (56%), headache in 21 (34%), myalgia or fatigue in 32 (52%), diarrhoea in 3 (8%), and haemoptysis in 2 (3%). Only two patients (3%) developed shortness of breath on admission. The median time from exposure to onset of illness was 4 days (interquartile range 3-5 days), and from onset of symptoms to first hospital admission was 2 (1-4) days. CONCLUSION: As of early February 2020, compared with patients initially infected with SARS-Cov-2 in Wuhan, the symptoms of patients in Zhejiang province are relatively mild.",
"role": "user"
},
{
"content": "[\"Mechanism\"]",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.2937 | [
{
"content": "Find entities of these types in the text: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nOutput a JSON object with entity types as keys and lists of found entities as values.\n\nText:\nRiluzole as an adjunctive therapy to risperidone for the treatment of irritability in children with autistic disorder : a double-blind , placebo-controlled , randomized trial . BACKGROUND A hyperglutamatergic state has been shown to play a possible role in the pathophysiology of autistic disorders . Riluzole is a glutamate-modulating agent with neuroprotective properties , which has been shown to have positive effects in many neuropsychiatric disorders . OBJECTIVE The aim of this study was to assess the efficacy and tolerability of riluzole as an adjunctive to risperidone in the treatment of irritability in autistic children who were not optimally responding to previous medications . STUDY DESIGN This was a 10-week , randomized , double-blind , parallel-group , placebo-controlled trial . PARTICIPANTS The study enrolled male and female outpatients aged 5-12 years with a diagnosis of autistic disorder based on the DSM-IV-TR criteria and a score of ≥12 on the Aberrant Behavior Checklist-Community ( ABC-C ) irritability subscale who had discontinued other medications because of a lack of efficacy . INTERVENTIONS Subjects received riluzole ( titrated to 50 or 100 mg/day based on bodyweight ) or placebo in addition to risperidone ( titrated up to 2 or 3 mg/day based on bodyweight ) for 10 weeks . OUTCOME Patients were assessed at baseline , week 5 , and week 10 . The primary outcome measure was the difference in the change in the ABC-C irritability subscale score from baseline to week 10 between the two groups . We also compared changes in other ABC-C subscale scores and Clinical Global Impressions-Improvement ( CGI-I ) scale scores between the two groups . RESULTS Forty-nine patients were enrolled in the study , and forty children completed the trial ( dropouts : placebo = 4 , riluzole = 5 ) . A significantly greater improvement in the study primary outcome ( the ABC-C irritability subscale score ) was achieved by the riluzole-treated children compared with the placebo group ( P = 0.03 ) . Patients in the riluzole group also showed significantly greater improvement on the lethargy/social withdrawal ( P = 0.02 ) , stereotypic behavior ( P = 0.03 ) , and hyperactivity/non-compliance subscales ( P = 0.005 ) , but not on the inappropriate speech subscale ( P = 0.20 ) than patients in the placebo group . Eleven patients in the riluzole group and five patients in the placebo group were classified as responders based on their CGI-I scores [ χ ( 2 ) ( 1 ) = 3.750 , P = 0.05 ] . Children in the riluzole group experienced significantly more increases in their appetite and bodyweight than children in the placebo group by the end of the study . CONCLUSION Riluzole add-on therapy shows several therapeutic outcomes , particularly for improving irritability , in children with autism . However , its add-on to risperidone also results in significantly increased appetite and weight gain .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"riluzole\", \"risperidone\", \"Riluzole\", \"placebo-controlled\"], \"Outcome\": [\"inappropriate\", \"ABC-C subscale scores\", \"stereotypic\", \"hyperactivity/non-compliance\", \"lethargy/social\", \"irritability\", \"neuropsychiatric disorders\", \"efficacy and tolerability\", \"ABC-C irritability subscale\", \"Clinical Global Impressions-Improvement ( CGI-I )\"], \"Participant\": [\"autistic disorders\", \"irritability in children with autistic disorder\", \"irritability\", \"autistic\"]}",
"role": "assistant"
}
] |
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.60901 | [
{
"content": "Write a literature review chapter based on these paper abstracts.\nLiterature Review Title: Optimized Clustering Algorithms for Large Wireless Sensor Networks: A Review \nChapter Title: Swarm Intelligence \n1. Abstract of Cited Paper (BIB001): Swarm intelligence is a relatively novel field. It addresses the study of the collective behaviors of systems made by many components that coordinate using decentralized controls and self-organization. A large part of the research in swarm intelligence has focused on the reverse engineering and the adaptation of collective behaviors observed in natural systems with the aim of designing effective algorithms for distributed optimization. These algorithms, like their natural systems of inspiration, show the desirable properties of being adaptive, scalable, and robust. These are key properties in the context of network routing, and in particular of routing in wireless sensor networks. Therefore, in the last decade, a number of routing protocols for wireless sensor networks have been developed according to the principles of swarm intelligence, and, in particular, taking inspiration from the foraging behaviors of ant and bee colonies. In this paper, we provide an extensive survey of these protocols. We discuss the general principles of swarm intelligence and of its application to routing. We also introduce a novel taxonomy for routing protocols in wireless sensor networks and use it to classify the surveyed protocols. We conclude the paper with a critical analysis of the status of the field, pointing out a number of fundamental issues related to the (mis) use of scientific methodology and evaluation procedures, and we identify some future research directions. \n2. Abstract of Cited Paper (BIB002): WSN has been directed from military applications to various civil applications. However, many applications are not ready for real world deployment. Most challenging issues are still unresolved. The main challenge facing the operation of WSN is saving energy to prolong the network lifetime. Clustering is an efficient technique used for managing energy consumption. However, clustering is an NP hard optimization problem that can’t be solved effectively by traditional methods. Computational Intelligence (CI) paradigms are suitable to adapt for WSN dynamic nature. This paper explores the advantages of CI techniques and how they may be used to solve varies problems associated to WSN. Finally, a short conclusion and future recommendation is being provided. \n3. Abstract of Cited Paper (BIB003): Advances in the technology along with reduction in processor size, its memory, and wireless antenna size has facilitated the construction of low cost, low powered and multifunctional Sensor nodes which in turn led to high demand for development of Wireless Sensor Networks. A lot of research work has been done regarding the development of routing protocols for WSNs. This paper provides a brief overview of the routing protocols using Reinforcement learning approach for WSNs.\n\nLiterature Review:",
"role": "user"
},
{
"content": "The Swarm Intelligence (SI) is defined in BIB002 as \"any attempt to design algorithms or distributed problem-solving devices inspired by the collective behavior of social insects and other animal societies\". Most of the proposed are based on the social behaviors of flocks of birds, schools of fishes, and insect cooperation like ants, bees, butterflies, etc. which have limited resources like sensor nodes used in WSN. SI approaches can be classify into Particle Swarm, Ant Colony, and Bee Colony Optimizations BIB003 BIB004 BIB001 .",
"role": "assistant"
}
] |
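Because the chapter above groups the surveyed protocols under Particle Swarm, Ant Colony, and Bee Colony Optimizations, a minimal particle swarm loop may help make the shared mechanic concrete. The sketch below minimizes a placeholder fitness function; a WSN clustering protocol would instead score candidate cluster-head positions with an energy-cost model, and every constant here is an illustrative assumption rather than a value from the cited papers.

```python
# Bare-bones PSO: particles track personal bests, the swarm tracks a global
# best, and velocities blend inertia with both attractions.
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    return float(np.sum(x ** 2))      # stand-in for an energy-cost objective

dim, n_particles, iters = 2, 20, 100
w, c1, c2 = 0.7, 1.5, 1.5             # inertia and acceleration coefficients

pos = rng.uniform(-10.0, 10.0, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best position:", gbest, "fitness:", fitness(gbest))
```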
science.craftchem_ner | science.craftchem_ner.1035 | [
{
"content": "Extract all chemical entities (ChEBI) from this biomedical text. Output as JSON array.\n\nText: Moreover,itsegregatesintovesicleswhicharenotassociatedwithannexinA7(Fig.3).",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.14187 | [
{
"content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nIn complex processes, various events can happen in different sequences. The prediction of the next event activity given an a-priori process state is of importance in such processes. Recent methods leverage deep learning techniques such as recurrent neural networks to predict event activities from raw process logs. However, deep learning techniques cannot efficiently model logical behaviors of complex processes. In this paper, we take advantage of Petri nets as a powerful tool in modeling logical behaviors of complex processes. We propose an approach which first discovers Petri nets from event logs utilizing a recent process mining algorithm. In a second step, we enhance the obtained model with time decay functions to create timed process state samples. Finally, we use these samples in combination with token movement counters and Petri net markings to train a deep learning model that predicts the next event activity. We demonstrate significant performance improvements and outperform the state-of-the-art methods on eight out of nine real-world benchmark event logs in accuracy.\n\nCited Abstract(s):\n[1]: In semi-structured case-oriented business processes, the sequence of process steps is determined by case workers based on available document content associated with a case. Transitions between process execution steps are therefore case specific and depend on independent judgment of case workers. In this paper, we propose an instance-specific probabilistic process model (PPM) whose transition probabilities are customized to the semi-structured business process instance it represents. An instance-specific PPM serves as a powerful representation to predict the likelihood of different outcomes. We also show that certain instance-specific PPMs can be transformed into a Markov chain under some non-restrictive assumptions. For instance-specific PPMs that contain parallel execution of tasks, we provide an algorithm to map them to an extended space Markov chain. This way existing Markov techniques can be leveraged to make predictions about the likelihood of executing future tasks. Predictions provided by our technique could generate early alerts for case workers about the likelihood of important or undesired outcomes in an executing case instance. We have implemented and validated our approach on a simulated automobile insurance claims handling semi-structured business process. Results indicate that an instance-specific PPM provides more accurate predictions than other methods such as conditional probability. We also show that as more document data become available, the prediction accuracy of an instance-specific PPM increases.",
"role": "user"
},
{
"content": "[1] developed a method which models a process in a probabilistic and instance-specific way. This model is able to predict next event activities and can be translated into a Markov chain. Their approach has been implemented on a simulated automobile insurance claim process.",
"role": "assistant"
}
] |
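The approach summarized above enriches a discovered Petri net with time decay functions so that each marking becomes a timed feature vector. The sketch below shows one plausible reading of that idea, an exponential decay per place keyed to the time a token last arrived there; the decay form, names, and toy timestamps are assumptions for illustration, not the cited method's exact formulation.

```python
# One decayed activation per Petri-net place: recently marked places score
# near 1.0, stale ones fade toward 0.0, never-marked places stay at 0.0.
import math

def decay_vector(last_token_time, now, alpha=1.0):
    """Map {place: last arrival time or None} to decayed activations."""
    return {
        place: 0.0 if t_last is None else math.exp(-(now - t_last) / alpha)
        for place, t_last in last_token_time.items()
    }

# Toy state: hours at which each place last received a token.
last_seen = {"p_start": 0.0, "p_review": 2.5, "p_approve": None}
print(decay_vector(last_seen, now=3.0, alpha=2.0))
```

Concatenating such a vector with token movement counters and the raw marking gives the timed process state sample that the downstream predictor is trained on.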
science.healthver_entailment | science.healthver_entailment.4452 | [
{
"content": "Output a JSON object with \"verdict\" and \"evidence\" fields based on the claim and abstract below.\n\nClaim: hydroxychloroquine helped coronavirus patients survive better\n\nAbstract:\nHydroxychloroquine has been promoted for its use in treatment of COVID-19 patients based on in-vitro evidences.\nWe searched the databases to include randomized and observational studies evaluating the effect of Hydroxychloroquine on mortality in COVID-19 patients.\nThe outcome was summarized as odds ratios (OR) with a 95% confidence interval (CI).We used the inverse-variance method with a random effect model and assessed the heterogeneity using I2 test.\nWe used ROBINS-I tool to assess methodological quality of the included studies.\nWe performed the meta-analysis using 'Review manager software version 5.3'.\nWe identified 6 observationalstudies satisfying the selection criteria.\nIn all studies, Hydroxychloroquine was given as add on to the standard care and effect was compared with the standard care alone.\nA pooled analysis observed 251 deaths in 1331 participants of the Hydroxychloroquine arm and 363 deaths in 1577 participants of the control arm.\nThere was no difference in odds of mortality events amongst Hydroxychloroquine and supportive care arm [1.25 (95% CI: 0.65, 2.38); I2 = 80%].\nA similar trend was observed with moderate risk of bias studies [0.95 (95% CI: 0.44, 2.06); I2 = 85%].\nThe odds of mortality were significantly higher in patients treated with Hydroxychloroquine + Azithromycin than supportive care alone [2.34 (95% CI: 1.63, 3.34); I2 = 0%].\nA pooled analysis of recently published studies suggests no additional benefit for reducing mortality in COVID-19 patients when Hydroxychloroquine is given as add-on to the standard care.\nGraphical Abstract.",
"role": "user"
},
{
"content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"The odds of mortality were significantly higher in patients treated with Hydroxychloroquine + Azithromycin than supportive care alone [2.34 (95% CI: 1.63, 3.34); I2 = 0%].\", \"A pooled analysis of recently published studies suggests no additional benefit for reducing mortality in COVID-19 patients when Hydroxychloroquine is given as add-on to the standard care.\"]}",
"role": "assistant"
}
] |
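The abstract in the record above pools study-level odds ratios with an inverse-variance random-effects model. A small sketch of that computation, a DerSimonian-Laird between-study variance estimate followed by re-weighting, is given below; the two log-OR/standard-error pairs are invented inputs for illustration, not the data behind the reported 1.25 (0.65, 2.38) estimate.

```python
# Inverse-variance random-effects pooling (DerSimonian-Laird) on toy inputs.
import math

log_or = [math.log(0.8), math.log(2.0)]   # hypothetical study effects
se = [0.30, 0.25]                         # hypothetical standard errors

w = [1.0 / s**2 for s in se]              # fixed-effect weights
mu_fe = sum(wi * y for wi, y in zip(w, log_or)) / sum(w)

# Between-study variance tau^2 from Cochran's Q.
q = sum(wi * (y - mu_fe) ** 2 for wi, y in zip(w, log_or))
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(log_or) - 1)) / c)

w_re = [1.0 / (s**2 + tau2) for s in se]  # random-effects weights
mu_re = sum(wi * y for wi, y in zip(w_re, log_or)) / sum(w_re)
se_re = math.sqrt(1.0 / sum(w_re))

low, high = mu_re - 1.96 * se_re, mu_re + 1.96 * se_re
print("pooled OR %.2f (95%% CI %.2f-%.2f)"
      % (math.exp(mu_re), math.exp(low), math.exp(high)))
```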
science.chemsum_single_document_summarization | science.chemsum_single_document_summarization.111719 | [
{
"content": "Write an abstract for this chemistry paper.\n\nTitle:\nElectronic Properties and 13C NMR Structural Study of Y3N@C88\n\nPaper sections:\nIntroduction\n<p>The separation and characterization of higher fullerenes beyond C80 is very difficult, not only because of their extremely low yield, but also the possible presence of multiple isomers, instability, and lower solubility.1 For example, Yang et al. reported the separation of Di-and Tridysprosium Endohedral Metallofullerenes in the fullerene cages from C94 to C100.2 Due to the limited available amount of sample, they did not obtain any structural information of those large cages. 2 Among the large fullerene cages, the C88 cage has been widely studied. As early as 1995, C88 empty cage fullerenes were separated from soot generated in a Kratschmer-Huffman electric-arc generator and subsequently characterized using 13C NMR by Achiba et al.3 In a later report, Miyake et al.4 described a more detailed 13C NMR study for the C88 empty cage fullerene. However, the structural assignments were ambiguous because at least three C88 isomers exist.4 Due to the difficulty in experimental part, the extensive theoretical studies were performed by several groups to aid the structural determination.5,6 It was predicted that C2(7), Cs(17) and C2(33) isomers are the mostly likely to be experimentally-isolated isomers among the 35 IPR-obeying isomers of C88.5,6 Derivatization of empty fullerene cages has proved to be an effective tool for cage stabilization and meanwhile provides a path for further structural characterization. For example, Troyanov and Tamm 7 synthesized and characterized trifluoromethyl derivatives of C88, C88(CF3)18. Based on their X-ray investigation, they determined that C88(CF3)18 processes a C2(33)-C88 cage, which is one of the three most stable isomers predicted on a theoretical basis. Encapsulation of atoms or clusters into fullerene cages is another method used to stabilize empty fullerene cages by electron transfer processes between the encapsulated and the carbon cage.</p><p>Trimetallic nitride template (TNT) endohedral metallofullerenes (EMFs) are particularly important not only because of their relatively high yields, but also because of their intriguing potential applications in biomedical, optoelectronic, and photovoltaic fields.8,9 Insertion of a TNT cluster into a C88 cage (M3N@C88, M= Gd, Tm, Dy, Tb, Nd, Pr, Ce, La) has been reported by several laboratories.10–13 Echegoyen and coworkers suggested that the C88 cage is preferentially templated by a TNT cluster for metal ions with ionic radius larger than gadolinium, such as, Nd, Pr, Ce.10,11 However, most of the TNT EMFs with the C88 cage have been characterized by mass spectroscopy without definitive structural characterization. Notable exceptions are the single crystal X-ray studies reported for Tb3N@C88 which have been found to have an IPR-obeying D2(35) structure.12 This is in contrast with the theoretical predictions for the empty C88 cage. Thus, it is interesting to examine whether clusters with different metals (A3N)6+ encapsulated within a C88 cage have the same cage structure as IPR-obeying Tb3N@D2(35)-C88.</p><p>In this paper, we report the synthesis, separation and structural characterization of diamagnetic Y3N@C88. This diamagnetic molecule allows high-resolution 13C NMR structural studies, which are generally not feasible for the paramagnetic lanthanide C88 cage TNT EMFs reported above. 
Density function theory calculations were utilized to augment the structural determination. The electronic properties of Y3N@C88 were also studied by both electrochemical and DFT computational approaches to compare a non-lanthanide TNT EMF with the corresponding lanthanide TNT EMFs reported to date.</p>\n\nExperimental Section\n<p>A sample of Y3N@C88 was synthesized in an electric arc-discharge reactor by vaporizing graphite rods containing a mixture of Y2O3 and graphite powder and using Cu as catalyst with a weight ratio of 1.1:1.0:2.1 in a dynamic flow of N2 and He (flow rate ratio of N2/He=3:100).14 The toluene extract from the raw soot was applied to a cyclopentadiene-functionalized Merrifield peptide resin. The eluent was further separated by two-stage HPLC. The first stage was carried out on a 5PBB column and the sixth fraction contains Y3N@C88, which was further purified by a 5PYE column.</p><p>The 13C NMR measurements (150 MHz) were performed on a Bruker Advance spectrometer (600 MHz, 1H). The sample was dissolved in CS2 with Cr(acac)3 as the relaxation agent and acetone-d6 as the internal lock at 25 °C. Cyclic voltammetry was conducted using a CH Instruments 600A potentiostat (Austin, TX) with a single-compartment, three electrode, electrochemical cell. A 2.0 mm glassy carbon working electrode, platinum wire auxiliary, and a silver wire pseudo-reference electrode was used; ferrocene was used as an internal standard.</p><p>Density functional theory (DFT) computations were performed using the Gaussian 03 program package. All of the molecules were geometrically optimized at the UB3LYP level with a DZVP basis set for yttrium atoms and a 6-31G* basis set for carbon and nitrogen atoms.14 DFT-optimized energy values were obtained starting from the X-ray crystallographic structures of the corresponding Tb3N@D2(35)-C88.11</p>\n\nResults and Discussion\n<p>As illustrated in Figure 1, the HPLC, positive ion LD-TOF MS, and UV-vis spectra for Y3N@C88 is compared with data that was previously reported for [email protected] The close correspondence of the data in Figure 1 strongly suggests that Y3N@C88 has the same D2(35)cage structure as Tb3N@D2(35)-C88.12 Because the Y3N@C88 molecule is diamagnetic, we were able to obtain high-resolution 13C NMR data for Y3N@C88 to further confirm the cage structure.</p><p>The 13C NMR spectrum (Figure 2a) for Y3N@C88 exhibits a total of 22 lines (lines at 139.10 and 139.69 ppm, double intensity) with a shift range from 131.0 to 150.0 ppm. This result is completely consistent with the 22 13C NMR signals observed for Y3N@C88. An IPR-allowed isomer is suggested because of the absence of a 13C NMR signal above 155 ppm, which is characteristic of non-IPR pentalene motifs.14 There are 35 IPR-allowed C88 isomers and two isomers (1) and (35) that have D2 symmetry (22 × 4 lines) which is consistent with the observed spectrum.15 Among the 35 IPR-obeying isomers for the C88 cage, no other IPR-allowed structure would exhibit fewer than 22 lines except for isomer (34) with T symmetry that would exhibit 8 13C NMR resonances (1×4 and 7×12). All other C88 IPR isomers have lower symmetry and would exhibit more than 22 spectral lines. In addition, there are 7 pyrene 6,6,6 type carbons that range from 132.0–137.9 ppm, which is in reasonably good agreement with 7 DFT predicted values (Figure 2b) ranging from 131.9–139.2 ppm.14 The correlation between experimental 13C and the DFT predicted chemical shielding values are shown in Figure 2c. 
Tb3N@D2(35)-C88 exhibits a carbon cage with D2 symmetry as determined by previous single-crystal X-ray diffraction studies. Thus, our current results are consistent with a Y3N@D2(35)-C88 structure in analogous fashion to Tb3N@D2(35)-C88.12</p><p>As previously indicated, there are three isomers of empty-cage C88 cage, C2(7), Cs(17) and C2(33), that are predicted to be thermodynamically and kinetically stable.5,6 However, C88 with D2(35) symmetry is not included in this group. Computational calculations (DFT) summarized in Figure 3 suggest that the neutral IPR D2(35)-C88 has a small HOMO-LUMO gap (1.27 eV), indicating the lower stability of the neutral D2(35)-C88 cage. However, upon accepting six electrons, the HOMO-LUMO gap becomes significantly larger (1.77 eV), consistent for higher stability of the Y3N@D3(35)-C88 molecule. To our knowledge, neither the Y3N cluster or the D2(35)-C88 cage have been isolated, but when associated together, they form a stable Y3N@D3(35)-C88 structure by electron transfer between the cluster and the cage.</p><p>As illustrated in Figure 4, the CV electrochemistry of Y3N@C88 and Gd3N@C88 are nearly equivalent. The Gd3N@C88 sample for this CV electrochemistry comparison was isolated and purified (see SI) in the same fashion as described for Y3N@C88 vide supra and is consistent with previously reported data by Echegoyen and coworkers.16 The electrochemistry of Y3N@C88 and Gd3N@C88 can be described as having two distinct oxidative processes accompanied by a first and second reduction peak potential of −1.43 V and −1.70V, respectively. Redox potentials and resulting electrochemical gap (ΔEgap) for Y3N@C88 are in good agreement with reported literature values for other M3N@C88 systems (Table 1). This is consistent with a single (non-degenerate) LUMO level for Y3N@C88 obtained from the DFT calculations described above. The resulting electrochemical bandgap (oxE1-redE1) for the Y3N@C88 is 1.46 V, reasonably consistent with the DFT predictions above. Although (oxE1-redE1) for Y3N@C88 is smaller than the more stable Y3N@C80 case,16 this bandgap is similar to other M3N@C88 family members, such as, Gd3N@C88 (1.49 V),17 Nd3N@C88 (1.43 V),11 Pr3N@C88 (1.43 V),9 Ce3N@C88(1.38 V)9 and La3N@C88(1.57 V)9. These results suggests that the (oxE1-redE1) bandgap (and corresponding HOMO-LUMO gap) of A3N@C88 trimetallic nitride endohedral metallofullerenes is a property dominated by the properties of the D2(35)-C88 carbon cage and not the nature of the yttrium or lanthanide electronic metal cluster (A3N)6+.</p>\n\nConclusion\n<p>In summary, we have synthesized, purified and characterized diamagnetic Y3N@C88 for the first time. The 13C NMR study indicates that Y3N@C88 exhibits an IPR-obeying D2(35)-C88 cage. The electrochemical data suggested that Y3N@C88 has a smaller HOMO-LUMO gap than Y3N@C80 which is consistent with the computational DFT study. Also, our results show that encapsulation of an Y3N cluster does not significantly alter the electrochemical properties of these trimetallic nitride endohedral metallofullerenes and this result is consistent with other lanthanide M3N clusters in D2(35)-C88 cages. This suggests that the unique D2(35)-C88 cage properties strongly influence the electrochemical and electronic properties of these trimetallic nitride endohedral metallofullerenes.</p>",
"role": "user"
},
{
"content": "In this paper we report the synthesis, purification, 13C NMR and other characterization studies of Y3N@C88. The 13C NMR, UV-vis and chromatographic data suggest an Y3N@C88 having IPR allowed cage with D2(35)-C88 symmetry. In earlier density functional theory (DFT) computational and X-ray crystallographic studies, it was reported that lanthanide (A3N)6+ clusters are stabilized in D2(35)-C88 symmetry cages and have reduced HOMO-LUMO gaps relative to other trimetallic nitride endohedral metallofullerene cage systems, for example, A3N@C80. In this paper, we report that the non-lanthanide (Y3N)6+ cluster in the D2(35)-C88 cage exhibits a HOMO-LUMO gap consistent with other lanthanide A3N@C88 molecules based on electrochemical measurements and DFT computational study. These results suggests that the reduced HOMO-LUMO gap of A3N@C88 systems is a property dominated by the D2(35)-C88 carbon cage and not f-orbital lanthanide electronic metal cluster (A3N)6+ orbital participation.",
"role": "assistant"
}
] |
science.mslr2022_ms2_multidoc_summarization | science.mslr2022_ms2_multidoc_summarization.12127 | [
{
"content": "Based on the RCT reports below, write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article.\n\n\n\nTitle 1:\nAssociation study of the serotonin transporter gene polymorphism in obsessive-compulsive disorder.\nAbstract 1:\nThe hypothesis implicating the serotonergic system in the pathophysiology of obsessive-compulsive disorder ( OCD ) is supported by the therapeutic efficacy of selective serotonin reuptake inhibitors ( SSRIs ) . Since SSRIs act on the serotonin transporter ( 5-HTT ) , it has been suggested that the 5-HTT gene ( SCL6A4 ) could be a good c and i date for OCD . The SCL6A4 gene has a 44-bp insertion/deletion polymorphism in its promoter region ( 5-HTTLPR ) . Previous studies have revealed an association between OCD and the l allele . We analysed the 5-HTTLPR polymorphic system in 115 Mexican OCD patients and 136 controls . No significant association was found between l allele and OCD ( chi2 = 1.54 , d.f . = 1 , p = 0.21 ) . Furthermore , we assessed alternative methods that employ family-based design s in a sample of 43 trios . Haplotype-based haplotype relative risk and transmission disequilibrium analysis did not show a preferential transmission of l allele to OCD prob and s. Our results indicate the need to analyse larger sample s using family-based methods\n\n\nTitle 2:\nA family study of early‐onset obsessive‐compulsive disorder\nAbstract 2:\nResults from family studies have suggested that obsessive‐compulsive disorder ( OCD ) is a genetically heterogeneous disorder and have emphasized the importance of identifying valid subgroups of patients . The current study focused on early‐onset OCD prob and s and examined the recurrence risks of OCD and tics among first‐degree family members . One hundred six children and adolescents with OCD were recruited from a specialty clinic for OCD and 44 control individuals without OCD were identified by r and om‐digit dialing . These 150 prob and s and their 465 first‐degree relatives were assessed by trained interviewers , using st and ardized semi‐structured interviews . Diagnoses were assigned according to DSM‐IV criteria by two experts blind to the prob and 's diagnosis , through the best‐estimate process . These data were analyzed using χ2 tests , t‐tests , logistic regression , and generalized estimating equations ( GEE ) . Case prob and s had a mean age of onset of OC symptoms of 6.7 years ( SD = 2.8 ) , and high comorbid rates with Tourette syndrome ( 33 % ) and chronic tics ( 13.2 % ) . Compared to control relatives , case relatives had higher age‐corrected recurrence risks of OCD ( 22.7 % vs. 0.9 % , odds ratio ( OR ) = 32.5 , 95 % confidence interval ( CI ) = 4.5–230.8 , P = 0.0005 ) , and chronic tics ( 11.6 % vs. 1.7 % , OR = 7.9 , 95 % CI = 1.9–33.1 , P = 0.005 ) . A comorbid diagnosis of tics in the relatives was the best predictor of their diagnosis of OCD ( OR = 7.35 , 95 % CI = 3.79–14.25 , P < 0.0001 ) . There was a significant correlation between the ages of onset of OCD in prob and s and their affected relatives . Childhood onset OCD is a highly familial disorder . Some early‐onset cases may represent a valid subgroup , with higher genetic loading and shared vulnerability with chronic tic disorders . 
\n\n\nTitle 3:\nHardy-Weinberg disequilibrium identified genotyping error of the serotonin transporter (SLC6A4) promoter polymorphism.\nAbstract 3:\nWe analyzed the putative functional promoter polymorphism of the serotonin transporter ( 5-HTTLPR ) in two large autism spectrum disorder sample s and a control sample . A Hardy-Weinberg disequilibrium was detected for 5-HTTLPR in the unaffected founders of both autism spectrum disorder sample s and control sample s. When we lowered the total magnesium concentration in the polymerase chain reaction below levels reported in previously published studies , we observed a shift in relative allele frequencies and restoration of the Hardy-Weinberg equilibrium . Our data suggest that higher magnesium concentrations caused allele-dependent , non-r and om genotyping errors . Genotyping data obtained from the 2 mM magnesium protocol increased the significance of linkage and gave suggestive ( P=0.06 ) association with autism spectrum disorder , whereas the corrected genotypes of 5-HTTLPR provide no linkage information beyond the results we have previously published and no evidence of association with autism spectrum disorder . We present details regarding appropriate polymerase chain reaction conditions for the accurate genotyping of this polymorphism\n\n\nTitle 4:\nPlatelet serotonin transporter in children and adolescents with obsessive-compulsive disorder or Tourette's syndrome.\nAbstract 4:\nUNLABELLED Previous studies of serotonin transporter protein ( 5HTPR ) indexed in platelets by 3H-imipramine demonstrate reduction in children with comorbid obsessive-compulsive disorder ( OCD ) and Tourette 's syndrome ( TS ) . OBJECTIVE To use the 5HTPR selective lig and 3H-paroxetine and homogeneous diagnostic groups to reevaluate these findings . METHOD Platelet Kinetic binding parameters were evaluated using st and ard techniques from medication-free child and adolescent patients with OCD ( n = 18 ) , with TS ( n = 10 ) , and normal controls ( n = 19 ) . RESULTS Baseline binding capacity ( Bmax ) was significantly reduced in patients with OCD ( 1,342 + /- 952 fmol/mg ; protein p < .01 ) compared with normal controls ( 2,486 + /- 1309 fmol/mg ) and TS patients ( 2,420 + /- 1,069 fmol/mg ; p < .05 ) . Among OCD patients who were subsequently treated on an open-label basis with selective serotonin reuptake inhibitor ( SSRI ) , Bmax values at baseline differentiated between responders ( 1,718 + /- 1,041 fmol/mg ) and nonresponders ( 802 + /- 713 fmol/mg protein ; p < .05 ) . Response to SSRI was greatest in patients with a positive family history of OCD . Among responders ( n = 10 ) , baseline Yale-Brown Obsessive Compulsive Scale and Bmax were positively correlated ( r = .76 , p = .01 ) , as was Clinical Global Impression ( r = .67 , p = .03 ) . CONCLUSIONS Platelet 5HTPR capacity ( Bmax ) is reduced in children and adolescents with OCD , but not in those with TS . 5HTPR may be an indirect measure of basal serotonergic tone\n\n\nTitle 5:\nBiochemical changes during clomipramine treatment of childhood obsessive-compulsive disorder.\nAbstract 5:\nPeripheral measures of serotonergic and noradrenergic function were obtained in 29 obsessive-compulsive adolescents and 31 age- and sex-matched controls , as well as in a sub sample of 22 patients after five weeks of treatment with clomipramine hydrochloride ( 134 + /- 33 mg/d ) ( mean + /- SD ) given in a double-blind placebo-controlled trial . 
Drug-free obsessive-compulsive subjects did not differ from controls on measures of platelet serotonin and monoamine oxidase ( MAO ) activity , nor on plasma epinephrine or norepinephrine concentrations at rest and after a st and ard orthostatic challenge procedure . Compared with placebo , treatment with clomipramine was clinical ly effective and produced a marked decrease in platelet serotonin concentration , a trend toward a reduction in platelet MAO activity , and a rise in st and ing plasma norepinephrine . Clinical improvement during drug therapy was closely correlated with pretreatment platelet serotonin concentration and MAO activity , as well as with the decrease in both measures during clomipramine administration . This suggests that the effects of clomipramine on serotonin uptake may be essential to the antiobsessional action observed\n\n\nTitle 6:\nAssociation analysis of the catechol‐o‐methyltransferase (COMT ), serotonin transporter (5‐HTT ) and serotonin 2A receptor (5HT2A) gene polymorphisms with obsessive‐compulsive disorder\nAbstract 6:\nFamily and twin studies have supported a strong genetic factor in the etiology of obsessive‐compulsive disorder ( OCD ) , although the precise mechanism of inheritance is unclear . Clinical and pharmacological studies have implicated the serotonergic and dopaminergic systems in disease pathogenesis . In this cross‐sectional study , we have examined the allelic and genotypic frequencies of a Val‐158‐Met substitution in the COMT gene , a 44‐base pair ( bp ) length variation in the regulatory region of the serotonin transporter gene ( 5‐HTTLPR ) and the T102C and C516 T variants in the serotonin receptor type 2A ( 5HT2A ) gene in 79 OCD patients and 202 control subjects . There were no observed differences in the frequencies of allele and genotype between patients and control groups for the COMT , the 5HTTLPR and the T102C 5HT2A gene polymorphisms . In contrast , a statistically significant difference between OCD patients and controls was observed on the genotypic distribution ( χ2 = 16.7 , 2df , P = 0.0002 ) and on the allelic frequencies ( χ2 = 15.8 , 1df , P = 0.00007 ) for the C516 T 5HT2A gene polymorphism . The results suggest that the C516 T variant of the 5HT2A gene may be one of the genetic risk factors for OCD in our sample . However , further studies using larger sample s and family based methods are recommended to confirm these findings",
"role": "user"
},
{
"content": "In the overall meta- analysis we found no evidence of association between genetic variation at the 5-HTTLPR locus and OCD .\nIn the stratified meta-analyses , we demonstrated a significant association between the l-allele and OCD in family-based association studies and in studies involving children and Caucasians .\nOur meta- analysis suggests the possibility that the l-allele may be associated with OCD in specific OCD subgroups such as childhood-onset OCD and in Caucasians .\nFurther meta-analyses based on individual patient data would be helpful in determining whether age of OCD onset , gender and the presence of comorbid illness ( e.g. , tics ) moderates the relationship between 5-HTTLPR and OCD",
"role": "assistant"
}
] |
science.chemdner_ner | science.chemdner_ner.1481 | [
{
"content": "Find all [\"Chemical\"] entities in this text. Output a JSON object.\n\nText:\nTGF-β1 Conjugated to Gold Nanoparticles Results in Protein Conformational Changes and Attenuates the Biological Function. Gold nanoparticles (AuNPs) are widely used as carriers or therapeutic agents due to their great biocompatibility and unique physical properties. Transforming growth factor-beta 1 (TGF-β1), a member of the cysteine-knot structural superfamily, plays a pivotal role in many diseases and is known as an immunosuppressive agent that attenuates immune response resulting in tumor growth. The results reported herein reflect strong interactions between TGF-β1 and the surface of AuNPs when incubated with serum-containing medium, and demonstrate a time- and dose-dependent pattern. Compared with other serum proteins that can also bind to the AuNP surface, AuNP-TGFβ1 conjugate is a thermodynamically favored compound. Epithelial cells undergo epithelial-mesenchymal transition (EMT) upon treatment with TGF-β1; however, treatment with AuNPs reverses this effect, as detected by cell morphology and expression levels of EMT markers. TGF-β1 is found to bind to AuNPs through S-Au bonds by X-ray photoelectron spectroscopy. Fourier transform infrared spectroscopy is employed to analyze the conformational changes of TGF-β1 on the surface of AuNPs. The results indicate that TGF-β1 undergoes significant conformational changes at both secondary and tertiary structural levels after conjugation to the AuNP surface, which results in the deactivation of TGF-β1 protein. An in vivo experiment also shows that addition of AuNPs attenuates the growth of TGF-β1-secreting murine bladder tumor 2 cells in syngeneic C3H/HeN mice, but not in immunocompromised NOD-SCID mice, and this is associated with an increase in the number of tumor-infiltrating CD4(+) and CD8(+) T lymphocytes and a decrease in the number of intrasplenic Foxp3(+) lymphocytes. The findings demonstrate that AuNPs may be a promising agent for modulating tumor immunity through inhibiting immunosuppressive TGF-β1 signaling.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"cysteine\", \"S-Au\"]}",
"role": "assistant"
}
] |
science.medmentions_ner | science.medmentions_ner.690 | [
{
"content": "Extract all unique entities from this biomedical abstract. Only choose entities from these types: [\"Virus\", \"MedicalDevice\", \"BodySubstance\", \"SpatialConcept\", \"BiologicFunction\", \"ResearchActivity\", \"AnatomicalStructure\", \"ProfessionalOrOccupationalGroup\", \"HealthCareActivity\", \"BiomedicalOccupationOrDiscipline\", \"Bacterium\", \"Chemical\", \"InjuryOrPoisoning\", \"IntellectualProduct\", \"Finding\", \"ClinicalAttribute\", \"Organization\", \"Eukaryote\", \"Food\", \"BodySystem\", \"PopulationGroup\"].\n\nOutput a JSON object where keys are entity types and values are lists of extracted entities. If no entities found for a type, use empty list [].\n\nAbstract: Trends in Behavior-Analytic Gambling Research and Treatment The purpose of the present review was to analyze research outcomes for all gambling studies reported in the behavior analysis literature. We used the search term \" gambling \" to identify articles that were published in behaviorally oriented journals between the years 1992 and 2012 and categorized the content of each article as empirical or conceptual. Next, we examined and categorized the empirical articles by inclusion of an experimental manipulation and treatment to alleviate at least some aspect of pathological gambling, participant population used, type of gambling task employed in the research, whether the participants in the study actually gambled, and the behavioral phenomena of interest. The results show that the rate of publication of gambling research has increased in the last 6 years, and a vast majority of articles are empirical. Of the empirical articles, examinations of treatment techniques or methods are scarce; slot machine play is the most represented form of gambling, and slightly greater than half of the research included compensation based on gambling outcomes within experiments. We discuss implications and future directions based on these observations of the published literature.",
"role": "user"
},
{
"content": "{\"AnatomicalStructure\": [], \"Bacterium\": [], \"BiologicFunction\": [\"interest\", \"identify\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [], \"ClinicalAttribute\": [], \"Eukaryote\": [], \"Finding\": [\"examined\", \"employed\"], \"Food\": [], \"HealthCareActivity\": [\"manipulation\", \"Treatment\", \"treatment\", \"alleviate\", \"examinations\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [\"methods\", \"article\", \"literature\", \"articles\", \"published literature\", \"published\", \"reported\", \"publication\", \"journals\"], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [\"participant population\", \"participants\"], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [\"analyze\", \"experiments\", \"research\", \"Research\", \"studies\", \"study\"], \"SpatialConcept\": [\"directions\"], \"Virus\": []}",
"role": "assistant"
}
] |
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.51503 | [
{
"content": "Write a literature review chapter based on these paper abstracts.\nLiterature Review Title: Deep learning for molecular generation and optimization - a review of the state of the art \nChapter Title: Stability and synthesizability \n1. Abstract of Cited Paper (BIB001): The fast identification of quality lead compounds in the pharmaceutical industry through a combination of high throughput synthesis and screening has become more challenging in recent years. Although the number of available compounds for high throughput screening (HTS) has dramatically increased, large-scale random combinatorial libraries have contributed proportionally less to identify novel leads for drug discovery projects. Therefore, the concept of ‘drug-likeness’ of compound selections has become a focus in recent years. In parallel, the low success rate of converting lead compounds into drugs often due to unfavorable pharmacokinetic parameters has sparked a renewed interest in understanding more clearly what makes a compound drug-like. Various approaches have been devised to address the drug-likeness of molecules employing retrospective analyses of known drug collections as well as attempting to capture ‘chemical wisdom’ in algorithms. For example, simple property counting schemes, machine learning methods, regression models, and clustering methods have been employed to distinguish between drugs and non-drugs. Here we review computational techniques to address the drug-likeness of compound selections and offer an outlook for the further development of the field. © 2003 Wiley Periodicals, Inc. Med Res Rev, 23, No. 3, 302-321, 2003 \n2. Abstract of Cited Paper (BIB002): The occurrence of idiosyncratic adverse drug reactions during late clinical trials or after a drug has been released can lead to a severe restriction in its use and even in its withdrawal. Metabolic activation of relatively inert functional groups to reactive electrophilic intermediates is considered to be an obligatory event in the etiology of many drug-induced adverse reactions. Therefore, a thorough examination of the biochemical reactivity of functional groups/structural motifs in all new drug candidates is essential from a safety standpoint. A major theme attempted in this review is the comprehensive cataloging of all of the known bioactivation pathways of functional groups or structural motifs commonly utilized in drug design efforts. Potential strategies in the detection of reactive intermediates in biochemical systems are also discussed. The intention of this review is not to \"black list\" functional groups or to immediately discard compounds based on their potential to form reactive metabolites, but rather to serve as a resource describing the structural diversity of these functionalities as well as experimental approaches that could be taken to evaluate whether a \"structural alert\" in a new drug candidate undergoes bioactivation to reactive metabolites. \n3. Abstract of Cited Paper (BIB003): With de novo rational drug design, scientists can rapidly generate a very large number of potentially biologically active probes. However, many of them may be synthetically infeasible and, therefore, of limited value to drug developers. On the other hand, most of the tools for synthetic accessibility evaluation are very slow and can process only a few molecules per minute. 
In this study, we present two approaches to quickly predict the synthetic accessibility of chemical compounds by utilizing support vector machines operating on molecular descriptors. The first approach, RSsvm, is designed to identify the compounds that can be synthesized using a specific set of reactions and starting materials and builds its model by training on the compounds identified as synthetically accessible or not by retrosynthetic analysis. The second approach, DRsvm, is designed to provide a more general assessment of synthetic accessibility that is not tied to any set of reactions or starting materials. The training set compounds for this approach are selected from a diverse library based on the number of other similar compounds within the same library. Both approaches have been shown to perform very well in their corresponding areas of applicability with the RSsvm achieving a receiver operator characteristic score of 0.952 in cross-validation experiments and the DRsvm achieving a score of 0.888 on an independent set of compounds. Our implementations can successfully process thousands of compounds per minute. \n4. Abstract of Cited Paper (BIB004): Drug-likeness is a key consideration when selecting compounds during the early stages of drug discovery. However, evaluation of drug-likeness in absolute terms does not reflect adequately the whole spectrum of compound quality. More worryingly, widely used rules may inadvertently foster undesirable molecular property inflation as they permit the encroachment of rule-compliant compounds towards their boundaries. We propose a measure of drug-likeness based on the concept of desirability called the quantitative estimate of drug-likeness (QED). The empirical rationale of QED reflects the underlying distribution of molecular properties. QED is intuitive, transparent, straightforward to implement in many practical settings and allows compounds to be ranked by their relative merit. We extended the utility of QED by applying it to the problem of molecular target druggability assessment by prioritizing a large set of published bioactive compounds. The measure may also capture the abstract notion of aesthetics in medicinal chemistry. \n5. Abstract of Cited Paper (BIB005): A compound’s synthetic accessibility (SA) is an important aspect of drug design, since in some cases computer-designed compounds cannot be synthesized. There have been several reports on SA prediction, most of which have focused on the difficulties of synthetic reactions based on retro-synthesis analyses, reaction databases and the availability of starting materials. We developed a new method of predicting SA using commercially available compound databases and molecular descriptors. SA was estimated from the probability of existence of substructures consisting of the compound in question, the number of symmetry atoms, the graph complexity, and the number of chiral centers of the compound. The probabilities of the existence of given substructures were estimated based on a compound library. The predicted SA results reproduced the expert manual assessments with a Pearson correlation coefficient of 0.56. Since our method required a compound database and not a reaction database, it should be easy to customize ... \n6. Abstract of Cited Paper (BIB006): In unsupervised data generation tasks, besides the generation of a sample based on previous observations, one would often like to give hints to the model in order to bias the generation towards desirable metrics. 
We propose a method that combines Generative Adversarial Networks (GANs) and reinforcement learning (RL) in order to accomplish exactly that. While RL biases the data generation process towards arbitrary metrics, the GAN component of the reward function ensures that the model still remembers information learned from data. We build upon previous results that incorporated GANs and RL in order to generate sequence data and test this model in several settings for the generation of molecules encoded as text sequences (SMILES) and in the context of music generation, showing for each case that we can effectively bias the generation process towards desired metrics. \n7. Abstract of Cited Paper (BIB007): This work presents a proof-of-concept study in artificial-intelligence-assisted (AI-assisted) chemistry where a machine-learning-based molecule generator is coupled with density functional theory (DFT) calculations, synthesis, and measurement. Although deep-learning-based molecule generators have shown promise, it is unclear to what extent they can be useful in real-world materials development. To assess the reliability of AI-assisted chemistry, we prepared a platform using a molecule generator and a DFT simulator, and attempted to generate novel photofunctional molecules whose lowest excited states lie at desired energetic levels. A 10 day run on the 12-core server discovered 86 potential photofunctional molecules around target lowest excitation levels, designated as 200, 300, 400, 500, and 600 nm. Among the molecules discovered, six were synthesized, and five were confirmed to reproduce DFT predictions in ultraviolet visible absorption measurements. This result shows the potential of AI-assisted chemist...\n\nLiterature Review:",
"role": "user"
},
{
"content": "Enforcement of synthesizability has thus far mainly been done using the synthetic accessibility (SA) score developed by Ertl & Schuffenhauer, 157 although other synthesizability scoring functions exist. BIB004 BIB006 The model underlying the SA score was designed and fit specifically to match scores from synthetic chemists on a set of drug molecules, and therefore may be of limited applicability to other types of molecules. When using SA score as a reward in their molecular autoencoder, Gómez-Bombarelli et al. found that it still produced a large number of molecules with unrealistically large carbon rings. Therefore, they added an additional \"RingPenalty\" term to penalize rings with more than six carbons. In the OR-GANIC GAN code, Sánchez-Lengling et al. added several penalty terms to the original SA score function, and also developed an additional reward for chemical symmetry, based on the observation that symmetric molecules are typically easier to synthesize. For drug molecules, the use of scoring functions developed to estimate how \"drug-like\" or \"natural\" a compound is can also help improve the synthesizability, stability, and usefulness of the generated molecules. BIB007 Examples of such functions include Lipinski's Rule of Five score, 5 the natural product-likeness score, 160 the quantitative estimate of drug-likeness, BIB005 and the Muegge metrics. BIB001 BIB002 Another option of particular utility to drug discovery is to apply medicinal chemistry filters either during training or post-training to tag unstable, toxic, or unsynthesizable molecules. For drug molecules, catalogs of problematic functional groups to avoid have been developed in order to limit the probability of unwanted side-effects. BIB003 For energetic molecules and other niche domains an analogous set of functional groups to avoid has yet to be developed. Finally, many software packages exist for checking molecules which may be integrated into training or as a post-training filter. For example Popova et al. use the ChemAxon structure checker software to do a validity check on the generated molecules. Bjerrum et al. use the Wiley ChemPlanner software post-training to find synthesis routes for 25-55% of the molecules generated by their RNN. Finally, Sumita et al. check for previously discovered synthetic routes for their generated molecules using a SciFinder literature search. BIB008",
"role": "assistant"
}
] |
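Several of the filters the chapter above leans on, most notably the Ertl-Schuffenhauer SA score, ship with common cheminformatics toolkits. The sketch below scores a single molecule with the SA_Score contrib module bundled with RDKit; the path handling assumes a standard RDKit install, and the SMILES string is just an example molecule.

```python
# Score synthetic accessibility (1 = easy ... 10 = hard) with RDKit's
# bundled implementation of the Ertl-Schuffenhauer SA score.
import os
import sys

from rdkit import Chem
from rdkit.Chem import RDConfig

sys.path.append(os.path.join(RDConfig.RDContribDir, "SA_Score"))
import sascorer  # lives under rdkit/Contrib/SA_Score in a standard install

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin, for example
print("SA score:", round(sascorer.calculateScore(mol), 2))
```

A generative pipeline would typically apply this per sampled molecule, alongside ring-size penalties of the kind described above, before anything reaches a reward function or a medicinal-chemistry filter.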
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.16232 | [
{
"content": "Summarize the article in an uncomplicated manner.\n\nTitle: Genome-Wide Diet-Gene Interaction Analyses for Risk of Colorectal Cancer\nArticle:\nAbstract:\nDietary factors, including meat, fruits, vegetables and fiber, are associated with colorectal cancer; however, there is limited information as to whether these dietary factors interact with genetic variants to modify risk of colorectal cancer. We tested interactions between these dietary factors and approximately 2. 7 million genetic variants for colorectal cancer risk among 9, 287 cases and 9, 117 controls from ten studies. We used logistic regression to investigate multiplicative gene-diet interactions, as well as our recently developed Cocktail method that involves a screening step based on marginal associations and gene-diet correlations and a testing step for multiplicative interactions, while correcting for multiple testing using weighted hypothesis testing. Per quartile increment in the intake of red and processed meat were associated with statistically significant increased risks of colorectal cancer and vegetable, fruit and fiber intake with lower risks. From the case-control analysis, we detected a significant interaction between rs4143094( 10p14/near GATA3) and processed meat consumption( OR = 1. 17; p = 8. 7E-09), which was consistently observed across studies( p heterogeneity = 0. 78). The risk of colorectal cancer associated with processed meat was increased among individuals with the rs4143094-TG and-TT genotypes( OR = 1. 20 and OR = 1. 39, respectively) and null among those with the GG genotype( OR = 1. 03). Our results identify a novel gene-diet interaction with processed meat for colorectal cancer, highlighting that diet may modify the effect of genetic variants on disease risk, which may have important implications for prevention.\nIntroduction:\nColorectal cancer is the third most common neoplasm and the third leading cause of cancer death in both men and women across most ethnic-racial groups[1]. Intake of various dietary factors, most notably, meat, fruits/vegetables, and fiber, have been extensively investigated in relation to colorectal cancer risk. Overall, the evidence suggests that consumption of red and processed meat modestly increase the risk of colorectal cancer[2],[3]; and fruits[4], vegetables[4],[5], and fiber[6]–[8] decrease risk, although these associations have not been observed across all studies[2],[9],[10], perhaps due to methodological differences and unaccounted modifying effects. More recently, studies have focused on the potential modifying effects of common genetic variants, single nucleotide polymorphisms( SNPs), on the relationship between dietary factors and risk of colorectal cancer. However, attention has largely focused on candidate SNPs in genes directly involved in the metabolism of selected nutrients; for example, metabolism of B-vitamins[11], key nutrients found in fruits and vegetables; or the metabolism of carcinogenic by-products resulting from cooking or processing of meat[12]. From these candidate gene/pathway-approaches, few genetic variants have been consistently identified and further investigation is warranted. Large datasets from genome-wide association studies of colorectal cancer are now available for a comprehensive analysis of gene-diet interactions on the risk of colorectal cancer. 
To date, one genome-wide study of gene-diet interactions focusing on microsatellite stable/microsatellite-instability low colorectal cancer( 1, 191 cases, 990 controls) reported no statistically significant gene-diet interactions after replication in an independent dataset[13]. The authors highlighted the need for collaborative consortia to increase sample size, with central quality control procedures and careful standardization and harmonization of definitions and measurements. Hutter et al., using data from the Genetics and Epidemiology of Colorectal Cancer Consortium( GECCO) on 7, 106 colorectal cancer cases and 9, 723 controls from 9 studies focused on 10 previously identified colorectal cancer-susceptibility loci and conducted a systematic search for interaction with selected lifestyle and dietary factors. The strongest statistical evidence was observed for interaction for vegetable consumption and rs16892766, located on chromosome 8q23. 3 near the EIF3H and UTP23 genes( p = 1. 3E-04)[14]. In this large combined analysis using GECCO from 10 case-control and nested cohort studies comprising 9, 287 colorectal cancer cases and 9, 120 controls, we build upon these previous reports[13],[14] to examine over 2. 7 million common polymorphisms for multiplicative interactions with selected dietary factors( red meat, processed meat, fiber, fruit and vegetables) and risk of colorectal cancer. For our primary analyses we used conventional case-control logistic regression that included an interaction term as well as our recently developed Cocktail method, which integrates several novel GxE methods to improve statistical power under various scenarios[15].\nDiscussion:\nGenome-wide scans have successfully identified numerous risk loci for colorectal cancer; consortia pooling multiple studies for increased statistical power have continued to identify additional susceptibility loci[18]–[24]. However, only limited work has been pursued at a genome-wide scale to identify gene-diet interactions. Using individual-level data from ten studies with harmonized dietary intake variables on a total of over 9, 000 cases and 9, 000 controls, we have conducted a genome-wide analysis for GxE interactions. Using conventional statistical methods, as well as our novel method aiming to improve statistical power, we provide evidence for a novel interaction between rs4143094 and processed meat intake. The variants in the 10p14 region interacting with processed meat consumption reside within and upstream of GATA binding protein 3( GATA3) gene. GATA3 has long been associated with T cell development, specifically Th2 cell differentiation[25]. GATA3 is up-regulated in ulcerative colitis[26], which is associated with increased risk of colorectal cancer[27]. However, the role of GATA genes as transcription factors extends to epithelial structures with a known role in breast, prostate and other cancers[28]–[30]. GATA factors are involved in cellular maturation with proliferation arrest and cell survival. Loss of GATA genes or silencing of expression have been described for breast, colorectal and lung cancers[30]. To further explore this locus, we evaluated the potential functional impact of the most significant SNP in this locus as well as correlated SNPs querying multiple bioinformatics databases, such as Encode and NIH Roadmap( Table S6). The most significant SNP rs4143094 is about 7. 2 kb upstream of GATA and resides in a 9. 5 kb LD block( r2>0. 
8) containing 19 highly correlated SNPs, including rs1269486, which shows the third most significant interaction in this region( Table 1). The rs1269486 variant is located 1420 bases upstream of GATA3 in a region of open chromatin( DNase I hypersensitivity) with histone methylation patterns consistent with promoter activity in a colorectal cancer cell line( CACO2; Figure S1). As would be expected of a promoter region, experimental evidence supports Pol2 binding along with the transcription factors c-Fos, JunD, and c-Jun[31]. Many of the other SNPs upstream of GATA3 are located in GATA3-antisense RNA1( GATA3-AS1)( formerly FLJ45983). GATA3-AS1 is a non-coding RNA that may regulate GATA3 transcript levels in the cell. Further studies are required to elucidate the relationship between GATA3 and GATA3-AS1 and determine whether variants in the 10p14 region cause perturbations in regulation. A plausible though speculative biological basis for our findings is that processed meat triggers a pro-tumorigenic inflammatory or immunological response[32] that may necessitate proper GATA3 transcription levels. Nonetheless, the precise mechanism by which deregulation of GATA3 is linked to colorectal cancer upon consumption of high levels of processed meat remains unclear. Further study of the role of variants in GATA3 in colorectal cancer will yield more insight into their functional significance. The interaction between variants in locus 10p14 and processed meat were identified by the conventional case-control logistic regression analysis. This locus was not identified through our Cocktail method or any of the other exploratory methods( Text S2). However, this is not surprising given that the SNPs in this locus are not strongly associated with colorectal cancer( p = 0. 26 for rs4143094) and not strongly correlated with processed meat( p = 0. 25 for rs4143094) and, accordingly, SNPs in this locus were not prioritized in the Cocktail analysis. However, we were somewhat surprised to not identify additional interactions with any of the dietary factors using our Cocktail method, given the expected improvement in power under various scenarios. We recognize that the field of GxE analyses is at an early stage compared with studies for marginal gene-diseases associations. It will be important to see more large-scale empirical GxE studies to judge the impact and potential power gain of the novel GxE methods. Our analysis has some limitations and notable strengths. We adopted a flexible approach to data harmonization of dietary factors, in a similar fashion to those proposed by other projects[33],[34]. We focused on dietary variables that were collected in a similar manner and allowed for harmonization across a large subset of the studies. Ideally, our findings will be replicated in other populations. While a substantial larger number of GWAS have been conducted for colorectal cancer, limited studies have collected information on processed meat and other dietary variables. In the present study, we did not divide our large sample into discovery and replication sets, as it has been shown that the most powerful analytical approach is a combined analysis across all studies[35]. This approach is increasingly used as more samples with GWAS data are becoming available[36]. Importantly, we observed no evidence of heterogeneity in the estimates by study, which suggests that results are consistent across studies. 
We not only used the conventional case-control logistic regression, but also took advantage of our recently developed Cocktail method as a second primary analysis approach to potentially improve statistical power. We note that even though for the Cocktail method different interaction tests (case-only and case-control) were used depending on the screening step, the overall genome-wide type I error is controlled at 0.05 (the genome-wide level of α was set to 5E-08), just like the conventional case-control method. As we investigated five dietary factors and used two primary methods, additional adjustment for multiple comparisons may be warranted. However, we want to point out that the dietary variables were correlated (e.g., the correlation between fruits and vegetables was 0.38, between fruits and fiber was 0.52, and between red and processed meat was 0.62), and adjustment for these non-independent tests is less straightforward. Similarly, the primary methods are not independent of each other; for instance, the testing step of the Cocktail method used case-control or case-only testing, which is consistent or correlated with the conventional case-control analysis. Accordingly, additional multiple-comparison adjustment for 5 variables and 2 tests would be too conservative; nevertheless, our interaction finding for 10p14 and processed meat would likely remain marginally significant. With the investment in large GWAS consortia built on well-characterized studies, we are now well-positioned to identify potential interactions between genetic loci and environmental risk factors with respect to colorectal cancer risk. In this study, we have identified a novel interaction between rs4143094 and processed meat. This genetic locus may have interesting biological significance given its proximity to genes plausibly associated with pathways relevant to colorectal carcinogenesis. Nonetheless, further functional analysis is required to uncover the specific mechanisms by which this genetic locus modulates the association between intake of processed meat and colorectal cancer risk.",
"role": "user"
},
{
"content": "High intake of red and processed meat and low intake of fruits , vegetables and fiber are associated with a higher risk of colorectal cancer . We investigate if the effect of these dietary factors on colorectal cancer risk is modified by common genetic variants across the genome ( total of about 2 . 7 million genetic variants ) , also known as gene-diet interactions . We included over 9 , 000 colorectal cancer cases and 9 , 000 controls that were not diagnosed with colorectal cancer . Our results provide strong evidence for a gene-diet interaction and colorectal cancer risk between a genetic variant ( rs4143094 ) on chromosome 10p14 near the gene GATA3 and processed meat consumption ( p = 8 . 7E-09 ) . This genetic locus may have interesting biological significance given its location in the genome . Our results suggest that genetic variants may interact with diet and in combination affect colorectal cancer risk , which may have important implications for personalized cancer care and provide novel insights into prevention strategies .",
"role": "assistant"
}
] |
science.scicite_classification | science.scicite_classification.3438 | [
{
"content": "Classify the intent of this citation sentence by choosing from: [\"Method\", \"Background\", \"Result\"].\n\nMethod: Cites for methodology\nBackground: Cites for background information\nResult: Cites for findings\n\nCitation: In addition, although the C-terminal cleavage fragment activates cytokine production elicited by CpG-DNA or NF-kB signaling, the levels of TNF-a production or NF-kB activation seem to be lower than those seen with full-length TLR9 (10, 19).",
"role": "user"
},
{
"content": "Background",
"role": "assistant"
}
] |
science.chemsum_single_document_summarization | science.chemsum_single_document_summarization.34728 | [
{
"content": "Write an abstract for this chemistry paper.\n\nTitle:\nQuantitative Characterization of Configurational Space Sampled by HIV-1 Nucleocapsid Using Solution NMR, X-ray Scattering and Protein Engineering\n\nPaper sections:\n\n<p>The human immunodeficiency virus 1 (HIV-1) nucleocapsid protein plays a critical role in various stages of the HIV-1 replication cycle including viral genome recognition and packaging, immature virion assembly and reverse transcription.[1] Nucleocapsid is a 55-residue polypeptide, one of the smallest constituents of the Gag precursor polyprotein, comprising two highly conserved zinc (Zn) knuckles that are separated by a basic linker (407APRKKG412; numbering based on Gag polyprotein precursor from strain HXB2) and flanked by disordered N- and C-termini (Figure 1A). Both Zn knuckles—N-Zn and C-Zn—are characterized by a C-X2-C-X4-H-X4-C motif. Despite their high (≈57%) sequence identity (Figure 1B), the two Zn knuckles are functionally distinct,[2] and enable nucleocapsid to interact with almost any nucleic acid, albeit with different specificity. Nucleocapsid binds to single stranded RNA with very high affinity culminating in selective packaging of the viral genome, and drives nucleic acids into thermodynamically stable conformations, a vital prerequisite for reverse transcription.[3] The exact mechanism for this chaperone activity is unclear, but is thought to involve entropy exchange[4] whereby binding to nucleic acids is coupled to a structural transition in nucleocapsid and partial melting of the oligonucleotide, allowing the oligonucleotide to sample multiple conformations, eventually resulting in a stable protein-nucleic acid complex.</p><p>The functional versatility of nucleocapsid relies upon plasticity in the spatial orientation of the two Zn knuckles, a quantitative description of which would provide a deeper mechanistic understanding of nucleocapsid function. Several structures of intact nucleocapsid have been solved by NMR in the presence of nucleic acids which lock the two Zn knuckles into a single configuration.[6,7] In the absence of nucleic acids, however, 15N relaxation data indicate significant motion between the two knuckles, precluding crystallization, and the few, observed weak inter-knuckle nuclear Overhauser enhancements (NOE) are inconsistent with a single structure.[8] Moreover, both the N- and the C-termini, which constitute significant portions of nucleocapsid, are unstructured in the absence of nucleic acids, further complicating structural characterization. In such cases integrating NMR-derived residual dipolar couplings (RDCs) with small angle solution X-ray scattering (SAXS) can be used to generate conformational ensembles that accurately describe the accessible conformational space.[9] This is because RDCs acquired using a steric alignment medium, such as neutral bicelles, provide orientational information on bond vectors relative to an external alignment tensor whose elements (magnitude and orientation) can be directly calculated from molecular shape (derived from the coordinates) using a steric obstruction model.[9,10] SAXS provides complementary information on overall molecular size and shape. Combining these two different sets of experimental data is necessary to remove their respective intrinsic ambiguities. 
Here we make use of a novel simulated annealing approach driven by RDC and SAXS data acquired on two engineered nucleocapsid constructs—full length (residues 378–432) and truncated (residues 390–428), where the latter lacks the disordered N- and C-termini of the former (Figure 1A), to obtain a quantitative description of the configurational space sampled by the two Zn-knuckles relative to one another, as well as the disposition of the disordered tails.</p><p>Comparison of 1HN/15N chemical shifts of full-length and truncated nucleocapsid, acquired under identical experimental conditions, reveal only a few perturbations—specifically for Glu398 and Gly399 in the N-Zn knuckle and Lys415 in the C-Zn knuckle, primarily due to interactions between the disordered termini and surface exposed regions of the knuckles (Supporting Information, Figure S1 A). The 1HN/15N chemical shifts for the linker residues (407APRKKG412) connecting the N-and C-Zn knuckles are virtually identical for the two constructs indicating that the conformational and dynamic properties of the linker remain unchanged.</p><p>The structures of the knuckles are unaffected by the presence or absence of the disordered termini as the RDCs of the individual knuckles (measured in neutral lipid bicelles) display excellent agreement to those calculated from the coordinates of the respective knuckles taken from the NMR structure of a nucleocapsid–inhibitor complex (PDB 2M3Z)[11] with RDC R-factors[13] less than 25% (Figure 2). Moreover, the magnitude (Da) and rhombicity (η) of the alignment tensors calculated for the N-Zn and C-Zn knuckles are different from one another (especially evident for the full-length construct; cf. right panels in Figure 2), indicating that the two knuckles align (and therefore tumble) semi-independently of one another.[14] Although the RDCs for the full-length and truncated constructs are correlated to one another, the correlation is poor (ρ=0.83 and 0.75 for N-Zn and C-Zn, respectively; Supporting Information Figure S1 B) as the presence of disordered tails alters the overall shape of the conformational ensemble sampled by nucleocapsid resulting in different alignment tensors.</p><p>As the useful residue-specific experimental data for the disordered N- and C-terminal tails are limited to RDCs, we used a two-stage calculation strategy. Specifically, RDC and SAXS-driven ensemble simulated annealing was used to first characterize the conformational space sampled by the N-Zn and C-Zn knuckles in the truncated nucleocapsid construct lacking the tails. Subsequently, the full-length construct was used to validate the conformational ensemble obtained for the truncated construct and to characterize the conformational space sampled by the tails. Full details of the calculation strategy using Xplor-NIH[15] is provided in the Supporting Information.</p><p>The initial coordinates for the simulated annealing calculations were taken from PDB 2M3Z.[11] The N-Zn and C-Zn knuckles were treated as rigid bodies while the linker (residues 407–412) was given all degrees of freedom. Calculations were carried out with ensemble sizes Ne ranging from one to eight, and the population weights of the ensemble members were optimized during the course of simulated annealing (Figure 3A). While a Ne=1 ensemble can account for the SAXS data, it fails to account satisfactorily for the RDC data. 
The optimal ensemble size for which both RDC R-factors for the N-Zn and C-Zn knuckles reach their target values (from the SVD fits to the individual Zn knuckles; cf. Figure 2) and the SAXS χ2 approaches a value of one, is Ne=3. Agreement between observed and calculated RDC and SAXS data for the Ne=3 truncated ensemble is shown in the left panels of Figures 3B and C, respectively, and the results are summarized in Table 1. The Ne=3 ensemble is characterized by two major, approximately equally populated, clusters (pcluster1≈0.40; pcluster2≈0.50) and one minor cluster (pcluster3 ≈0.10) which are best visualized when the ensembles are fit to the C-Zn knuckle (Figure 4A). Since only a single set of 1HN/15N cross-peaks is observed, interconversion between the three states is fast on the chemical shift time scale (i.e. sub-millisecond).</p><p>The results obtained for the truncated nucleocapsid were validated against the RDC and SAXS data for full-length nucleocapsid as follows. The ten lowest energy Ne=3 ensembles for the truncated nucleocapsid were used as starting coordinates with the disordered N- and C-terminal tails added to the coordinates. For each ensemble member, the tails were represented by a five-member sub-ensemble. The knuckles and linker for each ensemble member were held fixed in the configuration obtained from the calculations with the truncated construct, while only the tails were allowed to move (in torsion space) subject to RDC and SAXS-driven simulated annealing. Seven of the ten resulting ensembles satisfy both the RDC and SAXS data within experimental error for the full-length nucleocapsid, including the RDCs within the tails (Figures 3B and 3C, right panels, and Table 1). Three of the full-length ensembles, however, showed significant deviations for the N-C′ and/or Cα-C′ RDCs of Arg 409 (the third residue of the linker) due to very subtle changes in the distribution of the ϕ/ψ ensemble for this residue (see Supporting Information Figure S2), illustrating the discriminating power of cross-validation against the full-length nucleocapsid data. Note also how the two-fold increase in the span of the RDCs for the N-Zn (≈ ±20 Hz) versus C-Zn (≈ ±10 Hz) knuckle of the full-length nucleocapsid is faithfully reproduced. The origin of this effect lies in the impact of the longer N-terminal tail (12 residues) relative to the shorter C-terminal one (4 residues) on the alignment of the respective adjacent Zn-knuckles.</p><p>Reweighted atomic probability maps[16] generated from the seven lowest energy full-length nucleocapsid ensembles are shown in Figure 4, and depict the positional distributions of the N- and C-Zn of nucleocapsid relative to one another. While the overall configurational space sampled by the full-length nucleocapsid is large, the conformational distribution of the N-Zn is clearly distinct from that of the C-Zn as a result of differential flexibility along the linker region, with increased conformational flexibility from the N- to the C-terminal end of the linker. Thus, the three clusters are clearly visualized when best-fitting to the C-Zn knuckle where the N-Zn knuckle exhibits a boomerang-like distribution (Figure 4A), but not when best-fitting to the N-Zn knuckle (Figure 4B).</p><p>Examination of the ϕ/ψ backbone torsion angles for the linker residues reveal that interconversion between the three configurational states involves changes in ϕ/ψ angles within the most favorable regions of Ramachandran space (Supporting Information, Figure S2). 
For Ala 407, Pro 408, Arg 409 and Lys410, the variations in ϕ/ψ are relatively minor, occur within the β-region and therefore involve minimal energy barriers. Likewise, the larger variation in ϕ/ψ space sampled by Gly 412 involves contiguous low energy regions. For Lys411, however, significant barriers have to be overcome since hops between β, α-helix and possibly left-handed helix regions are observed. The larger ϕ/ψ space sampled by Lys411 and Gly 412 is fully consistent with their smaller (0 to 0.1) heteronuclear 15N-{1H} NOE values compared to those for Ala 407 (0.46), Arg 409 (0.33) and Lys410 (0.36) (Figure 4C), in agreement with previous NMR relaxation data.[8e] Interestingly, the residues at either end of the linker, Pro 408 and Gly 412, are highly (≥99%) conserved in all HIV-1 strains,[5] and further, Pro 408 and Arg 409 are completely conserved in HIV-1, HIV-2 and SIV (Figure 1C). Moreover, mutational analysis of linker residues indicates that both Pro 408 and Arg 409 are essential for HIV-1 replication,[17] which would support the functional importance of Pro 408 in limiting the conformational flexibility at the N-terminal end of the linker.</p><p>There are inter-knuckle contacts ≤ 4 Å in Clusters 1 and 2, but none in Cluster 3 (Supporting Information, Figure S3). Those in cluster 1 are fully consistent with the previously observed weak inter-knuckle 1H-1H NOEs.[8d,e] All three clusters exhibit contacts ≤ 4 Å between Asn393 of the N-Zn knuckle and residues within the linker, also consistent with previously reported NOE data.[8e] The majority of residues involved in inter-knuckle and knuckle-linker contacts are highly conserved throughout all HIV-1 strains (Supporting Information, Figure S3).</p><p>Comparison of the three spatial configurations of the Zn-knuckles of free nucleocapsid with that observed in ligand-bound forms complexed to either DNA[6c] or RNA[6b] indicates that cluster 1 is closest to the ligand-bound conformation (Figure 4D). Thus, ligand binding involves a degree of conformational selection in which a configuration close, but not necessarily identical, to the final ligand-bound form is preferentially recognized by the ligand.</p><p>It is tempting to speculate from the current structural and dynamic data that differential flexibility of the residues within the linker connecting the N- and C-Zn knuckles of nucleocapsid may play an important role in the functional diversity of nucleocapsid. For example, the binding specificity of nucleocapsid is dependent upon the oligomerization state of the Gag precursor, and varies from high affinity towards A-rich mRNA sequences in an assembled immature Gag lattice to GU-rich motifs in cytosol and mature virions, as well as under in vitro conditions.[18]</p><p>The coordinates and experimental restraints have been deposited in the Protein Data Bank accession code 5I1R.</p>",
"role": "user"
},
{
"content": "Nucleic-acid-related events in the HIV-1 replication cycle are mediated by nucleocapsid, a small protein comprising two zinc knuckles connected by a short flexible linker and flanked by disordered termini. Combining experimental NMR residual dipolar couplings, solution X-ray scattering and protein engineering with ensemble simulated annealing, we obtain a quantitative description of the configurational space sampled by the two zinc knuckles, the linker and disordered termini in the absence of nucleic acids. We first compute the conformational ensemble (with an optimal size of three members) of an engineered nucleocapsid construct lacking the N- and C-termini that satisfies the experimental restraints, and then validate this ensemble, as well as characterize the disordered termini, using the experimental data from the full-length nucleocapsid construct. The experimental and computational strategy is generally applicable to multidomain proteins. Differential flexibility within the linker results in asymmetric motion of the zinc knuckles which may explain their functionally distinct roles despite high sequence identity. One of the configurations (populated at a level of \\xe2\\x89\\x8840%) closely resembles that observed in various ligand-bound forms, providing evidence for conformational selection and a mechanistic link between protein dynamics and function.",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_arxiv | science.scientific_papers_summarization_single_doc_arxiv.105052 | [
{
"content": "Given the full text of a scientific research paper, write an abstract section for the paper. Your response should contain only the abstract without any other text.\n\nArticle text:\nseveral x - ray point sources with no radio counterpart , generically called ccos , have recently been detected near the center of snrs . in most cases ,\nthese sources are claimed to be the neutron stars ( nss ) left behind after the supernova explosions .\none of them , 1e 1207.4 - 5209 , was found to lie at the center of an depression ( giacani et al .\n2000 ) , raising the question of whether the hot atmosphere of the ns is capable of heating up the neighboring gas and producing the observed depression .\nwe present the results of a search for similar traces in the interstellar gas towards a sample of ccos in southern snrs .\nthe environs of the ccos rx j08224300 in puppis a , 1e 1613485055 in rcw 103 , cxou j08521.4461753 in vela jr and 1 wga j1713.43949 in g347.30.5 were observed in the @xmath1 cm line and in continuum using the australia telescope compact array .\nthe data were combined with single dish data from the parkes telescope ( mcclure - griffiths et al .\nrx j08224300 , the cco in puppis a , was found to lie between two opposite lobe - like minima ( reynoso et al . 2003a ) .\nthe lobes are aligned with the proper motion of the cco , assuming that the explosion site of the supernova is given by the optical expansion center measured by winkler et al .\nthe lobes are centered at the same systemic velocity previously measured for puppis a , + 16 .\nreynoso et al .\n( 2003a ) propose that this structure is created by the ejection of two opposite jets from the cco .\nmore x - ray observations towards rx j08224300 are needed to search for jets ( like in vela or the crab pulsar ) and measure the cco s proper motion\n. at + 3 , the shows another depression coincident with rx j08224300 , but in this case the morphology and size are similar to the minimum found by giacani et al .\n( 2000 ) around the cco associated with g296.5 + 10.0 .\nrcw 103 represents the third case in which the associated cco has created an depression ( reynoso et al .\n2003b ) . in all three cases ,\nthese features have sizes of 1 to 3 pc , the ccos are off - centered by @xmath2 pc , and the missing masses are estimated to be 0.1 to 0.3 @xmath3 .\nit is very unlikely that these depressions are due to self - absorption , since the involved temperatures should be @xmath4 k , too low for a snr interior .\nmost probably , the ccos swept up the surrounding gas ( reynoso et al .\n2003b ) . in all cases ,\nthe measured column densities favor blackbody rather than power law fits to the x - ray spectra .\nfor g347.30.5 , a preliminary analysis of our data did not allow us to find any feature suggestive of being associated with the cco down to a limit of @xmath5 k ( 3@xmath6 ) . at a velocity compatible with the distance proposed to this snr , there appears a tiny depression , marginally enclosing the cco , but it does not look different than many other spots in the image . 
finally , the radio continuum data towards the snr vela jr reveal an elongated nebula , approximately @xmath7 in length and @xmath8 in width , centered at the position of the cco .\nin addition , a compact source is found at the location of the cco .\nthe flux of this compact source is 7.2 , and its size , @xmath9 .\npolarization and spectral index studies will provide information to confirm if this emission arises from the pulsar wind nebula created by cxou j085201.4461753 .\nsuch observations are proposed early in 2004 .\ngiacani , e. b. , dubner , g. m. , green , a. j. , goss , w. m. , gaensler , b. m. , 2000 , , 119 , 281 mcclure griffiths , n. m. , green , a. j. , dickey , j. , m. , gaensler , b. m. , haynes , r. f. , wieringa , m. h. , 2001 , , 551 , 394 reynoso , e. m. , green , a. j. , johnston , s. , dubner , g. m. , giacani , e. b. , goss , w. m. , 2003a , , 345 , 671 reynoso , e. m. , green , a. j. , johnston , s. , goss , w. m. , dubner , g. m. , giacani , e. b. , 2003b , pasa , in press ( astro - ph/0310100 ) winkler , p. f. , turtle , j. h. , kirshner , r. p. , 1988\n, in supernova remnants and the interstellar medium , cambridge university press , p65",
"role": "user"
},
{
"content": "we have carried out an survey towards x - ray central compact objects ( ccos ) inside supernova remnants ( snrs ) which shows that many of them are placed within local minima . \n the nature of these minima is not clear , but the most likely explanation is that the ccos have evacuated the neighboring gas . \n this survey also allowed us to detect a weak , diffuse radio nebula inside the snr vela jr , probably created by the winds of its associated cco . \n # 1_#1 _ # 1_#1 _ = 2@xmath0 # 1 1.25 in .125 in .25 in",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_arxiv | science.scientific_papers_summarization_single_doc_arxiv.127355 | [
{
"content": "Write an abstract for the research paper below. Your response should only include the abstract and no additional text.\n\nArticle text:\nwood & churchwell ( 1989a ; hereafter wc89 ) published the first survey of a new class of radio continuum sources in the galaxy called ultracompact ( uc ) hii regions .\nthese sources represent a subset of `` standard '' hii regions characterized by unusually small sizes ( @xmath00.01 pc ) and high emission measures ( @xmath110@xmath2 pc @xmath3 ) , as derived from high angular resolution imaging observations with the very large array ( vla ) at 6 and 2 centimeters . wc89\nsystematically tabulated the physical properties of 75 sources found in regions at a variety of distances and placed them in five distinct morphological classes : cometary , shell , irregular , core - halo , and spherical / unresolved .\nthe percentages of uc hii region morphologies found in this survey were 4% shell , 20% cometary , and the remainder ( 76% ) irregular , spherical / unresolved and core - halo sources .\na subsequent survey of 59 additional regions by kurtz et al .\n( 1994 ; hereafter k94 ) extended these results and detected 75 more uc hii regions with a similar distribution of morphologies .\nwood & churchwell ( 1989b ) estimate the lifetime of an uc hii region ( @xmath410@xmath5 yr ) from the comparison between the number of o stars within 2.5 kpc of the sun ( 436 ; conti et al .\n1983 ) and the number of the 1646 embedded ob star candidates from the iras sample that one would expect to find within 2.5 kpc of the sun if the sources were uniformly distributed in the galactic disk ( 45 ) . from this comparison\n, they determine that 10%-20% of the lifetime of an o star is spent in the uc hii region phase , in agreement with the estimate by mezger & smith ( 1975 ) that between 15% and 25% of all o stars are hidden by dust .\nfollowing the arguments of wood & churchwell ( 1989b ) , if there are approximately 1700 embedded ob star candidates in the galactic disk , the surveys of wood & churchwell ( wc89 ) , kurtz et al ( 1994 ) and the current work sample approximately 20% of the embedded ob stars in the galactic disk . in recent years , we have used the vla to study radio continuum and radio recombination line emission from two of the galaxy s most luminous regions of massive star formation , sgr b2 and w49a .\nother luminous massive star forming regions include the arches , the galactic center cluster , and ngc 3603 .\nthe observations of these `` target rich '' regions , with resolutions from @xmath42 to 004 ( @xmath40.1 to 0.002 pc ) , have identified nearly 100 additional uc hii regions .\nthe imaging observations of this new sample motivate a modification to the uc hii region morphological classification scheme of wc89 . in particular , we suggest removal of the `` core - halo '' morphology , and we add a new `` bipolar '' designation . 
in this _ letter _ , we describe these modifications to the classification scheme , the criteria used in the classification process , and we discuss the uc hii region morphologies found in sgr b2 and w49a .\nour vla radio continuum and recombination line observations of sgr b2 and w49a have been presented and analyzed in a series of previous papers : 7 mm and 1.3 cm observations of sgr b2 are discussed in de pree et al .\n( 1996 , 1998 ) , and gaume et al .\n( 1995 ) ; 7 mm , 1.3 cm , and 3.6 cm observations of w49a are discussed in de pree et al .\n( 1997 , 2000 ) and wilner et al .\n( 2001 ) ; further details of the high resolution line and continuum imaging of sgr b2 and w49a are given in de pree et al .\n( 2004 ) and de pree et al .\n( 2005 ) .\nthe vla observations of these two massive star forming regions have provided a large sample of uc hii regions suitable for morphological study .\nthe sample has the following characteristics : ( 1 ) a total number of sources of 97 , comparable to the wc89 and k94 surveys , ( 2 ) sensitivity to radio continuum structures on a wide range of size scales , ( 3 ) good emission measure sensitivity , resulting from long integration times on a large number of sources within just two fields of view , and ( 4 ) unambiguous relative size measurements within each of the two regions , as the sources within each region are associated and at the same distance .\nwe classified the morphologies of each of the uc hii regions detected in w49a and sgr b2 using the highest angular resolution images available .\nmost of the sources can be placed in one of the morphological classes defined by wc89 , but characteristics of the data prompts us to modify the wc89 classification scheme in three ways : _ a new bipolar morphology : _ there is a significant population of sources that exhibit a regular , previously uncategorized , morphology , elongated along one axis .\nchurchwell ( 2002 ) , in his recent review of uc hii regions , refers to a `` bipolar '' morphology , and we adopt that term here . the addition of the bipolar morphology is clearly needed to accommodate the presence of this class of sources .\n_ elimination of core - halo morphology : _ with multi - configuration vla observations that are sensitive to emission at large size scales , it is apparent that many of the uc hii regions in sgr b2 and w49a are associated with a faint shelf of radio emission ( e.g. see 3.6 cm images of w49a ; de pree et al .\nrecent studies of other , more isolated , radio continuum sources in star forming regions indicate that essentially all uc hii regions are associated with large scale , diffuse emission , when observed with sufficient sensitivity ( kim & koo 2003 ) .\nconsequently , the `` core - halo '' designation is not a useful morphological discriminant in modern data sets of high quality .\ninstead , we prefer to concentrate on the morphology of the compact source within the extended emission .\n_ rename `` shell '' morphology to `` shell - like '' : _ many of the shell - like sources show considerable clumpiness and departures from circular symmetry in their brightness , while retaining a clearly identifiable and approximately circular edge - brightened structure . a slight revision in nomenclature to shell - like ( sl )\noffers a better description of this morphology . 
in summary , the revised uc hii region\nmorphological types are : ( 1 ) shell - like ( sl ) , ( 2 ) bipolar , ( 3 ) cometary , ( 4 ) spherical , ( 5 ) irregular , and ( 6 ) unresolved .\nthe definitions of these morphologies ( abbreviations in parenthesis ) are : * shell - like ( sl ) : roughly circular sources that are not centrally peaked and are more than 50% complete around their perimeter , with no evidence of elongation away from the brightened edge .\nfigure 1a shows the sl source w49a / d . *\nbipolar ( b ) : sources with an axial ratio of at least 2:1 .\nif radio recombination line data are available , then there may be evidence of a velocity gradient along the long axis .\nfigure 1b shows the bipolar source w49a / a1 . * cometary ( c ) : sources with a bright edge and a tail that extends at least 1.5 times the width of the face .\nfigure 1c shows the cometary source sgr b2 main / i . *\nspherical ( s ) : sources that are symmetric , centrally brightened , resolved , and can be fit well by a two - dimensional gaussian .\nfigure 1d shows the spherical source w49a / r .\n* irregular ( i ) : any spatially resolved source that does not fit in one of the specified categories .\nfigure 1e shows the irregular source sgr b2 main / f2 * unresolved ( u ) : sources that are symmetrical , centrally brightened , and can be fit well by a two - dimensional gaussian that is comparable to the beam size .\nfigure 1f shows the unresolved source w49a / a2 the newly recognized class of bipolar uc hii regions is especially interesting .\nkinematic evidence from radio recombination lines suggest that the bipolar morphology may be associated with the bipolar outflow phase of early stellar evolution .\nseveral of the bipolar sources in w49a and sgr b2 show velocity gradients along their long axis , or they show remarkably broad line widths ( @xmath6v@xmath750 ) that are not the result of pressure broadening ( de pree et al .\nthe motions of the ionized gas , together with the elongation along one axis , suggest that ionized gas is flowing away from the young massive star at highly supersonic velocities .\nin addition , garay et al .\n( 2004 ) have observed a collimated stellar wind emanating from iras 16547 - 4247 and conclude that the thermal jet phenomenon may be common for high - mass as well as for low - mass stars .\ntable 1 lists the percentage of sources found in each morphological class for the sources in the surveys of wc89 , k94 , and in sgrb2 and w49 .\nas indicated in table 1 , we find that 28% are shell - like , 8% are bipolar , 14% are cometary , and the remainder(50% ) are spherical , irregular , or unresolved . in most respects ,\nthe findings of the three surveys are compatible within the uncertainties , which are generally dominated by counting statistics ( and the subjective nature of the classification process ) .\nthe frequencies of cometary , irregular and unresolved morphologies are compatible .\nk94 find a marginally higher percentage of spherical sources than in the wc89 survey or the w49a and sgr b2 sample .\nthe bipolar morphology was not used by wc89 or k94 , and we do not use the core - halo morphology , thus we do not have percentages to compare for these two morphologies . the only significant difference between the morphological classifications within sgr b2 and w49a and those of wc89 and k94 is the much higher percentage of sl sources ( 28% vs. 
3% ) , about an order of magnitude more frequent occurrence of sources in this class .\nthe rarity of shell - like sources in the earlier surveys is surprising , considering the results of existing hydrodynamical models .\nstudies of the evolution of uc hii regions that examine the expansion of photoionization and stellar wind bubbles in dense molecular gas suggest that shell - like morphologies should be common ( e.g. garcia - segura & franco 1996 ) .\nthe vla observations of the sgr b2 and w49a regions indicate that sl is the most commonly observed morphology in these regions , comprising approximately one - third of the sources .\nthere are several possible explanations for the striking difference in the percentage of shell sources found by wc89 , k94 , and in the current work .\nsince the difference has clear statistical significance , it must result either from real physical differences in the populations of uc hii regions sampled , or from selection effects related to differences in the observational strategies of the galactic survey work ( wc89 and k94 ) and the current work .\nthe star forming environments represented by sgr b2 and w49a are extreme and present differences from the typical galactic plane star forming environment . since the wc and k94 surveys targeted isolated sources identified from _ iras _ 100 @xmath8 measurements , these surveys contained primarily isolated star forming regions , rather than highly clustered , high source density regions like sgr b2 and w49a .\nthe difference in the percentage of shell - like sources could indicate that physical conditions in these highly clustered regions lead to a higher prevalence of this morphology .\none possibile explanation for the larger numbers of regular shell - like morphologies is that the molecular environments into which the uc hii regions expand are smoother , with fewer fluctuations in ambient pressure due to density , temperature , or turbulence that lead to the development of substantial asymmetries in the expansion phase . the observing strategies and goals of wc89 and k94 differed from the current work .\nthe wc89 and k94 surveys were comprised of short `` snapshot '' observations that targeted a large number of relatively isolated regions at the expense of integration time on each region .\nthe sgr b2 and w49a surveys , by contrast , consist of observations of just two ( crowded ) pointings , which allowed for substantially longer integration times , better fourier plane coverage , and higher sensitivity . given data with lower sensitivity , many of the sources identified in sgr b2 and w49a with sl morphology might have been classified as cometary or irregular .\nlikewise , sources classified by wc89 and k94 as cometary and irregular might ( with better sensitivity and multiple vla configurations ) be classified as shell - like .\nfey et al . ( 1992 ) show several examples of such changes in uc hii region morphological classification in light of data of higher quality and sensitivity to a broader range of spatial scales .\nwe have classified the morphologies of nearly 100 uc hii regions in the sgr b2 and w49a regions .\nthese results , taken together with previous work , suggest modifications in the uc hii region classification scheme , as follows : 1 .\nvla observations with sensitivity to a wide range of spatial scales detect faint extended radio continuum emission throughout massive star forming regions , which makes nearly all compact sources appear with `` core - halo '' morphologies . 
as a result ,\nwe drop the `` core - halo '' designation from the uc hii region classification scheme .\na small percentage of uc hii regions have clear bipolar morphologies ; in cases where recombination line data are available , the kinematics suggest directed ionized outflows originating from a young central source .\nthis new `` bipolar '' morphological class is more common in w49a and sgr b2 than the shell morphology was in the wc89 or k94 surveys .\nwe suggest that the shell classification should be modified to `` shell - like '' ( or sl ) , in order to describe more accurately the appearance of these sources , many of which are clumpy or partially complete . the sgr b2 and w49a regions exhibit @xmath410 times as many sources with shell - like morphologies as found in the wc89 or k94 surveys .\ndeep , multi - configuration vla observations of sources previously classified as cometary and irregular should determine whether this discrepancy is due to limitations of snapshot data .\nthe next step in this study is to utilize the morphological findings and derived physical parameters of the sources and their environments as inputs and constraints in hydrodynamic models of uc hii region evolution .",
"role": "user"
},
{
"content": "we have used very large array ( vla ) observations of the massive star forming regions w49a and sgr b2 , obtained with resolutions from 20 to 004 , to classify the morphologies of nearly 100 ultracompact hii regions . \n these high resolution , multi - frequency , multi - configuration vla observations motivate several modifications of the existing morphological classification scheme for uc hii regions . in this work \n , we describe the modified morphology scheme and the criteria used in source classification . \n in particular , we drop the `` core - halo '' classification , add a `` bipolar '' classification , and change the shell classification to `` shell - like '' . \n we tally the percentage of each morphology found in the sgr b2 and w49a regions and find broad agreement with the galactic plane surveys in the distribution of morphologies for most types . \n however , we find that nearly a third of the sources in these regions are shell - like , which is a higher percentage by nearly a factor of ten than found in the surveys of galactic plane star forming regions by wood & churchwell ( 1989a ) and kurtz et al . \n ( 1994 ) . \n this difference may be due to physical differences in the environments of these two extreme star forming regions . \n alternatively , differences in observational technique may be responsible .",
"role": "assistant"
}
] |
science.medmentions_ner | science.medmentions_ner.1524 | [
{
"content": "Extract all unique entities from this biomedical abstract. Only choose entities from these types: [\"Virus\", \"MedicalDevice\", \"BodySubstance\", \"SpatialConcept\", \"BiologicFunction\", \"ResearchActivity\", \"AnatomicalStructure\", \"ProfessionalOrOccupationalGroup\", \"HealthCareActivity\", \"BiomedicalOccupationOrDiscipline\", \"Bacterium\", \"Chemical\", \"InjuryOrPoisoning\", \"IntellectualProduct\", \"Finding\", \"ClinicalAttribute\", \"Organization\", \"Eukaryote\", \"Food\", \"BodySystem\", \"PopulationGroup\"].\n\nOutput a JSON object where keys are entity types and values are lists of extracted entities. If no entities found for a type, use empty list [].\n\nAbstract: Lessons learned from an unstable genomic landscape This brief historical perspective will highlight the many accomplishments of the late William 'Bill' Morgan, and how his laboratory during the mid-1990s shaped the field of genomic instability. Bill focused on the processes responsible for radiation - induced genomic instability, and while ionizing radiation was known to induce this phenomenon, the precise causes were poorly understood. Here we revisit Bill's unique approach to these problems, as he advocated the use of novel mammalian cell lines to tease apart the mechanisms responsible for destabilizing an otherwise stable nuclear genome. Genomic instability is a multifaceted process posited to be the driving force behind multistep carcinogenesis. Bill used a variety of innovative techniques that ultimately refined our understanding of the causes and consequences of radiation - induced chromosomal instability and the role it played in cancer predisposition. The central concepts of genomic instability fit nicely with the mutator phenotype hypothesis proposed by Lawrence Loeb, both of which represent functionally similar frameworks for describing how genomic stability can be compromised. The field of genomic instability has since advanced considerably, and much of our current knowledge is due to the efforts of Bill Morgan.",
"role": "user"
},
{
"content": "{\"AnatomicalStructure\": [\"cell lines\", \"genome\"], \"Bacterium\": [], \"BiologicFunction\": [\"genomic stability\", \"cancer\", \"genomic instability\", \"carcinogenesis\", \"Genomic instability\", \"chromosomal instability\"], \"BiomedicalOccupationOrDiscipline\": [\"genomic landscape\"], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [], \"ClinicalAttribute\": [], \"Eukaryote\": [\"mammalian\"], \"Finding\": [\"historical\", \"problems\"], \"Food\": [], \"HealthCareActivity\": [], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [\"knowledge\"], \"MedicalDevice\": [], \"Organization\": [\"laboratory\"], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [\"Lawrence Loeb\", \"William \\u0027Bill\\u0027 Morgan\", \"Bill\\u0027s\", \"Bill Morgan\", \"Bill\"], \"ResearchActivity\": [], \"SpatialConcept\": [\"multifaceted\", \"perspective\", \"nuclear\"], \"Virus\": []}",
"role": "assistant"
}
] |
science.chemprot_re | science.chemprot_re.463 | [
{
"content": "Identify relationships between chemical compounds and genes in this abstract. Use only these relation types: ['DOWNREGULATOR', 'SUBSTRATE', 'INDIRECT-REGULATOR', 'PART-OF', 'MODULATOR-ACTIVATOR', 'REGULATOR', 'INHIBITOR', 'COFACTOR', 'UPREGULATOR', 'ACTIVATOR', 'ANTAGONIST', 'NOT', 'INDIRECT-DOWNREGULATOR', 'SUBSTRATE_PRODUCT-OF', 'INDIRECT-UPREGULATOR', 'AGONIST', 'PRODUCT-OF', 'MODULATOR', 'DIRECT-REGULATOR', 'UNDEFINED', 'AGONIST-INHIBITOR', 'AGONIST-ACTIVATOR', 'MODULATOR-INHIBITOR'].\n\nFormat your output as a JSON array with entries like [\"<Entity_A>\", \"<RELATION_A_B>\", \"<Entity_B>\"].\n\nAbstract:\nMechanism of the haemostatic effect of ethanolamine oleate in the injection sclerotherapy for oesophageal varices. Changes in coagulation and fibrinolysis were investigated in 20 patients with oesophageal varices, who underwent endoscopic injection sclerotherapy (EIS) with 5 per cent ethanolamine oleate (EO), by means of serial determination of plasma fibrinopeptide A (FPA) and fibrinopeptide B beta 15-42 (B beta 15-42). One hour after the completion of EIS, the value of FPA was significantly increased to 38.1 + /- 11.1 ng/ml (mean + /- s.e.m.) from a pre-EIS value of 7.1 + /- 1.4 ng/ml (P less than 0.01) and it gradually returned to normal range by 48 h after EIS. A very similar change was observed in the value of B beta 15-42 (P less than 0.01). These observations indicated that EIS provokes transient activation of coagulation and fibrinolysis. In vitro studies, however, revealed that EO inhibits fibrin clot formation because of the Ca2 + -chelating ability of its constituent ethanolamine, although oleate or benzyl alcohol exhibited procoagulant activity in FPA formation in vitro. Nevertheless, an external application of EO or oleate over decapsulized kidney of rat resulted in a significant accumulation of 125I-labelled fibrin (ogen). From these results it was suggested that intravascular injection of EO, which exerts an inhibitory effect on coagulation in vitro, activates the local coagulation system. The activation may be accelerated by an acute inflammatory process provoked by oleate, which is supported by such clinical manifestations as mild fever, retrosternal pain leukocytosis and an increase in plasma fibrinogen level which was observed in all during the period.",
"role": "user"
},
{
"content": "[[\"oleate\", \"INDIRECT-UPREGULATOR\", \"fibrinogen\"], [\"oleate\", \"REGULATOR\", \"FPA\"], [\"benzyl alcohol\", \"REGULATOR\", \"FPA\"], [\"EO\", \"INHIBITOR\", \"fibrin\"]]",
"role": "assistant"
}
] |
science.scicite_classification | science.scicite_classification.5912 | [
{
"content": "Classify the intent of this citation sentence by choosing from: [\"Method\", \"Background\", \"Result\"].\n\nMethod: Cites for methodology\nBackground: Cites for background information\nResult: Cites for findings\n\nCitation: These reference genome sequences were used: Arabidopsis thaliana (Ath) TAIR10, Arabidopsis lyrata (Aly [30]), and Arabis alpina (Aal; sequence [18] and gene annotation [18]).",
"role": "user"
},
{
"content": "Method",
"role": "assistant"
}
] |
science.medmentions_ner | science.medmentions_ner.2494 | [
{
"content": "Extract all unique entities from this biomedical abstract. Only choose entities from these types: [\"Virus\", \"MedicalDevice\", \"BodySubstance\", \"SpatialConcept\", \"BiologicFunction\", \"ResearchActivity\", \"AnatomicalStructure\", \"ProfessionalOrOccupationalGroup\", \"HealthCareActivity\", \"BiomedicalOccupationOrDiscipline\", \"Bacterium\", \"Chemical\", \"InjuryOrPoisoning\", \"IntellectualProduct\", \"Finding\", \"ClinicalAttribute\", \"Organization\", \"Eukaryote\", \"Food\", \"BodySystem\", \"PopulationGroup\"].\n\nOutput a JSON object where keys are entity types and values are lists of extracted entities. If no entities found for a type, use empty list [].\n\nAbstract: Identification of novel 1,2,3,6-tetrahydropyridyl-substituted benzo[d]thiazoles: Lead generation and optimization toward potent and orally active EP1 receptor antagonists Herein we described the design, synthesis and evaluation of a novel series of benzo[d]thiazole derivatives toward an orally active EP1 antagonist. Lead generation studies provided benzo[d]thiazole core from the four designed scaffolds. Optimization of this scaffold in terms of EP1 antagonist potency and ligand-lipophilicity efficiency (LLE; pIC50-clogP) led to a 1,2,3,6-tetrahydropyridyl-substituted benzo[d]thiazole derivative, 7r (IC50 1.1nM; LLE 4.7), which showed a good pharmacological effect when administered intraduodenally in a 17-phenyl trinor-PGE2 (17-PTP)- induced overactive bladder model in rats.",
"role": "user"
},
{
"content": "{\"AnatomicalStructure\": [], \"Bacterium\": [], \"BiologicFunction\": [\"antagonist\", \"overactive bladder\", \"antagonists\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [\"benzo[d]thiazole derivatives\", \"1,2,3,6-tetrahydropyridyl-substituted benzo[d]thiazole derivative, 7r\", \"1,2,3,6-tetrahydropyridyl-substituted benzo[d]thiazoles\", \"EP1 receptor\", \"17-phenyl trinor-PGE2\", \"EP1\", \"17-PTP\"], \"ClinicalAttribute\": [], \"Eukaryote\": [\"rats\"], \"Finding\": [], \"Food\": [], \"HealthCareActivity\": [], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [\"Lead generation\", \"Lead generation studies\"], \"SpatialConcept\": [], \"Virus\": []}",
"role": "assistant"
}
] |
science.medmentions_ner | science.medmentions_ner.2014 | [
{
"content": "Extract all unique entities from this biomedical abstract. Only choose entities from these types: [\"Virus\", \"MedicalDevice\", \"BodySubstance\", \"SpatialConcept\", \"BiologicFunction\", \"ResearchActivity\", \"AnatomicalStructure\", \"ProfessionalOrOccupationalGroup\", \"HealthCareActivity\", \"BiomedicalOccupationOrDiscipline\", \"Bacterium\", \"Chemical\", \"InjuryOrPoisoning\", \"IntellectualProduct\", \"Finding\", \"ClinicalAttribute\", \"Organization\", \"Eukaryote\", \"Food\", \"BodySystem\", \"PopulationGroup\"].\n\nOutput a JSON object where keys are entity types and values are lists of extracted entities. If no entities found for a type, use empty list [].\n\nAbstract: CD169 is a marker for highly pathogenic phagocytes in multiple sclerosis Phagocytes, such as macrophages and microglia, are key effector cells in the pathophysiology of multiple sclerosis (MS). It is widely accepted that they instigate and promote neuroinflammatory and neurodegenerative events in MS. An increasing amount of studies indicate that Siglec-1, also known CD169, is a marker for activated phagocytes in inflammatory disorders. In this study, we set out to define how CD169(+) phagocytes contribute to neuroinflammation in MS. CD169 - diphtheria toxin receptor (DTR) mice, which express human DTR under control of the CD169 promoter, were used to define the impact of CD169(+) cells on neuroinflammation. Flow cytometry and immunohistochemistry were utilized to determine the expression and distribution of CD169. We show that CD169 is highly expressed by lesional and circulating phagocytes in MS and experimental autoimmune encephalomyelitis (EAE). Our data further indicate that CD169 represents a selective marker for early activated microglia in MS and EAE lesions. Depletion of CD169(+) cells markedly reduced neuroinflammation and ameliorated disease symptoms in EAE - affected mice. Our findings indicate that CD169(+) cells promote neuroinflammation. Furthermore, they suggest that CD169(+) phagocytes play a key role in the pathophysiology of MS. Hence, targeting CD169(+) phagocytes may hold therapeutic value for MS.",
"role": "user"
},
{
"content": "{\"AnatomicalStructure\": [\"macrophages\", \"activated microglia\", \"Phagocytes\", \"microglia\", \"CD169\", \"phagocytes\", \"effector cells\", \"cells\"], \"Bacterium\": [], \"BiologicFunction\": [\"inflammatory disorders\", \"MS\", \"disease\", \"circulating\", \"neuroinflammation\", \"experimental autoimmune encephalomyelitis\", \"neurodegenerative\", \"multiple sclerosis\", \"EAE\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [\"DTR\", \"CD169\", \"diphtheria toxin receptor\", \"CD169(+)\", \"Siglec-1\", \"promoter\"], \"ClinicalAttribute\": [\"marker\"], \"Eukaryote\": [\"human\", \"mice\"], \"Finding\": [\"symptoms\", \"findings\", \"play\", \"lesional\", \"lesions\"], \"Food\": [], \"HealthCareActivity\": [\"Flow cytometry\", \"immunohistochemistry\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [\"study\", \"studies\"], \"SpatialConcept\": [], \"Virus\": []}",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.892 | [
{
"content": "Summarize the article in an uncomplicated manner.\n\nTitle: Integrated Assessment of Genomic Correlates of Protein Evolutionary Rate\nArticle:\nAbstract:\nRates of evolution differ widely among proteins, but the causes and consequences of such differences remain under debate. With the advent of high-throughput functional genomics, it is now possible to rigorously assess the genomic correlates of protein evolutionary rate. However, dissecting the correlations among evolutionary rate and these genomic features remains a major challenge. Here, we use an integrated probabilistic modeling approach to study genomic correlates of protein evolutionary rate in Saccharomyces cerevisiae. We measure and rank degrees of association between( i) an approximate measure of protein evolutionary rate with high genome coverage, and( ii) a diverse list of protein properties( sequence, structural, functional, network, and phenotypic). We observe, among many statistically significant correlations, that slowly evolving proteins tend to be regulated by more transcription factors, deficient in predicted structural disorder, involved in characteristic biological functions( such as translation), biased in amino acid composition, and are generally more abundant, more essential, and enriched for interaction partners. Many of these results are in agreement with recent studies. In addition, we assess information contribution of different subsets of these protein properties in the task of predicting slowly evolving proteins. We employ a logistic regression model on binned data that is able to account for intercorrelation, non-linearity, and heterogeneity within features. Our model considers features both individually and in natural ensembles( “meta-features”) in order to assess joint information contribution and degree of contribution independence. Meta-features based on protein abundance and amino acid composition make strong, partially independent contributions to the task of predicting slowly evolving proteins; other meta-features make additional minor contributions. The combination of all meta-features yields predictions comparable to those based on paired species comparisons, and approaching the predictive limit of optimal lineage-insensitive features. Our integrated assessment framework can be readily extended to other correlational analyses at the genome scale.\nIntroduction:\nDifferent proteins evolve at drastically different rates[1]. Some proteins are highly conserved across distantly diverged species, such as the ribosomal and histone proteins in eukaryotes[2]. Other proteins evolve much more quickly, often to the point where they occur in one species but cannot be identified in other closely related species, possibly due to deletion or major sequence divergence[3]. What are the main driving forces of such differences in protein evolutionary rate? What percentage of this variation can be attributed to simple protein properties that we can quantitatively measure in a genome-wide fashion? The answers to such questions are critical to achieving a systematic understanding of molecular evolution. With the advent of reliable high-throughput functional genomic measurements, particularly in the model organism Saccharomyces cerevisiae( baker's yeast), it is now possible to rigorously assess the functional genomic correlates of protein evolutionary rate. 
Many studies have focused on calculating the correlation between protein evolutionary rate and a single protein feature that can be determined for a large fraction of yeast proteins, followed by statistical hypothesis testing of the observed correlation. This method has been successful in identifying a number of key correlates of protein evolutionary rate, such as protein abundance[4], essentiality[5], and number of interactors[6]. For further review of individual correlates, see[1],[7],[8]. Assessing the relative strengths, synergistic effects, and redundancy among such correlations requires more sophisticated statistical methods. Multivariate techniques have already been applied in a number of studies aimed at simultaneously dissecting multiple correlates of evolutionary rate[9]–[16]. Partial correlation and principal component regression, two popular techniques in this area, have been shown to produce discrepant results when applied to similar data[13],[15],[17]. Arguments have been made against both techniques regarding their sensitivity to noise among protein features and a tendency to over- or under-estimate the number of independent determinants of evolutionary rate. Analyses of evolutionary rate correlation have been historically limited by less-than-complete coverage of the genome—often far less. Consider the calculation of evolutionary rate itself. A commonly used reference dataset was produced by Wall et al., where a set of evolutionary rate calculations were meticulously performed in yeast[12]. In generating these data, they placed demands on gene orthology and phylogenetic relationships that substantially reduced genome coverage( to roughly 3, 000 genes, relative to the roughly 6, 000 open reading frames in S. cerevisiae). Moreover, a reduction in genome coverage may be accompanied by the introduction of specific biases. For example, stringent demands on gene orthology automatically bias a dataset toward more slowly evolving proteins. Coverage will tend to be further limited—and the dataset further biased—as more genomes and more protein features are added to an analysis. In this study, we used an integrated probabilistic modeling approach to assess genomic correlates of protein evolutionary rate for 5, 537 proteins in the yeast genome( 94. 5% coverage relative to the 5, 861 total yeast ORFs). We assembled a list of diverse protein sequence, physicochemical, and functional genomic features with high coverage of the proteins in S. cerevisiae and assessed their correlations with an approximate, high coverage measure of protein evolutionary rate. To manage potential outliers, noise, and non-linear relationships, we employed robust measures of correlation, such as rank correlation and mutual information. By considering many protein attributes simultaneously, it was possible to rank them according to their degrees of association with evolutionary rate. Our high-coverage framework allows us to re-assess known genomic correlates of evolutionary rate, while simultaneously identifying new, statistically significant correlates. In addition, we employed a logistic regression framework on binned data to assess the information contribution of sets of features in the task of predicting slowly evolving proteins. Our framework is flexible and robust, and is able to account for intercorrelation, non-linearity, and heterogeneity within features. 
Using this framework, we were able to group overlapping and interrelated features into natural ensembles( “meta-features”), and quantitatively assess their combined predictive power. Next, natural ensembles were evaluated in progressively larger groups to measure the independent significance of their contributions. Finally, we show that our optimal predictions of S. cerevisiae protein evolutionary rate are comparable to those based on paired species comparisons, and approaching those based on the lineage-independent component of evolutionary rate.\nDiscussion:\nThis study focused on identifying genomic features which contribute to the task of predicting evolutionary rate. While the purpose and relevance of many prediction tasks is immediately clear—for example, predicting gene essentiality in order to avoid the difficulty and expense of experimental determination[32]—one may question the need for predicting evolutionary rate. Simple methods for evaluating evolutionary rate based on species comparisons exist( e. g., the dN/dS ratio) and can be evaluated with relative ease at the genomic scale. In the absence of such comparisons, we would have few means by which to test the validity of our predictions, given the timescale over which natural evolution operates. Why then do we wish to predict evolutionary rate? The answer is that we are not interested so much in the predictions themselves, but rather the features which provide them. Biologists have long been interested in understanding the forces that drive evolution at various scales of life. However, our knowledge of the causal forces which underlie evolution at the molecular scale remains limited. By ranking the degree of correlation between various protein features and evolutionary rate, we hope to highlight those features which best dictate the selective constraint on a given protein. From the careful dissection of these individual correlations, one stands to gain a deeper understanding of their underlying biological significance. For example, the observed correlation between protein abundance and evolutionary rate led to biological insights regarding the evolution of translational robustness[33]. In a similar spirit, the correlation between evolutionary rate and number of transcriptional regulators that we discovered here leads to biological insights regarding the evolution of transcriptional regulation: target hubs in the transcriptional regulatory network are evolutionarily more constrained than non-hubs. Protein abundance, biological function, and amino acid composition consistently appeared in our analyses as dominant correlates of evolutionary rate. As we have mentioned before, the significance( if not dominance) of abundance is generally accepted; the significance of function has also been previously described[34]. However, the significance of amino acid composition in determining evolutionary rate has been a subject of some debate[24],[35]. The information contribution analysis indicated that the predictive power of amino acid composition is high( relative to other meta-features) in the task of classifying slowly evolving proteins. We are able to partially explain the correlation between amino acid composition and evolutionary rate by revealing a hidden correlation with protein expression( see Table 4). 
For example, the top three amino acids that are negatively correlated with evolutionary rate, glycine, alanine, and valine, are also the most enriched in highly expressed proteins( perhaps reflecting a preference for metabolically inexpensive building blocks). On the other hand, the correct classification rate improved when we combined abundance and amino acid composition, suggesting that amino acid composition makes at least a partially independent contribution. This additional contribution can be partially attributed to differences in amino acid mutability, as defined by Jones et al.[36]( see Table 4). For example, the top two amino acids that are positively correlated with evolutionary rate, serine and asparagine, are among the top three in terms of mutability. It is interesting to note that something as simple as amino acid composition can be highly predictive for both protein abundance and evolutionary rate. The previous section highlighted the importance of understanding within-feature correlation, specifically that between protein abundance and amino acid composition, in the search for determinants of evolutionary rate. Figure 4 explores the network of within-feature correlations for the twenty numerical features that best correlate with evolutionary rate, as listed in Table 2( glutamine content, the weakest of these correlates, has no strong within-feature correlations and is not depicted). We notice that related features can occur in tightly correlated clusters( for example, the cluster of Codon Adaption Index, codon bias, protein expression, and absolute mRNA expression). This observation reinforces the value of considering such features together as meta-features, as we have done here. In general, the network exhibits clique-like behavior, characterized by dense connections among the feature nodes. This is not entirely surprising, as many features are known to be related to one another, and they all share a common correlation with evolutionary rate. However, this rampant intercorrelation is a significant hindrance to the task of isolating specific features as evolutionary determinants using traditional multivariate statistical techniques. By taking an integrated probabilistic approach( considering all features together), we have been able to largely circumvent this issue. Figure 4 further reveals that the vast majority of correlations between pairs of features and evolutionary rate are transitive: if features A and B both positively( or both negatively) correlate with feature C, then feature A usually correlates positively with feature B. For example, increased GC content and increased codon bias are both associated with decreased evolutionary rates. At the same time, GC content and codon bias are positively correlated with one another. These transitive correlations are easy to understand. Interestingly, we also observed non-transitive correlations, for example between evolutionary rate, number of transcriptional regulators, and marginal essentiality. Both number of transcriptional regulators and marginal essentiality are negative correlates of evolutionary rate( rs = −0. 142 and −0. 146, respectively; p≪0. 001 in both cases). However, as previously noted[37], number of transcriptional regulators and marginal essentiality correlate in a negative manner with one another( rs = −0. 104, p≪0. 001). The observed non-transitive correlations are statistically significant( p≪0. 001), although we note that the correlations are rather weak and account for 1. 1% to 2. 1% of the variance. 
This seemingly counter-intuitive observation can be explained in the following way. Slowly evolving proteins can be divided into two largely non-overlapping groups:( i) those that are important under all conditions, meaning that they are essential, but not necessarily highly regulated, and( ii) those that are important only under specific conditions, which may experience sophisticated regulation, but are not necessarily annotated as essential. Proteins in the first group drive the negative correlation between essentiality and evolutionary rate, while proteins in the second group drive the negative correlation between number of regulators and evolutionary rate. This explains the observed non-transitive correlations among evolutionary rate, number of transcriptional regulators, and marginal essentiality. The patterns of correlation among features in Figure 4 provide further insights into our observed correlation between evolutionary rate and number of transcriptional regulators. Number of regulators is correlated with many other genomic features, most significantly with codon bias and Codon Adaption Index. Highly regulated proteins, though not necessarily essential or even expressed under laboratory conditions, may be strongly selected in the real world for their roles in stress response. Take, for example, HSP26—a player in the yeast response to heat shock, and the most highly regulated protein in our dataset. This protein is neither expressed nor essential under laboratory conditions. However, its coding sequence contains high codon bias, consistent with selection for efficient translation under stress. We therefore expect that translational selection and selection for the protein's stress-induced function have constrained its evolution in the wild. The integrated probabilistic approach we have taken in this study has both drawbacks and advantages. Like other correlational approaches, our approach is not able to distinguish correlation from causation, nor is it able to isolate cause from effect. We do not explicitly model the noise within the feature data as some other methods do[15], which will tend to underestimate the predictive power of single features. On the other hand, the effect of noise is minimized by the binning of single features and introduction of meta-features. Our approach is flexible and robust, and is able to distinguish between dominant correlations and marginal ones. We are able to consider any features we choose, including those that are categorical( rather than continuous) or correlated with evolution in a non-linear manner. Furthermore, our approach compensates for redundancy among features, which, as with noise, we expect to be significant. Most importantly, our analyses feature high coverage of the yeast genome, thus making our results highly general. Accomplishing this requires the introduction of several approximations( a relaxed definition of evolutionary rate, collecting feature data from a single species, and modeling missing data), though none of these are found to have a major effect on accuracy. To our surprise, we found that integrating a diverse collection of single-genome features was roughly equivalent to paired species comparison for identifying slowly evolving proteins, but still worse than what lineage-insensitive features can in principle predict. 
Our conclusion from this finding is that the dominant, independent correlates of evolutionary rate are likely known, even though other significant and interesting correlates may remain to be found( see[38] for one recent example). Further dissection of individual correlations between protein features and evolutionary rate will be needed in order to gain a deeper understanding of their biological significance. As we have demonstrated in the cases of amino acid composition, protein abundance, essentiality, and number of transcriptional regulators, there is also great insight to be had by exploring the relationships between protein features.",
"role": "user"
},
{
"content": "Proteins encoded within a given genome are known to evolve at drastically different rates . Through recent large-scale studies , researchers have measured a wide variety of properties for all proteins in yeast . We are interested to know how these properties relate to one another and to what extent they explain evolutionary rate variation . Protein properties are a heterogeneous mix , a factor which complicates research in this area . For example , some properties ( e . g . , protein abundance ) are numerical , while others ( e . g . , protein function ) are descriptive; protein properties may also suffer from noise and hidden redundancies . We have addressed these issues within a flexible and robust statistical framework . We first ranked a large list of protein properties by the strength of their relationships with evolutionary rate; this confirms many known evolutionary relationships and also highlights several new ones . Similar protein properties were then grouped and applied to predict slowly evolving proteins . Some of these groups were as effective as paired species comparison in making correct predictions , although in both cases a great deal of evolutionary rate variation remained to be explained . Our work has helped to refine the set of protein properties that researchers should consider as they investigate the mechanisms underlying protein evolution .",
"role": "assistant"
}
] |
science.chemdner_ner | science.chemdner_ner.2883 | [
{
"content": "Find all [\"Chemical\"] entities in this text. Output a JSON object.\n\nText:\n4-Methyl-7-hydroxycoumarin antifungal and antioxidant activity enhancement by substitution with thiosemicarbazide and thiazolidinone moieties. According to literature data, thiosemicarbazide and thiazolidinone moieties should enhance biological properties of coumarin. Antioxidant, metal-chelating and antifungal activities of all compounds were investigated and compared to the activity of the starting material, 7-hydroxy-4-methylcoumarin, and were proven to possess potent antioxidant and antifungal activity. In general, thiosemicarbazides showed higher scavenging activity towards DPPH and galvinoxyl radicals than did 4-thiazolidinones and some of them had the same or even better activity than had ascorbic acid itself, depending on the free radical used. In antifungal activity tests towards four foodborne mycotoxigenic fungi, Aspergillus flavus, Aspergillus ochraceus. Fusarium graminearum and Fusarium verticillioides, coumarin derivatives were proven to possess a very high activity in terms of growth inhibition, depending on the fungi investigated. In general, 4-thiazolidinones showed better antifungal activity than did thiosemicarbazides. F. graminearum was the most susceptible to the compounds investigated and F. verticillioides was proven to be the most resistant. Two compounds, both coumarinyl thiosemicarbazides, were found to possess both antifungal and antioxidant activity which could be useful for applications in medicine, food industry and agriculture.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"DPPH\", \"ascorbic acid\", \"4-thiazolidinones\", \"galvinoxyl\", \"coumarinyl thiosemicarbazides\", \"thiosemicarbazide\", \"thiosemicarbazides\", \"4-Methyl-7-hydroxycoumarin\", \"7-hydroxy-4-methylcoumarin\", \"thiazolidinone\", \"coumarin\"]}",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.20249 | [
{
"content": "Summarize the article in an uncomplicated manner.\n\nTitle: Conditional Cooperativity of Toxin - Antitoxin Regulation Can Mediate Bistability between Growth and Dormancy\nArticle:\nAbstract:\nMany toxin-antitoxin operons are regulated by the toxin/antitoxin ratio by mechanisms collectively coined “conditional cooperativity”. Toxin and antitoxin form heteromers with different stoichiometric ratios, and the complex with the intermediate ratio works best as a transcription repressor. This allows transcription at low toxin level, strong repression at intermediate toxin level, and then again transcription at high toxin level. Such regulation has two interesting features; firstly, it provides a non-monotonous response to the concentration of one of the proteins, and secondly, it opens for ultra-sensitivity mediated by the sequestration of the functioning heteromers. We explore possible functions of conditional regulation in simple feedback motifs, and show that it can provide bistability for a wide range of parameters. We then demonstrate that the conditional cooperativity in toxin-antitoxin systems combined with the growth-inhibition activity of free toxin can mediate bistability between a growing state and a dormant state.\nIntroduction:\nMany bacteria and archaea have multiple Toxin-Antitoxin( TA) loci[1], where the toxin normally inhibits cell growth, while the antitoxin neutralizes the activity of the toxin by forming a tight TA complex. One of the known functions of TA loci is to respond to nutritional stress[2], namely, toxins are activated upon nutritional starvation and slow down the rate of translation. When cells are under normal fast growth conditions, on the other hand, the majority of the cells will be in the antitoxin-dominated state, such that toxin activity is fully inhibited. It has been found that many bacterial TA loci are auto-regulated at the transcriptional level by a mechanism called “Conditional Cooperativity”( CC)[3], where the transcription factor can bind cooperatively to the operator only if the concentrations of two different proteins satisfy a certain stoichiometric ratio. CC was quantitatively studied in one of the Escherichia coli TA loci, relBE[3]–[6]. Here the two proteins, the toxin( mRNase) RelE and the antitoxin RelB, are encoded by the same operon, which is negatively auto-regulated. The tight dimer is a weak transcriptional auto-repressor, but this repression is strongly enhanced by the presence of RelE and becomes strongest at ratio. Over-expression of RelE above twice of, though, will result in an abrupt de-repression of the promoter. This unique behavior is a consequence of formation of alternative hetero-complexes of RelB and RelE; and. Two bind to the promoter site cooperatively to repress the promoter strongly, while does not bind to the promoter. Interestingly, all plasmid and chromosome-encoded TA loci investigated are found to be regulated by CC so far, including relBE of E. coli[3],[4], vapBC of Salmonella enterica[7], phd/doc of plasmid P1[8],[9] and ccdA/ccdB of plasmid F[10]. This suggest that CC is a common feature for TA loci. In our previous work, we have explored the function of CC in the starvation response of the RelBE system, and showed that CC prevents random toxin activation and promotes fast translational recovery when starvation conditions terminate. However, to reproduce the full dynamics of the starvation response, we took into account details of the RelBE system, which made the model rather specific to it. 
The primary purpose of this paper is to construct a simple mathematical model that demonstrates the functions of CC from a more general perspective. TA loci have been suggested to be involved in persister formation[11]–[16]. When an antibiotic is applied to a growing bacterial population, the majority of the bacteria are killed. However, a very small fraction of them survives and re-grows after the antibiotic is removed. If the progeny of the bacteria is again sensitive to the same antibiotic, they are called persisters, in contrast to the resistant bacteria that have acquired resistance to antibiotic by mutation. Persisters are genetically identical to the sensitive cells, but believed to be in a non- or slow-growing, dormant state. Since the majority of antibiotics interfere with the cell growth and division process, cells can survive if they grow slowly or not at all. The exact molecular mechanism underlying persistence is not fully understood. However, it has been found that mutations in hipAB genes severely increase the level of persister formation. Interestingly, hipAB is one of the TA loci in E. coli[11],[13],[14]. In addition, recent experiments[15] showed that removal of 10 mRNase-encoding TA loci reduced the persister fraction significantly. These observations strongly suggest that TA loci are important factors for persister formation. One of the possible explanations is that stochastic activation of the toxin will slow down cell growth, resulting in a dormant state. This will be possible if the TA locus dynamics exhibits bistability, where a cell can be either in the antitoxin-dominated state that ensures the growth or in the toxin-dominated state that inhibits the growth. This viewpoint is also consistent with the observation that the persister state can be described as a metastable state with a constant stochastic switching rate to and from the normal growing state[12]. This idea was theoretically pursued by Lou et al.[17] with a simple mathematical model that did not take CC into account. They concluded that, for bistability to be achieved, high cooperativity( Hill-coefficients) is necessary, both in transcriptional auto-regulation of the TA operon and in the free toxin activity. In this paper, we explore the basic features of CC as a regulation mechanism mediated by heteromer formation. We demonstrate that CC provides bistability in a simple feedback motif in a wide range of parameters. We then construct a simplified model of TA system regulation and demonstrate that CC with growth rate-mediated feedback via toxin activity can provide the bistable alternatives between the antitoxin-dominated and the toxin-dominated states.\nDiscussion:\nUsing known parameters for the RelBE system in E. coli, we constructed a minimal model for TA activity, combining conditional regulation with a feedback from free toxin to the cell growth. It was demonstrated that this model shows bistability for a wide range of parameters, with a stable state corresponding to the antitoxin-dominated, normal growing state, and another metastable state corresponding to the toxin-dominated state, potentially corresponding to the persister state. Noticeably, the model eqs.( 6) –( 7) did not rely on details of the molecular mechanisms of how the toxin works, and therefore the model is not limited to the RelBE system. 
The important assumptions are:( i) The TA system is conditionally regulated,( ii) toxins are stable and diluted mainly by cell division, while antitoxins are metabolically unstable, and( iii) free toxins reduce the production of proteins and hence cell growth. All the conditions are satisfied in the TA loci that are confirmed to be regulated by conditional cooperativity[3]. Our simple model pinpoints minimal ingredients for obtaining a persister state, but did not include stochastic production and/or degradation, and therefore cannot address the switching rates. In order to understand stochastic persister formation in E. coli, just performing stochastic simulation of the present motif is not enough, because the frequency of persisters depends on multiple parallel TA systems. In E. coli, 11 simultaneously interfering TA systems maintain a probability of persisters of about 0. 01%, while this probability is changed substantially only when about 50% of the TA systems are removed[15]. This clearly suggests that the interference of parallel systems has a strong influence on the switching behavior. Furthermore, comparing the stochastic simulations with the experimentally observed frequency of persisters requires knowledge of the underlying distribution of the expression levels and corresponding growth rates in the cell population. It is not a simple task when the single cell growth rate depends on expression levels, because it feeds back to the frequency of the cells as pointed out by Nevizhay et al. in[28]. In addition, it has been suggested that there is a strong link between the activation of the protease Lon and the TA-mediated persister formation, through the increase of the antitoxin degradation rate[15],[16]. The fluctuation of the Lon activity may be particularly important in determining switching rates, because it can provide coherent noise that favours simultaneous switching of many TAs to the persister state. It should also be noted that the Lon activity is activated by polyphosphate, which is regulated by the stringent response signalling molecule( p) ppGpp[16]. We plan to extend the present model to include these features and study the switching behavior in the near future. It is still interesting to think about possible implications of the observed switching rate for the present model. The fact that the persister formation is a rare event may indicate that the actual parameter value in the real system is located close to the boundary between the bistable region and the monostable region of the antitoxin-dominated state. Such parameter values can be chosen through a selection process in a fluctuating environment, where slow growth of the persister pays off as a risk hedging strategy; the switching rate is expected to reflect the time scale of the temporal fluctuation of the environment[29]. Conditional regulation is an example of mixed feedback motifs[30], where protein-protein interactions and transcriptional repression are combined. In natural systems, a protein-protein interaction-mediated bistable switch was previously found for example in the epigenetic switch of the TP901 phage[23],[25] and in the sigma-factor/antisigma-factor system[24]. Conditional cooperativity in TA systems opens for a toolbox of regulatory units that can exhibit sufficient bistability to also support epigenetics. 
When removing the toxic ability of toxin, which has been done for RelE[3], and separating antitoxin from the operon to allow independent control, the strong binding between RelE and RelB should provide extreme ultrasensitivity, and thus very well separated metastable states. This conditional cooperativity-mediated bistability is the basis for the bistability in full TA systems, and thus for the type II persister formation[12],[13], where a cell can spontaneously switch between the dormant state and the growing state( Fig. 5). While simple protein-protein heteromers could produce ultrasensitivity, the non-monotonicity of the conditional cooperativity also secures that the antitoxin-dominated state has a substantial amount of toxins present( Fig. 5). These toxins' activity is normally inhibited by short-lived antitoxins, but the stored toxins can be used for faster switching to a dormant state if overall protein production is externally inhibited, for example by starvation( Fig. 5). Therefore, the non-monotonicity may enhance the transition to type I persister formation[12],[13], where environmental stress triggers persister formation. The importance of the protein-protein interaction-mediated ultrasensitivity[22]–[25] and the growth rate-mediated feedback[17],[26]–[28] to bistable systems has been discussed as independent regulatory features in recent literature[31]. The uniqueness of the bistability in the TA system is that it combines both of these mechanisms. The need for combining these two mechanisms is closely associated with the fact that T and A are produced from the same operon, and thus are exposed to identical transcription regulation. Though it is difficult to get bistability with only one of the mechanisms[17], the TA system realizes a persister state by regulating the products of one operon through a combination of growth modulation and hetero-complex formation.",
"role": "user"
},
{
"content": "The effectiveness of antibiotics on many pathogenic bacteria is compromised by multidrug tolerance . This is caused by a small sub-population of bacteria that happen to be in a dormant , non-dividing state when antibiotics are applied and thus are protected from being killed . These bacteria are called persisters . Unraveling the basic mechanism underlying this phenomenon is a necessary first step to overcome persistent and recurring infections . Experiments have shown a connection between persister formation and the battle between a toxin and its antitoxin inside an E . coli cell . Toxin inhibits the cell growth but is neutralized by the antitoxin by forming a complex . The proteins also regulate their own production through this complex , thereby forming a feedback system that controls the growth of the bacterium . In this work we provide mathematical modeling of the feedback module and explore its abilities . We find that the auto-regulation with reduced growth associated with free toxins allows the cell to be bistable between two states: an antitoxin-dominated , normal growing one , or a dormant one caused by the activity of the toxin . The latter can be the simplest description of persister state . The toxin-antitoxin system presents a powerful example of mixed feedback design , which can support epigenetics .",
"role": "assistant"
}
] |
science.mslr2022_ms2_multidoc_summarization | science.mslr2022_ms2_multidoc_summarization.13491 | [
{
"content": "Based on the RCT reports below, write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article.\n\n\n\nTitle 1:\nHemodynamic Responses to Resistance Exercise in Patients with Coronary Artery Disease.\nAbstract 1:\nPURPOSE Investigate hemodynamic responses of resistance exercise ( RE ) with moderate load ( i.e. , international guidelines for RE of patients ) versus RE with high load in patients with coronary artery disease ( CAD ) . METHODS Medically stable male ( n = 11 ) and female patients ( n = 4 ) treated with PCI or percutaneous coronary intervention , or coronary artery bypass surgery a minimum of 6 months before this study , performed three sets of 15RM and 4RM RE in a r and omized order on separate days . Beat-to-beat systolic ( SBP ) , diastolic ( DBP ) blood pressure , heart rate ( HR ) , stroke volume ( SV ) , cardiac output ( CO ) , and systemic vascular resistance ( SVR ) were monitored at preexercise , and continuously during RE . RESULTS Compared with preexercise , SBP and DBP ( mean of three sets ) increased by 12 % to 13 % ( both ; P < 0.001 ) and 35 % to 40 % after 15RM RE ( both ; P < 0.001 ) . 15RM SBP and DBP were higher than 4RM SBP and DBP ( both ; P < 0.001 ) . The SBP of the fourth repetition of 15 RM RE was similar to the SBP of the fourth repetition of 4RM RE . Compared with preexercise , SV increased moderately after 4RM and 15 RM RE , respectively ( both , P < 0.001 ) . HR increased more after 15RM compared with 4RM RE ( P < 0.05 ) ; thus , higher CO after 15RM ( compared with 4RM RE ; P < 0.05 ) was mainly caused by higher HR . SVR decreased by 15 % ( P < 0.001 ) and 50 % ( P < 0.01 ) after 4RM and 15RM RE . CONCLUSIONS SBP and DBP increased significantly more during moderate load RE ; thus , the magnitude of the external load is not the prime determinant of the pressure response during RE . If management of blood pressure is of concern , high load/low rep RE is preferable to medium load/high rep RE\n\n\nTitle 2:\nIntra-Arterial Blood Pressure Response in Hypertensive Subjects during Low- and High-Intensity Resistance Exercise\nAbstract 2:\nOBJECTIVE : The aim of this study was to describe blood pressure responses during resistance exercise in hypertensive subjects and to determine whether an exercise protocol alters these responses . INTRODUCTION : Resistance exercise has been recommended as a complement for aerobic exercise for hypertensive patients . However , blood pressure changes during this kind of exercise have been poorly investigated in hypertensives , despite multiple studies of normotensives demonstrating significant increases in blood pressure . METHODS : Ten hypertensive and ten normotensive subjects performed , in r and om order , two different exercise protocol s , composed by three sets of the knee extension exercise conducted to exhaustion : 40 % of the 1-repetition maximum ( 1RM ) with a 45-s rest between sets , and 80 % of 1RM with a 90-s rest between sets . Radial intra-arterial blood pressure was measured before and throughout each protocol . RESULTS : Compared with normotensives , hypertensives displayed greater increases in systolic BP during exercise at 80 % ( + 80±3 vs. + 62±2 mmHg , P<0.05 ) and at 40 % of 1RM ( + 75±3 vs. + 67±3 mmHg , P<0.05 ) . 
In both exercise protocols , systolic blood pressure returned to baseline during the rest periods between sets in the normotensives ; however , in the hypertensives , BP remained slightly elevated at 40 % of 1RM . During rest periods , diastolic blood pressure returned to baseline in hypertensives and dropped below baseline in normotensives . CONCLUSION : Resistance exercise increased systolic blood pressure considerably more in hypertensives than in normotensives , and this increase was greater when lower-intensity exercise was performed to the point of exhaustion\n\n\nTitle 3:\nThe effect of different intensity modalities of resistance training on beat-to-beat blood pressure in cardiac patients.\nAbstract 3:\nBACKGROUND Resistance training has been introduced in cardiac rehabilitation to give more benefit than traditional training . Haemodynamic evaluation of cardiac patients to resistance training has generally consisted of continuous HR monitoring and discontinuous blood pressure measurements . DESIGN AND METHODS Blood pressure ( BP ) and heart rate ( HR ) responses to resistance training were evaluated using continuous monitoring ( Finapres ) during low ( four sets of 17 repetitions at 40 % of the one-repetition maximum strength [ 1-RM ] ) and high intensity resistance training ( four sets of 10 repetitions at 70 % of 1-RM ) on a leg extension machine in 14 patients who participated in a rehabilitation programme . Work volume was identical in the low- and high-level resistance training . RESULTS The HR and systolic blood pressure ( SBP ) during low intensity resistance training were always larger than during high intensity ( P<0.001 ) . Peak SBP increased from set 1 to set 3 and 4 during both low and high intensity resistance training ( P<0.05 ) . Peak HR was larger in set 4 ( 95+/-11 bpm ) than in set 1 only during low intensity resistance training ( 91+/-12 bpm ) ( P<0.05 ) . One-minute recovery periods did not allow a return to baseline HR and SBP during both low and high intensity modalities . CONCLUSIONS The SBP and HR responses to resistance training are related to the duration of exercise . Sets with < or = 10 repetitions of high intensity should be preferred to longer sets with low intensity . Pauses between exercise sets should exceed 1 min . Blood pressure should be measured during the last repetitions of the exercise set\n\n\nTitle 4:\nAcute hemodynamic and cardiovascular responses following resistance exercise to voluntary exhaustion. Effects of different loadings and exercise durations.\nAbstract 4:\nBACKGROUND The aim of this study was to compare the acute hemodynamic and cardiovascular responses of high load/low repetition resistance training ( RT ) to low load/high repetition RT . METHODS Thirteen healthy men performed four sets of 4 repetition maximum ( RM ) and 20RM leg-extensions without breath-holding . The RT was conducted in a randomized order and with 48 hours between bouts . Non-invasive beat-to-beat systolic and diastolic blood-pressure ( SBP/DBP ) was measured on the finger , while non-invasive cardiac output ( CO ) was assessed beat-to-beat by impedance-cardiography . RESULTS Mean±SD resting SBP/DBP and CO were 126±14/73±9 mmHg and 5.6±9 L min-1 , respectively . Exercise SBP/DBP values increased to 154±22/99±18 and 203±33/126±19 mmHg following 4RM and 20RM RT , respectively ( compared to rest , all ; P<0.001 ) , and 20RM SBP/DBP values were higher than 4RM values ( both , P<0.001 ) . 
The SBP increased from the first to the fourth set of exercise following the 20RM load ( P<0.01 ) , but not so for the 4RM load . Exercise SBP/DBP values following the 4th rep of 20RM exercise ( 154±18/91±14 ) , was similar to the 4RM values , but different to the 20th rep of the 20 RM loading ( both ; P<0.001 ) . CO increased to 10.8±2.6 and 13.9±2.2 L min-1 , following 4RM and 20RM RT , respectively ( compared to rest , both ; P<0.001 ) and 20RM CO was higher than 4RM CO ( P<0.01 ) . CONCLUSIONS 20RM RT resulted in higher blood-pressure than 4RM RT when performed to voluntary exhaustion . Differences in hemodynamic responses seems to be related to training duration and not to difference in loading\n\n\nTitle 5:\nGreater Neural Adaptations following High- vs. Low-Load Resistance Training\nAbstract 5:\nWe examined the neuromuscular adaptations following 3 and 6 weeks of 80 vs. 30 % one repetition maximum ( 1RM ) resistance training to failure in the leg extensors . Twenty-six men ( age = 23.1 ± 4.7 years ) were randomly assigned to a high- ( 80 % 1RM ; n = 13 ) or low-load ( 30 % 1RM ; n = 13 ) resistance training group and completed leg extension resistance training to failure 3 times per week for 6 weeks . Testing was completed at baseline , 3 , and 6 weeks of training . During each testing session , ultrasound muscle thickness and echo intensity , 1RM strength , maximal voluntary isometric contraction ( MVIC ) strength , and contractile properties of the quadriceps femoris were measured . Percent voluntary activation ( VA ) and electromyographic ( EMG ) amplitude were measured during MVIC , and during randomly ordered isometric step muscle actions at 10–100 % of baseline MVIC . There were similar increases in muscle thickness from Baseline to Week 3 and 6 in the 80 and 30 % 1RM groups . However , both 1RM and MVIC strength increased from Baseline to Week 3 and 6 to a greater degree in the 80 % than 30 % 1RM group . VA during MVIC was also greater in the 80 vs. 30 % 1RM group at Week 6 , and only training at 80 % 1RM elicited a significant increase in EMG amplitude during MVIC . The peak twitch torque to MVIC ratio was also significantly reduced in the 80 % , but not 30 % 1RM group , at Week 3 and 6 . Finally , VA and EMG amplitude were reduced during submaximal torque production as a result of training at 80 % 1RM , but not 30 % 1RM . Despite eliciting similar hypertrophy , 80 % 1RM improved muscle strength more than 30 % 1RM , and was accompanied by increases in VA and EMG amplitude during maximal force production . Furthermore , training at 80 % 1RM resulted in a decreased neural cost to produce the same relative submaximal torques after training , whereas training at 30 % 1RM did not . Therefore , our data suggest that high-load training results in greater neural adaptations that may explain the disparate increases in muscle strength despite similar hypertrophy following high- and low-load training programs\n\n\nTitle 6:\nAerobic and resistance training in coronary disease: single versus multiple sets.\nAbstract 6:\nPURPOSE The purpose of this study was to compare resistance training ( RT ) ( one set vs three sets ) combined with aerobic training ( AT ) versus AT alone in persons with coronary artery disease . METHODS Subjects ( n = 72 ) were randomized to AT ( 5 d x wk(-1 ) ) or combined AT ( 3 d x wk(-1 ) ) with either one set ( AT/RT1 ) or three sets ( AT/RT3 ) of RT performed 2 d x wk(-1 ) . 
VO2peak , ventilatory anaerobic threshold ( VAT ) , strength and endurance , body composition , and adherence were measured before and after 29 wk of training . RESULTS Fifty-three subjects ( mean +/- SEM age 61 +/- 2 ) completed the training . The increase from baseline in VO2peak ( L x min(-1 ) ) averaged 11 % for AT ( P < 0.05 ) , 14 % for AT/RT1 ( P < 0.01 ) , and 18 % for AT/RT3 ( P < 0.001 ) , however , the difference between groups was not significant . VAT improved significantly in the AT/RT3 group only ( P < 0.05 ) . The AT/RT3 group gained more lean mass than the AT group ( 1.5 versus 0.4 kg , P < 0.01 ) , yet gains between AT/RT1 and AT were similar ( P = 0.2 ) . Only AT + RT groups demonstrated a reduction in body fat ( P < 0.05 ) . Strength and endurance increased more in the AT + RT groups than AT alone ( P < 0.05 ) . Adherence to number of sets performed was lower in AT/RT3 than AT/RT1 ( P < 0.02 ) . CONCLUSIONS Combined AT + RT yields more pronounced physiological adaptations than AT alone and appears to be superior in producing improvements in VO2peak , muscular strength and endurance , and body composition . The data support the use of multiple set RT for patients desiring an increased RT stimulus which may further augment parameters that affect VO2peak , VAT , lower body endurance , and muscle mass in a cardiac population",
"role": "user"
},
{
"content": "When added to endurance training , dynamic strength training leads to significantly greater improvements in peripheral muscle strength and power output in patients with cardiovascular disease , which may be relevant to enhance the patient ’s prognosis .\nMoreover , multiple studies report that high-intensity strength training causes fewer increments in (intra-)arterial blood pressure and cardiac output , as opposed to low-intensity strength training , thus potentially pointing towards sufficient medical safety for the cardiovascular system .",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.1538 | [
{
"content": "Find entities of these types in the text: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nOutput a JSON object with entity types as keys and lists of found entities as values.\n\nText:\nEffect of a ferrule and increased clinical crown length on the in vitro fracture resistance of premolars restored using two dowel-and-core systems . This study investigated the effect of a crown-lengthening ferrule on the fracture resistance of endodontically-treated teeth restored with two dowel-core systems . Thirty-two extracted mandibular first premolars were sectioned perpendicular to the long axis at a point 1.0 mm occlusal to the buccal cementoenamel junction . Following endodontic treatment , the teeth were randomly assigned to four groups : cast Ni-Cr alloy dowel-core with no ferrule ( Group A1 ) , cast Ni-Cr alloy dowel-core with 2.0 mm ferrule ( Group A2 ) , prefabricated carbon fiber-reinforced dowel-resin core with no ferrule ( Group B1 ) and carbon fiber-reinforced dowel-resin core with 2.0 mm ferrule ( Group B2 ) . Each specimen was embedded in a self-cured acrylic resin block from 2.0 mm apical to the margins of a cast Ni-Cr alloy crown , then loaded at 150 degrees from the long axis in a universal testing machine at a crosshead speed of 1.0 mm/minute until fracture . The data were recorded and analyzed using ANOVA and Fisher 's exact tests , with alpha = 0.05 . Mean failure loads ( kN ) for the A1 , A2 , B1 and B2 Groups were : 1.46 ( S.D . 0.45 ) , 1.07 ( 0.21 ) , 1.13 ( 0.30 ) and 1.02 ( 0.27 ) . The teeth restored with cast Ni-Cr dowel-cores and 2.0 mm ferrules demonstrated significantly lower fracture strengths , p = 0.04 . There were significant differences in the root fracture patterns between the two dowel systems , with the carbon fiber-reinforced dowel-resin core system , being the less severe p < 0.05 . Crown lengthening with a 2.0 mm apical extended ferrule resulted in reduced fracture strengths for endodontically-treated teeth restored using two dowel-core systems and cast metal crowns . The carbon fiber-reinforced dowel-resin core system reduced the severity of the root fractures .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"crown-lengthening ferrule\", \"carbon fiber-reinforced dowel-resin core with 2.0 mm ferrule\", \"carbon fiber-reinforced dowel-resin core system\", \"cast Ni-Cr alloy dowel-core with no ferrule ( Group A1 ) , cast Ni-Cr alloy dowel-core with 2.0 mm ferrule ( Group A2 ) , prefabricated carbon fiber-reinforced dowel-resin core with no ferrule ( Group B1 )\", \"cast Ni-Cr dowel-cores and 2.0 mm ferrules\", \"ferrule\", \"2.0 mm apical extended ferrule\"], \"Outcome\": [\"ANOVA and Fisher \\u0027s exact tests\", \"Mean failure loads\", \"root fracture patterns\", \"fracture strengths\"], \"Participant\": [\"root fractures\", \"Thirty-two\"]}",
"role": "assistant"
}
] |
science.healthver_entailment | science.healthver_entailment.4284 | [
{
"content": "Output a JSON object with \"verdict\" and \"evidence\" fields based on the claim and abstract below.\n\nClaim: drinking tea is effective in curing and relieving symptoms of the novel coronavirus, COVID-19. The claim is false; \n\nAbstract:\nThis review focused on the use of plant-based foods for enhancing the immunity of all aged groups against COVID-19.\nIn humans, coronaviruses are included in the spectrum of viruses that cause the common cold and, recently, severe acute respiratory syndrome (SARS).\nEmerging infectious diseases, such as SARS present a major threat to public health.\nThe novel coronavirus has spread rapidly to multiple countries and has been declared a pandemic by the World Health Organization.\nCOVID-19 is usually caused a virus to which most probably the people with low immunity response are being affected.\nPlant-based foods increased the intestinal beneficial bacteria which are helpful and make up of 85% of the immune system.\nBy the use of plenty of water, minerals like magnesium and Zinc, micronutrients, herbs, food rich in vitamins C, D and E, and better life style one can promote the health and can overcome this infection.\nVarious studies investigated that a powerful antioxidant glutathione and a bioflavonoid quercetin may prevent various infections including COVID-19.\nIn conclusion, the plant-based foods play a vital role to enhance the immunity of people to control of COVID-19.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"COVID-19 is usually caused a virus to which most probably the people with low immunity response are being affected.\", \"Plant-based foods increased the intestinal beneficial bacteria which are helpful and make up of 85% of the immune system.\", \"In conclusion, the plant-based foods play a vital role to enhance the immunity of people to control of COVID-19.\"]}",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.15150 | [
{
"content": "Summarize the article in an uncomplicated manner.\n\nTitle: Sequential infection experiments for quantifying innate and adaptive immunity during influenza infection\nArticle:\nAbstract:\nLaboratory models are often used to understand the interaction of related pathogens via host immunity. For example, recent experiments where ferrets were exposed to two influenza strains within a short period of time have shown how the effects of cross-immunity vary with the time between exposures and the specific strains used. On the other hand, studies of the workings of different arms of the immune response, and their relative importance, typically use experiments involving a single infection. However, inferring the relative importance of different immune components from this type of data is challenging. Using simulations and mathematical modelling, here we investigate whether the sequential infection experiment design can be used not only to determine immune components contributing to cross-protection, but also to gain insight into the immune response during a single infection. We show that virological data from sequential infection experiments can be used to accurately extract the timing and extent of cross-protection. Moreover, the broad immune components responsible for such cross-protection can be determined. Such data can also be used to infer the timing and strength of some immune components in controlling a primary infection, even in the absence of serological data. By contrast, single infection data cannot be used to reliably recover this information. Hence, sequential infection data enhances our understanding of the mechanisms underlying the control and resolution of infection, and generates new insight into how previous exposure influences the time course of a subsequent infection.\nIntroduction:\nThe influenza virus infects epithelial cells in the respiratory tract, causing respiratory symptoms such as coughing and sneezing, and systemic symptoms such as fever. Three main components of the immune response—innate, humoral adaptive and cellular adaptive immunity—work together to control an infection. Experiments have revealed the contribution of each major immune component to resolution of an infection, by suppressing each immune component in turn[1–5]. However, current mathematical models do not agree on how each major immune component contributes quantitatively. A study by Dobrovolny et al.[1] highlights these discrepancies. The study showed that eight existing viral dynamics models[6–13] made different qualitative predictions when different components of the immune response were removed. Each model failed to reproduce the effect of removing at least one of the three components discussed above. The discrepancies arose because many models were only fitted to viral load data from a single infection. It has been shown that many models can fit the viral load for a single infection well, including models without a time-dependent immune response which are thought to be less biologically realistic[6]; however, if data for multiple initial conditions are available, the viral load may have more features to distinguish between competing models[14–16]. One way of altering the initial conditions is through a previous or ongoing infection. We previously conducted a series of experiments where ferrets were sequentially infected with two influenza strains[17, 18]. When a short time interval( 1–14 days) separated exposures, a primary infection protected against a subsequent infection. 
This protection likely arose through cross-immunity, whereby the immune response stimulated by one strain also protects against infection with another. While immune markers indicated the approximate timing of each arm of the immune response[19], the strength of cross-protection due to each component was difficult to measure experimentally. We hypothesised that mathematical models can be used to gain further insight from these types of experiments. Few existing models include interactions between influenza strains on short timescales; hence, we constructed viral dynamics models to reproduce the qualitative observations of these experiments[20, 21]. The models also reproduce observations from a range of experiments where immune components were suppressed[22]. Here, we use simulations to show that these mathematical models allow us to extract the timing and strength of cross-protection from sequential infection data. By attributing cross-protection to specific immune components, the models lead to new insight into how previous exposure influences the time course of a subsequent infection. Moreover, we find that compared to single infection experiments, sequential infection experiments provide richer information on host immunity, and thus are potentially a powerful tool to study immune-mediated control of a primary infection.\nDiscussion:\nIn this study, we have simulated experiments which investigate the interaction of influenza strains through sequential infections, then explored how mathematical models could be applied to the data to gain insight into immune mechanisms. Our analysis has shown that the sequential infection study design, compared to the single infection study design, provides richer information for inferring the timing and strength of each immune component. We have identified three main advantages of sequential infection data. The first advantage is in inferring how each immune component helps to resolve a single infection. We found that the synthetic sequential infection data captures the timing of innate and adaptive immunity during a single infection, and thus enables accurate prediction of the outcomes of some in silico experiments where immune components were removed. In contrast, we could not consistently infer the timing and strength of innate immunity from single infection data. Moreover, single infection data contains information only on the timing of adaptive immunity, but not the effects of suppressing adaptive immunity. The second advantage is that sequential infection data contains more information on the effects of cross-protection. We were able to use the model fitted to the sequential infection data to precisely predict outcomes of further such experiments using the same strains but different inter-exposure intervals. Using the model fitted to the single infection data greatly reduced predictive power. The third advantage is in inferring the contribution of each immune component to this cross-protection. For the dataset in the main text, we were able to infer that cellular adaptive immunity played little role in cross-protection, and that innate immunity and/or target cell depletion led to the observed cross-protection. We also showed that target cell depletion alone could not explain this cross-protection. Collectively, the above findings strongly suggest that analysing real sequential infection data using mathematical models will help infer the timing and strength of host immunity, which are difficult to measure directly in laboratory experiments. 
Such mathematical models will not only have the ability to explain observed experimental outcomes, but the ability to predict outcomes of new experiments, which can then be tested in the laboratory. These findings are particularly important as sequential infection experiments are increasingly being used to study the role of the immune response during infection with influenza and other respiratory pathogens[18, 26]. This study has highlighted some limitations of quantifying the immune response using virological data from sequential infection experiments alone. Firstly, using the synthetic sequential infection data, we could not discriminate between the effects of cellular and humoral adaptive immunity in controlling a primary infection. If the effects of cellular and humoral adaptive immunity need to be distinguished, such as to predict the effects of vaccines that boost these components separately, quantities other than the viral load may need to be measured. Secondly, we could not definitively rule out the possibility that target cell depletion contributed significantly to cross-protection. We were also unable to distinguish the roles of different innate immune mechanisms in cross-protection. Some modelling applications may require the strengths of different innate immune mechanisms to be known separately. An example of such an application is modelling the effect of treatments that modulate the innate immune response, such as the toll-like receptor-2 agonist Pam2Cys which has been shown to stimulate innate immune signals and reduce influenza-associated mortality and morbidity in animal studies[27]. In this simulation-based study, we were able to compare inferred quantities to a ‘ground truth’, to understand which quantities were inferred accurately, and which inferences may need to be treated with caution. For example, in S4 File, we show that the marginal posterior distributions for some parameters exhibit bias, such as that for the basic reproduction number R0. These apparent biases reflect correlations in the joint posterior distribution, and would be difficult to identify without a simulation-based study. This approach is thus crucial for sound interpretation of future studies fitting models to experimental data. In addition to total viral load data, the study by Laurie et al.[17] also included infectious viral load measurements for single infection ferrets, and serological responses and cytokine levels at limited time points. Inclusion of this data could help to address the above limitations; the utility of this additional data can be assessed by further simulation-based studies. New experiments could also be conducted to improve parameter estimates, leading to more accurate inference of the timing and strength of immune components. Previous studies have measured viral decay rates in vitro and incorporated these estimates into model fitting[28, 29]. In vitro studies can also directly measure the time course of those immune mechanisms which are active in vitro[30]. Now that we have shown how mathematical models can increase the utility of sequential infection experiments, fitting the model to the experimental data by Laurie et al.[17] is a priority. A simulation-estimation study alone cannot validate the mathematical model used, or infer the effects of host immunity against the pathogens in the experiments. However, this simulation-based study ensures that results will be interpreted appropriately when the models are fitted to data. 
We have demonstrated that compared to single infection experiments, the sequential infection study design helps us to better understand cross-protection on short timescales. Further, data from sequential infection experiments helps to discriminate between existing models for a primary infection, leading to an improved understanding of the control and resolution of infection.",
"role": "user"
},
{
"content": "The resolution of an influenza infection requires different components of the immune response to work together . Despite recent advances in mathematical modelling , we do not well understand how much each broad immune component contributes to this process at a given time . In this study , we show that laboratory data on the amount of virus over the course of a single infection is insufficient for inferring the contribution of each broad immune component . However , if the animals are exposed to two different virus strains with only days separating exposures , then the timing and strength of protection provided by the first infection against the second provides crucial additional information . We show how mathematical models can be used to recover the timing and strength of each immune component , thus enhancing our understanding of how an infection is controlled , and how a previous exposure changes the time course of a subsequent infection .",
"role": "assistant"
}
] |
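The influenza example above turns on target-cell-limited viral dynamics with innate and adaptive immune components. The ODE sketch below is a minimal stand-in for that class of model, not the paper's actual equations or fitted values: it couples target cells, infected cells, free virus, and a generic innate-immunity variable, and every parameter number is an illustrative assumption.

```python
# Minimal target-cell-limited viral dynamics with a crude innate-immunity term.
# Illustrative sketch only: the paper's model structure, parameters, and data
# are not reproduced here, and all numbers below are assumed placeholders.
import numpy as np
from scipy.integrate import solve_ivp

def viral_dynamics(t, y, beta, delta, p, c, phi):
    """State y = (T, I, V, F): target cells, infected cells, virus, innate response."""
    T, I, V, F = y
    dT = -beta * T * V                      # infection consumes target cells
    dI = beta * T * V - delta * I           # infected cells die at rate delta
    dV = p * I / (1.0 + phi * F) - c * V    # innate response F suppresses production
    dF = 1e-6 * I - 2.0 * F                 # F is driven by infected cells and decays
    return [dT, dI, dV, dF]

params = (2e-5, 2.0, 10.0, 5.0, 0.5)        # beta, delta, p, c, phi (assumed)
y0 = [4e6, 0.0, 10.0, 0.0]                  # initial cells, cells, virions, F
sol = solve_ivp(viral_dynamics, (0.0, 12.0), y0, args=params,
                t_eval=np.linspace(0.0, 12.0, 13), rtol=1e-8)
print(np.round(np.log10(np.clip(sol.y[2], 1e-2, None)), 2))  # daily log10 viral load
```

Fitting such a model to sequential-infection data would treat the compartments stimulated by the first strain as the source of cross-protection against the second, typically via Bayesian inference over the parameters, which is the kind of simulation-estimation analysis the excerpt describes.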
science.scientific_papers_summarization_single_doc_arxiv | science.scientific_papers_summarization_single_doc_arxiv.171378 | [
{
"content": "Given the full text of a scientific research paper, write an abstract section for the paper. Your response should contain only the abstract without any other text.\n\nArticle text:\nmulti - frequency electrical impedance tomography ( mfeit ) @xcite can be applied to the non - invasive assessment of abdominal obesity , which is a predictor of health risk .\nmfeit data of the boundary current - voltage relationship at various frequencies of @xmath0 1 mhz reflect the regional distribution of body fat , which is less conductive than water and tissues such as muscle , and can therefore be used to estimate the thicknesses of visceral and subcutaneous adipose tissue .\nthis diagnostic information can be used to assess abdominal obesity , which is considered a cause of metabolic syndrome as well as a risk factor for various other health conditions @xcite .\nthe spatial resolution of computed tomography ( ct ) and magnetic resonance ( mr ) images is high enough for the assessment of abdominal obesity @xcite ; however , there are concerns and limitations regarding their use for this purpose ; e.g. , ct exposes the subject to ionizing radiation , while mr imaging has poor temporal resolution .\nelectrical impedance tomography ( eit ) is a noninvasive , low - cost imaging technique that provides real - time data without using ionizing radiation .\nhowever , experiences over the past three decades have not succeeded in making eit robust against forward modeling errors such as those associated with uncertainties in boundary geometry and electrode positions . in time - difference eit , which images changes in the conductivity distribution with time , forward modeling errors\nare effectively handled and largely eliminated when the data are subtracted from the reference data at a predetermined time .\ntime - difference eit for imaging the time changes in the conductivity distribution effectively handle the forward modeling errors , which are somewhat eliminated from the use of time - difference data subtracting reference data at a predetermined fixed time . in static eit , however , there are no reference data that can be used to eliminate the forward modeling error @xcite .\ncreating reference - like data would be the key issue in static eit .\nthis paper proposes a new reconstruction method that uses prior anatomical information , at the expense of spatial resolution , to compensate for this fundamental drawback of static eit and improve its reproducibility . in the case of abdominal eit , it is possible to use a spatial prior to handle its inherent ill - posed nature .\nthe proposed method employs a depth - based reconstruction algorithm that takes into account the background region , boundary geometry , electrode configuration , and current patterns . here\n, we could take advantage of recent advances in 3d scanner technology to minimize the forward modeling error by extracting accurate boundary geometry and electrode positions .\nthe proposed method uses a specially chosen current pattern to obtain a depth - dependent data set , generating reference - like data that are used to outline the borders between fat and muscle . 
from the relation between a current injected through one pair of electrodes and the induced voltage drop through the other pair of electrodes , we obtain the transadmittance , which is the ratio of the current to the voltage .\nhence , the transadmittance depends on the positions of two pairs of electrodes , body geometry , and admittivity distribution .\nwe can extract the corresponding apparent admittivity in terms of two pairs of electrodes from the transadmittance divided by a factor involving electrode positions and the body geometry .\nthis apparent admittivity changes with the choice of pairs of electrodes .\n( in the special case when the subject is homogeneous , the apparent admittivity does not depend on electrode positions . )\nnoting that the change in apparent admittivity in the depth direction can be generated by varying the distance between electrodes , we could probe the admittivity distribution by developing a proper and efficient algorithm based on a least - squares minimization .\nthe performance of the proposed least - squares approach is demonstrated using numerical simulations with a 32-channel eit system and human like domain .\nfuture research study is to adopt an mfeit technique to exploit the frequency - dependent behavior of human tissue @xcite .\nthe distribution of visceral fat in abdominal region can then be estimated from the linear relation between the data and the admittivity spectra , and thus obtain a clinically useful absolute conductivity image .\nlet an imaging object occupy three ( or two ) dimensional domain @xmath1 with its admittivity distribution @xmath2 where @xmath3 is the conductivity , @xmath4 the permittivity , and @xmath5 the angular frequency .\nthe domain @xmath1 can be divided into 4 subregions ; subcutaneous fat region @xmath6 , muscle region @xmath7 , bone region @xmath8 , and remaining region @xmath9 as shown in figure [ fig : illustrate - abdomen - tissue - spectroscopy ] ( a ) .\nas shown in figure [ fig : recon - mesh ] for numerical tests , we use the inject - measure set in @xmath10 by choosing eight electrodes depicted in red and generating the corresponding triangular mesh . for computing geometry - free data @xmath11 ,\nthe @xmath12 is essential , which can be obtained numerically by using the fem with homogeneous admittivity @xmath13 in @xmath1 . to give conditions similar to real situations , the fem uses optimally generated meshes\ncorresponding to each admittivity distribution for @xmath14 and @xmath12 .\nwe tested all inject - measure index sets @xmath15 with the corresponding triangular mesh in the reconstructed region @xmath16 corresponding to the choice of @xmath17 .\nthe image reconstruction result of applying the method using the @xmath10 index set with 1063 triangular mesh ( figure [ fig : recon - mesh ] ) is represented in figure [ fig : recon - humanbody-4methods ] .\nnow , we add gaussian random noise with snr 15db to the data @xmath14 .\nthe reconstruction results using index sets @xmath15 of applying the linearized method with tikhonov regularization using the noisy data are represented in figure [ fig : recon - humanbody - linearized-32 ] . for visual comfort ,\nwe merge the 32 result images into 1 image as shown in figure [ fig : recon - humanbody - linearized-32 ] . 
it is worth pointing out that the column vectors of @xmath18 are highly correlated .\nthe correlation function @xmath19 between the column vectors can be defined by @xmath20 for @xmath21 , where @xmath22 , @xmath23 are column vectors of @xmath18 .\nwe compute the correlation @xmath24 , @xmath25 , @xmath26 for @xmath27 which correspond to the mesh elements @xmath28 near the boundary , @xmath29 in the middle of @xmath6 , and @xmath30 far from the boundary @xmath31 .\nas shown in figure [ fig : correlation - smat ] , the column vectors are highly correlated .\n[ figure fig : correlation - smat , panels ( a ) - ( d ) : correlations @xmath24 , @xmath25 , @xmath26 with mesh 1 , 2 , and 3 , respectively ]\nthe success of the proposed least - squares method arises from taking advantage of the tikhonov regularization .\nthe tikhonov regularization minimizes both @xmath32 and @xmath33 as follows : @xmath34 the component of @xmath35 can be considered as a coefficient of the linear combination of column vectors @xmath36 to generate @xmath37 . when minimizing @xmath32 , high correlation of @xmath36 causes uncertainty in finding @xmath38 . however , minimizing @xmath33 can compensate for the mismatch of @xmath39 caused by high correlations between the @xmath36 for @xmath40 , since @xmath39 and @xmath41 tend to have a similar value when @xmath36 and @xmath42 are highly correlated .\nit is also worth emphasizing that other imaging algorithms such as music @xcite can not perform well precisely because of the high correlation of the column vectors of the sensitivity matrix @xmath18 .
for given @xmath55 , there exists a constant @xmath56 such that , for @xmath57 , @xmath58 where @xmath59 .\n[ lem : cemispem ] let @xmath60 be the solution of @xmath61 then @xmath60 can be represented by @xmath62(x ) \\quad\\mbox{for}~x\\in\\omega .\\ ] ] for @xmath63 , we get @xmath64 it follows from mean - value theorem that , for @xmath63 , @xmath65 since @xmath66 for @xmath63 , we have @xmath67 therefore @xmath68 for @xmath69 , @xmath70 and @xmath60 can be represented by @xmath71(x)-2\\ms\\left[\\frac{\\partial \\widetilde{u}^h}{\\partial n}\\chi_{\\me^h_-}\\right](x ) , \\\\\nv^h(x ) & = & -2\\ms\\left[\\frac{\\partial v^h}{\\partial n}\\chi_{\\me^h_+}\\right](x)-2\\ms\\left[\\frac{\\partial v^h}{\\partial n}\\chi_{\\me^h_-}\\right](x).\\end{aligned}\\ ] ] then , for @xmath63 , @xmath72 where @xmath73 on @xmath74 . since @xmath75 , @xmath76 it follows from mean - value theorem that for @xmath63 @xmath77 a similar argument shows that @xmath78 let @xmath1 be a simply connected domain in @xmath43 .\nlet @xmath44 and @xmath45 be circular electrodes centered at points @xmath46 and @xmath47 on @xmath48 , respectively , with radius @xmath49 .\nlet @xmath79 satisfy @xmath80 where @xmath52 and @xmath53 are constants .\nlet @xmath81 satisfy @xmath82 for given @xmath55 , there exists a constant @xmath56 such that , for @xmath57 , @xmath58 where @xmath59 .\n[ thm : cemispem ] let @xmath60 be the solution of the following equation @xmath83 then @xmath60 and @xmath84 can be represented by @xmath85 and @xmath86 hence , for @xmath63 , @xmath87 therefore , from mean - value theorem it follows that , for @xmath63 , @xmath88 according to the decay estimation for the neumann function in @xcite and the fact that @xmath66 for @xmath63 , there exists a positive constant @xmath56 such that @xmath89 consequently , @xmath90 the functions @xmath70 and @xmath60 can be represented by @xmath91 then , for @xmath63 , @xmath92 where @xmath93 on @xmath74 . since @xmath75 , @xmath94 again , from the mean - value theorem it follows that , for @xmath63 , @xmath95 a similar argument shows that @xmath96 therefore @xmath97 which completes the proof of the theorem .",
"role": "user"
},
{
"content": "this paper presents a static electrical impedance tomography ( eit ) technique that evaluates abdominal obesity by estimating the thickness of subcutaneous fat . \n eit has a fundamental drawback for absolute admittivity imaging because of its lack of reference data for handling the forward modeling errors . to reduce the effect of boundary geometry errors in imaging abdominal fat \n , we develop a depth - based reconstruction method that uses a specially chosen current pattern to construct reference - like data , which are then used to identify the border between subcutaneous fat and muscle . \n the performance of the proposed method is demonstrated by numerical simulations using 32-channel eit system and human like domain . \n abdominal electrical impedance tomography , reference - like data , outermost region estimation , sensitivity matrix . 35r30 , 49n45 , 65n21 .",
"role": "assistant"
}
] |
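The EIT reconstruction described above reduces to linearized least squares with Tikhonov regularization over a sensitivity matrix whose columns are highly correlated. The snippet below is a generic sketch of that step, not the authors' code: the matrix S, data b, and regularization weight lam are random or assumed stand-ins for the paper's meshes and boundary measurements.

```python
# Generic Tikhonov-regularized least squares: minimize ||S x - b||^2 + lam * ||x||^2.
# S, b, and lam are stand-ins; the paper's sensitivity matrix and measured data
# are not available here.
import numpy as np

rng = np.random.default_rng(0)
m, n = 64, 200                                  # measurements x mesh elements (assumed)
S = rng.normal(size=(m, n))
S[:, 1] = S[:, 0] + 0.01 * rng.normal(size=m)   # make two columns nearly collinear
x_true = np.zeros(n)
x_true[:20] = 1.0                               # e.g. an outermost (fat) region indicator
b = S @ x_true + 0.05 * rng.normal(size=m)

lam = 0.1
x_hat = np.linalg.solve(S.T @ S + lam * np.eye(n), S.T @ b)   # normal equations

# Column correlation of the kind the excerpt identifies as the obstacle.
c01 = S[:, 0] @ S[:, 1] / (np.linalg.norm(S[:, 0]) * np.linalg.norm(S[:, 1]))
print(f"corr(col 0, col 1) = {c01:.3f}; residual = {np.linalg.norm(S @ x_hat - b):.3f}")
```

As the excerpt argues, the ridge term is what keeps the solution stable when columns of S are nearly collinear; an unregularized pseudo-inverse, or a subspace method such as MUSIC, would spread weight arbitrarily between correlated columns.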
science.scitldr_aic | science.scitldr_aic.187 | [
{
"content": "Summarize the paper:\n\nPaper text:\n\nDeePa is a deep learning framework that explores parallelism in all parallelizable dimensions to accelerate the training process of convolutional neural networks.\nDeePa optimizes parallelism at the granularity of each individual layer in the network.\nWe present an elimination-based algorithm that finds an optimal parallelism configuration for every layer.\nOur evaluation shows that DeePa achieves up to 6.5× speedup compared to state-of-the-art deep learning frameworks and reduces data transfers by up to 23×.\nTraining convolutional neural networks (CNNs) is increasingly compute-intensive and timeconsuming.\nIt takes days or even weeks to train deep CNNs from scratch BID10 BID12 BID9 BID11 .\nExisting deep learning frameworks such as TensorFlow, PyTorch, and Caffe2 parallelize the training process onto multiple processors (usually GPUs) using image parallelism 1 dividing the entire image dataset into batches with the same number of images and assigning each batch to a dedicated processor.The standard parallelization of CNN training only exploits image parallelism.\nHowever, other dimensions can also parallelize the training process.\nFor example, in CNNs for 2D images, data is commonly organized as 4-dimensional tensors (i.e., image, height, width, channel).\nThe image dimension includes an index for each image in the input dataset.\nThe height and width dimensions specify a position in an image.\nFor a particular position, the channel dimension 2 indexes different neurons for that position.\nExploring these other parallelizable dimensions can potentially reduce the compute time and data transfer cost when training CNNs (see Section 2).\nMoreover, different layers in a CNN may prefer different parallelism configurations for achieving optimal performance.We propose DeePa, a deep learning framework that explores parallelism in all parallelizable dimensions to accelerate the training of CNNs.\nTo the best of our knowledge, DeePa is the first system that models and exploits the parallelism of neural networks at the granularity of each individual layer.\nTo generate a parallelism configuration for each layer, DeePa uses an elimination-based algorithm that automatically finds the configuration with the best estimated performance.The main contributions of this paper are:• We present DeePa, a deep learning framework that explores parallelism in all parallelizable dimensions to accelerate the training of CNNs.•\nThe parallelization strategy is selected at the granularity of each individual layer.•\nWe present an elimination-based algorithm for finding the parallelism configuration with optimal estimated performance for each layer.•\nOur evaluation shows that, compared to state-of-the-art deep learning frameworks (e.g., TensorFlow and PyTorch), DeePa achieves 6.5×, 1.9×, and 1.5× speedup for AlexNet, TAB0 Conv3x3 in an Inception-v3 module respectively . 
The\nperformance improvement comes from reducing overall data transfers, automatically overlapping computation with data movement, and accelerating computation throughput.\nWe have presented DeePa, a deep learning framework that explores parallelism in all parallelizable dimensions to accelerate the training of CNNs.\nDeePa optimizes the parallelism configuration chosen at the granularity of individual layers.\nDeePa achieves up to 6.5× speedup for training CNNs and reduces overall data transfers by up to 23× compared to state-of-the-art deep learning frameworks.\nProof.\nThe Cost function is defined in Equation 1.\nLet g be any configuration.\nWe first compute the difference between Cost(g, (V, E)) and Cost(g, (V', E')):\nDISPLAYFORM0 = w.compute(g(w)) + w.update(g(w)) DISPLAYFORM1\nNow assume g is an optimal configuration for (V, E). Then we have\nw.compute(g(w)) + w.update(g(w)) + e1.xfer(g(u), g(w)) + e2.xfer(g(w), g(v)) = min_{c_w} {w.compute(c_w) + w.update(c_w) + e1.xfer(g(u), c_w) + e2.xfer(c_w, g(v))}\nTherefore, g is an optimal configuration of (V', E'). For the other direction, note that if g is an optimal configuration of (V', E'), then it can be extended to an optimal configuration of (V, E) by adding the node w with the same minimal assignment.",
"role": "user"
},
{
"content": "To the best of our knowledge, DeePa is the first deep learning framework that controls and optimizes the parallelism of CNNs in all parallelizable dimensions at the granularity of each layer.",
"role": "assistant"
}
] |
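The proof sketched in this example eliminates a node by folding the minimum over its configurations into the costs of its neighbors. For a simple chain of layers that elimination reduces to the dynamic program below; note the cost tables are fabricated placeholders rather than DeePa's measured compute and transfer costs, and DeePa's actual algorithm handles general computation graphs, not just chains.

```python
# Toy per-layer parallelism chooser on a chain of layers, in the spirit of the
# elimination argument above. All cost numbers are made up for illustration.
def best_configs(compute, xfer):
    """compute[i][c]  = cost of layer i under configuration c.
    xfer[i][c1][c2] = transfer cost from layer i (config c1) to layer i+1 (config c2).
    Returns (total cost, chosen configuration per layer)."""
    n, k = len(compute), len(compute[0])
    best = list(compute[0])            # best[c]: cheapest prefix ending in config c
    back = []                          # back[i][c]: best predecessor configuration
    for i in range(1, n):
        cur, arg = [], []
        for c2 in range(k):
            cands = [best[c1] + xfer[i - 1][c1][c2] for c1 in range(k)]
            c1 = min(range(k), key=cands.__getitem__)
            cur.append(cands[c1] + compute[i][c2])
            arg.append(c1)
        best, back = cur, back + [arg]
    c = min(range(k), key=best.__getitem__)
    total, path = best[c], [c]
    for arg in reversed(back):         # walk the backpointers to recover the path
        c = arg[c]
        path.append(c)
    return total, path[::-1]

# Two hypothetical configs per layer (say, image- vs. channel-parallel).
compute = [[4.0, 5.0], [6.0, 3.0], [4.0, 4.5]]
xfer = [[[0.0, 2.0], [2.0, 0.0]], [[0.0, 2.0], [2.0, 0.0]]]
print(best_configs(compute, xfer))     # -> (12.5, [1, 1, 1])
```

The global objective factors over per-layer and per-edge terms exactly as in the quoted Cost function, which is why eliminating one layer at a time preserves optimality.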
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.15889 | [
{
"content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nIn earlier work, we have shown that articulation rate in Swedish child-directed speech (CDS) increases as a function of the age of the child, even when utterance length and differences in articulation rate between subjects are controlled for. In this paper we show on utterance level in spontaneous Swedish speech that i) for the youngest children, articulation rate in CDS is lower than in adult-directed speech (ADS), ii) there is a significant negative correlation between articulation rate and surprisal (the negative log probability) in ADS, and iii) the increase in articulation rate in Swedish CDS as a function of the age of the child holds, even when surprisal along with utterance length and differences in articulation rate between speakers are controlled for. These results indicate that adults adjust their articulation rate to make it fit the linguistic capacity of the child.\n\nCited Abstract(s):\n[1]: The speech rate and pitch (F0) characteristics of naturalistic, longitudinally recorded infant- and adult-directed speech are reported for three, genetically diverse languages. Previous research has suggested that the prosodic characteristics of infant-directed speech are slowed speech rate, raised mean pitch, and expanded pitch range relative to adult-directed speech. Sixteen mothers (5 Sri Lankan Tamil, 5 Tagalog, 6 Korean) were recorded in their homes during natural interactions with their young infants, and adults, over the course of 12 months beginning when the infant was 4 months old. Regression models indicated that the difference between infant- and adult-directed speech rates decreased across the first year of infants' development. Models of pitch revealed predicted differences between infant- and adult-directed speech but did not provide evidence for cross-linguistic or longitudinal effects within the time period investigated for the three languages. The universality of slowed speech rate, raised pitch, and expanded pitch range is discussed in light of individuals' highly variable implementation of these prosodic features in infant-directed speech.",
"role": "user"
},
{
"content": "However, less is known about the extent to which the characteristics of CDS change as the child grows older. In the case of AR, it has been shown that it increases in mothers' (n=16) CDS as a function of the age of the child for children from 0;4 to 1;4 of age in Korean (n=6), Sri Lankan Tamil (n=5) and Tagalog (n=5). This is referred to as speech rate'', but it is stated in a footnote that what is actually meant is AR, under the constraint of including utterances where silences never exceeded a duration of 300 ms. Utterance length was controlled for by choosing utterances around 5 s long [1] .",
"role": "assistant"
}
] |
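The claim in this example rests on regressing articulation rate (AR) on child age while controlling for utterance length and surprisal (the negative log probability of the utterance). The sketch below only illustrates the shape of such an analysis on simulated data; the study's corpus, the language model supplying surprisal, and the per-speaker random effects (statsmodels' mixedlm could supply those) are all omitted.

```python
# Toy regression of articulation rate on child age, controlling for utterance
# length and surprisal. Data are simulated, not the study's Swedish CDS corpus.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "age_months": rng.uniform(6, 36, n),
    "utt_len": rng.integers(2, 12, n).astype(float),
})
df["surprisal"] = 2.0 * df["utt_len"] + rng.normal(0.0, 2.0, n)   # simulated -log2 p
df["artic_rate"] = (3.0 + 0.02 * df["age_months"] + 0.05 * df["utt_len"]
                    - 0.03 * df["surprisal"] + rng.normal(0.0, 0.3, n))

fit = smf.ols("artic_rate ~ age_months + utt_len + surprisal", data=df).fit()
print(fit.params.round(3))   # the age coefficient should come out positive here
```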
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.96911 | [
{
"content": "Write an abstract for the research paper below.\n\nPaper text:\npolymyositis ( pm ) and dermatomyositis ( dm ) are autoimmune myopathies characterized by inflammation and weakness of proximal muscles with extra muscular manifestations . in dm\nalthough both these disorders share some common clinical and histological features like muscle weakness , inflammatory infiltrates on muscle biopsy , but they have certain differences both in terms of presentation and patho - physiology . involvement of skin is the primary clinical feature , which distinguishes dm from pm .\nlater on , it has been reported that inflammatory myopathies can also occur in the absence of skin involvement . in 1891\nthe term dermatomyositis was coined by unverricht to describe patients with dermatological findings and myopathy .\npatients with pm and dm usually presented with sub acute or chronic onset of symmetrical proximal muscle weakness , often associated with high serum creatinine kinase levels . in electromyography and magnetic resonance imaging ( mri )\nthese two diseases can be distinguished from one another by histological features on biopsy . from the immunological point of view\nin dm , the histopathology of muscles indicates primary involvement of microcirculation mediated by humoral processes associated with secondary ischemic changes of muscle fibers.[79 ] the hallmark feature of dm is presence of peri - fascicular atrophic regenerating and degenerating myofibers , which has been resulted from destruction of capillaries of this region .\nit has been proposed that depletion of capillaries induces localized hypoxia and injury to the myofibers .\nin fact this loss of capillary and abnormal morphology is an early feature of dm and can occur in the absence of inflammatory infiltrates .\nbut one recent study showed that in both pm and dm there is early capillary loss so it is not disease specific . in different studies from dm muscle tissues have demonstrated that deposition of the c5b-9 membrane attack complex ( mac ) on endothelial cells and the presence of abnormal tubo - reticular structures within the smooth endoplasmic reticulum of endothelial cells occur before the depletion of capillaries .\nanother characteristic feature of dm is presence of peri - vascular inflammation in the muscle . in this inflammatory collections b cells\ngreenberg et al . have reported that most of the cd4 t cells in the dm muscle biopsies are plasmacytoid dendritic cells , which are potent source of ifn-. therefore genes induced by ifn- are highly expressed in muscle biopsies of dm patients compared with other inflammatory myopathies .\nmoreover , it has been found in other studies that in periphery ifn-/ inducible gene expression correlate with disease activity of dm . ,\nin pm the characteristic feature is presence of cd8 t cells and macrophages in the myofibers . compared with healthy muscles , mhc - i is up regulated in myofibers of pm.[1921 ] the expression of mhc - i on myositis muscle suggest that these cells may be killed by cd8 t cells which contain perforin , a pore - forming protein that mediates the entry of cytotoxic proteases and calcium into target cells . in confocal microscopic studies it has been found that these perforin - containing granules are oriented toward muscle fibers , consistent with a cytotoxic mechanism of cell death in pm .\nalthough mhc - i has been proposed to mediate cell death in pm , but it is also expressed in dm patients mainly on perifascicular fibers . 
in one study fasth\net al . have reported that in muscles of patients of pm and dm , the main effector t - cell subset is cd28 t - cells , which are resistant to apoptosis and thus may be responsible for recurrence of disease following treatment .\nskin biopsies of dm lesions demonstrate sparse infiltrate of inflammatory cells at the dermo - epidermal junction .\nthe dermal infiltrates consists of activated t cells and there is deposition of immune complex along the vessel walls of the dermis .\nthis finding along with muscle pathology findings suggests that blood vessels may be the primary target of immune response in dm .\nit has also been shown that in dm patients there is increase in ki-67 positive keratinocytes and decreased number of bcl-2 positive cells in the basal layer of epidermis suggesting increase proliferation and abnormal apoptotic pathway in skin of dm patients .\nlike other auto - immune diseases , in pm and dm also strong association of auto - antibodies with distinct clinical phenotypes is found .\nthese auto - antibodies have been classified into myositis - associated autoantibodies ( maas ) , which can be found in patients with other connective tissue diseases , and myositis specific antibodies ( msas ) , which are found primarily in patients with myositis .\nthe exact role of these auto - antibodies in the disease pathology is not clear . in 1980 ,\nauto - antibodies against the histidyl - trna synthetase ( anti - jo-1 ) were first described and are the most common msas . in 2530% of myositis patients anti - jo-1 may be present .\npatients with these antibodies are associated with a unique clinical syndrome including myositis , interstitial lung disease ( ild ) , nonerosive arthritis , fever , and characteristic hyperkeratotic lesions along the radial and palmar aspects of the fingers known as mechanic 's hands . among the jo-1\nthere are some mixed results about the association of anti - jo-1 and pm / dm . in couple of studies\nit has been found that anti - jo-1 is more associated with pm than dm .\nin contrast , another study found that anti - jo-1 is almost equally present in patients with pm and dm .\nmoreover from few studies it has been revealed that titers of anti - jo-1 are correlated with disease activity.[303537 ] in one study by stone et al .\nit has been found that in 81 patients with positive anti - jo-1 , there was modest correlation between auto - antibody titer , creatinine kinase level , muscle and lung involvement .\ntherefore serial anti - jo-1 titer follow up in an individual may be a good indicator of disease activity .\nanother important auto - antibody is anti - mi-2 , which was first described in a patient with dm in 1960 .\nlater on it was found that by immunoprecipitation technique , this auto - antibody is present in 2030% of dm patients whereas few patients with pm are positive for this auto - antibody.[3942 ] however , studies using elisa detection assay found significant number of mi-2 positive cases in patients with pm.[4345 ] this may be due to high false positive result with elisa .\npatients of dm with anti - mi-2 are more susceptible to have severe form of cutaneous manifestations like heliotrope rashes , shawl rashes over the upper back and neck and cuticular over growth . 
additionally , dm patients with anti - mi-2 positive has a more favorable prognosis , respond better to steroid treatment and low incidence of malignancy compared with other dm patients .\na correlation between surface uv radiation intensity and development of anti - mi-2 antibody in patients with dm has been established . at lower latitudes , like in guatemala city where the uv exposure is more , 60% of dm patients are anti - mi-2 positive whereas in glasgow\nwhere the uv exposure is relatively less , only 6.7% are anti - mi-2 positive . in a study , casciola - rosen et al . showed that mi-2 protein levels were found to be relatively low in muscles of healthy individuals and patients with pm whereas dm patients have increased expression of mi-2 . in 1986 , reeves first described the presence of another auto - antibody , anti - signal recognition particle auto - antibodies ( anti - srp ) in a typical polymyositis patient .\nthere are few reports available on this antibody and its association with pm / dm . in one study\n, targoff showed that this antibody is present in 4% of pm / dm patients .\nthese patients did not have dm rashes or overlap syndrome , but they have severe muscle disease .\nsubsequently it has been found that these ( anti - srp positive ) patients have severe and rapidly progressive muscle disease , which responds to steroid initially , and their biopsy showed much less lymphocytic infiltrates than seen in patients with pm / dm . in other studies involved\n, relatively larger cohort also established that patients with anti - srp positive have severe muscle disease and lacks typical histological features of pm / dm .\nanother auto - antibody , anti-155/140 is highly specific and found in 1321% of patients with dm .\nmoreover , patients with this antibody had higher rate of malignancy ( 71% ) than patients of dm negative ( 11% ) for this antibody .\npatients with anti-155/140 had lower rate of interstitial lung disease than other patients of dm . in one study on japanese patients ,\nkaji et al . showed that patients with this antibody were more likely to have heliotrope rash and gottron 's sign than other patients of dm but in another study no such differences were found .\nmoreover , this antibody was found in patients with juvenile form of dm , where other msa are rare to find .\nbohan and peter criteria are most widely used for diagnosis of polymyositis and dermatomyositis . 
the original bohan and peter criteria proposed in 1975 included the following features : ( 1 ) symmetric proximal muscle weakness , ( 2 ) typical rash of dm ( a distinguishing feature for dm and pm ) , ( 3 ) elevated serum muscle enzymes , and ( 4 ) myopathic changes on electromyography . according to the bohan and peter diagnostic criteria , a definite diagnosis of dm requires four criteria ( including rash ) and a definite diagnosis of pm requires four criteria ( without rash ) .\nprobable disease comprises three criteria ( including rash ) for dm and three criteria ( without rash ) for pm .\npossible disease requires two criteria ( including rash ) for dm and two criteria ( without rash ) for pm .\nthe most commonly used clinical classification of inflammatory idiopathic polymyositis was also proposed by bohan and peter .\ntheir classification system was ( i ) primary idiopathic pm , ( ii ) primary idiopathic dm , ( iii ) juvenile dm , ( iv ) pm or dm with malignancy , and ( v ) pm or dm with associated collagen vascular diseases .\nhowever , they did not recognize inclusion body myositis ( ibm ) or other inflammatory myopathies , such as granulomatous and eosinophilic myositis .\nwhen these criteria were formulated , tests for myositis specific auto - antibodies were not available . to overcome these limitations different classification criteria\nhave been proposed for dm and pm.[ 59 - 61 ] a classification criteria scheme has been proposed by an international workshop of myositis experts focused on criteria for the inclusion of patients in clinical trials . it was stressed strongly that these criteria should be validated and tested for reliability in prospective studies in order to allow further improvements in the future .\nthis classification excludes myositis associated with either a connective tissue disease or malignancy but includes the following categories : ( i ) inclusion body myositis , ( ii ) definite pm , ( iii ) probable pm , ( iv ) definite dm , ( v ) probable dm , ( vi ) amyopathic dm , also called dermatomyositis sine myositis , ( vii ) possible dermatomyositis sine dermatitis , and ( viii ) nonspecific myositis . clinicians widely accept that modern - day histopathology and genetic techniques need to exclude ibm , hereditary and other myopathies when applying the bohan and peter criteria .\nthese newer classification systems intend to provide insight with respect to disease prognosis as well as response to treatment compared to the bohan and peter criteria .",
"role": "user"
},
{
"content": "muscle inflammation and weakness are the key features of idiopathic inflammatory myopathies ( iims ) . in addition iims \n are frequently associated with cutaneous and pulmonary involvement . in clinical practice \n the three common inflammatory myopathies we come across are polymyositis ( pm ) , dermatomyositis ( dm ) and inclusion body myositis ( ibm ) . \n the bohan and peter criteria combine clinical , laboratory , and pathologic features to define pm and dm . \n they did not recognize inclusion body myositis ( ibm ) or other inflammatory myopathies , such as granulomatous and eosinophilic myositis . thus the disease spectrum is wide and iims are a heterogeneous group of autoimmune disorders . to address these issues in this article \n we have discussed the currently developing newer classifications of iims .",
"role": "assistant"
}
] |
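The Bohan and Peter scheme quoted in this example is, at bottom, a counting rule: definite, probable, and possible disease at four, three, and two satisfied criteria, with the rash separating DM from PM. The toy function below encodes only that rule for illustration; it is not a clinical tool and omits the modern exclusions (IBM, hereditary myopathies) that the review says must accompany the criteria today.

```python
# Toy encoding of the Bohan and Peter counting rule as summarized in the review.
# Illustration only: not a diagnostic tool.
def bohan_peter(n_criteria_met: int, rash_present: bool) -> str:
    """n_criteria_met counts satisfied criteria (including the rash when present);
    the rash is what distinguishes dermatomyositis (DM) from polymyositis (PM)."""
    disease = "DM" if rash_present else "PM"
    levels = {4: "definite", 3: "probable", 2: "possible"}
    level = levels.get(min(n_criteria_met, 4))
    return f"{level} {disease}" if level else "unclassified"

print(bohan_peter(4, rash_present=True))    # definite DM
print(bohan_peter(3, rash_present=False))   # probable PM
```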
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.740 | [
{
"content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nNowadays, we are surrounded by more and more online news articles. Tens or hundreds of news articles need to be read if we wish to explore a hot news event or topic. So it is of vital importance to automatically synthesize a batch of news articles related to the event or topic into a new synthesis article (or overview article) for reader's convenience. It is so challenging to make news synthesis fully automatic that there is no successful solution by now. In this paper, we put forward a novel Interactive News Synthesis system (i.e. INS), which can help generate news overview articles automatically or by interacting with users. More importantly, INS can serve as a tool for editors to help them finish their jobs. In our experiments, INS performs well on both topic representation and synthesis article generation. A user study also demonstrates the usefulness and users' satisfaction with the INS tool. A demo video is available at this https URL .\n\nCited Abstract(s):\n[1]: In this work, we model abstractive text summarization using Attentional Encoder-Decoder Recurrent Neural Networks, and show that they achieve state-of-the-art performance on two different corpora. We propose several novel models that address critical problems in summarization that are not adequately modeled by the basic architecture, such as modeling key-words, capturing the hierarchy of sentence-to-word structure, and emitting words that are rare or unseen at training time. Our work shows that many of our proposed models contribute to further improvement in performance. We also propose a new dataset consisting of multi-sentence summaries, and establish performance benchmarks for further research.\n\n[2]: We learn a joint model of sentence extraction and compression for multi-document summarization. Our model scores candidate summaries according to a combined linear model whose features factor over (1) the n-gram types in the summary and (2) the compressions used. We train the model using a margin-based objective whose loss captures end summary quality. Because of the exponentially large set of candidate summaries, we use a cutting-plane algorithm to incrementally detect and add active constraints efficiently. Inference in our model can be cast as an ILP and thereby solved in reasonable time; we also present a fast approximation scheme which achieves similar performance. Our jointly extracted and compressed summaries outperform both unlearned baselines and our learned extraction-only system on both ROUGE and Pyramid, without a drop in judged linguistic quality. We achieve the highest published ROUGE results to date on the TAC 2008 data set.\n\n[3]: \n\n[4]: We treat the text summarization problem as maximizing a submodular function under a budget constraint. We show, both theoretically and empirically, a modified greedy algorithm can efficiently solve the budgeted submodular maximization problem near-optimally, and we derive new approximation bounds in doing so. Experiments on DUC'04 task show that our approach is superior to the best-performing method from the DUC'04 evaluation on ROUGE-1 scores.\n\n[5]: We present an Integer Linear Program for exact inference under a maximum coverage model for automatic summarization. We compare our model, which operates at the sub-sentence or \"concept-level, to a sentence-level model, previously solved with an ILP. 
Our model scales more efficiently to larger problems because it does not require a quadratic number of variables to address redundancy in pairs of selected sentences. We also show how to include sentence compression in the ILP formulation, which has the desirable property of performing compression and sentence selection simultaneously. The resulting system performs at least as well as the best systems participating in the recent Text Analysis Conference, as judged by a variety of automatic and manual content-based metrics.\n\n[6]: \n\n[7]: In this paper, we present a supervised learning approach to training submodular scoring functions for extractive multidocument summarization. By taking a structured prediction approach, we provide a large-margin method that directly optimizes a convex relaxation of the desired performance measure. The learning method applies to all submodular summarization methods, and we demonstrate its effectiveness for both pairwise as well as coverage-based scoring functions on multiple datasets. Compared to state-of-the-art functions that were tuned manually, our method significantly improves performance and enables high-fidelity models with number of parameters well beyond what could reasonably be tuned by hand.\n\n[8]: Multi-document summarization involves many aspects of content selection and surface realization. The summaries must be informative, succinct, grammatical, and obey stylistic writing conventions. We present a method where such individual aspects are learned separately from data (without any hand-engineering) but optimized jointly using an integer linear programme. The ILP framework allows us to combine the decisions of the expert learners and to select and rewrite source content through a mixture of objective setting, soft and hard constraints. Experimental results on the TAC-08 data set show that our model achieves state-of-the-art performance using ROUGE and significantly improves the informativeness of the summaries.\n\n[9]: As a framework for extractive summarization, sentence regression has achieved state-of-the-art performance in several widely-used practical systems. The most challenging task within the sentence regression framework is to identify discriminative features to encode a sentence into a feature vector. So far, sentence regression approaches have neglected to use features that capture contextual relations among sentences. We propose a neural network model, Contextual Relation-based Summarization (CRSum), to take advantage of contextual relations among sentences so as to improve the performance of sentence regression. Specifically, we first use sentence relations with a word-level attentive pooling convolutional neural network to construct sentence representations. Then, we use contextual relations with a sentence-level attentive pooling recurrent neural network to construct context representations. Finally, CRSum automatically learns useful contextual features by jointly learning representations of sentences and similarity scores between a sentence and sentences in its context. Using a two-level attention mechanism, CRSum is able to pay attention to important content, i.e., words and sentences, in the surrounding context of a given sentence. We carry out extensive experiments on six benchmark datasets. 
CRSum alone can achieve comparable performance with state-of-the-art approaches; when combined with a few basic surface features, it significantly outperforms the state-of-the-art in terms of multiple ROUGE metrics.\n\n[10]: Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.\n\n[11]: People often read summaries of news articles in order to get reliable information about an event or a topic. However, the information expressed in news articles is not always certain, and some sentences contain uncertain information about the event. Existing summarization systems do not consider whether a sentence in news articles is certain or not. In this paper, we propose a novel system called CTSUM to incorporate the new factor of information certainty into the summarization task. We first analyze the sentences in news articles and automatically predict the certainty levels of sentences by using the support vector regression method with a few useful features. The predicted certainty scores are then incorporated into a summarization system with a graph-based ranking algorithm. Experimental results on a manually labeled dataset verify the effectiveness of the sentence certainty prediction technique, and experimental results on the DUC2007 dataset shows that our new summarization system cannot only produce summaries with better content quality, but also produce summaries with higher certainty.",
"role": "user"
},
{
"content": "One of the related fields is document summarization. The methods can be divided into extractive methods [5] [4] [2] [7] [8] [11] [6] [9] and abstractive methods [10] [1] [3] .",
"role": "assistant"
}
] |
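Several of the extractive systems cited here ([4], [5]) cast summarization as maximizing a coverage objective under a length budget. The sketch below shows the cost-scaled greedy step at the heart of that line of work, with plain word overlap standing in for the weighted concept coverage of the cited papers; the full budgeted algorithm of [4] additionally compares the greedy result against the best single sentence, which is dropped here for brevity.

```python
# Bare-bones budgeted greedy for extractive summarization: repeatedly take the
# sentence with the best coverage gain per unit cost, within a word budget.
# Word overlap is a stand-in objective for the cited concept-weighted models.
def summarize(sentences, budget, r=1.0):
    def words(s):
        return set(s.lower().split())
    covered, chosen, used = set(), [], 0
    remaining = set(range(len(sentences)))
    while remaining:
        def gain(i):
            new = len(words(sentences[i]) - covered)          # marginal coverage
            return new / max(len(sentences[i].split()), 1) ** r
        i = max(remaining, key=gain)
        cost = len(sentences[i].split())
        if gain(i) > 0 and used + cost <= budget:             # greedy, budget-feasible
            chosen.append(i)
            covered |= words(sentences[i])
            used += cost
        remaining.remove(i)
    return [sentences[i] for i in sorted(chosen)]

docs = ["The INS system synthesizes related news articles into one overview .",
        "Users can interact with INS to refine the generated overview .",
        "INS synthesizes related news articles ."]
print(summarize(docs, budget=20))
```

Because the coverage objective is monotone submodular, this greedy rule carries the near-optimality guarantee derived in [4], which is part of why it remains a common baseline among the extractive systems listed in the answer.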
science.pico_ner | science.pico_ner.4373 | [
{
"content": "Find entities of these types in the text: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nOutput a JSON object with entity types as keys and lists of found entities as values.\n\nText:\nRecombinant human erythropoietin therapy for anemic cancer patients on combination chemotherapy . BACKGROUND Patients with advanced cancer frequently experience clinically significant anemia , which is often exacerbated by myelosuppressive chemotherapy . Consistent with the anemia of chronic disease , studies have documented serum erythropoietin levels that are inappropriately low for the degree of anemia in cancer patients . Myelosuppressive chemotherapy impairs erythropoiesis , which may not fully recover between treatment cycles . Recombinant human erythropoietin ( rHuEPO ) has been used safely and effectively to treat anemia in AIDS patients receiving zidovudine ( AZT ) and in patients with chronic renal failure . PURPOSE This study was designed to evaluate the clinical role of rHuEPO in reducing symptomatic anemia in patients with advanced cancer who were receiving myelosuppressive chemotherapy ( excluding cisplatin ) . METHODS We studied 153 anemic cancer patients receiving cyclic combination chemotherapy in a prospective multicenter , double-blind , placebo-controlled trial . The patients were randomly assigned to receive either rHuEPO ( 150 U/kg ) or placebo subcutaneously three times a week for a maximum of 12 weeks or until the hematocrit level increased to 38 % -40 % . If the hematocrit reached this target level before 12 weeks , the rHuEPO dose could be reduced to maintain the hematocrit at that level for the duration of the study . Response to rHuEPO therapy was assessed by measuring changes in hematocrit level , transfusion requirements , and quality of life . Quality-of-life assessment was based on patients ' responses to questionnaires before and after the courses of therapy . RESULTS The increase in hematocrit in the rHuEPO-treated group compared with hematocrit in the placebo-treated group was statistically significant ( P = .0001 ) as measured by percentage point of change from baseline to final evaluation , by an increase in hematocrit level of six percentage points or more unrelated to transfusion , and by a rise in hematocrit level to 38 % or more unrelated to transfusion . There was a trend toward the reduction in mean units of blood transfused per patient during months 2 and 3 of therapy combined in rHuEPO-treated patients compared with placebo-treated patients ( 0.91 U versus 1.65 U ; P = .056 ) . In addition , rHuEPO-treated patients experienced a statistically significant improvement in energy level and ability to perform daily activities ( P < or = .05 ) . The two treatment groups showed no statistically significant differences in toxic effects except for increased incidence of diaphoresis ( P < .05 ) and diarrhea ( P = .05 ) in the rHuEPO-treated group . CONCLUSIONS We conclude that rHuEPO is safe and effective for reversing anemia related to advanced cancer or to chemotherapy for cancer .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"rHuEPO-treated\", \"Recombinant human erythropoietin ( rHuEPO )\", \"placebo\", \"rHuEPO\", \"placebo-treated\", \"Recombinant human erythropoietin therapy\"], \"Outcome\": [\"mean units of blood transfused per patient\", \"symptomatic anemia\", \", transfusion requirements , and quality of life\", \"increase in hematocrit\", \"hematocrit level\", \"anemia\", \"changes in hematocrit level\", \"Quality-of-life\", \"diaphoresis\", \"toxic effects\", \"diarrhea\", \"energy level and ability to perform daily activities\", \"clinically significant\", \"hematocrit\"], \"Participant\": [\"153\", \"patients with advanced cancer who were receiving myelosuppressive chemotherapy ( excluding cisplatin )\", \"cancer patients\", \"receiving cyclic combination chemotherapy\", \"anemic cancer patients on combination chemotherapy\", \"placebo-treated\", \"rHuEPO-treated patients\", \"rHuEPO-treated group\", \"Patients with advanced cancer frequently experience clinically significant anemia\"]}",
"role": "assistant"
}
] |
science.scitldr_aic | science.scitldr_aic.1565 | [
{
"content": "Summarize the paper:\n\nPaper text:\n\nMulti-step greedy policies have been extensively used in model-based Reinforcement Learning (RL) and in the case when a model of the environment is available (e.g., in the game of Go).\nIn this work, we explore the benefits of multi-step greedy policies in model-free RL when employed in the framework of multi-step Dynamic Programming (DP): multi-step Policy and Value Iteration.\nThese algorithms iteratively solve short-horizon decision problems and converge to the optimal solution of the original one.\nBy using model-free algorithms as solvers of the short-horizon problems we derive fully model-free algorithms which are instances of the multi-step DP framework.\nAs model-free algorithms are prone to instabilities w.r.t. the decision problem horizon, this simple approach can help in mitigating these instabilities and results in an improved model-free algorithms.\nWe test this approach and show results on both discrete and continuous control problems.\nThe field of Reinforcement learning (RL) span a wide variety of algorithms for solving decisionmaking problems through repeated interaction with the environment.\nBy incorporating deep neural networks into RL algorithms, the field of RL has recently witnessed remarkable empirical success (e.g., Mnih et al. 2015; Lillicrap et al. 2015; Silver et al. 2017 ).\nMuch of this success had been achieved by model-free RL algorithms, such as Q-learning and policy gradient.\nThese algorithms are known to suffer from high variance in their estimations (Greensmith et al., 2004) and to have difficulties handling function approximation (e.g., Thrun & Schwartz 1993; Baird 1995; Van Hasselt et al. 2016; Lu et al. 2018 ).\nThese problems are intensified in decision problems with long horizon, i.e., when the discount factor, γ, is large.\nAlthough using smaller values of γ addresses the γ-dependent issues and leads to more stable algorithms (Petrik & Scherrer, 2009; Jiang et al., 2015) , it comes with a cost, as the algorithm may return a biased solution, i.e., it may not converge to an optimal solution of the original decision problem (the one with large value of γ).\nEfroni et al. (2018a) recently proposed another approach to mitigate the γ-dependant instabilities in RL in which they study a multi-step greedy versions of the well-known dynamic programming (DP) algorithms policy iteration (PI) and value iteration (VI) (Bertsekas & Tsitsiklis, 1996) .\nEfroni et al. 
(2018a) also proposed an alternative formulation of the multi-step greedy policy, called the κ-greedy policy, and studied the convergence of the resulting PI and VI algorithms: κ-PI and κ-VI.\nThese two algorithms iteratively solve γκ-discounted decision problems, whose reward has been shaped by the solution of the decision problem at the previous iteration.\nUnlike the biased solution obtained by solving the decision problem with a smaller value of γ, by iteratively solving decision problems with a smaller γκ horizon, the κ-PI and κ-VI algorithms could converge to an optimal policy of the original decision problem.\nIn this work, we derive and empirically validate model-free deep RL (DRL) implementations of κ-PI and κ-VI.\nIn these implementations, we use DQN (Mnih et al., 2015) and TRPO (Schulman et al., 2015) for (approximately) solving γκ-discounted decision problems (with shaped reward), which is the main component of the κ-PI and κ-VI algorithms.\nThe experiments illustrate that the performance of model-free algorithms can be improved by using them as solvers of multi-step greedy PI and VI schemes, as well as emphasize important implementation details while doing so.\nIn this work we formulated and empirically tested simple generalizations of DQN and TRPO derived from the theory of multi-step DP and, specifically, of the κ-PI and κ-VI algorithms.\nThe empirical investigation reveals several points worth emphasizing.\n1. κ-PI is better than κ-VI for the Atari domains.\nIn most of the experiments on the Atari domains κ-PI-DQN has better performance than κ-VI-DQN. This might be expected as the former uses extra information not used by the latter: κ-PI estimates the value of the current policy whereas κ-VI ignores this information.\n2. For the Gym domains κ-VI performs slightly better than κ-PI.\nFor the Gym domains κ-VI-TRPO performs slightly better than κ-PI-TRPO.\nWe conjecture that the reason for the discrepancy relative to the Atari domains lies in the inherent structure of the tasks of the Gym domains: they are inherently short-horizon decision problems.\nFor this reason, the problems can be solved with a smaller discount factor (as empirically demonstrated in Section 5.3) and information on the policy's value is not needed.\n3. A non-trivial κ value improves the performance.\nIn the vast majority of our experiments both κ-PI and κ-VI improve over the performance of their vanilla counterparts (i.e., κ = 1), except for the Swimmer and BeamRider domains from the Mujoco and Atari suites.\nImportantly, the performance of the algorithms was shown to be 'smooth' in the parameter κ.\nThis suggests careful hyperparameter tuning of κ is not of great necessity.\n4. Using the 'naive' choice of N(κ) = T deteriorates the performance.\nChoosing the number of iterations by Eq. 
8 improves the performance on the tested domains.\nAn interesting future work would be to test model-free algorithms which use other variants of greedy policies (Bertsekas & Tsitsiklis, 1996; Bertsekas, 2018; Efroni et al., 2018a; Sun et al., 2018; Shani et al., 2019) .\nFurthermore, and although in this work we focused on model-free DRL, it is arguably more natural to use multi-step DP in model-based DRL (e.g., Kumar et al., 2016; Talvitie, 2017; Luo et al., 2018; Janner et al., 2019) .\nTaking this approach, the multi-step greedy policy would be solved with an approximate model.\nWe conjecture that in this case one may set κ -or more generally, the planning horizon -as a function of the approximate model's 'quality': as the approximate model gets closer to the real model larger κ can be used.\nWe leave investigating such relation in theory and practice to future work.\nLastly, an important next step in continuation to our work is to study algorithms with an adaptive κ parameter.\nThis, we believe, would greatly improve the resulting methods, and possibly be done by studying the relation between the different approximation errors (i.e., errors in gradient and value estimation, Ilyas et al., 2018) , the performance and the κ value that should be used by the algorithm.\nA DQN IMPLEMENTATION OF κ-PI AND κ-VI",
"role": "user"
},
{
"content": "Use model free algorithms like DQN/TRPO to solve short horizon problems (model free) iteratively in a Policy/Value Iteration fashion.",
"role": "assistant"
}
] |
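The κ-greedy step at the center of this paper solves a γκ-discounted surrogate problem whose reward is shaped by the current value estimate. The tabular sketch below assumes known dynamics P and rewards R so that the surrogate can be solved exactly; the paper's whole point is to replace this inner solve with a model-free learner such as DQN or TRPO. The shaped reward r + γ(1 - κ)V(s') with discount γκ follows the κ formulation attributed to Efroni et al. above; treat it as an assumption of this sketch rather than the paper's exact pseudocode.

```python
# Tabular kappa-policy-iteration sketch: each outer iteration solves a
# gamma*kappa-discounted surrogate MDP whose reward is shaped by the current
# value V, then evaluates the resulting greedy policy under the original gamma.
import numpy as np

def kappa_pi(P, R, gamma, kappa, outer=30, inner=300):
    """P: (S, A, S) transition kernel; R: (S, A) rewards; returns (V, policy)."""
    S, A = R.shape
    V = np.zeros(S)
    pi = np.zeros(S, dtype=int)
    for _ in range(outer):
        R_shaped = R + gamma * (1.0 - kappa) * (P @ V)   # shaped reward, shape (S, A)
        W = np.zeros(S)                                  # solve surrogate by value iteration
        for _ in range(inner):
            Q = R_shaped + gamma * kappa * (P @ W)
            W = Q.max(axis=1)
        pi = Q.argmax(axis=1)                            # the kappa-greedy policy
        P_pi = P[np.arange(S), pi]                       # evaluate pi on the original MDP
        R_pi = R[np.arange(S), pi]
        V = np.linalg.solve(np.eye(S) - gamma * P_pi, R_pi)
    return V, pi

rng = np.random.default_rng(0)
S, A = 6, 2
P = rng.dirichlet(np.ones(S), size=(S, A))               # random toy MDP
R = rng.uniform(size=(S, A))
V, pi = kappa_pi(P, R, gamma=0.99, kappa=0.5)
print(pi, np.round(V, 2))
```

Setting kappa=0 reduces the inner problem to the ordinary one-step greedy step of standard policy iteration, while kappa=1 solves the full original problem in a single outer iteration; intermediate values trade horizon for stability, which is exactly the trade-off the paper studies.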
science.craftchem_ner | science.craftchem_ner.2282 | [
{
"content": "Extract all chemical entities (ChEBI) from this biomedical text. Output as JSON array.\n\nText: Pendrinwasfoundinthecochleainapicalmembranesofspiralprominencecellsandspindle-shapedcellsofstriavascularis,inoutersulcusandrootcells.",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.7110 | [
{
"content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nNavigating spaces is an embodied experience. Examples can vary from rescue workers trying to save people from natural disasters; a tourist finding their way to the nearest coffee shop, or a gamer solving a maze. Virtual reality allows these experiences to be simulated in a controlled virtual environment. However, virtual reality users remain anchored in the real world and the conventions by which the virtual environment is deployed influence user performance. There is currently a need to evaluate the degree of influence imposed by extrinsic factors and virtual reality hardware on its users. Traditionally, virtual reality experiences have been deployed using Head-Mounted Displays with powerful computers rendering the graphical content of the virtual environment; however, user input has been facilitated using an array of human interface devices including Keyboards, Mice, Trackballs, Touchscreens, Joysticks, Gamepads, Motion detecting cameras and Webcams. Some of these HIDs have also been introduced for non-immersive video games and general computing. Due to this fact, a subset of virtual reality users has greater familiarity than others in using these HIDs. Virtual reality experiences that utilize gamepads (controllers) to navigate virtual environments may introduce a bias towards usability among virtual reality users previously exposed to video-gaming. This article presents an evaluative user study conducted using our ubiquitous virtual reality framework with general audiences. Among our findings, we reveal a usability bias among virtual reality users who are predominantly video gamers. Beyond this, we found a statistical difference in user behavior between untethered immersive virtual reality experiences compared to untethered non-immersive virtual reality experiences.\n\nCited Abstract(s):\n[1]: A person's level of physical fitness can affect their health and many other factors in their lives. However, little is known about the effect of physical fitness on factors relevant to virtual environments. Towards addressing this knowledge gap, we performed a research study examining the relationship of several physical fitness measures with performance, presence, and simulator sickness during use of an HMD-based maze-type virtual environment. We recorded the trajectory of each participant through the maze. Following the virtual environment, participants reported simulator sickness, presence, and provided written and verbal feedback. Our analysis of the data shows a positive correlation between self-reported physical fitness and user performance. Further research is necessary to establish a causal relationship, and methods to make use of this new information in the design of virtual environments.\n\n[2]: A convergence between consumer electronics and virtual reality is occurring. We present an immersive head-mounted-display-based, wearable 3D user interface that is inexpensive (less than $900 USD), robust (sourceless tracking), and portable (lightweight and untethered). While the current display has known deficiencies, the user tracking quality is within the constraints of many existing applications, while the portability and cost offers opportunities for innovative applications that are not currently feasible.\n\n[3]: Novel approaches are needed to reduce the high rates of childhood obesity in the developed world. 
While multifactorial in cause, a major factor is an increasingly sedentary lifestyle of children. Our research shows that a mixed reality system that is of interest to children can be a powerful motivator of healthy activity. We designed and constructed a mixed reality system that allowed children to exercise, play with, and train a virtual pet using their own physical activity as input. The health, happiness, and intelligence of each virtual pet grew as its associated child owner exercised more, reached goals, and interacted with their pet. We report results of a research study involving 61 children from a local summer camp that shows a large increase in recorded and observed activity, alongside observational evidence that the virtual pet was responsible for that change. These results, and the ease at which the system integrated into the camp environment, demonstrate the practical potential to impact the exercise behaviors of children with mixed reality.\n\n[4]:",
"role": "user"
},
{
"content": "Prior work on the evaluation of ubiquitous, smartphone driven 3DUIs helped guide our system architecture [2] [4] . A light-weight VR system is logistically a sound design when it comes to recruiting higher number of subjects for a research study. Other studies exist that looked into physical activity as a potential marker for performances in immersive VR environments. One such study suggested that physical activities of users can be a predictor for VR task performance [2] . In regards to physical activity, virtual reality has also been explored as a means to motivate individuals to exercise, a concept better known as VR exergaming [3] . While all these studies explore the relationship between physical activity and VR performance, there has been a lack of concrete connections between them. In contrast to quantifying physical fitness of users as suggested by [1] [3] , our approach shifts the attention of the user performance predictor to the gaming profile.",
"role": "assistant"
}
] |
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.10140 | [
{
"content": "Classify the topics of this COVID-19 literature paragraph by selecting from these categories:\n\n- Epidemic Forecasting: Predictions about COVID-19 spread\n- Treatment: Therapeutic methods or medication\n- Prevention: Measures to prevent spread\n- Mechanism: Biological mechanisms of the virus\n- Case Report: Report of a COVID-19 case\n- Transmission: How COVID-19 is transmitted\n- Diagnosis: Diagnosing COVID-19 in individuals\n\nYour output should be a valid JSON array with topics in double quotes and separated by commas. No additional text should be included.\n\nArticle:\n\nBackground: The novel coronavirus disease, COVID-19, causes massive death, threatens the life and health of the world population. Thousands of health professionals were died and tested positive. Objective: This study was designed to determine knowledge and precautionary behavior practice for coronavirus disease-19 among health professionals working in public university hospitals in Ethiopia. Methods: A web-based online survey was conducted on health professionals working in Ethiopian public university hospitals. A survey questionnaire consisted of socio-demographic, coronavirus disease knowledge questions and precautionary behavioral practice. The survey questions were designed using Google form. All health professionals working (academic and clinical staff) in university hospitals were invited to participate in the online survey carried out from May 1 to 14, 2020. The data were analyzed using the Statistical Package for Social Sciences version 24.0. Descriptive statistics were computed, and tables and figures were used to present the results. Linear regression analysis was used to identify knowledge-related factors independently associated with precautionary behavior practice. Results: A total of 273 health professionals participated in this study. The mean (+/- SD) age of participants was 31.03 +/- 5.11. Two-third (61.5%) and one-fourth (26%) of participants attended second degree and medical doctors, respectively. More than one-fourth of the study participants (27.5%) reported social media as the source of information. In this study, we found a significant gap between the level of knowledge and practical implementation of the recommended precautionary measures, especially for wearing masks and gloves. The final multiple linear regression analysis indicated a positive association between knowledge of the source of infection, incubation period, and mode of disease transmission with recommended behavioral practice. Conclusion and Recommendation: There was a significant gap in the implementation of the behavioral practice, especially for wearing masks and gloves. Therefore, there is a need to motivate and monitor health professionals' adherence to recommended precautionary measures.",
"role": "user"
},
{
"content": "[\"Prevention\"]",
"role": "assistant"
}
] |
science.qasa_abstractive_qa | science.qasa_abstractive_qa.1122 | [
{
"content": "Read this passage from a research paper and answer the question.\n\nContext: We have presented graph attention networks (GATs), novel convolution-style neural networks that operate on graph-structured data, leveraging masked self-attentional layers. The graph attentional layer utilized throughout these networks is computationally efficient (does not require costly matrix operations, and is parallelizable across all nodes in the graph), allows for (implicitly) assigning different importances to different nodes within a neighborhood while dealing with different sized neighborhoods, and does not depend on knowing the entire graph structure upfront—thus addressing many of the theoretical issues with previous spectral-based approaches. Our models leveraging attention have successfully achieved or matched state-of-the-art performance across four well-established node classification benchmarks, both transductive and inductive (especially, with completely unseen graphs used for testing).\n\nQuestion: It is not necessary to know the graph structure when using the proposed method. Is it true?",
"role": "user"
},
{
"content": "It is true.",
"role": "assistant"
}
] |
science.craftchem_ner | science.craftchem_ner.4299 | [
{
"content": "Extract all chemical entities (ChEBI) from this biomedical text. Output as JSON array.\n\nText: PGC-PPARgammacoactivator",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.2448 | [
{
"content": "Find entities of these types in the text: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nOutput a JSON object with entity types as keys and lists of found entities as values.\n\nText:\nComparison of two artificial tear formulations for dry eye through high-resolution optical coherence tomography . PURPOSE The aim was to determine the efficacy of two artificial eye-drop formulations by analysing the lower tear film meniscus volume through a commercial high-resolution spectral-domain optical coherence tomographer . METHODS Twenty dry eye patients ( 12 men , eight women , aged 57.5 ± 8.4 years ) with refractive errors from -2.50 to +0.75 D ( mean -1.34 ± 1.02 D ) and cylinders lower than 1.00 D were examined . Tear meniscus volume was measured before , immediately after and 10 , 30 and 60 minutes after instillation using the Copernicus high-resolution spectral-domain optical coherence tomographer ( Optopol Tech SA , Zawiercie , Poland ) . Volume was calculated from the local area obtained from tomograms considering a regular distribution of the tear meniscus across the eyelid . Ten subjects were randomly assigned to first receive either polyethylene glycol ( Blink Intensive , Abbot Medical Optics Inc , CA , USA ) or hypromellose ( Artific , Farma-Lepori SA , Barcelona , Spain ) three times daily in both eyes for one month . Measures were then repeated and after a one-week wash-out period they were switched to the other eye-drop for another month . RESULTS Mean baseline volume was 0.38 ± 0.10 µL , while mean baseline volume after the wash-out period was slightly higher , 0.39 ± 0.10 µL ( p = 0.638 ) . Analysis of variance showed significant differences in meniscus volume with time after instillation with both formulations ( p < 0.001 ) , mean volume decreasing with time . At 30 and 60 minutes following instillation , values decreased to almost baseline ( average difference 0.02 ± 0.03 µL at t ( 30 ) , p = 0.016 and 0.01 ± 0.01 µL at t ( 60 ) , p = 0.098 ) . CONCLUSION An increase in tear film meniscus volume in dry eyes from the use of eye-drops has been shown . High resolution imaging of lower tear film meniscus with clinical optical coherence tomography systems provides useful measures of tear volume . Both formulations assessed in the present study are efficient in increasing tear meniscus volume and reducing dry eye signs and symptoms , although results in terms of increase in meniscus volume were higher with the polyethylene glycol formulation .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"artificial tear formulations\", \"two artificial eye-drop formulations\"], \"Outcome\": [\"Tear meniscus volume\"], \"Participant\": []}",
"role": "assistant"
}
] |
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.17492 | [
{
"content": "Classify the topics of this COVID-19 literature paragraph by selecting from these categories:\n\n- Epidemic Forecasting: Predictions about COVID-19 spread\n- Treatment: Therapeutic methods or medication\n- Prevention: Measures to prevent spread\n- Mechanism: Biological mechanisms of the virus\n- Case Report: Report of a COVID-19 case\n- Transmission: How COVID-19 is transmitted\n- Diagnosis: Diagnosing COVID-19 in individuals\n\nYour output should be a valid JSON array with topics in double quotes and separated by commas. No additional text should be included.\n\nArticle:\n\nThe novel coronavirus disease 2019 (COVID-19) pandemic has exposed critical failures in global public policy preparedness and response. Despite over a decade of exposure to other epidemics and pandemics, many, if not most, nation states have failed to integrate lessons learned into their pandemic preparedness and response plans. The United Kingdom's response to COVID-19 is an archetype of how the pandemic has overwhelmed traditional public health-led approaches. In this paper, we explore the UK experience and propose that pandemics constitute multivector threats meriting attention within a health intelligence framework. They employ the processes of information management used by the intelligence sector to illustrate a procedural matrix for guiding public policy during complex health security events.",
"role": "user"
},
{
"content": "[\"Prevention\"]",
"role": "assistant"
}
] |
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.19152 | [
{
"content": "Classify the topics of this COVID-19 literature paragraph by selecting from these categories:\n\n- Epidemic Forecasting: Predictions about COVID-19 spread\n- Treatment: Therapeutic methods or medication\n- Prevention: Measures to prevent spread\n- Mechanism: Biological mechanisms of the virus\n- Case Report: Report of a COVID-19 case\n- Transmission: How COVID-19 is transmitted\n- Diagnosis: Diagnosing COVID-19 in individuals\n\nYour output should be a valid JSON array with topics in double quotes and separated by commas. No additional text should be included.\n\nArticle:\n\nInfection with severe acute respiratory syndrom coronavirus 2 (SARS-CoV-2) is more likely to lead to poor outcomes in the elderly and those with cardiovascular disease, obesity or metabolic syndrome. Here, we consider mechanisms by which dyslipidaemia and the use of cholesterol-modifying drugs could influence the virus-host relationship. Cholesterol is essential for the assembly, replication and infectivity of enveloped virus particles; we highlight several cholesterol-modifying drugs with the potential to alter the SARS-CoV-2 life cycle that could be tested in in vitro and in vivo models. Although cholesterol is an essential component of immune cell membranes, excess levels can dysregulate protective immunity and promote exaggerated pulmonary and systemic inflammatory responses. Statins block the production of multiple sterols, oxysterols and isoprenoids, resulting in a pleiotropic range of context-dependent effects on virus infectivity, immunity and inflammation. We highlight antiviral, immunomodulatory and anti-inflammatory effects of cholesterol-modifying drugs that merit further consideration in the management of SARS-CoV-2 infection.",
"role": "user"
},
{
"content": "[\"Treatment\", \"Mechanism\"]",
"role": "assistant"
}
] |
science.craftchem_ner | science.craftchem_ner.107 | [
{
"content": "Extract all chemical entities (ChEBI) from this biomedical text. Output as JSON array.\n\nText: InstrainB6,however,malesinfrequentlyhadsignificantlyhigherIOPthanfemales.",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
] |
science.anat_em_ner | science.anat_em_ner.516 | [
{
"content": "Given this abstract, identify all unique entities of these types: [\"AnatomicalEntity\"].\n\nOnly output the JSON object, no other text.\n\nAbstract:\nIntraoperative handling and wound healing of arthroscopic portal wounds: a clinical study comparing nylon suture with wound closure strips.\nThis prospective, single-centre study compared wound closure methods in patients undergoing arthroscopy. Closure of arthroscopic portal wounds with sterile adhesive strips is effective and convenient for wound management. The method was associated with a reduced potential for infection, faster renewal of tensile strength, greater cost effectiveness, and better cosmetic effects comparing with suture closure. This method of wound closure may also reduce the incidence of needle stick injury in the theatre environment. Thereby the incidence of percutaneous exposure following a surgical procedure may not facilitate transmission of blood borne pathogens such as human immunodeficiency virus (HIV), hepatitis C virus and hepatitis B virus. As a result it may reduce litigation in today's changing healthcare climate.",
"role": "user"
},
{
"content": "{\"AnatomicalEntity\": [\"wounds\", \"percutaneous\", \"blood\", \"wound\"]}",
"role": "assistant"
}
] |
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.20061 | [
{
"content": "Classify the topics of this COVID-19 literature paragraph by selecting from these categories:\n\n- Epidemic Forecasting: Predictions about COVID-19 spread\n- Treatment: Therapeutic methods or medication\n- Prevention: Measures to prevent spread\n- Mechanism: Biological mechanisms of the virus\n- Case Report: Report of a COVID-19 case\n- Transmission: How COVID-19 is transmitted\n- Diagnosis: Diagnosing COVID-19 in individuals\n\nYour output should be a valid JSON array with topics in double quotes and separated by commas. No additional text should be included.\n\nArticle:\n\nBackground and Objectives: With this study, for the first time among patients diagnosed with COVID-19, the neutrophil-lymphocyte-ratios of men and women were compared. Materials and Methods: The study was conducted with 80 patients and the data was gained retrospectively on the electronic documents of the hospital. Results: The neutrophil-lymphocyte-ratio was statistically significant and higher in the male than the women for all ages and geriatric patients (p<0.05). Conclusion: The higher neutrophil-lymphocyte-ratio in older males diagnosed with COVID-19 could be a causative reason for the higher mortality rates in men. We hope that these findings would be helpful for further studies.",
"role": "user"
},
{
"content": "[\"Diagnosis\", \"Treatment\"]",
"role": "assistant"
}
] |
science.craftchem_ner | science.craftchem_ner.4537 | [
{
"content": "Extract all chemical entities (ChEBI) from this biomedical text. Output as JSON array.\n\nText: AcloserinspectionrevealedthatAnnexinA7ismainlylocalizedinthecytosolatE13(Fig.3B,E).",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
] |
science.craftchem_ner | science.craftchem_ner.5748 | [
{
"content": "Extract all chemical entities (ChEBI) from this biomedical text. Output as JSON array.\n\nText: ForallRIlinestested,everyTas2rwithinanindividualRIlineoriginatedfromthesameparentalstrain(Figures5,6).",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
] |
science.mslr2022_cochrane_multidoc_summarization | science.mslr2022_cochrane_multidoc_summarization.3740 | [
{
"content": "Based on the RCT reports below, write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article.\n\n\n\nTitle 1:\nalpha1-Proteinase inhibitor therapy for the prevention of chronic lung disease of prematurity: a randomized, controlled trial.\nAbstract 1:\nAn imbalance between increased neutrophil elastase and a decreased antiprotease shield has been suggested as a factor contributing to the development of chronic lung disease (CLD). We hypothesized that administration of alpha1-proteinase inhibitor (A1PI), also known as alpha1-antitrypsin, to premature neonates would prevent CLD.\n A randomized, placebo-controlled, prospective study of A1PI supplementation was performed. Neonates <24 hours of age with birth weights 600-1000 g on respiratory support, and 1001-1250 g with respiratory distress syndrome (RDS) were eligible. Intravenous A1PI (60 mg/kg) or placebo was infused on days 0, 4, 7, and 14. Primary outcome was CLD in survivors, defined as the need for supplemental oxygen on day 28.\n A total of 106 patients were recruited. There were no significant differences between groups in birth weight or incidence of RDS. The incidence of CLD in survivors was lower in the treated group, but the difference did not reach statistical significance (relative risk [RR], 0.79; confidence interval [CI], 0.60-1.02). This beneficial trend persisted at 36 weeks corrected gestational age (RR, 0.48; CI, 0.23-1.00). The incidence of pulmonary hemorrhage was lower in the treated group (RR, 0.22; CI, 0.05-0.98). Other complications were not significantly different between groups.\n In this, the first trial of a protease inhibitor for the prevention of CLD in premature infants, the infusions were well-tolerated. A1PI therapy may impede the development of CLD and appears to reduce the incidence of pulmonary hemorrhage in some neonates born prematurely.",
"role": "user"
},
{
"content": "Prophylactic administration of a1PI did not reduce the risk of CLD at 36 weeks or long term adverse developmental outcomes in preterm neonates.",
"role": "assistant"
}
] |
science.craftchem_ner | science.craftchem_ner.1792 | [
{
"content": "Extract all chemical entities (ChEBI) from this biomedical text. Output as JSON array.\n\nText: Thetissueswerethenincubatedfor20minat37degreesCinMEM(minimumessentialmedium)modifiedforsuspensionculture(LifeTechnologies)plus0.25%trypsin(LifeTechnologies).",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
] |
science.mslr2022_cochrane_multidoc_summarization | science.mslr2022_cochrane_multidoc_summarization.456 | [
{
"content": "Based on the RCT reports below, write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article.\n\n\n\nTitle 1:\nRanibizumab for macular edema following central retinal vein occlusion: six-month primary end point results of a phase III study.\nAbstract 1:\nTo assess the efficacy and safety of intraocular injections of 0.3 mg or 0.5 mg ranibizumab in patients with macular edema after central retinal vein occlusion (CRVO).\n Prospective, randomized, sham injection-controlled, double-masked, multicenter clinical trial.\n A total of 392 patients with macular edema after CRVO.\n Eligible patients were randomized 1:1:1 to receive monthly intraocular injections of 0.3 or 0.5 mg of ranibizumab or sham injections.\n The primary efficacy outcome measure was mean change from baseline best-corrected visual acuity (BCVA) letter score at month 6. Secondary outcomes included other parameters of visual function and central foveal thickness (CFT).\n Mean (95% confidence interval [CI]) change from baseline BCVA letter score at month 6 was 12.7 (9.9-15.4) and 14.9 (12.6-17.2) in the 0.3 mg and 0.5 mg ranibizumab groups, respectively, and 0.8 (-2.0 to 3.6) in the sham group (P<0.0001 for each ranibizumab group vs. sham). The percentage of patients who gained > or =15 letters in BCVA at month 6 was 46.2% (0.3 mg) and 47.7% (0.5 mg) in the ranibizumab groups and 16.9% in the sham group (P<0.0001 for each ranibizumab group vs. sham). At month 6, significantly more ranibizumab-treated patients (0.3 mg = 43.9%; 0.5 mg = 46.9%) had BCVA of > or = 20/40 compared with sham patients (20.8%; P<0.0001 for each ranibizumab group vs. sham), and CFT had decreased by a mean of 434 microm (0.3 mg) and 452 microm (0.5 mg) in the ranibizumab groups and 168 microm in the sham group (P<0.0001 for each ranibizumab group vs. sham). The median percent reduction in excess foveal thickness at month 6 was 94.0% and 97.3% in the 0.3 mg and 0.5 mg groups, respectively, and 23.9% in the sham group. The safety profile was consistent with previous phase III ranibizumab trials, and no new safety events were identified in patients with CRVO.\n Intraocular injections of 0.3 mg or 0.5 mg ranibizumab provided rapid improvement in 6-month visual acuity and macular edema following CRVO, with low rates of ocular and nonocular safety events.\n Copyright 2010 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.\n\n\nTitle 2:\nPegaptanib sodium for macular edema secondary to central retinal vein occlusion.\nAbstract 2:\nTo assess the safety and efficacy of intravitreous pegaptanib sodium for the treatment of macular edema following central retinal vein occlusion (CRVO).\n This dose-ranging, double-masked, multicenter, phase 2 trial included subjects with CRVO for 6 months' or less duration randomly assigned (1:1:1) to receive pegaptanib sodium or sham injections every 6 weeks for 24 weeks (0.3 mg and 1 mg, n=33; sham, n=32).\n Visual acuity at week 30.\n In the primary analysis at week 30, 12 of 33 (36%) subjects treated with 0.3 mg of pegaptanib sodium and 13 of 33 (39%) treated with 1 mg gained 15 or more letters from baseline vs 9 of 32 (28%) sham-treated subjects (P= .48 for 0.3 mg and P= .35 for 1 mg of pegaptanib sodium vs sham). 
In secondary analyses, subjects treated with pegaptanib sodium were less likely to lose 15 or more letters (9% and 6%; 0.3-mg and 1-mg pegaptanib sodium groups, respectively) compared with sham-treated eyes (31%; P= .03 for 0.3 mg and P= .01 for 1 mg of pegaptanib sodium vs sham) and showed greater improvement in mean visual acuity (+7.1 and +9.9, respectively, vs -3.2 letters with sham; P= .09 for 0.3 mg and P= .02 for 1 mg of pegaptanib sodium vs sham). By week 1, the mean central retinal thickness decreased in the 0.3-mg and 1-mg pegaptanib sodium groups by 269 microm and 210 microm, respectively, vs 5 microm with sham (P< .001).\n Based on this 30-week study, intravitreous pegaptanib sodium appears to provide visual and anatomical benefits in the treatment of macular edema following CRVO.\n Benefits accrued with intravitreous pegaptanib sodium treatment of macular edema following CRVO suggest a role for vascular endothelial growth factor in the pathogenesis of this condition.\n clinicaltrials.gov Identifier: NCT00088283.",
"role": "user"
},
{
"content": "Ranibizumab and pegaptanib sodium have shown promise in the short-term treatment of non-ischemic CRVO-ME. However, effectiveness and safety data from larger RCTs with follow up beyond six months are not yet available. There are no RCT data on anti-VEGF agents in ischemic CRVO-ME. The use of anti-VEGF agents to treat this condition therefore remains experimental.",
"role": "assistant"
}
] |