dataset (stringclasses, 45 values) | id (stringlengths, 17-64) | messages (listlengths, 2-2) |
---|---|---|
science.chemprot_ner | science.chemprot_ner.792 | [
{
"content": "Extract entities of these types from the text: [\"Chemical\", \"Protein\"].\n\nOutput a JSON object with entity types as keys and lists of extracted entities as values.\n\nText:\nCross-inhibition of SR-BI- and ABCA1-mediated cholesterol transport by the small molecules BLT-4 and glyburide.\nScavenger receptor class B type I (SR-BI) and ABCA1 are structurally dissimilar cell surface proteins that play key roles in HDL metabolism. SR-BI is a receptor that binds HDL with high affinity and mediates both the selective lipid uptake of cholesteryl esters from lipid-rich HDL to cells and the efflux of unesterified cholesterol from cells to HDL. ABCA1 mediates the efflux of unesterified cholesterol and phospholipids from cells to lipid-poor apolipoprotein A-I (apoA-I). The activities of ABCA1 and other ATP binding cassette superfamily members are inhibited by the drug glyburide, and SR-BI-mediated lipid transport is blocked by small molecule inhibitors called BLTs. Here, we show that one BLT, [1-(2-methoxy-phenyl)-3-naphthalen-2-yl-urea] (BLT-4), blocked ABCA1-mediated cholesterol efflux to lipid-poor apoA-I at a potency similar to that for its inhibition of SR-BI (IC(50) approximately 55-60 microM). Reciprocally, glyburide blocked SR-BI-mediated selective lipid uptake and efflux at a potency similar to that for its inhibition of ABCA1 (IC(50) approximately 275-300 microM). As is the case with BLTs, glyburide increased the apparent affinity of HDL binding to SR-BI. The reciprocal inhibition of SR-BI and ABCA1 by BLT-4 and glyburide raises the possibility that these proteins may share similar or common steps in their mechanisms of lipid transport.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"glyburide\", \"1-(2-methoxy-phenyl)-3-naphthalen-2-yl-urea\", \"BLT-4\", \"cholesteryl esters\", \"ATP\", \"BLT\", \"cholesterol\"], \"Protein\": [\"apoA-I\", \"ABCA1\", \"SR-BI\", \"ATP binding cassette superfamily\", \"Scavenger receptor class B type I\", \"HDL\", \"apolipoprotein A-I\"]}",
"role": "assistant"
}
] |
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.21667 | [
{
"content": "Classify the topics of this COVID-19 literature paragraph by selecting from these categories:\n\n- Epidemic Forecasting: Predictions about COVID-19 spread\n- Treatment: Therapeutic methods or medication\n- Prevention: Measures to prevent spread\n- Mechanism: Biological mechanisms of the virus\n- Case Report: Report of a COVID-19 case\n- Transmission: How COVID-19 is transmitted\n- Diagnosis: Diagnosing COVID-19 in individuals\n\nYour output should be a valid JSON array with topics in double quotes and separated by commas. No additional text should be included.\n\nArticle:\n\nNeurologic symptoms have been reported in some COVID-19 patients. However, little is known on what factors influence the risk of developing these symptoms. While some studies suggest that exposure to pollution is associated with higher rates of SARS-CoV-2 infection, its role is unknown in the development of neurologic symptoms in COVID-19 patients. The response of the central nervous system (CNS) to a SARS-CoV-2 infection may be influenced by its inflammatory state. Interestingly, environmental pollutants such as particulate matter may have neuroinflammatory effects, providing a possible link between exposure to these pollutants and the outcome of SARS-CoV-2 infection in the CNS. This article explores the hypothesis that the neurologic symptoms in COVID-19 may be exacerbated through a neuroinflammatory mechanism that is promoted by environmental pollutant exposure.",
"role": "user"
},
{
"content": "[\"Mechanism\"]",
"role": "assistant"
}
] |
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.24445 | [
{
"content": "Classify the topics of this COVID-19 literature paragraph by selecting from these categories:\n\n- Epidemic Forecasting: Predictions about COVID-19 spread\n- Treatment: Therapeutic methods or medication\n- Prevention: Measures to prevent spread\n- Mechanism: Biological mechanisms of the virus\n- Case Report: Report of a COVID-19 case\n- Transmission: How COVID-19 is transmitted\n- Diagnosis: Diagnosing COVID-19 in individuals\n\nYour output should be a valid JSON array with topics in double quotes and separated by commas. No additional text should be included.\n\nArticle:\n\nBACKGROUND: The coronavirus disease 2019 pandemic has resulted in various changes in knowledge, attitude and practice among doctors. A survey was conducted of otolaryngologists in India regarding these aspects in relation to the coronavirus disease 2019 pandemic. METHOD: Otolaryngologists from West Bengal (India) were invited to participate in an online self-administered survey. Data were collected and analysed using appropriate methods. RESULTS: Responses from 133 participants, grouped into 4 groups by their career stage, were collected and analysed. Of the participants, 36.8 per cent were directly involved in treating a known or suspected coronavirus disease 2019 patient, although 66.2 per cent considered the personal protective equipment inadequate. Ninety-four per cent indicated that their willingness to perform procedures depended on personal protective equipment availability. Of the respondents, 83.5 per cent revealed additional mental stress due to the pandemic. Of the participants, 41.4 per cent took hydroxychloroquine as coronavirus disease 2019 prophylaxis. CONCLUSION: This study provides an insight into which issues may need attention, to help ENT surgeons tackle the coronavirus disease 2019 pandemic more effectively based on analysis of responses in the survey.",
"role": "user"
},
{
"content": "[\"Prevention\"]",
"role": "assistant"
}
] |
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.496 | [
{
"content": "Classify the topics of this COVID-19 literature paragraph by selecting from these categories:\n\n- Epidemic Forecasting: Predictions about COVID-19 spread\n- Treatment: Therapeutic methods or medication\n- Prevention: Measures to prevent spread\n- Mechanism: Biological mechanisms of the virus\n- Case Report: Report of a COVID-19 case\n- Transmission: How COVID-19 is transmitted\n- Diagnosis: Diagnosing COVID-19 in individuals\n\nYour output should be a valid JSON array with topics in double quotes and separated by commas. No additional text should be included.\n\nArticle:\n\nThe SARS-CoV-2 is the causative agent of the COVID-19 disease, a severe acute respiratory syndrome-coronavirus (SARS-CoV). Its main transmission pathway is through large respiratory droplets, as well as direct and indirect contact. Copper in different formats has been used in research and clinical settings to reduce the risk of bacterial and viral contamination. Therefore, this review aims to search for evidence about the biocidal properties of copper over the Coronaviridae family. A literature review was performed using PubMed and Ovid servers without date or language restrictions. The search was carried out on March 7, 2020, using the following search terms: [Copper] Coronavirus OR CoV OR SARS OR MERS OR Influenza. Copper destroys the replication and propagation abilities of SARS-CoV, influenza, and other respiratory viruses, having high potential disinfection in hospitals, communities, and households. Copper can eliminate pathogenic organisms such as coronavirus bacterial strains, influenza virus, HIV, and fungi after a short period of exposure. Copper seems to be an effective and low-cost complementary strategy to help reduce the transmission of several infectious diseases by limiting nosocomial infectious transmission. Copper oxide or nanocompounds may be used as filters, face masks, clothing, and hospital common surfaces to reduce viruses and bacterial incubation.",
"role": "user"
},
{
"content": "[\"Prevention\"]",
"role": "assistant"
}
] |
science.qasper_extractive_qa | science.qasper_extractive_qa.169.1 | [
{
"content": "Read the following paper excerpts and answer the question. Output a JSON object with \"answer\" and \"evidence\" fields. The answer must be extracted directly from the paper text.\n\nIf the question cannot be answered from the given text, respond with \"null\".\n\nPaper: Semantic Role Labeling for Learner Chinese: the Importance of Syntactic Parsing and L2-L1 Parallel Data\n\nThis paper studies semantic parsing for interlanguage (L2), taking semantic role labeling (SRL) as a case task and learner Chinese as a case language. We first manually annotate the semantic roles for a set of learner texts to derive a gold standard for automatic SRL. Based on the new data, we then evaluate three off-the-shelf SRL systems, i.e., the PCFGLA-parser-based, neural-parser-based and neural-syntax-agnostic systems, to gauge how successful SRL for learner Chinese can be. We find two non-obvious facts: 1) the L1-sentence-trained systems performs rather badly on the L2 data; 2) the performance drop from the L1 data to the L2 data of the two parser-based systems is much smaller, indicating the importance of syntactic parsing in SRL for interlanguages. Finally, the paper introduces a new agreement-based model to explore the semantic coherency information in the large-scale L2-L1 parallel data. We then show such information is very effective to enhance SRL for learner texts. Our model achieves an F-score of 72.06, which is a 2.02 point improvement over the best baseline.\n\nIntroduction\nA learner language (interlanguage) is an idiolect developed by a learner of a second or foreign language which may preserve some features of his/her first language. Previously, encouraging results of automatically building the syntactic analysis of learner languages were reported BIBREF0 , but it is still unknown how semantic processing performs, while parsing a learner language (L2) into semantic representations is the foundation of a variety of deeper analysis of learner languages, e.g., automatic essay scoring. In this paper, we study semantic parsing for interlanguage, taking semantic role labeling (SRL) as a case task and learner Chinese as a case language.\nBefore discussing a computation system, we first consider the linguistic competence and performance. Can human robustly understand learner texts? Or to be more precise, to what extent, a native speaker can understand the meaning of a sentence written by a language learner? Intuitively, the answer is towards the positive side. To validate this, we ask two senior students majoring in Applied Linguistics to carefully annotate some L2-L1 parallel sentences with predicate–argument structures according to the specification of Chinese PropBank BIBREF1 , which is developed for L1. A high inter-annotator agreement is achieved, suggesting the robustness of language comprehension for L2. During the course of semantic annotation, we find a non-obvious fact that we can re-use the semantic annotation specification, Chinese PropBank in our case, which is developed for L1. Only modest rules are needed to handle some tricky phenomena. This is quite different from syntactic treebanking for learner sentences, where defining a rich set of new annotation heuristics seems necessary BIBREF2 , BIBREF0 , BIBREF3 .\nOur second concern is to mimic the human's robust semantic processing ability by computer programs. The feasibility of reusing the annotation specification for L1 implies that we can reuse standard CPB data to train an SRL system to process learner texts. 
To test the robustness of the state-of-the-art SRL algorithms, we evaluate two types of SRL frameworks. The first one is a traditional SRL system that leverages a syntactic parser and heavy feature engineering to obtain explicit information of semantic roles BIBREF4 . Furthermore, we employ two different parsers for comparison: 1) the PCFGLA-based parser, viz. Berkeley parser BIBREF5 , and 2) a minimal span-based neural parser BIBREF6 . The other SRL system uses a stacked BiLSTM to implicitly capture local and non-local information BIBREF7 . and we call it the neural syntax-agnostic system. All systems can achieve state-of-the-art performance on L1 texts but show a significant degradation on L2 texts. This highlights the weakness of applying an L1-sentence-trained system to process learner texts.\nWhile the neural syntax-agnostic system obtains superior performance on the L1 data, the two syntax-based systems both produce better analyses on the L2 data. Furthermore, as illustrated in the comparison between different parsers, the better the parsing results we get, the better the performance on L2 we achieve. This shows that syntactic parsing is important in semantic construction for learner Chinese. The main reason, according to our analysis, is that the syntax-based system may generate correct syntactic analyses for partial grammatical fragments in L2 texts, which provides crucial information for SRL. Therefore, syntactic parsing helps build more generalizable SRL models that transfer better to new languages, and enhancing syntactic parsing can improve SRL to some extent.\nOur last concern is to explore the potential of a large-scale set of L2-L1 parallel sentences to enhance SRL systems. We find that semantic structures of the L2-L1 parallel sentences are highly consistent. This inspires us to design a novel agreement-based model to explore such semantic coherency information. In particular, we define a metric for comparing predicate–argument structures and searching for relatively good automatic syntactic and semantic annotations to extend the training data for SRL systems. Experiments demonstrate the value of the L2-L1 parallel sentences as well as the effectiveness of our method. We achieve an F-score of 72.06, which is a 2.02 percentage point improvement over the best neural-parser-based baseline.\nTo the best of our knowledge, this is the first time that the L2-L1 parallel data is utilized to enhance NLP systems for learner texts.\nFor research purpose, we have released our SRL annotations on 600 sentence pairs and the L2-L1 parallel dataset .\n\nInter-annotator Agreement\nFor inter-annotator agreement, we evaluate the precision (P), recall (R), and F1-score (F) of the semantic labels given by the two annotators. Table TABREF5 shows that our inter-annotator agreement is promising. All L1 texts have F-score above 95, and we take this as a reflection that our annotators are qualified. F-scores on L2 sentences are all above 90, just a little bit lower than those of L1, indicating that L2 sentences can be greatly understood by native speakers. 
Only modest rules are needed to handle some tricky phenomena:\nThe labeled argument should be strictly limited to the core roles defined in the frameset of CPB, though the number of arguments in L2 sentences may be more or less than the number defined.\nFor the roles in L2 that cannot be labeled as arguments under the specification of CPB, if they provide semantic information such as time, location and reason, we would labeled them as adjuncts though they may not be well-formed adjuncts due to the absence of function words.\nFor unnecessary roles in L2 caused by mistakes of verb subcategorization (see examples in Figure FIGREF30 ), we would leave those roles unlabeled.\nTable TABREF10 further reports agreements on each argument (AN) and adjunct (AM) in detail, according to which the high scores are attributed to the high agreement on arguments (AN). The labels of A3 and A4 have no disagreement since they are sparse in CPB and are usually used to label specific semantic roles that have little ambiguity.\nWe also conducted in-depth analysis on inter-annotator disagreement. For further details, please refer to duan2018argument.\n\nAnalysis\nTo better understand the overall results, we further look deep into the output by addressing the questions:\nWhat types of error negatively impact both systems over learner texts?\nWhat types of error are more problematic for the neural syntax-agnostic one over the L2 data but can be solved by the syntax-based one to some extent?\nWe first carry out a suite of empirical investigations by breaking down error types for more detailed evaluation. To compare two systems, we analyze results on ENG-L2 and JPN-L2 given that they reflect significant advantages of the syntax-based systems over the neural syntax-agnostic system. Note that the syntax-based system here refers to the neural-parser-based one. Finally, a concrete study on the instances in the output is conducted, as to validate conclusions in the previous step.\nWe employ 6 oracle transformations designed by he2017deep to fix various prediction errors sequentially (see details in Table TABREF19 ), and observe the relative improvements after each operation, as to obtain fine-grained error types. Figure FIGREF21 compares two systems in terms of different mistakes on ENG-L2 and JPN-L2 respectively. After fixing the boundaries of spans, the neural syntax-agnostic system catches up with the other, illustrating that though both systems handle boundary detection poorly on the L2 sentences, the neural syntax-agnostic one suffers more from this type of errors.\nExcluding boundary errors (after moving, merging, splitting spans and fixing boundaries), we also compare two systems on L2 in terms of detailed label identification, so as to observe which semantic role is more likely to be incorrectly labeled. Figure FIGREF24 shows the confusion matrices. Comparing (a) with (c) and (b) with (d), we can see that the syntax-based and the neural system often overly label A1 when processing learner texts. Besides, the neural syntax-agnostic system predicts the adjunct AM more than necessary on L2 sentences by 54.24% compared with the syntax-based one.\nOn the basis of typical error types found in the previous stage, specifically, boundary detection and incorrect labels, we further conduct an on-the-spot investigation on the output sentences.\nPrevious work has proposed that the drop in performance of SRL systems mainly occurs in identifying argument boundaries BIBREF18 . 
According to our results, this problem will be exacerbated when it comes to L2 sentences, while syntactic structure sometimes helps to address this problem.\nFigure FIGREF30 is an example of an output sentence. The Chinese word “也” (also) usually serves as an adjunct but is now used for linking the parallel structure “用 汉语 也 说话 快” (using Chinese also speaking quickly) in this sentence, which is ill-formed to native speakers and negatively affects the boundary detection of A0 for both systems.\nOn the other hand, the neural system incorrectly takes the whole part before “很 难” (very hard) as A0, regardless of the adjunct “对 我 来说” (for me), while this can be figured out by exploiting syntactic analysis, as illustrated in Figure FIGREF30 . The constituent “对 我 来说” (for me) has been recognized as a prepositional phrase (PP) attached to the VP, thus labeled as AM. This shows that by providing information of some well-formed sub-trees associated with correct semantic roles, the syntactic system can perform better than the neural one on SRL for learner texts.\nA second common source of errors is wrong labels, especially for A1. Based on our quantitative analysis, as reported in Table TABREF37 , these phenomena are mainly caused by mistakes of verb subcategorization, where the systems label more arguments than allowed by the predicates. Besides, the deep end-to-end system is also likely to incorrectly attach adjuncts AM to the predicates.\nFigure FIGREF30 is another example. The Chinese verb “做饭” (cook-meal) is intransitive while this sentence takes it as a transitive verb, which is very common in L2. Lacking in proper verb subcategorization, both two systems fail to recognize those verbs allowing only one argument and label the A1 incorrectly.\nAs for AM, the neural system mistakenly adds the adjunct to the predicate, which can be avoided by syntactic information of the sentence shown in Figure FIGREF30 . The constituent “常常” (often) are adjuncts attached to VP structure governed by the verb “练习”(practice), which will not be labeled as AM in terms of the verb “做饭”(cook-meal). In other words, the hierarchical structure can help in argument identification and assignment by exploiting local information.\n\nEnhancing SRL with L2-L1 Parallel Data\nWe explore the valuable information about the semantic coherency encoded in the L2-L1 parallel data to improve SRL for learner Chinese. In particular, we introduce an agreement-based model to search for high-quality automatic syntactic and semantic role annotations, and then use these annotations to retrain the two parser-based SRL systems.\n\nExperimental Setup\nOur SRL corpus contains 1200 sentences in total that can be used as an evaluation for SRL systems. We separate them into three data sets. The first data set is used as development data, which contains 50 L2-L1 sentence pairs for each language and 200 pairs in total. Hyperparameters are tuned using the development set. The second data set contains all other 400 L2 sentences, which is used as test data for L2. Similarly, all other 400 L1 sentences are used as test data for L1.\nThe sentence pool for extracting retraining annotations includes all English- and Japanese-native speakers' data along with its corrections. Table TABREF43 presents the basic statistics. 
Around 8.5 – 11.9% of the sentence can be taken as high L1/L2 recall sentences, which serves as a reflection that argument structure is vital for language acquisition and difficult for learners to master, as proposed in vazquez2004learning and shin2010contribution. The threshold ( INLINEFORM0 ) for selecting sentences is set upon the development data. For example, we use additional 156,520 sentences to enhance the Berkeley parser.\n\nAcknowledgement\nThis work was supported by the National Natural Science Foundation of China (61772036, 61331011) and the Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology). We thank the anonymous reviewers and for their helpful comments. We also thank Nianwen Xue for useful comments on the final version. Weiwei Sun is the corresponding author.\n\nTable 1: Inter-annotator agreement.\n\nTable 2: Inter-annotator agreement (F-scores) relative to languages and role types.\n\nTable 4: Oracle transformations paired with the relative error reduction after each operation. The operations are permitted only if they do not cause any overlapping arguments\n\nFigure 1: Relative improvements of performance after doing each type of oracle transformation in sequence over ENG-L2 and JPN-L2\n\nFigure 2: Confusion matrix for each semantic role (here we add up matrices of ENG-L2 and JPNL2). The predicted labels are only counted in three cases: (1) The predicated boundaries match the gold span boundaries. (2) The predicated argument does not overlap with any the gold span (Gold labeled as “O”). (3) The gold argument does not overlap with any predicted span (Prediction labeled as “O”).\n\nTable 5: Causes of labeling unnecessary A1\n\nTable 6: Statistics of unlabeled data.\n\nTable 7: Accuracies different PCFGLA-parserbased models on the two test data sets.\n\nTable 8: Accuracies of different neural-parserbased models on the two test data sets.\n\nTable 9: F-scores of the baseline and the bothretrained models relative to role types on the two data sets. We only list results of the PCFGLAparser-based system.\n\nQuestion: Do the authors suggest why syntactic parsing is so important for semantic role labelling for interlanguages?",
"role": "user"
},
{
"content": "{\"answer\": [\"syntax-based system may generate correct syntactic analyses for partial grammatical fragments\"], \"evidence\": [\"While the neural syntax-agnostic system obtains superior performance on the L1 data, the two syntax-based systems both produce better analyses on the L2 data. Furthermore, as illustrated in the comparison between different parsers, the better the parsing results we get, the better the performance on L2 we achieve. This shows that syntactic parsing is important in semantic construction for learner Chinese. The main reason, according to our analysis, is that the syntax-based system may generate correct syntactic analyses for partial grammatical fragments in L2 texts, which provides crucial information for SRL. Therefore, syntactic parsing helps build more generalizable SRL models that transfer better to new languages, and enhancing syntactic parsing can improve SRL to some extent.\"]}",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_arxiv | science.scientific_papers_summarization_single_doc_arxiv.106878 | [
{
"content": "You will be presented with the full text of a scientific research paper. Write an abstract summarizing the key points. Your response should contain only the abstract without any other text.\n\nArticle text:\nin the standard model of particle physics , the @xmath0 scattering amplitude violates the unitarity bound at a center of mass energy c.o.m . @xmath1 tev @xcite .\none solution to this problem is offered by the higgs mechanism @xcite through the introduction of a massive scalar particle with non - zero vacuum expectation value .\nloop corrections to this `` higgs boson s '' mass involving other massive particles are quadratically divergent , implying that the physical mass of the higgs boson corresponds to a scale at which new physics is present . to successfully address the @xmath0 scattering amplitude problem the higgs boson mass\nis constrained to @xmath2 tev/@xmath3 , and if fermions acquire their masses through coupling to the higgs boson , then @xmath4 gev/@xmath3 @xcite .\nif the higgs boson , whether elementary or composite , does nt exist , some other form of new physics must be present at the tev scale to prevent the @xmath0 scattering amplitude from violating the unitarity bound . among the models of new physics , supersymmetric ones are certainly , and justifiably , the most popular . these address the divergences in the corrections to the higgs mass in a very natural way , improve the unification of couplings at a high scale , provide a good dark matter candidate and lead to radiative electroweak symmetry breaking .\nunfortunately , they do not tell us anything about the origin of the number of generations of fermions , their mass spectrum , charges , etc . in this\ntalk , recent results from the cdf and d collaborations on searches for manifestations of non - supersymmetric new physics are presented .\nnew heavy gauge bosons are predicted in many extensions of the standard model : * in so - called little higgs models , the quadratically divergent radiative corrections to the higgs mass are canceled `` individually '' , leading to the appearance of partners of the @xmath5 and @xmath6 bosons at the tev scale . * in grand unified theories ( guts ) and left - right symmetric models ( lrsms ) heavy partners of the electroweak bosons generally appear , although their masses are less well constrained .\n* in extra dimensional models where the gauge bosons are allowed to propagate in ( some of ) the additional space dimensions kaluza - klein excitations will be observable . to address the hierarchy problem\n, these should typically have masses @xmath7 tev/@xmath3 .\n* if the higgs field is a triplet rather than a doublet , doubly charged higgs bosons exist .\nit should also be noted that graviton resonances which appear in randall - sundrum type models @xcite lead to similar signatures .\ncdf recently released a new result in the search for dielectron resonances using almost 900 @xmath8 of data .\nthe invariant mass spectrum of dielectron events used in this search is shown in figure [ fig : cdf - diem ] .\ndielectron invariant mass spectrum in the cdf search for electron - positron resonances . 
] no significant excess is seen at any mass value leading to a limit on the mass of a sequential , standard model - like @xmath9 of @xmath10 gev@xmath11 at 95% c.l .\ncdf also searches for pair production of doubly charged higgs bosons in the lepton flavor violating modes @xmath12 and @xmath13 ( and charge conjugates ) .\nat least three leptons are required in the final states so that the backgrounds are very small , and no events are observed in approximately 350 @xmath8 of data . interpreted in a left - right symmetric model , the corresponding mass limit on doubly charged higgs bosons coupling to left - handed particles is @xmath14 gev@xmath11 at 95% c.l . in the @xmath15 channel . in the search for singly charged gauge bosons ,\ncdf looked for a @xmath16 decaying to an electron and a neutrino in 205 @xmath8 and derived a limit @xmath17 gev@xmath11 from a study of the transverse mass spectrum .\nleptoquarks are a natural consequence of the unification of quarks and leptons into a single multiplet , and as such are expected to be gauge bosons as well . while their masses can logically be expected to be of the order of the unification scale , in some models they can be relatively light .\nexperimentally , it is customary to consider one leptoquark per generation .\nthese are assumed to be very short - lived and decay to a quark and a lepton .\nthe branching ratio to a charged lepton and a quark is then denoted as @xmath18 . at hadron colliders\n, leptoquarks can be pair - produced through the strong interaction or singly produced . in the latter case\nthe production cross - section depends on the ( unknown ) quark - lepton coupling , which is generally taken to be of the same order of magnitude as the fine structure constant .\nthere are two recent results from leptoquark searches at the tevatron .\nd has searched for leptoquarks decaying to a quark and a neutrino ( @xmath19 ) in the jets plus missing tranverse energy in 370 @xmath8 of data .\nexperimentally this is a difficult analysis which suffers from substantial qcd dijets background due to mismeasured jets . to mitigate this , d requires exactly two acoplanar jets .\nthe ensuing missing transverse energy distribution , before final analysis cuts , is shown in figure [ fig : d0-met ] .\nmissing transverse energy distribution in the dsearch for @xmath19 leptoquarks before final cuts .\nthe inset shows the two different fitting functions used to evaluate the background from mismeasured qcd dijet events . ]\nthe background from qcd dijets , dominant at low missing transverse energy , is extrapolated to higher values using two different fitting functions as shown in the inset .\nthe dominant non - qcd standard model background is @xmath6 boson plus jets production with the @xmath6 decaying to a pair of neutrinos .\nno excess is observed so d sets a limit at @xmath20 gev@xmath11 at 95% c.l .\ncdf has released results on a search for vector leptoquarks of the third generation in the @xmath21 decay channel .\nthe signature consists of a dijet plus ditau final state , in which one tau is required to decay leptonically and the other hadronically .\nthe main discriminating variables are the number of jets and an @xmath22-type variable which is a scalar sum of the transverse energies of all jets , leptons and the event s missing transverse energy .\nthis allows cdf to set a limit at @xmath23 gev@xmath11 assuming @xmath24 . 
note that this limit is much higher than the typical limits on leptoquark masses at the tevatron due to the model choice of vector leptoquarks , which have a much larger production cross - section than the scalar leptoquarks which are usually chosen .\nin the original large extra dimensions model of arkani - hamed , dimopoulos and dvali @xcite in which only gravitons propagate in the bulk but all standard model fields are confined to a 3-brane , a tower of kaluza - klein excitations of the graviton emerges .\nthe graviton states are too close in mass to be distinguished individually , and the coupling remains small , but the number of accessible states is very large .\nit is therefore possible to produce gravitons which immediately disappear into bulk space , leading to an excess of events with a high transverse energy jet and large missing transverse energy , the so - called monojet signature .\nthe dominant standard model backgrounds are the production @xmath6 or @xmath5 bosons plus jets , with the @xmath6 decaying to a pair of neutrinos or the lepton from @xmath5 decay escaping detection . using 368 @xmath8 of data , cdf sets limits on the fundamental planck scale between @xmath25 tev@xmath11 and @xmath26 tev@xmath11 for a number of extra dimensions ranging from 2 to 6 .\nboth cdf and d have new results on the search for resonances decaying to @xmath27 .\ncdf has two analyses , with the first using @xmath22 as its main discriminating variable . after selecting events with leptonic\n@xmath6 decays , a control region with @xmath28 gev is used to establish a good understanding of the data , and the signal region is chosen to have @xmath29 gev , including at least two jets , each with transverse energy @xmath30 gev . using 305 @xmath8 of data , no events are observed in the signal region .\na limit is set on the production cross - section of @xmath31 gev@xmath11 right - handed down - type quarks as proposed in reference @xcite at @xmath32 pb at 90% c.l . in a second analysis , cdf studies the transverse momentum distribution of @xmath6 bosons .\nthis has the advantage of being insensitive to the nature of the other decay products , but is of course more difficult and potentially less sensitive than a direct resonance search . selecting events with a dielectron invariant mass compatible with the mass of the @xmath6 boson , @xmath33 gev@xmath34 gev@xmath11 , cdf measures the @xmath6 transverse momentum distribution shown in figure [ fig : cdf - zpt ] .\ntransverse momentum distribution of @xmath6 bosons decaying to electron - positron pairs measured by cdf in 305 @xmath8 of data . ] from this they determine an upper limit on the anomalous production of @xmath6 bosons as a function of transverse momentum using 305 @xmath8 of data .\nthe 95% c.l .\nlimit ranges from about 1 pb for @xmath35 gev@xmath36 to approximately 2 fb for @xmath37 gev@xmath36 .\nd explicitely searches for a @xmath6 boson plus jet resonance by combining the @xmath6 boson plus jet mass spectrum and the @xmath6 boson transverse momentum distribution as discriminating variables .\nthe invariant mass distribution measured in 380 @xmath8 of data is shown in figure [ fig : d0-zjm ] .\ninvariant mass of the @xmath6 boson and leading jet as measured by d in 380 @xmath8 of data .\nthe curves with different @xmath38 thresholds correspond to different generation thresholds for 2 @xmath39 2 processes in pythia , and the open circles represent the signal due to an excited quark of mass @xmath40 gev@xmath11 and narrow width . 
] no excess is observed and a limit is set on the mass of an excited quark as proposed in reference @xcite at @xmath41 gev@xmath11 at 95% c.l . in the search for excited muons both cdf and d have results as well . both experiments\nsearch for associated production of a muon and an excited muon , with the latter decaying to a muon and a photon .\nthe production is approximated as a contact interaction , while the decay is assumed to proceed either through a gauge interaction ( cdf ) or a combination of a gauge and a contact interaction , with the relative fraction of the two depending on the mass of the excited muon and the compositeness scale .\nboth experiments obtain very similar results using 371 ( cdf ) and 380 ( d ) @xmath8 of data . to make comparison with lep results easier\n, cdf also reinterprets the result in a gauge mediated model with drell - yan - like production of the @xmath42 pair with coupling @xmath43 .\nthis result is shown in figure [ fig : cdf - muslim ] .\nexclusion region from the cdf search for excited muons in a gauge mediated model with coupling @xmath43 for comparison with lep results . ]\nin run i , cdf reported an excess of events with a photon , a lepton and large missing transverse energy compared to standard model expectations @xcite .\nthis excess , corresponding to a 2.7 sigma effect , was deemed `` an interesting result '' , but not `` a compelling observation of new physics . ''\ncdf has repeated this analysis in run ii using 305 @xmath8 of data @xcite .\nthe data is now compatible with standard model expectations in all channels , suggesting that the excess observed in run i was due to a statistical fluctuation . in a similar spirit , cdf searches for events with two photons and an electron or muon ( 700 @xmath8 ) , or three photons ( 1 @xmath44 )\nthis is primarily a counting experiment , and again , unfortunately , no excess is observed over expectations .\nin 2000 , the nutev collaboration reported @xcite on a search for heavy neutral leptons decaying to @xmath45 and a few other final states . in the dimuon channel , they observed three events while only 0.07 were expected . because of the asymmetry in the muon momenta , and the absence of signal in other channels , nutev argued that the signal was unlikely to be due to neutral heavy leptons . to further investigate this ,\nd has searched for pairs of muons originating from a common vertex located between 5 and 20 @xmath46 from the beamline .\nthis allows a good calibration of efficiencies using @xmath47 mesons .\nno signal is seen in 380 @xmath8 of data , and to compare results the momentum of the hypothetical new particles in the neutrino beam was converted to the tevatron center of mass frame .\nthe result , which is weakly dependent on assumptions made in regard to the new particle s decay process , is illustrated in figure [ fig : d0-nutev ] in terms of the particle s production cross - section times branching ratio and lifetime .\nexcluded area in the search for a long - lived particle decaying to muon pairs plus anything in terms of the particle s production cross - section times branching ratio , and lifetime . 
]\nas the tevatron continues its good performance , cdf and d collect large amounts of data and search for new physics in many channels .\nonly recent results have been reported here , and many other analyses are in progress .\nthere is _ still _ nothing interesting , and previous excesses have been shown to be due to fluctuations .\nso far , new physics has remained hidden , but a substantial amount of data is yet to be recorded at the tevatron and the lhc is scheduled to start operations soon .\npositive results should be obtained in the next few years\n. 9 b.w .\nlee , c. quigg and h.b .\nthacker , .\nf. englert and r. brout , , and p.w .\nlep electroweak working group , most recent results at http://lepewwg.web.cern.ch / lepewwg/. l. randall and r. sundrum , _ phys.rev.lett._ , * 83 * , 4690 ( 1999 ) . n. arkani - hamed , s. dimopoulos and g. dvali , .\nbjrken , s. pakvasa , s.f .\nu. baur , m. spira , p. zerwas , .\nd. acosta _ et al .\n_ ( cdf collaboration ) , .\na. abulencia _\n( cdf collaboration ) , .\nt. adams _ et al . _\n( nutev collaboration ) , .",
"role": "user"
},
{
"content": "recent results on searches for new physics at run ii of the tevatron are reported . \n the searches cover many different final states and previous hints of signals , but all analyses have at this point led to negative results . \n address = columbia university , new york , ny",
"role": "assistant"
}
] |
science.scitldr_aic | science.scitldr_aic.721 | [
{
"content": "Summarize the paper:\n\nPaper text:\n\nThe peripheral nervous system represents the input/output system for the brain.\nCuff electrodes implanted on the peripheral nervous system allow observation and control over this system, however, the data produced by these electrodes have a low signal-to-noise ratio and a complex signal content.\nIn this paper, we consider the analysis of neural data recorded from the vagus nerve in animal models, and develop an unsupervised learner based on convolutional neural networks that is able to simultaneously de-noise and cluster regions of the data by signal content.\nRecent advances have made chronic observation [1] of, and limited control [2] over the peripheral nervous system possible.\nTo characterise the dynamics of the signals passing to and from the brain, we wish to categorise patterns of activity within the peripheral nervous system.\nHowever, consistent detection of single neuron activity remains a challenge with current cuff electrode technology suitable for in vivo neural data acquisition.\nThe relative position of an extracellular recording electrode and neuronal axons close enough to sit above the noise floor affects the polarity of presented signal components [3] , and their summation at the electrode occludes the presence of individual action potentials during periods of neuronal activity.\nInstead, local field potentials (LFPs), the combination of many neuronal responses arriving concurrently at the electrode are observed.\nThese population level responses are potentially informationally richer [4] , but preclude the use of conventional spike-sorting [5] methodologies on such data.\nInstead, we develop a method based on convolutional neural networks (CNN) that simultaneously de-noises the data and categorises the observed signals.\nWe train this model on approximately one hour of data taken from a single subject approximately twelve hours post surgical implantation.\nWe further show that it is applicable without further training to data from a second subject thirty days post surgical implantation, demonstrating cross-time, cross-subject applicability of the trained models.\nThe recent development of chronic neural interfacing implant systems that are able to record neural signals over period of months or years will create large sets of primarily unlabelled data, with numerous signals occurring over a range of time-scales.\nThese data are currently un-characterisable with standard methods (e.g. 
spike-sorting).\nPrevious work in this field has relied on mixing categorical and real-valued latent vectors.\nWesthuizen et al [9] used an adversarial auto-encoder to project neural data to labels, incorporating an approximately one-hot encoding in the latent space but also including an approximately Gaussian vector to allow reconstruction.\nSince both vectors are trained simultaneously, the Gaussian component of the latent space may contain the relevant labelling information for one or more true classes.\nInfoGAN [10] , a GAN implementation in which the discriminator identifies components of the latent space is similarly capable of one-hot latent representation of the data, but without constraints on the information carried within the one-hot encoding.\nThe Coordinate-VAE approach, in restricting the information available to the encoder creating the non-categorical portion of the latent space, allows unsupervised characterisation of the signals in timeseries data, while simultaneously de-noising the signal.\nModels are transferable between individuals, suggesting that we may gain the ability to pre-train large models for the reduction to latent space representations.\nAs shown in Figure 3 , there is some evidence to suggest that these latent space representations are also informative for physiological features.\nWe might then rapidly train a final classifier or agent for monitoring or control of individual patients, as in Pandarianth et al [11] , in which an auto-encoder is used as a dimension reduction technique on collections of neural spiking data acquired from macaque motor and pre-motor cortices, following which a GLM is used to map the complex latent space to spiking activity.",
"role": "user"
},
{
"content": "Unsupervised analysis of data recorded from the peripheral nervous system denoises and categorises signals.",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.6515 | [
{
"content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nIn languages like C, buffer overflows are widespread. A common mitigation technique is to use tools that detect them during execution and abort the program to prevent the leakage of data or the diversion of control flow. However, for server applications, it would be desirable to prevent such errors while maintaining availability of the system. To this end, we present an approach to handle buffer overflows without aborting the program. This approach involves implementing a continuation logic in library functions based on an introspection function that allows querying the size of a buffer. We demonstrate that introspection can be implemented in popular bug-finding and bug-mitigation tools such as LLVM's AddressSanitizer, SoftBound, and Intel-MPX-based bounds checking. We evaluated our approach in a case study of real-world bugs and show that for tools that explicitly track bounds data, introspection results in a low performance overhead.\n\nCited Abstract(s):\n[1]: We propose a reactive approach for handling a wide variety of software failures, ranging from remotely exploitable vulnerabilities to more mundane bugs that cause abnormal program termination (e.g., illegal memory dereference) or other recognizable bad behavior (e.g., computational denial of service). Our emphasis is in creating \"self-healing\" software that can protect itself against a recurring fault until a more comprehensive fix is applied. Briefly, our system monitors an application during its execution using a variety of external software probes, trying to localize (in terms of code regions) observed faults. In future runs of the application, the \"faulty\" region of code will be executed by an instruction-level emulator. The emulator will check for recurrences of previously seen faults before each instruction is executed. When a fault is detected, we recover program execution to a safe control flow. Using the emulator for small pieces of code, as directed by the observed failure, allows us to minimize the performance impact on the immunized application. We discuss the overall system architecture and a prototype implementation for the x86 platform. We show the effectiveness of our approach against a range of attacks and other software failures in real applications such as Apache, sshd, and Bind. Our preliminary performance evaluation shows that although full emulation can be prohibitively expensive, selective emulation can incur as little as 30 performance overhead relative to an uninstrumented (but failure-prone) instance of Apache. Although this overhead is significant, we believe our work is a promising first step in developing self-healing software.",
"role": "user"
},
{
"content": "devised a system that monitors an application for failures such as buffer overflows [1] . If a fault occurs, the current function is aborted and---based on heuristics---an appropriate value is returned. In order to avoid crashes because a pointer returns NULL , the heuristics take into account whether the parent function dereferences the pointer thereafter. While this approach takes into account the context of the fault, it lacks the ability of our introspection approach to benefit from programmer knowledge.",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_arxiv | science.scientific_papers_summarization_single_doc_arxiv.27312 | [
{
"content": "Given the full text of a scientific research paper, write an abstract section for the paper. Your response should contain only the abstract without any other text.\n\nArticle text:\nparticles properties like mass , dispersion relation , lifetime and width , are expected to be modified under extreme conditions such as high temperature and density .\nthese conditions can be encountered in the nucleus of some stars or fabricated in ultrarelativistic heavy ion collisions . in this fascinating context\none is motivated to deal with a model that is simpler to work with than qcd and that contains the same symmetry structure of qcd . in this set of models ,\nwe choose the njl model @xcite in one of its renormalizable extensions @xcite to investigate the neutral pion decay in a hot and dense medium .\none of the most important contributions to the production of low - energy photons in an ultrarelativistic heavy - ion collision comes from the electromagnetic decay of the neutral pion . to be more precise , photons from @xmath0 and @xmath2 decays\nare responsible for almost @xmath3 of the inclusive photon spectrum @xcite . then , given the importance of such a phenomena , one is motivated to study the modifications experienced by the neutral pions in a hot and dense medium .\nthe @xmath4 decay at finite temperature has been studied before by various authors , but we focus here only on the results of refs . @xcite and\n@xcite which are closely related with ours .\net al _ @xcite found that for a finite pion mass at the critical temperature @xcite , there is an enhancement of the neutral pion decay . on the other hand , bi pin - zhen and rafelski @xcite considered a vanishing pion mass at the critical point , which implies in a suppression of neutral pion width near this temperature\nwe show that , depending only on the solution taken for the vacuum spectation value of the sigma field , we reproduce those results of refs . @xcite and @xcite .\nwe also extend our investigation to finite densities , not considered in refs . @xcite and @xcite .\nit has been shown that from the canonical form of the njl model one gets with both the @xmath5 @xcite and mean - field approximations @xcite an effective renormalizable theory which is like the lsm ( linear sigma model ) @xcite .\nthe reader which is unfamiliar with the lsm see , e.g. refs .\nthe bosonic part of this effective theory is written as @xmath6 - \\frac{\\lambda}{4}(\\sigma'^2 + \\vec\\pi^2 - f_\\pi^2 ) ^2 + \\cdots \\ , , \\ ] ] where @xmath7 and @xmath8 represent the sigma and pion fields , respectively , @xmath9 is a positive dimensionless coupling constant and @xmath10 is the pion decay constant in vacuum .\nthe lagrangian density above is symmetric and invariant under an @xmath11 chiral group .\nthe vector and axial vector currents associated with this model are given respectively by @xmath12 which are conserved .\nif the up and down quark masses were zero , qcd would have a chiral @xmath11 symmetry . in the vacuum\nthis symmetry is spontaneously broken with the appearance of the goldstone bosons . 
in reality\nthe quark masses are very small but nonzero , so that chiral symmetry is only approximate and the pion has a small mass .\nan explicit chiral symmetry breaking term is added to the lagrangian which generates the realistic finite pion mass so that @xmath13 with @xmath14 where @xmath15 is small and positive .\nthe axial vector current is now only `` approximately conserved '' , @xmath16 .\nthe minimum of the classical potential in the symmetry broken theory is found to be @xmath17\nfrom the renormalized version of the njl model @xcite , one can express the neutral pion width at zero temperature and density as @xmath18 where we consider @xmath19 as the formfactor associated with the neutral pion decay into two on - shell photons . in the chiral limit ,\none arrives at the cutoff independent result @xmath20 it is well known that this formula is a result of a relation between the pcac ( partial conservation of axial - vector current ) and an anomaly @xcite .\nsince it has been proved that the anomaly does not depend on temperature @xcite , the thermal effects should be taken into account through @xmath10 .\nwe extend this also for finite densities . from eq .\n( [ sol1 ] ) we get up to first order in @xmath15 , @xmath21 from ( [ sol2 ] ) we see that @xmath7 has a non - zero vacuum expectation value .\nit is convenient to redefine the sigma field as @xmath22 such that @xmath23 has zero expectation value . from the shifted lagrangian the pion mass read @xmath24 where @xmath25 . with the experimental value of the pion decay constant @xmath10\n, @xmath15 has to be adjusted to reproduce the neutral pion mass at zero temperature . in the equation above we have made use of eq .\n( [ sol2 ] ) . then we rewrite eq .\n( [ width ] ) as @xmath26 where @xmath27 . in ( [ width2 ] )\nwe have used @xmath28 given by eq .\n( [ massp ] ) .\nthus , in order to know the behavior of the pion properties in extreme conditions such as high temperatures or densities , it is crucial the knowledge of @xmath10 as function of these variables @xcite . for simplicity , we parametrized @xmath29 and @xmath30 obtained from the njl model @xcite , which are given by @xmath31 @xmath32 with @xmath33 , where @xmath34 is the nuclear matter density .\nthe constants of the expressions above are adjusted such that @xmath35 leading to @xmath36 , @xmath37 for the neutral pion lifetime we define @xmath38 where @xmath39 .\nwe note that all meson properties of interest for us in this work are expressed merely as a function of @xmath10 , in which the properties of the vacuum are encoded . the results for @xmath10 and for the pion mass ,\nwidth and lifetime as a function of temperature and density for _ case i _ are shown in the figures ( [ 1 ] ) to ( [ 8 ] ) below .\nwe observe that the pion mass remains almost constant as the chiral transition point is approached and increases after this point , both as a function of temperature and density .\nthis is usually referred as decoupling of the pion from matter @xcite .\nthe neutral pion width is also approximately constant until the phase transition temperature ( density ) , where it grows up , whereas its lifetime tends to zero around this same temperature .\nthese same results have been found in ref .\n@xcite where the triangle diagram for the @xmath40 decay has been explicitly calculated .\n_ @xmath41 as a function of temperature .\n_ , height=211 ] _ @xmath42 as a function of temperature .\n_ , height=211 ] _ @xmath0 width as a function of temperature for case i. 
_ , height=211 ] _ @xmath0 lifetime as a function of temperature for case i. _ , height=211 ] _ @xmath43 as a function of the density ratio @xmath44 for case i. _ , height=211 ] _ @xmath45 as a function of the density ratio @xmath44 for case i. _ , height=211 ] _ @xmath0 width as a function of the density ratio @xmath44 for case i. _ , height=211 ] _ @xmath0 lifetime as a function of the density ratio for case i. _ , height=211 ]\nnow we turn to the case considered in ref .\n@xcite where the pion mass vanishes at the phase transition temperature . to avoid a divergent @xmath0 width near this critical temperature , the authors of ref .\n@xcite allowed for the coefficient of the symmetry breaking term @xmath15 to be temperature dependent such that , near @xmath46 , @xmath47 ( where @xmath46 is the critical temperature ) in order to have a melting of the neutral pion decay width near @xmath46 .\nthe basic argumentation of the authors of ref .\n@xcite to impose the vanishing of @xmath48 at @xmath46 is that if the gell - mann - oakes - renner @xcite relation is valid at finite temperature , then it would dictate this behavior for @xmath48 .\nhowever , @xmath48 as derived in ref .\n@xcite has that features only around @xmath46 and under certain conditions .\nwe now discuss how to obtain the same results of ref .\n@xcite without the problems and limitations they found in the derivation of @xmath49 .\nif we go back to the second equation of ( [ sol1 ] ) , we find up to second order in @xmath15 @xmath50 inserting @xmath51 from this equation in the equations for @xmath52 and @xmath53 ( keeping up to second order in @xmath15 ) extracted form the shifted lagrangian , @xmath54 we find the values of @xmath15 and @xmath9 that give the experimental value of the pion and sigma masses at zero temperature and density .\nwe have used @xmath55 . for this case the neutral pion width in the medium is @xmath56 with @xmath57 as before given by eqs .\n( [ fp1 ] ) and ( [ fp2 ] ) respectively .\nthe constants necessary to reproduce the vanishing of @xmath57 near the critical point ( again parametrized from ref .\n@xcite ) read as @xmath58 the behavior of the meson masses as well as the neutral pion width as a function of temperature and density for _ case ii _ can be seen in figures ( [ 9 ] ) to ( [ 11 ] ) .\n_ @xmath59 and @xmath60 as a function of temperature for case ii .\n_ , height=211 ] _ @xmath0 width as a function of temperature for case ii .\n_ , height=211 ] _ @xmath0 width as a function of the density ratio @xmath61 for case ii .\n_ , height=211 ]\nwe have studied the behavior of some static properties ( mass , width and lifetime ) of the neutral pion under extreme conditions like high temperature and density in the framework of a renormalized version ( extension ) of the njl model which is basically the lsm .\nwe provided simple expressions for this analysis based on the fact that @xmath10 take into account the modifications imposed by the medium .\nwe have found that depending on the solution chosen for the vacuum expectation value of the sigma field ( i.e. 
, depending on the investigated scenario ) , we reproduce those results of refs .\n@xcite ( finite pion mass at the critical point ) and @xcite ( vanishing pion mass at the critical point ) .\nas the true vacuum is only one , the final word on this subject belongs to the experiment .\nthus , in order to get a conclusive answer for this question , it is crucial to observe and analyze the hadron and photon spectra near the critical temperature in a heavy - ion collision experiment\n. we note that if _ case ii _ ( vanishing pion mass at the critical point ) is correct , then there would be no alteration in the @xmath40 decay rate in hot or dense matter near @xmath62 .\nthis is because the hot hadronic matter lifetime is @xmath63 which is much less than the ( rest ) neutral pions lifetime that are @xmath64 . in this case\nthe neutral pions would mostly decay in the final state ( i.e. , after freeze - out ) . in the other scenario ( _ case i _\n) , where the neutral pion decay is enhanced near the critical temperature , some of the pions that are located in regions of the hadronic matter near to @xmath62 would decay inside the plasma and their width would be affected by it .\nsuch an increase might be detected in heavy - ion collisions .\nit is worth to point out here that the present photon momentum resolution of the star experiment @xcite does not allow any decisive conclusions about the possible enhancement of the @xmath0 width spectra via the @xmath40 decay by the phenix experiment . ] .\na natural extension of this work would be the verification of the ideas presented here in other processes where the decay rates also depend only on @xmath65 and the masses involved ( such as in @xcite ) , with these quantities expressed as a function of temperature and density derived accordingly within the model under consideration .\nthe author would like to thank a.l .\nmota , a. majumder , x.n .\nwang , j. kapusta , d. boyanovsky and s. soff for useful discussions and p. bedaque for helpful discussions and for a critical reading of the manuscript .\ni also thank s. klein for illuminating conversations about the star experiment .\nthe author thanks the nuclear science division of lbl for hospitality and support .\nthis work was partially supported by capes / brazil .\nv. koch , int .\nj. mod.phys .\n* e6 * , 203 ( 1997 ) , nucl - th/9706075 .\ncaldas , a.l .\nmota and m.c .\nnemes , phys .\n* d63 * , 056011 ( 2001 ) , hep - ph/0005180 .\nr. jackiw , _ lectures on current algebra and its application _\n( princeton university press , princeton , nj , 1972 ) .\nv. bernard , ulf - g .\nmeissner and i. zahed , phys . rev .\n* d36 * , 819 ( 1987 ) .\nkumar , d. boyanovsky , h.j .\nde vega and r. holman , phys . rev .\n* d61 * 065002 ( 2000 ) , hep - ph/9905374 . m. gell - mann , r.j .\noaks and b. renner , phys . rev . * 175 * , 2195 ( 1968 ) ; l.j .\nreinders , h. rubinstein and s. yazaki , phys .\n* 127 * , 1 ( 1985 ) ; j. gasser and h. leutwyler , phys\n* 87 * , 77 ( 1982 ) .\n_ , phys . rev\n* 88 * , 022301 ( 2002 ) , nucl - ex/0109003 .\nlett . * 88 * , 242301 ( 2002 ) , nucl - ex/0112006 . j. kapusta and i. shovkovy , phys .\n* c68 * 014901 ( 2003 ) , nucl - th/0209075 .",
"role": "user"
},
{
"content": "we study the behavior of the @xmath0 width as a function of temperature and density . \n we provide simple expressions for the neutral pion width in a hot and dense medium based on a renormalized version of the microscopic nambu jona lasinio model . \n then we consider the two cases where the pion mass is finite or vanishes at the critical temperature and discuss the phenomenological consequences of both cases . \n pacs numbers : @xmath1",
"role": "assistant"
}
] |
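The neutral-pion width discussed in the row above appears only through @xmath placeholders; as a hedged reconstruction, the standard chiral-anomaly expression it presumably reduces to in the chiral limit is

$$\Gamma(\pi^0 \rightarrow \gamma\gamma) = \frac{\alpha^2\, m_\pi^3}{64\, \pi^3 f_\pi^2} \approx 7.7\ \mathrm{eV},$$

and the in-medium behaviour described there follows by replacing $m_\pi$ and $f_\pi$ with their temperature- and density-dependent values, $\Gamma(T,\rho) = \alpha^2 m_\pi^3(T,\rho)/[64\pi^3 f_\pi^2(T,\rho)]$, with the rest-frame lifetime $\tau(T,\rho) = \hbar/\Gamma(T,\rho)$. This is the textbook anomaly result and is assumed here, not taken from the paper's own equations (@xmath18, @xmath20, @xmath26, @xmath38).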
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.11730 | [
{
"content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nThis paper proposes a method to gain extra supervision via multi-task learning for multi-modal video question answering. Multi-modal video question answering is an important task that aims at the joint understanding of vision and language. However, establishing large scale dataset for multi-modal video question answering is expensive and the existing benchmarks are relatively small to provide sufficient supervision. To overcome this challenge, this paper proposes a multi-task learning method which is composed of three main components: (1) multi-modal video question answering network that answers the question based on the both video and subtitle feature, (2) temporal retrieval network that predicts the time in the video clip where the question was generated from and (3) modality alignment network that solves metric learning problem to find correct association of video and subtitle modalities. By simultaneously solving related auxiliary tasks with hierarchically shared intermediate layers, the extra synergistic supervisions are provided. Motivated by curriculum learning, multi task ratio scheduling is proposed to learn easier task earlier to set inductive bias at the beginning of the training. The experiments on publicly available dataset TVQA shows state-of-the-art results, and ablation studies are conducted to prove the statistical validity.\n\nCited Abstract(s):\n[1]: We consider retrieving a specific temporal segment, or moment, from a video given a natural language text description. Methods designed to retrieve whole video clips with natural language determine what occurs in a video but not when. To address this issue, we propose the Moment Context Network (MCN) which effectively localizes natural language queries in videos by integrating local and global video features over time. A key obstacle to training our MCN model is that current video datasets do not include pairs of localized video segments and referring expressions, or text descriptions which uniquely identify a corresponding moment. Therefore, we collect the Distinct Describable Moments (DiDeMo) dataset which consists of over 10,000 unedited, personal videos in diverse visual settings with pairs of localized video segments and referring expressions. We demonstrate that MCN outperforms several baseline methods and believe that our initial results together with the release of DiDeMo will inspire further research on localizing video moments with natural language.\n\n[2]: This paper focuses on temporal localization of actions in untrimmed videos. Existing methods typically train classifiers for a pre-defined list of actions and apply them in a sliding window fashion. However, activities in the wild consist of a wide combination of actors, actions and objects; it is difficult to design a proper activity list that meets users’ needs. We propose to localize activities by natural language queries. Temporal Activity Localization via Language (TALL) is challenging as it requires: (1) suitable design of text and video representations to allow cross-modal matching of actions and language queries; (2) ability to locate actions accurately given features from sliding windows of limited granularity. We propose a novel Cross-modal Temporal Regression Localizer (CTRL) to jointly model text query and video clips, output alignment scores and action boundary regression results for candidate clips. 
For evaluation, we adopt the TaCoS dataset, and build a new dataset for this task on top of Charades by adding sentence temporal annotations, called Charades-STA. We also build complex sentence queries in Charades-STA for testing. Experimental results show that CTRL outperforms previous methods significantly on both datasets.\n\n[3]: Temporal Action Proposal (TAP) generation is an important problem, as fast and accurate extraction of semantically important (e.g. human actions) segments from untrimmed videos is an important step for large-scale video analysis. We propose a novel Temporal Unit Regression Network (TURN) model. There are two salient aspects of TURN: (1) TURN jointly predicts action proposals and refines the temporal boundaries by temporal coordinate regression; (2) fast computation is enabled by unit feature reuse: a long untrimmed video is decomposed into video units, which are reused as basic building blocks of temporal proposals. TURN outperforms the previous state-of-the-art methods under average recall (AR) by a large margin on the THUMOS-14 and ActivityNet datasets, and runs at over 880 frames per second (FPS) on a TITAN X GPU. We further apply TURN as a proposal generation stage for existing temporal action localization pipelines, where it achieves state-of-the-art performance on THUMOS-14 and ActivityNet.",
"role": "user"
},
{
"content": "Temporal localization aims at localizing temporal parts from the given video. [1] proposed the Moment Context Network (MCN) for temporal localization with natural language query. The MCN effectively localizes temporal parts related to natural language query by integrating local and global video feature over time. [2] proposed a multi-task learning approach for temporal localization with natural language query. Location regression and visual-semantic alignment are jointly learned for temporal localization. Temporal Unit Regression Network (TURN) [3] jointly predicts action proposals and refines the temporal boundaries by temporal coordinate regression. Long untrimmed video is decomposed into video clips, which are reused as basic building blocks of temporal proposals for fast computation.",
"role": "assistant"
}
] |
science.craftchem_ner | science.craftchem_ner.3306 | [
{
"content": "Extract all chemical entities (ChEBI) from this biomedical text. Output as JSON array.\n\nText: Mice",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
] |
science.chemsum_single_document_summarization | science.chemsum_single_document_summarization.105599 | [
{
"content": "Write an abstract for this chemistry paper.\n\nTitle:\nNon-Alcoholic Fatty Liver Disease: Current Perspectives and Future Direction in Disease pathogenesis, Treatment and Diagnosis\n\nPaper sections:\nOverview\n<p>Liver disease is a broad term for any kind of disease manifestation or condition that affects the liver and its function. Being the largest solid organ in the human body, the liver is considered a vital organ with multiple functions, which include detoxification, protein synthesis, metabolism, glycogen storage and decomposition of blood cells. Any disease of this organ may manifest itself in a broad range of symptoms that affect different mechanisms and pathways. This has been well demonstrated in an emerging metabolic liver condition known as non-alcoholic fatty liver disease (NAFLD). The manifestation of NAFLD usually encompasses a broad spectrum of symptoms and clinical presentation ranging from simple steatosis to nonalcoholic steatohepatitis (NASH), which eventually may progress to liver cirrhosis and/or hepatocellular carcinoma (HCC) [1–3]. NAFLD is one of the most common liver diseases in the world with a prevalence of 20–30% in the United States [1,3] and approximately 15% in the Asian population [3]. This disease has also been reported among children with an estimate of 3 to 10%, and a higher prevalence has been observed among obese children [1]. It is well-established that NAFLD is linked to obesity, insulin resistance and metabolic syndrome [1,3]. However, due to the presentation of the disease condition and the wide range of clinical presentation of the disease, there is no clear delineation of the disease pathogenesis. It has been elucidated that this disease has a multifactorial emergence and a genetic predisposition due to the interethnic variations observed [4,5] and higher preponderance within families [6,7]. The two-hit hypothesis which was initially proposed in 1998 [8] has been widely used as the pathogenesis pathway of steatosis (first hit) and progression of steatosis (second hit) to NASH. This hypothesis seems to correlate with some of the biochemical and genetic studies conducted on NAFLD [9]. Liver biopsy and histology have long been identified as the \"gold standard\" method for diagnosing and confirming NASH. This is the only test used to accurately assign the different disease stages that can be used as a prognostic indicator of the disease condition. However, the invasiveness of the procedure and the susceptibility of sampling error limit the usefulness of this method for routine diagnostic. Most clinicians and physicians utilize the diagnosis of exclusion approach instead, for ruling in NASH and NAFLD. Even though this approach is not directly useful in confirming NASH, it has been extensively used as predictors for further evaluation and biopsy request. Alternatively, different radiographic imaging techniques have served to identify the presence and extent of steatosis in the liver of NAFLD patients.</p><p>Many different treatment options have been tested for NAFLD and NASH thus far. However, it was not until very recently that consistent pharmacotherapy strategy for the disease was possible. One of the most frequently recommended treatment regimens for patients diagnosed with NAFLD is to lose excess weight, especially in cases with obese or overweight patients. Several early small trials have demonstrated the effectiveness of vitamin E and C in treating NAFLD in adults and children [10,11]. 
These studies initiated the two large, standardized random clinical trials in adults and children [Pioglitazone versus Vitamin E versus Placebo for the Treatment of No diabetic Patients with Nonalcoholic Steatohepatitis (PIVENS) and Treatment of Nonalcoholic Fatty Liver Disease in Children (TONIC)] to demonstrate the usefulness of vitamin E as a therapeutic agent for treating NASH and NAFLD [12,13]. Both studies showed a promising effect on improving NASH clinical and histologic symptoms in adults and children, though failed to demonstrate significant outcome in improving the alanine aminotransferase (ALT) level in children. This finding sheds new light on the pharmacotherapy and early prevention of NAFLD and NASH.</p>\n\nCurrent Perspectives\n<p>Increasing awareness of the disease manifestation, emergence of the disease among pediatric patients, and the importance of detecting this disease in its early stage have molded more genetics and omics related research for understanding the pathogenesis of NAFLD and NASH. Although numerous studies focusing on candidate genes and pathways have been conducted in the past decades, most findings remained contradicting. Recent genome-wide association studies (GWAS) in large patient populations have resulted in a great leap forward to understanding the natural history of NAFLD and NASH. The study by Romeo et al. [14] demonstrated a very strong association between hepatic fat content, hepatic inflammation, and a variant (rs738409) of PNPLA3 gene, which results in the substitution of isoleucine to leucine in this gene product. Although the specific effect of this gene in the liver seems vague and more studies are currently in place to understand the role of this gene in lipid metabolism, this susceptibility allele has been broadly confirmed independently and has been associated with a wide spectrum of the phenotypes in NAFLD and NASH [15,16]. Additional GWAS studies focusing on NAFLD phenotypes defined with different approaches have also identified more risk alleles in NCAN, PPP1R38, GCKR, LYPLAL1, FDFT1, COL13A1, LTBP3, EFCAB4B, ZP4, PZP and DDX60L/PALLD genes [16,17]. Besides these GWAS, other omics analyses using high-throughput transcriptomic, proteomic, metabolomic and lipidomic technologies also made a great progress in identifying pathways involved in the pathogenesis of the disease as well as potential diagnostic markers. Transcriptomic studies were able to demonstrate the differential expression of mRNAs correlating with the different NAFLD clinical spectrum even though the hepatic expression pattern slightly differed between the studies [18,19]. Different analysis platforms and/or sample types were utilized for all the different proteomic studies on NAFLD to date and yet these studies were able to demonstrate marked variations in the protein profiles between different spectrums of NAFLD [20–23]. Some of these studies were successful in distinguishing distinct protein markers that were suggested as possible diagnostic markers. Charlton et al. [22] reported the significance of keratin sulfate proteoglycan over expression directly correlating with progressive NASH symptoms and another protein marker, fatty acid binding protein-1 (FABP-1) showed reduced expression in NASH when compared to its expression in simple steatosis. Another study by Gray et al. [23] identified five significant protein markers that correlated with NAFLD associated fibrosis and HCC and suggested the usefulness of these proteins as possible diagnostic markers. 
Lipidomics studies in NAFLD were successful in demonstrating the involvement of peroxisomal polyunsaturated fatty acid (PUFA) metabolism [24] and free radical mediated linoleic acid oxidation [24,25] with NASH and the disease progression. Plasma metabolomics study showed strong association between bile salts and glutathione metabolites with NAFLD samples [26]. Data collected from all these different omics studies together strongly suggest the involvement of multiple metabolic and pathogenesis pathways in NAFLD.</p>\n\nFuture Directions\n<p>It is anticipated that the integration of the wide spectrum of information retrieved from genomics, transcriptomics, metabolomics, lipidomics and proteomics studies conducted in NAFLD and NASH will mediate future unequivocal interpretation of these multitudes of data for understanding the disease pathogenesis and clinical variation. This definitely emphasizes the importance of open access availability of research information and ability to interact between different research areas. One of the key elements for future research direction may be the identification of the linkage between significant genes, transcripts, lipids, metabolites and proteins identified from omics research and to finally determine the key pathobiology network and pathway involved in the development of hepatic steatosis, initiation of hepatic inflammation and fibrosis, transition to NASH, and progression to liver cirrhosis and hepatocellular carcinoma. Significant genetic and biological markers, especially those involved in a specific signaling pathway, can be used for novel drug target development in the future. Moreover, the integrated information may also provide a better understanding of the drug response variation in NAFLD and NASH and form the solid basis for pharmacogenomics-based studies on specific drugs used for treating NAFLD and NASH. The question remains whether the polymorphisms in some of the significant drug targets and metabolism genes directly associated with NAFLD pathogenesis and drug metabolic genes can be correlated with a specific drug response, especially drugs that have shown promising therapeutic intervention such as Vitamin E. In addition, future research in the development and validation of non-invasive diagnostic tissue markers or biomarkers, for example, carrying of PNPLA3 risk allele for early intervention, might represent another area of scope to focus. These important markers can be used for diagnosing NAFLD and NASH less or non-invasively and/or be useful predictors for drug administration and drug response monitoring. This is only possible if the biochemical role of PNPLA3, the specific PNPLA3 substrate and the role of this gene in NAFLD development were accurately investigated. Furthermore, other prospective omics research areas which will benefit the understanding of NAFLD pathogenesis include metabolomic and lipidomic analysis for both human samples and samples from animal models. This field of study, combined with the genetic information may ultimately lead to personalized nutrition strategies for early prevention and dietary treatment for both NAFLD and NASH.</p>",
"role": "user"
},
{
"content": "Non-alcoholic fatty liver disease (NAFLD) is one of the most common liver diseases in the world. An important implication of this disease is the progression of the disease to a more complicated condition called non-alcoholic steatohepatitis (NASH) and the wide variety of clinical presentations. Over the past 5 years, remarkable progresses have been made in understanding the genetic basis for the disease. Recent clinical trials in pharmacotherapy for the disease have been encouraging as well. It is anticipated that the integration of the wide spectrum information retrieved from genomics, transcriptomics and proteomics studies conducted in NAFLD and NASH will mediate a better understanding of the disease pathogenesis and facilitate the postulation of disease pathobiology pathways. Genetic and biological markers identified from the omics studies may hold promise for diagnosis, personalized treatment, early prevention and new drug development.",
"role": "assistant"
}
] |
science.mslr2022_ms2_multidoc_summarization | science.mslr2022_ms2_multidoc_summarization.9079 | [
{
"content": "Based on the RCT reports below, write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article.\n\n\n\nTitle 1:\nAn overview of randomized trials of rehabilitation with exercise after myocardial infarction.\nAbstract 1:\nOf 22 r and omized trials of rehabilitation with exercise after myocardial infa rct ion ( MI ) , one trial had results that achieved conventional statistical significance . To determine whether or not these studies , in the aggregate , show a significant benefit of rehabilitation after myocardial infa rct ion , we performed an overview of all r and omized trials , involving 4,554 patients ; we evaluated total and cardiovascular mortality , sudden death , and fatal and nonfatal reinfa rct ion . For each endpoint , we calculated an odds ratio ( OR ) and 95 % confidence interval ( 95 % CI ) for the trials combined . After an average of 3 years of follow-up , the ORs were significantly lower in the rehabilitation than in the comparison group : specifically , total mortality ( OR = 0.80 [ 0.66 , 0.96 ] ) , cardiovascular mortality ( OR = 0.78 [ 0.63 , 0.96 ] ) , and fatal reinfa rct ion ( OR = 0.75 [ 0.59 , 0.95 ] ) . The OR for sudden death was significantly lower in the rehabilitation than in the comparison group at 1 year ( OR = 0.63 [ 0.41 , 0.97 ] ) . The data were compatible with a benefit at 2 ( OR = 0.76 [ 0.54 , 1.06 ] ) and 3 years ( OR = 0.92 [ 0.69 , 1.23 ] ) , but these findings were not statistically significant . For nonfatal reinfa rct ion , there were no significant differences between the two groups after 1 ( OR = 1.09 [ 0.76 , 1.57 ] ) , 2 ( OR = 1.10 [ 0.82 , 1.47 ] ) , or 3 years ( OR = 1.09 [ 0.88 , 1.34 ] ) of follow-up . The observed 20 % reduction in overall mortality reflects a decreased risk of cardiovascular mortality and fatal reinfa rct ion throughout at least 3 years and a reduction in sudden death during the 1st year after infa rct ion and possibly for 2 - 3 years . With respect to the independent effects of the physical exercise component of cardiac rehabilitation , the relatively small number of \" exercise only \" trials , combined with the possibility that they may have had a formal or informal nonexercise component precludes the possibility of reaching any definitive conclusion . To do so would require a r and omized trial of sufficient size to distinguish between no effect and the most plausible effect based on the results of this overview\n\n\nTitle 2:\nPhysical activity and risk of cancer in the NHANES I population.\nAbstract 2:\nWe studied the relation between self-reported physical activity and cancer in the first National Health and Nutrition Examination Survey ( NHANES I ) cohort , originally examined between 1971 - 75 , and followed prospect ively through the Epidemiologic Follow-up Study ( NHEFS ) , conducted between 1982 - 84 . Among 5,138 men and 7,407 women 25 - 74 years old , for nonrecreational activity we observed increased risk of cancer among inactive individuals compared to very active persons ( for men , relative risk [ RR ] 1.8 , 95 % confidence interval [ CI ] = 1.4 , 2.4 ; for women RR 1.3 , 95 % CI = 1.0 , 1.8 ) . These findings were unchanged after adjustment for cigarette smoking , body mass index ( BMI ) , and other potential confounders . 
Sites which demonstrated stronger inactivity-cancer associations included colorectum ( RR 1.6 , 95 % CI = 0.7 , 3.5 ) and lung ( RR 1.6 ; 95 % CI = 1.2 , 3.5 ) among men , and breast ( post-menopausal ) ( RR 1.7 ; 95 % CI = 0.8 , 2.9 ) and cervix ( RR 5.2 ; 95 % CI = 1.4 , 14.5 ) among women , although these findings for women were based on relatively few cases . The association between inactivity and cancer was greater among persons of moderate ( or lower ) BMI , those cases occurring three or more years after baseline , and , in women , those more than 60 years old . In contrast , recreational exercise showed little relation to cancer , with the exception of prostate cancer . The results suggest that inactive individuals are at increased risk of cancer\n\n\nTitle 3:\nThe effect among older persons of a general preventive visit on three health behaviors: smoking, excessive alcohol drinking, and sedentary lifestyle. The Medicare Preventive Services Research Team.\nAbstract 3:\nBACKGROUND The U.S. Congress m and ated evaluations , initiated in 1989 , to determine whether extending Medicare benefits to include preventive services would improve health status , reduce costs of care , and improve health risk behaviors of beneficiaries . METHODS The Johns Hopkins Medicare Preventive Services Demonstration was a r and omized trial in which Medicare beneficiaries were assigned either to an intervention group that was offered yearly preventive visits for 2 years and optional counseling visits to their primary care provider or to a control group that received usual care . This report describes the effect of the intervention over a period of 2 years on smoking , problem alcohol use , and sedentary lifestyle . RESULTS Differences were observed between the intervention and control groups in the extent to which changes occurred in smoking and problem alcohol use , but none of the differences was statistically significant . The proportion of smokers who quit was higher in the intervention group than in the control group ( 24.2 vs 17.9 % , P = 0.09 ) . However , a higher proportion of problem drinkers in the control group improved ( 67.1 vs 57.0 % , P = 0.183 ) . There was virtually no difference between the intervention and the control groups in the proportion with improvement in sedentary lifestyle . CONCLUSIONS This study demonstrates the difficulty of bringing about health behavior change in older patients in the course of a yearly preventive visit for 2 years with their primary care physician when the visit encompasses screening and immunizations , as well as health behavior counseling directed by the physician . Further study is required to determine whether a more intense program of counseling for health behavior change among older persons by their primary care providers would be effective\n\n\nTitle 4:\nSelf-reported physical activity predicts long-term coronary heart disease and all-cause mortalities. Twenty-one-year follow-up of the Israeli Ischemic Heart Disease Study.\nAbstract 4:\nOBJECTIVE To determine whether self-reported physical activity predicts a decreased rate of coronary heart disease ( CHD ) and all-cause mortalities in middle-aged men when rates are adjusted for known confounders . DESIGN Cohort Analytic Study of Israeli government employees in 1963 . 
SUBJECTS Eight thous and four hundred sixty-three Israeli male government employees , aged 40 years or older , representing six areas of birth , excluding those with known cardiovascular disease in either 1963 or 1965 , from an original cohort of 10,059 . MAIN OUTCOME MEASURE Comparison of rates of death due to CHD and all causes , determined from death certificates in 21 years of follow-up , for subjects with different baseline levels of self-reported leisure-time and work-related physical activities measured in 1965 . RESULTS Self-reported leisure-time but not work-related physical activity was inversely related to both CHD ( adjusted relative risk , 0.79 ; 95 % confidence interval , 0.66 to 0.95 ) and all-cause mortalities ( adjusted relative risk , 0.91 ; 95 % confidence interval , 0.83 to 0.99 ) . Most of the apparent benefit accrued was from light physical activity on less than a daily basis . These inverse relationships persisted after adjustment for age , systolic blood pressure , cigarette smoking , total and high-density lipoprotein cholesterol levels , body mass index , psychosocial factors , and other potential confounders . CONCLUSION Baseline levels of self-reported leisure-time physical activity predicted a decreased rate of CHD and all-cause mortalities in employed middle-aged Israeli men followed up prospect ively for 21 years\n\n\nTitle 5:\nControlled trial of a time-efficient method of health promotion.\nAbstract 5:\nIntegrating health promotion into daily clinical practice is a challenging undertaking . This project was a prospect i ve trial of a brief physician intervention program , performed as part of routine patient care , that addressed six areas of patients ' lifestyle risks ( nutrition , exercise , alcohol use , smoking , stress , and seat belt use ) . The components of the program were lifestyle assessment , physician prescription of lifestyle change , and educational material s. Patients visiting a family practice residency model office for any reason were selected to receive all program components , parts of the program , or no program components . Lifestyle behavior changes over the ensuing four-week period were measured for all groups by a structured phone interview . Patients who received one or more components of the program reported significantly more change at follow-up than those who received no program components ( P = .001 ) ; the greatest change was seen in the group that received all of the program components . Demographic variables that were significantly related to a higher degree of change were female sex , previous contact with the physician , and being married . In this sample age , income , and education were not related to behavior change . Motivation to change -- an expressed desire to work on a given lifestyle area -- was a stronger predictor of change than any demographic variable\n\n\nTitle 6:\nTraining physicians to conduct physical activity counseling.\nAbstract 6:\nBACKGROUND In accordance with the U.S. Preventive Services Task Force recommendations , the current pilot study tests the feasibility and efficacy of a physician-delivered physical activity counseling intervention . METHODS A sequential comparison group design was used to examine change in self-reported physical activity between experimental ( counseling and self-help material s ) and control ( usual care ) patients at base-line and 6 weeks after the initial office visit . 
Patients in both groups were contacted by telephone 2 weeks after their office visit and asked about the physical activity counseling at their most recent physician visit . Experimental patients also received a follow-up appointment to discuss physical activity with their physician 4 weeks after their initial visit . RESULTS Counseling was feasible for physicians to do and produced short-term increases in physical activity levels . Both groups increased their physical activity , but the increase in physical activity was greater for patients who reported receiving a greater number of counseling messages . CONCLUSIONS Physician-delivered physical activity interventions may be an effective way to achieve wide-spread improvements in the physical activity of middle-aged and older adults\n\n\nTitle 7:\nA controlled trial of physician counseling to promote the adoption of physical activity.\nAbstract 7:\nBACKGROUND In accordance with one of the Year 2000 Health Objectives , the current study tests the efficacy of brief physician-based counseling to increase physical activity in sedentary patients in a nonr and omized controlled trial . METHODS Control and intervention physicians were matched on medical practice variables . Two hundred fifty-five apparently healthy , sedentary , adult patients were recruited from 17 physician offices ( mean age = 39 years , 84 % female , 28 % ethnic minority ) . Intervention physicians delivered 3 to 5 min of structured physical activity counseling during a well visit or follow-up for a chronic condition . A health educator made a brief booster phone call to patients 2 weeks after receiving physician counseling . Self-reported physical activity and stage of change ( i.e. , behavioral readiness to adopt or maintain activity ) were collected at baseline and at 4- to 6-week follow-up . Objective activity monitoring was conducted on a sub sample . RESULTS Intervention patients reported increased walking more than control patients ( + 37 min/week vs. + 7 min/week ) . There was a significant intervention effect on the activity monitor . Intervention participants also demonstrated a greater increase in readiness to adopt activity than control subjects . CONCLUSIONS Physician-based counseling for physical activity is efficacious in producing short-term increases in moderate physical activity among previously sedentary patients\n\n\nTitle 8:\nThe feasibility of behavioral risk reduction in primary medical care.\nAbstract 8:\nThis prospect i ve study evaluated changes in patients ' health-risk behavior one year after preventive intervention by primary care physicians . The trial used a quasiexperimental design with 2,218 adults ( 1,409 study subjects and 809 controls ) at five multispecialty , group practice sites in three regions of the United States . Pre- and postintervention surveys showed that the treated study patients with behavioral risks were more likely to report positive changes than were controls in regard to beginning regular exercise ( P = .02 ) , using auto seat belts , ( P less than .001 ) , losing weight ( P = .05 ) , decreasing alcohol intake ( P = .01 ) , and to performance of monthly breast self-examination by women ( P less than .001 ) . The smoking cessation rate was greater among the treated study group compared with the controls , although the change was not significantly different . Greater behavioral risk changes also occurred among the total study group ( treated and untreated ) in comparison with the control group . 
An additive index of these behavioral risks showed greater reduction among the treated and the total group of study patients compared with controls . These initial results suggest that clinical preventive services , which include risk factor education and counseling by primary care physicians , can improve short-term health-related behavior of patients",
"role": "user"
},
{
"content": "There is limited evidence from well design ed trials that office based physical activity promotion in primary care setting s is efficacious in promoting changes in physical activity that could conceivably have lasting clinical benefits",
"role": "assistant"
}
] |
science.chemsum_single_document_summarization | science.chemsum_single_document_summarization.13551 | [
{
"content": "Write an abstract for this chemistry paper.\n\nTitle:\nPhosphine-catalyzed enantioselective [3 + 2] cycloadditions of γ-substituted allenoates with β-perfluoroalkyl enones\n\nPaper sections:\nIntroduction\n<p>Cyclopentenes (or cyclopentanes) are valuable skeletons found in several natural products and pharmaceuticals (Fig. 1). 1 Among existing methodologies for their preparation, phosphinecatalyzed [3 + 2] cycloaddition of allenoates with electron-decient olens was rst reported by Lu in 1995 as a powerful and straightforward strategy for the construction of functionalized cyclopentene rings. 2,3 As a result of tremendous effort from numerous research groups, Lu's enantioselective [3 + 2] cycloaddition reaction of terminal allenoates with electron-decient olens has been well established over the past years. 4 However, asymmetric [3 + 2] cycloaddition reaction of g-substituted allenoates with electron-decient olens has been less explored despite the increase in stereochemical diversity of the cycloaddition products. In 2007, Miller's group rst realized a unique \"deracemization\" reaction upon cycloaddition of chalcone with racemic g-methyl allenoates but requisite the use of a stoichiometric amount of chiral phosphine catalyst A (Scheme 1a). 4c Subsequently, Fu and co-workers have accomplished the cycloaddition reaction of racemic g-substituted allenoates with heteroatom-bearing olens with the use of a catalytic amount of chiral phosphine B, furnishing a facile access to functionalized cyclopentenes with two adjacent stereo centers (Scheme 1b). 5 Recently, Marinetti and coworkers have reported a highly enantioselective [3 + 2] cycloaddition of g-substituted allenoates with benzylidenemalononitrile by utilizing chiral phosphahelicenes catalyst C (Scheme 1b). 6 Despite this progress, the scope of g-substituted allenoates and electron-decient olen partner for enantioselective Lu's annulation is very limited, and the construction of cyclopentene derivatives with three contiguous chiral stereocenters has been a major challenge but a highly desirable task. Moreover, introduction of peruoroalkylated, particularly triuoromethylated, stereocenters into chiral compounds have garnered special attention in pharmaceutical and pesticide industry since the polarity, bioavailability, metabolic stability and other properties of the parent molecules could be inuenced greatly by these peruoroalkyl groups. 7 During the course of our continuous interest in design, synthesis and application of novel chiral baminephosphines 8,9 in asymmetric catalysis and the synthesis of enantio-enriched triuoromethylated building blocks, 8d,g we envisaged that the challenging enantioselective [3 + 2] cycloadditions of g-substituted allenoates with b-peruoralkyl a,benones might be addressed by systematic screening of known phosphines or rational design of new catalysts (Scheme 1b). In the present study, we report our efforts in addressing this challenging reaction by identifying two phosphine catalysts, commercially available bisphosphine (R,R)-DIPAMP and novel multifunctional (S)-P3 which have been developed in our group. 
Further control experiments have shown that the reaction under the catalysis of (R,R)-DIPAMP was a deracemization process, while the kinetic resolution reaction was observed under the multifunctional phosphine catalyst (Scheme 1).</p>\n\nResults and discussion\n<p>In order to validate the feasibility of the asymmetric [3 + 2] cycloaddition of g-substituted allenoates with b-peruoralkyl a,b-enones, allenoate 2a and enone 1a were exposed to a range of commercially available chiral bisphosphine catalysts (Table 1). A small amount of the desired 3aa was observed when (S,S)-DIOP or (R,R)-Et-DUPHOS was utilized as the catalyst (Table 1, entries 1 and 2). Interestingly, (R,R)-Et-BPE exhibited a promising level of reactivity with 64% yield and stereoinduction with 39% ee (Table 1, entry 3). Fortunately, 86% yield of 3aa with 89% ee was obtained using (R,R)-DIPAMP as a catalyst (Table 1, entry 4). It can be noted that multifunctional chiral phosphines (S)-P1-P6 bearing hydrogen bond donors, such as amide and (thio) urea groups, could deliver higher chemical yield but with unacceptable enantioselectivity (Table 1, entries 5-10). Gratifyingly, the enantioselectivity was improved to 92%, albeit with a slightly lower yield when decreasing the reaction temperature from 25 C to À20 C (Table 1, entries 11-13). However, much lower reaction temperature was not benecial for enantioselectivity and reactivity (Table 1, entry 14). In addition, much lower yield and enantioselectivity was observed when (Z)-1a was utilized in the reaction, indicating that the conguration of enone also affected the reaction signicantly (Table 1, entry 15). Further screening of solvents demonstrated that toluene was the best reaction medium for this transformation (see ESI † for details). Then, the optimized reaction conditions were identied: 10 mol% (R,R)-DIPAMP as the catalyst and toluene as the reaction medium at À20 C.</p><p>Under optimal reaction conditions, we investigated the scope of the enantioselective [3 + 2] cycloaddition reaction (Scheme 2). Remarkably, a wide range of b-triuoromethyl substituted enones containing different electron nature functional groups worked well with allenoate 2a, thereby resulting in a highly regioselective a-addition products 3ba-3ha in good yields with 88-94% ee. However, the introduction of an ortho substituent, such as Cl and Br, to the phenyl ring of enone led to dramatic decrease in the enantioselectivity (3ia and 3ja). To our delight, naphthyl-and heteroaryl-containing substrates 1k-1o were also compatible, efficiently furnishing a set of tri-uoromethylated cyclopentenes containing naphthyl-and heteroaryl frameworks 3ka-3oa. In addition, the present protocol could be readily extended to the challenging synthesis of cyclohexenyl and cyclohexyl based triuoromethyl enone 1p and 1q. It was noteworthy that both b-pentauoroethyl and b-hepta-uoropropyl enone were particularly effective in the present transformation, forming valuable peruoroalkyl substituted cyclopentene 3ra and 3sa in good yields with 94% ee. Furthermore, g-aryl allenoates 2b-2d with substituted aryl and hetereoaryl groups were well applicable and formed corresponding products 3ab-3ad with high regioselectivity and enantioselectivity. The absolute conguration of product 3aa was conrmed by single-crystal X-ray diffraction analysis. 
10 Aer intensive screening of various chiral phosphine catalysts, it was found that multifunctional phosphine (S)-P3 displayed good performance in the substrates with ortho-substituent, and the desired products 3ia and 3ja could be isolated in 85-88% yields with 96% and 99% ee, respectively (Scheme 3).</p><p>Unfortunately, the performance of (R,R)-DIPAMP in the cycloaddition of g-alkyl substituted allenoates was not as good as that in the cases of g-aryl substituted allenoates. For example, the reaction of 2e with 1c resulted in the formation of desired 3ce in 67% yield but with only 86% ee. Aer further screening of a series of chiral phosphine catalysts, solvents and reaction temperature, it was found that (S)-P3 was a privileged catalyst for cycloaddition of g-alkyl allenoates. In general, allenoates 2e-2g with different alkyl substituents at g position participated in the annulation process with good regio-and enantioselectivity. In addition, diverse alkyl substituents such as benzyl, halogen and ester group were well tolerant, furnishing the corresponding cycloadducts 3ch-3cj in moderate to good yields with high enantioselectivity. Furthermore, allenoates with bulky substituents such as isopropyl, cyclopentyl and cyclohexyl at g position worked well, thereby forming the desired 3ck-3cm in good yields with 92-94% ee. Good to excellent regioselectivity and enantioselectivity were also obtained in the cycloaddition reactions of allenoate 2g with a wide range of b-triuoromethyl substituted enones (Scheme 4). Next, we turned our attention to gain insight into catalytic process for the proposed [3 + 2] cycloaddition reaction. In case of (R,R)-DIPAMP catalysed cycloaddition of 1d and racemic 2a, the starting material 2a was recovered in 38% yield (based on 2a) with 0% ee (eqn (1)). Furthermore, when optically active allenoate (+)-2a (76% ee) served as the substrate, ee of 3da did not improve but the recovered (+)-2a had a higher ee (eqn (2)). These results have supported that a deracemization process was followed in the (R,R)-DIPAMP catalysed cycloaddition of 1d and 2a.</p><p>(1)</p><p>(2)</p><p>To examine both the phosphines in (R,R)-DIPAMP induce enantioselectivity independently or cooperatively, (R,R)-SDI-PAMP that contained only one nucleophilic phosphine was synthesized and subjected to the reaction of 1d and racemic 2a (Scheme 5). Although the reaction became slower, enantioselectivity of 3da remained unchanged, demonstrating that both the phosphines in (R,R)-DIPAMP might induce enantioselectivity independently (Scheme 5b).</p><p>Based on the abovementioned results and earlier reports, 11 a plausible catalytic cycle for (R,R)-DIPAMP catalysed asymmetric [3 + 2] cycloaddition reaction of g-aryl allenoates with triuoromethyl enones has been illustrated in Scheme 6. The zwitterionic intermediate I was formed through nucleophilic addition of (R,R)-DIPAMP to racemic 2a. The deracemization process resulted from the same nucleophilic attack rate (K 1 ¼ K 2 ) of (R,R)-DIPAMP to both the enantiomers of allenoates 2a.</p><p>The subsequent [3 + 2] cycloaddition favoured a-addition to provide intermediate II, which then underwent proton transfer to provide intermediate III. 
Finally, (R,R)-DIPAMP and cyclopetene 3da were released from intermediate III.</p><p>In contrast to (R,R)-DIPAMP, a kinetic resolution reaction takes place with multifunctional chiral phosphine (S)-P3 as the catalyst and (+)-2a 12 and (+)-2g 13 is recovered in 76% ee (in toluene, 77% ee in CHCl 3 ) and 81% ee respectively (eqn ( 3) and ( 4)). In order to conrm the possible hydrogen-bonding interaction during the catalytic process, (S)-P7 without hydrogenbond donor was synthesized and subjected to the cycloaddition reaction (Scheme 7). The conversion decreased dramatically under higher catalyst loading and higher reaction temperature. The ee value of the recovered 2g also vanished (Scheme 7b). These results demonstrated that the hydrogen- On the other hand, the nucleophilic attack of (S)-P3 with (+)-2 might be suppressed by the steric interaction of the bulky R 2 group with the phenyl moiety (Scheme 8, TS-2). Accordingly, different nucleophilic attack rates (K 1 > K 2 ) of (S)-P3 to both the enantiomers of allenoates 2 contribute to the kinetic resolution process. It should be note that further experimental and theoretical studies are required to gain insights into kinetic resolution process.</p>\n\nConclusions\n<p>In conclusion, we have developed a highly regio-, diastereo-and enantioselective [3 + 2] cycloaddition of g-substituted allenoates with b-peruoroalkyl enones with (R,R)-DIPAMP or (S)-P3 as a catalyst; it provides a facile access to a wide range of tri-uoromethylated cyclopentenes with three contiguous chiral centers (up to 88% yield with 99% ee). In case of g-aryl allenoate, commercially available chiral phosphine (R,R)-DIPAMP was identied as an efficient catalyst. In contrast, presently developed multifunctional phosphine (S)-P3 has displayed high performance in the asymmetric cycloaddition of g-alkyl allenoates with triuoromethyl enones. In addition, control experiments have demonstrated that under the catalysis of (R,R)-DIPAMP, racemic allenoate reacted with triuoromethyl enone through a \"deracemization\" process, whereas a clearly kinetic resolution reaction takes place with multifunctional chiral phosphine (S)-P3 as a catalyst due to the hydrogenbonding interaction between catalyst and the allenoate. Efforts toward other transformations of allenoate under the catalysis of our developed catalysts P1-P6 are currently underway and will be reported in due course.</p>",
"role": "user"
},
{
"content": "The enantioselective construction of densely functionalized cyclopentene bearing contiguous three stereocenters has been a challenging task in organic synthesis. Herein, we present a phoshine-catalyzed highly regio-, diastereo-and enantioselective [3 + 2] cycloaddition of g-substituted allenoates with b-perfluoroalkyl enones, delivering a wide range of densely functionalized perfluoroalkylated cyclopentenes with three contiguous chiral stereocenters.",
"role": "assistant"
}
] |
science.mslr2022_ms2_multidoc_summarization | science.mslr2022_ms2_multidoc_summarization.10542 | [
{
"content": "Based on the RCT reports below, write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article.\n\n\n\nTitle 1:\nClinical efficacy of racemic albuterol versus levalbuterol for the treatment of acute pediatric asthma.\nAbstract 1:\nSTUDY OBJECTIVE An efficacy treatment study is conducted comparing levalbuterol to racemic albuterol for acute pediatric asthma in the emergency department ( ED ) . METHODS This was a prospect i ve , double-blind , r and omized , controlled study involving 129 children ( 2 to 14 years ) , presenting to a pediatric ED with an acute moderate or severe asthma exacerbation . Children were treated using a st and ard ED asthma pathway . Primary outcomes were changes from baseline in clinical asthma score and the percentage of predicted forced expiratory volume in 1 second after the first , third , and fifth treatment . Secondary outcomes included number of treatments , length of ED care , rate of hospitalization , and changes in pulse rate , respiratory rate , and oxygen saturation . Occurrence of adverse events was recorded . RESULTS Sixty-four children in the racemic albuterol and 65 children in the levalbuterol group completed the study . There were no differences between groups in primary outcomes , secondary outcomes , or adverse events . CONCLUSION There was no difference in clinical improvement in children with acute moderate to severe asthma exacerbations treated with either racemic albuterol or levalbuterol\n\n\nTitle 2:\nPreferential pulmonary retention of (S)-albuterol after inhalation of racemic albuterol.\nAbstract 2:\nThe (R)-enantiomer of racemic albuterol produces bronchodilation , whereas the (S)-enantiomer may increase airway reactivity . After oral or intravenous administration of racemic albuterol , the (R)- enantiomer is metabolized several times faster than the (S)-enantiomer ; however , enantiomer disposition after inhaling racemic albuterol with a metered-dose inhaler ( MDI ) is not known . Accordingly , 10 healthy subjects inhaled racemic albuterol with a MDI alone and with a MDI and holding chamber . We measured plasma levels of unchanged (R)- and (S)-albuterol before and up to 4 h after inhalation of racemic albuterol , and determined the unchanged R/S ratio in urine before and at 0.5 , 4 , 8 , and 24 h later . The disposition of albuterol 's enantiomers with a MDI and holding chamber was similar to that with a MDI alone . The area under the curve ( AUC ) of the plasma levels over time was significantly lower for the (S)- than for the (R)-enantiomer-395.5 + /- 141.0 ( SE ) versus 882.7 + /- 126.4 ng . ml(-)(1 ) . min ( p < 0.05)-indicating preferential retention of (S)-albuterol in the lung . The R/S ratio in urine at 0 . 5 h after albuterol was > 1 , reflecting the higher plasma level of the (R)-enantiomer . In conclusion , preferential retention of the (S)- compared with the (R)-enantiomer in the lung could lead to accumulation of the (S)-enantiomer after long-term use of racemic albuterol\n\n\nTitle 3:\nLevosalbutamol vs racemic salbutamol in the treatment of acute exacerbation of asthma\nAbstract 3:\nObjective To compare efficacy and tolerability of levosalbutamol ( Group 1 ) and racemic salbutamol ( Group 2 ) for the treatment of acute exacerbation of asthma in children age 5 to 18 yr . Methods A r and omized double blind clinical study involving 60 children was undertaken between October ’ 06 to December ’ 07 . 
Results The following baseline clinical characteristic were recorded initially and after giving 3 nebulizations at 20 min intervals in the Ist hour of presentation viz respiratory rate ( RR ) , heart rate ( HR ) , oxygen saturation in room air SPO2 , PEFR ( peak expiratory flow rate ) , serum K+ level and asthma score . In Group 1 patients ( levosalbutamol ) , there was significant increment in SPO2 and PEFR ( P<0.05 ) values with decrease in tachypnea and asthma score while no significant difference was found in pre and post treatment HR & Serum K+ levels . In Group 2 patients although there was clinical improvement in terms of SPO2 , PEFR , RR and asthma score , it result ed in significant tachycardia and decrease in K+ levels . Conclusion Levosalbutamol appears to be more efficacious than racemic salbutamol in terms of improvement in PEFR , SPO2 and asthma score while deleterious effects of tachycardia and fall in serum K+ were seen with racemic salbutamol\n\n\nTitle 4:\nLevalbuterol Versus Racemic Albuterol in the Treatment of Acute Exacerbation of Asthma in Children\nAbstract 4:\nObjective : To compare levalbuterol and racemic albuterol for the treatment of acute exacerbation of asthma in pediatric population . Design : Prospect i ve , double-blind , r and omized research trial in a pediatric emergency department of an urban tertiary care hospital . Participants : Children 5 to 21 years with a history of asthma presenting to the emergency department in acute exacerbation . Interventions : As per a computer-generated r and omization sequence , patients received either 1.25 mg of levalbuterol or albuterol 2.5 mg via nebulization along with ipratropium hydrochloride . Patients received 3 back-to-back treatments as needed every 20 minutes , maximum of 3 ; 2 mg/kg of oral prednisone was administered to the patients after the second treatment . Baseline respiratory parameters such as oxygen saturations , respiratory rates , and peak flow rates were measured and repeated after every treatment . The decision for further treatments and or hospitalization was made by the treating emergency department physician as per his/her clinical judgement of the respiratory parameters at the end of 3 treatments . Results : Seventy patients completed the study . Most of the patients were in moderate severity of asthma exacerbation . All patients in both groups showed improvement in oxygen saturations , respiratory rates , and peak flow rates . However , no statistically significant difference was observed in the 2 groups regarding the respiratory parameters ( P > 0.05 ) . Conclusion : Levalbuterol is not more efficacious than racemic albuterol in improving respiratory parameters in children presenting with acute exacerbation of asthma\n\n\nTitle 5:\nEfficacy of Racemic Albuterol versus Levalbuterol Used as a Continuous Nebulization for the Treatment of Acute Asthma Exacerbations: A Randomized, Double-Blind, Clinical Trial\nAbstract 5:\nObjective . To compare racemic albuterol ( RAC ) with levalbuterol ( LEV ) in continuous form for the treatment of acute pediatric asthma exacerbations in the emergency department . Study design . Children between the ages of 6 and 17 inclusive were enrolled if they had a history of asthma , presented to the emergency department with an acute asthma exacerbation , and had an initial forced expiratory volume in 1 second ( FEV1 ) < 70 % predicted . 
Patients were then r and omized to receive either 7.5 mg of RAC or 3.75 mg of LEV over 1 hour , in addition to st and ard asthma therapies . Spirometry and asthma scoring were performed at the end of the first hour , and a second hour-long nebulization with the same drug was administered if deemed necessary . Spirometry and asthma scoring were again performed and the final disposition was recorded . As a second , optional part of the study , baseline serum albuterol levels were collected on some patients before treatment . Results . A total of 99 patients completed the study ( 44 RAC and 55 LEV ) . Baseline characteristics were similar except that the RAC group had a higher baseline asthma score . Children in the RAC group had a greater improvement in their FEV1 ( p = .043 ) as well as in their asthma scores ( p = .01 ) after 1 hour of continuous treatment compared to the LEV group . The greater improvement in asthma scores was maintained after the second hour of continuous therapy in the RAC group ( p = .008 ) but not for FEV1 measurements ( p = .57 ) . There were no differences between groups for changes in heart rate , respiratory rate , oxygen saturation , or rates of admission . Conclusions . At the doses used , RAC appears to be superior to LEV with respect to changes in FEV1 and asthma score . There was no significant difference between the drugs with respect to admission rates or side-effect profile\n\n\nTitle 6:\nPharmacokinetics and pharmacodynamics of cumulative single doses of inhaled salbutamol enantiomers in asthmatic subjects.\nAbstract 6:\nObjectives of this study were to compare the pharmacokinetics , pharmacodynamics and safety of single cumulative doses of active (R)-salbutamol given either as the single enantiomer or racemic mixture by inhalation to subjects with mild to moderate asthma . This was a double-blind , crossover , cumulative-dose , r and omized study where all subjects received either four doses of 1.25 mg of (R)-salbutamol or 2.5 mg of racemic ( RS- ) salbutamol by nebulization . The pharmacokinetic parameters were determined by noncompartmental analysis and model-fitting . Changes in FEV(1 ) , plasma potassium , plasma glucose , heart rate , and QTc interval were measured . The potassium and glucose data were fitted to indirect response pharmacodynamic models . The heart rate and QTc data were evaluated using data descriptors . No significant differences in pharmacokinetics of (R)-salbutamol given as either (R)- or (RS)-salbutamol were found with AUC values of 11.90 + /- 4.37 and 11 . 47 + /- 2.88 ng.h/ml . The t(max)of about 2 h reflected serial dosing rather than delayed absorption . The t(1/2)averaged about 3.5 h. The (S)-salbutamol showed AUC of 48.46 + /- 12.11 ng.h/ml with a t(1/2)of about 5 h. The changes in FEV(1)reached a plateau after an initial increase and did not return to pre-drug values for 10 h. All pharmacodynamic parameters were similar whether (R)- or (RS)-salbutamol was given . The exposure to (R)-salbutamol was identical after inhalation of ( R ) - and (RS)-salbutamol by subjects with asthma . Several pharmacological responses including FEV(1)were also similar and there were no unique safety concerns with either treatment\n\n\nTitle 7:\nA comparison of levalbuterol with racemic albuterol in the treatment of acute severe asthma exacerbations in adults.\nAbstract 7:\nUNLABELLED This multicenter , r and omized , double-blind trial compared nebulized levalbuterol ( Lev ) and racemic albuterol ( Rac ) in the treatment of acute asthma . 
METHODS Adults with acute asthma exacerbations ( FEV(1 ) 20%-55 % predicted ) received prednisone and either Lev ( 1.25 mg , n = 315 ) or Rac ( 2.5 mg , n = 312 ) . Nebulized treatments were administered every 20 minutes in the first hour , then every 40 minutes for 3 additional doses , then as necessary for up to 24 hours . The primary end point was time to meet discharge criteria . Secondary end points included changes in lung function and hospitalization rates . A subset of 160 patients had plasma (S)-albuterol concentrations determined at study entry . RESULTS Time to meet discharge criteria did not differ between the 2 treatments . FEV(1 ) improvement was greater following Lev compared with Rac , both after dose 1 and cumulatively over the entire treatment period ( dose 1 in intent to treat [ ITT ] group : Lev 0.50 + /- 0.43 L , Rac 0.43 + /- 0.37 L ; P = .02 ) , particularly among the 60 % of patients not on recent steroid therapy ( dose 1 : Lev 0.58 + /- 0.47 L , Rac 0.44 + /- 0.37 L ; P < .01 ) , and patients whose entry (S)-albuterol concentrations were in the highest quartile of those measured . A small and similar proportion of Lev-treated ( 7.0 % ) and Rac-treated ( 9.3 % ) patients required hospitalization ( P = .28 ) . Among patients not on steroids , fewer Lev- than Rac-treated patients required admission ( 3.8 % vs 9.3 % , P = .03 ) , as was also the case for patients with high plasma (S)-albuterol concentrations . Asthma relapses ( 5 % in 30 days ) were lower than in previous reports and did not differ between groups . CONCLUSIONS This study suggests that early , regular nebulized beta(2)-agonist and systemic corticosteroid therapy may reduce hospitalization and relapse rates in patients with acute severe asthma . Lev was well tolerated and compared favorably with Rac in improving airway function , particularly in those who were not on inhaled or oral corticosteroids and in those who had high plasma (S)-albuterol concentrations at presentation",
"role": "user"
},
{
"content": "There were no significant differences in side effects between groups .\nLevalbuterol was not superior to albuterol regarding efficacy and safety in subjects with acute asthma .\nWe suggest that levalbuterol should not be used over albuterol for acute asthma",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.2343 | [
{
"content": "Find entities of these types in the text: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nOutput a JSON object with entity types as keys and lists of found entities as values.\n\nText:\nA randomized placebo-controlled prevention trial of aspirin and/or resistant starch in young people with familial adenomatous polyposis . Evidence supporting aspirin and resistant starch ( RS ) for colorectal cancer prevention comes from epidemiologic and laboratory studies ( aspirin and RS ) and randomized controlled clinical trials ( aspirin ) . Familial adenomatous polyposis ( FAP ) strikes young people and , untreated , confers virtually a 100 % risk of colorectal cancer and early death . We conducted an international , multicenter , randomized , placebo-controlled trial of aspirin ( 600 mg/d ) and/or RS ( 30 g/d ) for from 1 to 12 years to prevent disease progression in FAP patients from 10 to 21 years of age . In a 2 × 2 factorial design , patients were randomly assigned to the following four study arms : aspirin plus RS placebo ; RS plus aspirin placebo ; aspirin plus RS ; RS placebo plus aspirin placebo ; they were followed with standard annual clinical examinations including endoscopy . The primary endpoint was polyp number in the rectum and sigmoid colon ( at the end of intervention ) , and the major secondary endpoint was size of the largest polyp . A total of 206 randomized FAP patients commenced intervention , of whom 133 had at least one follow-up endoscopy and were therefore included in the primary analysis . Neither intervention significantly reduced polyp count in the rectum and sigmoid colon : aspirin relative risk = 0.77 ( 95 % CI , 0.54-1.10 ; versus nonaspirin arms ) ; RS relative risk = 1.05 ( 95 % CI , 0.73-1.49 ; versus non-RS arms ) . There was a trend toward a smaller size of largest polyp in patients treated with aspirin versus nonaspirin -- mean 3.8 mm versus 5.5 mm for patients treated 1 or more years ( adjusted P = 0.09 ) and mean 3.0 mm versus 6.0 mm for patients treated more than 1 year ( P = 0.02 ) ; there were similar weaker trends with RS versus non-RS . Exploratory translational endpoints included crypt length ( which was significantly shorter in normal-appearing mucosa in the RS group over time ) and laboratory measures of proliferation ( including Ki67 ) . This clinical trial is the largest ever conducted in the setting of FAP and found a trend of reduced polyp load ( number and size ) with 600 mg of aspirin daily . RS had no clinical effect on adenomas .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"resistant starch\", \"aspirin\", \"RS\", \"placebo-controlled\", \"resistant starch ( RS )\"], \"Outcome\": [], \"Participant\": [\"young\", \"familial adenomatous polyposis\", \"10 to 21 years\", \"colorectal cancer\"]}",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.10817 | [
{
"content": "Summarize the article in an uncomplicated manner.\n\nTitle: A Molecular Mechanism for Eflornithine Resistance in African Trypanosomes\nArticle:\nAbstract:\nHuman African trypanosomiasis, endemic to sub-Saharan Africa, is invariably fatal if untreated. Its causative agent is the protozoan parasite Trypanosoma brucei. Eflornithine is used as a first line treatment for human African trypanosomiasis, but there is a risk that resistance could thwart its use, even when used in combination therapy with nifurtimox. Eflornithine resistant trypanosomes were selected in vitro and subjected to biochemical and genetic analysis. The resistance phenotype was verified in vivo. Here we report the molecular basis of resistance. While the drug's target, ornithine decarboxylase, was unaltered in resistant cells and changes to levels of metabolites in the targeted polyamine pathway were not apparent, the accumulation of eflornithine was shown to be diminished in resistant lines. An amino acid transporter gene, TbAAT6( Tb927. 8. 5450), was found to be deleted in two lines independently selected for resistance. Ablating expression of this gene in wildtype cells using RNA interference led to acquisition of resistance while expression of an ectopic copy of the gene introduced into the resistant deletion lines restored sensitivity, confirming the role of TbAAT6 in eflornithine action. Eflornithine resistance is easy to select through loss of a putative amino acid transporter, TbAAT6. The loss of this transporter will be easily identified in the field using a simple PCR test, enabling more appropriate chemotherapy to be administered.\nIntroduction:\nHuman African trypanosomiasis( HAT) is a neglected tropical infectious disease transmitted by biting tsetse flies and is prevalent in sub-Saharan Africa[1],[2]. In humans, the disease is caused by two sub-species of the protozoan Trypanosoma brucei – T. b. gambiense and T. b. rhodesiense. T. b. gambiense is responsible for around 95% of all cases of the disease. An alarming resurgence of the disease in the latter part of the twentieth century stimulated a renewed interest in HAT control[2]. There are two stages of HAT. The first stage is characterised by parasite proliferation in the blood and lymph, while the second stage occurs when parasites enter the CSF( cerebrospinal fluid) and brain, resulting in symptoms that include confusion, depression, personality changes and the altered sleep-wake patterns that give the disease its common name of sleeping sickness. Death follows, inevitably, without treatment. Chemotherapy in stage two HAT requires melarsoprol, a melaminophenyl arsenical, or eflornithine, an amino acid analogue which inhibits the polyamine biosynthetic enzyme ornithine decarboxylase( ODC). Melarsoprol is exceedingly toxic, killing 5% of recipient HAT patients[2]. Furthermore, treatment failure with melarsoprol has led to its being superseded by eflornithine. Recently, nifurtimox use with eflornithine has been recommended[3],[4] and the combination added to the WHO list of essential medicines. Eflornithine targets ornithine decarboxylase in trypanosomes[5],[6], and this causes diminished polyamine biosynthesis[5] and reduced production of the trypanosome specific redox active metabolite trypanothione[7]. 
Accumulation of S-adenosyl methionine has been reported in eflornithine treated cells, which might perturb cellular methylation reactions[8] although recent data identified increased levels of decarboxylated S-adenosyl methionine, but not its precursor[9]. How eflornithine enters trypanosomes is a subject of debate. An early report that eflornithine uptake by trypanosomes was not saturable established the idea that eflornithine enters trypanosomes by passive diffusion[10]. However, studies on eflornithine resistant procyclic trypanosomes showed reduced accumulation of eflornithine[11] and uptake of eflornithine was by a saturable process typical of a transporter. Bellofatto et al[12] also found uptake of eflornithine to be temperature dependent and thus likely to be transporter mediated. Indeed as a zwitterionic, charged amino acid, eflornithine would not be expected to diffuse across membranes and transport mediated uptake would be a pre-requisite for uptake. In T. brucei loss of transport has been shown to be a key determinant in resistance to melaminophenylarsenicals[13] and diamidine drugs[14]–[16]. Given the increased use of eflornithine, alone or in combination with nifurtimox, a better understanding of the risk of resistance is critical. Such an understanding may help limit its spread and allow the development of diagnostic tools such as those described for melarsoprol resistance[15],[16]. We have investigated the mechanism of resistance to eflornithine and show that acquisition of selected resistance is accompanied by loss of a specific transporter. We further show, using genetic manipulation, that this transporter mediates uptake of eflornithine and that its loss confers resistance, whilst its expression in resistant lines restores sensitivity.\nDiscussion:\nHuman African trypanosomiasis, also known as sleeping sickness in its second stage when parasites have invaded the brain, is a neglected tropical disease[1]. Major epidemics at the end of the twentieth century were brought under control largely through increased efforts in distribution and treatment with the few drugs available to treat the disease[2]. An alarming increase in the incidence of treatment failure with melarsoprol has led to its being replaced with eflornithine as first line treatment for stage 2 HAT[2]. Combination therapy using eflornithine with the nitrofuran, nifurtimox, licensed for use in Chagas' disease has been added to the World Health Organisation's list of essential medicines as part of the nifurtimox-eflornithine combination therapy for HAT[3]. Although several initiatives are underway to develop new drugs for human African trypanosomiasis, none are currently in human trials and a minimum of five years will elapse before a new drug could complete trials and reach the market place. The loss of eflornithine, alone or in the nifurtimox combination, would represent a calamity in terms of sustaining control of HAT. The data presented here show that resistance to eflornithine is easily selected in the laboratory. Selection of resistance in two independently derived lines led to deletion of the TbAAT6 gene. Eflornithine uptake was lost indicating that this gene encodes a transporter capable of carrying the drug into trypanosomes. The loss of TbAAT6 either by gene deletion as observed in the selected drug resistance lines, or by RNAi is sufficient to render trypanosomes over 40 fold less sensitive to eflornithine than wildtype cells. 
Furthermore, ectopic expression of TbAAT6 in trypanosomes that have deleted the gene is sufficient to restore wildtype levels of eflornithine sensitivity confirming that loss of TbAAT6 alone is necessary and sufficient to generate resistance. We have, as yet, been unable to assign a physiological function to TbAAT6 in African trypanosomes, and this is a topic of ongoing research. However, it is one of a large family of related genes described in the kinetoplastida belong to the amino acid transporter 1 superfamily. Only a few other members of the family have been functionally characterised. These include an arginine transporter in Leishmania donovani[25], an arginine transporter in T. cruzi[26] and polyamine transporters in L. major[27] and T. cruzi[28]. The AAT6 gene is not syntenic with genes in Leishmania spp. or T. cruzi. Furthermore, the evolution of the AAT family[22] makes it impossible, currently, to define specific functionality to any of these transporters based on homology alone. Previous work with bloodstream and procyclic form trypanosomes also revealed a relative simplicity in selecting eflornithine resistance[11],[12],[29]. In procyclic forms reduced rates of eflornithine uptake were identified[11],[12] with possible changes to other transporters for ornithine and putrescine also suggested. In bloodstream forms reduction in eflornithine uptake was noted in two of six eflornithine refractory T. b. rhodesiense lines[29], but in the majority of cases no difference in eflornithine uptake was noted leading the authors to dismiss altered drug uptake as an underlying mechanism for the natural refractoriness of many strains of T. b. rhodesiense in the field[30]. Possible changes to S-adenosylmethionine metabolism instead were inferred as being significant in that study[29]. Our metabolomics experiments showed that none of the measured polyamine pathway metabolites differed significantly between wildtype and resistant lines in our study. Furthermore, as noted above, the reduced uptake of eflornithine by trypanosomes lacking TbAAT6, without further requirement of changes in metabolism, is both necessary and sufficient to yield a resistance phenotype without any requirement for changes to metabolic pathways which will be essentially unchanged as drug no longer accumulates to inhibitory levels in trypanosomes. Recently, two groups have employed high throughput RNAi screening to determine whether knockdown of any genes correlate with to resistance to various trypanocides including eflornithine. In both instances, TbAAT6 was implicated in loss of sensitivity to eflornithine( David Horn, London School of Hygiene and Tropical Medicine, personal communication) and Isabel Roditi[31]. Since eflornithine has only recently been implemented as first line treatment for stage two HAT, formal published reports of clinical resistance have not yet appeared, although unpublished data( Enock Matovu( Makerere University), personal communication) points to a substantial increase in eflornithine treatment failures in Northern Uganda. Furthermore, given that the actions of nifurtimox and eflornithine are not synergistic, trypanosomes already bearing resistance, through loss of transport, to eflornithine would effectively be subject to nifurtimox monotherapy even in combination chemotherapy. Nifurtimox resistance has been selected in vitro and has been shown to be cross resistant with another emerging trypanocide, fexinidazole, currently in clinical trials[32]. 
Given nifurtimox's lack of efficiency[33], eflornithine resistance alone is likely to lead to large numbers of treatment failures from the combination. If the loss of TbAAT6 is involved in resistance in the field, then it will be possible to implement a simple PCR-based test for resistance, allowing for more suitable treatments to be administered.",
"role": "user"
},
{
"content": "We have found that the loss of a single gene , TbAAT6 , is sufficient to render African trypanosomes resistant to the only safe drug , eflornithine , in use against them . The fact that parasites lacking TbAAT6 are viable in animals and retain the resistance phenotype indicates a simple means by which parasite populations could develop resistance . The loss of this gene can be detected by PCR apparatus , offering the potential for a simple , cheap test in the field , meaning that the drug will not be prescribed when it would be inefficient . It will be critical to monitor parasite populations in endemic regions for the status of this gene as eflornithine is used increasingly in trypanosomiasis therapy .",
"role": "assistant"
}
] |
science.chia_ner | science.chia_ner.1853 | [
{
"content": "Extract all unique entities of these types from the following clinical trial criteria: [\"Negation\", \"Observation\", \"ReferencePoint\", \"Procedure\", \"Drug\", \"Measurement\", \"Value\", \"Person\", \"Multiplier\", \"Device\", \"Qualifier\", \"Mood\", \"Temporal\", \"Condition\", \"Visit\", \"Scope\"].\n\nOutput a JSON object with entity types as keys and lists of extracted entities as values. If no entities are found for a type, use [].\n\nText:\n\nPatients with SOF/VEL treatment for the treatment of chronic HCV genotype 1 through 6.\nPatient is at least 18 at the day of screening.\nPatient is able and willing to sign the Informed Consent Form.\nPatient is able and willing to follow protocol requirements.",
"role": "user"
},
{
"content": "{\"Condition\": [], \"Device\": [], \"Drug\": [], \"Measurement\": [\"HCV genotype\"], \"Mood\": [], \"Multiplier\": [], \"Negation\": [], \"Observation\": [], \"Person\": [], \"Procedure\": [\"SOF/VEL treatment\"], \"Qualifier\": [\"chronic\"], \"ReferencePoint\": [\"screening\"], \"Scope\": [], \"Temporal\": [\"at least 18 at the day of screening\"], \"Value\": [\"1 through 6\"], \"Visit\": []}",
"role": "assistant"
}
] |
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.11315 | [
{
"content": "Classify the topics of this COVID-19 literature paragraph by selecting from these categories:\n\n- Epidemic Forecasting: Predictions about COVID-19 spread\n- Treatment: Therapeutic methods or medication\n- Prevention: Measures to prevent spread\n- Mechanism: Biological mechanisms of the virus\n- Case Report: Report of a COVID-19 case\n- Transmission: How COVID-19 is transmitted\n- Diagnosis: Diagnosing COVID-19 in individuals\n\nYour output should be a valid JSON array with topics in double quotes and separated by commas. No additional text should be included.\n\nArticle:\n\nImmune checkpoint inhibitors (ICI) block negative regulatory molecules, such as CTLA-4, PD-1 and PD-L1, in order to mount an antitumor response. T cells are important for antiviral defense, but it is not known whether patients with cancer treated with ICI are more or less vulnerable to viral infections such as COVID-19. Furthermore, immunosuppressive treatment of immune-related adverse events (irAE) may also impact infection risk. Rheumatic irAEs are often persistent, and can require long-term treatment with immunosuppressive agents. The aim of this study was to determine the incidence of COVID-19 infection and assess changes in ICI and immunosuppressive medication use among patients enrolled in a prospective rheumatic irAE registry during the height of the COVID-19 pandemic. On April 16 2020, following the 'surge' of COVID-19 infections in the New York Tri-State area, we sent a 23-question survey to 88 living patients enrolled in a single institutional registry of patients with rheumatic irAE. Questions addressed current cancer and rheumatic irAE status, ICI and immunosuppressant medication use, history of COVID-19 symptoms and/or diagnosed infection. A follow-up survey was sent out 6 weeks later. Sixty-five (74%) patients completed the survey. Mean age was 63 years, 59% were female, 70% had received anti-PD-(L)1 monotherapy and 80% had had an irAE affecting their joints. Six patients (10%) had definite or probable COVID-19, but all recovered uneventfully, including two still on ICI and on low-to-moderate dose prednisone. Of the 25 on ICI within the last 6 months, seven (28%) had their ICI held due to the pandemic. In patients on immunosuppression for irAE, none had changes made to those medications as a result of the pandemic. The incidence of COVID-19 was no higher in patients still on ICI. Ten percent of rheumatic irAE patients developed COVID-19 during the NY Tri-state 'surge' of March-April 2020. Oncologists held ICI in a quarter of the patients still on them, particularly women, those on anti-PD-(L)1 monotherapy, and those who had had a good cancer response. The incidence of COVID-19 was no higher on patients still on ICI. None of the patients on disease-modifying antirheumatic drugs or biological immunosuppressive medications developed COVID-19.",
"role": "user"
},
{
"content": "[\"Treatment\", \"Prevention\"]",
"role": "assistant"
}
] |
science.mslr2022_cochrane_multidoc_summarization | science.mslr2022_cochrane_multidoc_summarization.1265 | [
{
"content": "Based on the RCT reports below, write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article.\n\n\n\nTitle 1:\nPreterm premature rupture of membranes: a randomized study of home versus hospital management.\nAbstract 1:\nTo compare length of latency period, gestational age at delivery, and safety in a carefully selected group of patients with preterm premature rupture of the membranes (PROM) randomized to home versus hospital management.\n After meeting strict inclusion criteria, 67 patients with preterm PROM were randomized by sealed envelope to home versus hospital expectant management. The groups were managed similarly with pelvic and bed rest. Management included recording of temperature and pulse every 6 hours, daily charting of fetal movements, twice-weekly nonstress test and complete blood count, and weekly ultrasound and visual examination of the cervix.\n There was no significant difference in clinical characteristics or perinatal outcome between the groups. There was, however, a significant decrease in both the days of maternal hospitalization and maternal hospital expenses in the home group.\n Only a very small proportion of cases of preterm PROM (18%) could meet the strict safety criteria for inclusion used in the study. In the home-management group, length of the latency period and gestational age at delivery were not significantly different than in hospitalized patients.",
"role": "user"
},
{
"content": "The review included two relatively small studies that did not have sufficient statistical power to detect meaningful differences between groups. Future large and adequately powered randomised controlled trials are required to measure differences between groups for relevant pre-specified outcomes. Special attention should be given to the assessment of maternal satisfaction with care and cost analysis as they will have social and economic implications in both developed and developing countries.",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.11910 | [
{
"content": "Summarize the article in an uncomplicated manner.\n\nTitle: High-Resolution Phenotypic Profiling Defines Genes Essential for Mycobacterial Growth and Cholesterol Catabolism\nArticle:\nAbstract:\nThe pathways that comprise cellular metabolism are highly interconnected, and alterations in individual enzymes can have far-reaching effects. As a result, global profiling methods that measure gene expression are of limited value in predicting how the loss of an individual function will affect the cell. In this work, we employed a new method of global phenotypic profiling to directly define the genes required for the growth of Mycobacterium tuberculosis. A combination of high-density mutagenesis and deep-sequencing was used to characterize the composition of complex mutant libraries exposed to different conditions. This allowed the unambiguous identification of the genes that are essential for Mtb to grow in vitro, and proved to be a significant improvement over previous approaches. To further explore functions that are required for persistence in the host, we defined the pathways necessary for the utilization of cholesterol, a critical carbon source during infection. Few of the genes we identified had previously been implicated in this adaptation by transcriptional profiling, and only a fraction were encoded in the chromosomal region known to encode sterol catabolic functions. These genes comprise an unexpectedly large percentage of those previously shown to be required for bacterial growth in mouse tissue. Thus, this single nutritional change accounts for a significant fraction of the adaption to the host. This work provides the most comprehensive genetic characterization of a sterol catabolic pathway to date, suggests putative roles for uncharacterized virulence genes, and precisely maps genes encoding potential drug targets.\nIntroduction:\nMycobacterium tuberculosis( Mtb) is a deadly human pathogen, which is estimated to infect one third of the world's population and killed 1. 7 million people in 2009 alone[1]. This bacterium must adapt to a number of different microenvironments over the course of a chronic infection[2], and understanding the physiological state of the pathogen in these sites is central to the design of more effective antitubercular drugs. Defining these metabolic adaptations requires a holistic understanding of the cellular pathways that are critical for survival in each specific environment. While whole-genome methods to profile mRNA, protein, or transcription factor binding are valuable for understanding cellular responses, gene regulation has proven to be a poor predictor of essentiality[3],[4]. Therefore, genome-wide approaches to accurately and directly assess the relative contribution of each gene to the growth and survival of M. tuberculosis are needed. The genome-scale methods initially used to predict gene essentiality in bacteria employed either microarray hybridization[5],[6] or conventional sequencing[7],[8],[9] to map the sites of transposon insertions in large libraries of random mutants and identify regions of the chromosome that were unable to sustain mutation. Both of these methods suffer from significant limitations. Microarrays are not able to precisely map insertions, resulting in ambiguity regarding the precise location of an essential region. 
While conventional sequencing can surmount this problem, it is both cumbersome and expensive to apply this method to the very complex libraries necessary for the unambiguous identification of essential genes. Finally, neither of these approaches can be used to quantitatively compare the compositions of mutant pools over more than a very small dynamic range. In order to precisely define the genes that are important for the growth of Mtb, we used highly parallel Illumina sequencing to characterize transposon libraries. This approach achieves single base pair resolution of insertion sites in very complex libraries, resulting in the precise identification of essential open reading frames. Furthermore, the depth of sequencing that is possible allowed the relative abundance of individual mutants to be accurately assessed in libraries grown under different conditions. Using the latter approach, we comprehensively defined the pathways required for the bacterium to grow on cholesterol, a critical nutrient during infection. These previously ill-defined pathways account for an unexpectedly large fraction of the genes required for survival in animal models of TB, providing new functional insight into the adaptation to the host environment.\nDiscussion:\nIn this work, we have significantly refined our understanding of the cellular functions necessary for the viability of Mtb. While others have used similar deep sequencing methods to map insertion sites in other organisms[28],[29],[30],[31],[32] our new analytical tools allowed the statistically rigorous prediction of the genes essential for the viability of Mtb. The majority of these essential genes are consistent with those found by previous microarray-based methods. However, significant differences in these predictions were also noted, and the majority of these are attributable to technical and analytical refinements. As a result, this work provides a much more precise and statistically rigorous global assessment of essentiality than previously possible. A few genes that we found to contain very significant gaps in transposon coverage have been successfully deleted, and produce strains that grow normally under similar conditions to those used in our study[33],[34],[35],[36],[37],[38]. The reasons for these apparently contradictory results likely vary for each gene. It is possible that some of these genes are truly dispensable for in vitro growth, and the identified gaps in transposon coverage are either due to unappreciated transposon specificity, or selection against specific protein truncations. Conversely, it is possible that deletion mutants lacking these genes appear to grow normally because extragenic suppressor mutations have accumulated. This phenomenon is well-documented for mutants generated by homologous recombination[39], but is very unlikely to affect transposon-mapping studies. These differences highlight the advantages and limitations of different genetic approaches for defining essential genes. In addition to this qualitative analysis, we also quantitatively compared mutant pools grown under different conditions to understand how Mtb metabolizes cholesterol. A number of recent studies have demonstrated that Mtb mutants lacking the capacity to acquire or degrade cholesterol are defective for growth in animal models of TB[13],[14],[17],[40],[41]. 
In order to define a discrete set of ORFs required for growth on this carbon source, we applied a continuous function cutoff to our comparative data, which placed equal importance on fold change in representation and statistical significance. More traditional one-dimensional cutoffs would have excluded genes that were only moderately underrepresented but exceptionally significant, which our analysis includes. Nonetheless, in order to avoid false positive predictions, we set a relatively stringent cutoff, and it is likely that even more than the 96 identified genes contribute to growth in cholesterol. In addition to the dedicated cholesterol catabolic functions, we found that the use of this compound as a source of both cellular energy and biosynthetic carbon requires a variety of central metabolic pathways. Some of these requirements are likely due to the simple shift from a glycolytic substrate to one that relies heavily on β-oxidation. However, the precise pathways we identified appear specific to the mixture of metabolites derived from cholesterol. For example, gluconeogenesis under these conditions is likely to be initiated by the conversion of pyruvate to PEP via the action of pyruvate phosphate dikinase( PpdK), whereas this pathway relies exclusively on phosphenolpyruvate carboxykinase( PckA) during growth on acetate[42]. In other bacteria, PpdK-mediated gluconeogensis is favored during growth in the presence of pyruvate[43],[44]. As pyruvate is produced both as a direct product of sterol catabolism and through the activity of the methylcitrate cycle, we speculate that the relative abundance of cholesterol-derived pyruvate favors the PpdK-mediated pathway. These observations indicate that different gluconeogenic pathways may be preferentially used by Mtb depending on the relative abundance of precursor metabolites. Cholesterol acquisition is predominantly required for bacterial persistence during the chronic stage of Mtb infection in mice and for growth in the cytokine-stimulated macrophages that characterize this stage of infection[14],[17]. While cholesterol is metabolized by the bacterium throughout infection[13],[41], we have shown that this ability is not required for the initial growth of the pathogen in acutely-infected animals or in the resting macrophages that model this early pre-immune period. A recent study demonstrating that cholesterol catabolism is not necessary for bacterial growth in acutely-infected guinea pigs or a macrophage cell line confirms these observations[45]. Thus, it appears that a mixture of carbon sources, including cholesterol, fuel the initial growth of the bacterium, and this sterol becomes a uniquely essential nutrient in chronically-infected animals. Based on these observations, we predict that the requirements for both the dedicated sterol catabolic enzymes, as well as the central metabolic pathways we have defined, are likely to change as infection proceeds. Identifying the full complement of cellular functions necessary for cholesterol utilization has also revealed the scale of this nutritional adaptation during infection. When we compared these data to previous genome-wide screens for mutants attenuated in mouse models of infection[46] we found that a full ten percent of genes specifically required for bacterial growth in vivo are also required for the utilization of cholesterol in vitro( Table S3). These genes encode both dedicated sterol catabolic functions, as well as enzymes involved in central metabolism. 
Thus, while it is clear that Mtb must adapt to a variety of host-specific conditions to sustain a productive infection, our data suggest that a single nutritional change is responsible for a significant portion of this adaptation. These cholesterol catabolic functions, in conjunction with the hundreds of other genes that we found to be essential for bacterial viability, both expand and refine the repertoire of targets for new TB therapies. Whole genome profiling techniques have proven to be useful tools for understanding complex pathways, such as those required for cholesterol utilization. Most of these approaches rely on determining the relative transcript or protein abundance. However, these strategies make a major assumption – that critical genes are tightly regulated in response to metabolic changes. While this may be true in many cases, it is often not. In order to avoid this assumption, we have directly identified the genes required for growth. While every approach has its own inherent strengths and weaknesses, the phenotypic profiling strategy described here is a powerful complementary tool for understanding bacterial physiology.",
"role": "user"
},
{
"content": "The adaptation of a bacterial pathogen to the environments encountered during infection requires the wholesale remodeling of cellular physiology . One of the major changes encountered upon infection is nutritional , as the bacterium is forced to utilize compounds scavenged from the host . Mycobacterium tuberculosis has the unusual ability to use host cholesterol as a source of carbon and energy , and this capacity is required for persistence in animal models . However , our ignorance of the biochemical pathways involved in sterol degradation has limited our ability to assess the importance of this carbon shift in shaping the metabolic state of the bacterium . In this work , we developed a new method to quantitatively profile the fitness of thousands of mutants simultaneously . This allowed the identification of each bacterial gene that is required for the bacterium to grow using cholesterol as a carbon source . Reconstruction of the pathways comprised by these essential gene products revealed that adaptation to cholesterol required widespread metabolic changes . These genes account for a significant fraction of the bacterial functions important for growth in animal tissues , suggesting that the physiology of this intracellular pathogen is shaped by carbon sources available in this environment .",
"role": "assistant"
}
] |
science.mslr2022_cochrane_multidoc_summarization | science.mslr2022_cochrane_multidoc_summarization.550 | [
{
"content": "Based on the RCT reports below, write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article.\n\n\n\nTitle 1:\nComparison of pulsed actinomycin D versus 5-day methotrexate for the treatment of low-risk gestational trophoblastic disease.\nAbstract 1:\nTo determine the effectiveness of 2 standard chemotherapy regimens for low-risk gestational trophoblastic disease according to the International Federation of Gynecology and Obstetrics (FIGO) staging system.\n From 2008 until 2010, 75 women with low-risk gestational trophoblastic disease received either pulsed actinomycin D (n=50) or 5-day methotrexate (n=25). The primary remission rate, the duration of treatment, the number of treatment courses, and the adverse effects were compared.\n The complete remission rates were 90% for the actinomycin D group and 68% for the methotrexate group (P=0.018). The mean number of chemotherapy courses administered to achieve complete remission (including courses of second-line therapy) was 3.1 in the methotrexate group and 5.3 in the actinomycin D group (P=0.01). No major adverse effects were experienced in either treatment group and there were no significant differences in terms of adverse effects. Second-line chemotherapy was indicated for 11 patients.\n Based on the present study, pulsed actinomycin D seems to be an appropriate first-line treatment for patients with low-risk gestational trophoblastic disease.\n Copyright © 2011 International Federation of Gynecology and Obstetrics. Published by Elsevier Ireland Ltd. All rights reserved.\n\n\nTitle 2:\nActinomycin d versus methotrexate-folinic acid as the treatment of stage I, low-risk gestational trophoblastic neoplasia: a randomized controlled trial.\nAbstract 2:\nThis study is to compare the remission and complication rates of actinomycin D (Act-D) and methotrexate-folinic acid (MTX-FA) as single-agent treatments of stage I, low-risk gestational trophoblastic neoplasia (GTN). From 1994 to 2005, all women with International Federation of Gynecology and Obstetrics stage I, low-risk GTN were randomly assigned to received either intravenous Act-D 10 microg/kg per day for 5 days every 2 weeks or intramuscular methotrexate 1 mg/kg per day on days 1, 3, 5, and 7 with intramuscular folinic acid 0.1 mg/kg per day on days 2, 4, 6, and 8 every 2 weeks. Forty-nine women met the eligibility criteria. Age, human chorionic gonadotropin level, and International Federation of Gynecology and Obstetrics score were similar in both treatment groups. Of the 22 women who received Act-D, 2 were lost to follow-up. Among the 27 women who received MTX-FA, 2 were lost to follow-up, and 6 had to switch to Act-D because of the rising levels of liver enzymes. All 20 women (100%) in the Act-D arm achieved remission compared with 14 (73.6%) in 19 women in the MTX-FA arm (P = 0.02). Mucositis and alopecia were reported more frequently in the Act-D group, whereas elevations of liver enzyme levels were more frequent in the MTX-FA group. Actinomycin D seems to be more effective than MTX-FA in the treatment of stage I, low-risk GTN. 
Larger multicenter randomized controlled trials should be conducted to establish the most appropriate regimen for these patients.\n\n\nTitle 3:\nComparison of pulse methotrexate and pulse dactinomycin in the treatment of low-risk gestational trophoblastic neoplasia.\nAbstract 3:\nMethotrexate and dactinomycin are efficient drugs in the treatment of patients with low-risk gestational trophoblastic neoplasia (LRGTN). To compare the effectiveness of these two drugs in LRGTN, 46 patients were randomised to receive weekly intramuscular methotrexate at 30 mg/m(2) (n = 28) or intravenous dactinomycin at 1.25 mg/m(2) every 2 weeks (n = 18). Fourteen patients (50%) in the methotrexate group and 16 patients (89%) in the dactinomycin group achieved complete response. Greater patient convenience and a lower number of required visits make dactinomycin superior to other alternatives.\n\n\nTitle 4:\nPhase III trial of weekly methotrexate or pulsed dactinomycin for low-risk gestational trophoblastic neoplasia: a gynecologic oncology group study.\nAbstract 4:\nThere is no consensus on the best regimen for the primary treatment of low-risk gestational trophoblastic neoplasia (GTN).\n Two commonly used single-drug regimens were compared with respect to the proportion of patients meeting the criteria for a complete response (CR) in a randomized phase III trial conducted by the Gynecologic Oncology Group. Eligibility was purposefully broad to maximize the generalizability of the results and included patients with a WHO risk score of 0 to 6 and patients with metastatic disease (limited to lung lesions < 2 cm, adnexa, or vagina) or choriocarcinoma.\n Two hundred forty women were enrolled, and 216 were deemed eligible. Biweekly intravenous dactinomycin 1.25 mg/m² was statistically superior to weekly intramuscular (IM) methotrexate 30 mg/m² (CR: 70% v 53%; P = .01). Similarly, in patients with low-risk GTN as defined before the 2002 WHO risk score revisions (risk score of 0 to 4 and excluding choriocarcinoma), response was 58% and 73% in the methotrexate and dactinomycin arms, respectively (P = .03). Both regimens were less effective if the WHO risk score was 5 or 6 or if the diagnosis was choriocarcinoma (CR: 9% and 42%, respectively). There were two potential recurrences; one at 4 months (dactinomycin) and one at 22 months (methotrexate). Not all patients completed follow-up. Both regimens were well tolerated.\n The biweekly dactinomycin regimen has a higher CR rate than the weekly IM methotrexate regimen in low-risk GTN, a generally curable disease.\n\n\nTitle 5:\nPulse methotrexate versus pulse actinomycin D in the treatment of low-risk gestational trophoblastic neoplasia.\nAbstract 5:\nTo compare the efficacy of methotrexate and actinomycin D as single agents in first-line chemotherapy for women with low-risk gestational trophoblastic neoplasia (LR-GTN).\n A total of 131 women with LR-GTN were randomized to receive a weekly pulsed dose of 30 mg/m(2) of methotrexate intramuscularly (n=81) or a pulsed intravenous bolus of 1.25 mg/m(2) of actinomycin D every 2 weeks (n=50). An additional cycle was administered as consolidation treatment following normalization of the serum level of beta-human chorionic gonadotropin (<5 IU/L).\n Complete remission was achieved in 48.14% of patients in the methotrexate group and 90.00% in the actinomycin D group (P<0.001). The mean number of treatment cycles needed to achieve response was lower in the actinomycin D group (4.8 vs 6.8). 
The risk of treatment failure was 26.4 greater with methotrexate than with actinomycin D (95% confidence interval, 5.7-22.6; P<0.001).\n Actinomycin D may be a better option than methotrexate as a first-line chemotherapy agent for patients with LR-GTN.",
"role": "user"
},
{
"content": "Dactinomycin is more likely to achieve a primary cure in women with low-risk GTN, and less likely to result in treatment failure, compared with methotrexate. There is limited evidence relating to side-effects, however, the pulsed dactinomycin regimen does not appear to be associated with significantly more side-effects than the low-dose methotrexate regimen and therefore should compare favourably to the five- and eight-day methotrexate regimens in this regard.\nWe consider pulsed dactinomycin to have a better cure rate than, and a side-effect profile at least equivalent to, methotrexate when used for first-line treatment of low-risk GTN. Data from a large ongoing trial of pulsed dactinomycin compared with five- and eight-day methotrexate regimens is likely to have an important impact on our confidence in these findings.",
"role": "assistant"
}
] |
science.scicite_classification | science.scicite_classification.7332 | [
{
"content": "Classify the intent of this citation sentence by choosing from: [\"Method\", \"Background\", \"Result\"].\n\nMethod: Cites for methodology\nBackground: Cites for background information\nResult: Cites for findings\n\nCitation: Although diversity exists between species and the sexes, EDHF makes a large contribution to endothelium-dependent vasodilatation in various vascular beds in men and nonpregnant women (3, 7, 21, 24, 25, 30, 31).",
"role": "user"
},
{
"content": "Background",
"role": "assistant"
}
] |
science.chemdner_ner | science.chemdner_ner.3460 | [
{
"content": "Find all [\"Chemical\"] entities in this text. Output a JSON object.\n\nText:\nUltrasound-assisted extraction of Na and K from swine feed and its application in a digestibility assay: A green analytical procedure. The study is aimed to evaluate the efficiency of ultrasound-assisted extraction (UAE) as a simple strategy focused on sample preparation for metal determination in biological samples. The extraction of sodium and potassium extraction was carried out from swine feed followed by determination of the concentration of these metals by flame atomic emission spectrometry (FAES). The experiment was performed to cover the study of the variables influencing the extraction process and its optimal conditions (sample mass, particle size, acid concentration, sonication time and ultrasound power); the determination of these analytical characteristics and method validation using certified reference material; and the analysis of pre-starter diets. The optimal conditions established conditions were as follows: mass: 100mg, particle size:<60μm, acid concentration: 0.10molL(-1) HCl, sonication time: 50s and ultrasound power: 102W. The proposed method (UAE) was applied in digestibility assays of those nutrients present in different piglet pre-starter feeds and their results proved to be compatible with those obtained from mineralized samples (P<0.05). The ultrasound extraction method was demonstrated to be an excellent alternative for handless sampling and operational costs and the method also has the advantage of does not generating toxic residues that may negatively affect human health and contaminate the environment.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"HCl\", \"K\", \"Na\", \"potassium\", \"sodium\"]}",
"role": "assistant"
}
] |
science.chemdner_ner | science.chemdner_ner.3193 | [
{
"content": "Find all [\"Chemical\"] entities in this text. Output a JSON object.\n\nText:\nTwo new glycosides from the florets of Carthamus tinctorius. Two new glycoside compounds, named saffloquinoside C (1) and (-)-4-hydroxybenzoic acid-4-O-[6'-O-(2″-methylbutyryl)-β-D-glucopyranoside] (2), were isolated from the florets of Carthamus tinctorius. Their structures were elucidated by detailed spectroscopic means including UV, IR, HR-ESI-MS, 1D and 2D NMR, and CD data. Compound 1 was a rare quinochalcone glycoside with six five-membered dioxaspirocycle.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"quinochalcone glycoside\", \"dioxaspirocycle\", \"saffloquinoside C\", \"(-)-4-hydroxybenzoic acid-4-O-[6\\u0027-O-(2\\u2033-methylbutyryl)-\\u03b2-D-glucopyranoside]\"]}",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.4021 | [
{
"content": "Find entities of these types in the text: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nOutput a JSON object with entity types as keys and lists of found entities as values.\n\nText:\nModifying the Type A coronary-prone behavior pattern .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"Type A coronary-prone behavior pattern\"], \"Outcome\": [\"Modifying the Type A coronary-prone behavior pattern\"], \"Participant\": [\"Type A coronary-prone behavior\"]}",
"role": "assistant"
}
] |
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.37982 | [
{
"content": "Write a literature review chapter based on these paper abstracts.\nLiterature Review Title: Security and Privacy in Location-Based Services for Vehicular and Mobile Communications: An Overview, Challenges, and Countermeasures \nChapter Title: 1) Mix-Zones Approaches in Vehicular Networks: \n1. Abstract of Cited Paper (BIB001): Vehicular Networks (VNs) seek to provide, among other applications, safer driving conditions. To do so, vehicles need to periodically broadcast safety messages providing preciseposition information ... \n2. Abstract of Cited Paper (BIB002): Vehicular ad hoc networks (VANETs) and vehicular communications are considered a milestone in improving the safety, efficiency and convenience in transportation. Vehicular ad hoc networks and many vehicular applications rely on periodic broadcast of the vehicles' location. For example, the location of vehicles can be used for detecting and avoiding collisions or geographical routing of data to disseminate warning messages. At the same time, this information can be used to track the users' whereabouts. Protecting the location privacy of the users of VANETs is important, because lack of privacy may hinder the broad acceptance of this technology. Frequently changing pseudonyms are commonly accepted as a solution to protect the privacy in VANETs. In this paper, we discuss their effectiveness and different methods to change pseudonyms. We introduce the context mix model that can be used to describe pseudonym change algorithms. Further, we asses in which situations, i.e. mix contexts, a pseudonym change is most effective and improves the privacy in vehicular environments. \n3. Abstract of Cited Paper (BIB003): Receiver-location privacy is an important security requirement in privacy-preserving Vehicular Ad hoc Networks (VANETs), yet the unavailable receiver's location information makes many existing packet forwarding protocols inefficient in VANETs. To tackle this challenging issue, in this paper, we propose an efficient social-tier-assisted packet forwarding protocol, called STAP, for achieving receiver-location privacy preservation in VANETs. Specifically, by observing the phenomena that vehicles often visit some social spots, such as well-traversed shopping malls and busy intersections in a city environment, we deploy storage-rich Roadside Units (RSUs) at social spots and form a virtual social tier with them. Then, without knowing the receiver's exact location information, a packet can be first forwarded and disseminated in the social tier. Later, once the receiver visits one of social spots, it can successfully receive the packet. Detailed security analysis shows that the proposed STAP protocol can protect the receiver's location privacy against an active global adversary, and achieve vehicle's conditional privacy preservation as well. In addition, performance evaluation via extensive simulations demonstrates its efficiency in terms of high delivery ratio and low average delay. \n4. Abstract of Cited Paper (BIB004): Location privacy is one of the Quality of Privacies (QoP) in vehicular ad hoc network (VANET) and imperative for the VANET's full flourish. Frequent pseudonym changing can provide a promising solution to achieve location privacy, however if the pseudonyms are changed in an improper occasion, the solution is ineffective. 
In this paper, to improve the effectiveness of this kind of solution, we first introduce the social spot where many vehicles could aggregate, e.g., a road intersection when the traffic light is red or a free parking lot near a shopping mall. We then propose a social spot based pseudonyms changing technique to achieve the location privacy. By taking the anonymity set size as the privacy metric, we develop two anonymity analytic models to quantitatively investigate the location privacy achieved in the technique. The analytical results show that better location privacy can be achieved when a vehicle changes its pseudonyms at some highly social spots, and as a result, the proposed models can be used to assist vehicles to change their pseudonyms for better location privacy at the right moment and place. \n5. Abstract of Cited Paper (BIB005): As a prime target of the quality of privacy in vehicular ad hoc networks (VANETs), location privacy is imperative for VANETs to fully flourish. Although frequent pseudonym changing provides a promising solution for location privacy in VANETs, if the pseudonyms are changed in an improper time or location, such a solution may become invalid. To cope with the issue, in this paper, we present an effective pseudonym changing at social spots (PCS) strategy to achieve the provable location privacy. In particular, we first introduce the social spots where several vehicles may gather, e.g., a road intersection when the traffic light turns red or a free parking lot near a shopping mall. By taking the anonymity set size as the location privacy metric, we then develop two anonymity set analytic models to quantitatively investigate the location privacy that is achieved by the PCS strategy. In addition, we use game-theoretic techniques to prove the feasibility of the PCS strategy in practice. Extensive performance evaluations are conducted to demonstrate that better location privacy can be achieved when a vehicle changes its pseudonyms at some highly social spots and that the proposed PCS strategy can assist vehicles to intelligently change their pseudonyms at the right moment and place. \n6. Abstract of Cited Paper (BIB006): Vehicular social network (VSN) is envisioned to serve as an essential data sensing, exchanging and processing platform for the future Intelligent Transportation Systems. In this paper, we aim to address the location privacy issue in VSNs. In traditional pseudonym-based solutions, the privacy-preserving strength is mainly dependent on the number of vehicles meeting at the same occasion. We notice that an individual vehicle actually has many chances to meet several other vehicles. In most meeting occasions, there are only few vehicles appearing concurrently. Motivated by these observations, we propose a new privacy-preserving scheme, called MixGroup, which is capable of efficiently exploiting the sparse meeting opportunities for pseudonym changing. By integrating the group signature mechanism, MixGroup constructs extended pseudonym-changing regions, in which vehicles are allowed to successively exchange their pseudonyms. As a consequence, for the tracking adversary, the uncertainty of pseudonym mixture is accumulatively enlarged, and therefore location privacy preservation is considerably improved. We carry out simulations to verify the performance of MixGroup. Results indicate that MixGroup significantly outperforms the existing schemes. 
In addition, MixGroup is able to achieve favorable performance even in low traffic conditions.\n\nLiterature Review:",
"role": "user"
},
{
"content": "Mix-zones enhance location privacy using anonymous communication zones. These zones are mostly road intersections where the speed and direction of a vehicle is likely to change. When a vehicle enters a mix-zone, it stops sending messages and updates its pseudonym. The effect of changing pseudonyms frequently in location privacy is examined by Buttyán et al. BIB001 . In this approach, traffic is generated on nontrivial road maps with realistic parameters. The authors assumed that the antennas are positioned by attackers in the network to overhear communication. In their analysis of the effectiveness of mix-zones, they conclude that the optimal frequency of pseudonym change depends on the attributes of the mix-zones such as the location, size and the number of entry points. In BIB002 , mix-zones are created using cryptographic techniques CMIX at road intersections within the broadcast distance of the RSUs. The CMIX protocol distributes symmetric keys using traditional asymmetric cryptography for the establishment of the cryptographic mix-zones. In the CMIX-zone, all broadcast messages by vehicles are encrypted with the symmetric key distributed by the RSU. The authors claim that CMIX makes it difficult for an attacker to link the identity of a vehicle since the same key is used by all vehicles. A context mix model is introduced in and BIB003 to protect location privacy in vehicular networks. They argue that even pseudonyms are changed at random intervals, several approaches can be used by the attacker to identify the nodes. They describe the mix-zones in as mix-context situations and include the use of context information such as speed, direction and number of vehicles to initiate a pseudonym change. To protect vehicles from location tracking, they define a threshold for the minimum entropy to identify the best opportunity for a pseudonym change. The user or the application where the pseudonym change is triggered is used to define the minimum entropy. To achieve receiver-location privacy in VANETs, Lin et al. BIB004 proposed a social-tier-assisted message forwarding protocol (STAP). STAP protects receiver location privacy from active global attackers by disseminating messages to social tiers. Similar to the works in BIB005 and BIB006 , they use social spots to observe busy intersections in a city environment. The social tier is a virtual tier formed by social spots. RSUs are deployed at social spots and a virtual tier is formed with them without knowledge of the location of the receiver. When the receiver enters one of the social spots, it can successfully receive the message that was disseminated to the social tier. The receiver holds the family of the pseudo-IDs which are unlinkable and the pseudo-IDs are repeatedly used in the mutual authentication between the RSUs and the receivers at social spots. Lu et al. BIB005 , BIB006 proposed changing of pseudonym at social spots (PCS) as a strategy to achieve high location privacy. Vehicles temporarily gather at social spots such as free parking lots that may be close to a shopping mall or at road intersections when the traffic light turns red. They state that a social spot becomes a mix-zone if pseudonyms are changed by all the vehicles before leaving that spot. An indistinguishable information is broadcast as safety message which shows the location of the vehicle as a social spot at a velocity of 0 with an unlinkable pseudonym. 
They show that when pseudonyms are changed by all the vehicles simultaneously, the high density makes it difficult for attackers to track the vehicles. However, pseudonym changing at social spots does not provide sufficient privacy protection when the vehicle density is low. In , a user-centric game-theoretic mix-zone approach is introduced to measure the evolution of location privacy over time and evaluates its behavior in mobile, vehicular and delay tolerant networks. They argue that even though the cost of location privacy is reduced by selfish nodes, it can also threaten the efficiency achieved by multiple pseudonyms in noncooperative scenarios. They carry out an analysis on static games using a complete information to get pure and mixed Nash equilibria. Using a Bayesian approach, they study an incomplete information scenario where nodes have no complete knowledge of the pay-offs of neighboring nodes. They look for a symmetric equilibrium where all the nodes cooperate with the same probability and analyze a dynamic version of the game showing how it copes with uncertainty. The pseudonym changes are coordinated by Pseudo-game protocols which are designed to implement pseudonym change strategies. Yu et al. BIB007 proposed a MixGroup scheme for location privacy. Based on sporadic observations, they exploit meeting opportunities to change pseudonyms and improve location privacy. Observations are made from real vehicle traces based on social spots and sporadic observations. MixGroup is used to construct an extended pseudonym group region where vehicles can change their accumulated pseudonyms. Each group has a group leader and a group identifier. When a vehicle enters a group, the group leader will assign a group ID with a certificate and a private key to the new vehicle after authentication. The group ID, certificate, and the private key are used for changing pseudonyms and broadcasting safety messages. An entropy-optimal negotiation procedure is used to facilitate the process of exchanging pseudonyms among vehicles. They quantitatively measure the risk and benefits associated with the pseudonym exchange using a predefined pseudonym entropy.",
"role": "assistant"
}
] |
science.craftchem_ner | science.craftchem_ner.6525 | [
{
"content": "Extract all chemical entities (ChEBI) from this biomedical text. Output as JSON array.\n\nText: Thepathwaybywhichleptinregulatesboneresorptionwasidentifiedtoinvolvethesympatheticnervoussystemrelayingtotheosteoblastsviathebeta-adrenergicpathwayleadingtothereleaseofgrowthfactorsincludingRANKLthatcausestheosteoclaststothrive[8,18-20].",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
] |
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.19937 | [
{
"content": "Classify the topics of this COVID-19 literature paragraph by selecting from these categories:\n\n- Epidemic Forecasting: Predictions about COVID-19 spread\n- Treatment: Therapeutic methods or medication\n- Prevention: Measures to prevent spread\n- Mechanism: Biological mechanisms of the virus\n- Case Report: Report of a COVID-19 case\n- Transmission: How COVID-19 is transmitted\n- Diagnosis: Diagnosing COVID-19 in individuals\n\nYour output should be a valid JSON array with topics in double quotes and separated by commas. No additional text should be included.\n\nArticle:\n\nINTRODUCTION: The pandemic caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has exposed vulnerable populations to an unprecedented global health crisis. The knowledge gained from previous human coronavirus outbreaks suggests that pregnant women and their fetuses are particularly susceptible to poor outcomes. The objective of this study was to summarize the clinical manifestations and maternal and perinatal outcomes of COVID-19 during pregnancy. MATERIAL AND METHODS: We searched databases for all case reports and series from 12 February to 4 April 2020. Multiple terms and combinations were used including COVID-19, pregnancy, maternal mortality, maternal morbidity, complications, clinical manifestations, neonatal morbidity, intrauterine fetal death, neonatal mortality and SARS-CoV-2. Eligibility criteria included peer-reviewed publications written in English or Chinese and quantitative real-time polymerase chain reaction (PCR) or dual fluorescence PCR-confirmed SARS-CoV-2 infection. Unpublished reports, unspecified date and location of the study or suspicion of duplicate reporting, cases with suspected COVID-19 that were not confirmed by a laboratory test, and unreported maternal or perinatal outcomes were excluded. Data on clinical manifestations, maternal and perinatal outcomes including vertical transmission were extracted and analyzed. RESULTS: Eighteen articles reporting data from 108 pregnancies between 8 December 2019 and 1 April 2020 were included in the current study. Most reports described women presenting in the third trimester with fever (68%) and coughing (34%). Lymphocytopenia (59%) with elevated C-reactive protein (70%) was observed and 91% of the women were delivered by cesarean section. Three maternal intensive care unit admissions were noted but no maternal deaths. One neonatal death and one intrauterine death were also reported. CONCLUSIONS: Although the majority of mothers were discharged without any major complications, severe maternal morbidity as a result of COVID-19 and perinatal deaths were reported. Vertical transmission of the COVID-19 could not be ruled out. Careful monitoring of pregnancies with COVID-19 and measures to prevent neonatal infection are warranted.",
"role": "user"
},
{
"content": "[\"Prevention\", \"Transmission\", \"Diagnosis\"]",
"role": "assistant"
}
] |
science.bioasq_general_qa | science.bioasq_general_qa.787 | [
{
"content": "Answer the following question using the provided context. Only provide the answer without any additional text.\n\nQuestion: What is the mechanism of action of ozanimod?\nContext: Ozanimod is a novel, selective, oral sphingosine-1-phosphate (1 and 5) receptor modulator in development for multiple sclerosis and inflammatory bowel disease.\nEmerging strategies to interfere with cellular adhesion processes involved in lymphocyte recruitment are discussed, including both integrin blockade (natalizumab, vedolizumab, etrolizumab) and sphingosine-1-phosphate receptor inhibition (fingolimod, ozanimod).\nOzanimod (RPC1063) is an oral selective modulator of the sphingosine-1-phosphate 1 and 5 receptors under development for the treatment of relapsing multiple sclerosis and inflammatory bowel disease.\nIn addition, there are three more specific S1P agonists in late stages of development: siponimod, ponesimod, and ozanimod.\nOzanimod (RPC1063), a selective S1PR1 and S1PR5 modulator, reduces chronic inflammation and alleviates kidney pathology in murine systemic lupus erythematosus.\nOzanimod (RPC1063) is a specific and potent small molecule modulator of the sphingosine 1-phosphate receptor 1 (S1PR1) and receptor 5 (S1PR5), which has shown therapeutic benefit in clinical trials of relapsing multiple sclerosis and ulcerative colitis.\nThe sphingosine-1-phosphate receptor-1 (S1P __sub__ 1 __end_sub__ ) agonist ozanimod ameliorates ulcerative colitis, yet its mechanism of action is unknown.\nThis review will focus on SM drugs approved and under development, including JAK inhibitors (tofacitinib, filgotinib, upadacitinib, peficitinib) and S1PR agonists (KRP-203, fingolimod, ozanimod, etrasimod, amiselimod), and their mechanism of action.\nBACKGROUND Ozanimod is a sphingosine 1-phosphate receptor modulator, which selectively binds to sphingosine 1-phosphate receptor subtypes 1 and 5 with high affinity.\nBACKGROUND Ozanimod, a sphingosine 1-phosphate receptor modulator, selectively binds to receptor subtypes 1 and 5 with high affinity.\nOzanimod is a novel, selective, oral sphingosine-1-phosphate (1 and 5) receptor modulator in development for multiple sclerosis and inflammatory bowel disease.\nThe sphingosine-1-phosphate receptor-1 (S1P 1 ) agonist ozanimod ameliorates ulcerative colitis, yet its mechanism of action is unknown.\nBACKGROUND Ozanimod (RPC1063) is an oral agonist of the sphingosine-1-phosphate receptor subtypes 1 and 5 that induces peripheral lymphocyte sequestration, potentially decreasing the number of activated lymphocytes circulating to the gastrointestinal tract.\nBACKGROUND Ozanimod, an oral immunomodulator, selectively targets sphingosine 1-phosphate receptors 1 and 5.\nResults From the First-in-Human Study With Ozanimod, a Novel, Selective Sphingosine-1-Phosphate Receptor Modulator.\nThe sphingosine-1-phosphate receptor-1 ( S1P ) agonist ozanimod ameliorates ulcerative colitis , yet its mechanism of action is unknown\nMore selective S1P receptor agents-ponesimod ( ACT128800) , siponimod ( BAF312) , ozanimod ( RPC1063) , ceralifimod ( ONO-4641) , GSK2018682 , and MT-1303-are still in relatively early stages of development , but phase I and II trials showed promising efficacy and safety\nEffects of High- and Low-Fat Meals on the Pharmacokinetics of Ozanimod, a Novel Sphingosine-1-Phosphate Receptor Modulator.\nOzanimod (RPC1063) is an oral selective modulator of the sphingosine-1-phosphate 1 and 5 receptors under development for the treatment of relapsing multiple sclerosis 
and inflammatory bowel disease.\nOzanimod is a novel, selective, oral sphingosine-1-phosphate (1 and 5) receptor modulator in development for multiple sclerosis and inflammatory bowel disease.\nBACKGROUND Ozanimod (RPC1063) is an oral agonist of the sphingosine-1-phosphate receptor subtypes 1 and 5 that induces peripheral lymphocyte sequestration, potentially decreasing the number of activated lymphocytes circulating to the gastrointestinal tract.\nThe sphingosine-1-phosphate receptor-1 (S1P1) agonist ozanimod ameliorates ulcerative colitis, yet its mechanism of action is unknown.\nOzanimod (RPC1063) is a potent sphingosine-1-phosphate receptor-1 (S1P1 ) and receptor-5 (S1P5 ) agonist with autoimmune disease-modifying activity.",
"role": "user"
},
{
"content": "Ozanimod is a specific and potent small molecule modulator of the sphingosine 1-phosphate receptor 1 (S1PR1) and receptor 5 (S1PR5), which has shown therapeutic benefit in clinical trials of relapsing multiple sclerosis and ulcerative colitis.",
"role": "assistant"
}
] |
science.healthver_entailment | science.healthver_entailment.4982 | [
{
"content": "Output a JSON object with \"verdict\" and \"evidence\" fields based on the claim and abstract below.\n\nClaim: the covid-19 is a us biological weapon to reduce the planet's population\n\nAbstract:\nCoronavirus disease 2019 (COVID-19) is a pandemic with no specific drugs and high fatality.\nThe most urgent need is to find effective treatments.\nWe sought to determine whether hydroxychloroquine (HCQ) application may reduce the death risk of critically ill COVID-19 patients.\nIn this retrospective study, we included 550 critically ill COVID-19 patients who need mechanical ventilation in Tongji Hospital, Wuhan, from February 1, 2020 to April 4, 2020.\nAll 550 patients received comparable basic treatments including antiviral drugs and antibiotics, and 48 of them were treated with oral HCQ treatment (200 mg twice a day for 7-10 days) in addition to the basic treatments.\nPrimary endpoint is fatality of patients, and inflammatory cytokine levels were compared between HCQ and non-hydroxychloroquine (NHCQ) treatments.\nWe found that fatalities are 18.8% (9/48) in HCQ group, which is significantly lower than 47.4% (238/502) in the NHCQ group (P<0.001).\nThe time of hospital stay before patient death is 15 (10-21) days and 8 (4-14) days for the HCQ and NHCQ groups, respectively (P<0.05).\nThe levels of inflammatory cytokine IL-6 were significantly reduced from 22.2 (8.3-118.9) pg mL-1 at the beginning of the treatment to 5.2 (3.0-23.4) pg mL-1 (P<0.05) at the end of the treatment in the HCQ group but there is no change in the NHCQ group.\nThese data demonstrate that addition of HCQ on top of the basic treatments is highly effective in reducing the fatality of critically ill patients of COVID-19 through attenuation of inflammatory cytokine storm.\nTherefore, HCQ should be prescribed as a part of treatment for critically ill COVID-19 patients, with possible outcome of saving lives.\nhydroxychloroquine, IL-6, mortalities, COVID-19.",
"role": "user"
},
{
"content": "{\"verdict\": \"NEI\", \"evidence\": []}",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_arxiv | science.scientific_papers_summarization_single_doc_arxiv.96582 | [
{
"content": "You will be presented with the full text of a scientific research paper. Write an abstract summarizing the key points. Your response should contain only the abstract without any other text.\n\nArticle text:\nthe study of kaon photoproduction is motivated by the possibility of a model - independent amplitude reconstruction @xcite .\nthis type of analysis has been realized for nucleon - nucleon scattering @xcite and may soon be attempted for kaon photoproduction , given the existing and expected data for single- and double - polarization quantities .\na second motivation hinges on the possibility that a resonance analysis will reveal states not clearly seen in processes containing initial or final single - pion states .\nseveral recent @xcite and older @xcite studies have noted that these two goals ( amplitude reconstruction and multipole analysis ) are very different in nature . in the following ,\nwe have applied the method used in ref .\n@xcite to the low - energy kaon photoproduction database in order to do a multipole analysis . in ref .\n@xcite , the low - energy pion photoproduction database was analyzed using a two - step procedure .\ncharged - pion photoproduction data were fitted assuming a known t - channel pion exchange mechanism for the high ( unfitted ) multipoles .\nthis effectively removed the overall phase ambiguity that arises when only a finite number of multipoles are fitted , and the higher waves are neglected .\nthe obtained charged - pion multipoles , and an appliction of watson s theorem @xcite for multipoles coupled to the elastic @xmath0 @xmath1 partial wave , allowed for the determination of neutral - pion production multipoles . in a similar approach ,\nnot using watson s theorem directly , grushin @xcite was able to extract the @xmath1 phases from a fit to photoproduction data . as charged - kaon photoproduction has a t - channel process contributing to high partial waves , the method used in ref .\n@xcite was applied over a region where @xmath2 and @xmath3wave multipoles were expected to dominate .\nwe have focused on energies from threshold to 1.8 gev in center - of - mass energy ( @xmath4 ) , as this covers a region influenced by the n(1650)@xmath5 , n(1710)@xmath6 , and n(1720)@xmath7 resonances , included in the kaon maid fit , but is below the proposed n(1900)@xmath8 @xcite .\nthe t - channel process in kaon photoproduction is generally @xcite built up from the exchange of the @xmath9 , @xmath10 , and @xmath11 .\nthe latter two states have sizeable widths producing complex multipoles .\nthe contribution to the total cross section from born plus @xmath12 exchange diagrams is known to increase rapidly at moderate energies and this feature is usually handled with the introduction of form factors @xcite . 
as a result , the prediction of higher multipoles is more model - dependent here than in the case of charged - pion photoproduction .\nwe should also note that a recent single - energy multipole analysis has found broad bands of multipole solutions , which are experimentally indistinguishable , using the existing database @xcite .\nour findings are slightly more optomistic , showing at least qualitative agreement with some previous phenomenological fits .\nproblems noted in ref .\n@xcite have been found here as well .\ndata were fitted at single energies corresponding to the low - energy measurements of @xmath13 , @xmath14 , @xmath15 , @xmath16 , and @xmath17 from the graal collaboration @xcite .\ndifferential cross sections were taken from the clas collaboration @xcite . the lowest energy point from ref .\n@xcite was omitted , as contradictory differential cross section measurements exist just above and below this energy .\nconventions for the signs of polarized measurements were taken from ref .\n@xcite , differing from those adopted in ref .\n@xcite .\nin fitting the data , multipoles up to @xmath18 and @xmath19 were varied .\nwaves beyond those being fitted were taken from the born plus @xmath12 exchange diagrams used in the kaon maid fit @xcite . in practice , however , over the considered energy range , the @xmath18 and @xmath19 multipoles were found to be compatible with the born contribution , apart from contributions to the real parts of @xmath18 and @xmath19 .\nthe resulting values for chi - squared per degree of freedom were between 0.5 and unity for all searches . in fig . 1 we show the quality of our fit to the cross section , @xmath13 , @xmath14 , and @xmath15 data , corresponding to lab photon energy of 1027 mev ( @xmath20=1675 mev )\nfigure 2 shows the fit to @xmath16 data and a prediction of @xmath17 , @xmath21 , and @xmath22 .\nfitting @xmath17 and predicting the remaining observables gives a comparable result .\nthe results in fig . 3 , for beam - target quantities e , f , g and h , are predictions .\npreliminary clas data for e @xcite show agreement with this behavior , but rather poor agreement with other previous analyses .\nthe level of agreement , only slightly worse than a fit of all observables , provides a validation of the method .\nhowever , the problem of uniqueness for extracted multipoles , found in ref .\n@xcite , still persists .\nin fig . 3 , a set of single - energy multipoles is compared to the kaon maid solution .\nthe agreement is surprisingly good , particularly for the imaginary parts , which appear qualitatively consistent with the n(1650 ) and n(1710 ) resonances contained in kaon maid .\nthe behavior of @xmath23 is also evidence for a contribution from the n(1720 ) .\nhowever , the @xmath24 multipole has a large real part not included in kaon maid . a feature quite similar to this\nis seen in the energy - dependent multipole analysis of mart @xcite . a large near - threshold value , with a different sign , is also found in the bonn - gatchina analysis @xcite . in fig . 3\n, the multipoles at a particular energy were obtained through searches in parameter space near adjacent - energy solutions . 
however , in expanding the parameter space , we found indistinguishable solutions , with very low chi - squared , corresponding to different multipole sets .\nresults for the @xmath23 and @xmath25 were not stable .\nhowever , the trend seen in the @xmath26 and @xmath24 multipoles remains in these alternate solutions .\nsimilar difficulties were also found in the multipole analysis of ref .\n@xcite . in fig . 4\n, we show the dominant multipoles fitted up to 1.9 gev .\nthe fit remains quite good , retaining a chi - squared per degree of freedom less than unity , without requiring a search of higher multipoles .\nthe real part of the @xmath24 multipole shows a rapid drop to zero , associated with a rise in the imaginary part , at approximately 1.9 gev .\nthis supports the claim for a 2-star n(1900 ) contribution @xcite .\nwe have performed an exploratory single - energy multipole analysis of kaon photoproduction data near threshold - a region where we assumed only a few multipoles would deviate from born predictions .\nthe results are mixed .\nwe have been able to fit a minimal set of data , predicting other observables , thus testing the method .\nhowever , a unique set of multipoles was not found .\nthis confirms the findings of ref .\n@xcite with a somewhat different method .\nclearly , the validity of this method rests on the model - independence of the t - channel process\n. it would be useful to repeat these fits with a set of different models for this contribution .\nmany recipies exist to moderate the born contributions .\ntheir effect is likely to add another source of model - dependence with increased energy .\nthe fact that our search tended to require corrections to the real parts of the @xmath18 and @xmath19 multipoles suggests that a modified t - channel contribution may be required .\nthis issue is presently being studied .\nnumerical values for the single - energy solutions and included datasets will be provided through the said webpage @xcite .\nplots of the extracted multipoles show qualitative similarities to the kaon maid fit in some cases near threshold . for the @xmath24 ,\nhowever , the result is qualitatively different near threshold , with resonance - like behavior near the n(1900 ) resonance energy . in ref .\n@xcite , multiple solutions were also found in fits to a larger set of observables , with a low chi - squared .\nmultipoles up to l=3 were searched , whereas we have taken waves with l@xmath271 to be given by the born contribution , including @xmath12 exchange , with corrections to the real parts of @xmath18 and @xmath19 contributions found necessary in the fit . fitting higher multipoles decreases the chi - squared but also reduces the born contributions , which serve to remove the overall phase ambiguity . we have also found that including the @xmath18 and @xmath19 multipoles in the fit , when not required , tended to increase the uncertainties on the other fit parameters .\nmultipole analyses come in two main types : those with a significant t - channel component , which is taken as known to remove the overall sign ambiguity ( such as @xmath28 photoproduction ) , and others with a negligible t - channel contribution ( such as @xmath29 photoproduction at low energies ) . in the latter case ,\none is free to fix one multipole phase and determine multipoles only up to this unknown phase ( which is a function of energy ) . in ref .\n@xcite , born terms were added and a phase for the @xmath26 multipole was chosen .\nthis was not done in the present fit . 
here\nthe born contribution has fixed our overall phase and therefore the phase of @xmath26 as well .\ngrushin , a.a .\nshikanyan , e.m .\nleikin , and a. ya .\nrotvain , yad .\n38 , 1448 ( 1983 ) ; v.f .\ngrushin , in _ photoproduction of pions on nucleon and nuclei _ , edited by a.a .\nkomar ( nova science , new york , 1989 ) , p. 1ff .",
"role": "user"
},
{
"content": "methods used in the multipole analysis of low - energy pion photoproduction data have been tested against the database of kaon photoproduction measurements . \n results for some multipoles are in qualitative agreement with existing phenomenological models , while others are unstable , given the present database . \n these findings are compared to those of previous studies .",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.3175 | [
{
"content": "Find entities of these types in the text: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nOutput a JSON object with entity types as keys and lists of found entities as values.\n\nText:\nWeight loss maintenance in overweight subjects on ad libitum diets with high or low protein content and glycemic index : the DIOGENES trial 12-month results . BACKGROUND A high dietary protein ( P ) content and low glycemic index ( LGI ) have been suggested to be beneficial for weight management , but long-term studies are scarce . OBJECTIVE The DIOGENES randomized clinical trial investigated the effect of P and GI on weight loss maintenance in overweight or obese adults in eight centers across Europe . This study reports the 1-year results in two of the centers that extended the intervention to 1 year . METHOD After an 8-week low-calorie diet ( LCD ) , 256 adults ( body mass index > 27 kg m ( - ) ( 2 ) ) were randomized to five ad libitum diets for 12 months : high P/LGI ( HP/LGI ) , HP/high GI ( HP/HGI ) , low P/LGI ( LP/LGI ) , LP/HGI and a control diet . During the first 6 months , foods were provided for free through a shop system and during the whole 12-month period , subjects received guidance by a dietician . Primary outcome variable was the change in body weight over the 12-month intervention period . RESULTS During the LCD period , subjects lost 11.2 ( 10.8 , 12.0 ) kg ( mean ( 95 % confidence interval ( CI ) ) ) . Average weight regain over the 12-month intervention period was 3.9 ( 95 % CI 3.0-4.8 ) kg . Subjects on the HP diets regained less weight than subjects on the LP diets . The difference in weight regain after 1 year was 2.0 ( 0.4 , 3.6 ) kg ( P=0.017 ) ( completers analysis , N=139 ) or 2.8 ( 1.4 , 4.1 ) kg ( P < 0.001 ) ( intention-to-treat analysis , N=256 ) . No consistent effect of GI on weight regain was found . There were no clinically relevant differences in changes in cardiometabolic risk factors among diet groups . CONCLUSION A higher protein content of an ad libitum diet improves weight loss maintenance in overweight and obese adults over 12 months .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"low glycemic index\", \"ad libitum diets\", \"high dietary protein ( P ) content\", \"control\"], \"Outcome\": [\"change in body weight\", \"GI on weight regain\", \"improves weight loss\", \"Average weight regain\", \"effect\", \"weight loss maintenance\", \"difference in weight regain\", \"regained less weight\", \"Weight loss maintenance\"], \"Participant\": [\"overweight or obese\", \"overweight\", \"overweight and obese\", \"256 adults\", \"adults\"]}",
"role": "assistant"
}
] |
science.mslr2022_ms2_multidoc_summarization | science.mslr2022_ms2_multidoc_summarization.7727 | [
{
"content": "Based on the RCT reports below, write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article.\n\n\n\nTitle 1:\nA randomised phase III study on capecitabine, oxaliplatin and bevacizumab with or without cetuximab in first-line advanced colorectal cancer, the CAIRO2 study of the Dutch Colorectal Cancer Group (DCCG). An interim analysis of toxicity.\nAbstract 1:\nBACKGROUND Targeting the vascular endothelial growth factor or the epidermal growth factor receptor ( EGFR ) has shown efficacy in advanced colorectal cancer ( ACC ) , but no data are available on the combination of these strategies with chemotherapy in the first-line treatment . The CAIRO2 study evaluates the effect of adding cetuximab , a chimeric mAb against EGFR , to capecitabine , oxaliplatin and bevacizumab in the first-line treatment of ACC . PATIENTS AND METHODS In all , 755 patients were r and omly assigned between treatment with capecitabine , oxaliplatin and bevacizumab with or without cetuximab . The primary end point is progression-free survival . We here present the toxicity results in the first 400 patients that entered the study . RESULTS The incidence of overall grade 3 - 4 toxicity was significantly higher in arm B compared with arm A ( 81 % versus 72 % , P = 0.03 ) . This difference is fully attributed to cetuximab-related skin toxicity . The addition of cetuximab did not result in an increase of gastrointestinal toxicity or treatment-related mortality . CONCLUSIONS The addition of cetuximab to capecitabine , oxaliplatin and bevacizumab in the first-line treatment of ACC appears to be safe and feasible . No excessive or unexpected toxicity in the cetuximab-containing treatment arm was observed\n\n\nTitle 2:\nCapecitabine and irinotecan with and without bevacizumab for advanced colorectal cancer patients.\nAbstract 2:\nAIM To investigate the efficacy and safety of cape-citabine plus irinotecan + /- bevacizumab in advanced or metastatic colorectal cancer patients . METHODS Forty six patients with previously untreated , locally-advanced or metastatic colorectal cancer ( mCRC ) were recruited between 2001 - 2006 in a prospect i ve open-label phase II trial , in German community-based outpatient clinics . Patients received a st and ard capecitabine plus irinotecan ( CAPIRI ) or CAPIRI plus bevacizumab ( CAPIRI-BEV ) regimen every 3 wk . Dose reductions were m and atory from the first cycle in cases of > grade 2 toxicity . The treatment choice of bevacizumab was at the discretion of the physician . The primary endpoints were response and toxicity and secondary endpoints included progression-free survival and overall survival . RESULTS In the CAPIRI group vs the CAPRI-Bev group there were more female than male patients ( 47 % vs 24 % ) , and more patients had colon as the primary tumor site ( 58.8 % vs 48.2 % ) with fewer patients having sigmoid colon as primary tumor site ( 5.9 % vs 20.7 % ) . Grade 3/4 toxicity was higher with CAPIRI than CAPIRI-Bev : 82 % vs 58.6 % . Partial response rates were 29.4 % and 34.5 % , and tumor control rates were 70.6 % and 75.9 % , respectively . No complete responses were observed . The median progression-free survival was 11.4 mo and 12.8 mo for CAPIRI and CAPIRI-Bev , respectively . The median overall survival for CAPIRI was 15 mo ( 458 d ) and for CAPIRI-Bev 24 mo ( 733 d ) . These differences were not statistically different . 
In the CAPIRI-Bev , group , two patients underwent a full secondary tumor resection after treatment , whereas in the CAPIRI group no cases underwent this procedure . CONCLUSION Both regimens were well tolerated and offered effective tumor growth control in this outpatient setting . Severe gastrointestinal toxicities and thromboembolic events were rare and if observed were never fatal\n\n\nTitle 3:\nMechanisms of hypertension associated with BAY 43-9006.\nAbstract 3:\nPURPOSE BAY 43 - 9006 ( sorafenib ) is an inhibitor of Raf kinase , the vascular endothelial growth factor ( VEGF ) receptor-2 , and angiogenesis in tumor xenografts . The current study investigated the incidence , severity , and mechanism of blood pressure ( BP ) elevation in patients treated with BAY 43 - 9006 . PATIENTS AND METHODS Twenty patients received BAY 43 - 9006 400 mg orally twice daily . BP and heart rate were measured at baseline and then every 3 weeks for 18 weeks . VEGF , catecholamines , endothelin I , urotensin II , renin , and aldosterone were measured at baseline and after 3 weeks of therapy . We assessed vascular stiffness at baseline , after 3 to 6 weeks of therapy , and again after 9 to 10 months of therapy . RESULTS Fifteen ( 75 % ) of 20 patients experienced an increase of > or = 10 mmHg in systolic BP ( SBP ) , and 12 ( 60 % ) of 20 patients experienced an increase of > or = 20 mmHg in SBP compared with their baseline value , with a mean change of 20.6 mmHg ( P < .0001 ) after 3 weeks of therapy . There were no statistically significant changes in humoral factors , although there was a statistically significant inverse relationship between decreases in catecholamines and increases in SBP , suggesting a secondary response to BP elevation . Measures of vascular stiffness increased significantly during the period of observation . CONCLUSION Treatment with BAY 43 - 9006 is associated with a significant and sustained increase in BP . The lack of significant change in circulating factors suggests that these humoral factors had little role in the increase in BP\n\n\nTitle 4:\nCancer and Leukemia Group B/Southwest Oncology Group trial 80405: a phase III trial of chemotherapy and biologics for patients with untreated advanced colorectal adenocarcinoma.\nAbstract 4:\nRationale The management of patients with metastatic colorectal cancer ( CRC ) has changed dramatically over the past 5 years . The availability of new chemotherapy agents , eg , irinotecan , oxaliplatin , capecitabine , has altered the approach to these patients , and the introduction of the antibodies bevacizumab and cetuximab has created third and fourth options for many of these patients . These developments and the exposure of patients to multiple lines of only chemotherapy has dramatically improved the median survival for patients with advanced CRC.1 The potential impact of the biologic agents used in combination with chemotherapy could be even more dramatic , although the optimal timing and sequencing of the chemotherapy and biologic-agent therapy is unknown , and our current diagnostic tests make us so far unable to target patients with the “ optimal ” first-line regimen . 
This may be increasingly important because of the emerging evidence that certain patients with advanced CRC ( eg , those with isolated hepatic metastases ) could be cured when a surgical approach is used in conjunction with the “ optimal ” chemotherapy.2 As the current therapies now line up , it appears that oxaliplatin or irinotecan can be combined with infusional 5-fluorouracil ( 5-FU ) and leucovorin ( FOLFOX or FOLFIRI , respectively ) with relatively similar survival outcomes in the first-line setting 3,4 ; the decision of which to use is a function of comorbidities , anticipated toxicity , and the possibility of previous exposure to oxaliplatin in the adjuvant setting",
"role": "user"
},
{
"content": "These results show that the benefits of the treatment with bevacizumab outweigh the toxicity that may occur : enough to justify its use in advanced colorectal cancer",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.11369 | [
{
"content": "Summarize the article in an uncomplicated manner.\n\nTitle: Identifying Anatomical Origins of Coexisting Oscillations in the Cortical Microcircuit\nArticle:\nAbstract:\nOscillations are omnipresent in neural population signals, like multi-unit recordings, EEG/MEG, and the local field potential. They have been linked to the population firing rate of neurons, with individual neurons firing in a close-to-irregular fashion at low rates. Using a combination of mean-field and linear response theory we predict the spectra generated in a layered microcircuit model of V1, composed of leaky integrate-and-fire neurons and based on connectivity compiled from anatomical and electrophysiological studies. The model exhibits low- and high-γ oscillations visible in all populations. Since locally generated frequencies are imposed onto other populations, the origin of the oscillations cannot be deduced from the spectra. We develop an universally applicable systematic approach that identifies the anatomical circuits underlying the generation of oscillations in a given network. Based on a theoretical reduction of the dynamics, we derive a sensitivity measure resulting in a frequency-dependent connectivity map that reveals connections crucial for the peak amplitude and frequency of the observed oscillations and identifies the minimal circuit generating a given frequency. The low-γ peak turns out to be generated in a sub-circuit located in layer 2/3 and 4, while the high-γ peak emerges from the inter-neurons in layer 4. Connections within and onto layer 5 are found to regulate slow rate fluctuations. We further demonstrate how small perturbations of the crucial connections have significant impact on the population spectra, while the impairment of other connections leaves the dynamics on the population level unaltered. The study uncovers connections where mechanisms controlling the spectra of the cortical microcircuit are most effective.\nIntroduction:\nUnderstanding the origin and properties of oscillations[see 1, for a review] is of particular interest due to their controversially discussed functional roles, such as binding of neurons into percepts and selective routing of information[reviewed in 2, esp. part VI]. Specific frequencies have been localized in different layers and linked to top-down and bottom-up processes[3, 4]. Oscillations in population signals correlate with multi-unit spiking activity[5], predominantly at high frequencies[6, 7], while firing probabilities relate to the phase of low frequency oscillations[8]. Coherent oscillations at the population level can arise from clock-like firing cells[9, 10] and more robustly[11] from irregularly firing neurons synchronizing weakly[12, 13]. Neurons in vivo tend to fire irregularly[14] and population oscillations resemble filtered noise rather than clock-like activity[15, 16]. Balanced random networks of leaky integrate-and-fire neurons in the asynchronous irregular( AI) regime can sustain such weakly synchronized oscillatory states[17] and reproduce the stochastic duration and power spectra of γ oscillations[18, 19]. Focusing on the network aspect, rather than on intrinsic cell properties, the PING and ING mechanisms have been suggested to underlie the generation of low- and high-γ frequencies[20, reviewed in 21]. 
Inter-neuron γ( ING) consists of a self-coupled inhibitory population producing an oscillation frequency primarily determined by the time course of the inhibitory postsynaptic potential( IPSP), the dynamical state of the neurons[20, 22, 10, 23] and the delays[24], constraining the generated frequency to the high-γ( > 70 Hz) range. Lower-γ frequencies( 30–70 Hz) arise from the interplay of pyramidal- and inter-neurons( PING) with the frequency determined by the dynamical state of the neurons and the connection parameters[25, 26, 27, 28]. Early network models combining ING and PING motifs[29], as well as self-coupled excitatory populations( studied later in[30]), enabled the phenomenological study of γ oscillations. The two mechanisms were originally formulated for the fully synchronized regime and the analytical treatment of weakly synchronizing networks is restricted to at most two populations[17, 31], neglecting the variety of dynamical states of neuronal populations embedded in a larger circuitry. Modeling studies considering neurons of various level of detail assess the link between network structure and induced oscillations[32]. Experiments find specific frequencies at different depths of the layered cortex[33], while these locations in the tissue are characterized by distinct connectivity patterns[34]. Pronounced slow oscillations( < 1 Hz) are found in deeper layers, such as layer 5[35, 36], and hypotheses regarding their origin range from intrinsic cell mechanisms[37] to network phenomena[38, 39]. In contrast, fast oscillations in the γ and high-γ range are primarily observed in the upper layers[40, 41] and show different phase relationships than the γ oscillations in the lower layers[42]. To the best of our knowledge, theoretical descriptions of coexisting oscillations requiring complicated network structures, as well as a method identifying these structures in a given circuit have not yet been established. The present work sheds light on the influence of sub-circuits integrated in larger networks and the properties of individual connections relevant for the emergence of specific oscillations.\nDiscussion:\nThe sensitivity measure reveals that the peak in the low-γ range is generated by a sub-circuit consisting of layer 2/3, layer 4 and the connections from layer 4 to 2/3E and from 2/3E to 4I. This finding is in agreement with experimental literature locating γ oscillations in the upper layers. Furthermore, we identify the feedback connection from 2/3E to 4I and the feed-forward connections from layer 4 to layer 2/3 as crucial for the amplitude of the peak. The oscillation generated by the cooperation of the two upper layers is of lower frequency than the oscillation produced by the layers in isolation. A hint on layers 2/3 and 4 teaming up to generate a low frequency γ peak has been found in Ainsworth et al.[58]. The frequency of the peak is predominantly determined by connections within the input layer 4. This implies that excitation of the column will be reflected in a frequency shift of the γ peak, which results from an alteration of the dynamical state of the populations and therefore of the effective connectivity. The variability of the generated frequency caused by inputs to layer 4 has been demonstrated experimentally[18, 59]. The collective oscillations could also be shaped by alterations of the synaptic efficacies between layers 2/3 and 4( e. g. by short term plasticity). 
Further experimental studies need to probe the influence of perturbations in weight and number of synapses on the amplitude and frequency of γ peaks in the population rate spectra. The sensitivity measure can be utilized to verify the parameters used in the model and to reveal shortcomings of the theoretical description, which potentially arise from the assumptions of simplified neuron-models and negligible auto-correlations. High-γ peaks are found to be generated in the I-I-loops of each layer, with the loop in layer 4 dominating the spectra. This mechanism, termed ING, has been analyzed previously[20] and experimentally located in upper layers. In the microcircuit, the second largest contribution arises from the I-I-coupling in layer 6; we hence propose to target this layer experimentally to test this hypothesis. Connections determining slow rate fluctuations and the stability of the circuit are identified by the sensitivity measure at zero frequency. The measure shows that connections within layer 5 as well as the connections from population 2/3E and 4E to layer 5 are crucial. We conclude that there are too few connections from 5E to 5I to counteract the rate fluctuations which accumulate due to the amplification within the strong 5E-5E loop. Our findings are in good agreement with experimental results demonstrating the initiation of slow frequency oscillations in layer 5, as well as the stronger amplification of low frequency oscillations in response to a stimulus applied to layer 5 than to a stimulation of layer 2/3[38]. Given the dynamical state of 5E, the circuit is stabilized when removing connections from 2/3E to 5E, resulting in a decrease of slow rate fluctuations. In contrast, an impairment of the connections from 4E to 5E has the effect of strengthening the self-amplification of fluctuations and therefore strengthens slow oscillations. With the emerging optogenetic toolbox it may be possible to experimentally test these two predictions in the future. Our analysis suggests a refinement of the parameters of the microcircuit model, which are so far deduced from direct measurements of anatomical and physiological connectivity alone[34]. Experimental studies show that the amplitude of γ oscillations depends on the stimulus strength[60], suggesting that the current microcircuit model captures the cortical tissue in a semi-stimulated regime. Lowering the external input to the excitatory neurons in layer 4 decreases the low-γ power in the idle state, which in addition sensitizes population 4E to evoke γ oscillations when stimulated. Synaptic delays do not influence the stationary state of the network, characterized by the time-averaged firing rates of all populations, but crucially shape the fluctuations around this stationary set point. We provide an intuitive understanding of the influence of delays on oscillations with parametric plots of the eigenvalues of the activity modes determining the spectra of the circuit. Small delays cause fast oscillations, while long delays support slow ones. Larger delays move the network towards the regime of sustained oscillations, which is counteracted by heterogeneity in the delays. The frequency of the oscillation is highly sensitive to the delays, but the static properties of the circuit, which depend on the dynamic state of the neurons and the anatomical connectivity, determine whether a network displays fast or extremely slow oscillations. 
The newly derived sensitivity measure determines crucial connections for the frequency and amplitude of population rate oscillations. Since its applicability is not constrained to the analysis of indegrees, it permits a systematic investigation of complicated networks with respect to parameters such as the synaptic delay, connection weight, or excitation-inhibition balance. In these pages we exemplified its use by the analysis of a particular model, but it can in principle be utilized to identify dynamically relevant circuits embedded in any high-dimensional network. Our work thus extends existing methods analyzing single- or two-population network models to more intricate structures. The significance of the identified connections is validated by demonstrating how small changes in the number of synapses can have a large impact on the spectra of all populations. The formalism requires the neurons to work in a regime where the activity fluctuations of the inputs are summed linearly on the considered time scale. Simulations of networks of LIF-model neurons confirm the validity of the linear approximation. Experimental evidence supports the existence of cortical networks operating in this regime[61, 62, 63, 64]. Since the sensitivity measure can be applied to any network whose dynamics can be approximated by a linear rate model, the applicability goes beyond circuits composed of LIF-model neurons. For example, responses of modified IF models have been shown to approximate neural responses in vivo[65]. Several studies treat the stationary and dynamical properties of these models in the linear regime( see[66] for EIF and[67] for QIF). Grabska-Barwinska et al.[68] emphasize that theoretical predictions for networks composed of QIF neurons in the asynchronous regime, by trend, also hold in networks operating in a more synchronized regime, in which individual neurons are exposed to larger input fluctuations. For neuron models with conductance-based synapses a reduction to effective current based synapses exists[69, 70] and therefore enables the usage of the theoretical framework developed in[13, 12]. Furthermore, networks of current and conductance based model neurons have been pointed out to be qualitatively comparable( see section 3. 5. 3. in[65]). Alternatively the measure can be fed with experimentally obtained firing rates and transfer functions[71, 61, 72] of neuronal populations to analyze the underlying circuits generating the oscillations. The proposed method also finds application in systems where the non-linearities affect the dynamics on a slower time scale than the considered oscillation. Such non-linearities can be taken into account by reevaluating the measure for different mean-inputs corresponding to different phases of the slow input fluctuations. Employing the measure in the described iterative fashion results in a phase-dependent identification of relevant connections for the generation of the fast rhythm and thus sheds light on the anatomical origin of phase-amplitude coupling[reviewed in 21]. The method can also be exploited in reverse to engineer circuits with a desired oscillatory behavior in a top-down fashion. The results presented here lead to clear interpretations of experimental data on network activity and to new hypotheses. It should be noted, however, that the model of the microcircuit represents an early draft and was purposefully designed by its authors as a minimal model with respect to the number of populations and the heterogeneity in the neuronal dynamics. 
Therefore, failure in the reproduction of certain phenomena found in nature or in the confirmation of a hypothesis should not be attributed to the mathematical method developed here, but to shortcomings of the investigated model. The method is applicable to any update of the original model as structural data and single neuron properties become more refined, given that the assumptions underlying the mean-field and linear response theory are still met. One potential extension is the subdivision of the inhibitory neurons into multiple populations representing different types of inter-neurons, with connection probabilities that yield specific connectivity motifs, as recently reported in[73]. The sensitivity measure uncovers the contribution of the laminar structure to the population rate spectra and produces predictions which can be tested experimentally by comparing the spectra generated by different species or brain areas with distinct laminar structures. In summary the current work introduces a method which elucidates the relation between anatomy and dynamical observables of layered cortical networks. Even though a specific model is used to exemplify the method and to derive concrete predictions, the novel method provides a general framework for the systematic integration of the anatomical and physiological data progressively becoming available into ever more consistent models of cortical circuitry.",
"role": "user"
},
{
"content": "Recordings of brain activity show multiple coexisting oscillations . The generation of these oscillations has so far only been investigated in generic one- and two-population networks , neglecting their embedment into larger systems . We introduce a method that determines the mechanisms and sub-circuits generating oscillations in structured spiking networks . Analyzing a multi-layered model of the cortical microcircuit , we trace back characteristic oscillations to experimentally observed connectivity patterns . The approach exposes the influence of individual connections on frequency and amplitude of these oscillations and therefore reveals locations , where biological mechanisms controlling oscillations and experimental manipulations have the largest impact . The new analytical tool replaces parameter scans in computationally expensive models , guides circuit design , and can be employed to validate connectivity data .",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.65078 | [
{
"content": "Write an abstract for the research paper below.\n\nPaper text:\na 31-year - old male was referred to our department with a complaint of intermittent left - sided back pain of 6 months ' duration .\nhe also complained of early satiety , but had not observed any changes in bowel habits or weight loss .\nabdominal examination revealed a non - tender , palpable mass in the left upper quadrant extending to below the umbilicus .\nthe left testis was atrophic with an area of non - tender induration palpable in the upper pole .\nlaboratory investigations were within normal ranges , with ( -fetoprotein 3.2 iu / l ( 09\niu / l ) , human chorionic gonadotropin hormone < 0.1 iu / l ( < 0.1\niu / l ) , and a slightly elevated lactate dehydrogenase 291 iu / l ( 100190\nscrotal ultrasonography revealed a normal right testis and atrophic left testicle with heterogeneous architecture and scattered microcalcifications ( fig .\nabdominal computed tomography ( ct ) revealed a left retroperitoneal mass 56 cm in diameter with a cystic and solid component extending from below the left renal vein down to the bifurcation of the aorta ( fig .\nthere was no evidence of germ cell tumor , and immunostains including plap , oct 4 , and cd114 were negative .\nthe patient subsequently underwent a ct - guided retroperitoneal lymph node biopsy which revealed a metastatic germ cell tumor consistent with seminoma .\nimmunohistochemistry was positive for plap , oct 4 , and cd117 , and negative for lca , cd3 , cd15 , cd20 , and cd 30 ( fig .\nsperm banking was done for the patient followed by three cycles of chemotherapy with bleomycin , etoposide , and cisplatin .\nfollow - up ct after chemotherapy revealed that the left para - aortic lymphadenopathy had decreased in size to 3.6 1.8 cm .\nthe patient then underwent a retroperitoneal lymph node dissection with preservation of the left kidney .\npathology showed no evidence of malignancy , which was consistent with post - treatment effect .\nprimary retroperitoneal germ cell tumors are extremely rare and represent only 10% of all malignant primary retroperitoneal tumors [ 1 , 2 ] .\npatients with retroperitoneal germ cell tumors usually present late , after their tumors have reached large dimensions .\npresenting symptoms and signs include a palpable mass , with or without pain , weight loss , constipation , hip and back pain , dyspnea , leg edema , fever , varicocele , and urinary retention [ 3 , 4 ] .\nthe burned - out phenomenon in germ cell tumors refers to a germ cell tumor in extra - gonadal tissues with spontaneous regression of an intra - gonadal tumor .\nextragonadal germ cell tumors are usually found in the retroperitoneal , supraclavicular , cervical , and axillary lymph nodes and occasionally in the lung and liver .\nthe first is spontaneous regression of a primary germ cell tumor after metastasis of the germ cell tumor .\npossible explanations are an immune response or ischemia caused by the neoplasm outgrowing its blood supply due to its high metabolic rate .\na second hypothesis is the de - novo development of a primary germ cell tumor in extra - gonadal tissues .\npatients with extra - gonadal germ cell tumors with the burnedout phenomenon usually complain of many vague symptoms , such as flank pain , abdominal mass , night sweating , or scrotal pain , and occasionally reveal an elevated testicular tumor marker . 
in our case ,\nhistological features that are helpful in establishing a diagnosis of a regressed testicular germ cell tumor include , apart from the scar formation , intratubular calcifications , lymphoplasmacytic infiltrate , hemosiderincontaining macrophages , and testicular atrophy .\nburned - out tumors may cause some confusion in the diagnosis , because secondary tumors can often be mistaken for primary tumors .\ncareful evaluation of the testis is crucial for identifying the primary lesion site ; all abnormalities shown on clinical or sonographic scrotal examination call for orchiectomy .\nhigh - resolution sonography of the scrotum with linear high - frequency transducers allows the detection of small , highly echogenic foci , hypoechoic zones , microlithiasis or microcalcifications as in our case [ 2 , 8 , 9 ] .\nprimary extra - gonadal germ cell tumors are more aggressive than primary testicular cancer and are more resistant to chemotherapeutic agents .\nseminomatous extra - gonadal germ cell tumors are sensitive to chemotherapy and are treated similarly to primary seminomas with three or four cycles of cisplatin , etoposide , and bleomycin . on the other hand ,\nnon - seminomatous extragonadal germ cell tumors are frequently chemoresistant with 5 year survival rates of 65% .\nit is mandatory to perform a testicular ultrasound , despite a ( normal or suspicious ) physical examination of the scrotum in the work - up of male patients presenting with a retroperitoneal mass .\nmetastatic disease secondary to burned - out lesion has the same prognosis as a primary testicular malignancy ; therefore , it is important that it is recognized .",
"role": "user"
},
{
"content": "germ cell tumors constitute the majority of all testicular tumors , which are relatively rare overall and are mainly encountered in young adults and teenagers . the term \n burnedout germ cell tumor refers to the presence of a metastatic germ cell tumor with histological regression of the primary testicular lesion . \n clinical examination of the testes and scrotal sonography is pivotal in the initial diagnosis of such neoplasms . \n we present a case of a 31-year - old male with a retroperitoneal mass and no palpable lesion on testicular examination .",
"role": "assistant"
}
] |
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.60105 | [
{
"content": "Write a literature review chapter based on these paper abstracts.\nLiterature Review Title: A Review on Human Body Communication: Signal Propagation Model, Communication Performance, and Experimental Issues \nChapter Title: Capacitive Coupling HBC. \n1. Abstract of Cited Paper (BIB001): The human body characteristics as a signal transmission medium are studied for the application to intrabody communication. The measurements of the body channel cover the frequency range from 100 kHz to 150 MHz and the distance on the body up to 1.2 m. A distributed RC model is developed to analyze the large variation of the channel properties according to the frequency and channel length. The simulation results using the channel model match well with the measurements in both the frequency and time domains. The effect of the ground plane to the body channel transceivers is also investigated and an empirical formula for the minimum ground size is obtained. Finally, the amount of the electromagnetic radiation due to the body antenna effect is measured. With regards to the Federal Communications Commission regulations, the proper frequency range for the intrabody communication is determined to satisfy given bit error rate requirements \n2. Abstract of Cited Paper (BIB002): Recently, wearable devices which use the human body as a transmission channel have been developed. However, there has been a lack of information related to the transmission mechanism of such devices in the physical layer. Electromagnetic communication trials using the human body as a transmission medium have more than a decade's history. However, most of the researches have been conducted by researchers who just want to utilize the fact and practically no physical mechanisms have been researched until recently. Hence, in this paper, the authors propose some calculation models of the human body equipped with the wearable devices by using the finite difference time domain (FDTD) method. Moreover, a biological tissue-equivalent solid phantom is utilized to show the validity of the calculation. From these investigations, the authors determine the transmission mechanism of the wearable devices using the human body as a transmission channel. \n3. Abstract of Cited Paper (BIB003): Electric-field intrabody communication (EF-IBC) is a promising new scheme for the data exchange among wearable biomedical sensors. It uses the body as the signal transmission media. Compared with existing body area network (BAN) schemes, EF-IBC can achieve higher data rate with less transmission power. Until now, the detailed EF-IBC channel mechanism is not well understood. In this work, finite-element method (FEM) is utilized for the first time to investigate the EF-IBC channel. A circuit-coupled FEM model is established for the EF-IBC channel. The FEM model is extensively verified by experimental measurements. The new physical model enables the revelation of characteristics and effects of different components in the EF-IBC channel. The FEM investigation finds that the capacitive return path is critical to the characteristics of the EF-IBC channel. Parameters of the capacitive return path are quantitatively measured. The investigation also finds that the body plays an important role to the return path capacitance. The forward body path can be well modeled by a cascade of π-shaped circuits. Based on the FEM model of the EF-IBC channel, a simplified circuit model is derived to provide an efficient tool for the transceiver design. \n4. 
Abstract of Cited Paper (BIB004): Modeling of intrabody communication (IBC) entails the understanding of the interaction between electromagnetic fields and living tissues. At the same time, an accurate model can provide practical hints toward the deployment of an efficient and secure communication channel for body sensor networks. In the literature, two main IBC coupling techniques have been proposed: galvanic and capacitive coupling. Nevertheless, models that are able to emulate both coupling approaches have not been reported so far. In this paper, a simple model based on a distributed parameter structure with the flexibility to adapt to both galvanic and capacitive coupling has been proposed. In addition, experimental results for both coupling methods were acquired by means of two harmonized measurement setups. The model simulations have been subsequently compared with the experimental data, not only to show their validity but also to revise the practical frequency operation range for both techniques. Finally, the model, along with the experimental results, has also allowed us to provide some practical rules to optimally tackle IBC design. \n5. Abstract of Cited Paper (BIB005): Utilizing the body surface as the signal transmission medium, capacitive coupling human body communication (CC-HBC) can achieve a much higher energy efficiency than conventional wireless communications in future wireless body area network (WBAN) applications. Under the CC-HBC scheme, the body surface serves as the forward signal path, whereas the backward path is formed by the capacitive coupling between the ground electrodes (GEs) of transmitter (TX) and receiver (RX). So the type of communication benefits from a low forward loss, while the backward loss depending on the GE coupling strength dominates the total transmission loss. However, none of the previous works have shown a complete research on the effects of GEs. In this paper, all kinds of GE effects on CC-HBC are investigated by both finite element method (FEM) analysis and human body measurement. We set the TX GE and RX GE at different heights, separation distances, and dimensions to study the corresponding influence on the overall signal transmission path loss. In addition, we also investigate the effects of GEs with different shapes and different TX-to-RX relative angles. Based on all the investigations, an analytical model is derived to evaluate the GE related variations of channel loss in CC-HBC. \n6. Abstract of Cited Paper (BIB006): Objective: The purpose of this contribution is to estimate the path loss of capacitive human body communication (HBC) systems under practical conditions. Methods: Most prior work utilizes large grounded instruments to perform path loss measurements, resulting in overly optimistic path loss estimates for wearable HBC devices. In this paper, small battery-powered transmitter and receiver devices are implemented to measure path loss under realistic assumptions. A hybrid electrostatic finite element method simulation model is presented that validates measurements and enables rapid and accurate characterization of future capacitive HBC systems. Results: Measurements from form-factor-accurate prototypes reveal path loss results between 31.7 and 42.2 dB from 20 to 150 MHz. Simulation results matched measurements within 2.5 dB. Comeasurements using large grounded benchtop vector network analyzer (VNA) and large battery-powered spectrum analyzer (SA) underestimate path loss by up to 33.6 and 8.2 dB, respectively. 
Measurements utilizing a VNA with baluns, or large battery-powered SAs with baluns still underestimate path loss by up to 24.3 and 6.7 dB, respectively. Conclusion: Measurements of path loss in capacitive HBC systems strongly depend on instrumentation configurations. It is thus imperative to simulate or measure path loss in capacitive HBC systems utilizing realistic geometries and grounding configurations. Significance: HBC has a great potential for many emerging wearable devices and applications; accurate path loss estimation will improve system-level design leading to viable products. \n7. Abstract of Cited Paper (BIB007): In wireless body area network, capacitive-coupling body channel communication (CC-BCC) has the potential to attain better energy efficiency over conventional wireless communication schemes. The CC-BCC scheme utilizes the human body as the forward signal transmission medium, reducing the path loss in wireless body-centric communications. However, the backward path is formed by the coupling capacitance between the ground electrodes (GEs) of transmitter (Tx) and receiver (Rx), which increases the path loss and results in a body posture dependent backward impedance. Conventional methods use a fixed inductor to resonate with the backward capacitor to compensate the path loss, while it's not effective in compensating the variable backward impedance induced by the body movements. In this paper, we propose a self-adaptive capacitive compensation (SACC) technique to address such a problem. A backward distance detector is introduced to estimate the distance between two GEs of Tx and Rx, and a backward capacitance model is built to calculate the backward capacitance. The calculated backward capacitance at varying body posture is compensated by a digitally controlled tunable inductor (DCTI). The proposed SACC technique is validated by a prototype CC-BCC system, and measurements are taken on human subjects. The measurement results show that 9dB–16 dB channel enhancement can be achieved at a backward path distance of 1 cm-10 cm.\n\nLiterature Review:",
"role": "user"
},
{
"content": "In 1995, Zimmerman did the research to perform the position measurement of Yo-Yo Ma's cello bow. It was discovered that placing one's hand in electric field attenuates the received signal significantly. Based on these facts and principle, he proposed the concept of Personal Area Network (PAN), a wireless system that allows electronic devices on and near the human body to exchange digital information through HBC, which does not need the sophisticated antenna to radiate the signal into air; instead, electrodes are used to couple the electric signal to human body, which is then served as the medium to convey the signal. Zimmerman also developed a prototype as an initial attempt to implement HBC technology. This prototype validates the feasibility of using HBC for data transfer. The capacitive coupling HBC, also called near-field coupling BIB002 , and electrostatic coupling BIB001 are described in Figure 2 . The signal electrodes at both transmitter and receiver are attached on the human skin, while the ground electrodes are floating in air. A large electric field Et is introduced to the human body via the signal electrode of transmitter BIB002 . The conductivity of human body not only couples electric field to environment (i.e., Ee) but also serves as a conducting plate that induces electric fields (i.e., Ea, Ec, and Ed) to other conducting plates (ground electrodes) BIB006 BIB007 . The return path of the transmitted signal is formed by the electric fields Ea, Ec, Ee-Eb, Er-Ef-Eb, and Ed-Ef-Eb. In other words, the electric fields in environment or through external earth ground serve as the return path BIB004 . At the receiver, the received signal as electric potential difference between Er, Ef, and Ed is allowed to be detected as Er is much larger than Ef and Ed (conductivity of human body is much higher than that of air) BIB007 . On the other hand, the electric potential difference is small due to the escaped electric fields Ee, Ea, and Ec. Furthermore, the detected signal is unstable and highly influenced by the environment on accounting that the electric fields Ef and Ee vary along with environment changes (e.g., the appearance of metallic furniture, wires, water, and office equipment will change the backward capacitance BIB008 ). To avoid the human body's antenna effect of radiating signal ( /2 dipole is around 1 m), the operating frequency should be lower than 150 MHz BIB002 . And most of the electric field will concentrate around the tip of arm wearing transmitter BIB003 . Capacitive coupling HBC is suitable for those applications requiring higher operating frequencies (approximately tens of MHz) and longer body transmission distances (usually along the whole body). Nevertheless, capacitive coupling HBC is vulnerable to external interferences and other unpredictable effects, such as the subjects' movements BIB005 .",
"role": "assistant"
}
] |
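The review above attributes most of the loss of a capacitive-coupling HBC link to the weak capacitive return path between the transmitter and receiver ground electrodes, and the cited work compensates that capacitance with a series inductor. The sketch below is a minimal lumped-element illustration of that picture; the component values (body-path resistance, return-path capacitance, receiver input resistance) and the tuning frequency are assumptions chosen for demonstration, not figures taken from the cited papers.

```python
# Illustrative single-section model of a capacitive-coupling HBC channel:
# the low-impedance forward body path in series with the high-impedance
# capacitive return path between the TX and RX ground electrodes.
# All component values below are assumptions for demonstration only.
import numpy as np

def channel_gain_db(freq_hz, r_body=200.0, c_return=0.5e-12, r_load=50.0):
    """Voltage gain measured across the receiver input resistance."""
    w = 2 * np.pi * freq_hz
    z_return = 1.0 / (1j * w * c_return)    # ground-electrode coupling
    z_loop = r_body + z_return + r_load     # series loop impedance
    return 20 * np.log10(np.abs(r_load / z_loop))

def compensated_gain_db(freq_hz, tune_hz=40e6, r_body=200.0,
                        c_return=0.5e-12, r_load=50.0):
    """Same divider with a series inductor sized to resonate with the
    return-path capacitance at `tune_hz` (the compensation idea)."""
    l_comp = 1.0 / ((2 * np.pi * tune_hz) ** 2 * c_return)
    w = 2 * np.pi * freq_hz
    z_loop = (r_body + r_load
              + 1.0 / (1j * w * c_return) + 1j * w * l_comp)
    return 20 * np.log10(np.abs(r_load / z_loop))

for f in (1e6, 10e6, 40e6, 100e6):
    print(f"{f/1e6:6.1f} MHz  plain {channel_gain_db(f):7.1f} dB"
          f"   compensated {compensated_gain_db(f):7.1f} dB")
```

With these assumed values the loss falls as the frequency rises, because the ground-electrode capacitance couples more strongly, and it drops sharply near the tuning frequency, qualitatively matching the behaviour of ground-electrode coupling and capacitive compensation described in the record above.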
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.14598 | [
{
"content": "Summarize the article in an uncomplicated manner.\n\nTitle: DNA Methylation Signatures of the Plant Chromomethyltransferases\nArticle:\nAbstract:\nDNA methylation in plants is traditionally partitioned into CG, CHG and CHH contexts( with H any nucleotide but G). By investigating DNA methylation patterns in trinucleotide contexts in four angiosperm species, we show that such a representation hides spatial and functional partitioning of different methylation pathways and is incomplete. CG methylation( mCG) is largely context-independent whereas, at CHG motifs, there is under-representation of mCCG in pericentric regions of A. thaliana and tomato and throughout the chromosomes of maize and rice. In A. thaliana the biased representation of mCCG in heterochromatin is related to specificities of H3K9 methyltransferase SUVH family members. At CHH motifs there is an over-representation of different variant forms of mCHH that, similarly to mCCG hypomethylation, is partitioned into the pericentric regions of the two dicots but dispersed in the monocot chromosomes. The over-represented mCHH motifs in A. thaliana associate with specific types of transposon including both class I and II elements. At mCHH the contextual bias is due to the involvement of various chromomethyltransferases whereas the context-independent CHH methylation in A. thaliana and tomato is mediated by the RNA-directed DNA methylation process that is most active in the gene-rich euchromatin. This analysis therefore reveals that the sequence context of the methylome of plant genomes is informative about the mechanisms associated with maintenance of methylation and the overlying chromatin structure.\nIntroduction:\nMethylation of cytosine residues plays important roles in gene regulation and transposon control in nuclear genomes of plants and animals. In both plant and animal genomes the methylation is highest on symmetric CG dinucleotides but also exists in CH contexts in which H is any base other than G[1–3]. This non-CG methylation is best characterised in plants where it is normally classified as CHG and CHH contexts[1, 4]. Corresponding to these patterns of DNA methylation in Arabidopsis thaliana the maintenance DNA methyltransferases MET1 and Chromomethyltransferase 3( CMT3) are responsible for the symmetric CG and CHG contexts, respectively[4]. The CMT2 methyltransferase and the small RNA-guided Domains Rearranged Methylase( DRM) 1/2 act at the non-symmetric CHH cytosines[5, 6]. There are, however, at least four lines of evidence for additional complexity in the nuclear genome methylome beyond the CG, CHG and CHH components. First, the original whole-genome methylation profiles at base resolution in A. thaliana highlighted the possible influence of the local sequence context beyond CG, CHG or CHH on the extent of DNA methylation[7, 8]. Second, in Physcomitrella patens and A. thaliana, methylation of the CCG trinucleotide context depends on both MET1 and CMT3 orthologs whereas CAG and CTG methylation only requires CMT3[9]. Third there is an effect of chromatin so that heterochromatic CHH methylation is dependent on CMT2 whereas euchromatic motifs are methylated by small RNA-guided DRM1/2[5, 10, 11]. The fourth evidence is from humans in which mCH is enriched for various nucleotide motifs depending on the tissue type[3]. To explore these potential methylome complexities we undertook a comprehensive reanalysis of methylation in trinucleotide contexts in A. thaliana, maize( Zea mays) and rice( Oryza sativa). 
We also analysed the genome-wide methylation in RdDM mutants of tomato( Solanum lycopersicum) that we generated by gene editing. We reveal that, at CHG motifs, the methylome is depleted for CCG relative to CAG or CTG throughout the chromosomes of maize and rice and in the pericentric heterochromatin of A. thaliana and S. lycopersicum where these marks are densest. In the CHH methylome there are also differences between the arms and pericentromere. The euchromatin component is maintained predominantly by the RNA-directed DNA methylation( RdDM) pathway and it is not affected by variations in the sequence motifs adjacent to the C. In the heterochromatin, in contrast, the CHH methylation is densest at CAA and CTA in A. thaliana and maize, at CAA and CAT in tomato, and at CTA in rice. This differential CHH subcontext methylation is caused by chromomethyltransferases including CMT2 in A. thaliana and ZMET2 and ZMET5 in maize. We also provide evidence that different members of the SUVH H3K9 methyltransferase family impact the differential methylation of CCG compared to CAG and CTG. Based on these findings we propose that analyses of plant DNA methylomes are more informative if they account for subcategories of the mCHG and mCHH motifs. CCG should be considered separately of other CHG contexts and CHH should be subdivided into CAA/CTA and different subcontexts depending on the species.\nDiscussion:\nThe conventional classification of DNA methylation in plant genomes in terms of CG, CHG and CHH sequence contexts reflects the action of various DNA methyltransferases associated with establishment and maintenance of epigenetic marks[4]. From this present analysis, however, we reveal that sequence subcontext in the DNA methylome is additionally informative about the partition of euchromatin and heterochromatin and the involvement of the DNA methyltransferases and H3K9 methyltransferases in these chromosomal domains. The partition of chromatin into pericentric( and possibly other types of) heterochromatin and euchromatic domains was most clearly evident in the dicot species A. thaliana and tomato( Fig 1). It is well known that the heterochromatin is the more methylated domain but we now show that at CHG motifs it has a lower ratio of mCCG/( mCAG or mCTG) than in the euchromatin( Figs 1 and 2) and that at CHH motifs there is a higher ratio of( mCAA-mCTA) /( other mCHH)( A. thaliana) or( mCAA-mCAT) /( other mCHH)( tomato)( Figs 1, 2 and 3). Methylation at CCG in A. thaliana requires both MET1 and CMT3( Fig 2) as in Physcomitrella patens[9], whereas other CHG contexts only require CMT3. In maize it is likely that ZMET2/5 and the MET1 orthologs are required for CCG methylation while CAG and CTG would require ZMET2/5 only. Yaari et al. proposed an explanatory model where CMT3 is unable to methylate the CGG motif( solely a substrate of MET1) on the strand opposite to CCG, and requires hemimethylation( mCGG) to methylate CCG[9]. However this hypothesis contradicts the molecular data on CMT3, which demonstrated that CMT3 efficiently methylates unmethylated substrates[24]. As an alternative we propose that MET1-mediated mCGG and CmCG recruits SUVH5/6, but not SUVH4, which would catalyse dimethylation of H3K9 and subsequent recruitment of CMT3 to methylate the first cytosine of CCG( S15 Fig). This model implies that the interaction between H3K9me2 and CHG methylation is very local, i. e. 
CHG methylation is controlled at the single-nucleosome level( or by the two adjacent nucleosomes only), which is consistent with the 167-bp periodicity of CHG methylation in the Arabidopsis genome[7]. A lower efficiency of the SUVH5/6-CMT3 feedback loop relative to the SUVH4-CMT3 loop would thus account for the lower methylation of CCG compared to CAG/CTG. Although trinucleotide contexts have not been taken into account, existing data on SUVH4/5/6 are consistent with our hypothesis: SUVH4 binds mCHG and mCHH much more strongly than mCG[25, 26], SUVH5 binds mCG/mCHG/mCHH with similar affinities[27], and SUVH6 prefers mCHG and mCHH to mCG but has not been tested on mCGG[25]. These differences in binding affinities may contribute to the locus-specificity that SUVH4, SUVH5 and SUVH6 exhibit[18]. Notably, the role of SUVH5/6 to the exclusion of SUVH4 in the regulation of A. thaliana rDNA loci[28] may be due to the high density of CCG/CGG duplexes in the 5’ external transcribed sequence( 5’ ETS) of the 45S rDNA repeat. The CHH methylation biases are influenced by various chromomethyltransferases. In A. thaliana( Fig 2) CMT2 preferentially methylates CAA and CTA. In tomato, there are three members of the CMT family[29, 30] but their activities have yet to be defined. Based on multiple independent CMT losses in eudicots and monocots, a recent analysis proposes that the different CMT clades( CMTϵ, encompassing CMT1 and CMT3, and CMT2) may have overlapping functions[31]. In agreement with this, in maize with no CMT2 ortholog, it is likely that ZMET2 and ZMET5 from the CMTϵ clade share roles that are separated in the A. thaliana CMT2 and CMT3. A similar situation may apply in rice in which there are three as yet uncharacterised CMT genes[32]. The subcontext differences in CHH methylation may be due to intrinsic affinities of the CMT proteins, to the affinities of factors that mediate CMT recruitment as was the case for CMT3 and SUVH4/5/6, or to a combination of both mechanisms. It is likely that at least certain members of the SUVH family of H3K9 methyltransferases have an affinity for methylated CTA/CAA motifs in A. thaliana, which would establish a positive feedback loop with CMT2-dependent DNA methylation similar to the well established feedback loop between H3K9 methylation by SUVH4/KYP and DNA methylation in CHG contexts by CMT3[24]. Furthermore, differential subcontext methylation may be informative to methylation readers: recognition of heterochromatic, CMT2-controlled mCAA/mCTA is likely to trigger different responses than binding to RdDM-controlled mCTT sites in a more open chromatin environment. It is possible that the CMTs evolved these affinities in part to control CG sites that would mutate via deamination of methylcytosine: mCG deamination would create a CAN site on the opposite strand, while deamination of mCAG or mCTG would give rise to CTA and CAA sites. This might be a way of maintaining methylation-dependent silencing of loci despite their tendency to lose cytosines. Contrasting with the motif biases and heterochromatic substrates of chromomethyltransferases, RdDM is mostly active in chromosome arms and does not have obvious sequence context bias. This sequence-independence likely reflects the fact that the DRM2 DNA methyltransferase is guided by RNA as opposed to the protein-DNA interactions of the maintenance DNA methyltransferases. 
It is striking that growth and development of tomato is greatly affected by perturbations of DNA methylation, in this case the RdDM pathway, as in rice and maize[14, 33]. By contrast, in A. thaliana, various methylation mutants including nrpd1 and nrpe1 are fully viable and exhibit near normal development[34, 35]. It is likely that differential effects of RdDM mutations between species are connected to transposons and their epigenetic influence on the expression of adjacent genes. RdDM would have a relatively small effect on genes adjacent to elements like LINE/L1( Fig 2E) at which methylation persists in a CMT2-dependent manner whereas, at elements like RathE1 that are subject to RdDM, the effect would be much greater. Until now it was necessary to use mutants to identify genes that are likely to be affected by RdDM but now, in the light of our analysis, it will be possible to screen methylomes for genes likely to be affected by RdDM, where CHH methylation is independent of sequence context. Our comprehensive analysis of methylation in trinucleotide contexts in A. thaliana, tomato, maize and rice has revealed additional complexity in the plant methylomes but it could just be a first step. Although analysis of trinucleotides does capture the largest differences while keeping the number of combinations manageable, extending to surrounding nucleotides may refine our understanding of methyltransferase and methyl-binding proteins affinities. In principle, there could be GC maintenance methylases in addition to the well-characterised enzymes with CG substrates. There could also be methyltransferases acting at any symmetric C( H) nG or G( H) nC patterns( in which n ≥ 1) provided that the enzyme, either as a monomer or multimer, could span the cytosines on the two DNA strands of these motifs. Extended analyses of existing and future methylome datasets will be informative about these possibilities.",
"role": "user"
},
{
"content": "Dense cytosine DNA methylation ( mC ) in eukaryotes is associated with closed chromatin and gene silencing . In plants it is well known that the sequence context of the mC ( either mCG , mCHG or mCHH ) provides a clue as to which of several mechanisms is involved but now , based on detailed analyses of the DNA methylome in wild type and mutants of four plant species , we reveal that there is additional information in the mC sequence context . Low mCCG and over-representation of mCAA and mCTA or mCAT in A . thaliana and tomato differentiates regions of the chromosomes near the centromere where methylation is dominated by chromomethyltransferases from the chromosome arms in which mCHH is context-independent and predominantly RNA-directed . Rice and maize have similar sequence context-dependent DNA methylation but the corresponding chromosome domains are not spatially separate as in the dicots . The discovery of the subcomponents of plant methylomes based on sequence context will allow greater resolution in past and future analyses of plant methylomes .",
"role": "assistant"
}
] |
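The record above partitions plant cytosine methylation by trinucleotide context (CG, CHG and CHH, with H being A, C or T) and argues that subcontexts such as CCG, CAA and CTA behave differently. Purely as an illustration of that partitioning, the sketch below labels each cytosine of a DNA string with its context; it is not code from the study and the example sequence is arbitrary.

```python
# Minimal sketch: label each cytosine of a sequence with its
# trinucleotide context (CG, CHG or CHH, H = A, C or T), the
# partitioning used in the record above.
def cytosine_contexts(seq):
    seq = seq.upper()
    labelled = []
    for i, base in enumerate(seq):
        if base != "C" or i + 2 >= len(seq):
            continue                    # skip non-C bases and truncated ends
        triplet = seq[i:i + 3]
        if seq[i + 1] == "G":
            context = "CG"
        elif seq[i + 2] == "G":
            context = "CHG"             # includes CAG, CTG and CCG
        else:
            context = "CHH"             # e.g. CAA, CTA, CAT, CTT ...
        labelled.append((i, triplet, context))
    return labelled

# CCG (a CHG subcontext), CGT (CG), CAA (CHH) and CTG (CHG) show up here.
print(cytosine_contexts("CCGTACAATGCTG"))
```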
science.scientific_lay_summarisation_elife_single_doc_summ | science.scientific_lay_summarisation_elife_single_doc_summ.3655 | [
{
"content": "Summarize the article in an uncomplicated manner.\n\nTitle: Rapid diversification associated with a macroevolutionary pulse of developmental plasticity\nArticle:\nAbstract:\nDevelopmental plasticity has been proposed to facilitate phenotypic diversification in plants and animals, but the macroevolutionary potential of plastic traits remains to be objectively tested. We studied the evolution of feeding structures in a group of 90 nematodes, including Caenorhabditis elegans, some species of which have evolved a mouthpart polyphenism, moveable teeth, and predatory feeding. Comparative analyses of shape and form, using geometric morphometrics, and of structural complexity revealed a rapid process of diversification associated with developmental plasticity. First, dimorphism was associated with a sharp increase in complexity and elevated evolutionary rates, represented by a radiation of feeding-forms with structural novelties. Second, the subsequent assimilation of a single phenotype coincided with a decrease in mouthpart complexity but an even stronger increase in evolutionary rates. Our results suggest that a macroevolutionary ‘pulse’ of plasticity promotes novelties and, even after the secondary fixation of phenotypes, permits sustained rapid exploration of morphospace.\nIntroduction:\nDevelopmental( phenotypic) plasticity has been proposed to affect evolution by facilitating adaptive change( Pigliucci, 2001; Schlichting, 2003; West-Eberhard, 2003; Moczek et al., 2011) but the relevant processes resulting in evolutionary diversity remain elusive. Identification of a switch gene for a dimorphism recently confirmed the link between developmental switches and microevolutionary divergence( Ragsdale et al., 2013), although insights from genetic mechanisms have yet to be put into a macroevolutionary context. For example, whether plasticity accelerates evolution by allowing faster evolutionary responses( Baldwin, 1896; Waddington, 1953; Suzuki and Nijhout, 2006) or hinders it by allowing adaptation without the need for genetic assimilation( Williams, 1966) is still a matter of debate( e. g., de Jong, 2005; Wund, 2012). To know the macroevolutionary potential of developmental plasticity, objectively measured plastic traits must be compared by deep taxon sampling in a robust phylogenetic framework. Here, we test the role of developmental plasticity in evolutionary tempo and novelty by measuring change in feeding structures in a group of 90 nematodes, including Caenorhabditis elegans, of which some species show a mouthpart polyphenism, moveable teeth, and predatory feeding. As a result we identified both the gain and loss of a developmental dimorphism to be associated with rapid evolutionary diversification. We made the surprising finding that whereas the appearance of polyphenism coincided with increased complexity and evolutionary rates, these rates were even higher after the assimilation of a single phenotype. The evolutionary and ecological success of nematodes is reflected by the extensive adaptation of their feeding structures, including hooks and stylets in animal- and plant-parasitic nematodes and teeth in predatory species. The latter adaptation is found in the genetic model Pristionchus pacificus and other nematodes of the family Diplogastridae, in which cuticularized teeth and predation are sometimes associated with a dimorphism( Fürst von Lieven and Sudhaus, 2000). 
Dimorphic species execute either a ‘narrow-mouthed’( stenostomatous, St) or ‘wide-mouthed’( eurystomatous, Eu) morph, which differ in the size, shape, and complexity of their mouthparts( Figure 1). In P. pacificus, the St and Eu morphs are advantageous for feeding on bacteria and nematode prey, respectively( Serobyan et al., 2013, 2014). The dimorphism results from an irreversible decision during development, enabling a rapid optimization of morphology to the environment( Bento et al., 2010). This response is mediated by small-molecule pheromones( e. g., dasc#1, ascr#1)( Bose et al., 2012), endocrine signaling( dafachronic acid-DAF-12)( Bento et al., 2010), and a switch mechanism executed by the sulfatase EUD-1( Ragsdale et al., 2013). 10. 7554/eLife. 05463. 003Figure 1. Mouth dimorphism and novelty in Diplogastridae.( A) The diplogastrid eurystomatous( Eu) morph, as shown here for Parapristionchus giblindavisi, is marked by a wider mouth, larger teeth, and often greater stomatal complexity than the stenostomatous( St) morph.( B) P. giblindavisi, St morph. False coloring in( A and B) indicates individual cuticular compartments of the mouth, providing a basis for tracking changes in homologous structures( yellow, cheilostom; blue, gymnostom; red, stegostom except telostegostom). View in( A and B) is right lateral and at same scale. Scale bar, 10 µm.( C) Opposing teeth, shown here for Fictor sp. 1, are a structural novelty of Diplogastridae and used for predatory feeding. Visible serrated plates are among other feeding innovations of Diplogastridae. Dorsal is right; scale bar, 5 µm. DOI: http://dx. doi. org/10. 7554/eLife. 05463. 003\nDiscussion:\nOur results provide original statistical and phylogenetic support for a role of developmental plasticity in evolutionary diversification. They are also congruent with a simple model for the role of plasticity in this process. First, the appearance of bimodal plasticity coincides with a burst of complexity and increase in evolutionary tempo. By this model, developmental plasticity can facilitate novel structures and their associated developmental networks( West-Eberhard, 2003), as well as new complexity in behavioral or enzymatic function, thereby providing additional substrate for future selection. Following this macroevolutionary ‘pulse’ of plasticity, the secondary loss of plasticity is accompanied by a decrease in complexity but a strong acceleration of measured evolutionary rates, which in our study were most pronounced in form change. The surprising limitation of rates in dimorphic relative to secondarily monomorphic lineages might be explained in part by genetic correlation( Cheverud, 1996), or the inability of overlapping genetic programs controlling alternative phenotypes to completely dissociate. We speculate that, where correlated morphologies were initially governed by a dimorphism, assimilation of a single morph would then give the freedom for single phenotypes to specialize and diversify, a phenomenon proposed as developmental ‘character release’( West-Eberhard, 1986). A complementary means by which evolutionary rates increase after the loss of plasticity may be through the release of genetic variation built up as a by-product of relaxed selection( Kawecki, 1994; Snell-Rood et al., 2010; Van Dyken and Wade, 2010). This possibility might be realized through the following scenario. 
If populations experience fluctuating environments and alternative mouth morphologies confer fitness advantages in those environments, then environmental sensitivity( i. e., plasticity) will be maintained( Moran, 1992). The presence of plasticity necessarily leads to relaxed selection on genes underlying the production of either trait, particularly those downstream of a developmental switch, facilitating the accumulation of genetic variation( Van Dyken and Wade, 2010). If populations then encounter a stable, predictable environment, promoting the loss of plasticity( Schwander and Leimar, 2011), this variation can be selected and refined by constitutively exposing a single morph to that environment. This would allow more rapid evolution of novel phenotypes than would be possible through the generation and selection of new genetic variation( Barrett and Schluter, 2008; Lande, 2009), thereby allowing rapid shifts to alternative niches such as novel diets( Ledón-Rettig et al., 2010). Combined with the ability of fixed morphs to more efficiently reach their fitness optima as permitted by character release, variation accumulated during periods of plasticity would thus enable rapid phenotypic specialization and diversification. Although accelerated rates of divergence due to built-up variation and character release should ultimately decline in monomorphic lineages( West-Eberhard, 2003; Lande, 2009), the net result would be an extreme radiation of forms, as has occurred in diplogastrid nematodes. In conclusion, the historical presence of polyphenism is strongly associated with evolutionary diversification. The degree to which the correlations observed are due to causation is presently unclear, although recent mechanistic advances in P. pacificus demonstrate the promise of functional genetic studies to test the causality of rapidly selected genes directly. Further work might also reveal that additional underlying causes, such as previously unseen ecological opportunities or selective pressures, may have jointly led to both complexity and plasticity. However, the simplicity of our results makes our proposed model sufficient to explain the observed correlations. We therefore hypothesize that developmental plasticity is required to cross a threshold of complexity that affords the degrees of freedom necessary for further diversification of form, and even after the assimilation of monomorphy this diversification can continue to be realized. The difference in rates between ancestrally and secondarily monomorphic lineages suggest a deciding role for a history of plasticity in diversification. It is possible that the processes inferred to accompany the gain of plasticity apply also to other systems with taxonomically widespread polyphenism, which sometimes likewise show a general coincidence of plasticity and diversity( e. g., Emlen et al., 2005; Pfennig and McGee, 2010). In principle, the model we propose can be generalized to other systems through dense taxon sampling, a resolved phylogeny, and quantification of alternative morphologies.",
"role": "user"
},
{
"content": "Every animal and plant grows to a body plan that is defined by its genes . However , the body plan must be flexible enough to allow the organism to respond to whatever the world throws at it . This flexibility—known as developmental plasticity—allows an organism to change certain characteristics in order to survive in varying environmental conditions . For example , nerve cells in the brain need to be able to remodel to form memories . It has been suggested that developmental plasticity can affect evolution because the ability to grow in different ways opens a diverse treasure trove of options from which to generate new forms and ways to exploit the environment . However , this potential had not previously been tested . Susoy et al . looked at 90 species of roundworm that look different from one another , particularly in their mouths . Some of the worms have moveable teeth while others are simple and streamlined . Furthermore , of those examined , 23 species were found to be ‘dimorphic’ and have the ability to develop one of two types of mouth: either narrow or wide , depending on their prey . Susoy et al . looked how similar the sequences of 14 genes were across all 90 species and used this information to build a family tree of how the roundworms are related to one another . Tracking which animals have dimorphic mouths on this tree produced an intriguing result: the strategy arose once in a single ancestor of the worms . Although this ability has been lost at least 10 times in the species that retained teeth , it has persisted in others through long periods of evolutionary time . Next , Susoy et al . estimated the speed of evolution in these worms based on how quickly the characteristics of the worms' mouths had changed over evolutionary time . The gain of a dimorphic trait was associated with an increased rate of evolution and the appearance of many new species with diverse and more complex mouthparts . However , evolution was even faster where a dimorphism had been lost , even though the mouthparts generally became less complex . Together , Susoy et al . 's findings demonstrate how developmental plasticity can introduce genetic diversity that can promote the evolution of new forms and species . The next challenges will be to find out how this genetic diversity is stored and released in the worms and to provide examples of the impact of environmental changes on developmental plasticity and shape .",
"role": "assistant"
}
] |
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.6779 | [
{
"content": "Classify the topics of this COVID-19 literature paragraph by selecting from these categories:\n\n- Epidemic Forecasting: Predictions about COVID-19 spread\n- Treatment: Therapeutic methods or medication\n- Prevention: Measures to prevent spread\n- Mechanism: Biological mechanisms of the virus\n- Case Report: Report of a COVID-19 case\n- Transmission: How COVID-19 is transmitted\n- Diagnosis: Diagnosing COVID-19 in individuals\n\nYour output should be a valid JSON array with topics in double quotes and separated by commas. No additional text should be included.\n\nArticle:\n\nBACKGROUND: To avoid a surge of demand on the healthcare system due to the COVID-19 pandemic, we must reduce transmission to individuals with chronic conditions who are at risk of severe illness with COVID-19. We aimed at understanding the perceptions, context and attitudes of individuals with chronic conditions during the COVID-19 pandemic to clarify their potential risk of infection. METHODS: A cross-sectional survey was nested in ComPaRe, an e-cohort of adults with chronic conditions, in France. It assessed participants' perception of their risk of severe illness with COVID-19; their context (i.e., work, household, contacts with external people); and their attitudes in situations involving frequent or occasional contacts with symptomatic or asymptomatic people. Data were collected from March 23 to April 2, 2020, during the lockdown in France. Analyses were weighted to represent the demographic characteristics of French patients with chronic conditions. The subgroup of participants at high risk according to the recommendations of the French High Council for Public Health was examined. RESULTS: Among the 7169 recruited participants, 63% patients felt at risk because of severe illness. About one quarter (23.7%) were at risk of infection because they worked outside home, had a household member working outside home or had regular visits from external contacts. Less than 20% participants refused contact with symptomatic people and <20% used masks when in contact with asymptomatic people. Among patients considered at high risk according to the recommendations of the French High Council for Public Health, 20% did not feel at risk, which led to incautious attitudes. CONCLUSION: Individuals with chronic conditions have distorted perceptions of their risk of severe illness with COVID-19. In addition, they are exposed to COVID-19 due to their context or attitudes.",
"role": "user"
},
{
"content": "[\"Prevention\"]",
"role": "assistant"
}
] |
science.data_reco_mcq_mc | science.data_reco_mcq_mc.171 | [
{
"content": "Based on the dataset descriptions below, which datasets would be suitable for the following research question? Your response should be a JSON array.\n\nQuestion: I want to build a fast and accurate two-stream network for action recognition in videos.\n\nKeywords: action recognition video\n\nDataset descriptions:\nTVQA: The TVQA dataset is a large-scale vido dataset for video question answering. It is based on 6 popular TV shows (Friends, The Big Bang Theory, How I Met Your Mother, House M.D., Grey's Anatomy, Castle).\nUCF101: UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These 101 categories can be classified into 5 types (Body motion, Human-human interactions, Human-object interactions, Playing musical instruments and Sports).\nKinetics: The Kinetics dataset is a large-scale, high-quality dataset for human action recognition in videos. The dataset consists of around 500,000 video clips covering 600 human action classes with at least 600 video clips for each action class.\nFineGym: FineGym is an action recognition dataset build on top of gymnasium videos. Compared to existing action recognition datasets, FineGym is distinguished in richness, quality, and diversity.\nHMDB51: The HMDB51 dataset is a large collection of realistic videos from various sources, including movies and web videos. The dataset is composed of 6,766 video clips from 51 action categories (such as “jump”, “kiss” and “laugh”), with each category containing at least 101 clips.\n\nOptions: A: TVQA B: UCF101 C: Kinetics D: FineGym E: HMDB51",
"role": "user"
},
{
"content": "[\"UCF101\", \"HMDB51\"]",
"role": "assistant"
}
] |
science.bioasq_general_qa | science.bioasq_general_qa.351 | [
{
"content": "Answer the following question using the provided context. Only provide the answer without any additional text.\n\nQuestion: Which deficiency is the cause of restless leg syndrome\nContext: Homozygosity for the T-allele of BTBD9 rs9296249 was associated with lower serum ferritin\nThis polymorphism has previously been associated with RLS, but not low iron stores in blood donors.\nIron deficiency is a common cause of anemia in patients with end stage renal disease (ESRD). Intravenous iron administration, especially in those requiring treatment with erythropoiesis stimulating agents (ESA) is an essential component of the management of anemia in ESRD patients. Iron improves hemoglobin, reduces ESA dose requirement and also has nonerythropoietic effects including improvement in physical performance, cognition and amelioration of restless leg syndrome.\nThe dopamine transporter is decreased in the striatum of subjects with restless legs syndrome.\nRLS subjects had significantly lower DAT binding in the striatum compared to controls on both the Day and the Night scans.\nSecondary RLS is a complication of PD. Deficiency of iron and decreased inhibition function of spinal cord may lead to the occurrence of RLS in PD patients.\nAssociation of iron deficiency with febrile seizures, pica, breath holding spells, restless leg syndrome and thrombosis is increasingly being recognized\nIron depletion is common in regular blood donors. The objective of the study was to investigate the frequency and severity of iron depletion in regular blood donors and whether IV iron is more effective than oral to avoid iron depletion and symptoms thereof, especially restless legs syndrome (RLS).\nIron status is poor in regular blood donors, restless legs syndrome is common,\nFurthermore, there appears to be an association between iron deficiency and those suffering from Restless Leg Syndrome (RLS).\nClinical studies have implicated the dopaminergic system in RLS, while others have suggested that it is associated with insufficient levels of brain iron\nIron deficiency is associated with paediatric sleep disturbances; in particular, restless leg syndrome (RLS)\nHere we described seven pediatric RLS patients. All of the parents of our patients had difficult times to make their children sleep due to irritability, restlessness, and demanding bedtime routines. All patients had asked their parents to rub their feet in bed, and it took more than half an hour to soothe them until they fell asleep. Their mothers had been exhausted from this night-time routine. However, they did not consider the routine abnormal, as it had been their habitual behavior since infancy. Some parents were too distressed or embarrassed to describe the symptoms of their child properly. Five patients had clear family history and none had obvious periodic leg movements during sleep. 
All patients showed low levels of ferritin and iron supplementation was effective in five cases.\nRLS may also be secondary to a number of conditions including iron deficiency, pregnancy and end-stage renal failure and, perhaps, neuropathy.\nThe pathogenesis of RLS probably involves the interplay of systemic or brain iron deficiency and impaired dopaminergic neurotransmission in the subcortex of the brain.\nRestless leg syndrome manifested by iron deficiency from chronic hemoptysis in cystic fibrosis.\nRestless leg syndrome (RLS) and periodic limb movement disorder (PLMD) are considered to be a continuum of a neurological sleep disorder associated with abnormal iron metabolism or deficiency. I describe a case of RLS and PLMD in a cystic fibrosis patient with iron deficiency from chronic hemoptysis.\nIron deficiency in the central nervous system is known to cause motor impairment and cognitive deficits; more recently, it has been suggested that it may play a role in the pathophysiology of the restless leg syndrome.\nThe syndrome is increasingly often diagnosed, particularly in association with iron deficiency, during pregnancy, in chronic renal failure and in patients with peripheral neuropathy.\nClinical, EEG, electromyographic and polysomnographic studies in restless legs syndrome caused by magnesium deficiency.",
"role": "user"
},
{
"content": "Iron deficiency (low serum ferritin) is a recognized cause for RLS. Further, in the striatum of subjects with restless legs syndrome, the dopamine transporter is decreased, which leads to impaired dopaminergic neurotransmission. There is also a report of magnesium deficiency underlying RLS.",
"role": "assistant"
}
] |
science.scitldr_aic | science.scitldr_aic.1960 | [
{
"content": "Summarize the paper:\n\nPaper text:\n\nThis paper presents GumbelClip, a set of modifications to the actor-critic algorithm, for off-policy reinforcement learning.\nGumbelClip uses the concepts of truncated importance sampling along with additive noise to produce a loss function enabling the use of off-policy samples.\nThe modified algorithm achieves an increase in convergence speed and sample efficiency compared to on-policy algorithms and is competitive with existing off-policy policy gradient methods while being significantly simpler to implement.\nThe effectiveness of GumbelClip is demonstrated against existing on-policy and off-policy actor-critic algorithms on a subset of the Atari domain.\nRecent advances in reinforcement learning (RL) have enabled the extension of long-standing methods to complex and large-scale tasks such as Atari (Mnih et al., 2015) , Go , and DOTA (OpenAI, 2018) .\nThe key driver has been the use of deep neural networks, a non-linear function approximator, with the combination usually referred to as Deep Reinforcement Learning (DRL) (LeCun et al., 2015; Mnih et al., 2015) .\nHowever, deep learning-based methods are usually data-hungry, requiring millions of samples before the network converges to a stable solution.\nAs such, DRL methods are usually trained in a simulated environment where an arbitrary amount of data can be generated.\nRL algorithms can be classified as either learning in an off-policy or on-policy setting.\nIn the onpolicy setting, an agent learns directly from experience generated by its current policy.\nIn contrast, the off-policy setting enables the agent to learn from experience generated by its current policy or/and other separate policies.\nAn algorithm that learns in the off-policy setting has much greater sample efficiency as old experience from the current policy can be reused; it also enables off-policy algorithms to learn an optimal policy while executing an exploration-focused policy (Sutton et al., 1998) .\nThe most famous off-policy method is Q-Learning (Watkins & Dayan, 1992) which learns an actionvalue function, Q(s, a), that maps the value to a state s and action a pair.\nDeep Q-Learning (DQN), the marriage of Q-Learning with deep neural networks, was popularised by Mnih et al. 
(2015) and used various modifications, such as experience replay, for stable convergence.\nWithin DQN, experience replay (Lin, 1992) is often motivated as a technique for reducing sample correlation.\nUnfortunately, all action-value methods, including Q-Learning, have two significant disadvantages.\nFirst, they learn deterministic policies, which cannot handle problems that require stochastic policies.\nSecond, finding the greedy action with respect to the Q function is costly for large action spaces.\nTo overcome these limitations, one could use policy gradient algorithms (Sutton et al., 2000) , such as actor-critic methods, which learn in an on-policy setting at the cost of sample efficiency.\nThe ideal solution would be to combine the sample efficiency of off-policy algorithms with the desirable attributes of on-policy algorithms.\nWork along this line has been done by using importance sampling (Degris et al., 2012) or by combining several techniques together, as in ACER (Wang et al., 2016) .\nHowever, the resulting methods are quite complex and require many modifications to existing algorithms.\nThis paper, proposes a set of adjustments to A2C , a parallel on-policy actor-critic algorithm, enabling off-policy learning from stored trajectories.\nTherefore, our contributions are as follows:\n• GumbelClip, a fully off-policy actor-critic algorithm, the result of a small set of simple adjustments, in under 10 lines of code (LOC) 1 , to the A2C algorithm.\n• GumbelClip has increased sample efficiency and overall performance over on-policy actorcritic algorithms, such as A2C.\n• GumbelClip performs similarily to other off-policy actor-critic algorithms, such as ACER, while being significantly simpler to implement.\nThe paper is organized as follows: Section 2 covers background information, Section 3 describes the GumbelClip algorithm, Section 4 details the experiments along with results, discussion, and ablations of our methodology.\nSection 5 discusses possible future work, and finally Section 6 provides concluding remarks.\nIn this paper we have presented GumbelClip, a set of adjustments to the on-policy A2C algorithm, enabling full off-policy learning from stored trajectories in a replay memory.\nOur approach relies on aggressive clipping of the importance weight, large batchsize, and additive noise sampled from the Gumbel distribution.\nWe have empirically validated the use of each component in GumbelClip through ablations and shown the stability of the algorithm.\nFurthermore, we have shown that GumbelClip achieves superior performance and higher sample efficiency than A2C.\nGumbelClip nears the performance and sample efficiency of ACER on many of the tested environments.\nOur methodology requires minimal changes to the A2C algorithm, which in contrast to ACER, makes the implementation of GumbelClip straightforward.",
"role": "user"
},
{
"content": "With a set of modifications, under 10 LOC, to A2C you get an off-policy actor-critic that outperforms A2C and performs similarly to ACER. The modifications are large batchsizes, aggressive clamping, and policy \"forcing\" with gumbel noise.",
"role": "assistant"
}
] |
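The record above names the ingredients of GumbelClip (truncated, i.e. aggressively clipped, importance weights on replayed off-policy samples; additive Gumbel noise on the policy; large batches) without spelling out the loss. The PyTorch-style sketch below is one plausible reading of those ingredients; the clip constant, the noise scale, the loss coefficients, and the exact point where the noise enters are assumptions, so it illustrates the idea rather than reproducing the paper's algorithm.

```python
# Schematic off-policy actor-critic loss in the spirit of GumbelClip:
# clipped importance weights + additive Gumbel noise on the policy logits.
# Hyperparameters and the precise loss form are assumptions.
import torch
import torch.nn.functional as F

def gumbelclip_style_loss(logits, values, actions, returns,
                          behaviour_logp, clip_c=1.0, noise_scale=1.0):
    # "Policy forcing": perturb the current policy's logits with Gumbel noise.
    gumbel = -torch.log(-torch.log(torch.rand_like(logits)))
    logp = F.log_softmax(logits + noise_scale * gumbel, dim=-1)
    logp_a = logp.gather(1, actions.unsqueeze(1)).squeeze(1)

    # Truncated importance weight w.r.t. the behaviour policy that
    # generated the replayed trajectories (aggressive clipping).
    rho = torch.exp(logp_a.detach() - behaviour_logp).clamp(max=clip_c)

    advantage = returns - values.detach()
    policy_loss = -(rho * advantage * logp_a).mean()
    value_loss = F.mse_loss(values, returns)
    return policy_loss + 0.5 * value_loss
```

In use, a batch would be drawn from a replay memory of trajectories stored together with the behaviour policy's log-probabilities for the taken actions, using the large batch sizes the record emphasises.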
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.1856 | [
{
"content": "Summarize the article in an uncomplicated manner.\n\nTitle: Nucleosome Free Regions in Yeast Promoters Result from Competitive Binding of Transcription Factors That Interact with Chromatin Modifiers\nArticle:\nAbstract:\nBecause DNA packaging in nucleosomes modulates its accessibility to transcription factors( TFs), unraveling the causal determinants of nucleosome positioning is of great importance to understanding gene regulation. Although there is evidence that intrinsic sequence specificity contributes to nucleosome positioning, the extent to which other factors contribute to nucleosome positioning is currently highly debated. Here we obtained both in vivo and in vitro reference maps of positions that are either consistently covered or free of nucleosomes across multiple experimental data-sets in Saccharomyces cerevisiae. We then systematically quantified the contribution of TF binding to nucleosome positiong using a rigorous statistical mechanics model in which TFs compete with nucleosomes for binding DNA. Our results reconcile previous seemingly conflicting results on the determinants of nucleosome positioning and provide a quantitative explanation for the difference between in vivo and in vitro positioning. On a genome-wide scale, nucleosome positioning is dominated by the phasing of nucleosome arrays over gene bodies, and their positioning is mainly determined by the intrinsic sequence preferences of nucleosomes. In contrast, larger nucleosome free regions in promoters, which likely have a much more significant impact on gene expression, are determined mainly by TF binding. Interestingly, of the 158 yeast TFs included in our modeling, we find that only 10–20 significantly contribute to inducing nucleosome-free regions, and these TFs are highly enriched for having direct interations with chromatin remodelers. Together our results imply that nucleosome free regions in yeast promoters results from the binding of a specific class of TFs that recruit chromatin remodelers.\nIntroduction:\nThe genomes of all eukaryotic organisms are packaged into nucleosomes, which are the fundamental units of chromatin, each composed of approximately 147 base pairs( bp) of DNA wrapped around a histone octamer. Recent developments in technologies for measuring chromatin marks by chromatin immunoprecipitation( ChIP) on microarrays( ChIP-Chip) or by sequencing( ChIP-seq) have enabled the construction of genome-wide maps of nucleosome positions and modifications at high resolution across various conditions. These experimental data have revealed that nucleosomes are not uniformly distributed across the genome but rather that transcription start and termination sites are relatively depleted of nucleosomes[1],[2]. Furthermore, nucleosome positioning has been shown to vary across physiological conditions[3]. It has long been accepted that nucleosomes have intrinsic sequence preferences which influence nucleosome positioning, e. g.[4]–[6]. At the same time, it has also long been known that barriers in the DNA can cause nucleosomes to be ‘statistically positioned’ relative to such barriers, introducing a periodic pattern of nucleosome occupancy on both sides of the barrier[7]. Given the fact that nucleosomes may cover more than of the genome[1], it is therefore also conceivable that a relatively small number of barriers on the DNA, in combination with statistical positioning relative to these barriers, determines most of the observed nucleosome positioning. 
For example, recent work suggests that nucleosome occupancy patterns around TSSs could at least partly be explained by such statistical positioning[8]. Probably the most obvious class of candidate molecules that could introduce condition-specific barriers on the DNA are sequence-specific transcription factors( TFs). Indeed, for some specific promoters in S. cerevisiae it has been established that binding of TFs is a major determinant of nucleosome positioning in the promoter region, e. g.[9]–[11]. Moreover, the resulting nucleosome positioning has major effects on gene regulation from these promoters. In addition, for a few TFs it has been established that their binding induces local nucleosome exclusion genome-wide[1],[12]–[14]. Although it is thus clear that both intrinsic sequence preferences of nucleosomes and competitive binding of other DNA binding factors play a role in nucleosome positioning, the relative importance of these factors have come under intense debate in recent years. For example, it has been proposed that the positioning of nucleosomes, in particular in S. cerevisiae, is mainly determined by intrinsic sequence preference of the nucleosomes, i. e.[15]. In this view, nucleosomes are mainly positioned by a ‘code’ in the DNA sequence and the accessibility of the DNA to TFs is downstream of this sequence-guided nucleosome positioning. However, these conclusions were challenged by several studies which suggested nucleosome sequence specificity can only explain a modest fraction of nucleosome positioning, and that statistical positioning likely also plays an important role[1],[2],[16],[17]. More recently, several groups have undertaken further experimental investigations into this question, in particular by experimentally comparing nucleosome positioning in vivo and in vitro[18],[19]. Although there is general agreement that these experimental studies confirmed that both intrinsic sequence preferences and the competitive binding of TFs play a role in nucleosome positioning, different authors came to strikingly different, and often seemingly contradictory conclusions regarding which of these factors play a dominant role[20]–[24]. It is thus clear that, rather than lacking sufficient experimental data, the current challenge in furthering our understanding of the determinants of nucleosome positioning lies in the quantitative interpretation of this data. Here we show that, by analyzing existing experimental data in combination with rigorous computational modeling, important novel insights can be gained that reconcile previous seemingly contradictory observations, and that suggest a new picture of the mechanisms regulating nucleosome positions. In particular, we use a biophysical model to quantitatively assess the role of TFs in determining nucleosome positioning in S. cerevisiae, to assess which aspects of nucleosome positioning TFs contribute to most, and to identify whether there are subsets of TFs that play a predominant roles in this process. S. cerevisiae is a particularly attractive system for such an analysis because extensive nucleosome positioning data are available, and because it is essentially the only organism in which sequence-specificities are available for the very large majority of TFs. 
Rather than assuming that intrinsic sequence preferences determine nucleosome positioning and that TF binding occurs preferentially at those regions not covered by nucleosomes, or vice versa, assuming that TF binding sets boundaries in the DNA against which nucleosomes are statistically positioned, in our model the TF binding and nucleosome positioning patterns are determined by a dynamic competition of all TFs and nucleosomes for binding to the DNA. Our model incorporates both the sequence preferences of the nucleosomes and of all TFs in a thermodynamic setting, and rigorously calculates the resulting equilibrium occupancies genome-wide as a function of the concentrations of all TFs and the nucleosomes. Using this model in combination with experimental data we find that TF binding makes a substantial contribution to nucleosome positioning but only at a specific subset of genomic positions. In particular, the linker regions between nucleosomes can be clearly divided into two classes based on their size: the large majority of linkers is small( bp) and occurs within large nucleosome arrays in gene bodies, whereas a minority of linkers is large( bp) and occurs predominantly in promoters. Our results show that the phasing of the small linkers within nucleosome arrays, and thereby the majority of nucleosome positioning genome-wide, is mainly determined by sequence preferences of nucleosomes. In contrast, the larger nucleosome free regions in promoters, which are likely most relevant for effects on gene expression, are mainly determined by competitive binding of TFs. By applying our model to data on nucleosome positioning in vitro we also confirm that the ability of TFs to explain nucleosome positioning in promoters is restricted to in vivo data. Thus, our model provides a quantitative and mechanistic explanation for the observed discrepancies between in vivo and in vitro nucleosome positioning. Most strikingly, our results also show that, rather than all TFs contributing roughly equally to the competition with nucleosomes, the effect of TFs on nucleosome positioning is restricted to a relatively small set of about TFs. Although one might expect that these TFs are simply the highest expressed TFs with the largest number of TFBSs genome-wide in the conditions in which the experiments were performed, we find this not to be the case. Instead, we find that these TFs are highly enriched for having known protein-protein interactions with chromatin remodeling complexes, histones, and chromatin modification enzymes. Thus, the mechanistic picture suggested by our results is that there is a specific class of TFs who, upon binding to the DNA, recruit chromatin modifiers that then mediate local expulsion of nucleosomes.\nDiscussion:\nIt is generally accepted that the packaging of DNA by nucleosomes in eukaryotes can modulate the accessibility of TFs to their cognate sites and thereby have major effects on gene regulation. In recent years there have been significant experimental efforts to determine nucleosome positioning patterns genome-wide, and to analyzing how these nucleosome-positioning patterns are established. 
As we discussed in the introduction, there has been a considerable debate as to whether nucleosome positioning in Saccharomyces cerevisiae is predominantly controlled by intrinsic sequence specificity of the nucleosomes, or that statistical positioning around barriers introduced by other DNA binding factors is more important for nucleosome positioning, and different researchers have presented seemingly contradictory results in this regard. We feel that these apparent contradictions may be reconciled by the results presented here. The large majority of annotated nucleosomes and linkers genome-wide concern the phasing of short linkers within dense arrays of nucleosomes, mainly inside genes. We find that the positioning of these nucleosomes and short linkers crucially depends on the sequence specificity of the nucleosomes, and that TFs contribute relatively little to their positioning. Therefore, predicting all linkers and nucleosomes on a genome-wide scale, the sequence specificity of the nucleosomes provides the main contribution to explaining their positions. In contrast, we find that nucleosome specificity contributes little to explaining larger nucleosome free regions, especially those within promoter regions. As our modeling shows, NFRs in promoters are predominantly explained by the DNA binding of a specific class of transcription factors. Thus, while genome-wide locations of nucleosomes and short linkers are predominantly determined by nucleosome sequence-specificity, the large nucleosome free regions in promoters that likely contribute much more significantly to gene regulation, are determined mainly through the competitive binding of TFs. Importantly, the fact that competition with TFs can not help explain the in vitro nucleosome positioning shows that the contributions of the TFs is restricted to in vivo positioning. Thus, the competitive binding of TFs provides a quantitative and mechanistic explanation for the differences between in vivo and in vitro nucleosome occupancies. That nucleosome free regions in promoters result from a competition between TF and nucleosome binding is supported by a number of recent studies of individual promoters, e. g.[9]–[11],[50]. In these studies the interplay of TF and nucleosome binding determines positions of NFRs and the resulting accessibility pattern has major consequences for gene expression. Our results suggest that this mechanism is not restricted to a few promoters, but is the typical situation genome-wide. Thus, whereas nucleosome sequence specificity does have a major impact on genome-wide nucleosome positioning, precisely those aspects of nucleosome positioning that have most impact on gene regulation are rather determined by the competition between nucleosomes and TF binding. Another major result from our study is that less than of the TFs that we analyzed appear to have a significant effect on nucleosome positioning. As we have shown, these TFs are not characterized by particularly high expression or large numbers of binding sites in YPD, nor do they possess particular sequence specificities or DNA binding domains. Instead, our analysis suggests that these TFs engage in specific protein-protein interactions with chromatin remodelers, thereby effecting nucleosome eviction much more dramatically than other TFs. Although the final predictions of our statistical mechanical model are quite competent, i. e. in promoters of all nucleosomes and of all NFRs are correctly identified, they are still far from perfect. 
This raises the question as to what additional elements are missing from the model. The main error the model makes is failing to identify roughly one third of nucleosome free regions as nucleosome free. This suggests that the model misses additional factors that promote displacement of nucleosomes. As most sequence-specific TFs in yeast are already represented in the model, and our results suggest that only a small fraction of these TFs significantly affect nucleosome positioning, it seems unlikely that the missing sequence-specific TFs play a major role in the overall quality of the results. In contrast, as shown in Figure S6 in Text S1, general TFs including the RNA polymerase itself may play an important role in nucleosome positioning. In this context it has also been suggested[19] that the well-positioned nucleosome immediately downstream of TSS may result from a direct interaction between general transcription factors and the RNA polymerase with this nucleosome. Thus, including the recruitment and binding of general TFs and RNA polymerase will likely further improve the model. In addition, TF binding can recruit chromatin modifying enzymes that displace nucleosomes and alter histone tails. The fact that experimentally observed NFRs are typically wider than the theoretically predicted ones suggest that the TF binding recruits chromatin modifiers which lead to a larger region of nucleosome exclusion than given by the TF binding itself. Thus, feed-back from TF binding to nucleosome modification and ejection as mediated by chromatin remodelers is a major feature that could improve the model's predictions. In summary, the picture that emerges from our study is that the binding of a specific class of TFs determines local recruitment of chromatin remodelers, which then mediate local expulsion of nucleosomes. The latter may further positively feed-back on TF binding and thereby expand and stabilize the nucleosome-free regions. Although this work has focused on yeast, the competition between nucleosomes and TFs for binding DNA may even be more crucial for transcription regulation in higher eukaryotes. For example, in multi-cellular eukaryotes many gene regulatory elements occur in distal enhancers, i. e. local clusters of TF binding sites a few hundred base pairs in length, to which a combination of TFs binds to effect transcription at a promoter that can be hundreds of kilobases away. Recent mapping of enhancers based on chromatin marks has suggested that these enhancers are bound and activated in a highly tissue- and condition-specific manner[51],[52]. An attractive simplified model for such tissue-specific binding is that nucleosomes by default cause DNA to be inaccessible and that TF binding is too weak to access individual TF binding sites. Only in areas where a cluster with many binding sites for precisely that subset of TFs that is highly expressed in the condition will these TFs jointly outcompete the nucleosomes and create a region of DNA accessibility and TF binding, i. e. similar to the qualitative model presented in[53]. We believe that the statistical mechanics model that we have used here, might also be useful to quantitatively investigate such models of enhancer function.",
"role": "user"
},
{
"content": "The DNA of all eukaryotic organisms is packaged into nucleosomes , which cover roughly of the genome . As nucleosome positioning profoundly affects DNA accessibility to other DNA binding proteins such as transcription factors ( TFs ) , it plays an important role in transcription regulation . However , to what extent nucleosome positioning is guided by intrinsic DNA sequence preferences of nucleosomes , and to what extent other DNA binding factors play a role , is currently highly debated . Here we use a rigorous biophysical model to systematically study the relative contributions of intrinsic sequence preferences and competitive binding of TFs to nucleosome positioning in yeast . We find that , on the one hand , the phasing of the many small spacers within dense nucleosome arrays that cover gene bodies are mainly determined by intrinsic sequence preferences . On the other hand , larger nucleosome free regions ( NFRs ) in promoters are explained predominantly by TF binding . Strikingly , we find that only 10–20 TFs make a significant contribution to explaining NFRs , and these TFs are highly enriched for directly interacting with chromatin modifiers . Thus , the picture that emerges is that binding by a specific class of TFs recruits chromatin modifiers which mediate local nucleosome expulsion .",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.4352 | [
{
"content": "Find entities of these types in the text: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nOutput a JSON object with entity types as keys and lists of found entities as values.\n\nText:\nTiming of antibiotic administration in knee replacement under tourniquet . Cephamandole levels in serum and drain fluid were measured in 32 knee replacement operations to determine the benefit of an intravenous dose of antibiotic at the time of tourniquet deflation . Concentrations of cephamandole in drain fluid were directly proportional to the serum concentration at the time of tourniquet release . A 'tourniquet-release ' dose of antibiotic increased drain fluid concentration threefold .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"cephamandole\", \"Cephamandole\", \"antibiotic administration\", \"\\u0027tourniquet-release \\u0027 dose of antibiotic\"], \"Outcome\": [\"cephamandole in drain fluid\", \"Cephamandole levels in serum and drain fluid\", \"drain fluid concentration\"], \"Participant\": [\"knee replacement under tourniquet .\", \"32\", \"knee replacement operations\"]}",
"role": "assistant"
}
] |
science.craftchem_ner | science.craftchem_ner.6627 | [
{
"content": "Extract all chemical entities (ChEBI) from this biomedical text. Output as JSON array.\n\nText: MEF-mouseembryofibroblast",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.26891 | [
{
"content": "Write an abstract for the research paper below.\n\nPaper text:\nall family practitioners who listed obstetrics as a secondary specialty and a 20% random sample of the remaining licensed family practitioners were surveyed . in 1998 , surveys were mailed to a random sample of 50% of obstetricians and 25% of family practitioners who indicated on their licensure application that they provided prenatal care .\nstatistical analysis was performed with epiinfo software ( centers for disease control and prevention , atlanta , ga , usa ) .\na total of 463 surveys ( 60% of those mailed ) were completed in 2002 , and 515 surveys ( 80% of those mailed ) were completed in 1998 .\nthe final sample included 97 midwives , 189 obstetricians , and 64 family practitioners in 2002 and 102 midwives , 128 obstetricians , and 201 family practitioners in 1998 .\nno significant differences were found in provider characteristics ( location , practice type , and number of deliveries performed ) from 1998 to 2002 . in 2002 ,\nof these , 318 ( 91% [ 96% of obstetricians , 92% of midwives , and 73% of family practitioners ] ) indicated their policy was based upon at least 1 previously published guideline .\nfamily practitioners ( p<0.05 ) and midwives ( p<0.05 ) were significantly more likely to follow published guidelines during 2002 than during 1998 . in 1998 , the risk - based approach was the most common method of preventing gbs disease ( table1 ) . in 2002 ,\nthe screening - based approach was the most common method . in 2002 , providers were significantly more likely to have adopted a screening - based approach to prevention than they were in 1998 ( p<0.001 ) . in 2002 , when risk - based providers were questioned , 14 ( 52% ) of 27 midwives , 5 ( 50% ) of 10 family practitioners , and 6 ( 32% ) of 19 obstetricians indicated they planned to implement the new guidelines . *\np<0.005 , change from 1998 to 2002 among all prenatal care provider groups . in 2002 , among those who reported a screening - based approach , 262 ( 89% ) of 293 providers routinely collected specimens from both vaginal and rectal sites .\nmidwives ( 97% ) were more likely than obstetricians ( 90% ) and family practitioners ( 77% ) to collect specimens from both sites .\nin 2002 , midwives ( p<0.001 ) were significantly more likely to use both vaginal and rectal sites to screen for gbs than in 1998 .\nno significant increase was seen in the proportion of obstetricians or family practitioners who screened vaginal and rectal specimens from 1998 to 2002 ( table 2 ) .\nprenatal care providers were asked if their laboratory used selective broth to isolate gbs . \np<0.01 , penicillin as first choice for iap from 1998 to 2002 . among providers who used a screening - based approach to prevent perinatal gbs infection in 2002 ,\nno change was seen in the proportion of providers who screened at 3537 weeks of gestation when responses from the 1998 and 2002 surveys were compared ( table 2 ) . in 2002 , when providers were asked if their laboratories used a selective broth to isolate gbs , 171 ( 58% ) of 293 indicated that they did .\nobstetricians were significantly more likely than midwives and family practitioners to report selective broth use in their laboratories .\nobstetricians ( p<0.001 ) were significantly more likely to report that their laboratory used selective broth in 2002 than in 1998 ( table 2 ) .\nlittle change was seen among midwives and family practitioners regarding their knowledge of selective broth use from 1998 to 2002 . 
in 2002 , a total of 225 ( 77% ) of 292 providers reported using penicillin most often for iap .\nmidwives ( p<0.01 ) were significantly more likely to use penicillin in 2002 than they were in 1998 .\nlittle change was seen in the proportion of family practitioners and obstetricians who used penicillin in 1998 versus 2002 ( table 2 ) .\nthe results of this survey suggest that all minnesota providers have adopted a policy on preventing perinatal gbs disease , and most follow established , published guidelines . in 1998 , a risk - based approach to gbs disease prevention was the most common strategy identified by providers . in 2002 , screening all pregnant women for gbs was the predominant strategy .\nthe effectiveness of the screening - based approach depends partly on the sensitivity and specificity of the specimens collected . a previous study by philipson et al .\nindicated that swabbing both vaginal and rectal sites significantly increased the sensitivity of isolating gbs compared with swabbing the vagina only ( 8) . in our study ,\n89% of minnesota providers indicated they routinely collected specimens for gbs screening from both vaginal and rectal sites . because vaginal and rectal swabs are likely to yield diverse bacteria , selective broth is recommended to limit growth of other organisms , thus increasing the chance of isolating gbs ( 9 ) . in a study by silver and struminski , 32% of women\nhad false - negative culture results when direct agar plating was used instead of selective broth to isolate gbs ( 10 ) . in our study\n, most obstetricians ( 72% ) indicated that their laboratories used selective broth ; however , less than half of midwives and family practitioners reported using selective broth . many providers ( 41% )\na recent survey found that 89% of laboratories that process gbs specimens use selective enrichment broth media for gbs isolation ( 11 ) .\npreliminary data from a 2004 survey of laboratories in minnesota indicated that 92% of laboratories use a selective enrichment broth media to isolate gbs ( minnesota department of health , unpub .\nis more likely to detect women who are colonized when they deliver , compared to screening at an earlier stage of a woman 's pregnancy . in 2002 , most ( 88% ) providers who reported a screening - based approach to perinatal gbs disease prevention obtained cultures at 3537 weeks of gestation .\nresearch in the 1980s showed that administering antimicrobial prophylaxis to women who are colonized with gbs was effective in preventing disease in newborns .\nampicillin , a broader - spectrum agent , is considered an acceptable alternative . 
in our study , > 80% of obstetricians and midwives reported using penicillin as their first choice for iap .\nalthough family practitioners were significantly more likely to use penicillin in 2002 than in 1998 , only 51% of family practitioners listed penicillin as their first choice .\nfirst , the survey was conducted only among minnesota providers , so the results may not be generalized to other states .\nthis decrease is most likely explained by a sampling change in which a greater proportion of family practitioners with a history of providing prenatal care were sampled in 1998 than in 2002 .\nwe suspect that most family practitioners who failed to complete the survey in 2002 did so because they did not provide prenatal care .\nwhen characteristics of responders in 1998 and 2002 were compared , no significant differences were noted regarding location of practice , practice type , size of practice , and median number of deliveries performed .\nfinally , surveys are measures of reported practices and may not reflect actual services provided .\nprenatal care providers , especially family practitioners , should continue to discuss and establish policies regarding perinatal gbs disease prevention .\neducation on using selective broth medium to isolate gbs should be provided to clinicians and laboratories .\nin addition , clinicians should be familiar with the appropriate antimicrobial agents used for iap and ensure rapid drug administration when it is indicated .",
"role": "user"
},
{
"content": "in 2002 , revised guidelines for preventing perinatal group b streptococcal disease were published . in 2002 , \n all minnesota providers surveyed reported using a prevention policy . \n most screen vaginal and rectal specimens at 3437 weeks of gestation . \n the use of screening - based methods has increased dramatically since 1998 .",
"role": "assistant"
}
] |
science.mslr2022_ms2_multidoc_summarization | science.mslr2022_ms2_multidoc_summarization.8636 | [
{
"content": "Based on the RCT reports below, write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article.\n\n\n\nTitle 1:\nEffects of a Parenting Intervention on Features of Psychopathy in Children\nAbstract 1:\nThis study examined whether Project Support , a parenting intervention shown to reduce child conduct problems , also exerts positive effects on features of psychopathy in children . Participants were 66 families ( mothers and children ) recruited from domestic violence shelters who participated in a r and omized controlled trial evaluating Project Support . Each family included at least one child between the ages of 4 and 9 who was exhibiting clinical levels of conduct problems . Families were r and omly assigned to the Project Support intervention condition or to an existing services comparison condition , and they were assessed on 6 occasions over 20 months , following their departure from the shelter . Children in families in the Project Support condition , compared with those in the comparison condition , exhibited greater reductions in features of psychopathy . Moreover , the changes in features of psychopathy remained after accounting for changes in conduct problems . Project Support ’s effects on features of psychopathy were mediated by improvements in mothers ’ harsh and inconsistent parenting . These findings on the effects of an intervention on features of psychopathy are the first from a r and omized controlled trial . They inform the debate about whether features of psychopathy in children are responsive to intervention , and hold important implication s for clinical practice\n\n\nTitle 2:\nThe impact of prenatal maternal risk, fearless temperament and early parenting on adolescent callous-unemotional traits: a 14-year longitudinal investigation.\nAbstract 2:\nOBJECTIVE Proposals have been su bmi tted to the DSM-V for the addition of a callous-unemotional ( CU ) specifier for conduct problem ( CP ) youth ( CP/CU ) . While the addition of such a diagnostic category may aid in the identification of homogeneous CP subtypes , evidence on risks for the development of CP/CU remains limited . The present study sought to examine the extent to which CP/CU in early adolescence could be differentiated by family- and child-based risks from pregnancy to age 4 years . METHOD Using data from approximately 7,000 mothers and their offspring ( 51 % male ) participating in the Avon Longitudinal Study of Parents and Children , the authors examined maternal prenatal risks ( psychopathology , criminality , substance use ) , child 's fearless temperament ( age 2 years ) and harsh and warm parenting ( age 4 years ) as predictors of CP and CU at age 13 ; then used follow-back analyses to explore pre- and early post-natal risks in more detail . RESULTS Maternal prenatal risks increased fearless temperament and CP and CU . Fearless temperament was also prospect ively associated with higher levels of early adolescent CP and CU , above and beyond parenting and prenatal maternal risks . Follow-back analyses showed fearless temperament in boys manifested as lower response to punishment cues , while for girls this temperament was indexed by boldness toward novel situations and strangers , particularly for CP/CU youth . CONCLUSIONS The current findings suggest that ( i ) maternal prenatal risks and fearless temperament showed a dose-response relationship with CP and CU ( i.e. 
, higher clustering of risks tended to relate to both higher levels and the co-occurrence of CU with CP ) , and ( ii ) intervention programs that aim to improve behavioural outcomes may consider targeting specific temperamental features in both boys and girls\n\n\nTitle 3:\nPromoting self-regulation and cooperation in pre-kindergarten children with conduct problems: a randomized controlled trial.\nAbstract 3:\nOBJECTIVE To evaluate the effectiveness and mechanisms of Hitkashrut , a \" common elements \" co-parent training ( PT ) program for early intervention with preschoolers ( 3 - 5 years of age ) at risk for conduct problems ( CP ) . METHOD A r and omized controlled trial with 140 participants in PT and 69 in a minimal intervention control group . The primary outcome ( presence of CP ) was assessed at pre-intervention , post-intervention , and 1-year follow-up . Callous-unemotional traits , effortful control , parental distress , negative/inconsistent parenting , and couple relationship quality were assessed at pre- and post-intervention , whereas callous-unemotional traits and effortful control were also assessed in the PT group at follow-up . Retention was 87.1 % at post-intervention and 60 % at follow-up . Hitkashrut incorporated evidence -based components of established PT programs into a culturally adapted protocol . The facilitators were trained and supervised psychologists working in Israel 's Educational Psychology Services . RESULTS Intervention effect ( Cohen 's d ) was large at post-intervention ( ES = .76 , p < .001 ) , and an additional medium effect was found at follow-up ( ES = 0.63 , p < .001 ) . Parallel post-intervention and follow-up intervention effects were also found in the degree of clinical ly significant improvements . Effects of the intervention on callous-unemotional traits ( ES = 0.85 , p < .001 ) and effortful control ( ES = 0.47 , p < .001 ) were maintained at follow-up . Follow-up improvement in CP was mediated by improvements in parental practice s and distress . CONCLUSIONS Hitkashrut 's implementation and subsequent dissemination in real-world setting s demonstrates the potential effectiveness of common elements programs to promote innovations within service-delivery systems . Improvements in dispositional variables and the mediated follow-up effect support theoretical cascade models that emphasize early developmental malleability and the growing preventive effects of PT 's facilitated parental changes on disruptive developmental trajectories . Clinical trial registration information - The effect of a \" common elements \" co-parent training program ( called ' Hitkashrut ' ) on conduct in preschoolers at risk for conduct problems ; http://www.anzctr.org.au ; ACTRN12612000148875\n\n\nTitle 4:\nDo Childhood Callous-Unemotional Traits Drive Change in Parenting Practices?\nAbstract 4:\nThis study examined the relationship between callous-unemotional ( CU ) traits and parenting practice s over time in a mixed-sex community cohort ( N = 1,008 ; 52.6 % boys ) , aged 3 to 10 years ( M = 6.5 , SD = 1.3 ) . Measures of CU traits , externalizing psychopathology , parenting practice s , and socioeconomic risk factors were collected at baseline , and parenting practice s and CU traits were reassessed at 12-month follow-up . CU traits uniquely accounted for change in three domains of parenting ( inconsistent discipline , punishment , and parental involvement ) . 
Likewise , multiple domains of parenting ( positive parenting , parental involvement , and poor monitoring/supervision ) uniquely predicted change in CU traits . These seemingly bidirectional dynamics between CU traits and parenting were found to be largely moderated by child age and sex . Results partially replicate previous findings regarding the association between quality of parenting and prospect i ve change in CU traits , and provide initial evidence that CU traits disrupt parenting practice s over time\n\n\nTitle 5:\nCommunity vs. Clinic-Based Modular Treatment of Children with Early-Onset ODD or CD: A Clinical Trial with 3-Year Follow-Up\nAbstract 5:\nThis study examines the treatment outcomes of 139 , 6–11 year-old , clinical ly referred boys and girls diagnosed with Oppositional Defiant Disorder ( ODD ) or Conduct Disorder ( CD ) who were r and omly assigned to a modular-based treatment protocol that was applied by research study clinicians either in the community ( COMM ) or a clinic office ( CLINIC ) . To examine normative comparisons , a matched sample of 69 healthy control children was included . Multiple informants completed diagnostic interviews and self-reports at six assessment timepoints ( pretreatment to 3-year follow-up ) to evaluate changes in the child ’s behavioral and emotional problems , psychopathic features , functional impairment , diagnostic status , and service involvement . Using HLM and logistic regression models , COMM and CLINIC showed significant and comparable improvements on all outcomes . By 3-year follow-up , 36 % of COMM and 47 % of CLINIC patients no longer met criteria for either ODD or CD , and 48 % and 57 % of the children in these two respective conditions had levels of parent-rated externalizing behavior problems in the normal range . We discuss the nature and implication s of these novel findings regarding the role of treatment context or setting for the treatment and long-term outcome of behavior disorders\n\n\nTitle 6:\nDimensions of callousness in early childhood: links to problem behavior and family intervention effectiveness.\nAbstract 6:\nThis study examined dimensions of callous behaviors in early childhood and the role of these behaviors in the development of conduct problems , as well as responsiveness to a family-centered preventative intervention . Caregiver reports of callous behaviors were examined using exploratory and confirmatory factor analysis . Problem behavior was examined using within- and cross-informant reports of these behaviors . Parenting was measured using observational methods within the context of a r and omized control trial of the Family Check-Up with a sample of 731 ethnically diverse boys and girls ( followed from ages 2 to 4 ) at high risk for later conduct problems . Results demonstrated that a measure of deceitful-callous ( DC ) behaviors had acceptable factor loadings and internal consistency at ages 3 and 4 . DC behaviors at age 3 predicted problem behavior concurrently and longitudinally within and across informant . However , DC behaviors did not reduce the effectiveness of the family preventative intervention . 
These findings have implications for our understanding of behaviors that may precede later callous-unemotional traits and for our understanding of the development and prevention of early starting conduct problems\n\n\nTitle 7:\nThe family check-up with high-risk indigent families: preventing problem behavior by increasing parents' positive behavior support in early childhood.\nAbstract 7:\nSeven hundred thirty-one income-eligible families in 3 geographical regions who were enrolled in a national food supplement program were screened and randomized to a brief family intervention . At child ages 2 and 3 , the intervention group caregivers were offered the Family Check-Up and linked parenting support services . Latent growth models on caregiver reports at child ages 2 , 3 , and 4 revealed decreased behavior problems when compared with the control group . Intervention effects occurred predominantly among families reporting high levels of problem behavior at child age 2 . Families in the intervention condition improved on direct observation measures of caregivers ' positive behavior support at child ages 2 and 3 ; improvements in positive behavior support mediated improvements in children 's early problem behavior",
"role": "user"
},
{
"content": "The results suggest that dimensions of parenting are prospect ively related to changes in CU traits .\nSubgroups of youth with both high levels of CU traits and AB also appear to have experienced negative parenting practice s. However , negative parenting is not consistently related to AB in cross-sectional studies for youth with high levels of CU traits .\nAt the same time , parenting-focused interventions appear effective in reducing the level of AB and CU traits in youth .",
"role": "assistant"
}
] |
science.healthver_entailment | science.healthver_entailment.4622 | [
{
"content": "Output a JSON object with \"verdict\" and \"evidence\" fields based on the claim and abstract below.\n\nClaim: nearly 90% of patients hospitalized with coronavirus (COVID-19) had one or more underlying health conditions.\n\nAbstract:\nOBJECTIVE: To investigate the association between hypertension and outcome in patients with Coronavirus Disease 2019 (COVID-19) pneumonia.\nMETHODS: We performed a systematic literature search from several databases on studies that assess hypertension and outcome in COVID-19.\nComposite of poor outcome, comprising of mortality, severe COVID-19, acute respiratory distress syndrome (ARDS), need for intensive care unit (ICU) care and disease progression were the outcomes of interest.\nRESULTS: A total of 6560 patients were pooled from 30 studies.\nHypertension was associated with increased composite poor outcome (risk ratio (RR) 2.11 (95% confidence interval (CI) 1.85, 2.40), p < 0.001; I2, 44%) and its sub-group, including mortality (RR 2.21 (1.74, 2.81), p < 0.001; I2, 66%), severe COVID-19 (RR 2.04 (1.69, 2.47), p < 0.001; I2 31%), ARDS (RR 1.64 (1.11, 2.43), p = 0.01; I2,0%, p = 0.35), ICU care (RR 2.11 (1.34, 3.33), p = 0.001; I2 18%, p = 0.30), and disease progression (RR 3.01 (1.51, 5.99), p = 0.002; I2 0%, p = 0.55).\nMeta-regression analysis showed that gender (p = 0.013) was a covariate that affects the association.\nThe association was stronger in studies with a percentage of males < 55% compared to ⩾ 55% (RR 2.32 v. RR 1.79).\nCONCLUSION: Hypertension was associated with increased composite poor outcome, including mortality, severe COVID-19, ARDS, need for ICU care and disease progression in patients with COVID-19.",
"role": "user"
},
{
"content": "{\"verdict\": \"NEI\", \"evidence\": []}",
"role": "assistant"
}
] |
science.scitldr_aic | science.scitldr_aic.1478 | [
{
"content": "Summarize the paper:\n\nPaper text:\n\nWe introduce the Convolutional Conditional Neural Process (ConvCNP), a new member of the Neural Process family that models translation equivariance in the data.\nTranslation equivariance is an important inductive bias for many learning problems including time series modelling, spatial data, and images.\nThe model embeds data sets into an infinite-dimensional function space, as opposed to finite-dimensional vector spaces.\nTo formalize this notion, we extend the theory of neural representations of sets to include functional representations, and demonstrate that any translation-equivariant embedding can be represented using a convolutional deep-set.\nWe evaluate ConvCNPs in several settings, demonstrating that they achieve state-of-the-art performance compared to existing NPs.\nWe demonstrate that building in translation equivariance enables zero-shot generalization to challenging, out-of-domain tasks.\nNeural Processes (NPs; Garnelo et al., 2018b; a) are a rich class of models that define a conditional distribution p(y|x, Z, θ) over output variables y given input variables x, parameters θ, and a set of observed data points in a context set Z = {x m , y m } M m=1 .\nA key component of NPs is the embedding of context sets Z into a representation space through an encoder Z → E(Z), which is achieved using a DEEPSETS function approximator (Zaheer et al., 2017 ).\nThis simple model specification allows NPs to be used for\n(i) meta-learning (Thrun & Pratt, 2012; Schmidhuber, 1987) , since predictions can be generated on the fly from new context sets at test time; and\n(ii) multi-task or transfer learning (Requeima et al., 2019) , since they provide a natural way of sharing information between data sets.\nMoreover, conditional NPs (CNPs; Garnelo et al., 2018a) , a deterministic variant of NPs, can be trained in a particularly simple way with maximum likelihood learning of the parameters θ, which mimics how the system is used at test time, leading to strong performance (Gordon et al., 2019) .\nNatural application areas of NPs include time series, spatial data, and images with missing values.\nConsequently, such domains have been used extensively to benchmark current NPs (Garnelo et al., 2018a; b; Kim et al., 2019) .\nOften, ideal solutions to prediction problems in such domains should be translation equivariant: if the data are translated in time or space, then the predictions should be translated correspondingly (Kondor & Trivedi, 2018; Cohen & Welling, 2016) .\nThis relates to the notion of stationarity.\nAs such, NPs would ideally have translation equivariance built directly into the modelling assumptions as an inductive bias.\nUnfortunately, current NP models must learn this structure from the data set instead, which is sample and parameter inefficient as well as impacting the ability of the models to generalize.\nThe goal of this paper is to build translation equivariance into NPs.\nFamously, convolutional neural networks (CNNs) added translation equivariance to standard multilayer perceptrons (LeCun et al., 1998; Cohen & Welling, 2016) .\nHowever, it is not straightforward to generalize NPs in an analogous way:\n(i) CNNs require data to live \"on the grid\" (e.g. image pixels form a regularly spaced grid), while many of the above domains have data that live \"off the grid\" (e.g. 
time series data may be observed irregularly at any time t ∈ R).\n(ii) NPs operate on partially observed context sets whereas CNNs typically do not.\n(iii) NPs rely on embedding sets into a finite-dimensional vector space for which the notion of equivariance with respect to input translations is not natural, as we detail in Section 3.\nIn this work, we introduce the CONVCNP, a new member of the NP family that accounts for translation equivariance.\n1 This is achieved by extending the theory of learning on sets to include functional representations, which in turn can be used to express any translation-equivariant NP model.\nOur key contributions can be summarized as follows.\n(i) We provide a representation theorem for translation-equivariant functions on sets, extending a key result of Zaheer et al. (2017) to functional embeddings, including sets of varying size.\n(ii) We extend the NP family of models to include translation equivariance.\n(iii) We evaluate the CONVCNP and demonstrate that it exhibits excellent performance on several synthetic and real-world benchmarks.\nWe have introduced CONVCNP, a new member of the CNP family that leverages embedding sets into function space to achieve translation equivariance.\nThe relationship to\n(i) the NP family, and\n(ii) representing functions on sets, each imply extensions and avenues for future work.\nDeep sets.\nTwo key issues in the existing theory on learning with sets (Zaheer et al., 2017; Qi et al., 2017a; Wagstaff et al., 2019) are\n(i) the restriction to fixed-size sets, and\n(ii) that the dimensionality of the embedding space must be no less than the cardinality of the embedded sets.\nOur work implies that by considering appropriate embeddings into a function space, both issues are alleviated.\nIn future work, we aim to further this analysis and formalize it in a more general context.\nPoint-cloud models.\nAnother line of related research focuses on 3D point-cloud modelling (Qi et al., 2017a; b) .\nWhile original work focused on permutation invariance (Qi et al., 2017a; Zaheer et al., 2017) , more recent work has considered translation equivariance as well (Wu et al., 2019) , leading to a model closely resembling CONVDEEPSETS.\nThe key differences with our work are the following: (i) Wu et al. (2019) Correlated samples and consistency under marginalization.\nIn the predictive distribution of CON-VCNP (Equation (2)), predicted ys are conditionally independent given the context set.\nConsequently, samples from the predictive distribution lack correlations and appear noisy.\nOne solution is to instead define the predictive distribution in an autoregressive way, like e.g. PixelCNN++ (Salimans et al., 2017) .\nAlthough samples are now correlated, the quality of the samples depends on the order in which the points are sampled.\nMoreover, the predicted ys are then not consistent under marginalization (Garnelo et al., 2018b; Kim et al., 2019) .\nConsistency under marginalization is more generally an issue for neural autoregressive models (Salimans et al., 2017; Parmar et al., 2018) , although consistent variants have been devised (Louizos et al., 2019) .\nTo overcome the consistency issue for CONVCNP, exchangeable neural process models (e.g. 
Korshunova et al., 2018; Louizos et al., 2019) may provide an interesting avenue.\nAnother way to introduce dependencies between ys is to employ latent variables as is done in neural processes (Garnelo et al., 2018b) .\nHowever, such an approach only achieves conditional consistency: given a context set, the predicted ys will be dependent and consistent under marginalization, but this does not lead to a consistent joint model that also includes the context set itself.\nFor each dataset, an image is randomly sampled, the first row shows the given context points while the second is the mean of the estimated conditional distribution.\nFrom left to right the first seven columns correspond to a context set with 3, 1%, 5%, 10%, 20%, 30%, 50%, 100% randomly sampled context points.\nIn the last two columns, the context sets respectively contain all the pixels in the left and top half of the image.\nCONVCNPXL is shown for all datasets besides ZSMM, for which we show the fully translation equivariant CONVCNP.",
"role": "user"
},
{
"content": "We extend deep sets to functional embeddings and Neural Processes to include translation equivariant members",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.30108 | [
{
"content": "Write an abstract for the research paper below.\n\nPaper text:\nthe estimated low effectiveness of malaria vaccine and dengue vaccine in the field 1,2 have disappointed a certain fraction of vaccine enthusiasts . both vaccines offered\nsome hope in advance of the field studies , but clinical protection in infants and other groups appears to be lower than the earlier expectation based on laboratory experiments and smaller - scale clinical studies among older individuals . as part of lessons to be learnt from these unfavorable outcomes ( and from success in earlier vaccines ) , there is a growing need to clarify what risk factors determine successful and unsuccessful vaccines 3 .\nunderstanding common biological features among the existing ( recommended ) childhood vaccines , in contrast to recent unsuccessful vaccines , would be useful for designing optimal avenues for future vaccine development . here\nwe report the results from a simple epidemiological analysis which aimed to identify virological , pathophysiological and eco - evolutionary factors that may lead to a successful vaccine .\nthere are 14 infectious diseases against which vaccination among children aged from 0 - 6 years is recommended in the usa 4 , including chickenpox , diphtheria , haemophilus influenzae infection , hepatitis a virus infection , hepatitis b virus infection , influenza , measles , mumps , pertussis , polio , pneumococcal disease , rotavirus infection , rubella , and tetanus . on the contrary ,\nfour notable diseases without a promising vaccine include hiv / aids , tuberculosis , malaria and dengue virus infection 3,5 . in many instances ,\nthe difficulty in developing vaccines against these four infectious diseases has been implicitly attributed to two biological factors : ( i ) broad antigenic diversity ( except for tuberculosis ) and ( ii ) the limited role of humoral immunity and the requirement of t - cell immunity for protection 5 .\nthe existence of two distinct groups of diseases ( i.e. with and without successful vaccine ) provides us with an opportunity to implement a case - control analysis that helps identify epidemiological determinants of success in vaccine development . for each pathogen\n, we scale the abovementioned two factors by considering the following five dichotomous variables that are measurable in practice . as an explanatory variable that would partially reflect antigenic stability\n, one can examine if a disease is caused by ( i ) an rna virus or protozoa that is known to experience rapid evolution . as an indirect measure of evolutionary pressure , ( ii ) the existence of asymptomatic infection could act as a complementary factor that potentially mirrors the antigenic diversity .\nthat is , if an infection involves a substantial fraction of asymptomatic infections , the infection is unlikely to impose strong clinical selection pressure , and thus , we regard it as reflecting low pathogenicity ( or low virulence ) or the infection caused by an evolved pathogen .\nthird , ( iii ) a high affinity to cells in blood and immune systems is considered as a variable that reflects the limited protection by humoral immunity ( e.g. mycobacterium tuberculosis in phagocytic immune cells can escape from humoral immune response ) . moreover , ( iv ) the transmission via a common route ( i.e. 
respiratory or through direct contact ) is also considered as inversely associated with the requirement of cellular immunity , because the establishment of infection via the common route involves many opportunities of humoral immune response .\nfifth , ( v ) the existence of interactions between different pathogens , including enhancement within an identical pathogen species and epidemiological interference between different pathogens , is considered , because the interaction can be consistent with both broad antigenic diversity ( due to a rich multstrain evolutionary dynamics ) 6 and also with the requirement of cell - mediated immunity ( e.g. non - specific temporal immunity shortly after influenza virus infection is believed not to be humoral 7 ) .\nboth the outcome variable ( i.e. with or without successful vaccine ) and explanatory variables were dealt with as dichotomous . to examine statistical association between two dichotomous variables , we employed fisher 's exact test .\nsubsequently , to address potential confounding effect , a multiple logistic regression model was used to identify factors that are associated with our outcome variable .\na forward stepwise method was employed using the minimum corrected akaike information criterion ( aicc ) to choose the best model .\nall statistical data were analyzed using a statistical software jmp version 9.0.0 ( sas institute inc . ,\na total of 18 samples of infectious diseases , including 14 diseases with successful vaccines that are routinely adopted as recommended ones and 4 diseases without a successful vaccine , were examined .\ntable 1 shows the results of univariate and multivariate analyses . in both analyses , only two factors , i.e. ,\na high affinity to blood and immune cells and the existence of interactions between different pathogens were positively and significantly associated with diseases without a successful vaccine .\nmultivariate logistic model with step - wise method also identified these two variables to be left in the final model ( table 1 ) .\nadjusted odds ratios of diseases without successful vaccine for high affinity to blood / immune cells and interactions between pathogens were as large as 1.210 ( 95% confidence interval ( ci ) : 9.8 , ) and 4.910 ( 95% ci : 2.2 , ) , respectively .\nneither rna - virus / protozoa nor the existence of asymptomatic transmission appeared to be a significant predictor of infectious diseases without a successful vaccine ( p=0.577 and 0.082 , respectively ) .\nalthough our epidemiological analysis has rested on crude analysis , and the finding is likely vulnerable to confounding effect , table 1 critically shows that , rather than being an rna virus or protozoa , other markers ( i.e. high affinity with blood / immune cells and pathogen interactions ) are more likely to predict diseases without a promising vaccine .\nfirst , our epidemiological exercise has permitted us to identify a research gap in vaccine development .\nnamely , while studying antigenic evolution 8 and system biologic approach to vaccinology 3,5 is of utmost importance to address limited protective effect of recent vaccines , it is vital to understand the disease characteristics that prevent us to develop a promising vaccine 7,9 - 11 .\nfor instance , to understand the mechanism of unsuccessful vaccine development in a rudimentary fashion , it is critical to clarify the reasons for an efficient escape of a pathogen from vaccine - induced immunity among those causing infection in blood and immune cells\n. 
similarly , understanding eco - epidemiological mechanisms that underlie interactions between pathogens would play key roles in disentangling the complexity of the protective effect of vaccination in field studies ( e.g. clarification on this point may help understand complex immunological response among vaccinated individuals in natural settings ) .\nsecond , the abovementioned explanatory factors can identify diseases without a successful vaccine , and thus , allow us to objectively assess the feasibility of introducing a forthcoming future vaccine in advance of field studies .\nsuch a predictive model can also be revised and updated by considering other ( useful ) explanatory factors and adding other diseases of public health interest to the samples . despite our identification of the potential epidemiological determinants of successful vaccine ,\nthe abovementioned model was only aimed to learn from the existing armament of childhood vaccines . as long as truly novel approaches ( e.g. adjuvants , new regimes of immunization and novel antigens )\nare effectively used for developing a new vaccine , future vaccine development may be granted an opportunity to refute the simple expectation .",
"role": "user"
},
{
"content": "epidemiological determinants of successful vaccine development were explored using measurable biological variables including antigenic stability and requirement of t - cell immunity . employing a logistic regression model , \n we demonstrate that a high affinity with blood and immune cells and pathogen interactions ( e.g. interference ) would be the risk factors of failure for vaccine development .",
"role": "assistant"
}
] |
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.9420 | [
{
"content": "Write a literature review chapter based on these paper abstracts.\nLiterature Review Title: Principles of Physical Layer Security in Multiuser Wireless Networks: A Survey \nChapter Title: B. Single-Antenna Wiretap Channels Since Wyner \n1. Abstract of Cited Paper (BIB001): Given two discrete memoryless channels (DMC's) with a common input, it is desired to transmit private messages to receiver 1 rate R_{1} and common messages to both receivers at rate R_{o} , while keeping receiver 2 as ignorant of the private messages as possible. Measuring ignorance by equivocation, a single-letter characterization is given of the achievable triples (R_{1},R_{e},R_{o}) where R_{e} is the equivocation rate. Based on this channel coding result, the related source-channel matching problem is also settled. These results generalize those of Wyner on the wiretap channel and of Korner-Marton on the broadcast Channel. \n2. Abstract of Cited Paper (BIB002): Consider the following situation. K data hits are to be encoded into N > K bits and transmitted over a noiseless channel. An intruder can observe a subset of his choice of size µ < N. The encoder is to be designed to maximize the intruder's uncertainty about the data given his N intercepted channel bits, subject to the condition that the intended receiver can recover the K data bits perfectly from the N channel bits. The optimal tradoffs between the parameters K, N, µ. and the intruder's uncertainty H (H is the \"conditional entropy\" of the data given the µ. intercepted channel bits) were found. In particular, it was shown that for µ = N - K, a system exists with H ? K - 1. Thus, for example, when N = 2K and µ = K, it is possible to encode the K data bits into 2K channel bits, so that by looking at any K channel bits, the intruder obtains essentially no information about the data. \n3. Abstract of Cited Paper (BIB003): In this correspondence, we extend the Gaussian wiretap channel model introduced by Leung-Yan-Cheong and Hellman to the Gaussian wiretap channel with side information by introducing additive white Gaussian interference in the main channel, which is available to the encoder in advance. This model is also an extension of the dirty-paper channel introduced by Costa since its main channel is the dirty-paper channel. A perfect-secrecy-achieving coding strategy for the model is proposed. It is used to derive achievable rates with asymptotic perfect secrecy and an achievable rate-equivocation region. The achievable rates with asymptotic perfect-secrecy are then compared to upper and lower bounds. The comparison indicates that the proposed coding strategy is optimal in some cases. \n4. Abstract of Cited Paper (BIB004): We consider the transmission of confidential data over wireless channels with multiple communicating parties. Based on an information-theoretic problem formulation in which two legitimate partners communicate over a quasi-static fading channel and an eavesdropper observes their transmissions through another independent quasi-static fading channel, we define the secrecy capacity in terms of outage probability and provide a complete characterization of the maximum transmission rate at which the eavesdropper is unable to decode any information. 
In sharp contrast with known results for Gaussian wiretap channels (without feedback), our contribution shows that in the presence of fading information-theoretic security is achievable even when the eavesdropper has a better average signal-to-noise ratio (SNR) than the legitimate receiver ? fading thus turns out to be a friend and not a foe. \n5. Abstract of Cited Paper (BIB005): We investigate the achievable secrecy rate with Gaussian random codes when the main channel is an AWGN channel, while the eavesdropper's channel is Rayleigh fading with additive Gaussian noise. Several transmission strategies according to the main channel's relative channel gain are proposed and evaluated. We show that even if the main channel channel gain is arbitrarily worse than the eavesdropper's average channel gain, positive secrecy rate can still be achieved with artificial noise injection and a burst signaling strategy. \n6. Abstract of Cited Paper (BIB006): We consider the secure transmission of information over an ergodic fading channel in the presence of an eavesdropper. Our eavesdropper can be viewed as the wireless counterpart of Wyner's wiretapper. The secrecy capacity of such a system is characterized under the assumption of asymptotically long coherence intervals. We analyze the full channel state information (CSI) case, where the transmitter has access to the channel gains of the legitimate receiver and eavesdropper, and the main channel CSI scenario, where only the legitimate receiver channel gain is known at the transmitter. In each scenario, the secrecy capacity is obtained along with the optimal power and rate allocation strategies. We then propose a low-complexity on/off power allocation strategy that achieves near-optimal performance with only the main channel CSI. More specifically, this scheme is shown to be asymptotically optimal as the average SNR goes to infinity, and interestingly, is shown to attain the secrecy capacity under the full CSI assumption. Remarkably, our results reveal the positive impact of fading on the secrecy capacity and establish the critical role of rate adaptation, based on the main channel CSI, in facilitating secure communications over slow fading channels. \n7. Abstract of Cited Paper (BIB007): This paper considers the compound wiretap channel, which generalizes Wyner's wiretap model to allow the channels to the (legitimate) receiver and to the eavesdropper to take a number of possible states. No matter which states occur, the transmitter guarantees that the receiver decodes its message and that the eavesdropper is kept in full ignorance about the message. The compound wiretap channel can also be viewed as a multicast channel with multiple eavesdroppers, in which the transmitter sends information to all receivers and keeps the information secret from all eavesdroppers. For the discrete memoryless channel, lower and upper bounds on the secrecy capacity are derived. The secrecy capacity is established for the degraded channel and the semideterministic channel with one receiver. The parallel Gaussian channel is further studied. The secrecy capacity and the secrecy degree of freedom (s.d.o. f.) are derived for the degraded case with one receiver. Schemes to achieve the s.d.o. f. for the case with two receivers and two eavesdroppers are constructed to demonstrate the necessity of a prefix channel in encoder design. Finally, the multi-antenna (i.e., MIMO) compound wiretap channel is studied. 
The secrecy capacity is established for the degraded case and an achievable s.d.o. f. is given for the general case. \n8. Abstract of Cited Paper (BIB008): We study the Gaussian wiretap channel with an energy harvesting transmitter which does not have a battery to save energy. In the absence of a battery, the necessary transmission energy is maintained by an i.i.d. energy arrival process. We observe that this channel is an instance of the state-dependent wiretap channel with state available only to the transmitter causally, where the state is the available energy at the transmitter. We prove that the entire capacity-equivocation region can be obtained by single-letter Shannon strategies and its boundary is achieved by input distributions with support set of Lebesgue measure zero.\n\nLiterature Review:",
"role": "user"
},
{
"content": "Early work in the field generally assumed non-fading channels, and knowledge of the (fixed) channel state was presumed at the transmitter. In , bounds on the equivocation rate for Wyner's wiretap channel model with finite code block lengths are derived. Carleial and Hellman considered a special case of Wyner's model where the main channel is noiseless and the wiretap channel is a binary symmetric channel, and analyzed the applicability of systematic linear codes for preserving the secrecy of an arbitrary portion of the transmitted message. For the degraded wiretap channel BIB001 with additive Gaussian noise, and C M and C W as the Shannon capacities of the main and wiretap channels, the essential result for the secrecy capacity C S was the following: Ultimately, it was established that a non-zero secrecy capacity can only be obtained if the eavesdropper's channel is of lower quality than that of the intended recipient. Csiszàr and Körner considered a more general (nondegraded) version of Wyner's wiretap channel in BIB002 , where they obtained a single-letter characterization of the achievable {private message rate, equivocation rate, common message rate}-triple for a two-receiver broadcast channel. For the special case of no common messages, the secrecy capacity was defined as which is achieved by maximizing over all joint probability distributions such that a Markov chain V, X, Y Z is formed, where V is an auxiliary input variable. In BIB004 it was shown that the availability of non-causal side information at the encoder can enhance the achievable secrecy rate region of (5), based on dirty-paper coding arguments. In BIB003 , Ozarow and Wyner studied the type-II wiretap channel, where the main communication channel is noiseless but the wiretapper has access to an arbitrary subset µ of the N coded bits, and optimal tradeoffs between code rate k/N and µ that guaranteed secrecy were characterized. The consideration of channel fading in wiretap channels has recently opened new avenues of research. Works in this area generally assume that at least the statistics of the eavesdropper's fading channel are known to the transmitter. Barros and Rodrigues et al. BIB005 - analyzed the outage probability and outage secrecy capacity of slow fading channels and showed that with fading, information-theoretic security is achievable even when the eavesdropper has a better average SNR than the legitimate receiver. Li et al. BIB006 examined the achievable secrecy rate for an AWGN main channel and a Rayleigh fading eavesdropper's channel with additive Gaussian noise, assuming that the eavesdropper channel realizations are unknown to legitimate transmitter Alice and receiver Bob. The main result of this paper was that with Gaussian random codes, artificial noise injection and power bursting, a positive secrecy rate is achievable even when the main channel is arbitrarily worse than the eavesdropper's average channel. A more exotic scenario was studied in BIB009 where the source has a stochastic power supply based on energy harvesting. Here, the i.i.d. energy arrivals are equated to channel states that are known causally to the source, and the optimal input distribution that attains the boundary of the capacity-equivocation region of the Gaussian wiretap channel was derived. Here, the capacity corresponds to the reliability of the main channel, while the equivocation refers to the normalized conditional entropy at the eavesdropper as described in Sec. II. 
Relatively fewer studies consider the case of a complete absence of eavesdropper CSI at the transmitter in fading wiretap channels. In BIB007 , the authors considered a blockfading scalar wiretap channel where the number of channel uses within each coherence interval is large enough to invoke random coding arguments. This assumption is critical for their achievable coding scheme which attempts to \"hide\" the secure message across different fading states. A recent approach towards understanding the information-theoretic limits of wiretap channels with no eavesdropper CSI has been taken by studying the compound wiretap channel BIB008 . The compound wiretap channel captures the situation in which there is no or incomplete CSI at the transmitter by assuming the eavesdropper's channel is always drawn from a finite, known set of states, and guarantees secure communication under any state that may occur.",
"role": "assistant"
}
] |
science.healthver_entailment | science.healthver_entailment.4137 | [
{
"content": "Output a JSON object with \"verdict\" and \"evidence\" fields based on the claim and abstract below.\n\nClaim: camostat mesylate was active against other coronaviruses, including SARS-CoV. \n\nAbstract:\nDifferent treatments are currently used for clinical management of SARS-CoV-2 infection, but little is known about their efficacy yet.\nHere we present ongoing results to compare currently available drugs for a variety of diseases to find out if they counteract SARS-CoV-2-induced cytopathic effect in vitro.\nOur goal is to prioritize antiviral activity to provide a solid evidence-driven rationale for forthcoming clinical trials.\nSince the most effective antiviral approaches are usually based on combined therapies that tackle the viral life cycle at different stages, we are also testing combinations of drugs that may be critical to reduce the emergence of resistant viruses.\nWe will provide results as soon as they become available, so data should be interpreted with caution, clearly understanding the limitations of the in vitro model, that may not always reflect what could happen in vivo.\nThus, our goal is to test the most active antivirals identified in adequate animal models infected with SARS-CoV-2, to add more information about possible in vivo efficacy.\nIn turn, successful antivirals could be tested in clinical trials as treatments for infected patients, but also as pre-exposure prophylaxis to avoid novel infections until an effective and safe vaccine is developed.",
"role": "user"
},
{
"content": "{\"verdict\": \"NEI\", \"evidence\": []}",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_arxiv | science.scientific_papers_summarization_single_doc_arxiv.66296 | [
{
"content": "Write an abstract for the research paper below. Your response should only include the abstract and no additional text.\n\nArticle text:\nheavy quark production in electron - proton collisions at hera is dominated by boson gluon fusion at leading order ( @xmath0 ) in which a virtual photon coming from the electron vertex interacts with a gluon from the proton to produce a heavy quark pair .\nthere are two main kinematic regimes , deep inelastic scattering ( dis ) for photon virtualities q@xmath1gev@xmath2 and photoproduction ( @xmath3p ) for q@xmath4gev@xmath2 .\nstudying the production of heavy quarks enables comparison of the experimental results to theoretical next to leading order ( nlo ) quantum chromodynamics ( qcd ) calculations .\nthe experimental results can also be compared to leading order ( lo ) plus parton shower monte carlo ( mc ) programmes .\nmeasuring the d@xmath5 cross section at low q@xmath2 is a test of the nlo calculation for charm production in the transition region from dis to @xmath3p .\nq@xmath2 values in the range @xmath6gev@xmath2 are reached by measuring the scattered electron in the beam pipe calorimeter ( bpc ) @xcite .\nthe measured cross section is well described by the predictions of nlo qcd ( figure 1 ) showing that this kinematic region is well understood theoretically .\n[ figa ] [ figb ] charm mesons have long lifetimes enabling them to be tagged via their displaced secondary vertices @xcite . selecting tracks depending on their decay length significance ( s@xmath7 )\ngreatly improves the purity of the signal . for s@xmath8 ,\nthe d@xmath9 signal to background ratio improves by factor of 50 , and 20% of the signal is kept ( figure 2 ) . for events containing a d@xmath5 and two jets in @xmath3p\n, correlations can be studied between the jets to allow detailed comparisons with qcd calculations @xcite .\nx@xmath10 is the fraction of the photon s four momentum , manifest in the two highest p@xmath11 jets , entering the hard dijet subprocess .\nthe nlo calculation describes shape of the data for direct @xmath3p ( x@xmath12 ) , with the data favouring a lower charm mass .\nthe shape is not as well described by the nlo calculation for resolved @xmath3p ( x@xmath13 ) , indicating a need for higher order corrections to the nlo calculations .\nthe lo + parton shower mcs particularly herwig fit the data well for both direct and resolved @xmath3p ( figure 3 ) .\nsimilar results have been found looking at correlations between the d@xmath5 and a jet not containing the d@xmath5 @xcite .\nusing the impact parameter significance of tracks , charm and beauty fractions can be calculated by fitting distributions in different x - q@xmath2 intervals @xcite .\ndifferential cross sections can then be measured and the structure functions f@xmath14 and f@xmath15 evaluated from the expression @xmath16\\end{aligned}\\ ] ] ( and similarly for f@xmath15 ) and plotted in figure 4 .\nthe qcd calculations fit the data reasonably well with scaling violations apparent at low x. [ figd ]\nusing the semileptonic decay of a b meson to a muon with an accompanying jet , the beauty can be separated from charm and light flavours by exploiting the high mass and long lifetime of b mesons . by simultaneously fitting the impact parameter and the relative transverse momentum of the muon to the axis of the associated jet the beauty fraction\ncan be extracted @xcite .\nthe nlo calculation describes the data reasonably well , though h1 have excess of events at low p@xmath17 ( figure 5 ) . 
by selecting events with two muons\nthe background from charm and light flavours is suppressed . separating the sample into high and low mass , isolated and non - isolated\n, like and unlike sign regions further constrains the background @xcite and although this results in low statistics , the data agree with the theory and mc predictions ( figure 6 ) .\nfigure 7 displays a summary of the beauty results from hera showing a good coverage of measurements .\nthere is a tendency of the data to lie above the nlo prediction though measurements with smaller errors are closer to the theory .\nimproved theoretical understanding is needed to include higher orders but also improved precision from the experimental results .\n[ fige ] [ figf ] [ figg ]\nin general the charm results from hera are in good agreement with nlo qcd .\nthe beauty results are not as precise but tend to indicate that higher order theory calculations are needed .",
"role": "user"
},
{
"content": "the most recent beauty and charm results from the zeus and h1 collaborations are presented .",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.12580 | [
{
"content": "Summarize the article in an uncomplicated manner.\n\nTitle: Cross-Serotype Immunity Induced by Immunization with a Conserved Rhinovirus Capsid Protein\nArticle:\nAbstract:\nHuman rhinovirus( RV) infections are the principle cause of common colds and precipitate asthma and COPD exacerbations. There is currently no RV vaccine, largely due to the existence of ∼150 strains. We aimed to define highly conserved areas of the RV proteome and test their usefulness as candidate antigens for a broadly cross-reactive vaccine, using a mouse infection model. Regions of the VP0( VP4+VP2) capsid protein were identified as having high homology across RVs. Immunization with a recombinant VP0 combined with a Th1 promoting adjuvant induced systemic, antigen specific, cross-serotype, cellular and humoral immune responses. Similar cross-reactive responses were observed in the lungs of immunized mice after infection with heterologous RV strains. Immunization enhanced the generation of heterosubtypic neutralizing antibodies and lung memory T cells, and caused more rapid virus clearance. Conserved domains of the RV capsid therefore induce cross-reactive immune responses and represent candidates for a subunit RV vaccine.\nIntroduction:\nHuman rhinovirus( RV) infections are the most frequent cause of the common cold[1] and are highly associated with exacerbations of asthma and COPD[2],[3],[4]. Despite the great disease burden and healthcare costs therefore attributable to RV infections, there is currently neither a vaccine nor specific anti-viral therapy available. The requirements for immunity to RV are poorly understood. Experimental and natural infections induce antibodies which provide some protection against re-infection with the same RV serotype[5],[6],[7]. Intranasal and intramuscular inactivated virus vaccinations similarly induce neutralizing antibodies and provide protection against disease induced with the same RV serotype[8],[9]. There are however greater than 100 serotypes of RV[10], divided into major and minor groups based on receptor usage and A and B groups based on antiviral sensitivity and nucleotide sequence[11],[12], and a further ∼50–60 RV species more recently defined as group C RVs based on sequence data alone[13],[14]. Serological variability amongst RVs therefore means that vaccines designed to generate neutralizing antibodies are unlikely to provide sufficiently broad protection to prevent the frequent infections which occur throughout life. Alternative vaccination strategies based on inducing T cell responses to conserved antigens have been explored for a number of pathogens, including respiratory viruses[15],[16]. An advantage of this approach lies in the ability of T cells to recognize internal virus proteins which are typically more highly conserved than surface exposed regions containing neutralizing antibody epitopes. T cells are therefore potentially cross-reactive against different virus strains, as has been shown with influenza viruses[17],[18], for which surface antigenic variability is also an obstacle to effective vaccine design. For RVs, naturally occurring memory T cells can be cross-serotype responsive[19],[20] and immunization with RV peptides has been suggested to be capable of inducing cross-serotype reactive T cells in mice[21]. Most of the naturally occurring RV-specific memory T cells characterized to date have shown a Th1/Tc1 bias[19],[20]. 
In vitro responses to RV by mixed PBMCs have been associated with virus shedding or cold symptoms after subsequent infection[22] but there is no evidence that naturally occurring RV-specific memory T cells specifically provide benefit in terms of virus control or disease symptoms in vivo. Here we show that a vaccine composition which elicits a Th1/Tc1 biased T cell response to conserved RV antigens could have efficacy. We took a bioinformatic approach to identify regions of the RV polyprotein which are conserved across A and B group and major and minor receptor binding group viruses, and which might be used as immunogens in a cross-reactive vaccine. As in similar analyses by others[11], we show that areas of the capsid VP0 protein are highly conserved amongst RVs. Immunization with VP0 protein from major group RV16 combined with Th1 promoting adjuvants induced antigen-specific, type I orientated T cell responses in the airways, enhanced neutralizing antibody responses to infection and caused a more rapid decrease in lung virus load in mice. Importantly, these effects were seen in mice infected with heterologous RV strains, indicating that capsid protein immunization could provide broadly cross-reactive immunity against RVs.\nDiscussion:\nThe unmet medical need attributable to RV infections is enormous but serotypic heterogeneity represents a major barrier to the development of an RV vaccine. We therefore identified regions of the RV polyprotein which are highly conserved amongst RVs to select potential constituents of a broadly cross-reactive subunit vaccine and tested their efficacy in a mouse model. We found that domains of the VP4 and VP2( VP0) capsid proteins were highly conserved across A and B species RVs. Immunization with recombinant RV16 VP0 protein increased the magnitude of airway T cell, especially CD4+ T cell, responses to infection consistent with the recruitment to and expansion of immunization-induced memory T cells in the airways. Although the CD4+ T cell dominance of this response contrasts with the prominent CD8+ CTL responses characteristic of other respiratory virus infections[24],[25],[26], there is evidence to suggest this is representative of naturally occurring RV infection[20],[27]. CD4+ T cells provide B cell help and can also possess direct cytotoxic effector function similar to CD8+ CTL and could therefore have both direct and indirect roles in RV control[28],[29],[30],[31]. The observed increases in airway T cell number in immunized and infected mice might in part be explained by the enhanced levels of the T cell recruiting chemokine CXCL10 measured in BAL 24 hrs after infection. Locally induced or systemically transferred memory T cells have previously been shown to increase airway innate immune mediators after influenza challenge via both IFN-γ dependent and independent mechanisms[32]. CXCL10 is an interferon inducible gene and in our studies the increase in CXCL10 might be explained by the enhanced levels of IFN-γ in the lungs of immunized and infected mice at the same timepoint after infection. There is limited data available regarding T cell polarization during RV infections in humans. In the mouse model little T cell cytokine response was measurable in the airways of infected and adjuvant treated mice, but by combining Th1 promoting adjuvants with the VP0 immunogen we observed a strong type I response to RV challenge and an acceleration of rhinovirus clearance. 
This is the first clear evidence that such enhancement of type I polarized T cell responses to RV provides benefit in terms of virus control[19],[22],[33]. In addition, asthmatics are a major target group for RV vaccination and Th1/Tc1 responses may also suppress type 2 responses which are associated with increased disease severity during experimental RV-induced disease exacerbations in atopic asthmatics[33], a hypothesis which can now be tested by utilising the mouse RV-induced asthma exacerbation model we have described previously[23]. A key requirement for an RV vaccine is broad cross-reactivity against the ∼150 strains. Human memory CD4+ T cells specific for conserved influenza proteins have been demonstrated to be cross-subtype responsive[17],[18] and we hypothesized that immunization with conserved RV proteins might induce similarly cross-reactive cells. We found that RV16 VP0 immunization induced systemic T cells that were responsive to VP0 peptides from heterologous group A and group B RV serotypes. Following subsequent challenge, cells recovered from the lungs were reactive to the RV16 derived immunogen, to heterologous group A live viruses with which mice were infected and to group B RV VP0 peptides. This cross-reactivity likely represents the recognition of conserved epitopes within VP0, primarily by CD4+ T cells given their greater expansion. Whether this cross-reactivity will be similarly evident in human populations with diverse MHC is not known but a previous study encouragingly showed that VP2 peptides can induce cross-haplotype responses in mice[21]. Further, whilst these studies provide proof of concept for the generation of cross-reactive T cells to RVs, further studies should determine if similar cross-reactivity is seen for the ∼100 other known RV serotypes. Likewise the large number of genetically defined C species RVs which are to date not well characterized[13]. Whether vaccine induced enhancement of Th1 cell responses to RV will prove a safe strategy for preventing RV induced disease awaits confirmation in a clinical trial. However, influenza vaccines are already licenced which use adjuvants which promote strong CD4+ T cell responses and have been shown to be safe[34],[35]. Immunization also induced IgG antibodies which bound multiple RV serotypes and following subsequent infection, enhanced heterologous infection serotype specific antibody levels in serum and BAL. Notably, this included a BAL IgA response which as we have shown previously[36] is otherwise weak or absent after a single infection in this model. Importantly, immunization with RV16 VP0 also enhanced neutralizing antibody responses to infection with the heterosubtypic viruses RV1B and RV29. The fact that generation of neutralizing antibody was dependent upon infection suggests that the effect of immunization on production of serotype-specific neutralizing antibodies following subsequent infection results from B cell help provided by broadly responsive immunization-induced T cells. Enhancement of both the speed and magnitude of antibody responses may provide benefit in terms of accelerating virus clearance and reducing duration of disease caused by naturally occurring infections with virus strains heterologous to that upon which the sequence of the immunogen is based. Consistent with a role for immunization-induced responses in enhancing virus control, we found that viral RNA was cleared more rapidly from the lungs of immunized mice after subsequent virus infection. 
This effect was more evident at later stages of infection, which is likely attributable to the fact that virus replication in this mouse model is short-lived compared to human infection, lasting only around 24 hrs[23],[33],[37] and therefore before enhanced T cell responses are apparent. The fact that T cell and antibody responses were able to speed virus clearance in a mouse model where replication is short lived suggests however that in man, where replication is much more robust and of longer duration[33],[37], the magnitude of benefit might be substantially greater. Pre-existing neutralizing antibodies to RVs provide protection against infection and symptoms in humans[6],[7] and in addition to accelerating virus clearance during the first naturally acquired infection with a given serotype, enhanced neutralizing antibody responses may provide better and more durable protection against future RV infections. Likewise the enlarged effector memory T cell pool in immunized persons, because local memory T cells are likely to respond rapidly to secondary challenge and are proposed to possess more potent anti-viral function than systemic memory cells[38],[39]. Immunization with VP0 may therefore generate serotype specific protective humoral and cross-reactive lung T cell memory responses to natural infection. Because RV infections are frequent throughout life, typically comprising 8–10 per year in young children and 2–5 per year in adults[40], natural infection following immunization could result in protection against a broad range of previously unseen RVs. In summary, immunization with a recombinant RV capsid protein enhanced airways Th1 cell and airways and systemic antibody responses to infection with heterologous virus serotypes. Immunization also accelerated virus clearance. This study therefore provides proof of principle for a broadly cross-reactive subunit vaccine for RV infections.",
"role": "user"
},
{
"content": "Human rhinovirus infections cause the majority of common colds as well as asthma and chronic obstructive pulmonary disease ( COPD ) exacerbations . The disease burden attributable to rhinoviruses is therefore huge . Despite this and the fact that human rhinoviruses were discovered over 50 years ago , there are currently no specific antiviral therapies or vaccine available . The lack of a rhinovirus vaccine can at least in part be attributed to the fact that rhinoviruses like other pathogens have high variability in surface antibody binding regions , resulting in >100 serotypically distinct strains . We have defined areas of the rhinovirus polyprotein which are highly conserved across strains and which may therefore induce cross-reactive immune responses capable of providing broader protection . Using a mouse model , we show that immunization with a recombinant rhinovirus capsid protein induces cross-reactive cellular and humoral immune responses . After subsequent infection , immunization enhances both neutralising antibody and lung effector and memory T cell responses , expediting virus clearance . Importantly these effects were evident upon challenge with multiple heterologous rhinovirus serotypes , indicating that immunization with conserved rhinovirus capsid proteins may represent a viable strategy for producing a broadly cross-reactive vaccine .",
"role": "assistant"
}
] |
science.covidfact_entailment | science.covidfact_entailment.564 | [
{
"content": "Based on the abstract below, determine if the claim is supported or contradicted. Output a JSON object with the verdict (\"SUPPORT\" or \"CONTRADICT\") and evidence sentences from the abstract.\n\nClaim: Robust sars-cov-2-specific t-cell immunity is maintained at 6 weeks following primary infection\n\nText:\nThe immune response to SARS-CoV-2 is critical in both controlling primary infection and preventing re-infection.\nHowever, there is concern that immune responses following natural infection may not be sustained and that this may predispose to recurrent infection.\nWe analysed the magnitude and phenotype of the SARS-CoV-2 cellular immune response in 100 donors at six months following primary infection and related this to the profile of antibody level against spike, nucleoprotein and RBD over the previous six months.\nT-cell immune responses to SARS-CoV-2 were present by ELISPOT and/or ICS analysis in all donors and are characterised by predominant CD4+ T cell responses with strong IL-2 cytokine expression.\nMedian T-cell responses were 50% higher in donors who had experienced an initial symptomatic infection indicating that the severity of primary infection establishes a ‘setpoint’ for cellular immunity that lasts for at least 6 months.\nThe T-cell responses to both spike and nucleoprotein/membrane proteins were strongly correlated with the peak antibody level against each protein.\nThe rate of decline in antibody level varied between individuals and higher levels of nucleoprotein-specific T cells were associated with preservation of NP-specific antibody level although no such correlation was observed in relation to spike-specific responses.\nIn conclusion, our data are reassuring that functional SARS-CoV-2-specific T-cell responses are retained at six months following infection although the magnitude of this response is related to the clinical features of primary infection.",
"role": "user"
},
{
"content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"In conclusion, our data are reassuring that functional SARS-CoV-2-specific T-cell responses are retained at six months following infection although the magnitude of this response is related to the clinical features of primary infection.\"]}",
"role": "assistant"
}
] |
science.chia_ner | science.chia_ner.785 | [
{
"content": "Extract all unique entities of these types from the following clinical trial criteria: [\"Negation\", \"Observation\", \"ReferencePoint\", \"Procedure\", \"Drug\", \"Measurement\", \"Value\", \"Person\", \"Multiplier\", \"Device\", \"Qualifier\", \"Mood\", \"Temporal\", \"Condition\", \"Visit\", \"Scope\"].\n\nOutput a JSON object with entity types as keys and lists of extracted entities as values. If no entities are found for a type, use [].\n\nText:\n\nParticipants having H. pylori related chronic gastritis with/without peptic ulcers who are aged greater than 20 years old and are willing to received eradication therapy.",
"role": "user"
},
{
"content": "{\"Condition\": [\"chronic gastritis\", \"peptic ulcers\"], \"Device\": [], \"Drug\": [], \"Measurement\": [], \"Mood\": [\"willing to received\"], \"Multiplier\": [], \"Negation\": [], \"Observation\": [], \"Person\": [\"aged\"], \"Procedure\": [\"eradication therapy\"], \"Qualifier\": [\"H. pylori related\"], \"ReferencePoint\": [], \"Scope\": [], \"Temporal\": [], \"Value\": [\"greater than 20 years old\"], \"Visit\": []}",
"role": "assistant"
}
] |
science.anat_em_ner | science.anat_em_ner.223 | [
{
"content": "Given this abstract, identify all unique entities of these types: [\"AnatomicalEntity\"].\n\nOnly output the JSON object, no other text.\n\nAbstract:\nAppetite regulation by serotoninergic mechanisms and effects of d-fenfluramine.\nIn this literature review, evidence is presented for the theory that the neurotransmitter, serotonin (5-hydroxytryptamine, 5HT), in medial hypothalamic centres is an important regulator for appetite and for the selection of major food constituents. High local levels of 5HT cause a reduction of appetite and a preference for protein, low levels the opposite. The main antagonistic system is noradrenergic. The drug d-fenfluramine mimics the effects of 5HT by releasing 5HT from serotoninergic nerve endings and inhibiting its neuronal re-uptake. Further experimental data prove that a high-carbohydrate, low-protein diet promotes uptake of serum tryptophan in the brain and its conversion into 5HT. Hence, this serotoninergic system may function as a self-regulatory mechanism. In patients with decreased peripheral insulin sensitivity, the system may be disturbed, causing overconsumption of carbohydrates. This is sometimes compulsive (\"carbohydrate craving\"). It may be presumed that in the treatment of obesity, in addition to the use of serotoninergic drugs, successes with reducing diets may be enhanced by including periods of high-carbohydrate, low-protein intake. It would be worthwhile to explore whether similar alimentary self-regulatory mechanisms of neurotransmitter function exist in other regulatory systems.",
"role": "user"
},
{
"content": "{\"AnatomicalEntity\": [\"medial hypothalamic centres\", \"nerve endings\", \"brain\", \"neuronal\", \"serum\"]}",
"role": "assistant"
}
] |
science.drug_combo_extraction_re | science.drug_combo_extraction_re.854 | [
{
"content": "Extract drug combinations from this sentence and classify each combination into one of these categories:\n- POS: Positive interaction between drugs\n- NEG: Negative interaction between drugs\n- COMB: Drugs used together with unknown interaction type\n\nOutput only a JSON array where each entry is [drug_1, drug_2, ..., drug_n, category].\n\nparagraph: Triplet chemotherapy in patients with metastatic colorectal cancer: toward the best way to safely administer a highly active regimen in clinical practice. A major problem concerning the addition of more drugs in a chemotherapy combination is designing a proper schedule assuring the balance between dose intensity of each drug, efficacy of the combination, and tolerability lessening the burden of drug toxicity. We evaluated triplet chemotherapy-based intensive regimens proposed as first-line treatment in patients with metastatic colorectal cancer. Using a FOLFOXIRI ( 5-fluorouracil [ 5-FU ] , irinotecan , and oxaliplatin ) combination regimen , patients with metastatic colorectal cancer now have the possibility of longer survival , but disappointingly , with increased toxicities . Triplet chemotherapy regimen according to 5-fluorouracil, irinotecan /5-fluorouracil, oxaliplatin, characterized by timed flat-infusion 5-FU administration, without leucovorin, obtained efficacy equivalent to other reported similar combination regimens (5-FU, irinotecan, and oxaliplatin), with increased received 5-FU dose intensity and lower grade 3 to 4 neutropenia. To guarantee the proper balance between dose intensities, efficacy, and toxicity, triplet chemotherapy schedules could be further improved by abrogation of folinic acid and bolus 5-FU, a new and easy modality of 5-FU administration, such as timed flat-infusion 5-FU, associated with alternating irinotecan and oxaliplatin; this could favor diffusion of this intensive treatment in clinical practice.\n\nsentence: Using a FOLFOXIRI ( 5-fluorouracil [ 5-FU ] , irinotecan , and oxaliplatin ) combination regimen , patients with metastatic colorectal cancer now have the possibility of longer survival , but disappointingly , with increased toxicities .",
"role": "user"
},
{
"content": "[[\"5-FU\", \"irinotecan\", \"oxaliplatin\", \"POS\"]]",
"role": "assistant"
}
] |
science.craftchem_ner | science.craftchem_ner.4522 | [
{
"content": "Extract all chemical entities (ChEBI) from this biomedical text. Output as JSON array.\n\nText: AnnexinA7isexpressedinbothcelltypesoftheeggcylinderwithastrongstainingoftheendodermandaweakerstainingoftheectoderm.",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
] |
science.drug_combo_extraction_re | science.drug_combo_extraction_re.1145 | [
{
"content": "Extract drug combinations from this sentence and classify each combination into one of these categories:\n- POS: Positive interaction between drugs\n- NEG: Negative interaction between drugs\n- COMB: Drugs used together with unknown interaction type\n\nOutput only a JSON array where each entry is [drug_1, drug_2, ..., drug_n, category].\n\nparagraph: Conditional Survival in Patients with Advanced Renal Cell Carcinoma Treated with Nivolumab. BACKGROUND nivolumab is approved for the treatment of advanced renal cell carcinoma (RCC). However, traditional overall survival (OS) or progression-free survival (PFS) do not reflect patient prognosis after initial management. Therefore, this study aimed to evaluate conditional overall survival (COS) and conditional progression-free survival (CPFS) in patients with advanced RCC treated with nivolumab. MATERIAL AND METHODS There were 847 patients with advanced RCC treated with first-line nivolumab plus ipilimumab (n=425) and sunitinib (n=422), and 821 patients were treated with second-line nivolumab (n=410) and everolimus (n=411). Primary endpoints were COS and CPFS. Individual patient data of PFS and OS were digitally reconstructed from two large randomized controlled trials (CheckMate 025 and CheckMate 214). RESULTS In first-line treatment , compared with sunitinib , improvement of one-year CPFS for the nivolumab plus ipilimumab group after living for 0.5 and 0.75 years were 14 % ( from 53.0 % to 67.0 % ) and 16 % ( from 57.0 % to 73.0 % ) higher than the one-year PFS of 6.5 % ( from 42.9 % to 49.4 % ) , with similar results for one-year COS following first-line treatment . For second-line treatment, compared with everolimus, the improvement of one-year CPFS for the nivolumab group after living for 0.5 and 0.75 years were 19% (from 25.0% to 44.0%) and 19% (from 27.0% to 46.0%) and significantly higher than the one-year PFS of 4.5% (from 18.5% to 23.0%). CONCLUSIONS Survival benefit for patients with advanced RCC from nivolumab (plus ipilimumab) compared with sunitinib was more evident from conditional survival (CS) analysis of first-line treatment.\n\nsentence: RESULTS In first-line treatment , compared with sunitinib , improvement of one-year CPFS for the nivolumab plus ipilimumab group after living for 0.5 and 0.75 years were 14 % ( from 53.0 % to 67.0 % ) and 16 % ( from 57.0 % to 73.0 % ) higher than the one-year PFS of 6.5 % ( from 42.9 % to 49.4 % ) , with similar results for one-year COS following first-line treatment .",
"role": "user"
},
{
"content": "[[\"nivolumab\", \"ipilimumab\", \"POS\"]]",
"role": "assistant"
}
] |
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.72230 | [
{
"content": "Write a literature review chapter based on these paper abstracts.\nLiterature Review Title: Registration of 3D Point Clouds and Meshes: A Survey from Rigid to Nonrigid \nChapter Title: Rigid Alignment with Nonrigid Correction \n1. Abstract of Cited Paper (BIB001): A key challenge in reconstructing high-quality 3D scans is registering data from different viewpoints. Existing global (multiview) alignment algorithms are restricted to rigid-body transformations, and cannot adequately handle non-rigid warps frequently present in real-world datasets. Moreover, algorithms that can compensate for such warps between pairs of scans do not easily generalize to the multiview case. We present an algorithm for obtaining a globally optimal alignment of multiple overlapping datasets in the presence of low-frequency non-rigid deformations, such as those caused by device nonlinearities or calibration error. The process first obtains sparse correspondences between views using a locally weighted, stability-guaranteeing variant of iterative closest points (ICP). Global positions for feature points are found using a relaxation method, and the scans are warped to their final positions using thin-plate splines. Our framework efficiently handles large datasets---thousands of scans comprising hundreds of millions of samples---for both rigid and non-rigid alignment, with the non-rigid case requiring little overhead beyond rigid-body alignment. We demonstrate that, relative to rigid-body registration, it improves the quality of alignment and better preserves detail in 3D datasets from a variety of scanners exhibiting non-rigid distortion.\n\nLiterature Review:",
"role": "user"
},
{
"content": "Displacement Field of Space-Time Surfaces (7 DoF). To allow nonrigid correction due to subtle movements of objects during acquisition, Mitra et al. BIB001 propose a space-time displacement model of the form fðp j Þ. The transformation consists of the unknown ðR j ; t j Þ that aligns surface in consecutive frame j in the space domain, and a translation j along the time axis. This technique requires dense sampling in both the spatial and temporal domain. Rigid iterative closest point (ICP) + Thin Plate Spline (TPS) (18 DoF). Calibration errors or device nonlinearity can lead to imprecise alignment of rigid scans. Brown and Rusinkiewicz BIB002 use a locally weighted ICP to establish reliable correspondences across multiple scans. Then they optimize the locations of these correspondences across multiple scans in a global manner. These correspondences define the anchors such that those misalignments can be corrected by TPS with 3n þ 12 DoF, where n is the number of anchor correspondences, resulting in a sharper final model. Since n is fixed once the rigid transformation is computed, it requires overall 18 DoF.",
"role": "assistant"
}
] |
science.acl_arc_intent_classification | science.acl_arc_intent_classification.574 | [
{
"content": "Classify the intent of this citation from an NLP research paper. \n\nYour answer must be one word from: [\"Background\", \"Extends\", \"Uses\", \"Motivation\", \"CompareOrContrast\", \"FutureWork\"]. No other text allowed.\n\nSection Title:\nexperiments\n\nContext before:\nThis choice is motivated by an observation we made previously (Hasan and Ng, 2013a): since each post in a sequence is a reply to the preceding post, we could exploit their dependencies by determining their stance labels together.3 As our sequence learner, we employ a maximum entropy Markov model (MEMM) (McCallum et al., 2000). In other words, we train a SC model that assumes as input a post sequence and outputs a stance sequence, with one stance label for each post in the input post sequence. In P2, on the other hand, we recast SC as a sequence labeling task.\n\nCitation:\nThis choice is motivated by an observation we made previously ( Hasan and Ng , 2013a ) : since each post in a sequence is a reply to the preceding post , we could exploit their dependencies by determining their stance labels together .3 As our sequence learner , we employ a maximum entropy Markov model ( MEMM ) ( McCallum et al. , 2000 ) .\n\nContext after:\nGiven an input post sequence PS = (p1, p2, . . . , pn), the MEMM finds the most probable stance sequence S = (s1, s2, ... , sn) by computing P(S|PS), where: This probability can be computed efficiently via dynamic programming (DP), using a modified version of the Viterbi algorithm (Viterbi, 1967). There is a caveat, however. Recall that the post sequences are generated from a thread.",
"role": "user"
},
{
"content": "Extends",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.3742 | [
{
"content": "Find entities of these types in the text: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nOutput a JSON object with entity types as keys and lists of found entities as values.\n\nText:\nDexamethasone therapy and cortisol excretion in severe pediatric head injury . Glucocorticoids are used in an attempt to reduce brain edema secondary to head injury . Nevertheless , their usefulness remains uncertain and contradictory . In a randomized study of 24 children with severe head injury , urinary free cortisol was measured by radioimmunoassay . Twelve patients ( group 1 ) received dexamethasone and 12 ( group 2 ) did not . All patients were treated with a standardized regimen . In group 1 there was complete suppression of endogenous cortisol production . In group 2 free cortisol was up to 20-fold higher than under basal conditions and reached maximum values on days 1-3 . Since the excretion of cortisol in urine reflects the production rate closely and is not influenced by liver function and barbiturates , the results in group 2 show that the endogenous production of steroids is an adequate reaction to severe head injury . Exogenous glucocorticoids are thus unlikely to have any more beneficial effects than endogenous cortisol .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"cortisol\", \"Glucocorticoids\", \"dexamethasone\", \"Dexamethasone therapy\"], \"Outcome\": [\"excretion of cortisol in urine\", \"suppression of endogenous cortisol production\"], \"Participant\": []}",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.43909 | [
{
"content": "Write an abstract for the research paper below.\n\nPaper text:\nuganda has more than 1.2 million people living with hiv infection and in 2010 more than 120,000 got infected with hiv . although there are various efforts to scale up aids treatment , many hiv - infected people are unable to access treatment and the increasing new hiv infections still exceed the government 's capacity to provide treatment .\nwhile attempts are being made to reduce new infections and to scale up antiretroviral therapy ( art ) , it is equally important to improve adherence to regimens among the few who are already accessing art for successful treatment .\nart delays progression to aids and death , hence , improved the quality of life for people living with hiv [ 4 , 5 ] . maintaining good adherence\nhas been found to be difficult to achieve and sustained by both patients and health care providers [ 6 , 7 ] .\na major concern with the use of art is the emergence of viral resistance , which is mainly due to insufficient compliance .\ndifferent tools have been used to determine art adherence rates and on average the rate of adherence to art is about 70% . a meta - analysis found adherence rates for african patients to be approximately 77% .\n, that assessed correspondence between multiple measures of adherence and viral load suppression , found that the mean adherence was 9194% by all measures .\npatients ' adherence is dynamic either improving or declining over time . a prospective study conducted in southwest ethiopia showed that adherence had declined by 3% by the third month of follow - up visit compared to the results of the baseline study at month zero .\nart is a long - term treatment , and sustaining acceptable level of drug adherence poses a challenge for both the patient and health care workers .\ntherefore , the benefits of art can only be achieved if health providers realize that adherence is a dynamic process , complex , and can not be predicted by only patient characteristics , but continuous monitoring and integration into regular clinic followup [ 6 , 12 ] .\nthese are a combination of patient demographic and behavioral factors and disease characteristics , medication factors and health systems factors [ 6 , 1315 ] . \nthe aim of this study is to describe the factors associated with incomplete adherence among adult patients on art in 15 health facilities in uganda .\nthe study was conducted in 15 faith - based and stand - alone hiv clinics that provide high - quality hiv care and treatment to rural and underserved communities in the northern , southern , central , eastern , and western regions of uganda .\ndata was collected using the aidsrelief adherence survey that has been piloted , validated , and used in six african countries .\nthe survey was designed to identify specific indicators that influence the adherence to antiretroviral therapy and the increase in quality of life .\nthe study assessed missed clinic appointments , missed doses , and whether patients had ever run out of medication as the measures of adherence .\na sample of 763 patient adherence surveys were done to assess disease , treatment knowledge and services received between august 2008 and march 2009 . in august 2008\n, we conducted a survey of patients having been on art ( interquartile range for duration of art = 2946 months , median=33 months , n = 763 , > 16 years ) , in 15 facilities in uganda . 
\ninformed consent forms were provided to the patients after full explanation from the health staff about the survey .\neach patient had the right to withdraw at anytime in the course of the survey . \ndata were entered , cleaned , and coded in to a computer and analyzed using stata version 11 .\nbivariate analysis was carried out to determine the statistical significant association between explanatory variables and the outcome variable .\nchi - square tests and their p values at the level of significance of 5% , as well as confidence intervals ( ci ) , were used to define statistical associations between explanatory variables and outcomes .\nout of the 763 patients who participated in this study , 50.5% were in the age group of 1639 years .\nmost ( 66.6% ) of the patients were engaged in an income generating activity before starting on art . over half ( 54% ) of the patients owned a home , and 31.6% were renting at the time of the survey .\nmany of the patients ( 42.1% ) had primary education and 38.4% had secondary education , while only 7.7% had attained university education .\nthe 763 patients who participated in this study varied in age , gender , socioeconomic status , and education ( table 1 ) . the survey revealed that 97% of the patients had not missed their doses in the last week and 93% of patients had not missed appointments in the past three months .\nthere was no significant association between the measures of adherence in this study ( missed doses , missed appointments , and ever run out of medication ) and the sociodemographic factors namely age , gender , and education level ( table 2 ) . out of the patients who reported ever missing their doses for the period on art\n, the most common reasons were travelled , forgot , work conflict , and felt unwell ( table 3 ) . \nthis study found that patients on art that missed their appointments ( 29.2% ) were more likely to miss their doses compared to those that did not ( 5.7% ) .\nthe findings are depicted in table 4 : p value = 0.0004 , or = 6.80 , 95% ci ( 2.4118.66 ) .\npatients on art who used alcohol were more likely to miss a dose during the course of their treatment compared to those who did not use alcohol ( table 5 : p value < 0.005 with ci 2.4027.83 ) .\nthis study found that patients on art that missed their appointments were more likely to miss their doses compared to those that did not .\nwhile we explored the reasons for missed doses , we did not do the same for missed appointments , therefore , it is difficult to understand the reasons for missed appointments .\nhowever , what was clear is that missing a scheduled appointment is associated with missing art doses .\nother authors reported other common reasons responsible for nonadherence that are related to income and they include failure to get transport to reach art clinics hence missing pharmacy refills , missed clinic appointments and missed doses [ 10 , 17 ] . however , another study found that patients in developing countries can achieve good adherence despite limited resources .\n. 
showed that in art centers , where patients were required to pay user fees , their adherence levels were low while orrell et al.found that low socioeconomic status was not a predictor of adherence for patients with fully subsidized therapy .\nthis clearly applies to this study context where all the patients despite their low income have good access to fully subsidized therapy and exhibited high adherence comparable to high income settings .\nnotwithstanding , it is important to consider the fact that some patients may lack the means to access the fully subsidized therapy due to poverty . in this way , poverty acts as an intervening variable in hindering access to the much needed lifesaving therapy .\nregardless , there would be need to further understand the experiences of the patients who miss appointments in order to tailor interventions that best suit them . \nour study was unique as it looked at patients who had been on treatment for a considerable amount of time .\nthe study by oyugi et al . followed up patients from baseline to 12 weeks while in our study patient duration of art was more than 2 years at least .\nmost studies have not considered patient adherence beyond 2 years after art initiation [ 4 , 12 , 15 ] .\nour study is also the first of its kind that has explored the risk factors associated with incomplete adherence among elderly hiv patients .\nwe found that there was no difference in adherence between the elderly who made up 12.5% of the study population and other age groups .\nhowever , this could be due to the small number of patients with nonadherence events to allow for risk factor analysis across all age groups including the elderly . in our study ,\nin contrast , another study found that women were more likely than men to forget to refill medications and not to know how to take the medications correctly .\na few studies have reported age as an influencing factor to adherence especially in infancy and adolescents [ 7 , 18 ] .\ngenerally however , it has been reported that sociodemographic factors seem not to predict adherence , although there are a few factors which have been reported to hinder adherence such as low income which has been cited as a strong predictor of failure to adhere to art [ 4 , 7 , 10 ] . \nin general , patient self - reported adherence in this survey was very high ( 97% ) .\nalthough self - reported outcomes of adherence may have a weakness of reliability , a study by oyugi et al . found no significant difference between patient - reported and objective measures of adherence .\nour findings are higher than the average adherence rates for african patients but consistent with those in a prospective observational cohort study conducted in dakar , senegal , where adherence was estimated to be over 91% and similar to the study by oyugi et al . 
.\nour findings are consistent with follow - up cohorts that were done in developed countries such as the aproco cohort study and the ciel bleu trial .\nthe high levels of adherence in this patient - population can be attributed to the model of hiv care and treatment that is used .\nthe model emphasizes adherence as a therapeutic intervention with intensive treatment preparation , patient specific care and treatment plan , community - based treatment services with home - based followup during art startup .\ndisclosure is highly encouraged and all clinics have well - defined catchment area as well as longitudinal medical records for all patients .\nthis is similar to other studies where alcohol consumption has also been reported to affect adherence [ 2123 ] .\nalcohol use is known to be a proxy for loss of control , thus , affecting adherence . in a study done in botswana , around 40% of the patients surveyed admitted missing a dose because of alcohol consumption .\nrelated studies also show that alcohol is greatly related to the reduced adherence [ 25 , 26 ] .\na systematic review conducted in 2009 revealed that hiv / aids patients that used alcohol were 5060% not likely to adhere to their medications . \nthere was some association between the educational status and missed appointments in the last three months in this study ( p <\na systematic review conducted in 2006 , revealed that education and literacy levels are important predictors of adherence .\nthis paper found out that patient understanding of their medication regimen and their knowledge of the relationship between nonadherence and buildup of resistance to medication also predict better adherence .\nit is not clear however , what the optimal educational level would be to get better adherence but perhaps what is more important is the patient understanding of adherence on the disease mechanism . \nwhile many studies have shown that disclosure is associated with adherence [ 7 , 28 , 29 ] , we did not find this association .\nthe rate of disclosure to a friend or a relative in our patient pool for this study was quite high ( 99% ) since patients are encouraged to disclose their hiv status to a treatment buddy before starting art in order to improve the social support .\nit is very important for health workers to find out the patients ' disclosure before administering art , while continuing to support those who would not have disclosed . \ndifferent studies have also revealed that when patients experience side effects of prescribed medication , they tend to stop treatment or take it irregularly [ 6 , 18 ] . in this study , only one patient out of those who missed their doses reported missing medication due to side effects and another due to pill fatigue .\nsimilarly , patients have reported taking drug holidays as a result of pill fatigue . 
additionally , pill burden has been found to contribute to poor adherence [ 4 , 7 , 28 ] .\nhowever , pill burden was not mentioned as a problem for the patients in our study .\na number of studies have reported on irregular drug supply as an important predictor of adherence to art [ 28 , 30 ] .\nas far as the researchers know , drug stockouts in this study 's patient population have never occurred but it is definitely appreciated that erratic drug supply can interrupt patient adherence .\nsome researchers have even argued that health care - related challenges could be the most important barriers to art adherence in resource - limited settings .\npatient - provider relationships , the time spent at health facility , and opening hours can influence adherence to art [ 17 , 31 ] .\nfuture research will need to study how these factors may be viewed as affecting adherence in this setting .\nwhile the level of good patient adherence to art in this study was generally high , it is important to continue to reinforce the importance of adherence .\nhigh adherence rates can be achieved if hiv clinics and providers reduce the number of missed appointments as this study has shown that this is an important predictor of adherence .\nmost patients reported that travelling and forgetfulness were important predictors of missed doses in this study .\nthus , creativity in collaboration with the patients is required to design strategies to reduce this .\nadditionally , treatment and care providers need to strengthen patient education and counseling on the negative impact of alcohol use on adherence .",
"role": "user"
},
{
"content": "background . missed doses and appointments are predictors of incomplete adherence among patients on art . \n the aidsrelief model emphasizes treatment preparation and continuous treatment support for art patients including community followup \n . methods . in august 2008 , \n a survey was conducted among patients on antiretroviral therapy ( art ) ( interquartile range for duration of art = 2946 months , median = 33 months , n = 763 , age > 16 years ) , in 15 health facilities in uganda . missed doses and appointments among adult patients on art and the factors most associated with these were identified . \n reasons for missed doses were also explored . results . \n the survey revealed that 97% of the patients had not missed their doses in the last week while 93% had not missed their appointments in the last three months . \n for those who had missed their doses , the most common reasons were travel ( 48% ) and forgetfulness ( 28% ) . there was a significant association between missing doses and missing appointments ( p = 0.0004 ) and between alcohol use and missed doses ( p < 0.005 ) . conclusions . \n the level of adherence to medication and clinic appointments for patients on art in the study population was very high . \n it is important to strengthen adherence strategies at both facility and community levels to assist patients that are likely to miss their doses or appointments .",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.16180 | [
{
"content": "Write an abstract for the research paper below.\n\nPaper text:\nwe analyzed 82 type and other culture collection strains encompassing all validly described staphylococcal species ( n = 38 ) and subspecies ( n = 21 ; according to the current list of bacterial names with standing in nomenclature , updated may 14 , 2005 ) ( 7 ) .\ntwo strains of the recently proposed candidate species s. pettenkoferi ( 8) were added to complete the rpob sequence reference database . using this database\n, we analyzed 55 clinical staphylococcal isolates collected from human ( n = 52 ) and animal ( s. intermedius , n = 2 ; s. felis , n = 1 ) specimens ; 6 of the human isolates exhibited the small - colony variant ( scv ) phenotype .\nthis strain collection was previously analyzed by the api i d 32 staph and vitek 2 systems ( both obtained from biomrieux , marcy l'etoile , france ) , partial 16s rdna sequencing , chemotaxonomy , and riboprinting to determine species designation ( 3 ) . the thermal cycling condition to amplify the partial rpob gene ( 899 bp )\nwas 35 cycles of denaturation at 94c for 45 s ( 300 s for the first cycle ) , annealing ( 60 s at 52c ) , and extension ( 90 s at 72c , 600 s for the last cycle ) .\nthe staphylococcus - specific primers used for amplification and sequencing of rpob are shown in table 1 .\nsequencing reactions were performed in a total volume of 10 l containing 0.5 l premix ( abi prism bigdye terminator v3.0 ready reaction cycle sequencing kit , applied biosystems , darmstadt , germany ) , 1.8 l 400 mmol / l tris - hcl , 10 mmol / l mgcl2 , 10 pmol sequencing primer , and 2 l polymerase chain reaction product .\nthe sequencing products were purified by using the centri - sep spin columns ( princeton separations , adelphia , nj , usa ) and analyzed with the abi prism 3100 avant genetic analyzer ( applied biosystems ) according to the manufacturer 's instructions .\nfor further analysis , nucleotides 14441928 ( corresponding to s. aureus rpob gene positions of the genbank accession no .\nthe sequences were analyzed by using ridom traceeditpro version 1.0 software ( ridom gmbh , wrzburg , germany ) .\nstaphylococcal partial rpob reference sequences determined in this study were deposited in genbank under accession nos .\n* primers are numbered from the 3 end of the primer on the forward strand of staphylococcus aureus ( genbank accession no .\nall staphylococcal type strains were distinguishable by rpob ; the only exception was the s. equorum subspecies that shared the same sequence ( figure a1 ) .\nthe mean pairwise distance of all type and other culture collection strains exhibiting a unique rpob sequence ( n = 68 ) was 13.7% ( range 0%21.4% ) and the standard deviation was 3.3% . when assuming a normal distribution for the distances and choosing a reporting criterion > 94.0% , the similarity for a distinct species correlates with a statistical error probability of 1.0% ( 9 ) .\nthe definitive identification of 55 clinical isolates and the rpob gene sequence similarity search results are shown in table 2 . 
at the species level ,\nthe correct species designation for all 55 clinical isolates was made by rpob sequence similarity search ( sequence similarity > 94.0% ) .\nof 21 clinical isolates belonging to species currently divided into subspecies , 17 isolates were correctly identified to the subspecies level .\nsubspecies identification for isolates m26 and m53 was unsuccessful by rpob or partial 16s rdna sequencing , riboprinting , and chemotaxonomy ( data not shown ) . only isolates m20 and m39 were misidentified by rpob sequencing as s. saprophyticus subsp .\nour previous study demonstrated the superiority of sequence - based methods over phenotypic approaches using the api i d 32 staph and vitek 2 systems ( 3 ) .\nthe advantage of a sequence - based method became most evident when differentiating isolates with the scv phenotype , in which the api i d 32 staph and vitek 2 systems misidentified 2 and 4 isolates , respectively .\nwhen both sequence - based approaches used were compared , rpob sequencing was superior to partial 16s rdna identification .\nalthough the 16s rdna procedure differentiated 50 ( 90.9% ) of all tested clinical isolates at species level , rpob identified 100% . therefore ,\nif an unknown organism needs to be identified , 16s rdna sequencing is the method of choice because of the availability of universal primers ( 10 ) .\nhowever , if the genus is already known , the rpob method should be used .\ncompared with other published molecular probes , rpob showed the highest discriminatory power , e.g. , hsp60 and soda sequencing did not differentiate subspecies of s. carnosus , s. cohnii , s. hominis , s. schleiferi , or s. succinus ( 4,5 ) . in a previous study , rpob\nsequence - based identification of staphylococcus species has been reported ( 6 ) . however ,\na limited number of taxa were included , and the primers used were not appropriate to detect all staphylococcal subspecies .\na higher discrimination with rpob sequencing compared with 16s rdna sequencing has been demonstrated for the genera corynebacterium ( 13 ) and bacillus ( 14 ) .\ndna sequencing is a rapid alternative to biochemical and other phenotypic procedures for the differentiation of bacterial pathogens because of its decreased costs and increased automation ( 15 ) .\nthus , rpob is a useful molecular target for differentiating staphylococcal isolates to the species and subspecies level .",
"role": "user"
},
{
"content": "the emerging clinical importance of staphylococcal infections prompted us to establish a reference database for partial rna polymerase b ( rpob ; nucleotides 14441928 ) gene sequences from type strains of all staphylococcal species and subspecies . \n this database correctly identified 55 clinical staphylococcal isolates ; all were correctly identified at the species level . at the subspecies level , rpob misidentified only 2 isolates .",
"role": "assistant"
}
] |
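The record above identifies staphylococcal isolates by searching a partial rpoB reference database and reporting a species only when sequence similarity exceeds 94.0%. A minimal sketch of that kind of best-hit, thresholded lookup is given below; the reference entries, the query sequence, and the naive position-by-position `pairwise_identity` helper are invented stand-ins. A real pipeline would align the ~485 bp rpoB fragments properly and search the full type-strain database, so treat this as an illustration of the similarity criterion, not the study's actual procedure.

```python
# Sketch: assign a species label to a query sequence by best pairwise
# identity against a reference database, reporting a species only above
# the >94.0% criterion described in the record. Sequences are invented
# toy strings; real rpoB fragments are ~485 bp and should be aligned
# before identity is computed.

REFERENCE_DB = {
    "Staphylococcus aureus":      "ATGGCTAGCTAGGTACCTGA",
    "Staphylococcus epidermidis": "ATGGCTTGCTAGCAACGTGA",
}

def pairwise_identity(a: str, b: str) -> float:
    """Fraction of matching positions over the shorter sequence (naive)."""
    n = min(len(a), len(b))
    if n == 0:
        return 0.0
    matches = sum(1 for x, y in zip(a[:n], b[:n]) if x == y)
    return matches / n

def identify(query: str, threshold: float = 0.94):
    """Best reference species if identity exceeds the threshold, else None."""
    best_species, best_identity = None, 0.0
    for species, reference in REFERENCE_DB.items():
        identity = pairwise_identity(query, reference)
        if identity > best_identity:
            best_species, best_identity = species, identity
    if best_identity > threshold:
        return best_species, best_identity
    return None, best_identity   # below the species-level reporting criterion

if __name__ == "__main__":
    # one mismatch to the S. aureus entry -> identity 0.95, above the cut-off
    print(identify("ATGGCTAGCTAGGTACCTGT"))
```

Keeping the threshold as a parameter mirrors how the reporting criterion in the record was derived from the observed distribution of pairwise distances rather than fixed a priori.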
science.healthver_entailment | science.healthver_entailment.4699 | [
{
"content": "Output a JSON object with \"verdict\" and \"evidence\" fields based on the claim and abstract below.\n\nClaim: outbreaks of the new coronavirus will wane as winter turns to spring\n\nAbstract:\nThe undefendable outbreak of novel coronavirus (SARS-COV-2) lead to a global health emergency due to its higher transmission rate and longer symptomatic duration, created a health surge in a short time.\nSince Nov 2019 the outbreak in China, the virus is spreading exponentially everywhere.\nThe current study focuses on the relationship between environmental parameters and the growth rate of COVID-19.\nThe statistical analysis suggests that the temperature changes retarded the growth rate and found that -6.28{degrees}C and +14.51{degrees}C temperature is the favorable range for COVID-19 growth.\nGutenberg- Richter's relationship is used to estimate the mean daily rate of exceedance of confirmed cases concerning the change in temperature.\nTemperature is the most influential parameter that reduces the growth at the rate of 13-16 cases/day with a 1{degrees}C rise in temperature.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"The statistical analysis suggests that the temperature changes retarded the growth rate and found that -6.28{degrees}C and +14.51{degrees}C temperature is the favorable range for COVID-19 growth.\", \"Temperature is the most influential parameter that reduces the growth at the rate of 13-16 cases/day with a 1{degrees}C rise in temperature.\"]}",
"role": "assistant"
}
] |
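The entailment record above specifies an output contract rather than an algorithm: a JSON object whose verdict comes from a small label set and whose evidence entries are sentences copied verbatim from the abstract. The sketch below checks exactly those two properties; the toy abstract and toy answer are invented, and the SUPPORT / CONTRADICT / NEI label set is an assumption here rather than something stated in the record.

```python
# Validation sketch for records of this shape: the verdict must come from
# the task's label set (assumed here to be SUPPORT / CONTRADICT / NEI) and
# every evidence string must be a verbatim sentence of the abstract.
ALLOWED = {"SUPPORT", "CONTRADICT", "NEI"}

def validate(answer, abstract_sentences):
    problems = []
    if answer.get("verdict") not in ALLOWED:
        problems.append(f"unknown verdict: {answer.get('verdict')!r}")
    for ev in answer.get("evidence", []):
        if ev not in abstract_sentences:
            problems.append(f"evidence not found verbatim: {ev!r}")
    return problems

# Invented toy example, not taken from the record above.
abstract = ["Warmer weather slowed the growth rate.",
            "Cases fell as temperature rose."]
answer = {"verdict": "SUPPORT", "evidence": ["Cases fell as temperature rose."]}
print(validate(answer, abstract))   # [] means the object passes both checks
```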
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.23029 | [
{
"content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nA dialog state tracker is an important component in modern spoken dialog systems. We present an incremental dialog state tracker, based on LSTM networks. It directly uses automatic speech recognition hypotheses to track the state. We also present the key non-standard aspects of the model that bring its performance close to the state-of-the-art and experimentally analyze their contribution: including the ASR confidence scores, abstracting scarcely represented values, including transcriptions in the training data, and model averaging.\n\nCited Abstract(s):\n[1]: For robust spoken dialog management, various dialog state tracking methods have been proposed. Although discriminative models are gaining popularity due to their superior performance, generative models based on the Partially Observable Markov Decision Process model still remain attractive since they provide an integrated framework for dialog state tracking and dialog policy optimization. Although a straightforward way to fit a generative model is to independently train the component probability models, we present a gradient descent algorithm that simultaneously train all the component models. We show that the resulting tracker performs competitively with other top-performing trackers that participated in DSTC2.\n\n[2]: We present our work in Dialog State Tracking Challenge 5, the main task of which is to track dialog state on human-human conversations cross language. Firstly a probabilistic enhanced framework is used to represent sub-dialog, which consists of three parts, the input model for extracting features, the enhanced model for updating dialog state and the output model to give the tracking frame. Meanwhile, parallel language systems are proposed to overcome inaccuracy caused by machine translation for cross language testing. We also introduce a new iterative alignment method extended from our work in DSTC4. Furthermore, a slot-based score averaging method is introduced to build an ensemble by combining different trackers. Results of our DSTC5 system show that our method significantly improves tracking performance compared with baseline method.\n\n[3]: In spoken dialog systems, statistical state tracking aims to improve robustness to speech recognition errors by tracking a posterior distribution over hidden dialog states. This paper introduces two novel methods for this task. First, we explain how state tracking is structurally similar to web-style ranking, enabling mature, powerful ranking algorithms to be applied. Second, we show how to use multiple spoken language understanding engines (SLUs) in state tracking — multiple SLUs can expand the set of dialog states being tracked, and give more information about each, thereby increasing both recall and precision of state tracking. We evaluate on the second Dialog State Tracking Challenge; together these two techniques yield highest accuracy in 2 of 3 tasks, including the most difficult and general task.\n\n[4]: \n\n[5]: This paper describes a fully incremental dialogue system that can engage in dialogues in a simple domain, number dictation. Because it uses incremental speech recognition and prosodic analysis, the system can give rapid feedback as the user is speaking, with a very short latency of around 200ms. Because it uses incremental speech synthesis and self-monitoring, the system can react to feedback from the user as the system is speaking. 
A comparative evaluation shows that naive users preferred this system over a non-incremental version, and that it was perceived as more human-like.\n\n[6]: While belief tracking is known to be important in allowing statistical dialog systems to manage dialogs in a highly robust manner, until recently little attention has been given to analysing the behaviour of belief tracking techniques. The Dialogue State Tracking Challenge has allowed for such an analysis, comparing multiple belief tracking approaches on a shared task. Recent success in using deep learning for speech research motivates the Deep Neural Network approach presented here. The model parameters can be learnt by directly maximising the likelihood of the training data. The paper explores some aspects of the training, and the resulting tracker is found to perform competitively, particularly on a corpus of dialogs from a system not found in the training.",
"role": "user"
},
{
"content": "The only other incremental dialog tracker know to us is used in [2]3 . In this paper, the authors describe an incremental dialog system for number dictation as a specific instance of their incremental dialog processing framework. To track the dialog state, they use a discourse modeling system which keeps track of confidence scores from semantic parses of the input; these are produced by a grammar-based semantic interpreter with a hand-coded context-free grammar. Unlike our system, it requires handcrafted grammar and an explicit semantic representation of the input. Using RNN for dialog state tracking has been proposed before [2]6 [6] . The dialog state tracker in [2]6 uses an RNN, with a very elaborate architecture, to track the dialog state turn-by-turn. Similarly to our model, their model does not need an explicit semantic representation of the input. They also use a similar abstraction of low-occurring values (they call the technique tagged n-gram features''), which should result in better generalization on rare but well-recognized values. We use only 1-best ASR hypothesis and achieve near state-of-the-art results, while the other tracking models from the literature [2]5 [2]6 [1] [2] typically use the whole ASR SLU n-best list as an input.",
"role": "assistant"
}
] |
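The main abstract in the record above describes an incremental dialog state tracker: an LSTM that consumes ASR hypotheses word by word, together with ASR confidence scores, and maintains a distribution over slot values after every token. The sketch below shows one way such a model can be wired up; the vocabulary size, dimensions, and slot inventory are invented, it assumes PyTorch is available, and it illustrates the general idea rather than the architecture of the cited paper (which also uses value abstraction and model averaging, not shown here).

```python
# Minimal sketch of an incremental, word-by-word dialog state tracker: an
# LSTM consumes ASR token embeddings concatenated with a per-token
# confidence score and emits a distribution over slot values after every
# token. All sizes are hypothetical; this is an illustration only.
import torch
import torch.nn as nn

class IncrementalTracker(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=32, hidden_dim=64, n_values=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # +1 input feature carries the ASR confidence of each hypothesised token
        self.lstm = nn.LSTM(emb_dim + 1, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_values)

    def forward(self, token_ids, confidences):
        # token_ids: (batch, seq) ints, confidences: (batch, seq) in [0, 1]
        x = torch.cat([self.embed(token_ids), confidences.unsqueeze(-1)], dim=-1)
        h, _ = self.lstm(x)                            # hidden state after every token
        return torch.log_softmax(self.out(h), dim=-1)  # per-token value distribution

if __name__ == "__main__":
    tracker = IncrementalTracker()
    tokens = torch.randint(0, 1000, (1, 5))   # one 5-token ASR hypothesis
    conf = torch.rand(1, 5)                   # per-token ASR confidences
    print(tracker(tokens, conf).shape)        # (1, 5, 10): a state estimate per token
```

Feeding the confidence score as an extra input feature, as sketched here, is only one of several plausible ways to expose ASR uncertainty to the recurrent state.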
science.mslr2022_ms2_multidoc_summarization | science.mslr2022_ms2_multidoc_summarization.12999 | [
{
"content": "Based on the RCT reports below, write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article.\n\n\n\nTitle 1:\nZiprasidone 40 and 120 mg/day in the acute exacerbation of schizophrenia and schizoaffective disorder: a 4-week placebo-controlled trial\nAbstract 1:\nAbstract A double-blind , placebo-controlled , multicenter study , was performed to evaluate the efficacy and safety of ziprasidone in 139 patients with an acute exacerbation of schizophrenia or schizoaffective disorder . Patients were r and omized to receive ziprasidone 40 mg/day , 120 mg/day or placebo for 28 days . Ziprasidone 120 mg/day was significantly more effective than placebo in improving the BPRS total , CGI-S , BPRS depression cluster and BPRS anergia cluster scores ( all P < 0.05 ) . Similarly , the percentages of patients classified as responders on the BPRS ( ≥30 % reduction ) and the CGI improvement ( score ≤2 ) were significantly greater with ziprasidone 120 mg/day compared with placebo ( P < 0.05 ) . The number of patients who experienced an adverse event was similar in all three treatment groups , and discontinuation due to adverse events was rare ( five of 91 ziprasidone-treated patients ) . The most frequently reported adverse events , that were more common in either ziprasidone group than in the placebo group , were dyspepsia , constipation , nausea and abdominal pain . There was a notably low incidence extrapyramidal side-effects ( including akathisia ) and postural hypotension and no pattern of laboratory abnormalities or apparent weight gain . Ziprasidone-treated patients were not clinical ly different from placebo-treated patients on the Simpson-Angus Rating scale , Barnes Akathisia scale and AIMS assessment s. These results indicate that ziprasidone 120 mg/day is effective in the treatment of the positive , negative and affective symptoms of schizophrenia and schizoaffective disorder with a very low side-effect burden\n\n\nTitle 2:\nZiprasidone 80 mg/day and 160 mg/day in the Acute Exacerbation of Schizophrenia and Schizoaffective Disorder: A 6-Week Placebo-Controlled Trial\nAbstract 2:\nIn this double-blind study , patients with an acute exacerbation of schizophrenia or schizoaffective disorder were r and omized to receive either ziprasidone 80 mg/day ( n = 106 ) or 160 mg/day ( n = 104 ) or placebo ( n = 92 ) , for 6 weeks . Both doses of ziprasidone were statistically significantly more effective than placebo in improving the PANSS total , BPRS total , BPRS core items , CGI-S , and PANSS negative subscale scores ( p < .05 ) . Ziprasidone 160 mg/day significantly improved depressive symptoms in patients with clinical ly significant depression at baseline ( MADRS ≥ 14 , over-all mean 23.5 ) ( p < .05 ) as compared with placebo . The percentage of patients experiencing adverse events was similar in each treatment group , and result ant discontinuation was rare . The most frequent adverse events associated with ziprasidone were generally mild dyspepsia , nausea , dizziness , and transient somnolence . Ziprasidone was shown to have a very low liability for inducing movement disorders and weight gain . 
The results indicate that ziprasidone is effective and well tolerated in the treatment of the positive , negative , and depressive symptoms of an acute exacerbation of schizophrenia or schizoaffective disorder\n\n\nTitle 3:\nBias in treatment assignment in controlled clinical trials.\nAbstract 3:\nControlled clinical trials of the treatment of acute myocardial infa rct ion offer a unique opportunity for the study of the potential influence on outcome of bias in treatment assignment . A group of 145 papers was divided into those in which the r and omization process was blinded ( 57 papers ) , those in which it may have been unblinded ( 45 papers ) , and those in which the controls were selected by a nonr and om process ( 43 papers ) . At least one prognostic variable was maldistributed ( P less than 0.05 ) in 14.0 per cent of the blinded-r and omization studies , in 26.7 per cent of the unblinded-r and omization studies , and in 58.1 per cent of the nonr and omized studies . Differences in case-fatality rates between treatment and control groups ( P less than 0.05 ) were found in 8.8 per cent of the blinded-r and omization studies , 24.4 per cent of the unblinded-r and omization studies , and 58.1 per cent of the nonr and omized studies . These data emphasize the importance of keeping those who recruit patients for clinical trials from suspecting which treatment will be assigned to the patient under consideration\n\n\nTitle 4:\nP-2-8: The effects of ziprasidone on steady-state lithium levels and renal clearance of lithium\nAbstract 4:\nAIMS To assess the potential of ziprasidone to alter the renal clearance and steady-state serum levels of lithium . METHODS Healthy subjects who had stable serum lithium levels during the first 7 days of treatment with lithium 900 mg day(-1 ) , given as two divided daily doses , were r and omized to receive concomitant treatment with either ziprasidone , 40 mg day(-1 ) , given as two divided daily doses , on days 9 - 11 followed by 80 mg day(-1 ) , given as two divided daily doses on days 12 - 15 ( n = 12 ) , or placebo twice daily ( n = 13 ) . Ziprasidone or placebo was administered 2 h before each dose of lithium . RESULTS Ziprasidone administration was associated with a 0.07 mmol l(-1 ) ( 13 % ) mean increase in steady-state serum lithium levels compared with a mean increase of 0.06 mmol l(-1 ) ( 10 % ) with placebo . Mean renal clearance of lithium decreased by 0.09 l h(-1 ) ( 5 % ) in the ziprasidone group and by 0.14 l h(-1 ) ( 9 % ) in the placebo group . None of these differences between the two groups was statistically or clinical ly significant . CONCLUSIONS Ziprasidone does not alter steady-state serum lithium concentrations or renal clearance of lithium\n\n\nTitle 5:\nA 28-week comparison of ziprasidone and haloperidol in outpatients with stable schizophrenia\nAbstract 5:\nBACKGROUND Ziprasidone is a novel antipsychotic with a unique pharmacologic profile . This study compared ziprasidone with the conventional antipsychotic haloperidol in out patients with stable schizophrenia . METHOD Three hundred one out patients with stable chronic or subchronic schizophrenia ( DSM-III-R ) were r and omized and participated in this double-blind , multicenter , parallel-group clinical study comparing flexible-dose oral ziprasidone , 80 - 160 mg/day ( N = 148 ) , with haloperidol , 5 - 15 mg/day ( N = 153 ) , over 28 weeks . 
Patients were assessed using the Positive and Negative Syndrome Scale ( PANSS ) , the Clinical Global Impressions-Severity of Illness scale , the Montgomery-Asberg Depression Rating Scale , the Simpson-Angus Scale , the Barnes Akathisia Scale , and the Abnormal Involuntary Movement Scale . RESULTS Modal doses at endpoint were 80 mg/day for ziprasidone and 5 mg/day for haloperidol . Improvements in all mean efficacy variables with both ziprasidone and haloperidol were observed . Significantly more patients were categorized as negative symptom responders ( > or = 20 % reduction in PANSS negative subscale score ) in the ziprasidone group ( 48 % ) compared with the haloperidol group ( 33 % ) ( p < .05 ) . Ziprasidone had clear advantages over haloperidol in all evaluations of movement disorders . Changes in body weight were negligible with both treatments . No pattern of laboratory or cardiovascular changes was observed . CONCLUSION Ziprasidone and haloperidol were both effective in reducing overall psychopathology ; ziprasidone demonstrated effective treatment of negative symptoms and was better tolerated than haloperidol . Ziprasidone appears to offer an effective alternative to haloperidol in the long-term treatment of stable out patients with schizophrenia\n\n\nTitle 6:\nAn exploratory haloperidol-controlled dose-finding study of ziprasidone in hospitalized patients with schizophrenia or schizoaffective disorder.\nAbstract 6:\nNinety patients with schizophrenia or schizoaffective disorder according to DSM-III-R criteria participated in this double-blind , exploratory , dose-ranging trial . After a single-blind washout period of 4 to 7 days , patients were r and omly assigned to receive one of four fixed doses of the new antipsychotic , ziprasidone 4 ( N = 19 ) , 10 ( N = 17 ) , 40 ( N = 17 ) , or 160 ( N = 20 ) mg/day or haloperidol 15 mg/day ( N = 17 ) for 4 weeks . A dose-response relationship among ziprasidone groups was established for improvements in Clinical Global Impression Severity ( CGI-S ) score ( p = 0.002 ) but not in Brief Psychiatric Rating Scale ( BPRS ) total score ( p = 0.08 ) . The intent-to-treat analysis of mean changes from baseline in the BPRS total , BPRS Psychosis core , and CGI-S scores demonstrated that ziprasidone 160 mg/day was comparable with haloperidol in reducing overall psychopathology and positive symptoms and was superior to ziprasidone 4 mg/day . Despite the small sample size and short duration of the trial , the improvement in CGI-S with both ziprasidone 160 mg/day and haloperidol 15 mg/day was statistically significantly greater than with ziprasidone 4 mg/day ( p = 0.001 and p = 0.005 , respectively ) . The percentage of patients classified as responders on both the BPRS total ( > or = 30 % improvement ) and CGI-Improvement ( score of 1 or 2 ) scales in the ziprasidone 160 mg/day group was similar to that in the haloperidol group and nonsignificantly greater than that in the ziprasidone 4 mg/day group . On all assessment s of clinical efficacy , the improvements associated with ziprasidone 4 mg/day , 10 mg/day , and 40 mg/day were similar . Concomitant benztropine use at any time during the study was less frequent with ziprasidone 160 mg/day ( 15 % ) than with haloperidol ( 53 % ) . Haloperidol was associated with a sustained hyperprolactinemia , unlike ziprasidone , where only transient elevations in prolactin that returned to normal within the dosing interval were observed . 
Ziprasidone was well tolerated , and the incidence of adverse events was similar in all groups . The results of this study suggest that ziprasidone 160 mg/day is as effective as haloperidol 15 mg/day in reducing overall psychopathology and positive symptoms of an acute exacerbation of schizophrenia or schizoaffective disorder but has a lower potential to induce extrapyramidal symptoms\n\n\nTitle 7:\nA RATING SCALE FOR EXTRAPYRAMIDAL SIDE EFFECTS\nAbstract 7:\nA modification of an earlier rating scale for extrapyramidal system disturbance is described , and evidence for the validity and reliability of the scale is presented . The usefulness of the scale in studies of neuroleptic drugs is discussed . By its application it is possible to quantify extrapyramidal side effects and to separate them into four principal factors",
"role": "user"
},
{
"content": "Ziprasidone may be an effective antipsychotic with less extrapyramidal effects than haloperidol .\nIt also , however , causes more nausea and vomiting than the typical drugs , and , at present , there is no data suggesting that it is different to other atypical compounds .",
"role": "assistant"
}
] |
science.anat_em_ner | science.anat_em_ner.456 | [
{
"content": "Given this abstract, identify all unique entities of these types: [\"AnatomicalEntity\"].\n\nOnly output the JSON object, no other text.\n\nAbstract:\nMagnesium for treatment of asthma in children.\nQUESTION: Magnesium is considered adjuvant therapy for moderate to severe asthma exacerbations in adults, but can it be used to treat children? ANSWER: Magnesium seems to be beneficial in the treatment of moderate to severe asthma in children. It is a safe drug to administer, but there have been minor side effects reported, such as epigastric or facial warmth, flushing, pain and numbness at the infusion site, dry mouth, malaise, and hypotension. Owing to its bronchodilating and anti-inflammatory effects, magnesium is an encouraging adjuvant therapy for pediatric patients who do not respond to conventional treatment in acute severe exacerbations. Future studies should focus on establishing the optimal dosage for maximal benefits and the best route of administration. Magnesium should also be considered as a prophylactic treatment.",
"role": "user"
},
{
"content": "{\"AnatomicalEntity\": [\"epigastric\", \"facial\", \"mouth\", \"site\"]}",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.18498 | [
{
"content": "Write an abstract for the research paper below.\n\nPaper text:\ndystonia is a relatively common clinical condition , identified by its characteristic features : sustained muscle contractions , twisting , and abnormal postures .\nthe term primary dystonia is used to describe dystonia where no cause could be identified ( idiopathic ) , while the term secondary is used to describe dystonia which is found to result from an identifiable cause .\nthe clinical features of dystonia are often bizarre and can be varied as blepharospasms , aversive eye movements , trismus , mouth opening , tongue protrusion , grimacing , distortion of lips , displacement / rotation of head and neck , hyperpronation , finger / wrist posturing , opisthotonus , and other axial distortions and gait disturbances .\ndysarthria , dysphagia , jaw dislocation , tongue amputation , and respiratory cyanosis with stridor may follow .\na cramp - like discomfort or tightness of jaw or tongue may occur , making chewing , swallowing , or speaking difficult .\nit produces substantial disability and from a therapeutic perspective , the available treatments are for the most part unsatisfactory .\ndrug - induced acute dystonia ( did ) is one of the most common forms of secondary dystonia .\nthis complication is seen with varying frequency , with varying rates being reported , based on differences in the population under study , specific drugs prescribed , and indications for the drugs being prescribed .\ndrugs with antidopaminergic properties , such as first and second generation antipsychotics are among the agents most commonly implicated in drug - induced dystonia .\nless frequently , other drug classes , like anti - emetics such as metoclopramide and prochlorperazine , anticonvulsants such as phenytoin , antidepressants such as nefazodone , citalopram , escitalopram , bupropion and duloxetine , and antivertigo agents such as cinnarizine and flunarizine are involved in this type of complication . 
the dystonia can occur immediately after ingestion of a single dose , over the course of several days of therapeutic dose administration , after dose increase , or as a manifestation of overdosage .\nmost cases occur in the first 72 h of medication use , often with dramatic symptoms , especially in children .\npatients at higher risk for did include those with younger age , of male gender , a previous history of similar reactions , recent use of cocaine , comorbidity with mood disorders , and metabolic abnormalities ( hypokalemia , dehydration , hypoparathyroidism , etc .\n, we discuss the case of an adolescent , who developed dystonia at therapeutic doses of carbamazepine ( cbz ) .\na 16-year - old male with moderate mental retardation presented to our clinic with 2 years history of multiple episodes of chewing movements accompanied by a state of altered consciousness , during which he would not respond to his name being called out and would continue to give a vacant stare in one direction .\nthere was no history of unconsciousness , repeated rhythmic movements of the limbs , loss of tone resulting in falls , tongue bite , and urinary incontinence .\neach episode was abrupt in onset and lasted about 510 min , after which the patient would spontaneously become responsive to the surroundings once again .\na clinical impression of complex partial seizure was made based on the semiology , and the patient was started on cbz 200 mg twice daily ( 8 mg / kg / day ) for the same .\nthe patient was brought to the emergency department 4 days later with acute dystonia of the neck which had started approximately 2 h back .\nhis neck was found to be held firmly in an extended position with the head tilted toward the right .\nhe had been fully conscious , oriented , and responsive to external stimuli throughout this period , though he was noticed to be agitated .\nhe was administered promethazine 50 mg intramuscularly with the dual purpose of achieving sedation , as well as using its anticholinergic properties to relieve the dystonia .\na serum cbz level estimation was done , and levels of 8 mcg / ml were obtained . since it was within the normal range ( 512 mcg / ml ) , it was considered unlikely that the cbz toxicity could have led to the dystonia .\na literature review also confirmed that there was very little likelihood of the dystonia resulting from the use of cbz , especially below toxic levels .\nit was considered that cbz could be the offending agent , responsible for the episodes of dystonia .\nthus , it was decided to stop cbz and shift the patient over to another anticonvulsant drug .\nthe patient was shifted over to sodium valproate at 1200 mg / day ( 24 mg / kg / day ) for seizure prophylaxis and was followed up for a 2-week period during which the patient remained seizure free , and no recurrence of dystonia was found .\ncbz is a drug that is widely used all over the world for epilepsy , bipolar disorder , and trigeminal neuralgia , apart from off - label indications such as restless legs syndrome . 
while stabilization of sodium channels is believed to be the mechanism behind its antiepileptic properties , researchers have been looking into a possible action at the level of the dopaminergic system too .\nscientific literature on the connection between cbz and dopamine blockade , however , is far from conclusive at this point .\nanimal studies have shown cbzs ability to increase acetylcholine in striatum and inhibit dopamine transmission , with the latter being observed particularly at subtherapeutic levels , an effect which wanes as the concentration of cbz increases in the blood , thus indicating the possibility of dystonias resulting from the administration of cbz , at subtherapeutic blood concentrations .\nother literature points toward cbz causing a dyskinetic syndrome only in toxic doses , due to dopamine blockage .\nother studies , exploring the antipsychotic effects of cbz , have shed some light on its role in dopaminergic transmission , with indications that the antipsychotic effects of cbz might result from reduced sensitivity of presynaptic dopamine receptors .\nindeed , presynaptic effects after administration of cbz showed reduced synthesis of dopamine and a reduced rate of disappearance of dopamine after tyrosine hydroxylase inhibition , indicating a reduction in turnover .\nresearchers have also used this basis to explain how cbz causes a slight increase in prolactin levels , similar to neuroleptics which act by blocking d2 receptors .\nrare instances of dystonias associated with cbz have been reported in literature , but this has been largely confined to patients with overdoses / toxic blood levels or drug combinations .\nliterature regarding this happening at therapeutic levels is scant , especially in the case of adolescents . in this case , the patient developed dystonia at therapeutic levels of cbz , with the adverse drug reaction corresponding to a level of definite causality on the naranjo et al .\nadverse drug reaction probability scale and of certain causality based on the world health organization collaborating centre for international drug monitoring , the uppsala monitoring centre scale .\nhowever , we wish to caution the readers that findings from a single case should not be generalized , in spite of the high probability of causality being indicated by the scale used .\nthis case merely serves to indicate the possibility of cbz causing dystonia at blood levels within the therapeutic range , with the aim to usher in further research in this direction .",
"role": "user"
},
{
"content": "dystonia is sustained muscle contraction , which may be primary or secondary to other causes . \n drugs comprise one of the most important causes for the secondary dystonia , the usual mechanism being a dopaminergic blockade . \n there are very few reports describing dystonia resulting from carbamazepine ( cbz ) administration . in this case report , a 16-year - old male with mental retardation and seizure disorder developed dystonia at therapeutic blood levels of cbz .",
"role": "assistant"
}
] |
science.chemprot_ner | science.chemprot_ner.666 | [
{
"content": "Extract entities of these types from the text: [\"Chemical\", \"Protein\"].\n\nOutput a JSON object with entity types as keys and lists of extracted entities as values.\n\nText:\nAuranofin, as an anti-rheumatic gold compound, suppresses LPS-induced homodimerization of TLR4.\nToll-like receptors (TLRs), which are activated by invading microorganisms or endogenous molecules, evoke immune and inflammatory responses. TLR activation is closely linked to the development of many chronic inflammatory diseases including rheumatoid arthritis. Auranofin, an Au(I) compound, is a well-known and long-used anti-rheumatic drug. However, the mechanism as to how auranofin relieves the symptom of rheumatoid arthritis has not been fully clarified. Our results demonstrated that auranofin suppressed TLR4-mediated activation of transcription factors, NF-kappaB and IRF3, and expression of COX-2, a pro-inflammatory enzyme. This suppression was well correlated with the inhibitory effect of auranofin on the homodimerization of TLR4 induced by an agonist. Furthermore, auranofin inhibited NF-kappaB activation induced by MyD88-dependent downstream signaling components of TLR4, MyD88, IKKbeta, and p65. IRF3 activation induced by MyD88-independent signaling components, TRIF and TBK1, was also downregulated by auranofin. Our results first demonstrate that auranofin suppresses the multiple steps in TLR4 signaling, especially the homodimerization of TLR4. The results suggest that the suppression of TLR4 activity by auranofin may be the molecular mechanism through which auranofin exerts anti-rheumatic activity.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"Auranofin\", \"Au(I)\", \"auranofin\"], \"Protein\": [\"MyD88\", \"NF-kappaB\", \"TLR4\", \"IKKbeta\", \"Toll-like receptors (TLRs)\", \"IRF3\", \"p65\", \"TRIF\", \"TLR\", \"TBK1\", \"COX-2\"]}",
"role": "assistant"
}
] |
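The two extraction records above (and the similar ones later in this section) pair a source text with a gold JSON object mapping entity types to lists of mentions. A common way to compare a system's output against such a record is set-based precision, recall, and F1 per type, sketched below. The predicted object is invented, the gold object is abridged from the record above, and this is a generic illustration rather than the official scorer of any of these datasets.

```python
# Generic sketch: set-based precision / recall / F1 per entity type for
# extraction records of the form {"Chemical": [...], "Protein": [...]}.
# The prediction is invented and the gold is abridged; this is not the
# official evaluation script of any dataset in this file.

def prf(gold, pred):
    scores = {}
    for etype in gold:
        g, p = set(gold.get(etype, [])), set(pred.get(etype, []))
        tp = len(g & p)
        prec = tp / len(p) if p else 0.0
        rec = tp / len(g) if g else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        scores[etype] = {"precision": prec, "recall": rec, "f1": f1}
    return scores

gold = {"Chemical": ["Auranofin", "Au(I)", "auranofin"],
        "Protein": ["MyD88", "NF-kappaB", "TLR4"]}
pred = {"Chemical": ["auranofin", "gold compound"],      # invented system output
        "Protein": ["TLR4", "NF-kappaB", "COX-2"]}

print(prf(gold, pred))
```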
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.27671 | [
{
"content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nIn this paper, we propose a framework to study a general class of strategic behavior in voting, which we call vote operations. We prove the following theorem: if we fix the number of alternatives, generate @math votes i.i.d. according to a distribution @math , and let @math go to infinity, then for any @math , with probability at least @math , the minimum number of operations that are needed for the strategic individual to achieve her goal falls into one of the following four categories: (1) 0, (2) @math , (3) @math , and (4) @math . This theorem holds for any set of vote operations, any individual vote distribution @math , and any integer generalized scoring rule, which includes (but is not limited to) almost all commonly studied voting rules, e.g., approval voting, all positional scoring rules (including Borda, plurality, and veto), plurality with runoff, Bucklin, Copeland, maximin, STV, and ranked pairs. We also show that many well-studied types of strategic behavior fall under our framework, including (but not limited to) constructive destructive manipulation, bribery, and control by adding deleting votes, margin of victory, and minimum manipulation coalition size. Therefore, our main theorem naturally applies to these problems.\n\nCited Abstract(s):\n[1]: \n\n[2]: \n\n[3]: Efficient post-election audits select the number of machines or precincts to audit based in part on the margin of victory (the number of ballots that must be changed in order to change the outcome); a close election needs more auditing than a landslide victory. For a simple \"first-pastthe-post\" election, the margin is easily computed based on the number of votes the first and second place candidates received. However, for instant runoff voting (IRV) elections, it is not immediately obvious how to compute the margin of victory. This paper presents algorithmic techniques for computing the margin of victory for IRV elections. We evaluate our method by attempting to compute the margin of victory for 25 IRV elections in the United States. The margin of victory computed can then be used to conduct post-election audits more effectively for IRV elections.\n\n[4]: There are many sources of error in counting votes: the apparent winner might not be the rightful winner. Hand tallies of the votes in a random sample of precincts can be used to test the hypothesis that a full manual recount would find a different outcome. This paper develops a conservative sequential test based on the vote-counting errors found in a hand tally of a simple or stratified random sample of precincts. The procedure includes a natural escalation: If the hypothesis that the apparent outcome is incorrect is not rejected at stage s, more precincts are audited. Eventually, either the hypothesis is rejected-and the apparent outcome is confirmed-or all precincts have been audited and the true outcome is known. The test uses a priori bounds on the overstatement of the margin that could result from error in each precinct. Such bounds can be derived from the reported counts in each precinct and upper hounds on the number of votes cast in each precinct. The test allows errors in different precincts to be treated differently to reflect voting technology or precinct sizes. 
It is not optimal, but it is conservative: the chance of erroneously confirming the outcome of a contest if a full manual recount would show a different outcome is no larger than the nominal significance level. The approach also gives a conservative P-value for the hypothesis that a full manual recount would find a different outcome, given the errors found in a fixed size sample. This is illustrated with two contests from November, 2006: the U.S. Senate race in Minnesota and a school hoard race for the Sausalito Marin City School District in California, a small contest in which voters could vote for up to three candidates.\n\n[5]: Risk-limiting post-election audits have a pre-specified minimum chance of requiring a full hand count if the outcome of the contest is not the outcome that a full hand count of the audit trail would show. The first risk-limiting audits were performed in 2008 in California. Three refinements to increase efficiency were tested in Marin and Yolo counties, California, in November 2009. The first refinement is to audit a collection of contests as a group by auditing a random sample of batches of ballots and combining observed discrepancies in the contests represented in those batches in a particular way: the maximum across-contest relative overstatements (MACRO). MACRO audits control the familywise error rate (the chance that one or more incorrect outcomes fails to be corrected by a full hand count) at a cost that can be lower than that of controlling the per-comparison error rate with independent audits. A risk-limiting audit for the entire collection of contests can be built on MACRO using a variety of probability sampling schemes and ways of combining MACRO across batches. The second refinement is to base the test on the Kaplan-Markov confidence bound, drawing batches with probability proportional to an error bound (PPEB) on the MACRO. The Kaplan-Markov bound is especially well suited to sequential testing: After each batch is audited, a simple calculation--a product of fractions--determines whether to audit another batch or to stop the audit and confirm the apparent outcomes. The third refinement is to audit individual ballots rather than larger batches of ballots, comparing the cast vote record (the machine interpretation of the voter's marks on the ballot) to a human interpretation of the voter's intent. Such single-ballot audits can greatly reduce workload: When the outcome is correct, the number of ballots that must be audited to attain a given risk limit is roughly proportional to the number of ballots per batch. All three of these refinements can be used together, resulting in extremely efficient risk-limiting audits.\n\n[6]: Post-election audits use the discrepancy between machine counts and a hand tally of votes in a random sample of precincts to infer whether error affected the electoral outcome. The maximum relative overstatement of pairwise margins (MRO) quantifies that discrepancy. The electoral outcome a full hand tally shows must agree with the apparent outcome if the MRO is less than 1. This condition is sharper than previous ones when there are more than two candidates or when voters may vote for more than one candidate. For the 2006 U.S. Senate race in Minnesota, a test using MRO gives a P-value of 4.05 for the hypothesis that a full hand tally would find a different winner, less than half the value Stark [Ann. Appl. Statist. 2 (2008) 550-581] finds.\n\n[7]: A general definition is proposed for the margin of victory of an election contest. 
That definition is applied to Instant Runoff Voting (IRV) and several estimates for the IRV margin of victory are described: two upper bounds and two lower bounds. Given round-by-round vote totals, the time complexity for calculating these bounds does not exceed O(C2 log C), where C is the number of candidates. It is also shown that calculating the larger and more useful of the two lower bounds can be viewed, in part, as solving a longest path problem on a weighted, directed, acyclic graph. Worst-case analysis shows that neither these estimates, nor any estimates based only on tabulation round-by-round vote totals, are guaranteed to be within a constant factor of the margin of victory. These estimates are calculated for IRV elections in Australia and California. Pseudo code for calculating these estimates is provided.\n\n[8]: The margin of victory of an election, defined as the smallest number k such that k voters can change the winner by voting differently, is an important measurement for robustness of the election outcome. It also plays an important role in implementing efficient post-election audits, which has been widely used in the United States to detect errors or fraud caused by malfunctions of electronic voting machines. In this paper, we investigate the computational complexity and (in)approximability of computing the margin of victory for various voting rules, including approval voting, all positional scoring rules (which include Borda, plurality, and veto), plurality with runoff, Bucklin, Copeland, maximin, STV, and ranked pairs. We also prove a dichotomy theorem, which states that for all continuous generalized scoring rules, including all voting rules studied in this paper, either with high probability the margin of victory is Θ(√n), or with high probability the margin of victory is Θ(n), where n is the number of voters. Most of our results are quite positive, suggesting that the margin of victory can be efficiently computed. This sheds some light on designing efficient post-election audits for voting rules beyond the plurality rule.",
"role": "user"
},
{
"content": "Our results has both negative and positive implications. On the negative side, our results provide yet another evidence that computational complexity is not a strong barrier against strategic behavior, because the strategic individual now has some information about the number of operations that are needed, without spending any computational cost or even without looking at the input instance. Although the estimation of our theorem may not be very precise (because we do not know which of the four cases a given instance belongs to), such estimation may be explored to designing effective algorithms that facilitate strategic behavior. On the positive side, this easiness of computation is not always a bad thing: sometimes we want to do such computation in order to test how robust a given preference-profile is. For example, computing the margin of victory is an important component in designing novel risk-limiting audit methods [3] [7] [8] [4] [6] [2] [2]2 [1] .",
"role": "assistant"
}
] |
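One of the cited abstracts above notes that for a simple first-past-the-post contest the margin of victory follows directly from the first- and second-place totals: moving a voter from the winner to the runner-up closes the gap by two, so roughly half the gap suffices. The sketch below implements only that plurality special case under two stated assumptions, namely that the reported winner strictly beats the runner-up and that a challenger who ties after the changes wins the tie-break. The general problem treated in the main abstract covers many other rules and operation types and is usually much harder.

```python
# Sketch of the margin of victory for the plurality special case.
# Assumptions (stated here, not taken from the cited papers): the reported
# winner strictly beats the runner-up, and a challenger who ties after the
# changes wins the tie-break. Moving one voter from the winner to the
# runner-up closes the gap by two, hence ceil(gap / 2) changed votes suffice.
from collections import Counter
from math import ceil

def plurality_margin_of_victory(votes):
    """votes: one plurality ballot (a candidate name) per voter."""
    tally = Counter(votes)
    (_, top), (_, second) = tally.most_common(2)
    assert top > second, "sketch assumes a strict winner"
    return ceil((top - second) / 2)

if __name__ == "__main__":
    votes = ["a"] * 10 + ["b"] * 6 + ["c"] * 3   # hypothetical profile
    print(plurality_margin_of_victory(votes))     # gap of 4 -> 2 voters suffice
```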
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.77579 | [
{
"content": "Write a literature review chapter based on these paper abstracts.\nLiterature Review Title: Project scheduling under uncertainty: Survey and research potentials \nChapter Title: Stochastic resource-constrained project scheduling \n1. Abstract of Cited Paper (BIB001): AbstractThe project scheduling problem domain is an important research and applications area of engineering management. Recently introduced project scheduling software such as Risk+, @Risk for Project, SCRAM and Risk Master have facilitated the use of simulation to solve project scheduling problems with stochastic task durations. Practitioners, however, should be made aware that the solution algorithm used in these software systems is based on the implicit assumption of perfect information, an assumption that jeopardizes the feasibility of solution results. This paper discusses the impact of assuming perfect information, introduces a multi-period stochastic programming based model of the project scheduling problem with stochastic task durations, and presents an alternative simulation algorithm that does not assume the availability of perfect information. A simple case study is used to illustrate the practical implications of applying simulation to address project scheduling problems with stochastic task d...\n\nLiterature Review:",
"role": "user"
},
{
"content": "The stochastic resource-constrained project scheduling problem aims at scheduling project activities with uncertain durations in order to minimize the expected project duration subject to zero-lag finish-start precedence constraints and renewable resource constraints. The project is represented by an activity-on-the-node network G ¼ ðV ; EÞ, where the set V ¼ f1; 2; . . . ; ng denotes the set of activities. Activity 1 and n are dummy activities, representing the start and end of the project. Given the presence of both resource constraints and random activity durations, schedules are generated through the application of so-called scheduling policies or scheduling strategies, and no baseline schedule is used. According to the definitions given in BIB001 ,b) and M€ ohring et al. (1984 , a scheduling policy P makes decisions at the decision points t ¼ 0 (the start of the project) and the completion times of activities. A decision at time t is to start at time t a precedence and resource feasible set of activities SðtÞ, exploiting only information that has become available up to time t. As soon as the activities have been finished, the activity durations are known yielding a realization d of activity durations. The application of policy P leads to the creation of a schedule PðdÞ ¼ ðs 1 ; s 2 ; . . . ; s n Þ of activity starting times and a resulting schedule makespan C max ðPðdÞÞ. The common objective considered in the literature is to create a policy that minimizes the expected project duration EðC max ðPðdÞÞÞ over a class of policies. , BIB002 and show how to write the corresponding optimization problem in its general form as a multi-stage stochastic programming problem.",
"role": "assistant"
}
] |
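The passage above defines a scheduling policy as a rule that, at time 0 and at each completion time, starts a precedence- and resource-feasible set of activities using only the information revealed so far, and evaluates it by the expected makespan E(C_max). The Monte-Carlo sketch below makes that concrete for a very simple policy class, a fixed priority list applied at every decision point; the toy network, the single renewable resource, the exponential durations, and the priority order are all invented, so read it as an illustration of the policy-plus-simulation idea rather than a method from the cited papers.

```python
# Monte-Carlo sketch of a fixed-priority scheduling policy for a stochastic
# resource-constrained project: at time 0 and at every completion, start
# precedence- and resource-feasible activities in priority order, using only
# durations observed so far. Instance, durations and priorities are invented.
import random

# activity id: (predecessors, resource demand, mean duration)
PROJECT = {
    1: ([],     0, 0.0),   # dummy start
    2: ([1],    2, 3.0),
    3: ([1],    1, 2.0),
    4: ([2, 3], 2, 4.0),
    5: ([4],    0, 0.0),   # dummy end
}
CAPACITY = 3
PRIORITY = [1, 2, 3, 4, 5]           # hypothetical priority list

def simulate_once(rng):
    durations = {a: (rng.expovariate(1.0 / m) if m > 0 else 0.0)
                 for a, (_, _, m) in PROJECT.items()}
    start, finish, running = {}, {}, {}
    t = 0.0
    while len(finish) < len(PROJECT):
        in_use = sum(PROJECT[a][1] for a in running)
        for a in PRIORITY:                        # decision point: start what fits
            preds, demand, _ = PROJECT[a]
            if a in start or any(p not in finish for p in preds):
                continue
            if in_use + demand <= CAPACITY:
                start[a] = t
                running[a] = t + durations[a]
                in_use += demand
        t = min(running.values())                 # advance to the next completion
        for a in [a for a, f in running.items() if f <= t]:
            finish[a] = running.pop(a)
    return max(finish.values())                   # realised makespan

if __name__ == "__main__":
    rng = random.Random(0)
    samples = [simulate_once(rng) for _ in range(2000)]
    print(sum(samples) / len(samples))            # estimate of E[C_max] under this policy
```

Averaging sampled makespans, as in the last lines, is one simple way to compare candidate policies on the expected-makespan objective discussed above.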
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.92877 | [
{
"content": "Write an abstract for the research paper below.\n\nPaper text:\nwilms ' tumor ( nephroblastoma ) is the second common abdominal malignancy and most common malignant renal tumor in children .\noften it presents with asymptomatic mass and is usually discovered by parents incidentally while bathing or dressing .\ninvolvement of both kidneys either at presentation or subsequently were seen in 5% to 7% of cases , but the majority of bilateral wt present with synchronous disease at diagnosis .\ntesticular and paratesticular metastasis due to wt have been reported but it is extremely rare .\nhere we report a 33-month -old male with bilateral wt and metastasis to right spermatic cord .\nthe patient is a 33-month - old male whom his mother had discovered right flank mass at the time of dressing one week before admission .\nultrasound of the abdomen revealed a 7379 mm and 7578 mm heterogeneous mass without calcification in the right and left kidney , respectively and lymph node in para - aortic area with 25 mm in diameter .\ncontrast - enhanced computed tomography ( ct ) scan of the abdomen findings were similar to that of ultrasound without evidence for inferior vena cava and renal vein involvement .\nthe histopathology showed infiltration of both kidneys with blastemal cell predominant component , vascular invasion in the tumoral tissue and favorable histology without anaplasia ( figure .\n1a ) histopathological examination showing the neoplastic cells ( blastemal component is predominant ) from right kidney .\n1b ) spermatic cord biopsy illustrating infiltration by neoplastic cells of wilms tumor para - aortic lymph node biopsy was positive for tumoral involvement .\ncomplete blood count with differential , liver and renal function tests were all within normal range .\nerythrocyte sedimentation rate was 89mm / hr . diagnosis was consistent with stage v favorable - histology wilms ' tumor and treatment was initiated with chemotherapeutic agents including vincristine , actinomycin d , and doxorubicin .\neight weeks following treatment the tumor size of both sides decreased more than 50% and ultrasound demonstrated a 2840 mm and 3436 mm tumor mass in the right and left kidney , respectively .\nhe underwent right radical nephrectomy and immediately received abdominal irradiation ( 1080 cgy in 10 fraction dose ) .\ntwo months later , ultrasonography revealed a mass measuring 6969 mm in right renal fossa , but tumor size in the left side was without any significant change .\nsubsequently , the chemotherapy regimen changed to ifosfamide , crboplatin and etoposide ( ice ) . 
following the fourth cycle of ice , at routine physical examination a right scorotal mass on spermatic cord was noted .\nthe patient then underwent exicisional biopsy and histologic examination suggested wt metastases to spermatic cord ( figure .\ntwo weeks later , lungs ct scan showed multiple nodules and on the ultrasound examination , liver metastases and widely disseminated tumor spread in abdomen were identified .\nchemotherapy with ice regimen continued but unfortunately , he died 2 months later owing to progressive disease and distant metastases .\nthe most frequent sites of metastases due to wt are lungs , liver and regional lymph nodes .\nwt metastases to testicular or partesticular is extremely rare and only eight cases have been reported in the literature so far ( table 1 ) .\nreported patients with testicular and/or paratesticular metastasis due to wt leukemia and lymphoma are the most common malignancy that metastases to the testis .\ndifferent mechanisms have been proposed for testicular and paratesticular metastases due to wt such as direct tumor cell spread from abdomen along the patent processus vaginalis by gravity , spermatic vein involvement by tumor cells , lymphatic or hematogeneous . \n the first case of testicular metastasis due to wt was reported in 1928 by dew h in a 27-month - old boy .\nmetastasis to left testis in this case occurred 6 months after nephrectomy with direct spread of the tumor cells through left spermatic vein .\n, in a 5-year - old boy . in this patient , metastases to the right testis occurred 8 months after ipsilateral nephrectomy and the mechanism of this was right spermatic cord involvement along processsus vaginalis .\na review of previous reports demonstrated that in four patients of them left testis involvement existed , whereas , only in 3 cases the involved kidney were on the left side and four patients had right testis involvement while only in 3 cases kidney involvement were in the same side . in all of these case reports , unilateral wt was present and our case is the first report of bilateral wt with paratesticular metastasis . in the present case ,\nalthough , testicular and paratesticular tumor are rare in children , any solid scrotal mass should be considered malignant until proved otherwise especially when malignant neoplasm exists outside the testis .\ntherefore , we recommended routine testes examination in patients with wt during treatment and follow - up period .",
"role": "user"
},
{
"content": "wilms ' tumor ( wt ) is the most common primary renal tumor in children . \n common sites of metastases are lungs , liver and regional lymph nodes . \n testicular and paratesticular metastasis due to wt have been reported but it is extremely rare . \n we report a 33-month -old male with bilateral wt and metastasis to right spermatic cord .",
"role": "assistant"
}
] |
science.chia_ner | science.chia_ner.1918 | [
{
"content": "Extract all unique entities of these types from the following clinical trial criteria: [\"Negation\", \"Observation\", \"ReferencePoint\", \"Procedure\", \"Drug\", \"Measurement\", \"Value\", \"Person\", \"Multiplier\", \"Device\", \"Qualifier\", \"Mood\", \"Temporal\", \"Condition\", \"Visit\", \"Scope\"].\n\nOutput a JSON object with entity types as keys and lists of extracted entities as values. If no entities are found for a type, use [].\n\nText:\n\nallergy to Doxycycline or Methylprednisolone,\npregnancy,\ndiagnosis,\nInflammatory arthritis or diabetes,\nsecondary adhesive capsulitis (history of significant trauma, rotator cuff tear injury, stroke)\nevidence of arthritis on x-ray,\ncurrent infectious disease, and\nany previous treatment for the for adhesive capsulitis of the affected shoulder.",
"role": "user"
},
{
"content": "{\"Condition\": [\"infectious disease\", \"trauma\", \"pregnancy\", \"Inflammatory arthritis\", \"stroke\", \"allergy\", \"diabetes\", \"rotator cuff tear injury\", \"diagnosis\", \"adhesive capsulitis\", \"arthritis\"], \"Device\": [], \"Drug\": [\"Doxycycline\", \"Methylprednisolone\"], \"Measurement\": [], \"Mood\": [\"evidence of\"], \"Multiplier\": [], \"Negation\": [], \"Observation\": [\"history\"], \"Person\": [], \"Procedure\": [\"x-ray\", \"treatment\"], \"Qualifier\": [\"significant\", \"affected shoulder\", \"any\", \"secondary\"], \"ReferencePoint\": [], \"Scope\": [\"significant trauma, rotator cuff tear injury, stroke\", \"history of significant trauma, rotator cuff tear injury, stroke\", \"Doxycycline or Methylprednisolone\"], \"Temporal\": [\"current\", \"previous\"], \"Value\": [], \"Visit\": []}",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.71306 | [
{
"content": "Write an abstract for the research paper below.\n\nPaper text:\nwhen anastomotic urethroplasty is planned in such a patient , there is fear of anastomosis of distal urethra to proximal false lumen instead of the normal anatomical lumen .\nanastomosis to false lumen can further give rise to complications like failed urethroplasty and urinary incontinence that can increase morbidity as well as the cost of treatment .\nwe present a point of technique to prevent anastomosis of normal distal urethral lumen to false tract proximally using antegrade cystoscopy .\nshe had history of multiple episodes of failed voiding for 2 years following attempts of urethral dilatation and catheterization .\nher voiding cystourethrogram ( vcug ) showed complete obliteration of urethra at mid - urethral level with false passage [ figure 1a and b ] .\nthe catheter ( 18 fr ) had actually been dwelling in through the false passage all these years .\n( b ) diagrammatic representation showing obliterative stricture of mid - urethra with false passage under spinal anesthesia , an inverted u shaped incision was made close to anterior half circumference of urethral meatus [ figure 2a ] .\nthe urethra was dissected and exposed using a subsymphyseal approach with hegar dilator in urethra till its obliterated end , which helped in identifying the distal end of stricture . after dissecting the strictured segment ( 1 cm ) ,\nintra - operative photograph showing the inverted u shaped incision we used antegrade cystoscopy to verify the proximal blind end of urethra with the help of light of the cystoscope [ figure 2b and c ] .\nthe stay sutures were then placed on the tip of proximal urethra and appropriateness of passage of suture was confirmed by antegrade endoscopic vision [ figure 2d g ] .\nonce the proximal end of anatomical urethra was dissected , the lumen was opened between two stay sutures .\nthe strictured end of distal urethra was dissected over a 26 f hegar dilator tip , opened and spatulated .\nthereafter tension free end to end anastomosis of two anatomical urethral lumens was performed using six interrupted suture of 4 - 0 vicryl on 16 fr foley catheter [ figure 3a ] .\n( b ) identifying proximal blind end of anatomical urethra with the help of light of the cystoscope .\n( c ) diagrammatic representation showing identification of proximal blind end of anatomical urethra with the help of light of the cystoscope ( d ) taking stay suture under direct vision .\n( g ) diagrammatic representation showing two stay sutures in proximal urethra being taken under direct vision ( a ) diagrammatic representation showing completed anastomosis between normal urethral lumens avoiding false passage .\n( b ) post - operative vcug showing anastomosis between normal urethral lumens avoiding false passage\nshe had history of multiple episodes of failed voiding for 2 years following attempts of urethral dilatation and catheterization .\nher voiding cystourethrogram ( vcug ) showed complete obliteration of urethra at mid - urethral level with false passage [ figure 1a and b ] .\nthe catheter ( 18 fr ) had actually been dwelling in through the false passage all these years .\nunder spinal anesthesia , an inverted u shaped incision was made close to anterior half circumference of urethral meatus [ figure 2a ] .\nthe urethra was dissected and exposed using a subsymphyseal approach with hegar dilator in urethra till its obliterated end , which helped in identifying the distal end of stricture . 
after dissecting the strictured segment ( 1 cm ) , urethra\nintra - operative photograph showing the inverted u shaped incision we used antegrade cystoscopy to verify the proximal blind end of urethra with the help of light of the cystoscope [ figure 2b and c ] .\nthe stay sutures were then placed on the tip of proximal urethra and appropriateness of passage of suture was confirmed by antegrade endoscopic vision [ figure 2d g ] .\nonce the proximal end of anatomical urethra was dissected , the lumen was opened between two stay sutures .\nthe strictured end of distal urethra was dissected over a 26 f hegar dilator tip , opened and spatulated .\nthereafter tension free end to end anastomosis of two anatomical urethral lumens was performed using six interrupted suture of 4 - 0 vicryl on 16 fr foley catheter [ figure 3a ] .\n( b ) identifying proximal blind end of anatomical urethra with the help of light of the cystoscope .\n( c ) diagrammatic representation showing identification of proximal blind end of anatomical urethra with the help of light of the cystoscope ( d ) taking stay suture under direct vision .\n( g ) diagrammatic representation showing two stay sutures in proximal urethra being taken under direct vision ( a ) diagrammatic representation showing completed anastomosis between normal urethral lumens avoiding false passage .\n( b ) post - operative vcug showing anastomosis between normal urethral lumens avoiding false passage\nat 4 months follow - up , the patient is voiding well with uroflow showing maximum flow rate of 18 ml / sec and post - void residual urine was 15 ml .\nher american urological association ( aua ) symptom score was 5 at 4 months after operation .\nvoiding cystourethrogram at 4 months [ figure 3b ] showed patent anastomosis with anastomosis between normal anatomical urethral lumens and not with false passage .\nthe anastomotic urethroplasty in females is mostly performed with the help of hegar dilator or retrograde cystoscopy . if the case is having a wide false passage , there remains a possibility of anastomosis of distal urethra to proximal false lumen .\nthe retrograde cystoscopy may not help as the cystoscope may enter through the false passage into proximal urethra giving a false impression of normal urethral passage .\nthis problem is more likely in women on account of short urethra and absence of anatomical landmarks like verumontanum that helps to identify normal urethral lumen in males .\nanastomotic urethroplasty is the treatment of choice for short segment strictures of bulbous urethra in males .\nhowever , there is a paucity of data in the literature describing anastomotic urethroplasty as the treatment for female urethral stricture .\nrovner and wein have described anastomotic urethroplasty in females for management of complex urethral diverticula .\npodesta et al . , and hemal et al . , in their separate studies have evaluated pelvic fracture urethral distraction defect in females and performed anastomotic urethroplasty in selected cases .\ndorairajan et al . 
, have even suggested primary repair of the urethra with simple urethral realignment over a catheter , as the procedure of choice for female urethral injury associated with a pelvic fracture .\nthe anastomotic urethroplasty in females can be performed using subsymphyseal route approaching the urethra from dorsal aspect .\nfemale urethral stricture with complete occlusion of lumen with an established false passage is an uncommon entity and doing anastomotic urethroplasty has a risk of anastomosis to false tract .\nour proposed modification of using antegrade cystoscopy helps to identify normal anatomical proximal urethral lumen and ensures anastomosis between normal urethral lumens .\ninstruments required for antegrade cystoscopy are easily available in set up catering patients with urological ailments .\nwhile undertaking anastomotic urethroplasty for female urethral stricture , this simple modification of using antegrade cystoscopy facilitates recognition of normal anatomical lumens and thus ensures accurate anastomosis .",
"role": "user"
},
{
"content": "purpose : during anastomotic urethroplasty for stricture urethra with false passage using standard technique , there remains a chance of anastomosis of normal distal urethra to proximal false lumen . \n herein , we present a point of technique in which by using antegrade cystoscope , one can not just identify and dissect normal anatomical proximal urethral lumen , but also perform some of the steps for anastomosis under direct vision . \n this will avoid making anastomosis to false lumen and thus leading to further complications.materials and methods : we report a case of 35-years - female who was presented to us with total mid - urethral stricture with false passage following multiple urethral dilatation attempts . \n we used antegrade cystoscopy during anastomotic urethroplasty to identify and dissect the proximal end of urethra thereby avoiding anastomosis to false tract.results:we successfully performed anastomotic urethroplasty avoiding false passage . \n post - operative uroflow showed q max of 18 ml / sec . \n voiding cystourethrogram post - operatively showed anastomosis between normal anatomical lumens.conclusion:this modification of using antegrade cystoscopy helps to identify proximal urethral end which in turn helps in avoiding anastomosis to false tract and ensures anastomosis between normal lumens .",
"role": "assistant"
}
] |
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.32374 | [
{
"content": "Write a literature review chapter based on these paper abstracts.\nLiterature Review Title: Advanced Medical Displays: A Literature Review of Augmented Reality \nChapter Title: C. Error Estimation \n1. Abstract of Cited Paper (BIB001): A method is developed to analyze the accuracy of the relative head-to-object position and orientation (pose) in augmented reality systems with head-mounted displays. From probabilistic estimates of the errors in optical tracking sensors, the uncertainty in head-to-object pose can be computed in the form of a covariance matrix. The positional uncertainty can be visualized as a 3D ellipsoid. One useful benefit of having an explicit representation of uncertainty is that we can fuse sensor data from a combination of fixed and head-mounted sensors in order to improve the overall registration accuracy. The method was applied to the analysis of an experimental augmented reality system, incorporating an optical see-through head-mounted display, a head-mounted CCD camera, and a fixed optical tracking sensor. The uncertainty of the pose of a movable object with respect to the head-mounted display was analyzed. By using both fixed and head mounted sensors, we produced a pose estimate that is significantly more accurate than that produced by either sensor acting alone. \n2. Abstract of Cited Paper (BIB002): All augmented reality (AR) systems must deal with registration errors. While most AR systems attempt to minimize registration errors through careful calibration, registration errors can never be completely eliminated in any realistic system. In this paper, we describe a robust and efficient statistical method for estimating registration errors. Our method generates probabilistic error estimates for points in the world, in either 3D world coordinates or 2D screen coordinates. We present a number of examples illustrating how registration error estimates can be used in AR interfaces, and describe a method for estimating registration errors of objects based on the expansion and contraction of their 2D convex hulls. \n3. Abstract of Cited Paper (BIB003): Our purpose is to provide an augmented reality system for Radio-Frequency guidance that could superimpose a 3D model of the liver, its vessels and tumors (reconstructed from CT images) on external video images of the patient. In this paper, we point out that clinical usability not only need the best affordable registration accuracy, but also a certification that the required accuracy is met, since clinical conditions change from one intervention to the other. Beginning by addressing accuracy performances, we show that a 3D/2D registration based on radio-opaque fiducials is more adapted to our application constraints than other methods. Then, we outline a lack in their statistical assumptions which leads us to the derivation of a new extended 3D/2D criterion. Careful validation experiments on real data show that an accuracy of 2 mm can be achieved in clinically relevant conditions, and that our new criterion is up to 9% more accurate, while keeping a computation time compatible with real-time at 20 to 40 Hz. \n4. Abstract of Cited Paper (BIB004): Optical tracking systems have been used for several years in image guided medical procedures. Vendors often state static accuracies of a single retro-reflective sphere or LED. Expensive coordinate measurement machines (CMM) are used to validate the positional accuracy over the specified working volume. Users are interested in the dynamic accuracy of their tools. 
The configuration of individual sensors into a unique tool, the calibration of the tool tip, and the motion of the tool contribute additional errors. Electromagnetic (EM) tracking systems are considered an enabling technology for many image guided procedures because they are not limited by line-of-sight restrictions, take minimum space in the operating room, and the sensors can be very small. It is often difficult to quantify the accuracy of EM trackers because they can be affected by field distortion from certain metal objects. Many high-accuracy measurement devices can affect the EM measurements being validated. EM Tracker accuracy tends to vary over the working volume and orientation of the sensors. We present several simple methods for estimating the dynamic accuracy of EM tracked tools. We discuss the characteristics of the EM Tracker used in the GE Healthcare family of surgical navigation systems. Results for other tracking systems are included. \n5. Abstract of Cited Paper (BIB005): For current surgical navigation systems optical tracking is state of the art. The accuracy of these tracking systems is currently determined statically for the case of full visibility of all tracking targets. We propose a dynamic determination of the accuracy based on the visibility and geometry of the tracking setup. This real time estimation of accuracy has a multitude of applications. For multiple camera systems it allows reducing line of sight problems and guaranteeing a certain accuracy. The visualization of these accuracies allows surgeons to perform the procedures taking to the tracking accuracy into account. It also allows engineers to design tracking setups interactively guaranteeing a certain accuracy. ::: ::: Our model is an extension to the state of the art models of Fitzpatrick et al. [1] and Hoff et al. [2]. We model the error in the camera sensor plane. The error is propagated using the internal camera parameter, camera poses, tracking target poses, target geometry and marker visibility, in order to estimate the final accuracy of the tracked instrument.\n\nLiterature Review:",
"role": "user"
},
{
"content": "Tracking in medical AR is mostly fiducial-based because it can guarantee a predictable quality of tracking, which is necessary for the approval of a navigation system. For an estimation of the overall error calibration, registration and tracking errors have to be computed, propagated, and accumulated. Nicolau and colleagues BIB004 propose a registration with error prediction for endoscopic augmentation. Fitzpatrick et al. BIB001 compute tracking based errors based on the spatial distribution of marker sets. Hoff et al. BIB002 predict the error for an HMD based navigation system. An online error estimation is a desirable feature, since physicians have to rely on the visualized data. In current clinical practice, navigation systems stop their visualization in case a factor that decreases the accuracy is known to the system. Instead of stopping the whole system it would be useful to estimate the remaining accuracy and visualize it, so that a surgeon can decide in critical moments, whether to carefully use the data or not. MacIntyre et al. BIB003 suggest in a non-medical setup to predict the error empirically by an adaptive estimator. Our group BIB006 suggests a way of dynamically estimating the accuracy for optical tracking modeling the physical situation. By integrating the visibility of markers for each camera into the model, a multiple camera setup for reducing the line of sight problem is possible, which ensures a desired level of accuracy. Nafis et al. BIB005 investigate the dynamic accuracy of electromagnetic (EM) tracking. The magnetic field in the tracking volume can be influenced by metallic objects, which can change the measurements significantly. Also the distance of the probe to the field generator and its speed have a strong influence on the accuracy. Finally it is not enough to estimate the error, but the whole system has to be validated (cf. Jannin et al. ). Standardized validation procedures have not been used to validate the described systems in order to make the results comparable. The validation of the overall accuracy of an AR system must include the perception of the visualization. In the next section we discuss the effect of misperception in spite of mathematically correct positions in visualizations.",
"role": "assistant"
}
] |
science.qasa_abstractive_qa | science.qasa_abstractive_qa.792 | [
{
"content": "Read this passage from a research paper and answer the question.\n\nContext: In this paper, we address the degradation problem by introducing a deep residual learning framework.Instead of hoping each few stacked layers directly fit a desired underlying mapping, we explicitly let these layers fit a residual mapping. Formally, denoting the desired underlying mapping as \\mathcal{H}(\\mathbf{x}), we let the stacked nonlinear layers fit another mapping of \\mathcal{F}(\\mathbf{x}):=\\mathcal{H}(\\mathbf{x})-\\mathbf{x}. The original mapping is recast into \\mathcal{F}(\\mathbf{x})+\\mathbf{x}.We hypothesize that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.\n\nQuestion: How does the deep residual learning framework address the degradation problem in deep convolutional neural networks?",
"role": "user"
},
{
"content": "The deep residual learning framework address the degradation problem using residual mapping instead of original mapping in deep convolutional neural network",
"role": "assistant"
}
] |
science.chemprot_re | science.chemprot_re.646 | [
{
"content": "Identify relationships between chemical compounds and genes in this abstract. Use only these relation types: ['DOWNREGULATOR', 'SUBSTRATE', 'INDIRECT-REGULATOR', 'PART-OF', 'MODULATOR-ACTIVATOR', 'REGULATOR', 'INHIBITOR', 'COFACTOR', 'UPREGULATOR', 'ACTIVATOR', 'ANTAGONIST', 'NOT', 'INDIRECT-DOWNREGULATOR', 'SUBSTRATE_PRODUCT-OF', 'INDIRECT-UPREGULATOR', 'AGONIST', 'PRODUCT-OF', 'MODULATOR', 'DIRECT-REGULATOR', 'UNDEFINED', 'AGONIST-INHIBITOR', 'AGONIST-ACTIVATOR', 'MODULATOR-INHIBITOR'].\n\nFormat your output as a JSON array with entries like [\"<Entity_A>\", \"<RELATION_A_B>\", \"<Entity_B>\"].\n\nAbstract:\nSynergistic anti-cancer effects of resveratrol and chemotherapeutic agent clofarabine against human malignant mesothelioma MSTO-211H cells. Dietary phytochemicals as adjuvants have been suggested to play important roles in enhancing chemotherapeutic potential owing to multitargeted chemopreventive properties and lack of substantial toxicity. Here, we investigated the efficacy of the combined treatment of various phytochemicals with the anticancer drug clofarabine in malignant mesothelioma MSTO-211H cells and normal mesothelial MeT-5A cells. The combined treatment of resveratrol and clofarabine produced a synergistic antiproliferative effect in MSTO-211H cells, but not in MeT-5A cells. In MSTO-211H cells, the nuclear accumulation of Sp1 and the levels of p-Akt, Sp1, c-Met, cyclin D1, and p21 were effectively decreased by the combined treatment of them. In combination with clofarabine, the ability of resveratrol to reduce the contents of Sp1 and its target gene products was also evident in a time- and dose-dependent experiment. The inhibition of phosphoinositide 3-kinase using Ly294002 augmented a decrease in the p21 level induced by their combination, but it showed no significant effects on expression of Sp1 and cyclin D1. Taken together, the data provide evidence that the synergistic antiproliferative effect of resveratrol and clofarabine is linked to the inhibition of Akt and Sp1 activities, and suggest that this combination may have therapeutic value in treatment of malignant mesothelioma.",
"role": "user"
},
{
"content": "[[\"clofarabine\", \"INHIBITOR\", \"Sp1\"], [\"Ly294002\", \"INDIRECT-DOWNREGULATOR\", \"p21\"], [\"resveratrol\", \"INHIBITOR\", \"Sp1\"], [\"Ly294002\", \"NOT\", \"cyclin D1\"], [\"resveratrol\", \"INHIBITOR\", \"Akt\"], [\"Ly294002\", \"NOT\", \"Sp1\"], [\"Ly294002\", \"INHIBITOR\", \"phosphoinositide 3-kinase\"], [\"clofarabine\", \"INHIBITOR\", \"Akt\"], [\"clofarabine\", \"INDIRECT-DOWNREGULATOR\", \"Sp1\"], [\"resveratrol\", \"INDIRECT-DOWNREGULATOR\", \"Sp1\"]]",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.7069 | [
{
"content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nAutomation driving techniques have seen tremendous progresses these last years, particularly due to a better perception of the environment. In order to provide safe yet not too conservative driving in complex urban environment, data fusion should not only consider redundant sensing to characterize the surrounding obstacles, but also be able to describe the uncertainties and errors beyond presence absence (be it binary or probabilistic). This paper introduces an enriched representation of the world, more precisely of the potential existence of obstacles through an evidential grid map. A method to create this representation from 2 very different sensors, laser scanner and stereo camera, is presented along with algorithms for data fusion and temporal updates. This work allows a better handling of the dynamic aspects of the urban environment and a proper management of errors in order to create a more reliable map. We use the evidential framework based on the Dempster-Shafer theory to model the environment perception by the sensors. A new combination operator is proposed to merge the different sensor grids considering their distinct uncertainties. In addition, we introduce a new long-life layer with high level states that allows the maintenance of a global map of the entire vehicle's trajectory and distinguish between static and dynamic obstacles. Results on a real road dataset show that the environment mapping data can be improved by adding relevant information that could be missed without the proposed approach.\n\nCited Abstract(s):\n[1]: A prerequisite to the design of future Advanced Driver Assistance Systems for cars is a sensing system providing all the information required for high-level driving assistance tasks. Carsense is a European project whose purpose is to develop such a new sensing system. It will combine different sensors (laser, radar and video) and will rely on the fusion of the information coming from these sensors in order to achieve better accuracy, robustness and an increase of the information content. This paper demonstrates the interest of using probabilistic reasoning techniques to address this challenging multi-sensor data fusion problem. The approach used is called Bayesian Programming. It is a general approach based on an implementation of the Bayesian theory. It was introduced first to design robot control programs but its scope of application is much broader and it can be used whenever one has to deal with problems involving uncertain or incomplete knowledge.\n\n[2]: \n\n[3]: Both in science and in practical affairs we reason by combining facts only inconclusively supported by evidence. Building on an abstract understanding of this process of combination, this book constructs a new theory of epistemic probability. The theory draws on the work of A. P. Dempster but diverges from Depster's viewpoint by identifying his \"lower probabilities\" as epistemic probabilities and taking his rule for combining \"upper and lower probabilities\" as fundamental. The book opens with a critique of the well-known Bayesian theory of epistemic probability. It then proceeds to develop an alternative to the additive set functions and the rule of conditioning of the Bayesian theory: set functions that need only be what Choquet called \"monotone of order of infinity.\" and Dempster's rule for combining such set functions. 
This rule, together with the idea of \"weights of evidence,\" leads to both an extensive new theory and a better understanding of the Bayesian theory. The book concludes with a brief treatment of statistical inference and a discussion of the limitations of epistemic probability. Appendices contain mathematical proofs, which are relatively elementary and seldom depend on mathematics more advanced than the binomial theorem.\n\n[4]: Reliable and efficient perception and reasoning in dynamic and densely cluttered environments are still major challenges for driver assistance systems. Most of today's systems use target tracking algorithms based on object models. They work quite well in simple environments such as freeways, where few potential obstacles have to be considered. However, these approaches usually fail in more complex environments featuring a large variety of potential obstacles, as is usually the case in urban driving situations. In this paper, we propose a new approach for robust perception and risk assessment in highly dynamic environments. This approach is called Bayesian occupancy filtering; it basically combines a four-dimensional occupancy grid representation of the obstacle state space with Bayesian filtering techniques.",
"role": "user"
},
{
"content": "In the literature, the most popular framework for mapping is the Bayesian; it represents the sensor's uncertainty by means of probability [4] [2] [1] . On the other hand, the evidential approach, which is based on the Dempster-Shafer theory (DST) [3] , has recently received attention because of its robust combination methods able to manage both noisy and conflicting information. DST allows conflict to be addressed directly, by modeling and differentiating lack of information from conflicting information. Moreover, it is a useful tool to combine different sensor information with distinct uncertainty parameters.",
"role": "assistant"
}
] |
science.chemprot_ner | science.chemprot_ner.814 | [
{
"content": "Extract entities of these types from the text: [\"Chemical\", \"Protein\"].\n\nOutput a JSON object with entity types as keys and lists of extracted entities as values.\n\nText:\nCo-C bond activation in methylmalonyl-CoA mutase by stabilization of the post-homolysis product Co2+ cobalamin.\nDespite decades of research, the mechanism by which coenzyme B12 (adenosylcobalamin, AdoCbl)-dependent enzymes promote homolytic cleavage of the cofactor's Co-C bond to initiate catalysis has continued to elude researchers. In this work, we utilized magnetic circular dichroism spectroscopy to explore how the electronic structure of the reduced B12 cofactor (i.e., the post-homolysis product Co2+ Cbl) is modulated by the enzyme methylmalonyl-CoA mutase. Our data reveal a fairly uniform stabilization of the Co 3d orbitals relative to the corrin pi/pi*-based molecular orbitals when Co2+ Cbl is bound to the enzyme active site, particularly in the presence of substrate. Contrastingly, our previous studies (Brooks, A. J.; Vlasie, M.; Banerjee, R.; Brunold, T. C. J. Am. Chem. Soc. 2004, 126, 8167-8180.) showed that when AdoCbl is bound to the MMCM active site, no enzymatic perturbation of the Co3+ Cbl electronic structure occurs, even in the presence of substrate (analogues). Collectively, these observations provide direct evidence that enzymatic Co-C bond activation involves stabilization of the post-homolysis product, Co2+ Cbl, rather than destabilization of the Co3+ Cbl \"ground\" state.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"coenzyme B12\", \"adenosylcobalamin\", \"AdoCbl\", \"Co-C\", \"methylmalonyl-CoA\", \"cobalamin\", \"Co2+\", \"Co3+ Cbl\", \"Co2+ Cbl\"], \"Protein\": [\"MMCM\", \"methylmalonyl-CoA mutase\"]}",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.22989 | [
{
"content": "Write an abstract for the research paper below.\n\nPaper text:\n3 mm 3 mm wedges were prepared from single crystal copper with a ( 100 ) orientation ( mateck gmbh , juelich , germany ) , using mechanical and electrochemical polishing .\nthe subsequent experimental procedure is shown in fig . 1 : a 5 m thick and 200 m long lamella was machined out of the single crystal wedge using an fib microscope ( fei strata 235 ; fei , hillsboro , or , usa ) operated with ga ions at 30 kev .\nthe wedge was then irradiated with a 1.1 mev proton beam , having a bragg peak at 7 m depth35 .\ntherefore , the proton beam passes through the fib lamella , as seen from the logarithmic dose profile in fig .\n1 , thereby avoiding hydrogen implantation in the relevant region of the specimen and having a relatively flat damage profile throughout the entire pillar . the irradiation setup was described in detail in ref .\n1 was calculated with the experimental beam current averaging of 17 acm using the srim code35 .\nthe actual sample temperature might be higher , which would affect the defect annihilation and result in a lower defect density .\nstill , as copper is an excellent thermal conductor and the ion stopping region is not within the lamella , we do not expect a significantly different temperature .\n1500 nm and heights at least five times the diameter were fabricated by cutting annular trenches with the fib .\nfinal milling currents were 10 pa and the samples had an inevitable axial taper of 2.5. the high aspect ratios of 5:1 were chosen to minimize for lateral boundary constraints and to avoid apparent hardening , especially after the formation of large slip offsets28 . from the dose profile and the pillar positions , the resultant dose for the nano - compression samples is estimated to be 0.8 0.01 dpa for the smaller samples and 0.8 0.09 dpa for the largest pillars .\nthis surface fib damage has a distinct influence on the mechanical properties of a defect free material4 .\nhowever , given the large number of proton irradiation defects in the whole volume of our material and the pronounced difference to the unirradiated material , it is considered of minor concern in the present case .\nsample testing was performed in situ in a tem ( jeol 3010 , tokyo , japan ) working at 300 kev using a hysitron picoindenter ( hysitron , minneapolis , mn , usa ) equipped with a flat conductive diamond tip and operating in displacement controlled mode with a nominal displacement rate of ~1 nms , resulting in strain rates of ~10 s. videos were recorded with a charge - coupled device ( ccd ) camera ( gatan orius sc200d ; gatan , pleasanton , ca , usa ) at 30 frames per second .\nthere is a slight difference in image contrast ( seen in the low intensity fig .\n2c and 2d ) between the upper third of the image and the lower part .\nthis emerges because the ccd image readout uses two channels to increase the frame rate , and there is a slight difference between the two dark currents . 
the recorded load \ndisplacement data is combined with the in situ video to properly evaluate yield stresses , calculated from the load right before the first load drop and the measured contact diameter ( assuming a circular pillar cross - section ) at this point .\nproper axial alignment was ensured by comparing the loading and unloading slopes for each test , as misalignment results in a pronounced reduction of the loading slope .\nfurther details on sample fabrication , experimental conditions and data evaluation are provided in ref .\nevaluation of the spiral dislocation source size to relate stress , σ , to source size , l , follows rao et al.33 : \n ( 1 ) σ = k g b / ( m l ) ln ( l / b ) , \n with k = 0.09 for a mixed type dislocation , shear modulus g = 47 gpa , burgers vector b = 0.256 nm , and schmid factor m = 0.408 . to calculate critical resolved shear stresses ( crss ) for single crystal or polycrystalline data , the normal stress has to be divided by 2.45 ( the inverse schmid factor ) and 3.1 ( the taylor factor ) , respectively . for more details",
"role": "user"
},
{
"content": "increasing demand for energy and reduction of co2 emissions has revived interest in nuclear energy . designing materials for radiation environments \n necessitates fundamental understanding of how radiation - induced defects alter mechanical properties . \n ion beams create radiation damage efficiently without material activation , but their limited penetration depth requires small - scale testing . \n however , strength measurements of nano - scale irradiated specimens have not been previously performed . here \n we show that yield strengths approaching macroscopic values are measured from irradiated ~400 nm diameter copper specimens . \n quantitative in situ nano - compression testing in a transmission electron microscope reveals that the strength of larger samples is controlled by dislocation - irradiation defect interactions , yielding size - independent strengths . below ~400 nm , \n size - dependent strength results from dislocation source limitation . \n this transition length - scale should be universal , but depend on material and irradiation conditions . \n we conclude that for irradiated copper , and presumably related materials , nano - scale in situ testing can determine bulk - like yield strengths and simultaneously identify deformation mechanisms .",
"role": "assistant"
}
] |
science.medmentions_ner | science.medmentions_ner.1101 | [
{
"content": "Extract all unique entities from this biomedical abstract. Only choose entities from these types: [\"Virus\", \"MedicalDevice\", \"BodySubstance\", \"SpatialConcept\", \"BiologicFunction\", \"ResearchActivity\", \"AnatomicalStructure\", \"ProfessionalOrOccupationalGroup\", \"HealthCareActivity\", \"BiomedicalOccupationOrDiscipline\", \"Bacterium\", \"Chemical\", \"InjuryOrPoisoning\", \"IntellectualProduct\", \"Finding\", \"ClinicalAttribute\", \"Organization\", \"Eukaryote\", \"Food\", \"BodySystem\", \"PopulationGroup\"].\n\nOutput a JSON object where keys are entity types and values are lists of extracted entities. If no entities found for a type, use empty list [].\n\nAbstract: Cost-effectiveness analysis of apatinib treatment for chemotherapy - refractory advanced gastric cancer Apatinib, a third-line or later treatment for advanced gastric cancer (aGC), was shown to improve overall survival and progression-free survival (PFS) compared with placebo in the phase III trial. Given the modest benefit with high costs, we further evaluated the cost-effectiveness of apatinib for patients with chemotherapy - refractory aGC. A Markov model was developed to simulate the disease process of aGC (PFS, progressive disease, and death) and estimate the incremental cost-effectiveness ratio (ICER) of apatinib to placebo. The health outcomes and utility scores were derived from the phase III trial and previously published sources, respectively. Total costs were calculated from the perspective of the Chinese health-care payer. Sensitivity analysis was used to explore model uncertainties. Treatment with apatinib was estimated to provide an incremental 0.09 quality-adjusted life years (QALYs) at an incremental cost of $8113.86 compared with placebo, which resulted in an ICER of $90,154.00 per QALY. Sensitivity analysis showed that across the wide variation of parameters, the ICER exceeded the willingness-to-pay threshold of $23,700.00 per QALY which was three times the Gross Domestic Product per Capita in China. Apatinib is not a cost-effective option for patients with aGC who experienced failure of at least two lines chemotherapy in China. However, for its positive clinical value and subliminal demand, apatinib can provide a new therapeutic option.",
"role": "user"
},
{
"content": "{\"AnatomicalStructure\": [], \"Bacterium\": [], \"BiologicFunction\": [\"advanced gastric cancer\", \"disease\", \"progressive disease\", \"aGC\", \"death\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [\"apatinib\", \"Apatinib\"], \"ClinicalAttribute\": [], \"Eukaryote\": [], \"Finding\": [\"uncertainties\"], \"Food\": [], \"HealthCareActivity\": [\"Treatment\", \"evaluated\", \"treatment\", \"therapeutic\", \"chemotherapy\", \"placebo\", \"third-line\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [\"published sources\"], \"MedicalDevice\": [], \"Organization\": [\"Chinese health-care payer\"], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [\"phase III trial\", \"simulate\", \"analysis\"], \"SpatialConcept\": [\"China\"], \"Virus\": []}",
"role": "assistant"
}
] |
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.4114 | [
{
"content": "Write a literature review chapter based on these paper abstracts.\nLiterature Review Title: A Survey of H.264 AVC/SVC Encryption \nChapter Title: A. Security/Quality/Intelligibility \n1. Abstract of Cited Paper (BIB001): In recent years, there has been an increasing trend for multimedia applications to use delegate service providers for content distribution, archiving, search, and retrieval. These delegate services have brought new challenges to the protection of multimedia content confidentiality. This paper discusses the importance and feasibility of applying a joint signal processing and cryptographic approach to multimedia encryption, in order to address the access control issues unique to multimedia applications. We propose two atomic encryption operations that can preserve standard compliance and are friendly to delegate processing. Quantitative analysis for these operations is presented to demonstrate that a good tradeoff can be made between security and bitrate overhead. In assisting the design and evaluation of media security systems, we also propose a set of multimedia-oriented security scores to quantify the security against approximation attacks and to complement the existing notion of generic data security. Using video as an example, we present a systematic study on how to strategically integrate different atomic operations to build a video encryption system. The resulting system can provide superior performance over both generic encryption and its simple adaptation to video in terms of a joint consideration of security, bitrate overhead, and friendliness to delegate processing. \n2. Abstract of Cited Paper (BIB002): Scalable video coding has long been known to provide important functionalities such as low-complexity adaptation for diverse clients with different resources and for delivery over heterogeneous networks with time-varying available bandwidths. Recent advancements in scalable video coding have significantly improved the achievable compression performance, and the scalable video coding (SVC) standard is currently under intense development. An additional capability of scalable coding, which we believe has received less attention than it deserves, is the possibility with careful cross-layer design to support secure adaptive streaming at an untrustworthy sender and secure adaptation at an untrustworthy mid-network node. Building on the secure scalable streaming framework for video, and its realization within the JPEG-2000 security (JPSEC) standard, we describe the architectural principles and design details necessary so that SVC can also enable secure adaptive streaming at a sender and secure R-D optimized adaptation at an untrustworthy mid-network node. \n3. Abstract of Cited Paper (BIB003): In this paper, we present a novel algorithm that scrambles and descrambles a H.264/AVC video element stream. In particular, the proposed method selectively exploits the unique characteristics of H.264/AVC video coding standard. Specifically, we manipulate DCT coefficients and CABAC initialization table, to secure the visual content of H.264/AVC video. This new algorithm is light-weight and able to support the level of security (scrambling strength) as well. Extensive experimental results indicate that the proposed algorithm is highly effective and feasible, yet computationally very efficient. \n4. Abstract of Cited Paper (BIB004): With the increase of video applications, the security of video data becomes more and more important. 
In this paper, we propose a new video encryption scheme for H.264/AVC video coding standard. We define Unequal Secure Encryption (USE) as an approach that applies different cryptographic algorithms (with different security strength) to different partitions of video data. The USE scheme includes two parts: video data classification and unequal secure video data encryption. For data classification, we propose 3 data classification methods and define 5 security levels in our scheme. For encryption, we propose a new stream cipher algorithm FLEX and XOR method to reduce computational cost. In this way, our scheme can achieve both high security and low computational cost. The experimental results show that the computational cost of the USE scheme is very low. In security level 0, the computational cost is about 18% of naive encryption. The USE scheme is very suitable for high security and low cost video encryption systems. \n5. Abstract of Cited Paper (BIB005): In traditional image and video content protection schemes, called fully layered, the whole content is first compressed. Then, the compressed bitstream is entirely encrypted using a standard cipher (DES, AES, IDEA, etc.). The specific characteristics of this kind of data (high-transmission rate with limited bandwidth) make standard encryption algorithms inadequate. Another limitation of fully layered systems consists of altering the whole bitstream syntax which may disable some codec functionalities. Selective encryption is a new trend in image and video content protection. It consists of encrypting only a subset of the data. The aim of selective encryption is to reduce the amount of data to encrypt while preserving a sufficient level of security. This computation saving is very desirable especially in constrained communications (real-time networking, high-definition delivery, and mobile communications with limited computational power devices). In addition, selective encryption allows preserving some codec functionalities such as scalability. This tutorial is intended to give an overview on selective encryption algorithms. The theoretical background of selective encryption, potential applications, challenges, and perspectives is presented. \n6. Abstract of Cited Paper (BIB006): This paper addresses the efficient adaptation of encrypted scalable video content (H.264/SVC). RTP-based in-network adaptation schemes on a media aware network element (MANE) in an IPTV and VoD scenario are considered. Two basic alternatives to implement encryption and adaptation of H.264/SVC content are investigated: (i) full, format-independent encryption making use of Secure RTP (SRTP); (ii) SVC-specific encryption that leaves the metadata relevant for adaptation (NAL unit headers) unencrypted. The SRTP-based scheme (i) is straightforward to deploy, but requires the MANE to be in the security context of the delivery, i.e., to be a trusted node. For adaptation, the content needs to be decrypted, scaled, and re-encrypted. The SVC-specific approach (ii) enables both full and selective encryption, e.g., of the base layer only. SVC-specific encryption is based on own previous work, which is substantially extended and detailed in this paper. The adaptation MANE can now be an untrusted node; adaptation becomes a low-complexity process, avoiding full decryption and re-encryption of the content. 
This paper presents the first experimental comparison of these two approaches and evaluates whether multimedia-specific encryption can lead to performance and application benefits. Potential security threats and security properties of the two approaches in the IPTV and VoD scenario are elementarily analyzed. In terms of runtime performance on the MANE our SVC-specific encryption scheme significantly outperforms the SRTP-based approach. SVC-specific encryption is also superior in terms of induced end-to-end delays. The performance can even be improved by selective application of the SVC-specific encryption scheme. The results indicate that the efficient adaptation of SVC-encrypted content on low-end, untrusted network devices is feasible. \n7. Abstract of Cited Paper (BIB007): The issue of privacy protection in video surveillance has drawn a lot of interest lately. However, thorough performance analysis and validation is still lacking, especially regarding the fulfillment of privacy-related requirements. In this paper, we put forward a framework to assess the capacity of privacy protection solutions to hide distinguishing facial information and to conceal identity. We then conduct rigorous experiments to evaluate the performance of face recognition algorithms applied to images altered by privacy protection techniques. Results show the ineffectiveness of naive privacy protection techniques such as pixelization and blur. Conversely, they demonstrate the effectiveness of more sophisticated scrambling techniques to foil face recognition.\n\nLiterature Review:",
"role": "user"
},
{
"content": "The security notions for video encryption are applicationcontext dependent, e.g., in a commercial content-distribution scenario the security notion for video cryptosystems is different to the conventional cryptographic security notion for cryptosystems. While conventional cryptographic security notions are built upon the notion of message privacy (referred to as MP-security), i.e., nothing of the plaintext message can be learnt/computed from the ciphertext message, the privacy of the message (video) is of limited concern for the content providers, but a redistribution of a sufficient quality version poses the threat (to their business model). The security of the video cryptosystem has to be defined with respect to this threat, i.e., the reconstruction of a sufficient quality version on the basis of the ciphertext, which leads to a specific security notion for multimedia encryption BIB001 - , which we refer to as MQ-security (message quality security) in this paper. The security requirements of many application scenarios in the context of video encryption can be pinned down to this definition. A video is encrypted and an adversary must not be capable to compute a reconstruction of the plaintext with higher quality than allowed in the application scenario. It is sufficient for DRM scenarios that the quality is severely reduced, such that a redistribution is prevented. In the context of privacy-preservation the quality/intelligibility of a video is measured in terms of recognizability of faces and persons BIB008 . The MQ-security notion has often been implicitly employed in literature and similar concepts have been put forward explicitly BIB001 , BIB002 . It has to be highlighted that different application scenarios require different quality levels to be protected, the leakage of an edge image might not be considered a security threat in a commercial content distribution scenario, but renders the application in a video conferencing scenario or a privacyprotection scenario impossible. Although the privacy of the content is not an objective of the content provider in the commercial content-distribution scenario, the customers may have privacy concerns if the content, e.g., the movie being watched in a video on demand (VoD) scenario, can be identified especially if the content is incriminated with social taboos. In the conventional cryptographic security notion, MPsecurity, no information about the plaintext has to be (efficiently) computable from the ciphertext. If one considers the raw video data as plaintexts the preservation of any information, even the preservation of the length of the compressed video stream or the length of units that comprise the compressed video stream constitutes a security violation, as even the compressibility of the raw video data leaks information on the raw video data. If encryption is conducted after compression the compressibility information is contained in the length of the compressed video data. This security notion appropriately models the security requirements of highly confidential video communications, e.g., video conferences in politics and economy. Thus in this very strict interpretation of MP-security for video cryptosystems (the raw video data is the plaintext space), the length of the video data (or packets) must not depend on the raw video data itself. This has major implications for the compression settings and video packetization, e.g., in a video conferencing or VoD scenario. 
In order to ensure that absolutely no information about the visual content is computable from the transmitted encrypted and compressed video data, even the length of the packets must not depend on the raw video data BIB007 . This implies that for this interpretation of MP-security the video has to be transmitted at a source-independent rate (e.g., constant) and in a source-independent fashion (e.g., constant packet lengths). The issue that video streams can be identified is present even in \"secure\" state-of-the-art technology, e.g., in SRTP BIB007 and IPsec, as the secure encryption of packets does not conceal the packet lengths. This strict security notion, which we refer to as MPV-security, for video cryptosystems conflicts with rate-distortion optimal packetization and an optimal secure adaption, also referred to as SSS in the context of network adaptation , BIB003 . MPV-security does not at all model the security requirements and threats of many application scenarios at all, e.g., of perceptual/transparent encryption and privacy preserving video surveillance. The preservation of format-compliance could be assumed to compromise security, recent contributions from the cryptographic community discuss the topic in depth and define a concise formal framework and re-formulate the MPsecurity notion for format-preserving encryption (MP-security is defined for equal length format-compliant datums) and also analyze format-preserving encryption algorithms, which are proved to be secure. However, there still is a gap between the theoretic availability and the practical applicability of formatpreserving and secure encryption algorithms and this security notion is also not applicable for many application scenarios in the context of multimedia encryption. 1) Lightweight/Soft/Partial/Selective Encryption: Some contributions to multimedia encryption propose the application of less secure but more efficient encryption algorithms (soft encryption), i.e., the computational complexity to break the employed cryptosystem with respect to MP-security is limited. E.g., in BIB005 it is proposed to employ a weaker cipher (an AES derivative with fewer rounds) for the less important parts of the bitstream. Often obviously insecure algorithms are employed (e.g., adding constants to the coefficients BIB004 ) which also fall into that category. Another approach to reduce the computational complexity of encryption is selective/partial encryption of the bitstream with a secure cipher BIB006 . In this paper, we will discuss the schemes in a cipherindependent fashion, i.e., all encryption proposals will be considered to employ the same secure cipher as single source of pseudo-randomness.",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.5034 | [
{
"content": "Summarize the article in an uncomplicated manner.\n\nTitle: The Musashi 1 Controls the Splicing of Photoreceptor-Specific Exons in the Vertebrate Retina\nArticle:\nAbstract:\nAlternative pre-mRNA splicing expands the coding capacity of eukaryotic genomes, potentially enabling a limited number of genes to govern the development of complex anatomical structures. Alternative splicing is particularly prevalent in the vertebrate nervous system, where it is required for neuronal development and function. Here, we show that photoreceptor cells, a type of sensory neuron, express a characteristic splicing program that affects a broad set of transcripts and is initiated prior to the development of the light sensing outer segments. Surprisingly, photoreceptors lack prototypical neuronal splicing factors and their splicing profile is driven to a significant degree by the Musashi 1( MSI1) protein. A striking feature of the photoreceptor splicing program are exons that display a \"switch-like\" pattern of high inclusion levels in photoreceptors and near complete exclusion outside of the retina. Several ubiquitously expressed genes that are involved in the biogenesis and function of primary cilia produce highly photoreceptor specific isoforms through use of such “switch-like” exons. Our results suggest a potential role for alternative splicing in the development of photoreceptors and the conversion of their primary cilia to the light sensing outer segments.\nIntroduction:\nVertebrate nervous systems contain numerous types of neurons with characteristic morphology, connectivity, electrophysiological properties, and neurotransmitter signatures. Single cell transcriptome profiling studies reveal dozens of distinct gene expression profiles in the central nervous system( CNS) and the retina, suggesting that neuronal cell identity is established and maintained by specific gene expression programs[1–5]. A limitation of the single cell approaches is the relatively low coverage of the transcriptome that is biased towards the 3'-end of the transcripts[6]. The depth and distribution of the reads produced by the current single cell transcriptome profiling approaches do not allow the reliable assessment of the levels of transcript isoforms produced by alternative splicing. Thus, the posttranscriptional layer in the regulation of gene expression in neurons, which is required for the normal development and function of the CNS, remains hidden[7–12]. Alternative pre-mRNA splicing is a major mechanism for generating protein diversity in vertebrates. In particular, neurons use alternative splicing for generating protein diversity to a significantly higher degree than any other cell type[13, 14]. Characteristically, neurons broadly utilize microexons that are defined by different groups as being no longer than 27nt or 51nt[10, 15]. The neuronal splicing program and the inclusion of neuronal microexons are governed by splicing factors belonging to several families of RNA binding proteins: PTBP, ELAVL, NOVA, KHDRBS, SRRM and RBFOX[14, 16–19]. With the exception of PTBP1, these proteins have high expression levels in neurons and are not expressed or have limited expression outside of the nervous system. PTBP1, which represses splicing of neuronal exons outside of the nervous system, is replaced by the PTBP2 in the early stages of neuronal differentiation. Retinal photoreceptor cells provide an intriguing model to study how gene expression programs shape the cell structure and properties. 
Photoreceptors have a distinct morphology with a characteristic light sensing organelle termed the outer segment. The photoreceptor outer segment is a sensory cilium with an elaborate structure of membrane stacks. Surprisingly, the genes involved in the biogenesis and maintenance of the photoreceptor cilium are ubiquitously expressed in all ciliated cells. Recently, isoforms for two of these proteins, Arl6( BBS3) and Ttc8( BBS8), were shown to be preferentially expressed in photoreceptors[20–23]. The retinal variant of Arl6, a Ras related GTP-binding protein, is required for the survival of zebrafish photoreceptor cells and disruption of the gene in mice results in retinal defect[20, 21]. These findings raise the possibility that photoreceptor cells are at least in part shaped by post-transcriptional processes such as alternative pre-mRNA splicing. Here we use animal models to characterize in depth the alternative splicing profiles of photoreceptor cells, a sensory neuron type. We find that photoreceptors express a characteristic splicing program that includes a set of highly photoreceptor-specific isoforms. Surprisingly, key neuronal splicing regulators are either not expressed or downregulated in photoreceptor cells. We show that Musashi 1( MSI1) promotes the splicing of photoreceptor specific exons as part of a combinatorial mechanism that controls splicing in photoreceptor cells.\nDiscussion:\nThe lack of comprehensive gene expression profiles of defined neuronal subtypes is a major obstacle to understanding how the neuronal diversity of the vertebrate CNS is established. To delineate the gene expression and alternative splicing programs of a single neuronal subtype we turned to the vertebrate retina. The high abundance of rod photoreceptors in the mouse retina allowed us to isolate the characteristic features of their transcriptome by comparing the Aipl1 knockout model of retinal degeneration to wild type mice. The deduced splicing profile of photoreceptor cells is related to the spicing profiles of retinal and CNS neurons( Fig 2 and S1 Fig). Alternative splicing in neurons is known to be regulated by SRRM4 and members of the PTBP, RBFOX, KHDRBS, NOVA and ELAVL families of RNA binding proteins[14, 16–19]. Strikingly, photoreceptors do not express RBFOX, NOVA, KHDRBS2 and KHDRBS3 proteins, express lower amounts of ELAVL proteins and have markedly lower Srrm4 transcript levels. In light of these data, the switch from PTBP1 to PTBP2 expression during development emerges as a major determinant of the alternative splicing program that is shared between neurons and photoreceptor cells[55, 56]. In the absence of a “master” regulator of splicing specific to photoreceptors, the characteristic splicing program of these cells is likely determined by a unique combination of splicing factors with broader expression. We show that Musashi 1( MSI1) promotes the splicing of exons with elevated inclusion levels in photoreceptors. The Musashi proteins are notable for their expression in stem cells, where they are involved in stem cell maintenance and cell fate determination[57–59]. The best characterized function of Musashi is the regulation of mRNA stability and translation through binding to the 3’-UTR of the target transcripts[58]. In the retina, the subcellular localization of the Musashi proteins varies during development and in mature neurons the two proteins are confined to the cytoplasm( Fig 4)[45]. 
Here we show that MSI1 is present not only in the cytoplasm but is also abundant in the nuclei of photoreceptor cells, where it controls alternative pre-mRNA splicing. Our findings contrast recent studies of the Musashi activity in the central nervous system and glioblastoma, where the cytoplasmic localization of the Musashi proteins confines their function to control of mRNA stability and translation and limits their impact on splicing[60, 61]. Control of the subcellular localization of MSI1 provides a mechanism that can produce the characteristic splicing program of photoreceptors in the absence of a photoreceptor-specific RNA binding protein. Similar mechanisms that involve redistribution of RNA binding proteins between the nucleus and the cytoplasm are known to control alternative splicing in response to external stimuli[62–64]. While playing a significant role in promoting inclusion of photoreceptor specific exons, MSI1 is clearly not the sole determinant of the photoreceptor specific splicing program. The absence from photoreceptors of the RBFOX, NOVA and KHDRBS2/3 proteins also contributes to the differences in alternative splicing between photoreceptors and neurons as indicated by our motif enrichment analysis. Our results suggest that the characteristic splicing program of the photoreceptor cells may be determined by a unique suite of splicing factors with broader expression that can combinatorically form different exon recognition complexes depending on the sequence of the underlying RNA substrate. Such combinatorial control of splicing is a well-established paradigm and was recently demonstrated to control \"switch-like\" splicing events during reprograming of primary fibroblasts into pluripotent stem cells[65, 66]. Further research will be needed to directly demonstrate the combinatorial control of the inclusion of photoreceptor specific exons and identify the factors beyond MSI1 that are involved in this process. At present it is unclear how most of the alternative exons we identified affect protein function or to what degree alternative splicing shapes the properties of the photoreceptor cells. One exception is the 14nt exon 8 in the Arl6( BBS3) gene, which has high inclusion levels in photoreceptors( Fig 1). ARL6, a Ras family GTP-binding protein, is part of a network of proteins involved in the development and maintenance of primary cilia( Fig 8C). The exon 8 containing isoform of ARL6 is required for normal vision in zebra fish[20, 21]. The vision phenotype of mice lacking Arl6 exon 8 has not been reported in detail, however gross histological examination and immunofluorescent staining of the retina show that the inner segments of photoreceptors are disorganized[21]. Several other components of the protein network that ARL6 is part of are also differentially spliced in photoreceptors. Four of these genes, Cep290, Cc2d2a, Ttc8 and Prom1 contain “switch-like” exons that produce isoforms highly specific to photoreceptors( Figs 8C and 1B)[22, 23]. Strikingly, the splicing of these “switch-like” exons is coordinately regulated in development( Fig 3B) and their inclusion is promoted by MSI1. The photoreceptor splicing program is activated in the postmitotic progenitors, prior to the onset of outer segment development. A transcription factor cascade starting from Crx homeobox protein that is critical for photoreceptor morphogenesis is also activated during the same developmental time frame[31, 67, 68]. 
Interestingly, alternative splicing in photoreceptors is not affected in the Crx knockout animals and in the Crx dominant negative mutant[29]. Thus, the developmental switch to photoreceptor specific splicing is independent of the established transcriptional mechanism that activates the expression of photoreceptor specific genes. In summary, we demonstrate that photoreceptors express a characteristic splicing program that encompasses hundreds of alternative exons and affects the transcripts of multiple genes that are critical for vision.",
"role": "user"
},
{
"content": "Vertebrates possess extraordinarily complex nervous systems that are formed by hundreds of different types of neurons . This neuronal diversity is achieved in part through the expression of specific sets of genes . In addition , neurons use alternative splicing to a much higher degree than other cells to produce neuronal variants of broadly expressed proteins . Here we characterize the transcriptome of a single neuronal type , the vertebrate photoreceptors . We find that photoreceptors similar to other neurons utilize large number of alternative exons . Surprisingly , the splicing profile of photoreceptor cells and the factors that control it diverge from those of central nervous system and inner retinal neurons . We find multiple examples of genes producing photoreceptor specific splicing variants that completely replace the broadly expressed protein isoforms . Several of these genes participate in the formation of the primary cilium and the elaborated outer segments that are needed for light perception . Our work implies that alternative splicing may enable photoreceptors to acquire their characteristic shape and perform their light sensing function .",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.92678 | [
{
"content": "Write an abstract for the research paper below.\n\nPaper text:\nparry romberg syndrome ( prs ) also known as progressive facial hemiatrophy is characterized by a progressive atrophy of the skin and subcutaneous tissue , sometimes bones on one side of the face . in rare cases ,\nit was first described by caleb hillier parry in 1825 and later in more detail by moritz heinrich romberg in 1846 .\nthere is a progressive wasting of subcutaneous fat , sometimes accompanied by the atrophy of skin , cartilage , bone , and muscle .\na sharply demarcated line between normal and abnormal skin develops called as coup de saber . in most of the instances ,\natrophy is usually confined to one side of the face and cranium ; however , it may occasionally spread to the neck and one side of the body .\nprs usually becomes apparent during the first decade of life or early during the second decade .\nmigraine and facial pain such as trigeminal neuralgia are the most common neurological symptoms in this patient group .\nsometimes , patients developed epilepsy and it can be related directly to a brain abnormality ipsilateral to the skin lesion .\nthis may be visible only on magnetic resonance imaging ( mri ) as a lesion in the underlying gray or sometimes white matter .\nother cerebral abnormalities associated with prs are cerebral hemiatrophy , meningeal thickening , cortical dysgenesis , calcified lesions , aneurysms , and intracranial vascular malformations .\ndifferent theories have been proposed to explain this rare condition including abnormal development or inflammation of the sympathetic nervous system , viral infections , meningoencephalitis , trauma , abnormalities of blood vessel formation ( angiogenesis ) , or autoimmunity .\nthere are no published trials of the treatment ; however , the restorative plastic surgery which includes fat or silicone implants , flap / pedicle grafts , or bone implants can be used . in severe and progressive disease , immunosuppressive drugs such as methotrexate , corticosteroids , cyclophosphamide , and azathioprine should be considered , but their exact beneficial role is unclear .\nmore research is necessary to determine the long - term safety and effectiveness of immunosuppressive drugs in the treatment of prs .\nherein , the author presents the first case report of prs with multiple intracranial cyst .\na 14-year - old boy presented with a history of headache for one 1 month and vomiting and diminished vision for the past 8 days .\nthe patient 's medical and family history was noncontributory . on physical examination , facial asymmetry was present on the right side , the malar hypoplasia and the atrophy of facial skin and upper lip were present on the right side .\nloss of the subcutaneous fat with prominent bony ridges on the ipsilateral side was observed when compared to the normal side .\nthere was a deviation of nose and the angle of mouth to the affected side . \nen coup de sabre was present over the chin , i.e. 
, a sharply demarcated line between normal and abnormal skin present over the chin [ figure 1 ] .\nright - sided facial asymmetry with en coup de sabre present over the chin neurological examination revealed right - sided papilledema with left hemiparesis .\nmri brain and three - dimensional computed tomography ( 3d ct ) face were advised .\nmri brain revealed multiple cystic lesions on the right half of the brain of varying size , largest one was 6 cm 5 cm in size in parieto - temporal location .\nmagnetic resonance imaging t2-weighted image showing multiple cystic lesions in the right parieto - temporal region with mass effect magnetic resonance imaging brain axial section with contrast showing right - sided cystic lesions without contrast enhancement magnetic resonance imaging brain with sagittal section with contrast showing multiple nonenhancing cystic lesion in the right tempero - parietal region 3d ct face revealed involvement of soft tissues over the right side of the lower face .\nbony structures appeared to be normal except the smaller right maxillary sinus when compared to the left side [ figure 5 ] .\nbased on the clinical features and radiological examination , a diagnosis of prs with multiple intracranial cyst was made .\nthree - dimensional computed tomography face showing normal bony structures with small right maxillary sinus in view of symptoms of raised intracranial pressure ( icp ) and mid - line shift , the patient was subjected to surgery .\nbiopsy of the cyst wall and the surrounding abnormal - looking brain matter was taken .\nanother similar cyst was encountered anteriorly which was also similar in nature containing hemorrhagic fluid .\nhistopathology of the cyst wall revealed flattened out surface without identifiable epithelium , supported in part by collagenous tissue suggestive of arachnoid cyst .\nprs is an acquired poorly understood rare neurocutaneous syndrome of unknown etiology characterized by slow progressive atrophic changes commonly affecting one half of the face involving muscles , bones , and skin .\nthe atrophy is insidious in onset occurring in the first two decades of life lasting for 210 years followed by burning out of the atrophy with resultant stability .\nthe main feature is hemiatrophy of the facial tissues , typically fat , but variably skin , other connective tissue , and sometimes bone .\nall these features were present in the case herein . other important features present in prs area demarcation line between normal and abnormal skin , known as coup de sabre ( french term which means cut of the sword ) , and the deviation of mouth and nose to the affected side was also found in our case . 
compared to the healthy population , the involvement of central nervous system seems to occur more often in patients with prs , most commonly consisting of migraine - type headache , focal epilepsy , and trigeminal neuralgia .\na recent global internet survey of 205 patients with prs estimated the involvement of central nervous system to be over 50 percent .\nfifteen percent of patients have neurologic manifestations that include epilepsy , migraine headache , cranial nerve deficits , hemiplegia , cognitive abnormalities , and fixed focal neurologic deficits .\ncranial ct and mri findings in prs are usually ipsilateral to the facial hemiatrophy and include intraparenchymal calcifications , focal or diffuse white matter hyperintensities , focal infarction of the corpus callosum , loss of cortical gyration , cerebral hemiatrophy , and leptomeningeal enhancement\n. reported neurovascular findings include multiple intracranial aneurysms , reversible vascular narrowing , and vascular malformation .\nthere is one reported fatal case in a child with associated rasmussen - type encephalitis .\nneuropsychiatric problems are common in prs patients with reported prevalence as high as 50% . in accordance with these findings\n, several neuroradiological abnormalities have been described in patients with prs ( e.g. , cerebral hemiatrophy and calcifications ) , but there was no case report of prs associated with multiple intracranial cyst . in 1995 ,\nreported a case of spontaneous carotid - jugular fistula and carotid dissection in a patient with multiple intracranial arachnoid cysts and hemifacial atrophy , but the author considered this as a part of a previously undescribed generalized connective tissue disorder . although the etiology of prs is unknown , several causes have been proposed .\nit has been postulated that it actually falls within the spectrum of localized , linear scleroderma . to our knowledge , this is the first case report describing an association between prs and multiple intracranial cyst .\nthere are no published trials of treatment , so patients with prs are treated symptomatically . in our case ,\nthe patient had symptoms of raised icp and mass effect , so we treated him with craniotomy with microsurgical marsupialization along with cysto - ventriculostomy , and postoperatively , the patient had a relief of all the symptoms .\nalthough very rare , there may be an association between prs and multiple intracranial cysts . more study and case reports will be needed to confirm this association .\nwe hereby presenting only a case report of prs with multiple large nonspecific intracranial cysts producing mass effect and requiring surgical intervention for the same .\nthe authors certify that they have obtained all appropriate patient consent forms . in the form the patient(s )\nhas / have given his / her / their consent for his / her / their images and other clinical information to be reported in the journal .\nthe patients understand that their names and initials will not be published and due efforts will be made to conceal their identity , but anonymity can not be guaranteed .\nthe authors certify that they have obtained all appropriate patient consent forms . 
in the form the patient(s )\nhas / have given his / her / their consent for his / her / their images and other clinical information to be reported in the journal .\nthe patients understand that their names and initials will not be published and due efforts will be made to conceal their identity , but anonymity can not be guaranteed .",
"role": "user"
},
{
"content": "parry romberg syndrome ( prs ) is a rare , poorly understood degenerative condition characterized by atrophic changes affecting one side of the face . \n the cause of these changes remains obscure . \n migraine and facial pain such as trigeminal neuralgia are the most common neurological symptoms in this patient group . \n sometimes , it causes epilepsy and rarely cerebral hemiatrophy , meningeal thickening , cortical dysgenesis , calcified lesions , aneurysms , and intracranial vascular malformations . herein , the author reports the first case of prs with multiple large intracranial cysts producing raised intracranial pressure .",
"role": "assistant"
}
] |
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.46049 | [
{
"content": "Write a literature review chapter based on these paper abstracts.\nLiterature Review Title: Video Processing From Electro-Optical Sensors for Object Detection and Tracking in a Maritime Environment: A Survey \nChapter Title: 6) Motion Segmentation Using Optical Flow Approach: \n1. Abstract of Cited Paper (BIB001): Visual surveillance in dynamic scenes is currently one of the most active research topics in computer vision, many existing applications are available. However, difficulties in realizing effective video surveillance systems that are robust to the many different conditions that arise in real environments, make the actual deployment of such systems very challenging. In this article, we present a real, unique and pioneer video surveillance system for boat traffic monitoring, ARGOS. The system runs continuously 24 hours a day, 7 days a week, day and night in the city of Venice (Italy) since 2007 and it is able to build a reliable background model of the water channel and to track the boats navigating the channel with good accuracy in real-time. A significant experimental evaluation, reported in this article, has been performed in order to assess the real performance of the system.\n\nLiterature Review:",
"role": "user"
},
{
"content": "Direct foreground tracking without pre-segmenting the foreground can be done by 'motion segmentation'. The spatial information is incorporated implicitly as the features or pixels with same motion characteristics are likely to belong to the same foreground object. Although several motion segmentation approaches are used in the computer vision, as discussed later in section V-B, methods for maritime EO problem have used optical flow based motion segmentation only BIB002 , BIB001 . The underlying assumptions in optical flow based techniques are that the objects are rigid and the motion is smooth. Bloisi et. al BIB002 first computed connected segments in the foreground, which are not necessarily the foreground objects. Then, they computed sparse motion maps for each blob. The wakes and shadows do not have a consistent motion map and thus are suppressed in the optical flow approach. In order to deal with multiple foreground objects in one blob, a modified k-means clustering approach was applied on the optical flow map. First the motion map was over-clustered into many small clusters using k-means clustering. Then, the clusters were iteratively merged till further merging reduced the cluster separation instead of increasing the cluster separation. Notably, optical flow method fails in boat maneuvers because the optical flow may detect different directions for the different parts of the boat. Further, two boats having very similar motion characteristics and present in one blob cannot be separated by optical flow as well as k-means clustering. Mittal et. al BIB001 computed the optical flow velocity vector f by solving the flow constraint equation ∇g · f + g t = 0, where g is the measured value of a color channel, ∇g is the gradient of g, and g t is the temporal derivative of g. This equation was solved at each pixel such that the error in the estimated flow vector f is minimised while satisfying the constraint that the velocity f is locally constant.",
"role": "assistant"
}
] |
science.qasper_extractive_qa | science.qasper_extractive_qa.185.0 | [
{
"content": "Read the following paper excerpts and answer the question. Output a JSON object with \"answer\" and \"evidence\" fields. The answer must be extracted directly from the paper text.\n\nIf the question cannot be answered from the given text, respond with \"null\".\n\nPaper: Domain Adaptation of Recurrent Neural Networks for Natural Language Understanding\n\nThe goal of this paper is to use multi-task learning to efficiently scale slot filling models for natural language understanding to handle multiple target tasks or domains. The key to scalability is reducing the amount of training data needed to learn a model for a new task. The proposed multi-task model delivers better performance with less data by leveraging patterns that it learns from the other tasks. The approach supports an open vocabulary, which allows the models to generalize to unseen words, which is particularly important when very little training data is used. A newly collected crowd-sourced data set, covering four different domains, is used to demonstrate the effectiveness of the domain adaptation and open vocabulary techniques.\n\nIntroduction\nSlot filling models are a useful method for simple natural language understanding tasks, where information can be extracted from a sentence and used to perform some structured action. For example, dates, departure cities and destinations represent slots to fill in a flight booking task. This information is extracted from natural language queries leveraging typical context associated with each slot type. Researchers have been exploring data-driven approaches to learning models for automatic identification of slot information since the 90's, and significant advances have been made BIBREF0 . Our paper builds on recent work on slot-filling using recurrent neural networks (RNNs) with a focus on the problem of training from minimal annotated data, taking an approach of sharing data from multiple tasks to reduce the amount of data for developing a new task.\nAs candidate tasks, we consider the actions that a user might perform via apps on their phone. Typically, a separate slot-filling model would be trained for each app. For example, one model understands queries about classified ads for cars BIBREF1 and another model handles queries about the weather BIBREF2 . As the number of apps increases, this approach becomes impractical due to the burden of collecting and labeling the training data for each model. In addition, using independent models for each task has high storage costs for mobile devices.\nAlternatively, a single model can be learned to handle all of the apps. This type of approach is known as multi-task learning and can lead to improved performance on all of the tasks due to information sharing between the different apps BIBREF3 . Multi-task learning in combination with neural networks has been shown to be effective for natural language processing tasks BIBREF4 . When using RNNs for slot filling, almost all of the model parameters can be shared between tasks. In our study, only the relatively small output layer, which consists of slot embeddings, is individual to each app. More sharing means that less training data per app can be used and there will still be enough data to effectively train the network. 
The multi-task approach has lower data requirements, which leads to a large cost savings and makes this approach scalable to large numbers of applications.\nThe shared representation that we build on leverages recent work on slot filling models that use neural network based approaches. Early neural network based papers propose feedforward BIBREF5 or RNN architectures BIBREF6 , BIBREF7 . The focus shifted to RNN's with long-short term memory cells (LSTMs) BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 after LSTMs were shown to be effective for other tasks BIBREF12 . The most recent papers use variations on LSTM sequence models, including encoder-decoder, external memory, or attention architectures BIBREF13 , BIBREF14 , BIBREF15 . The particular variant that we build on is a bidirectional LSTM, similar to BIBREF16 , BIBREF11 .\nOne highly desirable property of a good slot filling model is to generalize to previously unseen slot values. For instance, we should not expect that the model will see the names of all the cities during training time, especially when only a small amount of training data is used. We address the generalizability issue by incorporating the open vocabulary embeddings from Ling et al. into our model BIBREF17 . These embeddings work by using a character RNN to process a word one letter at a time. This way the model can learn to share parameters between different words that use the same morphemes. For example BBQ restaurants frequently use words like “smokehouse”, “steakhouse”, and “roadhouse” in their names and “Bayside”,“Bayview”, and “Baywood” are all streets in San Francisco. Recognizing these patterns would be helpful in detecting a restaurant or street name slot, respectively.\nThe two main contributions of this work are the multi-task model and the use of the open vocabulary character-based embeddings, which together allow for scalable slot filling models. Our work on multi-task learning in slot filling differs from its previous use in BIBREF18 in that we allow for soft sharing between tasks instead of explicitly matching slots to each other across different tasks. A limitation of explicit slot matching is that two slots that appear to have the same underlying type, such as location-based slots, may actually use the slot information in different ways depending on the overall intent of the task. In our model, the sharing between tasks is done implicitly by the neural network. Our approach to handling words unseen in training data is different from the delexicalization proposed in BIBREF19 in that we do not require the vocabulary items associated with slots and values to be prespecified. It is complementary to work on extending domain coverage BIBREF20 , BIBREF21 .\nThe proposed model is described in more detail in Section \"Model\" . The approach is assessed on a new data collection based on four apps, described in Section \"Data\" . The experiments described in Section \"Training and Model Configuration Details\" investigate how much data is necessary for the $n$ -th app using a multi-task model that leverages the data from the previous $n-1$ apps, with results compared against the single-task model that only utilizes the data from the $n$ -th app. We conclude in Section \"Conclusions\" with a summary of the key findings and discussion of opportunities for future work.\n\nModel\nOur model has a word embedding layer, followed by a bi-directional LSTM (bi-LSTM), and a softmax output layer. 
The bi-LSTM allows the model to use information from both the right and left contexts of each word when making predictions. We choose this architecture because similar models have been used in prior work on slot filling and have achieved good results BIBREF16 , BIBREF11 . The LSTM gates are used as defined by Sak et al. including the use of the linear projection layer on the output of the LSTM BIBREF22 . The purpose of the projection layer is to produce a model with fewer parameters without reducing the number of LSTM memory cells. For the multi-task model, the word embeddings and the bi-LSTM parameters are shared across tasks but each task has its own softmax layer. This means that if the multi-task model has half a million parameters, only a couple thousand of them are unique to each task and the other 99.5% are shared between all of the tasks.\nThe slot labels are encoded in BIO format BIBREF23 indicating if a word is the beginning, inside or outside any particular slot. Decoding is done greedily. If a label does not follow the BIO syntax rules, i.e. an inside tag must follow the appropriate begin tag, then it is replaced with the outside label. Evaluation is done using the CoNLL evaluation script BIBREF24 to calculate the F1 score. This is the standard way of evaluating slot-filling models in the literature.\nIn recent work on language modeling, a neural architecture that combined fixed word embeddings with character-based embeddings was found to be useful for handling previously unseen words BIBREF25 . Based on that result, the embeddings in the open vocabulary model are a concatenation of the character-based embeddings with fixed word embeddings. When an out-of-vocabulary word is encountered, its character-based embedding is concatenated with the embedding for the unknown word token. The character-based embeddings are generated from a two layer bi-LSTM that processes each word one character at a time. The character-based word embedding is produced by concatenating the last states from each of the directional LSTM's in the second layer and passing them through a linear layer for dimensionality reduction.\n\nTraining and Model Configuration Details\nThe data is split to use 30% for training with 70% to be used for test data. The reason that a majority of the data is used for testing is that in the second experiment the results are reported separately for sentences containing out of vocabulary tokens and a large amount of data is needed to get a sufficient sample size. Hyperparameter tuning presents a challenge when operating in a low resource scenario. When there is barely enough data to train the model none can be spared for a validation set. We used data from the United app for hyperparameter tuning since it is the largest and assumed that the hyperparameter settings generalized to the other apps.\nTraining is done using stochastic gradient descent with minibatches of 25 sentences. The initial learning rate is 0.3 and is set to decay to 98% of its value every 100 minibatches. For the multi-task model, training proceeds by alternating between each of the tasks when selecting the next minibatch. All the parameters are initialized uniformly in the range [-0.1, 0.1]. Dropout is used for regularization on the word embeddings and on the outputs from each LSTM layer with the dropout probability set to 60% BIBREF27 .\nFor the single-task model, the word embeddings are 60 dimensional and the LSTM is dimension 100 with a 70 dimensional projection layer on the LSTM. 
For the multi-task model, word embeddings are 200 dimensional, and the LSTM has 250 dimensions with a 170 dimensional projection layer. For the open vocabulary version of the model, the 200-dimensional input is a concatenation of 160-dimensional traditional word embeddings with 40-dimensional character-based word embeddings. The character embedding layer is 15 dimensions, the first LSTM layer is 40 dimensions with a 20 dimensional projection layer, and the second LSTM layer is 130 dimensions.\n\nMulti-task Model Experiments\nWe compare a single-task model against the multi-task model for varying amounts of training data. In the multi-task model, the full amount of data is used for $n-1$ apps and the amount of data is allowed to vary only for the $n$ -th application. These experiments use the traditional word embeddings with a closed vocabulary. Since the data for the United app is bigger than the other three apps combined, it is used as an anchor for the multi-task model. The other three apps alternate in the position of the $n$ -th app. The data usage for the $n$ -th app is varied while the other $n-1$ apps in each experiment use the full amount of available training data. The full amount of training data is different for each app. The data used for the $n$ -th app is 200, 400, or 800 sentences or all available training data depending on the experiment. The test set remains fixed for all of the experiments even as part of the training data is discarded to simulate the low resource scenario.\nIn Figure 1 we show the single-task vs. multi-task model performance for each of three different applications. The multi-task model outperforms the single-task model at all data sizes, and the relative performance increases as the size of the training data decreases. When only 200 sentences of training data are used, the performance of the multi-task model is about 60% better than the single-task model for both the Airbnb and Greyhound apps. The relative gain for the OpenTable app is 26%. Because the performance of the multi-task model decays much more slowly as the amount of training data is reduced, the multi-task model can deliver the same performance with a considerable reduction in the amount of labeled data.\n\nOpen Vocabulary Model Experiments\nThe open vocabulary model experiments test the ability of the model to handle unseen words in test time, which are particularly likely to occur when using a reduced amount of training data. In these experiments the open vocabulary model is compared against the fixed embedding model. The results are reported separately for the sentences that contain out of vocabulary tokens, since these are where the open vocabulary system is expected to have an advantage.\nFigure 2 gives the OOV rate for each app for varying amounts of training data plotted on a log-log scale. The OOV words tend to be task-specific terminology. For example, the OpenTable task is the only one that has names of restaurants but names of cities are present in all four tasks so they tend to be covered better. The OOV rate dramatically increases when the size of the training data is less than 500 sentences. Since our goal is to operate in the regime of less than 500 sentences per task, handling OOVs is a priority. The multi-task model is used in these experiments. 
The only difference between the closed vocabulary and open vocabulary systems is that the closed vocabulary system uses the traditional word embeddings and the open vocabulary system uses the traditional word embeddings concatenated with character-based embeddings.\nTable 4 reports F1 scores on the test set for both the closed and open vocabulary systems. The results differ between the tasks, but none have an overall benefit from the open vocabulary system. Looking at the subset of sentences that contain an OOV token, the open vocabulary system delivers increased performance on the Airbnb and Greyhound tasks. These two are the most difficult apps out of the four and therefore had the most room for improvement. The United app is also all lower case and casing is an important clue for detecting proper nouns that the open vocabulary model takes advantage of.\nLooking a little deeper, in Figure 3 we show the breakdown in performance across individual slot types. Only those slot types which occur at least one hundred times in the test data are shown in this figure. The slot types that are above the diagonal saw a performance improvement using the open vocabulary model. The opposite is true for those that are below the diagonal. The open vocabulary system appears to do worse on slots that express quantities, dates and times and better on slots with greater slot perplexity (i.e., greater variation in slot values) like ones relating to locations. The three slots where the open vocabulary model gave the biggest gain are the Greyhound LeavingFrom and GoingTo slots along with the Airbnb Amenities slot. The three slots where the open vocabulary model did the worst relative to the closed vocabulary model are the Airbnb Price slot, along with the Greyhound DiscountType and DepartDate slots. The Amenities slot is an example of a slot with higher perplexity (with options related to pets, availability of a gym, parking, fire extinguishers, proximity to attractions), and the DiscountType is one with lower perplexity (three options cover almost all cases). We hypothesize that the reason that the numerical slots are better under the closed vocabulary model is due to their relative simplicity and not an inability of the character embeddings to learn representations for numbers.\n\nTable 1: Data statistics for each of the four target applications.\n\nTable 2: Listing of slot types for each app.\n\nFigure 1: F1 score for multi-task vs. single-task models.\n\nTable 3: Example labeled sentences from each application.\n\nFigure 3: Comparison of performance on individual slot types.\n\nTable 4: Comparison of F1 scores for open and closed vocabulary systems on the full test set vs. the subset with OOV words.\n\nQuestion: Does the performance increase using their method?",
"role": "user"
},
{
"content": "{\"answer\": [\"The multi-task model outperforms the single-task model at all data sizes\", \"but none have an overall benefit from the open vocabulary system\"], \"evidence\": [\"In Figure 1 we show the single-task vs. multi-task model performance for each of three different applications. The multi-task model outperforms the single-task model at all data sizes, and the relative performance increases as the size of the training data decreases. When only 200 sentences of training data are used, the performance of the multi-task model is about 60% better than the single-task model for both the Airbnb and Greyhound apps. The relative gain for the OpenTable app is 26%. Because the performance of the multi-task model decays much more slowly as the amount of training data is reduced, the multi-task model can deliver the same performance with a considerable reduction in the amount of labeled data.\", \"Table 4 reports F1 scores on the test set for both the closed and open vocabulary systems. The results differ between the tasks, but none have an overall benefit from the open vocabulary system. Looking at the subset of sentences that contain an OOV token, the open vocabulary system delivers increased performance on the Airbnb and Greyhound tasks. These two are the most difficult apps out of the four and therefore had the most room for improvement. The United app is also all lower case and casing is an important clue for detecting proper nouns that the open vocabulary model takes advantage of.\"]}",
"role": "assistant"
}
] |
science.acl_arc_intent_classification | science.acl_arc_intent_classification.981 | [
{
"content": "Classify the intent of this citation from an NLP research paper. \n\nYour answer must be one word from: [\"Background\", \"Extends\", \"Uses\", \"Motivation\", \"CompareOrContrast\", \"FutureWork\"]. No other text allowed.\n\nSection Title:\nintroduction\n\nContext before:\nRobinson, 1982; Bobrow, 1978) consult relatively small lexicons, typically generated by hand. Few established parsing systems have substantial lexicons and even those which employ very comprehensive grammars (eg. on developing dictionary servers for office automation systems (Kay, 1984b).\n\nCitation:\nRobinson , 1982 ; Bobrow , 1978 ) consult relatively small lexicons , typically generated by hand .\n\nContext after:\nTwo exceptions to this generalisation are the Linguistic String Project (Sager, 1981) and the IBM CRITIQUE (formerly EPISTLE) Project (Heidorn et al., 1982; Byrd, 1983); the former employs a dictionary of approximately 10,000 words, most of which are specialist medical terms, the latter has well over 100,000 entries, gathered from machine readable sources. In addition, there are a number of projects under way to develop substantial lexicons from machine readable sources (see Boguraev, 1986 for details). However, as yet few results have been published concerning the utility of electronic versions of published dictionaries as sources for such lexicons. In this paper we provide an evaluation of the LDOCE grammar code system from this perspective.",
"role": "user"
},
{
"content": "CompareOrContrast",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_arxiv | science.scientific_papers_summarization_single_doc_arxiv.192309 | [
{
"content": "Given the full text of a scientific research paper, write an abstract section for the paper. Your response should contain only the abstract without any other text.\n\nArticle text:\nthe recent observations of the cosmic microwave background ( cmb ) lead to a hubble constant of @xmath6 for the base six - parameter @xmath7cdm model ( planck collaboration 2013 , herafter p13 ) .\nthis value is in tension , at about the @xmath8 level , with the direct measurement of @xmath9 reported by r11 .\nif these numbers are taken at face value , they suggest evidence for new physics at about the @xmath8 level ( for example , exotic physics in the neutrino or dark energy sectors as discussed in p13 ; see also wyman 2013 , hamann and hasenkamp 2013 ; battye and moss 2013 ; rest 2013 ; suyu 2013 )\n. the exciting possibility of discovering new physics provides strong motivation to subject both the cmb and @xmath1 measurements to intense scrutiny .\ndirect astrophysical measurements of the hubble constant have a checkered history ( see , for example , the reviews by tammann , sandage and reindl 2008 ; freedman and madore 2010 ) .\nthe hubble space telescope ( hst ) key project led to a significant improvement in the control of systematic errors leading to ` final ' estimate of @xmath10 ( freedman 2001 ) . since then , two cepheid based programmes have been underway with the aim of reducing the error on @xmath1 : the _ supernovae and @xmath1 for the equation of state _ ( sh0es ) programme of r11 ( with earlier results reported in riess 2009 ) and the _ carnegie hubble program _ of freedman ( 2012 ) . in addition , other programmes are underway using geometrical methods , for example the megamaser cosmology project ( mcp ) ( reid 2013 ; braatz 2013 ) and the cosmological monitoring of gravitational lenses ( cosmograil ) project ( suyu 2010 , courbin 2011 ; trewes 2013 ) .\nthis paper presents a reanalysis of the r11 cepheid data .\nthe @xmath1 measurement from these data has the smallest error and has been used widely in combination with cmb measurements for cosmological parameter analysis ( _ e.g. _ hinshaw 2012 ; hou 2012 ; sievers 2013 ) .\nthe study reported here was motivated by certain aspects of the r11 analysis : the r11 outlier rejection algorithm ( which rejects a large fraction , @xmath11 , of the cepheids ) , the low reduced @xmath0 values of their fits , and the variations of some of the parameter values with different distance anchors , particularly the metallicity dependence of the period - luminosity relation .\nthe layout of this paper is as follows .\nsection 2 reviews the near - ir period - luminosity ( p - l ) relation of a sample of lmc cepheids . section 3 describes a reanalysis of the r11 sample using the maser distance to ngc 4258 as an anchor .\nsection 4 investigates the use of the lmc and mw cepheids as anchors .\nsection 5 investigates combinations of the three distance anchors and presents some internal consistency checks .\nthe conclusions are summarized in section 6 .\ni will start with the lmc cepheids which i will use as a reference since the slope of the p - l relation is tightly constrained from this sample .\ni use the 53 lmc cepheids with h - band magnitudes listed in persson ( 2004 ) and v , i magnitudes listed in sebo ( 2002 ) .\nfigure 1 shows the wesenheit magnitudes @xmath12 plotted against period @xmath13 ( in units of days ) .\nthe line shows a least - squares fit to @xmath14 _ i.e. 
_ minimising @xmath15 where @xmath16 is the error on @xmath17 and @xmath18 is the ` internal ' scatter that gives a reduced @xmath0 ( denoted @xmath19 in this paper ) of unity . since @xmath18 depends on the parameters @xmath20 and @xmath21 , the minimisation is performed iteratively until convergence .\nthe best fit parameters are @xmath22 the upper period limit is imposed because there is evidence that the p - l relation departs from a power law for periods @xmath23 days ( persson , 2004 ; freedman 2011 ; scowcroft 2011 ) . the lower period limit in ( [ lmc3b ] ) has been imposed because there have been some claims that the p - l relation at optical and near - ir wavelengths changes at periods less than 10 days ( ngeow 2009 ) .\nthe results of ( 4a ) and ( 4b ) and figure 1 show no evidence for any significant change in the power - law slope at low periods , in agreement with the results of persson ( 2004 ) .\nevidently , a single power law is an extremely good fit to the lmc cepheids at least to periods of 60 days , and the slope of the p - l relation is determined to high accuracy .\nas discussed by r11 , there are several reasons to expect outliers in the p - l relation . these include variables misidentified as classical cepheids , blended images , errors in crowding corrections and possible aliasing of the periods .\n@xmath24 the h - band only p - l relations are fitted galaxy - by - galaxy to a power law with slope fixed at @xmath25 in the first iteration , weighted by the magnitude errors in table 2 of r11 .\ncepheids with periods @xmath26 days are excluded .\nonce the outliers have been removed , r11 proceed to global fits using all of their galaxies , now adding a @xmath28 mag .\nerror in quadrature to the magnitude errors listed in their table 2 ( riess , private communication ) .\none of the consequences of adding this additional error term is that the @xmath19 values of the r11 global fits are always less than unity ( with typical values of @xmath29 ) so r11 rescale their covariance matrices by @xmath30 to compute errors on parameters .",
"role": "user"
},
{
"content": "i reanalyse the riess _ \n et al . _ \n ( 2011 , hereafter r11 ) cepheid data using the revised geometric maser distance to ngc 4258 of humphreys _ et al . _ \n ( 2013 , hereafter h13 ) . \n i explore different outlier rejection criteria designed to give a reduced @xmath0 of unity and compare the results with the r11 rejection algorithm , which produces a reduced @xmath0 that is substantially less than unity and , in some cases , leads to underestimates of the errors on parameters . \n i show that there are sub - luminous low metallicity cepheids in the r11 sample that skew the global fits of the period - luminosity relation . \n this has a small but non - negligible impact on the global fits using ngc 4258 as a distance scale anchor , but adds a poorly constrained source of systematic error when using the large magellanic cloud ( lmc ) as an anchor . \n i also show that the small milky way ( mw ) cepheid sample with accurate parallax measurements leads to a distance to ngc 4258 that is in tension with the maser distance . \n i conclude that @xmath1 based on the ngc 4258 maser distance is @xmath2 , compatible within @xmath3 with the recent determination from for the base six - parameter cosmology . \n if the h - band period - luminosity relation is assumed to be independent of metallicity and the three distance anchors are combined , i find @xmath4 , which differs by @xmath5 from the value . \n the differences between the results and these estimates of @xmath1 are not large enough to provide compelling evidence for new physics at this stage . 0.15 truein * key words * : cosmology : distance scale , cosmological parameters \n . 0.1 truein",
"role": "assistant"
}
] |
science.mslr2022_ms2_multidoc_summarization | science.mslr2022_ms2_multidoc_summarization.9076 | [
{
"content": "Based on the RCT reports below, write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article.\n\n\n\nTitle 1:\nPower spectrum analysis of heart rate variability to assess the changes in sympathovagal balance during graded orthostatic tilt.\nAbstract 1:\nBACKGROUND The powers of the low-frequency ( LF ) and high-frequency ( HF ) oscillations characterizing heart rate variability ( HRV ) appear to reflect , in their reciprocal relationship , changes in the state of the sympathovagal balance occurring during numerous physiological and pathophysiological conditions . However , no adequate information is available on the quantitative resolution of this methodology . METHODS AND RESULTS We studied 22 healthy volunteers ( median age , 46.5 years ) who were subjected after a rest period to a series of passive head-up tilt steps r and omly chosen from the following angles : 15 degrees , 30 degrees , 45 degrees , 60 degrees , and 90 degrees . From the continuous ECG , after appropriate analog-to-digital conversion , a personal computer was used to compute , with an autoregressive methodology , time and frequency domain indexes of RR interval variability . Spectral and cross-spectral analysis with the simultaneously recorded respiratory signal excluded its contribution to LF . Age was significantly correlated to variance and to the absolute values in milliseconds squared of very-low-frequency ( VLF ) , LF , and HF components . The tilt angle was correlated to both LF and HF ( expressed in normalized units [ nu ] ) and to the LF-to-HF ratio ( r = .78 , -.72 , and .68 ; respectively ) . Lower levels of correlation were found with HF ( in ms2 ) and RR interval . No correlation was present between tilt angle and variance , VLF , or LF ( in ms2 ) . Individual analysis confirmed that the use of nu provided the greatest consistency of results . CONCLUSIONS Spectral analysis of HRV , using nu or LF-to-HF ratio , appears to be capable of providing a noninvasive quantitative evaluation of grade d changes in the state of the sympathovagal balance\n\n\nTitle 2:\nMultiscale entropy analysis of biological signals.\nAbstract 2:\nTraditional approaches to measuring the complexity of biological signals fail to account for the multiple time scales inherent in such time series . These algorithms have yielded contradictory findings when applied to real-world data sets obtained in health and disease states . We describe in detail the basis and implementation of the multiscale entropy ( MSE ) method . We extend and elaborate previous findings showing its applicability to the fluctuations of the human heartbeat under physiologic and pathologic conditions . The method consistently indicates a loss of complexity with aging , with an erratic cardiac arrhythmia ( atrial fibrillation ) , and with a life-threatening syndrome ( congestive heart failure ) . Further , these different conditions have distinct MSE curve profiles , suggesting diagnostic uses . The results support a general \" complexity-loss \" theory of aging and disease . 
We also apply the method to the analysis of coding and noncoding DNA sequences and find that the latter have higher multiscale entropy , consistent with the emerging view that so-called \" junk DNA \" sequences contain important biological information\n\n\nTitle 3:\nThe influence of a constraint and bimanual training program using a variety of modalities on endurance and on the cardiac autonomic regulation system of children with unilateral cerebral palsy: A self-control clinical trial.\nAbstract 3:\nBACKGROUND An intensive hybrid program , combining constraint with bimanual training , improves upper extremity function as well as walking endurance of children with unilateral cerebral palsy ( UCP ) . Endurance improvement may be associated with the cardiac autonomic regulation system ( CARS ) adaptation , known to be impaired among these children . OBJECTIVE To examine the influence of an intensive hybrid program on CARS , walking endurance and the correlation with upper extremity function of children with UCP . METHODS Twenty-four children aged 6 - 10 years with UCP participated in a hybrid program , 10 days , 6 hours per day . Data were collected pre- , post- and 3-months post-intervention . Main outcome measures included the Polar RS800CX for heart rate ( HR ) and heart rate variability ( HRV ) data , the 6-Minute Walk Test ( 6MWT ) for endurance , and the Assisting Hand Assessment ( AHA ) and Jebsen-Taylor Test of Hand Function ( JTTHF ) for bimanual and unimanual function . RESULTS A significant reduction in HR and an increase in HRV at post- and 3-month post-intervention was noted ( χ22= 8.3 , p = 0.016 ) along with a significant increase in 6MWT with a median increase of 81 meters ( χ22= 11.0 , p = 0.004 ) at the same interval . A significant improvement was noted in unimanual and bimanual performance following the intervention . CONCLUSIONS An intensive hybrid program effectively improved CARS function as well as walking endurance and upper extremity function in children with UCP ( 213 )\n\n\nTitle 4:\nInter-Observer Agreement in Measuring Respiratory Rate\nAbstract 4:\nBackground Respiratory rate ( RR ) is an important vital sign which is strongly correlated with in-hospital mortality . At the same time , RR is the most likely vital sign to be omitted when assessing a patient . We believe that one reason for this could be the difficulty in measure the RR , since it is not read off a monitor , but counted manually . Also there is the possibility of assessment bias and the inter-observer reliability becomes important . We therefore set out to investigate how the nursing staff counting the actual number of respirations per minute would agree with the nursing staff using a predefined ordinal scale . Methods For this prospective study , we recorded five videos of a young healthy man breathing approximately 5 , 10 , 15 , 30 and 60 times per minute . The videos were shown in a random order to a suitable sample of the nursing staff . The participants were randomized into two groups ; one to count the exact number of breaths per minute , and one to use a predefined ordinal scale . Results Comparing the exact number of breaths per minute , the Intra Class Coefficient ( ICC ) was 0.99 ( 95 % CI : 0.97–1.00 ) . Comparing the RR using the predefined scale , the overall Kappa Fleiss Coefficient was 0.75 . Conclusions The inter-observer agreement was high when comparing the use of the actual number of breaths per minute and substantial when comparing the use of the predefined scale . 
This is the largest inter-observer study on RR to date . However , further studies on the use of scaled comparisons of RR are needed\n\nTitle 5:\nQuantifying errors in spectral estimates of HRV due to beat replacement and resampling\nAbstract 5:\nSpectral estimates of heart rate variability ( HRV ) often involve the use of techniques such as the fast Fourier transform ( FFT ) , which require an evenly sampled time series . HRV is calculated from the variations in the beat-to-beat ( RR ) interval timing of the cardiac cycle which are inherently irregularly spaced in time . In order to produce an evenly sampled time series prior to FFT-based spectral estimation , linear or cubic spline resampling is usually employed . In this paper , by using a realistic artificial RR interval generator , interpolation and resampling is shown to result in consistent over-estimations of the power spectral density ( PSD ) compared with the theoretical solution . The Lomb-Scargle ( LS ) periodogram , a more appropriate spectral estimation technique for unevenly sampled time series that uses only the original data , is shown to provide a superior PSD estimate . Ectopy removal or replacement is shown to be essential regardless of the spectral estimation technique . Resampling and phantom beat replacement is shown to decrease the accuracy of PSD estimation , even at low levels of ectopy or artefact . A linear relationship between the frequency of ectopy/artefact and the error ( mean and variance ) of the PSD estimate is demonstrated . Comparisons of PSD estimation techniques performed on real RR interval data during minimally active segments ( sleep ) demonstrate that the LS periodogram provides a less noisy spectral estimate of HRV\n\n\nTitle 6:\nEffects of respiratory time ratio on heart rate variability and spontaneous baroreflex sensitivity.\nAbstract 6:\nPaced breathing is a frequently performed technique for cardiovascular autonomic studies . The relative timing of inspiration and expiration during paced breathing , however , is not consistent . We , therefore , examined whether indexes of heart rate variability and spontaneous baroreflex sensitivity would be affected by the respiratory time ratio that is set . We studied 14 healthy young adults who controlled their breathing rates to either 0.1 or 0.25 Hz in the supine and sitting positions . Four different inspiratory-to-expiratory time ratios ( I/E ) ( uncontrolled , 1:1 , 1:2 , and 1:3 ) were examined for each condition in a randomized order . The results showed spectral indexes of heart rate variability and spontaneous baroreflex sensitivity were not influenced by the I/E that was set during paced breathing under supine and sitting positions . Porta 's and Guzik 's indexes of heart rate asymmetry were also not different at various I/E during 0.1-Hz breathing , but had larger values at 1:1 during 0.25-Hz breathing , although significant change was found in the sitting position only . At the same time , Porta 's and Guzik 's indexes obtained during 0.1-Hz breathing were greater than during 0.25-Hz breathing in both positions . The authors suggest that setting the I/E during paced breathing is not necessary when measuring spectral indexes of heart rate variability and spontaneous baroreflex sensitivity under the conditions used in this study . The necessity of paced breathing for the measurement of heart rate asymmetry , however , requires further investigation",
"role": "user"
},
{
"content": "Pediatric patients with CP presented a significantly higher resting heart rate and reduced HRV , different autonomic responses to movement stimuli compared to children with normal development , but also reduced HRV parameters in the children dependent on adult assistance for mobility compared to those generally independent .",
"role": "assistant"
}
] |
science.data_reco_mcq_sc | science.data_reco_mcq_sc.1294 | [
{
"content": "Which dataset would be most relevant for this research query?\n\nQuery: We present a method for interactive video annotation based on an incremental clustering of the feature space. We define an\n\nKeyphrases: action recognition video\n\nDataset descriptions:\nSEMAINE: The SEMAINE videos dataset contains spontaneous data capturing the audiovisual interaction between a human and an operator undertaking the role of an avatar with four personalities: Poppy (happy), Obadiah (gloomy), Spike (angry) and Prudence (pragmatic). The audiovisual sequences have been recorded at a video rate of 25 fps (352 x 288 pixels).\nMovieGraphs: Provides detailed, graph-based annotations of social situations depicted in movie clips. Each graph consists of several types of nodes, to capture who is present in the clip, their emotional and physical attributes, their relationships (i.e., parent/child), and the interactions between them.\nKTH: The efforts to create a non-trivial and publicly available dataset for action recognition was initiated at the KTH Royal Institute of Technology in 2004. The KTH dataset is one of the most standard datasets, which contains six actions: walk, jog, run, box, hand-wave, and hand clap.\nMECCANO: The MECCANO dataset is the first dataset of egocentric videos to study human-object interactions in industrial-like settings. The MECCANO dataset has been acquired in an industrial-like scenario in which subjects built a toy model of a motorbike.\n\nOptions: A: SEMAINE B: MovieGraphs C: KTH D: MECCANO",
"role": "user"
},
{
"content": "KTH",
"role": "assistant"
}
] |
science.bioasq_general_qa | science.bioasq_general_qa.682 | [
{
"content": "Answer the following question using the provided context. Only provide the answer without any additional text.\n\nQuestion: What is DiseaseEnhancer?\nContext: DiseaseEnhancer: a resource of human disease-associated enhancer catalog.\nTo facilitate better usage of published data and exploring enhancer deregulation in various human diseases, we created DiseaseEnhancer (http://biocc.hrbmu.edu.cn/DiseaseEnhancer/), a manually curated database for disease-associated enhancers. As of July 2017, DiseaseEnhancer includes 847 disease-associated enhancers in 143 human diseases. Database features include basic enhancer information (i.e. genomic location and target genes); disease types; associated variants on the enhancer and their mediated phenotypes (i.e. gain/loss of enhancer and the alterations of transcription factor bindings).\nDiseaseEnhancer provides a promising avenue for researchers to facilitate the understanding of enhancer deregulation in disease pathogenesis, and identify new biomarkers for disease diagnosis and therapy.\nTo facilitate better usage of published data and exploring enhancer deregulation in various human diseases, we created DiseaseEnhancer (http://biocc.hrbmu.edu.cn/DiseaseEnhancer/), a manually curated database for disease-associated enhancers.\nAs of July 2017, DiseaseEnhancer includes 847 disease-associated enhancers in 143 human diseases.",
"role": "user"
},
{
"content": "DiseaseEnhancer is a manually curated resource of human disease-associated enhancer catalog. As of July 2017, DiseaseEnhancer includes 847 disease-associated enhancers in 143 human diseases. Database features include basic enhancer information (i.e. genomic location and target genes); disease types; associated variants on the enhancer and their mediated phenotypes (i.e. gain/loss of enhancer and the alterations of transcription factor bindings).",
"role": "assistant"
}
] |
science.ncbi_ner | science.ncbi_ner.150 | [
{
"content": "Extract all unique entities of these types from the following abstract: [\"Disease\"]\n\nOutput a JSON object with entity types as keys and lists of entities as values.\n\nAbstract:\nMarkedly accelerated catabolism of apolipoprotein A-II (ApoA-II) and high density lipoproteins containing ApoA-II in classic lecithin: cholesterol acyltransferase deficiency and fish-eye disease. Classic (complete) lecithin cholesterol acyltransferase (LCAT) deficiency and Fish-eye disease (partial LCAT deficiency) are genetic syndromes associated with markedly decreased plasma levels of high density lipoprotein (HDL) cholesterol but not with an increased risk of atherosclerotic cardiovascular disease. We investigated the metabolism of the HDL apolipoproteins (apo) apoA-I and apoA-II in a total of five patients with LCAT deficiency, one with classic LCAT deficiency and four with Fish-eye disease. Plasma levels of apoA-II were decreased to a proportionately greater extent (23% of normal) than apoA-I (30% of normal). In addition, plasma concentrations of HDL particles containing both apoA-I and apoA-II (LpA-I A-II) were much lower (18% of normal) than those of particles containing only apoA-I (LpA-I) (51% of normal). The metabolic basis for the low levels of apoA-II and LpA-I A-II was investigated in all five patients using both exogenous radiotracer and endogenous stable isotope labeling techniques. The mean plasma residence time of apoA-I was decreased at 2. 08 +/- 0. 27 d (controls 4. 74 +/- 0. 65 days); however, the residence time of apoA-II was even shorter at 1. 66 +/- 0. 24 d (controls 5. 25 +/- 0. 61 d). In addition, the catabolism of apoA-I in LpA-I A-II was substantially faster than that of apoA-I in LpA-I. In summary, genetic syndromes of either complete or partial LCAT deficiency result in low levels of HDL through preferential hypercatabolism of apoA-II and HDL particles containing apoA-II. Because LpA-I has been proposed to be more protective than LpA-I A-II against atherosclerosis, this selective effect on the metabolism of LpA-I A-II may provide a potential explanation why patients with classic LCAT deficiency and Fish-eye disease are not at increased risk for premature atherosclerosis despite markedly decreased levels of HDL cholesterol and apoA-I",
"role": "user"
},
{
"content": "{\"Disease\": [\"atherosclerotic cardiovascular disease\", \"Fish-eye disease\", \"complete or partial LCAT deficiency\", \"atherosclerosis\", \"fish-eye disease\", \"cholesterol acyltransferase deficiency\", \"LCAT deficiency\", \"partial LCAT deficiency\", \"classic LCAT deficiency\", \"Classic (complete) lecithin cholesterol acyltransferase (LCAT) deficiency\"]}",
"role": "assistant"
}
] |
science.mslr2022_cochrane_multidoc_summarization | science.mslr2022_cochrane_multidoc_summarization.3507 | [
{
"content": "Based on the RCT reports below, write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article.\n\n\n\nTitle 1:\nEffectiveness of prolonged use of continuous passive motion (CPM), as an adjunct to physiotherapy, after total knee arthroplasty.\nAbstract 1:\nAdequate and intensive rehabilitation is an important requirement for successful total knee arthroplasty. Although research suggests that Continuous Passive Motion (CPM) should be implemented in the first rehabilitation phase after surgery, there is substantial debate about the duration of each session and the total period of CPM application. A Cochrane review on this topic concluded that short-term use of CPM leads to greater short-term range of motion. It also suggested, however, that future research should concentrate on the treatment period during which CPM should be administered.\n In a randomised controlled trial we investigated the effectiveness of prolonged CPM use in the home situation as an adjunct to standardised PT. Efficacy was assessed in terms of faster improvements in range of motion (RoM) and functional recovery, measured at the end of the active treatment period, 17 days after surgery. Sixty patients with knee osteoarthritis undergoing TKA and experiencing early postoperative flexion impairment were randomised over two treatment groups. The experimental group received CPM + PT for 17 consecutive days after surgery, whereas the usual care group received the same treatment during the in-hospital phase (i.e. about four days), followed by PT alone (usual care) in the first two weeks after hospital discharge. From 18 days to three months after surgery, both groups received standardised PT. The primary focus of rehabilitation was functional recovery (e.g. ambulation) and regaining RoM in the knee.\n Prolonged use of CPM slightly improved short-term RoM in patients with limited RoM at the time of discharge after total knee arthroplasty when added to a semi-standard PT programme. Assessment at 6 weeks and three months after surgery found no long-term effects of this intervention Neither did we detect functional benefits of the improved RoM at any of the outcome assessments.\n Although results indicate that prolonged CPM use might have a small short-term effect on RoM, routine use of prolonged CPM in patients with limited RoM at hospital discharge should be reconsidered, since neither long-term effects nor transfer to better functional performance was detected.\n ISRCTN85759656.\n\n\nTitle 2:\nA controlled evaluation of continuous passive motion in patients undergoing total knee arthroplasty.\nAbstract 2:\nTo evaluate the efficacy of continuous passive motion (CPM) in the postoperative management of patients undergoing total knee arthroplasty.\n A randomized controlled single-blind trial of CPM plus standardized rehabilitation vs standard rehabilitation alone.\n A referral hospital for arthritis and musculoskeletal care.\n Consecutive patients with end-stage osteoarthritis or rheumatoid arthritis undergoing primary total knee arthroplasty who had at least 90 degrees of passive knee flexion. One hundred fifty-four patients were eligible and 102 patients agreed to participate and were randomized. 
Ninety-three patients completed the study protocol.\n Continuous passive motion machines programmed for rate and specified arc of motion within 24 hours of surgery with range increased daily as tolerated with standardized rehabilitation program compared with standardized rehabilitation program alone.\n Primary outcomes were pain, active and passive knee range of motion, swelling (or circumference), quadriceps strength at postoperative day 7, as well as complications, length of stay, and active and passive range of motion and function at 6 weeks.\n Use of CPM increased active flexion and decreased swelling and the need for manipulations but did not significantly affect pain, active and passive extension, quadriceps strength, or length of hospital stay. At 6 weeks there were no differences between the two groups in either range of motion or function. In this series, use of CPM resulted in a net savings of $6764 over conventional rehabilitation in achieving these results.\n For the average patient undergoing total knee arthroplasty, CPM is more effective in improving range of motion, decreasing swelling, and reducing the need for manipulation than is conventional therapy and lowers cost.\n\n\nTitle 3:\nUse of inpatient continuous passive motion versus no CPM in computer-assisted total knee arthroplasty.\nAbstract 3:\nContinuous passive motion (CPM) has shown positive effects on tissue healing, edema, hemarthrosis, and joint function (L. Brosseau et al., 2004). CPM has also been shown to increase short-term early flexion and decrease length of stay (LOS) ( L. Brosseau et al., 2004; C. M. Chiarello, C. M. S. Gundersen, & T. O'Halloran, 2004). The benefits of CPM for the population of patients undergoing computer-assisted total knee arthroplasty (TKA) have not been examined.\n The primary objective of this study was to determine whether the use of CPM following computer-assisted TKA resulted in differences in range of motion, edema/drainage, functional ability, and pain.\n This was an experimental, prospective, randomized study of patients undergoing unilateral, computer-assisted TKA. The experimental group received CPM thrice daily and physical therapy (PT) twice daily during their hospitalization. The control group received PT twice daily and no CPM during the hospital stay. Both groups received PT after discharge. Measurement included Knee Society scores, Western Ontario McMaster Osteoarthritis Index values, range of motion, knee circumference, and HemoVac drainage. Data were collected at various intervals from preoperatively through 3 months.\n Although the control group was found to be higher functioning preoperatively, there was no statistically significant difference in flexion, edema or drainage, function, or pain between groups through the 3-month study period.\n\n\nTitle 4:\nContinuous passive motion as an adjunct to active exercises in early rehabilitation following total knee arthroplasty - a randomized controlled trial.\nAbstract 4:\nContinuous passive motion is frequently used post-operatively to increase knee range of motion after total knee arthroplasty in spite of little conclusive evidence. The aim of this study was to examine whether continuous passive motion (CPM) as an adjunct to active exercises had any short time effects (after one week and three months) on pain, range of motion, timed walking and stair climbing.\n A randomized controlled trial was conducted. 
A total of 63 patients undergoing primary TKA were randomly assigned into an experimental group receiving CPM and active exercises and a control group receiving active exercises only. Outcomes were assessed by goniometer, visual analogue scale (VAS), timed 'Up and Go' test (TUG), timed 40 m walking distance and timed stair climbing.\n There were no statistical differences between the treatment groups for any outcome measures either at one week or after three months. For the whole group, a significant and 50% reduction in pain score was found after three months (p < 0.01). Compared with before surgery, a significantly impaired knee flexion range of motion (p < 0.01) and a significantly decreased number of patients able to climb stairs were found after three months (p < 0.01).\n CPM was not found to have an additional short-time effect compared with active physiotherapy. After three months considerable pain relief was obtained for the whole group, the patients preoperative ROM was not restored and the number of patients able to climb stairs had decreased.\n\n\nTitle 5:\nDeep-vein thrombosis and continuous passive motion after total knee arthroplasty.\nAbstract 5:\nSeventy-five of 150 consecutive patients who underwent total knee arthroplasty had routine physiotherapy and seventy-five had continuous passive motion of the lower limb that had been operated on as well as routine physiotherapy. A pulmonary embolus did not develop in any patient, but about 40 per cent had thrombosis in the veins of the calf, whether passive motion had been administered or not. Radiographically, the deep-vein thrombosis was seen to extend into or proximal to the popliteal vessel in 5 per cent of the patients in each group. Sex, age, obesity, or a history of hypertension or diabetes did not influence the incidence of venous thrombosis, but there was a higher incidence in patients in whom cement was used for fixation of the total knee components, irrespective of the use of continuous passive motion of the limb.\n\n\nTitle 6:\nEffectiveness of continuous passive motion and conventional physical therapy after total knee arthroplasty: a randomized clinical trial.\nAbstract 6:\nThis randomized clinical trial was conducted to compare the effectiveness of 3 in-hospital rehabilitation programs with and without continuous passive motion (CPM) for range of motion (ROM) in knee flexion and knee extension, functional ability, and length of stay after primary total knee arthroplasty (TKA).\n Eighty-one subjects who underwent TKA for a diagnosis of osteoarthritis were recruited.\n All subjects were randomly assigned to 1 of 3 groups immediately after TKA: a control group, which received conventional physical therapy intervention only; experimental group 1, which received conventional physical therapy and 35 minutes of CPM applications daily; and experimental group 2, which received conventional physical therapy and 2 hours of CPM applications daily. All subjects were evaluated once before TKA and at discharge. The primary outcome measure was active ROM in knee flexion at discharge. Active ROM in knee extension, Timed \"Up & Go\" Test results, Western Ontario and McMaster Universities Osteoarthritis Index questionnaire scores, and length of stay were the secondary outcome measures.\n The characteristics of and outcome measurements for the subjects in the 3 groups were similar at baseline. 
No significant difference among the 3 groups was demonstrated in primary or secondary outcomes at discharge.\n The results of this study do not support the addition of CPM applications to conventional physical therapy in rehabilitation programs after primary TKA, as applied in this clinical trial, because they did not further reduce knee impairments or disability or reduce the length of the hospital stay.\n\n\nTitle 7:\nBeneficial effects of continuous passive motion after total condylar knee arthroplasty.\nAbstract 7:\nA randomised, controlled study of the use of postoperative continuous passive motion (CPM) and immobilisation regimen after total condylar knee arthroplasty was performed. CPM resulted in a significant increase in both the early and late range of knee flexion. This increase occurred in both rheumatoid and osteoarthritic patients. The improvement of 10 degrees at 12 months allowed additional important function to be attained. CPM resulted in significantly earlier discharge from hospital. It did not increase the clinical incidence of wound healing problems, nor did it significantly increase the postoperative fixed flexion deformity or the extension lag. CPM can be recommended as a safe and effective modality to achieve more rapid and more successful postoperative rehabilitation after knee arthroplasty.\n\n\nTitle 8:\nContinuous passive motion compared to active physical therapy after knee arthroplasty: similar hospitalization times in a randomized study of 68 patients.\nAbstract 8:\n68 consecutive patients who had primary knee arthroplasties because of arthrosis were randomized to postoperative continuous passive motion (CPM) or active physical therapy (APT). Rehabilitation in both groups was initiated on the first postoperative day. The CPM group sustained less postoperative knee swelling with more rapid initial improvement in knee flexion than did the APT group, but there were no differences between the groups in knee flexion at discharge. Postoperative pain rating and hospitalization times were similar in the two groups.",
"role": "user"
},
{
"content": "There is not enough evidence from the available RCTs to conclude that CPM reduces VTE after TKA. We cannot assess the effect of CPM on death because no such events occurred amongst the participants of these trials.",
"role": "assistant"
}
] |
science.craftchem_ner | science.craftchem_ner.161 | [
{
"content": "Extract all chemical entities (ChEBI) from this biomedical text. Output as JSON array.\n\nText: GeneticalterationsandIOP",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.48802 | [
{
"content": "Write an abstract for the research paper below.\n\nPaper text:\nlaennec , in 1827 , was the first to define pneumomediastinum as the presence of air in mediastinum .\npulmonary barotrauma in mechanically ventilated patients is a result of alveolar rupture and characterized by development of extra - alveolar air .\ndissection of air elsewhere along fascial planes can result in pneumomediastinum , pneumoperitonium or subcutaneous emphysema .\nan indian male patient , age 29 , with no history of chronic illness , presented to the emergency department in an outlying hospital with complaints of fever , headache , and transient nausea and vomiting .\nblood pressure initially was 128/72 , heart rate 89/ regular , respiratory rate 21 , temperature 39.1 c. glasgow coma scale was 13 .\nthere was a rapid decrease in level of consciousness with decrease of glasgow coma scale to 6 .\nthe patient was intubated , mechanically ventilated , and transferred to micu at hamad medical corporation .\nchest x - ray post intubation was normal with endotracheal tube in proper position with no evidence of pneumothorax . during transfer\nthe patient was connected to a small portable ventilator with tidal volume of 700 ml , peep of 8 , rate of 16 breaths per minute , and pip of 45 cm h2o .\na chest tube was inserted with improvement of saturation and restoration of normal blood pressure .\nthe patient was mechanically ventilatied with tidal volume 400 , rate 14/minute , peep of 5 , and pip of 30 cm h2o .\nacyclovir was continued with gradual steady improvement in the level of consciousness . on the second day ,\nsuddenly the patient desaturated to 82% , with clinical evidence of surgical emphysema over the anterior chest wall and upper abdomen .\nchest x - ray , ct chest , abdomen and proximal areas of both lower limbs was performed ( fig 1 4 ) .\ndiagnosis of pneumomediastinum , extensive surgical emphysema extending from neck down to knees , left sided pneumothorax and pneumoperitonium was diagnosed .\ninsertion of left sided chest tube ( chest x - ray 1 ) was then performed , with conservative treatment of pneumoperitonium and surgical emphysema .\npatient improved clinically , and was able to breathe on his own after five days with improved level of consciousness ( glasgow coma scale 15 ) .\nthere was gradual resolution of pneumothorax , pneumomediastinum , surgical emphysema and pneumoperitonium until it resolved after two weeks , and the patient was discharged in good condition .\nchest x- ray with surgical emphysema , pneumomediastinum , air under right hemi- diaphragm and bilateral chest tubes .\nct of lower part of pelvis and proximal parts of both thighs with air in the pelvis and scrotum and upper part of both thighs\nasthma , chronic interstitial lung disease and acute respiratory distress syndrome ( ards ) are independent risk factors for barotrauma .\nthe development of barotrauma is associated with an increased icu length of stay and an increased mortality rate .\nhigh ventilation pressures and global or regional over distension of the lungs are responsible for most cases of barotrauma . elevation in peak airway pressure during volume - cycled mechanical ventilation often precedes the development of barotrauma . 
in one report , the incidence of barotrauma was low in patients whose peak airway pressures were less than 50 cm h2o , suggesting this may be a useful therapeutic target .\nbut it deserves mentioning that the relation between peak pressure and barotrauma has been challenged by many other studies .\nit can be difficult to diagnose pneumomediastinum because it may have many intrathoracic and extrathoracic causes , and radiological findings may simulate other conditions .\nthe mediastinum communicates with several anatomic structures including the submandibular space , retropharyngeal space , and vascular sheaths in the neck .\na tissue plane extends anteriorly from the mediastinum to the retroperitoneal space through the sternocostal attachment of the diaphragm .\nthe mediastinum also communicates directly with the retroperitoneum by way of the periaortic and peri - esophageal fascial planes [ 15 , 16 ] .\npneumoperitoneum , if not due to visceral perforation in the setting of mechanical ventilation , can usually be managed conservatively .",
"role": "user"
},
{
"content": "a 29 year old male patient of indian ancestry was admitted to an outside hospital with rapid deterioration of his level of consciousness . \n the patient required mechanical ventilation and transfer to micu at hamad medical corporation . \n the patient remained hypoxic . \n chest x - ray , ct of chest , abdomen , pelvis and proximal areas of both lower limbs were performed . \n pneumomediastinum , pneumoperitoneum , and extensive surgical emphysema were the diagnoses .",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.38657 | [
{
"content": "Write an abstract for the research paper below.\n\nPaper text:\nthe aim of this study is to report a case of oral bisphosphonate - related osteonecrosis of the jaw ( bronj ) resulting in implant failure .\na patient suspected of having bronj was referred to the department of periodontology , kyung hee university school of dentistry for the evaluation and treatment of exposed bone around implants .\nthe patient , who had been taking oral bisphosphonates ( bps ) for about a year , was successfully treated with systemic antibiotics , chlorhexidine mouth rinse , explantation , and surgical debridement of necrotic bone .\nthe results of this case suggest that a patient taking bps orally should be treated cautiously .\nappropriate management including cessation of bps and respective dental treatment may reduce the development of bronj .\nbisphosphonates ( bps ) are drugs for inhibiting bone resorption , widely used for the treatment of osteoporosis , multiple myeloma and skeletal complications of bone metastases .\nbps were proven to inhibit decreases in bone density and prevent bone fracture by reducing activation of osteoclasts and inducing apoptosis of osteoclasts .\nthe two main categories of bps are the non - nitrogen and nitrogen - containing bps .\nalendronate is an oral nitrogen - containing bp and the most commonly used drug to treat osteoporosis and osteopenia . even though bps have clinical efficacy , a number of cases of osteonecrosis of the jaw involving a patient treated with bps for a long period of time\nhave been reported worldwide since marx reported the first case , and this has been recognized as a serious adverse effect of these drugs .\nthe american association of oral and maxillofacial surgeons ( aaoms ) suggests patients may be considered to have bisphosphonate - related osteonecrosis of the jaw ( bronj ) if each of the following 3 characteristics are present : 1 ) current or previous treatment with bps , 2 ) exposed , necrotic bone in the maxillofacial region that has persisted for more than 8 weeks , and 3 ) no history of radiation therapy to the jaws .\nthe american dental association council on scientific affairs ( ada csa ) reported that the incidence of bronj for patients receiving alendronate for a long period of time was estimated to be 0.7 per 100,000 person years of exposure .\neven though bronj occurs infrequently , risk factors such as systemic / local factors , potency of bps and period of taking bps may raise the incidence rate of bronj .\nfurthermore , as the population ages , the numbers of patients who have been treated with bps or continue to use them for treatment of osteoporosis is on the rise . 
as a result ,\nthe risk of bronj for patients who are receiving bps is increasing after implant placement .\nthis case report illustrates a 67-year - old female patient who had received oral bps and developed a significant bone defect with necrosis after proper implant placement , and was treated with explantation .\ncase report protocol ( khd irb 2010 - 1 ) was approved by the institutional review board of kyung hee university dental hospital .\na 67-year - old female was referred to the department of periodontology , kyung hee university school of dentistry with chief complaints of pain on chewing and gingival swelling in the upper right molar area beginning 2 months previously .\nshe was treated with non - surgical periodontal therapy and antibiotic administration for 1 month at a private clinic .\ntwo dental implants had been placed with a ridge split in the upper right second premolar and the first molar areas at the private clinic 1 year earlier .\nshe had been diagnosed with osteoporosis , and thus had a 1-year history of taking oral bps , 70 mg alendronate ( fosamax , merck , whitehouse station , usa ) once per week .\nshe had been taken off alendronate 5 months before her arrival at the kyung hee university school of dentistry , because a bone density test and a vitamin d test at the department of endocrinology showed a normal range . however , her serum c - terminal cross - linked telopeptide of type i collagen ( ctx ) level was low , measured at 21 pg / ml . on the first day of dental examination , she complained of pain on palpation , and gingival swelling and redness on the buccal side of the two implants .\nexposure of the necrotic bone was observed between the upper right first premolar tooth and second premolar implant .\nhowever , we could not determine how long it had been exposed because the patient was reffered to us for treatment .\nthe upper right second premolar implant showed 3 degrees of mobility , pain on percussion , and a 10-mm probing depth in the buccal and distal areas .\nalveolar bone resorption with internal scattered residual bony fragments was seen between the two implants on a panoramic radiograph ( fig\n2 ) . widening of the periodontal ligament space and a radiolucent lesion in the apical area of the upper right first premolar were also seen . during the first visit , inflammatory tissue around the implants\ngingival swelling decreased after 1 week , but there was no change in exposure of the necrotic bone ( fig .\n3 ) . explantation and extraction of the upper right first premolar was planned to prevent additional bone destruction .\nafter an intrasulcular incision was made around the implants and the tooth to be extracted under local anesthesia , a vertical incision was done at the mesiobuccal line angle of the upper right canine .\nthe full thickness flap was then elevated , and exposure of the implant threads was noticed on the buccal and inter - implant area ( fig .\n4 ) . after extraction of the upper right first premolar , perforation on the buccal side of the extraction socket was seen and the extraction socket was also penetrated to the upper right second premolar implant distally at the apical area ( fig . 5 ) .\nthe two implants were removed with dental forceps and an elevator . 
the remaining granulated tissue was removed and a generalized ostectomy was done .\nthe sharp edges of necrotic bone were trimmed by a high speed drill to prevent inflammation of the soft tissue and to manage pain .\nthe remaining necrotic bone was partially ( not entirely ) removed by the high speed drill .\ninterestingly , during this procedure , distinct bone surface bleeding was not seen to any great extent ( fig .\ntetracycline ( 50 mg / ml ) irrigation was performed , and the flap was extended coronally to effect primary closure and was then sutured ( fig . 7 ) .\namoxicillin 500 mg ( amoxapen , chong kun dang pharmaceutical co. , seoul , korea ) and ibuprofen 200 mg ( brufen , sam - il co. , seoul , korea ) were each administered for 1 week , and a mouth rinse with 0.12% chlorhexidine was recommended for the following 2 weeks .\nthe following day , the patient complained of a slight fever , but it was relieved 2 days later .\nstitch - out was done after 1 week ( fig . 8) and wound dehiscence was seen on the extraction socket of the upper right first premolar .\nafter 4 weeks , pin - point bone exposure was detected on the distal side of the upper right canine , but the patient did not have any discomfort or pain .\nafter 4 months , no additional bone exposure was found at the operation site ( fig .\n9 ) , and there were no significant findings on the panoramic radiograph ( fig .\nthe patient is wearing a removable partial denture at the moment and receives follow - up care on a regular basis .\nthe patient in this case had a history of taking bps for 1 year and no history of radiation therapy to the jaws , but there was no way to ascertain how long the necrotic bone had been exposed in the oral cavity because she was referred without any further information . however , we estimated that this patient met the three requirements of bronj as suggested by aaoms and thus was diagnosed with bronj .\nalthough there has not been sufficient research to determine the nature of bronj , some risk factors have been connected with an increasing tendency toward the incidence of bronj for patients taking bps .\nalthough few reports have investigated risk factors , it has been suggested that risk factors such as potency of bps , period of taking bps , and systemic / local factors could influence the incidence of bronj .\nthe more powerful potency of bps and the longer period of taking bps , the greater the occurrence of bronj .\nlocal risk factors which increase the incidence of bronj are as follows : extraction , implant placement , surgical procedures including osteotomy , and anatomic structures such as tori .\nsystemic risk factors include old age , diabetes mellitus , use of steroids , cancer therapy , and smoking .\nthe most common risk factors in dentistry are intraoral surgery such as extraction and implant placement .\nit was reported that the incidence rate of bronj in patients taking bps who had intraoral surgery was seven times higher than that of those who did not have surgery . 
in this case\nhowever , we assumed that old age ( 67-year - old patient ) , extraction and implant placement worked as negative risk factors for bronj .\nbps were divided into two types depending on administration method - intravenous infusion and oral administration .\npatients taking oral bps manifested less bone exposure and mild symptoms compared to patients with iv bps .\nit has been reported that oral bps had a lower risk since it took longer to develop bisphosphonate - induced osteonecrosis because of a slower accumulation rate in the bone . however , there are many patients taking oral bps for the treatment of osteoporosis , so it is very important to assess risks , especially when patients took bps for a long period of time ( longer than 3 years ) .\nmarx et al . suggested using serum levels of morning fasting ctx to assess patients ' risks of developing bronj by oral administration of bps .\nthis telopeptide fragment is cleaved from the main crosslink chains of collagen by the osteoclast during bone resorption . therefore , its serum level is proportional to the osteoclastic activity .\nthe ctx value can be used as a marker for deciding the optimal timing of surgical intervention .\nassessed 30 cases of bronj , and based on 17 out of 30 patients who were still receiving bps at the beginning of the study , they classified risks of developing bronj into 3 classes depending on the ctx value . in this population , ctx values less than 100 pg / ml were classified into a high risk group , values from 100 to 150 pg / ml into a moderate risk group , and values higher than 150 pg / ml into a minimal risk group .\nthe authors suggested that clinicians might consider implementing a \" drug holiday \" to achieve the desired ( low risk ) ctx level before initiating oral surgical procedures .\nthe ctx value was 21 pg / ml in this case , which means the patient was in the high risk group .\nshe had a history of delaying intraoral minor surgery for 1 to 2 months in the department of oral and maxillofacial surgery , kyung hee university school of dentistry after being referred to the department of endocrine medicine .\none of the most common clinical features associated with bronj is an exposure of necrotic bone . in the early stages ,\nhowever , patients may have severe pain if secondary infection or trauma on soft tissue above the sharp edges of the exposed bone occurred . in this case , the patient complained of pain on chewing and gingival swelling on the upper right 2 implants which were placed 1 year earlier , and necrotic bone was observed around the implants .\nthe patient was diagnosed as possessing stage 2 of bronj by the criteria of aamos , and stage 2 treatment strategy was carried out - selective removal of necrotic bone . 
only the sharp necrotic bone which triggered soft tissue inflammation and pain\nantibiotics and analgesics were administered for the control of pain and mouth rinse with 0.12% chlorhexidine was recommended .\npinpoint bone exposure was observed at the surgical site , but it was covered by soft tissue in 6 weeks , and healing was satisfactory .\nthe treatment objectives for patients with an established diagnosis of bronj are to alleviate pain , eliminate inflammation in soft and hard tissue , and minimize the progression of bone necrosis .\nremoval of necrotic bone which constantly irritates soft tissue and recontouring of the bony shape are needed .\nhowever , it is difficult to expose viable bleeding bone for the surgical margin since bps affect the entire jaw bone and the necrotic bone margin is indefinite . therefore , custom osteotomy is impossible , and it is recommended that loose segments of bony sequestrum be removed without exposing uninvolved bone .\nmarx et al . reported that attempts to accomplish debridement , cover the exposed bone with flaps , or bone - contouring procedures have mostly been counterproductive and have led to further exposed bone , worsening of symptoms , and a greater risk for a pathologic fracture of the jaw .\ntherefore , those procedures were to be limited only to cases where there was no response to nonsurgical management or continuation of symptoms . since necrotic exposed bone itself is not painful , it is suggested that it retained for the preservation of jaw structure and normal function . in terms of implant failure related to bps , jeffcoat reported a 100% success rate for 102 implants with no clinical evidence of infection , pain or bone necrosis in patients receiving oral bps for a mean duration of 3 years .\nmadrid and sanz reviewed 1 prospective and 3 retrospective studies and reported that there was no bronj after implant placement in 217 patients who took oral bps for less than 5 years .\nthey concluded that oral bp administration did not affect the short term success rate of an implant ( 1 to 4 years ) .\nalthough the implant failure rate due to bronj is low for patients administered oral bps , implant failure is possible for patients who took bps for only 1 year .\nthus , it is suggested that if implant placement were planned , informed consent should be provided relating to possible future implant failure and osteonecrosis of the jaw for patients taking oral bps .\ndiscontinuation of oral bps can be considered from 3 months prior to surgery to 3 months after surgery if the patient 's systemic condition allows it , in cases where the administration period of bps was greater than 3 years .\na number of studies have recently reported that the incidence of bronj decreased after discontinuation of oral bps therapy .\nas the aging population grows , the number of patients with osteoporosis increases , thus the number of patients taking bps for the treatment of osteoporosis is also on the rise .\nas we have discussed above , serum ctx could be used as a systemic marker for assessment for the risk of bronj ; thus further studies about serum ctx are needed .\nin addition , more detailed studies should be conducted to decide if modification of a treatment plan for implant placement and extraction ( depending on the period of administration ) is necessary for patients taking oral bps .",
"role": "user"
},
{
"content": "purposethe aim of this study is to report a case of oral bisphosphonate - related osteonecrosis of the jaw ( bronj ) resulting in implant failure.methodsa patient suspected of having bronj was referred to the department of periodontology , kyung hee university school of dentistry for the evaluation and treatment of exposed bone around implants.resultsthe patient , who had been taking oral bisphosphonates ( bps ) for about a year , was successfully treated with systemic antibiotics , chlorhexidine mouth rinse , explantation , and surgical debridement of necrotic bone.conclusionsthe results of this case suggest that a patient taking bps orally should be treated cautiously . \n appropriate management including cessation of bps and respective dental treatment may reduce the development of bronj .",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.48424 | [
{
"content": "Write an abstract for the research paper below.\n\nPaper text:\ninvasive cervical resorption ( icr ) is uncommon and often aggressive form of external tooth resorption , which may involve any tooth of the permanent dentition .\nit is defined as a localized resorptive process that involves the surface of root below epithelial attachment and coronal aspect of the supporting alveolar process , namely the zone of connective tissue attachment .\nbecause of its cervical involvement and invasive property , this destructive resorptive defect may lead to the loss of tooth structure .\noften , affected tooth may show pinkish discoloration of tooth crown due to resorption of coronal dentin and enamel as highly vascular resorptive tissue becomes visible through this residual structure . among the different causes\nidentified , injury and stimulation by sulcular microorganisms in the adjacent marginal tissues is one of the cause and traumatic injuries , orthodontic tooth movement , orthognathic and dento - alveolar surgery , periodontal treatment , and internal bleaching have been reported as predisposing factors for such lesion .\non examination of intraoral periapical ( iopa ) radiograph , icr may represent as a barely discernible radiolucency or may be evident dramatically .\nthe lesions may have well - delineated radiolucencies with irregular borders and sometimes may simulate as caries radiographically .\nas initiation of icr is apical to the epithelial attachment , it is most commonly seen in the cervical area , but it may occur anywhere in the root . initially at the early stages , it may be somewhat symmetrical but later , asymmetrical nature of larger lesions are more evident .\na successful outcome for such cases generally involve early diagnosis , elimination of the resorption , and restorative management .\nwhen icr is diagnosed , generally , three choices are considerable for treatment : \n no treatment with eventual extraction when the tooth becomes symptomatic;immediate extraction ; oraccess , debridement , and restoration of the resorptive lesion . \n no treatment with eventual extraction when the tooth becomes symptomatic ; immediate extraction ; or access , debridement , and restoration of the resorptive lesion . till date , various materials have been promoted to seal the resorptive defect such as mineral trioxide aggregate ( mta ) , glass - ionomer cement ( gic ) , calcium enriched mixture ( cem ) etc .\nmta is a bioactive material that has been used for various endodontic applications since the early 1990s .\nmta is indicated for restoring internal and external resorptive defects , horizontal root fractures , sealing communications between the root canal space and external root surfaces , filling root canals of teeth with mature and open apices .\nmta has been proved to be biocompatible , stimulate mineralization , and encourage apatite - like crystalline deposits .\nmore recently , other bioactive materials , such as cem cement , bioaggregate , biodentine , endosequence root repair material ( errm ) , and endosequence bc sealer , have also been marketed . 
among the options available , a new bioactive cement , biodentine ( septodont , st .\nmaur - des - fosss , france ) , can be a valid option since it acts as a substitute for dentin .\nit shares both its indications and mode of action with calcium hydroxide , but does not have its drawbacks .\nwhen biodentine was used to seal perforations in the furcal area , it induced the repair of the periodontium and new cementum formation over the material .\nthe past decade has revealed the use of the operating microscope for surgical endodontics as one of the most significant developments .\nthe microscope enables the surgeon to assess pathological changes more precisely and to remove pathological lesions with far greater precision , thus minimizing tissue damage .\ntherefore , the present case report was about describing the management of maxillary left central incisor with external cervical resorption surgically under operatory microscope using biodentine .\na 23-year - old female patient presented to the department of conservative dentistry and endodontics , with a discolored upper right and left central incisor ( 11 and 21 ) .\nthe crown of 21 showed pink hue and there was little swelling in relation to 21 at its labial aspect [ figure 1a ] .\n( a ) preoperative intraoral photograph , ( b ) preoperative iopa radiograph , ( c ) cervical resorption defect seen under dental operatory microscope , ( d ) sealing of defect with biodentine under dental operating microscope , ( e ) postoperative iopa radiograph , ( f ) eleven months follow - up iopa radiograph the medical history of the patient was noncontributory .\nthe patient had noticed color change of related teeth , and it increased over the last 3 - 4 months .\npast dental history revealed that she had undergone trauma 6 years back and teeth remained untreated .\nperiodontal probing depths were physiological at all sites except for the distobuccal surface where sinus tract and the necrotic material were present .\nthe preoperative iopa radiographic examination [ figure 1b ] revealed an irregular , large radiolucent area in the cervical third of the external root surface at distal aspect of 21 at supra - osseous level .\nthe clinical diagnosis was irreversible pulpitis with external icr . for root canal treatment of 11 and 21 with debridement , curettage , and restoration of the external resorption defect ,\nconsent was obtained from the patient and the procedures were explained to the patient . under local anesthesia ,\nthe access cavities of 11 and 21 were opened on the palatal surface . after working length determination of both teeth , the root canals were biomechanically prepared i.e. cleaning and shaping with manual instruments using # k- files and 3% naocl ( parcan , septodont healthcare india pvt . ltd . ,\nafter the root canal systems were debrided , it was rinsed with sterile water and dried with paper points . as the defect was on the cervical area ,\na large circular resorptive area was seen clearly from the surgical site containing granulation tissue under the dom [ figure 1c ] .\nthe resorptive area was cleaned by rinsing with alternating solutions of 3% naocl ( parcan , septodont healthcare india pvt . ltd . 
, india ) and 17% ethylenediaminetetraacetic acid ( edta ) ( prime dental product limited , india ) because of property of organic tissue fixation and antibacterial efficacy of naocl and some regenerative potential of edta such as exposing tgf -1 from human dentin ( to improve clinical outcomes ) .\nafter drying the site , the root canal of 21 was secured by inserting a suitable gutta - percha ( gp ) point and the resorptive area was filled with biodentine ( manipulated according to manufacturer 's instructions ) and contoured as per external root anatomy [ figure 1d ] .\nthe material was left for fifteen minutes to achieve the initial setting and relative hardness .\nafter , the setting of biodentine was confirmed , the root canals of 11 and 22 was properly obturated with gp point after sealer application ( apexit plus , ivoclar vivadent ) and access cavity was temporarily sealed with cavit - g . finally , the flap was kept at its original position and sutured with 3 - 0 black silk suture material and the patient was instructed to report after a week for suture removal .\nhowever the patient reported to the department only after 11 months and was completely asymptomatic on presentation .\nwe performed clinical and radiographic examinations for treated teeth [ figure 1f ] and observed that repair of resorption defect was successfully treated with biodentine .\nfor root canal treatment of 11 and 21 with debridement , curettage , and restoration of the external resorption defect , surgery was the treatment of choice .\nconsent was obtained from the patient and the procedures were explained to the patient . under local anesthesia ,\nthe access cavities of 11 and 21 were opened on the palatal surface . after working length determination of both teeth , the root canals were biomechanically prepared i.e. cleaning and shaping with manual instruments using # k- files and 3% naocl ( parcan , septodont healthcare india pvt . ltd . ,\nafter the root canal systems were debrided , it was rinsed with sterile water and dried with paper points . as the defect was on the cervical area ,\na large circular resorptive area was seen clearly from the surgical site containing granulation tissue under the dom [ figure 1c ] .\nthe resorptive area was cleaned by rinsing with alternating solutions of 3% naocl ( parcan , septodont healthcare india pvt . ltd . ,\nindia ) and 17% ethylenediaminetetraacetic acid ( edta ) ( prime dental product limited , india ) because of property of organic tissue fixation and antibacterial efficacy of naocl and some regenerative potential of edta such as exposing tgf -1 from human dentin ( to improve clinical outcomes ) .\ngranulomatous tissue was removed from the surgical site efficiently through the curettage . 
after drying the site ,\nthe root canal of 21 was secured by inserting a suitable gutta - percha ( gp ) point and the resorptive area was filled with biodentine ( manipulated according to manufacturer 's instructions ) and contoured as per external root anatomy [ figure 1d ] .\nthe material was left for fifteen minutes to achieve the initial setting and relative hardness .\nafter , the setting of biodentine was confirmed , the root canals of 11 and 22 was properly obturated with gp point after sealer application ( apexit plus , ivoclar vivadent ) and access cavity was temporarily sealed with cavit - g .\nfinally , the flap was kept at its original position and sutured with 3 - 0 black silk suture material and the patient was instructed to report after a week for suture removal .\nhowever the patient reported to the department only after 11 months and was completely asymptomatic on presentation .\nwe performed clinical and radiographic examinations for treated teeth [ figure 1f ] and observed that repair of resorption defect was successfully treated with biodentine .\nregarding the treatment of icr , successful management of each case must be linked to the etiology .\na radiographic evaluation is essential to the diagnosis of icr and the difficulty in distinguishing this lesion from internal root resorption .\nuse of cone beam computed tomography ( cbct ) may be an important diagnostic tool in this regard .\nthis imaging technique may confirm the real extent of resorption and its possibility of communication with the periodontal space .\ndifferent modalities have been suggested by several authors for the treatment of cervical external root resorption .\nthe basic aim of treating icr is the complete removal of resorptive tissue and the restoration of the defect area .\nthe present case describes an invasive cervical resorptive defect in which the tooth shows a sign of pulpal and periapical infection and requires root canal treatment followed by sealing the resorptive area with biodentine using surgical intervention .\nthe etiological factor for the resorption in the present case seems to be traumatic injury . since the affected teeth remained untreated for a long period of time\nthe communication between the resorption defect and the root canal system was large in size , and the defect was treated surgically .\nsurgical treatment of varying degrees of icr generally involves periodontal flap reflection , curettage , granulation tissue removal , and restoration of the defect with suitable material , such as mta , resin - modified gic , cem , biodentine , and repositioning the flap to its original position . the root canal treatment and management of the resorptive defect for this case\nwere performed at the same appointment in order to avoid secondary infection . like mta ,\nit also has the property of promoting hydroxyapatite formation , stimulating stem cell differentiation and can induce hard tissue formation .\ncem cement has demonstrated similar results to mta when used as pulp capping agent , furcation perforation repair , pulpotomy of permanent molar teeth with established irreversible pulpitis , and in management of internal root resorption . 
on the other hand ,\nbiodentine has also been promoted as a favorable repair material due to its bioactivity and biocompatibility .\nbiodentine has setting time of less than 12 min and high mechanical properties with excellent sealing ability .\nits property to release calcium ion and enhancing the alkaline environment makes biodentine more conducive for osteoblastic activity .\nalso , calcium and hydroxide ions stimulate the release of pyrophosphatase , alkaline phosphatase , and bmp-2 , which favors the mineralization process . managing the case with use of dom ,\nenabled the assess of pathological changes more precisely , and to remove pathological lesions with far greater precision , thus minimizing tissue damage during the surgery and decreased the postoperative patient discomfort .\nafter the treatment , the patient did not follow the regular visits for check - up , but when returned to the department after 11 months , satisfactory healing was observed in the case .\nthe patient was completely asymptomatic and the radiographic examinations showed healing of the periradicular radiolucency with indication of bone formation .\nproper diagnosis , case selection , and its implementation can lead to the successful outcome and long - term retention of the tooth .\nalthough this case report presents a favorable outcome , further studies are encouraged to support the use of biodentine to fill external icr defect .\ndepartment of conservative dentistry and endodontics , dr . r. ahmed dental college & hospital , kolkata - 700 014 .\ndepartment of conservative dentistry and endodontics , dr . r. ahmed dental college & hospital , kolkata - 700 014 .",
"role": "user"
},
{
"content": "invasive cervical resorption ( icr ) of a tooth is relatively uncommon and the etiology is not very clear . \n it is sometimes misdiagnosed and can lead to improper management or tooth loss . \n correct diagnosis and proper management can result in a successful outcome . \n the treatment should aim toward the complete suppression of all resorbing tissues and the reconstruction of resorptive defect by the placement of a suitable filling material or some biological systems . \n one of the most significant developments of the past decade , i.e. the operating microscope used for surgical endodontics , helps the surgeon to assess pathological changes more precisely and to remove pathological lesions with far greater precision , thus minimizing tissue damage . \n the aim of this article was to show the management of maxillary left central incisor diagnosed with external icr using biodentine under dental operatory microscope ( dom ) .",
"role": "assistant"
}
] |
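
Each row above pairs a task identifier and example id with a two-turn conversation: a user message carrying the task instruction plus input text, and an assistant message carrying the reference output. Below is a minimal sketch of how rows in this shape could be consumed programmatically; it assumes the table has been exported as a JSON Lines file with one record per line — the file name `science_tasks.jsonl` and the export step are assumptions, not something given on this page.

```python
import json
from collections import Counter

# Assumed export of the rows above: one JSON object per line with the
# fields "dataset", "id", and "messages" (a list of role/content turns).
PATH = "science_tasks.jsonl"

pairs = []            # (task name, user prompt, assistant reference)
task_counts = Counter()

with open(PATH, encoding="utf-8") as fh:
    for line in fh:
        record = json.loads(line)
        messages = record["messages"]
        # Each row shown above has exactly two turns: the user turn holds
        # the instruction and input, the assistant turn holds the target.
        prompt = next(m["content"] for m in messages if m["role"] == "user")
        target = next(m["content"] for m in messages if m["role"] == "assistant")
        pairs.append((record["dataset"], prompt, target))
        task_counts[record["dataset"]] += 1

print(f"{len(pairs)} examples across {len(task_counts)} task types")
for task, n in task_counts.most_common(5):
    print(f"  {task}: {n}")
```

Running this prints a per-task example count, which is a quick way to check how the task mixture sampled above is balanced before using the pairs for training or evaluation.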