Columns:
- dataset: string (45 unique values) — the source task the example was drawn from
- id: string (length 17–64) — unique example identifier
- messages: list (always length 2) — one user/assistant conversation turn pair
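Each row below pairs a source-task identifier and example id with a two-turn instruction/response conversation. The sketch below shows one way to load and inspect rows of this shape with the Hugging Face `datasets` library; the repository id "org/dataset-name" is a placeholder, since this dump does not state the actual Hub id or file path, so substitute your own.

```python
# Minimal loading sketch using the Hugging Face `datasets` library.
# "org/dataset-name" is a placeholder repo id -- substitute the real Hub id,
# or load local files instead: load_dataset("json", data_files="rows.jsonl").
from datasets import load_dataset

ds = load_dataset("org/dataset-name", split="train")

row = ds[0]
print(row["dataset"])   # source task, e.g. "science.pico_ner"
print(row["id"])        # unique example id, e.g. "science.pico_ner.2624"

# `messages` is always a two-turn conversation: user prompt, then the
# assistant's reference answer.
for msg in row["messages"]:
    print(f'{msg["role"]}: {msg["content"][:80]}...')
```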
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.29693
[ { "content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nApplications in science and engineering often require huge computational resources for solving problems within a reasonable time frame. Parallel supercomputers provide the computational infrastructure for solving such problems. A traditional application scheduler running on a parallel cluster only supports static scheduling where the number of processors allocated to an application remains fixed throughout the lifetime of execution of the job. Due to the unpredictability in job arrival times and varying resource requirements, static scheduling can result in idle system resources thereby decreasing the overall system throughput. In this paper we present a prototype framework called ReSHAPE, which supports dynamic resizing of parallel MPI applications executed on distributed memory platforms. The framework includes a scheduler that supports resizing of applications, an API to enable applications to interact with the scheduler, and a library that makes resizing viable. Applications executed using the ReSHAPE scheduler framework can expand to take advantage of additional free processors or can shrink to accommodate a high priority application, without getting suspended. In our research, we have mainly focused on structured applications that have two-dimensional data arrays distributed across a two-dimensional processor grid. The resize library includes algorithms for processor selection and processor mapping. Experimental results show that the ReSHAPE framework can improve individual job turn-around time and overall system throughput.\n\nCited Abstract(s):\n[1]: We describe Charm++, an object oriented portable parallel programming language based on C++. Its design philosophy, implementation, sample applications and their performance on various parallel machines are described. Charm++ is an explicitly parallel language consisting of C++ with a few extensions. It provides a clear separation between sequential and parallel objects. The execution model of Charm++ is message driven, thus helping one write programs that are latency-tolerant. The language supports multiple inheritance, dynamic binding, overloading, strong typing, and reuse for parallel objects, all of which are more difficult problems in a parallel context. Charm++ provides specific modes for sharing information between parallel objects. It is based on the Charm parallel programming system, and its runtime system implementation reuses most of the runtime system for Charm.\n\n[2]: Malleable jobs are parallel programs that can change the number of processors on which they are executing at run time in response to an external command. One of the advantages of such jobs is that a job scheduler for malleable jobs can provide improved system utilization and average response time over a scheduler for traditional jobs. In this paper, we present a programming system for creating malleable jobs that is more general than other current malleable systems. In particular, it is not limited to the master-worker paradigm or the Fortran SPMD programming model, but can also support general purpose parallel programs including those written in MPI and Charm++, and has built-in migration and load-balancing, among other features.\n\n[3]: Efficient management of distributed resources, under conditions of unpredictable and varying workload, requires enforcement of dynamic resource management policies. 
Execution of such policies requires a relatively fine-grain control over the resources allocated to jobs in the system. Although this is a difficult task using conventional job management and program execution models, reconfigurable applications can be used to make it viable. With reconfigurable applications, it is possible to dynamically change, during the course of program execution, the number of concurrently executing tasks of an application as well as the resources allocated. Thus, reconfigurable applications can adapt to internal changes in resource requirements and to external changes affecting available resources. In this paper, we discuss dynamic management of resources on distributed systems with the help of reconfigurable applications. We first characterize reconfigurable parallel applications. We then present a new programming model for reconfigurable applications and the Distributed Resource Management System (DRMS), an integrated environment for the design, development, execution, and resource scheduling of reconfigurable applications. Experiments were conducted to verify the functionality and performance of application reconfiguration under DRMS. A detailed breakdown of the costs in reconfiguration is presented with respect to several different applications. Our results indicate that application reconfiguration is effective under DRMS and can be beneficial in improving individual application performance as well as overall system performance. We observe a significant reduction in average job response time and an improvement in overall system utilization.", "role": "user" }, { "content": "Moreira and Naik [3] propose a technique for dynamic resource management on distributed systems using a checkpointing framework called Distributed Resource Management Systems (DRMS). The framework supports jobs that can change their active number of tasks during program execution, map the new set of tasks to execution units, and redistribute data among the new set of tasks. DRMS does not make reconfiguration decisions based on application performance however, and it uses file-based checkpointing for data redistribution. A more recent work by Kale [2] achieves reconfiguration of MPI-based message passing programs. However, the reconfiguration is achieved using Adaptive MPI (AMPI), which in turn relies on Charm++ [1] for the processor virtualization layer, and requires that the application be run with many more threads than processors.", "role": "assistant" } ]
science.drug_combo_extraction_re
science.drug_combo_extraction_re.888
[ { "content": "Extract drug combinations from this sentence and classify each combination into one of these categories:\n- POS: Positive interaction between drugs\n- NEG: Negative interaction between drugs\n- COMB: Drugs used together with unknown interaction type\n\nOutput only a JSON array where each entry is [drug_1, drug_2, ..., drug_n, category].\n\nparagraph: A rapid stability-indicating, fused-core HPLC method for simultaneous determination of β-artemether and lumefantrine in anti-malarial fixed dose combination products. artemisinin-based fixed dose combination (FDC) products are recommended by World Health Organization (WHO) as a first-line treatment. However , the current artemisinin FDC products , such as β-artemether and lumefantrine , are inherently unstable and require controlled distribution and storage conditions , which are not always available in resource-limited settings . Moreover, quality control is hampered by lack of suitable analytical methods. Thus, there is a need for a rapid and simple, but stability-indicating method for the simultaneous assay of β-artemether and lumefantrine FDC products. ### methods Three reversed-phase fused-core HPLC columns (Halo RP-Amide, Halo C18 and Halo Phenyl-hexyl), all thermostated at 30°C, were evaluated. β-artemether and lumefantrine (unstressed and stressed), and reference-related impurities were injected and chromatographic parameters were assessed. Optimal chromatographic parameters were obtained using Halo RP-Amide column and an isocratic mobile phase composed of acetonitrile and 1 mM phosphate buffer pH 3.0 (52:48; V/V) at a flow of 1.0 ml/min and 3 μl injection volume. Quantification was performed at 210 nm and 335 nm for β-artemether and for lumefantrine, respectively. In-silico toxicological evaluation of the related impurities was made using Derek Nexus v2.0®. ### results Both β-artemether and lumefantrine were separated from each other as well as from the specified and unspecified related impurities including degradants. A complete chromatographic run only took four minutes. Evaluation of the method, including a Plackett-Burman robustness verification within analytical QbD-principles, and real-life samples showed the method is suitable for quantitative assay purposes of both active pharmaceutical ingredients, with a mean recovery relative standard deviation (± RSD) of 99.7 % (± 0.7%) for β-artemether and 99.7 % (± 0.6%) for lumefantrine. All identified β-artemether-related impurities were predicted in Derek Nexus v2.0® to have toxicity risks similar to β-artemether active pharmaceutical ingredient (API) itself. ### conclusions A rapid, robust, precise and accurate stability-indicating, quantitative fused-core isocratic HPLC method was developed for simultaneous assay of β-artemether and lumefantrine. This method can be applied in the routine regulatory quality control of FDC products. The in-silico toxicological investigation using Derek Nexus® indicated that the overall toxicity risk for β-artemether-related impurities is comparable to that of β-artemether API.\n\nsentence: However , the current artemisinin FDC products , such as β-artemether and lumefantrine , are inherently unstable and require controlled distribution and storage conditions , which are not always available in resource-limited settings .", "role": "user" }, { "content": "[]", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_arxiv
science.scientific_papers_summarization_single_doc_arxiv.192659
[ { "content": "You will be presented with the full text of a scientific research paper. Write an abstract summarizing the key points. Your response should contain only the abstract without any other text.\n\nArticle text:\nthere is much interest at present in the possible use of the new generation of high - power laser facilities ( in particular the national ignition facility at livermore and the laser megajoule in bordeaux ) to simulate astrophysical phenomena such as supernovae . at first sight\nthis programme appears to suffer from one obvious drawback .\nthe phenomena one wishes to simulate generally involve _ explosions _ while the laser facilities are designed to produce _\nremarkably , as we will show , this is not a problem . under certain ,\nnot too restrictive , conditions there exists an exact mathematical duality which allows one to transform an explosion problem to an implosion problem and vice versa .\nthus it is possible , in a precise sense , to use _ implosion _ experiments to simulate _ exploding _ systems .\nthe euler equations of perfect gas dynamics can be conveniently written in the form @xmath0 where @xmath1 denotes the lagrangian , material or convective derivative defined by = + * u * and @xmath2 is the mass density , @xmath3 the thermal energy density , @xmath4 the pressure and @xmath5 the velocity .\nin addition to these differential equations one algebraic relation is needed , an equation of state relating the pressure to the mass and energy densities the simplest being a polytropic equation of state , p = ( -1 ) with @xmath6 a constant .\nthese equations are mathematically equivalent to the mass , momentum and energy conservation equations in smooth regions of the flow . only at shocks\nis it necessary to revert to the fundamental conservation forms to recover the correct shock jump conditions . now consider the following transformation of the dependent and independent variables , @xmath7 where for the moment @xmath8 is an arbitrary function of time .\napart from the , at first sight rather strange , time - dependent scaling factors this is essentially a transformation to a coordinate system which is expanding or contracting with a scale factor @xmath8 . if we define @xmath9 and note that = a^2 ddt(1a ) = - d adt it is easy to see that it is an involutionary transformation with inverse obtained by simply interchanging the starred and unstarred quantities @xmath10 let us now consider how the dynamical equations transform under this change of variables .\nit is easy to see that @xmath11 and thus , after some elementary algebra , @xmath12 remarkably , we see that if the scale factor @xmath13 is such that ^ * = d^2 a^*d t^*2 = - a^2 a = 0 and the gas is a polytrope of exponent @xmath14 with @xmath15 then the euler equations are _ invariant _ under this transformation .\nnote that , because the euler equations in conservation form are algebraically equivalent to the simplified forms , the conservation forms are also invariant and thus the whole structure of ideal gas dynamics , including the rankine - hugoniot shock relations , is preserved .\nthe condition that the acceleration of the scale factor be zero , @xmath16 , requires that @xmath8 be a linear function of @xmath17 and , without loss of generality , we can take @xmath18 where @xmath19 is a constant characteristic expansion time .\nthe time transformation is then t^ * = = t_0 ^ 2 = - t_0 ^ 2 t and it is convenient to set the constant to zero and choose t^ * = -t_0 ^ 2 t , t = -t_0 ^ 2t^*. 
the initial singularity of the expansion in physical space occurs at @xmath20 and is mapped to @xmath21 , the long term behaviour as @xmath22 is mapped to @xmath23 .\nit is important to note that in the dual representation the time variable is bounded from _ above _ , @xmath24 , whereas in physical space it is bounded from _ below _ , @xmath25 .\nthe remarkable result is that for an ideal gas of point particles with no internal structure ( which is what the 5/3 polytrope is ) hydrodynamics in a uniformly expanding system is exactly equivalent to hydrodynamics in a static system .\nthis result , or special forms of it , appears to have been discovered a number of times by cosmologists ( where the idea of factoring out the general expansion of the universe is very natural ) ; a recent discussion is that of martel and shapiro ( 1998 ) where they propose the felicitous name of `` supercomoving variables '' to describe this transformation .\nwhat does not seem to have been generally noted is that this transformation can be used outside the cosmological context ( however poyet and spiegel , 1979 , did use a variant in an analysis of stellar pulsations ) .\nthe fact that the transformation is exact for the gas of ideal point particles strongly hints that it is derived from a similar result for the free particle motion .\nin fact there is such a duality , although it is almost trivial .\nthe freely moving point particle moves along a straight line trajectory , = * x*_0 + * v*_0 t , with starting point @xmath26 and velocity @xmath27 .\nif we write this as = * v*_0 + * x*_0 1 t we see that there is a dual representation of the trajectory in which @xmath17 is replaced by @xmath28 , lengths are scaled by a factor proportional to time , initial points and final velocities are interchanged , but the trajectory remains a straight line .\nif collisions are instantaneous , localised and elastic they look the same in either system , and thus in both systems one can write down a boltzmann equation and then derive the hydrodynamic equations as limits of moments of the boltzmann equation .\nthis approach also shows that higher order effects , such as viscosity and heat conduction , can formally be treated in the same way ; however the resulting transformed transport coefficients will in general have unphysical time dependencies ( for an application see drury and stewart , 1976 ) .\nthis analysis also shows that similar results will hold in different numbers of spatial dimensions , but the equation of state will have to correspond to the ideal gas in that number of dimensions . in @xmath29 spatial dimensions it is easy to verify that the `` super - comoving '' transformation takes the form @xmath30 and that the euler equations are invariant if the gas has a polytropic equation of state such that d p = 2 corresponding to an adiabatic exponent = 1 + 2d .\nan interesting way of looking at this transformation ( for which we are indebted to our colleague etienne parizot ) is that it provides an analogue in spherical geometry to the freedom that galilei transformations allow in planar geometry .\nif we are looking at a planar shock , it is often convenient to transform to a reference frame where the upstream medium , or the downstream medium , or the shock itself , appears stationary . 
in\nspherical systems one can not apply galilei boosts because the origin is fixed , however this transformation , by allowing one to take out an arbitrary uniform expansion , gives one much the same freedom .\ncomputational studies of the evolution of a supernova remnant commonly start with initial conditions of dense pressure - free ejecta expanding ballistically away from the site of the explosion , which it is convenient to locate at the coordinate origin , and interacting with a stationary , or slowly moving , ambient medium of much lower density and negligible pressure . to illustrate the application of the duality transformation\nlet us consider the simple , if somewhat artificial , case of uniform density ejecta interacting with a uniform and stationary ambient medium in perfect spherical symmetry .\nthen the initial conditions correspond to @xmath31 in the region @xmath32 occupied by the ejecta ( @xmath33 is the maximum expansion speed of the ejecta ) and @xmath34 in the external ( @xmath35 ) medium of constant density @xmath36 .\nthe sweep - up time @xmath37 corresponds to the point where the ejecta , if expanding unimpeded , would have a density equal to the ambient medium .\nthis defines the physical problem of expanding ejecta interacting with a stationary environment .\nlet us now consider the dual problem obtained by applying the transformation with scale factor a ( t ) = t. then the dependent variables transform as @xmath38 so that the explosion , which occurs at @xmath20 in physical problem , occurs at @xmath39 in the dual problem . conversely the asymptotic evolution as @xmath40 in the physical problem is mapped to the behaviour at @xmath23 in the dual problem .\nthe ejecta density in the dual problem is constant , ^*(r^ * , t^ * ) = a^3 ( r , t ) = _ 0 and the velocity is zero , @xmath41 , in @xmath42 .\nhowever the ambient medium is now time - dependent with density , in the region @xmath43 ^*(r^ * , t^ * ) = ( t)^3 _ 0 = ( -t^*)^-3 _ 0 and velocity u^*(r^ * , t^ * ) = r^*t^*. thus in the dual problem we have _ stationary _ ejecta interacting with an _ imploding _ ambient medium whereas in the physical problem we have _ exploding _ ejecta interacting with a _\nstationary _ ambient medium . instead of the initial explosion at @xmath20 in the physical problem\nwe have the final crunch at @xmath44 in the dual problem .\nthe evolution in physical space of the supernova remnant structure has been often discussed and is well - known ( eg truelove and mckee , 1999 ; dwarkadas and chevalier , 1998 ) . at early times , @xmath45 , the bulk of the ejecta\nexpand ballistically except for a thin interaction region on the outside consisting of a forward shock running into the ambient medium , a zone of hot shocked ambient medium , a contact discontinuity , a zone of shocked ejecta and a reverse shock propagating slowly into the ejecta . at later times , when the mass of swept up ambient material becomes comparable to the ejecta mass , the reverse shock detaches itself from the contact discontinuity and implodes on the centre and the outer forward shock approximates the self - similar sedov solution for a strong point explosion in a cold gas . 
in the dual system\nthe interaction looks a little different , and in some ways is simpler .\ninitially we have the stationary sphere of high density material ( which for convenience we continue to call the ejecta , although in the dual representation it has not been ejected but is simply sitting there ) surrounded by a very low density converging flow .\nthe inflowing gas has to decelerate at a shock which stands about 10% further out in radius than the edge of the ejecta .\nwriting for convenience @xmath46 there is an exact similarity solution in which @xmath47 , @xmath48 and @xmath49 in the region external to the sphere of ejecta .\nthis steeply rising pressure ( @xmath50 ) drives the reverse shock into the ejecta and starts the implosion of the ejecta .\n= 0.9 at later times , as the ejecta collapse , the shock in the imploding ambient medium also moves inwards thereby reducing the rate of increase of the pressure . transforming the sedov solution to the dual system\nwe see that the shock radius scales as r^ * ^3/5 and the postshock pressure as p^*^-19/5 .\nfigure 1 attempts to show schematically the relation between the two representations .\nwe note in passing that the dual representation is also useful for analytic and numerical studies ; this aspect will be explored in a companion paper ( dwarkadas and drury , in preparation ) .\nthe perfectly symmetric explosion is neither realistic nor especially interesting ; it is the easiest case to analyse numerically and there is no reason to suppose that a laboratory simulation would yield any additional information\n. however reality is more complicated .\nit is clear that the ejecta emerging from real supernova explosions are highly nonuniform on a wide range of scales and that to calculate the resulting remnant evolution in three dimensions is likely to remain a computationally challenging problem for some considerable time ( cf arnett , 1999 ) .\nthe interesting implication of this work is that it should be possible with the new generation of implosion facilities to simulate precisely this problem , the interaction of highly structured ejecta with their surroundings including all the effects of spherical geometry .\none can easily imagine constructing a solid target whose density distribution models the density distribution of the expanding ejecta . if this target is then used in an implosion experiment , and if the momentum loading on the surface is tailored to rise in the same manner as the pressure behind the forward shock in the dual system , a steep initial rise as @xmath51 decreasing to @xmath52 , the evolution of the internal structures including all the turbulent mixing , instabilities and shock formation , should be exactly replicated .\nwe emphasise finally that the transformation discussed in this paper is additional to and complements the well - known linear scaling relations as excellently discussed by ryutov et al ( 1999 ) in the astrophysical context , or connor and taylor ( 1977 ) in the plasma physics context .\ndimensional similarity and scaling arguments are obviously central to any attempt at simulation on a laboratory scale of astrophysical systems , however precisely because they are very general and linear they can not turn an explosion into an implosion . 
the remarkable nonlinear symmetry discussed in this paper is specific to the ideal gas equation of state , but subject to this constraint gives a powerful new degree of freedom in simulation studies by allowing an arbitrary uniform expansion or contraction to be factored out thereby transforming an explosion problem to an implosion one or vice versa .\nthis work was in part supported by the eu under the tmr programme , contract fmrx - ct98 - 0168 .\nsome of it was carried out while ld was a visitor at the research centre for theoretical astrophysics of the university of sydney .\n= 0 pt = 5 pt plus 5 pt", "role": "user" }, { "content": "the euler equations of ideal gas dynamics posess a remarkable nonlinear involutional symmetry which allows one to factor out an arbitrary uniform expansion or contraction of the system . \n the nature of this symmetry ( called by cosmologists the transformation to supercomoving variables ) is discussed and its origin clarified . \n it is pointed out that this symmetry allows one to map an explosion problem to a dual implosion problem and vice versa . the application to laboratory simulations of supernova remnants \n is considered ; in principle this duality allows the complete three - dimensional evolution of highly structured explosion ejecta to be modelled using a static target in an implosion facility . # 1#1 t # 1#1x # 1d#1d t", "role": "assistant" } ]
science.pico_ner
science.pico_ner.2624
[ { "content": "Find entities of these types in the text: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nOutput a JSON object with entity types as keys and lists of found entities as values.\n\nText:\nThe Coping Cat program for children with anxiety and autism spectrum disorder : a pilot randomized controlled trial . The purpose of this pilot study was to evaluate whether a modified version of the Coping Cat program could be effective in reducing anxiety in children with autism spectrum disorder ( ASD ) . Twenty-two children ( ages 8-14 ; IQ ≥ 70 ) with ASD and clinically significant anxiety were randomly assigned to 16 sessions of the Coping Cat program ( cognitive-behavioral therapy ; CBT ) or a 16-week waitlist . Children in the CBT condition evidenced significantly larger reductions in anxiety than those in the waitlist . Treatment gains were largely maintained at two-month follow-up . Results provide preliminary evidence that a modified version of the Coping Cat program may be a feasible and effective program for reducing clinically significant levels of anxiety in children with high-functioning ASD .", "role": "user" }, { "content": "{\"Intervention\": [\"modified version of the Coping Cat program\"], \"Outcome\": [\"anxiety\"], \"Participant\": [\"anxiety\", \"children\", \"autism spectrum disorder ( ASD\", \"children ( ages 8-14 ;\", \"Twenty-two\", \"anxiety and autism spectrum disorder\", \"ASD and clinically significant anxiety\"]}", "role": "assistant" } ]
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.1984
[ { "content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nVideo streaming currently accounts for the majority of Internet traffic. One factor that enables video streaming is HTTP Adaptive Streaming (HAS), that allows the users to stream video using a bit rate that closely matches the available bandwidth from the server to the client. MPEG Dynamic Adaptive Streaming over HTTP (DASH) is a widely used standard, that allows the clients to select the resolution to download based on their own estimations. The algorithm for determining the next segment in a DASH stream is not partof the standard, but it is an important factor in the resulting playback quality. Nowadays vehicles are increasingly equipped with mobile communication devices, and in-vehicle multimedia entertainment systems. In this paper, we evaluate the performance of various DASH adaptation algorithms over a vehicular network. We present detailed simulation results highlighting the advantages and disadvantages of various adaptation algorithms in delivering video content to vehicular users, and we show how the different adaptation algorithms perform in terms of throughput, playback interruption time, and number of interruptions.\n\nCited Abstract(s):\n[1]: With companies such as Netflix and YouTube accounting for more than 50 of the peak download traffic on North American fixed networks in 2015, video streaming represents a significant source of Internet traffic. Multimedia delivery over the Internet has evolved rapidly over the past few years. The last decade has seen video streaming transitioning from User Datagram Protocol to Transmission Control Protocol-based technologies. Dynamic adaptive streaming over HTTP (DASH) has recently emerged as a standard for Internet video streaming. A range of rate adaptation mechanisms are proposed for DASH systems in order to deliver video quality that matches the throughput of dynamic network conditions for a richer user experience. This survey paper looks at emerging research into the application of client-side, server-side, and in-network rate adaptation techniques to support DASH-based content delivery. We provide context and motivation for the application of these techniques and review significant works in the literature from the past decade. These works are categorized according to the feedback signals used and the end-node that performs or assists with the adaptation. We also provide a review of several notable video traffic measurement and characterization studies and outline open research questions in the field.\n\n[2]: In this survey, we present state-of-the-art bitrate adaptation algorithms for HTTP adaptive streaming (HAS). As a key distinction from other streaming approaches, the bitrate adaptation algorithms in HAS are chiefly executed at each client, i.e. , in a distributed manner. The objective of these algorithms is to ensure a high quality of experience (QoE) for viewers in the presence of bandwidth fluctuations due to factors like signal strength, network congestion, network reconvergence events, etc. While such fluctuations are common in public Internet, they can also occur in home networksor even managed networks where there is often admission control and QoS tools. Bitrate adaptation algorithms may take factors like bandwidth estimations, playback buffer fullness, device features, viewer preferences, and content features into account, albeit with different weights. 
Since the viewer’s QoE needs to be determined in real-time during playback, objective metrics are generally used including number of buffer stalls, duration of startup delay, frequency and amount of quality oscillations, and video instability. By design, the standards for HAS do not mandate any particular adaptation algorithm, leaving it to system builders to innovate and implement their own method. This survey provides an overview of the different methods proposed over the last several years.", "role": "user" }, { "content": "Recent surveys [1] [2] give a good overview of the bit rate adaptation algorithms for DASH based content delivery.", "role": "assistant" } ]
science.scientific_lay_summarisation_plos_single_doc_summ
science.scientific_lay_summarisation_plos_single_doc_summ.15271
[ { "content": "Summarize the article in an uncomplicated manner.\n\nTitle: Widespread Gene Conversion in Centromere Cores\nArticle:\nAbstract:\nCentromeres are the most dynamic regions of the genome, yet they are typified by little or no crossing over, making it difficult to explain the origin of this diversity. To address this question, we developed a novel CENH3 ChIP display method that maps kinetochore footprints over transposon-rich areas of centromere cores. A high level of polymorphism made it possible to map a total of 238 within-centromere markers using maize recombinant inbred lines. Over half of the markers were shown to interact directly with kinetochores( CENH3) by chromatin immunoprecipitation. Although classical crossing over is fully suppressed across CENH3 domains, two gene conversion events( i. e., non-crossover marker exchanges) were identified in a mapping population. A population genetic analysis of 53 diverse inbreds suggests that historical gene conversion is widespread in maize centromeres, occurring at a rate >1×10−5/marker/generation. We conclude that gene conversion accelerates centromere evolution by facilitating sequence exchange among chromosomes.\nIntroduction:\nIn spite of their highly conserved function as the site of kinetochore assembly and spindle attachment, centromeres are the most dynamic regions of complex genomes. The components, copy number, and structural organization of centromeric DNA are highly divergent even among closely related species[1],[2],[3]. This apparent conflict between essentiality and sequence dispensability remains one of the major unresolved paradoxes in genetics. It has been hypothesized that the rapid evolution of centromeric DNA is primarily the result of an arms race in which meiotic drive sweeps novel centromeric repeats to fixation while centromeric proteins adapt to suppress this behavior[4]. Alternatively, some authors have argued that the role of selection is minimal and that observed variation can be explained by stochastic events such as mutation and genetic exchange[5],[6],[7]. Both proposals lack strong empirical support, as centromere drive has only rarely been documented[8], and mutational events are difficult to document in complex repetitive areas. Centromeres are specified epigenetically by the presence of a centromere-specific histone H3 variant, CENH3, which organizes the overlying kinetochores[4]. Kinetochores affect the function and behavior of centromeric DNA in pronounced ways. Perhaps most notable is their effect on crossing over. Cytogeneticists have long known that centromeres severely repress meiotic crossing over[9], and this result has since been confirmed in all species studied[10],[11],[12]. As a consequence, centromeres are often defined as regions where the frequency of crossovers approaches zero[12],[13],[14]. Nevertheless it is not accurate to presume that centromeres never experience genetic exchange. Empirical studies have revealed evidence for recombination between sister centromeres[15],[16], gene conversion events have been inferred from sequence analysis of mammalian centromeres[17],[18],[19], and large intrachromosomal rearrangements have been documented in rice centromeres[20],[21]. However, despite the extensive circumstantial evidence for genetic exchange among centromeres, the frequency and nature of the recombination has been difficult to measure. Maize centromeres contain a 156 bp tandem repeat known as CentC and an abundant class of Ty3/Gypsy-like transposons[22]. 
Several subfamilies of these so-called Centromeric Retroelements( CR elements, known as CRM in maize;[23]) exist, with CRM2 being the most abundant in the maize genome[24]. Over time, CR elements insert in and around each other resulting in a nested arrangement[25],[26]. Such insertion sites have a high probability of being unique and are generally polymorphic among lines, thereby providing an excellent tool for the genetic analysis of centromeres[27],[28]. Here we used transposon display[29] of CRM2 to generate centromere-specific markers in maize. Analysis of segregation in a mapping population, combined with CENH3 ChIP, allowed us to map the functional region of each maize centromere and provide direct evidence for conversion-type genetic exchanges within centromere cores. An analysis of haplotype variation and linkage disequilibrium in a broad panel of maize lines revealed further evidence for a high rate of gene conversion across all centromeres studied, consistent with an important role for stochastic processes in centromere evolution.\nDiscussion:\nOur data indicate that gene conversion is common within centromeres and may play a fundamental role in determining the dynamics and distribution of centromere repeats. This conclusion is based on three primary lines of evidence. First, our mapping data provide what is to our knowledge the only experimental evidence for centromeric gene conversion. Indeed, two independent conversion events were identified in 93 recombinant inbred lines using a set of 238 CRM2 markers, corresponding to a rate of 1. 86×10−4 exchanges per marker per generation. The second line of evidence comes from LD analysis of 75 markers typed in a set of 53 diverse inbred lines. These data show patterns consistent with genetic exchange, including unusually low LD and the clear presence of recombinant haplotypes( nonzero Rmin), but show no decay of LD with distance as would be expected in the presence of crossing over. Finally, two independent population genetic methods were used to directly estimate centromeric gene conversion, resulting in remarkably similar rates of ∼1×10−5 conversions per marker per generation. It is too early to tell how rates of gene conversion in centromeres compare to other regions of the maize genome, but one estimate of gene conversion at the maize anthocyaninless1 locus( ∼3×10−5/marker/generation[40]) suggests they may be of a similar order of magnitude. It has been hypothesized that centromere evolution in eukaryotes with asymmetric meiosis has been primarily governed by an arms race in which meiotic drive occasionally sweeps novel centromeric repeats to fixation[4]. While the extreme LD observed around a short tract of CentC on centromere 2 may hint at an evolutionary history consistent with these ideas( Figure 5B), our finding of widespread gene conversion explains how high levels of diversity may be observed even in yeast where meiotic drive is a less likely explanation[7]. Sequence data from mammalian centromeres are further consistent with this view, suggesting in several studies that gene conversion has contributed to extant centromere variation and the production of novel higher order repeat arrays[17],[18],[19]. 
If centromeric gene conversion is indeed common in maize, yeast, and humans, it seems reasonable to hypothesize that gene conversion is an important process within the centromere cores of all eukaryotes.", "role": "user" }, { "content": "Centromeres , which harbor the attachment points for microtubules during cell division , are characterized by repetitive DNA , paucity of genes , and almost complete suppression of crossing over . The repetitive DNA within centromeres appears to evolve much faster than would be expected for genetically inert regions , however . Current explanations for this rapid evolution tend to be theoretical . On the one hand there are arguments that subtle forms of selection on selfish repeat sequences can explain the rapid rate of change , while on the other hand it seems plausible that some form of accelerated neutral evolution is occurring . Here , we address this question in maize , which is known for its excellent genetic mapping resources . We first developed a method for identifying hundreds of single copy markers in centromeres and confirmed that they lie within functional domains by using a chromatin immunoprecipitation assay for kinetochore protein CENH3 . All markers were mapped in relation to each other . The data show that , whereas classical crossing over is suppressed , there is extensive genetic exchange in the form of gene conversion ( by which short segments of one chromosome are copied onto the other ) . These results were confirmed by demonstrating that similar short exchange tracts are common among the centromeres from multiple diverse inbred lines of maize . Our study suggests that centromere diversity can be at least partially attributed to a high rate of previously “hidden” genetic exchange within the core kinetochore domains .", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_pubmed
science.scientific_papers_summarization_single_doc_pubmed.73473
[ { "content": "Write an abstract for the research paper below.\n\nPaper text:\nautophagy , literally ' self - eating , ' is a catabolic process characterized by the lysosomal degradation of a cell 's own material or organelles for the maintenance of cellular energy balance and organelle function . among the diverse types of autophagy ,\nmacroautophagy ( hereafter referred to as autophagy ) involves the rearrangement of subcellular membranes to form autophagosomes , which are then delivered to lysosomes and form autophagolysosomes where the sequestered material is degraded and recycled .\nwhile autophagy plays a critical role in the clearance of degenerated proteins and senescent organelles as well as in the maintenance of cellular homeostasis during energy starvation or stress , dysregulated autophagy has been implicated in the pathogenesis of neurodegenerative diseases , cancer , aging , and infection .\nhere , we summarize current understandings of the molecular mechanism of autophagy , focusing on recent results that have examined the role of autophagy in diverse tissues involved in the control of body metabolism and energy homeostasis .\nthe unc51-like kinase 1 ( ulk1 ) complex is crucial in autophagy . in nutrient - rich conditions , mammalian target of rapamycin complex 1 ( mtorc1 ) kinase\nis incorporated into the ulk1-autophagy - related gene ( atg ) 13-fip200 complex and phosphorylates ulk1 .\ninhibition of mtorc1 by nutrient deprivation or rapamycin induces its dissociation from the ulk complex .\ndephosphorylated ulk1 is enzymatically active and phosphorylates matg13 and fip200 to initiate the autophagic process ( fig .\n1 ) . bcl-2-interacting myosin - like coiled - coil protein ( beclin 1 ) , which was initially identified as a bcl-2 binding protein , is also critically involved in the initiation of autophagy .\nbeclin 1 forms complexes with vps34 , vps15 , and atg14l to induce autophagosome formation or with vps34 , vps15 , and ultraviolet radiation resistance - associated gene ( uvrag ) to induce autophagosome maturation . after the dissociation of the beclin-1 complex from bcl-2 in autophagy - inducing conditions such as nutrient deprivation , vps34 , a class iii phosphatidylinositol 3-kinase within the complex ,\npi3p then recruits double fyve - containing protein 1 ( dfcp1 ) and atg proteins , which play crucial roles in the formation of the autophagosome cradle ( fig .\nthe atg system is critical for autophagosome completion and is similar to the ubiquitination system .\natg12 as a ubiquitin - like protein that is conjugated to atg5 and then to atg16l1 through the concerted action of atg7 , an e1-like enzyme , and atg10 , an e2-like enzyme .\natg8 , also called microtubule - associated protein 1 light chain 3 ( lc3 ) , is another ubiquitin - like protein that is converted to lc3-i immediately after synthesis by atg4 .\nlc3-i is conjugated to its lipid target , phosphatidylethanolamine ( pe ) , through atg7 , atg3 acting as another e2-like enzyme and atg12-atg5-atg16l1 complex . 
after processing ,\nautophagolysosomes are then conjugated to lysosomes to form autophagolysosomes , where the proteolysis of enclosed cytoplasmic contents or organelles occurs .\nhence , unwanted ubiquitinated proteins can be selectively eliminated by binding to p62 and becoming conjugated to lc3-ii of autophagosomes .\nin an attempt to study the role of autophagy in endocrine tissues , mice with -cell - specific deletion of autophagy - related 7 ( atg7 ) have been produced ( atg7 ) .\nthe insulinogenic index was significantly lower in atg7 mice compared to control mice , suggesting impaired -cell function in vivo .\nmorphologically , -cell mass was decreased , which was attributed to increased -cell death and reduced -cell proliferation .\ninsulin release and transient glucose - induced cytosolic ca were significantly attenuated in autophagy - deficient -cells compared to wild - type -cells , suggesting functional defects in addition to compromised viability .\nconfocal microscopy showed inclusion bodies in autophagy - deficient -cells which contained ubiquitin material and p62 , a polyubiquitin - binding adaptor protein , attesting to the importance of autophagy in the removal of insoluble or large long - lived ubiquitinated proteins .\nelectron microscopy showed vacuolar degeneration along with mitochondrial swelling and endoplasmic reticulum ( eprefacer ) distension .\nthese results suggest that autophagy is crucial in the maintenance of -cell structure , mass and function , although the role of autophagy in the development of diabetes is not clear from this study .\nto study the role of -cell autophagy in diabetes , we studied the er because er distention was observed in autophagy - deficient -cells and er stress is important in the development of diabetes .\nwhen we examined the expression of genes involved in the er stress response or the unfolded protein response ( upr ) , we found that their expression was significantly reduced in autophagy - deficient -cells despite er distention suggesting the presence of er stress ( fig .\n2a ) . while these results were contrary to expectations , a deficient upr in the face of er stress could be a sign of cellular decompensation or maladaptation\nindeed , autophagy - deficient -cells were more susceptible to treatment with er stressors such as thapsigargin or lipids in vitro ( fig .\nautophagy - deficient -cells were also more susceptible to the in vivo er stress imposed by obesity compared to autophagy - competent -cells .\nthus , when atg7 mice were bred to ob / w mice , -cell apoptosis was more pronounced in atg7-ob / ob mice compared to control atg7-ob / w or atg7-ob / ob mice . in addition , there was evidence of the increased accumulation of reactive oxygen species and decreased -cell mass in the pancreatic islets of atg7-ob / ob mice ( fig .\naccordingly , atg7-ob / ob mice developed severe diabetes and markedly deteriorated glucose intolerance , while littermate atg7-ob / w mice or atg7-ob / ob mice showed only mild hyperglycemia ( fig .\nthese results suggest that compromised -cell autophagy due to genetic causes , environmental insults , or aging could be a factor in the transition from obesity to diabetes . 
while closely related , obesity and diabetes are not the same disease process and -cell autophagy status may help determine the progression from obesity to diabetes .\nthe role of hypothalamic autophagy in the control of whole body energy balance has also been studied .\nmice with deleted atg7 specifically in their orexigenic agouti - related peptide ( agrp ) neurons ( atg7 mice ) were lean , which could be attributed to an increase of proopiomelanocortin ( pomc ) expression and anorexigenic -melanocyte stimulating hormone production , or to impaired agrp augmentation in response to fatty acids during starvation in autophagy - deficient agrp neurons .\nin contrast , mice with anorexigenic pomc neuron - specific atg7 deletions ( atg7 mice ) were obese [ 18 - 20 ] , which was attributed to increased food intake and reduced energy expenditure .\ninterestingly , the number of pomc neurons was not diminished in the hypothalamus of atg7 mice compared to control mice , which is in contrast to the diminished pancreatic -cell mass in atg7 mice . instead ,\nsignal transducer and activator of transcription 3 ( stat3 ) activation in the pomc neurons by leptin was diminished in atg7 mice , although the mechanism of deficient stat3 activation is not clear .\nthus , the role of hypothalamic autophagy on whole body metabolism , appetite , and energy expenditure depends on the types of neurons affected .\nmice with autophagy deficiencies in insulin target tissues such as skeletal muscle , liver , or adipose tissues have been generated and studied .\nanimals with targeted disruption of atg7 in the liver have been reported to show accumulation of lipids due to defects in the lipid - disposal process known as ' lipophagy ' .\nin addition , the conditional knockout of autophagy genes in adipose tissues results in defects in adipocyte differentiation . 
we have demonstrated that autophagy deficiencies in the skeletal muscle or liver lead to leanness and resistance to diet - induced obesity and diabetes .\nsuch phenotypes were attributed to mitochondrial dysfunction due to autophagy deficiencies in insulin target tissues and the subsequent mitochondrial stress response inducing the release of fgf21 as a ' mitokine ' .\nthese results are in contrast to a previous paper which suggested that autophagy deficiency is an element in the pathogenesis of insulin resistance and diabetes .\nsuch discrepancies could be due to differences in the methods , severity , or duration of gene targeting , the age of the experimental animals , and the mode of autophagy affected .\nthe concept of ' mitokines ' has been suggested by a caenorhabditis elegans model that shows increased longevity and has a disruption in the mitochondrial electron transport chain .\nin addition , a recent paper reported a significant prolongation of lifespan in transgenic mice overexpressing fgf21 .\nthe relationship between autophagy , mitochondria , and ' mitokines ' could be a hot topic not only in the field of metabolism and diabetes , but also in aging and longevity studies .\nthe roles of autophagy in whole body metabolism and the development of metabolic syndrome or diabetes have been explored by many investigators .\nthe majority of previous studies investigating the role of autophagy in body metabolism and diabetes employed site - specific knockout mouse models .\nwhile such models have provided important data regarding the impact of dysregulated autophagy in metabolic disorders , they may not reflect natural physiological or disease conditions .\nfurther studies employing more physiological models are likely to provide valuable information on the role of autophagy in the control of body metabolism and the pathogenesis of metabolic disorders . because autophagy plays a crucial role in other body systems besides the control of metabolism , the information obtained by studying the relationship between autophagy and body metabolism could lead to the development of a novel class of drugs that can be employed not only against diabetes or metabolic syndrome , but also cancer , neurodegeneration , and aging .", "role": "user" }, { "content": "autophagy plays a crucial role in the maintenance of cellular nutrient balance and the function of organelles such as mitochondria or the endoplasmic reticulum , which are important in intracellular metabolism , insulin release , and insulin sensitivity . in the insulin - producing pancreatic -cells , \n autophagy is important in the maintenance of -cell mass , structure , and function . \n mice with deficiencies in -cell - specific autophagy show reduced -cell mass and defects in insulin secretion that lead to hypoinsulinemia and hyperglycemia but not diabetes . however , these mice developed diabetes when bred with ob / ob mice , suggesting that autophagy - deficient -cells have defects in dealing with the increased metabolic stress imposed by obesity . \n these results also imply that autophagy deficiency in -cells could be a factor in the progression from obesity to diabetes . \n another important function of autophagy is in hypothalamic neurons for the central control of energy expenditure , appetite , and body weight . \n in addition , mice with autophagy deficiencies in the target tissues of insulin have yielded diverse phenotypes . 
\n taken together , these results suggest that autophagy is important in the control of whole body energy and nutrient homeostasis , and its dysregulation could play a role in the development of metabolic disorders and diabetes .", "role": "assistant" } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.3401
[ { "content": "Classify the topics of this COVID-19 literature paragraph by selecting from these categories:\n\n- Epidemic Forecasting: Predictions about COVID-19 spread\n- Treatment: Therapeutic methods or medication\n- Prevention: Measures to prevent spread\n- Mechanism: Biological mechanisms of the virus\n- Case Report: Report of a COVID-19 case\n- Transmission: How COVID-19 is transmitted\n- Diagnosis: Diagnosing COVID-19 in individuals\n\nYour output should be a valid JSON array with topics in double quotes and separated by commas. No additional text should be included.\n\nArticle:\n\nThe COVID-19 pandemic has challenged existing healthcare systems and has made prevention of healthcare personnel exposure a high priority. Essential healthcare services, including multi-disciplinary team (MDT) meetings that make medical decisions, are expected to continue uninterrupted in this time of social distancing. There are a multitude of virtual platforms available to enable remote MDT meetings, and the pandemic has accelerated their arrival into daily healthcare practice. While we deal with a pandemic crisis, we have comprehensively reviewed and reported on the popular platforms and services available for this purpose. While each platform has its own unique features and drawbacks, it is essential to liaise with information technology departments and data governance teams to understand the optimal platforms for use within each healthcare setting. Level of evidence: Not ratable.", "role": "user" }, { "content": "[\"Prevention\"]", "role": "assistant" } ]
science.medmentions_ner
science.medmentions_ner.621
[ { "content": "Extract all unique entities from this biomedical abstract. Only choose entities from these types: [\"Virus\", \"MedicalDevice\", \"BodySubstance\", \"SpatialConcept\", \"BiologicFunction\", \"ResearchActivity\", \"AnatomicalStructure\", \"ProfessionalOrOccupationalGroup\", \"HealthCareActivity\", \"BiomedicalOccupationOrDiscipline\", \"Bacterium\", \"Chemical\", \"InjuryOrPoisoning\", \"IntellectualProduct\", \"Finding\", \"ClinicalAttribute\", \"Organization\", \"Eukaryote\", \"Food\", \"BodySystem\", \"PopulationGroup\"].\n\nOutput a JSON object where keys are entity types and values are lists of extracted entities. If no entities found for a type, use empty list [].\n\nAbstract: A Whole-Body Approach to Point of Care Ultrasound Ultrasonography is an essential imaging modality in the ICU used to diagnose and guide the treatment of cardiopulmonary failure. Critical care ultrasonography requires that all image acquisition, image interpretation, and clinical applications of ultrasonography are personally performed by the critical care clinician at the point of care and that the information obtained is combined with the history, physical, and laboratory information. Point-of-care ultrasonography is often compartmentalized such that the clinician will focus on one body system while performing the critical care ultrasonography examination. We suggest a change from this compartmentalized approach to a systematic whole-body ultrasonography approach. The standard whole-body ultrasonography examination includes thoracic, cardiac, limited abdominal, and an evaluation for DVT. Other elements of ultrasonography are used when clinically indicated. Each of these elements is reviewed in this article and are accompanied by a link to pertinent cases from the Ultrasound Corner section of CHEST.", "role": "user" }, { "content": "{\"AnatomicalStructure\": [\"cardiac\", \"whole-body\", \"Whole-Body\"], \"Bacterium\": [], \"BiologicFunction\": [\"DVT\", \"cardiopulmonary failure\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [\"body system\"], \"Chemical\": [], \"ClinicalAttribute\": [], \"Eukaryote\": [], \"Finding\": [\"compartmentalized approach\", \"diagnose\", \"compartmentalized\", \"history\"], \"Food\": [], \"HealthCareActivity\": [\"critical care\", \"examination\", \"evaluation\", \"Ultrasonography\", \"treatment\", \"ultrasonography\", \"ultrasonography approach\", \"Ultrasound\", \"image interpretation\", \"Critical care\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [\"Ultrasound Corner section of CHEST\"], \"MedicalDevice\": [], \"Organization\": [\"ICU\"], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [\"clinician\"], \"ResearchActivity\": [], \"SpatialConcept\": [\"thoracic\", \"abdominal\"], \"Virus\": []}", "role": "assistant" } ]
science.pico_ner
science.pico_ner.4038
[ { "content": "Find entities of these types in the text: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nOutput a JSON object with entity types as keys and lists of found entities as values.\n\nText:\nCarbenicillin-trimethoprim/sulfamethoxazole versus carbenicillin-gentamicin as empiric therapy of infection in granulocytopenic patients . A prospective , randomized , double-blind study . The results of therapy with carbenicillin plus trimethoprim-sulfamethoxazole ( C-T/S ) were compared to those obtained with carbenicillin plus gentamicin ( C-G ) in a prospective double-blind study of empiric antibiotic therapy in granulocytopenic patients . Patients were stratified into two groups : favorable-prognosis , group 1 ( carcinoma , lymphoma , multiple myeloma ) , or unfavorable-prognosis , group 2 ( acute leukemia , bone marrow transplantation ) , based on anticipated duration of granulocytopenia . Over-all , empiric antibiotic trials were more often successful ( P = 0.004 ) in group 1 ( 55 of 62 patients or 89 per cent ) than in group 2 ( 42 of 64 patients , 66 per cent ) mwithin group 1 , there was a favorable outcome in 30 of 32 ( 94 per cent ) C-T/S trials and in 25 of 30 ( 83 per cent ) C-G trials ( P = 0.25 ) ; within group 2 , there was a favorable outcome in 23 of 30 ( 77 per cent ) C-T/S trials and in 19 of 34 ( 56 per cent ) C-G trials ( P = 0.14 ) , Combined results in both groups indicated a higher proportion of favorable outcome in C-T/S trials ( 53 of 62 , 85 per cent ) than in C-G trials ( 44 of 64 , 69 per cent ) . Further analysis ( Manetl-Naenszel test ) showed the over-all difference in outcome to be significant ( P = 0.049 ) , but the general applicability of this result may be limited by the rather low incidence of gram-negative bacterial infections in this study . There was no difference between the treatment regimens in antibiotic toxicity , and serious superinfection occurred only in group 2 patients ( 21 per cent of trials ) , equally divided between treatment arms . Initial protocol dosing achieved target plasma levels of trimethoprim ( 3 to 8 micrograms/ml ) or gentamicin ( 4 to 10 micrograms/ml ) in 57 of 68 ( 84 per cent ) C-T/S trials compared to 21 of 60 ( 35 per cent ) C-G trials .", "role": "user" }, { "content": "{\"Intervention\": [\"Carbenicillin-trimethoprim/sulfamethoxazole versus carbenicillin-gentamicin\", \"carbenicillin plus trimethoprim-sulfamethoxazole ( C-T/S ) were compared to those obtained with carbenicillin plus gentamicin ( C-G )\"], \"Outcome\": [\"favorable outcome\", \"antibiotic toxicity\", \"serious superinfection\", \"successful\", \"analysis ( Manetl-Naenszel test\", \"no difference\"], \"Participant\": [\"lymphoma\", \"two\", \"group 1\", \"bone marrow transplantation\", \"acute leukemia\", \"granulocytopenic\", \"granulocytopenia\", \"carcinoma\", \"group 2\", \"multiple myeloma\"]}", "role": "assistant" } ]
science.scientific_lay_summarisation_plos_single_doc_summ
science.scientific_lay_summarisation_plos_single_doc_summ.13652
[ { "content": "Summarize the article in an uncomplicated manner.\n\nTitle: Human Breast Milk and Antiretrovirals Dramatically Reduce Oral HIV-1 Transmission in BLT Humanized Mice\nArticle:\nAbstract:\nCurrently, over 15% of new HIV infections occur in children. Breastfeeding is a major contributor to HIV infections in infants. This represents a major paradox in the field because in vitro, breast milk has been shown to have a strong inhibitory effect on HIV infectivity. However, this inhibitory effect has never been demonstrated in vivo. Here, we address this important paradox using the first humanized mouse model of oral HIV transmission. We established that reconstitution of the oral cavity and upper gastrointestinal( GI) tract of humanized bone marrow/liver/thymus( BLT) mice with human leukocytes, including the human cell types important for mucosal HIV transmission( i. e. dendritic cells, macrophages and CD4+ T cells), renders them susceptible to oral transmission of cell-free and cell-associated HIV. Oral transmission of HIV resulted in systemic infection of lymphoid and non-lymphoid tissues that is characterized by the presence of HIV RNA in plasma and a gradual decline of CD4+ T cells in peripheral blood. Consistent with infection of the oral cavity, we observed virus shedding into saliva. We then evaluated the role of human breast milk on oral HIV transmission. Our in vivo results demonstrate that breast milk has a strong inhibitory effect on oral transmission of both cell-free and cell-associated HIV. Finally, we evaluated the effect of antiretrovirals on oral transmission of HIV. Our results show that systemic antiretrovirals administered prior to exposure can efficiently prevent oral HIV transmission in BLT mice.\nIntroduction:\nPediatric HIV infection is associated with an accelerated course of disease and high mortality rate. In the absence of antiretroviral therapy, only 65% of HIV-infected children survive until their first birthday and less than half will reach two years of age[1]. Most children acquire HIV from their mother in utero, intrapartum or orally during breastfeeding[2]. In developed countries the incidence of mother-to-child transmission of HIV is extremely low; HIV-infected women receive antiretroviral therapy during pregnancy and delivery and abstain from breastfeeding. Furthermore, their children receive antiretroviral prophylaxis at birth and for several weeks thereafter. The majority of HIV-infected children live in sub-Saharan Africa where HIV-positive women have limited access to antiretroviral drugs and the health benefits of breastfeeding outweigh the risk of HIV transmission[3]. Despite the presence of innate factors in human breast milk that display strong HIV inhibitory activity in vitro[4]–[12], up to 44% of HIV infections in children can be attributed to breastfeeding. The risk of acquiring HIV after a single day of breastfeeding is extremely low( 0. 00028 per day of breastfeeding)[13], however, after ingesting liters of breast milk over a span of several months to years( ∼250 liters per year), 5–20% of infants born to HIV-infected women will eventually become infected with HIV in the absence of any preventative measures[14]. Exclusive breastfeeding( not allowing any water, juice or solid foods) has been associated with a drastic decrease in the HIV transmission rate through breastfeeding, indicating that breast milk acts as a vehicle of protection[14]. 
However, elevated levels of HIV particles( cell-free virus) and HIV-infected cells( cell-associated virus) in breast milk of HIV-positive women are associated with an increased risk for HIV transmission during breastfeeding[15]–[17]. Although it has been reported that a 10-fold increase in cell-free or cell-associated HIV in breast milk is associated with a 3-fold increase in transmission[15], it is still unclear whether cell-free and/or cell-associated virus are transmitted during breastfeeding. Furthermore, it is not known if the frequency of cell-free and cell-associated HIV transmission varies at different stages of lactation( i. e. colostrum, early breast milk and mature breast milk). Therefore, successful interventions may need to prevent transmission of both cell-free and cell-associated HIV during breastfeeding by reducing the viral load and number of infected cells in breast milk and/or by directly inhibiting infection of the infant oral and GI mucosa. The development of effective strategies to prevent HIV acquisition during breastfeeding would be significantly enhanced by a small animal model of oral HIV infection that could be utilized to study the relative contribution of cell-free and cell-associated virus in transmission, the mechanism for oral transmission of cell-free and cell-associated HIV and the innate HIV inhibitory activity of human breast milk. In addition, animal models are needed to test the efficacy of novel approaches to prevent transmission of cell-free and cell-associated HIV in breast milk. Thus, we developed an oral HIV transmission model based on BLT humanized mice[18]. We then used this model to demonstrate 1) efficient cell-free and cell-associated oral HIV transmission, 2) transmission can occur in the oral cavity or the upper GI tract, 3) transmission can be prevented with antivirals and 4) that human breast milk dramatically reduces oral HIV transmission.\nDiscussion:\nAvoidance of breastfeeding by HIV seropositive mothers in resource limited settings where prophylaxis is not available reduces the risk of HIV transmission but does not increase the overall survival of their children; breastfeeding protects these children from infections that result in diarrhea, pneumonia and sepsis[14]. In this study, we first established that human hematopoietic cells generated in situ are capable of repopulating the oral cavity and upper GI tract of humanized BLT mice. Specifically, these important mucosal tissues of BLT mice are repopulated with the types of human cells that have been identified to be important for mucosal HIV transmission( i. e. dendritic cells, macrophages and CD4+ T cells). Our results demonstrate that the presence of these human cells renders BLT mice susceptible to oral transmission of cell-free and cell-associated HIV. Oral transmission of HIV results in systemic infection of lymphoid and non-lymphoid tissues that is characterized by a gradual decline of CD4+ T cells in peripheral blood. In addition, infection of the oral cavity results in virus shedding into saliva, recapitulating the human condition[27],[28]. Our data also offers the first in vivo demonstration that human breast milk can inhibit oral transmission of cell-free and cell-associated HIV. Furthermore, oral transmission of HIV can be prevented with systemic FTC/TDF PrEP. 
Previous studies utilizing NOD/SCID and NOD/SCID/β2m−/− mice reconstituted with human peripheral blood leukocytes( hu-PBL mice) failed to demonstrate oral transmission of cell-free HIV following an oral exposure to CXCR4 and CCR5 tropic strains, including HIV-1JR-CSF[46]. Therefore, our work represents a significant advance since we demonstrated, for the first time, highly reproducible oral transmission of multiple HIV strains in BLT mice. Specifically, in this study we demonstrate oral transmission of the well characterized CCR5-tropic HIV-1 isolate JR-CSF and of several T/F viruses. Our data revealed that the efficiency of oral transmission varied among T/F viruses, suggesting that intrinsic properties of these viruses may contribute to their transmission in vivo. Recent studies of mother-infant transmission pairs indicate that the replicative fitness as well as the length of the variable loop and number of n-linked glycosylation sites in the envelope protein may influence which maternal variant( s) are preferentially transmitted[47]–[49]. Our results indicate that BLT mice could be used to study the molecular and biological properties of HIV strains that provide a selective advantage for oral transmission during breastfeeding. A comparison of viruses transmitted at early, mid and late stages of lactation would indicate if the composition of breast milk, which changes during lactation, influences which viruses are preferentially transmitted. During breastfeeding, HIV is transmitted from mother-to-child in the context of human breast milk. In the absence of antiretroviral therapy, the breast milk of most HIV-infected women possesses cell-free and/or cell-associated HIV[15]–[17]. Although elevated levels of both cell-free and cell-associated HIV in the breast milk of HIV-infected mothers have been associated with an increased risk for HIV transmission during breastfeeding[15]–[17], it is currently not known if both cell-free and cell-associated HIV are transmitted to infants. Studies illustrating that HAART administered to HIV-infected mothers during pregnancy or post-partum significantly decreases the amount of cell-free but not cell-associated HIV in breast milk[50],[51], combined with reports demonstrating that maternal HAART significantly decreases but does not eliminate HIV transmission during breastfeeding, suggest that both cell-free and cell-associated HIV may be transmitted[52],[53]. As the importance of cell-associated HIV in breast milk transmission becomes increasingly more appreciated, antiretroviral drugs and preventative strategies may be needed that reduce the burden of HIV-infected cells in the breast milk of HIV-infected women and/or directly inhibit transmission of cell-associated HIV in infants during breastfeeding. Our data demonstrating oral transmission of both cell-free and cell-associated HIV is of high relevance since it will make possible the future evaluation of novel prophylactic strategies aimed at preventing oral transmission of both cell-free and cell-associated HIV. However, even though infected cells were used for the exposures, we cannot rule out the possibility that actual transmission across the mucosal surface could occur with cell-free virions released from infected cells. Paradoxically, although breastfeeding can be attributed to a significant number of HIV infections in children, breast milk has been shown to potently inhibit HIV infectivity and to possess several innate factors with in vitro anti-HIV inhibitory activity[4]–[12]. 
Our results offer the first in vivo evidence that human breast milk can strongly inhibit oral transmission of both cell-free and cell-associated HIV. The ability of human breast milk to inhibit cell-associated HIV transmission in BLT mice is in contrast to in vitro studies suggesting that milk does not inhibit cell-associated infection[54]. This apparent discrepancy may be explained by the use of whole human breast milk for our experiments. Specifically, the in vitro experiments comparing breast milk inhibition of cell-free and cell-associated HIV infection utilized the skim milk fraction of breast milk. Further in vitro analysis comparing inhibition of cell-free HIV infection in the presence of whole breast milk or the skim milk fraction will be needed to address this issue. However, since most children at risk of HIV infection via breast milk do not receive skim milk, the potential relevance of this in vitro observation may be questionable. Nevertheless, inhibitory factors present in breast milk may differ in their ability to inhibit cell-free versus cell-associated HIV infection. While the skim milk fraction of human breast milk possesses proteins with HIV inhibitory activity( i. e. mucin, lactoferrin, bile salt-stimulated lipase and secretory leukocyte protease inhibitor[SLPI])[4]–[12], the lipid fraction may contain additional factors that can inhibit transmission of cell-free and/or cell-associated HIV. For example, increased concentrations of certain long-chain polyunsaturated fatty acids( LCPUFAs) in breast milk are associated with a decreased risk of HIV breastfeeding transmission[55]. Although their ability to inhibit cell-free versus cell-associated HIV infection has not been experimentally tested to our knowledge, it has been hypothesized that LCPUFAs may inhibit HIV infection by inactivating the virus' envelope, suppressing the release of HIV virions from the host cell membrane and/or enhancing the viability of infected CD4+ T cells[56]. All together, our results highlight the protective role of human breast milk against HIV transmission and suggest that components in both the skim milk and lipid fractions may contribute to its HIV inhibitory activity. Despite ingesting liters of breast milk over a span of several months to years, the majority of infants born to HIV-infected HAART naïve women( ∼85%) do not acquire HIV during breastfeeding[14]. This observation is in agreement with our data demonstrating the potent in vivo inhibitory activity of human breast milk on oral transmission of both cell-free and cell-associated HIV following a single oral exposure. Oral transmission of HIV in the presence of human breast milk may require multiple exposures over time. Furthermore, although increased levels of HIV in breast milk have been associated with an increased risk for HIV transmission during breastfeeding, several other maternal and infant factors have been associated with breastfeeding transmission. Additional maternal factors include seroconversion during lactation, CD4+ T cells counts below 500 cells per mm3, poor breast health( mastitis, nipple bleeding, etc), and decreased levels of alpha-defensins in breast milk. Infants that receive both breast milk and other food( mixed-feeding) are also more susceptible to HIV transmission during breastfeeding as are infants with oral thrush and decreased levels of salivary SLPI[reviewed in[14]]. 
One remaining question is, therefore, whether or not there are differences in the babies or mothers or in the breast milk in the cases where mother-to-child transmission does occur. In the future, it will be important to compare the inhibitory activity of breast milk obtained from HIV-infected mothers who transmit HIV to that of HIV-infected mothers that do not transmit HIV in vivo. In addition, in vivo experiments evaluating transmission in the presence of saliva obtained from infants with low and high levels of salivary SLPI will help assess the contribution of the inhibitory activity of infant salivary SLPI on oral transmission of cell-free and cell-associated HIV. Collectively, our results demonstrate that BLT mice are an attractive small animal model that can be utilized to study key aspects of oral HIV transmission and to test the efficacy of HIV vaccines, antiretroviral therapies and other preventative measures aimed at reducing mother-to-child transmission of HIV during breastfeeding. In addition, our data demonstrating the presence of human immune cells in the oral cavity and GI tract of BLT mice indicate that BLT mice may be utilized to study other human pathogens that are transmitted orally and/or infect these tissues( i. e. HCMV and EBV) and to answer fundamental questions about human oral and gastrointestinal immunity.", "role": "user" }, { "content": "Infected children acquire HIV from their mother in utero , intrapartum or by ingesting their mother's breast milk which can contain both HIV particles ( cell-free ) and HIV-infected cells ( cell-associated ) . Although breastfeeding is attributed to a significant number of HIV infections in children , most breastfed infants remain uninfected despite prolonged and repeated exposure to HIV . This limited transmission has led to two apparently contradictory roles for milk in HIV infection: vector of transmission or vehicle of protection ? Milk has a strong inhibitory effect on HIV infection in vitro . However , this has never been demonstrated in an in vivo system . In the present study , we address this paradox in a bone marrow/liver/thymus ( or BLT ) humanized mouse model of oral transmission of cell-free and cell-associated HIV . We demonstrate that human breast milk has potent HIV inhibitory activity that can prevent oral transmission of cell-free and cell-associated HIV in vivo . Our results provide key insight into oral HIV transmission and the protective role of milk . However , since transmission can and does occur in some instances after continued exposure to HIV in milk , we demonstrate that oral HIV transmission can be efficiently prevented in BLT humanized mice by the systemic administration of antiretrovirals .", "role": "assistant" } ]
science.healthver_entailment
science.healthver_entailment.4259
[ { "content": "Output a JSON object with \"verdict\" and \"evidence\" fields based on the claim and abstract below.\n\nClaim: dexamethasone, a steroid commonly used to treat inflammation, can drastically reduce death in patients who are severely sick with COVID-19.\n\nAbstract:\nOBJECTIVES: The infection caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV2) spreads worldwide and is considered a pandemic.\nThe most common manifestation of SARS-CoV2 infection (Coronavirus disease 2019 - COVID-19) is viral pneumonia with varying degrees of respiratory compromise and up to 40% of hospitalized patients might develop Acute Respiratory Distress Syndrome (ARDS).\nSeveral clinical trials evaluated the role of corticosteroids in non-COVID-19 ARDS with conflicting results.\nWe designed a trial to evaluate the effectiveness of early intravenous dexamethasone administration on the number of days alive and free of mechanical ventilation within 28 days after randomization in adult patients with moderate or severe ARDS due to confirmed or probable COVID-19.\nMETHODS: This is a pragmatic, prospective, randomized, stratified, multicenter, open-label, controlled trial including 350 patients with early-onset (less than 48h before randomization) moderate or severe ARDS, defined by the Berlin criteria, due to COVID-19.\nEligible patients will be randomly allocated to either standard treatment plus dexamethasone (intervention group) or standard treatment without dexamethasone (control group).\nPatients in the intervention group will receive dexamethasone 20mg IV once daily for 5 days, followed by dexamethasone 10mg IV once daily for additional 5 days or until Intensive Care Unit (ICU) discharge, whichever occurs first.\nThe primary outcome is ventilator-free days within 28 days after randomization, defined as days alive and free from invasive mechanical ventilation.\nSecondary outcomes are all-cause mortality rates at day 28, evaluation of the clinical status at day 15 assessed with a 6-level ordinal scale, mechanical ventilation duration from randomization to day 28, Sequential Organ Failure Assessment (SOFA) Score evaluation at 48h, 72h and 7 days and ICU-free days within 28.\nETHICS AND DISSEMINATION: This trial was approved by the Brazilian National Committee of Ethics in Research (Comissao Nacional de Etica em Pesquisa - CONEP) and National Health Surveillance Agency (ANVISA).\nAn independent data monitoring committee will perform interim analyses and evaluate adverse events throughout the trial.\nResults will be submitted for publication after enrolment and follow-up are complete.", "role": "user" }, { "content": "{\"verdict\": \"NEI\", \"evidence\": []}", "role": "assistant" } ]
science.gnormplus_ner
science.gnormplus_ner.218
[ { "content": "Extract all unique entities of these types from the text: [\"Gene\"].\n\nOutput a JSON object of the format: {\"Gene\": [\"SSX\", ...]}.\n\nText:\nReceptor-associated Mad homologues synergize as effectors of the TGF-beta response. Transforming growth factor-beta TGF-beta is the prototype for a family of extracellular proteins that affect cell proliferation and tissue differentiation. TGF-beta-related factors, including BMP-2/4, Dpp and activin, act through two types of serine/threonine kinase receptors which can form a heteromeric complex. However, the mechanism of signal transduction by these receptors is largely unknown. In Drosophila, Mad is required for signalling by Dpp. We have isolated complementary DNAs for four human Mad homologues, one of which, hMAD-4, is identical to DPC-4, a candidate tumour suppressor. hMAD-3 and -4 synergized to induce strong ligand-independent TGF-beta-like responses. When truncated at their carboxy termini, hMAD-3 and -4 act as dominant-negative inhibitors of the normal TGF-beta response. The activity of hMAD-3 and -4 was regulated by the TGF-beta receptors, and hMAD-3 but not hMAD-4 was phosphorylated and associated with the ligand-bound receptor complex. These results define hMAD-3 and -4 as effectors of the TGF-beta response and demonstrate a function for DPCA-4/hMAD-4 as a tumour suppressor.", "role": "user" }, { "content": "{\"Gene\": [\"Mad\", \"ligand-bound receptor\", \"Transforming growth factor-beta\", \"Dpp\", \"DPCA-4\", \"BMP-2/4\", \"hMAD-3 and -4\", \"DPC-4\", \"TGF-beta-related factors\", \"hMAD-4\", \"hMAD-3\", \"serine/threonine kinase receptors\", \"tumour suppressor\", \"dominant-negative inhibitors\", \"TGF-beta\"]}", "role": "assistant" } ]
science.pico_ner
science.pico_ner.3120
[ { "content": "Find entities of these types in the text: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nOutput a JSON object with entity types as keys and lists of found entities as values.\n\nText:\nNo gains in efficacy observed by adding gemcitabine to adjuvant therapy for lymph node-positive breast cancer .", "role": "user" }, { "content": "{\"Intervention\": [\"gemcitabine to adjuvant therapy\"], \"Outcome\": [\"efficacy\"], \"Participant\": [\"lymph node-positive breast cancer\"]}", "role": "assistant" } ]
science.chemprot_re
science.chemprot_re.591
[ { "content": "Identify relationships between chemical compounds and genes in this abstract. Use only these relation types: ['DOWNREGULATOR', 'SUBSTRATE', 'INDIRECT-REGULATOR', 'PART-OF', 'MODULATOR-ACTIVATOR', 'REGULATOR', 'INHIBITOR', 'COFACTOR', 'UPREGULATOR', 'ACTIVATOR', 'ANTAGONIST', 'NOT', 'INDIRECT-DOWNREGULATOR', 'SUBSTRATE_PRODUCT-OF', 'INDIRECT-UPREGULATOR', 'AGONIST', 'PRODUCT-OF', 'MODULATOR', 'DIRECT-REGULATOR', 'UNDEFINED', 'AGONIST-INHIBITOR', 'AGONIST-ACTIVATOR', 'MODULATOR-INHIBITOR'].\n\nFormat your output as a JSON array with entries like [\"<Entity_A>\", \"<RELATION_A_B>\", \"<Entity_B>\"].\n\nAbstract:\nTemporal profile of brain and pituitary GnRHs, GnRH-R and gonadotropin mRNA expression and content during early development in European sea bass (Dicentrarchus labrax L.). A likely endocrine control mechanism for sexual differentiation in size-graded populations of European sea bass (Dicentrarchus labrax) is proposed by evaluating the brain expression and pituitary content of two forms of gonadotropin-releasing hormone (GnRH), namely sea bream (sbGnRH) and salmon (sGnRH), the pituitary expression of one subtype of GnRH receptor (dlGnRH-R-2A) and the three gonadotropin (GtH) subunits, namely glycoprotein alpha (GPalpha), follicle-stimulating hormone beta (FSHbeta) and luteinizing hormone beta (LHbeta), as well as the pituitary and plasma LH levels between 50 and 300 days post-hatching (dph). Four gradings were conducted between 2 and 8 months after hatching, resulting in a population of large and small individuals, having 96.5% females (female-dominant population) and 69.2% males (male-dominant population), respectively, after the last grading. The onset of gonadal differentiation was different in the two sexes, and coincided with a peak of expression of sbGnRH or sGnRH. Furthermore, the expression of these GnRHs was correlated with the expression of dlGnRH-R-2A. Sex-related differences in the brain and pituitary content of sbGnRH were also found at the time of sexual differentiation. Moreover, the observed sexual dimorphism at the transcriptional or synthesis level of these GnRH forms suggests that a different neuro-hormonal regulation is operating according to sex. At the onset of sex differentiation, FSHbeta transcriptional activity reached maximal values, which were maintained until the completion of the process. The present study suggests a role for sbGnRH, sGnRH and the dlGnRH-R-2A during gonadal differentiation, possibly through enhancement of FSHbeta gene expression. In males, a different endocrine regulation seems to exist also during spermiogenesis and spermiation, when gene transcription, peptide synthesis and release of LH are of greater importance.", "role": "user" }, { "content": "[]", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_pubmed
science.scientific_papers_summarization_single_doc_pubmed.6519
[ { "content": "Write an abstract for the research paper below.\n\nPaper text:\nthe clinical evaluation of left ventricular ( lv ) diastolic function has been a difficult challenge .\nlv filling and pulmonary venous doppler patterns have been utilized to estimate lv diastolic function , comprising variables as such stiffness , relaxation and even lv filling pressures .\ndoppler echocardiography has been utilized not only as a diagnostic tool but also as a monitoring tool , permitting follow - up of the effects of therapeutic interventions . in the previous issue of critical care , vignon and\ncolleagues demonstrated the value of tissue doppler imaging ( tdi ) to estimate lv diastolic function in a setting where acute alterations of preloading conditions may interfere .\nthey also clearly show some weak points with respect to the correct use of these function variables .\na typical transmitral doppler pattern consists of a larger early filling velocity wave ( e wave ) followed by an atrial contraction flow velocity wave .\nreduced lv relaxation ( present in patients with advanced age , ischaemic heart disease or arterial hypertension ) will induce a reduction of the e wave in comparison with the atrial contraction flow velocity wave . increasing filling\npressures will increase the e wave velocity and will shorten the deceleration time of the e wave , with a transition from pseudonormalization to a restrictive pattern .\nthe phase of diastolic dysfunction for an individual patient remains difficult because of the interplay between relaxation and preload , which makes the routine pulsed wave doppler indices useless particularly in a setting where preloading conditions may change constantly and abruptly , as in many critically ill patients .\nthis doppler mode facilitates the assessment of the movement of the myocardial tissue , which typically reflects low velocity with very high amplitude doppler signals .\ntdi can therefore be utilized to estimate myocardial velocities at the mitral annulus to obtain an impression of both systolic and diastolic myocardial motion .\nthe spectral doppler pattern is characterized by a systolic wave , an early diastolic wave ( e ' wave ) and an atrial velocity wave .\nthe technique is hampered by shortcomings related to the doppler technology ( angle misalignment , translation and rotation of the myocardial tissue ) and by intrinsic characteristics of myocardial function ( for example , the presence of regional wall motion abnormalities ) .\nnevertheless , regional lv systolic function and lv diastolic function assessment is possible provided the sample volume is placed at the level of the mitral annulus and no ischaemia / infarction in the annulus region is present .\nwhereas the systolic component of this doppler pattern has been shown to be clearly preload dependent , less transparency exists on the load dependency of the diastolic myocardial velocities . 
with the initial description of this variable\n, the index appeared to be load independent in settings of rapid infusion or preload alteration .\njacques and colleagues , however , reported both afterload independency and preload dependency of the e ' wave in situations where a normal lv function was present .\nin contrast to the present study of vignon and colleagues , jacques and colleagues did not report the sample volume location .\ncomparison between the septal and lateral mitral annulus signals suggests that the septal signals are apparently more sensitive to preload alterations .\nthe reader , however , should bear in mind that the stability of the signals obtained at the septal side can be questioned , particularly in mechanically ventilated patients with ventilation - induced , right - sided , afterload shifts .\nnevertheless , several authors have already reported differences in e ' wave velocities between the septal and lateral walls , without the ability to indicate a clear cause for this difference .\nvignon and colleagues show no changes of the e ' wave in a rather small subset of intensive care unit patients , and again several issues have to be kept in mind .\nthe effects of drugs such as epinephrine , however , have not been described ; one should expect an increase of the e ' wave , at least when considering a similar effect to that with dobutamine .\nit can thus be questioned whether an ultrafiltration ( and thus load diminishing ) effect is not concurring , keeping the e ' wave constant .\nsecond , the influence of ultrafiltration itself on lv systolic function and lv diastolic function during septic shock is largely unknown .\nthe combination of lower preload ( by ultrafiltration ) and decreased lv function would render a less preload - dependent e ' wave , as suggested by jacques and colleagues the number of intensive care unit patients included in this subset is again far too small to conclude in a proper manner . from all these studies , it is unclear whether the discussed concepts apply to all haemodynamic states . the fact that the e wave / e ' wave relates to pulmonary capillary wedge pressure suggests that increased intrathoracic pressures could impede this relationship strongly .\napart from the fact that the e ' wave ( in conjunction with traditional doppler parameters ) is useful in determining lv diastolic function more accurately , tdi in the critically ill patient keeps more questions open than are answered .\ne wave = early filling velocity wave ; e ' = early diastolic wave ; lv = left ventricular ; tdi = tissue doppler imaging .\nthis work was supported by an unrestricted grant from the international research centre of ghent university .", "role": "user" }, { "content": "assessment of left ventricular diastolic function in the critically ill patient remains a difficult issue in clinical practice . combined use of routine transmitral and pulmonary venous doppler patterns in conjunction with tissue doppler imaging \n have been claimed to allow bedside diagnosis of diastolic dysfunction . \n although in the previous issue of critical care it was clearly demonstrated there might be a difference in load dependency of the early myocardial tissue doppler velocity between lateral and septal placed sample volume , there remain still several unanswered questions , particularly with respect to the preload dependency of these indices .", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_pubmed
science.scientific_papers_summarization_single_doc_pubmed.102114
[ { "content": "Write an abstract for the research paper below.\n\nPaper text:\nhe stated he was pressed between reversing 10-ton truck and wall at a construction site .\nplain radiographs and three - dimensional ct revealed an apc type ii pelvic ring injury with 3.0 cm sized diastasis of pubic symphysis ( fig .\nwe scheduled an operation for open reduction and internal fixation with plate on diastasis of pubic symphysis at 4 days after injury .\nsurgery was performed under image intensifler guidance , in the supine position through a pfannenstiel approach .\nafter reduction of symphysis pubis was tried but failed , using a weber pointed reduction clamp placed anteriorly at same level on pubic body , satisfactory reduction was obtained using a farabeau clamp after temporary two screw insertion on superior border of both pubis , and confirmed by fluoroscopy . due to a difficulty of reduction , double plating was done at anterior and superior border of pubis ( fig .\non postoperative radiograph checked on one day after operation , right si joint widening was observed ( fig .\nwe planned a supplementary operation using percutaneous si screws at the second day after initial operation .\nthe patient was positioned supine with applying radiolucent padding under his sacrum not to compress buttock muscle . under an image intensifier , we inserted half - threaded cannulated screw with a washer into s1 vertebral body placed from the lateral surface of the iliac wing for achieving the reduction of widen si joint\n. then second fully - threaded screw was inserted into s2 body as a positional screw .\nthe patient recovered well and was discharged from the hospital on the seventh day after second operation .\n3d - f ) , satisfactory reduction of right si diastasis was confirmed . partially weight bearing on the injured side and fully weight bearing on the contralateral side were permitted after removal of drainage .\none year after injury he had no anterior pain and his follow - up radiograph showed no screw breakage , fixation failure or heterotopic ossification ( fig .\nthe patient was asked if data concerning the case could be submitted for publication , and he consented .\npubic symphysis is known as a non - synovial amphiarthrodial joint that is formed at the junction of the two innominate bones anteriorly .\nthe joint comprises a fibro - cartilaginous disc that is stabilized by the anterior capsular and ligamentous structures .\nthe typical mechanism of injury for an open - book pelvic ring injury is due to a force directed from anterior to posterior , or an external rotation or abduction force applied to either or both legs8 ) . in apc type ii patterns ,\nwidening of the symphysis of more than 2.5 cm has been correlated with rupture of anterior sacrospinous ligament and thought to be required operative fixation like a symphyseal plating with or without si joint fixation2345 ) .\nthe apc type iii patterns is characterized by vertically unstable injury associated with a posterior disruption of si complex , which requires both anterior and posterior fixation146 ) . in general , diastasis of pubic symphysis without any evidence of vertical instability\nthus only anterior plating is enough to reduce disrupted pelvic ring due to anatomical properties of posterior pelvic ring similar to suspension bridge . 
until now , retrospective studies suggest that posterior ring fixations may not be necessary in apc type ii injury pattern injury8910 ) .\nstatic radiographs may lead to underestimation of the actual injury , even if no pelvic manipulation has occurred .\nstress x - rays were performed on a series of 22 patients with a symphyseal diastasis between 1 and 2.5 cm to determine the extent of pelvic ring instability7 ) . despite an injury radiographic measurement of an average diastasis of 1.8 cm , the average diastasis with applied stress was 2.5 cm .\nsuch findings of intraoperative stress examination have led to a change of the treatment plan in more than 25% of patients7 ) . in the present case , though initial radiographs revealed 3.0-cm pubic symphyseal diastasis , ct revealed 1.2-cm diastasis and no widening of si joint .\nwe assumed that unintended significant variable recoil may occur even if any pelvic wrapping was not tried .\nsuch unintended recoil could happen by the patient transportation process using bed sheet for checking ct or the convex table setting during checking ct , which is different from flat table setting during checking plain radiograph .\neven if initial radiograph showed si diastasis , following preoperative ct showed minimal si diastasis .\nthus we think there may be injured anterior si complex and intact posterior si complex which could be treated by only anterior plating . however , though anterior plating was done , postoperative radiograph and ct revealed widening of si joint .\nwe presumed that there would be concealed injury of posterior si ligamentous complex which should act as a hinge if injury type was apc ii and it could make consequent seesaw effect by forced reduction of widened pubic symphysis with double anterior plating . when ct was checked for scrutinizing associated posterior ring lesions that have occurred after diastasis of pubic symphysis\nwas measured more than 2.5 cm by initial radiograph , it should be scrutinized not only associated si joint widening but also whether diastasis of pubic symphysis on ct was equal to that of initial radiograph . making a comparison between initial radiographs and following ct , if a smaller measurement of pubic symphyseal widening was observed on following ct , it should be considered a possibility of underestimation of a si joint injury or apc type iii injury . even when pelvic open - book injury was thought that only anterior plating is required without si joint fixation , intraoperative stress test for excluding concealed posterior ring disruption should be considered .", "role": "user" }, { "content": "the classification of anteroposterior compression ( apc ) injury type is based on using static radiographs , stress radiographs are known as a useful adjunct in classifying type of apc pelvic injuries . according to a recent article , the intraoperative stress examination has led to a change in the treatment plan in more than 25% of patients on 22 patients presumed apc type i ( symphyseal diastasis < 2.5 cm ) injuries . here \n authors present a case demonstrating a necessity of intraoperative stress test for excluding concealed posterior ring disruption .", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_arxiv
science.scientific_papers_summarization_single_doc_arxiv.120144
[ { "content": "You will be presented with the full text of a scientific research paper. Write an abstract summarizing the key points. Your response should contain only the abstract without any other text.\n\nArticle text:\nblack holes which are formed in the early universe are known as primordial black holes ( pbhs ) .\nthese black holes are formed as a result of initial inhomogeneities [ 1,2 ] , inflation [ 3,4 ] , phase transitions [ 5 ] , bubble collisions [ 6,7 ] or the decay of cosmic loops [ 8 ] . in 1974 hawking discovered that the black holes emit thermal radiation due to quantum effects [ 9 ] . so the black holes get evaporated depending upon their masses .\nsmaller the masses of the pbhs , quicker they evaporate .\nbut the density of a black hole varies as inversely with it s mass .\nso high density is needed for forming lighter black holes . and such high densities is available only in the early universe .\nthus primordial black holes are the only black holes whose masses could be small enough to have evaporated by present time .\nfurther , pbhs could act as seeds for structure formation[10 ] and could also form a significant component of dark matter[11,12,13 ] . using standard cosmology barrow and carr\n[ 14 ] have studied the evaporation of pbhs .\nthey have , however , not included the effect of accretion of radiation which seems to play an important role .\nmajumdar , das gupta and saxena [ 15 ] have solved the baryon asymmetry problem including accretion . in the present work ,\nwe include accretion of radiation while studying the evaporation of pbhs and have shown that how evaporation times of pbhs change wih accretion efficiency .\nfor a spatially flat(k=0 ) frw universe with scale factor @xmath0 , the einstein equation is [ 16 ] @xmath1 where @xmath2 is the density of the universe .\n+ the energy conservation equation is @xmath3 on assuming that the universe is filled with perfect fluid describrd by equation of state @xmath4 .the parameter @xmath5 is @xmath6 for radiation dominated era@xmath7 and is @xmath8 for matter dominated era@xmath9 , where time @xmath10 marks the end of the radiation dominated era @xmath11 sec .\n+ now equation(2 ) gives @xmath12 using this solution in equation(1 ) , one gets the wellknown temporal behaviour of the scale factor @xmath13 as @xmath14 due to hawking evaporation , the rate at which the pbh mass ( m ) decreases is given by @xmath15 where @xmath16 black hole radius=@xmath17 with g as newton s gravitational constant . + @xmath18 black body constant + @xmath19 and@xmath20 hawking temperature=@xmath21 .\n+ now equation ( 4 ) becomes @xmath22 integrating the above equation , we get @xmath23^{\\frac{1}{3}}\\end{aligned}\\ ] ] where @xmath24 .\nwhen a pbh passes through radiation dominated era , the accretion of radiation leads to increase of its mass with the rate given by @xmath25 where @xmath26 is the radiation energy density of the sorrounding of the black hole=@xmath27 and @xmath28 is the accretion efficiency .\nthe value of the accretion efficiency @xmath28 depends upon complex physical processes such as the mean free paths of the particles comprising the radiation sorrounding the pbhs .\nany peculiar velocity of the pbh with respect to the cosmic frame could increase the value of @xmath28 [ 15,17 ] .\nsince the precise value of @xmath28 is unknown , it is customary [ 18 ] to take the accretion rate to be proportional to the product of the surface area of the pbh and the energy density of radiation with @xmath29 . 
+ after substituting the expressions for @xmath30 and\n@xmath31 equation(7 ) becomes @xmath32 using equation(3 ) , we get @xmath33 on integration , the above eqution gives @xmath34^{-1}\\end{aligned}\\ ] ] the variation of accreting mass with time for different @xmath28 is shown in figure-1 .\n+ from figure-1 , it is cleared that the mass of the pbh increases with accretion efficiency .\n+ now equation(10 ) gives @xmath35 but horizon mass varies with @xmath36 as @xmath37 .\n+ so @xmath38 but for accretion to become effective , horizon mass @xmath39 must grow faster than black hole mass @xmath40 which implies ( @xmath41 ) is a positive quantity .\n+ hence equation(12 ) demands ( @xmath42 ) is a positive quantity which gives @xmath43 the behaviour of horizon mass @xmath44 and primordial black hole mass is shown in figure-2 .\nhaving @xmath45 and @xmath44 with @xmath36 ]\nprimordial black holes , as discussed before , are only formed in radiation dominated era .\nso depending on their evaporation , we can divide pbhs into 2 categories .\n+ ( i ) pbhs evaporated in radiation dominated era @xmath7 + ( ii ) pbhs evaporated in matter dominated era @xmath9 . +\n* case - i * @xmath7 + black hole evaporation equation ( 6 ) implies @xmath47^\\frac{1}{3}\\end{aligned}\\ ] ] if we consider both evaporation and accretion simultaneously , then the rate at which primordial black hole mass changes is given by @xmath48 this equation can nt be solved analytically .\nso we have solved it by using numerical methods .\n+ ( for our calculation purpose , we have used @xmath49 and @xmath50 . )\n+ for a given @xmath51 the equation(14 ) and the solution of the equation(15 ) are shown in figure-3 .\n+ ] figure-3 clearly shows that the evaporation time of pbh increases with accretion efficiency . + * case - ii * @xmath9 + since there is no accretion in matter dominated era , so the first term in the combined equation ( 15 ) for variation of @xmath52 with time needs to be integrated only upto @xmath10 .\n+ based on numerical solution with above provision , we construct the following table-1 for the pbhs which are evaporated today i.e. @xmath53 . + @xmath54table-1 + @xmath28 & @xmath55 & @xmath51 + @xmath8 & @xmath56s & @xmath57 g + @xmath58 & @xmath59s & @xmath60 g + @xmath61 & @xmath62s & @xmath63 g + @xmath64 & @xmath65s & @xmath66 g + @xmath67 & @xmath68s & @xmath69 g + @xmath70 & @xmath71s & @xmath72 g + @xmath73 & @xmath74s & @xmath75 g +\nthe fraction of the universe s mass going into pbhs at time @xmath36 is [ 2 ] @xmath76(1+z)^{-1}\\end{aligned}\\ ] ] where @xmath77 is the density parameter associated with pbhs formed at time @xmath36 , @xmath78 is the redshift associated with time @xmath36 and @xmath79 is the microwave background density having value @xmath80 . + again for @xmath81 , @xmath82 .\n+ so @xmath83 again using @xmath84 , we can write the fraction of the universe going into pbhs of mass m is @xmath85 observations of the cosmolgical deceleration parameter imply @xmath86 over all mass ranges for which pbhs have not evaporated .\nbut presently evaporating pbhs(@xmath87 ) generate a @xmath5-ray background whose most of the energy is appearing at around 100 mev[19 ] .\nif the fraction of the emitted energy which goes into photons is @xmath88 , then the density of the radiation at this energy is expected to be @xmath89 . 
since @xmath90 and the observed @xmath5-ray background density around @xmath91 mev is @xmath92\n, we infer @xmath93 .\n+ now equation ( 18 ) becomes @xmath94 the variation of @xmath95 with @xmath28 drawn from variation of @xmath87 with @xmath28 is shown in the following table-2 .\n@xmath96table-2 + @xmath28 & @xmath87 & @xmath97 + @xmath8 & @xmath98 g & @xmath99 + @xmath61 & @xmath100 g & @xmath101 + @xmath67 & @xmath102 g & @xmath103 + @xmath73 & @xmath104 g & @xmath105 + @xmath106 & @xmath107 g & @xmath108 + @xmath109 & @xmath110 g & @xmath111 +\nif we consider only evaporation then the primordial black holes which are created on or before @xmath112 sec are completely evaporated by present time .\nhowever , we found that if we include accretion , then the primotdial black holes which are created on @xmath56sec will live longer depending on accretion efficiency .\nwe also concluded that accretion efficiency can not exceed @xmath113 .\nfurther , our analysis imposes constraint on the fraction of the universe s mass going into pbhs which is consistent with previous results [ 20,21,22 ] that @xmath114 .\nthus accretion increases the number of existing pbhs depending on accretion efficiency , which helps in considering pbh as the proper candidate for dark matter .\nwe , thus , provide within standard cosmology a possible realisation of the speculation advanced earlier [ 11,12,13 ] .\nwe are thankful to institute of physics , bhubaneswar , india , for providing the library and computational facility .\nb.nayak would like to thank the council of scientific and industrial research , government of india , for the award of jrf , f.no .\n09/173(0125)/2007-emr - i .\n@xmath115 $ ] ya . b. zeldovich and i. novikov , sov .\nastron . astrophys .\nj. @xmath116 , @xmath117 @xmath118 .\n@xmath119 $ ] b. j. carr , astrophys .\nj. @xmath120 , @xmath121 @xmath122 .\n+ @xmath123 $ ] m. y. kholpov , b. a. malomed and ya .\nb. zeldovich , mon . not .\nr. astron .\n@xmath124 , @xmath125 @xmath126 .\n+ @xmath127 $ ] b. j. carr , j. gilbert and j. lidsey , phys .\nrev . d @xmath128 @xmath129 @xmath130 .\n+ @xmath131 $ ] m. y. kholpov and a. polnarev , phys .\n@xmath132 , @xmath133 @xmath134 .\n+ @xmath135 $ ] h. kodma , m. sasaki and k. sato , prog .\n@xmath136 , @xmath137 @xmath138 .\n+ @xmath139 $ ] d. la and p. j. steinhardt , phys .\nlett @xmath140 , @xmath141 @xmath142 .\n+ @xmath143 $ ] a. polnarev and r. zemboricz , phys .\nd @xmath144 , @xmath145 @xmath146 .\n+ @xmath147 $ ] s. w. hawking , commun . math .\n@xmath144 , @xmath148 @xmath122 .\n+ @xmath149 $ ] k. j. mack , j. p. ostriker and m. ricotti , astrophys . j. @xmath150 , @xmath151 @xmath152 .\n+ @xmath153 $ ] d. blais , c. kiefer , d. polarski , phys .\nb @xmath154 , @xmath155 @xmath156 .\n+ @xmath157 $ ] d. blais , t. bringmann , c. kiefer , d. polarski , phys . rev .\nd @xmath158 , @xmath159 @xmath160 .\n+ @xmath161 $ ] a. barrau , d. blais , g. boudoul , d. polarski , annalen phys .\n@xmath162 , @xmath163 @xmath164 .\n+ @xmath165 $ ] j. d. barrow and b. j. carr , phys .\nd @xmath166 , @xmath167 @xmath168 . + @xmath169\n$ ] a. s. majumdar , p. das gupta and r. p. saxena , int .\nd @xmath170 , @xmath171 @xmath172 .\n+ @xmath173 $ ] s. wienberg , ` gravitation and cosmology ' , wiley , new york , @xmath174 .\n+ @xmath175 $ ] n. upadhyay , p. das gupta and r. p. saxena , phys .\nrev . d @xmath176\n, @xmath177 @xmath178 .\n+ @xmath179 $ ] r. guedens , d. clancy and a. r. liddle , phys .\nd @xmath180 , @xmath181 @xmath156 .\n$ ] d. page and s. w. 
hawking , astrophys . j. @xmath183 , @xmath121 @xmath184 .\n+ @xmath185 $ ] b. j. carr , astron .\n@xmath186 , @xmath187 @xmath130 .\n+ @xmath188 $ ] i. d. novikov et al . ,\nj. @xmath189 , @xmath190 @xmath191 .\n+ @xmath192 $ ] j. macgibbon and b. j. carr , astrophys . j. @xmath193 , @xmath194 @xmath195 .", "role": "user" }, { "content": "primordial black holes evaporate due to hawking radiation . \n we find that the evaporation time of primordial black holes increase when accretion of radiation is included . thus depending on accretion efficiency more and more number of primordial black holes \n are existing today , which strengthens the idea that the primordial black holes are the proper candidate for dark matter . \n pacs numbers : 98.80.-k , 97.60.lf key words : primordial black hole , accretionn , accretion efficiency .", "role": "assistant" } ]
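In the paper text above, every `@xmath` token is a placeholder the arXiv text extraction left where an inline equation stood, so the derivation survives only as prose. Purely for orientation, the standard relations this kind of treatment uses — reconstructed from the surrounding wording under the assumption of the usual Friedmann/Hawking formalism, and not recovered from the dump itself — look like:

```latex
% Flat FRW expansion with equation of state p = \gamma\rho
\left(\frac{\dot a}{a}\right)^{2} = \frac{8\pi G}{3}\,\rho , \qquad
\rho \propto a^{-3(1+\gamma)}, \qquad
a(t) \propto t^{\,2/3(1+\gamma)}

% Hawking mass loss (r_{bh} = 2GM, T_{bh} = 1/(8\pi G M)) and its solution,
% with \alpha a constant collecting the black-body factor a_H and powers of G
\frac{dM}{dt}\Big|_{\rm evap} = -4\pi r_{\rm bh}^{2}\, a_{H} T_{\rm bh}^{4}
\;\Longrightarrow\;
M(t) = \left[M_{i}^{3} - 3\alpha\,(t - t_{i})\right]^{1/3}

% Accretion of radiation with efficiency f, using \rho_{R} = 3/(32\pi G t^{2})
\frac{dM}{dt}\Big|_{\rm acc} = 16\pi G^{2} f M^{2} \rho_{R}
\;\Longrightarrow\;
M(t) = M_{i}\left[1 - \frac{3 f G M_{i}}{2}\left(\frac{1}{t_{i}} - \frac{1}{t}\right)\right]^{-1}
```

The shapes match the record's own cues — the cube-root evaporation solution and the `^{-1}` accretion solution quoted in the text — but the symbol names are assumed.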
science.scitldr_aic
science.scitldr_aic.485
[ { "content": "Summarize the paper:\n\nPaper text:\n\nLike humans, deep networks learn better when samples are organized and introduced in a meaningful order or curriculum.\nWhile conventional approaches to curriculum learning emphasize the difficulty of samples as the core incremental strategy, it forces networks to learn from small subsets of data while introducing pre-computation overheads.\nIn this work, we propose Learning with Incremental Labels and Adaptive Compensation (LILAC), which introduces a novel approach to curriculum learning.\nLILAC emphasizes incrementally learning labels instead of incrementally learning difficult samples.\nIt works in two distinct phases: first, in the incremental label introduction phase, we unmask ground-truth labels in fixed increments during training, to improve the starting point from which networks learn.\nIn the adaptive compensation phase, we compensate for failed predictions by adaptively altering the target vector to a smoother distribution.\nWe evaluate LILAC against the closest comparable methods in batch and curriculum learning and label smoothing, across three standard image benchmarks, CIFAR-10, CIFAR-100, and STL-10.\nWe show that our method outperforms batch learning with higher mean recognition accuracy as well as lower standard deviation in performance consistently across all benchmarks.\nWe further extend LILAC to state-of-the-art performance across CIFAR-10 using simple data augmentation while exhibiting label order invariance among other important properties.\nDeep networks have seen rich applications in high-dimensional problems characterized by a large number of labels and a high volume of samples.\nHowever, successfully training deep networks to solve problems under such conditions is mystifyingly hard (Erhan et al. (2009) ; Larochelle et al. (2007) ).\nThe go-to solution in most cases is Stochastic Gradient Descent with mini-batches (simple batch learning) and its derivatives.\nWhile offering a standardized solution, simple batch learning often fails to find solutions that are simultaneously stable, highly generalizable and scalable to large systems (Das et al. (2016) ; Keskar et al. (2016) ; Goyal et al. (2017) ; You et al. (2017) ).\nThis is a by-product of how mini-batches are constructed.\nFor example, the uniform prior assumption over datasets emphasizes equal contributions from each data point regardless of the underlying distribution; small batch sizes help achieve more generalizable solutions, but do not scale as well to vast computational resources as large mini-batches.\nIt is hard to construct a solution that is a perfect compromise between all cases.\nTwo lines of work, curriculum learning and label smoothing, offer alternative strategies to improve learning in deep networks.\nCurriculum learning, inspired by strategies used for humans (Skinner (1958) ; Avrahami et al. (1997) ), works by gradually increasing the conceptual difficulty of samples used to train deep networks ; Florensa et al. (2017) ; Graves et al. (2017) ).\nThis has been shown to improve performance on corrupted (Jiang et al. (2017) ) and small datasets (Fan et al. (2018) ).\nMore recently, deep networks have been used to categorize samples (Weinshall et al. 
(2018) ) and variations on the pace with which these samples were shown to deep networks were analyzed in-depth (Hacohen & Weinshall (2019) ).\nTo the best of our knowledge, previous works assumed that samples cover a broad spectrum of difficulty and hence need to be categorized and presented in a specific order.\nThis introduces computational overheads e.g. pre-computing the relative difficulty of samples, and also reduces the effective amount of data from which a model can learn in early epochs.\nFurther, curriculum learning approaches have not been shown to compete with simple training strategies at the top end of performance in image benchmarks.\nA complementary approach to obtaining generalizable solutions is to avoid over-fitting or getting stuck in local minima.\nIn this regard, label smoothing offers an important solution that is invariant to the underlying architecture.\nEarly works like Xie et al. (2016) replace ground-truth labels with noise while Reed et al. (2014) uses other models' outputs to prevent over-fitting.\nThis idea was extended in Bagherinezhad et al. (2018) to an iterative method which uses logits obtained from previously trained versions of the same deep network.\nWhile Miyato et al. (2015) use local distributional smoothness, based on the robustness of a model's distribution around a data point, to regularize outcomes, Pereyra et al. (2017) penalized highly confident outputs directly.\nClosest in spirit to our work is the label smoothing method defined in Szegedy et al. (2016) , which offers an alternative target distribution for all training samples with no extra data augmentation.\nIn general, label smoothing is applied to all examples regardless of how it affects the network's understanding of them.\nFurther, in methods which use other models to provide logits/labels, often the parent network used to provide those labels is trained using an alternate objective function or needs to be fully re-trained on the current dataset, both of which introduce additional computation.\nIn this work, we propose LILAC, Learning with Incremental Labels and Adaptive Compensation, which emphasizes a label-based curriculum and adaptive compensation, to improve upon previous methods and obtain highly accurate and stable solutions.\nLILAC is conceived as a method to learn strong embeddings by using the recursive training strategy of incremental learning alongside the use of unlabelled/wrongly-labelled data as hard negative examples.\nIt works in two key phases,\n1) incremental label introduction and\n2) adaptive compensation.\nIn the first phase, we incrementally introduce groups of labels in the training process.\nData, corresponding to labels not yet introduced to the model, use a single fake label selected from within the dataset.\nOnce a network has been trained for a fixed number of epochs with this setup, an additional set of ground-truth labels is introduced to the network and the training process continues.\nIn recursively revealing labels, LILAC allows the model sufficient time to develop a strong understanding of each class by contrasting against a large and diverse set of negative examples.\nOnce all ground-truth labels are revealed the adaptive compensation phase of training is initiated.\nThis phase mirrors conventional batch learning, except we adaptively replace the target one-hot vector of incorrectly classified samples with a softer distribution.\nThus, we avoid adjusting labels across the entire dataset, like previous methods, while elevating the stability and 
average performance of the model.\nFurther, instead of being pre-computed by an alternative model, these softer distributions are generated on-the-fly from the outputs of the model being trained.\nWe apply LILAC to three standard image benchmarks and compare its performance to the strongest known baselines.\nWhile incremental and continual learning work on evolving data distributions with the addition of memory constraints ((Rebuffi et al., 2017; Castro et al., 2018) and derivative works), knowledge distillation ( Rolnick et al., 2018) and similar works) or other requirements, this work is a departure into using negative mining and focused training to improve learning on a fully available dataset.\nIn incremental/continual learning works, often the amount of data used to retrain the network is small compared to the original dataset while in LILAC we fully use the entire dataset, distinguished by Seen and Unseen labels.\nThus, it avoids data deficient learning.\nFurther, works like Bucher et al. (2016) ; Li et al. (2013) ; Wang & Gupta (2015) emphasize the importance of hard negative mining, both in size and diversity, in improving learning.\nAlthough the original formulation of negative mining was based on imbalanced data, recent object detection works have highlighted its importance in contrasting and improving learning in neural networks.\nTo summarize, our main contributions in LILAC are as follows,\n• we introduce a new take on curriculum learning by incrementally learning labels as opposed to samples, • our method adaptively compensates incorrectly labelled samples by softening their target distribution which improves performance and removes external computational overheads, • we improve average recognition accuracy and decrease the standard deviation of performance across several image classification benchmarks compared to batch learning, a property not shared by other curriculum learning and label smoothing methods.\nIn the incremental phase, we initially replace the ground-truth labels of several class using a constant held-out label.\nGradually, over the course of several fixed intervals of training we reveal the true label.\nWithin a fixed interval of training, we keep constant two sets of data, \"Seen\", whose groundtruth labels are known and \"Unseen\", whose labels are replaced by a fake value.\nWhen training, Illustration of the evolution of data partitions in the incremental label introduction phase for a four label dataset.\nIn the first incremental step, only one label is used for training while the remaining data use label 4.\nA short period of training is performed with this fixed setup, where data from U is uniformly sampled to match the number of samples from S, in every mini-batch.\nThe final incremental step depicted is equivalent to batch learning since all the labels are available to the network.\nOnce all the ground-truth labels are revealed we begin the adaptive compensation phase described in Sec. 2.2.\nmini-batches are uniformly sampled from the entire training set, but the instances from \"Unseen\" classes use the held-out label.\nBy the end of the final interval, we reveal all ground-truth labels.\nWe now describe the incremental phase in more detail.\nAt the beginning of the incremental label introduction phase, we virtually partition data into two mutually exclusive sets, S : Seen and U : Unseen, as shown in Fig. 
1 .\nData samples in S use their ground-truth labels as target values while those in U use a designated unseen label, which is held constant throughout the entire training process.\nLILAC assumes a random ordering of labels, Or(M ), where M denotes the total number of labels in the dataset.\nWithin this ordering, the number of labels and corresponding data initially placed in S is defined by the variable b.\nThe remaining labels, M − b, are initially placed in U and incrementally revealed in intervals of m labels, a hyper-parameter defined by the user.\nTraining in the incremental phase happens at fixed intervals of E epochs each.\nWithin a fixed interval, the virtual data partition is held constant.\nEvery mini-batch of data is sampled uniformly from the entire original dataset and within each mini-batch, labels are obtained based on their placement in S or U. Then the number of samples from U is reduced or augmented, using a uniform prior, to match the number of samples from S. This is done to ensure no unfair skew in predictions towards U since all data points use the same designated label.\nFinally, the curated mini-batches of data are used to train the neural network.\nAt the end of each fixed interval, we reveal another set of m ground-truth labels and move samples of those classes from U to S after which the entire data curation and training process is repeated for the next interval.\nIn this work, we proposed LILAC which rethinks curriculum learning based on incrementally learning labels instead of samples.\nThis approach helps kick-start the learning process from a substantially better starting point while making the learned embedding space amenable to adaptive negative logit compensation.\nBoth these techniques combine well in LILAC to show the highest performance on CIFAR-10 for simple data augmentations while easily outperforming batch and curriculum learning and label smoothing on comparable network architectures.\nThe next step in unlocking the full potential of this setup is to extend this setup to include a confidence measure on the predictions of the network so that it can handle the effects of dropout or partial inputs.\nIn further expanding LILAC's ability to handle partial inputs, we aim to explore its effect on standard incremental learning (memory constrained) while also extending its applicability to more complex neural network architectures.\nA LILAC: ALGORITHM\nTable 8 : The table captures the effect of varying the number of epochs used for the fixed training intervals in the incremental label introduction phase.\nAcross CIFAR-10 there is an obvious peak after which the mean value decreases.\nHowever, in STL-10 there seems to be a consistent increase, with the assumption of minor noise.\nFinally, in CIFAR-100 there isn't a clear pattern.\nFrom the results in Table 8 , we observe that the choice of E is dependent on the dataset.\nThere isn't an explicit pattern that can be used to select the value of E without trial runs.\nFurther, the available run-time is an important constraint when selecting E from a range of values since both m and E affect it.", "role": "user" }, { "content": "A novel approach to curriculum learning by incrementally learning labels and adaptively smoothing labels for mis-classified samples which boosts average performance and decreases standard deviation.", "role": "assistant" } ]
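The two LILAC phases described above are mostly data-curation logic, which the following minimal NumPy sketch makes explicit. It is an illustration, not the authors' code: the helper names, the use of sampling with replacement when U is smaller than S, and the uniform mixing rule in `soften_targets` (the paper instead builds the soft target from the model's own logits) are assumptions made for the example.

```python
import numpy as np

def reveal_schedule(order, b, m):
    """Yield the growing set S of 'Seen' labels: b labels to start,
    then m more ground-truth labels after each fixed interval."""
    seen = list(order[:b])
    yield set(seen)
    for k in range(b, len(order), m):
        seen.extend(order[k:k + m])
        yield set(seen)

def curate_batch(x, y, seen, fake_label, rng):
    """Relabel one uniformly sampled mini-batch and rebalance it.

    Labels in `seen` keep their ground truth; all others collapse to the
    single held-out `fake_label`.  'Unseen' samples are then re-sampled
    (uniformly) so their count matches the 'Seen' samples, as in the text."""
    y = np.where(np.isin(y, list(seen)), y, fake_label)
    s_idx = np.flatnonzero(y != fake_label)
    u_idx = np.flatnonzero(y == fake_label)
    if s_idx.size and u_idx.size:
        u_idx = rng.choice(u_idx, size=s_idx.size,
                           replace=u_idx.size < s_idx.size)
    keep = np.concatenate([s_idx, u_idx])
    return x[keep], y[keep]

def soften_targets(logits, y, num_classes, eps=0.1):
    """Adaptive compensation: only misclassified samples get a softer target.

    The uniform (1 - eps)/eps mixing here is a placeholder for whatever
    logit-derived distribution the method actually uses."""
    onehot = np.eye(num_classes)[y]
    wrong = logits.argmax(axis=1) != y
    soft = (1.0 - eps) * onehot + eps / num_classes
    return np.where(wrong[:, None], soft, onehot)
```

With `reveal_schedule(Or_M, b, m)` driving an outer loop of E epochs per interval, the final yielded set contains all labels, at which point training reverts to plain batch learning plus `soften_targets`.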
science.mslr2022_ms2_multidoc_summarization
science.mslr2022_ms2_multidoc_summarization.4757
[ { "content": "Based on the RCT reports below, write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article.\n\n\n\nTitle 1:\nThe Cochrane Collaboration’s tool for assessing risk of bias in randomised trials\nAbstract 1:\nFlaws in the design , conduct , analysis , and reporting of r and omised trials can cause the effect of an intervention to be underestimated or overestimated . The Cochrane Collaboration ’s tool for assessing risk of bias aims to make the process clearer and more\n\n\nTitle 2:\nEvidence for biological effects of metformin in operable breast cancer: a pre-operative, window-of-opportunity, randomized trial\nAbstract 2:\nMetformin may reduce the incidence of breast cancer and enhance response to neoadjuvant chemotherapy in diabetic women . This trial examined the effects of metformin on Ki67 and gene expression in primary breast cancer . Non-diabetic women with operable invasive breast cancer received pre-operative metformin . A pilot cohort of eight patients had core biopsy of the cancer at presentation , a week later ( without treatment ; internal control ) , then following metformin 500-mg o.d . for 1 week increased to 1-g b.d . for a further week continued to surgery . A further 47 patients had core biopsy at diagnosis were r and omized to metformin ( the same dose regimen ) or no drug , and 2 weeks later had core biopsy at surgery . Ki67 immunohistochemistry , transcriptome analysis on formalin-fixed paraffin-embedded cores and serum insulin determination were performed blinded to treatment . Seven patients ( 7/32 , 21.9 % ) receiving metformin withdrew because of gastrointestinal upset . The mean percentage of cells staining for Ki67 fell significantly following metformin treatment in both the pilot cohort ( P = 0.041 , paired t-test ) and in the metformin arm ( P = 0.027 , Wilcoxon rank test ) but was unchanged in the internal control or metformin control arms . Messenger RNA expression was significantly downregulated by metformin for PDE3B ( phosphodiesterase 3B , cGMP-inhibited ; a critical regulator of cAMP levels that affect activation of AMP-activated protein kinase , AMPK ) , confirmed by immunohistochemistry , SSR3 , TP53 and CCDC14 . By ingenuity pathway analysis , the tumour necrosis factor receptor 1 ( TNFR1 ) signaling pathway was most affected by metformin : TGFB and MEKK were upregulated and cdc42 downregulated ; mTOR and AMPK pathways were also affected . Gene set analysis additionally revealed that p53 , BRCA1 and cell cycle pathways also had reduced expression following metformin . Mean serum insulin remained stable in patients receiving metformin but rose in control patients . This trial presents biomarker evidence for anti-proliferative effects of metformin in women with breast cancer and provides support for therapeutic trials of metformin\n\n\nTitle 3:\nPresurgical Trial of Metformin in Overweight and Obese Patients with Newly Diagnosed Breast Cancer\nAbstract 3:\nIntroduction : We conducted a presurgical trial to assess the tissue-related effects of metformin in overweight/obese breast cancer ( BC ) patients . Methods : Metformin 1,500 mg daily was administered to 35 nondiabetics with stage 0–III BC , body mass index ( BMI ) ≥ 25 kg/m2 . The primary endpoint was tumor proliferation change ( i.e. , ki-67 ) . Tumor proliferation change was compared to untreated historical controls , matched by age , BMI , and stage . 
Results : There was no reduction in ln(ki-67 ) after metformin ( p = .98 ) or compared to controls ( p = .47 ) . There was a significant reduction in BMI , cholesterol , and leptin . Conclusion : Despite no proliferation changes , we observed reductions in other relevant biomarkers\n\n\nTitle 4:\nMetformin intervention in obese non-diabetic patients with breast cancer: phase II randomized, double-blind, placebo-controlled trial\nAbstract 4:\nPrevious observational studies have suggested that metformin in diabetes patients may reduce breast cancer risk more than the reductions from other anti-diabetes medications . This randomized , double-blind , placebo-controlled trial was performed to evaluate the efficacy of metformin for controlling physical and metabolic profiles related to prognosis and adverse events in non-diabetic breast cancer patients . Female breast cancer patients ( N = 105 ) , at least 6 months post-mastectomy , with obesity ( ≥25 kg/m2 ) and /or pre-diabetes ( fasting blood sugar levels ≥100 mg/dL ) , were randomly assigned to three groups ( placebo , metformin 500 mg , and metformin 1000 mg ) stratified by tamoxifen use . A linear mixed model for repeated measurements among three groups and ANOVA for profile differences during 6 months of treatment were used for the intention-to-treat analysis . The metformin 1000 mg group had a significantly greater decline in glucose and HbA1c levels between treatment weeks 0 and 6 month ( p = 0.008 and 0.009 , respectively ) , and the declines increased with an increase in body mass index ( BMI ) level ( p interaction with BMI = 0.007 and 0.067 , respectively ) . A marginally significant different effect from the metformin 1000 mg treatment was detected for glucose and HbA1c levels ( p interaction = 0.084 and 0.063 , respectively ) in the intention-to-treat analysis . Metformin 1000 mg treatment had a favorable effect on controlling glucose and HbA1C levels in obese non-diabetic breast cancer patients , indicating prognostic importance . Further trials are needed to elucidate the risk – benefit ratio of long-term use of metformin\n\n\nTitle 5:\nDual effect of metformin on breast cancer proliferation in a randomized presurgical trial.\nAbstract 5:\nPURPOSE Metformin is associated with reduced breast cancer risk in observational studies in patients with diabetes , but clinical evidence for antitumor activity is unclear . The change in Ki-67 between pretreatment biopsy and post-treatment surgical specimen has prognostic value and may predict antitumor activity in breast cancer . PATIENTS AND METHODS After tumor biopsy , we randomly allocated 200 nondiabetic women with operable breast cancer to either metformin 850 mg/twice per day ( n = 100 ) or placebo ( n = 100 ) . The primary outcome measure was the difference between arms in Ki-67 after 4 weeks adjusted for baseline values . RESULTS Overall , the metformin effect on Ki-67 change relative to placebo was not statistically significant , with a mean proportional increase of 4.0 % ( 95 % CI , -5.6 % to 14.4 % ) 4 weeks apart . However , there was a different drug effect depending on insulin resistance ( homeostasis model assessment [ HOMA ] index > 2.8 , fasting glucose [ mmol/L ] × insulin [mU/L]/22.5 ; P(interaction ) = .045 ) , with a nonsignificant mean proportional decrease in Ki-67 of 10.5 % ( 95 % CI , -26.1 % to 8.4 % ) in women with HOMA more than 2.8 and a nonsignificant increase of 11.1 % ( 95 % CI , -0.6 % to 24.2 % ) with HOMA less than or equal to 2.8 . 
A different effect of metformin according to HOMA index was noted also in luminal B tumors ( P(interaction ) = .05 ) . Similar trends to drug effect modifications were observed according to body mass index ( P = .143 ) , waist/hip girth-ratio ( P = .058 ) , moderate alcohol consumption ( P = .005 ) , and C-reactive protein ( P = .080 ) . CONCLUSION Metformin before surgery did not significantly affect Ki-67 overall , but showed significantly different effects according to insulin resistance , particularly in luminal B tumors . Our findings warrant further studies of metformin in breast cancer with careful consideration to the metabolic characteristics of the study population\n\n\nTitle 6:\nThe Effects of Metformin and Weight Loss on Biomarkers Associated With Breast Cancer Outcomes\nAbstract 6:\nBackground This study investigated the effects of metformin and weight loss on biomarkers associated with breast cancer prognosis . Methods Overweight/obese postmenopausal breast cancer survivors ( n = 333 ) were randomly assigned to metformin vs placebo and to a weight loss intervention vs control ( ie , usual care ) . The 2 × 2 factorial design allows a single randomized trial to investigate the effect of two factors and interactions between them . Outcomes were changes in fasting insulin , glucose , C-reactive protein ( CRP ) , estradiol , testosterone , and sex-hormone binding globulin ( SHBG ) . The trial was powered for a main effects analysis of metformin vs placebo and weight loss vs control . All tests of statistical significance were two-sided . Results A total of 313 women ( 94.0 % ) completed the six-month trial . High prescription adherence ( ie , ≥80 % of pills taken ) ranged from 65.9 % of participants in the metformin group to 81.3 % of those in the placebo group ( P < .002 ) . Mean percent weight loss was statistically significantly higher in the weight loss group ( -5.5 % , 95 % CI = -6.3 % to -4.8 % ) compared with the control group ( -2.7 % , 95 % CI = -3.5 % to -1.9 % ) . Statistically significant group differences ( ie , percent change in metformin group minus placebo group ) were -7.9 % ( 95 % CI = -15.0 % to -0.8 % ) for insulin , -10.0 % ( 95 % CI = -18.5 % to -1.5 % ) for estradiol , -9.5 % ( 95 % CI = -15.2 % to -3.8 % ) for testosterone , and 7.5 % ( 95 % CI = 2.4 % to 12.6 % ) for SHBG . Statistically significant group differences ( ie , percent change in weight loss group minus placebo group ) were -12.5 % ( 95 % CI = -19.6 % to -5.3 % ) for insulin and 5.3 % ( 95 % CI = 0.2 % to 10.4 % ) for SHBG . Conclusions As adjuvant therapy , weight loss and metformin were found to be a safe combination strategy that modestly lowered estrogen levels and advantageously affected other biomarkers thought to be on the pathway for reducing breast cancer recurrence and mortality", "role": "user" }, { "content": "We did not observe non-linear changes in the dose – response relationship between metformin and biomarkers as outcomes .\nConclusions Breast cancer patients receiving metformin as treatment for diabetes showed significant reduction in levels of insulin , fasting glucose , CRP , HOMA , leptin , BMI , and Ki-67", "role": "assistant" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.7274
[ { "content": "Write a literature review chapter based on these paper abstracts.\nLiterature Review Title: A Survey on Urban Traffic Anomalies Detection Algorithms \nChapter Title: A. STATISTICAL APPROACHES \n1. Abstract of Cited Paper (BIB001): Novel methods for implementation of detector-level multivariate screening methods are presented. The methods use present data and classify data as outliers on the basis of comparisons with empirical cutoff points derived from extensive archived data rather than from standard statistical tables. In addition, while many of the ideas of the classical Hotelling's T2-statistic are used, modern statistical trend removal and blocking are incorporated. The methods are applied to intelligent transportation system data from San Antonio and Austin, Texas. These examples show how the suggested new methods perform with high-quality traffic data and apparently lower-quality traffic data. All algorithms were implemented by using the SAS programming language. \n2. Abstract of Cited Paper (BIB002): This paper studies statistical aggregation procedures in the regression setting. A motivating factor is the existence of many different methods of estimation, leading to possibly competing estimators. We consider here three different types of aggregation: model selection (MS) aggregation, convex (C) aggregation and linear (L) aggregation. The objective of (MS) is to select the optimal single estimator from the list; that of (C) is to select the optimal convex combination of the given estimators; and that of (L) is to select the optimal linear combination of the given estimators. We are interested in evaluating the rates of convergence of the excess risks of the estimators obtained by these procedures. Our approach is motivated by recently published minimax results [Nemirovski, A. (2000). Topics in non-parametric statistics. Lectures on Probability Theory and Statistics (Saint-Flour, 1998). Lecture Notes in Math. 1738 85-277. Springer, Berlin; Tsybakov, A. B. (2003). Optimal rates of aggregation. Learning Theory and Kernel Machines. Lecture Notes in Artificial Intelligence 2777 303-313. Springer, Heidelberg]. There exist competing aggregation procedures achieving optimal convergence rates for each of the (MS), (C) and (L) cases separately. Since these procedures are not directly comparable with each other, we suggest an alternative solution. We prove that all three optimal rates, as well as those for the newly introduced (S) aggregation (subset selection), are nearly achieved via a single \"universal\" aggregation procedure. The procedure consists of mixing the initial estimators with weights obtained by penalized least squares. Two different penalties are considered: one of them is of the BIC type, the second one is a data-dependent l 1 -type penalty. \n3. Abstract of Cited Paper (BIB003): Nonparametric Bayesian mixture models, in particular Dirichlet process (DP) mixture models, have shown great promise for density estimation and data clustering. Given the size of today's datasets, computational efficiency becomes an essential ingredient in the applicability of these techniques to real world data. We study and experimentally compare a number of variational Bayesian (VB) approximations to the DP mixture model. In particular we consider the standard VB approximation where parameters are assumed to be independent from cluster assignment variables, and a novel collapsed VB approximation where mixture weights are marginalized out. 
For both VB approximations we consider two different ways to approximate the DP, by truncating the stick-breaking construction, and by using a finite mixture model with a symmetric Dirichlet prior. \n4. Abstract of Cited Paper (BIB005): This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the L1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces. \n5. Abstract of Cited Paper (BIB006): Stochastic gradient descent is popular for large scale optimization but has slow convergence asymptotically due to the inherent variance. To remedy this problem, we introduce an explicit variance reduction method for stochastic gradient descent which we call stochastic variance reduced gradient (SVRG). For smooth and strongly convex functions, we prove that this method enjoys the same fast convergence rate as those of stochastic dual coordinate ascent (SDCA) and Stochastic Average Gradient (SAG). However, our analysis is significantly simpler and more intuitive. Moreover, unlike SDCA or SAG, our method does not require the storage of gradients, and thus is more easily applicable to complex problems such as some structured prediction problems and neural network learning. \n6. Abstract of Cited Paper (BIB007): Traffic data collections are exceedingly useful for road network management. Such collections are typically massive and are full of errors, noise and abnormal traffic behaviour. These abnormalities are regarded as outliers because they are inconsistent with the rest of the data. Hence, the problem of outlier detection (OD) is non-trivial. This paper presents a novel method for detecting outliers in large-scale traffic data by modelling the information as a Dirichlet process mixture model (DPMM). In essence, input traffic signals are truncated and mapped to a covariance signal descriptor, and the vector dimension is then further reduced by principal component analysis. This modified signal vector is then modelled by a DPMM. Traffic signals generally share heavy spatial-temporal similarities within signals or among various categories of traffic signals, and previous OD methods have proved incapable of properly discerning these similarities or differences. The contribution of this study is to represent real-world traffic data by a robust DPMM-based method and to perform an unsupervised OD to achieve a detection rate of 96.67% in a ten-fold cross validation. \n7. 
Abstract of Cited Paper (BIB007): It is meaningful to detect outliers in traffic data for traffic management. However, this is a massive task for people from large-scale database to distinguish outliers. In this paper, we present two methods: Kernel Smoothing Na\\\"ive Bayes (NB) method and Gaussian Mixture Model (GMM) method to automatically detect any hardware errors as well as abnormal traffic events in traffic data collected at a four-arm junction in Hong Kong. Traffic data was recorded in a video format, and converted to spatial-temporal (ST) traffic signals by statistics. The ST signals are then projected to a two-dimensional (2D) (x,y)-coordinate plane by Principal Component Analysis (PCA) for dimension reduction. We assume that inlier data are normal distributed. As such, the NB and GMM methods are successfully applied in outlier detection (OD) for traffic data. The kernel smooth NB method assumes the existence of kernel distributions in traffic data and uses Bayes' Theorem to perform OD. In contrast, the GMM method believes the traffic data is formed by the mixture of Gaussian distributions and exploits confidence region for OD. This paper would address the modeling of each method and evaluate their respective performances. Experimental results show that the NB algorithm with Triangle kernel and GMM method achieve up to 93.78% and 94.50% accuracies, respectively.\n\nLiterature Review:", "role": "user" }, { "content": "Statistical analysis models such as the Gaussian aggregation model BIB003 , principle component analysis BIB005 , stochastic gradient descent BIB006 , and Dirichlet Process Mixture BIB004 , are based on the fact that in general, inlier flows follow some statistical process represented by an alternative hypothesis and the outlier flows deviate from this statistical mechanism and respect to the null hypothesis. For detecting outliers in large-scale urban traffic data, Ngan et al. BIB007 proposed a Dirichlet Process Mixture Model (DPMM). The set of all flow values F = {f 1 , f 2 , . . . , f |F| } is projected into a n-dimensional space, where the i th dimension is defined by the flow values {f i , . . . , f i+w }, where w is the window length projection such as (1 ≤ w ≤ |F| and n = |F| + w). The n dimensions are then entered in a Principal Component Analysis (PCA) kernel to reduce and transform the traffic data space into a two-dimensional (2D) (x, y) coordinate plane. In this step, the covariance matrix among the variables of the n dimensions is computed, and the Eigenvalues are then determined and sorted from the highest to the lowest. This provides the dimensions in the order of significance. The two highest significance dimensions are considered while the rest are ignored. The obtained flow vector represented by the two dimensions is injected into the Dirichlet process to detect flow outliers. Thus, the clusters are estimated using G ∼ DP(H, α), with α being the concentration parameters, and H is the hypothesis base distribution defined by H = {φ. µ}, φ, and µ are the mixture density covariance and the mixture weights of the data, respectively. The clusters with a high number of flow values are considered as normal, and the other clusters are labeled as outliers. Lam et al. BIB008 proposed a Kernel Smoothing Naive Bayes (KSNB) approach to automatically determine any errors as well as abnormal traffic in data from Hong Kong. The authors assumed that inlier flow values followed a kernel smooth distribution. 
The KSNB model automatically determines regions formed by kernel distributions and then considers them as inliers. In contrast, any flow value outside of those regions is considered to be an outlier. The kernel estimator for the set of flow values F is defined with intercept b = F̄ − mT̄ , where F̄ and T̄ are the averages of the flows and of the corresponding time indices, respectively. If the score of F_i is greater than 1, then it is considered an outlier, otherwise, it is considered a normal flow value. Turochy and Smith proposed a Multivariate Statistical Quality Control (MSQC) approach for traffic congestion outlier detection. This approach took other traffic variables that contributed to the congested case, such as the average speed, and the occupancy rate, instead of using a single variable represented by the flow values. For more details about how to compute these variables, we refer the readers to BIB001 . The historical traffic flows T are fitted to the F-distribution F_{|T|, |T|−p}(α), where p is the number of variables (in this case set to 3), and α is the confidence significance level. When the new observation flows x = {x_flow , x_speed , x_occupancy } are detected, T with the corresponding x are projected to the F-distribution with the α value. If the alternative hypothesis is accepted then x is considered as normal flow, otherwise, the score of x is computed as (x − x̄)ᵀ S⁻¹ (x − x̄), where S is the covariance matrix defined by (x − x̄)(x − x̄)ᵀ . If the score is greater than the cutoff threshold, then x is considered to be an outlier, otherwise, it is a normal flow. Park et al. BIB002 proposed a Multiple Blocks on Multivariate Statistical Quality Control (MB-MSQC) approach to deal with the variability problem of flow during the hours of the day. For example, in almost all urban cities, there is an increase in traffic between 6:00 to 9:00 and 16:00 to 19:00. Thus, the set of flow values is grouped into five distinct blocks: (B_1 : 00:00 to 6:00, B_2 : 6:00 to 9:00, B_3 : 9:00 to 16:00, B_4 : 16:00 to 19:00, and B_5 : 19:00 to 00:00). Afterwards, the MSQC approach described above is independently applied on each block of flows. This algorithm has been tested on traffic data from San Antonio and Austin, USA. According to the authors, the results revealed the superiority of MB-MSQC compared to MSQC in terms of precision.", "role": "assistant" } ]
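To make the MSQC decision rule concrete, the following NumPy/SciPy sketch implements the Hotelling-style score and an F-distribution cutoff. It is an illustration, not the cited implementation: the cutoff scaling below is the textbook T² limit for a new observation, and the α level, variable set, and function name are assumptions.

```python
import numpy as np
from scipy import stats

def msqc_outlier(x_new, history, alpha=0.01):
    """Hotelling T^2-style congestion check of a new observation.

    history : (n, p) array of past [flow, speed, occupancy] vectors
              (assumed nonsingular sample covariance).
    Returns (score, cutoff, is_outlier), where
    score = (x - x_bar)^T S^{-1} (x - x_bar).
    """
    n, p = history.shape
    x_bar = history.mean(axis=0)
    S = np.cov(history, rowvar=False)            # sample covariance matrix
    d = np.asarray(x_new) - x_bar
    score = float(d @ np.linalg.solve(S, d))
    f_crit = stats.f.ppf(1.0 - alpha, p, n - p)  # F_{p, n-p} quantile
    cutoff = p * (n + 1) * (n - 1) / (n * (n - p)) * f_crit
    return score, cutoff, score > cutoff
```

MB-MSQC then amounts to maintaining one `history` matrix per time-of-day block (B_1 ... B_5 above) and running the same check against the block into which a new observation falls.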
science.bioasq_factoid_qa
science.bioasq_factoid_qa.362
[ { "content": "Answer the following biomedical question based on the given context. Provide only the answer without additional explanation.\n\nQuestion: Which species of bacteria did the mitochondria originate from?\n\nContext: Recently, α-proteobacteria have been shown to possess virus-like gene transfer agents that facilitate high frequency gene transfer in natural environments between distantly related lineages. This system could have driven the genomic integration of the mitochondrial progenitor and its proto-eukaryote host and contributed to the evolutionary mosaic of genes seen in modern-day prokaryotic and eukaryotic genomes.\nAlthough the Alphaproteobacteria are thought to be the closest relatives of the mitochondrial progenitor, there is dispute as to what its particular sister group is.\nMore detailed phylogenetic analyses with additional Alphaproteobacteria and including genes from the mitochondria of Reclinomonas americana found matches of mitochondrial genes to those of members of the Rickettsiaceae, Anaplasmataceae, and Rhodospirillaceae families.\nBiologists agree that the ancestor of mitochondria was an alpha-proteobacterium.\nMitochondria originated by permanent enslavement of purple non-sulphur bacteria.\nPhylogenetic analyses based on genes located in the mitochondrial genome indicate that these genes originated from within the alpha-proteobacteria.\nThe strong relationship with alpha-proteobacterial genes observed for some mitochondrial genes, combined with the lack of such a relationship for others, indicates that the modern mitochondrial proteome is the product of both reductive and expansive processes.\nAccumulating evolutionary data point to a monophyletic origin of mitochondria from the order Rickettsiales.\nEvolutionary analyses of proteins encoded in the genome contain the strongest phylogenetic evidence to date for the view that mitochondria descend from alpha-proteobacteria.\nThe functional profiles of these genes show similarities to those of mitochondrial genes: no genes required for anaerobic glycolysis are found in either R. prowazekii or mitochondrial genomes, but a complete set of genes encoding components of the tricarboxylic acid cycle and the respiratory-chain complex is found in R. prowazekii.\nPhylogenetic analyses indicate that R. prowazekii is more closely related to mitochondria than is any other microbe studied so far.\nThe phylogenetic analysis supports the hypothesis that mitochondria are derived from the alpha-proteobacteria and more specifically from within the Rickettsiaceae. We have estimated that the common ancestor of mitochondria and Rickettsiaceae dates back to more than 1500 million years ago.\nGOBASE also includes a fully reannotated genome sequence of Rickettsia prowazekii, one of the closest bacterial relatives of mitochondria, and will shortly expand to contain more data from bacteria from which organelles originated.\nThe genome sequence of Rickettsia prowazekii and the origin of mitochondria.\nAlthough mitochondria derive from alpha-proteobacteria, many proteins acting in this organelle did not originate from bacteria.", "role": "user" }, { "content": "Biologists agree that the ancestor of mitochondria was an alpha-proteobacterium.", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_pubmed
science.scientific_papers_summarization_single_doc_pubmed.77218
[ { "content": "Write an abstract for the research paper below.\n\nPaper text:\nproteins encoded by genomes generally function after adopting proper 3d structures . a rapid increase in the number of entirely sequenced genomes led to an unprecedented growth in the number of hypothetical proteins resulting from genome annotation .\nprotein structures and functions can be inferred from amino acid sequences by using advanced computer programs .\nthere is no doubt in the importance of structural and functional annotations of hypothetical proteins .\nthe gtop project was started in 1999 as reported ( 1 ) and was taken over by the dna data bank of japan ( 2 ) in 2007 , under which the database has been continuously updated .\ngtop is a database that provides protein annotation of 3d structures and functions based on similarity searches against pdb ( 3 ) , scop ( 4 ) , and swiss - prot ( 5 ) , 2d structure predictions , pfam ( 6 ) protein families , prosite ( 7 ) functional motifs , prediction of trans - membrane regions , and others .\nthere are several databases of the 3d structures of all the genome - encoded proteins .\nfor example , superfamily ( http://supfam.mrc-lmb.cam.ac.uk/superfamily/ ) ( 8) provides scop domain assignments to proteins encoded by completely sequenced genomes\n. a collection of comparative protein 3d structure models is available at modbase ( http://modbase.compbio.ucsf.edu/modbase-cgi/index.cgi ) ( 9 ) in some entirely sequenced genomes .\ngene3d ( http://gene3d.biochem.ucl.ac.uk/gene3d/ ) ( 10 ) makes public cath - based domain assignments and functional annotations to proteins in more than 500 genomes .\nfunctional and domain assignments including intrinsically disordered ( i d ) regions can be found at pedant ( http://pedant.gsf.de/ ) ( 11 ) . from the previous report\n, we have added a large body of data and tools to gtop , for example i d region assignments , exon information on eukaryotic proteins , an efficient mechanism to search within a user - specified set of genomes , and tools for phylogenetic profile search . since its inception ,\ngtop has employed a user - friendly interface to let the user grasp features of a query protein at a glance .\na gtop user can readily obtain comprehensive structural and functional data of all the proteins encoded by entirely sequenced genomes .\na list of the genomes stored in gtop is available at http://spock.genes.nig.ac.jp/~genome/org.html , together with the abbreviations of organism names used in the database . in the 2002 paper\nthe database has grown to cover a total of 797 genomes , with 41 , 466 , 114 and 176 genomes of archaea , eubacteria , eukaryota and bacteriophages , respectively .\nthe following data are subject to regular renewal : ( i ) amino acid sequences encoded by genomes newly sequenced after the previous update , ( ii ) amino acid sequences that existed in the previous version but were subsequently modified and ( iii ) reference databases such as pdb , scop , swiss - prot , prosite , and pfam whose new versions were released .\nthe sequences fallen in category ( ii ) were recalculated to keep annotations up - to - date . update category ( iii )\nis crucial to keep annotations up - to - date , because most annotations in gtop are obtained by homology search programs or those based on homology search .\nthe main focus of gtop is structural annotations made by homology searches against the pdb and scop databases . 
although gtop used psi - blast\n( 12 ) in the previous report , it now employs reverse - psi - blast ( 13 ) , as this method gives comparable results in drastically reduced computation time .\nhmm searches using the superfamily profiles ( 8) of scop domains were additionally conducted , as they are particularly effective in identifying small domains such as dna binding domains .\nfigure 1 presents a time course of the number of the genomes stored and the average fractions of proteins with 3d annotations made by blast and reverse - psi - blast .\nthe fraction of sequences with alignments to pdb shows a steadily increasing trend , reflecting the growth of the pdb database . the fraction aligned by reverse - psi - blast exceeds that by blast , reflecting the higher sensitivity of the former method .\nhowever , one should note that in this statistics a sequence is considered to be annotated if it has at least one pdb hit by blast or reverse psi - blast and it may have large tracts of structurally undetermined regions . when statistics is evaluated residue - wise , the fractions of regions aligned to pdb sequences in the latest version in human and escherichia coli proteins are 47% and 64% , respectively . \n\nfigure 1 . the time courses of the number of genomes included and the fraction of the sequences with homologs in the pdb .\nthe line graphs represent the ratios of the sequences with homologs in the pdb , while the column graph stands for the number of genomes in gtop .\nthe scales for the fraction and the number of genomes are shown at the right and left ends , respectively .\nthe blue , green , and red lines correspond to fruit fly , e. coli , and the overall average , respectively .\nthe solid and dotted lines respectively show the ratios obtained using reverse psi - blast , and those using blast .\nas most proteins do not entirely consist of structural domains , the fraction of residues with structural assignments will not reach unity ; outside of globular domains there exist i d regions that assume no specific 3d structures by themselves , and tend to contain active regions in proteins involved in crucial biological processes such as signal transduction and transcriptional regulation ( 1416 ) .\nrecent research revealed that i d regions exist predominantly on the cytoplasmic side of eukaryotic proteins ( 17 ) , play important roles in cell signaling , transcriptional control ( 18 ) .\nwe predicted i d regions in proteins stored in gtop by the disopred2 ( 19 ) program and presented them .\nfigure 2a shows a gtop screen shot of human androgen receptor , a typical protein with long i d regions .\nas this example illustrates , gtop graphically displays complex domain architectures of eukaryotic proteins composed of structural domains and i d regions . 
\n\n( a ) the domain assignments of the human androgen receptor are presented in color bars to facilitate intuitive grasp of molecular architecture of the protein .\nthis is a typical protein with long i d regions : the n - terminal half of the protein consists mainly of i d regions ( 18,22 ) , consistent with the i d regions predicted by disopred2 ( gray bars on the line marked by disopred ) .\n( b ) a structurally aligned region of the same protein is shown in the exon view . this page can be obtained by clicking on the characters \n1t7ra circled in figure 2a , and by clicking on the exon display and 3d ( jmol - applet ) buttons in the top section of the pop - up screen .\nthe existence of introns and exons is a unique feature of eukaryotic genes and the location of exon boundaries in the corresponding protein structure is of interest ( 20 ) .\nwe thus developed tools to display exon boundaries on amino acid sequences and 3d structures .\nthe exons are presented in 5 colors both in the 3d structure and the sequence displays , from which the boundaries can be clearly seen .\nwe developed a 3d viewing system incorporating jmol applet ( http://www.jmol.org/ ) so that the user can view 3d structures in the browser without installing additional software .\nexon information is also presented in green and blue stripes ( near the bottom of figure 2a ) .\ngtop strives to keep precomputed annotations of all the amino acid sequences of proteins derived from all the completely sequenced genomes .\none clear benefit of having precomputed annotations beside the rapidity of supplying information is to make inter - genomic comparative analyses possible .\nphylogenetic profile search is one analytical tool that exploits this advantage : a user - specified search produces the presence and absence pattern of features such as scop folds , superfamilies , and families , pfam domains , prosite motifs , and the number of trans - membrane helices . 
the user can conduct a search for a specific feature that are present in certain species and/or absent in others ; for example , a search for a scop domain present in all the eubacterial species and absent in all the eukaryotic species in gtop .\nthe summary section of gtop also offers comparative statistics , which has the ratio of 3d annotations in each genome , the frequencies of scop folds , superfamilies , and families , pfam domains and prosite motifs .\nthe tools for keyword , homology , and text searches in gtop were thus modified so that the user can reduce search time through selection of the genomes in which to conduct a search .\nthe user can easily specify organisms with the use of check boxes placed next to organism names .\nan annotation summary of each protein , consisting of abbreviated one - line descriptions , is saved in a master file .\nmaster file information for each protein is displayed below a gtop diagram of the type shown in figure 2a .\nall the available data of each genome have been compiled in one file , freely downloadable from ftp://spock.genes.nig.ac.jp / pub / gtop/. explanations of the meanings for each header can be found at http://spock.genes.nig.ac.jp/~genome/mas-doc.html .\ndespite the wealth of currently available structural data and use of sensitive programs , considerable fractions of most proteins have neither structural domains nor i d regions assigned .\nwe are currently developing a system to accurately classify the fraction into structural domains and i d regions .\nexcitingly this will result in reliable identification of structural domains whose 3d structures remain undetermined .\nwe expect that the installation of this system will provide further insights into the protein structure .\nwe are also considering incorporation of protein protein interaction data to enrich gtop further .\nthe gtop database is supported in part by the target protein research program from the ministry of education , culture , sports , science and technology of japan , and in part by the bioinformatics research and development project from the japan science and technology agency .\nfunding for open access publication charge : the ministry of education , culture , sports , science and technology of japan .", "role": "user" }, { "content": "the genomes to protein structures and functions ( gtop ) database ( http://spock.genes.nig.ac.jp/~genome/gtop.html ) freely provides an extensive collection of information on protein structures and functions obtained by application of various computational tools to the amino acid sequences of entirely sequenced genomes . \n gtop contains annotations of 3d structures , protein families , functions , and other useful data of a protein of interest in user - friendly ways to give a deep insight into the protein structure . from the initial 1999 version \n , gtop has been continually updated to reap the fruits of genome projects and augmented to supply novel information , in particular intrinsically disordered regions . \n as intrinsically disordered regions constitute a considerable fraction of proteins and often play crucial roles especially in eukaryotes , their assignments give important additional clues to the functionality of proteins . 
\n additionally , we have incorporated the following features into gtop : a platform independent structural viewer , results of hmm searches against scop and pfam , secondary structure predictions , color display of exon boundaries in eukaryotic proteins , assignments of gene ontology terms , search tools , and master files .", "role": "assistant" } ]
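As a concrete illustration of the phylogenetic profile search described in the paper text, the snippet below assumes the precomputed annotations have been collapsed into a boolean genome-by-feature table; the table layout and the function name are inventions for this example.

```python
import numpy as np

def profile_search(presence, genomes, features, required, forbidden):
    """Return features present in every `required` genome and
    absent from every `forbidden` genome.

    presence : boolean (len(genomes), len(features)) matrix, e.g. one
               column per scop fold/superfamily/family or pfam domain.
    """
    idx = {g: i for i, g in enumerate(genomes)}
    in_all = presence[[idx[g] for g in required]].all(axis=0)
    in_any = presence[[idx[g] for g in forbidden]].any(axis=0)
    mask = in_all & ~in_any
    return [f for f, keep in zip(features, mask) if keep]
```

A call such as `profile_search(P, orgs, folds, required=eubacteria, forbidden=eukaryotes)` reproduces the example query above: scop folds present in all eubacterial genomes and absent from all eukaryotic ones.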
science.chemsum_single_document_summarization
science.chemsum_single_document_summarization.61358
[ { "content": "Write an abstract for this chemistry paper.\n\nTitle:\nSynthesis of dihydroimidazole tethered imidazolinethiones and their activity as novel antagonists of the nuclear retinoic acid receptor-related orphan receptors (RORs)\n\nPaper sections:\n\n<p>The nuclear retinoic acid receptor-related orphan receptors (RORs) (NR1F1, NR1F2, and NR1F3), in particular ROR-gamma (RORγ ), have recently been the focus of attention to develop novel treatments of metabolic, inflammatory, and autoimmune diseases. Genetic studies demonstrated a key role for RORγ t (a truncated form of RORγ expressed exclusively in the thymus) in the differentiation of pro-inflammatory Th17 cells1; RORα was subsequently shown to also be essential for Th17 cell differentiation.2 A growing body of evidence suggests a pathogenic role for Th17 in several autoimmune diseases, including multiple sclerosis, rheumatoid arthritis, inflammatory bowel disease, type I diabetes, and psoriasis. 3 The discovery that the RORγ inverse agonists SR1001, digoxin, and ursolic acid could suppress Th17 differentiation and delay the onset of disease in an experimental autoimmune encephalomyelitis mouse model of multiple sclerosis,4–6 demonstrated that targeting RORγ and/or RORα might be an attractive strategy to treat inflammatory autoimmune diseases. During the last decade, numerous compounds have been discovered that modulate RORγ transcriptional activity, and are the focus of therapeutic developments by several pharmaceutical companies (reviewed in7). Here we report the synthesis and screening results of dihydroimidazole tethered imidazolidinethiones to identify novel antagonists of RORs. We generated CHO-K1 stable cell lines expressing a 5x-UAS-luciferase reporter and a Gal4-ROR-LBD for all three RORs (α , β , and γ ) under the control of a tetracycline repressor. These cell lines were used for the screening of all synthesized compounds and we identified several molecules with micromolar activity as RORγ /α dual antagonists, which will serve as starting point for future computer-guided combinatorial chemistry approaches to generate potent and selective RORγ inhibitors.</p><p>The synthesis of dihydroimidazole tethered imidazolidinethione compounds is outlined in Scheme 1.9 Starting from resin-bound orthogonally protected Fmoc-Lys(Boc)-OH 1, the Boc group was deprotected with a solution of trifluoroacetic (TFA) acid in dichloromethane (DCM) and the free ε-amine was acylated with a variety of Boc-amino acids (diversity R1) in the presence of diisopropylcarbodiimide (DICI) and benzotriazole (HOBt). The Boc group was deprotected and the generated amine was acylated with different commercially available carboxylic acids (diversity R2). The resin-bound polyamides 2 were treated with excess borane in tetyrahydrofurane to generate the corresponding resin-bound polyamines 3. The primary amine of the tetraamines 3 was selectively protected with 2-acetyldimedone (Dde-OH) and the resin-bound polyamines was treated with thiocarbonyldiimidazole to afford the corresponding imidazolidinthione 4. 
Following Dde deprotection with hydrazine in DMF, the free amine was acylated with different carboxylic acids (diversity R3) and the generated compound 5 was treated with phosphorus oxychloride (POCl3) to yield, following cleavage from the solid support, the corresponding dihydroimidazole tethered imidazolidinethione compounds 6.</p><p>We performed the synthesis of 116 individual compounds and evaluated them on TRex-CHO-luciferase-RORγ cells at 2 μg/ml; 8 compounds elicited significant toxicity as determined by resazurin assay (data not shown) and were discarded from further analysis. 30 compounds that inhibited doxycycline-dependent luciferase stimulation by 70% or more were titrated in a confirmatory assay against RORγ and the top 15 compounds were followed up with a second dose response with 6-point 2-fold serial dilutions starting at 8 μg/ml, which corresponds to 15–18.7 μM as maximum concentration. This secondary titration experiment was performed in all Gal4-RORγ , RORα , and RORβ stable cell lines and the IC50 values obtained are shown in Fig. 1.</p><p>Compound 6a showed the lowest IC50 towards RORγ (5.1 μM), but it lacked selectivity as it also inhibited transcriptional activity by RORα with comparable efficacy (6.4 μM) and RORβ with a slightly higher IC50 (8.3 μM). Compounds 6b and 6c also inhibited RORγ with similar potency (5.6 and 5.8 μM, respectively); whereas 6b was a dual RORγ /α antagonist with lower potency against RORβ , compound 6c showed a slight selectivity towards RORγ with a 2-fold lower IC50. Interestingly, all these compounds share the same R1 (L-Phe) and R2 (phenylacetic acid) groups, varying only in the R3 position. In general, most of the active compounds in this class could be categorized as dual RORγ /α antagonists, with moderately higher IC50 values against RORβ cells. Compounds 6n and 6o showed some selectivity towards RORγ , but their potency was compromised.</p>\n\nCell lines and luciferase assays\n<p>TRex-CHO-K1 cells (Thermo Scientific), which express the tetracycline repressor, were used to generate stable cell lines expressing a luciferase reporter gene under the control of a 5x-UAS response element (pGL-4.31 [luc2P/Gal4UAS/Hygro], Promega). Following transfection and a two-week selection with hygromycin, TRex-CHO-UAS-luc cells were transfected with a pcDNA4/TO expression vector (Thermo Scientific) encoding the Gal4-DBD-ROR-LBD fusion proteins containing the following ROR LBDs: mRORγ (NR1F1) (Pro-261-end), hRORα (NR1F2) (Pro-211-end), or rRORβ (Pro 205-end). The pcDNA4/TO expression vector allows for tetracycline-controlled expression of the corresponding Gal4-ROR proteins as well as for selection of stably transfected cells with Zeocin. Individual clones for each of the Gal4-RORγ , Gal4-RORα , and Gal4-RORβ cell lines were isolated using a standard serial dilution protocol. Clones with the highest doxycycline-dependent stimulation of luciferase activity were selected for all functional experiments described below. 
For TRex-CHO-Gal4-RORγ-Luc cells, we also selected for clones that showed sensitivity to SR2211 as reported in the literature.8 All cell lines were maintained in complete Ham's F12 medium containing 10% fetal bovine serum (FBS), 10 μg/ml blasticidine, 100 μg/ml hygromycin, and 100 μg/ml Zeocin, and were used within 2 months of culture without any loss of doxycycline-dependent stimulation of luciferase activity.</p><p>For screening dihydroimidazole tethered imidazolinethione compounds, cells were resuspended in Ham's F12 assay medium containing 2% charcoal-treated FBS and no selection drugs. Cells were seeded in 30 μl of assay medium at a density of 10,000 cells per well in 384-well white plates with clear bottom (Greiner). Cells were allowed to attach for 2 h, followed by compound treatment and stimulation with 20 ng/ml doxycycline for an additional 16–20 h. Appropriate solvent controls, negative controls with no doxycycline, and blanks with no cells were included in every plate. Prior to measuring luciferase activity, 6 μl of 0.15 mg/ml resazurin (Cayman Chem.) in PBS were added and cells were incubated for 1 h at 37 °C. Fluorescence intensity was measured at 545 nm excitation/590 nm emission using a CLARIOstar reader (BMG Labtech) as a measure of cell viability, to detect any potential cytotoxic effect of the test compounds. Following the resazurin assay, the medium was removed and 20 μl of BriteLite-Plus luciferase reagent (PerkinElmer) were added; luminescence was measured in a CLARIOstar reader using a 384 aperture spoon and a 0.1 s measuring interval. Luminescence was normalized by the fluorescence intensity and the percentage of inhibition was calculated with respect to control cells stimulated with doxycycline in the presence of solvent control (0.1% DMF). All primary screens were performed in triplicate with a final concentration of 2 μg/ml. Active compounds were subjected to a dose-response confirmatory assay followed by a second 6-point titration experiment with quadruplicate data points. IC50 values were calculated using GraphPad Prism (GraphPad Software).</p>", "role": "user" }, { "content": "Targeting the transcriptional activity of nuclear hormone receptors has proven an effective strategy to treat certain human diseases, and they have become a major focus point to develop novel therapies for the treatment of cancer, inflammation, autoimmune diseases, metabolic disorders, and others. One family of nuclear receptors that has attracted most interest in recent years is the retinoic acid receptor-related orphan receptors (RORs), in particular RORγ . RORγ is a critical regulator of the immune system and RORγ antagonists have shown activity in animal models of inflammatory autoimmune diseases. Here we present the synthesis and biological evaluation of dihydroimidazole tethered imidazolinethiones. We have identified several dual RORγ /α and pan-ROR antagonists with significant activity in cellular assays that could serve as starting points for future optimization efforts to generate potent and selective RORγ modulators.", "role": "assistant" } ]
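The readout analysis in the methods above (resazurin-normalized luminescence, percent inhibition against the doxycycline/DMF control, and an IC50 from the titration) can be sketched as follows. This is a minimal illustration: the four-parameter logistic model and all variable names are assumptions, since the text does not state which dose-response equation the Prism analysis used.

```python
import numpy as np
from scipy.optimize import curve_fit

def percent_inhibition(lum, fluo, lum_ctrl, fluo_ctrl):
    """Viability-normalized luminescence expressed as % inhibition
    relative to doxycycline-stimulated solvent (0.1% DMF) controls."""
    signal = np.asarray(lum) / np.asarray(fluo)
    control = np.mean(np.asarray(lum_ctrl) / np.asarray(fluo_ctrl))
    return 100.0 * (1.0 - signal / control)

def fit_ic50(conc, inhib):
    """Four-parameter logistic fit; returns IC50 in the units of `conc`."""
    def four_pl(x, bottom, top, ic50, hill):
        return bottom + (top - bottom) / (1.0 + (ic50 / x) ** hill)
    p0 = (0.0, 100.0, float(np.median(conc)), 1.0)
    popt, _ = curve_fit(four_pl, conc, inhib, p0=p0, maxfev=10000)
    return popt[2]
```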
science.qasper_extractive_qa
science.qasper_extractive_qa.805.1
[ { "content": "Read the following paper excerpts and answer the question. Output a JSON object with \"answer\" and \"evidence\" fields. The answer must be extracted directly from the paper text.\n\nIf the question cannot be answered from the given text, respond with \"null\".\n\nPaper: Towards Machine Comprehension of Spoken Content: Initial TOEFL Listening Comprehension Test by Machine\n\nMultimedia or spoken content presents more attractive information than plain text content, but it's more difficult to display on a screen and be selected by a user. As a result, accessing large collections of the former is much more difficult and time-consuming than the latter for humans. It's highly attractive to develop a machine which can automatically understand spoken content and summarize the key information for humans to browse over. In this endeavor, we propose a new task of machine comprehension of spoken content. We define the initial goal as the listening comprehension test of TOEFL, a challenging academic English examination for English learners whose native language is not English. We further propose an Attention-based Multi-hop Recurrent Neural Network (AMRNN) architecture for this task, achieving encouraging results in the initial tests. Initial results also have shown that word-level attention is probably more robust than sentence-level attention for this task with ASR errors.\n\nIntroduction\nWith the popularity of shared videos, social networks, online course, etc, the quantity of multimedia or spoken content is growing much faster beyond what human beings can view or listen to. Accessing large collections of multimedia or spoken content is difficult and time-consuming for humans, even if these materials are more attractive for humans than plain text information. Hence, it will be great if the machine can automatically listen to and understand the spoken content, and even visualize the key information for humans. This paper presents an initial attempt towards the above goal: machine comprehension of spoken content. In an initial task, we wish the machine can listen to and understand an audio story, and answer the questions related to that audio content. TOEFL listening comprehension test is for human English learners whose native language is not English. This paper reports how today's machine can perform with such a test.\nThe listening comprehension task considered here is highly related to Spoken Question Answering (SQA) BIBREF0 , BIBREF1 . In SQA, when the users enter questions in either text or spoken form, the machine needs to find the answer from some audio files. SQA usually worked with ASR transcripts of the spoken content, and used information retrieval (IR) techniques BIBREF2 or relied on knowledge bases BIBREF3 to find the proper answer. Sibyl BIBREF4 , a factoid SQA system, used some IR techniques and utilized several levels of linguistic information to deal with the task. Question Answering in Speech Transcripts (QAST) BIBREF5 , BIBREF6 , BIBREF7 has been a well-known evaluation program of SQA for years. However, most previous works on SQA mainly focused on factoid questions like “What is name of the highest mountain in Taiwan?”. Sometimes this kind of questions may be correctly answered by simply extracting the key terms from a properly chosen utterance without understanding the given spoken content. 
More difficult questions that cannot be answered without understanding the whole spoken content seemed rarely dealt with previously.\nWith the fast development of deep learning, neural networks have been successfully applied to speech recognition BIBREF8 , BIBREF9 , BIBREF10 or NLP tasks BIBREF11 , BIBREF12 . A number of recent efforts have explored various ways to understand multimedia in text form BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 . They incorporated attention mechanisms BIBREF16 with Long Short-Term Memory based networks BIBREF19 . In the Question Answering field, most of the works focused on understanding text documents BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 . Even though BIBREF24 tried to answer the question related to the movie, they only used the text and image in the movie for that. It seems that none of them have studied and focused on comprehension of spoken content yet.\n\nProposed Approach\nThe overall structure of the proposed model is shown in Fig 2 . The input of the model includes the transcriptions of an audio story, a question and four answer choices, all represented as word sequences. The word sequence of the input question is first represented as a question vector $V_Q$ in Section \"Experiments\" . With the question vector $V_Q$ , the attention mechanism is applied to extract the question-related information from the story in Section \"Story Attention Module\" . The machine then goes through the story by the attention mechanism several times and obtains an answer selection vector $V_{Q_n}$ in Section \"Hopping\" . This answer selection vector $V_{Q_n}$ is finally used to evaluate the confidence of each choice in Section \"Answer Selection\" , and the choice with the highest score is taken as the output. All the model parameters in the above procedure are jointly trained with the target being 1 for the correct choice and 0 otherwise.\n\nStory Attention Module\nFig. 3 (B) shows the attention mechanism which takes the question vector $V_Q$ obtained in Fig. 3 (A) and the story transcriptions as the input to encode the whole story into a story vector representation $V_{S}$ . The story transcription is a very long word sequence with many sentences, so we only show two sentences each with 4 words for simplicity. There is a bidirectional GRU in Fig 3 (B) encoding the whole story into a story vector representation $V_{S}$ . The word vector representation of the $t$ -th word $S_{t}$ is constructed by concatenating the hidden layer outputs of forward and backward GRU networks, that is $S_t = [y_{f}(t) \\Vert y_{b}(t)]$ . Then the attention value $\\alpha _t$ for each time index ${t}$ is the cosine similarity between the question vector $V_{Q}$ and the word vector representation $S_{t}$ of each word, $\\alpha _t = \\cos (V_{Q}, S_{t})$ . With attention values $\\alpha _t$ , there can be two different attention mechanisms, word-level and sentence-level, to encode the whole story into the story vector representations $V_{S}$ .\nWord-level Attention: We normalize all the attention values $\\alpha _t$ into $\\alpha _t^\\prime $ such that they sum to one over the whole story. Then all the word vectors $S_{t}$ from the bidirectional GRU network for every word in the story are weighted with this normalized attention value $\\alpha _{t}^\\prime $ and summed to give the story vector, that is $V_{S} = \\sum _{t}\\alpha _{t}^{\\prime }S_{t}$ .\nSentence-level Attention: Sentence-level attention means the model collects the information only at the end of each sentence.
Therefore, the normalization is only performed over those words at the end of the sentences to obtain $\\alpha _t^{\\prime \\prime }$ . The story vector representation is then $V_{S} = \\sum _{t=eos}\\alpha _t^{\\prime \\prime }*S_{t}$ , where only those words at the end of sentences (eos) contribute to the weighted sum. So $V_{S} = \\alpha _4^{\\prime \\prime }*S_4 + \\alpha _8^{\\prime \\prime }*S_8$ in the example of Fig. 3 .\n\nAnswer Selection\nAs in the upper part of Fig. 2 , the same way previously used to encode the question into $V_Q$ in Fig. 3 (A) is used here to encode the four choices into choice vector representations $V_A$ , $V_B$ , $V_C$ , $V_D$ . Then the cosine similarities between the output of the last hop $V_{Q_n}$ and the choice vectors are computed, and the choice with the highest similarity is chosen.\n\nBaselines\nWe compared the proposed model with some commonly used simple baselines in BIBREF24 and the memory network BIBREF16 . $\\bullet $ Choice Length: The most naive baseline is to select the choice based on the number of words in it, without listening to the stories or looking at the questions. This included: (i) selecting the longest choice, (ii) selecting the shortest choice or (iii) selecting the choice with the length most different from the rest of the choices. $\\bullet $ Within-Choices similarity: With the vector representations for the choices in pre-processing of Section \"Experimental Setup\" , we computed the cosine distance among the four choices and selected the one which is (i) the most similar to or (ii) the most different from the others.\n $\\bullet $ Question and Choice Similarity: With the vector representations for the choices and questions in pre-processing of Section \"Experimental Setup\" , the choice with the highest cosine similarity to the question is selected. $\\bullet $ Sliding Window BIBREF24 , BIBREF32 : This model tries to find a window of $W$ utterances in the story with the maximum similarity to the question. The similarity between a window of utterances and a question was the averaged cosine similarity of the utterances in the window and the question by their GloVe vector representations. After obtaining the window with the largest cosine similarity to the question, the confidence score of each choice is the average cosine similarity between the utterances in the window and the choice. The choice with the highest score is selected as the answer.\n $\\bullet $ Memory Network BIBREF16 : We implemented the memory network with some modifications for this task to find out if the memory network was able to deal with it. The original memory network didn't have an embedding module for the choices, so we used the module for the question in the memory network to embed the choices. Besides, in order to have the memory network select the answer out of four choices, instead of outputting a word as in its original version, we computed the cosine similarity between the output of the last hop and the choices to select the closest choice as the answer. We shared all the parameters of the embedding layers in the memory network to avoid overfitting. Without this modification, very poor results were obtained on the testing set. The embedding size of the memory network was set to 128, and stochastic gradient descent was used as in BIBREF16 with an initial learning rate of 0.01. Batch size was 40. The number of hops was tuned from 1 to 3 on the development set.\n\nResults\nWe used the accuracy (number of questions answered correctly / total number of questions) as our evaluation metric.
The results are shown in Table 1 . We trained the model on the manual transcriptions of the stories, and tested it on the testing set with both manual transcriptions (column labelled “Manual”) and ASR transcriptions (column labelled “ASR”).\n $\\bullet $ Choice Length: Part (a) shows the performance of three models for selecting the answer with the longest, shortest or most different length, ranging from 23% to 35%.\n $\\bullet $ Within Choices similarity: Part (b) shows the performance of two models for selecting the choice which is most similar to or the most different from the others. The accuracies are 36.09% and 27.87%, respectively.\n $\\bullet $ Question and Choice Similarity: In part (c), selecting the choice which is the most similar to the question only yielded 24.59%, very close to random guessing.\n $\\bullet $ Sliding Window: Part (d), the sliding window, is the first baseline model that considers the transcriptions of the stories. We tried the window sizes {1,2,3,5,10,15,20,30} and found the best window size to be 5 on the development set. This implies that the useful information for answering the questions probably lies within 5 sentences. The performance of 31.15% and 33.61% with and without ASR errors, respectively, shows how ASR errors affected the results, and the task here is too difficult for this approach to achieve good results.\n $\\bullet $ Memory Network: The results of the memory network in part (e) show that this task is relatively difficult for it, even though the memory network was successful in some other tasks. However, the performance of 39.17% accuracy was clearly better than all approaches mentioned above, and it's interesting that this result was independent of the ASR errors; the reason is under investigation. The performance was 31% accuracy when we didn't use the shared embedding layer in the memory network.\n $\\bullet $ AMRNN model: The results of the proposed model are listed in part (f), for the attention mechanism at word level and sentence level, respectively. Without the ASR errors, the proposed model with sentence-level attention gave an accuracy as high as 51.67%, and slightly lower for word-level attention. It's interesting that without ASR errors, sentence-level attention is about 2.5% higher than word-level attention. This is very possibly because getting the information from the whole sentence is more useful than listening carefully to every word, especially for the conceptual and high-level questions in this task. Paying too much attention to every single word may be a bit noisy. On the other hand, the 34.32% ASR errors affected the model at sentence level more than at word level. This is very possibly because the incorrectly recognized words may seriously change the meaning of whole sentences. However, with attention at word level, when a word is incorrectly recognized, the model may be able to pay attention to other correctly recognized words to compensate for ASR errors and still come up with the correct answer.\n\nAnalysis on a typical example\nFig. 4 shows the visualization of the attention weights obtained for a typical example story in the testing set, with the proposed AMRNN model using word-level or sentence-level attention on manual or ASR transcriptions, respectively. The darker the color, the higher the weights. Only a small part of the story is shown, where the response of the model made a good difference. This story was mainly talking about the thick cloud and some mysteries on Venus.
The question for this story is “What is a possible origin of Venus' clouds?” and the correct choice is “Gases released as a result of volcanic activity”. In the manual transcription cases (left half of Fig. 4 ), both models, with word-level or sentence-level attention, answered the question correctly and focused on the core informative words/sentences for the question. The sentence-level model successfully captured the sentence including “...volcanic eruptions often emit gases.”; while the word-level model captured some important key words like “volcanic eruptions”, “emit gases”. However, in the ASR cases (right half of Fig. 4 ), the ASR errors misled both models into putting some attention on irrelevant words/sentences. The sentence-level model focused on the irrelevant sentence “In other area, you got canyons...”; while the word-level model focused on some irrelevant words like “canyons” and “rift malaise”, but still captured some correct important words like “volcanic” or “eruptions” to answer correctly. From the darkness of the color, we can observe that the problem caused by ASR errors was more serious for the sentence-level attention when capturing the key concepts needed for the question. This may explain why, in part (f) of Table 1 , we find that the degradation caused by ASR errors was smaller for the word-level model than for the sentence-level model.\n\nFigure 2: The overall structure of the proposed Attention-based Multi-hop Recurrent Neural Network (AMRNN) model.\n\nTable 1: Accuracy results of different models\n\nQuestion: What is the new task proposed in this work?", "role": "user" }, { "content": "{\"answer\": [\" listening comprehension task \"], \"evidence\": [\"With the popularity of shared videos, social networks, online course, etc, the quantity of multimedia or spoken content is growing much faster beyond what human beings can view or listen to. Accessing large collections of multimedia or spoken content is difficult and time-consuming for humans, even if these materials are more attractive for humans than plain text information. Hence, it will be great if the machine can automatically listen to and understand the spoken content, and even visualize the key information for humans. This paper presents an initial attempt towards the above goal: machine comprehension of spoken content. In an initial task, we wish the machine can listen to and understand an audio story, and answer the questions related to that audio content. TOEFL listening comprehension test is for human English learners whose native language is not English. This paper reports how today's machine can perform with such a test.\", \"The listening comprehension task considered here is highly related to Spoken Question Answering (SQA) BIBREF0 , BIBREF1 . In SQA, when the users enter questions in either text or spoken form, the machine needs to find the answer from some audio files. SQA usually worked with ASR transcripts of the spoken content, and used information retrieval (IR) techniques BIBREF2 or relied on knowledge bases BIBREF3 to find the proper answer. Sibyl BIBREF4 , a factoid SQA system, used some IR techniques and utilized several levels of linguistic information to deal with the task. Question Answering in Speech Transcripts (QAST) BIBREF5 , BIBREF6 , BIBREF7 has been a well-known evaluation program of SQA for years. However, most previous works on SQA mainly focused on factoid questions like “What is name of the highest mountain in Taiwan?”.
Sometimes this kind of questions may be correctly answered by simply extracting the key terms from a properly chosen utterance without understanding the given spoken content. More difficult questions that cannot be answered without understanding the whole spoken content seemed rarely dealt with previously.\"]}", "role": "assistant" } ]
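The AMRNN record above describes its attention read-out only in prose. A minimal NumPy sketch of the two story encodings it defines is below; the array shapes, the toy random data, and the function names are illustrative assumptions, not details from the paper:

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity; the epsilon only guards against zero-length vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def word_level_story_vector(S, v_q):
    # S: (T, d) word encodings from the bidirectional GRU; v_q: (d,) question vector.
    # Raw cosines can be negative; as in the paper, they are normalized to sum to one.
    alpha = np.array([cosine(s, v_q) for s in S])
    alpha = alpha / alpha.sum()
    return alpha @ S                      # weighted sum -> story vector V_S

def sentence_level_story_vector(S, v_q, eos_idx):
    # Only the words that end sentences (eos_idx) receive attention weight.
    alpha = np.array([cosine(S[t], v_q) for t in eos_idx])
    alpha = alpha / alpha.sum()
    return alpha @ S[eos_idx]

rng = np.random.default_rng(0)
S = rng.normal(size=(8, 16))              # toy story: 8 words, 16-dim encodings
v_q = rng.normal(size=16)                 # toy question vector
print(word_level_story_vector(S, v_q).shape)              # (16,)
print(sentence_level_story_vector(S, v_q, [3, 7]).shape)  # (16,)
```

The only difference between the two read-outs is which positions may carry weight (every word, or only end-of-sentence words), matching the word-level/sentence-level contrast evaluated in Table 1 of the record above.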
science.scientific_papers_summarization_single_doc_pubmed
science.scientific_papers_summarization_single_doc_pubmed.76441
[ { "content": "Write an abstract for the research paper below.\n\nPaper text:\noral squamous cell carcinoma ( oscc ) is the most prevalent pathological type and accounts for over 80% of head and neck malignancies .\nthe majority of the oral cancer patients are diagnosed at an advanced clinical stage and the rate of survival has not changed . despite advances in the treatment , about 40% of these patients die from uncontrolled loco - regional disease alone and 24% show metastases to distant sites .\noral squamous papilloma is a benign epithelial neoplasm , with exophytic growth , most frequently found on the tongue and palate , and normally associated with infection by the human papilloma virus , mostly hpv-6 and hpv-11 .\nit is the most common benign tumor in the oral cavity , and its biological potential for malignant transformation is still matter of debate .\nherein , several studies have been conducted to identify putative biomarkers for better understanding the biological behavior of oral lesions ; particularly the ones that have potential for malignancy .\ngalectins are a family of non - integrin -galactoside - binding lectins with related amino acid sequences which are found in many animals .\nat least 14 members of the galectin family have been identified so far . they were cloned and classified into three subgroups based on their structure and number of carbohydrate - recognition domains : prototype ( galectins1 , 2 , 5 , 7 , 10 , 11 , 13 , and 14 ) , chimera type ( galectin-3 ) , and tandem repeat type ( galectins-6 , 8 , 9 , and 12 ) .\nit has been suggested that some members of the galectin family may participate in several biological processes including carcinogenesis . to date , the clinical significance of galectins in oral lesions , in special oral papillomas and oscc is not yet clear .\ntherefore , the aim of this study was to investigate the expression of galectins 1 , 3 and 9 in ordinary mucosa , oral papillomas , and osccs . 
to our knowledge , this is the first study in which the concomitant expression of these immunomarkers has been demonstrated in these oral lesions .\nthis was a retrospective study of tissue specimens from the oral cavity ( tongue ) in paraffin blocks from the archive of federal university of sao paulo ( unifesp ) , department of pathology , from 1996 to 2011 .\nthe use of these tissues for this research was approved by the unifesp / human ethical committee .\ncomparisons were made between the following groups : 41 samples of oscc ; 15 samples of oral squamous papilloma ; and 8 samples of oral normal epithelium obtained from adult patient necropsies .\ndiagnoses from histological typing of oral lesions and ordinary mucosa were made by two of the authors ( tah and gf ) .\nhematoxylin and eosin staining was carried out and serial sections were used for immunostaining of galectin 1 , 3 , and 9 proteins .\n, the sections were boiled in citrate buffer ( 2.94 g / l sodium citrate , ph 6.0 ) for 30 minutes and subsequently cooled to 30c .\nendogenous peroxidase activity was blocked by incubating the slides in methanol with 3% h2o2 for 20 minutes , followed by washing in phosphate - buffered saline ( pbs ; ph 7.4 ) .\nthe primary antibodies were diluted 1:100 for galectins 1 , 3 , and 9 , in 1% bovine serum albumin ( bsa ) , and sections were incubated for 16 hours at 4c .\nafter washing in pbs , the sections were incubated with secondary biotinylated antibody for 30 minutes with peroxidase - streptavidin conjugate ( dako lsab - hrp , denmark ) .\nthe sections were washed in pbs ( ph 7.4 ) and the proteins were visualized for light microscopy with dab reagent 0.06% 3,3-diaminobenzidine tetrahydroclhoride and 0.03% h2o2 in phosphate - citrate buffer ( sigma , usa ) .\nnegative controls were made by eliminating the primary antibody as established in previous study conducted by our group .\na total of 1000 epithelial cells were evaluated in 3 - 5 fields at 400 magnification .\nimmunohistochemical data were assessed by kruskal - wallis non - parametric test followed by dunn 's test . 
for all analyzes , it was adopted the value of p < 0.05 for statistical significance .\nthe results of the immunohistochemical expression of galectins 1 , 3 , and 9 are shown in figures 1 - 4 .\n( a ) normal oral epithelium ; ( b ) oral papilloma , and ( c ) oral squamous cell carcinoma representative immunostaining of galectin-3 .\n( a ) normal oral epithelium ; ( b ) oral papilloma , and ( c ) oral squamous cell carcinoma representative immunostaining of galectin-9 .\n( a ) normal oral epithelium ; ( b ) oral papilloma , and ( c ) oral squamous cell carcinoma number of positive cells for galectins 1 , 3 , and 9 .\np < 0.05 when compared to control ( normal oral mucosa ) immunohistochemical analysis demonstrated weak expression of the galectin-1 for normal mucosa , oral papilloma and oscc either to nucleus or to cytoplasm of epithelial cells .\nregarding galectin-3 , immunoexpression was detected in the nucleus or cytoplasm of epithelial cells [ figure 2 ] .
as can be seen from figure 4 , significant statistically differences ( p < 0.05 ) were noticed to oral papilloma when compared to normal mucosa .\nocss also demonstrated a decreased galectin-3 immunoexpression when compared to negative control [ figure 4 ] .\nsignificant statistically differences ( p < 0.05 ) were noticed when compared to negative control [ figure 4 ] .\ngalectin-9 immunoexpression was weakly detected in the cytoplasm of epithelial cells [ figure 3 ] .\nthe aim of this study was to characterize the protein expression of galectins 1 , 3 , and 9 in normal oral mucosa , oral squamous papillomas , and osccs to better understand the pathogenesis of these oral lesions .\ngalectins comprise a family of animal lectins . at present , 15 types have been identified in humans .\nthese proteins participate in different cell functions , such as cell cycle , adhesion and differentiation , apoptosis , and immune responses , as well as in cancer development and progression .\nseveral studies have demonstrated the involvement of galectins in tumorigenesis and have emphasized their diagnostic and prognostic value , particularly for squamous cell carcinoma of the head and neck .\nsome investigations have shown the overexpression of galectin-1 in squamous cell carcinoma of the head and neck when compared to normal mucosa .\nthis fact was not detected in this study because no significant statistically differences were noticed between groups .\nan earlier study conducted by de vasconcelos carvalho et al . has observed higher expression of galectin-1 in high - risk oral epithelial dysplasia .\nour results demonstrated that the immunoexpression of galectin-1 was weak in ordinary mucosa . in oral squamous papilloma\ncindolo et al . found that increased expression of galectin-1 in epithelial cells of endometrial carcinoma was associated with a higher degree of differentiation .\ntaken together , these results indicate that galectin-1 is not differentially expressed in oral squamous papilloma and oscc .\nit can translocate to the nucleus from the cytoplasm via non - classical secretion pathway after it is synthesized on the cytoplasmic ribosomes .\nthis protein is expressed by human neutrophils , macrophages , mast cells , and langerhans cells , through which it could also be involved in inflammatory processes .\nmoreover , it has been associated with various biological processes , including cell adhesion , recognition , proliferation , differentiation , immunomodulation , angiogenesis , and apoptosis and it can not be a reliable marker for cancer aggressiveness and metastasis .\nour study showed differential galectin-3 expression in the oral epithelium being a lower expression in oral squamous papillomas and oscc when compared to normal mucosa .\nsuch findings are fully in line with other studies that demonstrated a reduction in the expression of galectin-3 in intraepithelial and cervical carcinoma specimens , ovarian carcinoma , and uterine adenocarcinomas when compared to normal tissues .\nthis apparent reduced expression of galectin-3 may indicate early biomolecular alterations in altered tissue . 
according to liu and rabinovich\n, there is evidence indicating that the inhibition of galectin-3 expression plays a pivotal role on the onset of the phenotypic transformation of tumors as well as growth .\nhowever , overexpression of galectin-3 has been detected in other tumors such as pancreatic , gastric , thyroid , renal , and cell interaction , particularly in the upper layers of the human epidermis .\nothers have yet assumed that this cell - cell interaction is necessary for the control of normal tissue growth .\ngalectin-9 exhibits eca activity in vitro and in vivo . activates eosinophils , and it is an eosinophil chemoattractant .\ngalectin-9 is correlated with cellular adhesion and aggregation in some tumor cells . in this study , galectin -9\nindependent of its biological phenomemon involved in this process , we believe that galectin-9 is not involved in understanding the behavior of oral lesions .\nthe immunohistochemical findings in the lesions investigated in the present study suggest that the biological behavior of osccs of the mouth may be associated with downregulation of galectin-3 .\ntherefore , we conclude that galectin-3 may be a suitable immunomarker for predicting malignant transformation of oral mucosa cells .", "role": "user" }, { "content": "background : the aim of this study was to characterize the immunohistochemical expression of galectin 1 , 3 , and 9 in normal oral epithelium , oral squamous papilloma , and oral squamous cell carcinoma.materials and methods : immunohistochemical staining for galectins 1 , 3 , and 9 was evaluated in 8 samples of normal oral squamous epithelium , 15 samples of oral squamous papilloma , and 41 samples of oral squamous cell carcinoma . \n immunohistochemical data were assessed by kruskal - wallis non - parametric test followed by dunn 's test . for all analyzes , it was adopted the value of p < 0.05 for statistical significance.results:significant differences were found in galectin- 3 expression when comparing ordinary mucosa and oral squamous papilloma with the oral squamous cell carcinoma samples.conclusion:these findings indicate that galectin-3 is closely involved in malignant transformation of oral mucosa cells .", "role": "assistant" } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.23408
[ { "content": "Classify the topics of this COVID-19 literature paragraph by selecting from these categories:\n\n- Epidemic Forecasting: Predictions about COVID-19 spread\n- Treatment: Therapeutic methods or medication\n- Prevention: Measures to prevent spread\n- Mechanism: Biological mechanisms of the virus\n- Case Report: Report of a COVID-19 case\n- Transmission: How COVID-19 is transmitted\n- Diagnosis: Diagnosing COVID-19 in individuals\n\nYour output should be a valid JSON array with topics in double quotes and separated by commas. No additional text should be included.\n\nArticle:\n\nLittle research on coronaviruses has been conducted on wild animals in Africa. Here, we screened a wide range of wild animals collected in six provinces and five caves of Gabon between 2009 and 2015. We collected a total of 1867 animal samples (cave-dwelling bats, rodents, non-human primates and other wild animals). We explored the diversity of CoVs and determined the factors driving the infection of CoVs in wild animals. Based on a nested reverse transcription-polymerase chain reaction, only bats, belonging to the Hipposideros gigas (4/156), Hipposideros cf. ruber (13/262) and Miniopterus inflatus (1/249) species, were found infected with CoVs. We identified alphacoronaviruses in H. gigas and H. cf. ruber and betacoronaviruses in H. gigas. All Alphacoronavirus sequences grouped with Human coronavirus 229E (HCoV-229E). Ecological analyses revealed that CoV infection was significantly found in July and October in H. gigas and in October and November in H. cf ruber. The prevalence in the Faucon cave was significantly higher. Our findings suggest that insectivorous bats harbor potentially zoonotic CoVs; highlight a probable seasonality of the infection in cave-dwelling bats from the North-East of Gabon and pointed to an association between the disturbance of the bats' habitat by human activities and CoV infection.", "role": "user" }, { "content": "[\"Mechanism\"]", "role": "assistant" } ]
science.pico_ner
science.pico_ner.541
[ { "content": "Find entities of these types in the text: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nOutput a JSON object with entity types as keys and lists of found entities as values.\n\nText:\n[ The problems of prophylactic chemotherapy , the second-look operation and the maintenance of the remission in the treatment of ovarian cancer ( author 's transl ) ] . The results of the treatment of 1022 ovarian cancers were reviewed and the problems of prophylactic chemotherapy , a second look operation and the maintenance of the remission were studied . In ovarian cancers of stage 1 and stage 2 , a post-operative prophylactic chemotherapy is useful only in cases with tumor cells in the secretions of the pouch of Douglas or in the ascites and in cases where a sensitivity to the chemotherapy can be assumed . In about 50 % of primarily inoperable stage 3 ovarian cancers , the random treatment with cyclophosphamide ( Endoxan ) results in a clinical remission . A significant five year cure rate can only be obtained if the uterus and the adnexa are removed . Radical operation during the remission appears to be very important for survival of the patient . For the maintenance of the remission continuous chemotherapy for at least 2 years following treatment by operation and radiotherapy is necessary .", "role": "user" }, { "content": "{\"Intervention\": [\"post-operative prophylactic chemotherapy\", \"cyclophosphamide ( Endoxan )\", \"radiotherapy\", \"chemotherapy\", \"prophylactic chemotherapy\"], \"Outcome\": [\"rate\", \"cure\", \"clinical remission\", \"survival\"], \"Participant\": [\"stage 1\", \"stage 2\", \"ovarian cancers\", \"1022 ovarian cancers\", \"inoperable stage 3 ovarian cancers\"]}", "role": "assistant" } ]
science.scicite_classification
science.scicite_classification.2647
[ { "content": "Classify the intent of this citation sentence by choosing from: [\"Method\", \"Background\", \"Result\"].\n\nMethod: Cites for methodology\nBackground: Cites for background information\nResult: Cites for findings\n\nCitation: These anthropogenic sounds are generally more continuous in quality than sounds produced naturally in the environment (Wood & Yezerinac 2006; Slabbekoorn & Ripmeester 2008; Pijanowski et al. 2011).", "role": "user" }, { "content": "Background", "role": "assistant" } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.7517
[ { "content": "Classify the topics of this COVID-19 literature paragraph by selecting from these categories:\n\n- Epidemic Forecasting: Predictions about COVID-19 spread\n- Treatment: Therapeutic methods or medication\n- Prevention: Measures to prevent spread\n- Mechanism: Biological mechanisms of the virus\n- Case Report: Report of a COVID-19 case\n- Transmission: How COVID-19 is transmitted\n- Diagnosis: Diagnosing COVID-19 in individuals\n\nYour output should be a valid JSON array with topics in double quotes and separated by commas. No additional text should be included.\n\nArticle:\n\nOBJECTIVES: CT findings of COVID-19 look similar to other atypical and viral (non-COVID-19) pneumonia diseases. This study proposes a clinical computer-aided diagnosis (CAD) system using CT features to automatically discriminate COVID-19 from non-COVID-19 pneumonia patients. METHODS: Overall, 612 patients (306 COVID-19 and 306 non-COVID-19 pneumonia) were recruited. Twenty radiological features were extracted from CT images to evaluate the pattern, location, and distribution of lesions of patients in both groups. All significant CT features were fed in five classifiers namely decision tree, K-nearest neighbor, naive Bayes, support vector machine, and ensemble to evaluate the best performing CAD system in classifying COVID-19 and non-COVID-19 cases. RESULTS: Location and distribution pattern of involvement, number of the lesion, ground-glass opacity (GGO) and crazy-paving, consolidation, reticular, bronchial wall thickening, nodule, air bronchogram, cavity, pleural effusion, pleural thickening, and lymphadenopathy are the significant features to classify COVID-19 from non-COVID-19 groups. Our proposed CAD system obtained the sensitivity, specificity, and accuracy of 0.965, 93.54%, 90.32%, and 91.94%, respectively, using ensemble (COVIDiag) classifier. CONCLUSIONS: This study proposed a COVIDiag model obtained promising results using CT radiological routine features. It can be considered an adjunct tool by the radiologists during the current COVID-19 pandemic to make an accurate diagnosis. KEY POINTS: * Location and distribution of involvement, number of lesions, GGO and crazy-paving, consolidation, reticular, bronchial wall thickening, nodule, air bronchogram, cavity, pleural effusion, pleural thickening, and lymphadenopathy are the significant features between COVID-19 from non-COVID-19 groups. * The proposed CAD system, COVIDiag, could diagnose COVID-19 pneumonia cases with an AUC of 0.965 (sensitivity = 93.54%; specificity = 90.32%; and accuracy = 91.94%). * The AUC, sensitivity, specificity, and accuracy obtained by radiologist diagnosis are 0.879, 87.10%, 88.71%, and 87.90%, respectively.", "role": "user" }, { "content": "[\"Diagnosis\"]", "role": "assistant" } ]
science.craftchem_ner
science.craftchem_ner.9813
[ { "content": "Extract all chemical entities (ChEBI) from this biomedical text. Output as JSON array.\n\nText: Pygo1+/-/Pygo2+/+,Pygo1+/-/Pygo2+/-andPygo1+/-/Pygo2+/+kidneysallshowedsimilarlevelsofBAT-galexpression.", "role": "user" }, { "content": "[]", "role": "assistant" } ]
science.covidfact_entailment
science.covidfact_entailment.199
[ { "content": "Based on the abstract below, determine if the claim is supported or contradicted. Output a JSON object with the verdict (\"SUPPORT\" or \"CONTRADICT\") and evidence sentences from the abstract.\n\nClaim: Alpha-1 antitrypsin inhibits sars-cov-2 infection\n\nText:\nSevere acute respiratory syndrome coronavirus 2 (SARS-CoV-2) causes coronavirus disease 2019 (COVID-19).\nTo identify factors of the respiratory tract that suppress SARS-CoV-2, we screened a peptide/protein library derived from bronchoalveolar lavage, and identified α1-antitrypsin (α1-AT) as specific inhibitor of SARS-CoV-2.\nα1-AT targets the viral spike protein and blocks SARS-CoV-2 infection of human airway epithelium at physiological concentrations.\nOur findings show that endogenous α1-AT restricts SARS-CoV-2 and repurposes α1-AT-based drugs for COVID-19 therapy.", "role": "user" }, { "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"To identify factors of the respiratory tract that suppress SARS-CoV-2, we screened a peptide/protein library derived from bronchoalveolar lavage, and identified \\u03b11-antitrypsin (\\u03b11-AT) as specific inhibitor of SARS-CoV-2.\", \"\\u03b11-AT targets the viral spike protein and blocks SARS-CoV-2 infection of human airway epithelium at physiological concentrations.\", \"Our findings show that endogenous \\u03b11-AT restricts SARS-CoV-2 and repurposes \\u03b11-AT-based drugs for COVID-19 therapy.\"]}", "role": "assistant" } ]
science.data_reco_mcq_sc
science.data_reco_mcq_sc.6666
[ { "content": "Which dataset would be most relevant for this research query?\n\nQuery: In this paper, we study three argumentation mining tasks namely argument component identification, stance detection\n\nKeyphrases: argumentation mining text\n\nDataset descriptions:\nCIC: The dataset is annotated with stance towards one topic, namely, the independence of Catalonia.\nWikiQA: The WikiQA corpus is a publicly available set of question and sentence pairs, collected and annotated for research on open-domain question answering. In order to reflect the true information need of general users, Bing query logs were used as the question source.\nM2E2: Aims to extract events and their arguments from multimedia documents. Develops the first benchmark and collect a dataset of 245 multimedia news articles with extensively annotated events and arguments.\nCDCP: The Cornell eRulemaking Corpus – CDCP is an argument mining corpus annotated with argumentative structure information capturing the evaluability of arguments. The corpus consists of 731 user comments on Consumer Debt Collection Practices (CDCP) rule by the Consumer Financial Protection Bureau (CFPB); the resulting dataset contains 4931 elementary unit and 1221 support relation annotations.\n\nOptions: A: CIC B: WikiQA C: M2E2 D: CDCP", "role": "user" }, { "content": "WikiQA", "role": "assistant" } ]
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.14193
[ { "content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nWe release a dataset of 65646 StarCraft replays that contains 1535 million frames and 496 million player actions. We provide full game state data along with the original replays that can be viewed in StarCraft. The game state data was recorded every 3 frames which ensures suitability for a wide variety of machine learning tasks such as strategy classification, inverse reinforcement learning, imitation learning, forward modeling, partial information extraction, and others. We use TorchCraft to extract and store the data, which standardizes the data format for both reading from replays and reading directly from the game. Furthermore, the data can be used on different operating systems and platforms. The dataset contains valid, non-corrupted replays only and its quality and diversity was ensured by a number of heuristics. We illustrate the diversity of the data with various statistics and provide examples of tasks that benefit from the dataset. We make the dataset available at this https URL . En Taro Adun!\n\nCited Abstract(s):\n[1]: Developing computer-controlled groups to engage in combat, control the use of limited resources, and create units and buildings in real-time strategy (RTS) games is a novel application in game AI. However, tightly controlled online commercial game pose challenges to researchers interested in observing player activities, constructing player strategy models, and developing practical AI technology in them. Instead of setting up new programming environments or building a large amount of agentpsilas decision rules by playerpsilas experience for conducting real-time AI research, the authors use replays of the commercial RTS game StarCraft to evaluate human player behaviors and to construct an intelligent system to learn human-like decisions and behaviors. A case-based reasoning approach was applied for the purpose of training our system to learn and predict player strategies. Our analysis indicates that the proposed system is capable of learning and predicting individual player strategies, and that players provide evidence of their personal characteristics through their building construction order.\n\n[2]: StarCraft is a real-time strategy (RTS) game and the choice of strategy has big impact on the final results of the game. For human players, the most important thing in the game is to select the strategy in the early stage of the game. Also, it is important to recognize the opponent's strategy as quickly as possible. Because of the “fog-of-war” in the game, the player should send a scouting unit to opponent's hidden territory and the player predicts the types of strategy from the partially observed information. Usually, expert players are familiar with the relationships between two build orders and they can change the current build order if his choice is not strong to the opponent's strategy. However, players in AI competitions show quite different behaviors compared to the human leagues. For example, they usually have a pre-selected build order and rarely change their order during the game. In fact, the computer players have little interest in recognizing opponent's strategy and scouting units are used in a limited manner. The reason is that the implementation of scouting behavior and the change of build order from the scouting vision is not a trivial problem. 
In this paper, we propose to use replays to predict the strategy of players and make decisions on the change of build orders. Experimental results on the public replay files show that the proposed method predicts the opponent's strategy accurately and increases the chance of winning in the game.\n\n[3]: We present a data mining approach to opponent modeling in strategy games. Expert gameplay is learned by applying machine learning techniques to large collections of game logs. This approach enables domain independent algorithms to acquire domain knowledge and perform opponent modeling. Machine learning algorithms are applied to the task of detecting an opponent's strategy before it is executed and predicting when an opponent will perform strategic actions. Our approach involves encoding game logs as a feature vector representation, where each feature describes when a unit or building type is first produced. We compare our representation to a state lattice representation in perfect and imperfect information environments and the results show that our representation has higher predictive capabilities and is more tolerant of noise. We also discuss how to incorporate our data mining approach into a full game playing agent.\n\n[4]: This paper advocates the exploration of the full state of recorded real-time strategy (RTS) games, by human or robotic players, to discover how to reason about tactics and strategy. We present a dataset of StarCraft games encompassing most of the games' state (not only players' orders). We explain one of the possible usages of this dataset by clustering armies on their compositions. This reduction of army compositions to mixtures of Gaussians allows for strategic reasoning at the level of the components. We evaluated this clustering method by predicting the outcomes of battles based on the mixture components of army compositions.\n\n[5]: In order to experiment with machine learning and data mining techniques in the domain of Real-Time Strategy games such as StarCraft, a dataset is required that captures the complex detail of the interactions taking place between the players and the game. This paper describes a new extraction process by which game data is extracted both directly from game log (replay) files, and indirectly through simulating the replays within the StarCraft game engine. Data is then stored in a compact, hierarchical, and easily accessible format. This process is applied to a collection of expert replays, creating a new standardised dataset. The data recorded is enough for almost the complete game state to be reconstructed, from either player's viewpoint, at any point in time (to the nearest second). This process has revealed issues in some of the source replay files, as well as discrepancies in prior datasets. Where practical, these errors have been removed in order to produce a higher-quality reusable dataset.", "role": "user" }, { "content": "The existing StarCraft datasets can be subdivided into two groups based on the extracted data type (Table ). Datasets from the first group, i.e. [1], [3] and [2], focus on specific aspects of the game and contain data that can only be used in a particular context. Datasets from the second group, e.g. [4] and [5], contain general-purpose full state data and are not restricted to any particular scope.", "role": "assistant" } ]
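Cited abstract [3] above encodes each game log as a feature vector in which every entry is the time a unit or building type is first produced. A minimal sketch of that representation follows; the event format, the unit names, and the infinity sentinel for never-produced types are illustrative assumptions:

```python
from collections import defaultdict

def build_order_features(events, unit_types):
    """Encode a replay as first-production times, one feature per type.

    `events` is a list of (frame, unit_type) production events; types that
    are never produced keep an infinite sentinel time.
    """
    first = defaultdict(lambda: float("inf"))
    for frame, unit in events:
        first[unit] = min(first[unit], frame)
    return [first[u] for u in unit_types]

unit_types = ["gateway", "cybernetics_core", "stargate", "zealot", "dragoon"]
replay = [(480, "gateway"), (900, "zealot"), (1100, "cybernetics_core"), (1600, "dragoon")]
print(build_order_features(replay, unit_types))
# [480, 1100, inf, 900, 1600]
```

Vectors built this way can be fed to any off-the-shelf classifier, which is the strategy-prediction setup that abstract describes.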
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.7230
[ { "content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nRecurrent neural networks are strong dynamic systems, but they are very sensitive to their hyper-parameter configuration. Moreover, training properly a recurrent neural network is a tough task, therefore selecting an appropriate configuration is critical. There have been proposed varied strategies to tackle this issue, however most of them are still impractical because of the time resources needed. In this study, we propose a low computational cost model to evaluate the expected performance of a given architecture based on the distribution of the error of random samples. We validate empirically our proposal using three use case.\n\nCited Abstract(s):\n[1]: Countless learning tasks require dealing with sequential data. Image captioning, speech synthesis, and music generation all require that a model produce outputs that are sequences. In other domains, such as time series prediction, video analysis, and musical information retrieval, a model must learn from inputs that are sequences. Interactive tasks, such as translating natural language, engaging in dialogue, and controlling a robot, often demand both capabilities. Recurrent neural networks (RNNs) are connectionist models that capture the dynamics of sequences via cycles in the network of nodes. Unlike standard feedforward neural networks, recurrent networks retain a state that can represent information from an arbitrarily long context window. Although recurrent neural networks have traditionally been dicult to train, and often contain millions of parameters, recent advances in network architectures, optimization techniques, and parallel computation have enabled successful large-scale learning with them. In recent years, systems based on long short-term memory (LSTM) and bidirectional (BRNN) architectures have demonstrated ground-breaking performance on tasks as varied as image captioning, language translation, and handwriting recognition. In this survey, we review and synthesize the research that over the past three decades rst yielded and then made practical these powerful learning models. When appropriate, we reconcile conicting notation and nomenclature. Our goal is to provide a selfcontained explication of the state of the art together with a historical perspective and references to primary research.", "role": "user" }, { "content": "RNNs are feedforward neural networks extended by incorporating edges that span adjacent time steps, introducing a notion of time to the model. These special type of edges, known as recurrent edges, may form cycles and connect a node to itself (self-connections) across time. Thus, at a time @math , a node connected to recurrent edges receives input from the current data point @math also from hidden node @math (the previous state of the network). The output @math at each time @math is computed according to the hidden node values at time @math ( @math ). Input at time @math ( @math ) can determine the output at time @math ( @math ) by way of recurrent connections [1] .", "role": "assistant" } ]
science.data_reco_mcq_sc
science.data_reco_mcq_sc.10467
[ { "content": "Which dataset would be most relevant for this research query?\n\nQuery: An effective moving object detection and tracking system based on the combined background subtraction and frame differen\n\nKeyphrases: moving object detection tracking stereo images\n\nDataset descriptions:\nMiddlebury: The Middlebury Stereo dataset consists of high-resolution stereo sequences with complex geometry and pixel-accurate ground-truth disparity data. The ground-truth disparities are acquired using a novel technique that employs structured lighting and does not require the calibration of the light projectors.\nLUNA: The LUNA challenges provide datasets for automatic nodule detection algorithms using the largest publicly available reference database of chest CT scans, the LIDC-IDRI data set. In LUNA16, participants develop their algorithm and upload their predictions on 888 CT scans in one of the two tracks: 1) the complete nodule detection track where a complete CAD system should be developed, or 2) the false positive reduction track where a provided set of nodule candidates should be classified.\nUA-DETRAC: Consists of 100 challenging video sequences captured from real-world traffic scenes (over 140,000 frames with rich annotations, including occlusion, weather, vehicle category, truncation, and vehicle bounding boxes) for object detection, object tracking and MOT system.\nUAVDT: UAVDT is a large scale challenging UAV Detection and Tracking benchmark (i.e., about 80, 000 representative frames from 10 hours raw videos) for 3 important fundamental tasks, i.e., object DETection (DET), Single Object Tracking (SOT) and Multiple Object Tracking (MOT). The dataset is captured by UAVs in various complex scenarios.\n\nOptions: A: Middlebury B: LUNA C: UA-DETRAC D: UAVDT", "role": "user" }, { "content": "Middlebury", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_pubmed
science.scientific_papers_summarization_single_doc_pubmed.80389
[ { "content": "Write an abstract for the research paper below.\n\nPaper text:\ndouble minute chromosomes ( dmin ) or homogeneously staining regions ( hsrs ) was first described in a direct preparation of cells from patient with untreated bronchogenic carcinoma .\nsait et al . , ( 2002 ) reported for the first time that dmin are from chromosome 19 .\nhomogeneously staining of regions or dmin are the cytogenetic hallmarks of genomic amplification in cancers .\nalthough found in a variety of human tumor cells , their presence in hematologic malignancies is rare . also , their role in leukemogenesis is not clear but they have been reported to be associated with rapid progression and short survival time .\ndmin are found in tumor cell proliferations , characteristically varying in number from cell to cell .\ngene amplification causes an increase in the gene copy number and , subsequently , elevates the expression of the amplified genes , which modify normal growth control and survival pathway . in this connection , c - myc was the most frequently amplified gene , but cases with mixed lineage leukemia ( mll ) gene amplification have also been reported in the current literature .\nthe semi conservative replication of deoxyribonucleic acid ( dna ) in dmin has been demonstrated to occur in both human and mouse cell line . here , we present two cases with leukemia associated with dmin . the latest data regarding dmin was reported by mitelman database ( http://cgapanci.nih.gov/chromosomes/mitelman ) and own peer reviewed publication .\nwe receive bone marrow ( bm ) and peripheral blood ( pb ) specimen from aml and cml diagnosed adult patients at initial presentation .\nthe diagnosis of leukemia patients was based on characterization of the leukemic cells , obtained from bone marrow and molecular cytogenetic , when appropriate . in each patient , 0.5 - 1.0 ml bm / pb was obtained and studied using ; ( a ) a 24 h unstimulated culture technique and ( b ) methotrexate cell synchronization method , with some modification . for culture , 3 - 5 10 cells were cultured in 4 ml medium ( rpmi 1640 , gibco - brl grand island , ny , usa ) supplemented with 15% heat inactivated fetal bovine serum ( gibco - brl grand island , ny , usa ) at 37c in an atmosphere containing 5% co2 . for methotrexate ( mtx ) synchronization , bm / pb cells were synchronized with 10 m mtx after 1.0 - 5.0 h of culture .\nthe s - phase block of synchronized cells was released after 17 h by the adding of 10 m thymidin for 3.0 - 6.0 h. standard mtx cell synchronization procedures for cultures , collection of samples , and slide preparation were modified and performed in our laboratory .\nbriefly , the cultured cells were then treated with colcemide ( gibco - brl grand island , ny , usa ) final concentration , 10 g / ml and incubated at 37c for an additional 3 min .\nthe contents of the tube were then centrifuged for 10 min at 1,000 rpm and re - suspend in 10 ml of 75 mm kcl ( 0.56% ) pre - warmed to 37c for 20 min . at this stage , 1 ml of carnoys fixative ( 3:1 methanol : acetic acid )\nwas added in to the tube , and this fixation step was repeated four times .\nten slides were prepared for each culture and stained for 3 min with giemsa g banding .\ntwo patients with cml and aml were seen between 1994 and 2010 , at shahid beheshti university of medical sciences and various private clinics , in tehran , iran . in patient 1 , a 23-year - old female , in previous good health , presented with weight loss , tiredness . 
her peripheral blood hemoglobin was 6.2 g / dl with a platelet count of 100 10/l ( differential : 77% blast , 1% neutrophils , 22% lymphocytes ) .\nthe marrow aspirate was cellular with reduction in all normal marrow elements and marked increased in sudan black - positive myelobalst .\ng - banding analysis in patient one , aml - m1 showed that all of 20 metaphases cells revealed both numerical and structural abnormalities , including 46 , xx , t ( 9;22)(q34.1;q11 ) + dmin ; 45 , xx , t ( 9;22)(q34.1;q11 ) , -15 .\nsixteen of the 20 cells ( 80% ) also had 3 19 dmin in each cell .\nher past medical history noticed without history of prior toxic exposure . in patient 2 , with cml - blast crisis ,\na 28-year - old female , was the subject of dms . on an admission ,\nsplenomegaly was found , count was 15.1 10/mm , hemoglobin was 7.2 g / dl , rbc 224 10/mm and bone marrow examination revealed hypercellularity with promyelocyte 14.2% and myeloblast 49.7% .\ng - banding analysis revealed that 14 of the 80 metaphases cells ( 70% ) [ figure 1 ] , had dmin , including [ t ( 9;22)(q34.1;q11 ) ] + dmin .\nher past medical history was notable only for occupation for cultivation as farm yard manure .\nthe presence of dmin with two leukemia patients , which were identified in our laboratory , was also observed in other parts of the world .\nthese two new cases of chromosomes will be discussed in the following sections utilizing the largest chromosome aberrations database in cancers ( http://cgapanci.nih.gov/chromosomes/mitelman ) .\nalso , thomas et al . , ( 2004 ) documented 33 cases with dmin in acute myeloid leukemia .\nthe relationship of dmin / hsrs and malignancies seems well established and , indeed , dmin / hsrs have not , so far , been observed in non malignant cell .\ngene amplification cause an increase in the gene copy number and , subsequently , elevate the expression of the amplified genes , which modify normal growth control and survival pathway .\ndmin represent a mechanism for upregulated oncogene expression and are generally associated with a poor prognosis .\nthis type of gene amplification has been found in various solid tumors , such as colon , pancreatic , breast carcinoma , brain tumors , and neuroblastoma ( http://cgapanci.nih.gov/chromosomes/mitelman ) .\ngenerally , c - myc amplification results in an over expression of the myc protein , which is known to be a critical nuclear transcription factor . some results suggested that dmin and hsr are alternative manifestations of the same biological phenomenon .\namplification of the ets1 , fl11 , srpr , nfrkb , and kcnj4 genes located at 11q23 - 24 distal to mll was demonstrated in a patient with aml .\nmany oncogenes have been identified on dmin , for example , mycn , c - myc , eifa2 , and mdm2 .\nseveral early reports concluded that dmin , whose number and size vary from cell to cell , were not viral or bacterial contamination .\nalso , past medical history of both cases presented here , recorded without history of any micro organism contaminations and history of previous malignancies . in case 2 , past medical history was notable only for occupation for cultivation as farm yard manure .\nit is coincident with previous reports indicating that dmin do occur in leukemic patients without previous history of malignancy . 
on the contrary ,\nit is reasonable to suggest that the demonstration of dmin in patients with aml in the present study might be an indication that these patients have a previous history of malignant disease treated with irradiation or chemotherapy or both .\nthe identification of two new cases of dmin presented here are secondary , apparently non - random associated with leukemia . however , further studies and accumulation of new cases are needed in the hope of defining it as specific abnormalities in the field of leukemia .", "role": "user" }, { "content": "homogeneously staining regions ( hsr ) or double minute chromosomes ( dmin ) are autonomously replicating extra - chromosomal elements that are frequently associated with gene amplification in a variety of cancers . \n the diagnosis of leukemia patients was based on characterization of the leukemic cells obtained from bone marrow cytogenetics . \n this study report two cases , one with acute myeloblastic leukemia without maturation ( aml - m1 ) , aged 23-year - old female , and the other with chronic myelogenous leukemia ( cml)-blast crisis , a 28-year - old female associated with double minute chromosomes . \n most cases of acute myeloid leukemia with dmin in the literature ( including our cases ) have been diagnosed as having acute myeloid leukemia .", "role": "assistant" } ]
science.bioasq_factoid_qa
science.bioasq_factoid_qa.629
[ { "content": "Answer the following biomedical question based on the given context. Provide only the answer without additional explanation.\n\nQuestion: Where is the enzyme PM20D1 localized?\n\nContext: we identify a secreted enzyme, peptidase M20 domain containing 1 (PM20D1), that is enriched in UCP1(+) versus UCP1(-) adipocytes\nA secreted enzyme, PM20D1, enriched in UCP1+ adipocytes, exhibits catalytic and hydrolytic activity to reversibly form N-acyl amino acids.", "role": "user" }, { "content": "UCP1+ adipocytes", "role": "assistant" } ]
science.scientific_lay_summarisation_plos_single_doc_summ
science.scientific_lay_summarisation_plos_single_doc_summ.18389
[ { "content": "Summarize the article in an uncomplicated manner.\n\nTitle: Robust averaging protects decisions from noise in neural computations\nArticle:\nAbstract:\nAn ideal observer will give equivalent weight to sources of information that are equally reliable. However, when averaging visual information, human observers tend to downweight or discount features that are relatively outlying or deviant( ‘robust averaging’). Why humans adopt an integration policy that discards important decision information remains unknown. Here, observers were asked to judge the average tilt in a circular array of high-contrast gratings, relative to an orientation boundary defined by a central reference grating. Observers showed robust averaging of orientation, but the extent to which they did so was a positive predictor of their overall performance. Using computational simulations, we show that although robust averaging is suboptimal for a perfect integrator, it paradoxically enhances performance in the presence of “late” noise, i. e. which corrupts decisions during integration. In other words, robust decision strategies increase the brain’s resilience to noise arising in neural computations during decision-making.\nIntroduction:\nDecisions about the visual world often require observers to integrate information from multiple sources. An ideal observer will give each source a weight that is proportional to its reliability. Thus, where all sources are equally trustworthy, the best policy is simply to average the available features or decision information. For example, a decision about which fruit to buy at the supermarket might involve averaging the estimated size and colour of the produce, or a wager about which football team will win might be made after averaging the speed and skill of all the players on a team[1]. Previous studies have investigated how humans average perceptual information by presenting participants with array composed of multiple visual elements and asking them to report the mean size, colour or shape of the items displayed[2–6]. Interestingly, recent reports suggest that human averaging judgments do not resemble those of an ideal observer[7–10]. Rather, when averaging, humans tend to downweight or discount visual features that are unusual or outlying with respect to the distribution of features occurring over recent trials( “robust averaging”). Haberman and Whitney first showed that observers discount emotional deviants when averaging the expression in human faces[7]. Subsequently, de Gardelle, Summerfield and colleagues provided evidence that observers discount outlying colour or shape values during averaging of features in a multi-element array[8, 9]. Control analyses ruled out the possibility that the observed effect was an artefact of hardwired nonlinearities in feature space. Together, these studies suggest that humans are “robust averagers”, overweighting inliers relative to outliers rather than giving equal weight to all elements( although see[11] for a failure to replicate this finding using a 2-alternative forced choice averaging task). According to a widely-accepted framework with its roots in Bayesian decision theory[1, 12], robust averaging is suboptimal. Intuitively, robust averaging discards information about the stimulus array, and should thus reduce performance relative to a policy that integrates the stimulus feature values evenly. Why, then, do humans give more weight to inliers than outliers during integration of decision information? 
Here, we tackled this question using psychophysical testing of human observers and computational simulation. We asked participants to average the orientation( tilt) in a circular array of gratings, relative to a central reference grating that either( i) remained the same or( ii) varied in a trial-wise fashion over a block of trials. This latter manipulation allowed us to test whether robust averaging is still observed even when the distribution of sensory information is uniform around the circle and varies randomly from trial to trial. Using this approach, we show that human robust averaging can be conceived of as a policy that rapidly allocates limited resources( gain; see Eq 2 below) to items that are closest to the category boundary( or indifference point). Although this policy is suboptimal in the absence of noise, it has a surprising protective effect on decisions that are corrupted by “late” noise arising during or beyond information integration. Our manuscript is organised as follows. We begin by describing the behaviour of a cohort of human observers performing the orientation averaging task. Next, we describe a simple psychophysical model in which feature values( tilt, relative to a reference value) are transformed nonlinearly before being averaged to form a decision variable. This variable is corrupted with “late”( post-averaging) noise and then used to determine model choices. This model accounts better for human behaviour( including observed robust averaging) than a rival account, based on an ideal observer, that replaces the initial nonlinear step with a purely linear multiplicative transformation. Next, we use simulations to explore the properties of this model. We show that as we increase late noise, a model that engages in robust averaging comes to outperform the linear model, i. e. achieves higher simulated choice accuracy. Finally, we return to the human data, and show that for both model and humans, the use of a robust averaging strategy is a positive predictor of decision accuracy, in particular under high estimated late noise.\nDiscussion:\nHuman observers have previously been shown to be “robust averagers” of low-level visual features such as shape and colour[8, 9], and even of high-dimensional stimuli such as faces[7]. Here, we add to these earlier findings, describing robust averaging of the tilt of a circular array of gratings. However, the focus of the current experiment was to use computational simulations to understand why humans engage in robust averaging. We describe a simple psychophysical model in which feature values are transformed nonlinearly prior to averaging. This model assumes that decisions are limited by a fixed resource, and that gain is allocated differentially across feature space, giving priority to inliers–those features that fall close to the category boundary. Through simulations, we find that in our experiment, this relative discounting of outliers gives a boost to performance when decisions are additionally corrupted by “late” noise, i. e. noise arising during, or beyond, the integration of information. Previously, robust averaging has been considered a suboptimal policy that incurs an unnecessary loss by discarding relevant decision information[17]. The current work offers a new perspective, suggesting that robust averaging is a form of bounded rationality. If we consider an observer whose neural computations are not corrupted by late noise, it is true that robust averaging incurs a cost relative to perfect averaging. 
However, here we consider decisions as being constrained not just by sources of noise that are external to the observer, or that arise during sensory transduction, but also by capacity limits in human information processing. Processing capacity allows a multiplicative gain to be applied to feature values, with higher gain ensuring that feature values are converted to cumulative decision values that fall further from the category boundary( here, the reference orientation). When decision values are further from the category boundary, they are more resilient to “late” noise, which might otherwise drive them to the incorrect side of the category boundary, thereby forcing an error. However, when gain is limited, it must be allocated judiciously. Our simulations show that allocating gain to stimuli that are most likely to occur confers a benefit on performance, and suggest that humans may adopt a robust averaging policy in order to maximise their accuracy on the task. One longstanding hypothesis states that neural systems will maximise the efficiency of information encoding by allocating the highest resources( e. g. neurons) to those features that are most likely to occur[18]. For example, enhanced human sensitivity to cardinal angles of orientation( those close to 0° and 90°) may reflect the prevalence of contours with this angle in natural scenes[19]. Indeed, neural systems learning via unsupervised methods will naturally learn to represent features in proportion to the frequency with which they occur. Here, we make a related argument for neural gain control. The efficiency of gain control allocation depends on the distribution of features that occurs in the local environment. Allocating gain to features that are rare or unexpected, even when they are more diagnostic of the category, is inefficient, as resources are “wasted” on feature values that are highly unlikely to occur; whereas allocating gain to those features that occur most frequently will confer the greatest benefit. This benefit, however, is only observable when decisions are corrupted by “late” noise, i. e. noise arising beyond information averaging. This finding has important implications for our understanding of what may be the “optimal” policy for performing a categorisation task. The ideal observer framework allows us to write down a decision policy that will maximise accuracy for an observer that is limited not by capacity but by noise arising in the external environment. Here, we show an example where the policy that is optimal for an unbiased, noiseless observer is not the one that maximises accuracy for healthy humans. The current study adds to an emerging body of work suggesting that the human brain may have evolved perceptual processing steps that squash, compress or discretise feature information in order to make decisions robust to noise[15]. In another recent line of work, participants were asked to compare the average height of two simultaneously-occurring streams of bars[20] or the average value of two streams of numbers[21]. Human choices were best described by a model which discarded information about the locally weaker item, but this “selective integration” policy paradoxically increased simulated performance under higher late noise. As described here, participants seemed to adjust their decision policy to account for their own internal late noise: participants with higher estimated late noise were more likely to engage in robust averaging. 
Like selective integration, robust averaging is thus a decision policy that discards decision information but paradoxically confers a benefit on choice. Additionally, the design of our study allows us to draw conclusions about the timescale over which gain allocation occurs. In previous work, robust averaging was found to vary with the overall distribution of features present in a block of trials. For example, when averaging Gaussian-distributed features in a red-to-purple colour space, purple features were relatively downweighted, but when averaging in a red-to-blue colour space, purple features were relatively upweighted[8]. In other words, the allocation of gain to features depended on the overall distribution of features in the block of trials, with the most frequently-occurring( i. e. expected) items enjoying preferential processing. Here, we saw no difference in robust averaging between a fixed reference condition( in which the Gaussian distribution of orientations remained stable over a prolonged block of trials) and a variable reference condition( in which the Gaussian distribution of orientations changed from trial to trial, and was uniform over the entire session). In other words, any adaptive gain control was set by the reference, and thus occurred very rapidly, i. e. within the timescale of a single trial. Evidence for remarkably rapid adaptive gain control has been described before. Indeed, short-lag repetition priming may be considered a form of gain control[22], in which the prime dictates which features should be processed preferentially[10]. During sequential averaging, the behavioural weight and neural gain applied to a feature depend on its distance from the cumulative average information viewed thus far, as if features pass through an adaptive filter with nonlinear form[14]. These observations are consistent with the theoretical framework that we propose here. Finally, we discuss some limitations of our approach. Firstly, our model uses a simple power function to describe the nonlinear transformation of inputs prior to averaging. We chose this function for mathematical convenience–it provides a simple means of parameterizing the mapping from feature to decision information in a way that privileges inliers( k < 1) or outliers( k > 1). However, other forms of nonlinear transformation that are not tested here may also account for the data. Secondly, our best-fitting model assumes zero sensory encoding noise( or ‘early’ noise). Adding early noise to the model did not qualitatively change the benefit of robust averaging under higher late noise, unless it becomes performance-limiting in itself. However, in other settings, early noise will be an important limiting factor on performance. Although we found that our “late noise only” model fit better than an “early noise only” model, we do not wish to claim that there is no early noise in our task. Since the current experiment was not designed to estimate the level of early noise, it may be of interest to directly manipulate both early and late noise in future experiments.", "role": "user" }, { "content": "Humans often make decisions by averaging information from multiple sources . When all the sources are equally reliable , they should all have equivalent impact ( or weight ) on the decisions of an “ideal” observer , i . e . one with perfect memory . However , recent experiments have suggested that humans give unequal weight to sources that are deviant or unusual , a phenomenon called “robust averaging” . 
Here , we use computer simulations to try to understand why humans do this . Our simulations show that under the assumption that information processing is limited by a source of internal uncertainty that we call “late” noise , robust averaging actually leads to improved performance . Using behavioural testing , we replicate the finding of robust averaging in a cohort of healthy humans , and show that those participants that engage in robust averaging perform better on the task . This study thus provides new information about the limitations on human decision-making .", "role": "assistant" } ]
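The model in this record is specified closely enough in prose (power-law transduction with exponent k, a fixed gain budget, averaging, additive late noise) to be worth simulating. The sketch below is a reconstruction rather than the authors' code: the feature distribution, the normalisation used to stand in for a fixed resource, and every parameter value are assumptions, and the printed comparison simply lets you probe in which late-noise regime k < 1 overtakes the linear k = 1 integrator.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_accuracy(k, late_sd, n_trials=20000, n_items=8,
                      mean_tilt=10.0, spread=15.0):
    """Fraction of correct category choices for a power-law averager.

    Each trial draws n_items tilts (degrees relative to the reference)
    around +mean_tilt or -mean_tilt. Tilts are transduced as
    sign(x) * |x|**k, rescaled so the mean absolute output is constant
    (a crude stand-in for a fixed gain budget), averaged, and corrupted
    by Gaussian late noise before the sign is read out.
    """
    category = rng.choice([-1.0, 1.0], size=n_trials)
    tilts = rng.normal(category[:, None] * mean_tilt, spread,
                       size=(n_trials, n_items))
    transduced = np.sign(tilts) * np.abs(tilts) ** k
    transduced /= np.abs(transduced).mean()            # fixed resource
    decision = transduced.mean(axis=1) + rng.normal(0.0, late_sd, n_trials)
    return np.mean(np.sign(decision) == category)

for late_sd in (0.0, 0.5, 1.0):
    print(f"late_sd={late_sd:.1f}",
          f"linear k=1: {simulate_accuracy(1.0, late_sd):.3f}",
          f"robust k=0.5: {simulate_accuracy(0.5, late_sd):.3f}")
```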
science.scientific_papers_summarization_single_doc_arxiv
science.scientific_papers_summarization_single_doc_arxiv.29168
[ { "content": "Given the full text of a scientific research paper, write an abstract section for the paper. Your response should contain only the abstract without any other text.\n\nArticle text:\nin @xcite a calculation of the btz black hole entropy from loop quantum gravity was obtained .\nthe calculation follows similar steps to the calculation of the usual 4-dimensional case .\nthe same calculation was done in @xcite using spin foam models .\nthe calculation is done by defining an expectation value using spin foam partition functions with observables .\nhere we describe and compare both calculations see that both give the same answer .\nwe focus on the euclidean version of the black hole .\na three dimensional solution of einstein s equations was introduced for the first time in @xcite .\nnow it is known as the btz black hole .\nthe three dimensional euclidean solution to empty einstein equations of general relativity with negative cosmological constant is given by the metric @xmath0 in @xcite , it is shown that by a change of coordinates , the solution can be written in the form @xmath1 for @xmath2 .\nimmediately it can be recognised as the metric of the hyperbolic space @xmath3 . then after some isometric identifications also described in @xcite the btz solution is in fact given by a fundamental region of the hyperbolic space .\nthis region is a solid torus where the core of the torus is the black hole horizon @xmath4 , and the rest of the torus is the outside of the black hole @xmath5 .\nthese identifications were also reviewed in @xcite . according to bekenstein - hawking formula\nthe leading term of the entropy of a black hole in three dimensions is given by @xmath6 where @xmath7 is the black hole horizon length .\nthe entropy is believed to be related to the logarithm of the number of microstates .\nin @xcite the calculation of the entropy for the btz black hole is done in a similar way to the four dimensional case .\nthe isolated three dimensional black hole horizon was introduced in @xcite .\nthe black hole horizon surface is thought as a circular boundary on a spacelike surface .\nit is also asummed that @xmath8 spin network graph edges puncture the horizon(see figure 1 ) .\nthe corresponding edges are labelled with irreducible representations of @xmath9 . if @xmath10 are the edge labels which cross the horizon .\nthe length spectrum of three dimensional gravity was studied in @xcite .\nthe length of the horizon according to @xcite is given by the number of states which give rise to the entropy is shown to be given by the dimension of the invariant tensor in the decomposition of the tensor product of irreducible representations of the quantum group version of @xmath9 .\nthis number is shown to be given approximately by let @xmath17 be the turaev - viro @xcite partition function of the triangulated solid torus with the only difference that in the partition function we leave the labels at the horizon fixed .\n@xmath17 is therefore a function of these labels .. 
@xmath18\nthe calculation of @xmath21 was carried out in @xcite and is computed by thinking of the blocks which form the triangulated horizon .\nconsider a particular triangulation which locally looks like the one in figure 3 and where the horizon is triangulated with an even number of edges .\nlabel the horizon edges by @xmath22 .\neach pair of edges @xmath23 belongs to a triangle which is labelled as @xmath24 .\nthe edges labelled @xmath25 belong to the boundary of the solid torus .\nwe have seen that loop quantum gravity and spin foam calculations lead to the same result .\nthis is a very nice result , since it suggests what is always expected : that loop quantum gravity and spin foams are truly the same theory.", "role": "user" }, { "content": "we present a comparison of the calculation of btz black hole entropy in loop quantum gravity and in spin foam models . \n we see that both give the same answer .", "role": "assistant" } ]
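The @xmath markers in this record stand where formulas failed to survive extraction, so nothing above pins the entropy down symbolically. For orientation only, the standard statements the text is gesturing at are written out below; both lines are assumptions about the elided content, not recovered source.

```latex
% Assumed content of the elided formulas (standard results, not text
% recovered from this record):
S_{\mathrm{BH}} \simeq \frac{L_H}{4G},
\qquad
N(j_1,\dots,j_N) = \dim \mathrm{Inv}\!\left( V_{j_1} \otimes \cdots \otimes V_{j_N} \right),
% so that S = \log N \simeq L_H / 4G at leading order.
```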
science.medmentions_ner
science.medmentions_ner.1551
[ { "content": "Extract all unique entities from this biomedical abstract. Only choose entities from these types: [\"Virus\", \"MedicalDevice\", \"BodySubstance\", \"SpatialConcept\", \"BiologicFunction\", \"ResearchActivity\", \"AnatomicalStructure\", \"ProfessionalOrOccupationalGroup\", \"HealthCareActivity\", \"BiomedicalOccupationOrDiscipline\", \"Bacterium\", \"Chemical\", \"InjuryOrPoisoning\", \"IntellectualProduct\", \"Finding\", \"ClinicalAttribute\", \"Organization\", \"Eukaryote\", \"Food\", \"BodySystem\", \"PopulationGroup\"].\n\nOutput a JSON object where keys are entity types and values are lists of extracted entities. If no entities found for a type, use empty list [].\n\nAbstract: PLAY HANDS PROTECTIVE GLOVES: TECHNICAL NOTE ON DESIGN AND CONCEPT Cerebral Palsy (CP) is the leading cause of childhood motor disability, with a global incidence of 1.6 to 2.5/1,000 live births. Approximately 23% of children with CP are dependent upon assistive technologies. Some children with developmental disabilities have self-injurious behaviors such as finger biting but also have therapeutic needs. The purpose of this technical note is to describe design considerations for a protective glove and finger covering that maintains finger dexterity for children who exhibit finger and hand chewing (dermatophagia) and require therapeutic range of motion and may benefit from sensory stimulation resulting from constant contact between glove and skin. Protecting Little and Adolescent Youth (PLAY) Hands are protective gloves for children with developmental disorders such as CP who injure themselves by biting their hands due to pain or sensory issues. PLAY Hands will be cosmetically appealing gloves that provide therapeutic warmth, tactile sensory feedback, range of motion for donning/ doffing, and protection to maximize function and quality of life for families of children with developmental disorders. The technology is either a per-finger protective orthosis or an entire glove solution designed from durable 3D-printed biodegradable / bioabsorbable materials such as thermoplastics. PLAY Hands represent a series of protective hand wear interventions in the areas of self-mutilating behavior, kinematics, and sensation. They will be made available in a range of protective iterations from single- or multi-digit finger orthoses to a basic glove design to a more structurally robust and protective iteration. 
To improve the quality of life for patients and caregivers, they are conceptualized to be cosmetically appealing, protective, and therapeutic.", "role": "user" }, { "content": "{\"AnatomicalStructure\": [\"finger\", \"hands\"], \"Bacterium\": [], \"BiologicFunction\": [\"finger and hand chewing\", \"biting\", \"Cerebral Palsy\", \"dermatophagia\", \"developmental disabilities\", \"finger biting\", \"sensory feedback\", \"developmental disorders\", \"therapeutic warmth\", \"CP\"], \"BiomedicalOccupationOrDiscipline\": [\"kinematics\"], \"BodySubstance\": [], \"BodySystem\": [\"skin\"], \"Chemical\": [], \"ClinicalAttribute\": [\"range of motion\"], \"Eukaryote\": [], \"Finding\": [\"pain\", \"protective iterations\", \"sensory issues\", \"motor disability\", \"live births\", \"protective iteration\"], \"Food\": [], \"HealthCareActivity\": [\"sensory stimulation\", \"interventions\"], \"InjuryOrPoisoning\": [\"self-mutilating behavior\"], \"IntellectualProduct\": [\"technical note\"], \"MedicalDevice\": [\"single-\", \"per-finger protective orthosis\", \"multi-digit finger orthoses\"], \"Organization\": [], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [\"caregivers\"], \"ResearchActivity\": [], \"SpatialConcept\": [], \"Virus\": []}", "role": "assistant" } ]
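Several records in this dump (medmentions, genia, chemprot, cdr) share one output contract: a JSON object keyed by a fixed list of entity types, each mapping to a list of unique strings. A checker along the lines below, whose function and variable names are mine, catches the usual failure modes (malformed JSON, missing or extra keys, non-list values, duplicates) before any scoring is attempted; the type list shown is the one used by this medmentions record.

```python
import json

EXPECTED_TYPES = [
    "Virus", "MedicalDevice", "BodySubstance", "SpatialConcept",
    "BiologicFunction", "ResearchActivity", "AnatomicalStructure",
    "ProfessionalOrOccupationalGroup", "HealthCareActivity",
    "BiomedicalOccupationOrDiscipline", "Bacterium", "Chemical",
    "InjuryOrPoisoning", "IntellectualProduct", "Finding",
    "ClinicalAttribute", "Organization", "Eukaryote", "Food",
    "BodySystem", "PopulationGroup",
]

def validate_ner_output(raw: str) -> list[str]:
    """Return a list of problems with a model's answer (empty means OK)."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    if not isinstance(obj, dict):
        return ["top-level value is not an object"]
    errors = []
    for t in EXPECTED_TYPES:
        if t not in obj:
            errors.append(f"missing key: {t}")
        elif not isinstance(obj[t], list):
            errors.append(f"value for {t} is not a list")
        elif len(obj[t]) != len(set(map(str, obj[t]))):
            errors.append(f"duplicate entities under {t}")
    errors += [f"unexpected key: {k}" for k in obj if k not in EXPECTED_TYPES]
    return errors
```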
science.mslr2022_ms2_multidoc_summarization
science.mslr2022_ms2_multidoc_summarization.9263
[ { "content": "Based on the RCT reports below, write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article.\n\n\n\nTitle 1:\nImpact of fumaric acid esters on cardiovascular risk factors and depression in psoriasis: a prospective pilot study\nAbstract 1:\nPatients with psoriasis have an increased risk of cardiovascular disease that is partly attributable to chronic systemic inflammation . The aim of our prospect i ve pilot study was to investigate the impact of fumaric acid esters ( FAE ) , a first-line systemic antipsoriatic treatment in Germany , on cardiovascular risk parameters . Participants with moderate-to-severe psoriasis from the University Medical Center Mannheim and the University Hospital Würzburg were treated with FAE for 16 weeks according to st and ard dosage recommendations . Disease severity , life quality and depression scores as well as biomarkers of inflammation , lipid and glucose metabolism were assessed prior to initiation of FAE and after 16 weeks . Out of 39 participants recruited , 27 completed the study . 44 % of all participants and 63 % of those completing the 16-week treatment achieved PASI 50 response and 27 or 37 % PASI 75 response . Clinical improvement was paralleled by significant improvement in quality of life , high treatment satisfaction and significant reduction of depressive symptoms . Adverse events , most frequently mild gastrointestinal complaints , flush and lymphocytopenia occurred in 89 % . FAE did not modify glucose metabolism or inflammatory parameters substantially . However , a highly significant increase in serum levels of the atheroprotective cytokine adiponectin was noted after 16 weeks ( median 4.7 vs. 8.9 µg/ml ; p = 0.0002 ) . Our study demonstrates a significant beneficial impact of FAE on adiponectin , indicating a potential cardioprotective effect . It will be interesting to verify this finding in larger cohorts and to assess the long-term influence of FAE on cardiovascular risk and disease\n\n\nTitle 2:\nSoluble E-selectin as a marker of disease activity in pustulosis palmaris et plantaris.\nAbstract 2:\nIn this study , we investigated a possible correlation between adhesion molecules and activity of pustulosis palmaris et plantaris ( PPP ) . Serum levels of soluble E-selectin ( sE-selectin ) , soluble intercellular adhesion molecules 1 ( sICAM-1 ) and tumour necrosis factor alpha ( TNF-alpha ) in 30 untreated PPP patients were examined , and compared with those in 20 healthy subjects . Values in 10 PPP patients were re-examined after treatment . Serum levels of sE-selectin and TNF-alpha in untreated PPP patients were significantly higher than those in healthy subjects . There was a statistically significant correlation between the disease activity and serum levels of sE-selectin in untreated PPP patients . Furthermore , disease activity of PPP was higher in patients who smoked and during the summer , with elevation of serum sE-selectin levels . Serum levels of sE-selectin were downregulated with the recovery from PPP . These results suggest that sE-selectin may play a role in the pathogenesis of PPP and could be a reliable marker of its disease activity\n\n\nTitle 3:\nCirculating levels of lipocalin-2 and retinol-binding protein-4 are increased in psoriatic patients and correlated with baseline PASI\nAbstract 3:\nPsoriasis has been related to metabolic syndrome ( MS ) . 
Adipocytokines produced by white adipose tissue may be involved in the pathogenesis of psoriasis and its association with MS . Our objectives were to characterize the profile of a number of different inflammatory and atherogenic markers , vitamins , adipokines and cytokines and their potential involvement in MS in patients with moderate-to-severe psoriasis without joint involvement compared to anthropometrically matched controls , and to evaluate correlation with severity of the skin disease and changes after narrow-band UVB ( NB-UVB ) phototherapy . We designed a prospective cross-sectional study . Baseline waist circumference , body fat composition , lipid , carbohydrate and calcium metabolism profile , inflammation markers , homocysteine and vitamins D , B6 , B12 and folic acid , leptin , resistin , omentin , lipocalin-2 , adipocyte fatty acid-binding protein , retinol-binding protein-4 ( RBP-4 ) , interleukin-6 , soluble tumour necrosis factor receptor 1 ( sTNFR1 ) and interleukin-17 of 50 psoriasis patients and 50 gender , age and body mass index-matched controls were recorded , then evaluated after NB-UVB in the patients . The patients had higher baseline serum concentrations of leptin , RBP-4 , lipocalin-2 and sTNFR1 . Baseline psoriasis area and severity index correlated with serum concentrations of RBP-4 and lipocalin-2 only . Principal components analysis disclosed a component including vitamins B12 , B6 , folic acid , calcidiol and HDL-cholesterol that was only present in healthy controls and opposed to a cluster of variables which promote MS . This component was absent in the patients . Our results point to lipocalin-2 and RBP-4 as relevant mediators of the trend towards MS in psoriatic patients\n\n\nTitle 4:\nSelected immunological changes in patients with Goeckerman's therapy TNF-alpha, sE-selectin, sP-selectin, sICAM-1 and IL-8.\nAbstract 4:\nPsoriasis is one of the most frequent inflammatory skin diseases in which abnormal individual immune reactivity plays an important role . The aim of the present study was to describe selected immunological changes , concerning pro-inflammatory cytokines ( TNF-alpha , IL-8 ) and adhesion molecules ( sE-selectin , sP-selectin , sICAM-1 ) , in 56 patients cured by Goeckerman's therapy ( GT ) . GT includes dermal application of crude coal tar ( containing polycyclic aromatic hydrocarbons ) and exposure to UV radiation . When compared with the control group ( healthy blood donors ) , the patients before GT had significantly increased serum levels of sE-selectin ( p<0.001 ) , sP-selectin ( p<0.001 ) , sICAM-1 ( p<0.001 ) and IL-8 ( p<0.001 ) . Significantly decreased serum levels of sE-selectin ( p<0.05 ) and significantly increased serum levels of IL-8 ( p<0.05 ) were found after GT therapy . Serum levels of sICAM significantly correlated with the disease activity and with serum levels of sE-selectin . The level of PASI score ( Psoriasis Area and Severity Index ) significantly decreased after GT ( p<0.001 ) and confirms the high efficiency of GT . These findings confirmed that the pro-inflammatory chemokine ( IL-8 ) and adhesion molecules ( sE-selectin , sP-selectin , sICAM-1 ) play an important role in the development and regulation of inflammation in psoriasis . Determination of sE-selectin and sICAM seems to be a promising marker of psoriasis's activity . 
Chemokine pathway ( IL-8 ) and TNF-alpha activity seem to be modulated by Goeckerman's therapy ( polycyclic aromatic hydrocarbons )\n\n\nTitle 5:\nEffect of Narrow-Band UVB Phototherapy on Soluble Cell Adhesion Molecules in Patients with Psoriasis Vulgaris\nAbstract 5:\nThis study investigated the effect of narrow-band ultraviolet B ( NB-UVB ) phototherapy on serum levels of the soluble cell adhesion molecules sE-selectin , sP-selectin , sL-selectin and soluble intercellular adhesion molecule-1 ( sICAM-1 ) in 58 patients with psoriasis vulgaris . Psoriasis Area and Severity Index ( PASI ) scores significantly decreased after treatment , confirming the efficacy of NB-UVB phototherapy . Serum levels of sE-selectin also decreased significantly after treatment , and levels of sICAM-1 showed a significant correlation with PASI score and with levels of sE-selectin . The efficacy of NB-UVB phototherapy in improving psoriatic lesions may be a function of decreased serum levels of E-selectin . These findings emphasize the complex roles of soluble cell adhesion molecules in the immunopathogenesis of psoriasis\n\n\nTitle 6:\nSoluble E‐selectin in sera of patients with atopic dermatitis and psoriasis—correlation with disease activity\nAbstract 6:\nSummary E‐selectin ( endothelial leucocyte adhesion molecule‐1 ) is expressed on endothelial cells in distinct inflammatory skin diseases . E‐selectin mediates the adhesion between activated endothelium and different inflammatory cells . To evaluate soluble E‐selectin as a marker of disease activity in patients with atopic dermatitis and psoriasis , the concentration of soluble E‐selectin , determined by ELISA , was studied in sera of patients before and after treatment and compared with normal non‐atopic controls . The disease severity was established using clinical scoring systems . Levels of soluble E‐selectin were significantly elevated in sera of patients with atopic dermatitis and psoriasis ( as compared with controls ) . Clinical improvement , after treatment , in patients with atopic dermatitis , but not in psoriasis , was associated with a significant decrease in serum levels of soluble E‐selectin . There was a significant correlation of soluble E‐selectin and disease activity in patients with atopic dermatitis . These data indicate that soluble E‐selectin is another parameter to evaluate the inflammatory response in atopic dermatitis and psoriasis . Determination of soluble E‐selectin may be a useful measure of disease activity in atopic dermatitis", "role": "user" }, { "content": "However , increased serum levels of IL-17 correlated with psoriasis in men .\nFor other biomarkers , age , gender and psoriasis area and severity index did not explain the differences in effect size between the studies .\nConclusions Serum levels of TNF-α , IFN-γ , IL-2 , IL-6 , IL-8 , IL-18 , IL-22 , chemerin , lipocalin-2 , resistin , sE-selectin , fibrinogen , complement 3 , and adiponectin correlate with psoriasis and can be used as potential biomarkers for psoriasis and response to the treatment .", "role": "assistant" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.63302
[ { "content": "Write a literature review chapter based on these paper abstracts.\nLiterature Review Title: A Survey of PPAD-Completeness for Computing Nash Equilibria \nChapter Title: Some reductions among equilibrium problems \n1. Abstract of Cited Paper (BIB001): A new era of theoretical computer science addresses fundamental problems about auctions, networks, and human behavior.\n\nLiterature Review:", "role": "user" }, { "content": "By way of example, consider the well-known result (that predates the PPADhardness of Nash; see BIB002 Chapter 2 for further background) that symmetric 2-player games are as hard to solve as general ones. Given a n × n game G, construct a symmetric 2n × 2n game G = f (G), such that given any Nash equilibrium of G we can efficiently reconstruct a Nash equilibrium of G. To begin, let us assume that all payoffs in G are positive -the reason why this assumption is fine, is that if there are any negative payoffs, then we can add a sufficiently large constant to all payoffs and obtain a strategically equivalent version (i.e. one that has the same Nash equilibria). Now suppose we solve the 2n × 2n game G = 0 G G T 0 , where we assume that entries contain payoffs to both players, and the zeroes represent n × n matrices of payoffs of zero to both players. Let p and q denote the probabilities that players 1 and 2 use their first n actions, in some given solution. If we label the rows and columns of the payoff matrix with the probabilities assigned to them by the players, we have If p = q = 1, both players receive payoff 0, and both have an incentive to change their behaviour, by the assumption that G's payoffs are all positive (and similarly if p = q = 0). So we have p > 0 and 1 − q > 0, or alternatively, 1 − p > 0 and q > 0. Assume p > 0 and 1 − q > 0 (the analysis for the other case is similar). Let {p 1 , ..., p n } be the probabilities used by player 1 for his first n actions, {q 1 , . . . q n } the probabilities for player 2's second n actions. Note that p 1 + . . . + p n = p and q 1 + . . . + q n = 1 − q. Then (p 1 /p, . . . , p n /p) and (q 1 /(1 − q), . . . , q n /(1 − q)) are a Nash equilibrium of G. To see this, consider the diagram; they should form a best response to each other for the top-right part. This is an example of the kind of reduction defined in Section 1.2. In the context of games, this kind of reduction is called in a Nash homomorphism. Abbott et al. reduce general 2-player games to win-lose 2-player games (where payoffs are 0 or 1). Various other Nash homomorphisms have been derived independently of the work relating Nash to PPAD. An important one is Bubelis BIB001 that reduces (in a more algebraic style) k-player games to 3-player games. The result also highlights a key distinction between k-player games (for k ≥ 3) and 2-player games; in 2-player games the solutions are rational numbers (provided that the payoff in the games are rational) while for 3 or more players, the solutions may be irrational. The reduction of BIB001 preserves key algebraic properties of the quantities x i j in a solution, such as their degree (i.e the degree of the lowest-degree polynomial with integer coefficients satisfied by x i j ). Since we have noted that 2-player games have solutions in rational numbers, this kind of reduction could not apply to 2-player games.", "role": "assistant" } ]
science.scitldr_aic
science.scitldr_aic.1976
[ { "content": "Summarize the paper:\n\nPaper text:\n\nConvolution Neural Network (CNN) has gained tremendous success in computer vision tasks with its outstanding ability to capture the local latent features.\nRecently, there has been an increasing interest in extending CNNs to the general spatial domain.\nAlthough various types of graph convolution and geometric convolution methods have been proposed, their connections to traditional 2D-convolution are not well-understood.\nIn this paper, we show that depthwise separable convolution is a path to unify the two kinds of convolution methods in one mathematical view, based on which we derive a novel Depthwise Separable Graph Convolution that subsumes existing graph convolution methods as special cases of our formulation.\nExperiments show that the proposed approach consistently outperforms other graph convolution and geometric convolution baselines on benchmark datasets in multiple domains.\nConvolution Neural Network (CNN) BID14 has been proven to be an efficient model family in extracting hierarchical local patterns from grid-structured data, which has significantly advanced the state-of-the-art performance of a wide range of machine learning tasks, including image classification, object detection and audio recognition BID15 .\nRecently, growing attention has been paid to dealing with data with an underlying graph/non-Euclidean structure, such as prediction tasks in sensor networks BID23 , transportation systems , and 3D shape correspondence application in the computation graphics BID2 .\nHow to replicate the success of CNNs for manifold-structured data remains an open challenge.Many graph convolution and geometric convolution methods have been proposed recently.\nThe spectral convolution methods BID3 BID5 BID11 are the mainstream algorithm developed as the graph convolution methods.\nBecause their theory is based on the graph Fourier analysis BID20 ), one of their major limitations is that in this model the knowledge learned from different graphs is not transferrable BID19 .\nOther group of approaches is geometric convolution methods, which focuses on various ways to leverage spatial information about nodes BID17 BID19 .\nExisting models mentioned above are either not capable of capturing spatial-wise local information as in the standard convolution, or tend to have very large parameter space and hence, are prone to overfitting.\nAs a result, both the spectral and the geometric convolution methods have not produced the results comparable to CNNs on related tasks.\nSuch a misalignment makes it harder to leverage the rapidly developing 2D-convolution techniques in the generic spatial domain.\nWe note graph convolution methods are also widely used in the pure graph structure data, like citation networks and social networks BID11 .\nOur paper will only focus on the data with the spatial information.In this paper, we provide a unified view of the graph convolution and traditional 2D-convolution methods with the label propagation process BID24 .\nIt helps us better understand and compare the difference between them.\nBased on it, we propose a novel Depthwise Separable Graph Convolution (DSGC), which inherits the strength of depthwise separable convolution that has been extensively used in different state-of-the-art image classification frameworks including Inception Network BID22 , Xception Network BID4 and MobileNet (Howard et al., 2017) .\nCompared with previous graph and geometric methods, the DSGC is more expressive and aligns closer to the 
depthwise separable convolution network, and shares the desirable characteristic of small parameter size as in the depthwise separable convolution.\nIn the experiments section, we evaluate the DSGC and baselines on three different machine learning tasks.\nThe experiment results show that the performance of the proposed method is close to the standard convolution network in the image classification task on the CIFAR dataset.\nIt also outperforms previous graph convolution and geometric convolution methods in all tasks.\nFurthermore, we demonstrate that the proposed method can easily leverage the advanced techniques developed for the standard convolution network to enhance the model performance, such as the Inception module BID22 , the DenseNet architecture BID8 and the Squeeze-and-Excitation block BID7 .\nThe main contribution of this paper is threefold:\n• A unified view of traditional 2D-convolution and graph convolution methods by introducing depthwise separable convolution.\n• A novel Depthwise Separable Graph Convolution (DSGC) for spatial domain data.\n• We demonstrate the efficiency of the DSGC with extensive experiments and show that it can facilitate the advanced techniques of the standard convolution network to improve the model performance.", "role": "user" }, { "content": "We devise a novel Depthwise Separable Graph Convolution (DSGC) for the generic spatial domain data, which is highly compatible with depthwise separable convolution.", "role": "assistant" } ]
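The paper excerpt never writes DSGC itself down, so no attempt is made to reproduce it; for reference, this is the standard depthwise separable block it builds on (a per-channel spatial filter followed by a 1x1 pointwise channel mix), in PyTorch. Channel counts and kernel size are arbitrary choices.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Grid-structured special case of the convolution DSGC generalises."""

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        # groups=in_ch gives one spatial filter per channel, no mixing.
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        # The 1x1 convolution mixes channels with no spatial extent.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))

x = torch.randn(2, 16, 32, 32)            # batch, channels, height, width
y = DepthwiseSeparableConv(16, 32)(x)
print(y.shape)                            # torch.Size([2, 32, 32, 32])
```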
science.scicite_classification
science.scicite_classification.113
[ { "content": "Classify the intent of this citation sentence by choosing from: [\"Method\", \"Background\", \"Result\"].\n\nMethod: Cites for methodology\nBackground: Cites for background information\nResult: Cites for findings\n\nCitation: Quinolinic acid (Sigma, St. Louis) was used to lesion the hippocampal formation and caudate nucleus (Schwartz et al., 1983) as described (Worley et al., 1986a).", "role": "user" }, { "content": "Method", "role": "assistant" } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.18199
[ { "content": "Classify the topics of this COVID-19 literature paragraph by selecting from these categories:\n\n- Epidemic Forecasting: Predictions about COVID-19 spread\n- Treatment: Therapeutic methods or medication\n- Prevention: Measures to prevent spread\n- Mechanism: Biological mechanisms of the virus\n- Case Report: Report of a COVID-19 case\n- Transmission: How COVID-19 is transmitted\n- Diagnosis: Diagnosing COVID-19 in individuals\n\nYour output should be a valid JSON array with topics in double quotes and separated by commas. No additional text should be included.\n\nArticle:\n\nBACKGROUND: Cell entry of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) depends on binding of the viral spike (S) proteins to angiotensin-converting enzyme 2 and on S protein priming by TMPRSS2. Inhibition of TMPRSS2 may work to block or decrease the severity of SARS-CoV-2 infections. Intriguingly, TMPRSS2 is an androgen-regulated gene that is up-regulated in prostate cancer where it supports tumor progression and is involved in a frequent genetic translocation with the ERG gene. First- or second-generation androgen-deprivation therapies (ADTs) decrease the levels of TMPRSS2. Here we put forward the hypothesis that ADTs may protect patients affected by prostate cancer from SARS-CoV-2 infections. MATERIALS AND METHODS: We extracted data regarding 9280 patients (4532 males) with laboratory-confirmed SARS-CoV-2 infection from 68 hospitals in Veneto, one of the Italian regions that was most affected by the coronavirus disease 2019 (COVID-19) pandemic. The parameters used for each COVID-19-positive patient were sex, hospitalization, admission to intensive care unit, death, tumor diagnosis, prostate cancer diagnosis, and ADT. RESULTS: There were evaluable 9280 SARS-CoV-2-positive patients in Veneto on 1 April 2020. Overall, males developed more severe complications, were more frequently hospitalized, and had a worse clinical outcome than females. Considering only the Veneto male population (2.4 million men), 0.2% and 0.3% of non-cancer and cancer patients, respectively, tested positive for SARS-CoV-2. Comparing the total number of SARS-CoV-2-positive cases, prostate cancer patients receiving ADT had a significantly lower risk of SARS-CoV-2 infection compared with patients who did not receive ADT (OR 4.05; 95% CI 1.55-10.59). A greater difference was found comparing prostate cancer patients receiving ADT with patients with any other type of cancer (OR 4.86; 95% CI 1.88-12.56). CONCLUSION: Our data suggest that cancer patients have an increased risk of SARS-CoV-2 infections compared with non-cancer patients. However, prostate cancer patients receiving ADT appear to be partially protected from SARS-CoV-2 infections.", "role": "user" }, { "content": "[\"Treatment\"]", "role": "assistant" } ]
science.genia_ner
science.genia_ner.406
[ { "content": "Extract all unique entities of these types from the biomedical abstract: [\"CellLine\", \"DNA\", \"Protein\", \"CellType\", \"RNA\"].\n\nOutput a JSON object with entity types as keys and lists of extracted entities as values. If no entities found for a type, use [].\n\nAbstract:\n\nLymphocytes from CML patients lack a 47 kDa factor having affinity for a genomic sterol regulatory sequence. Deranged cellular cholesterol homeostasis has been widely recognized in the initiation as well as progression of various types of cancers including chronic myeloid leukaemia (CML). Since the human genomic sterol regulatory element (SRE) has been shown to regulate various key genes involved in this phenomenon, the present study revealed the existence of a unique 47 kDa protein factor having affinity for this SRE sequence in lymphocytes from normal subjects, as well as its absence in lymphocytes from untreated CML patients. However, this factor appeared when these CML patients achieved complete haematological remission (CHR) through alpha-interferon therapy. Furthermore, an inverse relationship was also observed between the LDL receptor gene expression at the transcriptional level and the binding affinity of this 47 kDa protein factor to the SRE sequence. Based upon these results we propose that this factor may have a role in pathophysiology of chronic myeloid leukaemia.", "role": "user" }, { "content": "{\"CellLine\": [], \"CellType\": [\"Lymphocytes\", \"lymphocytes\"], \"DNA\": [\"human genomic sterol regulatory element\", \"SRE sequence\", \"LDL receptor gene\", \"SRE\", \"genomic sterol regulatory sequence\"], \"Protein\": [\"alpha - interferon\", \"LDL receptor\", \"47 kDa protein factor\", \"47 kDa factor\"], \"RNA\": []}", "role": "assistant" } ]
science.healthver_entailment
science.healthver_entailment.4648
[ { "content": "Output a JSON object with \"verdict\" and \"evidence\" fields based on the claim and abstract below.\n\nClaim: no-one can ever recover from COVID-19\n\nAbstract:\nThe novel Coronavirus disease 2019 (COVID-19) is an illness caused due to Severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2).\nThe World Health Organization (WHO) has declared this outbreak a global health emergency and as on April 24, 2020, it has spread to 213 countries, with 25,91,015 confirmed cases and 742,855 cases have been recovered from COVID-19.\nIn this dreadful situation our team has already published an article in the Science of the Total Environment, which elaborates the various aspects of the SARS-CoV-2 infection.\nIn this situation, it is imperative to understand the possible outcome of COVID-19 recovered patients and determine if they have any other detrimental illnesses by longitudinal analysis to safeguard their life in future.\nIt is necessary to follow-up these recovered patients and performs comprehensive assessments for detection and appropriate management towards their psychological, physical, and social realm.\nThis urges us to suggest that it is highly important to provide counselling, moral support as well as a few recommended guidelines to the recovered patients and society to restore to normalcy.\nEpidemiological, clinical and immunological studies from COVID-19 recovered patients are particularly important to understand the disease and to prepare better for potential outbreaks in the future.\nLongitudinal studies on a larger cohort would help us to understand the in-depth prognosis as well as the pathogenesis of COVID-19.\nAlso, follow-up studies will help us provide more information for the development of vaccines and drugs for these kinds of pandemics in the future.\nHence, we recommend more studies are required to unravel the possible mechanism of COVID-19 infection and the after-effects of it to understand the characteristics of the virus and to develop the necessary precautionary measures to prevent it.", "role": "user" }, { "content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"The World Health Organization (WHO) has declared this outbreak a global health emergency and as on April 24, 2020, it has spread to 213 countries, with 25,91,015 confirmed cases and 742,855 cases have been recovered from COVID-19.\", \"In this situation, it is imperative to understand the possible outcome of COVID-19 recovered patients and determine if they have any other detrimental illnesses by longitudinal analysis to safeguard their life in future.\", \"Epidemiological, clinical and immunological studies from COVID-19 recovered patients are particularly important to understand the disease and to prepare better for potential outbreaks in the future.\"]}", "role": "assistant" } ]
science.chemprot_ner
science.chemprot_ner.1001
[ { "content": "Extract entities of these types from the text: [\"Chemical\", \"Protein\"].\n\nOutput a JSON object with entity types as keys and lists of extracted entities as values.\n\nText:\nReduced plasma oxytocin levels in female patients with borderline personality disorder.\nThe neuropeptide oxytocin is involved in social cognition and interaction across species and plays a crucial role in the regulation of affiliative behaviors. Oxytocin levels in cerebrospinal fluid (CSF), but also in plasma or urine, have been shown to be negatively associated with childhood traumata, aggressive behavior, and suicide attempts. Recently, an altered activity of the oxytocin system has been discussed to play a prominent role in borderline personality disorder (BPD), which is thought to be closely related to traumatic experiences in childhood and is characterized by (para)suicidal behaviors as well as aggressive outbursts. In the present study, we compared plasma oxytocin levels of women with and without BPD in the follicular phase and assessed the relationship between oxytocin concentrations and childhood traumata. Women diagnosed with BPD had significantly reduced oxytocin concentrations, even after controlling for estrogen, progesterone, and contraceptive intake. In addition, plasma oxytocin correlated negatively with experiences of childhood traumata, in particular with emotional neglect and abuse. The results of mediation analyses do not support a model of oxytocin being a prominent mediator in the link between childhood trauma and BPD. Thus, the findings indicate dysregulations in the oxytocin system of patients diagnosed with BPD with more longitudinal research being necessary to disentangle the relationship between childhood adversities, oxytocin system, and psychopathology.", "role": "user" }, { "content": "{\"Chemical\": [\"estrogen\", \"Oxytocin\", \"progesterone\", \"oxytocin\"], \"Protein\": [\"Oxytocin\", \"oxytocin\"]}", "role": "assistant" } ]
science.bioasq_yesno_qa
science.bioasq_yesno_qa.683
[ { "content": "Based on the following biomedical documents, answer the question with either yes or no.\n\nQuestion: Has ATF4 transcription factor been linked to cancer and neoplastic transformation?\n\nContext: aken together, we discovered a novel axis of BCL10-regulated OSCC progression via STAT1/ATF4/S100P/P65 signaling, which could predict the prognosis of OSCC and will be beneficial for developing therapeutic strategy against advanced OSCC\ns a result, the level of phosphorylated Eukaryotic Initiation Factor 2 alpha (eIF2α) is markedly elevated, resulting in the promotion of a pro-adaptive signaling pathway by the inhibition of global protein synthesis and selective translation of Activating Transcription Factor 4 (ATF4).\nMany cancers overexpress ATF4, a stress-induced transcription factor that promotes cell survival under hypoxic conditions and other stresses of the tumor microenvironment, but the potential contributions of ATF4 to oncogenesis itself have been little explored. Here, we report that ATF4 promotes oncogene-induced neoplastic transformation by suppressing the expression of cellular senescence-associated genes.\nHere, we report that ATF4 promotes oncogene-induced neoplastic transformation by suppressing the expression of cellular senescence-associated genes.\nElevated levels of ATF4 were sufficient to suppress expression of these proteins and drive oncogenic transformation.\nOur findings define a central function for ATF4 in promoting oncogenic transformation by suppressing a central pathway of cellular senescence.<br>\nATF4 expression is upregulated in cancer.\nActivating transcription factor 4 (ATF4), an endoplasmic reticulum stress-inducible transcription factor, plays important roles in cancer progression and resistance to therapy.\nActivating transcription factor 4 (ATF4) is a stress-induced transcription factor that is frequently upregulated in cancer cells.\nOur findings define a central function for ATF4 in promoting oncogenic transformation by suppressing a central pathway of cellular senescence.\nStress-regulated transcription factor ATF4 promotes neoplastic transformation by suppressing expression of the INK4a/ARF cell senescence factors.\nActivating transcription factor 4 (ATF4), a member of the ATF/CREB family, has been reported to be related to tumor angiogenesis", "role": "user" }, { "content": "yes", "role": "assistant" } ]
science.qasa_abstractive_qa
science.qasa_abstractive_qa.569
[ { "content": "Read this passage from a research paper and answer the question.\n\nContext: In order to address the prohibitively large language action spaces, we explicitly consider the set of response candidates that are generated from the fine-tuned GPT-2. The GPT-Critic selects the\n\nQuestion: How does the proposed method address the issue of large action spaces?", "role": "user" }, { "content": "The proposed method consider the set of response candidates that are generated from the fine-tuned GPT-2 as action spaces.", "role": "assistant" } ]
science.cdr_ner
science.cdr_ner.95
[ { "content": "Find all entities of these types in the text: [\"Chemical\", \"Disease\"].\n\nOutput a JSON object.\n\nText:\nPrazosin-induced stress incontinence. A case of genuine stress incontinence due to prazosin, a common antihypertensive drug, is presented. Prazosin exerts its antihypertensive effects through vasodilatation caused by selective blockade of postsynaptic alpha-1 adrenergic receptors. As an alpha-blocker, it also exerts a significant relaxant effect on the bladder neck and urethra. The patient's clinical course is described and correlated with initial urodynamic studies while on prazosin and subsequent studies while taking verapamil. Her incontinence resolved with the change of medication. The restoration of continence was accompanied by a substantial rise in maximum urethral pressure, maximum urethral closure pressure, and functional urethral length. Patients who present with stress incontinence while taking prazosin should change their antihypertensive medication before considering surgery, because their incontinence may resolve spontaneously with a change in drug therapy.", "role": "user" }, { "content": "{\"Chemical\": [\"Prazosin\", \"prazosin\", \"verapamil\"], \"Disease\": [\"incontinence\", \"stress incontinence\"]}", "role": "assistant" } ]
science.chemprot_re
science.chemprot_re.662
[ { "content": "Identify relationships between chemical compounds and genes in this abstract. Use only these relation types: ['DOWNREGULATOR', 'SUBSTRATE', 'INDIRECT-REGULATOR', 'PART-OF', 'MODULATOR-ACTIVATOR', 'REGULATOR', 'INHIBITOR', 'COFACTOR', 'UPREGULATOR', 'ACTIVATOR', 'ANTAGONIST', 'NOT', 'INDIRECT-DOWNREGULATOR', 'SUBSTRATE_PRODUCT-OF', 'INDIRECT-UPREGULATOR', 'AGONIST', 'PRODUCT-OF', 'MODULATOR', 'DIRECT-REGULATOR', 'UNDEFINED', 'AGONIST-INHIBITOR', 'AGONIST-ACTIVATOR', 'MODULATOR-INHIBITOR'].\n\nFormat your output as a JSON array with entries like [\"<Entity_A>\", \"<RELATION_A_B>\", \"<Entity_B>\"].\n\nAbstract:\nDiverse effects of macromolecular crowding on the sequential glycan-processing pathway involved in glycoprotein quality control. Compared with in vitro conditions, the intracellular environment is highly crowded with biomolecules; this has numerous effects on protein functions, including enzymatic activity. We examined the effects of macromolecular crowding on glycan processing of N-glycoprotein in the endoplasmic reticulum as a model sequential metabolic pathway. Experiments with synthetic substrates of physiological glycan structure clearly showed that the first half of the pathway (glucose trimming) was accelerated, whereas the second (mannose trimming) was decelerated under molecular crowding conditions. Furthermore, calreticulin, a lectin-like molecular chaperone, bound more strongly to a glycan-processing intermediate under these conditions. This study demonstrates the diverse effects of molecular crowding on sequential enzymatic processing, and the importance of the effects of macromolecular crowding on in vitro assays for understanding sequential metabolic pathways.", "role": "user" }, { "content": "[]", "role": "assistant" } ]
science.bioasq_factoid_qa
science.bioasq_factoid_qa.1049
[ { "content": "Answer the following biomedical question based on the given context. Provide only the answer without additional explanation.\n\nQuestion: Which algorithm has been developed for finding conserved non-coding elements (CNEs)?\n\nContext: CNEFinder: finding conserved non-coding elements in genomes.\nTowards this direction, identifying sets of CNEs in a wide range of organisms is an important first step. Currently, there are no tools published in the literature for systematically identifying CNEs in genomes.Results: We fill this gap by presenting CNEFinder; a tool for identifying CNEs between two given DNA sequences with user-defined criteria.\nResults We fill this gap by presenting CNEFinder; a tool for identifying CNEs between two given DNA sequences with user-defined criteria.", "role": "user" }, { "content": "CNEFinder", "role": "assistant" } ]
science.medmentions_ner
science.medmentions_ner.2590
[ { "content": "Extract all unique entities from this biomedical abstract. Only choose entities from these types: [\"Virus\", \"MedicalDevice\", \"BodySubstance\", \"SpatialConcept\", \"BiologicFunction\", \"ResearchActivity\", \"AnatomicalStructure\", \"ProfessionalOrOccupationalGroup\", \"HealthCareActivity\", \"BiomedicalOccupationOrDiscipline\", \"Bacterium\", \"Chemical\", \"InjuryOrPoisoning\", \"IntellectualProduct\", \"Finding\", \"ClinicalAttribute\", \"Organization\", \"Eukaryote\", \"Food\", \"BodySystem\", \"PopulationGroup\"].\n\nOutput a JSON object where keys are entity types and values are lists of extracted entities. If no entities found for a type, use empty list [].\n\nAbstract: Efficacy of peptide receptor radionuclide therapy with (177)Lu-octreotate in metastatic pulmonary neuroendocrine tumors: a dual-centre analysis There is lack of data on the specific benefit of peptide receptor radionuclide therapy (PRRT) in neuroendocrine tumors (NET) of pulmonary origin. This dual- centre study aimed to assess outcome and toxicity of standardized PRRT with (177)Lu-octreotate in a patient population of advanced pulmonary NET of grade 1-2. We retrospectively assessed 22 consecutively patients treated with 4 intended cycles at 3 monthly intervals (mean activity per cycle 7.8±0.68 GBq). In a median follow-up period of 54 months, no significant nephrotoxicity (≥ grade 3) was observed. Reversible hematotoxicity (grade 3) occurred in 3 patients (13.6%). Treatment response consisted of partial response in 6 (27.3%), stable disease in 9 (40.9%), and progressive disease in 7 (31.8%) patients. Median progression-free survival (PFS) and overall survival (OS) was 27 (95% CI, 9-45) and 42 months (95% CI, 25-59), respectively. High hepatic tumor load (> 50%) and high plasma chromogranin A (> 600 ng/mL) were negative baseline predictors for PFS and OS on univariate analysis, CgA remained significant on multivariate analysis (PFS, P=0.011; OS, P=0.026). Disease progression despite PRRT was associated with shorter survival (median OS 15 vs 53 mo, P<0.001). Despite a higher incidence of treatment failure compared to NET of other origins, the observed substantial and sustained disease stabilization (median PFS of 27 mo, disease control rate of > 2/3 of pts) indicates considerable efficacy of (177)Lu-octreotate in pulmonary NET.", "role": "user" }, { "content": "{\"AnatomicalStructure\": [\"pulmonary\"], \"Bacterium\": [], \"BiologicFunction\": [\"hematotoxicity\", \"Disease progression\", \"disease\", \"progressive disease\", \"NET\", \"neuroendocrine tumors\", \"hepatic tumor\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [\"plasma chromogranin A\", \"(177)Lu-octreotate\", \"CgA\", \"peptide receptor\"], \"ClinicalAttribute\": [\"Treatment response\"], \"Eukaryote\": [], \"Finding\": [\"stable disease\", \"treatment failure\"], \"Food\": [], \"HealthCareActivity\": [\"radionuclide therapy\", \"stabilization\", \"disease control\", \"PRRT\", \"treated with\", \"follow-up\"], \"InjuryOrPoisoning\": [\"nephrotoxicity\", \"toxicity\"], \"IntellectualProduct\": [\"grade 3\", \"grade 1-2\"], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [\"analysis\", \"study\"], \"SpatialConcept\": [], \"Virus\": []}", "role": "assistant" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.17182
[ { "content": "Write a literature review chapter based on these paper abstracts.\nLiterature Review Title: Preserving the Fabric of Our Lives: A Survey of Web Preservation Initiatives \nChapter Title: Automatic harvesting approaches \n1. Abstract of Cited Paper (BIB001): With the popularity of the World Wide Web and the recognition of its worthiness of being archived we find numerous projects aiming at creating large-scale repositories containing excerpts and snapshots of Web data. Interfaces are being created that allow users to surf through time, analyzing the evolution of Web pages, or retrieving information using search interfaces. Yet, with the timeline and metadata available in such a Web archive, additional analyzes that go beyond mere information exploration, become possible. In this paper we present the AOLAP project building a Data Warehouse of such a Web archive, allowing its analysis and exploration from different points of view using OLAP technologies. Specifically, technological aspects such as operating systems and Web servers used, geographic location, and Web technology such as the use of file types, forms or scripting languages, may be used to infer e.g. technology maturation or impact.\n\nLiterature Review:", "role": "user" }, { "content": "The Internet Archive (http://www.archive.org/) and the Swedish Royal Library's Kulturarw 3 project (http://www.kb.se/kw3/) were amongst the first to adopt the automatic harvesting approach. In this, Web crawler programs -similar to those used by Web search services -are used to follow links and download content according to particular collection rules. The Kulturarw 3 crawler, for example, is set-up to only collect Web sites in the .se domain, those sites physically located in Sweden, and sites in other domains selected for their relevance to Sweden . The Internet Archive collects the Web on a much broader scale, but their crawlers will not harvest sites (or parts of them) protected by the robot exclusion protocol. A number of other national-based initiatives have followed the automatic approach, most notably the Helsinki University Library in its experiments with the NEDLIB harvester BIB001 and the Austrian On-Line Archive (AOLA) BIB002 .", "role": "assistant" } ]
science.scicite_classification
science.scicite_classification.5681
[ { "content": "Classify the intent of this citation sentence by choosing from: [\"Method\", \"Background\", \"Result\"].\n\nMethod: Cites for methodology\nBackground: Cites for background information\nResult: Cites for findings\n\nCitation: The rationale behind the proposed solution is that autoencoders are able to capture a reduced dimensionality representation of the input data, still retaining important characteristic information, as shown in [27, 28] for forensic purposes.", "role": "user" }, { "content": "Background", "role": "assistant" } ]
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.7939
[ { "content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nTwo new algorithms for the all-reduce operation optimized for imbalanced process arrival patterns (PAPs) are presented: (1) sorted linear tree, (2) pre-reduced ring as well as a new way of online PAP detection, including process arrival time estimations, and their distribution between cooperating processes was introduced. The idea, pseudo-code, implementation details, benchmark for performance evaluation and a real case example for machine learning are provided. The results of the experiments were described and analyzed, showing that the proposed solution has high scalability and improved performance in comparison with the usually used ring and Rabenseifner algorithms.\n\nCited Abstract(s):\n[1]: Reduction algorithms are optimized only under the assumption that all processes commence the reduction simultaneously. Research on process arrival times has shown that this is rarely the case. Thus, all benchmarking methodologies that take into account only balanced arrival times might not portray a true picture of real-world algorithm performance. In this paper, we select a subset of four reduction algorithms frequently used by library implementations and evaluate their performance for both balanced and imbalanced process arrival times. The main contribution of this paper is a novel imbalance robust algorithm that uses pre-knowledge of process arrival times to construct reduction schedules. The performance of selected algorithms was empirically evaluated on a 128 node subset of the Partnership for Advanced Computing in Europe CURIE supercomputer. The reported results show that the new imbalance robust algorithm universally outperforms all the selected algorithms, whenever the reduction schedule is precomputed. We find that when the cost of schedule construction is included in the total runtime, the new algorithm outperforms the selected algorithms for problem sizes greater than 1 MiB.", "role": "user" }, { "content": "[1] continued the work with optimization of the MPI reduction operations dealing with the imbalanced PAPs. The main contribution is a new algorithm, called Clairvoyant, scheduling the exchange of data segments (fixed parts of reduced data) between reducing processes, without assumption of data atomicity, and taking into account PATs, thus causing as many as possible segments to be reduced by the early arriving processes. The idea of the algorithm bases on the assumption that the PAP is known during process scheduling. The paper provided a theoretical background for the PAPs, with its own definition of the time imbalances, including a PAT vector, absolute imbalance, absorption time as well as their normalized versions, followed by the analysis of the proposed algorithm, and its comparison to other typically used reduction algorithms. Its pseudo-code was described and the implementation details were roughly provided with two examples of its execution for balanced and imbalanced PAPs. Afterwards the performed experiments were described, including details about used mini-benchmark and the results of practical comparison with other solutions (typical algorithms with no support for imbalanced PAPs) were provided. Finally, the results of the experiments showing advantage of the proposed algorithm were presented and discussed.", "role": "assistant" } ]
science.bioasq_general_qa
science.bioasq_general_qa.1094
[ { "content": "Answer the following question using the provided context. Only provide the answer without any additional text.\n\nQuestion: What is the use of the Canadian C-Spine Rule?\nContext: PURPOSE: Although, Canadian C-spine rule and the National Emergency X-Radiography Utilization Study (NEXUS) criteria in ruling out clinically important cervical spine injuries have been validated using large prospective studies, no consensus exist as to which rule should be endorsed.\nOBJECTIVE: The Canadian C-Spine Rule (CCR) and the National Emergency X-Radiography Utilization Study (NEXUS) criteria are two commonly used clinical decision rules which use midline cervical spine (c-spine) tenderness on palpation as an indication for c-spine imaging post-trauma.\nBACKGROUND: A consistent approach to cervical spine injury (CSI) clearance for patients 65 and older remains a challenge. Clinical clearance algorithms like the National Emergency X-Radiography Utilisation Study (NEXUS) criteria have variable accuracy and the Canadian C-spine rule excludes older patients.\n. Clinical decision rules such as the Canadian C-Spine Rule are frequently used to risk-stratify patients needing radiography.\nPragmatic Strategy Empowering Paramedics to Assess Low-Risk Trauma Patients With the Canadian C-Spine Rule and Selectively Transport Them Without Immobilization: Protocol for a Stepped-Wedge Cluster Randomized Trial.\nThe high sensitivity of the Canadian C-spine Rule (CCR) prevents missing C-spine injuries while limiting the amount of unnecessary radiologic examinations.\nThe Canadian C-spine rule clinically clears cervical spine fracture without imaging.\nOBJECTIVE: The Canadian C-Spine Rule (CCR) and the National Emergency X-Radiography Utilization Study (NEXUS) criteria are two commonly used clinical decision rules which use midline cervical spine (c-spine) tenderness on palpation as an indication for c-spine imaging post-t\nINTRODUCTION: The Canadian C-Spine Rule (CCR) is a clinical decision aid to facilitate the safe removal of cervical collars in the alert, orientated, low-risk adult trau\nnt. The Canadian C-Spine Rule is designed to allow physicians to be more selective and accurate in ordering C-spine radiography, and to rapidly clear the C-spine without the need for radiography in many patie\ndecision rules such as the Canadian C-Spine Rule are frequently used to risk-stratify patients needing radiography. The le\nThe Canadian c-spine rule (CCR) allows safe, reproducible use of radiography in alert, stable patients with potential c-spine injury in the emergency setting [Stiell, I., Cle\nThe Canadian C-spine rule clinically clears cervical spine fracture without imaging\nefficient. The Canadian C-Spine Rule is designed to allow physicians to be more selective and accurate in ordering C-spine radiography, and to rapidly clear the C-spine without the need for radiography in ma\nSTUDY OBJECTIVE: We designed the Canadian C-Spine Rule for the clinical clearance of the cervical spine, without need for diagnostic imaging, in alert and stable\nThe Canadian c-spine rule (CCR) allows safe, reproducible use of radiography in alert, stable patients with potential c-spine injury in the emergency setting [Stiell, I., Clement, C., McKnight, R., Brison, R., Schull, M., Lowe, B., Worthington, J., Eisenhauer, M., Cass, D., Greenberg, G., MacPhail, I., Dreyer, J., Lee, J., Bandiera, G., Reardon, M., Holroyd, B., Lesiuk, H., G. 
Wells, 2003.", "role": "user" }, { "content": "The Canadian C-spine rule clinically clears cervical spine fracture without imaging.", "role": "assistant" } ]
science.scientific_lay_summarisation_elife_single_doc_summ
science.scientific_lay_summarisation_elife_single_doc_summ.512
[ { "content": "Summarize the article in an uncomplicated manner.\n\nTitle: Transcranial magnetic stimulation (TMS) inhibits cortical dendrites\nArticle:\nAbstract:\nOne of the leading approaches to non-invasively treat a variety of brain disorders is transcranial magnetic stimulation( TMS). However, despite its clinical prevalence, very little is known about the action of TMS at the cellular level let alone what effect it might have at the subcellular level( e. g. dendrites). Here, we examine the effect of single-pulse TMS on dendritic activity in layer 5 pyramidal neurons of the somatosensory cortex using an optical fiber imaging approach. We find that TMS causes GABAB-mediated inhibition of sensory-evoked dendritic Ca2+ activity. We conclude that TMS directly activates fibers within the upper cortical layers that leads to the activation of dendrite-targeting inhibitory neurons which in turn suppress dendritic Ca2+ activity. This result implies a specificity of TMS at the dendritic level that could in principle be exploited for investigating these structures non-invasively.\nIntroduction:\nTranscranial magnetic stimulation( TMS) holds great promise as a non-invasive method that can be used to both enhance and impair cognitive abilities( Eldaief et al., 2013). As such, it has proved to be an important tool for addressing basic questions about brain function as well as for diagnostic and therapeutic purposes( Fregni and Pascual-Leone, 2007). Stimulation is produced by generating a brief, high-intensity magnetic field by passing a brief electric current through a magnetic coil. As a general rule, TMS affects the action of feedback projections( Juan and Walsh, 2003; Hung et al., 2005; Camprodon et al., 2010; Zanto et al., 2011) leading to a disruption in perception( Shimojo et al., 2001; Kammer et al., 2005). Due to this influence on higher order cognitive processing, TMS is not only useful for examining the interactions of different brain areas( Pascual-Leone and Walsh, 2001; Silvanto et al., 2005; Murd et al., 2012), but it has also been used as a therapeutic method to alleviate some of the symptoms of hemispatial neglect( Nyffeler et al., 2009), schizophrenia including auditory hallucinations( Giesel et al., 2012), pain, depression and epilepsy. Despite great interest( Mueller et al., 2014), the cellular influence of TMS has yet to be ascertained since the precise effects of TMS at the level of a single neuron are very difficult to gauge, particularly in humans. The neural architecture of the brain means the neural processes which receive and transform most synaptic inputs, the dendrites, extend into the upper layers where TMS would be most effective. Since dendrites can shape synaptic input to be greater or less than their linear sum( Polsky et al., 2004; Losonczy and Magee, 2006; Larkum et al., 2009) thereby altering the firing properties of the neuron( Larkum et al., 1999), the active properties of dendrites have attracted attention and have been linked to cognitive processes and feature selectivity( Lavzin et al., 2012; Xu et al., 2012; Smith et al., 2013; Cichon and Gan, 2015). Furthermore, it has been suggested that active dendritic processing underlies a more general principle of cortical operation that facilitates parallel associations between different cortical regions and the thalamus( Larkum, 2013) which is controlled by dendritic inhibition( Palmer et al., 2012; Lovett-Barron and Losonczy, 2014). 
Establishing the validity of this hypothesis will have important ramifications for understanding brain function as a whole. TMS presents a most promising way to study the causal relationship between active dendritic properties and cognition but only if its effect on dendrites can be understood and ultimately controlled. Using an optical fiber imaging approach, here we present a study examining the effect of TMS on sensory-evoked dendritic activity in layer 5 pyramidal neurons of the somatosensory cortex. We find that TMS suppresses dendritic Ca2+ activity evoked by tactile stimulation and that this suppression can be abolished by blocking GABAB receptors and excitatory transmission in the upper layers of the cortex. We uncover the cellular mechanisms underlying TMS-evoked inhibition, demonstrating that TMS of the rat brain activates long-range fibers that leads to the activation of dendrite-targeting inhibitory neurons in the upper cortical layers which in-turn suppress dendritic Ca2+ activity. Since indirect brain stimulation shows immense promise in treating many neurological disorders, such as epilepsy( Berenyi et al., 2012), this study not only illustrates the cellular mechanisms underlying TMS but also highlights dendrites as potential targets for therapeutic approaches.\nDiscussion:\nThe results from this study demonstrate the subcellular effect of TMS on dendritic Ca2+ activity for the first time. TMS alone did not directly activate L5 pyramidal dendrites but instead activated axonal processes coursing through the upper layers and synapsing onto GABAB-mediating interneurons in L1. A similar form of synaptic activation of interneurons leading to inhibition in L5 pyramidal neuron dendrites has been shown previously with activation of callosal input to the cortex( Palmer et al., 2012). This form of 'silent' inhibition involves the activation of inhibitory conductances which are not detectable at the soma except through their eventual influence on the generation of action potential output. These results therefore highlight an effect of single-pulse TMS on the cortical network, which involves the activation of a specific cortical microcircuit. The action of TMS at the cellular and network levels is extremely complex and likely constitutes the activation of a range of different cell types leading to multiple effects( Rossi et al., 2009; Siebner et al., 2009). A recent combined experimental and theoretical investigation of the biophysical underpinnings of TMS suggested that the generation of a magnetic field is most likely to evoke firing in cell bodies as opposed to dendrites or axons( Pashut et al., 2014). This is consistent with our finding that no dendritic activity was observed with TMS stimulation alone indicating there was no direct activation of the pyramidal cell dendrites and contrasted with the signals found in both L1 and L2/3 neurons following a TMS pulse. It is also consistent with the activation of neurogliaform interneurons in L1 that target the dendrites of L5 pyramidal neurons and suppress Ca2+ activity( Ziemann, 2010; Terao and Ugawa, 2002; Pashut et al., 2011; Tamás et al., 2003; Oláh et al., 2007; Oláh et al., 2009). This form of dendritic inhibition arises from the metabotropic inactivation of L-type( Cav1) channels in the apical dendrite that underlie the dendritic Ca2+ activity( Wozny and Williams, 2011) and can last several hundred milliseconds( Pérez-Garci et al., 2006). 
The suppression of Ca2+ channels can significantly reduce the action potential firing of the pyramidal neuron even when the driving input to the pyramidal neuron is not predominantly dendritic( Palmer et al., 2012). The long time-scale of this form of inhibition( ~500 ms) raises interesting consequences for the participation of pyramidal neurons in the cortical network. The similarity of some of the effects of TMS to interhemispheric inhibition has been noted previously in human studies( Jiang et al., 2013; Lee, 2014; Pérez-Garci et al., 2013; Ferbert et al., 1992) including its mediation via GABAB receptor-activation( Ferbert et al., 1992; Kobayashi et al., 2003), although these investigations could not examine the cellular mechanisms. For this study, we used a large coil typically used in humans and a smaller( 25 mm) coil designed for use in rodents. The effect on dendritic Ca2+ was the same in both cases. Clearly, the use of TMS coils with rats where the magnetic field generated is comparable to the size of the rat brain itself raises the possibility that the effects in humans may differ. However, the cortical feedback fibers which synapse onto the tuft dendrites of pyramidal neurons are located in the part of the cortex closest to the magnetic coil( i. e. L1) in both rats and humans( Larkum et al., 1999; Larkum, 2013), suggesting there would be overlap with respect to the influence of TMS. The aim of this study was to examine the effect of TMS on dendritic activity in L5 neocortical neurons rather than a general study on the overall effects of TMS at the cellular level. We were interested in this, in particular, because we have previously hypothesized that cell assemblies over different cortical regions might be associated through the activation of dendritic Ca2+ spikes in these neurons( Larkum, 2013). According to this hypothesis, dendritic activity in these neurons is a marker of important cognitive processes. The finding that TMS targets this mechanism is therefore highly relevant to the rationale of the study and may be instructive in understanding current applications of TMS. For instance, TMS has been used to alleviate some of the symptoms of hemispatial neglect( Nyffeler et al., 2009) and auditory hallucinations( Giesel et al., 2012) via unknown inhibitory processes. In conclusion, the results presented here indicate that the inhibitory actions of TMS is due to the recruitment of upper cortical layer interneurons mediating both GABAA and GABAB-receptor-activated inhibition in the dendrites of pyramidal neurons. This may have implications for the interpretation of results in humans using TMS as a form of 'virtual lesion'( Lee et al., 2007).", "role": "user" }, { "content": "The brain’s billions of neurons communicate with one another using electrical signals . Applying a magnetic field to a small area of the scalp can temporarily disrupt these signals by inducing small electrical currents in the brain tissue underneath . The currents interfere with the brain’s own electrical signals and temporarily disrupt the activity of the stimulated brain region . This technique , which is known as transcranial magnetic stimulation , is often used to investigate the roles of specific brain regions . By examining what happens when a region is briefly taken ‘offline’ , it is possible to deduce what that area normally does . Transcranial magnetic stimulation is also used to treat brain disorders ranging from epilepsy to schizophrenia without the need for surgery or drugs . 
But despite its widespread usage , little is known about how transcranial magnetic stimulation affects individual neurons . Neurons are made up of a cell body , which has numerous short branches called dendrites , and a cable-like structure called the axon . Neurons signal to each other by releasing chemical messengers across junctions called synapses . The chemical signals are generally released from the axon of one neuron and bind to receptor proteins on a dendrite on another neuron to stimulate electrical activity in the receiving neuron . Murphy et al . have now investigated how transcranial magnetic stimulation affects the activity of dendrites from neurons within the cortex of the rat brain . This revealed that the magnetic fields stimulate other neurons that inhibit the activity of dendrites from neurons within the deeper layers of the cortex . The inhibition process depends on a type of receptor protein in the dendrites called GABAB receptors; blocking these receptors prevents transcranial magnetic stimulation from altering the activity of stimulated brain regions . The processes occurring in these dendrites have been linked to cognitive function . The next challenge will be to integrate the non-invasive transcranial magnetic stimulation approach with cognitive tests in humans that can now manipulate dendritic activity to test their importance under various circumstances .", "role": "assistant" } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.7397
[ { "content": "Classify the topics of this COVID-19 literature paragraph by selecting from these categories:\n\n- Epidemic Forecasting: Predictions about COVID-19 spread\n- Treatment: Therapeutic methods or medication\n- Prevention: Measures to prevent spread\n- Mechanism: Biological mechanisms of the virus\n- Case Report: Report of a COVID-19 case\n- Transmission: How COVID-19 is transmitted\n- Diagnosis: Diagnosing COVID-19 in individuals\n\nYour output should be a valid JSON array with topics in double quotes and separated by commas. No additional text should be included.\n\nArticle:\n\nINTRODUCTION: Healthcare organisations have had to make adaptations to reduce the impact of the Coronavirus 2019 (COVID-19) pandemic. This has necessitated urgent reconfiguration within inflammatory bowel disease (IBD) services to ensure safety of patients and staff and seamless continuity of care provision. AIM: To describe the adaptations made by a large inflammatory bowel disease service, caring for over 3,500 IBD patients, in response to the COVID-19 pandemic. METHODS: A diary record of responses to the pandemic were logged, and meeting minutes were reviewed. Data were recorded from IBD advice lines, multidisciplinary team (MDT) meeting minutes, infusion unit attendances, and electronic referral systems for the 8-week period from 9 March 2020 until 2 May 2020. Descriptive analysis was performed. RESULTS: The IBD service at Hull University Teaching Hospitals NHS Trust (IBD Hull) instituted rapid structural and functional changes to the service. Outpatient services were suspended and substituted by virtual consultations, and inpatient services were reduced and moved to ambulatory care where possible. The delivery of biologic and immunomodulatory therapies was significantly modified to ensure patient and staff safety. There was a substantial increase in IBD advice line calls. CONCLUSION: The rapidly evolving COVID-19 pandemic required a prompt response, regular reassessment and planning, and continues to do so. We share our experience in of the successful adaptations made to our IBD service.", "role": "user" }, { "content": "[\"Prevention\"]", "role": "assistant" } ]
science.scitldr_aic
science.scitldr_aic.1344
[ { "content": "Summarize the paper:\n\nPaper text:\n\nThis work presents an exploration and imitation-learning-based agent capable of state-of-the-art performance in playing text-based computer games.\nText-based computer games describe their world to the player through natural language and expect the player to interact with the game using text.\nThese games are of interest as they can be seen as a testbed for language understanding, problem-solving, and language generation by artificial agents.\nMoreover, they provide a learning environment in which these skills can be acquired through interactions with an environment rather than using fixed corpora. \n\nOne aspect that makes these games particularly challenging for learning agents is the combinatorially large action space.\n\nExisting methods for solving text-based games are limited to games that are either very simple or have an action space restricted to a predetermined set of admissible actions.\nIn this work, we propose to use the exploration approach of Go-Explore (Ecoffet et al., 2019) for solving text-based games.\nMore specifically, in an initial exploration phase, we first extract trajectories with high rewards, after which we train a policy to solve the game by imitating these trajectories.\n\nOur experiments show that this approach outperforms existing solutions in solving text-based games, and it is more sample efficient in terms of the number of interactions with the environment.\nMoreover, we show that the learned policy can generalize better than existing solutions to unseen games without using any restriction on the action space.\nText-based games became popular in the mid 80s with the game series Zork (Anderson & Galley, 1985) resulting in many different text-based games being produced and published (Spaceman, 2019) .\nThese games use a plain text description of the environment and the player has to interact with them by writing natural-language commands.\nRecently, there has been a growing interest in developing agents that can automatically solve text-based games by interacting with them.\nThese settings challenge the ability of an artificial agent to understand natural language, common sense knowledge, and to develop the ability to interact with environments using language (Luketina et al., 2019; Branavan et al., 2012) .\nSince the actions in these games are commands that are in natural language form, the major obstacle is the extremely large action space of the agent, which leads to a combinatorially large exploration problem.\nIn fact, with a vocabulary of N words (e.g. 20K) and the possibility of producing sentences with at most m words (e.g. 7 words), the total number of actions is O(N m ) (e.g. 
20K 7 ≈ 1.28e 30 ).\nTo avoid this large action space, several existing solutions focus on simpler text-based games with very small vocabularies where the action space is constrained to verb-object pairs (DePristo & Zubek, 2001; Narasimhan et al., 2015; Zelinka, 2018) .\nMoreover, many existing works rely on using predetermined sets of admissible actions (He et al., 2015; Tessler et al., 2019; Zahavy et al., 2018) .\nHowever, a more ideal, and still under explored, alternative would be an agent that can operate in the full, unconstrained action space of natural language that can systematically generalize to new text-based games with no or few interactions with the environment.\nTo address this challenge, we propose to use the idea behind the recently proposed GoExplore (Ecoffet et al., 2019) algorithm.\nSpecifically, we propose to first extract high reward trajectories of states and actions in the game using the exploration methodology proposed in Go-Explore and then train a policy using a Seq2Seq (Sutskever et al., 2014) model that maps observations to actions, in an imitation learning fashion.\nTo show the effectiveness of our proposed methodology, we first benchmark the exploration ability of Go-Explore on the family of text-based games called CoinCollector .\nThen we use the 4,440 games of \"First TextWorld Problems\" (Côté, 2018) , which are generated using the machinery introduced by , to show the generalization ability of our proposed methodology.\nIn the former experiment we show that Go-Explore finds winning trajectories faster than existing solutions, and in the latter, we show that training a Seq2Seq model on the trajectories found by Go-Explore results in stronger generalization, as suggested by the stronger performance on unseen games, compared to existing competitive baselines (He et al., 2015; Narasimhan et al., 2015) .\nReinforcement Learning Based Approaches for Text-Based Games Among reinforcement learning based efforts to solve text-based games two approaches are prominent.\nThe first approach assumes an action as a sentence of a fixed number of words, and associates a separate Qfunction (Watkins, 1989; Mnih et al., 2015) with each word position in this sentence.\nThis method was demonstrated with two-word sentences consisting of a verb-object pair (e.g. take apple) (DePristo & Zubek, 2001; Narasimhan et al., 2015; Zelinka, 2018; Fulda et al., 2017) .\nIn the second approach, one Q-function that scores all possible actions (i.e. sentences) is learned and used to play the game (He et al., 2015; Tessler et al., 2019; Zahavy et al., 2018) .\nThe first approach is quite limiting since a fixed number of words must be selected in advance and no temporal dependency is enforced between words (e.g. lack of language modelling).\nIn the second approach, on the other hand, the number of possible actions can become exponentially large if the admissible actions (a predetermined low cardinality set of actions that the agent can take) are not provided to the agent.\nA possible solution to this issue has been proposed by Tao et al. 
(2018) , where a hierarchical pointer-generator is used to first produce the set of admissible actions given the observation, and subsequently one element of this set is chosen as the action for that observation.\nHowever, in our experiments we show that even in settings where the true set of admissible actions is provided by the environment, a Q-scorer (He et al., 2015) does not generalize well in our setting (Section 5.2 Zero-Shot) and we would expect performance to degrade even further if the admissible actions were generated by a separate model.\nLess common are models that either learn to reduce a large set of actions into a smaller set of admissible actions by eliminating actions (Zahavy et al., 2018) or by compressing them in a latent space (Tessler et al., 2019) .\nExperimental results show that our proposed Go-Explore exploration strategy is a viable methodology for extracting high-performing trajectories in text-based games.\nThis method allows us to train supervised models that can outperform existing models in the experimental settings that we study.\nFinally, there are still several challenges and limitations that both our methodology and previous solutions do not fully address yet.\nFor instance:\nState Representation The state representation is the main limitation of our proposed imitation learning model.\nIn fact, by examining the observations provided in different games, we notice a large overlap in the descriptions (D) of the games.\nThis overlap leads to a situation where the policy receives very similar observations, but is expected to imitate two different actions.\nThis shows especially in the joint setting of CookingWorld, where the 222 games are repeated 20 times with different entities and room maps.\nIn this work, we opted for a simple Seq2Seq model for our policy, since our goal is to show the effectiveness of our proposed exploration methods.\nHowever, a more complex Hierarchical-Seq2Seq model (Sordoni et al., 2015) or a better encoder representation based on knowledge graphs (Ammanabrolu & Riedl, 2019a; b) would likely improve the performance of this approach.\nLanguage Based Exploration In Go-Explore, the given admissible actions are used during random exploration.\nHowever, in more complex games, e.g. 
Zork I and in general the Z-Machine games, these admissible actions are not provided.\nIn such settings, the action space would explode in size, and thus Go-Explore, even with an appropriate cell representation, would have a hard time finding good trajectories.\nTo address this issue one could leverage general language models to produce a set of grammatically correct actions.\nAlternatively one could iteratively learn a policy to sample actions, while exploring with Go-Explore.\nBoth strategies are viable, and a comparison is left to future work.\nIt is worth noting that a hand-tailored solution for the CookingWorld games has been proposed in the \"First TextWorld Problems\" competition .\nThis solution managed to obtain up to 91.9% of the maximum possible score across the 514 test games on an unpublished dataset.\nHowever, this solution relies on entity extraction and template filling, which we believe limits its potential for generalization.\nTherefore, this approach should be viewed as complementary rather than competitor to our approach as it could potentially be used as an alternative way of getting promising trajectories.\nIn this paper we presented a novel methodology for solving text-based games which first extracts high-performing trajectories using phase 1 of Go-Explore and then trains a simple Seq2Seq model that maps observations to actions using the extracted trajectories.\nOur experiments show promising results in three settings, with improved generalization and sample efficiency compared to existing methods.\nFinally, we discussed the limitations and possible improvements of our methodology, which leads to new research challenges in text-based games.", "role": "user" }, { "content": "This work presents an exploration and imitation-learning-based agent capable of state-of-the-art performance in playing text-based computer games.", "role": "assistant" } ]
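The two-stage recipe this record summarizes — extract high-reward trajectories with phase 1 of Go-Explore, then imitate them — can be caricatured in a few lines. Everything below is an illustrative assumption rather than the authors' code: the env.reset/env.step interface, the admissible-action provider, and hashing the observation text as the cell representation.

import random

def go_explore_phase1(env, admissible, iters=1000, rollout=10):
    # Archive maps a cell (hash of the observation) to the best
    # (score, action sequence) known to reach it.
    obs = env.reset()
    archive = {hash(obs): (0.0, [])}
    best_score, best_traj = 0.0, []
    for _ in range(iters):
        score, actions = archive[random.choice(list(archive))]
        obs = env.reset()
        for a in actions:            # return to the cell by replaying
            obs, _, _ = env.step(a)  # assumes a deterministic game
        for _ in range(rollout):     # then explore randomly from it
            a = random.choice(admissible(obs))
            obs, reward, done = env.step(a)
            score, actions = score + reward, actions + [a]
            cell = hash(obs)
            if cell not in archive or archive[cell][0] < score:
                archive[cell] = (score, actions)
            if score > best_score:
                best_score, best_traj = score, list(actions)
            if done:
                break
    return best_traj  # supervision for the imitation (Seq2Seq) phase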
science.scientific_lay_summarisation_elife_single_doc_summ
science.scientific_lay_summarisation_elife_single_doc_summ.2061
[ { "content": "Summarize the article in an uncomplicated manner.\n\nTitle: Roundabout receptor 2 maintains inhibitory control of the adult midbrain\nArticle:\nAbstract:\nThe maintenance of excitatory and inhibitory balance in the brain is essential for its function. Here we find that the developmental axon guidance receptor Roundabout 2( Robo2) is critical for the maintenance of inhibitory synapses in the adult ventral tegmental area( VTA), a brain region important for the production of the neurotransmitter dopamine. Following selective genetic inactivation of Robo2 in the adult VTA of mice, reduced inhibitory control results in altered neural activity patterns, enhanced phasic dopamine release, behavioral hyperactivity, associative learning deficits, and a paradoxical inversion of psychostimulant responses. These behavioral phenotypes could be phenocopied by selective inactivation of synaptic transmission from local GABAergic neurons of the VTA, demonstrating an important function for Robo2 in regulating the excitatory and inhibitory balance of the adult brain.\nIntroduction:\nThe midbrain dopamine system, consisting of the VTA and substantia nigra pars compacta( SNc), is essential for motor function, motivation, reward, learning, and memory. Alterations in the activity patterns of dopamine neurons have been proposed as a key contributor to mental illness( Grace, 1991). In addition to dopamine, accumulating evidence points to an essential balance of excitatory and inhibitory neurotransmitter systems in the brain to allow for proper function. Shifts in this balance are increasingly linked to a variety of mental disorders including schizophrenia( Eichler and Meier, 2008) and autism( Nelson and Valakh, 2015). The VTA dopamine system is also broadly implicated in addiction where changes in inhibitory and excitatory strength are proposed to underlie drug-seeking behavior( Chen et al., 2010). The cellular mechanisms that maintain the excitatory and inhibitory synaptic control of the adult VTA are not well known, but resolving these processes has important implications for resolving the molecular regulation of circuit connectivity. To identify additional genes that may regulate synaptic connectivity in the midbrain dopamine system, we surveyed the Allen Institute mouse brain expression atlas( Lein et al., 2007) for genes with enriched or partially enriched expression in the VTA/SNc that had previously defined roles in axonal pathfinding, synaptogenesis, or plasticity, and have linkage to mental illness. One of the genes that we identified encodes for the axon guidance receptor Robo2. Robo2 was first discovered as one of the four mammalian homologs to the Drosophila Roundabout gene( Robo), which was named because of the unique axonal recrossing pattern generated at the CNS midline( Seeger et al., 1993; Kidd et al., 1998; Brose et al., 1999; Simpson et al., 2000). Further studies found that Robo family members perform additional axon guidance functions to regulate brain wiring broadly throughout the nervous system( Ypsilanti and Chedotal, 2014), including midbrain dopamine neurons( Dugan et al., 2011). Additional studies expanded the essential functions of Robo2 to include cell migration, synaptogenesis, synaptic plasticity, neuronal survival, and dendritic patterning during prenatal and early postnatal development( Shen and Cowan, 2010; Gibson et al., 2014; Koropouli and Kolodkin, 2014; Ypsilanti and Chedotal, 2014). 
Previous studies demonstrate that the neuronal axon guidance receptor deleted in colorectal cancer( DCC) is also highly expressed in the adult dopamine neurons, is dynamically regulated by amphetamine( Yetnikoff et al., 2007), and mice with haploinsufficiency for DCC demonstrate blunted responses to the psychomotor activating effects of amphetamine( Flores et al., 2005). Consistent with the continued function of DCC in the adult nervous system, it has recently been demonstrated that DCC regulates excitatory synaptic transmission in the adult hippocampus( Horn et al., 2013). These findings are consistent with a role for adult expression of neuronal pathfinding genes in the regulation of the adult dopamine system. During early development of the spinal cord, Robo and DCC signaling have opposing roles in commissural axon guidance, with DCC activation promoting axon growth toward the midline and Robo serving as a chemorepellant to prevent recrossing( Tessier-Lavigne and Goodman, 1996). Robo receptors are activated upon binding chemorepellants of the Slit family, Slit1, Slit2, and Slit3( Tessier-Lavigne and Goodman, 1996), which cause growth cone collapse through cytoskeletal remodeling via Rho-family GTPase signaling( Wong et al., 2001). All three genes for the Slit ligands and the receptors Robo1 and Robo2( Marillat et al., 2002) are expressed in the adult midbrain, including the VTA and SNc; however, the function of these ligands and receptors in this context and their cell-specific requirements has yet to be established. Of further interest, genes involved in Robo signaling have been linked to schizophrenia, autism, conduct disorder, intellectual disability, language impairment, and bipolar depression( Anitha et al., 2008; Potkin et al., 2009, 2010; Viding et al., 2010; Suda et al., 2011; Dadds et al., 2013; St Pourcain et al., 2014; Wang et al., 2014; Okbay et al., 2016). Based on the expression of Robo2 in the adult limbic system of the brain, its role in establishing the early wiring of the midbrain dopamine system( Dugan et al., 2011), and its opposing function related to netrin/DCC signaling, we hypothesized that this gene may play an important role in maintaining or modulating synaptic connectivity of the adult dopamine system. To test this hypothesis, we conditionally inactivated the Robo2 gene exclusively in the adult VTA of mice. We find that Robo2 regulates the inhibitory synaptic connectivity of the VTA, and that disruption of this connectivity has profound behavioral effects including altered psychomotor activity and impaired learning. Most intriguing was a paradoxical inversion of behavioral responses to the psychostimulants cocaine and amphetamine, whereby these drugs calmed hyperactivity in mice lacking Robo2 in the adult midbrain. These findings strongly implicate Robo2 signaling in maintaining balanced control of the VTA that is highly relevant for behavioral disorders linked to altered psychomotor control and cognition such as autism, ADHD, addiction, and schizophrenia.\nDiscussion:\nOur results provide the first demonstration that Robo2 has a critical function beyond early development of the nervous system by potently regulating GABAergic synapses. The observed behavioral phenotype of inverted psychostimulant responses is a rare genetic finding. Numerous other genetic models have been described that cause hyperactivity, but few have inverted psychostimulant responses( Leo and Gainetdinov, 2013). 
Given the link between hyperactivity and paradoxical calming by psychostimulants in ADHD, it is attractive to think that Robo2 regulation of inhibitory control of the midbrain may play an important role in this disorder. Although selective inactivation of Robo2 exclusively in the adult VTA is not a true genetic model of ADHD, our results strongly implicate balanced excitatory and inhibitory control of the midbrain as a likely source of ADHD-like phenotypes. Similar to inactivation of the dopamine transporter( Gainetdinov et al., 1999), we observed increased sensitivity to the serotonin transport blocker fluoxetine, suggesting the hyperactivity associated with loss of Robo2 is due to a potential enhancement of the serotonin system. We find that Robo2 is highly expressed in the VTA in both dopamine and non-dopamine producing neurons. Given the dopamine-related nature of the phenotypes we observed, we were surprised to find that inactivation of Robo2 exclusively in dopamine neurons was not sufficient to invoke these phenotypes, particularly given the partial enrichment of Robo2 mRNA in these cells. Instead our data suggests that Robo2 is required in multiple cell types for the observed behavioral changes. There are numerous cell types within the VTA, including dopamine neurons, GABAergic interneurons, GABAergic projection neurons, and glutamatergic neurons( Barker et al., 2016). Because the phenotype emerged only when dopamine and non-dopamine neurons were genetically targeted, it is possible that Robo2 is required globally within the VTA. Unfortunately, Cre driver lines that allow for the inducible expression of Cre recombinase only in GABA or glutamate neurons of the VTA do not currently exist that would allow us to test this in greater detail. However, given that many of the observed behavioral deficits are dopamine-dependent and the genetic manipulation alters synaptic physiology in both dopamine and non-dopamine neurons, it is highly likely that the observed deficits are the results of a global change in GABAergic connectivity in the VTA. Consistent with the function of GABA in this process, reducing synaptic release from midbrain GABA neurons recapitulated the adult VTA Robo2 mutant phenotype, though the effects were much more severe overall. These findings point to a key role of GABAergic transmission in the VTA for behavioral hyperactivity and learning deficits. Given the high degree of specificity for partial GABA agonists in targeting GABA receptors with specific subunit compositions( Hoestgaard-Jensen et al., 2014), drugs targeting specific GABAA receptors in the VTA may be a promising therapeutic avenue for multiple disorders. Additional studies are required to understand the molecular mechanisms whereby Robo2 signaling controls inhibitory synapses in the VTA. The reduced frequency of sIPSCs following Robo2 inactivation is consistent with a requirement for Robo2 in presynaptic GABA release. It was recently shown that Robo receptors interact with A-kinase anchoring proteins( AKAPs)( Samelson et al., 2015), placing Robo receptors in key scaffolding complexes. It has also been shown that AKAP 79/150 is expressed in dopamine and non-dopamine neurons in the VTA where it regulates GABAergic synaptic transmission. Intriguingly, disruption of AKAP function induces a long-term depression like effect on GABAergic synapse( Dacher et al., 2013). Thus, an intriguing possibility is that Robo2 receptors interact with AKAPs to stabilize inhibitory synaptic connectivity. 
Another, non-mutually exclusive possibility is that Robo2 interacts with Robo1 either in cis or in trans to stabilize synapses. Both Robo1 and Robo2 are expressed in the VTA, and it has been demonstrated in Drosophila that during commissural axon pathfinding, Robo2 interacts with Robo1 in trans to suppress the repulsive properties of Robo1( Evans et al., 2015). Future exploration into these potential mechanisms will provide important insight into this process. In addition to expression in the ventral midbrain, Robo2 is highly expressed in other brain regions, most notably the limbic system including the prefrontal cortex, striatum, and hippocampus. Whether adult Robo2 signaling functions in a similar fashion in these regions will be important to elucidate, particularly given the linkage between Slit/Robo signaling and mental illness. Our results also suggest that Robo2 receptors may play an important role in regulating synaptic changes associated with drugs of abuse. Interestingly, during cocaine sensitization, changes in spine density in the striatum results from inhibition of the small Rho-family GTPase, Rac1, which is a known effector molecule for Robo signaling, and is a potent regulator of spine turnover during development( Dietz et al., 2012). In addition, cocaine sensitization reduces inhibitory synaptic transmission in the VTA( Liu et al., 2005) that may be mediated in part by Robo2 signaling. Determining how Slit/Robo signaling regulates synaptic processes in other cell types will be an important next step towards further resolving its function in the adult nervous system.", "role": "user" }, { "content": "Although no two people are alike , we all share the same basic brain structure . This similarity arises because the same developmental program takes place in every human embryo . Specific genes are activated in a designated sequence to generate the structure of a typical human brain . But what happens to these genes when development is complete – do they remain active in the adult brain ? A gene known as Robo2 encodes a protein that helps neurons find their way through the developing brain . Many of these neurons will ultimately form part of the brain’s reward system . This is a network of brain regions that communicate with one another using a chemical called dopamine . The reward system contributes to motivation , learning and memory , and also underlies drug addiction . In certain mental illnesses such as Parkinson’s disease and schizophrenia , the dopamine-producing neurons in the reward system work incorrectly or die . To find out whether Robo2 is active in the mature nervous system , Gore et al . used genetic techniques to selectively remove the gene from the reward system of adult mice . Doing so reduced the ability of the dopamine neurons within the reward system to inhibit one another , which in turn increased their activity . This changed the behavior of the mice , making them hyperactive and less able to learn and remember . Cocaine makes normal mice more active; however , mice that lacked the Robo2 gene became less active when given cocaine . Overall , the work of Gore et al . suggests that developmental axon guidance genes remain important in the adult brain . Studying developmental genes such as Robo2 may therefore open up new treatment possibilities for a number of mental illnesses and brain disorders .", "role": "assistant" } ]
science.qasper_extractive_qa
science.qasper_extractive_qa.257.1
[ { "content": "Read the following paper excerpts and answer the question. Output a JSON object with \"answer\" and \"evidence\" fields. The answer must be extracted directly from the paper text.\n\nIf the question cannot be answered from the given text, respond with \"null\".\n\nPaper: Semantic Document Distance Measures and Unsupervised Document Revision Detection\n\nIn this paper, we model the document revision detection problem as a minimum cost branching problem that relies on computing document distances. Furthermore, we propose two new document distance measures, word vector-based Dynamic Time Warping (wDTW) and word vector-based Tree Edit Distance (wTED). Our revision detection system is designed for a large scale corpus and implemented in Apache Spark. We demonstrate that our system can more precisely detect revisions than state-of-the-art methods by utilizing the Wikipedia revision dumps https://snap.stanford.edu/data/wiki-meta.html and simulated data sets.\n\nDistance/similarity Measures\nIn this section, we first introduce the classic VSM model, the word2vec model, DTW and TED. We next demonstrate how to combine the above components to construct our semantic document distance measures: wDTW and wTED. We also discuss the implementation of our revision detection system.\n\nBackground\nVSM represents a set of documents as vectors of identifiers. The identifier of a word used in this work is the tf-idf weight. We represent documents as tf-idf vectors, and thus the similarity of two documents can be described by the cosine distance between their vectors. VSM has low algorithm complexity but cannot represent the semantics of words since it is based on the bag-of-words assumption.\nWord2vec produces semantic embeddings for words using a two-layer neural network. Specifically, word2vec relies on a skip-gram model that uses the current word to predict context words in a surrounding window to maximize the average log probability. Words with similar meanings tend to have similar embeddings.\nDTW was developed originally for speech recognition in time series analysis and has been widely used to measure the distance between two sequences of vectors.\nGiven two sequences of feature vectors: INLINEFORM0 and INLINEFORM1 , DTW finds the optimal alignment for INLINEFORM2 and INLINEFORM3 by first constructing an INLINEFORM4 matrix in which the INLINEFORM5 element is the alignment cost of INLINEFORM6 and INLINEFORM7 , and then retrieving the path from one corner to the diagonal one through the matrix that has the minimal cumulative distance. This algorithm is described by the following formula. DISPLAYFORM0 \nTED was initially defined to calculate the minimal cost of node edit operations for transforming one labeled tree into another. The node edit operations are defined as follows.\nDeletion Delete a node and connect its children to its parent maintaining the order.\nInsertion Insert a node between an existing node and a subsequence of consecutive children of this node.\nSubstitution Rename the label of a node.\nLet INLINEFORM0 and INLINEFORM1 be two labeled trees, and INLINEFORM2 be the INLINEFORM3 node in INLINEFORM4 . INLINEFORM5 corresponds to a mapping from INLINEFORM6 to INLINEFORM7 . 
TED finds mapping INLINEFORM8 with the minimal edit cost based on INLINEFORM9 \nwhere INLINEFORM0 means transferring INLINEFORM1 to INLINEFORM2 based on INLINEFORM3 , and INLINEFORM4 represents an empty node.\n\nWord Vector-based Dynamic Time Warping\nAs a document can be considered as a sequence of paragraphs, wDTW returns the distance between two documents by applying another DTW on top of paragraphs. The cost function is exactly the DistPara distance of two paragraphs given in Algorithm SECREF20 . Algorithm SECREF21 and Figure FIGREF22 describe our wDTW measure. wDTW observes semantic information from word vectors, which is fundamentally different from the word distance calculated from hierarchies among words in the algorithm proposed by BIBREF27 liu2007sentence. The shortcomings of their work are that it is difficult to learn semantic taxonomy of all words and that their DTW algorithm can only be applied to sentences not documents.\nwDTW\n[h] Represent documents INLINEFORM0 and INLINEFORM1 with vectors of paragraphs: INLINEFORM2 and INLINEFORM3 Input: INLINEFORM4 and INLINEFORM5 Initialize the first row and the first column of INLINEFORM6 matrix INLINEFORM7 INLINEFORM8 INLINEFORM9 INLINEFORM10 in range INLINEFORM11 INLINEFORM12 in range INLINEFORM13 INLINEFORM14 DistPara INLINEFORM15 calculate INLINEFORM16 Return: INLINEFORM17 \n\nWord Vector-based Tree Edit Distance\nTED is reasonable for measuring document distances as documents can be easily transformed to tree structures visualized in Figure FIGREF24 . The document tree concept was originally proposed by BIBREF0 si1997check. A document can be viewed at multiple abstraction levels that include the document title, its sections, subsections, etc. Thus for each document we can build a tree-like structure with title INLINEFORM0 sections INLINEFORM1 subsections INLINEFORM2 ... INLINEFORM3 paragraphs being paths from the root to leaves. Child nodes are ordered from left to right as they appear in the document.\nWe represent labels in a document tree as the vector sequences of titles, sections, subsections and paragraphs with word2vec embeddings. wTED converts documents to tree structures and then uses DistPara distances. More formally, the distance between two nodes is computed as follows.\nThe cost of substitution is the DistPara value of the two nodes.\nThe cost of insertion is the DistPara value of an empty sequence and the label of the inserted node. This essentially means that the cost is the sum of the L2-norms of the word vectors in that node.\nThe cost of deletion is the same as the cost of insertion.\nCompared to the algorithm proposed by BIBREF28 sidorov2015computing, wTED provides different edit cost functions and uses document tree structures instead of syntactic n-grams, and thus wTED yields more meaningful distance scores for long documents. 
Algorithm SECREF23 and Figure FIGREF28 describe how we calculate the edit cost between two document trees.\nwTED\n[1] Convert documents INLINEFORM0 and INLINEFORM1 to trees INLINEFORM2 and INLINEFORM3 Input: INLINEFORM4 and INLINEFORM5 \nInitialize tree edit distance INLINEFORM0 node label INLINEFORM1 node label INLINEFORM2 Update TED mapping cost INLINEFORM3 using INLINEFORM4 DistPara INLINEFORM5 INLINEFORM6 DistPara INLINEFORM7 INLINEFORM8 DistPara INLINEFORM9 \nReturn: INLINEFORM0 \n\nEstimating the Cut-off Threshold\nHyperparameter INLINEFORM0 is calibrated by calculating the absolute extreme based on an initial set of documents, i.e., all processed documents since the moment the system was put in use. Based on this set, we calculate all distance/similarity scores and create a histogram, see Figure FIGREF31 . The figure shows the correlation between the number of document pairs and the similarity scores in the training process of one simulated corpus using VSM. The optimal INLINEFORM1 in this example is around 0.6 where the number of document pairs noticeably drops.\nAs the system continues running, new documents become available and INLINEFORM0 can be periodically updated by using the same method.\n\nNumerical Experiments\nThis section reports the results of the experiments conducted on two data sets for evaluating the performances of wDTW and wTED against other baseline methods.\n\nData Sets\nIn this section, we introduce the two data sets we used for our revision detection experiments: Wikipedia revision dumps and a document revision data set generated by a computer simulation. The two data sets differ in that the Wikipedia revision dumps only contain linear revision chains, while the simulated data sets also contain tree-structured revision chains, which can be very common in real-world data.\nThe Wikipedia revision dumps that were previously introduced by Leskovec et al. leskovec2010governance contain eight GB (compressed size) revision edits with meta data.\nWe pre-processed the Wikipedia revision dumps using the JWPL Revision Machine BIBREF32 and produced a data set that contains 62,234 documents with 46,354 revisions. As we noticed that short documents just contributed to noise (graffiti) in the data, we eliminated documents that have fewer than three paragraphs and fewer than 300 words. We removed empty lines in the documents and trained word2vec embeddings on the entire corpus. We used the documents occurring in the first INLINEFORM0 of the revision period for INLINEFORM1 calibration, and the remaining documents for test.\nThe generation process of the simulated data sets is designed to mimic the real world. Users open some existing documents in a file system, make some changes (e.g. addition, deletion or replacement), and save them as separate documents. These documents become revisions of the original documents. We started from an initial corpus that did not have revisions, and kept adding new documents and revising existing documents. Similar to a file system, at any moment new documents could be added and/or some of the current documents could be revised. The revision operations we used were deletion, addition and replacement of words, sentences, paragraphs, section names and document titles. The addition of words, ..., section names, and new documents were pulled from the Wikipedia abstracts. This corpus generation process had five time periods INLINEFORM0 . Figure FIGREF42 illustrates this simulation. 
We set a Poisson distribution with rate INLINEFORM1 (the number of documents in the initial corpus) to control the number of new documents added in each time period, and a Poisson distribution with rate INLINEFORM2 to control the number of documents revised in each time period.\nWe generated six data sets using different random seeds, and each data set contained six corpora (Corpus 0 - 5). Table TABREF48 summarizes the first data set. In each data set, we name the initial corpus Corpus 0, and define INLINEFORM0 as the timestamp when we started this simulation process. We set INLINEFORM1 , INLINEFORM2 . Corpus INLINEFORM3 corresponds to documents generated before timestamp INLINEFORM4 . We extracted document revisions from Corpus INLINEFORM5 and compared the revisions generated in (Corpus INLINEFORM6 - Corpus INLINEFORM7 ) with the ground truths in Table TABREF48 . Hence, we ran four experiments on this data set in total. In every experiment, INLINEFORM8 is calibrated based on Corpus INLINEFORM9 . For instance, the training set of the first experiment was Corpus 1. We trained INLINEFORM10 from Corpus 1. We extracted all revisions in Corpus 2, and compared revisions generated in the test set (Corpus 2 - Corpus 1) with the ground truth: 258 revised documents. The word2vec model shared in the four experiments was trained on Corpus 5.\n\nResults\nWe use precision, recall and F-measure to evaluate the detected revisions. A true positive case is a correctly identified revision. A false positive case is an incorrectly identified revision. A false negative case is a missed revision record.\nWe illustrate the performances of wDTW, wTED, WMD, VSM, PV-DTW and PV-TED on the Wikipedia revision dumps in Figure FIGREF43 . wDTW and wTED have the highest F-measure scores compared to the rest of four measures, and wDTW also have the highest precision and recall scores. Figure FIGREF49 shows the average evaluation results on the simulated data sets. From left to right, the corpus size increases and the revision chains become longer, thus it becomes more challenging to detect document revisions. Overall, wDTW consistently performs the best. WMD is slightly better than wTED. In particular, when the corpus size increases, the performances of WMD, VSM, PV-DTW and PV-TED drop faster than wDTW and wTED. Because the revision operations were randomly selected in each corpus, it is possible that there are non-monotone points in the series.\nwDTW and wTED perform better than WMD especially when the corpus is large, because they use dynamic programming to find the global optimal alignment for documents. In contrast, WMD relies on a greedy algorithm that sums up the minimal cost for every word. wDTW and wTED perform better than PV-DTW and PV-TED, which indicates that our DistPara distance in Algorithm SECREF20 is more accurate than the Euclidian distance between paragraph vectors trained by PV.\nWe show in Table TABREF53 the average running time of the six distance/similarity measures. In all the experiments, VSM is the fastest, wTED is faster than wDTW, and WMD is the slowest. Running WMD is extremely expensive because WMD needs to solve an INLINEFORM0 sequential transshipment problem for every two documents where INLINEFORM1 is the average number of words in a document. 
In contrast, by splitting this heavy computation into several smaller problems (finding the distance between any two paragraphs), which can be run in parallel, wDTW and wTED scale much better.\nCombining Figure FIGREF43 , Figure FIGREF49 and Table TABREF53 we conclude that wDTW yields the most accurate results using marginally more time than VSM, PV-TED and PV-DTW, but much less running time than WMD. wTED returns satisfactory results using shorter time than wDTW.\n\nConclusion\nThis paper has explored how DTW and TED can be extended with word2vec to construct semantic document distance measures: wDTW and wTED. By representing paragraphs with concatenations of word vectors, wDTW and wTED are able to capture the semantics of the words and thus give more accurate distance scores. In order to detect revisions, we have used minimum branching on an appropriately developed network with document distance scores serving as arc weights. We have also assessed the efficiency of the method of retrieving an optimal revision subnetwork by finding the minimum branching.\nFurthermore, we have compared wDTW and wTED with several distance measures for revision detection tasks. Our results demonstrate the effectiveness and robustness of wDTW and wTED in the Wikipedia revision dumps and our simulated data sets. In order to reduce the computation time, we have computed document distances at the paragraph level and implemented a boosting learning system using Apache Spark. Although we have demonstrated the superiority of our semantic measures only in the revision detection experiments, wDTW and wTED can also be used as semantic distance measures in many clustering, classification tasks.\nOur revision detection system can be enhanced with richer features such as author information and writing styles, and exact changes in revision pairs. Another interesting aspect we would like to explore in the future is reducing the complexities of calculating the distance between two paragraphs.\n\nAcknowledgments\nThis work was supported in part by Intel Corporation, Semiconductor Research Corporation (SRC).\n\nFigure 1: Revision network visualization\n\nFigure 2: Setting τ\n\nFigure 3: Corpora simulation\n\nFigure 4: Precision, recall and F-measure on the Wikipedia revision dumps\n\nTable 1: A simulated data set\n\nFigure 5: Average precision, recall and F-measure on the simulated data sets\n\nTable 2: Running time of VSM, PV-TED, PV-DTW, wTED, wDTW and WMD\n\nFigure 1: wDTW visualization\n\nQuestion: How large is the Wikipedia revision dump dataset?", "role": "user" }, { "content": "{\"answer\": [\"eight GB\"], \"evidence\": [\"The Wikipedia revision dumps that were previously introduced by Leskovec et al. leskovec2010governance contain eight GB (compressed size) revision edits with meta data.\"]}", "role": "assistant" } ]
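The row above describes the wDTW recurrence (a DTW over paragraphs whose cell cost is the paper's DistPara distance) only through garbled INLINEFORM placeholders. As an editorial illustration, not part of the dataset row, here is a minimal sketch of that recurrence, assuming each document arrives as a list of paragraph vectors; `dist_para` is a simplified stand-in for DistPara (which in the paper is itself a DTW over word vectors), and all names are hypothetical rather than taken from the paper's code.

```python
import numpy as np

def dist_para(p, q):
    # Stand-in paragraph cost: Euclidean distance between paragraph
    # vectors. The paper's DistPara is itself a DTW over word vectors;
    # this simplification keeps the sketch short.
    return float(np.linalg.norm(p - q))

def wdtw(doc_a, doc_b):
    # doc_a, doc_b: lists of 1-D numpy arrays, one vector per paragraph.
    n, m = len(doc_a), len(doc_b)
    D = np.full((n + 1, m + 1), np.inf)  # DTW cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist_para(doc_a[i - 1], doc_b[j - 1])
            # Standard DTW recurrence: extend the cheapest of the three
            # neighboring partial alignments.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

The cut-off threshold described in the same row would then be read off a histogram of these pairwise distances, at the point where the count of document pairs drops sharply.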
science.chia_ner
science.chia_ner.1433
[ { "content": "Extract all unique entities of these types from the following clinical trial criteria: [\"Negation\", \"Observation\", \"ReferencePoint\", \"Procedure\", \"Drug\", \"Measurement\", \"Value\", \"Person\", \"Multiplier\", \"Device\", \"Qualifier\", \"Mood\", \"Temporal\", \"Condition\", \"Visit\", \"Scope\"].\n\nOutput a JSON object with entity types as keys and lists of extracted entities as values. If no entities are found for a type, use [].\n\nText:\n\nPatients who are 19 years or older on screening\nPatients with type 2 diabetes mellitus\nPatients with 7.0% = HbA1c = 11.0% at the screening visit\nPatients with Fasting Plasma Glucose <15mmol/L(270mg/dL) on screening", "role": "user" }, { "content": "{\"Condition\": [\"type 2 diabetes mellitus\"], \"Device\": [], \"Drug\": [], \"Measurement\": [\"Fasting Plasma Glucose\", \"HbA1c\"], \"Mood\": [], \"Multiplier\": [], \"Negation\": [], \"Observation\": [], \"Person\": [\"years\"], \"Procedure\": [], \"Qualifier\": [], \"ReferencePoint\": [\"screening\"], \"Scope\": [], \"Temporal\": [\"at the screening visit\", \"on screening\"], \"Value\": [\"0% = .0% a\", \"270mg/dL\", \"\\u003c15mmol/L\", \"19 or older\"], \"Visit\": []}", "role": "assistant" } ]
science.scientific_lay_summarisation_plos_single_doc_summ
science.scientific_lay_summarisation_plos_single_doc_summ.17244
[ { "content": "Summarize the article in an uncomplicated manner.\n\nTitle: Chirality provides a direct fitness advantage and facilitates intermixing in cellular aggregates\nArticle:\nAbstract:\nChirality in shape and motility can evolve rapidly in microbes and cancer cells. To determine how chirality affects cell fitness, we developed a model of chiral growth in compact aggregates such as microbial colonies and solid tumors. Our model recapitulates previous experimental findings and shows that mutant cells can invade by increasing their chirality or switching their handedness. The invasion results either in a takeover or stable coexistence between the mutant and the ancestor depending on their relative chirality. For large chiralities, the coexistence is accompanied by strong intermixing between the cells, while spatial segregation occurs otherwise. We show that the competition within the aggregate is mediated by bulges in regions where the cells with different chiralities meet. The two-way coupling between aggregate shape and natural selection is described by the chiral Kardar-Parisi-Zhang equation coupled to the Burgers’ equation with multiplicative noise. We solve for the key features of this theory to explain the origin of selection on chirality. Overall, our work suggests that chirality could be an important ecological trait that mediates competition, invasion, and spatial structure in cellular populations.\nIntroduction:\nLiving systems have harnessed a variety of physical principles to design and exploit spatial patterns[1–3]. Many biological patterns are chiral, i. e. they break left-right symmetry. While the mechanism of chiral symmetry breaking have been elucidated in some systems[4–10], the functional role of chirality remains largely unexplored[10–13]. Chirality exists at all scales: from molecules to populations[7, 13–18]. The origin of molecular chirality is typically attributed to chance[4, 5, 19, 20]. For nucleotides and amino acids, the classical explanation of homochirality posits two steps: a fluctuation that slightly breaks the left-right symmetry and a self-amplifying process that increases the asymmetry further[6, 19]. More recent work demonstrates that the amplification step may not be necessary because intrinsic noise in chemical reactions is sufficient to establish and stabilize the symmetry breaking[20]. The existence of many chiral components within the cell then serves as a natural explanation for macroscopic chirality[15, 21, 22]. Consistent with this view, chiral body plans arise early in the development due to a symmetry breaking event at a microscopic scale, which is amplified further during the subsequent growth[8, 9, 16, 21, 23, 24]. Similarly, the macroscopic chirality of bacterial colonies is typically explained by the chirality of individual bacteria[13, 25, 26]. The existing theory explains how, but not why chirality emerges. Indeed, a lot of effort went into elucidating the mechanism of the chiral symmetry breaking[7, 13, 14, 27], but the relationship between chirality and fitness has received much less attention[11, 12, 28–30]. Several lines of evidence, however, do suggest that a change in chirality could be advantageous[11, 12, 28–30]. Experiments with Arthrospira showed that this bacterium changes from a right-handed to a left-handed helix following the exposure to grazing by a ciliate[11, 28]. 
Extensive work with Paenibacillus demonstrated that this microbe switches between a chiral and a non-chiral forms to optimize its fitness in different environments[13, 30]. Human cells are also known to form chiral patterns. The handedness of these patterns is the same across all tissue types; except, it is reversed in cancer[29]. Thus, in a variety of systems, a change in chirality co-occurs with the evolution of higher growth, dispersal, or competitive ability. Motivated by these striking examples, we decided to explore whether chirality could be a product of natural selection rather than a historical accident. We asked this question in the context of growing cellular aggregates and found that chirality could directly affect fitness through a pattern-formation mechanism. Our results show that, depending on the growth conditions, it could be advantageous for a cell to increase its chirality or to switch its handedness relative to that of the ancestral population. These dynamics often lead to the coexistence of the left- and right-handed forms, which is a major departure from the classic theories of homochirality[6, 19, 20]. Coexisting cell types may enjoy additional benefits of chirality because they develop unique spatial structure that facilitates cross-feeding and other social interactions[31–34].\nDiscussion:\nSelection for a particular chirality may seem impossible because an object and its mirror image have identical physical properties. This apparent paradox is however easily resolved by noticing that natural selection always favors a change in chirality relative to that of the ancestral population rather than an absolute, pre-defined value of chirality. A vivid example of how evolution drives a change in chirality comes from Satsuma snails. Most species in the Satsuma genus are dextral( clockwise coiled), but they often have sister species that are almost identical except for the opposite direction of coiling[12]. These sinister( counterclockwise coiled) species enjoy a distinct selective advantage because they are essentially resistant to the predation by Pareatidae iwasakii, a snake that is common in the rage of Satsuma[12]. Resistance to predation comes from the left-right asymmetry in the jaw of P. iwasakii, which has adapted to the coiling direction of its most common prey. Similarly, a reversal of handedness provides protection to Arthrospira, cyanobacteria that forms helical trichomes, from the predation by a ciliate[11, 28]. In both examples, the mutants enjoy the advantage of being in the minority. This mechanism does not require the presence of a predator and can occur due to a large number of factors. For example, a mutant with chiral motility may spatially segregate from the rest of the population and thereby escape from an intense competition for resources[65]. Our main finding is that selection for chirality can also be mediated by the formation of non-trivial spatial patterns. Mismatch in the chiral bias makes cells move towards each other near in-flow boundaries and away from each other near out-flow boundaries. As a result, the colony edge becomes populated with bulges and dips, which grow over time and alter the relative abundance of the strains. One consequence of these dynamics is that it pays off to be different from the majority of the population: A mutant with the opposite handedness can invade when rare and stably coexist with the ancestor due to negative frequency-dependent selection. 
For strains with the same handedness, the more chiral strain typically wins the competition because it creates a one-sided bulge that overgrows the less chiral strain. Thus, we identified a distinct selection mechanism that can explain both the evolution toward stronger chirality and sudden reversal of handedness. The predicted effects of chirality are observable even in the presence of moderate growth rate differences and can be tested experimentally by comparing the competition between cells with different chirality to our predictions for colony shape and composition. Selection for chirality could also come from the indirect benefits of the emergent spatial pattern. One possibility is that pointed bulges might facilitate the invasion of host tissue or other environments. The other possibility is that strain intermixing could promote social interactions that rely on cell contact or the exchange of diffusible metabolites. We found that intermixing between the strains with opposite handedness is stable only when their chiralities exceed a certain threshold. Below this threshold, genetic drift creates macroscopic sectors that grow over time and spatially segregate the strains. As a result of this process, social interactions are either suppressed or completely abolished[31–34]. All of our results can be explained by a simple effective theory that describes population dynamics in terms of colony shape and composition. This description is simpler and much more intuitive than the full two-dimensional growth encoded by reaction-diffusion equations or other mechanistic models. Therefore, our theory could provide a valuable framework to study competition and cooperation in compact aggregates such as microbial colonies and cancer tumors. Shape undulations inevitably occur when aggregates contain strains that grow at different rates[35, 66]. So far, most theoretical studies have neglected this complexity and assumed that colonies have a flat front[31, 34, 41]. Front undulations, however, are known to profoundly change the nature of competition, for example, by allowing the regions with cooperating strains to overgrow the regions where cooperation has been lost[66, 67]. Our theory is an important first step towards understanding this interplay between evolution in compact aggregates and their shape. The effective theory also provides an interesting extension of the KPZ equation to systems that break the mirror symmetry. Such symmetry breaking could occur in a variety of systems within the KPZ universality class, for example, during the simultaneous deposition of two homophilic molecules with opposite handedness. In summary, we have identified a new mechanism of selection for chirality and developed a theory to explain it. Our findings describe the chirality of cells while most of the previous work focused on the emergence of homochirality in biological molecules[4–6, 19, 20]. Unlike the frozen homochirality of nucleotides and amino acids, the chirality of cells continues to evolve, often on the time scale of a few generations[11, 28–30]. Our work suggests that some changes in cellular chirality could be adaptive and, therefore, deserve further study.", "role": "user" }, { "content": "Is it better to be left- or right-handed? The answer depends on whether the goal is making a handshake or winning a boxing match. The same rules could apply to microbial colonies and cancer tumors.
Like humans, cells often have handedness (chirality) that reflects the lack of mirror symmetry in their shapes or movement patterns. We find that cells gain a substantial fitness advantage by either increasing the magnitude of their chirality or switching to the opposite handedness. Selection for specific chirality can overcome differences in growth rate and is mediated by the formation of bulges along the colony edge in regions where cells with different chiralities meet.", "role": "assistant" } ]
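The abstract in the row above refers to "the chiral Kardar-Parisi-Zhang equation coupled to the Burgers' equation with multiplicative noise". For reference only, the standard (non-chiral) KPZ equation for a front height h(x, t) is shown below; the paper's chiral coupling terms are not specified in this excerpt and are therefore not reproduced.

```latex
% Standard KPZ equation, given for reference; the paper's chiral
% variant couples it to a Burgers'-type equation for the strain
% composition, whose exact form is not given in this excerpt.
\[
  \partial_t h \;=\; \nu\,\partial_x^2 h \;+\; \frac{\lambda}{2}\left(\partial_x h\right)^2 \;+\; \eta(x,t)
\]
```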
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.67100
[ { "content": "Write a literature review chapter based on these paper abstracts.\nLiterature Review Title: A Survey of Multi-Objective Sequential Decision-Making \nChapter Title: Multiple Stochastic Policies with Monotonically Increasing Scalarization Functions \n1. Abstract of Cited Paper (BIB001): In this article, I will consider Markov Decision Processes with two criteria, each defined as the expected value of an infinite horizon cumulative return. The second criterion is either itself subject to an inequality constraint, or there is maximum allowable probability that the single returns violate the constraint. I describe and discuss three new reinforcement learning approaches for solving such control problems. \n2. Abstract of Cited Paper (BIB002): Multiobjective reinforcement learning algorithms extend reinforcement learning techniques to problems with multiple conflicting objectives. This paper discusses the advantages gained from applying stochastic policies to multiobjective tasks and examines a particular form of stochastic policy known as a mixture policy. Two methods are proposed for deriving mixture policies for episodic multiobjective tasks from deterministic base policies found via scalarised reinforcement learning. It is shown that these approaches are an efficient means of identifying solutions which offer a superior match to the user's preferences than can be achieved by methods based strictly on deterministic policies.\n\nLiterature Review:", "role": "user" }, { "content": "In the multiple policy setting where stochastic non-stationary policies, i.e., the full set Π m , are allowed, we again cannot consider only deterministic stationary policies. However, we can employ stochastic stationary policies instead of deterministic non-stationary ones. In particular, we can employ a mixture policy BIB003 π m that takes a set of N deterministic policies, and selects the i-th policy from this set, π i with probability p i , where N i=0 p i = 1. This leads to values that are a linear combination of the values of the constituent policies. In our previous example, we can replace π ns by a policy π m that chooses π 1 with probability p 1 and π 2 otherwise, resulting in the following values: Fortunately, it is not necessary to explicitly represent an entire P CS(Π m ) explicitly. Instead, it is sufficient to compute a CCS(Π m DS ). The necessary stochastic policies to create a P CS(Π m ) can then be easily constructed by making mixture policies from those policies on the CCS(Π m DS ). Corollary 2. In an infinite horizon discounted MOMDP, an infinite set of mixture policies P M can be constructed from policies that are on a CCS(Π m DS ), such that this set P M , is a P CS(Π m ) BIB003 ). Proof. We can construct a policy with any value vector on the convex surface, e.g., the black lines in Figure 2a , by mixing policies on a CCS(Π m DS ), e.g., the black dots. 10 Therefore, we can always construct a mixture policy that dominates a policy with a value under this surface, e.g., B. We can show by contradiction that there cannot be any policy above 9. shows this in an infinite-horizon discounted setting, but the arguments hold also for the finite-horizon and average-reward settings. 10. Note that we should always mix policies that are \"adjacent\"; the line between any pair of the policies we mix should be on the convex surface. E.g. 
mixing the policy represented by the leftmost black dot in Figure 2a and the policy represented by the rightmost black dot does not lead to optimal policies, as the line connecting these two points is under the convex surface. the convex surface. If there was, it would be optimal for some w if f was linear. Consequently, by Corollary 1, there would be a deterministic stationary policy with at least equal value. But since the convex surface spans the values on the CCS(Π m DS ), this leads to a contradiction. Thus, no policy can Pareto-dominate a mixture policy on the convex surface. Thanks to Corollary 2, it is sufficient to compute a CCS(Π m DS ) to solve an MOMDP, as reflected in box (6) of Table 1 . A surprising consequence of this fact, which to our knowledge is not made explicit in the literature, is that Pareto optimality, though the most common solution concept associated with multi-objective problems, is actually only necessary in one specific problem setting: Observation 1. The multiple policy setting when f is monotonically increasing and only deterministic policies are considered (box (5) of Table 1 ), requires computing a Pareto coverage set. When either f is linear or stochastic policies are allowed, a CCS(Π m DS ) suffices. Wakuta (1999) proves the sufficiency of a CCS(Π m DS ) for monotonically increasing scalarizations with multiple stochastic policies (box (6) of Table 1 ) in infinite horizon MOMDPs, but in a different way. Instead of the mixture policies in Corollary 2, he uses stationary randomizations over deterministic stationary policies. provide a similar proof for the average reward case. Note that, while it is common to consider non-stationary or stochastic policies when f is nonlinear, such policies typically condition only on the current state, or the current state and time, not the agent's reward history. However, in this setting, policies that condition on that reward history can dominate those that do not. For example, suppose there are two objectives which can take only positive values and f simply selects the smaller of the two, i.e., f (V π , w) = min i V π i . Suppose also that, in a given state, two actions are available, which yields rewards of (4, 4) and (0, 5) respectively. Finally, suppose that the agent can arrive at that state with one of two reward histories, whose discounted sums are either (5, 0) or (3, 3). A policy that conditions on these discounted reward histories can outperform policies that do not, i.e., the optimal policy selects the action yielding (4, 4) when the reward history sums to (3, 3) and the action yielding (0, 5) when the reward history sums to (5, 0). So, while for single objective MDPs the Markov property and additive returns are sufficient to restrict our attention to policies that ignore history, in the multi-objective case, the scalarized returns are no longer additive and therefore the optimal policy can depend on the history. Examples of methods that exploit this fact are the 'steering' approach BIB001 ) and the reward-augmented-state thresholded lexicographic ordering method by BIB002 , which are discussed in Section 6.1.", "role": "assistant" } ]
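The mixture-policy value described in the row above (select policy i with probability p_i, giving a value vector that is the same convex combination of the constituent values) can be made concrete with a two-policy example. As an editorial illustration, with made-up numbers:

```python
import numpy as np

# Value vectors of two deterministic base policies on the CCS.
V = np.array([[4.0, 0.0],   # V(pi_1)
              [0.0, 4.0]])  # V(pi_2)
p = np.array([0.5, 0.5])    # mixing probabilities, must sum to 1

v_mix = p @ V  # expected value vector of the mixture policy
print(v_mix)   # [2. 2.] -- lies on the segment between V(pi_1) and V(pi_2)
```

This also shows why, as the row argues, only "adjacent" policies on the convex surface should be mixed: the mixture's value always lies on the straight segment between the constituent value vectors.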
science.acl_arc_intent_classification
science.acl_arc_intent_classification.1408
[ { "content": "Classify the intent of this citation from an NLP research paper. \n\nYour answer must be one word from: [\"Background\", \"Extends\", \"Uses\", \"Motivation\", \"CompareOrContrast\", \"FutureWork\"]. No other text allowed.\n\nSection Title:\nNone\n\nContext before:\nThey also proposed two major categories of meta-learning approaches for recommender systems, merging and ensemble, each subdivided into the more specific subclasses suggested by Burke (2002) as follows. Lekakos and Giaglis (2007) implemented a supervised version of this approach for a recommender system, as opposed to our unsupervised version. This kind of meta-learning is referred to as stacking by the Data Mining community (Witten and Frank 2000).\n\nCitation:\nThey also proposed two major categories of meta-learning approaches for recommender systems , merging and ensemble , each subdivided into the more specific subclasses suggested by Burke ( 2002 ) as follows .\n\nContext after:\nThe merging category corresponds to techniques where the individual methods affect each other in different ways (this category encompasses Burke’s feature combination, cascade, feature augmentation, and meta-level sub-categories). The ensemble category corresponds to techniques where the predictions of the individual methods are combined to produce a final prediction (this category encompasses Burke’s weighted, switching, and mixed sub-categories). Our system falls into the ensemble category, because it combines the results of the various methods into a single outcome. More specifically, it belongs to Burke’s switching sub-category, where a single method is selected on a case-by-case basis.", "role": "user" }, { "content": "Background", "role": "assistant" } ]
science.scientific_lay_summarisation_plos_single_doc_summ
science.scientific_lay_summarisation_plos_single_doc_summ.12432
[ { "content": "Summarize the article in an uncomplicated manner.\n\nTitle: Discovery of an Ebolavirus-Like Filovirus in Europe\nArticle:\nAbstract:\nFiloviruses, amongst the most lethal of primate pathogens, have only been reported as natural infections in sub-Saharan Africa and the Philippines. Infections of bats with the ebolaviruses and marburgviruses do not appear to be associated with disease. Here we report identification in dead insectivorous bats of a genetically distinct filovirus, provisionally named Lloviu virus, after the site of detection, Cueva del Lloviu, in Spain.\nIntroduction:\nFiloviruses cause lethal hemorrhagic fever in humans and nonhuman primates. The family Filoviridae includes two genera: Marburgvirus, comprising various strains of the Lake Victoria marburgvirus( MARV); and Ebolavirus( EBOVs), comprising four species including Sudan ebolavirus( SEBOV), Zaire ebolavirus( ZEBOV), Ivory Coast ebolavirus( CIEBOV), and Reston ebolavirus( REBOV); and a tentative species Bundibugyo ebolavirus( BEBOV)[1]. MARV was discovered in 1967 in Marburg, Germany during an outbreak in laboratory staff exposed to tissues from monkeys imported from Uganda. ZEBOV was discovered in 1976 in Yambuku, Zaire during a 312-person outbreak associated with 90% mortality. With the exception of REBOV, that appears to be pathogenic in nonhuman primates but not in humans and is endemic in the Philippines, all known filoviruses are pathogenic in primates including humans and are endemic in Africa[2]. Bats are implicated as reservoirs and vectors for transmission of filoviruses in Africa[3]. ZEBOV sequences have been found in fruit bats( Hypsignathus monstrosus, Epomops franqueti and Myonycteris torquata)[4],[5]. MARV sequences have been found in fruit( Rousettus aegyptiacus) and insectivorous( Rhinolophus eloquens and Miniopterus inflatus) bats[6],[7]. Bats naturally or experimentally infected with ZEBOV or MARV are healthy and shed virus in their feces for up to 3 weeks[4],[5],[7]. In 2002, colonies of Schreiber's bats( Miniopterus schreibersii), sustained massive die-offs in caves in France, Spain and Portugal[8]. M. schreibersii, family Vespertilionidae, comprises at least four geographically discrete lineages distributed in Oceania, southern Europe, southern Africa, and southeast Asia[9]. Here we report the discovery of a novel ebolavirus-like filovirus in bats from Europe.\nDiscussion:\nAlthough the dynamics of epidemic filoviral diseases among humans, great apes, and other primates have been described in detail[22],[23],[24], the natural reservoirs, modes of transmission to hominids and pongids( gorillas, and chimpanzees), and temporal dynamics remain obscure. Life forms of diverse taxa have been suggested as potential reservoirs, including bats, rodents, arthropods, and plants[5],[25]. Several lines of evidence support a role for bats including virus replication at high levels in experimentally inoculated insectivorous bats[5]; asymptomatic infection of fruit and insectivorous bats with EBOV in central Africa[4]; asymptomatic infection of fruit and insectivorous bats with MARV[6],[26]; and a history consistent with human exposure to a fruit bat reservoir during a ZEBOV outbreak in the Democratic Republic of Congo( DRC) in 2007[27]. The discovery of LLOV in M. schreibersii is consistent with filovirus tropism for bats. 
However, unlike MARV and EBOV, where asymptomatic circulation is posited to be consistent with evolution to avirulence in these long-term host-parasite relationships, several observations suggest that in the case of LLOV, filovirus infection may be pathogenic. LLOV was found in the affected bat population (M. schreibersii) but not in other healthy M. schreibersii or in bats of other species that cohabited the same caves (M. myotis). Furthermore, lung and spleen, tissues with evidence of immune cell infiltrates consistent with viral infection, contained LLOV RNA sequences (Table 1). The sudden outbreak of bat die-offs in Spain that precipitated this study destroyed several bat colonies in less than 10 days[8]. As recently highlighted by the example of white nose syndrome, a lethal fungal skin infection that is associated with recent declines in North American bat populations[28], bats play critical ecological roles in insect control, plant pollination, and seed dissemination. Although we have not demonstrated a causal relationship between LLOV and mortality in M. schreibersii, the discovery of a novel filovirus in western Europe, and the wide geographical distribution of the associated insectivorous bat, is a significant concern. While the virus was detected in the north of Spain, simultaneous bat die-offs have also been observed in Portugal and France[8]. Filoviruses had been posited to show a geographically related phylogeographic structure[29]. Viruses and subtypes from a particular geographic area cluster together phylogenetically, suggesting a stable host-parasite relationship wherein viruses are maintained in permanent local-regional pools. Whereas EBOV is associated with humid afrotropics, MARV is focused in drier areas in eastern and south-central Africa[29]. In that analysis, CIEBOV and ZEBOV coincided ecologically, while MARV, a more distantly related filovirus, did not. M. schreibersii distribution does not overlap with the predicted or observed areas of ZEBOV or MARV activity. Thus, LLOV appears not to share the known ecological or geographical niches of other recognized filoviruses. Recently, the discovery of integrated filovirus elements has led to the proposal that filoviruses have co-evolved with mammals over millions of years[30],[31]. Phylogenetic analyses of LLOV indicate a common ancestor of all filoviruses at least 150,000 years ago. The discovery of a novel filovirus in a distinct geographical niche suggests that the diversity and distribution of filoviruses should be studied further.", "role": "user" }, { "content": "A novel filovirus, provisionally named Lloviu virus (LLOV), was detected during the investigation of Miniopterus schreibersii die-offs in Cueva del Lloviu in southern Europe. LLOV is genetically distinct from other marburgviruses and ebolaviruses and is the first filovirus detected in Europe that was not imported from an endemic area in Africa. Here we report the identification of a genetically distinct filovirus in dead insectivorous bats in caves in Spain.", "role": "assistant" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.10141
[ { "content": "Write a literature review chapter based on these paper abstracts.\nLiterature Review Title: Temporal and Real-Time Databases: A Survey \nChapter Title: B. Transaction Processing \n1. Abstract of Cited Paper (BIB001): Abstract : This paper presents the design of the storage system for the POSTGRES data base system under construction at Berkeley. It is novel in several ways. First, the storage manager supports transaction management but does so without using a conventional write ahead log (WAL). In fact, there is no code to run at recovery time, and consequently recovery from crashes is essentially instantaneous. Second, the storage manager allows a user to optionally keep the entire past history of data base objects by closely integrating an archival storage system to which historical records are spooled. Lastly, the storage manager is consciously constructed as a collection of asynchronous processes. Hence, a large monolithic body of code is avoided and opportunities for parallelism can be exploited. The paper concludes with a analysis of the storage system which suggests that it is performance competitive with WAL systems in many situations. \n2. Abstract of Cited Paper (BIB002): During the past few years, several studies have been made on the performance of real-time database systems with respect to the number of transactions that miss their deadlines. These studies have used either simulation models or database testbeds as their performance evaluation tools. We present a preliminary analytical performance study of real-time transaction processing. Using a series of approximations, we derive simple closed-form solutions to reduced real-time database models. Although quantitatively approximate, the solutions accurately capture system sensitivity to workload parameters and indicate conditions under which performance bounds are achieved. > \n3. Abstract of Cited Paper (BIB003): In a real-time database system, an application may assign a value to a transaction to reflect the return it expects to receive if the transaction commits before its deadline. Most research on real-time database systems has focused on systems where all transactions are assigned the same value, the performance goal being to minimize the number of missed deadlines. When transactions are assigned different values, the goal of the system shifts to maximizing the sum of the values of those transactions that commit by their deadlines. Minimizing the number of missed deadlines becomes a secondary concern. In this article, we address the problem of establishing a priority ordering among transactions characterized by both values and deadlines that results in maximizing the realized value. Of particular interest is the tradeoff established between these values and deadlines in constructing the priority ordering. Using a detailed simulation model, we evaluate the performance of several priority mappings that make this tradeoff in different, but fixed, ways. In addition, a \"bucket\" priority mechanism that allows the relative importance of values and deadlines to be controlled is introduced and studied. The notion of associating a penalty with transactions whose deadlines are not met is also briefly considered.\n\nLiterature Review:", "role": "user" }, { "content": "Several researchers have investigated adapting existing concurrency control and transaction management techniques to support transaction time. 
The subtle issues involved in choosing whether to timestamp at the beginning of a transaction (which restricts the concurrency control method that can be used) or at the end of the transaction (which may require data written earlier by the transaction to be read again to record the transaction) have been resolved in favor of the latter through some implementation tricks that effectively eliminate the need for additional reads BIB001 , BIB002 . (We revisit this issue in the context of real-time databases in Section VI.B.2.) Timestamping in a distributed setting has also been considered, as has integrating temporal indexes with concurrency control to increase the available concurrency [138]. Finally, since a transaction-time database contains all past versions of the database, it can be used to recover from media failures that cause a portion or all of the current version to be lost [138]. In the remainder of this section, we discuss the issues related to processing real-time transactions. Evaluations of the techniques discussed were made by means of either a testbed system (e.g., ) or simulation (e.g., [3] , BIB004 , [215] ). One exception is the approximate analysis of real-time databases BIB003 , which used an analytical approach to approximate the steady-state fraction of real-time transactions that complete successfully.", "role": "assistant" } ]
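One standard realization of the commit-time timestamping trick mentioned in the row above is to stamp each written version with a transaction id and resolve that id to a commit time through a separate table filled in at commit, so versions written earlier never need to be revisited. The sketch below is an editorial simplification of this idea, not the scheme of any particular cited system; all names are illustrative.

```python
commit_time = {}  # tid -> commit timestamp, written once at commit
versions = []     # (key, value, tid) tuples appended during transactions

def write(key, value, tid):
    # No timestamp is known yet; record only the transaction id.
    versions.append((key, value, tid))

def commit(tid, now):
    # A single table write implicitly timestamps every version this
    # transaction wrote -- no re-read of the versions themselves.
    commit_time[tid] = now

def timestamp_of(version):
    _key, _value, tid = version
    return commit_time.get(tid)  # resolved lazily when the version is read
```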
science.scicite_classification
science.scicite_classification.3085
[ { "content": "Classify the intent of this citation sentence by choosing from: [\"Method\", \"Background\", \"Result\"].\n\nMethod: Cites for methodology\nBackground: Cites for background information\nResult: Cites for findings\n\nCitation: Cell-tracing experiments in chicken (Jaffredo et al., 2000; Jaffredo et al., 1998) and mouse (Chen et al., 2009; de Bruijn et al., 2002; Sugiyama et al., 2003; Zovein et al., 2008) species indicate that IAHCs derive from hemogenic ECs through an Endothelial-to-Hematopoietic Transition (EHT).", "role": "user" }, { "content": "Background", "role": "assistant" } ]
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.1652
[ { "content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nSoftware engineers make use of design patterns for reasons that range from performance to code comprehensibility. Several design patterns capturing the body of knowledge of best practices have been proposed in the past, namely creational, structural and behavioral patterns. However, with the advent of mobile devices, it becomes a necessity a catalog of design patterns for energy efficiency. In this work, we inspect commits, issues and pull requests of 1027 Android and 756 iOS apps to identify common practices when improving energy efficiency. This analysis yielded a catalog, available online, with 22 design patterns related to improving the energy efficiency of mobile apps. We argue that this catalog might be of relevance to other domains such as Cyber-Physical Systems and Internet of Things. As a side contribution, an analysis of the differences between Android and iOS devices shows that the Android community is more energy-aware.\n\nCited Abstract(s):\n[1]: As Android platform becomes more and more popular, a large amount of Android applications have been developed. When developers design and implement Android applications, power consumption management is an important factor to consider since it affects the usability of the applications. Thus, it is important to help developers adopt proper strategies to manage power consumption. Interestingly, today, there is a large number of Android application repositories made publicly available in sites such as GitHub. These repositories can be mined to help crystalize common power management activities that developers do. These in turn can be used to help other developers to perform similar tasks to improve their own Android applications.In this paper, we present an empirical study of power management commits in Android applications. Our study extends that of who perform an empirical studyon energy aware commits; however they do not focus on Android applications and only a few of the commits that they study come from Android applications. Android applications are often different from other applications (e.g., those running on a server) due to the issue of limited battery life and the use of specialized APIs. As subjects of our empirical study, we obtain a list of open source Android applications from F-Droid and crawl their commits from Github. We get 468 power management commits after we filter the commits using a set of keywords and by performing manual analysis. These 468 power management commits are from 154 different Android applications and belong to 15 different application categories. Furthermore, we use open card sort to categorize these power management commits and we obtain 6 groups which correspond to different power management activities. Our study also reveals that for different kinds of Android application (e.g., Games, Connectivity, Navigation, etc.), the dominant power management activities differ.For example, the percentageof power management commits belonging to Power Adaptation activity is larger for Navigation applications than those belonging to other categories.", "role": "user" }, { "content": "With a similar approach, [1] have mined 468 power management commits to find coding practices in Android apps . Using a hybrid card sort approach, six different power management practices were identified: , , , , . The study shows that power management activities are more prevalent in navigation apps. 
Conversely, our work focuses on energy-saving commits, pull requests, and issues. Using the same taxonomy, our work concentrates exclusively on coding practices for , and . Moreover, rather than analyzing the prevalence of power management activities amongst different app categories, we emphasize on providing actionable findings for mobile app practitioners. Finally, we extend this work to the iOS mobile platform, which shares a big part of the mobile app market.", "role": "assistant" } ]
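Both rows above describe pre-filtering repository commits with energy-related keywords before manual analysis. A minimal sketch of that step, offered as an editorial illustration; the keyword list and commit record shape are assumptions, not taken from either paper.

```python
ENERGY_KEYWORDS = ("energy", "battery", "power", "drain", "wakelock")

def energy_related(commits):
    # commits: iterable of dicts with at least a "message" field.
    for commit in commits:
        message = commit["message"].lower()
        if any(keyword in message for keyword in ENERGY_KEYWORDS):
            yield commit
```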
science.chemsum_single_document_summarization
science.chemsum_single_document_summarization.120
[ { "content": "Write an abstract for this chemistry paper.\n\nTitle:\nPeptide array–based interactomics\n\nPaper sections:\nIntroduction\n<p>Cellular signaling is in large parts based on a complex network of protein interactions with other proteins or other biological molecules. Studying protein-protein interaction (PPI) networks is pivotal for the understanding of cellular signaling [1–3]. Protein-protein interactions can be studied in different ways, including the genetic modification of the protein sequence, measurements by two-hybrid interactions, or chromatographic comigration [4–6]. The direct detection of interaction partners after isolating the protein of interest is the most common one [7]. The mass spectrometric measurement of interacting proteins has increased in popularity due to the increased sensitivity and possibilities of modern mass spectrometry–based proteomics [6, 8–10]. While many studies use whole proteins for interaction studies, the use of peptides in such studies has increased over time [11–16]. In this article, we will focus on the newly developing field of peptide array–based interaction studies.</p>\n\n\n<p>a Protein immunoprecipitation. The protein of interest is captured by an immobilized antibody for the pull-down. b Peptide pull-down. A peptide and a PTM-modified version of the same peptide are immobilized on a bead. The PTM prevents in this case the binding of the protein. c Highly parallelized peptide pull-down. The peptides are carrying different PTMs or mutations (indicated by the colored pins), which are enabling (orange and red) or preventing (black) interaction. d Peptide array designed for a PrISMa screen. A SLiM-containing area in a protein of interest (dark blue) is covered by different tiling peptides. Each of the peptides covers a different part and only the second peptide contains the entire SLiM (dark blue shade). e Inclusion of PTMs in the peptide array designed for PrISMa analysis. For each of the tiling peptides, a peptide with and one without the PTM is included. f + g Identification of false-positive binders in a PrISMa setup. Neighboring peptides cover different parts of the SLiM allowing only partial binding. A protein peptide not showing this binding behavior is excluded as it has a high probability to be nonspecific binding</p>\n\nImmunoprecipitations and peptide pull-downs\n<p>In parallel, advances in the chemical synthesis of peptides allowed the production of larger quantities and longer peptides. This enabled the use of peptides in pull-down assays as an alternative to the genetic generation of tagged protein fragments [19]. The chemical synthesis of peptides carrying amino acid exchanges provided a fast alternative to cloning techniques [20–22]. Several studies used arrays of synthetic peptides containing alanine exchanges for the corresponding amino acids as a binding matrix. Incubation of the matrix and detection of the protein interactors allowed the identification of essential amino acids for the interaction [23–26]. An alternative technique for the screening for the best interacting peptides is the phage display technique, where peptides are presented on a cell using phage expressing protein fragments [21, 27–29].</p>\n\nPeptide pull-down meets mass spectrometry\n<p>The full potential of a peptide pull-down is only utilized in combination with a powerful detection methodology, which allows the identification of new interaction partners. 
With the rise of mass spectrometry–based proteomics, interacting proteins can be identified in a peptide pull-down (reviewed in [13]). Differently from protein-based interaction studies, peptide-protein interactions are usually of low affinity, thus preferring high-affinity interactions with more abundant proteins [30]. This preference introduces a bias against the detection of low abundant interaction partners or transient interactions and thus limits the use of peptide pull-downs. At the same time, a peptide bait permits narrowing the interaction site down to a fraction of a protein, allowing the precise mapping of the interaction site without the need to generate protein truncations [31, 32].</p><p>A significant advantage of peptides over protein pull-downs is the possibility of including post-translational modifications (PTMs) in the peptide. As peptides are chemically synthesized, any PTM can be included as long as it is compatible with the synthesis technique. The binding surface is thus fully modified, which is usually not the case with complete proteins [33–36].</p><p>Fast regulation of biological networks relies on the rapid addition and removal of PTMs during signaling, leading in many cases to the formation or loss of protein interactions. Capturing these transient interactions is challenging [36, 37]. Dissecting the recruitment of proteins using PTM-containing peptides allows identifying different complexes involved in the interaction (Fig. 1b). The interaction of SH2 domains with the phosphorylated C-terminal tail of the epidermal growth receptor is such an example. Synthetic phosphotyrosine-containing peptides were used to show the specificity of Grp2's SH2 domains [38, 39]. To discriminate the different interactomes of the ErbB receptors for the phosphorylated versus the unphosphorylated state, phosphorylated peptides and their unmodified counterparts derived from the C-terminal region of ErbB receptors were used in a pull-down study, revealing the specific recruitment of Stat5 to the double phosphorylated C-terminus [40]. A systematic study of all 99 human SH2 domains probed their specificity in a system biological study. The binding patterns were confirmed by peptide pull-down experiments showing the regulation in a time-resolved manner [41]. Besides phosphorylations, other PTMs have been shown to change interactions. An example is the methylation at position three of the transcription factor C/EBPß, which leads to the loss of interaction with the SWI/SNF complex [42–44].</p>\n\nLinear motifs as interaction mediators\n<p>Many proteins adapt a defined fold, which is determined by their amino acid sequence. These folded domains are contrasted by regions lacking a specific fold, which are called intrinsically disordered regions (IDRs) [45]. Despite their lack of structure, IDRs are important docking sites for many proteins [46, 47]. They are often decorated with many PTMs indicating their importance for regulated interactions [45, 48–50]. IDRs can harbor many interaction motifs, which fall into three groups: short linear motifs (SLiMs), molecular recognition features (MoRFs), and intrinsically disordered domains (IDDs). The groups differ in length and how they support interactions. SLiMs are usually between three and 10 amino acids in length while MoRFs are slightly longer with 10 to 70 amino acids. MoRFs undergo a transition from the unordered to an ordered state while the ligand-binding takes place [51]. 
IDDs fold upon interaction with the binding partner [52].</p>\n\n\n<p>Interaction of a SLiM with a specific binding site of an interacting protein. The SLiM consists of a specific amino acid pattern, which, in this case, is defined for a set of amino acids and interspaced with amino acids (X) with no contribution to the binding (here: LXXLLXXXLXXF)</p>\n\nLinear motifs as interaction mediators\n<p>Synthesized peptides covering short parts of a protein can contain complete SLiMs, making them the perfect carrier for SLiM-based interaction studies. This has been used in studies, where a known sequence with and without the PTM was used to identify the interaction partners, as demonstrated in a study using two different peptides derived from EGFR and Sos1 and 2. The peptides were synthesized in phosphorylated and unmodified form and used in a pull-down experiment utilizing SILAC-labeled (stable isotope labeling in cell culture) cell extracts. The interaction partners were identified using mass spectrometry [63].</p>\n\nPeptide array–based interaction screens\n<p>The advancements of peptide synthesis by SPOT synthesis allow the creation of arrays holding many different peptides on a single membrane surface [23, 64, 65]. Cellulose membranes are an attractive alternative support to bead-based technologies, which can directly be used in biological assays, including immunological assays or parallelized peptide pull-downs [66]. The synthesis allows the inclusion of different PTMs in the membrane, permitting the systematic comparison of the interactome of a peptide sequence and its modified counterpart.</p><p>While peptide matrices have been used to find the optimal binding sequence [67–69], the true power of the approach emerges when it is combined with a proteomics readout. Meyer and coworkers use this principle to analyze mutations that cause neurological diseases [12]. One hundred twenty known disease-causing mutations were selected in extensive bioinformatics analysis. Peptides for the wild-type and mutated sequence were used to construct a peptide array and probed for differential binding with a proteomics readout (Fig. 1c). This created a PPI network of gained and lost interactions. A subnetwork of five interactors was related to clathrin-mediated transport. Three of the interaction nodes created a dileucine motif which is necessary for clathrin-dependent transport. In case of the glucose transporter GLUT1, the mutated version was wrongly localized to the endocytic compartment [12].</p><p>A screening technique targeting transient SLiM-based interactions along the primary structure was recently developed. The technique, PrISMa (Protein Interaction Screen on a Peptide Matrix), is based on a membrane-bound array of overlapping peptides, spanning the entire sequence of the protein of interest, creating a sliding window for the detection of SLiM-mediated interactions [11] (Fig. 1d, f). The interaction partners for each peptide were identified using mass spectrometry–based proteomics. The PrISMa technique provides a number of advantages over peptide pull-downs. The high concentration of immobilized peptides on the matrix allows the stabilization of the transient interactions, thus increasing the sensitivity for detecting these interactions. The overlapping peptide structure of the matrix offers the implementation of powerful filtering methods, based on the partial presence of the SLiM in the peptide adjacent to the main binding peptide. 
This allows separating unspecific background binding from SLiM-mediated interactions (Fig. 1g). The technique was used to map the interactome of the transcription factor C/EBPβ, identifying a large number of new interaction partners [11].</p><p>Additionally, the setup of the peptide array allows the inclusion of PTMs in the matrix to probe simultaneously for PTM-mediated interactions (Fig. 1e). For the C/EBPß study, several PTMs including methylation, citrullination, acetylation, and phosphorylation were included. This revealed the binding of the TLE3 complex specifically to the methylated form of C/EBPß.</p><p>A limitation of the method is the restriction of the screen: it can be only applied to intrinsically disordered regions, in which the structure depends only on the amino acid sequence. For interactions that depend on three-dimensional structures, this cannot be applied. To address this limitation, Ramberger and coworkers combined the PrISMa method with a BioID interaction screen to explore the C/EBPα interactome. They observed a significant overlap of the interaction partners between both technical approaches, and interestingly, common protein binders with C/EBPβ interactome [60].</p>\n\nOutlook\n<p>Over time, peptide-based interaction studies have significantly helped to reveal interaction sites or confirm specific PTM-regulated interactions. The new matrix-based interaction screens open a new era of interaction studies allowing to test many SLiM-based interactions at the same time. Besides the costs for the peptide matrix, using a mass spectrometric readout, the time of the measurement for all the interaction screens is a major restriction. Although the measurement of certain interactions can be parallelized by the use of isotopic labeling techniques, like SILAC and maybe TMT/iTRAQ in the future, the measurement of a peptide matrix can consume weeks on data acquisition in the mass spectrometer. Here, the technical developments of fast liquid chromatography systems might reduce the time constraints and open the use of peptide matrices for more laboratories. This will broaden the use of these techniques to identify more, potential druggable, PPI-driven diseases and will promote the deeper understanding of PPI networks, which depend on low affinity interactions.</p>\n\n\n<p>Publisher's note</p><p>Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.</p>", "role": "user" }, { "content": "The analysis of protein-protein interactions (PPIs) is essential for the understanding of cellular signaling. Besides probing PPIs with immunoprecipitation-based techniques, peptide pull-downs are an alternative tool specifically useful to study interactome changes induced by post-translational modifications. Peptides for pull-downs can be chemically synthesized and thus offer the possibility to include amino acid exchanges and post-translational modifications (PTMs) in the pull-down reaction. The combination of peptide pull-down and analysis of the binding partners with mass spectrometry offers the direct measurement of interactome changes induced by PTMs or by amino acid exchanges in the interaction site. The possibility of large-scale peptide synthesis on a membrane surface opened the possibility to systematically analyze interactome changes for mutations of many proteins at the same time. Short linear motifs (SLiMs) are amino acid patterns that can mediate protein binding. 
A significant number of SLiMs are located in protein regions that lack secondary structure, making the interaction motifs readily available for binding reactions. Peptides are therefore particularly well suited to study protein interactions that are based on SLiM-mediated binding. New technologies using arrayed peptides for interaction studies are able to identify SLiM-based interactions and the underlying interaction motifs. Graphical abstract", "role": "assistant" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.49983
[ { "content": "Write a literature review chapter based on these paper abstracts.\nLiterature Review Title: A Survey of FPGA-Based Neural Network Accelerator \nChapter Title: INTRODUCTION \n1. Abstract of Cited Paper (BIB001): We present a state-of-the-art speech recognition system developed using end-to-end deep learning. Our architecture is significantly simpler than traditional speech systems, which rely on laboriously engineered processing pipelines; these traditional systems also tend to perform poorly when used in noisy environments. In contrast, our system does not need hand-designed components to model background noise, reverberation, or speaker variation, but instead directly learns a function that is robust to such effects. We do not need a phoneme dictionary, nor even the concept of a \"phoneme.\" Key to our approach is a well-optimized RNN training system that uses multiple GPUs, as well as a set of novel data synthesis techniques that allow us to efficiently obtain a large amount of varied data for training. Our system, called Deep Speech, outperforms previously published results on the widely studied Switchboard Hub5'00, achieving 16.0% error on the full test set. Deep Speech also handles challenging noisy environments better than widely used, state-of-the-art commercial speech systems. \n2. Abstract of Cited Paper (BIB002): In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision. \n3. Abstract of Cited Paper (BIB003): The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the five years of the challenge, and propose future directions and improvements.\n\nLiterature Review:", "role": "user" }, { "content": "Recent research on Neural Network (NN) is showing great improvement over traditional algorithms in machine learning. Various network models, like convolutional neural network (CNN), recurrent neural network (RNN), have been proposed for image, video, and speech process. 
CNN improves the top-5 image classification accuracy on the ImageNet BIB004 dataset from 73.8% to 84.7% in 2012 and further helps improve object detection BIB001 with its outstanding ability in feature extraction. RNN BIB002 achieves state-of-the-art word error rate on speech recognition. In general, NN features a high fitting ability to a wide range of pattern recognition problems. This ability makes NN a promising candidate for many artificial intelligence applications. But the computation and storage complexity of NN models are high. In Table 1, we list the number of operations, number of parameters (add or multiplication), and top-1 accuracy on the ImageNet dataset BIB004 of state-of-the-art CNN models. Take CNN as an example. The largest CNN model for 224 × 224 image classification requires up to 39 billion floating point operations (FLOP) and more than 500MB of model parameters BIB003. As the computation complexity is proportional to the input image size, processing images with higher resolutions may need more than 100 billion operations. Recent work like MobileNet and ShuffleNet is trying to reduce the network size with advanced network structures, but with obvious accuracy loss. The balance between the size of NN models and accuracy is still an open question today. In some cases, the large model size hinders the application of NN, especially in power-limited or latency-critical scenarios. Therefore, choosing a proper computation platform for neural-network-based applications is essential. A typical CPU can perform 10-100 GFLOP per second, with a power efficiency usually below 1 GOP/J. So CPUs can meet neither the high performance requirements of cloud applications nor the low power requirements of mobile applications. In contrast, GPUs offer up to 10 TOP/s of peak performance and are good choices for high performance neural network applications. Development frameworks like Caffe and TensorFlow also offer easy-to-use interfaces, which makes GPU the first choice for neural network acceleration. Besides CPUs and GPUs, FPGAs are becoming a platform candidate to achieve energy-efficient neural network processing. With a neural-network-oriented hardware design, FPGAs can implement high parallelism and make use of the properties of neural network computation to remove unnecessary logic. Algorithm research also shows that an NN model can be simplified in a hardware-friendly way without hurting the model accuracy. Therefore FPGAs can achieve higher energy efficiency than CPUs and GPUs. FPGA-based accelerator designs, however, face two challenges in performance and flexibility: • Current FPGAs usually support working frequencies of 100-300MHz, much lower than those of CPUs and GPUs. The FPGA's logic overhead for reconfigurability also reduces the overall system performance. A straightforward design on FPGA is thus hard to drive to high performance and high energy efficiency. • Implementing neural networks on FPGAs is much harder than on CPUs or GPUs. Development frameworks like Caffe and TensorFlow for CPU and GPU are absent for FPGA. Many designs addressing the above two problems have been carried out to implement energy-efficient and flexible FPGA-based neural network accelerators. In this paper, we summarize the techniques proposed in these works from the following aspects: • We first give a simple model of FPGA-based neural network accelerator performance to analyze the methodology of energy-efficient design. • We investigate current technologies for high-performance and energy-efficient neural network accelerator designs. We introduce the techniques at both the software and hardware level and estimate the effect of these techniques. • We compare state-of-the-art neural network accelerator designs to evaluate the techniques introduced and estimate the achievable performance of FPGA-based accelerator designs, which is at least 10× more energy-efficient than current GPUs. • We investigate state-of-the-art automatic design methods for FPGA-based neural network accelerators. The rest of this paper is organized as follows: Section 2 introduces the basic operations of neural networks and the background of FPGA-based NN accelerators. In Section 3, we analyze the design targets of NN accelerators and corresponding methods. Sections 4 and 5 review the techniques in NN model compression and accelerator design, respectively. Section 6 compares existing designs and evaluates the techniques. Section 8 introduces the methods for a flexible accelerator design. Section 9 concludes this paper.", "role": "assistant" } ]
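To make the throughput-versus-efficiency arithmetic in the survey text above concrete, here is a minimal sketch. The throughput figures echo the survey's rough numbers (CPU ~100 GFLOP/s, GPU ~10 TOP/s); the power draws and the FPGA design point are assumed illustrative values, not measurements of any cited design — the FPGA row simply instantiates the ~10× energy-efficiency claim.

```python
def ops_per_joule(throughput_gop_s: float, power_w: float) -> float:
    """Energy efficiency in GOP/J = (GOP/s) / (W, i.e. J/s)."""
    return throughput_gop_s / power_w

# Throughputs echo the survey's rough figures; power draws are assumed.
platforms = {
    "CPU":  (100.0,    120.0),  # ~100 GOP/s at ~120 W
    "GPU":  (10_000.0, 250.0),  # ~10 TOP/s at ~250 W
    "FPGA": (4_000.0,  10.0),   # hypothetical design point
}
for name, (gop_s, watts) in platforms.items():
    print(f"{name:>4}: {ops_per_joule(gop_s, watts):6.1f} GOP/J")
# CPU ~0.8 GOP/J (below 1, as stated), GPU ~40 GOP/J,
# FPGA ~400 GOP/J -- i.e. the claimed ~10x over the GPU.
```

The point of the sketch is only that energy efficiency is a ratio of two measurable quantities, so a slower device (the FPGA, at a lower clock) can still win on GOP/J if its power draw falls faster than its throughput.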
science.craftchem_ner
science.craftchem_ner.4349
[ { "content": "Extract all chemical entities (ChEBI) from this biomedical text. Output as JSON array.\n\nText: AbnormalMitochondrialPhenotypeinSlow-TwitchSkeletalMuscleofPGC-1alpha-/-Mice", "role": "user" }, { "content": "[]", "role": "assistant" } ]
science.bioasq_yesno_qa
science.bioasq_yesno_qa.1066
[ { "content": "Based on the following biomedical documents, answer the question with either yes or no.\n\nQuestion: Is Olaparib effective for prostate cancer?\n\nContext: We hypothesized that metastatic, castration-resistant prostate cancers with DNA-repair defects would respond to poly(adenosine diphosphate [ADP]-ribose) polymerase (PARP) inhibition with olaparib.METHODS: We conducted a phase 2 trial in which patients with metastatic, castration-resistant prostate cancer were treated with olaparib tablets at a dose of 400 mg twice a day.\nCONCLUSIONS: Treatment with the PARP inhibitor olaparib in patients whose prostate cancers were no longer responding to standard treatments and who had defects in DNA-repair genes led to a high response rate.\nIn addition, phase III trials in breast, gastric and pancreatic cancer are underway/planned, and phase I/II investigation is being conducted in other malignancies, including prostate cancer, non-small cell lung cancer, Ewing's sarcoma and advanced cancer.\nIn a phase II study, researchers found that the PARP inhibitor olaparib led to stable disease or tumor regressions in patients with advanced breast, ovarian, pancreatic, and prostate cancers who had germline mutations in BRCA1 or BRCA2.\nEligibility included ovarian cancer resistant to prior platinum; breast cancer with ≥ three chemotherapy regimens for metastatic disease; pancreatic cancer with prior gemcitabine treatment; or prostate cancer with progression on hormonal and one systemic therapy.\nThe tumor response rate was 26.2% (78 of 298; 95% CI, 21.3 to 31.6) overall and 31.1% (60 of 193; 95% CI, 24.6 to 38.1), 12.9% (eight of 62; 95% CI, 5.7 to 23.9), 21.7% (five of 23; 95% CI, 7.5 to 43.7), and 50.0% (four of eight; 95% CI, 15.7 to 84.3) in ovarian, breast, pancreatic, and prostate cancers, respectively. 
Stable disease ≥ 8 weeks was observed in 42% of patients (95% CI, 36.0 to 47.4), including 40% (95% CI, 33.4 to 47.7), 47% (95% CI, 34.0 to 59.9), 35% (95% CI, 16.4 to 57.3), and 25% (95% CI, 3.2 to 65.1) of those with ovarian, breast, pancreatic, or prostate cancer, respectively.\nIt is increasingly clear that there are molecularly distinct subtypes of various common cancers, with different therapeutic approaches required for each subtype, for example, the use of the monoclonal antibodies (trastuzumab and cetuximab) in HER2-positive breast cancer and wild-type KRAS colorectal cancer; tyrosine kinase inhibitors (imatinib, gefitinib, erlotinib and crizotinib) in chronic myeloid leukaemia, gastrointestinal stromal tumours and non-small-cell lung cancer and intracellular agents (vemurafenib and olaparib) in metastatic malignant melanoma and ovarian, breast and prostate cancer.\nOlaparib, one of the most studied PARPis, has demonstrated activity in BRCA1/2(MUT+) and BRCA-like sporadic ovarian and breast cancers, and looks promising in prostate and pancreatic cancers.\nProstate cancer cells cotreated with the HDAC inhibitor, suberoylanilide hydroxamic acid (SAHA) and the PARPi, olaparib, demonstrated a synergistic decrease in cell viability compared with single-agent treatment (combination index<0.9), whereas normal prostatic cells did not.\nCONCLUSIONS: Treatment with the PARP inhibitor olaparib in patients whose prostate cancers were no longer responding to standard treatments and who had defects in DNA-repair genes led to a high response rate.\nOlaparib, one of the most studied PARPis, has demonstrated activity in BRCA1/2(MUT+) and BRCA-like sporadic ovarian and breast cancers, and looks promising in prostate and pancreatic cancers.\nDNA-Repair Defects and Olaparib in Metastatic Prostate Cancer.\nProstate cancer cells cotreated with the HDAC inhibitor, suberoylanilide hydroxamic acid (SAHA) and the PARPi, olaparib, demonstrated a synergistic decrease in cell viability compared with single-agent treatment (combination index < 0.9), whereas normal prostatic cells did not.\nWe hypothesized that metastatic, castration-resistant prostate cancers with DNA-repair defects would respond to poly(adenosine diphosphate [ADP]-ribose) polymerase (PARP) inhibition with olaparib.\nA phase II study of the PARP inhibitor olaparib (AstraZeneca) for cancer patients with inherited BRCA1 and BRCA2 gene mutations confirmed earlier results showing clinical benefit for advanced breast and ovarian cancers, and demonstrated evidence of effectiveness against pancreatic and prostate cancers.\nBACKGROUND: Prostate cancer is a heterogeneous disease, but current treatments are not based on molecular stratification. We hypothesized that metastatic, castration-resistant prostate cancers with DNA-repair defects would respond to poly(adenosine diphosphate [ADP]-ribose) polymerase (PARP) inhibition with olaparib.METHODS: We conducted a phase 2 trial in which patients with metastatic, castration-resistant prostate cancer were treated with olaparib tablets at a dose of 400 mg twice a day.\nProstate cancer is a heterogeneous disease, but current treatments are not based on molecular stratification. We hypothesized that metastatic, castration-resistant prostate cancers with DNA-repair defects would respond to poly(adenosine diphosphate [ADP]-ribose) polymerase (PARP) inhibition with olaparib.\nSilencing RAD51 sensitized prostate cancer cells to SAHA and olaparib alone. 
Collectively, cotreatment with HDACi and PARPi downregulated HR-related protein expression and concomitantly increased DNA damage, resulting in prostate cancer cell death.\nProstate cancer cells cotreated with the HDAC inhibitor, suberoylanilide hydroxamic acid (SAHA) and the PARPi, olaparib, demonstrated a synergistic decrease in cell viability compared with single-agent treatment (combination index<0.9), whereas normal prostatic cells did not.\nThe specificity of the biomarker suite was 94%. Anemia (in 10 of the 50 patients [20%]) and fatigue (in 6 [12%]) were the most common grade 3 or 4 adverse events, findings that are consistent with previous studies of olaparib.CONCLUSIONS: Treatment with the PARP inhibitor olaparib in patients whose prostate cancers were no longer responding to standard treatments and who had defects in DNA-repair genes led to a high response rate.\nProstate cancer cells cotreated with the HDAC inhibitor, suberoylanilide hydroxamic acid (SAHA) and the PARPi, olaparib, demonstrated a synergistic decrease in cell viability compared with single-agent treatment (combination index < 0.9), whereas normal prostatic cells did not.\nIn addition, phase III trials in breast, gastric and pancreatic cancer are underway/planned, and phase I/II investigation is being conducted in other malignancies, including prostate cancer, non-small cell lung cancer, Ewing's sarcoma and advanced cancer.\nINTERPRETATION: Olaparib has antitumour activity against metastatic castration-resistant prostate cancer with DDR gene aberrations, supporting the implementation of genomic stratification of metastatic castration-resistant prostate cancer in clinical practice.\nCONCLUSIONS: In men with metastatic castration-resistant prostate cancer who had disease progression while receiving enzalutamide or abiraterone and who had alterations in genes with a role in homologous recombination repair, olaparib was associated with longer progression-free survival and better measures of response and patient-reported end points than either enzalutamide or abiraterone.\nOlaparib is an additional option for second- and third-line treatment in those with alterations in BRCA1, BRCA2, and ATM.\nIn this review, we describe current therapies for mCRPC, the rationale for anti-PARP therapies, the pharmacology of olaparib for prostate cancer, clinical trials of olaparib for mCRPC, our clinical experience with olaparib for prostate cancer at a comprehensive cancer center, and future directions of olaparib for the treatment of mCRPC.\nOlaparib, a poly (ADP-ribose) polymerase inhibitor, has demonstrated an improvement in median progression-free survival (PFS) in select patients with metastatic castration-resistant prostate cancer (mCRPC).\nActivity of durvalumab plus olaparib in metastatic castration-resistant prostate cancer in men with and without DNA damage repair mutations.\nBACKGROUND: Patients with metastatic castration-resistant prostate cancer and homologous recombination repair (HRR) mutations have a better response to treatment with the poly(ADP-ribose) polymerase inhibitor olaparib than patients without HRR mutations.\nThe TOPARP-A clinical trial demonstrated that the PARP inhibitor olaparib may be an effective strategy for treating prostate cancer.\nSUMMARY: The poly(ADP-ribose) polymerase (PARP) inhibitors olaparib and rucaparib are now approved by the Food and Drug Administration for the treatment of advanced prostate cancer. 
Here, we\nBACKGROUND: We previously reported that olaparib led to significantly longer imaging-based progression-free survival than the physician's choice of enzalutamide or abiraterone among men with metastatic castration-resistant prostate cancer who had qualifying alterations in homologous recombination repair genes and whose disease had progressed during previous treatment with a next-generation hormonal agent.\nRecent clinical studies show favorable results for the PARP inhibitor olaparib used as single agent for treatment of metastatic castration-resistant PCa.\nThe PROFOUND phase III trial, comparing olaparib with enzalutamide/abiraterone therapy, revealed increased radiological progression-free survival (rPFS) and overall survival (OS) among patients with metastatic castration-resistant prostate cancer (mCRPC) with BRCA1, BRCA2 or ATM mutations.\nOlaparib Targets Some Advanced Prostate Cancers.\nThe PARP inhibitor (PARPi) olaparib received FDA breakthrough designation for treatment of metastatic castration-resistant prostate cancers (CRPC) carrying mutations in BRCA1/2 or ATM genes.\nIn prostate cancer, two PARPi, rucaparib and olaparib, have been FDA approved for the treatment of metastatic castration-resistant prostate cancer (mCRPC).\nOlaparib is an FDA-approved PARP inhibitor (PARPi) that has shown promise as a synthetic lethal treatment approach for BRCA-mutant castration-resistant prostate cancer (CRPC) in clinical use\nTargeting Plk1 to Enhance Efficacy of Olaparib in Castration-Resistant Prostate Cancer\nRECENT FINDINGS: The approval of several PARPi (olaparib, rucaparib, and niraparib) has driven the focus of anticancer treatment on synthetic lethality in prostate cancer too.\nPATIENT SUMMARY: A large clinical study concluded that treatment with the PARP inhibitor olaparib benefits men with metastatic castration-resistant prostate cancer whose tumors harbor alterations in 15 different DNA repair genes.\nAmong them, olaparib and rucaparib have breakthrough designations for BRCA1/2-mutated mCRPC.\nIn phase II clinical trials, including patients with advanced castration-resistant PC, olaparib seems to be efficacious and well tolerated.\nOf note is the recent U.S. Food and Drug Administration breakthrough therapy designation of olaparib for the treatment of BRCA1/2- or ATM-mutated metastatic castration-resistant prostate cancer.", "role": "user" }, { "content": "yes", "role": "assistant" } ]
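As an aside on the response-rate figures quoted in these snippets: the abstracts do not state how their confidence intervals were computed. Assuming an exact binomial (Clopper–Pearson) interval — an assumption on our part, not something the text confirms — the overall tumor response rate of 26.2% (78 of 298; 95% CI, 21.3 to 31.6) can be reproduced in a few lines:

```python
from scipy.stats import beta

def clopper_pearson(k: int, n: int, alpha: float = 0.05):
    """Exact (Clopper-Pearson) two-sided binomial confidence interval."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

lo, hi = clopper_pearson(78, 298)  # overall tumor response rate
print(f"78/298 = {78/298:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
# Expected output close to the reported 26.2% (21.3 to 31.6).
```

The same helper applied to the per-tumor-type counts (e.g., 60 of 193 for ovarian cancer) appears to match the reported intervals to within rounding, which is consistent with — though not proof of — an exact-interval calculation.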
science.chemsum_single_document_summarization
science.chemsum_single_document_summarization.29233
[ { "content": "Write an abstract for this chemistry paper.\n\nTitle:\nDevelopment of a hydrolysis-based small-molecule hydrogen selenide (H<sub>2</sub>Se) donor\n\nPaper sections:\nIntroduction\n<p>Selenium is oen regarded as a toxic metalloid, but it is also an essential bioinorganic dietary micronutrient. [1][2][3] For example, geographic regions with selenium-decient soil display unusually high occurrences of conditions including Keshan and Kashin-Beck diseases in the population, which are both tied to a dietary scarcity of selenium. 4,5 Dietary sources of selenium are typically selenomethionine (SeMet) and selenate salts (SeO 4 2À ), which must pass through complex metabolic pathways prior to incorporation into selenium-containing biomolecules. 6 In the body, selenium exerts function primarily as selenocysteine, oen referred to as the 21st amino acid, which is incorporated into and gives rise to the oen unique reactivity of selenoproteins. 7 Twenty-ve selenoproteins have been identied in humans and fall into several main categories. These categories include glutathione peroxidases (Gpx), which scavenge harmful peroxide species, thioredoxin reductases (TrxR), which regulate thiol-disulde redox homeostasis, iodothyronine deiodinases (DIOs), which regulate thyroid hormone equilibria, and specialized selenoproteins that exhibit alternative functions, such as protein folding and selenium transport. 3 Selenium-decient environments oen result in the preferential expression of these proteins, whereas selenium-rich environments result in the upregulation of selenium excretion pathways to mitigate selenium toxicity. 8 Many of these pathways are hypothesized to proceed through the intermediate formation of hydrogen selenide (H 2 Se/HSe À ), which is an important yet elusive small biomolecule of interest (Fig. 1). 9 A common approach to increase the bioavailability of selenium is to use exogenous synthetic selenium-containing small molecules. For example, the organoselenium compound ebselen mimics the behavior of glutathione peroxidase and exhibits cytoprotective, anti-inammatory, and antioxidant effects. 10,11 Similarly, the glutathione-mediated reduction of selenite (SeO 3 2À ) to elemental selenium is thought to proceed through a selenodiglutathione (GS-Se-SG) intermediate en-route to a selenopersulde (GS-SeH), which subsequently either decomposes to GSH and Se 0 or is converted to H 2 Se through both enzymatic and non-enzymatic pathways. 12,13 More recently, the hydrolysis of phthalic selenoanhydride was used to generate reactive selenium species (RSeS) to examine the differential synergies of these compounds with H 2 S and GSH in radical scavenging. 14 In this investigation, H 2 Se release was proposed during hydrolysis but was not observed directly in the experiments.</p><p>Interest in developing chemical tools for investigating H 2 Se and related RSeS has grown in the last few years, with new investigations into the molecular recognition of HSe À in synthetic host-guest systems 15 and with the advent of rstgeneration uorescent probes for H 2 Se detection. 
16,17 In part, these investigations are motivated by potential roles of selenides in treating conditions ranging from arsenic poisoning, in which toxic arsenic species can react with H 2 Se to form readily-excreted products, 18,19 to cancer, in which H 2 Se can induce oxidative stress under normoxic conditions or reductive stress under hypoxic conditions in HepG2 cells, resulting in HMGB1 protein damage and ultimately apoptosis. 20 At physiological pH, almost all H 2 Se exists as HSe À due to the acidity of the diprotic form of H 2 Se (pK a ¼ 3.9). The high redox activity and high nucleophilicity of H 2 Se/HSe À , when coupled with the relatively low biological selenium content ($0.2 mg kg À1 in humans), make investigations into the biological roles of H 2 Se difficult. 21,22 Building from these past results and increased interest in biorelevant small RSeS, we viewed that wellcharacterized, synthetic small molecules that release H 2 Se directly and under controlled conditions would provide a muchneeded chemical tool for expanding research related to the chemical biology of selenium. Here we report the development and characterization of a hydrolysis-based small-molecule H 2 Se donor and provide insights into the reaction mechanism and methods for direct H 2 Se trapping.</p>\n\nResults and discussion\n<p>Drawing parallels to biological organosulfur chemistry, the last een years have witnessed a surge in research related to hydrogen sulde (H 2 S/HS À ) as an important reactive sulfur species and gasotransmitter. 23 Substantial efforts have focused on the development of small-molecule H 2 S donors for delivery to biological environments. [24][25][26] Although the structure and complexity of such systems have evolved signicantly, an early and broadly-used example of such donors is the hydrolysisactivated donor GYY4137, which relies on the hydrolytic cleavage of P]S bonds to generate H 2 S. 27 GYY4137 has been used in >200 publications to date (Web of Science) and exhibits anti-inammatory, vasorelaxant, and anti-cancer as well as other effects in different biological models [27][28][29] with diverse applications ranging from medicinal to agricultural science. 30,31 Motivated by the broad utility of this approach to access H 2 S donor motifs, we sought to use similar chemistry to generate well-dened H 2 Se donors that are activated by P]Se bond hydrolysis. To prepare such a donor, we treated Woollins' reagent with an excess of morpholine, drawing parallels to the synthesis of GYY4137, to generate TDN1042 in moderate yield (Fig. 2a). The resultant product was characterized by 1 H, 13 C { 1 H}, 31 P, and 77 Se NMR spectroscopy (Fig. S1-S4 †). Single crystals suitable for X-ray diffraction were grown by layering hexane onto a solution of TDN1042 in CH 2 Cl 2 , which conrmed the molecular structure (Fig. 2b).</p><p>With TDN1042 in hand, we next evaluated its reaction chemistry by NMR spectroscopy. Initial studies using 31 P NMR spectroscopy in wet DMSO-d 6 revealed the clean conversion of TDN1042 to phenylphosphonic acid (PPA) as expected (Fig. 3a). We next monitored the hydrolysis in buffered aqueous solutions using quantitative 31 P NMR spectroscopy with triethylphosphate (TEP) as an internal integration standard. In these experiments, we also observed clean conversion of TDN1042 (d( 31 P) ¼ 61 ppm) to the expected PPA hydrolysis product (d( 31 P) ¼ 12 ppm). 
To determine the effect of pH on this reaction, we measured the rate of hydrolysis of TDN1042 (10 mM) in citrate buffer (50 mM) ranging from pH 3.0 to pH 6.0 in ame-sealed NMR tubes at ambient temperature (Fig. 3b). The resulting hydrolysis data (Fig. 3c) revealed an increase in rate at more acidic pH values, which is consistent with the expected hydrolysis mechanism. A similar pH dependence was observed for GYY4137 in a previous report, 27 although the experimental conditions and methods used to monitor rates and product conversions are too dissimilar to those used here for TDN1042 to make direct quantitative comparisons. This similarity in pH dependences does, however, suggest that TDN1042 could nd utility in biological contexts much like GYY4137.</p><p>Having established that the hydrolysis of TDN1042 results in PPA formation, we next sought to conrm H 2 Se release directly. To monitor H 2 Se release, our goal was to trap H 2 Se directly rather than use uorogenic probes in case a reactive intermediate en-route to H 2 Se release resulted in the activation of such systems. To accomplish this labeling, we used benzyl bromide (BnBr) as an electrophilic trapping agent and monitored the reaction by 31 P and 77 Se NMR spectroscopy (Fig. 4a). On the basis of the proposed release mechanism of H 2 Se from TDN1042, we expected that, akin to initial protonation, BnBr would initially alkylate the P]Se moiety, which would activate TDN1042 toward hydrolysis to release benzyl selenol (BnSeH) with subsequent alkylation by BnBr to generate dibenzyl selenide (Bn 2 Se). By monitoring the reaction by both 31 P and 77 Se NMR spectroscopy, we observed immediate formation of an intermediate (1) upon addition of BnBr and H 2 O to a solution of TDN1042 in DMSO-d 6 (Fig. 4b and c). Intermediate 1 exhibited a singlet in the 31 P spectrum (d ¼ 69 ppm) with two sets of selenium satellites with different coupling constants (J P-Se ¼ 786 Hz, J P-Se ¼ 401 Hz). This coupling pattern is consistent with inequivalent selenium environments in 1 and is in contrast to the single set of selenium satellites seen in TDN1042 (d( 31 P) ¼ 61 ppm, 1 J P-Se ¼ 671 Hz). Furthermore, the 77 Se NMR spectrum of 1 revealed both a doublet (d ¼ À129 ppm, 1 J P-Se ¼ 786 Hz) and a doublet of triplets (d ¼ 354 ppm, 1 J P-Se ¼ 401 Hz, 2 J Se-H ¼ 11 Hz), which correspond to the P]Se and P-SeCH 2 Ph moieties, respectively (Fig. 4e). As the reaction proceeded, the intensity of the d ¼ À129 and 354 ppm peaks decreased, with concomitant formation of PPA (d( 31 P) ¼ 12 ppm) in the 31 P NMR spectrum. The 77 Se NMR spectrum revealed two Se-containing products, with a triplet at d( 77 Se) ¼ 394 ppm and a pentet at d( 77 Se) ¼ 330 ppm. The 330 ppm resonance corresponds to Bn 2 Se, in which the selenium signal is split by two sets of benzylic protons. The 394 ppm resonance corresponds to dibenzyl diselenide (Bn 2 Se 2 ), which was conrmed by comparison with an authentic Bn 2 Se 2 sample (Fig. S13 †). Formation of the diselenide is likely due to the auto-oxidation of BnSeH, which as has been observed previously. 32 Taken together, these alkylation experiments support the mechanism of H 2 Se release and provide mechanistic insights into the hydrolysis mechanism.</p><p>To denitively establish H 2 Se release, we next performed experiments in which the electrophilic trapping agent was separate from the donor. 
For this investigation, we used 2,4-dinitrouorobenzene (FDNB) as an electrophilic labeling reagent to trap the H 2 Se released and volatilized into the headspace of the reaction apparatus. 33,34 In these experiments, a vial containing an aqueous solution of TDN1042 was acidied with HCl and sparged with N 2 to help volatilize any H 2 Se into the headspace, which was subsequently bubbled through a trapping solution containing a large excess of FDNB (Fig. 5a).</p><p>A nal solution containing AgNO 3 was used to scavenge any unreacted H 2 Se. Using HPLC, we observed formation of both di(2,4-dinitrophenyl) selenide ((DNP) 2 Se) and the corresponding diselenide ((DNP) 2 Se 2 ) in the trapping solution (Fig. 5b and S14 †), which is consistent with directly trapping H 2 Se as well as the auto-oxidation process. The identity of the observed products was conrmed by comparison to authentic samples of (DNP) 2 Se and (DNP) 2 Se 2 synthesized according to published procedures (Fig. S18 †). 35 Taken together, these results conrm that TDN1042 releases H 2 Se directly.</p>\n\nConclusions\n<p>Here we report the development and characterization of the hydrolysis-based H 2 Se donor TDN1042. Using multinuclear NMR experiments, we monitored the reaction pathway for H 2 Se release and conrmed H 2 Se generation using different electrophilic trapping methods. We anticipate that this wellcharacterized H 2 donor will nd utility in biological investigations into the roles of H 2 Se and related reactive selenium species in the future and will facilitate the development of</p>\n\nConflicts of interest\n<p>There are no conicts to declare.</p>", "role": "user" }, { "content": "Selenium is essential to human physiology and has recently shown potential in the treatment of common pathophysiological conditions ranging from arsenic poisoning to cancer. Although the precise metabolic and chemical pathways of selenium incorporation into biomolecules remain somewhat unclear, many such pathways proceed through hydrogen selenide (H 2 Se/HSe À ) formation. Despite this importance, well-characterized chemistry that enables H 2 Se release under controlled conditions remains lacking.Motivated by this need, we report here the development of a hydrolysis-based H 2 Se donor (TDN1042).Utilizing 31 P and 77 Se NMR experiments, we demonstrate the pH dependence of H 2 Se release and characterize observed reaction intermediates during the hydrolysis mechanism. Finally, we confirm H 2 Se release using electrophilic trapping reagents, which not only demonstrates the fidelity of this donor platform but also provides an efficient method for investigating future H 2 Se donor motifs. Taken together, this work provides an early example of an H 2 Se donor that functions through a well-defined and characterized mechanism.", "role": "assistant" } ]
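A quick sanity check on the speciation claim in the text above: with the quoted pKa of 3.9, and taking pH 7.4 as "physiological" (an assumed value, since the paper does not specify one), the Henderson–Hasselbalch relation gives the HSe− fraction directly. The sketch below is only that arithmetic, not anything from the paper's experiments.

```python
def fraction_deprotonated(pH: float, pKa: float) -> float:
    """Henderson-Hasselbalch: [A-]/[HA] = 10**(pH - pKa)."""
    ratio = 10 ** (pH - pKa)
    return ratio / (1.0 + ratio)

# pKa = 3.9 is quoted in the text; pH 7.4 is an assumed physiological value.
f = fraction_deprotonated(pH=7.4, pKa=3.9)
print(f"HSe- fraction at pH 7.4: {f:.4%}")  # ~99.97%, i.e. "almost all"
```

The second deprotonation (HSe− to Se^2−) has a far higher pKa and can be ignored at this pH, which is why the single-pKa form of the relation suffices here.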
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.49948
[ { "content": "Write a literature review chapter based on these paper abstracts.\nLiterature Review Title: Facial Landmark Detection: a Literature Survey \nChapter Title: Analytic fitting methods \n1. Abstract of Cited Paper (BIB001): Active Appearance Models (AAMs) and the closely related concepts of Morphable Models and Active Blobs are generative models of a certain visual phenomenon. Although linear in both shape and appearance, overall, AAMs are nonlinear parametric models in terms of the pixel intensities. Fitting an AAM to an image consists of minimising the error between the input image and the closest model instances i.e. solving a nonlinear optimisation problem. We propose an efficient fitting algorithm for AAMs based on the inverse compositional image alignment algorithm. We show that the effects of appearance variation during fitting can be precomputed (“projected out”) using this algorithm and how it can be extended to include a global shape normalising warp, typically a 2D similarity transformation. We evaluate our algorithm to determine which of its novel aspects improve AAM fitting performance. \n2. Abstract of Cited Paper (BIB002): We describe a very simple framework for deriving the most-well known optimization problems in Active Appearance Models (AAMs), and most importantly for providing efficient solutions. Our formulation results in two optimization problems for fast and exact AAM fitting, and one new algorithm which has the important advantage of being applicable to 3D. We show that the dominant cost for both forward and inverse algorithms is a few times mN which is the cost of projecting an image onto the appearance subspace. This makes both algorithms not only computationally realizable but also very attractive speed-wise for most current systems. Because exact AAM fitting is no longer computationally prohibitive, we trained AAMs in-the-wild with the goal of investigating whether AAMs benefit from such a training process. Our results show that although we did not use sophisticated shape priors, robust features or robust norms for improving performance, AAMs perform notably well and in some cases comparably with current state-of-the-art methods. We provide Matlab source code for training, fitting and reproducing the results presented in this paper at http://ibug.doc.ic.ac.uk/resources. \n3. Abstract of Cited Paper (BIB003): In this paper we provide the first, to the best of our knowledge, Bayesian formulation of one of the most successful and well-studied statistical models of shape and texture, i.e. Active Appearance Models (AAMs). To this end, we use a simple probabilistic model for texture generation assuming both Gaussian noise and a Gaussian prior over a latent texture space. We retrieve the shape parameters by formulating a novel cost function obtained by marginalizing out the latent texture space. This results in a fast implementation when compared to other simultaneous algorithms for fitting AAMs, mainly due to the removal of the calculation of texture parameters. We demonstrate that, contrary to what is believed regarding the performance of AAMs in generic fitting scenarios, optimization of the proposed cost function produces results that outperform discriminatively trained state-of-the-art methods in the problem of facial alignment \"in the wild\".\n\nLiterature Review:", "role": "user" }, { "content": "The analytic fitting methods formulate AAM fitting problem as a nonlinear optimization problem and solve it analytically. 
In particular, the algorithm searches for the best set of shape and appearance coefficients p, λ that minimize the difference between the reconstructed image and the testing image with a nonlinear least squares formulation: min_{p,λ} Σ_x [ A_0(x) + Σ_{m=1}^{M} λ_m A_m(x) − I(W(x; p)) ]^2. Here, A_0 + Σ_{m=1}^{M} λ_m A_m represents the reconstructed face in the shape-normalized frame depending on the shape and appearance coefficients, I(W(x; p)) denotes the testing image warped back into that frame by the shape parameters p, and the whole objective function represents the reconstruction error. One natural way to solve the optimization problem is to use Gauss-Newton methods. However, since the Jacobian and Hessian matrices for both p and λ need to be calculated in each iteration BIB001, the fitting procedure is usually very slow. To address this problem, Baker and Matthews proposed a series of algorithms, among which the Project Out Inverse Compositional algorithm (POIC) BIB002 and the Simultaneous Inverse Compositional (SIC) algorithm are two popular works. In POIC BIB002, the errors are projected into the space spanned by the appearance eigenvectors {A_m}_{m=1}^{K_a} and its orthogonal complement. The shape coefficients are first searched in the appearance space, and the appearance coefficients are then searched in the orthogonal space, given the shape coefficients. In SIC, the appearance and shape coefficients are estimated jointly with a gradient descent algorithm. Compared to POIC, SIC is more computationally intensive, but generalizes better than POIC. Recently, more advanced analytic fitting methods only estimate the shape coefficients, which fully determine the landmark locations as in Equation 3. For example, in BIB004, the Bayesian Active Appearance Model formulates the AAM as a probabilistic PCA problem. It treats the texture coefficients as hidden variables and marginalizes them out to solve for the shape coefficients: p* = arg max_p ∫ P(I | p, λ) P(λ) dλ. But exactly integrating out λ can be computationally expensive. To alleviate this problem, in BIB003, Tzimiropoulos and Pantic propose the fast-SIC algorithm and the fast-forward algorithm. In both algorithms, the appearance coefficient updates ∆λ are represented in terms of the shape coefficient updates ∆p, and they are plugged into the nonlinear least squares formulation, which is then directly minimized to solve for ∆p. Different from BIB004, BIB003 follows a deterministic approach.", "role": "assistant" } ]
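The Gauss-Newton inner loop that these analytic fitting methods build on can be made concrete with a small sketch. The code below is not POIC, SIC, or fast-SIC from the cited papers; it is a minimal, generic Gauss-Newton least-squares fit on a synthetic 1-D "texture" with one appearance coefficient and a single shift parameter standing in for the shape warp W(x; p) — every name and constant in it is illustrative.

```python
import numpy as np

def gauss_newton(residual, theta0, n_iter=30, eps=1e-6):
    """Generic Gauss-Newton: theta <- theta - (J^T J)^{-1} J^T r(theta)."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        r = residual(theta)
        J = np.empty((r.size, theta.size))
        for j in range(theta.size):  # finite-difference Jacobian
            d = np.zeros_like(theta)
            d[j] = eps
            J[:, j] = (residual(theta + d) - r) / eps
        step, *_ = np.linalg.lstsq(J, r, rcond=None)  # (J^T J)^{-1} J^T r
        theta = theta - step
    return theta

# Toy 1-D stand-in for AAM fitting: mean texture A0, one appearance mode
# A1 with coefficient lam, and one "shape" parameter p that shifts the
# texture -- a crude stand-in for the warp W(x; p).
x = np.linspace(0.0, 1.0, 200)
A0 = np.sin(2 * np.pi * x)
A1 = np.cos(2 * np.pi * x)

def render(p, lam):
    return np.interp(x - p, x, A0) + lam * np.interp(x - p, x, A1)

I_obs = render(0.05, 0.3)  # synthetic "input image"

def residual(theta):
    p, lam = theta
    return render(p, lam) - I_obs  # reconstruction error

p_hat, lam_hat = gauss_newton(residual, [0.0, 0.0])
print(p_hat, lam_hat)  # approximately 0.05 and 0.3
```

A real AAM fitter differs mainly in scale (thousands of pixels, dozens of coefficients) and in avoiding the per-iteration Jacobian recomputation shown here — which is precisely the cost that the inverse compositional and project-out reformulations discussed above were designed to remove.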
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.17062
[ { "content": "Classify the topics of this COVID-19 literature paragraph by selecting from these categories:\n\n- Epidemic Forecasting: Predictions about COVID-19 spread\n- Treatment: Therapeutic methods or medication\n- Prevention: Measures to prevent spread\n- Mechanism: Biological mechanisms of the virus\n- Case Report: Report of a COVID-19 case\n- Transmission: How COVID-19 is transmitted\n- Diagnosis: Diagnosing COVID-19 in individuals\n\nYour output should be a valid JSON array with topics in double quotes and separated by commas. No additional text should be included.\n\nArticle:\n\nThe COVID-19 pandemic has presented a major unanticipated stress on the workforce, organizational structure, systems of care, and critical resource supplies. To ensure provider safety, to maximize efficiency, and to optimize patient outcomes, health systems need to be agile. Critical care cardiologists may be uniquely positioned to treat the numerous respiratory and cardiovascular complications of the SARS-CoV-2 and support clinicians without critical care training who may be suddenly asked to care for critically ill patients. This review draws upon the experiences of colleagues from heavily impacted regions of the United States and Europe, as well as lessons learned from military mass casualty medicine. This review offers pragmatic suggestions on how to implement scalable models for critical care delivery, cultivate educational tools for team training, and embrace technologies (e.g., telemedicine) to enable effective collaboration despite social distancing imperatives.", "role": "user" }, { "content": "[\"Treatment\"]", "role": "assistant" } ]
science.craftchem_ner
science.craftchem_ner.6838
[ { "content": "Extract all chemical entities (ChEBI) from this biomedical text. Output as JSON array.\n\nText: ThisprocedurehasbeenextensivelyappliedtotargetedEScellsandwasreadilyachievedinthecaseofCapn4+/-EScells[31].", "role": "user" }, { "content": "[]", "role": "assistant" } ]
science.craftchem_ner
science.craftchem_ner.1820
[ { "content": "Extract all chemical entities (ChEBI) from this biomedical text. Output as JSON array.\n\nText: Theseproteins,baseduponthedefectsinhumans,areresponsibleforregulatingdietarysterolentryandbiliarysterolsecretion.", "role": "user" }, { "content": "[\"proteins\", \"sterol\", \"sterol\"]", "role": "assistant" } ]
science.mslr2022_cochrane_multidoc_summarization
science.mslr2022_cochrane_multidoc_summarization.3222
[ { "content": "Based on the RCT reports below, write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article.\n\n\n\nTitle 1:\nRupture of the lateral ligaments of the ankle: a controlled clinical trial.\nAbstract 1:\nThe results of a randomized study comparing three different principles of treatment for rupture of the lateral ligaments of the ankle are presented. A total of 95 patients was treated and followed up for 17 months. In this series, 32 patients were treated with primary suture and plaster-of-Paris, 33 patients with plaster-of-Paris only and 30 patients with strapping. In all, 31 patients (97%) were completely free of symptoms in the operation group, 22 (67%) in the plaster-of-Paris group and 23 (77%) in the strapping group.\n\n\nTitle 2:\nComparison of three different treatments for ruptured lateral ankle ligaments.\nAbstract 2:\nTwo hundred consecutive patients with arthrographically verified rupture of one or both of the lateral ankle ligaments were allocated to treatment with either an operation and a walking cast, walking cast alone, or strapping with an inelastic tape - all for 5 weeks. Eighty-seven percent of the patients attended follow-up after 1 year. Only 5 percent in each treatment group were unsatisfied with the result. There were no differences between the treatment groups in ankle stability or symptoms during different activities, regardless of rupture of the anterior talofibular ligament alone or combined with rupture of the calcaneofibular ligament. However, the patients treated with tape had fewer symptoms, fewer complaints when running, and more ankles recovered to the preinjury state. Therefore, in lateral ankle ligament rupture, tape bandages seem preferable.\n\n\nTitle 3:\nOperative and functional treatment of rupture of the lateral ligament of the ankle. A randomised, prospective trial.\nAbstract 3:\nConsecutive patients with a confirmed rupture of at least one of the lateral ligaments of the ankle were randomly assigned to receive either operative or functional treatment. They were evaluated at a median of 8 years (6 to 11). In total, 370 patients were included. Follow-up was available for 317 (86%). Fewer patients allocated to operative treatment reported residual pain compared with those who had been allocated to functional treatment (16% versus 25%, RR 0.64, CI 041 to 1.0). Fewer surgically-treated patients reported symptoms of giving way (20% versus 32%, RR 0.62, CI 0.42 to 0.92) and recurrent sprains (22% versus 34%, RR 0.66, CI 0.45 to 0.94). The anterior drawer test was less frequently positive in surgically-treated patients (30% versus 54%, RR 0.54, CI 0.41 to 0.72). The median Povacz score was significantly higher in the operative group (26 versus 22, p < 0.001). Compared with functional treatment, operative treatment gives a better long-term outcome in terms of residual pain, recurrent sprains and stability.\n\n\nTitle 4:\nInjuries of the lateral ligament of the ankle. Conservative vs. operative repair.\nAbstract 4:\nnan\n\n\nTitle 5:\nA prospective study of the treatment of severe tears of the lateral ligament of the ankle.\nAbstract 5:\nA prospective randomized clinical study was undertaken to compare bandaging, plaster cast immobilisation and operative treatment for recent tears of the lateral ligament of the ankle. The follow-up period was two years. Subjectively, only the fear of giving-way showed a clear difference in favour of operative repair. 
Objective evaluation, including stress radiographs, demonstrated no statistical differences between the three methods of treatment. Although bandaging appeared somewhat less satisfactory with respect to the resulting stability of the ankle, the differences were not statistically significant. The lateral ligament in patients over 40 years of age showed a statistically significant tendency to heal less well than that of younger patients. Severe ankle sprains in patients under 40 years of age should preferably be treated by operation, especially in younger patients and if the person is physically active. Lateral ligament tears in patients over 40 years of age should be treated conservatively and a secondary reconstruction carried out later, if necessary.\n\n\nTitle 6:\nA comparison of surgical and conservative treatment in ankle ligament tears.\nAbstract 6:\nGood results have been reported after both primary repair and conservative management of grade III ankle sprains. This prospective study found no advantage of operative treatment compared to taping; no significant differences were found between the groups with regard to objective or subjective stability, functional scores, or the overall result.\n\n\nTitle 7:\n[Therapy of injuries of the exterior ankle joint ligaments. Randomized study of postoperative therapy and early functional treatment tactics].\nAbstract 7:\nnan\n\n\nTitle 8:\n[Lesions of the lateral ligament of the ankle joint].\nAbstract 8:\nnan\n\n\nTitle 9:\n[Conservative functional treatment of fibular capsule ligament rupture even in the performance athlete?].\nAbstract 9:\nIn a prospective randomised study we examined 80 men and women of 18 to 45 years of age and 40 competitive sportsmen and sportswomen with fresh ruptures of the talofibular and calcaneofibular ligaments treated either surgically or conservatively by functional bracing. We controlled the results by anamnesis and by clinical and radiological follow-up. Conservative treatment leads to the same results after 3-8 months as surgical treatment (5 +/- 2 degrees of talar angulation). Functional conservative treatment results in earlier painless weight bearing in comparison to the surgical treatment which was, however, combined with three weeks immobilization directly after surgery. This proves the superiority of the early functional conservative treatment as long as there are no bony capsuloligamentous tears, osseocartilaginous lesions of the joint surface with splintered-off chips or \"flakes\" of cartilage (\"flake fractures\") or reruptures of a chronically unstable ankle joint. The similarly good results achieved in the group of 40 sportsmen and sportswomen by early functional conservative treatment do not justify operation of ruptures of the talofibular and calcaneofibular ligaments.\n\n\nTitle 10:\nA randomized, prospective study of operative and non-operative treatment of injuries of the fibular collateral ligaments of the ankle.\nAbstract 10:\nOne hundred and forty-six adults who had an isolated injury of the fibular collateral ligaments of the ankle were randomized to be managed operatively or non-operatively. Disruption of the ligaments was diagnosed by means of a physical examination and on the basis of stress radiographs of the ankle made with use of a specially designed device to hold the leg. 
Operative treatment, performed in seventy-three patients, consisted of suture repair of the disrupted ligaments within seventy-two hours after the injury, followed by immobilization of the ankle in a below-the-knee plaster cast for six weeks. Non-operative treatment, used for seventy-three patients, consisted of the use of an ankle orthosis for six weeks. After a minimum of two years of follow-up, we could detect no significant differences, with the numbers available, between the two groups with regard to the functional result or the degree of joint laxity that was evident on stress radiographs. The non-operative group lost a mean of 1.6 weeks from work, and the operative group lost a mean of 7.0 weeks. We concluded that non-operative treatment of an injury of the fibular collateral ligaments of the ankle yields results that are comparable with those of operative repair and is associated with a shorter period of recovery.\n\n\nTitle 11:\n[Treatment of lateral ligament rupture of the ankle. Prospective comparison of operative and conservative treatment].\nAbstract 11:\nnan\n\n\nTitle 12:\n[Surgical or conservative treatment of recent rupture of the lateral ligament of the upper ankle joint. Randomized clinical study].\nAbstract 12:\nnan\n\n\nTitle 13:\n[Therapy of fresh fibular ligament ruptures].\nAbstract 13:\nIn a prospective, random study carried out at the Casualty Clinic of the University Hospital of Hannover from 15 April 1986 to 31 July 1986, 200 patients were randomly selected from four treatment groups, according to treatment: operative-immobilized (group A, n = 52), operative-functional (group B, n = 50), conservative-immobilized (group C, n = 48) and conservative-functional (group D, n = 50). Follow-up examinations were made in 92.5% of patients after 3 months and 64% after 12 months. At the 3-month and 1-year follow-ups, no statistically significant differences were found in the total evaluation (100-point checklist) Moderately significant instability (6 degrees -10 degrees talar tilt and 6-10 mm anterior talar dislocation) in the conservative groups C and D was observed only in the clinical-radiological instability test. This statistically significant difference is, however, only evident in stress tenographically confirmed dual-ligament lesions. No functional physiological differences can be found after 12 months; on average, the period of incapacitation is 3 weeks shorter than with primary functional treatment, but so far there are no long-term results. Until late results are available, we conclude that conservative treatment, even for professional athletes, can be recommended as the procedure most free of risk and most economical; if at all, operative therapy should only be carried out when the instability is very serious.", "role": "user" }, { "content": "There is insufficient evidence available from randomised controlled trials to determine the relative effectiveness of surgical and conservative treatment for acute injuries of the lateral ligament complex of the ankle. High quality randomised controlled trials of primary surgical repair versus the best available conservative treatment for well-defined injuries are required.", "role": "assistant" } ]
science.scientific_lay_summarisation_plos_single_doc_summ
science.scientific_lay_summarisation_plos_single_doc_summ.505
[ { "content": "Summarize the article in an uncomplicated manner.\n\nTitle: The G1/S Specific Cyclin D2 Is a Regulator of HIV-1 Restriction in Non-proliferating Cells\nArticle:\nAbstract:\nMacrophages are a heterogeneous cell population strongly influenced by differentiation stimuli that become susceptible to HIV-1 infection after inactivation of the restriction factor SAMHD1 by cyclin-dependent kinases( CDK). Here, we have used primary human monocyte-derived macrophages differentiated through different stimuli to evaluate macrophage heterogeneity on cell activation and proliferation and susceptibility to HIV-1 infection. Stimulation of monocytes with GM-CSF induces a non-proliferating macrophage population highly restrictive to HIV-1 infection, characterized by the upregulation of the G1/S-specific cyclin D2, known to control early steps of cell cycle progression. Knockdown of cyclin D2, enhances HIV-1 replication in GM-CSF macrophages through inactivation of SAMHD1 restriction factor by phosphorylation. Co-immunoprecipitation experiments show that cyclin D2 forms a complex with CDK4 and p21, a factor known to restrict HIV-1 replication by affecting the function of the downstream cascade that leads to SAMHD1 deactivation. Thus, we demonstrate that cyclin D2 acts as regulator of cell cycle proteins affecting SAMHD1-mediated HIV-1 restriction in non-proliferating macrophages.\nIntroduction:\nMacrophages are a highly heterogeneous cell population that plays a prominent role in innate immune system as key effector cells for the elimination of pathogens, infected cells and cancer cells[1, 2]. Macrophages also play an essential role in maintaining tissue homeostasis by supporting tissue development and repairing damaged tissue architecture[1, 3]. Macrophage differentiation from monocytes occurs in the tissue in concomitance with the acquisition of a functional phenotype that depends on microenvironmental signals, accounting for the wide and apparently opposed variety of macrophage functions[4, 5]. Macrophages, as well as other myeloid lineage cells, become susceptible to HIV-1 infection after degradation or inactivation of the restriction factor SAMHD1, a triphosphohydrolase enzyme that controls the intracellular level of dNTPs[6–9]. Phosphorylation of SAMHD1 by cyclin dependent kinases( CDK) has been strongly associated with inactivation of the virus restriction mechanism, providing an association between virus replication and cell proliferation[10–12]. The activity of CDK is regulated by the binding of cyclins, a family of proteins characterized by a periodic, cell-cycle dependent pattern of expression[13, 14]. Cyclin-CDK complexes govern cell cycle progression and proliferation of mammalian cells and thus, pinpoint the specific time in which an event occurs during the cell cycle[13, 14]. We and others have shown that the complex cyclin D3-CDK6 acting upstream of CDK2 controls SAMHD1 phosphorylation and function in primary lymphocytes and macrophages[11, 15–17]. Cyclin-CDK function is also controlled by cyclin dependent kinase inhibitors( CDKIs) that generally act as negative regulators of the cell cycle by binding to CDKs and inhibiting their kinase activity[18]. Of particular importance is p21/waf1, a G1/S phase CDKI, that may also control HIV-1 replication through SAMHD1[19, 20]. D-type cyclins( cyclins D1, D2 and D3) are regarded as essential links between cell environment and the core cell cycle machinery. 
D-type cyclins drive cells through the G1 restriction point and into the S phase, after which growth factor stimulation is no longer essential to complete cell division[21]. D-type cyclins share the capacity to activate both CDK4 and CDK6[14]. Studies on single, double and triple cyclin D knockout mice revealed that D-type cyclin complexes have redundant functions. However, different D-type cyclins exhibit distinct expression patterns depending on the cell type, indicating that each D-type cyclin has essential functions in particular settings, as suggested by the narrow and tissue-specific phenotypes of the knockout mice( reviewed in[21]). Here, we have used primary human monocyte-derived macrophages( MDMs) differentiated through different stimuli to evaluate macrophage heterogeneity on cell activation and proliferation, characteristics that influence gene and protein expression patterns and determine susceptibility to HIV-1 infection. The comparative study has led to the identification and characterization of a cell cycle-dependent pathway that restricts HIV-1 infection in primary macrophages. This non-proliferating macrophage population is characterized by a high expression of the G1/S-specific cyclin D2. Cyclin D2 acts through binding to CDK4 and p21 in GM-CSF macrophages, a complex which is responsible for the lack of the active CDK that phosphorylates SAMHD1. Data from mouse peritoneal macrophages confirmed the existence of cyclin D2-expressing macrophages in vivo, further supporting the key role of cyclin D2.\nDiscussion:\nMacrophages are key components of the innate immune system that reside in tissues, where they function as immune sentinels. Although macrophage heterogeneity has classically been organized around two polarized endpoints known as M1 or classical and M2 or alternative activation[31], recent evidence has revealed a greater, previously unrecognized diversity in the ontogeny and function of tissue-resident macrophages. It is now established that macrophages from embryonic progenitors can persist in tissues into adulthood and self-maintain by local proliferation( reviewed in[4, 5]). However, monocytes also contribute to the resident macrophage population, on which the local environment can impose tissue-specific macrophage functions( reviewed in[4, 5]). The myeloid colony-stimulating factors M-CSF and GM-CSF are known to modulate macrophage phenotype, and many studies illustrate their importance in the magnitude, duration and character of inflammatory responses[32, 33]. Thus, describing the distinct molecular phenotypes that result from M-CSF or GM-CSF stimulation may be important for both acute and chronic inflammatory pathology. Here, we show that differentiation stimuli determine distinct cell proliferation and cell cycle progression in primary macrophages, characteristics that are mostly dependent on the differential expression patterns of cell cycle proteins, especially D-type cyclins, their catalytic partners, CDKs, and the CDK inhibitor p21. Consistent with our observations, upregulation of cyclin D2 in response to GM-CSF treatment has already been reported in different hematopoietic cells[34–36], suggesting that the role of cyclin D2 might also be relevant for other cell types. The molecular interplay between the cell cycle, cyclins, and cell function is far from being fully understood. 
Conceptual advances in the field continue to uncover novel and interesting roles for cyclins in cellular processes that contribute to diseases, such as cancer or pathogenic infections[37]. It is well accepted that cell cycle control plays a major role in determining susceptibility to HIV-1 infection[38, 39]. It has been previously shown that limiting dNTP synthesis, for example through ribonucleotide reductase inhibition by hydroxyurea, limits HIV-1 replication[40]. Thus, differences on cell cycle regulation between M-CSF and GM-CSF macrophages also determined the susceptibility to HIV-1 infection and this may be a direct consequence of the control of the restriction factor SAMHD1 by CDK-mediated phosphorylation. Previous observations in M-CSF macrophages, showed that SAMHD1 phosphorylation was directly phosphorylated by CDK2, whose kinase activity was upstream regulated by the cyclin D3-CDK6 complex[11, 16]. However, the molecular mechanism might be different in GM-CSF macrophages because CDK6 was barely expressed and D-type cyclins expression was significantly upregulated, especially that of cyclin D2. Accordingly, knockdown of cyclin D2 and cyclin D3 resulted in opposite effects depending on the macrophage type, in terms of susceptibility to HIV-1 infection: cyclin D2 restricts infection in non-cycling GM-CSF macrophages and cyclin D3 enables infection in proliferating M-CSF macrophages. All D-type cyclins( D1, D2 and D3) are closely associated to the G1 phase, whose expression is induced by mitogenic signals and therefore play a significant role in cell cycle entry, by assembling together with CDK4 or CDK6[18, 21, 37]. However, although apparently redundant, single knockout mice exhibit several cell type specific abnormalities[21], suggestive of essential functions on particular settings for each cyclin, explaining, in part, the apparently opposed effects observed here for cyclin D2 and D3 when assessing susceptibility to HIV infection. On the other hand, D-type cyclins are linked to many human malignancies. Cyclin D1 is a well-established oncogene( review in[18]) but cyclin D2 or cyclin D3 overexpression are rarely reported[41, 42]. However, CCND2 is frequently methylated, with loss of cyclin D2 expression in pancreatic, breast and prostate cancer[43–45], pointing to a potential role as a tumor suppressor rather than an oncogene, indicating that overexpression of cyclin D2 might be limiting cell proliferation, as described here in GM-CSF macrophages. To uncover the molecular mechanism underlying cyclin D2 function in GM-CSF macrophages and taking into account that cyclins alone do not have catalytic activity per se, the study of putative partners of cyclin D2 was addressed. In concordance with the differential gene expression patterns, the inhibition of CDK4 which is similarly expressed in both macrophage types did not change protein profile or HIV-1 susceptibility. Conversely, knockdown of the CDK inhibitor p21, whose expression was also upregulated in GM-CSF macrophages, showed the same effect as that of Cyclin D2, i. e., upregulation of SAMHD1 phosphorylation and enhancement of HIV-1 replication, similar to our previous observation in M-CSF macrophages[20]. p21 belongs to the Cip/Kip family of CDKIs that have historically been considered negative regulators of the cyclin-CDKs and therefore controlling cell cycle progression, especially when referred to cyclin-CDK1 or-CDK2 complexes[20, 46]. 
However, Cip/Kip family of CDKIs interaction with Cyclin D-CDK4/6 appear much more complex, being involved in both the stabilization of the cyclin D/CDK complex but also acting as inhibitors of its kinase activity[47, 48]. The observation that a complex formed by Cyclin D2/CDK4/p21 was identified in non-cycling GM-CSF macrophages argues in favor of the inhibitory hypothesis at least in this specific cell type. Moreover, the fact that p27, another Cip/Kip family member, is barely expressed in GM-CSF macrophages indicates the specificity to the effect of Cyclin D2/CDK4/p21 complex. A direct inhibition of CDK2 function, similar to that observed in M-CSF[20] cannot be completely rule out, but seems improbable due to the low expression of CDK2 and lack of cell proliferation markers seemed in GM-CSF macrophages. The present works also highlights the interplay between cell cycle control and viral replication, with important implications that might be broader than simply affecting susceptibility to HIV-1 infection. Interestingly, both p21 and Cyclin D2 were identified as potential markers for viral latency, as both genes showed increased histone modification levels in HIV latently infected cells[49]. These observation suggests that the maintenance or not of HIV-1 latency may also be controlled by cell cycle related proteins such as Cyclin D2 and p21 and thus, open the opportunity for new therapeutic interventions. In addition, p21 and Cyclin D2 were also found upregulated after HTLV-I infection, a process that is mediated by the viral protein Tax, indicating that deregulation of G1/S checkpoint is also relevant for other retroviruses[48]. In summary, the identification of a novel cell cycle-mediated viral restriction pathway in primary non cycling macrophages have provided new evidences of the tight interplay between viral replication and cell cycle control, pointing towards the concerted action of Cyclin D2 and p21 in HIV-1 replication. The demonstration and characterization of specific D-type Cyclin roles in certain cell types, such as that reported for Cyclin D2 here, may also offer a window of opportunity for targeting D Cyclins in viral infections and in human cancers as well, where D-type Cyclin expression is frequently deregulated.", "role": "user" }, { "content": "Macrophages are a heterogeneous population of immune cells that provide crucial innate immune defense against pathogens , including HIV-1 . The molecular biology of HIV-1 infection in macrophages is influenced by the presence of the host cell restriction factor SAMHD1 , which is regulated by phosphorylation by cyclin dependent kinases , the catalytic proteins responsible for cell cycle progression . This study shows that differentiation stimuli strongly influence macrophage cell cycle and proliferation characteristics as well as susceptibility to HIV-1 infection through modulation of SAMHD1 activation . We have identified cyclin D2 as the key step controlling susceptibility to HIV-1 infection by modulation of the signaling pathway leading to SAMHD1 phosphorylation . We show that a complex formed by cyclin D2-CDK4-p21 in GM-CSF macrophages is responsible for the lack of the active CDK , which phosphorylates SAMHD1 . This situation is reversed in the absence of cyclin D2 , leading to the activation of CDKs and subsequent phosphorylation of its substrates , including SAMHD1 . 
Thus , we propose that the differential expression of the G1/S-specific cyclin D2 controls the HIV-1 restriction pathway in primary macrophages .", "role": "assistant" } ]
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.20466
[ { "content": "Write a Related Work section based on these abstracts. Use numerical citations like [1], [2], etc.\n\nMain Abstract:\nInstead of relying on huge and expensive data centers for rolling out cloudbased services to rural and remote areas, we propose a hardware platform based on small single-board computers. The role of these micro-data centers is twofold. On the one hand, they act as intermediaries between cloud services and clients, improving availability in the case of network or power outages. On the other hand, they run community-based services on local infrastructure. We illustrate how to build such a system without incurring high costs, high power consumption, or single points of failure. Additionally, we opt for a system that is extendable and scalable as well as easy to deploy, relying on an open design.\n\nCited Abstract(s):\n[1]: The unparalleled success of mobile technologies, the emergence of new modes of software and hardware production, and the free circulation of shared knowledge in the Web 2.0 have enabled a new generation of bottom-up, community-based, cost-effective telecommunications initiatives and projects. While these endeavours find their roots in previous hobbyists' movements (i.e. amateur radio, software hackers, do-it-yourself communities) today's' initiatives are able to connect, co-produce and share knowledge with world-wide communities, engaging new participants both at the local and the global level. This article reviews recent developments that aim to provide free or low-cost access to telecommunication services in different areas. From Do-It-Yourself (DIY) satellites to mesh networks, these projects tap into the wisdom and resources of communities to offer non-commercial alternatives to present telecommunications services. The technology, organisational, and regulatory challenges they also face cannot be underestimated either. However, in their struggle to find and consolidate new markets, affordable telecommunications reveal that a new digital economy based on co-production could be under way.\n\n[2]: This book provides policymakers and regulators with the frameworks and ideas to help them design strategies and policies to support the growth of the broadband ecosystem in their countries.\n\n[3]: The development towards ubiquitous network access requires innovative solutions to get remote areas included, especially rural areas of developing regions. We report on recent progress in the Serengeti Broadband Network, one of the first pilots established in the Tanzania ICT for Rural Development programme with the mission to design and validate a method to establish sustainable broadband markets in under-served areas. The challenges include ownership and leadership, sustainable business models, robustness of network components and poor or non-existent supply chains, including power supply.\n\n[4]: \n\n[5]: Advancement in mobile broadband communication enables to effectively communicate anywhere at any time. More specifically, for people who live in rural areas of developing countries it is the best means to communicate with the rest of the world. This has prodigious potential of enabling new opportunities in their live. 
The appearance of cloud computing, the recent improvements in communication networks, and the mobile phone market penetration allow a large number of users to access remote computing systems using their mobile devices.\n\n[6]: Connecting the unconnected world is a challenging task not only in technological perspective but also economically. Providing a viable solution for a communication network in large rural areas such as parts of Africa or remote Lapland in Northern Finland, surprisingly, share similar challenges. The main criteria for such a network are energy-efficient, robust, and reliable operation even in harsh environment. Important aspects are also low-cost and easy deployment of such a network. Developing a network that would meet all the criteria requires theoretical studies as well as real-world testing and implementations. In this paper, we will present some early results and design highlights of a wireless mesh network testbed deployment that can be used to foster the research work. We also envision it as a base for an actual solution. We introduce some of the concepts and objectives behind the testbed and describe and rationalize the chosen hardware and architecture choices.\n\n[7]: DakNet provides extraordinarily low-cost digital communication, letting remote villages leapfrog past the expense of traditional connectivity solutions and begin development of a full-coverage broadband wireless infrastructure. What is the basis for a progressive, market-driven migration from e-governance to universal broadband connectivity that local users will pay for? DakNet, an ad hoc network that uses wireless technology to provide asynchronous digital connectivity, is evidence that the marriage of wireless and asynchronous service may indeed be the beginning of a road to universal broadband connectivity. DakNet has been successfully deployed in remote parts of both India and Cambodia at a cost two orders of magnitude less than that of traditional landline solutions.", "role": "user" }, { "content": "There are numerous publications on connectivity, communications networks, and bandwidth, see [2]9 [2] [3] [2]7 [1] [2]2 for a few examples. While the networking aspect is very important, the storage and computational aspects should not be neglected; especially since getting the infrastructure for widespread broadband connectivity into place will still take considerable time. Consequently, there is a need for local data centers that can bridge the communication gaps. propose a cloudlet-based approach to tackle network issues in emerging regions, but do so very briefly on a very abstract level, identifying research challenges, but not proposing any implementation [5] .", "role": "assistant" } ]
science.mslr2022_ms2_multidoc_summarization
science.mslr2022_ms2_multidoc_summarization.4056
[ { "content": "Based on the RCT reports below, write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article.\n\n\n\nTitle 1:\nFemoral tunnel enlargement after anatomic ACL reconstruction: a biological problem?\nAbstract 1:\nTunnel enlargement after anterior cruciate ligament ( ACL ) reconstruction may compromise revision surgery . The cause of this tunnel enlargement is not yet fully understood , but it is thought to be multifactorial , with biomechanical and biological factors playing a role . Tunnel enlargement has been described particularly in patients who underwent ACL reconstruction with hamstring tendons with extracortical fixation devices . The purpose of our study was to evaluate prospect ively with magnetic resonance imaging ( MRI ) the changes in femoral tunnel diameter following arthroscopic anatomic ACL reconstruction with hamstring tendons . At 3-month post-op , all tunnels had enlarged compared to the diameter of the drill and most tunnels enlarged more in the midsection than at the aperture . In the posterolateral tunnels , the entrance increased 16 % in diameter and the middle of the tunnel increased 30 % in diameter . In the anteromedial femoral tunnels , the tunnels enlarged 14 % at the aperture and 35 % in the midsection . All femoral tunnels enlarged and most of them enlarged in a fusiform manner . The biological factors explain better our findings than the mechanical theory , although mechanical factors may play a role and the cortical bone at the entrance of the tunnel may modify the way tunnels respond to mechanical stress\n\n\nTitle 2:\nTunnel widening after hamstring anterior cruciate ligament reconstruction is influenced by the type of graft fixation used: a prospective randomized study.\nAbstract 2:\nPURPOSE To compare the incidence of tunnel widening ( TW ) in patients undergoing anterior cruciate ligament ( ACL ) reconstruction with hamstring graft using either ( group A ) transfemoral fixation implant ( Transfix ; Arthrex , Naples , FL ) and an interference screw ( Arthrex ) in the tibial tunnel or ( group B ) extracortical fixation ( EndoButton ; Smith & Nephew Endoscopy , And over , MA ) in the femur and bicortical screw and washer distal to the tibial tunnel . TYPE OF STUDY Prospect i ve r and omized study . METHODS One hundred patients were included and r and omized and 87 patients were assessed at a 1-year follow-up . The evaluation included st and ardized radiographs , KT-1000 data , International Knee Documentation Committee ( IKDC ) ratings , and Lysholm score . The diameter of the tunnel at the 1 year follow-up was , after correction for magnification , compared with the tunnel diameter of the radiograph from 2 weeks postoperatively . A more than 2-mm enlargement was considered TW . RESULTS In group A in which transfixation in the femur and interference screw in the tibia was used , 7 of 41 patients had developed femoral TW and 5 of 41 tibial TW . In group B , 20 of 46 patients had TW in the femur and 16 of 46 in the tibia ( P < .05 , chi-square test ) . No significant difference was found with respect to Lysholm score , IKDC , or arthrometric evaluation . CONCLUSIONS There was a significant reduction of TW in both the femur and the tibia using fixation points close to the joint , compared with the system where the distance between the fixation points is long . 
We conclude that the position of the fixation sites and type of fixation device are major factors in the development of TW after ACL surgery . LEVEL OF EVIDENCE Level I , therapeutic study in a prospect i ve r and omized clinical trial\n\n\nTitle 3:\nAnterior cruciate ligament reconstruction using autologous hamstring double bundle graft compared with single bundle procedures.\nAbstract 3:\nA total of 218 patients with unilateral anterior cruciate ligament deficiency were r and omly assigned to one of four groups . In group A an anatomical double bundle anterior cruciate ligament reconstruction was performed ; group B were treated by a single bundle using an Endobutton for femoral fixation ; in group C by a single bundle using RigidFix cross pins for femoral fixation ; and in group D by a single bundle using a bioabsorbable TransFix II screw for femoral fixation . For tibial fixation a bioabsorbable Intrafix interference screw was used for all the groups and the graft was fashioned from the semitendinosus and gracilis tendons in all patients . In all , 18 patients were lost to follow-up . The remaining 200 were subjected to a clinical evaluation , with assessment of the anterior drawer , Lachman 's and the pivot-shift tests , and KT-1000 arthrometer measurement . They also completed the International Knee Documentation Committee , Lysholm knee and Tegner activity scores . At a mean of 29 months ( 25 to 38 ) follow-up there were no significant differences concerning time between injury and range of movement and Lysholm knee scores among the four groups . However , the double bundle method showed significantly better results for the pivot-shift test ( p = 0.002 ) . The KT 1000 measurements showed a mean difference between the reconstructed knee and the patients ' normal knee of 1.4 mm in the double bundle group and 2.4 mm in the single bundle group ; which was statistically significant . The Lachman and anterior drawer tests also showed superior results for the double bundle method . The International Knee Documentation Committee scale showed no significant difference among the groups ( p < 0.001 ) . On clinical evaluation the double bundle group showed less laxity than the single bundle groups . However , regardless of the technique , all knees were improved by anterior cruciate ligament reconstruction compared with their pre-operative status\n\n\nTitle 4:\nA prospective evaluation of tunnel enlargement in anterior cruciate ligament reconstruction with hamstrings: extracortical versus anatomical fixation\nAbstract 4:\nAbstract . Changes in the femoral and tibial bone tunnel were studied prospect ively after arthroscopic ACL reconstruction with quadruple hamstring autograft . To determine whether tunnel enlargement can be decreased by fixing the graft close to the joint line having a stiffer fixation construct we compared \" anatomical \" ( one absorbable interference screw femorally , and bicortical fixation with two absorbable interference screws tibially ) and extracortical fixation techniques ( Endobutton femorally , and two no. 6 Ethibond sutures over a suture washer tibially ) . Over a 2-year period we evaluated 60 patients clinical ly ( IKDC scale , Cincinnati Knee Score , KT-1000 ) and radiographically ( confirmed by MRI ) . The operated knee was radiographed immediately postoperatively and 6 and 24 months postoperatively . 
The femoral and tibial bone tunnel diameter was measured on anteroposterior and lateral images , and the tunnel area was calculated and compared to the initial area calculated from the perioperative drill size . In the \" anatomical \" group the immediately postoperative bone tunnel area was 75 % larger than the initial tunnel area , after 6 months it was increased another 31 % , and between 6 and 24 months it remained basically unchanged . In the \" extracortical \" group there was no significant enlargement immediately postoperatively , but after 6 months it was 65 % larger than the initial area of drill and graft size , and between 6 and 24 months it decreased to 47 % . There was no correlation between the amount of tunnel enlargement and clinical scores or KT-1000 measurement . Arthroscopic ACL reconstruction with quadruple hamstring autograft is associated with bone tunnel enlargement . Using a purely extracortical fixation technique thus significantly increased the tibial and femoral tunnel area during the first 6 postoperative months , while it decreased slightly thereafter . The insertion of large interference screws apparently not only compresses the graft in the bone tunnel but also significantly enlarges the bone tunnel itself . The immediate enlargement at the time of the operation is followed by a reduced further enlargement at 6 months and then stabilization . Tunnel widening did not influence clinical outcome over a 2-year period", "role": "user" }, { "content": "No significant differences were found between the cortical button and transfemoral fixation groups .\nIncluded studies did not report differences in clinical outcomes between the 2 groups .\nRadiographic results suggest increased femoral tunnel widening in the cortical button group .\nHowever , tunnel widening was not found to affect clinical results .\nThe present evidence suggests that there are no short- to medium-term differences in knee-specific outcome measures between patients treated with cortical button femoral graft fixation and those treated with suspensory transfemoral fixation when undergoing ACL reconstruction .\nIn addition , radiologic evidence of tunnel widening does not seem to affect short- to medium-term clinical outcomes .", "role": "assistant" } ]
science.covid_deepset_qa
science.covid_deepset_qa.504
[ { "content": "Read the text and answer the question. Only include the exact text from the passage that answers the question.\n\nText: The main idea of the proposed model, called FPASSA-ANFIS, is to improve the performance of ANFIS by determining the parameters of ANFIS using FPASSA. The FPASSA-ANFIS model is evaluated using the World Health Organization (WHO) official data of the outbreak of the COVID-19 to forecast the confirmed cases of the upcoming ten days. More so, the FPASSA-ANFIS model is compared to several existing models, and it showed better performance in terms of Mean Absolute Percentage Error (MAPE), Root Mean Squared Relative Error (RMSRE), Root Mean Squared Relative Error (RMSRE), coefficient of determination ( R 2 ), and computing time. Furthermore, we tested the proposed model using two different datasets of weekly influenza confirmed cases in two countries, namely the USA and China. The outcomes also showed good performances. Text: A large family of viruses, called coronaviruses, are severe pathogens for human beings, which infect respiratory, hepatic, gastrointestinal, and neurologic diseases. They are distributed among humans, birds, livestock, mice, bats, and other wild animals [1] [2] [3] . The outbreaks of two previous coronaviruses, SARS-CoV and MERS-CoV in 2003 and 2012, respectively, have approved the transmission from animal to animal, and human to human [4] . In December 2019, the World Health Organization (WHO) received notifications from China for many cases of respiratory illness that were linked to some people who had visited a seafood market in Wuhan [5] . Currently, Wuhan city suffers from the spreading of a novel coronavirus, called COVID-19 (previously, it was called 2019-nCoV). In [6] , the authors concluded that COVID-19 likely originated in bats, because it is more similar to two bat-derived coronavirus strains. However, the source of the COVID-19 is not confirmed yet, and it communities, Hong Kong and Toronto, were 1.2 and 1.32, respectively. Ong et al. [20] proposed a monitoring and forecasting model for influenza A (H1N1-2009). Furthermore, Nah et al. [21] proposed a probability-based model to predict the spread of the MERS. The Adaptive Neuro-Fuzzy Inference System (ANFIS) [22] is widely applied in time series prediction and forecasting problems, and it showed good performance in many existing applications. It offers flexibility in determining nonlinearity in the time series data, as well as combining the properties of both artificial neural networks (ANN) and fuzzy logic systems. It has been applied in various forecasting applications, for example, in [23] , a stock price forecasting model was proposed using ANFIS and empirical mode decomposition. Chen et al. [24] proposed a TAIEX time series forecasting model based on a hybrid of ANFIS and ordered weighted averaging (OWA). In [25] , another time series forecasting method was presented for electricity prices based on ANFIS. Svalina et al. [26] proposed an ANFIS based forecasting model for close price indices for a stock market for five days. Ekici and Aksoy [27] presented an ANFIS based building energy consumption forecasting model. More so, ANFIS is also applied to forecast electricity loads [28] . Kumar et al. [29] proposed an ANFIS based model to forecast return products. Ho and Tsai [30] applied ANFIS to forecast product development performance. However, estimating ANFIS parameters is a challenge that needs to be improved. 
Therefore, in previous studies, some individual swarm intelligence (SI) methods have been applied to the ANFIS parameters to enhance time series forecasting because these parameters have a significant effect on the performance of ANFIS. The SI methods include the particle swarm optimization (PSO) [31, 32] , social-spider optimization [33] , sine-cosine algorithm (SCA) [34] , and multi-verse optimizer (MVO) [35] . For example, in [34] SCA algorithm was applied to improve the ANFIS model to forecast oil consumption in three countries, namely, Canada, Germany, and Japan. In the same context, in [35] , The MVO algorithm was used to enhance the ANFIS model to forecast oil consumption in two countries. In addition, in [36] the PSO was used with ANFIS to predict biochar yield. However, individual SI algorithms may stock at local optima. Therefore, one solution is to apply hybrid SI algorithms to avoid this problem. In [37] , a hybrid of two SI algorithms, namely GA and SSA, was presented to improve the ANFIS model. The proposed new model called GA-SSA-ANFIS was applied to forecast crude oil prices for long-term time series data. However, the previously mentioned methods suffer from some limitations that can affect the performance of the forecasting output such as slow convergence and the ability to balance between exploration and exploitation phases can influence the quality of the final output. This motivated us to propose an alternative forecasting method dependent on the hybridization concept. This concept avoids the limitations of traditional SI techniques by combining the strengths of different techniques, and this produces new SI techniques that are better than traditional ones. In the current study, we propose an improved ANFIS model based on a modified flower pollination algorithm (FPA) using the salp swarm algorithm (SSA). The FPA is an optimization algorithm proposed by Yang [38] , which was inspired by the flow pollination process of the flowering plants. The FPA was employed in various optimization applications, for example to estimate solar PV parameter [39, 40] , solving sudoku puzzles [41] , feature selection [42] , antenna design [43] , and other applications [44] [45] [46] [47] . Moreover, SSA is also an optimization algorithm proposed by Mirjalili et al. [48] inspired by the behavior of salp chains. In recent years, the SSA was utilized to solve different optimization problems, such as feature selection [49, 50] , data classification [51] , image segmentation [52] , and others [53, 54] . The proposed method called FPASSA is a hybrid of FPA and SSA, in which the SSA is applied as a local search method for FPA. The proposed FPASSA starts by receiving the historical COVID-19 dataset. Then a set of solutions is generated where each of them represents the value for the parameters of the ANFIS model. Then the quality of each solution is calculated using the fitness value, and the solution that has the best fitness value is chosen to represent the best solution. Then the probability of each solution is computed. Then the current solution will be updated, either using global or local strategy in FPA. However, in the case of local strategy, the operators of SSA or FPA will be used according to the probability of the fitness value for each solution. The process of updating the solutions is repeated until reaching the stop condition, and the best parameter configurations are used to forecast the number of confirmed cases of COVID-19. 
The main contribution points of the current study are as follows:\n\n1. We propose an efficient forecasting model to forecast the confirmed cases of the COVID-19 in China for the upcoming ten days based on previously confirmed cases. An improved ANFIS model is proposed using a modified FPA algorithm, using SSA. We compare the proposed model with the original ANFIS and existing modified ANFIS models, such as PSO, GA, ABC, and FPA. The rest of this study is organized as follows. The preliminaries of ANFIS, FPA, and SSA are described in Section 2. Section 3 presents the proposed FPASSA, and Section 4 presents the experimental setup and results. We conclude this study in Section 5. The principles of the ANFIS are given in this section. The ANFIS model links the fuzzy logic and neural networks [22] . It generates a mapping between the input and output by applying IF-THEN rules (it is also called Takagi-Sugeno inference model). Figure 1 illustrates the ANFIS model where, y and x define the inputs to Layer 1 whereas, O 1i is its output of node i that is computed as follows:\n\nwhere µ denotes the generalized Gaussian membership functions. A i and B i define the membership values of µ. α i and ρ i denote the premise parameters set. The output of Layer 2 (it is also known as the firing strength of a rule) is calculated as follows:\n\nMeanwhile, the output of Layer 3 (it is also known as the normalized firing strength) is calculated as follows:\n\nThe output of Layer 4 (it is also known as an adaptive node) is calculated as follows:\n\nwhere r i , q i , and p i define the consequent parameters of the node i. Layer 5 contains only one node; its output is computed as: \n\nFlower Pollination Algorithm is an optimization method proposed by Yang [38] . It simulates the transfer of flowers' pollen by pollinators in nature. This algorithm utilizes the two types of pollination (i.e., self-pollination and cross-pollination). In self-pollination, the pollination occurs with no pollinators, whereas, in cross-pollination, the pollens are moved between different plants. In more detail, the self-pollination can be represented as a local pollination while the cross-pollination can be called global pollination. The global pollination or cross-pollination can be mathematically formed as follows:\n\nwhere x t i defines the pollen i at iteration t. L denotes the pollination's strength or the step size. F * is the target position or best solution. In some cases, insects can fly with different distance steps for a long space; therefore, Levy fly distribution is applied to simulate this movement. where λ = 1.5. Γ(λ) denotes the gamma function. This distribution is available for large steps s > 0. The self-pollination or local pollination can be mathematically formed as follows:\n\nwhere x t i and x k i represent pollens from different flower in the same plant. in the range [0,1] The process of pollination can be done using cross-pollination or self-pollination. Therefore, the random variable p, in the range [0, 1], is used to determine this process. SSA is an optimization technique introduced by [48] . It simulates the Salps' behavior in nature. This behavior is called salp chain. The mathematical model of SSA begins by splinting its population into a leader group and followers group. The leader is the front salp, whereas, the followers are the other salps. The search space is determined in n-dimensions with n variables. Equation (10) works to update the salps' positions. 
where x 1 j denotes the leader's position in j-th dimension. F j is the target position. ub j and lb j represent the max and min bounds, respectively. c 2 and c 3 denote random numbers in [0, 1]. c 1 is an important parameter; it balances between the exploration and exploitation phases. It is computed as follows:\n\nwhere the current loop number is t and the max loop' number is t max . Then, the followers' position is updated as follows:\n\nwhere x i j defines the i-th position of the follower in j-th dimension. i > 1. This section explains the proposed FPASSA-ANFIS method. It is a time series method for forecasting the confirmed cases of the COVID-19, as given in Figure 2 . The FPASSA-ANFIS utilizes the improved FPA to train the ANFIS model by optimizing its parameters. The FPASSA-ANFIS contains five layers as the classic ANFIS model. Layer 1 contains the input variables (the historical COVID-19 confirmed cases). Whereas Layer 5 produces the forecasted values. In the learning phase, the FPASSA is used to select the best weights between Layer 4 and Layer 5. The FPASSA-ANFIS starts by formatting the input data in a time series form. In our case, the autocorrelation function (ACF) was considered. ACF is one of the methods applied to find patterns in the data; it presents information about the correlation between points separated by various time lags. Therefore, in this paper, the variables with ACF greater than 0.2 are considered i.e., 5-lags. Besides, the training data contains 75% of the dataset, whereas the testing data contains 25% of them. The number of clusters is defined by the fuzzy c-mean (FCM) method to construct the ANFIS model. The parameters of the ANFIS model are prepared by the FPASSA algorithm. In the training phase, the calculation error (as in Equation (13)) between the real data and the predicted data is used to evaluate the parameters' quality. where T is the real data, and P is the predicted data. N s is the sample length. The smaller values of the objective function indicate good ANFIS's parameter. On the other hand, the updating phase of the followers' positions in the SSA algorithm is applied to improve the global pollination phase in the FPA algorithm. In this improvement, there is a random variable (r) used to switch between both phases. If r > 0.5, then the operators of the SSA is used; otherwise, the operators of the FPA are used. In general, The FPASSA starts by constructing the population (X); afterward, the objective function is calculated for each solution. The solution with the lowest error value is saved to the next iteration. This sequence is repeated until meeting the stop condition, which in this paper, is the maximum number of iterations. Then the best solution is passed to train the parameters of the ANFIS model. After finishing the training phase, the testing phase is started with the best solution to compute the final output. The performance of the proposed method is evaluated by comparing the real data with the predicted data using the performance measures. Finally, the FPASSA produces a foretasted value for confirmed cases of COVID-19 in China in the next day. The steps of the proposed FPASSA are presented in Algorithm 1. Input: Historical COVID-19 dataset, size of population N, total number of iterations t max . Divide the data into training and testing sets. Using Fuzzy c-mean method to determine the number of membership functions. Constructing the ANFIS network. Set the initial value for N solutions (X). 
Return the best solution that represents the best configuration for ANFIS. Apply the testing set to the best ANFIS model. Forecasting the COVID-19 for the next ten days. This section presents the description of the used dataset, the performance measures, the parameter setting for all methods, the experiment results, and discussions. The main dataset of this study is COVID-19 dataset. It was collected from the WHO website (https: //www.who.int/emergencies/diseases/novel-coronavirus-2019/situation-reports/). It contains the daily confirmed cases in China from 21 January 2020 to 18 February 2020, as shown in Table 1 . We used 75% from the dataset to train the model while the rest is used to test it. Moreover, we evaluated the performance of the proposed method using two datasets of weekly influenza confirmed cases. The first one is called DS1; it was collected from the Centers for Disease Control and Prevention (CDC) (https://www.cdc.gov/flu/weekly/). It starts from week number 40 in 2015 and continues until week number 6 in 2020. Whereas, the second one is called DS2. It was collected from the WHO website (https://www.who.int/influenza). It contains the data of weekly influenza confirmed cases in China from week number 1 in 2016 to week number 8 in 2020. The quality of the proposed method is evaluated using a set of performance metrics as follows:\n\n• Root Mean Square Error (RMSE):\n\nwhere Yp and Y are the predicted and original values, respectively. • Mean Absolute Error (MAE):\n\n• Mean Absolute Percentage Error (MAPE):\n\n• Root Mean Squared Relative Error (RMSRE):\n\nN s represents the sample size of the data.\nQuestion: What is the main idea behind the proposed model?", "role": "user" }, { "content": "to improve the performance of ANFIS by determining the parameters of ANFIS using FPASSA.", "role": "assistant" } ]
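The FPASSA record above names four evaluation metrics, but their displayed formulas were lost in extraction; only the bullet labels and the clause defining Yp and Y survive. As an editorial illustration, the sketch below uses the conventional definitions of those metrics rather than anything copied from the paper, and the variable and function names are ours:

```python
import numpy as np

def forecast_metrics(y_true, y_pred):
    """Conventional definitions of RMSE, MAE, MAPE and RMSRE.

    Assumption: y_true contains no zeros, since MAPE and RMSRE
    divide by the observed values.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    return {
        "RMSE": float(np.sqrt(np.mean(err ** 2))),               # root mean square error
        "MAE": float(np.mean(np.abs(err))),                      # mean absolute error
        "MAPE": float(np.mean(np.abs(err) / y_true)) * 100.0,    # mean absolute percentage error
        "RMSRE": float(np.sqrt(np.mean((err / y_true) ** 2))),   # root mean squared relative error
    }
```

On the paper's own split, these would be computed over the held-out 25% of each series.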
science.chemsum_single_document_summarization
science.chemsum_single_document_summarization.12256
[ { "content": "Write an abstract for this chemistry paper.\n\nTitle:\nTwo-Carbon Ring Expansion of 1-indanones via Insertion of Ethylene into Carbon\\xe2\\x80\\x93Carbon Bonds\n\nPaper sections:\n\n<p>Ring expansion reactions of carbonyl compounds, such as Baeyer–Villiger oxidation, Beckmann rearrangement and various carbon-insertion reactions, are highly valuable transformations and have been frequently utilized in complex molecule syntheses.1 With a few exceptions, most direct ring expansion reactions could add only a one-atom unit to the existing structures. Compared to the well-established one- carbon homologation methods2 (Scheme 1a), limited approaches are known for direct two-carbon ring expansions of ketones.3 After an accidental discovery in 1974, Proctor elucidated that carbocyclic β-ketoesters could undergo a [2+2] cycloaddition with an activated alkyne, followed by an accelerated retro-4π cyclization, to give two-carbon extended products4 (Scheme 1b). Later, Kuninobu and Takai discovered a similar but efficient rhenium-catalyzed reaction for insertion of terminal alkynes with β-ketoesters,5 though the reaction was not suitable for preparing 7-membered rings. As a mechanistically related transformation, Caubere,6 and Stoltz,7 reported an intriguing two-carbon ring expansion method via benzyne insertion (Scheme 1b).</p><p>Alternatively, it could be attractive to directly insert a common unsaturated unit into a cyclic ketone through transition metal-catalyzed C–C activation,8–10 which should offer a straightforward and byproduct-free approach for multiatom ring expansions. The reaction involves oxidative addition of C–C bond to a low-valent transition metal,8b followed by 2π-insertion to give an enlarged metallocycle and C–C reductive elimination. Such a transformation, also known as a \"cut-and-sew\" process,9b has been extensively demonstrated in strained three- and four-membered ring systems (Scheme 1c).11 However, for unstrained systems,10 the scope of the process has been primarily limited to the use of polar C–CN bonds12 or some special intramolecular reactions.13 In addition, ethylene, as the most highly produced organic compound, may serve as an appealing two-carbon coupling partner; to the best of our knowledge, the \"cut-and-sew\" reaction using ethylene as a 2π unit has been elusive for either strained or unstrained systems. Moreover, the preference to break the stronger aryl–carbonyl bond (e.g., in 1-indanones), enabled by transition-metal catalysts, could offer complementary selectivity to the conventional1 or radical-mediated C–C cleavage reactions.14 Herein, we describe our preliminary development of a Rh-catalyzed two-carbon ring expansion of 1-indanones via insertion of ethylene into C–C bonds (Scheme 1d).</p><p>To explore the proposed ethylene-insertion reaction, unsubstituted 1-indanone (1a) was used as the model substrate, and the Jun's ketimine directing mode8d,10b was employed for C–C activation. The reaction parameters, including different aminopyridines (serving as the temporary directing group), ligands, solvents, additives, temperature and pressure of ethylene, were carefully optimized (see Table S1). 
Ultimately, the desired benzocycloheptenone product 1b was obtained in 76% yield from 1-indanone (1a) and ethylene gas (100 psi) in the presence of 5 mol % [Rh(C2H4)2Cl]2, 10 mol % 1,3- bis(2,4,6-trimethylphenyl)imidazol-2-ylidene (IMes), 20 mol % p-toluenesulfonic acid monohydrate (TsOH H2O), 100 mol % 2-amino-3-picoline (DG-1) and 100 mol % H2O in tetrahy- drofuran (THF) (Table S1, entry 1). The only observable side product was the ketone α-C–H insertion product,15 2-ethyl-1- indanone (1c), which was formed in 8% yield. Interestingly, in the absence of the strong σ-donating IMes ligand, the reaction still afforded the desired product 1b in 60% yield, but gave a poorer selectivity with the α-alkylation product formed in 15% yield (Table S1, entry 2). The Rh catalyst, TsOH H2O and the DG-1 are all critical for this transformation, and no desired product can be produced without any of them (Table S1, entries 3–5). When decreasing DG-1 to 30 and 50 mol %, the seven-membered ring product 1b could still be obtained in 54% and 69% yield, respectively (Table S1, entries 6 and 7), suggesting that DG-1 exhibits some catalytic activity. Other temporary directing groups were found either less efficient or less selective (Table S1, entries 8–11), but it is worth noting that simple 2- aminopyridine DG-2 favors forming the ketone α-alkylation product (for more control experiments, see Supporting Information, Section 3).</p><p>With the optimized conditions in hand, the substrate scope of this ethylene-insertion reaction was then explored (Chart 1). First, C6-substituted 1-indanone substrates were tested. The electronic property of the 6-substituent only had a marginal influence on this reaction, as 1-indanones bearing either electron-donating (2a–7a) or -withdrawing groups (8a–15a) all gave comparable results to the model substrate 1a. A series of functional groups, including free phenol (6a), sulfonamide (7a), chloride (10a), ester (14a), methyl ketone (15a), silyl (16a) and free hydroxyl group (17a), are tolerated. Besides, substrates containing aryl bromide (11a) and boronate (18a), which are generally reactive moieties in transition-metal catalysis, still afforded the desired products in moderate yields. Notably, for the substrate that contains two ketone carbonyls (15a), C–C activation occurred exclusively at the indanone site. 1-Indanones bearing substituents at the 4- or 5-position exhibited similar reactivity, yielding the corresponding benzocycloheptenones in 52—75% yields (19a–25a). As expected, substitutions at the 7- position resulted in lower reactivity due to the increased steric hindrance around the carbonyl functionality (26a). In addition, several disubstituted 1-indanones (27a–31a) or naphthyl-fused cyclopentanone (32a) proved to be competent substrates, affording the desired seven-membered ring products in moderate to good yields.</p><p>After examining the steric and electronic influence of the arene part on reactivity, the substitution effect at the aliphatic positions of 1-indanones was next investigated. 3-Methyl-1- indanone (33a) showed substantially reduced reactivity under the standard conditions; however, the yield could be improved in the absence of the IMes ligand with a reduced amount of water (50 mol %). Under these new reaction conditions, 3- phenyl 1-indanone (34a) afforded the desired product in 42% yield. 
Gratifyingly, the reaction efficiency was significantly improved when substrates containing an additional substituent on the benzene ring (35a–40a), though the exact reason is unclear. For example, 6-trifluoromethyl-3-methyl-1-indanone (37a) produced the desired product (37b) in 90% yield. Besides methyl and phenyl groups, other alkyl substituents at the 3- position were also tolerated (41b–43b). Unsurprisingly, substitution at the α-position (C2) of 1-indanones shut down the reactivity because the steric congestion around the ketone would inhibit forming the imine intermediate. Finally, this reaction can be applied to natural product-derived or tethered indanones. Indanone 44a with an ester-linked cholesterol and 45a with an ether-linked androsterone smoothly participated in this two-carbon ring expansion reaction. Similarly, starting from estrone-fused cyclopentanone 46a, a seven-membered ketone moiety can be efficiently introduced to give a unique 7–6–6–6–5 pentacyclic structure, which was unambiguously confirmed by X-ray crystallography.</p><p>From a practical viewpoint, the limits of the reaction condition, i.e., the lowest catalyst loading/temperature that could still afford good synthetic efficiency, were probed. To our delight, when decreasing the loading of the rhodium/IMes from 10 mol % to 3 mol % and decreasing DG-1 from 100 to 50 mol %, the reaction still worked well to give 85% conversion and 65% yield of product 9b (Scheme 2a, entry 2 vs entry 1). Using 5 mol % rhodium/IMes and 50 mol % DG-1, the reaction efficiency almost reached to the level of the original conditions (Scheme 2a, entry 3). Besides, when running at a lower temperature (130 °C), the reaction still proceeded smoothly even with 5 mol % rhodium/ligand (Scheme 2a, entry 4). The reaction is also scalable. On gram scales, good yields could still be obtained with 1-indanones bearing either an electron-donating or -with- drawing group, and DG-1 could be easily recycled (Scheme 2b). Moreover, synthesis of enantiomerically enriched 5-substituted benzocycloheptanone (Scheme 2c) could be achieved using chiral 1-indanone R-35a that was prepared in three steps from commercially available arylboronic acid 47 and α,β-unsaturated ester 48 via asymmetric conjugate addition.16 The slight erosion of enantioselectivity was likely due to the unproductive C–C activation at the C1–C2 position, which led to reversible β- hydrogen elimination.17</p><p>Benzocycloheptenones have been frequently used in the synthesis of bioactive compounds that contain seven-membered rings; however, the conventional approaches for preparing benzocycloheptenones are often inefficient18–20 (Scheme 3). Thus, this two-carbon homologation approach could contribute to shortening the syntheses of those complex pharmaceutical agents. For example, the trifluoromethyl-substituted benzocy- cloheptanone (8b), prepared in a single step using this method, was the key intermediate in the synthesis of antiobesity agent 50 (Scheme 3a). As a comparison, the prior approach required five steps from 2-iodo-4-trifluoromethyl-aniline 51.18 In the second case, CEP-28122 is a highly potent and selective inhibitor of ALIK (anaplastic lymphoma kinase), showing promising antitumor activity in human cancers.19 One key structural motif is the benzocycloheptene moiety, which was synthesized from methoxyl-substituted benzocycloheptenone 24b. 
The previous synthesis of 24b used three steps from 1-methoxy- 2,3-dimethylbenzene 52 with a 33% overall yield.19 Now, 24b can be prepared straightforwardly via the \"cut-and-sew\" process from commercially available 4-methoxy-1-indanone 24a (Scheme 3b). The third example involves the synthesis of amine 53, which is a NMDA (N-methyl-D-aspartate) receptor for the potential treatment of various neurological disorders.20 The existing route prepared the key intermediate 10b in three steps from unsubstituted benzocycloheptenone 1b that is either very expensive or requires an additional two or three steps for preparation. In addition, the electrophilic aromatic substitution used in this synthetic route exhibited poor site-selectivity, leading to a low overall yield. Analogously, through the two- carbon homologation, compound 10b was made available in one-step from relatively inexpensive 6-chloro-1-indanone 10a (Scheme 3c). Moreover, simple transformations of the benzocycloheptanone moiety could afford a range of synthetic-cally useful scaffolds, and here symmetrical benzocycloheptenone 1b was used to avoid forming regioisomers. For instance, tetracyclic compound 54 was afforded in 88% yield via Fischer indole synthesis. Conjugated seven-membered enone 55 can be prepared in a good yield via ketone desaturation.21 Treatment with sodium azide under different conditions delivered either eight-membered lactam 56 (77% yield) or tetrazole-fused 6-8-5 tricycle 57 (75% yield), in which the structure of 57 was unambiguously confirmed by X-ray crystallography (Scheme 3d).</p><p>In summary, we disclose a two-carbon ring expansion method that inserts ethylene into relatively unstrained C–C bonds in 1- indanones, which offers a straightforward but strategically distinct approach for preparing benzocycloheptenones. The reaction is chemoselective, scalable and redox-neutral, which could be used to simplify the syntheses of benzocycloheptene-derived bioactive compounds. Efforts on extending the reaction scope to other cyclic ketones and unsaturated coupling partners,22 as well as detailed mechanistic studies, are ongoing.</p>", "role": "user" }, { "content": "A rhodium-catalyzed direct insertion of ethylene into a relatively unstrained carbon\\xe2\\x80\\x93carbon bond in 1-indanones is reported, which provides a two-carbon ring expansion strategy for preparing seven-membered cyclic ketones. As many 1-indanones are commercially available and ethylene is inexpensive, this strategy simplifies synthesis of benzocycloheptenones that are valuable synthetic intermediates for bioactive compounds but challenging to prepare otherwise. In addition, the reaction is byproduct-free, redox neutral, and tolerant of a wide range of functional groups, which may have implications on unconventional strategic bond disconnec-tions for preparing complex cyclic molecules.", "role": "assistant" } ]
science.chemprot_ner
science.chemprot_ner.236
[ { "content": "Extract entities of these types from the text: [\"Chemical\", \"Protein\"].\n\nOutput a JSON object with entity types as keys and lists of extracted entities as values.\n\nText:\nDesign, synthesis, and structure-activity relationship studies of tryptanthrins as antitubercular agents.\nThe natural product tryptanthrin (1a) represents a potential lead for new tuberculosis (TB) drugs since tryptanthrin and its synthetic analogues possess potent in vitro activity against Mycobacterium tuberculosis (Mtb). However, in spite of their in vitro activity, none of these agents have been shown to be efficacious in vivo against animal models of TB. Described herein are syntheses of new tryptanthrin analogues together with a systematic investigation of their in vitro antitubercular activity and ADME properties followed by pharmacokinetic characterization in rodents for the most promising compounds. Those with the best potency and oral bioavailability were progressed to evaluations of efficacy against acute murine TB. The work aimed to prove the concept that this compound class can limit growth of Mtb during infection as well as to establish the SAR for in vitro activity against Mtb and the range of in vitro ADME parameters for this class of natural products. Novel C-11-deoxy (5b) and A-ring-saturated (6) tryptanthrin analogues were discovered that maintained activity against Mtb and showed improved solubility compared to tryptanthrin as well as evidence of oral bioavailability in rodents. However, neither 5b nor 6 demonstrated efficacy against acute murine TB following administration at doses up to 400 mg/kg daily for 4 weeks. Although 5b and 6 failed to inhibit replication or kill Mtb in vivo, they illuminate a path to new structural variations of the tryptanthrin scaffold that may maximize the potential of this class of compounds against TB.", "role": "user" }, { "content": "{\"Chemical\": [\"tryptanthrins\", \"tryptanthrin\"], \"Protein\": []}", "role": "assistant" } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.8528
[ { "content": "Classify the topics of this COVID-19 literature paragraph by selecting from these categories:\n\n- Epidemic Forecasting: Predictions about COVID-19 spread\n- Treatment: Therapeutic methods or medication\n- Prevention: Measures to prevent spread\n- Mechanism: Biological mechanisms of the virus\n- Case Report: Report of a COVID-19 case\n- Transmission: How COVID-19 is transmitted\n- Diagnosis: Diagnosing COVID-19 in individuals\n\nYour output should be a valid JSON array with topics in double quotes and separated by commas. No additional text should be included.\n\nArticle:\n\nBACKGROUND: Reports show that AKI is a common complication of severe coronavirus disease 2019 (COVID-19) in hospitalized patients. Studies have also observed proteinuria and microscopic hematuria in such patients. Although a recent autopsy series of patients who died with severe COVID-19 in China found acute tubular necrosis in the kidney, a few patient reports have also described collapsing glomerulopathy in COVID-19. METHODS: We evaluated biopsied kidney samples from ten patients at our institution who had COVID-19 and clinical features of AKI, including proteinuria with or without hematuria. We documented clinical features, pathologic findings, and outcomes. RESULTS: Our analysis included ten patients who underwent kidney biopsy (mean age: 65 years); five patients were black, three were Hispanic, and two were white. All patients had proteinuria. Eight patients had severe AKI, necessitating RRT. All biopsy samples showed varying degrees of acute tubular necrosis, and one patient had associated widespread myoglobin casts. In addition, two patients had findings of thrombotic microangiopathy, one had pauci-immune crescentic GN, and another had global as well as segmental glomerulosclerosis with features of healed collapsing glomerulopathy. Interestingly, although the patients had confirmed severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection by RT-PCR, immunohistochemical staining of kidney biopsy samples for SARS-CoV-2 was negative in all ten patients. Also, ultrastructural examination by electron microscopy showed no evidence of viral particles in the biopsy samples. CONCLUSIONS: The most common finding in our kidney biopsy samples from ten hospitalized patients with AKI and COVID-19 was acute tubular necrosis. There was no evidence of SARS-CoV-2 in the biopsied kidney tissue.", "role": "user" }, { "content": "[\"Case Report\"]", "role": "assistant" } ]
science.chia_ner
science.chia_ner.1154
[ { "content": "Extract all unique entities of these types from the following clinical trial criteria: [\"Negation\", \"Observation\", \"ReferencePoint\", \"Procedure\", \"Drug\", \"Measurement\", \"Value\", \"Person\", \"Multiplier\", \"Device\", \"Qualifier\", \"Mood\", \"Temporal\", \"Condition\", \"Visit\", \"Scope\"].\n\nOutput a JSON object with entity types as keys and lists of extracted entities as values. If no entities are found for a type, use [].\n\nText:\n\nAge less than 18 years\nSignificant arterial disease (Ankle Brachial Pressure Index <0•9 or evidence on Arterial Duplex)\nAcute Deep Vein Thrombosis\nPatient unable or unwilling to have high compression (30mmHg minimum)\nPatients with dexterity insufficiency of hands\nPatients with peripheral neuropathy\nLeg ulcers of another underlying cause\nLeg ulcers of greater than 1 year duration\nPatients unable or unwilling to provide written, informed consent", "role": "user" }, { "content": "{\"Condition\": [\"dexterity insufficiency of hands\", \"peripheral neuropathy\", \"Acute Deep Vein Thrombosis\", \"underlying cause\", \"Leg ulcers\", \"arterial disease\"], \"Device\": [], \"Drug\": [], \"Measurement\": [\"Ankle Brachial Pressure Index\"], \"Mood\": [], \"Multiplier\": [], \"Negation\": [], \"Observation\": [], \"Person\": [\"Age\"], \"Procedure\": [\"Arterial Duplex\"], \"Qualifier\": [\"another\", \"Significant\"], \"ReferencePoint\": [], \"Scope\": [], \"Temporal\": [\"greater than 1 year duration\"], \"Value\": [\"\\u003c0\\u20229\", \"less than 18 year\"], \"Visit\": []}", "role": "assistant" } ]
science.scientific_lay_summarisation_plos_single_doc_summ
science.scientific_lay_summarisation_plos_single_doc_summ.23535
[ { "content": "Summarize the article in an uncomplicated manner.\n\nTitle: Mediated Plastid RNA Editing in Plant Immunity\nArticle:\nAbstract:\nPlant regulatory circuits coordinating nuclear and plastid gene expression have evolved in response to external stimuli. RNA editing is one of such control mechanisms. We determined the Arabidopsis nuclear-encoded homeodomain-containing protein OCP3 is incorporated into the chloroplast, and contributes to control over the extent of ndhB transcript editing. ndhB encodes the B subunit of the chloroplast NADH dehydrogenase-like complex( NDH) involved in cyclic electron flow( CEF) around photosystem I. In ocp3 mutant strains, ndhB editing efficiency decays, CEF is impaired and disease resistance to fungal pathogens substantially enhanced, a process recapitulated in plants defective in editing plastid RNAs encoding NDH complex subunits due to mutations in previously described nuclear-encoded pentatricopeptide-related proteins( i. e. CRR21, CRR2). Furthermore, we observed that following a pathogenic challenge, wild type plants respond with editing inhibition of ndhB transcript. In parallel, rapid destabilization of the plastidial NDH complex is also observed in the plant following perception of a pathogenic cue. Therefore, NDH complex activity and plant immunity appear as interlinked processes.\nIntroduction:\nPlastid function relies on nuclear gene expression, and the import of nuclear gene products into plastids[1]. In fact, the plastid genome of current land plants encodes 75–80 proteins[2], whereas nuclear-encoded chloroplast proteins are estimated between 3500 and 4000[3]. Current data approximates several hundred nuclear-encoded proteins are involved in post-transcriptional regulation of plastid gene expression[4],[5]. One such regulation is mediated through RNA editing, a post-transcriptional process that alters specific cytidine residues to uridine( C-to-U) in different plastid RNAs[6]. Thirty-four sites are edited in 18 transcripts of Arabidopsis plastids[7]. Among the nuclear-encoded proteins regulating RNA editing, the pentatricopeptide repeat( PPR) protein family has attracted notable interest[8]. This family comprises 450 members defined by a tandem array of PPR motifs. PPRs are also involved in almost all stages of plastid gene expression, including splicing, RNA cleavage, translation, and RNA stabilization[9]. The pioneer work of Kotera et al.[10],[11] revealed the Arabidopsis PPR protein CHLORORESPIRATORY REDUCTION4( CRR4) acts as a site-specific recognition factor for RNA editing of the site 1( ndhD-1) in the plastid ndhD transcript. ndhD encodes the D subunit of the chloroplast NADH dehydrogenase-like complex( NDH), involved in cyclic electron flow( CEF) around photosystem I( PSI)[11],[12]. Consequently, crr4 mutants are defective in ndhD transcript editing at the ndhD-1 site, and CEF is compromised[10],[11]. Subsequently, the number of PPR-encoding genes participating in editing control in the chloroplast has enlarged[9]. Although empirical evidence has been demonstrated for only a few PPR proteins, it is currently accepted that PPR proteins act as sequence-specific RNA binding adaptors, and hypothetical inferences suggest PPRs recruit effector enzymes or proteins to the target RNAs[13],[14]. 
While the mechanism by which specific PPR proteins recognize specific editing sites is becoming understood, questions still remain to be completely solved, including the characterization of the molecular components that make up the RNA editing apparatus( editosome) or the still unsolved identification of the editing enzyme itself. Therefore, identification of additional components modulating editing activities in plastids, and ascertaining how control of the post-transcriptional mechanism of chloroplast function influences other biological processes, in particular immune responses, is of great importance. Despite the critical role of chloroplasts as a site for production of integral mediators of plant immunity such as salicylic acid, jasmonic acid, and ABA[15], the molecular link between chloroplasts and the nuclear-encoded immune system remains largely unexplored. MEcPP, a plastidial metabolite previously shown to be involved in activating plant immunity in Arabidopsis[16], has been shown to mediate a retrograde signaling regulating expression of nuclear stress-response genes[17]. Nomura et al.[18] recently reported the chloroplast calcium-sensing receptor( CAS), involved in transducing changes in cytosol Ca2+ concentrations into chloroplast responses, regulates plant immunity in Arabidopsis, possibly through chloroplast-derived ROS signals( i. e. 1O2 and H2O2;[19]). These ROS signals may function through a retrograde signaling pathway to activate the expression of nuclear genes. Terashima et al.[20] demonstrated CAS is a crucial component of the machinery driving CEF around photosystem I in Chlamydomonas reinhardtii, and suggested CAS mediates changes in CEF activity; however, the mechanism remains unresolved. In contrast to linear photosynthetic electron flow, where light drives ATP and NADPH synthesis, during CEF light only drives ATP production by cycling electrons around PSI and Cyt b6f complexes, providing the molecular basis for this major energetic switch. Concurrently, CEF leads to the reduction of the plastoquinone pool, thereby increasing the frequency of charge recombination events in PSII and, as a result, altering the chloroplast redox status[21]. Consequently, CAS and Ca2+ via CEF alter ROS homeostasis, and may activate ROS-mediated retrograde signaling, which in a plant-pathogen interaction may have an impact on the outcome of plant disease resistance. CEF is also interrelated with nonphotochemical quenching( NPQ), which protects plants against damage resulting from ROS formation[22]. Göhre et al.[23] recently reported defense activation during PAMP-triggered immunity( PTI) in Arabidopsis resulted in rapid NPQ decrease, and NPQ also influenced immune responses, suggesting that NPQ and CEF are integral components regulating plant defense response. Similarly, chloroplast-generated ROS following the recognition of pathogen-derived effectors by plant R proteins resulted in HR-type programmed cell death( PCD), demonstrating chloroplast contribution to effector-triggered immunity( ETI)[24]. These lines of evidence emphasize the importance of chloroplasts in plant immunity, and indicate the potential for future discoveries in this area of research. We show the disease resistance regulator OVEREXPRESSOR OF CATIONIC PEROXIDASE3( OCP3) is targeted to the chloroplast, and controls editing efficiency of plastid ndhB transcripts. We also show that NDH activity, and therefore CEF around PSI, is an important control point in plant immunity.
Furthermore, a previously undescribed signaling pathway linking editing control with plant immunity via CEF activity modulation in the chloroplast was elucidated in this study.\nDiscussion:\nThis study provides new insights into the control of disease resistance in plants, and reinforces the importance of the chloroplasts in plant immunity. Our data identified OCP3 targeted to chloroplast, a finding that conceptually changed the previous assumption that OCP3 could function as a nuclear transcription factor. Furthermore, confocal microscopy revealed that OCP3 accumulated in plastids matching several PPR proteins. Moreover, OCP3 was found to be closely co-expressed with a cluster of 9 genes encoding PPR proteins including CRR21. CRR21 is responsible for site 2 editing at ndhD transcript, and ndhD encodes the D subunit of the chloroplast NDH complex[13], a crucial component of the CEF machinery around PSI[11]. In crr21 plants, NDH complex activity is impaired and CEF activity compromised[13], supporting a predominant post-transcriptional level of control. All these observations prompted us to hypothesize that OCP3 was involved in RNA editing in plastids. Therefore, we performed a comparative systematic study of the editing status of chloroplast transcripts between Col-0 and different ocp3 mutants. This study revealed that OCP3-defective plants carry specific editing defects at ndhB-6, ndhB-4, ndhB-3, and ndhB-2 sites. The observation that OCP3 associates in vivo with the ndhB transcript, as revealed by RIP assays, reinforce the consideration that OCP3 contributes to control over the extent of ndhB transcript editing. However, OCP3 appears not to carry any structural motif resembling the conserved RNA recognition motif( RRM), not even the motifs characteristic of other proteins functioning as trans-factors essential for editing, such as those present in the large subclasses of the pentatricopeptide repeat( PPR)-containing family proteins[13],[14]. This may suggest that the association of OCP3 with the ndhB RNA molecule may be likely indirect, presumably though the interaction with canonical RNA binding proteins recognizing appropriate cis-elements present in the ndhB RNA molecule such as those RNA-binding proteins mentioned above. Therefore, OCP3 may serve a regulatory role on the editing apparatus by regulating and/or adjusting the editing extent of the ndhB transcript according to external environmental cues. This appears to be the case also for other described editing accessory proteins such as the recently identified multiple organellar RNA editing factor( MORF) and members of the RNA-editing interacting protein( RIP) family[39],[40]. ndhB encodes the NDH complex B subunit, therefore we next hypothesized that the absence of a functional OCP3 protein should result in a defective NDH complex. Results indicated that the observed alterations in ndhB editing in ocp3 plants affected NDH activity but not NDH complex stability. This is a feature also found in other editing-related mutants( i. e. crr21). However, in other cases, the defective gene results in lack of NDH complex accumulation, as seen when editing defects impedes translation initiation of NdhD subunit( i. e. crr4,[10]) or when appropriate maturation of ndhB transcript is blocked( i. e. crr2,[35]). The editing defects in ocp3 plants resulted in an inactive NDH complex that compromised normal CEF around PSI. 
We therefore concluded that OCP3 is an integral plastidial factor required for fine-tuning CEF around PSI, and this control is exerted post-transcriptionally through the regulation of ndhB transcript editing. This finding has important consequences as it represents the first evidence interfacing plant immunity, RNA editing and CEF. Therefore, one can propose that when OCP3 fails, as occurs in ocp3 plants, then accurate ndhB transcript editing is impeded, and in turn NDH complex is altered and eventually CEF inhibited. Since in chloroplast the NDH complex is considered to alleviate various oxidative stresses[41],[42], it can be speculated that a defective CEF pathway could generate ROS locally that eventually may resulted in disease resistance activation. Since ocp3 plants carry constitutive enhanced production of ROS species, particularly H2O2, and also show constitutive expression of ROS-inducible genes[25],[28], and OCP3 gene expression was rapidly down regulated following fungal attack[25], we reasoned that editing will fail, and consequently NDH impaired, when a plant encounters a pathogen. This led us to find that Arabidopsis respond to attempted P. cucumerina infections by activating a rapid mechanism of editing inhibition which affected the 8 major editing sites in ndhB, therefore including those requiring OCP3, plus an additional editing site at ndhD. This suggests involvement of other factors functioning as pathogen-sensitive regulators of the editing process. In fact, a similar cause-effect relationship was observed in crr2, crr21 and ppra plants, which exhibited the same response as ocp3 plants when inoculated with P. cucumerina. Their characteristic heightened disease resistance was accompanied by increased callose deposition in response to fungal infection, evoking activation of a mechanism for priming of callose deposition in these mutants similar to that previously discovered in ocp3 plants[26]. In wild type plants, in addition to the pathogen-induced editing inhibition of ndhB transcripts, we observed the NDH complex becoming rapidly destabilized and therefore dismantled, presumably by the action of chloroplast proteases. Therefore, either NDH complex activity and/or stability constitute distinct hallmarks of the plant's defense response to fungal pathogens. Whether or not editing inhibition and NDH complex stability are linked processes or rather represent independent processes remains unknown and is a challenging issue for future research. Furthermore, strong repression of OCP3 and CRR21 gene expression, severe editing inhibition of ndhB and ndhD transcripts, and NDH destabilization were also rapidly triggered by the sole application of chitosan, which functions as a PAMP mimicking fungal structures. Consequently, editing inhibition and dismantling of the NDH complex appeared definitively engaged during activation of innate immunity. Therefore, when appropriately and timely activated following pathogen perception, the mechanisms leading to alteration of the NDH complex in the chloroplast should serve to set in motion a signaling process leading to an effective defense response to halt the advance of the pathogen. Cumulatively, these observations reinforced the idea that maintaining NDH complex integrity is pivotal to normal CEF functioning during photosynthesis, however its timely inhibition following pathogen attack is fundamental for plant immunity. 
Therefore, modulation of the NDH complex activity must be under a delicate balance, requiring precise but flexible control. The breadth of our data indicates control occurs at the RNA editing level, a process where the described proteins ultimately serve as sensors regulating the rate of NDH complex activity. Therefore, OCP3, and presumably those PPRs and accessory proteins mediating editing extent of NDH complex subunits, exhibit a Janus-faced function, serving reciprocally as negative regulator of plant immunity and as positive regulator of CEF during oxygenic photosynthesis.", "role": "user" }, { "content": "Plastids originated from cyanobacteria that were incorporated into the eukaryotic cell through an endosymbiotic relationship . During the gradual evolution from endosymbiont to organelle , most genes of the cyanobacterial genome were transferred to the nuclear genome . Therefore , plastid biogenesis and function rely on nuclear gene expression and the import of these gene products into plastids , with the molecular dialogue between these two plant cell compartments therefore needing a precise coordination . Nuclei-to-chloroplast communication , and vice versa , are thus regulated through anterograde and retrograde signaling pathways , respectively . Post-transcriptional RNA editing of plastid RNAs by nuclear encoded regulatory proteins , such as pentatricopeptide repeat ( PPR ) proteins , represents one such mechanism of control . Through the characterization of the nuclear-encoded OCP3 protein , previously found to function as a disease resistance regulator in Arabidopsis , we have discovered a pathogen-sensitive editing-mediated control of the plastidial NDH complex involved in cyclic electron flow ( CEF ) around photosystem I . This led us to find that different PPRs controlling editing extent of transcripts for the plastidial NDH complex are modulated by pathogenic cues . Our results thus represent the first series of evidence indicating engagement of chloroplast RNA editing and chloroplast NDH activity in plant immunity .", "role": "assistant" } ]
science.scientific_lay_summarisation_plos_single_doc_summ
science.scientific_lay_summarisation_plos_single_doc_summ.2263
[ { "content": "Summarize the article in an uncomplicated manner.\n\nTitle: Genome-Wide Association Study and Gene Expression Analysis Identifies CD84 as a Predictor of Response to Etanercept Therapy in Rheumatoid Arthritis\nArticle:\nAbstract:\nAnti-tumor necrosis factor alpha( anti-TNF) biologic therapy is a widely used treatment for rheumatoid arthritis( RA). It is unknown why some RA patients fail to respond adequately to anti-TNF therapy, which limits the development of clinical biomarkers to predict response or new drugs to target refractory cases. To understand the biological basis of response to anti-TNF therapy, we conducted a genome-wide association study( GWAS) meta-analysis of more than 2 million common variants in 2, 706 RA patients from 13 different collections. Patients were treated with one of three anti-TNF medications: etanercept( n = 733), infliximab( n = 894), or adalimumab( n = 1, 071). We identified a SNP( rs6427528) at the 1q23 locus that was associated with change in disease activity score( ΔDAS) in the etanercept subset of patients( P = 8×10−8), but not in the infliximab or adalimumab subsets( P>0. 05). The SNP is predicted to disrupt transcription factor binding site motifs in the 3′ UTR of an immune-related gene, CD84, and the allele associated with better response to etanercept was associated with higher CD84 gene expression in peripheral blood mononuclear cells( P = 1×10−11 in 228 non-RA patients and P = 0. 004 in 132 RA patients). Consistent with the genetic findings, higher CD84 gene expression correlated with lower cross-sectional DAS( P = 0. 02, n = 210) and showed a non-significant trend for better ΔDAS in a subset of RA patients with gene expression data( n = 31, etanercept-treated). A small, multi-ethnic replication showed a non-significant trend towards an association among etanercept-treated RA patients of Portuguese ancestry( n = 139, P = 0. 4), but no association among patients of Japanese ancestry( n = 151, P = 0. 8). Our study demonstrates that an allele associated with response to etanercept therapy is also associated with CD84 gene expression, and further that CD84 expression correlates with disease activity. These findings support a model in which CD84 genotypes and/or expression may serve as a useful biomarker for response to etanercept treatment in RA patients of European ancestry.\nIntroduction:\nRheumatoid arthritis( RA) is an autoimmune disease characterized by chronic inflammation of the synovial lining of the joint[1]. If left untreated, outcome varies from self-limited disease in a small proportion of RA patients to severe disease resulting in profound structural damage, excess morbidity and disability, and early mortality[2]. In the last twenty years, disease activity has been controlled in many patients by treatment with disease-modifying anti-rheumatic drugs( DMARDs), such as methotrexate, and the more recently developed biologic DMARDs that block inflammatory cytokines such as tumor necrosis factor-alpha( TNFa)[3]. Unfortunately, these medications are not effective in all RA patients, with up to one-third of patients failing to respond to any single DMARD[1]–[3]. Moreover, the biological mechanisms underlying treatment failure are unknown, which limits the development of clinical biomarkers to guide DMARD therapy or the development of new drugs to target refractory cases. 
There are two classes of anti-TNF therapy: the TNF receptor fusion protein( etanercept), which acts as a soluble receptor to bind circulating cytokine and prevent TNF from binding to its cell surface receptor, and monoclonal antibodies that bind TNF( adalimumab, infliximab, certolizumab, and golimumab). There are undoubtedly shared mechanisms between the two drug classes( e. g., downstream signaling factors), as illustrated by similar effects on the change in inflammatory cytokines, complement activation, lymphocyte trafficking, and apoptosis[4],[5],[6]. Similarly, there are likely to be different biological factors that influence response: infliximab and adalimumab are approved for treatment of Crohn's disease; infliximab and adalimumab bind to transmembrane TNF on the surface of activated immune cells, whereas etanercept only binds soluble TNF[7]; and etanercept also binds a related molecule, lymphotoxin alpha( LTA), whereas infliximab/adalimumab do not[8]. Pharmacogenetics of response to anti-TNF therapy in RA remains in its early stages, with no single variant reaching an unambiguous level of statistical significance. Candidate gene studies suggest associations of TNFa or TNF receptor alleles, RA risk alleles or other SNPs with response to anti-TNF therapy[9],[10],[11]. Two GWAS in small sample sets( largest was 566 patients) have been performed, which identified loci with suggestive evidence for association[12],[13]. Therefore, GWAS of large sample sizes may yet uncover genetic factors associated with response to anti-TNF therapy in RA, and larger cohorts enable separate analyses of the different types of anti-TNF drugs. Here we report a GWAS of 2, 706 samples with anti-TNF treatment response data collected from an international collaboration, including previously published GWAS data[12],[13]. Our primary outcome measure was the change in disease activity score based on a joint count in 28 joints( DAS28) from baseline to 3–12 months after initiating anti-TNF therapy. Our secondary outcome measure was European League Against Rheumatism( EULAR) responder status[14],[15], where patients are classified as EULAR good responders, moderate responders or non-responders based on follow up DAS28 after treatment and overall change in DAS28. We found a highly significant association for a variant that we also show is also a strong expression quantitative trait locus( eQTL) for the CD84 gene. Our findings suggest that CD84 genotype and/or expression may prove to be a biomarker for etanercept response in RA patients.\nDiscussion:\nHere we present the largest GWAS to date on anti-TNF therapy response in 2, 706 RA patients. We find a significant association at the 1q23/CD84 locus in 733 etanercept treated patients( P = 8×10−8), but not in RA patients treated with drugs that act as a monoclonal antibody to neutralize TNF( infliximab or adalimumab). The allele associated with a larger ΔDAS( i. e., better response) was associated with higher CD84 expression in PBMCs from non-RA patients( P = 1×10−11) and in RA patients( P = 0. 004). We first conducted a GWAS of both categories of anti-TNF drugs( the soluble receptor drug, etanercept, and two monoclonal antibody drugs, infliximab and adalimumab). However, this analysis revealed no strongly associated SNPs. 
When we subset our GWAS by each of the three individual drugs, several SNPs in the 1q23 locus were highly significant in etanercept-treated patients, and SNPs in three other loci( 10p15, 5q35 and 2q12) were associated in infliximab or adalimumab subset analyses. Furthermore, the top SNPs for each analysis( Table S3) showed little correlation across the three anti-TNF drugs. This simple observation suggests that genetic control of treatment response may be different for different drugs. This finding is consistent with the clinical observation that RA patients who fail one anti-TNF drug may still respond to a different anti-TNF drug, albeit at lower rates of response[19]. If confirmed in larger samples and more comprehensive analyses, then this could have major implications for how physicians prescribe these drugs. The most significant finding from our GWAS was a set of equivalent SNPs in LD with each other from the 1q23 locus in etanercept-treated RA patients( Figure 1 and Figure 2A). While the top SNP did not reach genome-wide significance in predicting treatment response, it did reach genome-wide significance as an eQTL in PBMCs( P = 1×10−11; Figure 2A). This finding indicates that the SNP( or another variant in LD with it) is indeed biologically functional in a human tissue that is important in the immune response. Two SNPs, rs10797077 and rs6427528, disrupt transcription factor binding sites, and represent excellent candidates for the causative allele to explain the effect on CD84 expression( Figure 2C). Our findings suggest that CD84 genotype and/or expression could be a biomarker for etanercept treatment response among individuals of European ancestry. The genetic and expression data predict that CD84 expression should be positively associated with treatment response( i. e., higher expression is associated with better response; Figure 3). While we did not observe a significant association between CD84 levels and ΔDAS, we did observe a trend consistent with this prediction( Figure 4). Importantly, we note that power was extremely limited with the small sample sizes for which we had CD84 expression as well as drug response data( n = 31 RA patients treated with etanercept). The CD84 gene is a compelling candidate for immune response, belonging to the CD2 subset of the immunoglobulin superfamily. It has been implicated in T-cell activation and maturation[20]. CD84 localizes to the surface of CD4+ and CD8+ T cells, and acts as a costimulatory molecule for IFN-gamma secretion[21]. CD84 is also expressed in B-cells, monocytes and platelets. CD84 has not been previously implicated in genetic studies of RA risk, disease activity, disease severity, or treatment response. A limitation of our study is the small sample size available for replication( n = 290 etanercept-treated patients), and the lack of replication observed for the top CD84 SNP( rs6427528) among patients of Portuguese and Japanese ancestry. The simplest explanation is that our original observation in the GWAS data represents a false positive association. However, the eQTL and gene expression data argue against this possibility. Explanations for a false negative finding in our replication collections include:( 1) lack of power, especially if the effect size observed in the GWAS represents an over-estimate of the true effect size( the Winner's Curse) – we estimate that we had 32% and 17% power( at P = 0. 
05) to detect an association in the Portuguese and Japanese sample collections, respectively;( 2) clinical heterogeneity, which is always a possibility in pharmacogenetic studies, especially those conducted in different countries; and( 3) ethnic differences, including different patterns of LD between the underlying causative allele( which is as yet unknown) and marker SNPs tested in our study. We did observe subtle differences in local patterns of LD between Asians and Europeans using genetic data from the 1000 Genomes Project( Figure S5). We note that the rs6427528 minor allele A has a frequency of ∼5–10% in European and East Asian populations, and ∼50% in the African YRI population( HapMap2 and 1000 Genomes); therefore, it may be of interest to test African American samples in replication. What are the options for increasing sample size in pharmacogenetic studies, thereby providing an opportunity to replicate our CD84 genetic and expression findings? While it might seem trivial to collect more samples through traditional registries, this is extremely challenging for phenotypes pertaining to treatment efficacy. To underscore this point, we highlight our study design, where we organized samples and clinical data from 16 different collections across 7 different countries in order to obtain the samples for the current study. Going forward, non-traditional strategies to collect biospecimens linked with clinical data( e. g., online registries, electronic medical records) may be required to achieve clinical collections of sufficient size to discover pharmacogenomic predictors of efficacy. In conclusion, we conducted the largest GWAS to date for response to anti-TNF therapy in RA patients. Our genetic and expression data suggest that CD84 genetic variants and/or expression levels could be developed as predictive biomarkers for etanercept treatment response in RA patients of European ancestry.", "role": "user" }, { "content": "There are no genetic predictors of response to one of the most widely used classes of drugs in the treatment of rheumatoid arthritis—biological modifiers of the inflammatory cytokine tumor necrosis factor-alpha ( or anti-TNF therapy ) . To identify genetic predictors , we performed the largest genome-wide association study ( GWAS ) to date as part of an international collaboration . In our study , which included 2 , 706 RA patients treated with one of three anti-TNF drugs , the most significant finding was restricted to RA patients treated with etanercept ( P = 8×10−8 ) , a drug that acts as a soluble receptor to bind circulating cytokine and prevents TNF from binding to its cell surface receptor . The associated variant influences expression of a nearby immune-related gene , CD84 , whose expression is correlated with disease activity in RA patients . Together , our data support a model in which genomic factors related to CD84 expression serve as a predictor of disease activity and response to etanercept therapy among RA patients of European ancestry , but not anti-TNF therapies that act through different biological mechanisms or potentially in RA patients of other genetic ancestries .", "role": "assistant" } ]
science.scientific_lay_summarisation_plos_single_doc_summ
science.scientific_lay_summarisation_plos_single_doc_summ.1945
[ { "content": "Summarize the article in an uncomplicated manner.\n\nTitle: Unlinking the methylome pattern from nucleotide sequence, revealed by large-scale in vivo genome engineering and methylome editing in medaka fish\nArticle:\nAbstract:\nThe heavily methylated vertebrate genomes are punctuated by stretches of poorly methylated DNA sequences that usually mark gene regulatory regions. It is known that the methylation state of these regions confers transcriptional control over their associated genes. Given its governance on the transcriptome, cellular functions and identity, genome-wide DNA methylation pattern is tightly regulated and evidently predefined. However, how is the methylation pattern determined in vivo remains enigmatic. Based on in silico and in vitro evidence, recent studies proposed that the regional hypomethylated state is primarily determined by local DNA sequence, e. g., high CpG density and presence of specific transcription factor binding sites. Nonetheless, the dependency of DNA methylation on nucleotide sequence has not been carefully validated in vertebrates in vivo. Herein, with the use of medaka( Oryzias latipes) as a model, the sequence dependency of DNA methylation was intensively tested in vivo. Our statistical modeling confirmed the strong statistical association between nucleotide sequence pattern and methylation state in the medaka genome. However, by manipulating the methylation state of a number of genomic sequences and reintegrating them into medaka embryos, we demonstrated that artificially conferred DNA methylation states were predominantly and robustly maintained in vivo, regardless of their sequences and endogenous states. This feature was also observed in the medaka transgene that had passed across generations. Thus, despite the observed statistical association, nucleotide sequence was unable to autonomously determine its own methylation state in medaka in vivo. Our results apparently argue against the notion of the governance on the DNA methylation by nucleotide sequence, but instead suggest the involvement of other epigenetic factors in defining and maintaining the DNA methylation landscape. Further investigation in other vertebrate models in vivo will be needed for the generalization of our observations made in medaka.\nIntroduction:\nDNA methylation is central to the epigenetic control of transcription in vertebrates and is essential for cell differentiation and embryonic development[1–3]. While the cytosines in cytosine-guanine( CpG) dinucleotides are extensively methylated throughout vertebrate genomes, unmethylated CpGs are commonly found clustered at high density inside gene regulatory elements, such as promoters and enhancers. Previous studies have revealed that the methylation state of regulatory regions governs the expression of their associated genes[4, 5]. Furthermore, aberrant changes in the methylation state can lead to deregulated transcription, resulting in cellular dysfunction, diseases and developmental abnormality[6, 7]. Given its direct governance on transcription, the methylation landscape needs to be precisely specified and modulated. The DNA methylation pattern is established and maintained through highly dynamic biological processes, in which the methylome undergoes substantial, yet precise, changes. For instance, differentiating cells faithfully acquire specific methylation landscapes that are unique to their committed cell types[8–10]. 
Remarkably, in human and mice, the DNA methylome is extensively erased[11, 12] and fully reconstituted during gametogenesis and early embryonic development[13–15]. These facts suggest that the methylation landscape is pre-defined by genetic information. Thus, deciphering how the methylation pattern is encoded is a prerequisite for understanding of differentiation processes and the pathogenesis of various diseases[6, 16–18]. However, by what means the methylation pattern is defined in vivo remains enigmatic. Research over the past decade has proposed that the DNA methylation pattern depends on local sequence context. In particular, in silico analyses asserted that there is a strong statistical association between sequence variants and differential DNA methylation states in vertebrates, from fish[19] to human[20]. A number of recent in vitro studies using cultured cells further demonstrated that high CpG density or the presence of specific DNA sequences that contain transcription-factor binding sites is capable of autonomously determining local hypomethylation in the globally methylated genome[21–24]. These recent in silico and in vitro reports support the notion that the DNA methylation pattern is primarily determined by local sequence context[21]. However, the anticipated sequence-dependency of DNA methylation is in contradiction to the pioneering in vitro experiments of the early 1980s[25–27], in which the methylation status of exogenous DNAs( either artificially CpG-methylated or completely unmethylated) was found to be maintained with certain fidelity for many cell generations upon stable genome integration. Given these opposing results, the sequence-dependency of the DNA methylome seems less concrete than recently anticipated. Importantly, the above ideas have never been well demonstrated nor rigorously tested in vivo. In this respect, the report by Long et al.[28] provided valuable insights by studying the DNA methylation state of the 42-Mbp fragment of human chromosome 21 in the Tc1 trans-chromosomic mice, as well as the mouse genome loci-containing transgene constructs that were artificially transposed into the zebrafish genome. Their results suggested the existence of sequence-dependent DNA methylation in vivo, but their analyses only focused on non-native sequences( i. e. examining human genomic sequence in mouse, or mouse genomic sequence in zebrafish). Likewise, Li et al.[29] examined the methylation status of a transgene across three generations in rat and found the stable acquisition and inheritance of DNA methylation pattern, but the transgene examined was composed of a mouse promoter and human gene. Thus, it is difficult to draw a general conclusion with these studies on the causal relationship between DNA sequence and methylation in native context in vivo. Herein, we report the first experiments that rigorously tested the governance of DNA methylation state by nucleotide sequence in vivo. The small laboratory fish, medaka( Oryzias latipes), was chosen as an experimental model for their relatively small genome size( approx. 700 Mbp), short generation time( 2. 5 to 3 months), ease of in vivo genetic manipulation, oviparity, in addition to their capability of producing 10–20 fertilized embryos per pair on a daily basis[30, 31]. Importantly, the medaka has polymorphic inbred lines from two geographically separated subpopulations living in the northern and southern part of Japan( 2. 5–3% SNP rate, for review, see[32]), and their genomes and methylomes were already decoded[19, 33, 34].
Although vertebrates could have variable DNA methylation dynamics, particularly during early embryonic development( e. g., the genome-wide methylation erasure immediately after fertilization is highly extensive in human and mice[11, 12], but very subtle or virtually absent from sheep[35], medaka[36] and zebrafish[37]), the ultimate zygotic DNA methylation landscape is highly conserved from fish to mammals[37–39]. In addition, since an extensively methylated genome is believed to be a prerequisite for the onset of vertebrate evolution[40–42], the molecular mechanisms and logic underlying the patterning of DNA methylome are likely conserved among vertebrates. Hence, observations made on medaka can potentially shed light on the postulated, yet unproven, link between genomic sequences and DNA methylation in vertebrates. Contrary to expectation, our results suggest that nucleotide sequence, by itself, cannot dictate its own methylation state in vivo, which argues against the prevailing view of DNA methylation in vertebrates.\nDiscussion:\nAlthough DNA methylation is the best characterized epigenetic signature[50], the molecular basis and logic of its establishment still remain elusive. Given that CpG dyads are predominantly methylated unless they are clustered at high density[51], it is generally presumed that hypermethylation is the default state of vertebrate genomes and specific regions( i. e. gene regulatory elements) are protected from de novo methylation, rendering them hypomethylated[21, 24, 52–55]. Intensive research over the past decade has demonstrated that the protection on the genomic loci is possibly mediated by nucleosome positioning[56–58] and/or the recruitment of a myriad of proteins[12, 59–63], which eventually block off local access of DNA methyltransferases or remove methylation on cytosines in vicinity through oxidation and thymine DNA glycosylase( TDG)-mediated base excision repair. However, little is known about how these factors are specifically predisposed on the preselected loci. As aforementioned, recent in vitro studies demonstrated that nucleotide sequence features( especially high CpG density and the presence of certain transcription factor binding sites) autonomously determined the local hypomethylated state[21–24, 64, 65]. However, this proposition has never been rigorously verified in vivo, presumably due to the fact that interrogation of genomic sequences at genome-wide scale requires a large number of subject animals, which is prohibitive with classical mammalian models( e. g., rodents). With the use of medaka as an alternative vertebrate model, our results definitively showed that there is no immediate connection between DNA methylation state and underlying nucleotide sequence in vivo, in spite of their strong statistical association. By manipulating and controlling the methylation state of genomic sequences prior to reintegration into the genome, we demonstrated that the artificially established methylation states were predominantly maintained in medaka in vivo, independent of their nucleotide sequences and native methylation states. This also appears to be true for the transgene that passed across generations. Our results thus argue against not only the recently inferred determining role of DNA sequence on the methylation landscape, but also the longstanding belief that there is a default state( i. e. hypermethylated) for the vertebrate genomes. In fact, the postulated strict sequence-dependency seems paradoxical to the concept of epigenetics itself.
There are accumulating reports for the last two decades that DNA methylation could be perturbed by transient physiological stress or chemical exposure. More importantly, the perturbed states could be highly persistent and inheritable, while the underlying genomic sequence remains unchanged[66–68]. These observations highlighted that DNA methylation pattern is not directly coupled with the underlying nucleotide sequence in vivo, in spite of what has been recently shown in silico and in vitro. However, our results do not rule out the existence of highly confined, local sequence-dependent DNA methylation. As proposed by Richards[69], the sequence-dependency of epigenetic signatures may vary with actual sequence-context, i. e. some nucleotide sequences may favor or even fully mandate certain methylation state, while others may be completely independent of DNA methylation. Although the artificially established hypermethylated state of HypoMD sequences examined in this study was mostly maintained after genome integration, we observed spontaneous, complete loss of methyl groups on some CpGs in the eleven pre-methylated HypoMDs, as well as within one of the in situ edited loci. This suggests the presence of local sequence elements that facilitate demethylation on specific CpGs, although their effect was spatially confined. As previously demonstrated in vitro, some DNA motifs, in particular several transcription factor binding sites( reviewed by Blattler and Farnham[70]), are indeed instructive to DNA methylation and may account for the change of methylation state in specific loci upon differentiation[18, 71]. Importantly, their effect was also demonstrated to be limited to no more than a few tens of base pairs up- and down-stream[22, 23]. It is thus likely that the restricted governing range( < 100 bp) of these DNA sequences is insufficient to account for the span of HypoMDs( median length > 1 kb). The apparent lack of sequence dependency can be explained by the involvement of epigenetic factors in DNA methylation in vivo, as suggested by Kaminsky et al.[20]. Genomic fragments tested in the present study and previous works were all purified prior to reintegration into the genome, hence lacked any associated factor( s) that can modulate DNA methylation. Future experiments will need to address the presumed methylation determining factor( s), their deposition onto specific locations of the genome, and their inheritance across cell division and animal generations. The strong link between DNA methylation, nucleosome position and histone modifications[72–74] could provide a hint for further investigation. In summary, with the use of medaka as a vertebrate model, our data presented herein oppose the recent proposition that the genome-wide DNA methylation pattern in vertebrates is primarily and autonomously designated by the underlying genomic sequence in vivo, but instead provide insights into potential involvement of other epigenetic factor( s) in defining the DNA methylation landscape. Our results demonstrate that the DNA methylation landscape and genomic sequence are not directly coupled, which underpin the widely-observed plasticity of DNA methylation along differentiation, as well as the transgenerational inheritance of perturbed DNA methylation in vivo. However, it is worth noting that vertebrate species could have variable methylation dynamics of DNA methylation during development and growth, especially during early embryonic stages, although underlying molecular mechanisms are probably conserved. 
This is true even within the same clade of vertebrate species, such as mammals[35]. Further investigation in other vertebrate models will definitely be needed before generalization of our observations made on medaka.", "role": "user" }, { "content": "The genomes of vertebrate animals are naturally and extensively modified by methylation . DNA methylation is essential to normal functions of cells , hence the whole animal , since it governs gene expression . Defects in the establishment and maintenance of the proper methylation pattern are commonly associated with various developmental abnormalities and diseases . How exactly the normal pattern is defined in vertebrate animals is not fully understood , but recent research with computational analyses and cultured cells suggested that DNA sequence is a primary determinant of the methylation pattern . This study encompasses the first experiments that rigorously test this notion in a whole animal ( medaka fish ) . In a statistical sense , we observed a very strong correlation between DNA sequence and methylation state . However , by introducing unmethylated and artificially methylated native genomic DNA sequences into the genome , we demonstrated that the artificially conferred methylation states were robustly maintained in the animal , independent of the sequence and native state . Our results thus demonstrate that the genome-wide DNA methylation pattern is not autonomously determined by the DNA sequence , which underpins the vital role of the DNA methylation pattern as a core epigenetic element .", "role": "assistant" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.6481
[ { "content": "Write a literature review chapter based on these paper abstracts.\nLiterature Review Title: Augmentative and Alternative Communication (AAC) Advances: A Review of Configurations for Individuals with a Speech Disability \nChapter Title: Conversational Rates \n1. Abstract of Cited Paper (BIB001): The problem of assisting people with special needs is assuming a central role in our society, and information and communication technologies are asked to have a key role in aiding people with both physical and cognitive disabilities. This paper describes an eye tracking system, whose strong points are the simplicity and the consequent affordability of costs, designed and implemented to allow people with severe motor disabilities to use gaze as an input device for selecting areas on a computer screen. The motivation for this kind of input device, together with the communication impairments that it may help to solve are reported in the paper, that then describes the adopted technical solution, compared to existing approaches, and reports the results obtained by its experimentation. \n2. Abstract of Cited Paper (BIB002): The mission of the Georgia State University BrainLab is to create and adapt methods of human-computer interaction that will allow brain-computer interface (BCI) technologies to effectively control real-world applications. Most of the existing BCI applications were designed largely for training and demonstration purposes. Our goal is to research ways of transitioning BCI control skills learned in training to real-world scenarios. Our research explores some of the problems and challenges of combining BCI outputs with human-computer interface paradigms in order to achieve optimal interaction. We utilize a variety of application domains to compare and validate BCI interactions, including communication, environmental control, neural prosthetics, and creative expression. The goal of this research is to improve quality of life for those with severe disabilities. \n3. Abstract of Cited Paper (BIB003): Despite active research and significant progress in the last 30 years, eye detection and tracking remains challenging due to the individuality of eyes, occlusion, variability in scale, location, and light conditions. Data on eye location and details of eye movements have numerous applications and are essential in face detection, biometric identification, and particular human-computer interaction tasks. This paper reviews current progress and state of the art in video-based eye detection and tracking in order to identify promising techniques as well as issues to be further addressed. We present a detailed review of recent eye models and techniques for eye detection and tracking. We also survey methods for gaze estimation and compare them based on their geometric properties and reported accuracies. This review shows that, despite their apparent simplicity, the development of a general eye detection technique involves addressing many challenges, requires further theoretical developments, and is consequently of interest to many other domains problems in computer vision and beyond. \n4. Abstract of Cited Paper (BIB004): AbstractRecently, eye tracking technologies (i.e., technologies that automatically track the point of an individual's gaze while that person views or interacts with a visual image) have become available for research purposes. 
Based on the sampling of the orientation of the individual's eyes, researchers can quantify which locations within the visual image were fixated (viewed), for how long, and how many times. These automated eye tracking research technologies open up a wealth of avenues for investigating how individuals with developmental or acquired communication disabilities may respond to aided augmentative and alternative communication (AAC) systems. In this paper, we introduce basic terminology and explore some of the special challenges of conducting eye tracking research with populations with disabilities who might use AAC, including challenges of inferring attention from the presence of fixation and challenges related to calibration that may result from participant characteristics, behavioral idi... \n5. Abstract of Cited Paper (BIB005): Purpose Eye-gaze methods have the potential to advance the study of neurodevelopmental disorders. Despite their increasing use, challenges arise in using these methods with individuals with neurodevelopmental disorders and in reporting sufficient methodological detail such that the resulting research is replicable and interpretable. Method This tutorial presents key considerations involved in designing and conducting eye-gaze studies for individuals with neurodevelopmental disorders and proposes conventions for reporting the results of such studies. Results Methodological decisions (e.g., whether to use automated eye tracking or manual coding, implementing strategies to scaffold children's performance, defining valid trials) have cascading effects on the conclusions drawn from eye-gaze data. Research reports that include specific information about procedures, missing data, and selection of participants will facilitate interpretation and replication. Conclusions Eye-gaze methods provide exciting opportunit... \n6. Abstract of Cited Paper (BIB006): The past 20 years have witnessed unprecedented progress in brain–computer interfaces (BCIs). However, low communication rates remain key obstacles to BCI-based communication in humans. This study presents an electroencephalogram-based BCI speller that can achieve information transfer rates (ITRs) up to 5.32 bits per second, the highest ITRs reported in BCI spellers using either noninvasive or invasive methods. Based on extremely high consistency of frequency and phase observed between visual flickering signals and the elicited single-trial steady-state visual evoked potentials, this study developed a synchronous modulation and demodulation paradigm to implement the speller. Specifically, this study proposed a new joint frequency-phase modulation method to tag 40 characters with 0.5-s-long flickering signals and developed a user-specific target identification algorithm using individual calibration data. The speller achieved high ITRs in online spelling tasks. This study demonstrates that BCIs can provide a truly naturalistic high-speed communication channel using noninvasively recorded brain activities. \n7. Abstract of Cited Paper (BIB007): CONTEXT ::: Medicine is a highly visual discipline. Physicians from many specialties constantly use visual information in diagnosis and treatment. However, they are often unable to explain how they use this information. Consequently, it is unclear how to train medical students in this visual processing. Eye tracking is a research technique that may offer answers to these open questions, as it enables researchers to investigate such visual processes directly by measuring eye movements. 
This may help researchers understand the processes that support or hinder a particular learning outcome. ::: ::: ::: AIM ::: In this article, we clarify the value and limitations of eye tracking for medical education researchers. For example, eye tracking can clarify how experience with medical images mediates diagnostic performance and how students engage with learning materials. Furthermore, eye tracking can also be used directly for training purposes by displaying eye movements of experts in medical images. ::: ::: ::: CONCLUSIONS ::: Eye movements reflect cognitive processes, but cognitive processes cannot be directly inferred from eye-tracking data. In order to interpret eye-tracking data properly, theoretical models must always be the basis for designing experiments as well as for analysing and interpreting eye-tracking data. The interpretation of eye-tracking data is further supported by sound experimental design and methodological triangulation. \n8. Abstract of Cited Paper (BIB008): In this paper, a review is presented for the research on eye gaze estimation techniques and applications, which has progressed in diverse ways over the past two decades. Several generic eye gaze use-cases are identified: desktop, TV, head-mounted, automotive, and handheld devices. Analysis of the literature leads to the identification of several platform specific factors that influence gaze tracking accuracy. A key outcome from this review is the realization of a need to develop standardized methodologies for the performance evaluation of gaze tracking systems and achieve consistency in their specification and comparative evaluation. To address this need, the concept of a methodological framework for practical evaluation of different gaze tracking systems is proposed. \n9. Abstract of Cited Paper (BIB009): Augmentative and alternative communication (AAC) has been transformed by the social media revolution made possible by the emergence of mobile technology. The cumbersome dedicated devices of the 1970s have evolved into a burgeoning AAC app industry. However, the limited use and abandonment of AAC technologies remains high. Unlocking the untapped potential of technology requires a paradigm shift in the design of AAC technologies by building systems that minimize the cognitive load placed on users, adapting to their individual physical and language needs. Telling Tales shares insights and stories of how the combination of user-centred design, interdisciplinary research and the application of intelligent computing is providing a vision of future generations of AAC technologies. \n10. Abstract of Cited Paper (BIB010): Purpose Brain–computer interfaces (BCIs) have the potential to improve communication for people who require but are unable to use traditional augmentative and alternative communication (AAC) device...\n\nLiterature Review:", "role": "user" }, { "content": "Natural speech has a rate of 125-185 words per minute (WPM) BIB010 . Speech rates of less than 100 WPM are identified as slow [105] . Direct selection techniques, including eye gaze systems, are found to provide conversational rates of about 8-10 WPM BIB010 . Similarly, mechanically activated AAC switches and keyboards also affect the conversational rates. The automatic, step, and inverse activation of switches often requires the users to wait until the desired selection is displayed, introducing conversational delays. Scanning methods are reported to allow communicative rates of around two WPM BIB010 . 
Selecting letters to form words may also impact the user's communication rate. This is apparent in touch-activated methods, where users are required to spell words or select icons to form sentences or to write text. The conversational rates of breath-activated systems are further dependent on the encoding method, as systems where breathing variations are used to select letters to write words could negatively impact the conversational rates. A recent study BIB011 similarly reports that most BCI technologies still offer conversational rates of less than 20 letters per minute. Generally, the rates of conversation using AAC systems, including word prediction and letter abbreviation, were found to be 12-18 WPM, highly contrasting with the rates of natural speech BIB010 . For some AAC user groups, such as individuals primarily requiring an independent form of expression, the communication of basic needs is at times prioritized over the speed of conversation. Nonetheless, moving beyond transactional conversations remains a target for current state-of-the-art AAC technologies for an enhanced experience when using an AAC tool BIB010 .\n• IR is invisible to the user's eyes BIB004 • IR can stabilize gaze estimation BIB004 • High Temporal resolution = high volume of data as patterns are averaged over long time spans BIB006 ; Consequence: sample sizes are often small BIB008 • Eye tracking data processing BIB008 and parsing is complex BIB006 • Need for calibration algorithms • IR signals are not reliable for outdoor use BIB004 • Generally, high cost BIB002 \nHead-pointing • Head-mounted visors in addition to a monitor and a camera BIB005 • Light/optical pointers BIB001 Less expensive compared to typical eye-gaze systems • Need fine user precision and controls BIB005 • In direct contact with the user's head Instant feedback to user whenever a key is pressed BIB001 Voluntary muscle control is a requirement for activation BIB001 \nTouch-activated methods Force production through: • Hand/arm/ body part control • Control extender • Resistive/Capacitive touch screen circuitry • Membrane keyboards: Non-conductive spacers separating conductive flat surfaces BIB001 Minimal activation pressure • No direct feedback upon activation • Requires appended feedback mechanisms (auditory/sensory) BIB001 • Prone to classification errors BIB009 • Low transfer rates of EEG-based BCI due to the low signal to noise ratio BIB007 • Most platforms are not yet suitable for everyday usage/ in-home usage • BCI devices often require extensive assistance from caregivers BIB003 \nNon-invasive External monitoring: EEG, MEG, fMRI, NIRS. [63]", "role": "assistant" } ]
science.pubmedqa_qa
science.pubmedqa_qa.8
[ { "content": "Answer the question with YES, NO, or MAYBE based on the abstract.\n\nAbstract:\n(BACKGROUND) Gallbladder carcinoma is characterized by delayed diagnosis, ineffective treatment and poor prognosis. Surgical resection has been thought to be the treatment of choice, while the role of radiotherapy as adjuvant or palliative treatment has not been fully clarified in the literature.\n(PATIENT AND METHODS) We present the case of a 45-year-old female, with unresectable gallbladder carcinoma, grade IV, histologically diagnosed during laparotomy. The patient was treated with palliative intent with percutaneous transhepatic biliary drainage. Furthermore, she received external radiotherapy by (60)Co, using a three-field technique (anterior-posterior and right lateral). The total dose was 3,000 cGy in 10 fractions, with 300 cGy per fraction, 5 days weekly.\n(RESULTS) The patient showed clinico-laboratory improvement and was discharged with a permanent percutaneous transhepatic endoprosthesis. During follow-up (10 and 12 months postirradiation), abdominal CTs showed no local extension of the tumor, while the patient had a good performance status. So far, 1 year after the diagnosis of gallbladder cancer she is still alive.\n\nQuestion:\nIs external palliative radiotherapy for gallbladder carcinoma effective?\n\nAnswer:", "role": "user" }, { "content": "YES", "role": "assistant" } ]
science.pico_ner
science.pico_ner.3293
[ { "content": "Find entities of these types in the text: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nOutput a JSON object with entity types as keys and lists of found entities as values.\n\nText:\nEgg intake during carbohydrate restriction alters peripheral blood mononuclear cell inflammation and cholesterol homeostasis in metabolic syndrome . Egg yolk contains bioactive components that improve plasma inflammatory markers and HDL profiles in metabolic syndrome ( MetS ) under carbohydrate restriction . We further sought to determine whether egg yolk intake affects peripheral blood mononuclear cell ( PBMC ) inflammation and cholesterol homeostasis in MetS , as HDL and its associated lipid transporter ATP-binding cassette transporter A1 ( ABCA1 ) reduce the inflammatory potential of leukocytes through modulation of cellular cholesterol content and distribution . Thirty-seven men and women classified with MetS consumed a moderate carbohydrate-restricted diet ( 25 % -30 % of energy ) for 12 weeks , in addition to consuming either three whole eggs per day ( EGG ) or the equivalent amount of yolk-free egg substitute ( SUB ) . Interestingly , lipopolysaccharide-induced PBMC IL-1β and TNFα secretion increased from baseline to week 12 in the SUB group only , despite increases in PBMC toll-like receptor 4 ( TLR4 ) mRNA expression in the EGG group . Compared to baseline , ABCA1 and 3-hydroxy-3-methyl-glutaryl ( HMG ) -CoA reductase mRNA expression increased by week 12 in the EGG group only , whereas changes in PBMC total cholesterol positively correlated with changes in lipid raft content . Together , these findings suggest that intake of whole eggs during carbohydrate restriction alters PBMC inflammation and cholesterol homeostasis in MetS .", "role": "user" }, { "content": "{\"Intervention\": [\"equivalent amount of yolk-free egg substitute ( SUB\", \"moderate carbohydrate-restricted diet\", \"either three whole eggs per day ( EGG\", \"Egg intake\", \"egg yolk intake\"], \"Outcome\": [\"PBMC toll-like receptor 4 ( TLR4 ) mRNA expression\", \"peripheral blood mononuclear cell ( PBMC ) inflammation\", \"in PBMC total cholesterol\", \"lipopolysaccharide-induced PBMC IL-1\\u03b2 and TNF\\u03b1 secretion increased\", \"alters PBMC inflammation and cholesterol homeostasis\", \"baseline , ABCA1 and 3-hydroxy-3-methyl-glutaryl ( HMG ) -CoA reductase mRNA expression increased\"], \"Participant\": [\"Thirty-seven\", \"MetS\", \"men\", \"women\"]}", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_pubmed
science.scientific_papers_summarization_single_doc_pubmed.94069
[ { "content": "Write an abstract for the research paper below.\n\nPaper text:\nduring the chemical synthesis of dna and rna , branching is prevented by ensuring that only one chemically reactive group is present in the growing oligonucleotide and one in the nucleoside 3-phosphoramidite .\nthis is achieved by blocking other reactive groups within the sugars and bases ( e.g. exocyclic amines ) with protecting groups .\nthe removal of the protecting groups that block exocyclic amines , commonly known as deprotection , occurs only after synthesis .\nadditionally , the 5-oh capping group in a newly incorporated nucleotide , usually 4,4-dimethoxytrityl ( dmt ) , must be removed prior to the addition of the next residue and at the completion of synthesis .\nalthough decapping and deprotection are efficient reactions , the yield is not nearly 100% . indeed , using monoclonal antibodies to specific protecting groups we previously showed that oligonucleotides from commercial sources claiming deprotection retained significant levels of protecting groups [ see refs ( 1,2 ) ; barley - maloney and agris , in press ] .\nthis antibody - based method was previously validated using high - performance liquid chromatography ( hplc ) ( 1 ) . whereas this level of contamination may be adequate for routine procedures , it is clearly not acceptable for diagnostic and prognostic uses of oligonucleotides , such as arrays .\nthese considerations led us to ask two questions : ( i ) can we detect remaining protecting groups in oligonucleotides printed on arrays and ( ii ) do protecting groups alter array results ?\ncorning ultragap glass slides were spotted with the operon human genome array - ready oligo set ( version 2.0 ) and custom design oligonucleotides on an omnigrid 100 ( genemachines ) arrayer .\nthe arrays were washed in a pre - hybridization buffer ( 5 ssc , 0.1% sds , 1% bsa ) for 1 h at 42c .\narrays were washed in deionized water and then probed with a 1:100 dilution [ in wash buffer ( sodium phosphate , 0.9% nacl , 0.5% tween-20 , 1% bsa ) ] of anti - dmt ( ab 1 - 30 - 26 ) monoclonal hybridoma supernatant ( 1 ) and incubated for 2 h at room temperature ( rt ) .\nthe secondary antibody , goat anti - mouse conjugated 660 alexafluor ( molecular probes ) was diluted in wash buffer ( 1:100 dilution ) and incubated on the array for 1 h at rt .\narrays were washed according to protocols found at the duke microarray facility web site ( ) and scanned on the axon genepix 4000b scanner ( molecular devices ) .\n\nlabeling and hybridization protocols are further detailed at the duke microarray facility web site ( ) .\n\nthe presence of protecting groups on arrays was tested on glass slides spotted with the operon human genome array - ready oligo set ( version 2.0 ) . 
on the same array we spotted mixtures of decapped and\ndeprotected 4-thymosin and ribosomal protein l9 oligonucleotides , and capped and protected 4-thymosin and ribosomal protein l9 oligonucleotides .\nthe decapping and deprotection , which meant the dmt cap and the protecting benzoyl- ( bz ) , isobutyryl- ( ibu ) and isopropylphenoxyacetyl- ( iprpac ) groups had been removed , followed our previously published procedure that was validated by hplc ( 1 ) .\nnear complete deprotection of the 4-thymosin and ribosomal protein l9 oligonucleotides was observed by gel electrophoresis and the dot - blot antibody - based assay described previously ( 1 ) ( data not shown ) .\nmixing of fully protected and completely deprotected oligonucleotides ensured accurate determination of the percentage of protecting groups in any mixture and percentages from 2 to 100% were evaluated for detection with a monoclonal antibody to the dmt group ( 1 ) .\nthe dmt group in the 4-thymosin and ribosomal protein l9 oligonucleotides was easily visualized by an anti - dmt monoclonal antibody ( figure 1a , left panel ) ( see figure legend for methods ) .\ndmt could be detected even with only 10% of the dmt groups remaining ( data not shown ) .\nwhereas this signal was not easy to quantify in the cy3 or cy5 channel , it was obvious upon inspection of the spot morphology in the cy5 channel .\nthe anti - dmt signal was characterized by a yellow / green spot surrounded by a red halo while the background signal observed with fully deprotected oligonucleotides was detected as a low intensity green signal on the spots ( figure 1a , anti - dmt panels ) .\nthe data above led us to conclude that we could identify dmt groups remaining on spotted oligonucleotides in situ . using this criterion\n, we detected the dmt signal , which we defined as spots containing a red halo , in only very few spots ( 3 in 21 000 ) among oligonucleotides from the operon set .\nwe surmise that these represent rare cases where the supplied oligonucleotide was poorly deprotected ( figure 1a , middle panel ) . in order to ascertain the effect of incomplete protection on array function\n, we analyzed the hybridization signals obtained with two total rna samples , which were labeled and hybridized using standard direct labeling protocols .\nfirst we evaluated spot morphology and this was found to be normal for the majority of the oligonucleotides in the operon set ( e.g. open arrow , figure 1a , right panel ) .\nthere were no signs of spot spreading or poor spot morphology due to printing techniques , thus we do not believe that the halos observed above with anti - dmt antibody staining are a result of poor printing quality .\nthe same was true for 4-thymosin and ribosomal protein l9 oligonucleotides that were fully deprotected ( 0% protection in figure 1b , hybridization panels ) ; however , the protected oligonucleotides resulted in aberrant spot morphology ( closed arrow , figure 1a and b , hybridization panels ) .\nanomalous spot morphology is evident in spots that contain 20% protected oligonucleotides and becomes extreme with higher levels of protection ( figure 1c ) .\nwe also noticed that the spots with aberrant morphology yielded an altered ratio of medians , thus giving a false indicator of expression ( data not shown ) .\nclearly even modest levels of protecting groups led to aberrant spot morphology and potentially altered expression data . 
in order to further test this\n, we measured the mean signal intensity for the same mixtures of oligonucleotide probes ( 4-thymosin and ribosomal protein l9 ) .\nthe presence of protected oligonucleotides clearly reduced the sum of the mean signal intensity for both probes and a significant reduction could be seen at low levels of protection ( figure 2a and b ) .\nwhereas we can not conclude that the dmt capping group was responsible for the observed reduction of signal and change in morphology , we can deduce that incomplete removal of dmt , bz , ibu or iprpac leads to suboptimal microarray performance .\nwhereas the method of mixing protected and deprotected oligonucleotides provides a very precise measure of the percentage of protecting groups in a spot , it can only supply an estimate of the untoward effects of protecting groups on probe performance . in one spot where each oligonucleotide is 10% protected hybridization of each oligonucleotide within\nthe spot is likely to be affected and therefore we argue that the negative effect on hybridization will be greater than effects for a spot where 10% of the oligonucleotides are fully protected .\nhence we believe that the data in figure 2 provide an underestimate of the negative effects of incomplete deprotection .\nthis also means , however , that at this time we can not give a strict criterion for rejection of a probe other than visual inspection and exclusion of any spot that produces a characteristic halo as seen in figure 1 . the wide use and\nthe formidable potential of microarray technologies has led to appropriate concern with microarray quality control ( 3,4 ) .\nissues of statistical power and biological variation have been carefully considered and the overall value of microarray data has been well validated . nonetheless , the standards for use in therapy and diagnosis will have to be developed and will likely be more rigorous than current best practices ( 3 ) .\nthe quality of the oligonucleotides , whether pre - made and spotted , or synthesized in situ have begun to be analyzed . pre\n- screening with dna - binding fluorescent dyes can be used to interrogate arrays for spot morphology and for some level of quality and quantity per spot ( 5,6 ) .\nstatistical methods ( 7,8 ) and ab initio calculations ( 9,10 ) can be used to identify poorly performing oligonucleotides or even regions of arrays . to date , however , there has been little discussion of incomplete deprotection in microarrays . 
in this report\nwe show that incomplete deprotection could lead to poor array performance and , more importantly , we provide a methodology to detect it .\nwhile we limit our conclusions to dna oligonucleotide spotted arrays , the potential for incomplete deprotection can also exist in arrays where oligonucleotide synthesis is carried out on the solid support .\nin fact , the significant negative functional consequences of incomplete deprotection may be exacerbated in these microarrays because of the shorter length of the oligonucleotides .\nwe suggest that screening of microarrays with monoclonal antibodies to identify protecting groups should guide the consideration given to data derived from individual features and should enhance the accuracy of the results obtained .\n( a ) left panel : shown is a section of an spotted array probed with anti - dmt ( ab 1 - 30 - 26 ) monoclonal hybridoma supernatant ( 1 ) .\nmiddle panel : a section of the human operon v2.0 oligonucleotide array that contains an oligonucleotide that displayed a red halo when probed for dmt groups as described for the left panel .\nright panel : hybridization of a cdna probe to the human operon oligonucleotide v2.0 array is shown .\nan open arrow points to the normal hybridization of a cdna probe to an oligonucleotide of unknown protection status .\nclosed arrow represents the location of a series of oligonucleotides , which possess varying amounts of dmt , bz , ibu , and iprpac groups ( 0100% ) .\n( b ) top panels represents sections of the arrays with ribosomal protein l9 and 4-thymosin oligonucleotides that contain varying amounts of dmt , bz , ibu and iprpac groups ( 0 , 80 and 2% ) and were probed with anti - dmt monoclonal antibody ( see text ) .\nbottom panels represent hybridization of a cdna probe to the same region of the array .\ncdna probe labeling and hybridization protocols were described above . protecting groups interfere with array function .\n( a ) a regression analysis of the sum of median signal intensity from differentially protected ( 0100% dmt , bz , ibu and iprpac ) 4-thymosin oligonucleotides that were hybridized with a cdna probe .\ndata points represent triplicate experiments ; variation within signal intensities is represented by error bars .\n( b ) a regression analysis of the sum of median signal intensity from differentially protected ( 0100% dmt , bz , ibu and iprpac ) ribosomal protein l9 oligonucleotides that were hybridized with a cdna probe .\ndata points represent triplicate experiments ; variation within signal intensities is represented by error bars .", "role": "user" }, { "content": "en masse analysis of gene structure and function by array technologies will have a lasting and profound effect on biology and medicine . \n this impact can be compromised by low quality of probes within arrays , which we show can be caused by incomplete removal of chemical protecting groups . to solve this quality control problem \n , we present a sensitive , specific and facile method to detect these groups in situ on arrays using monoclonal antibodies and existing instrumentation . \n screening of microarrays with these monoclonal antibodies should guide the consideration given to data derived from these and should enhance the accuracy of the results obtained .", "role": "assistant" } ]
science.craftchem_ner
science.craftchem_ner.2371
[ { "content": "Extract all chemical entities (ChEBI) from this biomedical text. Output as JSON array.\n\nText: Stainingwasabsentwhentheprimaryantibodywaspre-absorbedwiththeantigenicpeptide(datanotshown).", "role": "user" }, { "content": "[]", "role": "assistant" } ]
science.scicite_classification
science.scicite_classification.2594
[ { "content": "Classify the intent of this citation sentence by choosing from: [\"Method\", \"Background\", \"Result\"].\n\nMethod: Cites for methodology\nBackground: Cites for background information\nResult: Cites for findings\n\nCitation: Direct segments [46] and redundant memory mappings [184] reduce address translation overheads by mapping large contiguous virtual memory to contiguous physical address space which reduces address translation overheads by increasing the reach of TLB entries.", "role": "user" }, { "content": "Background", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_pubmed
science.scientific_papers_summarization_single_doc_pubmed.50279
[ { "content": "Write an abstract for the research paper below.\n\nPaper text:\ninvasive micropapillary carcinoma ( impc ) was initially described as a histological variant of breast cancer , characterized as a carcinoma composed of little clumps of tumor cells lying within clear spaces separated by a fibrocollagenous or delicate fibrovascular stroma . since then , it has been reported in various kinds of cancers derived from breast , urinary bladder , endometrium , lung , salivary gland and in cancers of the gastrointestinal tract . although reports of gastric cancers with impc are very limited [ 4 , 5 ] , a high frequency of lymphovascular invasions and metastases in lymph nodes have been described recently [ 3 , 6 , 7 ] .\n, we have found a tp53 r175h mutation in the tumor . to the best of our knowledge ,\nthis is the first report of gastric impc with a tp53 r175h mutation that has a high invasive or metastatic potential in the literature .\nthe patient 's family member gave written informed consent for this report , and the identity of the patient has been strictly protected .\nan 86-year - old japanese female patient was admitted to our hospital with a chief complaint of bloody emesis .\npresent status on admission was the following : body height : 151 cm , body weight : 71 kg , body temperature : 36 c , blood pressure : 172/75 mm hg , pulse : 88 bpm , palpebral conjunctiva : anemic , bulbar conjunctiva : not icteric .\nlaboratory data on admission noted hb 6.8 g / dl , cea 80.1 ng / ml and ca19 - 9 74.1 u / ml .\nafter endoscopic hemostasis , endoscopic biopsy was done ; the diagnosis was moderately differentiated adenocarcinoma with invasive small tumor nests ( fig .\n1a ) . systemic examination , including endoscopic ultrasonography and computerized tomography , determined that the preoperative stage of the patient was t2 n2 m0 ; clinical stage iib ( uicc ; 7th edition ) .\nshe underwent distal gastrectomy with systematic lymph node dissection and additional cholecystectomy , followed by billroth i reconstruction .\n16.7 12.8 cm ) demonstrated that a slightly elevated lesion ( 2.6 2.2 cm ) with ulceration ( 1.6 1.6 cm ) was present at the lesser curvature of the antrum ( fig .\nhistopathologically , moderately differentiated adenocarcinoma with small tumor nests was located adjacent to or overlapped with well - differentiated adenocarcinoma in serial sections of the entire lesion ( fig .\nthe small tumor nests that were surrounded by lacunar - like clear space and dense fibrous stroma showed an inverted apical - basal ( inside - out ) pattern ( fig .\nno fibrovascular core was present in the small nests , which was confirmed by anti - cd34 immunostaining ( data not shown ) .\n1c ) . immunostaining with anti - ema monoclonal antibody ( mab ) showed the inside - out pattern ; however , simultaneous cytoplasmic staining made the pattern unclear ( data not shown ) . on the other hand , anti - cd10 mab clearly demonstrated the distinct linear positivity on the stroma - facing surfaces of the small nests ( fig .\nthis reversed polarity provides a marked contrast to intestinal metaplasia that shows an ordinary apical - basal pattern adjacent to the tumor ( fig .\nanti - d2 - 40 immunostaining that reacts against normal lymphatic vessels revealed numerous infiltrating tumor nests in the vessels ( fig . 
2c ) .\nmoreover , systematic regional lymph node dissection demonstrated extensive lymph node metastases ( 9 positives out of 20 examined ) .\ntaken together , the pathological diagnosis of the tumor was a relatively early gastric cancer with impc and multiple lymph node metastases [ t1b2 n3a m0 ; pathological stage iib ( uicc , 7th edition ) ] .\nthe entire tumoral lesion was situated on the boundary between the fundic glands without atrophy and pyloric glands with intestinal metaplasia ( fig .\ngastric adenocarcinomas with typical gastric phenotypes are negative for muc2 , ck20 and cd10 , whereas they are positive for both muc5ac and muc6 . in our case ,\nthe tumor was positive for muc2 , cdx2 and ck20 , although either partially or very faintly . on the other hand , it was negative for both muc5ac and muc6 ; therefore , it is very likely that the tumor was a gastric adenocarcinoma with intestinal phenotypes ( table 2 ) .\nboth anti - ki-67 and p53 mabs strongly stained the nuclei of the tumor cells ( data not shown ) . in order to know the genetic status of tp53 ,\npurified dna was amplified by pcr and the products were directly sequenced according to the iarc protocols with slight modifications .\na representative electropherogram indicated a point mutation of the g to a transition in the exon 5 ( fig .\nthis transition presumably harbors a missense mutation from arg to his at the codon 175 ( r175h ) .\nsequence analysis in the opposite direction was also performed , and the mutation was confirmed ( data not shown ) . the postoperative state of the patient was quite favorable .\nthe patient has been alive for 4 years without any episode of tumor relapse or distant metastasis .\nthus far , several asian groups have exclusively reported gastric impc in the literature [ 4 , 5 , 6 , 7 , 9 ] .\nthis is probably because either gastric impc has closely related to environmental factors such as dietary habits of a high salt intake or dominant h. pylori infection throughout the area .\notherwise , the concept of gastric impc itself may not have been widely accepted in western countries .\nowing to those pioneering reports , however , we understand that gastric impc is a rare but distinct variant of adenocarcinoma that shares histologic and immunohistochemical features with other kinds of cancers . in gastric cancers with impc ,\na high frequency of lymphovascular invasions and metastases in lymph nodes has been described [ 3 , 6 , 7 ] .\nfurthermore , it has been known that tumors with impc may undergo poor prognosis because of frequent metastasis , especially at an early stage . in our case , despite the depth of tumor invasion ( the submucosa ) , extensive lymphovascular invasions and multiple lymph node metastases were observed .\nseveral molecules , such as muc2 , beta - catenin and cd44v6 , have been discussed in the light of cell shape , impc polarity and grade of malignancy [ 3 , 5 ] .\nnevertheless , molecular analysis has never been conducted in gastric impc ; factors that are responsible for the frequent lymphovascular invasions and metastases remain unclear .\nwe have found a tp53 r175h mutation in the tumor . according to the iarc tp53 database ( release 17 ) , out of a total of 29,711 entries , 1,215 cases of r175h mutation have been registered , which is the largest number of all mutations in the database . 
among them , 67 cases of r175h mutation in gastric cancers have been reported .\nr175h has been described as a gain - of - function mutation , in addition to the loss of tumor suppression function and dominant negative effects . in our case , it is likely that r175h was expressed heterologously with the wild - type allele ( fig .\n, the mutated p53 may have exerted a gain of function as well as tumor suppression function and dominant negative effects in the tumor cells . in several human tumor models ,\nthe high invasive or metastatic mechanisms of the p53 r175h mutation have been investigated . in one example , the invasive phenotype of endometrial cancer cells with r175h was discussed in the context of the egfr / pi3k / akt pathways .\nsimilarly , an activation of c - met receptor tyrosine kinase for tumor invasion was reported upon r175h overexpression in esophageal squamous cell carcinoma .\nmoreover , e - cadherin was focused on a breast cancer cell line with r175h mutation in terms of cell shape and cell growth .\nthis is particularly interesting because the loss of e - cadherin expression is strongly associated with the invasive nature of breast lobular carcinomas .\nwe stained the gastric tumor cells with anti - e - cadherin antibody ; however , no significant differences were observed between normal cells and tumor cells with or without impc ( data not shown ) .\nwe have conducted the same tp53 mutational analyses in several cases of gastric and colorectal cancers with impc phenotypes .\nwe have found distinct tp53 mutations in addition to this report ; however , no identical mutation has been found [ ikeda , unpubl .\ndata : g154p in a gastric cancer ; e258k in a rectal cancer ] . in colon cancers with impc , similar studies have been performed , and a higher proportion of tp53 alteration was reported . in this report , however , no specific tp53 mutation was described .\nanti - ema / muc1 staining is commonly used to show the inverted pattern of impc [ 9 , 14 ] .\ncd10 is reported to be useful for depicting the pattern of sigmoid colon cancer . in gastric cancers\n, however , cd10 was not effective in demonstrating impc with obvious gastric phenotypes [ 4 , 9 ] . in our case , ema could show the positive inside - out pattern of the tumor cells ; however , simultaneous cytoplasmic staining made the pattern unclear .\nthus , it is quite reasonable for cd10 to indicate the inside - out pattern of impc of gastric adenocarcinoma with intestinal phenotypes .\ncd10 was also positive for metastatic tumor in the lymph nodes while keeping the reversed polarity .\nin this situation , we recognized the tumor nests as well as resident lymphoid follicles that were positive for cd10 in the background . in summary ,\nthe present report is one of the first reported cases of gastric impc with intestinal phenotypes harboring a tp53 r175h mutation in the literature . in order to generalize whether gastric impc with a tp53 r175h mutation will show multiple lymph node metastases at an early stage of the tumor , accumulation of similar clinical cases and", "role": "user" }, { "content": "we report a case of gastric invasive micropapillary carcinoma ( impc ) in an 86-year - old female patient . \n she was admitted to our hospital with a chief complaint of bloody emesis . \n upper gastrointestinal endoscopy found a gastric adenocarcinoma at the antrum . \n the biopsy specimens showed moderately differentiated adenocarcinoma with invasive small tumor nests . 
\n distal gastrectomy with systematic lymph node dissection demonstrated that the tumor had impc through a pathological examination . despite the depth of tumor invasion ( the submucosa ) , extensive lymph node metastases were observed . \n anti - d2 - 40 immunostaining revealed numerous infiltrating tumor cell nests in the lymphatic vessels , which could explain subsequent multiple lymph node metastases . \n the adenocarcinoma showed intestinal phenotypes by several immunohistochemical studies . \n one of these antibodies ( cd10 ) clearly demonstrated the inverted apical - basal ( inside - out ) pattern of impc , whereas it showed an ordinary pattern in intestinal metaplasia adjacent to the tumor . \n furthermore , genetic analysis by direct sequencing revealed a point mutation in the exon 5 of tp53 in the tumor . \n the mutation presumably harbors a missense mutation from arg to his at the codon 175 ( r175h ) . \n r175h has been previously described as a \n gain - of - function mutation with a high invasive or metastatic potential in several types of cancers . in summary \n , this is one of the first reported cases of gastric impc with intestinal phenotypes harboring a tp53 r175h mutation in the literature .", "role": "assistant" } ]