Columns:
  dataset   stringclasses (9 values)
  input     stringlengths (351 to 18.6k)
  output    stringlengths (8 to 7.48k)
  id        stringlengths (20 to 47)
  messages  listlengths (2 to 2)
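The schema above describes a chat-formatted instruction dataset: `dataset` names the task family, `input` holds the raw task prompt, `output` holds the reference answer (typically a JSON string), `id` is a per-example identifier, and `messages` is the same example rendered as a two-turn user/assistant conversation. Below is a minimal sketch of loading and inspecting such a record with the Hugging Face `datasets` library; the Hub path is a placeholder, not the dataset's actual name.

```python
# Minimal sketch, assuming the dataset is hosted on the Hugging Face Hub.
# "example-org/science-instruct-mix" is a hypothetical path; substitute the
# real dataset name, or point load_dataset at local JSON/Parquet files instead.
from datasets import load_dataset

ds = load_dataset("example-org/science-instruct-mix", split="train")

record = ds[0]
print(record["dataset"])      # task family, e.g. "science.multicite_intent_classification"
print(record["id"])           # per-example id, e.g. "science.multicite_intent_classification.1296"
print(record["input"][:200])  # task prompt (351 to 18.6k characters)
print(record["output"])       # reference answer, typically a JSON string
print(record["messages"])     # the same example as a two-turn user/assistant chat
```

The rows that follow are sample records from several of the task families.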
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: The entity embedding (Nguyen and Grishman, 2016;<cite> Fu et al., 2017</cite> ) is included for arguments that are entities rather than common nouns.
["Similarities"]
science.multicite_intent_classification.1296
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nThe entity embedding (Nguyen and Grishman, 2016;<cite> Fu et al., 2017</cite> ) is included for arguments that are entities rather than common nouns.", "role": "user" }, { "content": "[\"Similarities\"]", "role": "assistant" } ]
science.biored_ner
Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]. Ensure the output is strictly the JSON object. Source Text: Polymorphism of the PEMT gene and susceptibility to nonalcoholic fatty liver disease (NAFLD). Phosphatidylethanolamine N-methyltransferase (PEMT) catalyzes phosphatidylcholine synthesis. PEMT knockout mice have fatty livers, and it is possible that, in humans, nonalcoholic fatty liver disease (NAFLD) might be associated with PEMT gene polymorphisms. DNA samples from 59 humans without fatty liver and from 28 humans with NAFLD were genotyped for a single nucleotide polymorphism in exon 8 of PEMT, which leads to a V175M substitution. V175M is a loss of function mutation, as determined by transiently transfecting McArdle-RH7777 cells with constructs of wild-type PEMT open reading frame or the V175M mutant. Met/Met at residue 175 (loss of function SNP) occurred in 67.9% of the NAFLD subjects and in only 40.7% of control subjects (P<0.03). For the first time we report that a polymorphism of the human PEMT gene (V175M) is associated with diminished activity and may confer susceptibility to NAFLD.
{"CellLine": ["McArdle-RH7777"], "Chemical": ["phosphatidylcholine"], "Disease": ["fatty livers", "NAFLD", "nonalcoholic fatty liver disease", "fatty liver"], "Gene": ["Phosphatidylethanolamine N-methyltransferase", "PEMT"], "Species": ["mice", "human", "humans"], "Variant": ["V175M", "Met/Met at residue 175"]}
science.biored_ner.41
[ { "content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nPolymorphism of the PEMT gene and susceptibility to nonalcoholic fatty liver disease (NAFLD). Phosphatidylethanolamine N-methyltransferase (PEMT) catalyzes phosphatidylcholine synthesis. PEMT knockout mice have fatty livers, and it is possible that, in humans, nonalcoholic fatty liver disease (NAFLD) might be associated with PEMT gene polymorphisms. DNA samples from 59 humans without fatty liver and from 28 humans with NAFLD were genotyped for a single nucleotide polymorphism in exon 8 of PEMT, which leads to a V175M substitution. V175M is a loss of function mutation, as determined by transiently transfecting McArdle-RH7777 cells with constructs of wild-type PEMT open reading frame or the V175M mutant. Met/Met at residue 175 (loss of function SNP) occurred in 67.9% of the NAFLD subjects and in only 40.7% of control subjects (P<0.03). For the first time we report that a polymorphism of the human PEMT gene (V175M) is associated with diminished activity and may confer susceptibility to NAFLD.", "role": "user" }, { "content": "{\"CellLine\": [\"McArdle-RH7777\"], \"Chemical\": [\"phosphatidylcholine\"], \"Disease\": [\"fatty livers\", \"NAFLD\", \"nonalcoholic fatty liver disease\", \"fatty liver\"], \"Gene\": [\"Phosphatidylethanolamine N-methyltransferase\", \"PEMT\"], \"Species\": [\"mice\", \"human\", \"humans\"], \"Variant\": [\"V175M\", \"Met/Met at residue 175\"]}", "role": "assistant" } ]
science.bioasq_list_qa
Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array. Context: conserved polarity proteins Par6 and aPKC regulate cell polarization processes In addition to signaling pathways that alter genetic programes that trigger the loss of cell-cell adhesion, alternative pathways can alter cell plasticity to regulate cell-cell cohesion and increase invasive potential One such pathway involves TGFβ-induced phosphorylation of Par6 Recently, we found that aPKC can also phosphorylate Par6 to drive EMT and increase the migratory potential of non-small cell lung cancer cells Here, we propose a novel mechanism through which the nicotinamide adenine dinucleotide-dependent histone deacetylase SIRT1 regulates EMT in prostate cancer cells through cooperation with the EMT inducing transcription factor ZEB1 In contrast, silencing SIRT1 in metastatic prostate tumor cells restores cell-cell adhesion and induces a shift toward an epithelial morphology concomitant with increased expression of E-cadherin and decreased expression of mesenchymal markers We also found that SIRT1 has a physiologically relevant role in endogenous EMT induced by EGF signaling in prostate cancer cells We propose that the regulation of EMT by SIRT1 involves modulation of, and cooperation with, the EMT inducing transcription factor ZEB1 We thus identify a necessary role for ZEB1 in SIRT1-mediated EMT Numb has been shown to play an important role in the proper functions of Par protein complex and in cell-cell junctions, both of which are associated with EMT Recently, we showed that Numb is capable of binding to both Par3 and E-cadherin Knockdown of Numb by shRNA in MDCK cells led to a lateral to apical translocation of E-cadherin and beta-catenin, active F-actin polymerization, mis-localization of Par3 and aPKC, a decrease in cell-cell adhesion and an increase in cell migration and proliferation This is coupled with the transcriptional repression of E-cadherin, often mediated by Snail1, Snail2 and Zeb1/deltaEF1. 
EGF induces EMT in the breast cancer cell line PMC42-LA and the kinase inhibitor staurosporine (ST) induces EMT in embryonic neural epithelial cells, with F-actin de-bundling and disruption of cell-cell adhesion, via inhibition of aPKC ST in combination with EGF directed a greater EMT via actin depolymerisation and focal contact size reduction, resulting in a loosening of cell-ECM attachment along with Snail1-Zeb1/deltaEF1 induction This appeared fundamentally different to the EGF-induced EMT, highlighting the multiple pathways which can regulate EMT unctional loss of the cell-cell adhesion molecule E-cadherin is an essential event for epithelial-mesenchymal transition (EMT), a process that allows cell migration during embryonic development and tumour invasion Transcriptional and post-transcriptional regulatory mechanisms mediated by several inducers of EMT, in particular the ZEB and Snail factors, downregulate the expression and/or functional organization of core polarity complexes TGFbeta has pleiotropic effects that range from regulation of proliferation and apoptosis to morphological changes and epithelial-mesenchymal transition (EMT) Morphological changes were accompanied by decrease in E-cadherin and increased Snail expression but the mesenchymal markers (N-cadherin, SMAalpha and Vimentin) studied remained unchanged Neutralising E-cadherin function promoted the invasion and inhibited the aggregation of endocrine-sensitive MCF7 cells, whilst having little effect on the behaviour of TamR cells Question: Which proteins are related to the loss of cell-cell adhesion during EMT (epithelial-mesenchymal transition)?
["ZEB", "SNAIL", "SIRT1", "EGF", "TGFb", "aPKC", "Par6"]
science.bioasq_list_qa.292
[ { "content": "Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.\n\nContext: conserved polarity proteins Par6 and aPKC regulate cell polarization processes\nIn addition to signaling pathways that alter genetic programes that trigger the loss of cell-cell adhesion, alternative pathways can alter cell plasticity to regulate cell-cell cohesion and increase invasive potential\nOne such pathway involves TGFβ-induced phosphorylation of Par6\nRecently, we found that aPKC can also phosphorylate Par6 to drive EMT and increase the migratory potential of non-small cell lung cancer cells\nHere, we propose a novel mechanism through which the nicotinamide adenine dinucleotide-dependent histone deacetylase SIRT1 regulates EMT in prostate cancer cells through cooperation with the EMT inducing transcription factor ZEB1\nIn contrast, silencing SIRT1 in metastatic prostate tumor cells restores cell-cell adhesion and induces a shift toward an epithelial morphology concomitant with increased expression of E-cadherin and decreased expression of mesenchymal markers\nWe also found that SIRT1 has a physiologically relevant role in endogenous EMT induced by EGF signaling in prostate cancer cells\nWe propose that the regulation of EMT by SIRT1 involves modulation of, and cooperation with, the EMT inducing transcription factor ZEB1\nWe thus identify a necessary role for ZEB1 in SIRT1-mediated EMT\nNumb has been shown to play an important role in the proper functions of Par protein complex and in cell-cell junctions, both of which are associated with EMT\nRecently, we showed that Numb is capable of binding to both Par3 and E-cadherin\nKnockdown of Numb by shRNA in MDCK cells led to a lateral to apical translocation of E-cadherin and beta-catenin, active F-actin polymerization, mis-localization of Par3 and aPKC, a decrease in cell-cell adhesion and an increase in cell migration and proliferation\nThis is coupled with the transcriptional repression of E-cadherin, often mediated by Snail1, Snail2 and Zeb1/deltaEF1.\nEGF induces EMT in the breast cancer cell line PMC42-LA and the kinase inhibitor staurosporine (ST) induces EMT in embryonic neural epithelial cells, with F-actin de-bundling and disruption of cell-cell adhesion, via inhibition of aPKC\nST in combination with EGF directed a greater EMT via actin depolymerisation and focal contact size reduction, resulting in a loosening of cell-ECM attachment along with Snail1-Zeb1/deltaEF1 induction\nThis appeared fundamentally different to the EGF-induced EMT, highlighting the multiple pathways which can regulate EMT\nunctional loss of the cell-cell adhesion molecule E-cadherin is an essential event for epithelial-mesenchymal transition (EMT), a process that allows cell migration during embryonic development and tumour invasion\nTranscriptional and post-transcriptional regulatory mechanisms mediated by several inducers of EMT, in particular the ZEB and Snail factors, downregulate the expression and/or functional organization of core polarity complexes\nTGFbeta has pleiotropic effects that range from regulation of proliferation and apoptosis to morphological changes and epithelial-mesenchymal transition (EMT)\nMorphological changes were accompanied by decrease in E-cadherin and increased Snail expression but the mesenchymal markers (N-cadherin, SMAalpha and Vimentin) studied remained unchanged\nNeutralising E-cadherin function promoted the invasion 
and inhibited the aggregation of endocrine-sensitive MCF7 cells, whilst having little effect on the behaviour of TamR cells\n\nQuestion: Which proteins are related to the loss of cell-cell adhesion during EMT (epithelial-mesenchymal transition)?", "role": "user" }, { "content": "[\"ZEB\", \"SNAIL\", \"SIRT1\", \"EGF\", \"TGFb\", \"aPKC\", \"Par6\"]", "role": "assistant" } ]
science.qasper_abstractive_qa
Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines. Document Content: Combining Recurrent and Convolutional Neural Networks for Relation Classification This paper investigates two different neural architectures for the task of relation classification: convolutional neural networks and recurrent neural networks. For both models, we demonstrate the effect of different architectural choices. We present a new context representation for convolutional neural networks for relation classification (extended middle context). Furthermore, we propose connectionist bi-directional recurrent neural networks and introduce ranking loss for their optimization. Finally, we show that combining convolutional and recurrent neural networks using a simple voting scheme is accurate enough to improve results. Our neural models achieve state-of-the-art results on the SemEval 2010 relation classification task. Introduction Relation classification is the task of assigning sentences with two marked entities to a predefined set of relations. The sentence “We poured the <e1>milk</e1> into the <e2>pumpkin mixture</e2>.”, for example, expresses the relation Entity-Destination(e1,e2). While early research mostly focused on support vector machines or maximum entropy classifiers BIBREF0 , BIBREF1 , recent research showed performance improvements by applying neural networks (NNs) BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 on the benchmark data from SemEval 2010 shared task 8 BIBREF8 . This study investigates two different types of NNs: recurrent neural networks (RNNs) and convolutional neural networks (CNNs) as well as their combination. We make the following contributions: (1) We propose extended middle context, a new context representation for CNNs for relation classification. The extended middle context uses all parts of the sentence (the relation arguments, left of the relation arguments, between the arguments, right of the arguments) and pays special attention to the middle part. (2) We present connectionist bi-directional RNN models which are especially suited for sentence classification tasks since they combine all intermediate hidden layers for their final decision. Furthermore, the ranking loss function is introduced for the RNN model optimization which has not been investigated in the literature for relation classification before. (3) Finally, we combine CNNs and RNNs using a simple voting scheme and achieve new state-of-the-art results on the SemEval 2010 benchmark dataset. Related Work In 2010, manually annotated data for relation classification was released in the context of a SemEval shared task BIBREF8 . Shared task participants used, i.a., support vector machines or maximum entropy classifiers BIBREF0 , BIBREF1 . Recently, their results on this data set were outperformed by applying NNs BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . zeng2014 built a CNN based only on the context between the relation arguments and extended it with several lexical features. kim2014 and others used convolutional filters of different sizes for CNNs. nguyen applied this to relation classification and obtained improvements over single filter sizes. deSantos2015 replaced the softmax layer of the CNN with a ranking layer. They showed improvements and published the best result so far on the SemEval dataset, to our knowledge. 
socher used another NN architecture for relation classification: recursive neural networks that built recursive sentence representations based on syntactic parsing. In contrast, zhang investigated a temporal structured RNN with only words as input. They used a bi-directional model with a pooling layer on top. Convolutional Neural Networks (CNN) CNNs perform a discrete convolution on an input matrix with a set of different filters. For NLP tasks, the input matrix represents a sentence: Each column of the matrix stores the word embedding of the corresponding word. By applying a filter with a width of, e.g., three columns, three neighboring words (trigram) are convolved. Afterwards, the results of the convolution are pooled. Following collobertWeston, we perform max-pooling which extracts the maximum value for each filter and, thus, the most informative n-gram for the following steps. Finally, the resulting values are concatenated and used for classifying the relation expressed in the sentence. Convolutional Layer Following previous work (e.g., BIBREF5 , BIBREF6 ), we use 2D filters spanning all embedding dimensions. After convolution, a max pooling operation is applied that stores only the highest activation of each filter. We apply filters with different window sizes 2-5 (multi-windows) as in BIBREF5 , i.e. spanning a different number of input words. Recurrent Neural Networks (RNN) Traditional RNNs consist of an input vector, a history vector and an output vector. Based on the representation of the current input word and the previous history vector, a new history is computed. Then, an output is predicted (e.g., using a softmax layer). In contrast to most traditional RNN architectures, we use the RNN for sentence modeling, i.e., we predict an output vector only after processing the whole sentence and not after each word. Training is performed using backpropagation through time BIBREF9 which unfolds the recurrent computations of the history vector for a certain number of time steps. To avoid exploding gradients, we use gradient clipping with a threshold of 10 BIBREF10 . Input of the RNNs Initial experiments showed that using trigrams as input instead of single words led to superior results. Hence, at timestep INLINEFORM0 we do not only give word INLINEFORM1 to the model but the trigram INLINEFORM2 by concatenating the corresponding word embeddings. Connectionist Bi-directional RNNs Especially for relation classification, the processing of the relation arguments might be easier with knowledge of the succeeding words. Therefore in bi-directional RNNs, not only a history vector of word INLINEFORM0 is regarded but also a future vector. This leads to the following conditioned probability for the history INLINEFORM1 at time step INLINEFORM2 : DISPLAYFORM0 Thus, the network can be split into three parts: a forward pass which processes the original sentence word by word (Equation EQREF6 ); a backward pass which processes the reversed sentence word by word (Equation ); and a combination of both (Equation ). All three parts are trained jointly. This is also depicted in Figure FIGREF7 . Combining forward and backward pass by adding their hidden layer is similar to BIBREF7 . We, however, also add a connection to the previous combined hidden layer with weight INLINEFORM0 to be able to include all intermediate hidden layers into the final decision of the network (see Equation ). We call this “connectionist bi-directional RNN”. 
In our experiments, we compare this RNN with uni-directional RNNs and bi-directional RNNs without additional hidden layer connections. Word Representations Words are represented by concatenated vectors: a word embedding and a position feature vector. Pretrained word embeddings. In this study, we used the word2vec toolkit BIBREF11 to train embeddings on an English Wikipedia from May 2014. We only considered words appearing more than 100 times and added a special PADDING token for convolution. This results in an embedding training text of about 485,000 terms and INLINEFORM0 tokens. During model training, the embeddings are updated. Position features. We incorporate randomly initialized position embeddings similar to zeng2014, nguyen and deSantos2015. In our RNN experiments, we investigate different possibilities of integrating position information: position embeddings, position embeddings with entity presence flags (flags indicating whether the current word is one of the relation arguments), and position indicators BIBREF7 . Objective Function: Ranking Loss Ranking. We applied the ranking loss function proposed in deSantos2015 to train our models. It maximizes the distance between the true label INLINEFORM0 and the best competitive label INLINEFORM1 given a data point INLINEFORM2 . The objective function is DISPLAYFORM0 with INLINEFORM0 and INLINEFORM1 being the scores for the classes INLINEFORM2 and INLINEFORM3 respectively. The parameter INLINEFORM4 controls the penalization of the prediction errors and INLINEFORM5 and INLINEFORM6 are margins for the correct and incorrect classes. Following deSantos2015, we set INLINEFORM7 . We do not learn a pattern for the class Other but increase its difference to the best competitive label by using only the second summand in Equation EQREF10 during training. Experiments and Results We used the relation classification dataset of the SemEval 2010 task 8 BIBREF8 . It consists of sentences which have been manually labeled with 19 relations (9 directed relations and one artificial class Other). 8,000 sentences have been distributed as training set and 2,717 sentences served as test set. For evaluation, we applied the official scoring script and report the macro F1 score which also served as the official result of the shared task. RNN and CNN models were implemented with theano BIBREF12 , BIBREF13 . For all our models, we use L2 regularization with a weight of 0.0001. For CNN training, we use mini batches of 25 training examples while we perform stochastic gradient descent for the RNN. The initial learning rates are 0.2 for the CNN and 0.01 for the RNN. We train the models for 10 (CNN) and 50 (RNN) epochs without early stopping. As activation function, we apply tanh for the CNN and capped ReLU for the RNN. For tuning the hyperparameters, we split the training data into two parts: 6.5k (training) and 1.5k (development) sentences. We also tuned the learning rate schedule on dev. Beside of training single models, we also report ensemble results for which we combined the presented single models with a voting process. Performance of CNNs As a baseline system, we implemented a CNN similar to the one described by zeng2014. It consists of a standard convolutional layer with filters with only one window size, followed by a softmax layer. As input it uses the middle context. In contrast to zeng2014, our CNN does not have an additional fully connected hidden layer. 
Therefore, we increased the number of convolutional filters to 1200 to keep the number of parameters comparable. With this, we obtain a baseline result of 73.0. After including 5 dimensional position features, the performance was improved to 78.6 (comparable to 78.9 as reported by zeng2014 without linguistic features). In the next step, we investigate how this result changes if we successively add further features to our CNN: multi-windows for convolution (window sizes: 2,3,4,5 and 300 feature maps each), ranking layer instead of softmax and our proposed extended middle context. Table TABREF12 shows the results. Note that all numbers are produced by CNNs with a comparable number of parameters. We also report F1 for increasing the word embedding dimensionality from 50 to 400. The position embedding dimensionality is 5 in combination with 50 dimensional word embeddings and 35 with 400 dimensional word embeddings. Our results show that especially the ranking layer and the embedding size have an important impact on the performance. Performance of RNNs As a baseline for the RNN models, we apply a uni-directional RNN which predicts the relation after processing the whole sentence. With this model, we achieve an F1 score of 61.2 on the SemEval test set. Afterwards, we investigate the impact of different position features on the performance of uni-directional RNNs (position embeddings, position embeddings concatenated with a flag indicating whether the current word is an entity or not, and position indicators BIBREF7 ). The results indicate that position indicators (i.e. artificial words that indicate the entity presence) perform the best on the SemEval data. We achieve an F1 score of 73.4 with them. However, the difference to using position embeddings with entity flags is not statistically significant. Similar to our CNN experiments, we successively vary the RNN models by using bi-directionality, by adding connections between the hidden layers (“connectionist”), by applying ranking instead of softmax to predict the relation and by increasing the word embedding dimension to 400. The results in Table TABREF14 show that all of these variations lead to statistically significant improvements. Especially the additional hidden layer connections and the integration of the ranking layer have a large impact on the performance. Combination of CNNs and RNNs Finally, we combine our CNN and RNN models using a voting process. For each sentence in the test set, we apply several CNN and RNN models presented in Tables TABREF12 and TABREF14 and predict the class with the most votes. In case of a tie, we pick one of the most frequent classes randomly. The combination achieves an F1 score of 84.9 which is better than the performance of the two NN types alone. It, thus, confirms our assumption that the networks provide complementary information: while the RNN computes a weighted combination of all words in the sentence, the CNN extracts the most informative n-grams for the relation and only considers their resulting activations. Comparison with State of the Art Table TABREF16 shows the results of our models ER-CNN (extended ranking CNN) and R-RNN (ranking RNN) in the context of other state-of-the-art models. Our proposed models obtain state-of-the-art results on the SemEval 2010 task 8 data set without making use of any linguistic features. 
Conclusion In this paper, we investigated different features and architectural choices for convolutional and recurrent neural networks for relation classification without using any linguistic features. For convolutional neural networks, we presented a new context representation for relation classification. Furthermore, we introduced connectionist recurrent neural networks for sentence classification tasks and performed the first experiments with ranking recurrent neural networks. Finally, we showed that even a simple combination of convolutional and recurrent neural networks improved results. With our neural models, we achieved new state-of-the-art results on the SemEval 2010 task 8 benchmark data. Acknowledgments Heike Adel is a recipient of the Google European Doctoral Fellowship in Natural Language Processing and this research is supported by this fellowship. This research was also supported by Deutsche Forschungsgemeinschaft: grant SCHU 2246/4-2. Figure 1: CNN with extended contexts Figure 2: Connectionist bi-directional RNN Table 1: F1 score of CNN and its components, * indicates statisticial significance compared to the result in the line above (z-test, p < 0.05) Table 2: F1 score of RNN and its components, * indicates statisticial significance compared to the result in the line above (z-test, p < 0.05) Table 3: State-of-the-art results for relation classification Query: By how much does their best model outperform the state-of-the-art? JSON Response:
{"answer": "0.8% F1 better than the best state-of-the-art", "evidence": ["Table TABREF16 shows the results of our models ER-CNN (extended ranking CNN) and R-RNN (ranking RNN) in the context of other state-of-the-art models. Our proposed models obtain state-of-the-art results on the SemEval 2010 task 8 data set without making use of any linguistic features."]}
science.qasper_abstractive_qa.208.0
[ { "content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nCombining Recurrent and Convolutional Neural Networks for Relation Classification\n\nThis paper investigates two different neural architectures for the task of relation classification: convolutional neural networks and recurrent neural networks. For both models, we demonstrate the effect of different architectural choices. We present a new context representation for convolutional neural networks for relation classification (extended middle context). Furthermore, we propose connectionist bi-directional recurrent neural networks and introduce ranking loss for their optimization. Finally, we show that combining convolutional and recurrent neural networks using a simple voting scheme is accurate enough to improve results. Our neural models achieve state-of-the-art results on the SemEval 2010 relation classification task.\n\nIntroduction\nRelation classification is the task of assigning sentences with two marked entities to a predefined set of relations. The sentence “We poured the <e1>milk</e1> into the <e2>pumpkin mixture</e2>.”, for example, expresses the relation Entity-Destination(e1,e2). While early research mostly focused on support vector machines or maximum entropy classifiers BIBREF0 , BIBREF1 , recent research showed performance improvements by applying neural networks (NNs) BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 on the benchmark data from SemEval 2010 shared task 8 BIBREF8 .\nThis study investigates two different types of NNs: recurrent neural networks (RNNs) and convolutional neural networks (CNNs) as well as their combination. We make the following contributions:\n(1) We propose extended middle context, a new context representation for CNNs for relation classification. The extended middle context uses all parts of the sentence (the relation arguments, left of the relation arguments, between the arguments, right of the arguments) and pays special attention to the middle part.\n(2) We present connectionist bi-directional RNN models which are especially suited for sentence classification tasks since they combine all intermediate hidden layers for their final decision. Furthermore, the ranking loss function is introduced for the RNN model optimization which has not been investigated in the literature for relation classification before.\n(3) Finally, we combine CNNs and RNNs using a simple voting scheme and achieve new state-of-the-art results on the SemEval 2010 benchmark dataset.\n\nRelated Work\nIn 2010, manually annotated data for relation classification was released in the context of a SemEval shared task BIBREF8 . Shared task participants used, i.a., support vector machines or maximum entropy classifiers BIBREF0 , BIBREF1 . Recently, their results on this data set were outperformed by applying NNs BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 .\nzeng2014 built a CNN based only on the context between the relation arguments and extended it with several lexical features. kim2014 and others used convolutional filters of different sizes for CNNs. nguyen applied this to relation classification and obtained improvements over single filter sizes. deSantos2015 replaced the softmax layer of the CNN with a ranking layer. 
They showed improvements and published the best result so far on the SemEval dataset, to our knowledge.\nsocher used another NN architecture for relation classification: recursive neural networks that built recursive sentence representations based on syntactic parsing. In contrast, zhang investigated a temporal structured RNN with only words as input. They used a bi-directional model with a pooling layer on top.\n\nConvolutional Neural Networks (CNN)\nCNNs perform a discrete convolution on an input matrix with a set of different filters. For NLP tasks, the input matrix represents a sentence: Each column of the matrix stores the word embedding of the corresponding word. By applying a filter with a width of, e.g., three columns, three neighboring words (trigram) are convolved. Afterwards, the results of the convolution are pooled. Following collobertWeston, we perform max-pooling which extracts the maximum value for each filter and, thus, the most informative n-gram for the following steps. Finally, the resulting values are concatenated and used for classifying the relation expressed in the sentence.\n\nConvolutional Layer\nFollowing previous work (e.g., BIBREF5 , BIBREF6 ), we use 2D filters spanning all embedding dimensions. After convolution, a max pooling operation is applied that stores only the highest activation of each filter. We apply filters with different window sizes 2-5 (multi-windows) as in BIBREF5 , i.e. spanning a different number of input words.\n\nRecurrent Neural Networks (RNN)\nTraditional RNNs consist of an input vector, a history vector and an output vector. Based on the representation of the current input word and the previous history vector, a new history is computed. Then, an output is predicted (e.g., using a softmax layer). In contrast to most traditional RNN architectures, we use the RNN for sentence modeling, i.e., we predict an output vector only after processing the whole sentence and not after each word. Training is performed using backpropagation through time BIBREF9 which unfolds the recurrent computations of the history vector for a certain number of time steps. To avoid exploding gradients, we use gradient clipping with a threshold of 10 BIBREF10 .\n\nInput of the RNNs\nInitial experiments showed that using trigrams as input instead of single words led to superior results. Hence, at timestep INLINEFORM0 we do not only give word INLINEFORM1 to the model but the trigram INLINEFORM2 by concatenating the corresponding word embeddings.\n\nConnectionist Bi-directional RNNs\nEspecially for relation classification, the processing of the relation arguments might be easier with knowledge of the succeeding words. Therefore in bi-directional RNNs, not only a history vector of word INLINEFORM0 is regarded but also a future vector. This leads to the following conditioned probability for the history INLINEFORM1 at time step INLINEFORM2 : DISPLAYFORM0 \nThus, the network can be split into three parts: a forward pass which processes the original sentence word by word (Equation EQREF6 ); a backward pass which processes the reversed sentence word by word (Equation ); and a combination of both (Equation ). All three parts are trained jointly. This is also depicted in Figure FIGREF7 .\nCombining forward and backward pass by adding their hidden layer is similar to BIBREF7 . We, however, also add a connection to the previous combined hidden layer with weight INLINEFORM0 to be able to include all intermediate hidden layers into the final decision of the network (see Equation ). 
We call this “connectionist bi-directional RNN”.\nIn our experiments, we compare this RNN with uni-directional RNNs and bi-directional RNNs without additional hidden layer connections.\n\nWord Representations\nWords are represented by concatenated vectors: a word embedding and a position feature vector.\nPretrained word embeddings. In this study, we used the word2vec toolkit BIBREF11 to train embeddings on an English Wikipedia from May 2014. We only considered words appearing more than 100 times and added a special PADDING token for convolution. This results in an embedding training text of about 485,000 terms and INLINEFORM0 tokens. During model training, the embeddings are updated.\nPosition features. We incorporate randomly initialized position embeddings similar to zeng2014, nguyen and deSantos2015. In our RNN experiments, we investigate different possibilities of integrating position information: position embeddings, position embeddings with entity presence flags (flags indicating whether the current word is one of the relation arguments), and position indicators BIBREF7 .\n\nObjective Function: Ranking Loss\nRanking. We applied the ranking loss function proposed in deSantos2015 to train our models. It maximizes the distance between the true label INLINEFORM0 and the best competitive label INLINEFORM1 given a data point INLINEFORM2 . The objective function is DISPLAYFORM0 \nwith INLINEFORM0 and INLINEFORM1 being the scores for the classes INLINEFORM2 and INLINEFORM3 respectively. The parameter INLINEFORM4 controls the penalization of the prediction errors and INLINEFORM5 and INLINEFORM6 are margins for the correct and incorrect classes. Following deSantos2015, we set INLINEFORM7 . We do not learn a pattern for the class Other but increase its difference to the best competitive label by using only the second summand in Equation EQREF10 during training.\n\nExperiments and Results\nWe used the relation classification dataset of the SemEval 2010 task 8 BIBREF8 . It consists of sentences which have been manually labeled with 19 relations (9 directed relations and one artificial class Other). 8,000 sentences have been distributed as training set and 2,717 sentences served as test set. For evaluation, we applied the official scoring script and report the macro F1 score which also served as the official result of the shared task.\nRNN and CNN models were implemented with theano BIBREF12 , BIBREF13 . For all our models, we use L2 regularization with a weight of 0.0001. For CNN training, we use mini batches of 25 training examples while we perform stochastic gradient descent for the RNN. The initial learning rates are 0.2 for the CNN and 0.01 for the RNN. We train the models for 10 (CNN) and 50 (RNN) epochs without early stopping. As activation function, we apply tanh for the CNN and capped ReLU for the RNN. For tuning the hyperparameters, we split the training data into two parts: 6.5k (training) and 1.5k (development) sentences. We also tuned the learning rate schedule on dev.\nBeside of training single models, we also report ensemble results for which we combined the presented single models with a voting process.\n\nPerformance of CNNs\nAs a baseline system, we implemented a CNN similar to the one described by zeng2014. It consists of a standard convolutional layer with filters with only one window size, followed by a softmax layer. As input it uses the middle context. In contrast to zeng2014, our CNN does not have an additional fully connected hidden layer. 
Therefore, we increased the number of convolutional filters to 1200 to keep the number of parameters comparable. With this, we obtain a baseline result of 73.0. After including 5 dimensional position features, the performance was improved to 78.6 (comparable to 78.9 as reported by zeng2014 without linguistic features).\nIn the next step, we investigate how this result changes if we successively add further features to our CNN: multi-windows for convolution (window sizes: 2,3,4,5 and 300 feature maps each), ranking layer instead of softmax and our proposed extended middle context. Table TABREF12 shows the results. Note that all numbers are produced by CNNs with a comparable number of parameters. We also report F1 for increasing the word embedding dimensionality from 50 to 400. The position embedding dimensionality is 5 in combination with 50 dimensional word embeddings and 35 with 400 dimensional word embeddings. Our results show that especially the ranking layer and the embedding size have an important impact on the performance.\n\nPerformance of RNNs\nAs a baseline for the RNN models, we apply a uni-directional RNN which predicts the relation after processing the whole sentence. With this model, we achieve an F1 score of 61.2 on the SemEval test set.\nAfterwards, we investigate the impact of different position features on the performance of uni-directional RNNs (position embeddings, position embeddings concatenated with a flag indicating whether the current word is an entity or not, and position indicators BIBREF7 ). The results indicate that position indicators (i.e. artificial words that indicate the entity presence) perform the best on the SemEval data. We achieve an F1 score of 73.4 with them. However, the difference to using position embeddings with entity flags is not statistically significant.\nSimilar to our CNN experiments, we successively vary the RNN models by using bi-directionality, by adding connections between the hidden layers (“connectionist”), by applying ranking instead of softmax to predict the relation and by increasing the word embedding dimension to 400.\nThe results in Table TABREF14 show that all of these variations lead to statistically significant improvements. Especially the additional hidden layer connections and the integration of the ranking layer have a large impact on the performance.\n\nCombination of CNNs and RNNs\nFinally, we combine our CNN and RNN models using a voting process. For each sentence in the test set, we apply several CNN and RNN models presented in Tables TABREF12 and TABREF14 and predict the class with the most votes. In case of a tie, we pick one of the most frequent classes randomly. The combination achieves an F1 score of 84.9 which is better than the performance of the two NN types alone. It, thus, confirms our assumption that the networks provide complementary information: while the RNN computes a weighted combination of all words in the sentence, the CNN extracts the most informative n-grams for the relation and only considers their resulting activations.\n\nComparison with State of the Art\nTable TABREF16 shows the results of our models ER-CNN (extended ranking CNN) and R-RNN (ranking RNN) in the context of other state-of-the-art models. 
Our proposed models obtain state-of-the-art results on the SemEval 2010 task 8 data set without making use of any linguistic features.\n\nConclusion\nIn this paper, we investigated different features and architectural choices for convolutional and recurrent neural networks for relation classification without using any linguistic features. For convolutional neural networks, we presented a new context representation for relation classification. Furthermore, we introduced connectionist recurrent neural networks for sentence classification tasks and performed the first experiments with ranking recurrent neural networks. Finally, we showed that even a simple combination of convolutional and recurrent neural networks improved results. With our neural models, we achieved new state-of-the-art results on the SemEval 2010 task 8 benchmark data.\n\nAcknowledgments\nHeike Adel is a recipient of the Google European Doctoral Fellowship in Natural Language Processing and this research is supported by this fellowship.\nThis research was also supported by Deutsche Forschungsgemeinschaft: grant SCHU 2246/4-2.\n\nFigure 1: CNN with extended contexts\n\nFigure 2: Connectionist bi-directional RNN\n\nTable 1: F1 score of CNN and its components, * indicates statisticial significance compared to the result in the line above (z-test, p < 0.05)\n\nTable 2: F1 score of RNN and its components, * indicates statisticial significance compared to the result in the line above (z-test, p < 0.05)\n\nTable 3: State-of-the-art results for relation classification\n\nQuery: By how much does their best model outperform the state-of-the-art?\n\nJSON Response:", "role": "user" }, { "content": "{\"answer\": \"0.8% F1 better than the best state-of-the-art\", \"evidence\": [\"Table TABREF16 shows the results of our models ER-CNN (extended ranking CNN) and R-RNN (ranking RNN) in the context of other state-of-the-art models. Our proposed models obtain state-of-the-art results on the SemEval 2010 task 8 data set without making use of any linguistic features.\"]}", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.BACKGROUND AND THE PURPOSE OF THE STUDY: Diabetic neuropathy is the most common diabetic complication that often is accompanied by significant morbidity, mortality and economic burden. The purpose of this study was evaluation of effect of Semelil (ANGIPARSTM), a new herbal drug for treatment of diabetic foot ulcers or diabetic peripheral neuropathy. ABSTRACT.METHODS: In this double blind clinical trial, 49 type 2 diabetes patients with different degrees of neuropathy were evaluated in two groups (ANGIPARSTM and placebo groups). All patients were assessed at the start and 12 weeks after treatment, with laboratory tests, United Kingdom screening test, Michigan neuropathy screening score, Michigan diabetic neuropathy score, vibration perception thresholds, nerve conduction study, monofilament test and visual analog scale. ABSTRACT.RESULTS: Michigan diabetic neuropathy score was decreased notably in ANGIPARSTM group. In the nerve conduction study, appropriate meaningful changes were observed in the distal latency and amplitude in the motor Ulnar nerve in ANGIPARSTM group. ABSTRACT.CONCLUSION: The results showed limited evidence of efficacy of ANGIPARSTM in diabetic neuropathy treatment and more studies with a larger sample size and longer duration are required. Extracted Data:
[["Semelil (ANGIPARSTM)", "Placebo", "Michigan diabetic neuropathy score", "significantly decreased", "Michigan diabetic neuropathy score was decreased notably in ANGIPARSTM group."]]
science.evidence_inference.152
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND AND THE PURPOSE OF THE STUDY: Diabetic neuropathy is the most common diabetic complication that often is accompanied by significant morbidity, mortality and economic burden. The purpose of this study was evaluation of effect of Semelil (ANGIPARSTM), a new herbal drug for treatment of diabetic foot ulcers or diabetic peripheral neuropathy.\n\nABSTRACT.METHODS: In this double blind clinical trial, 49 type 2 diabetes patients with different degrees of neuropathy were evaluated in two groups (ANGIPARSTM and placebo groups). All patients were assessed at the start and 12 weeks after treatment, with laboratory tests, United Kingdom screening test, Michigan neuropathy screening score, Michigan diabetic neuropathy score, vibration perception thresholds, nerve conduction study, monofilament test and visual analog scale.\n\nABSTRACT.RESULTS: Michigan diabetic neuropathy score was decreased notably in ANGIPARSTM group. In the nerve conduction study, appropriate meaningful changes were observed in the distal latency and amplitude in the motor Ulnar nerve in ANGIPARSTM group.\n\nABSTRACT.CONCLUSION: The results showed limited evidence of efficacy of ANGIPARSTM in diabetic neuropathy treatment and more studies with a larger sample size and longer duration are required.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"Semelil (ANGIPARSTM)\", \"Placebo\", \"Michigan diabetic neuropathy score\", \"significantly decreased\", \"Michigan diabetic neuropathy score was decreased notably in ANGIPARSTM group.\"]]", "role": "assistant" } ]
science.discomat_te
| Compound | Glass sample | Glass sample | Glass sample | Glass sample | Glass sample | Glass sample | Glass sample | Glass sample | | Compound | A | A | ANa | ANa | ACs | ACs | ANaV | ANaV | | | Nominal (wt%) | XRF-EDS (wt%) | Nominal (wt%) | XRF-EDS (wt%) | Nominal (wt%) | XRF-EDS (wt%) | Nominal (wt%) | XRF-EDS (wt%) | | SiO2 | 50 | 51+-0.5 | 47.5 | 48+-0.5 | 43.63 | 44+-1 | 45 | 44.8+-0.5 | | B2O3 | 16 | 16+-0.5 | 15.2 | 15+-0.5 | 13.96 | - | 14.4 | 14+-0.5 | | Na2O | 16 | 14+-0.5 | 15.2 | 14.7+-0.5 | 13.96 | 14+-2 | 14.4 | 14.6+-0.5 | | Al2O3 | 7 | 7.4+-0.5 | 6.65 | 6.5+-0.5 | 6.11 | 6+-0.5 | 6.3 | 6.5+-0.5 | | CaO | 5 | 5.4+-0.4 | 4.75 | 5+-0.4 | 4.36 | 4+-0.4 | 4.5 | 4.8+-0.4 | | Fe2O3 | 6 | 6.5+-0.4 | 5.7 | 5.2+-0.4 | 5.24 | 5.2+-0.4 | 5.4 | 5.2+-0.4 | | | | | | | | | | | | Added compounds | Added compounds | Added compounds | Added compounds | Added compounds | Added compounds | Added compounds | Added compounds | Added compounds | | Na2SO4 | - | | 5 | 0.7+-0.1 (SO3) | - | | 5 | 0.9+-0.1 (SO3) | | Cs2SO4 | - | | - | | 12.74 | 0.55+-0.1 (SO3) | - | | | V2O5 | - | | - | | - | | 5 | 4.9+-0.2 | | | | | | | | | | | | Structural parameters | Structural parameters | Structural parameters | Structural parameters | Structural parameters | Structural parameters | Structural parameters | Structural parameters | Structural parameters | | R | 1.12+-0.05 | 1.12+-0.05 | 1.298+-0.05 | 1.298+-0.05 | 1.298+-0.1 | 1.298+-0.1 | 1.123+-0.05 | 1.123+-0.05 | | K | 3.62+-0.05 | 3.62+-0.05 | 3.62+-0.05 | 3.62+-0.05 | 3.62+-0.1 | 3.62+-0.1 | 3.62+-0.05 | 3.62+-0.05 | | R * =3K/16 | 0.726+-0.01 | 0.726+-0.01 | 0.726+-0.01 | 0.726+-0.01 | 0.726+-0.02 | 0.726+-0.02 | 0.726+-0.01 | 0.726+-0.01 | | r | 2.56+-0.001 | 2.56+-0.001 | 2.56+-0.001 | 2.56+-0.001 | 2.62+-0.001 | 2.62+-0.001 | 2.56+-0.001 | 2.56+-0.001 | Caption: Nominal and measured composition and density of the six- and seven-oxide glass samples produced and analysed in this work Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance: {"value": "xx", "type": "Other"} {"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"} Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line. Output:
{"value": "50", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "A"} {"value": "51", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "A"} {"value": "47.5", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "ANa"} {"value": "48", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "ANa"} {"value": "43.63", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "ACs"} {"value": "44", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "ACs"} {"value": "45", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "ANaV"} {"value": "44.8", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "ANaV"} {"value": "16", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "wt", "material": "A"} {"value": "16", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "wt", "material": "A"} {"value": "15.2", "type": "Other"} {"value": "15", "type": "Other"} {"value": "13.96", "type": "Other"} {"value": "14.4", "type": "Other"} {"value": "14", "type": "Other"} {"value": "16", "type": "Other"} {"value": "14", "type": "Other"} {"value": "15.2", "type": "Other"} {"value": "14.7", "type": "Other"} {"value": "13.96", "type": "Other"} {"value": "14", "type": "Other"} {"value": "14.4", "type": "Other"} {"value": "14.6", "type": "Other"} {"value": "7", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "wt", "material": "A"} {"value": "7.4", "type": "Other"} {"value": "6.65", "type": "Other"} {"value": "6.5", "type": "Other"} {"value": "6.11", "type": "Other"} {"value": "6", "type": "Other"} {"value": "6.3", "type": "Other"} {"value": "6.5", "type": "Other"} {"value": "5", "type": "Glass_Compound_Amount", "constituent": "CaO", "unit": "wt", "material": "A"} {"value": "5.4", "type": "Other"} {"value": "4.75", "type": "Other"} {"value": "5", "type": "Other"} {"value": "4.36", "type": "Other"} {"value": "4", "type": "Other"} {"value": "4.5", "type": "Other"} {"value": "4.8", "type": "Other"} {"value": "6", "type": "Glass_Compound_Amount", "constituent": "Fe2O3", "unit": "wt", "material": "A"} {"value": "6.5", "type": "Other"} {"value": "5.7", "type": "Other"} {"value": "5.2", "type": "Other"} {"value": "5.24", "type": "Other"} {"value": "5.2", "type": "Other"} {"value": "5.4", "type": "Glass_Compound_Amount", "constituent": "CaO", "unit": "wt", "material": "A"} {"value": "5.2", "type": "Other"} {"value": "5", "type": "Glass_Compound_Amount", "constituent": "Na2SO4", "unit": "wt", "material": "ANa"} {"value": "0.7", "type": "Other"} {"value": "5", "type": "Glass_Compound_Amount", "constituent": "CaO", "unit": "wt", "material": "ANa"} {"value": "0.9", "type": "Other"} {"value": "12.74", "type": "Glass_Compound_Amount", "constituent": "Cs2SO4", "unit": "wt", "material": "ACs"} {"value": "0.55", "type": "Glass_Compound_Amount", "constituent": "Cs2SO4", "unit": "wt", "material": "ACs"} {"value": "5", "type": "Glass_Compound_Amount", "constituent": "Na2SO4", "unit": "wt", "material": "ANaV"} {"value": "4.9", "type": "Glass_Compound_Amount", "constituent": "V2O5", "unit": "wt", "material": "ANaV"} {"value": "1.12", "type": "Other"} {"value": "1.12", "type": "Other"} {"value": "1.298", "type": "Other"} {"value": "1.298", "type": "Other"} {"value": "1.298", "type": "Other"} {"value": "1.298", "type": "Other"} {"value": "1.123", "type": "Other"} 
{"value": "1.123", "type": "Other"} {"value": "3.62", "type": "Other"} {"value": "3.62", "type": "Other"} {"value": "3.62", "type": "Other"} {"value": "3.62", "type": "Other"} {"value": "3.62", "type": "Other"} {"value": "3.62", "type": "Other"} {"value": "3.62", "type": "Other"} {"value": "3.62", "type": "Other"} {"value": "0.726", "type": "Other"} {"value": "0.726", "type": "Other"} {"value": "0.726", "type": "Other"} {"value": "0.726", "type": "Other"} {"value": "0.726", "type": "Other"} {"value": "0.726", "type": "Other"} {"value": "0.726", "type": "Other"} {"value": "0.726", "type": "Other"} {"value": "2.56", "type": "Other"} {"value": "2.56", "type": "Other"} {"value": "2.56", "type": "Other"} {"value": "2.56", "type": "Other"} {"value": "2.62", "type": "Other"} {"value": "2.62", "type": "Other"} {"value": "2.56", "type": "Other"} {"value": "2.56", "type": "Other"}
science.discomat_te.63
[ { "content": "| Compound | Glass sample | Glass sample | Glass sample | Glass sample | Glass sample | Glass sample | Glass sample | Glass sample |\n| Compound | A | A | ANa | ANa | ACs | ACs | ANaV | ANaV |\n| | Nominal (wt%) | XRF-EDS (wt%) | Nominal (wt%) | XRF-EDS (wt%) | Nominal (wt%) | XRF-EDS (wt%) | Nominal (wt%) | XRF-EDS (wt%) |\n| SiO2 | 50 | 51+-0.5 | 47.5 | 48+-0.5 | 43.63 | 44+-1 | 45 | 44.8+-0.5 |\n| B2O3 | 16 | 16+-0.5 | 15.2 | 15+-0.5 | 13.96 | - | 14.4 | 14+-0.5 |\n| Na2O | 16 | 14+-0.5 | 15.2 | 14.7+-0.5 | 13.96 | 14+-2 | 14.4 | 14.6+-0.5 |\n| Al2O3 | 7 | 7.4+-0.5 | 6.65 | 6.5+-0.5 | 6.11 | 6+-0.5 | 6.3 | 6.5+-0.5 |\n| CaO | 5 | 5.4+-0.4 | 4.75 | 5+-0.4 | 4.36 | 4+-0.4 | 4.5 | 4.8+-0.4 |\n| Fe2O3 | 6 | 6.5+-0.4 | 5.7 | 5.2+-0.4 | 5.24 | 5.2+-0.4 | 5.4 | 5.2+-0.4 |\n| | | | | | | | | |\n| Added compounds | Added compounds | Added compounds | Added compounds | Added compounds | Added compounds | Added compounds | Added compounds | Added compounds |\n| Na2SO4 | - | | 5 | 0.7+-0.1 (SO3) | - | | 5 | 0.9+-0.1 (SO3) |\n| Cs2SO4 | - | | - | | 12.74 | 0.55+-0.1 (SO3) | - | |\n| V2O5 | - | | - | | - | | 5 | 4.9+-0.2 |\n| | | | | | | | | |\n| Structural parameters | Structural parameters | Structural parameters | Structural parameters | Structural parameters | Structural parameters | Structural parameters | Structural parameters | Structural parameters |\n| R | 1.12+-0.05 | 1.12+-0.05 | 1.298+-0.05 | 1.298+-0.05 | 1.298+-0.1 | 1.298+-0.1 | 1.123+-0.05 | 1.123+-0.05 |\n| K | 3.62+-0.05 | 3.62+-0.05 | 3.62+-0.05 | 3.62+-0.05 | 3.62+-0.1 | 3.62+-0.1 | 3.62+-0.05 | 3.62+-0.05 |\n| R * =3K/16 | 0.726+-0.01 | 0.726+-0.01 | 0.726+-0.01 | 0.726+-0.01 | 0.726+-0.02 | 0.726+-0.02 | 0.726+-0.01 | 0.726+-0.01 |\n| r | 2.56+-0.001 | 2.56+-0.001 | 2.56+-0.001 | 2.56+-0.001 | 2.62+-0.001 | 2.62+-0.001 | 2.56+-0.001 | 2.56+-0.001 |\n\nCaption: Nominal and measured composition and density of the six- and seven-oxide glass samples produced and analysed in this work\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. 
Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"50\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"A\"}\n{\"value\": \"51\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"A\"}\n{\"value\": \"47.5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"ANa\"}\n{\"value\": \"48\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"ANa\"}\n{\"value\": \"43.63\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"ACs\"}\n{\"value\": \"44\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"ACs\"}\n{\"value\": \"45\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"ANaV\"}\n{\"value\": \"44.8\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"ANaV\"}\n{\"value\": \"16\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"B2O3\", \"unit\": \"wt\", \"material\": \"A\"}\n{\"value\": \"16\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"wt\", \"material\": \"A\"}\n{\"value\": \"15.2\", \"type\": \"Other\"}\n{\"value\": \"15\", \"type\": \"Other\"}\n{\"value\": \"13.96\", \"type\": \"Other\"}\n{\"value\": \"14.4\", \"type\": \"Other\"}\n{\"value\": \"14\", \"type\": \"Other\"}\n{\"value\": \"16\", \"type\": \"Other\"}\n{\"value\": \"14\", \"type\": \"Other\"}\n{\"value\": \"15.2\", \"type\": \"Other\"}\n{\"value\": \"14.7\", \"type\": \"Other\"}\n{\"value\": \"13.96\", \"type\": \"Other\"}\n{\"value\": \"14\", \"type\": \"Other\"}\n{\"value\": \"14.4\", \"type\": \"Other\"}\n{\"value\": \"14.6\", \"type\": \"Other\"}\n{\"value\": \"7\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"wt\", \"material\": \"A\"}\n{\"value\": \"7.4\", \"type\": \"Other\"}\n{\"value\": \"6.65\", \"type\": \"Other\"}\n{\"value\": \"6.5\", \"type\": \"Other\"}\n{\"value\": \"6.11\", \"type\": \"Other\"}\n{\"value\": \"6\", \"type\": \"Other\"}\n{\"value\": \"6.3\", \"type\": \"Other\"}\n{\"value\": \"6.5\", \"type\": \"Other\"}\n{\"value\": \"5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaO\", \"unit\": \"wt\", \"material\": \"A\"}\n{\"value\": \"5.4\", \"type\": \"Other\"}\n{\"value\": \"4.75\", \"type\": \"Other\"}\n{\"value\": \"5\", \"type\": \"Other\"}\n{\"value\": \"4.36\", \"type\": \"Other\"}\n{\"value\": \"4\", \"type\": \"Other\"}\n{\"value\": \"4.5\", \"type\": \"Other\"}\n{\"value\": \"4.8\", \"type\": \"Other\"}\n{\"value\": \"6\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Fe2O3\", \"unit\": \"wt\", \"material\": \"A\"}\n{\"value\": \"6.5\", \"type\": \"Other\"}\n{\"value\": \"5.7\", \"type\": \"Other\"}\n{\"value\": \"5.2\", \"type\": \"Other\"}\n{\"value\": \"5.24\", \"type\": \"Other\"}\n{\"value\": \"5.2\", \"type\": \"Other\"}\n{\"value\": \"5.4\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaO\", \"unit\": \"wt\", \"material\": \"A\"}\n{\"value\": \"5.2\", \"type\": \"Other\"}\n{\"value\": \"5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2SO4\", \"unit\": \"wt\", \"material\": \"ANa\"}\n{\"value\": \"0.7\", \"type\": \"Other\"}\n{\"value\": \"5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaO\", \"unit\": \"wt\", \"material\": 
\"ANa\"}\n{\"value\": \"0.9\", \"type\": \"Other\"}\n{\"value\": \"12.74\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Cs2SO4\", \"unit\": \"wt\", \"material\": \"ACs\"}\n{\"value\": \"0.55\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Cs2SO4\", \"unit\": \"wt\", \"material\": \"ACs\"}\n{\"value\": \"5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2SO4\", \"unit\": \"wt\", \"material\": \"ANaV\"}\n{\"value\": \"4.9\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"V2O5\", \"unit\": \"wt\", \"material\": \"ANaV\"}\n{\"value\": \"1.12\", \"type\": \"Other\"}\n{\"value\": \"1.12\", \"type\": \"Other\"}\n{\"value\": \"1.298\", \"type\": \"Other\"}\n{\"value\": \"1.298\", \"type\": \"Other\"}\n{\"value\": \"1.298\", \"type\": \"Other\"}\n{\"value\": \"1.298\", \"type\": \"Other\"}\n{\"value\": \"1.123\", \"type\": \"Other\"}\n{\"value\": \"1.123\", \"type\": \"Other\"}\n{\"value\": \"3.62\", \"type\": \"Other\"}\n{\"value\": \"3.62\", \"type\": \"Other\"}\n{\"value\": \"3.62\", \"type\": \"Other\"}\n{\"value\": \"3.62\", \"type\": \"Other\"}\n{\"value\": \"3.62\", \"type\": \"Other\"}\n{\"value\": \"3.62\", \"type\": \"Other\"}\n{\"value\": \"3.62\", \"type\": \"Other\"}\n{\"value\": \"3.62\", \"type\": \"Other\"}\n{\"value\": \"0.726\", \"type\": \"Other\"}\n{\"value\": \"0.726\", \"type\": \"Other\"}\n{\"value\": \"0.726\", \"type\": \"Other\"}\n{\"value\": \"0.726\", \"type\": \"Other\"}\n{\"value\": \"0.726\", \"type\": \"Other\"}\n{\"value\": \"0.726\", \"type\": \"Other\"}\n{\"value\": \"0.726\", \"type\": \"Other\"}\n{\"value\": \"0.726\", \"type\": \"Other\"}\n{\"value\": \"2.56\", \"type\": \"Other\"}\n{\"value\": \"2.56\", \"type\": \"Other\"}\n{\"value\": \"2.56\", \"type\": \"Other\"}\n{\"value\": \"2.56\", \"type\": \"Other\"}\n{\"value\": \"2.62\", \"type\": \"Other\"}\n{\"value\": \"2.62\", \"type\": \"Other\"}\n{\"value\": \"2.56\", \"type\": \"Other\"}\n{\"value\": \"2.56\", \"type\": \"Other\"}", "role": "assistant" } ]
science.qasper_abstractive_qa
Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines. Document Content: A Neural Topic-Attention Model for Medical Term Abbreviation Disambiguation Automated analysis of clinical notes is attracting increasing attention. However, there has not been much work on medical term abbreviation disambiguation. Such abbreviations are abundant, and highly ambiguous, in clinical documents. One of the main obstacles is the lack of large scale, balance labeled data sets. To address the issue, we propose a few-shot learning approach to take advantage of limited labeled data. Specifically, a neural topic-attention model is applied to learn improved contextualized sentence representations for medical term abbreviation disambiguation. Another vital issue is that the existing scarce annotations are noisy and missing. We re-examine and correct an existing dataset for training and collect a test set to evaluate the models fairly especially for rare senses. We train our model on the training set which contains 30 abbreviation terms as categories (on average, 479 samples and 3.24 classes in each term) selected from a public abbreviation disambiguation dataset, and then test on a manually-created balanced dataset (each class in each term has 15 samples). We show that enhancing the sentence representation with topic information improves the performance on small-scale unbalanced training datasets by a large margin, compared to a number of baseline models. Topic-attention Model ELMo ELMo is a new type of word representation introduced by BIBREF16 which considers contextual information captured by pre-trained BiLSTM language models. Similar works like BERT BIBREF23 and BioBERT BIBREF24 also consider context but with deeper structures. Compared with those models, ELMo has less parameters and not easy to be overfitting. We trained our ELMo model on the MIMIC-III corpus. Since some sentences also appear in the test set, one may raise the concern of performance inflation in testing. However, we pre-trained ELMo using the whole corpus of MIMIC in an unsupervised way, so the classification labels were not visible during training. To train the ELMo model, we adapted code from AllenNLP, we set the word embedding dimension to 64 and applied two BiLSTM layers. For all of our experiments that involved with ELMo, we initialized the word embeddings from the last layer of ELMo. Topic-attention We propose a neural topic-attention model for text classification. Our assumption is that the topic information plays an important role in doing the disambiguation given a sentence. For example, the abbreviation of the medical term FISH has two potential senses: general English fish as seafood and the sense of fluorescent in situ hybridization BIBREF17. The former case always goes with the topic of food, allergies while the other one appears in the topic of some examination test reports. In our model, a topic-attention module is applied to add topic information into the final sentence representation. This module calculates the distribution of topic-attention weights on a list of topic vectors. As shown in Figure SECREF5, we took the content vector $c_i$ (from soft-attention BIBREF22) and a topic matrix $T_{topic}=[t_1,t_2,..,t_j]$ (where each $t_i$ is a column vector in the figure and we illustrate four topics) as the inputs to the topic-attention module, and then calculated a weighted summation of the topic vectors. 
The final sentence representation $r_i$ for the sentence $i$ was calculated as the following: where $W_{topic}$ and $b _ { topic }$ are trainable parameters, $\beta _{it}$ represents the topic-attention weights. The final sentence representation is denoted as $r_i$, and $[\cdot ,\cdot ]$ means concatenation. Here $s_i$ is the representation of the sentence which includes the topic information. The final sentence representation $r_i$ is the concatenation of $c_i$ and $s_i$, now we have both context-aware sentence representation and topic-related representation. Then we added a fully-connected layer, followed by a Softmax to do classification with cross-entropy loss function. Topic Matrix To generate the topic matrix $T_{topic}$ as in Equation DISPLAY_FORM9, we propose a convolution-based method to generate topic vectors. We first pre-trained a topic model using the Latent Dirichlet Allocation (LDA) BIBREF25 model on MIMIC-III notes as used by the ELMo model. We set the number of topics to be 50 and ran for 30 iterations. Then we were able to get a list of top $k$ words (we set $k=100$) for each topic. To get the topic vector $t$ for a specific topic: where $e_j$ (column vector) is the pre-trained Doc2vec word embedding for the top word $j$ from the current topic, and $Conv(\cdot )$ indicates a convolutional layer; $maxpool(\cdot )$ is a max pooling layer. We finally reshaped the output $t$ into a 1-dimension vector. Eventually we collected all topic vectors as the topic matrix $T_{topic}$ in Figure SECREF5. Evaluation We did three groups of experiments including two baseline groups and our proposed model. The first group was the TF-IDF features in Section SECREF3 for traditional models. The Naïve Bayesian classifier (NB) has the highest scores among all traditional methods. The second group of experiments used neural models described in Section SECREF3, where LSTM with self attention model (LSTM-self) has competitive results among this group, we choose this model as our base model. Notably, this is the first study that compares and evaluates LSTM-based models on the medical term abbreviation disambiguation task. tableExperimental Results: we report macro F1 in all the experiments. figureTopic-attention Model The last group contains the results of our proposed model with different settings. We used Topic Only setting on top of the base model, where we only added the topic-attention layer, and all the word embeddings were from our pre-trained Doc2vec model and were fine-tuned during back propagation. We could observe that compared with the base model, we have an improvement of 7.36% on accuracy and 9.69% on F1 score. Another model (ELMo Only) is to initialize the word embeddings of the base model using the pre-trained ELMo model, and here no topic information was added. Here we have higher scores than Topic Only, and the accuracy and F1 score increased by 9.87% and 12.26% respectively compared with the base model. We then conducted a combination of both(ELMo+Topic), where the word embeddings from the sentences were computed from the pre-trained ELMo model, and the topic representations were from the pre-trained Doc2vec model. We have a remarkable increase of 12.27% on the accuracy and 14.86% on F1 score. To further compare our proposed topic-attention model and the base model, we report an average of area under the curve(AUC) score among all 30 terms: the base model has an average AUC of 0.7189, and our topic-attention model (ELMo+Topic) achieves an average AUC of 0.8196. 
We provide a case study in Appendix SECREF9. The results show that the model can benefit from ELMo, which considers contextual features, and the topic-attention module, which brings in topic information. We can conclude that under the few-shot learning setting, our proposed model can better capture the sentence features when only limited training samples are explored in a small-scale dataset. Conclusion In this paper, we propose a neural topic-attention model with few-shot learning for medical abbreviation disambiguation task. We also manually cleaned and collected training and testing data for this task, which we release to promote related research in NLP with clinical notes. In addition, we evaluated and compared a comprehensive set of baseline models, some of which had never been applied to the medical term abbreviation disambiguation task. Future work would be to adapt other models like BioBERT or BERT to our proposed topic-attention model. We will also extend the method into other clinical notes tasks such as drug name recognition and ICD-9 code auto-assigning BIBREF26. In addition, other LDA-based approach can be investigated. Dataset Details Figure FIGREF11 shows the histogram for the distribution of term-sense pair sample numbers. The X-axis gives the pair sample numbers and the Y-axis shows the counts. For example, the first bar shows that there are 43 term-sense pairs that have the sample number in the range of 0-50. We also show histogram of class numbers for all terms in Figure FIGREF11. The Y-axis gives the counts while the X-axis gives the number of classes. For instance, the first bar means there are 12 terms contain 2 classes. Testing Dataset Since the training dataset is unbalanced and relatively small, it is hard to split it further into training and testing. Existing work BIBREF6, BIBREF5 conducted fold validation on the datasets and we found there are extreme rare senses for which only one or two samples exist. Besides, we believe that it is better to evaluate on a balanced dataset to determine whether it performs equally well on all classes. While most of the works deal with unbalanced training and testing that which may lead to very high accuracy if there is a dominating class in both training and testing set, the model may have a poor performance in rare testing cases because only a few samples have been seen during training. To be fair to all the classes, a good performance on these rare cases is also required otherwise it may lead to a severe situation. In this work, we are very interested to see how the model works among all senses especially the rare ones. Also, we can prevent the model from trivially predicting the dominant class and achieving high accuracy. As a result, we decided to create a dataset with the same number of samples for each case in the test dataset. Our testing dataset takes MIMIC-III BIBREF18 and PubMed as data sources. Here we are referring to all the notes data from MIMIC-III (NOTEEVENTS table ) and Case Reports articles from PubMed, as these contents are close to medical notes. To create the test set, we first followed the approach by BIBREF6 who applied an auto-generating method. Initially, we built a term-sense dictionary from the training dataset. Then we did matching for the sense words or phrases in the MIMIC-III notes dataset, and once there is a match, we replaced the words or phrases with the abbreviation term . 
We then asked two researchers with a medical background to check the matched samples manually with the following judgment: given this sentence and the abbreviation term and its sense, do you think the content is enough for you to guess the sense and whether this is the right sense? To estimate the agreement on the annotations, we selected a subset which contains 120 samples randomly and let the two annotators annotate individually. We got a Kappa score BIBREF27 of 0.96, which is considered as a near perfect agreement. We then distributed the work to the two annotators, and each of them labeled a half of the dataset, which means each sample was only labeled by a single annotator. For some rare term-sense pairs, we failed to find samples from MIMIC-III. The annotators then searched these senses via PubMed data source manually, aiming to find clinical notes-like sentences. They picked good sentences from these results as testing samples where the keywords exist and the content is informative. For those senses that are extremely rare, we let the annotators create sentences in the clinical note style as testing samples according to their experiences. Eventually, we have a balanced testing dataset, where each term-sense pair has around 15 samples for testing (on average, each pair has 14.56 samples and the median sample number is 15), and a comparison with training dataset is shown in Figure FIGREF11. Due to the difficulty for collecting the testing dataset, we decided to only collect for a random selection of 30 terms. On average, it took few hours to generate testing samples for each abbreviation term per researcher. Case Study We now select two representative terms AC, IM and plot their receiver operating characteristic(ROC) curves. The term has a relatively large number of classes and the second one has extremely unbalanced training samples. We show the details in Table TABREF16. We have 8 classes in term AC. Figure FIGREF15(a) illustrates the results of our best performed model and Figure FIGREF15(b) shows the results of the base model. The accuracy and F1 score have an improvement from 0.3898 and 0.2830 to 0.4915 and 0.4059 respectively. Regarding the rare senses (for example, class 0, 1, 4 and 6), we can observe an increase in the ROC areas. Class 6 has an obvious improvement from 0.75 to 1.00. Such improvements in the rare senses make a huge difference in the reported average accuracy and F1 score, since we have a nearly equal number of samples for each class in the testing data. Similarly, we show the plots for IM term in Figure FIGREF15(c) and FIGREF15(d). IM has only two classes, but they are very unbalanced in training set, as shown in Table TABREF16. The accuracy and F1 scores improved from 0.6667 and 0.6250 to 0.8667 and 0.8667 respectively. We observe improvements in the ROC areas for both classes. This observation further shows that our model is more sensitive to all the class samples compared to the base model, even for the terms that have only a few samples in the training set. Again, by plotting the ROC curves and comparing AUC areas, we show that our model, which applies ELMo and topic-attention, has a better representation ability under the setting of few-shot learning. Table 1: Examples showing the abbreviation term MR: MR from Note 1 is the abbreviation of ‘mental retardation’. 
One could easily misinterpret it as ‘mitral regurgitation’which may lead to further imaging studies with subsequent delaying or avoidance of non-urgent interventions due to higher risk of complications. In Note 2, MR means ‘mitral regurgitation’[6]. Figure 3: The term-sense pair count distribution histogram among 93 pairs. Figure 5: ROC curve plots on two selected terms: AC and IM. We compare results of the base model and our best model. Here we plot the ROC curve for each class. Query: How big are improvements of small-scale unbalanced datasets when sentence representation is enhanced with topic information? JSON Response:
{"answer": "it has 0.024 improvement in accuracy comparing to ELMO Only and 0.006 improvement in F1 score comparing to ELMO Only too", "evidence": ["The last group contains the results of our proposed model with different settings. We used Topic Only setting on top of the base model, where we only added the topic-attention layer, and all the word embeddings were from our pre-trained Doc2vec model and were fine-tuned during back propagation. We could observe that compared with the base model, we have an improvement of 7.36% on accuracy and 9.69% on F1 score. Another model (ELMo Only) is to initialize the word embeddings of the base model using the pre-trained ELMo model, and here no topic information was added. Here we have higher scores than Topic Only, and the accuracy and F1 score increased by 9.87% and 12.26% respectively compared with the base model. We then conducted a combination of both(ELMo+Topic), where the word embeddings from the sentences were computed from the pre-trained ELMo model, and the topic representations were from the pre-trained Doc2vec model. We have a remarkable increase of 12.27% on the accuracy and 14.86% on F1 score."]}
science.qasper_abstractive_qa.245.0
[ { "content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nA Neural Topic-Attention Model for Medical Term Abbreviation Disambiguation\n\nAutomated analysis of clinical notes is attracting increasing attention. However, there has not been much work on medical term abbreviation disambiguation. Such abbreviations are abundant, and highly ambiguous, in clinical documents. One of the main obstacles is the lack of large scale, balance labeled data sets. To address the issue, we propose a few-shot learning approach to take advantage of limited labeled data. Specifically, a neural topic-attention model is applied to learn improved contextualized sentence representations for medical term abbreviation disambiguation. Another vital issue is that the existing scarce annotations are noisy and missing. We re-examine and correct an existing dataset for training and collect a test set to evaluate the models fairly especially for rare senses. We train our model on the training set which contains 30 abbreviation terms as categories (on average, 479 samples and 3.24 classes in each term) selected from a public abbreviation disambiguation dataset, and then test on a manually-created balanced dataset (each class in each term has 15 samples). We show that enhancing the sentence representation with topic information improves the performance on small-scale unbalanced training datasets by a large margin, compared to a number of baseline models.\n\nTopic-attention Model\nELMo ELMo is a new type of word representation introduced by BIBREF16 which considers contextual information captured by pre-trained BiLSTM language models. Similar works like BERT BIBREF23 and BioBERT BIBREF24 also consider context but with deeper structures. Compared with those models, ELMo has less parameters and not easy to be overfitting. We trained our ELMo model on the MIMIC-III corpus. Since some sentences also appear in the test set, one may raise the concern of performance inflation in testing. However, we pre-trained ELMo using the whole corpus of MIMIC in an unsupervised way, so the classification labels were not visible during training. To train the ELMo model, we adapted code from AllenNLP, we set the word embedding dimension to 64 and applied two BiLSTM layers. For all of our experiments that involved with ELMo, we initialized the word embeddings from the last layer of ELMo.\nTopic-attention We propose a neural topic-attention model for text classification. Our assumption is that the topic information plays an important role in doing the disambiguation given a sentence. For example, the abbreviation of the medical term FISH has two potential senses: general English fish as seafood and the sense of fluorescent in situ hybridization BIBREF17. The former case always goes with the topic of food, allergies while the other one appears in the topic of some examination test reports. In our model, a topic-attention module is applied to add topic information into the final sentence representation. This module calculates the distribution of topic-attention weights on a list of topic vectors. As shown in Figure SECREF5, we took the content vector $c_i$ (from soft-attention BIBREF22) and a topic matrix $T_{topic}=[t_1,t_2,..,t_j]$ (where each $t_i$ is a column vector in the figure and we illustrate four topics) as the inputs to the topic-attention module, and then calculated a weighted summation of the topic vectors. 
The final sentence representation $r_i$ for the sentence $i$ was calculated as the following:\nwhere $W_{topic}$ and $b _ { topic }$ are trainable parameters, $\\beta _{it}$ represents the topic-attention weights. The final sentence representation is denoted as $r_i$, and $[\\cdot ,\\cdot ]$ means concatenation. Here $s_i$ is the representation of the sentence which includes the topic information. The final sentence representation $r_i$ is the concatenation of $c_i$ and $s_i$, now we have both context-aware sentence representation and topic-related representation. Then we added a fully-connected layer, followed by a Softmax to do classification with cross-entropy loss function.\nTopic Matrix To generate the topic matrix $T_{topic}$ as in Equation DISPLAY_FORM9, we propose a convolution-based method to generate topic vectors. We first pre-trained a topic model using the Latent Dirichlet Allocation (LDA) BIBREF25 model on MIMIC-III notes as used by the ELMo model. We set the number of topics to be 50 and ran for 30 iterations. Then we were able to get a list of top $k$ words (we set $k=100$) for each topic. To get the topic vector $t$ for a specific topic:\nwhere $e_j$ (column vector) is the pre-trained Doc2vec word embedding for the top word $j$ from the current topic, and $Conv(\\cdot )$ indicates a convolutional layer; $maxpool(\\cdot )$ is a max pooling layer. We finally reshaped the output $t$ into a 1-dimension vector. Eventually we collected all topic vectors as the topic matrix $T_{topic}$ in Figure SECREF5.\n\nEvaluation\nWe did three groups of experiments including two baseline groups and our proposed model. The first group was the TF-IDF features in Section SECREF3 for traditional models. The Naïve Bayesian classifier (NB) has the highest scores among all traditional methods. The second group of experiments used neural models described in Section SECREF3, where LSTM with self attention model (LSTM-self) has competitive results among this group, we choose this model as our base model. Notably, this is the first study that compares and evaluates LSTM-based models on the medical term abbreviation disambiguation task.\ntableExperimental Results: we report macro F1 in all the experiments. figureTopic-attention Model\nThe last group contains the results of our proposed model with different settings. We used Topic Only setting on top of the base model, where we only added the topic-attention layer, and all the word embeddings were from our pre-trained Doc2vec model and were fine-tuned during back propagation. We could observe that compared with the base model, we have an improvement of 7.36% on accuracy and 9.69% on F1 score. Another model (ELMo Only) is to initialize the word embeddings of the base model using the pre-trained ELMo model, and here no topic information was added. Here we have higher scores than Topic Only, and the accuracy and F1 score increased by 9.87% and 12.26% respectively compared with the base model. We then conducted a combination of both(ELMo+Topic), where the word embeddings from the sentences were computed from the pre-trained ELMo model, and the topic representations were from the pre-trained Doc2vec model. We have a remarkable increase of 12.27% on the accuracy and 14.86% on F1 score.\nTo further compare our proposed topic-attention model and the base model, we report an average of area under the curve(AUC) score among all 30 terms: the base model has an average AUC of 0.7189, and our topic-attention model (ELMo+Topic) achieves an average AUC of 0.8196. 
We provide a case study in Appendix SECREF9. The results show that the model can benefit from ELMo, which considers contextual features, and the topic-attention module, which brings in topic information. We can conclude that under the few-shot learning setting, our proposed model can better capture the sentence features when only limited training samples are explored in a small-scale dataset.\n\nConclusion\nIn this paper, we propose a neural topic-attention model with few-shot learning for medical abbreviation disambiguation task. We also manually cleaned and collected training and testing data for this task, which we release to promote related research in NLP with clinical notes. In addition, we evaluated and compared a comprehensive set of baseline models, some of which had never been applied to the medical term abbreviation disambiguation task. Future work would be to adapt other models like BioBERT or BERT to our proposed topic-attention model. We will also extend the method into other clinical notes tasks such as drug name recognition and ICD-9 code auto-assigning BIBREF26. In addition, other LDA-based approach can be investigated.\n\nDataset Details\nFigure FIGREF11 shows the histogram for the distribution of term-sense pair sample numbers. The X-axis gives the pair sample numbers and the Y-axis shows the counts. For example, the first bar shows that there are 43 term-sense pairs that have the sample number in the range of 0-50.\nWe also show histogram of class numbers for all terms in Figure FIGREF11. The Y-axis gives the counts while the X-axis gives the number of classes. For instance, the first bar means there are 12 terms contain 2 classes.\n\nTesting Dataset\nSince the training dataset is unbalanced and relatively small, it is hard to split it further into training and testing. Existing work BIBREF6, BIBREF5 conducted fold validation on the datasets and we found there are extreme rare senses for which only one or two samples exist. Besides, we believe that it is better to evaluate on a balanced dataset to determine whether it performs equally well on all classes. While most of the works deal with unbalanced training and testing that which may lead to very high accuracy if there is a dominating class in both training and testing set, the model may have a poor performance in rare testing cases because only a few samples have been seen during training. To be fair to all the classes, a good performance on these rare cases is also required otherwise it may lead to a severe situation. In this work, we are very interested to see how the model works among all senses especially the rare ones. Also, we can prevent the model from trivially predicting the dominant class and achieving high accuracy. As a result, we decided to create a dataset with the same number of samples for each case in the test dataset.\nOur testing dataset takes MIMIC-III BIBREF18 and PubMed as data sources. Here we are referring to all the notes data from MIMIC-III (NOTEEVENTS table ) and Case Reports articles from PubMed, as these contents are close to medical notes. To create the test set, we first followed the approach by BIBREF6 who applied an auto-generating method. Initially, we built a term-sense dictionary from the training dataset. Then we did matching for the sense words or phrases in the MIMIC-III notes dataset, and once there is a match, we replaced the words or phrases with the abbreviation term . 
We then asked two researchers with a medical background to check the matched samples manually with the following judgment: given this sentence and the abbreviation term and its sense, do you think the content is enough for you to guess the sense and whether this is the right sense? To estimate the agreement on the annotations, we selected a subset which contains 120 samples randomly and let the two annotators annotate individually. We got a Kappa score BIBREF27 of 0.96, which is considered as a near perfect agreement. We then distributed the work to the two annotators, and each of them labeled a half of the dataset, which means each sample was only labeled by a single annotator. For some rare term-sense pairs, we failed to find samples from MIMIC-III. The annotators then searched these senses via PubMed data source manually, aiming to find clinical notes-like sentences. They picked good sentences from these results as testing samples where the keywords exist and the content is informative. For those senses that are extremely rare, we let the annotators create sentences in the clinical note style as testing samples according to their experiences. Eventually, we have a balanced testing dataset, where each term-sense pair has around 15 samples for testing (on average, each pair has 14.56 samples and the median sample number is 15), and a comparison with training dataset is shown in Figure FIGREF11. Due to the difficulty for collecting the testing dataset, we decided to only collect for a random selection of 30 terms. On average, it took few hours to generate testing samples for each abbreviation term per researcher.\n\nCase Study\nWe now select two representative terms AC, IM and plot their receiver operating characteristic(ROC) curves. The term has a relatively large number of classes and the second one has extremely unbalanced training samples. We show the details in Table TABREF16. We have 8 classes in term AC. Figure FIGREF15(a) illustrates the results of our best performed model and Figure FIGREF15(b) shows the results of the base model. The accuracy and F1 score have an improvement from 0.3898 and 0.2830 to 0.4915 and 0.4059 respectively. Regarding the rare senses (for example, class 0, 1, 4 and 6), we can observe an increase in the ROC areas. Class 6 has an obvious improvement from 0.75 to 1.00. Such improvements in the rare senses make a huge difference in the reported average accuracy and F1 score, since we have a nearly equal number of samples for each class in the testing data. Similarly, we show the plots for IM term in Figure FIGREF15(c) and FIGREF15(d). IM has only two classes, but they are very unbalanced in training set, as shown in Table TABREF16. The accuracy and F1 scores improved from 0.6667 and 0.6250 to 0.8667 and 0.8667 respectively. We observe improvements in the ROC areas for both classes. This observation further shows that our model is more sensitive to all the class samples compared to the base model, even for the terms that have only a few samples in the training set. Again, by plotting the ROC curves and comparing AUC areas, we show that our model, which applies ELMo and topic-attention, has a better representation ability under the setting of few-shot learning.\n\nTable 1: Examples showing the abbreviation term MR: MR from Note 1 is the abbreviation of ‘mental retardation’. 
One could easily misinterpret it as ‘mitral regurgitation’which may lead to further imaging studies with subsequent delaying or avoidance of non-urgent interventions due to higher risk of complications. In Note 2, MR means ‘mitral regurgitation’[6].\n\nFigure 3: The term-sense pair count distribution histogram among 93 pairs.\n\nFigure 5: ROC curve plots on two selected terms: AC and IM. We compare results of the base model and our best model. Here we plot the ROC curve for each class.\n\nQuery: How big are improvements of small-scale unbalanced datasets when sentence representation is enhanced with topic information?\n\nJSON Response:", "role": "user" }, { "content": "{\"answer\": \"it has 0.024 improvement in accuracy comparing to ELMO Only and 0.006 improvement in F1 score comparing to ELMO Only too\", \"evidence\": [\"The last group contains the results of our proposed model with different settings. We used Topic Only setting on top of the base model, where we only added the topic-attention layer, and all the word embeddings were from our pre-trained Doc2vec model and were fine-tuned during back propagation. We could observe that compared with the base model, we have an improvement of 7.36% on accuracy and 9.69% on F1 score. Another model (ELMo Only) is to initialize the word embeddings of the base model using the pre-trained ELMo model, and here no topic information was added. Here we have higher scores than Topic Only, and the accuracy and F1 score increased by 9.87% and 12.26% respectively compared with the base model. We then conducted a combination of both(ELMo+Topic), where the word embeddings from the sentences were computed from the pre-trained ELMo model, and the topic representations were from the pre-trained Doc2vec model. We have a remarkable increase of 12.27% on the accuracy and 14.86% on F1 score.\"]}", "role": "assistant" } ]
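The abbreviation-disambiguation example above describes a topic-attention module that combines a soft-attention content vector c_i with a weighted sum of LDA-derived topic vectors and classifies the concatenation r_i = [c_i, s_i]. The excerpt names the trainable parameters W_topic and b_topic but does not give the exact scoring function behind the weights beta, so the bilinear score in the sketch below, along with every dimension, class count, and variable name, is an illustrative assumption rather than the authors' implementation.

```python
# Minimal PyTorch sketch of a topic-attention classifier in the spirit of the
# example above. Assumptions: bilinear attention score c^T W t + b, 50 topics
# (matching the LDA setup in the excerpt), and made-up feature sizes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopicAttention(nn.Module):
    def __init__(self, content_dim: int, topic_dim: int, num_classes: int):
        super().__init__()
        # Trainable parameters named after W_topic / b_topic in the excerpt.
        self.w_topic = nn.Parameter(torch.randn(content_dim, topic_dim) * 0.01)
        self.b_topic = nn.Parameter(torch.zeros(1))
        self.classifier = nn.Linear(content_dim + topic_dim, num_classes)

    def forward(self, c: torch.Tensor, topics: torch.Tensor) -> torch.Tensor:
        # c:      (batch, content_dim)    soft-attention content vector c_i
        # topics: (num_topics, topic_dim) rows are the topic vectors t_j
        scores = c @ self.w_topic @ topics.t() + self.b_topic  # (batch, num_topics)
        beta = F.softmax(scores, dim=-1)                       # topic-attention weights
        s = beta @ topics                                      # weighted sum of topic vectors
        r = torch.cat([c, s], dim=-1)                          # final sentence representation
        return self.classifier(r)                              # logits for cross-entropy loss

# Toy usage with assumed sizes: batch of 4 sentences, 50 topics, 8 senses.
model = TopicAttention(content_dim=128, topic_dim=64, num_classes=8)
logits = model(torch.randn(4, 128), torch.randn(50, 64))
print(logits.shape)  # torch.Size([4, 8])
```

The same skeleton also shows why the module is cheap to bolt onto a pretrained encoder: only W_topic, b_topic, and the final classifier add new parameters, while the topic vectors themselves come from a fixed LDA plus Doc2vec pipeline.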
science.biored_ner
Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]. Ensure the output is strictly the JSON object. Source Text: Regulatory network of inflammation downstream of proteinase-activated receptors. BACKGROUND: Protease-activated receptors (PAR) are present in the urinary bladder, and their expression is altered in response to inflammation. PARs are a unique class of G protein-coupled that carry their own ligands, which remain cryptic until unmasked by proteolytic cleavage. Although the canonical signal transduction pathway downstream of PAR activation and coupling with various G proteins is known and leads to the rapid transcription of genes involved in inflammation, the effect of PAR activation on the downstream transcriptome is unknown. We have shown that intravesical administration of PAR-activating peptides leads to an inflammatory reaction characterized by edema and granulocyte infiltration. Moreover, the inflammatory response to intravesical instillation of known pro-inflammatory stimuli such as E. coli lipopolysaccharide (LPS), substance P (SP), and antigen was strongly attenuated by PAR1- and to a lesser extent by PAR2-deficiency. RESULTS: Here, cDNA array experiments determined inflammatory genes whose expression is dependent on PAR1 activation. For this purpose, we compared the alteration in gene expression in wild type and PAR1-/- mice induced by classical pro-inflammatory stimuli (LPS, SP, and antigen). 75 transcripts were considered to be dependent on PAR-1 activation and further annotated in silico by Ingenuity Pathways Analysis (IPA) and gene ontology (GO). Selected transcripts were target validated by quantitative PCR (Q-PCR). Among PAR1-dependent transcripts, the following have been implicated in the inflammatory process: b2m, ccl7, cd200, cd63, cdbpd, cfl1, dusp1, fkbp1a, fth1, hspb1, marcksl1, mmp2, myo5a, nfkbia, pax1, plaur, ppia, ptpn1, ptprcap, s100a10, sim2, and tnfaip2. However, a balanced response to signals of injury requires a transient cellular activation of a panel of genes together with inhibitory systems that temper the overwhelming inflammation. In this context, the activation of genes such as dusp1 and nfkbia seems to counter-balance the inflammatory response to PAR activation by limiting prolonged activation of p38 MAPK and increased cytokine production. In contrast, transcripts such as arf6 and dcnt1 that are involved in the mechanism of PAR re-sensitization would tend to perpetuate the inflammatory reaction in response to common pro-inflammatory stimuli. CONCLUSION: The combination of cDNA array results and genomic networks reveals an overriding participation of PAR1 in bladder inflammation, provides a working model for the involvement of downstream signaling, and evokes testable hypotheses regarding the transcriptome downstream of PAR1 activation. It remains to be determined whether or not mechanisms targeting PAR1 gene silencing or PAR1 blockade will ameliorate the clinical manifestation of cystitis.
{"CellLine": [], "Chemical": ["SP", "substance P", "lipopolysaccharide", "LPS"], "Disease": ["inflammation", "edema", "bladder inflammation", "inflammatory", "cystitis"], "Gene": ["cfl1", "PAR", "cdbpd", "mmp2", "marcksl1", "arf6", "s100a10", "plaur", "myo5a", "ccl7", "PAR-1", "pax1", "PARs", "ppia", "tnfaip2", "ptprcap", "proteinase-activated receptors", "cd200", "b2m", "PAR1", "fth1", "fkbp1a", "PAR2", "cd63", "nfkbia", "hspb1", "ptpn1", "sim2", "dcnt1", "Protease-activated receptors", "dusp1", "p38 MAPK"], "Species": ["mice", "E. coli"], "Variant": []}
science.biored_ner.76
[ { "content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nRegulatory network of inflammation downstream of proteinase-activated receptors. BACKGROUND: Protease-activated receptors (PAR) are present in the urinary bladder, and their expression is altered in response to inflammation. PARs are a unique class of G protein-coupled that carry their own ligands, which remain cryptic until unmasked by proteolytic cleavage. Although the canonical signal transduction pathway downstream of PAR activation and coupling with various G proteins is known and leads to the rapid transcription of genes involved in inflammation, the effect of PAR activation on the downstream transcriptome is unknown. We have shown that intravesical administration of PAR-activating peptides leads to an inflammatory reaction characterized by edema and granulocyte infiltration. Moreover, the inflammatory response to intravesical instillation of known pro-inflammatory stimuli such as E. coli lipopolysaccharide (LPS), substance P (SP), and antigen was strongly attenuated by PAR1- and to a lesser extent by PAR2-deficiency. RESULTS: Here, cDNA array experiments determined inflammatory genes whose expression is dependent on PAR1 activation. For this purpose, we compared the alteration in gene expression in wild type and PAR1-/- mice induced by classical pro-inflammatory stimuli (LPS, SP, and antigen). 75 transcripts were considered to be dependent on PAR-1 activation and further annotated in silico by Ingenuity Pathways Analysis (IPA) and gene ontology (GO). Selected transcripts were target validated by quantitative PCR (Q-PCR). Among PAR1-dependent transcripts, the following have been implicated in the inflammatory process: b2m, ccl7, cd200, cd63, cdbpd, cfl1, dusp1, fkbp1a, fth1, hspb1, marcksl1, mmp2, myo5a, nfkbia, pax1, plaur, ppia, ptpn1, ptprcap, s100a10, sim2, and tnfaip2. However, a balanced response to signals of injury requires a transient cellular activation of a panel of genes together with inhibitory systems that temper the overwhelming inflammation. In this context, the activation of genes such as dusp1 and nfkbia seems to counter-balance the inflammatory response to PAR activation by limiting prolonged activation of p38 MAPK and increased cytokine production. In contrast, transcripts such as arf6 and dcnt1 that are involved in the mechanism of PAR re-sensitization would tend to perpetuate the inflammatory reaction in response to common pro-inflammatory stimuli. CONCLUSION: The combination of cDNA array results and genomic networks reveals an overriding participation of PAR1 in bladder inflammation, provides a working model for the involvement of downstream signaling, and evokes testable hypotheses regarding the transcriptome downstream of PAR1 activation. 
It remains to be determined whether or not mechanisms targeting PAR1 gene silencing or PAR1 blockade will ameliorate the clinical manifestation of cystitis.", "role": "user" }, { "content": "{\"CellLine\": [], \"Chemical\": [\"SP\", \"substance P\", \"lipopolysaccharide\", \"LPS\"], \"Disease\": [\"inflammation\", \"edema\", \"bladder inflammation\", \"inflammatory\", \"cystitis\"], \"Gene\": [\"cfl1\", \"PAR\", \"cdbpd\", \"mmp2\", \"marcksl1\", \"arf6\", \"s100a10\", \"plaur\", \"myo5a\", \"ccl7\", \"PAR-1\", \"pax1\", \"PARs\", \"ppia\", \"tnfaip2\", \"ptprcap\", \"proteinase-activated receptors\", \"cd200\", \"b2m\", \"PAR1\", \"fth1\", \"fkbp1a\", \"PAR2\", \"cd63\", \"nfkbia\", \"hspb1\", \"ptpn1\", \"sim2\", \"dcnt1\", \"Protease-activated receptors\", \"dusp1\", \"p38 MAPK\"], \"Species\": [\"mice\", \"E. coli\"], \"Variant\": []}", "role": "assistant" } ]
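The entity-extraction rows in this dump (one above, another just below) ask for output that is strictly a single JSON object keyed by the six requested categories, each mapping to a list of surface strings. A small checker along the lines of the sketch below can enforce that contract when post-processing model outputs; only the category list and the toy values (taken from the example above) come from the source, while the function name and error handling are illustrative.

```python
# Sketch of a validator for the entity-extraction output format used in these rows.
import json

CATEGORIES = ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]

def check_ner_output(raw: str) -> dict:
    parsed = json.loads(raw)  # must be valid JSON and nothing else
    if not isinstance(parsed, dict):
        raise ValueError("output must be a JSON object")
    missing = [c for c in CATEGORIES if c not in parsed]
    extra = [k for k in parsed if k not in CATEGORIES]
    if missing or extra:
        raise ValueError(f"bad keys: missing={missing} extra={extra}")
    for key, values in parsed.items():
        if not isinstance(values, list) or not all(isinstance(v, str) for v in values):
            raise ValueError(f"category {key!r} must map to a list of strings")
    return parsed

example = '{"CellLine": [], "Variant": [], "Species": ["mice"], "Disease": ["cystitis"], "Chemical": ["LPS"], "Gene": ["PAR1"]}'
print(sorted(check_ner_output(example)))
```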
science.biored_ner
Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]. Ensure the output is strictly the JSON object. Source Text: Regulation of signal transducer and activator of transcription 3 and apoptotic pathways by betaine attenuates isoproterenol-induced acute myocardial injury in rats. The present study was designed to investigate the cardioprotective effects of betaine on acute myocardial ischemia induced experimentally in rats focusing on regulation of signal transducer and activator of transcription 3 (STAT3) and apoptotic pathways as the potential mechanism underlying the drug effect. Male Sprague Dawley rats were treated with betaine (100, 200, and 400 mg/kg) orally for 40 days. Acute myocardial ischemic injury was induced in rats by subcutaneous injection of isoproterenol (85 mg/kg), for two consecutive days. Serum cardiac marker enzyme, histopathological variables and expression of protein levels were analyzed. Oral administration of betaine (200 and 400 mg/kg) significantly reduced the level of cardiac marker enzyme in the serum and prevented left ventricular remodeling. Western blot analysis showed that isoproterenol-induced phosphorylation of STAT3 was maintained or further enhanced by betaine treatment in myocardium. Furthermore, betaine (200 and 400 mg/kg) treatment increased the ventricular expression of Bcl-2 and reduced the level of Bax, therefore causing a significant increase in the ratio of Bcl-2/Bax. The protective role of betaine on myocardial damage was further confirmed by histopathological examination. In summary, our results showed that betaine pretreatment attenuated isoproterenol-induced acute myocardial ischemia via the regulation of STAT3 and apoptotic pathways.
{"CellLine": [], "Chemical": ["isoproterenol", "betaine"], "Disease": ["myocardial damage", "ventricular remodeling", "myocardial ischemia", "myocardial injury", "myocardial ischemic injury"], "Gene": ["signal transducer and activator of transcription 3", "Bax", "Bcl-2", "STAT3"], "Species": ["rats"], "Variant": []}
science.biored_ner.12
[ { "content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nRegulation of signal transducer and activator of transcription 3 and apoptotic pathways by betaine attenuates isoproterenol-induced acute myocardial injury in rats. The present study was designed to investigate the cardioprotective effects of betaine on acute myocardial ischemia induced experimentally in rats focusing on regulation of signal transducer and activator of transcription 3 (STAT3) and apoptotic pathways as the potential mechanism underlying the drug effect. Male Sprague Dawley rats were treated with betaine (100, 200, and 400 mg/kg) orally for 40 days. Acute myocardial ischemic injury was induced in rats by subcutaneous injection of isoproterenol (85 mg/kg), for two consecutive days. Serum cardiac marker enzyme, histopathological variables and expression of protein levels were analyzed. Oral administration of betaine (200 and 400 mg/kg) significantly reduced the level of cardiac marker enzyme in the serum and prevented left ventricular remodeling. Western blot analysis showed that isoproterenol-induced phosphorylation of STAT3 was maintained or further enhanced by betaine treatment in myocardium. Furthermore, betaine (200 and 400 mg/kg) treatment increased the ventricular expression of Bcl-2 and reduced the level of Bax, therefore causing a significant increase in the ratio of Bcl-2/Bax. The protective role of betaine on myocardial damage was further confirmed by histopathological examination. In summary, our results showed that betaine pretreatment attenuated isoproterenol-induced acute myocardial ischemia via the regulation of STAT3 and apoptotic pathways.", "role": "user" }, { "content": "{\"CellLine\": [], \"Chemical\": [\"isoproterenol\", \"betaine\"], \"Disease\": [\"myocardial damage\", \"ventricular remodeling\", \"myocardial ischemia\", \"myocardial injury\", \"myocardial ischemic injury\"], \"Gene\": [\"signal transducer and activator of transcription 3\", \"Bax\", \"Bcl-2\", \"STAT3\"], \"Species\": [\"rats\"], \"Variant\": []}", "role": "assistant" } ]
science.discomat_te
| Atomic pairs | Cut-off length, A | | Li-Li | 2.3 | | Li-Ge | 2.2 | | Li-S | 2.0 | | Li-O | 1.7 | | Ge-Ge | 3.3 | | Ge-S | 1.9 | | Ge-O | 1.5 | | S-S | 3.0 | | S-O | 3.0 | | O-O | 2.7 | Caption: Distances of closest interatomic approach (also known as 'cut-off lengths') used in the RMC simulations. Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance: {"value": "xx", "type": "Other"} {"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"} Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line. Output:
{"value": "2.3", "type": "Other"} {"value": "2.2", "type": "Other"} {"value": "2.0", "type": "Other"} {"value": "1.7", "type": "Other"} {"value": "3.3", "type": "Other"} {"value": "1.9", "type": "Other"} {"value": "1.5", "type": "Other"} {"value": "3.0", "type": "Other"} {"value": "3.0", "type": "Other"} {"value": "2.7", "type": "Other"}
science.discomat_te.78
[ { "content": "| Atomic pairs | Cut-off length, A |\n| Li-Li | 2.3 |\n| Li-Ge | 2.2 |\n| Li-S | 2.0 |\n| Li-O | 1.7 |\n| Ge-Ge | 3.3 |\n| Ge-S | 1.9 |\n| Ge-O | 1.5 |\n| S-S | 3.0 |\n| S-O | 3.0 |\n| O-O | 2.7 |\n\nCaption: Distances of closest interatomic approach (also known as 'cut-off lengths') used in the RMC simulations.\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"2.3\", \"type\": \"Other\"}\n{\"value\": \"2.2\", \"type\": \"Other\"}\n{\"value\": \"2.0\", \"type\": \"Other\"}\n{\"value\": \"1.7\", \"type\": \"Other\"}\n{\"value\": \"3.3\", \"type\": \"Other\"}\n{\"value\": \"1.9\", \"type\": \"Other\"}\n{\"value\": \"1.5\", \"type\": \"Other\"}\n{\"value\": \"3.0\", \"type\": \"Other\"}\n{\"value\": \"3.0\", \"type\": \"Other\"}\n{\"value\": \"2.7\", \"type\": \"Other\"}", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: Active 3D Shape Reconstruction from Vision and Touch Main Content: 1 Introduction . 3D shape understanding is an active area of research , whose goal is to build 3D models of objects and environments from limited sensory data . It is commonly tackled by leveraging partial observations such as a single view RGB image [ 60 , 67 , 23 , 44 ] , multiple view RGB images [ 16 , 25 , 30 , 31 ] , depth maps [ 61 , 74 ] or tactile readings [ 7 , 49 , 59 , 45 , 38 ] . Most of this research focuses on building shape reconstruction models from a fixed set of partial observations . However , this constraint is relaxed in the active sensing scenario , where additional observation can be acquired to improve the quality of the 3D reconstructions . In active vision [ 4 ] , for instance , the objective can be to iteratively select camera perspectives from an object that result in the highest improvement in quality of the reconstruction [ 76 ] and only very recently the research community has started to leverage large scale datasets to learn exploration strategies that generalize to unseen objects [ 77 , 5 , 41 , 51 , 29 , 78 , 50 , 6 ] . Human haptic exploration of objects both with and without the presence of vision has classically been analysed from a psychological perspective , where it was discovered that the developed tactile exploration strategies for object understanding were demonstrated to not only be ubiquitous , but also highly tailored to specific tasks [ 37 , 33 ] . In spite of this , deep learning-based data-driven approaches to active touch for shape understanding are practically non-existent . Previous haptic exploration works consider objects ∗Correspondence to : ejsmith @ fb.com and edward.smith @ mail.mcgill.ca †Equal Contribution 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) , virtual . independently , and build uncertainty estimates over point clouds , produced by densely touching the objects surface with point-based touch sensors [ 7 , 73 , 28 , 18 , 46 ] . These methods do not make use of learned object priors , and a large number of touches sampled on an object ’ s surface ( over 100 ) is necessary to produce not only a prediction of the surface but also to drive exploration . However , accurate estimates of object shape have also been successfully produced with orders of magnitude fewer touch signals , by making use of high resolution tactile sensors such as [ 75 ] , large datasets of static 3D shape data , and deep learning – see e.g . [ 56 , 68 ] . Note that no prior work exists to learn touch exploration by leveraging large scale datasets . Moreover , no prior work explores active touch in the presence of visual inputs either ( e.g . an RGB camera ) . Combining the recent emergence of both data-driven reconstruction models from vision and touch systems [ 56 , 68 ] , and data-driven active vision approaches , we present a novel formulation for active touch exploration . Our formulation is designed to easily enable the use of vision signals to guide the touch exploration . First , we define a new problem setting over active touch for 3D shape reconstruction where touch exploration strategies can be learned over shape predictions from a learned reconstruction model with strong object priors . 
Second , we develop a simulator which allows for fast and realistic grasping of objects , and for extracting both vision and touch signals using a robotic hand augmented with high-resolution tactile sensors [ 75 ] . Third , we present a 3D reconstruction model from vision and touch which produces mesh-based predictions and achieves impressive performance in the single view image setting , both with and without the presence of touch signals . Fourth , we combine the simulator and reconstruction model to produce a tactile active sensing environment for training and evaluating touch exploration policies . The outline for this environment can be viewed in Figure 1 . Over the provided environment , we present a series of datadriven touch exploration models that take as an input a mesh- based shape reconstruction and decide the position of the next touch . By leveraging a large scale dataset of over 25k CAD models from the ABC dataset [ 34 ] together with our environment , the data-driven touch exploration models outperform baseline policies which fail to leverage learned patterns between object shape or the distribution of object shapes and optimal actions . We demonstrate our proposed data-driven solutions perform up to 18 % better than random baselines and lead to impressive object reconstructions relative to their input modality , such as those demonstrated in Figure 2 . Our framework , training and evaluation setup , and trained models are publicly available on a GitHub repository to ensure and encourage reproducible experimental comparison 3 . 3https : //github.com/facebookresearch/Active-3D-Vision-and-Touch 2 Related Work . 3D reconstruction . 3D shape reconstruction is a popular and well studied area of research with a multitude of approaches proposed across varying 3D representations [ 13 , 63 , 19 , 52 , 67 , 23 , 12 , 71 ] and input modalities [ 27 , 53 , 69 , 57 ] . While still a niche topic , a number of works have attempted to reconstruct 3D shape from only touch signals [ 7 , 49 , 59 , 45 , 38 ] . These works all assume point based touch signals which supply at most one point of contact of normal information . A limited number of methods have also been proposed for 3D reconstruction from both vision and touch signals [ 9 , 26 , 22 , 68 , 69 , 56 ] . In contrast to these works which use static datasets , we focus on improving 3D reconstruction through tactile exploration . Shape from interaction methods have also been proposed for object reconstruction through hand interactions [ 42 , 64 , 47 ] , though in contrast to our work an image of the interaction provides the additional information rather than touch readings . Active sensing for reconstruction . Active vision aims to manipulate the viewpoint of a camera in order to maximize the information for a particular task [ 11 , 76 ] . Similar to our setting , active vision can be useful in 3D shape reconstruction [ 62 , 70 , 17 , 41 , 14 , 35 ] , but where camera perspectives are planned instead of grasp locations . Only very recently have deep learning active vision approaches been proposed for 3D object reconstruction [ 77 , 5 , 41 , 51 ] . Deep learning based active sensing for reconstruction has also recently emerged in the medical imaging domain , where the time spent performing MRI scans has been reduced by learning to select a small number of more informative frequencies over a pre-trained reconstruction model [ 29 , 78 , 50 , 6 ] . Touch-based active exploration for shape understanding . 
Initial works tackling active tactile exploration focused on object recognition . [ 1–3 ] . While some of these methods do integrate both vision and touch for active shape understanding , they only focus on object recognition , operate over small datasets of already known and observed objects and do not leverage deep learning tools for learning patterns over large datasets of object shapes . Fleer and Moringen et . al . proposed to learn haptic exploration strategies using a reinforcement learning framework over a recurrent attention model , however these strategies were optimized for object classification and only trained over a dataset of 4 objects with a single , floating , depth-based tactile sensor array [ 20 ] . Several priors works have explored the problem of active acquisition of touch signals specifically for for surface reconstruction . A common theme among most approaches is to predict object shape using Gaussian processes so that uncertainty estimates are naturally available to drive the selection of the next point to touch , which is performed using a single finger [ 28 , 73 , 18 ] . To improve the speed of shape estimation during tactile exploration Matsubara et . al . [ 40 ] consider both uncertainty and travel cost when selecting touches using graph-based path planning . Bierbaum et al . [ 8 ] instead drove exploration through a dynamic potential field approach made popular in robot navigation to produce point cloud predictions , and extracted touch signals using a comparatively unrealistic 5 finger robotic hand model in simulation . These methods all make use of point based tactile sensors and use deterministic strategies , which are tuned and evaluated over a very small number of objects. . In contrast to these works , our approach is fully data driven , and as such , actions are selected using policies trained and evaluated over tens of thousands of objects . Moreover , predictions are made in mesh space fusing visual information with high resolution tactile signals extracted from objects in simulation using a realistic robot hand . 3 Active touch exploration . In our proposed active touch exploration problem , given a pre-trained shape reconstruction model over touch and optionally vision signals , the objective is to select the sequence of touch inputs which lead to the highest improvement in reconstruction accuracy . To tackle this problem , we define an active touch environment that contains a simulator , a reconstruction model ( a pre-trained neural network ) , and a loss function . The simulator takes as input a 3D object shape O together with parameters describing a grasp , g , and outputs touch readings , t , of the 3D shape at the grasp locations along with an RGB image of the object , I . The reconstruction model is a neural network parametrized by ϕ , which takes an input X and produces the current 3D shape estimate Ô as follows : Ô = f ( X ; ϕ ) . In our setup , we investigate two reconstruction model variants which differ in their inputs : 1 ) the model only receives a set of touch readings , t , such that X = t , and 2 ) the model receives both a set of touch readings , t , and an RGB image rendering of the shape , I , such that X = { t , I } . The loss function takes as input the current belief of the object ’ s shape , Ô , and the ground truth shape , O , and computes the distance between them : d ( O , Ô ) . Thus , active touch exploration can be formulated as sequentially selecting the optimal set of K grasp parameters { g1 , g2 ... 
, gK } that maximize the similarity between the ground truth shape O and the reconstruction output after K grasps ÔK 4 : argmin g1 , g2 , .... gK d ( O , ÔK ) , ( 1 ) where gk determine the touch readings fed to the reconstruction network producing Ôk . We use the Chamfer distance ( CD ) [ 60 ] between the predicted and target surface as the distance metric in our active touch formulation . Further details with respect to the CD are provided in the supplemental materials . In the reminder of this section , we provide details about the touch simulator and the reconstruction model . 3.1 Active touch simulator . Conceptually , our simulator can be described in the five steps depicted in Figure 3 . First , an object is loaded onto the environment . Second , an action space is defined around the 3D object by uniformly placing 50 points on a sphere centered at the center of the loaded object . Third , to choose a grasp , one of the points is selected and a 4-digit robot hand is placed such that its 3rd digit lies on the point and the hand ’ s palm lies tangent to the sphere . Fourth , the hand is moved towards the center of the object until it comes in contact with it . Last , the fingers of the hand are closed until they reach the maximum joint angle , or are halted by the contact with the object . As a result , the simulator produces 4 touch readings ( one from each finger of the hand ) and one RGB image of the object . Note that each action is defined by its position index on the sphere of 50 actions . This parameterization is selected specifically as it does not require any prior knowledge of the object , other than its center , and in simulation it consistently leads to successful interactions between the hand ’ s touch sensors and the object ’ s surface . In our simulator , all steps are performed in python across the robotics simulator PyBullet [ 15 ] , the rendering tool Pyrender [ 39 ] , and PyTorch [ 48 ] . For a given grasp and object , the object is loaded into PyBullet [ 15 ] , along with a Wonik ’ s Allegro Hand [ 54 ] equipped with vision-based touch sensors [ 36 ] on each of its fingers , and then the point in space corresponding to the action to be performed is selected and the grasping procedure is performed using PyBullet ’ s physics simulator . Pose information from the produced grasps is then extracted and used by Pyrender to render both a depth map of the object from the perspective of each sensor and an RGB image of the object from a fixed perspective . The depth maps are then converted into simulated touch signals using the method described in [ 56 ] . All steps in this procedure are performed in parallel or using GPU accelerated computing , and as a result across the 50 grasping options of 100 randomly chosen objects , simulated grasps and touch signals are produced in ∼ 0.0317 seconds each on a Tesla V100 GPU with 16 CPU cores . Our simulator supports two modes of tactile exploration grasping and poking . In the grasping scenario , the hand is performing a full grasp of an object using all four fingers . While , in the poking scenario , only the index finger of the hand is used for touch sensing . Further details on this simulated environment are provided the supplemental materials .
This paper presents an approach for active 3D reconstruction of objects based solely on touch, and optionally together with vision. The main idea is active perception -- estimate where to touch/see the object in order to better reconstruct it. The proposed method to solve this involves a neural network that fuses touch and vision signals to estimate 3D shape, and a policy that predicts where to touch next for best reconstruction performance. Data comes from a simulator that produces both touch and image signals (from the ABC dataset).
science.mup_single_document_summarization.11484
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nActive 3D Shape Reconstruction from Vision and Touch\n\nMain Content:\n1 Introduction . 3D shape understanding is an active area of research , whose goal is to build 3D models of objects and environments from limited sensory data . It is commonly tackled by leveraging partial observations such as a single view RGB image [ 60 , 67 , 23 , 44 ] , multiple view RGB images [ 16 , 25 , 30 , 31 ] , depth maps [ 61 , 74 ] or tactile readings [ 7 , 49 , 59 , 45 , 38 ] . Most of this research focuses on building shape reconstruction models from a fixed set of partial observations . However , this constraint is relaxed in the active sensing scenario , where additional observation can be acquired to improve the quality of the 3D reconstructions . In active vision [ 4 ] , for instance , the objective can be to iteratively select camera perspectives from an object that result in the highest improvement in quality of the reconstruction [ 76 ] and only very recently the research community has started to leverage large scale datasets to learn exploration strategies that generalize to unseen objects [ 77 , 5 , 41 , 51 , 29 , 78 , 50 , 6 ] . Human haptic exploration of objects both with and without the presence of vision has classically been analysed from a psychological perspective , where it was discovered that the developed tactile exploration strategies for object understanding were demonstrated to not only be ubiquitous , but also highly tailored to specific tasks [ 37 , 33 ] . In spite of this , deep learning-based data-driven approaches to active touch for shape understanding are practically non-existent . Previous haptic exploration works consider objects ∗Correspondence to : ejsmith @ fb.com and edward.smith @ mail.mcgill.ca †Equal Contribution 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) , virtual . independently , and build uncertainty estimates over point clouds , produced by densely touching the objects surface with point-based touch sensors [ 7 , 73 , 28 , 18 , 46 ] . These methods do not make use of learned object priors , and a large number of touches sampled on an object ’ s surface ( over 100 ) is necessary to produce not only a prediction of the surface but also to drive exploration . However , accurate estimates of object shape have also been successfully produced with orders of magnitude fewer touch signals , by making use of high resolution tactile sensors such as [ 75 ] , large datasets of static 3D shape data , and deep learning – see e.g . [ 56 , 68 ] . Note that no prior work exists to learn touch exploration by leveraging large scale datasets . Moreover , no prior work explores active touch in the presence of visual inputs either ( e.g . an RGB camera ) . Combining the recent emergence of both data-driven reconstruction models from vision and touch systems [ 56 , 68 ] , and data-driven active vision approaches , we present a novel formulation for active touch exploration . Our formulation is designed to easily enable the use of vision signals to guide the touch exploration . First , we define a new problem setting over active touch for 3D shape reconstruction where touch exploration strategies can be learned over shape predictions from a learned reconstruction model with strong object priors . 
Second , we develop a simulator which allows for fast and realistic grasping of objects , and for extracting both vision and touch signals using a robotic hand augmented with high-resolution tactile sensors [ 75 ] . Third , we present a 3D reconstruction model from vision and touch which produces mesh-based predictions and achieves impressive performance in the single view image setting , both with and without the presence of touch signals . Fourth , we combine the simulator and reconstruction model to produce a tactile active sensing environment for training and evaluating touch exploration policies . The outline for this environment can be viewed in Figure 1 . Over the provided environment , we present a series of datadriven touch exploration models that take as an input a mesh- based shape reconstruction and decide the position of the next touch . By leveraging a large scale dataset of over 25k CAD models from the ABC dataset [ 34 ] together with our environment , the data-driven touch exploration models outperform baseline policies which fail to leverage learned patterns between object shape or the distribution of object shapes and optimal actions . We demonstrate our proposed data-driven solutions perform up to 18 % better than random baselines and lead to impressive object reconstructions relative to their input modality , such as those demonstrated in Figure 2 . Our framework , training and evaluation setup , and trained models are publicly available on a GitHub repository to ensure and encourage reproducible experimental comparison 3 . 3https : //github.com/facebookresearch/Active-3D-Vision-and-Touch 2 Related Work . 3D reconstruction . 3D shape reconstruction is a popular and well studied area of research with a multitude of approaches proposed across varying 3D representations [ 13 , 63 , 19 , 52 , 67 , 23 , 12 , 71 ] and input modalities [ 27 , 53 , 69 , 57 ] . While still a niche topic , a number of works have attempted to reconstruct 3D shape from only touch signals [ 7 , 49 , 59 , 45 , 38 ] . These works all assume point based touch signals which supply at most one point of contact of normal information . A limited number of methods have also been proposed for 3D reconstruction from both vision and touch signals [ 9 , 26 , 22 , 68 , 69 , 56 ] . In contrast to these works which use static datasets , we focus on improving 3D reconstruction through tactile exploration . Shape from interaction methods have also been proposed for object reconstruction through hand interactions [ 42 , 64 , 47 ] , though in contrast to our work an image of the interaction provides the additional information rather than touch readings . Active sensing for reconstruction . Active vision aims to manipulate the viewpoint of a camera in order to maximize the information for a particular task [ 11 , 76 ] . Similar to our setting , active vision can be useful in 3D shape reconstruction [ 62 , 70 , 17 , 41 , 14 , 35 ] , but where camera perspectives are planned instead of grasp locations . Only very recently have deep learning active vision approaches been proposed for 3D object reconstruction [ 77 , 5 , 41 , 51 ] . Deep learning based active sensing for reconstruction has also recently emerged in the medical imaging domain , where the time spent performing MRI scans has been reduced by learning to select a small number of more informative frequencies over a pre-trained reconstruction model [ 29 , 78 , 50 , 6 ] . Touch-based active exploration for shape understanding . 
Initial works tackling active tactile exploration focused on object recognition . [ 1–3 ] . While some of these methods do integrate both vision and touch for active shape understanding , they only focus on object recognition , operate over small datasets of already known and observed objects and do not leverage deep learning tools for learning patterns over large datasets of object shapes . Fleer and Moringen et . al . proposed to learn haptic exploration strategies using a reinforcement learning framework over a recurrent attention model , however these strategies were optimized for object classification and only trained over a dataset of 4 objects with a single , floating , depth-based tactile sensor array [ 20 ] . Several priors works have explored the problem of active acquisition of touch signals specifically for for surface reconstruction . A common theme among most approaches is to predict object shape using Gaussian processes so that uncertainty estimates are naturally available to drive the selection of the next point to touch , which is performed using a single finger [ 28 , 73 , 18 ] . To improve the speed of shape estimation during tactile exploration Matsubara et . al . [ 40 ] consider both uncertainty and travel cost when selecting touches using graph-based path planning . Bierbaum et al . [ 8 ] instead drove exploration through a dynamic potential field approach made popular in robot navigation to produce point cloud predictions , and extracted touch signals using a comparatively unrealistic 5 finger robotic hand model in simulation . These methods all make use of point based tactile sensors and use deterministic strategies , which are tuned and evaluated over a very small number of objects. . In contrast to these works , our approach is fully data driven , and as such , actions are selected using policies trained and evaluated over tens of thousands of objects . Moreover , predictions are made in mesh space fusing visual information with high resolution tactile signals extracted from objects in simulation using a realistic robot hand . 3 Active touch exploration . In our proposed active touch exploration problem , given a pre-trained shape reconstruction model over touch and optionally vision signals , the objective is to select the sequence of touch inputs which lead to the highest improvement in reconstruction accuracy . To tackle this problem , we define an active touch environment that contains a simulator , a reconstruction model ( a pre-trained neural network ) , and a loss function . The simulator takes as input a 3D object shape O together with parameters describing a grasp , g , and outputs touch readings , t , of the 3D shape at the grasp locations along with an RGB image of the object , I . The reconstruction model is a neural network parametrized by ϕ , which takes an input X and produces the current 3D shape estimate Ô as follows : Ô = f ( X ; ϕ ) . In our setup , we investigate two reconstruction model variants which differ in their inputs : 1 ) the model only receives a set of touch readings , t , such that X = t , and 2 ) the model receives both a set of touch readings , t , and an RGB image rendering of the shape , I , such that X = { t , I } . The loss function takes as input the current belief of the object ’ s shape , Ô , and the ground truth shape , O , and computes the distance between them : d ( O , Ô ) . Thus , active touch exploration can be formulated as sequentially selecting the optimal set of K grasp parameters { g1 , g2 ... 
, gK } that maximize the similarity between the ground truth shape O and the reconstruction output after K grasps ÔK 4 : argmin g1 , g2 , .... gK d ( O , ÔK ) , ( 1 ) where gk determine the touch readings fed to the reconstruction network producing Ôk . We use the Chamfer distance ( CD ) [ 60 ] between the predicted and target surface as the distance metric in our active touch formulation . Further details with respect to the CD are provided in the supplemental materials . In the reminder of this section , we provide details about the touch simulator and the reconstruction model . 3.1 Active touch simulator . Conceptually , our simulator can be described in the five steps depicted in Figure 3 . First , an object is loaded onto the environment . Second , an action space is defined around the 3D object by uniformly placing 50 points on a sphere centered at the center of the loaded object . Third , to choose a grasp , one of the points is selected and a 4-digit robot hand is placed such that its 3rd digit lies on the point and the hand ’ s palm lies tangent to the sphere . Fourth , the hand is moved towards the center of the object until it comes in contact with it . Last , the fingers of the hand are closed until they reach the maximum joint angle , or are halted by the contact with the object . As a result , the simulator produces 4 touch readings ( one from each finger of the hand ) and one RGB image of the object . Note that each action is defined by its position index on the sphere of 50 actions . This parameterization is selected specifically as it does not require any prior knowledge of the object , other than its center , and in simulation it consistently leads to successful interactions between the hand ’ s touch sensors and the object ’ s surface . In our simulator , all steps are performed in python across the robotics simulator PyBullet [ 15 ] , the rendering tool Pyrender [ 39 ] , and PyTorch [ 48 ] . For a given grasp and object , the object is loaded into PyBullet [ 15 ] , along with a Wonik ’ s Allegro Hand [ 54 ] equipped with vision-based touch sensors [ 36 ] on each of its fingers , and then the point in space corresponding to the action to be performed is selected and the grasping procedure is performed using PyBullet ’ s physics simulator . Pose information from the produced grasps is then extracted and used by Pyrender to render both a depth map of the object from the perspective of each sensor and an RGB image of the object from a fixed perspective . The depth maps are then converted into simulated touch signals using the method described in [ 56 ] . All steps in this procedure are performed in parallel or using GPU accelerated computing , and as a result across the 50 grasping options of 100 randomly chosen objects , simulated grasps and touch signals are produced in ∼ 0.0317 seconds each on a Tesla V100 GPU with 16 CPU cores . Our simulator supports two modes of tactile exploration grasping and poking . In the grasping scenario , the hand is performing a full grasp of an object using all four fingers . While , in the poking scenario , only the index finger of the hand is used for touch sensing . Further details on this simulated environment are provided the supplemental materials .", "role": "user" }, { "content": "This paper presents an approach for active 3D reconstruction of objects based solely on touch, and optionally together with vision. The main idea is active perception -- estimate where to touch/see the object in order to better reconstruct it. 
The proposed method to solve this involves a neural network that fuses touch and vision signals to estimate 3D shape, and a policy that predicts where to touch next for best reconstruction performance. Data comes from a simulator that produces both touch and image signals (from the ABC dataset).", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: MobILE: Model-Based Imitation Learning From Observation Alone Main Content: 1 Introduction . This paper considers Imitation Learning from Observation Alone ( ILFO ) . In ILFO , the learner is presented with sequences of states encountered by the expert , without access to the actions taken by the expert , meaning approaches based on a reduction to supervised learning ( e.g. , Behavior cloning ( BC ) [ 49 ] , DAgger [ 50 ] ) are not applicable . ILFO is more general and has potential for applications where the learner and expert have different action spaces , applications like sim-to-real [ 56 , 14 ] etc . Recently , [ 59 ] reduced the ILFO problem to a sequence of one-step distribution matching problems that results in obtaining a non-stationary policy . This approach , however , is sample inefficient for longer horizon tasks since the algorithm does not effectively reuse previously collected samples when solving the current sub-problem . Another line of work considers model-based methods to infer the expert ’ s actions with either an inverse dynamics [ 63 ] or a forward dynamics [ 16 ] model ; these recovered actions are then fed into an IL approach like BC to output the final policy . These works rely on stronger assumptions that are only satisfied for Markov Decision Processes ( MDPs ) with injective transition dynamics [ 68 ] ; we return to this in the related works section . ∗Work initiated when RK was a post-doc at Cornell University ; work done outside Amazon . 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) , virtual . We introduce MobILE—Model-based Imitation Learning and Exploring , a model-based framework , to solve the ILFO problem . In contrast to existing model-based efforts , MobILE learns the forward transition dynamics model—a quantity that is well defined for any MDP . Importantly , MobILE combines strategic exploration with imitation by interleaving a model learning step with a bonus-based , optimistic distribution matching step – a perspective , to the best of our knowledge , that has not been considered in Imitation Learning . MobILE has the ability to automatically trade-off exploration and imitation . It simultaneously explores to collect data to refine the model and imitates the expert wherever the learned model is accurate and certain . At a high level , our theoretical results and experimental studies demonstrate that systematic exploration is beneficial for solving ILFO reliably and efficiently , and optimism is a both theoretically sound and practically effective approach for strategic exploration in ILFO ( see Figure 1 for comparisons with other ILFO algorithms ) . This paper extends the realm of partial information problems ( e.g . Reinforcement Learning and Bandits ) where optimism has been shown to be crucial in obtaining strong performance , both in theory ( e.g. , E3 [ 30 ] , UCB [ 3 ] ) and practice ( e.g. , RND [ 10 ] ) . This paper proves that incorporating optimism into the min-max IL framework [ 69 , 22 , 59 ] is beneficial for both the theoretical foundations and empirical performance of ILFO . Our Contributions : We present MobILE ( Algorithm 1 ) , a provably efficient , model-based framework for ILFO that offers competitive results in benchmark gym tasks . MobILE can be instantiated with various implementation choices owing to its modular design . 
This paper ’ s contributions are : 1 . The MobILE framework combines ideas of model-based learning , optimism for exploration , and adversarial imitation learning . MobILE achieves global optimality with near-optimal regret bounds for classes of MDP dynamics that satisfy certain well studied notions of complexity . The key idea of MobILE is to use optimism to trade-off imitation and exploration . 2 . We show an exponential sample complexity gap between ILFO and classic IL where one has access to expert ’ s actions . This indicates that ILFO is fundamentally harder than IL . Our lower bound on ILFO also indicates that to achieve near optimal regret , one needs to perform systematic exploration rather than random or no exploration , both of which will incur sub-optimal regret . 3 . We instantiate MobILE with a model ensemble of neural networks and a disagreement-based bonus . We present experimental results on benchmark OpenAI Gym tasks , indicating MobILE compares favorably to or outperforms existing approaches . Ablation studies indicate that optimism indeed helps in significantly improving the performance in practice . 1.1 Related Works . Imitation Learning ( IL ) is considered through the lens of two types of approaches : ( a ) behavior cloning ( BC ) [ 45 ] which casts IL as a reduction to supervised or full-information online learning [ 49 , 50 ] , or , ( b ) ( adversarial ) inverse RL [ 40 , 1 , 69 , 17 , 22 , 29 , 18 ] , which involves minimizing various distribution divergences to solve the IL problem , either with the transition dynamics known ( e.g. , [ 69 ] ) , or unknown ( e.g. , [ 22 ] ) . MobILE does not assume knowledge of the transition dynamics , is model-based , and operates without access to the expert ’ s actions . Imitation Learning from Observation Alone ( ILFO ) [ 59 ] presents a model-free approach FAIL that outputs a non-stationary policy by reducing the ILFO problem into a sequence of min-max problems , one per time-step . While being theoretically sound , this approach can not share data across different time steps and thus is not data efficient for long horizon problems . Also FAIL in theory only works for discrete actions . In contrast , our paper learns a stationary policy using model-based approaches by reusing data across all time steps and extends to continuous action space . Another line of work [ 63 , 16 , 66 ] relies on learning an estimate of expert action , often through the use of an inverse dynamics models , P e ( a|s , s′ ) . Unfortunately , an inverse dynamics model is not well defined in many benign problem instances . For instance , [ 68 , remark 1 , section 9.3 ] presents an example showing that inverse dynamics isn ’ t well defined except in the case when the MDP dynamics is injective ( i.e. , no two actions could lead to the same next state from the current state . Note that even deterministic transition dynamics doesn ’ t imply injectivity of the MDP dynamics ) . Furthermore , ILPO [ 16 ] applies to MDPs with deterministic transition dynamics and discrete actions . MobILE , on the other hand , learns the forward dynamics model which is always unique and well-defined for both deterministic and stochastic transitions and works with discrete and continuous actions . Another line of work in ILFO revolves around using hand-crafted cost functions that may rely on task-specific knowledge [ 44 , 4 , 53 ] . The performance of policy outputted by these efforts relies on the quality of the engineered cost functions . 
In contrast , MobILE does not require cost function engineering . Model-Based RL has seen several advances [ 61 , 36 , 13 ] including ones based on deep learning ( e.g. , [ 34 , 19 , 38 , 24 , 37 , 65 ] ) . Given MobILE ’ s modularity , these advances in model-based RL can be translated to improved algorithms for the ILFO problem . MobILE bears parallels to provably efficient model-based RL approaches including E3 [ 31 , 27 ] , R-MAX [ 7 ] , UCRL [ 23 ] , UCBVI [ 5 ] , Linear MDP [ 67 ] , LC3 [ 25 ] , Witness rank [ 58 ] which utilize optimism based approaches to trade-off exploration and exploitation . Our work utilizes optimism to trade-off exploration and imitation . 2 Setting . We consider episodic finite-horizon MDPM = { S , A , P ? , H , c , s0 } , where S , A are the state and action space , P ? : S × A 7→ S is the MDP ’ s transition kernel , H is the horizon , s0 is a fixed initial state ( note that our work generalizes when we have a distribution over initial states ) , and c is the state-dependent cost function c : S 7→ [ 0 , 1 ] . Our result can be extended to the setting where c : S × S 7→ [ 0 , 1 ] , i.e. , the ground truth cost c ( s , s′ ) depends on state and next state pairs . For analysis simplicity , we focus on c : S 7→ [ 0 , 1 ] .2 We denote dπP ∈ ∆ ( S×A ) as the average state-action distribution of policy π under the transition kernel P , i.e. , dπP ( s , a ) : = 1 H ∑H t=1 Pr ( st = s , at = a|s0 , π , P ) , where Pr ( st = s , at = a|s0 , π , P ) is the probability of reaching ( s , a ) at time step t starting from s0 by following π under transition kernel P . We abuse notation and write s ∼ dπP to denote a state s is sampled from the state-wise distribution which marginalizes action over dπP ( s , a ) , i.e. , d π P ( s ) : = 1 H ∑H t=1 Pr ( st = s|s0 , π , P ) . For a given cost function f : S 7→ [ 0 , 1 ] , V πP ; f denotes the expected total cost of π under transition P and cost function f . Similar to IL setting , in ILFO , the ground truth cost c is unknown . Instead , we can query the expert , denoted as πe : S 7→ ∆ ( A ) . Note that the expert πe could be stochastic and does not have to be the optimal policy . The expert , when queried , provides state-only demonstrations τ = { s0 , s1 . . . sH } , where st+1 ∼ P ? ( ·|st , at ) and at ∼ πe ( ·|st ) . The goal is to leverage expert ’ s state-wise demonstrations to learn a policy π that performs as well as πe in terms of optimizing the ground truth cost c , with polynomial sample complexity on problem parameters such as horizon , number of expert samples and online samples and underlying MDP ’ s complexity measures ( see section 4 for precise examples ) . We track the progress of any ( randomized ) algorithm by measuring the ( expected ) regret incurred by a policy π defined as E [ V π ] − V π∗ as a function of number of online interactions utilized by the algorithm to compute π . 2.1 Function Approximation Setup . Since the ground truth cost c is unknown , we utilize the notion of a function class ( i.e. , discriminators ) F ⊂ S 7→ [ 0 , 1 ] to define the costs that can then be utilized by a planning algorithm ( e.g . NPG [ 26 ] ) for purposes of distribution matching with expert states . If the ground truth c depends ( s , s′ ) , we use discriminators F ⊂ S × S 7→ [ 0 , 1 ] . Furthermore , we use a model class P ⊂ S × A 7→ ∆ ( S ) to capture the ground truth transition P ? . For the theoretical results in the paper , we assume realizability : Assumption 1 . 
Assume F and P captures ground truth cost and transition , i.e. , c ∈ F , P ? ∈ P . We will use Integral probability metric ( IPM ) with F as our divergence measure . Note that if c ∈ F and c : S 7→ [ 0 , 1 ] , then IPM defined as maxf∈F Es∼dπf ( s ) − Es∼dπe f ( s ) directly upper 2Without any additional assumptions , in ILFO , learning to optimize action-dependent cost c ( s , a ) ( or c ( s , a , s′ ) is not possible . For example , if there are two sequences of actions that generate the same sequence of states , without seeing expert ’ s preference over actions , we do not know which actions to commit to . Algorithm 1 MobILE : The framework of Model-based Imitation Learning and Exploring for ILFO 1 : Require : IPM class F , dynamics model class P , policy class Π , bonus function class B , expert dataset De ≡ { sei } Ni=1 . 2 : Initialize policy π0 ∈ Π , replay buffer D−1 = ∅ . 3 : for t = 0 , · · · , T − 1 do 4 : Execute πt in true environment P ? to get samples τt = { sk , ak } H−1k=0 ∪ sH . Append to replay buffer Dt = Dt−1 ∪ τt . 5 : Update model and bonus : P̂t+1 : S ×A → S and bt+1 : S ×A → R+ using buffer Dt . 6 : Optimistic model-based min-max IL : obtain πt+1 by solving equation ( 1 ) with P̂t+1 , bt+1 , De . 7 : end for 8 : Return πT . bounds sub-optimality gap V π − V πe , where V π is the expected total cost of π under cost function c. This justifies why minimizing IPM between two state distributions suffices [ 22 , 59 ] . Similarly , if c depends on s , s′ , we can simply minimize IPM between two state-next state distributions , i.e. , maxf Es , s′∼dπf ( s , s′ ) − Es , s′∼dπe f ( s , s′ ) where discriminators now take ( s , s′ ) as input.3 To permit generalization , we require P to have bounded complexity . For analytical simplicity , we assume F is discrete ( but exponentially large ) , and we require the sample complexity of any PAC algorithm to scale polynomially with respect to its complexity ln ( |F| ) . The ln |F| complexity can be replaced to bounded conventional complexity measures such as Rademacher complexity and covering number for continuous F ( e.g. , F being a Reproducing Kernel Hilbert Space ) .
This paper investigates the problem of imitation learning from observation alone (ILFO). The authors leverage the idea of optimism in the face of uncertainty and propose a novel model-based framework for ILFO that can handle both discrete and continuous settings. The authors provide a regret bound of the framework and show the exponential gap between IL and ILFO and the benefit of uncertainty-based exploration. The authors further corroborate the theoretical and algorithmic results with experiments in simulation.
science.mup_single_document_summarization.11644
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nMobILE: Model-Based Imitation Learning From Observation Alone\n\nMain Content:\n1 Introduction . This paper considers Imitation Learning from Observation Alone ( ILFO ) . In ILFO , the learner is presented with sequences of states encountered by the expert , without access to the actions taken by the expert , meaning approaches based on a reduction to supervised learning ( e.g. , Behavior cloning ( BC ) [ 49 ] , DAgger [ 50 ] ) are not applicable . ILFO is more general and has potential for applications where the learner and expert have different action spaces , applications like sim-to-real [ 56 , 14 ] etc . Recently , [ 59 ] reduced the ILFO problem to a sequence of one-step distribution matching problems that results in obtaining a non-stationary policy . This approach , however , is sample inefficient for longer horizon tasks since the algorithm does not effectively reuse previously collected samples when solving the current sub-problem . Another line of work considers model-based methods to infer the expert ’ s actions with either an inverse dynamics [ 63 ] or a forward dynamics [ 16 ] model ; these recovered actions are then fed into an IL approach like BC to output the final policy . These works rely on stronger assumptions that are only satisfied for Markov Decision Processes ( MDPs ) with injective transition dynamics [ 68 ] ; we return to this in the related works section . ∗Work initiated when RK was a post-doc at Cornell University ; work done outside Amazon . 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) , virtual . We introduce MobILE—Model-based Imitation Learning and Exploring , a model-based framework , to solve the ILFO problem . In contrast to existing model-based efforts , MobILE learns the forward transition dynamics model—a quantity that is well defined for any MDP . Importantly , MobILE combines strategic exploration with imitation by interleaving a model learning step with a bonus-based , optimistic distribution matching step – a perspective , to the best of our knowledge , that has not been considered in Imitation Learning . MobILE has the ability to automatically trade-off exploration and imitation . It simultaneously explores to collect data to refine the model and imitates the expert wherever the learned model is accurate and certain . At a high level , our theoretical results and experimental studies demonstrate that systematic exploration is beneficial for solving ILFO reliably and efficiently , and optimism is a both theoretically sound and practically effective approach for strategic exploration in ILFO ( see Figure 1 for comparisons with other ILFO algorithms ) . This paper extends the realm of partial information problems ( e.g . Reinforcement Learning and Bandits ) where optimism has been shown to be crucial in obtaining strong performance , both in theory ( e.g. , E3 [ 30 ] , UCB [ 3 ] ) and practice ( e.g. , RND [ 10 ] ) . This paper proves that incorporating optimism into the min-max IL framework [ 69 , 22 , 59 ] is beneficial for both the theoretical foundations and empirical performance of ILFO . Our Contributions : We present MobILE ( Algorithm 1 ) , a provably efficient , model-based framework for ILFO that offers competitive results in benchmark gym tasks . MobILE can be instantiated with various implementation choices owing to its modular design . 
This paper ’ s contributions are : 1 . The MobILE framework combines ideas of model-based learning , optimism for exploration , and adversarial imitation learning . MobILE achieves global optimality with near-optimal regret bounds for classes of MDP dynamics that satisfy certain well studied notions of complexity . The key idea of MobILE is to use optimism to trade-off imitation and exploration . 2 . We show an exponential sample complexity gap between ILFO and classic IL where one has access to expert ’ s actions . This indicates that ILFO is fundamentally harder than IL . Our lower bound on ILFO also indicates that to achieve near optimal regret , one needs to perform systematic exploration rather than random or no exploration , both of which will incur sub-optimal regret . 3 . We instantiate MobILE with a model ensemble of neural networks and a disagreement-based bonus . We present experimental results on benchmark OpenAI Gym tasks , indicating MobILE compares favorably to or outperforms existing approaches . Ablation studies indicate that optimism indeed helps in significantly improving the performance in practice . 1.1 Related Works . Imitation Learning ( IL ) is considered through the lens of two types of approaches : ( a ) behavior cloning ( BC ) [ 45 ] which casts IL as a reduction to supervised or full-information online learning [ 49 , 50 ] , or , ( b ) ( adversarial ) inverse RL [ 40 , 1 , 69 , 17 , 22 , 29 , 18 ] , which involves minimizing various distribution divergences to solve the IL problem , either with the transition dynamics known ( e.g. , [ 69 ] ) , or unknown ( e.g. , [ 22 ] ) . MobILE does not assume knowledge of the transition dynamics , is model-based , and operates without access to the expert ’ s actions . Imitation Learning from Observation Alone ( ILFO ) [ 59 ] presents a model-free approach FAIL that outputs a non-stationary policy by reducing the ILFO problem into a sequence of min-max problems , one per time-step . While being theoretically sound , this approach can not share data across different time steps and thus is not data efficient for long horizon problems . Also FAIL in theory only works for discrete actions . In contrast , our paper learns a stationary policy using model-based approaches by reusing data across all time steps and extends to continuous action space . Another line of work [ 63 , 16 , 66 ] relies on learning an estimate of expert action , often through the use of an inverse dynamics models , P e ( a|s , s′ ) . Unfortunately , an inverse dynamics model is not well defined in many benign problem instances . For instance , [ 68 , remark 1 , section 9.3 ] presents an example showing that inverse dynamics isn ’ t well defined except in the case when the MDP dynamics is injective ( i.e. , no two actions could lead to the same next state from the current state . Note that even deterministic transition dynamics doesn ’ t imply injectivity of the MDP dynamics ) . Furthermore , ILPO [ 16 ] applies to MDPs with deterministic transition dynamics and discrete actions . MobILE , on the other hand , learns the forward dynamics model which is always unique and well-defined for both deterministic and stochastic transitions and works with discrete and continuous actions . Another line of work in ILFO revolves around using hand-crafted cost functions that may rely on task-specific knowledge [ 44 , 4 , 53 ] . The performance of policy outputted by these efforts relies on the quality of the engineered cost functions . 
In contrast , MobILE does not require cost function engineering . Model-Based RL has seen several advances [ 61 , 36 , 13 ] including ones based on deep learning ( e.g. , [ 34 , 19 , 38 , 24 , 37 , 65 ] ) . Given MobILE ’ s modularity , these advances in model-based RL can be translated to improved algorithms for the ILFO problem . MobILE bears parallels to provably efficient model-based RL approaches including E3 [ 31 , 27 ] , R-MAX [ 7 ] , UCRL [ 23 ] , UCBVI [ 5 ] , Linear MDP [ 67 ] , LC3 [ 25 ] , Witness rank [ 58 ] which utilize optimism based approaches to trade-off exploration and exploitation . Our work utilizes optimism to trade-off exploration and imitation . 2 Setting . We consider episodic finite-horizon MDPM = { S , A , P ? , H , c , s0 } , where S , A are the state and action space , P ? : S × A 7→ S is the MDP ’ s transition kernel , H is the horizon , s0 is a fixed initial state ( note that our work generalizes when we have a distribution over initial states ) , and c is the state-dependent cost function c : S 7→ [ 0 , 1 ] . Our result can be extended to the setting where c : S × S 7→ [ 0 , 1 ] , i.e. , the ground truth cost c ( s , s′ ) depends on state and next state pairs . For analysis simplicity , we focus on c : S 7→ [ 0 , 1 ] .2 We denote dπP ∈ ∆ ( S×A ) as the average state-action distribution of policy π under the transition kernel P , i.e. , dπP ( s , a ) : = 1 H ∑H t=1 Pr ( st = s , at = a|s0 , π , P ) , where Pr ( st = s , at = a|s0 , π , P ) is the probability of reaching ( s , a ) at time step t starting from s0 by following π under transition kernel P . We abuse notation and write s ∼ dπP to denote a state s is sampled from the state-wise distribution which marginalizes action over dπP ( s , a ) , i.e. , d π P ( s ) : = 1 H ∑H t=1 Pr ( st = s|s0 , π , P ) . For a given cost function f : S 7→ [ 0 , 1 ] , V πP ; f denotes the expected total cost of π under transition P and cost function f . Similar to IL setting , in ILFO , the ground truth cost c is unknown . Instead , we can query the expert , denoted as πe : S 7→ ∆ ( A ) . Note that the expert πe could be stochastic and does not have to be the optimal policy . The expert , when queried , provides state-only demonstrations τ = { s0 , s1 . . . sH } , where st+1 ∼ P ? ( ·|st , at ) and at ∼ πe ( ·|st ) . The goal is to leverage expert ’ s state-wise demonstrations to learn a policy π that performs as well as πe in terms of optimizing the ground truth cost c , with polynomial sample complexity on problem parameters such as horizon , number of expert samples and online samples and underlying MDP ’ s complexity measures ( see section 4 for precise examples ) . We track the progress of any ( randomized ) algorithm by measuring the ( expected ) regret incurred by a policy π defined as E [ V π ] − V π∗ as a function of number of online interactions utilized by the algorithm to compute π . 2.1 Function Approximation Setup . Since the ground truth cost c is unknown , we utilize the notion of a function class ( i.e. , discriminators ) F ⊂ S 7→ [ 0 , 1 ] to define the costs that can then be utilized by a planning algorithm ( e.g . NPG [ 26 ] ) for purposes of distribution matching with expert states . If the ground truth c depends ( s , s′ ) , we use discriminators F ⊂ S × S 7→ [ 0 , 1 ] . Furthermore , we use a model class P ⊂ S × A 7→ ∆ ( S ) to capture the ground truth transition P ? . For the theoretical results in the paper , we assume realizability : Assumption 1 . 
Assume F and P captures ground truth cost and transition , i.e. , c ∈ F , P ? ∈ P . We will use Integral probability metric ( IPM ) with F as our divergence measure . Note that if c ∈ F and c : S 7→ [ 0 , 1 ] , then IPM defined as maxf∈F Es∼dπf ( s ) − Es∼dπe f ( s ) directly upper 2Without any additional assumptions , in ILFO , learning to optimize action-dependent cost c ( s , a ) ( or c ( s , a , s′ ) is not possible . For example , if there are two sequences of actions that generate the same sequence of states , without seeing expert ’ s preference over actions , we do not know which actions to commit to . Algorithm 1 MobILE : The framework of Model-based Imitation Learning and Exploring for ILFO 1 : Require : IPM class F , dynamics model class P , policy class Π , bonus function class B , expert dataset De ≡ { sei } Ni=1 . 2 : Initialize policy π0 ∈ Π , replay buffer D−1 = ∅ . 3 : for t = 0 , · · · , T − 1 do 4 : Execute πt in true environment P ? to get samples τt = { sk , ak } H−1k=0 ∪ sH . Append to replay buffer Dt = Dt−1 ∪ τt . 5 : Update model and bonus : P̂t+1 : S ×A → S and bt+1 : S ×A → R+ using buffer Dt . 6 : Optimistic model-based min-max IL : obtain πt+1 by solving equation ( 1 ) with P̂t+1 , bt+1 , De . 7 : end for 8 : Return πT . bounds sub-optimality gap V π − V πe , where V π is the expected total cost of π under cost function c. This justifies why minimizing IPM between two state distributions suffices [ 22 , 59 ] . Similarly , if c depends on s , s′ , we can simply minimize IPM between two state-next state distributions , i.e. , maxf Es , s′∼dπf ( s , s′ ) − Es , s′∼dπe f ( s , s′ ) where discriminators now take ( s , s′ ) as input.3 To permit generalization , we require P to have bounded complexity . For analytical simplicity , we assume F is discrete ( but exponentially large ) , and we require the sample complexity of any PAC algorithm to scale polynomially with respect to its complexity ln ( |F| ) . The ln |F| complexity can be replaced to bounded conventional complexity measures such as Rademacher complexity and covering number for continuous F ( e.g. , F being a Reproducing Kernel Hilbert Space ) .", "role": "user" }, { "content": "This paper investigates the problem of imitation learning from observation alone (ILFO). The authors leverage the idea of optimism in the face of uncertainty and propose a novel model-based framework for ILFO that can handle both discrete and continuous settings. The authors provide a regret bound of the framework and show the exponential gap between IL and ILFO and the benefit of uncertainty-based exploration. The authors further corroborate the theoretical and algorithmic results with experiments in simulation.", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.BACKGROUND: A Stanford University study reported that in asymptomatic GERD patients who were being treated with a proton pump inhibitor (PPI), 50% had pathologic esophageal acid exposure. ABSTRACT.AIM: We considered the possibility that the high prevalence of pathologic esophageal reflux might simply have resulted from calculating acidity as time pH < 4. ABSTRACT.METHODS: We calculated integrated acidity and time pH < 4 from the 49 recordings of 24-hour gastric and esophageal pH from the Stanford study as well as from another study of 57 GERD subjects, 26 of whom were treated for 8 days with 20 mg omeprazole or 20 mg rabeprazole in a 2-way crossover fashion. ABSTRACT.RESULTS: The prevalence of pathologic 24-hour esophageal reflux in both studies was significantly higher when measured as time pH < 4 than when measured as integrated acidity. This difference was entirely attributable to a difference between the two measures during the nocturnal period. Nocturnal gastric acid breakthrough was not a useful predictor of pathologic nocturnal esophageal reflux. ABSTRACT.CONCLUSION: In GERD subjects treated with a PPI, measuring time esophageal pH < 4 will significantly overestimate the prevalence of pathologic esophageal acid exposure over 24 hours and during the nocturnal period. Extracted Data:
[["integrated acidity", "time pH \u003c 4", "the prevalence of pathologic 24-hour esophageal reflux", "significantly decreased", "The prevalence of pathologic 24-hour esophageal reflux in both studies was significantly higher when measured as time pH \u003c 4 than when measured as integrated acidity"]]
science.evidence_inference.98
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: A Stanford University study reported that in asymptomatic GERD patients who were being treated with a proton pump inhibitor (PPI), 50% had pathologic esophageal acid exposure.\n\nABSTRACT.AIM: We considered the possibility that the high prevalence of pathologic esophageal reflux might simply have resulted from calculating acidity as time pH < 4.\n\nABSTRACT.METHODS: We calculated integrated acidity and time pH < 4 from the 49 recordings of 24-hour gastric and esophageal pH from the Stanford study as well as from another study of 57 GERD subjects, 26 of whom were treated for 8 days with 20 mg omeprazole or 20 mg rabeprazole in a 2-way crossover fashion.\n\nABSTRACT.RESULTS: The prevalence of pathologic 24-hour esophageal reflux in both studies was significantly higher when measured as time pH < 4 than when measured as integrated acidity. This difference was entirely attributable to a difference between the two measures during the nocturnal period. Nocturnal gastric acid breakthrough was not a useful predictor of pathologic nocturnal esophageal reflux.\n\nABSTRACT.CONCLUSION: In GERD subjects treated with a PPI, measuring time esophageal pH < 4 will significantly overestimate the prevalence of pathologic esophageal acid exposure over 24 hours and during the nocturnal period.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"integrated acidity\", \"time pH \\u003c 4\", \"the prevalence of pathologic 24-hour esophageal reflux\", \"significantly decreased\", \"The prevalence of pathologic 24-hour esophageal reflux in both studies was significantly higher when measured as time pH \\u003c 4 than when measured as integrated acidity\"]]", "role": "assistant" } ]
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: Insulin effects appetite via ventral tegmental neurons. Research Abstract: Dopaminergic midbrain neurons integrate signals on food palatability and food-associated reward into the complex control of energy homeostasis. To define the role of insulin receptor (IR) signaling in this circuitry, we inactivated IR signaling in tyrosine hydroxylase (Th)-expressing cells of mice (IR(ΔTh)). IR inactivation in Th-expressing cells of mice resulted in increased body weight, increased fat mass, and hyperphagia. While insulin acutely stimulated firing frequency in 50% of dopaminergic VTA/SN neurons, this response was abolished in IR(ΔTh) mice. Moreover, these mice exhibited an altered response to cocaine under food-restricted conditions. Taken together, these data provide in vivo evidence for a critical role of insulin signaling in catecholaminergic neurons to control food intake and energy homeostasis.
{"verdict": "SUPPORT", "evidence": ["While insulin acutely stimulated firing frequency in 50% of dopaminergic VTA/SN neurons, this response was abolished in IR(\u0394Th) mice.", "Taken together, these data provide in vivo evidence for a critical role of insulin signaling in catecholaminergic neurons to control food intake and energy homeostasis."]}
science.scifact_entailment.412
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nInsulin effects appetite via ventral tegmental neurons.\n\nResearch Abstract:\nDopaminergic midbrain neurons integrate signals on food palatability and food-associated reward into the complex control of energy homeostasis.\nTo define the role of insulin receptor (IR) signaling in this circuitry, we inactivated IR signaling in tyrosine hydroxylase (Th)-expressing cells of mice (IR(ΔTh)).\nIR inactivation in Th-expressing cells of mice resulted in increased body weight, increased fat mass, and hyperphagia.\nWhile insulin acutely stimulated firing frequency in 50% of dopaminergic VTA/SN neurons, this response was abolished in IR(ΔTh) mice.\nMoreover, these mice exhibited an altered response to cocaine under food-restricted conditions.\nTaken together, these data provide in vivo evidence for a critical role of insulin signaling in catecholaminergic neurons to control food intake and energy homeostasis.", "role": "user" }, { "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"While insulin acutely stimulated firing frequency in 50% of dopaminergic VTA/SN neurons, this response was abolished in IR(\\u0394Th) mice.\", \"Taken together, these data provide in vivo evidence for a critical role of insulin signaling in catecholaminergic neurons to control food intake and energy homeostasis.\"]}", "role": "assistant" } ]
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: Transcription rates in S. cerevisiae range between 7 and 12 kb/min. Research Abstract: A number of proteins and drugs have been implicated in the process of transcriptional elongation by RNA polymerase (Pol) II, but the factors that govern the elongation rate (nucleotide additions per min) and processivity (nucleotide additions per initiation event) in vivo are poorly understood. Here, we show that a mutation in the Rpb2 subunit of Pol II reduces both the elongation rate and processivity in vivo. In contrast, none of the putative elongation factors tested affect the elongation rate, although mutations in the THO complex and in Spt4 significantly reduce processivity. The drugs 6-azauracil and mycophenolic acid reduce both the elongation rate and processivity, and this processivity defect is aggravated by mutations in Spt4, TFIIS, and CTDK-1. Our results suggest that, in vivo, a reduced rate of Pol II elongation leads to premature dissociation along the chromatin template and that Pol II processivity can be uncoupled from elongation rate.
{"verdict": "NEI", "evidence": []}
science.scifact_entailment.852
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nTranscription rates in S. cerevisiae range between 7 and 12 kb/min.\n\nResearch Abstract:\nA number of proteins and drugs have been implicated in the process of transcriptional elongation by RNA polymerase (Pol) II, but the factors that govern the elongation rate (nucleotide additions per min) and processivity (nucleotide additions per initiation event) in vivo are poorly understood.\nHere, we show that a mutation in the Rpb2 subunit of Pol II reduces both the elongation rate and processivity in vivo.\nIn contrast, none of the putative elongation factors tested affect the elongation rate, although mutations in the THO complex and in Spt4 significantly reduce processivity.\nThe drugs 6-azauracil and mycophenolic acid reduce both the elongation rate and processivity, and this processivity defect is aggravated by mutations in Spt4, TFIIS, and CTDK-1.\nOur results suggest that, in vivo, a reduced rate of Pol II elongation leads to premature dissociation along the chromatin template and that Pol II processivity can be uncoupled from elongation rate.", "role": "user" }, { "content": "{\"verdict\": \"NEI\", \"evidence\": []}", "role": "assistant" } ]
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: The recognizer has been evaluated on 5k and 20k test data for the English and French languages using similar style corpora. For WSJ, paragraphs were selected ensuring not more than one word was out of the 5.6k most frequent words<cite> [ 19]</cite> , and these additional words were then included as part of the vocabulary.
["Uses"]
science.multicite_intent_classification.1222
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nThe recognizer has been evaluated on 5k and 20k test data for the English and French languages using similar style corpora. For WSJ, paragraphs were selected ensuring not more than one word was out of the 5.6k most frequent words<cite> [ 19]</cite> , and these additional words were then included as part of the vocabulary.", "role": "user" }, { "content": "[\"Uses\"]", "role": "assistant" } ]
science.discomat_te
| | WG1 | WG2T | WG5 | WG4 | WG3 | | Er3+ (mol%) | 13.4 | 12 | 9.8 | 10.4 | 14.5 | | Crystal phase | Amorphous | Unknown | La x Er(1- x )F3 | La x Er(1- x )F3 | La x Er(1- x )F3 | | t (ms) | 2.4 (23%) | 3.8 (100%) | 2.3 (32%) | 2.3 (32%) | 2.2 (32%) | | t (ms) | 10.4 (77%) | | 10.3 (68%) | 9.0 (68%) | 8.6 (68%) | | FWHM | 58 | 74 | 62 | 62 | 71 | Caption: XRD, Er3+ concentration (+-1mol%), lifetime t (+-0.5ms) and bandwidth FWHM (+-0.5nm) for the 4I13/2 erbium level of as-deposited amorphous waveguide (WG1), CG waveguides obtained by one-step process (WG3, WG4, WG5) and CG waveguide obtained by two-step process (WG2T). The population ratio of each erbium site is given in brackets Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance: {"value": "xx", "type": "Other"} {"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"} Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line. Output:
{"value": "13.4", "type": "Other"} {"value": "12", "type": "Other"} {"value": "9.8", "type": "Other"} {"value": "10.4", "type": "Other"} {"value": "14.5", "type": "Other"} {"value": "2.4", "type": "Other"} {"value": "3.8", "type": "Other"} {"value": "2.3", "type": "Other"} {"value": "2.3", "type": "Other"} {"value": "2.2", "type": "Other"} {"value": "10.4", "type": "Other"} {"value": "10.3", "type": "Other"} {"value": "9.0", "type": "Other"} {"value": "8.6", "type": "Other"} {"value": "58", "type": "Other"} {"value": "74", "type": "Other"} {"value": "62", "type": "Other"} {"value": "62", "type": "Other"} {"value": "71", "type": "Other"}
science.discomat_te.333
[ { "content": "| | WG1 | WG2T | WG5 | WG4 | WG3 |\n| Er3+ (mol%) | 13.4 | 12 | 9.8 | 10.4 | 14.5 |\n| Crystal phase | Amorphous | Unknown | La x Er(1- x )F3 | La x Er(1- x )F3 | La x Er(1- x )F3 |\n| t (ms) | 2.4 (23%) | 3.8 (100%) | 2.3 (32%) | 2.3 (32%) | 2.2 (32%) |\n| t (ms) | 10.4 (77%) | | 10.3 (68%) | 9.0 (68%) | 8.6 (68%) |\n| FWHM | 58 | 74 | 62 | 62 | 71 |\n\nCaption: XRD, Er3+ concentration (+-1mol%), lifetime t (+-0.5ms) and bandwidth FWHM (+-0.5nm) for the 4I13/2 erbium level of as-deposited amorphous waveguide (WG1), CG waveguides obtained by one-step process (WG3, WG4, WG5) and CG waveguide obtained by two-step process (WG2T). The population ratio of each erbium site is given in brackets\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"13.4\", \"type\": \"Other\"}\n{\"value\": \"12\", \"type\": \"Other\"}\n{\"value\": \"9.8\", \"type\": \"Other\"}\n{\"value\": \"10.4\", \"type\": \"Other\"}\n{\"value\": \"14.5\", \"type\": \"Other\"}\n{\"value\": \"2.4\", \"type\": \"Other\"}\n{\"value\": \"3.8\", \"type\": \"Other\"}\n{\"value\": \"2.3\", \"type\": \"Other\"}\n{\"value\": \"2.3\", \"type\": \"Other\"}\n{\"value\": \"2.2\", \"type\": \"Other\"}\n{\"value\": \"10.4\", \"type\": \"Other\"}\n{\"value\": \"10.3\", \"type\": \"Other\"}\n{\"value\": \"9.0\", \"type\": \"Other\"}\n{\"value\": \"8.6\", \"type\": \"Other\"}\n{\"value\": \"58\", \"type\": \"Other\"}\n{\"value\": \"74\", \"type\": \"Other\"}\n{\"value\": \"62\", \"type\": \"Other\"}\n{\"value\": \"62\", \"type\": \"Other\"}\n{\"value\": \"71\", \"type\": \"Other\"}", "role": "assistant" } ]
science.scierc_ner
Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context. Input Abstract: This paper explores the issue of using different co-occurrence similarities between terms for separating query terms that are useful for retrieval from those that are harmful. The hypothesis under examination is that useful terms tend to be more similar to each other than to other query terms. Preliminary experiments with similarities computed using first-order and second-order co-occurrence seem to confirm the hypothesis. Term similarities could then be used for determining which query terms are useful and best reflect the user 's information need. A possible application would be to use this source of evidence for tuning the weights of the query terms.
Output Entities: {"Generic": ["query terms", "those", "similarities", "useful terms"], "Material": [], "Method": [], "Metric": [], "OtherScientificTerm": ["co-occurrence similarities", "first-order and second-order co-occurrence"], "Task": ["retrieval"]}
science.scierc_ner.46
[ { "content": "Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.\n\nInput Abstract:\n\nThis paper explores the issue of using different co-occurrence similarities between terms for separating query terms that are useful for retrieval from those that are harmful. The hypothesis under examination is that useful terms tend to be more similar to each other than to other query terms. Preliminary experiments with similarities computed using first-order and second-order co-occurrence seem to confirm the hypothesis. Term similarities could then be used for determining which query terms are useful and best reflect the user 's information need. A possible application would be to use this source of evidence for tuning the weights of the query terms.", "role": "user" }, { "content": "Output Entities:\n\n{\"Generic\": [\"query terms\", \"those\", \"similarities\", \"useful terms\"], \"Material\": [], \"Method\": [], \"Metric\": [], \"OtherScientificTerm\": [\"co-occurrence similarities\", \"first-order and second-order co-occurrence\"], \"Task\": [\"retrieval\"]}", "role": "assistant" } ]
science.bioasq_list_qa
Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array. Context: MS and hospitalized were included and analyzed after which they were divided into the primary progressive MS A and B groups, the relapsing-remitting MS (RRMS) C and D groups Multiple sclerosis presents with different phenotypes, most commonly a relapsing-remitting course and, less frequently, a progressive accumulation of disability from disease onset (primary progressive multiple sclerosis). The majority of people with relapsing-remitting multiple sclerosis, after a variable time, switch to a stage characterised by gradual neurological worsening known as secondary progressive multiple sclerosis. relapsing-remitting MS, primary and secondary progressive MS) Question: List the stages/types of Multiple Sclerosis.
["Primary Progressive", "Relapsing-Remitting", "Secondary Progressive"]
science.bioasq_list_qa.667
[ { "content": "Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.\n\nContext: MS and hospitalized were included and analyzed after which they were divided into the primary progressive MS A and B groups, the relapsing-remitting MS (RRMS) C and D groups\nMultiple sclerosis presents with different phenotypes, most commonly a relapsing-remitting course and, less frequently, a progressive accumulation of disability from disease onset (primary progressive multiple sclerosis). The majority of people with relapsing-remitting multiple sclerosis, after a variable time, switch to a stage characterised by gradual neurological worsening known as secondary progressive multiple sclerosis.\nrelapsing-remitting MS, primary and secondary progressive MS)\n\nQuestion: List the stages/types of Multiple Sclerosis.", "role": "user" }, { "content": "[\"Primary Progressive\", \"Relapsing-Remitting\", \"Secondary Progressive\"]", "role": "assistant" } ]
science.qasper_abstractive_qa
Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines. Document Content: Combating Adversarial Misspellings with Robust Word Recognition To combat adversarial spelling mistakes, we propose placing a word recognition model in front of the downstream classifier. Our word recognition models build upon the RNN semicharacter architecture, introducing several new backoff strategies for handling rare and unseen words. Trained to recognize words corrupted by random adds, drops, swaps, and keyboard mistakes, our method achieves 32% relative (and 3.3% absolute) error reduction over the vanilla semi-character model. Notably, our pipeline confers robustness on the downstream classifier, outperforming both adversarial training and off-the-shelf spell checkers. Against a BERT model fine-tuned for sentiment analysis, a single adversarially-chosen character attack lowers accuracy from 90.3% to 45.8%. Our defense restores accuracy to 75% 1 . Surprisingly, better word recognition does not always entail greater robustness. Our analysis reveals that robustness also depends upon a quantity that we denote the sensitivity. Introduction Despite the rapid progress of deep learning techniques on diverse supervised learning tasks, these models remain brittle to subtle shifts in the data distribution. Even when the permissible changes are confined to barely-perceptible perturbations, training robust models remains an open challenge. Following the discovery that imperceptible attacks could cause image recognition models to misclassify examples BIBREF0 , a veritable sub-field has emerged in which authors iteratively propose attacks and countermeasures. For all the interest in adversarial computer vision, these attacks are rarely encountered outside of academic research. However, adversarial misspellings constitute a longstanding real-world problem. Spammers continually bombard email servers, subtly misspelling words in efforts to evade spam detection while preserving the emails' intended meaning BIBREF1 , BIBREF2 . As another example, programmatic censorship on the Internet has spurred communities to adopt similar methods to communicate surreptitiously BIBREF3 . In this paper, we focus on adversarially-chosen spelling mistakes in the context of text classification, addressing the following attack types: dropping, adding, and swapping internal characters within words. These perturbations are inspired by psycholinguistic studies BIBREF4 , BIBREF5 which demonstrated that humans can comprehend text altered by jumbling internal characters, provided that the first and last characters of each word remain unperturbed. First, in experiments addressing both BiLSTM and fine-tuned BERT models, comprising four different input formats: word-only, char-only, word+char, and word-piece BIBREF6 , we demonstrate that an adversary can degrade a classifier's performance to that achieved by random guessing. This requires altering just two characters per sentence. Such modifications might flip words either to a different word in the vocabulary or, more often, to the out-of-vocabulary token UNK. Consequently, adversarial edits can degrade a word-level model by transforming the informative words to UNK. Intuitively, one might suspect that word-piece and character-level models would be less susceptible to spelling attacks as they can make use of the residual word context. However, our experiments demonstrate that character and word-piece models are in fact more vulnerable. 
We show that this is due to the adversary's effective capacity for finer grained manipulations on these models. While against a word-level model, the adversary is mostly limited to UNK-ing words, against a word-piece or character-level model, each character-level add, drop, or swap produces a distinct input, providing the adversary with a greater set of options. Second, we evaluate first-line techniques including data augmentation and adversarial training, demonstrating that they offer only marginal benefits here, e.g., a BERT model achieving $90.3$ accuracy on a sentiment classification task, is degraded to $64.1$ by an adversarially-chosen 1-character swap in the sentence, which can only be restored to $69.2$ by adversarial training. Third (our primary contribution), we propose a task-agnostic defense, attaching a word recognition model that predicts each word in a sentence given a full sequence of (possibly misspelled) inputs. The word recognition model's outputs form the input to a downstream classification model. Our word recognition models build upon the RNN-based semi-character word recognition model due to BIBREF7 . While our word recognizers are trained on domain-specific text from the task at hand, they often predict UNK at test time, owing to the small domain-specific vocabulary. To handle unobserved and rare words, we propose several backoff strategies including falling back on a generic word recognizer trained on a larger corpus. Incorporating our defenses, BERT models subject to 1-character attacks are restored to $88.3$ , $81.1$ , $78.0$ accuracy for swap, drop, add attacks respectively, as compared to $69.2$ , $63.6$ , and $50.0$ for adversarial training Fourth, we offer a detailed qualitative analysis, demonstrating that a low word error rate alone is insufficient for a word recognizer to confer robustness on the downstream task. Additionally, we find that it is important that the recognition model supply few degrees of freedom to an attacker. We provide a metric to quantify this notion of sensitivity in word recognition models and study its relation to robustness empirically. Models with low sensitivity and word error rate are most robust. Model Sensitivity In computer vision, an important factor determining the success of an adversary is the norm constraint on the perturbations allowed to an image ( $|| \bf x - \bf x^{\prime }||_{\infty } < \epsilon $ ). Higher values of $\epsilon $ lead to a higher chance of mis-classification for at least one $\bf x^{\prime }$ . Defense methods such as quantization BIBREF22 and thermometer encoding BIBREF23 try to reduce the space of perturbations available to the adversary by making the model invariant to small changes in the input. In NLP, we often get such invariance for free, e.g., for a word-level model, most of the perturbations produced by our character-level adversary lead to an UNK at its input. If the model is robust to the presence of these UNK tokens, there is little room for an adversary to manipulate it. Character-level models, on the other hand, despite their superior performance in many tasks, do not enjoy such invariance. This characteristic invariance could be exploited by an attacker. Thus, to limit the number of different inputs to the classifier, we wish to reduce the number of distinct word recognition outputs that an attacker can induce, not just the number of words on which the model is “fooled”. We denote this property of a model as its sensitivity. 
We can quantify this notion for a word recognition system $W$ as the expected number of unique outputs it assigns to a set of adversarial perturbations. Given a sentence $s$ from the set of sentences $\mathcal {S}$ , let $A(s) = {s_1}^{\prime } , {s_2}^{\prime }, \dots , {s_n}^{\prime }$ denote the set of $n$ perturbations to it under attack type $A$ , and let $V$ be the function that maps strings to an input representation for the downstream classifier. For a word level model, $V$ would transform sentences to a sequence of word ids, mapping OOV words to the same UNK ID. Whereas, for a char (or word+char, word-piece) model, $V$ would map inputs to a sequence of character IDs. Formally, sensitivity is defined as $$S_{W,V}^A=\mathbb {E}_{s}\left[\frac{\#_{u}(V \circ W({s_1}^{\prime }), \dots , V \circ W({s_n}^{\prime }))}{n}\right] ,$$ (Eq. 12) where $V \circ W (s_i)$ returns the input representation (of the downstream classifier) for the output string produced by the word-recognizer $W$ using $s_i$ and $\#_{u}(\cdot )$ counts the number of unique arguments. Intuitively, we expect a high value of $S_{W, V}^A$ to lead to a lower robustness of the downstream classifier, since the adversary has more degrees of freedom to attack the classifier. Thus, when using word recognition as a defense, it is prudent to design a low sensitivity system with a low error rate. However, as we will demonstrate, there is often a trade-off between sensitivity and error rate. Understanding Model Sensitivity To study model sensitivity, for each sentence, we perturb one randomly-chosen word and replace it with all possible perturbations under a given attack type. The resulting set of perturbed sentences is then fed to the word recognizer (whose sensitivity is to be estimated). As described in equation 12 , we count the number of unique predictions from the output sentences. Two corrections are considered unique if they are mapped differently by the downstream classifier. The neutral backoff variant has the lowest sensitivity (Table 5 ). This is expected, as it returns a fixed neutral word whenever the ScRNN predicts an UNK, therefore reducing the number of unique outputs it predicts. Open vocabulary (i.e. char-only, word+char, word-piece) downstream classifiers consider every unique combination of characters differently, whereas word-only classifiers internally treat all out of vocabulary (OOV) words alike. Hence, for char-only, word+char, and word-piece models, the pass-through version is more sensitive than the background variant, as it passes words as is (and each combination is considered uniquely). However, for word-only models, pass-through is less sensitive as all the OOV character combinations are rendered identical. Ideally, a preferred defense is one with low sensitivity and word error rate. In practice, however, we see that a low error rate often comes at the cost of sensitivity. We see this trade-off in Figure 2 , where we plot WER and sensitivity on the two axes, and depict the robustness when using different backoff variants. Generally, sensitivity is the more dominant factor out of the two, as the error rates of the considered variants are reasonably low. We verify if the sentiment (of the reviews) is preserved with char-level attacks. In a human study with 50 attacked (and subsequently misclassified), and 50 unchanged reviews, it was noted that 48 and 49, respectively, preserved the sentiment. Query: What does the "sensitivity" quantity denote? JSON Response:
{"answer": "The expected number of unique outputs a word recognition system assigns to a set of adversarial perturbations ", "evidence": ["In NLP, we often get such invariance for free, e.g., for a word-level model, most of the perturbations produced by our character-level adversary lead to an UNK at its input. If the model is robust to the presence of these UNK tokens, there is little room for an adversary to manipulate it. Character-level models, on the other hand, despite their superior performance in many tasks, do not enjoy such invariance. This characteristic invariance could be exploited by an attacker. Thus, to limit the number of different inputs to the classifier, we wish to reduce the number of distinct word recognition outputs that an attacker can induce, not just the number of words on which the model is \u201cfooled\u201d. We denote this property of a model as its sensitivity.", "We can quantify this notion for a word recognition system $W$ as the expected number of unique outputs it assigns to a set of adversarial perturbations. Given a sentence $s$ from the set of sentences $\\mathcal {S}$ , let $A(s) = {s_1}^{\\prime } , {s_2}^{\\prime }, \\dots , {s_n}^{\\prime }$ denote the set of $n$ perturbations to it under attack type $A$ , and let $V$ be the function that maps strings to an input representation for the downstream classifier. For a word level model, $V$ would transform sentences to a sequence of word ids, mapping OOV words to the same UNK ID. Whereas, for a char (or word+char, word-piece) model, $V$ would map inputs to a sequence of character IDs. Formally, sensitivity is defined as"]}
science.qasper_abstractive_qa.169.0
[ { "content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nCombating Adversarial Misspellings with Robust Word Recognition\n\nTo combat adversarial spelling mistakes, we propose placing a word recognition model in front of the downstream classifier. Our word recognition models build upon the RNN semicharacter architecture, introducing several new backoff strategies for handling rare and unseen words. Trained to recognize words corrupted by random adds, drops, swaps, and keyboard mistakes, our method achieves 32% relative (and 3.3% absolute) error reduction over the vanilla semi-character model. Notably, our pipeline confers robustness on the downstream classifier, outperforming both adversarial training and off-the-shelf spell checkers. Against a BERT model fine-tuned for sentiment analysis, a single adversarially-chosen character attack lowers accuracy from 90.3% to 45.8%. Our defense restores accuracy to 75%\n1 . Surprisingly, better word recognition does not always entail greater robustness. Our analysis reveals that robustness also depends upon a quantity that we denote the sensitivity.\n\nIntroduction\nDespite the rapid progress of deep learning techniques on diverse supervised learning tasks, these models remain brittle to subtle shifts in the data distribution. Even when the permissible changes are confined to barely-perceptible perturbations, training robust models remains an open challenge. Following the discovery that imperceptible attacks could cause image recognition models to misclassify examples BIBREF0 , a veritable sub-field has emerged in which authors iteratively propose attacks and countermeasures.\nFor all the interest in adversarial computer vision, these attacks are rarely encountered outside of academic research. However, adversarial misspellings constitute a longstanding real-world problem. Spammers continually bombard email servers, subtly misspelling words in efforts to evade spam detection while preserving the emails' intended meaning BIBREF1 , BIBREF2 . As another example, programmatic censorship on the Internet has spurred communities to adopt similar methods to communicate surreptitiously BIBREF3 .\nIn this paper, we focus on adversarially-chosen spelling mistakes in the context of text classification, addressing the following attack types: dropping, adding, and swapping internal characters within words. These perturbations are inspired by psycholinguistic studies BIBREF4 , BIBREF5 which demonstrated that humans can comprehend text altered by jumbling internal characters, provided that the first and last characters of each word remain unperturbed.\nFirst, in experiments addressing both BiLSTM and fine-tuned BERT models, comprising four different input formats: word-only, char-only, word+char, and word-piece BIBREF6 , we demonstrate that an adversary can degrade a classifier's performance to that achieved by random guessing. This requires altering just two characters per sentence. Such modifications might flip words either to a different word in the vocabulary or, more often, to the out-of-vocabulary token UNK. Consequently, adversarial edits can degrade a word-level model by transforming the informative words to UNK. Intuitively, one might suspect that word-piece and character-level models would be less susceptible to spelling attacks as they can make use of the residual word context. 
However, our experiments demonstrate that character and word-piece models are in fact more vulnerable. We show that this is due to the adversary's effective capacity for finer grained manipulations on these models. While against a word-level model, the adversary is mostly limited to UNK-ing words, against a word-piece or character-level model, each character-level add, drop, or swap produces a distinct input, providing the adversary with a greater set of options.\nSecond, we evaluate first-line techniques including data augmentation and adversarial training, demonstrating that they offer only marginal benefits here, e.g., a BERT model achieving $90.3$ accuracy on a sentiment classification task, is degraded to $64.1$ by an adversarially-chosen 1-character swap in the sentence, which can only be restored to $69.2$ by adversarial training.\nThird (our primary contribution), we propose a task-agnostic defense, attaching a word recognition model that predicts each word in a sentence given a full sequence of (possibly misspelled) inputs. The word recognition model's outputs form the input to a downstream classification model. Our word recognition models build upon the RNN-based semi-character word recognition model due to BIBREF7 . While our word recognizers are trained on domain-specific text from the task at hand, they often predict UNK at test time, owing to the small domain-specific vocabulary. To handle unobserved and rare words, we propose several backoff strategies including falling back on a generic word recognizer trained on a larger corpus. Incorporating our defenses, BERT models subject to 1-character attacks are restored to $88.3$ , $81.1$ , $78.0$ accuracy for swap, drop, add attacks respectively, as compared to $69.2$ , $63.6$ , and $50.0$ for adversarial training\nFourth, we offer a detailed qualitative analysis, demonstrating that a low word error rate alone is insufficient for a word recognizer to confer robustness on the downstream task. Additionally, we find that it is important that the recognition model supply few degrees of freedom to an attacker. We provide a metric to quantify this notion of sensitivity in word recognition models and study its relation to robustness empirically. Models with low sensitivity and word error rate are most robust.\n\nModel Sensitivity\nIn computer vision, an important factor determining the success of an adversary is the norm constraint on the perturbations allowed to an image ( $|| \\bf x - \\bf x^{\\prime }||_{\\infty } < \\epsilon $ ). Higher values of $\\epsilon $ lead to a higher chance of mis-classification for at least one $\\bf x^{\\prime }$ . Defense methods such as quantization BIBREF22 and thermometer encoding BIBREF23 try to reduce the space of perturbations available to the adversary by making the model invariant to small changes in the input.\nIn NLP, we often get such invariance for free, e.g., for a word-level model, most of the perturbations produced by our character-level adversary lead to an UNK at its input. If the model is robust to the presence of these UNK tokens, there is little room for an adversary to manipulate it. Character-level models, on the other hand, despite their superior performance in many tasks, do not enjoy such invariance. This characteristic invariance could be exploited by an attacker. 
Thus, to limit the number of different inputs to the classifier, we wish to reduce the number of distinct word recognition outputs that an attacker can induce, not just the number of words on which the model is “fooled”. We denote this property of a model as its sensitivity.\nWe can quantify this notion for a word recognition system $W$ as the expected number of unique outputs it assigns to a set of adversarial perturbations. Given a sentence $s$ from the set of sentences $\\mathcal {S}$ , let $A(s) = {s_1}^{\\prime } , {s_2}^{\\prime }, \\dots , {s_n}^{\\prime }$ denote the set of $n$ perturbations to it under attack type $A$ , and let $V$ be the function that maps strings to an input representation for the downstream classifier. For a word level model, $V$ would transform sentences to a sequence of word ids, mapping OOV words to the same UNK ID. Whereas, for a char (or word+char, word-piece) model, $V$ would map inputs to a sequence of character IDs. Formally, sensitivity is defined as \n$$S_{W,V}^A=\\mathbb {E}_{s}\\left[\\frac{\\#_{u}(V \\circ W({s_1}^{\\prime }), \\dots , V \\circ W({s_n}^{\\prime }))}{n}\\right] ,$$ (Eq. 12) \nwhere $V \\circ W (s_i)$ returns the input representation (of the downstream classifier) for the output string produced by the word-recognizer $W$ using $s_i$ and $\\#_{u}(\\cdot )$ counts the number of unique arguments.\nIntuitively, we expect a high value of $S_{W, V}^A$ to lead to a lower robustness of the downstream classifier, since the adversary has more degrees of freedom to attack the classifier. Thus, when using word recognition as a defense, it is prudent to design a low sensitivity system with a low error rate. However, as we will demonstrate, there is often a trade-off between sensitivity and error rate.\n\nUnderstanding Model Sensitivity\nTo study model sensitivity, for each sentence, we perturb one randomly-chosen word and replace it with all possible perturbations under a given attack type. The resulting set of perturbed sentences is then fed to the word recognizer (whose sensitivity is to be estimated). As described in equation 12 , we count the number of unique predictions from the output sentences. Two corrections are considered unique if they are mapped differently by the downstream classifier.\nThe neutral backoff variant has the lowest sensitivity (Table 5 ). This is expected, as it returns a fixed neutral word whenever the ScRNN predicts an UNK, therefore reducing the number of unique outputs it predicts. Open vocabulary (i.e. char-only, word+char, word-piece) downstream classifiers consider every unique combination of characters differently, whereas word-only classifiers internally treat all out of vocabulary (OOV) words alike. Hence, for char-only, word+char, and word-piece models, the pass-through version is more sensitive than the background variant, as it passes words as is (and each combination is considered uniquely). However, for word-only models, pass-through is less sensitive as all the OOV character combinations are rendered identical.\nIdeally, a preferred defense is one with low sensitivity and word error rate. In practice, however, we see that a low error rate often comes at the cost of sensitivity. We see this trade-off in Figure 2 , where we plot WER and sensitivity on the two axes, and depict the robustness when using different backoff variants. 
Generally, sensitivity is the more dominant factor out of the two, as the error rates of the considered variants are reasonably low.\nWe verify if the sentiment (of the reviews) is preserved with char-level attacks. In a human study with 50 attacked (and subsequently misclassified), and 50 unchanged reviews, it was noted that 48 and 49, respectively, preserved the sentiment.\n\nQuery: What does the \"sensitivity\" quantity denote?\n\nJSON Response:", "role": "user" }, { "content": "{\"answer\": \"The expected number of unique outputs a word recognition system assigns to a set of adversarial perturbations \", \"evidence\": [\"In NLP, we often get such invariance for free, e.g., for a word-level model, most of the perturbations produced by our character-level adversary lead to an UNK at its input. If the model is robust to the presence of these UNK tokens, there is little room for an adversary to manipulate it. Character-level models, on the other hand, despite their superior performance in many tasks, do not enjoy such invariance. This characteristic invariance could be exploited by an attacker. Thus, to limit the number of different inputs to the classifier, we wish to reduce the number of distinct word recognition outputs that an attacker can induce, not just the number of words on which the model is \\u201cfooled\\u201d. We denote this property of a model as its sensitivity.\", \"We can quantify this notion for a word recognition system $W$ as the expected number of unique outputs it assigns to a set of adversarial perturbations. Given a sentence $s$ from the set of sentences $\\\\mathcal {S}$ , let $A(s) = {s_1}^{\\\\prime } , {s_2}^{\\\\prime }, \\\\dots , {s_n}^{\\\\prime }$ denote the set of $n$ perturbations to it under attack type $A$ , and let $V$ be the function that maps strings to an input representation for the downstream classifier. For a word level model, $V$ would transform sentences to a sequence of word ids, mapping OOV words to the same UNK ID. Whereas, for a char (or word+char, word-piece) model, $V$ would map inputs to a sequence of character IDs. Formally, sensitivity is defined as\"]}", "role": "assistant" } ]
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: Recent studies based on NMT either focus entirely on the use of MT-based sentence embeddings in other tasks (Schwenk, 2018) , on translation quality (Lu et al., 2018) , on speed comparison (Britz et al., 2017) , or only exploring a bilingual scenario <cite>(Cífka and Bojar, 2018)</cite> .
["Background"]
science.multicite_intent_classification.2090
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nRecent studies based on NMT either focus entirely on the use of MT-based sentence embeddings in other tasks (Schwenk, 2018) , on translation quality (Lu et al., 2018) , on speed comparison (Britz et al., 2017) , or only exploring a bilingual scenario <cite>(Cífka and Bojar, 2018)</cite> .", "role": "user" }, { "content": "[\"Background\"]", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT: Background: Cognitive impairment is one of the most crucial disorders among multiple sclerosis (MS) patients. Since MS is an inflammatory disease and Boswellia papyrifera has anti-inflammatory effects, the influence of B. papyrifera on cognitive impairment in MS patients has been investigated in the present study. <p> Methods: In this clinical trial, 80 MS patients who referred to the clinic of Shafa Hospital, Kerman, Iran were selected. Having completed a written consent form, patients with relapsing remitting MS, with no occurrence of a new attack throughout 1 month before the study, with no pregnancy or breastfeeding entered the study. The patients were randomly divided into two groups; then Brief International Cognitive Assessment for MS (BICAMS) test was carried out. One group received B. papyrifera (capsule 300 mg, twice a day) while the other group received placebo with the same dose for 2 months. After 2 months of treatment, BICAMS was redone and changes were analyzed. The significant change value on the before-after BICAMS points were considered to be 8, 13, and 7 points for the symbol digit modality test (SDMT), the California verbal learning test (CVLT), and the brief visual-spatial memory test revised (BVMT-R), respectively. <p> Results: The patients' mean age was 36.58 8.50 years. The mean duration of disease was 7.41 4.13 years. About 84.2% (n = 64) of the patients was female. In the BVMT-R, 13 patients (34.2%), who had already taken B. papyrifera, were shown to have significant improvement compared to the placebo group with no improvement (P < 0. 001). About 12 and 8 patients in the treatment and placebo groups in the SDMT, respectively (P = 0.200) and 17 and 12 patients in the treatment and placebo groups in the CVLT, respectively (P = 0.170) had significant change values. <p> Conclusion: B. papyrifera showed significant improvement in visuospatial memory, but had no effect on verbal memory and information processing speed. Extracted Data:
[["B. papyrifera (capsule 300 mg, twice a day)", "placebo", "California verbal learning test (CVLT), and symbol digit modality test (SDMT)", "no significant difference", ". About 12 and 8 patients in the treatment and placebo groups in the SDMT, respectively (P = 0.200) and 17 and 12 patients in the treatment and placebo groups in the CVLT, respectively (P = 0.170) had significant change values."]]
science.evidence_inference.42
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT: \nBackground: Cognitive impairment is one of the most crucial disorders among multiple sclerosis (MS) patients. Since MS is an inflammatory disease and Boswellia papyrifera has anti-inflammatory effects, the influence of B. papyrifera on cognitive impairment in MS patients has been investigated in the present study. <p> \nMethods: In this clinical trial, 80 MS patients who referred to the clinic of Shafa Hospital, Kerman, Iran were selected. Having completed a written consent form, patients with relapsing remitting MS, with no occurrence of a new attack throughout 1 month before the study, with no pregnancy or breastfeeding entered the study. The patients were randomly divided into two groups; then Brief International Cognitive Assessment for MS (BICAMS) test was carried out. One group received B. papyrifera (capsule 300 mg, twice a day) while the other group received placebo with the same dose for 2 months. After 2 months of treatment, BICAMS was redone and changes were analyzed. The significant change value on the before-after BICAMS points were considered to be 8, 13, and 7 points for the symbol digit modality test (SDMT), the California verbal learning test (CVLT), and the brief visual-spatial memory test revised (BVMT-R), respectively. <p> \nResults: The patients' mean age was 36.58 8.50 years. The mean duration of disease was 7.41 4.13 years. About 84.2% (n = 64) of the patients was female. In the BVMT-R, 13 patients (34.2%), who had already taken B. papyrifera, were shown to have significant improvement compared to the placebo group with no improvement (P < 0. 001). About 12 and 8 patients in the treatment and placebo groups in the SDMT, respectively (P = 0.200) and 17 and 12 patients in the treatment and placebo groups in the CVLT, respectively (P = 0.170) had significant change values. <p> \nConclusion: \nB. papyrifera showed significant improvement in visuospatial memory, but had no effect on verbal memory and information processing speed.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"B. papyrifera (capsule 300 mg, twice a day)\", \"placebo\", \"California verbal learning test (CVLT), and symbol digit modality test (SDMT)\", \"no significant difference\", \". About 12 and 8 patients in the treatment and placebo groups in the SDMT, respectively (P = 0.200) and 17 and 12 patients in the treatment and placebo groups in the CVLT, respectively (P = 0.170) had significant change values.\"]]", "role": "assistant" } ]
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: Unlike past works, we do not require convolutional frontends [19] or interleaved recurrences <cite>[27]</cite> to train self-attention for ASR.
["Differences"]
science.multicite_intent_classification.172
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nUnlike past works, we do not require convolutional frontends [19] or interleaved recurrences <cite>[27]</cite> to train self-attention for ASR.", "role": "user" }, { "content": "[\"Differences\"]", "role": "assistant" } ]
science.bioasq_list_qa
Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array. Context: Postconditioning with cyclosporine a reduces early renal dysfunction by inhibiting mitochondrial permeability transition. mitochondrial permeability transition pore inhibitor (Cyclosporine A). treatment with the MPTP inhibitor atractyloside cyclosporine A, an MPTP inhibitor, NIM-811 (N-metyl-4-isoleucine-cyclosporine) is a mitochondria specific drug, which can prevent ischemic-reperfusion (IR) injury, by inhibiting mitochondrial permeability transition pores (mPTP). The mPTP inhibitor sanglifehrin A (SfA) mPTP inhibitor, cyclosporin A (CsA), cyclosporine A (CsA), a MPTP inhibitor, The mPTP inhibitor NIM811, A mPTP inhibitor (TRO-19622) Cyclosporine A (CsA), a mitochondria permeability transition pore (MPTP) inhibitor, and N-methyl-4-isoleucine cyclosporine (NIM811), an mPTP inhibitor, mPTP inhibitor cyclosporin A (CsA) ADP is not only a key substrate for ATP generation, but also a potent inhibitor of mitochondrial permeability transition pore (mPTP). MPTP inhibitor cyclosporin A mPTP inhibitor cyclosporin A MPTP inhibitor cyclosporine A (CsA) the mPTP inhibitor cyclosporine A The mPTP inhibitor cyclosporine A (CsA) Rather, we find that KB-R7943 inhibits opening of the mitochondrial permeability transition pore in permeabilized cells and isolated liver mitochondria. Our data reveal another mechanism through which KB-R7943 may protect against calcium-induced injury, as well as a novel means to inhibit the mitochondrial permeability transition pore. KB-R7943, a plasma membrane Na(+)/Ca(2+) exchanger inhibitor, blocks opening of the mitochondrial permeability transition pore. Inhibitors of calcium/calmodulin-dependent protein kinase II, a mitochondrial Ca(2+) uniporter (MCU) regulator, also prevented MPTP formation and arachidonic acid release induced by A23187 and H2O2. Unlike pyrrophenone, the diacylglycerol analog 1-oleoyl-2-acetyl-sn-glycerol and CsA blocked cell death and arachidonic acid release not by preventing mitochondrial calcium uptake but by inhibiting MPTP formation. Incubation with β3AR agonist (BRL37344, 7 μmol/L) significantly reduced cell death in isolated adult mouse cardiomyocytes during hypoxia/reoxygenation and decreased susceptibility to deleterious opening of the mitochondrial permeability transition pore (mPTP), via a mechanism dependent on the Akt-NO signaling pathway. Here we show that antamanide inhibits the mitochondrial permeability transition pore, a central effector of cell death induction, by targeting the pore regulator cyclophilin D. Indeed, (i) permeability transition pore inhibition by antamanide is not additive with the cyclophilin D-binding drug cyclosporin A, (ii) the inhibitory action of antamanide on the pore requires phosphate, as previously shown for cyclosporin A; (iii) antamanide is ineffective in mitochondria or cells derived from cyclophilin D null animals, and (iv) abolishes CyP-D peptidyl-prolyl cis-trans isomerase activity. Furthermore, in wild-type (non-transfected) SHSY5Y cells, the effects of lactacystin on mitochondrial function and cell viability are also prevented by cyclosporin A (1 μM) which blocks the activity of the mitochondrial permeability transition pore Exogenous zinc protects cardiac cells from reperfusion injury by targeting mitochondrial permeability transition pore through inactivation of glycogen synthase kinase-3beta. 
Bradykinin prevents reperfusion injury by targeting mitochondrial permeability transition pore through glycogen synthase kinase 3beta. HIF-1 reduces ischaemia-reperfusion injury in the heart by targeting the mitochondrial permeability transition pore. Pre—treatment of gallic acid and mitochondrially targeted gallic acid to sodium nitroprusside treated mitochondria not only significantly reduced the oxidative stress but also prevented mitochondrial permeability pore transition to a significant difference. Mitochondrially targeted gallic acid was found more effective in reducing oxidative stress and mitochondrial permeability pore transition than gallic acid. Mitochondrially targeted gallic acid was found more effective in reducing oxidative stress and mitochondrial permeability pore transition than gallic acid. We conclude that mitochondrially targeted gallic acid can be used for preventing mitochondrial impairment caused by oxidative stress. The involvement of mitochondrial permeability transition pore was determined with a mitochondrial permeability transition pore opener atractyloside and a specific mitochondrial permeability transition pore inhibitor cyclosporin A. Here we show that antamanide inhibits the mitochondrial permeability transition pore, a central effector of cell death induction, by targeting the pore regulator cyclophilin D. Mitochondrially targeted gallic acid was found more effective in reducing oxidative stress and mitochondrial permeability pore transition than gallic acid. Pre—treatment of gallic acid and mitochondrially targeted gallic acid to sodium nitroprusside treated mitochondria not only significantly reduced the oxidative stress but also prevented mitochondrial permeability pore transition to a significant difference. Question: List inhibtors targeting the mitochondrial permeability transition pore.
["Cyclosporine A", "Atractyloside", "N-metyl-4-isoleucine-cyclosporine", "Sanglifehrin A", "TRO-19622"]
science.bioasq_list_qa.383
[ { "content": "Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.\n\nContext: Postconditioning with cyclosporine a reduces early renal dysfunction by inhibiting mitochondrial permeability transition.\nmitochondrial permeability transition pore inhibitor (Cyclosporine A).\ntreatment with the MPTP inhibitor atractyloside\ncyclosporine A, an MPTP inhibitor,\nNIM-811 (N-metyl-4-isoleucine-cyclosporine) is a mitochondria specific drug, which can prevent ischemic-reperfusion (IR) injury, by inhibiting mitochondrial permeability transition pores (mPTP).\nThe mPTP inhibitor sanglifehrin A (SfA)\nmPTP inhibitor, cyclosporin A (CsA),\ncyclosporine A (CsA), a MPTP inhibitor,\nThe mPTP inhibitor NIM811,\nA mPTP inhibitor (TRO-19622)\nCyclosporine A (CsA), a mitochondria permeability transition pore (MPTP) inhibitor,\nand N-methyl-4-isoleucine cyclosporine (NIM811), an mPTP inhibitor,\nmPTP inhibitor cyclosporin A (CsA)\nADP is not only a key substrate for ATP generation, but also a potent inhibitor of mitochondrial permeability transition pore (mPTP).\nMPTP inhibitor cyclosporin A\nmPTP inhibitor cyclosporin A\nMPTP inhibitor cyclosporine A (CsA)\nthe mPTP inhibitor cyclosporine A\nThe mPTP inhibitor cyclosporine A (CsA)\nRather, we find that KB-R7943 inhibits opening of the mitochondrial permeability transition pore in permeabilized cells and isolated liver mitochondria.\nOur data reveal another mechanism through which KB-R7943 may protect against calcium-induced injury, as well as a novel means to inhibit the mitochondrial permeability transition pore.\nKB-R7943, a plasma membrane Na(+)/Ca(2+) exchanger inhibitor, blocks opening of the mitochondrial permeability transition pore.\nInhibitors of calcium/calmodulin-dependent protein kinase II, a mitochondrial Ca(2+) uniporter (MCU) regulator, also prevented MPTP formation and arachidonic acid release induced by A23187 and H2O2.\nUnlike pyrrophenone, the diacylglycerol analog 1-oleoyl-2-acetyl-sn-glycerol and CsA blocked cell death and arachidonic acid release not by preventing mitochondrial calcium uptake but by inhibiting MPTP formation.\nIncubation with β3AR agonist (BRL37344, 7 μmol/L) significantly reduced cell death in isolated adult mouse cardiomyocytes during hypoxia/reoxygenation and decreased susceptibility to deleterious opening of the mitochondrial permeability transition pore (mPTP), via a mechanism dependent on the Akt-NO signaling pathway.\nHere we show that antamanide inhibits the mitochondrial permeability transition pore, a central effector of cell death induction, by targeting the pore regulator cyclophilin D. 
Indeed, (i) permeability transition pore inhibition by antamanide is not additive with the cyclophilin D-binding drug cyclosporin A, (ii) the inhibitory action of antamanide on the pore requires phosphate, as previously shown for cyclosporin A; (iii) antamanide is ineffective in mitochondria or cells derived from cyclophilin D null animals, and (iv) abolishes CyP-D peptidyl-prolyl cis-trans isomerase activity.\nFurthermore, in wild-type (non-transfected) SHSY5Y cells, the effects of lactacystin on mitochondrial function and cell viability are also prevented by cyclosporin A (1 μM) which blocks the activity of the mitochondrial permeability transition pore\nExogenous zinc protects cardiac cells from reperfusion injury by targeting mitochondrial permeability transition pore through inactivation of glycogen synthase kinase-3beta.\nBradykinin prevents reperfusion injury by targeting mitochondrial permeability transition pore through glycogen synthase kinase 3beta.\nHIF-1 reduces ischaemia-reperfusion injury in the heart by targeting the mitochondrial permeability transition pore.\nPre—treatment of gallic acid and mitochondrially targeted gallic acid to sodium nitroprusside treated mitochondria not only significantly reduced the oxidative stress but also prevented mitochondrial permeability pore transition to a significant difference. Mitochondrially targeted gallic acid was found more effective in reducing oxidative stress and mitochondrial permeability pore transition than gallic acid.\nMitochondrially targeted gallic acid was found more effective in reducing oxidative stress and mitochondrial permeability pore transition than gallic acid. We conclude that mitochondrially targeted gallic acid can be used for preventing mitochondrial impairment caused by oxidative stress.\nThe involvement of mitochondrial permeability transition pore was determined with a mitochondrial permeability transition pore opener atractyloside and a specific mitochondrial permeability transition pore inhibitor cyclosporin A.\nHere we show that antamanide inhibits the mitochondrial permeability transition pore, a central effector of cell death induction, by targeting the pore regulator cyclophilin D.\nMitochondrially targeted gallic acid was found more effective in reducing oxidative stress and mitochondrial permeability pore transition than gallic acid.\nPre—treatment of gallic acid and mitochondrially targeted gallic acid to sodium nitroprusside treated mitochondria not only significantly reduced the oxidative stress but also prevented mitochondrial permeability pore transition to a significant difference.\n\nQuestion: List inhibtors targeting the mitochondrial permeability transition pore.", "role": "user" }, { "content": "[\"Cyclosporine A\", \"Atractyloside\", \"N-metyl-4-isoleucine-cyclosporine\", \"Sanglifehrin A\", \"TRO-19622\"]", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.

Title of the Paper:
Identifying the Sources of Uncertainty in Object Classification

Main Content:
In image-based object classification, the visual appearance of objects determines which class they are assigned to. External variables that are independent of the object, such as the perspective or the lighting conditions, can modify the object's appearance, resulting in ambiguous images that lead to misclassifications. Previous work has proposed methods for estimating the uncertainty of predictions and measuring their confidence. However, such methods do not indicate which variables are the potential sources that cause uncertainty. In this paper, we propose a method for image-based object classification that uses disentangled representations to indicate which are the external variables that contribute the most to the uncertainty of the predictions. This information can be used to identify the external variables that should be modified to decrease the uncertainty and improve the classification.

1 INTRODUCTION
An object from the real world can be represented in terms of the data gathered from it through an observation/sensing process. These observations contain information about the properties of the object that allows its recognition, identification, and discrimination. In particular, one can obtain images from objects which represent their visual characteristics through photographs or rendering of images from 3D models. Image-based object classification is the task of assigning a category to images obtained from an object based on their visual appearance. The visual appearance of objects in an image is determined by the properties of the object itself (intrinsic variables) and the transformations that occur in the real world (extrinsic variables) (Kulkarni et al., 2015). Probabilistic classifiers based on neural networks can provide a measure of the confidence of a model for a given prediction in terms of a probability distribution over the possible categories an image can be classified into. However, they do not indicate which variable contributes to the uncertainty. In some cases the extrinsic variables can affect the visual appearance of objects in images in such a way that the predictions are highly uncertain. A measure of the uncertainty in terms of these extrinsic features can improve the interpretability of the output of a classifier. Disentangled representation learning is the task of creating low-dimensional representations of data that capture the underlying variability of the data; in particular, this variability can be explained in terms of the variables involved in data generation. These representations can provide interpretable data representations that can be used for different tasks such as domain adaptation (Higgins et al., 2017), continuous learning (Achille et al., 2018), noise removal (Lopez et al., 2018), and visual reasoning (van Steenkiste et al., 2019).
In this paper we propose a method for the identification of the sources of uncertainty in image-based object classification with respect to the extrinsic features that affect the visual appearance of objects in images by using disentangled data representations.
Given an image of an object , our model identifies which extrinsic feature contributes the most to the classification output and provides information on how to modify such feature to reduce the uncertainty in the predictions . 2 RELATED WORK . Achieving explainable results in predictive models is an important task , especially for critical applications in which the decisions can affect human life such as in health , security , law and defence Barredo Arrieta et al . ( 2020 ) . Even though deep neural networks provide successful results for image classification , their predictions can ’ t be directly interpreted due to their complexity ( Zhang & Zhu , 2018 ) . In order to solve this different approaches have been proposed to provide visual interpretability to the results such as identification of the image regions that contribute to classification ( Selvaraju et al. , 2016 ) . The uncertainty of predictions provides an extra level of interpretability to the predictions of a model by indicating the level of confidence in a prediction Kendall & Gal ( 2017 ) . There are different methods to introduce uncertainty measures in classifiers which include bayesian neural networks , ensembles , etc . Obtaining disentangled representations , that capture distinct sources of variation independently , is an important step towards interpretable machine learning systems Kim & Mnih ( 2018 ) . Despite the lack of agreement on the definition , one description states that a disentangled representation should separate the distinct , informative factors of variations in the data Bengio et al . ( 2012 ) . Within deep generative models , disentanglement is achieved by using neural networks that approximate a conditional distribution on the data and a set of unobserved latent variables . Particularly variational autoencoders ( VAEs ) Kingma & Welling ( 2014 ) are heavily favored due to their ability to model a joint distribution while maintaining scalability and training stability Higgins et al . ( 2016 ) . Therefore most of the methods are based on augmentations on original VAE framework Higgins et al . ( 2016 ) ; Kim & Mnih ( 2018 ) ; Kulkarni et al . ( 2015 ) ; Mathieu et al . ( 2018 ) . In image-based object classification the variables that explain the visual characteristics of objects in data can be divided into those which represent the inherent properties of objects and those which represent its transformations . This explanation has been explored in Kulkarni et al . ( 2015 ) by describing the object ’ s properties as the intrinsic variables and the properties that describe the object transformations as the extrinsic variables . Other work refers to similar sets of variables and their disentanglement under different names but representing similar concepts . Hamaguchi et al . ( 2019 ) disentangles the variables corresponding to ambient variables with respect to object identity information on images . ( Gabbay & Hoshen , 2020 ) proposes the learning of disentangled representations that express the intra-class variability in terms of the class and content . ( Detlefsen & Hauberg , 2019 ) proposes the disentanglement of the appearance and the perspective . Separate the identity of cars from their pose ( Yang et al. , 2015 ) . 3 SETTING . Consider a dataset of images that have been generated from the observations of an object and which should be classified into a certain category . We will assume that this category depends only on the properties of the object itself and not on its surroundings . 
We use a neural network as a probabilistic classifier to assign each of the images to a category . Usually the output of a neural network can ’ t be directly interpreted in terms of the characteristics of the object have affected the confidence of the prediction . Disentanglement serves as a method to produce interpretable low-dimensional data representations that separate the variables that describe the properties of the objects and their surrounding . The main idea is that one can train a probabilistic classifier on disentangled low dimensional representations and identify which variables contribute to the uncertainty of the classification . 3.1 PROBABILISTIC CLASSIFIERS ON IMAGES . A probabilistic classifier is a model which outputs a conditional probability density PY |x over the set of K ∈ N possible categories Y = { 1 , 2 , . . . , K } conditioned on an input image x ∈ X . The value PY |x ( y ) can be interpreted as the degree of confidence the model assigns for an image x ∈ X to be classified into category y ∈ Y . We will be only considering throughout this work probabilistic classifiers that use deep neural networks to obtain the predictions . One can train a probabilistic classifier using a dataset of labeled images . Given a labeled datapoint x ∈ X with true label y∗ ∈ Y , the neural network ’ s weights are optimized to reduce the cross entropy between the network ’ s output distribution PY |x and the true label y∗ . The cross entropy loss corresponds to , L ( PY |x , y∗ ) = − ∑ y∈Y δy , y∗ logPY |x ( y ) , ( 1 ) with δy , y∗ the Kroenecker delta . One can measure the degree of uncertainty of the output by calculating the entropy of the output distribution PY |x for a given image . Higher entropy corresponds to a higher uncertainty . The entropy is calculated as , H ( PY |x ) = ∑ y∈Y PY |x ( y ) logPY |x ( y ) . ( 2 ) Even though it is possible to provide a degree of uncertainty to the estimates of a probabilistic classifier trained on images , these estimates do not indicate which true generative variables that describe the image have contributed to the uncertainty . In this paper we assume that those sources of uncertainty are due to the extrinsic variables that participate in the data generation process . 3.2 DATA GENERATION : INTRINSIC AND EXTRINSIC VARIABLES . We will assume that the underlying generative process for our data can be modelled by the joint probability distribution PX×V over the data space X and a set of true generative variables V . The true variables condition the data generative process and represent the properties of the real world . In our case , we will consider that these variables can be divided into two sets V = V ( I ) × V ( E ) that represent the intrinsic and extrinsic variables respectively . The intrinsic variables are those which represent the properties of the object itself ( e.g . its color , shape , size , material ) , while the extrinsic variables represent external factors ( e.g . the light conditions or the relative position and orientation of the object with respect to the observer that generates the image ) . Intrinsic variables are invariant to the transformations described by the extrinsic variables . The generative process consists of the independent sampling of an intrinsic variable v ( I ) ∼ PV ( I ) and an extrinsic variable v ( E ) ∼ PV ( E ) . Those generative variables together determine a conditional distribution over the data space from which a data point x ∼ PX| ( v ( I ) , v ( E ) ) is sampled . 
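As a concrete illustration of the two quantities defined in Section 3.1 above, the cross-entropy loss of Eq. (1) and the entropy of the output distribution in Eq. (2) (written below with the conventional negative sign), the following minimal sketch computes both for a generic softmax classifier. The logits, class count, and function names are illustrative assumptions, not anything specified by the paper; the excerpt continues directly after this sketch.

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(p_y_given_x, true_label):
    # Eq. (1): negative log-probability assigned to the true category.
    return -np.log(p_y_given_x[true_label])

def predictive_entropy(p_y_given_x, eps=1e-12):
    # Eq. (2), standard sign convention: higher entropy = more uncertain prediction.
    p = np.clip(p_y_given_x, eps, 1.0)
    return -(p * np.log(p)).sum()

# Toy example with K = 3 categories.
logits = np.array([2.0, 1.5, -0.5])    # hypothetical network outputs for one image
p = softmax(logits)
print(cross_entropy(p, true_label=0))  # loss if the true class is 0
print(predictive_entropy(p))           # uncertainty of this prediction
```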
We assume that the intrinsic variables are independent of the extrinsic variables . During data generation both variables are combined to produce the visual appearance of the objects following the formula for the joint probability density PX×V ( x , ( v ( I ) , v ( E ) ) ) = PX| ( v ( I ) , v ( E ) ) ( x ) PVI ( v ( I ) ) PVE ( v ( E ) ) . ( 3 ) We assume that the set of intrinsic and extrinsic variables can be separated into a finite product of MI , ME ∈ N variables such that V ( I ) = V ( I ) 1 × · · · × V ( I ) MI and V ( E ) = V ( E ) 1 × · · · × V ( E ) ME . The total number of true generative variables is then M =MI +ME . 3.3 DISENTANGLEMENT FOR INTERPRETABILITY . Learning disentangled representations is the task of creating interpretable low-dimensional data representations that separate the information about the variables that are involved in the generation of the data ( Locatello et al. , 2018 ) . There is no common agreement on the definition of a disentangled representation . However , two properties have been proposed for disentangled representations that will be useful for our goal of producing interpretable measures of uncertainty in classification predictions . • Modularity : No more than a single dimension of the data representation should encode information about a true generative variable . • Compactness : Each of the true generative variables is encoded by a single dimension of the data representation . If a data representation is both modular and compact , we can identify for each true generative variable its corresponding data representation ’ s dimension that has all information about it . Consider Z = Z1 × Z2 × · · · × ZD as a D-dimensional data representation space . One can measure the compactness of a learned representation by measuring the mutual information between a data representation dimension in Zi and a true variable Vm as I ( PZi , PVm ) = KL ( PZi×Vm ||PZi ⊗ PVm ) A disentangled representation is modular and compact if for each generative variable Vm there is a unique latent dimension Zi such that I ( PZi , PVm ) 6= 0 and I ( PZj , PVm ) = 0 for all j 6= i . If a disentangled data representation is obtained which fulfills both modularity and compactness then we can separate the latent dimensions in such a way that there is a decomposition of the latent space Z = Z ( I ) × Z ( E ) into an intrinsic and an extrinsic set of latent variable dimensions . This would mean that a probabilistic classifier trained on the latent variables PY |Z ( y ) = PY |Z ( I ) ( y ) would only depend on the intrinsic variables . However , it has been proved in ( Locatello et al. , 2018 ; Mathieu et al. , 2018 ) that without supervision disentanglement can not be achieved . Therefore one might not be able to achieve perfect modularity or compactness . This means that some information about the true intrinsic variables might be contained in the extrinsic latent dimensions such that there is a dependency of the uncertainty in those variables as seen in Section 4.2 .
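A rough sketch of how the compactness/modularity check described above could be approximated in practice: estimate the mutual information I(Z_i, V_m) between every latent dimension and every generative factor, then inspect whether each factor's information is concentrated in a single dimension. The discretization of continuous latents, the bin count, and all variable names below are assumptions made only for illustration; this is not the paper's evaluation protocol.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mi_matrix(latents, factors, n_bins=20):
    """Estimate I(Z_i; V_m) for every latent dimension / generative factor pair.

    latents: (N, D) array of encodings; factors: (N, M) array of discrete factors.
    Continuous latent dimensions are discretized into equal-width bins first.
    """
    N, D = latents.shape
    M = factors.shape[1]
    mi = np.zeros((D, M))
    for i in range(D):
        edges = np.histogram_bin_edges(latents[:, i], bins=n_bins)
        z_binned = np.digitize(latents[:, i], edges)
        for m in range(M):
            mi[i, m] = mutual_info_score(z_binned, factors[:, m])
    return mi

# A representation is (roughly) compact and modular if each column of `mi`
# has essentially all of its mass in a single row.
rng = np.random.default_rng(0)
z = rng.normal(size=(1000, 4))          # hypothetical latent codes
v = rng.integers(0, 5, size=(1000, 2))  # hypothetical discrete generative factors
print(mi_matrix(z, v).round(3))
```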
The paper presents an approach that for every object identifies the factors that have a high impact on the models' uncertainty. The approach consists of i) disentangled representations ii) a classifier on the top of the trained representations iii) technique that select dimensions of representation of an objects' (factors), that impact the uncertainty of a model the most. The i) and ii) are done in a known way; The disentanglement is done via Deep Convolutional Inverse Graphics Network (2015), and a classifier is trained with a standard maximum likelihood approach.
science.mup_single_document_summarization.268
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nIdentifying the Sources of Uncertainty in Object Classification\n\nMain Content:\nIn image-based object classification , the visual appearance of objects determines which class they are assigned to . External variables that are independent of the object , such as the perspective or the lighting conditions , can modify the object ’ s appearance resulting in ambiguous images that lead to misclassifications . Previous work has proposed methods for estimating the uncertainty of predictions and measure their confidence . However , such methods do not indicate which variables are the potential sources that cause uncertainty . In this paper , we propose a method for image-based object classification that uses disentangled representations to indicate which are the external variables that contribute the most to the uncertainty of the predictions . This information can be used to identify the external variables that should be modified to decrease the uncertainty and improve the classification . 1 INTRODUCTION . An object from the real world can be represented in terms of the data gathered from it through an observation/sensing process . These observations contain information about the properties of the object that allows their recognition , identification , and discrimination . In particular , one can obtain images from objects which represent its visual characteristics through photographs or rendering of images from 3D models . Image-based object classification is the task of assigning a category to images obtained from an object based on their visual appearance . The visual appearance of objects in an image is determined by the properties of the object itself ( intrinsic variables ) and the transformations that occur in the real world ( extrinsic variables ) ( Kulkarni et al. , 2015 ) . Probabilistic classifiers based on neural networks can provide a measure for the confidence of a model for a given prediction in terms of a probability distribution over the possible categories an image can be classified into . However , they do not indicate what variable contributes to the uncertainty . In some cases the extrinsic variables can affect the visual appearance of objects in images in such way that the predictions are highly uncertain . A measure of the uncertainty in terms of these extrinsic features can improve interpretability of the output of a classifier . Disentangled representation learning is the task of crating low-dimensional representations of data that capture the underlying variability of the data and in particular this variability can be explained in terms of the variables involved in data generation . These representations can provide interpretable data representations that can be used for different tasks such as domain adaptation ( Higgins et al. , 2017 ) , continuous learning ( Achille et al. , 2018 ) , noise removal ( Lopez et al. , 2018 ) , and visual reasoning ( van Steenkiste et al. , 2019 ) . In this paper we propose a method for the identification of the sources of uncertainty in imagebased object classification with respect to the extrinsic features that affect the visual appearance of objects in images by using disentangled data representations . 
Given an image of an object , our model identifies which extrinsic feature contributes the most to the classification output and provides information on how to modify such feature to reduce the uncertainty in the predictions . 2 RELATED WORK . Achieving explainable results in predictive models is an important task , especially for critical applications in which the decisions can affect human life such as in health , security , law and defence Barredo Arrieta et al . ( 2020 ) . Even though deep neural networks provide successful results for image classification , their predictions can ’ t be directly interpreted due to their complexity ( Zhang & Zhu , 2018 ) . In order to solve this different approaches have been proposed to provide visual interpretability to the results such as identification of the image regions that contribute to classification ( Selvaraju et al. , 2016 ) . The uncertainty of predictions provides an extra level of interpretability to the predictions of a model by indicating the level of confidence in a prediction Kendall & Gal ( 2017 ) . There are different methods to introduce uncertainty measures in classifiers which include bayesian neural networks , ensembles , etc . Obtaining disentangled representations , that capture distinct sources of variation independently , is an important step towards interpretable machine learning systems Kim & Mnih ( 2018 ) . Despite the lack of agreement on the definition , one description states that a disentangled representation should separate the distinct , informative factors of variations in the data Bengio et al . ( 2012 ) . Within deep generative models , disentanglement is achieved by using neural networks that approximate a conditional distribution on the data and a set of unobserved latent variables . Particularly variational autoencoders ( VAEs ) Kingma & Welling ( 2014 ) are heavily favored due to their ability to model a joint distribution while maintaining scalability and training stability Higgins et al . ( 2016 ) . Therefore most of the methods are based on augmentations on original VAE framework Higgins et al . ( 2016 ) ; Kim & Mnih ( 2018 ) ; Kulkarni et al . ( 2015 ) ; Mathieu et al . ( 2018 ) . In image-based object classification the variables that explain the visual characteristics of objects in data can be divided into those which represent the inherent properties of objects and those which represent its transformations . This explanation has been explored in Kulkarni et al . ( 2015 ) by describing the object ’ s properties as the intrinsic variables and the properties that describe the object transformations as the extrinsic variables . Other work refers to similar sets of variables and their disentanglement under different names but representing similar concepts . Hamaguchi et al . ( 2019 ) disentangles the variables corresponding to ambient variables with respect to object identity information on images . ( Gabbay & Hoshen , 2020 ) proposes the learning of disentangled representations that express the intra-class variability in terms of the class and content . ( Detlefsen & Hauberg , 2019 ) proposes the disentanglement of the appearance and the perspective . Separate the identity of cars from their pose ( Yang et al. , 2015 ) . 3 SETTING . Consider a dataset of images that have been generated from the observations of an object and which should be classified into a certain category . We will assume that this category depends only on the properties of the object itself and not on its surroundings . 
We use a neural network as a probabilistic classifier to assign each of the images to a category . Usually the output of a neural network can ’ t be directly interpreted in terms of the characteristics of the object have affected the confidence of the prediction . Disentanglement serves as a method to produce interpretable low-dimensional data representations that separate the variables that describe the properties of the objects and their surrounding . The main idea is that one can train a probabilistic classifier on disentangled low dimensional representations and identify which variables contribute to the uncertainty of the classification . 3.1 PROBABILISTIC CLASSIFIERS ON IMAGES . A probabilistic classifier is a model which outputs a conditional probability density PY |x over the set of K ∈ N possible categories Y = { 1 , 2 , . . . , K } conditioned on an input image x ∈ X . The value PY |x ( y ) can be interpreted as the degree of confidence the model assigns for an image x ∈ X to be classified into category y ∈ Y . We will be only considering throughout this work probabilistic classifiers that use deep neural networks to obtain the predictions . One can train a probabilistic classifier using a dataset of labeled images . Given a labeled datapoint x ∈ X with true label y∗ ∈ Y , the neural network ’ s weights are optimized to reduce the cross entropy between the network ’ s output distribution PY |x and the true label y∗ . The cross entropy loss corresponds to , L ( PY |x , y∗ ) = − ∑ y∈Y δy , y∗ logPY |x ( y ) , ( 1 ) with δy , y∗ the Kroenecker delta . One can measure the degree of uncertainty of the output by calculating the entropy of the output distribution PY |x for a given image . Higher entropy corresponds to a higher uncertainty . The entropy is calculated as , H ( PY |x ) = ∑ y∈Y PY |x ( y ) logPY |x ( y ) . ( 2 ) Even though it is possible to provide a degree of uncertainty to the estimates of a probabilistic classifier trained on images , these estimates do not indicate which true generative variables that describe the image have contributed to the uncertainty . In this paper we assume that those sources of uncertainty are due to the extrinsic variables that participate in the data generation process . 3.2 DATA GENERATION : INTRINSIC AND EXTRINSIC VARIABLES . We will assume that the underlying generative process for our data can be modelled by the joint probability distribution PX×V over the data space X and a set of true generative variables V . The true variables condition the data generative process and represent the properties of the real world . In our case , we will consider that these variables can be divided into two sets V = V ( I ) × V ( E ) that represent the intrinsic and extrinsic variables respectively . The intrinsic variables are those which represent the properties of the object itself ( e.g . its color , shape , size , material ) , while the extrinsic variables represent external factors ( e.g . the light conditions or the relative position and orientation of the object with respect to the observer that generates the image ) . Intrinsic variables are invariant to the transformations described by the extrinsic variables . The generative process consists of the independent sampling of an intrinsic variable v ( I ) ∼ PV ( I ) and an extrinsic variable v ( E ) ∼ PV ( E ) . Those generative variables together determine a conditional distribution over the data space from which a data point x ∼ PX| ( v ( I ) , v ( E ) ) is sampled . 
We assume that the intrinsic variables are independent of the extrinsic variables . During data generation both variables are combined to produce the visual appearance of the objects following the formula for the joint probability density PX×V ( x , ( v ( I ) , v ( E ) ) ) = PX| ( v ( I ) , v ( E ) ) ( x ) PVI ( v ( I ) ) PVE ( v ( E ) ) . ( 3 ) We assume that the set of intrinsic and extrinsic variables can be separated into a finite product of MI , ME ∈ N variables such that V ( I ) = V ( I ) 1 × · · · × V ( I ) MI and V ( E ) = V ( E ) 1 × · · · × V ( E ) ME . The total number of true generative variables is then M =MI +ME . 3.3 DISENTANGLEMENT FOR INTERPRETABILITY . Learning disentangled representations is the task of creating interpretable low-dimensional data representations that separate the information about the variables that are involved in the generation of the data ( Locatello et al. , 2018 ) . There is no common agreement on the definition of a disentangled representation . However , two properties have been proposed for disentangled representations that will be useful for our goal of producing interpretable measures of uncertainty in classification predictions . • Modularity : No more than a single dimension of the data representation should encode information about a true generative variable . • Compactness : Each of the true generative variables is encoded by a single dimension of the data representation . If a data representation is both modular and compact , we can identify for each true generative variable its corresponding data representation ’ s dimension that has all information about it . Consider Z = Z1 × Z2 × · · · × ZD as a D-dimensional data representation space . One can measure the compactness of a learned representation by measuring the mutual information between a data representation dimension in Zi and a true variable Vm as I ( PZi , PVm ) = KL ( PZi×Vm ||PZi ⊗ PVm ) A disentangled representation is modular and compact if for each generative variable Vm there is a unique latent dimension Zi such that I ( PZi , PVm ) 6= 0 and I ( PZj , PVm ) = 0 for all j 6= i . If a disentangled data representation is obtained which fulfills both modularity and compactness then we can separate the latent dimensions in such a way that there is a decomposition of the latent space Z = Z ( I ) × Z ( E ) into an intrinsic and an extrinsic set of latent variable dimensions . This would mean that a probabilistic classifier trained on the latent variables PY |Z ( y ) = PY |Z ( I ) ( y ) would only depend on the intrinsic variables . However , it has been proved in ( Locatello et al. , 2018 ; Mathieu et al. , 2018 ) that without supervision disentanglement can not be achieved . Therefore one might not be able to achieve perfect modularity or compactness . This means that some information about the true intrinsic variables might be contained in the extrinsic latent dimensions such that there is a dependency of the uncertainty in those variables as seen in Section 4.2 .", "role": "user" }, { "content": "The paper presents an approach that for every object identifies the factors that have a high impact on the models' uncertainty. The approach consists of i) disentangled representations ii) a classifier on the top of the trained representations iii) technique that select dimensions of representation of an objects' (factors), that impact the uncertainty of a model the most. 
The i) and ii) are done in a known way; The disentanglement is done via Deep Convolutional Inverse Graphics Network (2015), and a classifier is trained with a standard maximum likelihood approach.", "role": "assistant" } ]
science.discomat_te
| Dopants | Amorphous | Anatase | Rutile |
| / | 2 | 5 | 8 |
| Sn2+ | 3 | 6 | 9 |
| Sn4+ | 1 | 4 | 7 |

Caption: Gibb's free energy order of Sn2+/Sn4+-doped titania gel and crystals

Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}

Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.

Output:
{"value": "2", "type": "Other"}
{"value": "5", "type": "Other"}
{"value": "8", "type": "Other"}
{"value": "3", "type": "Other"}
{"value": "6", "type": "Other"}
{"value": "9", "type": "Other"}
{"value": "1", "type": "Other"}
{"value": "4", "type": "Other"}
{"value": "7", "type": "Other"}
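The nine records above correspond to a row-major walk over the numeric cells of the table. A throwaway sketch of that walk is below; the dict encoding of the table and the loop structure are ad-hoc assumptions used only to show how the cells map onto the JSON lines.

```python
import json

# Hypothetical re-creation of the conversion: walk the numeric cells row by row
# and emit one JSON object per cell, as the task statement requires.
table = {
    "/":    {"Amorphous": 2, "Anatase": 5, "Rutile": 8},
    "Sn2+": {"Amorphous": 3, "Anatase": 6, "Rutile": 9},
    "Sn4+": {"Amorphous": 1, "Anatase": 4, "Rutile": 7},
}

for dopant, phases in table.items():
    for phase, value in phases.items():
        # None of these cells are glass-composition amounts, so every record
        # falls back to the generic "Other" type used in the gold output.
        print(json.dumps({"value": str(value), "type": "Other"}))
```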
science.discomat_te.142
[ { "content": "| Dopants | Amorphous | Anatase | Rutile |\n| / | 2 | 5 | 8 |\n| Sn2+ | 3 | 6 | 9 |\n| Sn4+ | 1 | 4 | 7 |\n\nCaption: Gibb's free energy order of Sn2+/Sn4+-doped titania gel and crystals\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"2\", \"type\": \"Other\"}\n{\"value\": \"5\", \"type\": \"Other\"}\n{\"value\": \"8\", \"type\": \"Other\"}\n{\"value\": \"3\", \"type\": \"Other\"}\n{\"value\": \"6\", \"type\": \"Other\"}\n{\"value\": \"9\", \"type\": \"Other\"}\n{\"value\": \"1\", \"type\": \"Other\"}\n{\"value\": \"4\", \"type\": \"Other\"}\n{\"value\": \"7\", \"type\": \"Other\"}", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: Testing Probabilistic Circuits Main Content: 1 Introduction . Probabilistic modeling is at the heart of modern computer science , with applications ranging from image recognition and image generation [ 29 , 30 ] to weather forecasting [ 3 ] . Probabilistic models have a multitude of representations , such as probabilistic circuits ( PCs ) [ 9 ] , graphical models [ 19 ] , generative networks [ 16 ] , and determinantal point processes [ 20 ] . Of particular interest to us are PCs , which are known to support guaranteed inference and thus have applications in safety-critical fields such as healthcare [ 2 , 25 ] . In this work , we will focus on PCs that are fragments of the Negation Normal Form ( NNF ) , specifically DNNFs , d-DNNFs , SDNNFs , and PIs [ 13 ] . We refer to the survey by Choi et al . [ 9 ] for more details regarding PCs . Given two distributions P and Q , a fundamental problem is to determine whether they are close . Closeness between distributions is frequently quantified using the total variation ( TV ) distance , dTV ( P , Q ) = ( 1/2 ) ‖P −Q‖1 , where ‖ · ‖ is the ` 1 norm [ 21 , 6 ] . Thus , stated formally , closeness testing is the problem of deciding whether dTV ( P , Q ) ≤ ε or dTV ( P , Q ) ≥ η for 0 ≤ ε < η ≤ 1 . Determining the closeness of models has applications in AI planning [ 13 ] , bioinformatics [ 31 , 33 , 35 ] and probabilistic program verification [ 15 , 23 ] . Equivalence testing is a special case of closeness testing , where one tests if dTV ( P , Q ) = 0 . Darwiche and Huang [ 13 ] initiated the study of equivalence testing of PCs by designing an equivalence test for d-DNNFs . An equivalence test is , however , of little use in contexts where the PCs under test ∗The accompanying tool , available open source , can be found at https : //github.com/meelgroup/teq . The Appendix is available in the accompanying supplementary material . †The authors decided to forgo the old convention of alphabetical ordering of authors in favor of a randomized ordering , denoted by r© . The publicly verifiable record of the randomization is available at https : //www.aeaweb.org/journals/policies/random-author-order/search with confirmation code : icoxrj0sNB2L . For citation of the work , authors request that the citation guidelines by AEA for random author ordering be followed . 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . encode non-identical distributions that are nonetheless close enough for practical purposes . Such situations may arise due to the use of approximate PC compilation [ 10 ] and sampling-based learning of PCs [ 26 , 27 ] . As a concrete example , consider PCs that are learned via approximate methods such as stochastic gradient descent [ 27 ] . In such a case , PCs are likely to converge to close but non-identical distributions . Given two such PCs , we would like to know whether they have converged to distributions close to each other . Thus , we raise the question : Does there exist an efficient algorithm to test the closeness of two PC distributions ? In this work , we design the first closeness test for PCs with respect to TV distance , called Teq . Assuming the tested PCs allow poly-time approximate weighted model counting and sampling , Teq runs in polynomial time . 
Formally , given two PC distributions P and Q , and three parameters ( ε , η , δ ) , for closeness ( ε ) , farness ( η ) , and tolerance ( δ ) , Teq returns Accept if dTV ( P , Q ) ≤ ε and Reject if dTV ( P , Q ) ≥ η with probability at least 1 − δ. Teq makes atmost O ( ( η − ε ) −2 log ( δ−1 ) ) calls to the sampler and exactly 2 calls to the counter . Teq builds on a general distance estimation technique of Canonne and Rubinfeld [ 4 ] that estimates the distance between two distributions with a small number of samples . In the context of PCs , the algorithm requires access to an exact sampler and an exact counter . Since not all PCs support exact sampling and counting , we modify the technique presented in [ 4 ] to allow for approximate samples and counts . Furthermore , we implement and test Teq on a dataset of publicly available PCs arising from applications in circuit testing . Our results show that closeness testing can be accurate and scalable in practice . For some NNF fragments , such as DNNF , no sampling algorithm is known , and for fragments such as PI , sampling is known to be NP-hard [ 32 ] . Since Teq requires access to approximate weighted counters and samplers to achieve tractability , the question of determining the closeness of the PCs mentioned above remains unanswered . Thus , we investigate further and characterize the complexity of closeness testing for a broad range of PCs . Our characterization reveals that PCs from the fragments d-DNNFs and SDNNFs can be tested for closeness in poly-time via Teq , owing to the algorithms of Darwiche [ 11 ] and Arenas et al . [ 1 ] . We show that the SDNNF approximate counting algorithm of Arenas et al . [ 1 ] can be extended to log-linear SDNNFs using chain formulas [ 8 ] . Then , using previously known results , we also find that there are no poly-time equivalence tests for PCs from PI and DNNF , conditional on widely believed complexity-theoretic conjectures . Our characterization also reveals some open questions regarding the complexity of closeness and equivalence testing of PCs . The rest of the paper is organized in the following way . We define the notation and discuss related work in Section 2 . We then present the main contribution of the paper , the closeness test Teq , and the associated proof of correctness in Section 3 . We present our experimental findings in Section 4 , and then discuss the complexity landscape of closeness testing in Section 5 . We conclude the paper and discuss some open problems in Section 6 . Due to space constraints , we defer some proofs to the supplementary Section A . 2 Background . Let ϕ : { 0 , 1 } n → { 0 , 1 } be a circuit over n Boolean variables . An assignment σ ∈ { 0 , 1 } n to the variables of ϕ is a satisfying assignment if ϕ ( σ ) = 1 . The set of all satisfying assignments of ϕ is Rϕ . If |Rϕ| > 0 , then ϕ is said to be satisfiable and if |Rϕ| = 2n , then ϕ is said to be valid . We use |ϕ| to denote the size of circuit ϕ , where the size is the total number of vertices and edges in the circuit DAG . The polynomial hierarchy ( PH ) contains the classes ΣP1 ( NP ) and Π P 1 ( co-NP ) along with generalizations of the form ΣPi and Π P i where Π P i+1 = co-NP ΠPi and ΣPi+1 = NP ΣPi [ 34 ] . The classes ΣPi and ΠPi are said to be at level i . If it is shown that two classes on the same or consecutive levels are equal , the hierarchy collapses to that level . 
Such a collapse is considered unlikely , and hence is used as the basic assumption for showing hardness results , including the ones we present in the paper . 2.1 Probability distributions . A weight function w : { 0 , 1 } n → Q+ assigns a positive rational weight to each assignment σ . We extend the definition of w to also allow circuits as input : w ( ϕ ) = ∑ σ∈Rϕ w ( σ ) . For weight function w and circuit ϕ , w ( ϕ ) is the weighted model count ( WMC ) of ϕ w.r.t . w. In this paper , we focus on log-linear weight functions as they capture a wide class of distributions , including those arising from graphical models , conditional random fields , and skip-gram models [ 24 ] . Log-linear models are represented as literal-weighted functions , defined as : Definition 1 . For a set X of n variables , a weight function w is called literal-weighted if there is a poly-time computable map w : X → Q ∩ ( 0 , 1 ) such that for any assignment σ ∈ { 0 , 1 } n : w ( σ ) = ∏ x∈σ { w ( x ) if x = 1 1− w ( x ) if x = 0 For all circuits ϕ , and log-linear weight functions w , w ( ϕ ) can be represented in size polynomial in the input . Probabilistic circuits : A probabilistic circuit is a satisfiable circuit ϕ along with a weight function w. ϕ and w together define a discrete probability distribution on the set { 0 , 1 } n that is supported over Rϕ . We denote the p.m.f . of this distribution as : P ( ϕ , w ) ( σ ) = { 0 ϕ ( σ ) = 0 w ( σ ) /w ( ϕ ) ϕ ( σ ) = 1 In this paper , we study circuits that are fragments of the Negation Normal Form ( NNF ) . A circuit ϕ in NNF is a rooted , directed acyclic graph ( DAG ) , where each leaf node is labeled with true , false , v or ¬v ; and each internal node is labeled with a ∧ or ∨ and can have arbitrarily many children . We focus on four fragments of NNF , namely , Decomposable NNF ( DNNF ) , deterministic-DNNF ( d-DNNF ) , Structured DNNF ( SDNNF ) , and Prime Implicates ( PI ) . For further information regarding circuits in NNF , refer to the survey [ 14 ] and the paper [ 28 ] . The TV distance of two probability distributions P ( ϕ1 , w1 ) and P ( ϕ2 , w2 ) over { 0 , 1 } n is defined as : dTV ( P ( ϕ1 , w1 ) , P ( ϕ2 , w2 ) ) = 1 2 ∑ σ∈ { 0,1 } n |P ( ϕ1 , w1 ) ( σ ) − P ( ϕ2 , w2 ) ( σ ) | . P ( ϕ1 , w1 ) and P ( ϕ2 , w2 ) are said to be ( 1 ) equivalent if dTV ( P ( ϕ1 , w1 ) , P ( ϕ2 , w2 ) ) = 0 , ( 2 ) ε-close if dTV ( P ( ϕ1 , w1 ) , P ( ϕ2 , w2 ) ) ≤ ε , and ( 3 ) η-far if dTV ( P ( ϕ1 , w1 ) , P ( ϕ2 , w2 ) ) ≥ η . Our closeness testing algorithm Teq , assumes access to an approximate weighted counter Awct ( α , β , ϕ , w ) , and an approximate weighted sampler Samp ( α , β , ϕ , w ) . We define their behavior as follows : Definition 2 . Awct ( α , β , ϕ , w ) takes a circuit ϕ , a weight function w , a tolerance parameter α > 0 and a confidence parameter β > 0 as input and returns the approximate weighted model count of ϕ w.r.t . w such that Pr [ w ( ϕ ) 1 + α ≤ Awct ( α , β , ϕ , w ) ≤ ( 1 + α ) w ( ϕ ) ] ≥ 1− β Tractable approximate counting algorithms for PCs are known as Fully Polynomial Randomised Approximation Schemes ( FPRAS ) . The running time of an FPRAS is given by T ( α , β , ϕ ) = poly ( α−1 , log ( β−1 ) , |ϕ| ) . Definition 3 . Samp ( α , β , ϕ , w ) takes a circuit ϕ , a weight function w , a tolerance parameter α > 0 and a confidence parameter β > 0 as input and returns either ( 1 ) a satisfying assignment σ sampled approximately w.r.t . 
weight function w with probability ≥ 1− β or ( 2 ) a symbol ⊥ indicating failure with probability < β . In other words , whenever Samp samples σ : P ( ϕ , w ) ( σ ) 1 + α ≤ Pr [ Samp ( α , β , ϕ , w ) = σ ] ≤ ( 1 + α ) P ( ϕ , w ) ( σ ) Tractable approximate sampling algorithms for PCs are known as Fully Polynomial Almost Uniform Samplers ( FPAUS ) . The running time of an FPAUS for a single sample is given by T ( α , β , ϕ ) = poly ( α−1 , log ( β−1 ) , |ϕ| ) . In the rest of the paper [ m ] denotes the set { 1 , 2 , . . .m } , 1 ( e ) represents the indicator variable for event e , and E ( v ) represents the expectation of random variable v .
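To make the sampling-plus-probability-evaluation idea concrete, the toy sketch below estimates dTV(P, Q) from samples of P using the identity dTV(P, Q) = E_{x~P}[max(0, 1 - Q(x)/P(x))], then accepts or rejects closeness against a midpoint threshold. This is only an illustrative stand-in with exact toy pmfs, not the paper's Teq algorithm: the estimator, the sample-size constant, and the decision threshold are assumptions.

```python
import numpy as np

def estimate_tv_and_decide(sample_p, prob_p, prob_q, eps, eta, delta):
    """Estimate dTV(P, Q) from samples of P, then Accept/Reject closeness.

    sample_p(): draws one sample from P; prob_p / prob_q: exact pmf evaluators.
    """
    # Hoeffding-style sample size, O((eta - eps)^-2 * log(1/delta)).
    n = int(np.ceil(2.0 * np.log(2.0 / delta) / (eta - eps) ** 2))
    vals = []
    for _ in range(n):
        x = sample_p()
        vals.append(max(0.0, 1.0 - prob_q(x) / prob_p(x)))
    est = float(np.mean(vals))
    return ("Accept" if est <= (eps + eta) / 2.0 else "Reject"), est

# Toy example: two distributions over {0,1}^2 given as explicit tables.
P = {(0, 0): 0.4, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.1}
Q = {(0, 0): 0.35, (0, 1): 0.35, (1, 0): 0.2, (1, 1): 0.1}
keys = list(P)
rng = np.random.default_rng(0)
draw = lambda: keys[rng.choice(len(keys), p=[P[k] for k in keys])]
print(estimate_tv_and_decide(draw, P.get, Q.get, eps=0.05, eta=0.3, delta=0.1))
```

The sample size used here matches the O((eta - eps)^-2 log(1/delta)) call count quoted for Teq; the real algorithm additionally has to cope with approximate samples and approximate weighted counts rather than exact pmfs.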
The contribution of this paper is a method for testing the closeness of the probability distributions encoded by two different probabalistic circuits (PCs). The closeness tester (called Teq), assumes access to an approximate weighted counter and an approximate weighted sampler. Teq is described (in words and pseudo-code) and the bulk of the paper is devoted to proving (PAC-style) correctness of Teq (ie Theorem 1) and characterising its time complexity (Theorem 2). Empirical results are given supporting the correctness of Teq and the stated time complexity theoretical results on Teq.
science.mup_single_document_summarization.11100
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nTesting Probabilistic Circuits\n\nMain Content:\n1 Introduction . Probabilistic modeling is at the heart of modern computer science , with applications ranging from image recognition and image generation [ 29 , 30 ] to weather forecasting [ 3 ] . Probabilistic models have a multitude of representations , such as probabilistic circuits ( PCs ) [ 9 ] , graphical models [ 19 ] , generative networks [ 16 ] , and determinantal point processes [ 20 ] . Of particular interest to us are PCs , which are known to support guaranteed inference and thus have applications in safety-critical fields such as healthcare [ 2 , 25 ] . In this work , we will focus on PCs that are fragments of the Negation Normal Form ( NNF ) , specifically DNNFs , d-DNNFs , SDNNFs , and PIs [ 13 ] . We refer to the survey by Choi et al . [ 9 ] for more details regarding PCs . Given two distributions P and Q , a fundamental problem is to determine whether they are close . Closeness between distributions is frequently quantified using the total variation ( TV ) distance , dTV ( P , Q ) = ( 1/2 ) ‖P −Q‖1 , where ‖ · ‖ is the ` 1 norm [ 21 , 6 ] . Thus , stated formally , closeness testing is the problem of deciding whether dTV ( P , Q ) ≤ ε or dTV ( P , Q ) ≥ η for 0 ≤ ε < η ≤ 1 . Determining the closeness of models has applications in AI planning [ 13 ] , bioinformatics [ 31 , 33 , 35 ] and probabilistic program verification [ 15 , 23 ] . Equivalence testing is a special case of closeness testing , where one tests if dTV ( P , Q ) = 0 . Darwiche and Huang [ 13 ] initiated the study of equivalence testing of PCs by designing an equivalence test for d-DNNFs . An equivalence test is , however , of little use in contexts where the PCs under test ∗The accompanying tool , available open source , can be found at https : //github.com/meelgroup/teq . The Appendix is available in the accompanying supplementary material . †The authors decided to forgo the old convention of alphabetical ordering of authors in favor of a randomized ordering , denoted by r© . The publicly verifiable record of the randomization is available at https : //www.aeaweb.org/journals/policies/random-author-order/search with confirmation code : icoxrj0sNB2L . For citation of the work , authors request that the citation guidelines by AEA for random author ordering be followed . 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . encode non-identical distributions that are nonetheless close enough for practical purposes . Such situations may arise due to the use of approximate PC compilation [ 10 ] and sampling-based learning of PCs [ 26 , 27 ] . As a concrete example , consider PCs that are learned via approximate methods such as stochastic gradient descent [ 27 ] . In such a case , PCs are likely to converge to close but non-identical distributions . Given two such PCs , we would like to know whether they have converged to distributions close to each other . Thus , we raise the question : Does there exist an efficient algorithm to test the closeness of two PC distributions ? In this work , we design the first closeness test for PCs with respect to TV distance , called Teq . Assuming the tested PCs allow poly-time approximate weighted model counting and sampling , Teq runs in polynomial time . 
Formally , given two PC distributions P and Q , and three parameters ( ε , η , δ ) , for closeness ( ε ) , farness ( η ) , and tolerance ( δ ) , Teq returns Accept if dTV ( P , Q ) ≤ ε and Reject if dTV ( P , Q ) ≥ η with probability at least 1 − δ. Teq makes atmost O ( ( η − ε ) −2 log ( δ−1 ) ) calls to the sampler and exactly 2 calls to the counter . Teq builds on a general distance estimation technique of Canonne and Rubinfeld [ 4 ] that estimates the distance between two distributions with a small number of samples . In the context of PCs , the algorithm requires access to an exact sampler and an exact counter . Since not all PCs support exact sampling and counting , we modify the technique presented in [ 4 ] to allow for approximate samples and counts . Furthermore , we implement and test Teq on a dataset of publicly available PCs arising from applications in circuit testing . Our results show that closeness testing can be accurate and scalable in practice . For some NNF fragments , such as DNNF , no sampling algorithm is known , and for fragments such as PI , sampling is known to be NP-hard [ 32 ] . Since Teq requires access to approximate weighted counters and samplers to achieve tractability , the question of determining the closeness of the PCs mentioned above remains unanswered . Thus , we investigate further and characterize the complexity of closeness testing for a broad range of PCs . Our characterization reveals that PCs from the fragments d-DNNFs and SDNNFs can be tested for closeness in poly-time via Teq , owing to the algorithms of Darwiche [ 11 ] and Arenas et al . [ 1 ] . We show that the SDNNF approximate counting algorithm of Arenas et al . [ 1 ] can be extended to log-linear SDNNFs using chain formulas [ 8 ] . Then , using previously known results , we also find that there are no poly-time equivalence tests for PCs from PI and DNNF , conditional on widely believed complexity-theoretic conjectures . Our characterization also reveals some open questions regarding the complexity of closeness and equivalence testing of PCs . The rest of the paper is organized in the following way . We define the notation and discuss related work in Section 2 . We then present the main contribution of the paper , the closeness test Teq , and the associated proof of correctness in Section 3 . We present our experimental findings in Section 4 , and then discuss the complexity landscape of closeness testing in Section 5 . We conclude the paper and discuss some open problems in Section 6 . Due to space constraints , we defer some proofs to the supplementary Section A . 2 Background . Let ϕ : { 0 , 1 } n → { 0 , 1 } be a circuit over n Boolean variables . An assignment σ ∈ { 0 , 1 } n to the variables of ϕ is a satisfying assignment if ϕ ( σ ) = 1 . The set of all satisfying assignments of ϕ is Rϕ . If |Rϕ| > 0 , then ϕ is said to be satisfiable and if |Rϕ| = 2n , then ϕ is said to be valid . We use |ϕ| to denote the size of circuit ϕ , where the size is the total number of vertices and edges in the circuit DAG . The polynomial hierarchy ( PH ) contains the classes ΣP1 ( NP ) and Π P 1 ( co-NP ) along with generalizations of the form ΣPi and Π P i where Π P i+1 = co-NP ΠPi and ΣPi+1 = NP ΣPi [ 34 ] . The classes ΣPi and ΠPi are said to be at level i . If it is shown that two classes on the same or consecutive levels are equal , the hierarchy collapses to that level . 
Such a collapse is considered unlikely , and hence is used as the basic assumption for showing hardness results , including the ones we present in the paper . 2.1 Probability distributions . A weight function w : { 0 , 1 } n → Q+ assigns a positive rational weight to each assignment σ . We extend the definition of w to also allow circuits as input : w ( ϕ ) = ∑ σ∈Rϕ w ( σ ) . For weight function w and circuit ϕ , w ( ϕ ) is the weighted model count ( WMC ) of ϕ w.r.t . w. In this paper , we focus on log-linear weight functions as they capture a wide class of distributions , including those arising from graphical models , conditional random fields , and skip-gram models [ 24 ] . Log-linear models are represented as literal-weighted functions , defined as : Definition 1 . For a set X of n variables , a weight function w is called literal-weighted if there is a poly-time computable map w : X → Q ∩ ( 0 , 1 ) such that for any assignment σ ∈ { 0 , 1 } n : w ( σ ) = ∏ x∈σ { w ( x ) if x = 1 1− w ( x ) if x = 0 For all circuits ϕ , and log-linear weight functions w , w ( ϕ ) can be represented in size polynomial in the input . Probabilistic circuits : A probabilistic circuit is a satisfiable circuit ϕ along with a weight function w. ϕ and w together define a discrete probability distribution on the set { 0 , 1 } n that is supported over Rϕ . We denote the p.m.f . of this distribution as : P ( ϕ , w ) ( σ ) = { 0 ϕ ( σ ) = 0 w ( σ ) /w ( ϕ ) ϕ ( σ ) = 1 In this paper , we study circuits that are fragments of the Negation Normal Form ( NNF ) . A circuit ϕ in NNF is a rooted , directed acyclic graph ( DAG ) , where each leaf node is labeled with true , false , v or ¬v ; and each internal node is labeled with a ∧ or ∨ and can have arbitrarily many children . We focus on four fragments of NNF , namely , Decomposable NNF ( DNNF ) , deterministic-DNNF ( d-DNNF ) , Structured DNNF ( SDNNF ) , and Prime Implicates ( PI ) . For further information regarding circuits in NNF , refer to the survey [ 14 ] and the paper [ 28 ] . The TV distance of two probability distributions P ( ϕ1 , w1 ) and P ( ϕ2 , w2 ) over { 0 , 1 } n is defined as : dTV ( P ( ϕ1 , w1 ) , P ( ϕ2 , w2 ) ) = 1 2 ∑ σ∈ { 0,1 } n |P ( ϕ1 , w1 ) ( σ ) − P ( ϕ2 , w2 ) ( σ ) | . P ( ϕ1 , w1 ) and P ( ϕ2 , w2 ) are said to be ( 1 ) equivalent if dTV ( P ( ϕ1 , w1 ) , P ( ϕ2 , w2 ) ) = 0 , ( 2 ) ε-close if dTV ( P ( ϕ1 , w1 ) , P ( ϕ2 , w2 ) ) ≤ ε , and ( 3 ) η-far if dTV ( P ( ϕ1 , w1 ) , P ( ϕ2 , w2 ) ) ≥ η . Our closeness testing algorithm Teq , assumes access to an approximate weighted counter Awct ( α , β , ϕ , w ) , and an approximate weighted sampler Samp ( α , β , ϕ , w ) . We define their behavior as follows : Definition 2 . Awct ( α , β , ϕ , w ) takes a circuit ϕ , a weight function w , a tolerance parameter α > 0 and a confidence parameter β > 0 as input and returns the approximate weighted model count of ϕ w.r.t . w such that Pr [ w ( ϕ ) 1 + α ≤ Awct ( α , β , ϕ , w ) ≤ ( 1 + α ) w ( ϕ ) ] ≥ 1− β Tractable approximate counting algorithms for PCs are known as Fully Polynomial Randomised Approximation Schemes ( FPRAS ) . The running time of an FPRAS is given by T ( α , β , ϕ ) = poly ( α−1 , log ( β−1 ) , |ϕ| ) . Definition 3 . Samp ( α , β , ϕ , w ) takes a circuit ϕ , a weight function w , a tolerance parameter α > 0 and a confidence parameter β > 0 as input and returns either ( 1 ) a satisfying assignment σ sampled approximately w.r.t . 
weight function w with probability ≥ 1− β or ( 2 ) a symbol ⊥ indicating failure with probability < β . In other words , whenever Samp samples σ : P ( ϕ , w ) ( σ ) 1 + α ≤ Pr [ Samp ( α , β , ϕ , w ) = σ ] ≤ ( 1 + α ) P ( ϕ , w ) ( σ ) Tractable approximate sampling algorithms for PCs are known as Fully Polynomial Almost Uniform Samplers ( FPAUS ) . The running time of an FPAUS for a single sample is given by T ( α , β , ϕ ) = poly ( α−1 , log ( β−1 ) , |ϕ| ) . In the rest of the paper [ m ] denotes the set { 1 , 2 , . . .m } , 1 ( e ) represents the indicator variable for event e , and E ( v ) represents the expectation of random variable v .", "role": "user" }, { "content": "The contribution of this paper is a method for testing the closeness of the probability distributions encoded by two different probabalistic circuits (PCs). The closeness tester (called Teq), assumes access to an approximate weighted counter and an approximate weighted sampler. Teq is described (in words and pseudo-code) and the bulk of the paper is devoted to proving (PAC-style) correctness of Teq (ie Theorem 1) and characterising its time complexity (Theorem 2). Empirical results are given supporting the correctness of Teq and the stated time complexity theoretical results on Teq.", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:

- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.

Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].

Abstract:
ABSTRACT: Thalassemia is the commonest single gene disorder in India. About 10,000 infants with Thalassemia major are born every year. The present study was under taken with a hope to prevail better quality of life to the Thalassemic patients. Pallor being the chief complaint, a randomized controlled trial was undertaken with Dhatri Avaleha as it is specially mentioned for Pandu, Kamala & Haleemaka Roga. Children between age group of 1 to 15 years were randomly divided in two groups: Drug treated group (Group A) and Control group (Group B). Assessment was done on subjective and objective parameters after 30 and 60 days of treatment with follow up of two month. Statistically significant (P<0.01) result was obtained in Blood transfusion interval in group 'A' in comparison to group 'B'. Dhatri Avaleha may have a potential to increase blood transfusion interval and decrease secondary infection and thus it can be used as supportive therapy with modern medical management.

Extracted Data:
[["Dhatri Avaleha", "Control", "Blood transfusion interval", "significantly increased", "Statistically significant (P\u003c0.01) result was obtained in Blood transfusion interval in group \u0027A\u0027 in comparison to group \u0027B\u0027. Dhatri Avaleha may have a potential to increase blood transfusion interval and decrease secondary infection and thus it can be used as supportive therapy with modern medical management."]]
science.evidence_inference.160
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT: Thalassemia is the commonest single gene disorder in India. About 10,000 infants with Thalassemia major are born every year. The present study was under taken with a hope to prevail better quality of life to the Thalassemic patients. Pallor being the chief complaint, a randomized controlled trial was undertaken with Dhatri Avaleha as it is specially mentioned for Pandu, Kamala & Haleemaka Roga. Children between age group of 1 to 15 years were randomly divided in two groups: Drug treated group (Group A) and Control group (Group B). Assessment was done on subjective and objective parameters after 30 and 60 days of treatment with follow up of two month. Statistically significant (P<0.01) result was obtained in Blood transfusion interval in group 'A' in comparison to group 'B'. Dhatri Avaleha may have a potential to increase blood transfusion interval and decrease secondary infection and thus it can be used as supportive therapy with modern medical management.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"Dhatri Avaleha\", \"Control\", \"Blood transfusion interval\", \"significantly increased\", \"Statistically significant (P\\u003c0.01) result was obtained in Blood transfusion interval in group \\u0027A\\u0027 in comparison to group \\u0027B\\u0027. Dhatri Avaleha may have a potential to increase blood transfusion interval and decrease secondary infection and thus it can be used as supportive therapy with modern medical management.\"]]", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: 8-bit Optimizers via Block-wise Quantization Main Content: Increasing model size is an effective way to achieve better performance for given resources ( Kaplan et al. , 2020 ; Henighan et al. , 2020 ; Raffel et al. , 2019 ; Lewis et al. , 2021 ) . However , training such large models requires storing the model , gradient , and state of the optimizer ( e.g. , exponentially smoothed sum and squared sum of previous gradients for Adam ) , all in a fixed amount of available memory . Although significant research has focused on enabling larger model training by reducing or efficiently distributing the memory required for the model parameters ( Shoeybi et al. , 2019 ; Lepikhin et al. , 2020 ; Fedus et al. , 2021 ; Brown et al. , 2020 ; Rajbhandari et al. , 2020 ) , reducing the memory footprint of optimizer gradient statistics is much less studied . This is a significant missed opportunity since these optimizer states use 33-75 % of the total memory footprint during training . For example , the Adam optimizer states for the largest GPT-2 ( Radford et al. , 2019 ) and T5 ( Raffel et al. , 2019 ) models are 11 GB and 41 GB in size . In this paper , we develop a fast , high-precision non-linear quantization method – block-wise dynamic quantization – that enables stable 8-bit optimizers ( e.g. , Adam , AdamW , and Momentum ) which maintain 32-bit performance at a fraction of the memory footprint and without any changes to the original hyperparameters.1 While most current work uses 32-bit optimizer states , recent high-profile efforts to use 16-bit optimizers report difficultly for large models with more than 1B parameters ( Ramesh et al. , 2021 ) . Going from 16-bit optimizers to 8-bit optimizers reduces the range of possible values from 216 = 65536 values to just 28 = 256 . To our knowledge , this has not been attempted before . Effectively using this very limited range is challenging for three reasons : quantization accuracy , computational efficiency , and large-scale stability . To maintain accuracy , it is critical to introduce some form of non-linear quantization to reduce errors for both common small magnitude values and rare large ones . However , to be practical , 8-bit optimizers need to be fast enough to not slow 1We study 8-bit optimization with current best practice model and gradient representations ( typically 16-bit mixed precision ) , to isolate optimization challenges . Future work could explore further compressing all three . down training , which is especially difficult for non-linear methods that require more complex data structures to maintain the quantization buckets . Finally , to maintain stability with huge models beyond 1B parameters , a quantization method needs to not only have a good mean error but excellent worse case performance since a single large quantization error can cause the entire training run to diverge . We introduce a new block-wise quantization approach that addresses all three of these challenges . Block-wise quantization splits input tensors into blocks and performs quantization on each block independently . This block-wise division reduces the effect of outliers on the quantization process since they are isolated to particular blocks , thereby improving stability and performance , especially for large-scale models . 
Block-wise processing also allows for high optimizer throughput since each normalization can be computed independently in each core . This contrasts with tensor-wide normalization , which requires slow cross-core synchronization that is highly dependent on task-core scheduling . We combine block-wise quantization with two novel methods for stable , high-performance 8-bit optimizers : dynamic quantization and a stable embedding layer . Dynamic quantization is an extension of dynamic tree quantization for unsigned input data . The stable embedding layer is a variation of a standard word embedding layer that supports more aggressive quantization by normalizing the highly non-uniform distribution of inputs to avoid extreme gradient variation . Our 8-bit optimizers maintain 32-bit performance at a fraction of the original memory footprint . We show this for a broad range of tasks : 1.5B and 355M parameter language modeling , GLUE finetuning , ImageNet classification , WMT ’ 14+WMT ’ 16 machine translation , MoCo v2 contrastive image pretraining+finetuning , and RoBERTa pretraining . We also report additional ablations and sensitivity analysis showing that all components – block-wise quantization , dynamic quantization , and stable embedding layer – are crucial for these results and that 8-bit Adam can be used as a simple drop-in replacement for 32-bit Adam , with no hyperparameter changes . We open-source our custom CUDA kernels and provide a PyTorch implementation that enables 8-bit optimization by changing two lines of code . 1 BACKGROUND . 1.1 STATEFUL OPTIMIZERS . An optimizer updates the parameters w of a neural network by using the gradient of the loss with respect to the weight gt = ∂L∂w at update iteration t. Stateful optimizers compute statistics of the gradient with respect to each parameter over time for accelerated optimization . Two of the most commonly used stateful optimizers are Adam ( Kingma and Ba , 2014 ) , and SGD with momentum ( Qian , 1999 ) – or Momentum for short . Without damping and scaling constants , the update rules of these optimizers are given by : Momentum ( gt , wt−1 , mt−1 ) = m0 = g0 Initialization mt = β1mt−1 + gt State 1 update wt = wt−1 − α ·mt Weight update ( 1 ) Adam ( gt , wt−1 , mt−1 , rt−1 ) = r0 = m0 = 0 Initialization mt = β1mt−1 + ( 1− β1 ) gt State 1 update rt = β2rt−1 + ( 1− β2 ) g2t State 2 update wt = wt−1 − α · mt√rt+ Weight update , ( 2 ) where β1 and β2 are smoothing constants , is a small constant , and α is the learning rate . For 32-bit states , Momentum and Adam consume 4 and 8 bytes per parameter . That is 4 GB and 8 GB for a 1B parameter model . Our 8-bit non-linear quantization reduces these costs to 1 GB and 2 GB . 1.2 NON-LINEAR QUANTIZATION . Quantization compresses numeric representations to save space at the cost of precision . Quantization is the mapping of a k-bit integer to a real element in D , that is , Qmap : [ 0 , 2k − 1 ] 7→ D. For example , the IEEE 32-bit floating point data type maps the indices 0 ... 232 − 1 to the domain [ -3.4e38 , +3.4e38 ] . We use the following notation : Qmap ( i ) = Qmapi = qi , for example Qmap ( 231 + 131072 ) = 2.03125 , for the IEEE 32-bit floating point data type . To perform general quantization from one data type into another we require three steps . 
( 1 ) Compute a normalization constant N that transforms the input tensor T into the range of the domain D of the target quantization data type Qmap , ( 2 ) for each element of T/N find the closest corresponding value qi in the domain D , ( 3 ) store the index i corresponding to qi in the quantized output tensor TQ . To receive the dequantized tensor TD we look up the index and denormalize : TDi = Q map ( TQi ) ·N . To perform this procedure for dynamic quantization we first normalize into the range [ -1 , 1 ] through division by the absolute maximum value : N = max ( |T| ) . Then we find the closest values via a binary search : TQi = 2n argmin j=0 |Qmapj − Ti N | ( 3 ) 1.3 DYNAMIC TREE QUANTIZATION Figure 2 : Dynamic tree quantization . Dynamic Tree quantization ( Dettmers , 2016 ) is a method that yields low quantization error for both small and large magnitude values . Unlike data types with fixed exponent and fraction , dynamic tree quantization uses a datatype with a dynamic exponent and fraction that can change with each number . It is made up of four parts , as seen in Figure 2 : ( 1 ) The first bit of the data type is reserved for a sign . ( 2 ) The number of subsequent zero bits indicates the magnitude of the exponent . ( 3 ) The first bit that is set to one indicates that all following values are reserved for ( 4 ) linear quantization . By moving the indicator bit , num- bers can have a large exponent 10−7 or precision as high as 1/63 . Compared to linear quantization , dynamic tree quantization has better absolute and relative quantization errors for non-uniform distributions . Dynamic tree quantization is strictly defined to quantize numbers in the range [ -1.0 , 1.0 ] , which is ensured by performing tensor-level absolute max normalization . 2 8-BIT OPTIMIZERS . Our 8-bit optimizers have three components : ( 1 ) block-wise quantization that isolates outliers and distributes the error more equally over all bits ; ( 2 ) dynamic quantization , which quantizes both small and large values with high precision ; and ( 3 ) a stable embedding layer to improve stability during optimization for models with word embeddings . With these components , performing an optimizer update with 8-bit states is straightforward . We dequantize the 8-bit optimizer states to 32-bit , perform the update , and then quantize the states back to 8-bit for storage . We do this 8-bit to 32-bit conversion element-by-element in registers , which means no slow copies to GPU memory or additional temporary memory are needed to perform quantization and dequantization . For GPUs , this makes 8-bit optimizers faster than regular 32-bit optimizers , as we show in Section 3 . 2.1 BLOCK-WISE QUANTIZATION . Our block-wise quantization reduces the cost of computing normalization and improves quantization precision by isolating outliers . In order to dynamically quantize a tensor , as defined in Section 1.2 , we need to normalize the tensor into the range [ -1 , 1 ] . Such normalization requires a reduction over the entire tensor , which entails multiple synchronizations across GPU cores . Block-wise dynamic quantization reduces this cost by chunking an input tensor into small blocks of size B = 2048 and performing normalization independently in each core across this block . More formally , using the notation introduced in Section 1.2 , in block-wise quantization , we treat T as a one-dimensional sequence of elements that we chunk in blocks of size B . This means for an input tensor T with n elements we have n/B blocks . 
We proceed to compute a normalization constant for each block : Nb = max ( |Tb| ) , where b is the index of the block 0 .. n/B . With this block-wise normalization constant , each block can be quantized independently : TQbi = 2n argmin j=0 |Qmapj − Tbi Nb | ∣∣∣∣ 0 < i < B ( 4 ) This approach has several advantages , both for stability and efficiency . First , each block normalization can be computed independently . Thus no synchronization between cores is required , and throughput is enhanced . Secondly , it is also much more robust to outliers in the input tensor . For example , to contrast blockwise and regular quantization , if we create an input tensor with one million elements sampled from the standard normal distribution , we expect less than 1 % of elements of the tensor will be in the range [ 3 , +∞ ) . However , since we normalize the input tensor into the range [ -1,1 ] this means the maximum values of the distribution determine the range of quantization buckets . This means if the input tensor contains an outlier with magnitude 5 , the quantization buckets reserved for numbers between 3 and 5 will mostly go unused since less than 1 % of numbers are in this range . With blockwise quantization , the effect of outliers is limited to a single block . As such , most bits are used effectively in other blocks . Furthermore , because outliers represent the absolute maximum value in the input tensor , blockwise quantization approximates outlier values without any error . This guarantees that the largest optimizer states , arguably the most important , will always be quantized with full precision . This property makes block-wise dynamic quantization both robust and precise and is essential for good training performance in practice .
This paper proposes a non-linear block-wise quantization method to reduce the memory overhead of stateful optimizers, without sacrificing performance going from 32bits to 8 bit. The authors combine block-wise quantization with 2 methods to stabilize training: dynamic tree quantization and a stable embedding layer. Results on WMT, GLUE and Moco show the effectiveness of the method.
science.mup_single_document_summarization.6108
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\n8-bit Optimizers via Block-wise Quantization\n\nMain Content:\nIncreasing model size is an effective way to achieve better performance for given resources ( Kaplan et al. , 2020 ; Henighan et al. , 2020 ; Raffel et al. , 2019 ; Lewis et al. , 2021 ) . However , training such large models requires storing the model , gradient , and state of the optimizer ( e.g. , exponentially smoothed sum and squared sum of previous gradients for Adam ) , all in a fixed amount of available memory . Although significant research has focused on enabling larger model training by reducing or efficiently distributing the memory required for the model parameters ( Shoeybi et al. , 2019 ; Lepikhin et al. , 2020 ; Fedus et al. , 2021 ; Brown et al. , 2020 ; Rajbhandari et al. , 2020 ) , reducing the memory footprint of optimizer gradient statistics is much less studied . This is a significant missed opportunity since these optimizer states use 33-75 % of the total memory footprint during training . For example , the Adam optimizer states for the largest GPT-2 ( Radford et al. , 2019 ) and T5 ( Raffel et al. , 2019 ) models are 11 GB and 41 GB in size . In this paper , we develop a fast , high-precision non-linear quantization method – block-wise dynamic quantization – that enables stable 8-bit optimizers ( e.g. , Adam , AdamW , and Momentum ) which maintain 32-bit performance at a fraction of the memory footprint and without any changes to the original hyperparameters.1 While most current work uses 32-bit optimizer states , recent high-profile efforts to use 16-bit optimizers report difficultly for large models with more than 1B parameters ( Ramesh et al. , 2021 ) . Going from 16-bit optimizers to 8-bit optimizers reduces the range of possible values from 216 = 65536 values to just 28 = 256 . To our knowledge , this has not been attempted before . Effectively using this very limited range is challenging for three reasons : quantization accuracy , computational efficiency , and large-scale stability . To maintain accuracy , it is critical to introduce some form of non-linear quantization to reduce errors for both common small magnitude values and rare large ones . However , to be practical , 8-bit optimizers need to be fast enough to not slow 1We study 8-bit optimization with current best practice model and gradient representations ( typically 16-bit mixed precision ) , to isolate optimization challenges . Future work could explore further compressing all three . down training , which is especially difficult for non-linear methods that require more complex data structures to maintain the quantization buckets . Finally , to maintain stability with huge models beyond 1B parameters , a quantization method needs to not only have a good mean error but excellent worse case performance since a single large quantization error can cause the entire training run to diverge . We introduce a new block-wise quantization approach that addresses all three of these challenges . Block-wise quantization splits input tensors into blocks and performs quantization on each block independently . This block-wise division reduces the effect of outliers on the quantization process since they are isolated to particular blocks , thereby improving stability and performance , especially for large-scale models . 
Block-wise processing also allows for high optimizer throughput since each normalization can be computed independently in each core . This contrasts with tensor-wide normalization , which requires slow cross-core synchronization that is highly dependent on task-core scheduling . We combine block-wise quantization with two novel methods for stable , high-performance 8-bit optimizers : dynamic quantization and a stable embedding layer . Dynamic quantization is an extension of dynamic tree quantization for unsigned input data . The stable embedding layer is a variation of a standard word embedding layer that supports more aggressive quantization by normalizing the highly non-uniform distribution of inputs to avoid extreme gradient variation . Our 8-bit optimizers maintain 32-bit performance at a fraction of the original memory footprint . We show this for a broad range of tasks : 1.5B and 355M parameter language modeling , GLUE finetuning , ImageNet classification , WMT ’ 14+WMT ’ 16 machine translation , MoCo v2 contrastive image pretraining+finetuning , and RoBERTa pretraining . We also report additional ablations and sensitivity analysis showing that all components – block-wise quantization , dynamic quantization , and stable embedding layer – are crucial for these results and that 8-bit Adam can be used as a simple drop-in replacement for 32-bit Adam , with no hyperparameter changes . We open-source our custom CUDA kernels and provide a PyTorch implementation that enables 8-bit optimization by changing two lines of code . 1 BACKGROUND . 1.1 STATEFUL OPTIMIZERS . An optimizer updates the parameters w of a neural network by using the gradient of the loss with respect to the weight gt = ∂L∂w at update iteration t. Stateful optimizers compute statistics of the gradient with respect to each parameter over time for accelerated optimization . Two of the most commonly used stateful optimizers are Adam ( Kingma and Ba , 2014 ) , and SGD with momentum ( Qian , 1999 ) – or Momentum for short . Without damping and scaling constants , the update rules of these optimizers are given by : Momentum ( gt , wt−1 , mt−1 ) = m0 = g0 Initialization mt = β1mt−1 + gt State 1 update wt = wt−1 − α ·mt Weight update ( 1 ) Adam ( gt , wt−1 , mt−1 , rt−1 ) = r0 = m0 = 0 Initialization mt = β1mt−1 + ( 1− β1 ) gt State 1 update rt = β2rt−1 + ( 1− β2 ) g2t State 2 update wt = wt−1 − α · mt√rt+ Weight update , ( 2 ) where β1 and β2 are smoothing constants , is a small constant , and α is the learning rate . For 32-bit states , Momentum and Adam consume 4 and 8 bytes per parameter . That is 4 GB and 8 GB for a 1B parameter model . Our 8-bit non-linear quantization reduces these costs to 1 GB and 2 GB . 1.2 NON-LINEAR QUANTIZATION . Quantization compresses numeric representations to save space at the cost of precision . Quantization is the mapping of a k-bit integer to a real element in D , that is , Qmap : [ 0 , 2k − 1 ] 7→ D. For example , the IEEE 32-bit floating point data type maps the indices 0 ... 232 − 1 to the domain [ -3.4e38 , +3.4e38 ] . We use the following notation : Qmap ( i ) = Qmapi = qi , for example Qmap ( 231 + 131072 ) = 2.03125 , for the IEEE 32-bit floating point data type . To perform general quantization from one data type into another we require three steps . 
( 1 ) Compute a normalization constant N that transforms the input tensor T into the range of the domain D of the target quantization data type Qmap , ( 2 ) for each element of T/N find the closest corresponding value qi in the domain D , ( 3 ) store the index i corresponding to qi in the quantized output tensor TQ . To receive the dequantized tensor TD we look up the index and denormalize : TDi = Q map ( TQi ) ·N . To perform this procedure for dynamic quantization we first normalize into the range [ -1 , 1 ] through division by the absolute maximum value : N = max ( |T| ) . Then we find the closest values via a binary search : TQi = 2n argmin j=0 |Qmapj − Ti N | ( 3 ) 1.3 DYNAMIC TREE QUANTIZATION Figure 2 : Dynamic tree quantization . Dynamic Tree quantization ( Dettmers , 2016 ) is a method that yields low quantization error for both small and large magnitude values . Unlike data types with fixed exponent and fraction , dynamic tree quantization uses a datatype with a dynamic exponent and fraction that can change with each number . It is made up of four parts , as seen in Figure 2 : ( 1 ) The first bit of the data type is reserved for a sign . ( 2 ) The number of subsequent zero bits indicates the magnitude of the exponent . ( 3 ) The first bit that is set to one indicates that all following values are reserved for ( 4 ) linear quantization . By moving the indicator bit , num- bers can have a large exponent 10−7 or precision as high as 1/63 . Compared to linear quantization , dynamic tree quantization has better absolute and relative quantization errors for non-uniform distributions . Dynamic tree quantization is strictly defined to quantize numbers in the range [ -1.0 , 1.0 ] , which is ensured by performing tensor-level absolute max normalization . 2 8-BIT OPTIMIZERS . Our 8-bit optimizers have three components : ( 1 ) block-wise quantization that isolates outliers and distributes the error more equally over all bits ; ( 2 ) dynamic quantization , which quantizes both small and large values with high precision ; and ( 3 ) a stable embedding layer to improve stability during optimization for models with word embeddings . With these components , performing an optimizer update with 8-bit states is straightforward . We dequantize the 8-bit optimizer states to 32-bit , perform the update , and then quantize the states back to 8-bit for storage . We do this 8-bit to 32-bit conversion element-by-element in registers , which means no slow copies to GPU memory or additional temporary memory are needed to perform quantization and dequantization . For GPUs , this makes 8-bit optimizers faster than regular 32-bit optimizers , as we show in Section 3 . 2.1 BLOCK-WISE QUANTIZATION . Our block-wise quantization reduces the cost of computing normalization and improves quantization precision by isolating outliers . In order to dynamically quantize a tensor , as defined in Section 1.2 , we need to normalize the tensor into the range [ -1 , 1 ] . Such normalization requires a reduction over the entire tensor , which entails multiple synchronizations across GPU cores . Block-wise dynamic quantization reduces this cost by chunking an input tensor into small blocks of size B = 2048 and performing normalization independently in each core across this block . More formally , using the notation introduced in Section 1.2 , in block-wise quantization , we treat T as a one-dimensional sequence of elements that we chunk in blocks of size B . This means for an input tensor T with n elements we have n/B blocks . 
We proceed to compute a normalization constant for each block : Nb = max ( |Tb| ) , where b is the index of the block 0 .. n/B . With this block-wise normalization constant , each block can be quantized independently : TQbi = 2n argmin j=0 |Qmapj − Tbi Nb | ∣∣∣∣ 0 < i < B ( 4 ) This approach has several advantages , both for stability and efficiency . First , each block normalization can be computed independently . Thus no synchronization between cores is required , and throughput is enhanced . Secondly , it is also much more robust to outliers in the input tensor . For example , to contrast blockwise and regular quantization , if we create an input tensor with one million elements sampled from the standard normal distribution , we expect less than 1 % of elements of the tensor will be in the range [ 3 , +∞ ) . However , since we normalize the input tensor into the range [ -1,1 ] this means the maximum values of the distribution determine the range of quantization buckets . This means if the input tensor contains an outlier with magnitude 5 , the quantization buckets reserved for numbers between 3 and 5 will mostly go unused since less than 1 % of numbers are in this range . With blockwise quantization , the effect of outliers is limited to a single block . As such , most bits are used effectively in other blocks . Furthermore , because outliers represent the absolute maximum value in the input tensor , blockwise quantization approximates outlier values without any error . This guarantees that the largest optimizer states , arguably the most important , will always be quantized with full precision . This property makes block-wise dynamic quantization both robust and precise and is essential for good training performance in practice .", "role": "user" }, { "content": "This paper proposes a non-linear block-wise quantization method to reduce the memory overhead of stateful optimizers, without sacrificing performance going from 32bits to 8 bit. The authors combine block-wise quantization with 2 methods to stabilize training: dynamic tree quantization and a stable embedding layer. Results on WMT, GLUE and Moco show the effectiveness of the method.", "role": "assistant" } ]
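The excerpt above walks through the block-wise scheme step by step: chunk the flat tensor into blocks of B = 2048, take each block's absolute maximum as its normalization constant, map each scaled element to the nearest value of the 8-bit code, and invert by lookup and per-block rescaling. A minimal NumPy sketch of those steps follows; the paper itself does this in fused CUDA kernels with a dynamic-tree code, so the linear 256-value code, the function names, and the toy sizes here are placeholders, not the authors' implementation.

import numpy as np

def blockwise_quantize(t: np.ndarray, code: np.ndarray, block: int = 2048):
    """Quantize a flat float tensor block by block.
    code : sorted 1-D array of 256 values in [-1, 1] (the Qmap).
    Returns (uint8 indices, per-block absmax constants)."""
    flat = t.ravel()
    pad = (-len(flat)) % block
    flat = np.pad(flat, (0, pad))                 # pad so blocks divide evenly
    blocks = flat.reshape(-1, block)
    norms = np.abs(blocks).max(axis=1, keepdims=True)
    norms[norms == 0] = 1.0                       # avoid division by zero
    scaled = blocks / norms                       # each block now lies in [-1, 1]
    # nearest-neighbour lookup into the code for every element
    idx = np.abs(scaled[..., None] - code).argmin(axis=-1).astype(np.uint8)
    return idx, norms.squeeze(1)

def blockwise_dequantize(idx: np.ndarray, norms: np.ndarray, code: np.ndarray):
    """Invert blockwise_quantize: look up code values and rescale per block."""
    return (code[idx] * norms[:, None]).ravel()

# toy usage: a linear 8-bit code stands in for the dynamic-tree code here
code = np.linspace(-1.0, 1.0, 256)
state = np.random.randn(10_000).astype(np.float32)
q, n = blockwise_quantize(state, code)
approx = blockwise_dequantize(q, n, code)[: state.size]
print(float(np.abs(state - approx).max()))        # small per-element error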
science.scierc_ner
Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context. Input Abstract: This paper describes the framework of a Korean phonological knowledge base system using the unification-based grammar formalism: Korean Phonology Structure Grammar (KPSG). The approach of KPSG provides an explicit development model for constructing a computational phonological system: speech recognition and synthesis system. We show that the proposed approach is more describable than other approaches such as those employing a traditional generative phonological approach.
Output Entities: {"Generic": ["approach", "approaches"], "Material": [], "Method": ["Korean Phonology Structure Grammar (KPSG)", "unification-based grammar formalism", "generative phonological approach"], "Metric": [], "OtherScientificTerm": [], "Task": ["phonological system", "Korean phonological knowledge base system"]}
science.scierc_ner.35
[ { "content": "Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.\n\nInput Abstract:\n\nThis paper describes the framework of a Korean phonological knowledge base system using the unification-based grammar formalism: Korean Phonology Structure Grammar (KPSG). The approach of KPSG provides an explicit development model for constructing a computational phonological system: speech recognition and synthesis system. We show that the proposed approach is more describable than other approaches such as those employing a traditional generative phonological approach.", "role": "user" }, { "content": "Output Entities:\n\n{\"Generic\": [\"approach\", \"approaches\"], \"Material\": [], \"Method\": [\"Korean Phonology Structure Grammar (KPSG)\", \"unification-based grammar formalism\", \"generative phonological approach\"], \"Metric\": [], \"OtherScientificTerm\": [], \"Task\": [\"phonological system\", \"Korean phonological knowledge base system\"]}", "role": "assistant" } ]
science.qasper_abstractive_qa
Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines. Document Content: Generating Word and Document Embeddings for Sentiment Analysis Sentiments of words differ from one corpus to another. Inducing general sentiment lexicons for languages and using them cannot, in general, produce meaningful results for different domains. In this paper, we combine contextual and supervised information with the general semantic representations of words occurring in the dictionary. Contexts of words help us capture the domain-specific information and supervised scores of words are indicative of the polarities of those words. When we combine supervised features of words with the features extracted from their dictionary definitions, we observe an increase in the success rates. We try out the combinations of contextual, supervised, and dictionary-based approaches, and generate original vectors. We also combine the word2vec approach with hand-crafted features. We induce domain-specific sentimental vectors for two corpora, which are the movie domain and the Twitter datasets in Turkish. When we thereafter generate document vectors and employ the support vector machines method utilising those vectors, our approaches perform better than the baseline studies for Turkish with a significant margin. We evaluated our models on two English corpora as well and these also outperformed the word2vec approach. It shows that our approaches are cross-lingual and cross-domain. Methodology We generate several word vectors, which capture the sentimental, lexical, and contextual characteristics of words. In addition to these mostly original vectors, we also create word2vec embeddings to represent the corpus words by training the embedding model on these datasets. After generating these, we combine them with hand-crafted features to create document vectors and perform classification, as will be explained in Section 3.5. Methodology ::: Corpus-based Approach Contextual information is informative in the sense that, in general, similar words tend to appear in the same contexts. For example, the word smart is more likely to cooccur with the word hardworking than with the word lazy. This similarity can be defined semantically and sentimentally. In the corpus-based approach, we capture both of these characteristics and generate word embeddings specific to a domain. Firstly, we construct a matrix whose entries correspond to the number of cooccurrences of the row and column words in sliding windows. Diagonal entries are assigned the number of sliding windows that the corresponding row word appears in the whole corpus. We then normalise each row by dividing entries in the row by the maximum score in it. Secondly, we perform the principal component analysis (PCA) method to reduce the dimensionality. It captures latent meanings and takes into account high-order cooccurrence removing noise. The attribute (column) number of the matrix is reduced to 200. We then compute cosine similarity between each row pair $w_i$ and $w_j$ as in (DISPLAY_FORM3) to find out how similar two word vectors (rows) are. Thirdly, all the values in the matrix are subtracted from 1 to create a dissimilarity matrix. Then, we feed the matrix as input into the fuzzy c-means clustering algorithm. We chose the number of clusters as 200, as it is considered a standard for word embeddings in the literature. 
After clustering, the dimension i for a corresponding word indicates the degree to which this word belongs to cluster i. The intuition behind this idea is that if two words are similar in the VSM, they are more likely to belong to the same clusters with akin probabilities. In the end, each word in the corpus is represented by a 200-dimensional vector. In addition to this method, we also perform singular value decomposition (SVD) on the cooccurrence matrices, where we compute the matrix $M^{PPMI} = U\Sigma V^{T}$. Positive pointwise mutual information (PPMI) scores between words are calculated and the truncated singular value decomposition is computed. We take into account the U matrix only for each word. We have chosen the singular value number as 200. That is, each word in the corpus is represented by a 200-dimensional vector as follows. Methodology ::: Dictionary-based Approach In Turkish, there do not exist well-established sentiment lexicons as in English. In this approach, we made use of the TDK (Türk Dil Kurumu - “Turkish Language Institution”) dictionary to obtain word polarities. Although it is not a sentiment lexicon, combining it with domain-specific polarity scores obtained from the corpus led us to have state-of-the-art results. We first construct a matrix whose row entries are corpus words and column entries are the words in their dictionary definitions. We followed the boolean approach. For instance, for the word cat, the column words occurring in its dictionary definition are given a score of 1. Those column words not appearing in the definition of cat are assigned a score of 0 for that corresponding row entry. When we performed clustering on this matrix, we observed that those words having similar meanings are, in general, assigned to the same clusters. However, this similarity fails in capturing the sentimental characteristics. For instance, the words happy and unhappy are assigned to the same cluster, since they have the same words, such as feeling, in their dictionary definitions. However, they are of opposite polarities and should be discerned from each other. Therefore, we utilise a metric to move such words away from each other in the VSM, even though they have common words in their dictionary definitions. We multiply each value in a row with the corresponding row word's raw supervised score, thereby having more meaningful clusters. Using the training data only, the supervised polarity score per word is calculated as in (DISPLAY_FORM4). Here, $ w_{t}$ denotes the sentiment score of word $t$, $N_{t}$ is the number of documents (reviews or tweets) in which $t$ occurs in the dataset of positive polarity, $N$ is the number of all the words in the corpus of positive polarity. $N^{\prime }$ denotes the corpus of negative polarity. $N^{\prime }_{t}$ and $N^{\prime }$ denote similar values for the negative polarity corpus. We perform normalisation to prevent the imbalance problem and add a small number to both numerator and denominator for smoothing. As an alternative to multiplying with the supervised polarity scores, we also separately multiplied all the row scores with only +1 if the row word is a positive word, and with -1 if it is a negative word. We have observed it boosts the performance more compared to using raw scores. The effect of this multiplication is exemplified in Figure FIGREF7, showing the positions of word vectors in the VSM. Those “x" words are sentimentally negative words, those “o" words are sentimentally positive ones. 
On the top coordinate plane, the words of opposite polarities are found to be close to each other, since they have common words in their dictionary definitions. Only the information concerned with the dictionary definitions are used there, discarding the polarity scores. However, when we utilise the supervised score (+1 or -1), words of opposite polarities (e.g. “happy" and “unhappy") get far away from each other as they are translated across coordinate regions. Positive words now appear in quadrant 1, whereas negative words appear in quadrant 3. Thus, in the VSM, words that are sentimentally similar to each other could be clustered more accurately. Besides clustering, we also employed the SVD method to perform dimensionality reduction on the unsupervised dictionary algorithm and used the newly generated matrix by combining it with other subapproaches. The number of dimensions is chosen as 200 again according to the $U$ matrix. The details are given in Section 3.4. When using and evaluating this subapproach on the English corpora, we used the SentiWordNet lexicon BIBREF13. We have achieved better results for the dictionary-based algorithm when we employed the SVD reduction method compared to the use of clustering. Methodology ::: Supervised Contextual 4-scores Our last component is a simple metric that uses four supervised scores for each word in the corpus. We extract these scores as follows. For a target word in the corpus, we scan through all of its contexts. In addition to the target word's polarity score (the self score), out of all the polarity scores of words occurring in the same contexts as the target word, minimum, maximum, and average scores are taken into consideration. The word polarity scores are computed using (DISPLAY_FORM4). Here, we obtain those scores from the training data. The intuition behind this method is that those four scores are more indicative of a word's polarity rather than only one (the self score). This approach is fully supervised unlike the previous two approaches. Methodology ::: Combination of the Word Embeddings In addition to using the three approaches independently, we also combined all the matrices generated in the previous approaches. That is, we concatenate the reduced forms (SVD - U) of corpus-based, dictionary-based, and the whole of 4-score vectors of each word, horizontally. Accordingly, each corpus word is represented by a 404-dimensional vector, since corpus-based and dictionary-based vector components are each composed of 200 dimensions, whereas the 4-score vector component is formed by four values. The main intuition behind the ensemble method is that some approaches compensate for what the others may lack. For example, the corpus-based approach captures the domain-specific, semantic, and syntactic characteristics. On the other hand, the 4-scores method captures supervised features, and the dictionary-based approach is helpful in capturing the general semantic characteristics. That is, combining those three approaches makes word vectors more representative. Methodology ::: Generating Document Vectors After creating several embeddings as mentioned above, we create document (review or tweet) vectors. For each document, we sum all the vectors of words occurring in that document and take their average. In addition to it, we extract three hand-crafted polarity scores, which are minimum, mean, and maximum polarity scores, from each review. These polarity scores of words are computed as in (DISPLAY_FORM4). 
For example, if a review consists of five words, it would have five polarity scores and we utilise only three of these sentiment scores as mentioned. Lastly, we concatenate these three scores to the averaged word vector per review. That is, each review is represented by the average word vector of its constituent word embeddings and three supervised scores. We then feed these inputs into the SVM approach. The flowchart of our framework is given in Figure FIGREF11. When combining the unsupervised features, which are word vectors created on a word basis, with supervised three scores extracted on a review basis, we have better state-of-the-art results. Datasets We utilised two datasets for both Turkish and English to evaluate our methods. For Turkish, as the first dataset, we utilised the movie reviews which are collected from a popular website. The number of reviews in this movie corpus is 20,244 and the average number of words in reviews is 39. Each of these reviews has a star-rating score which is indicative of sentiment. These polarity scores are between the values 0.5 and 5, at intervals of 0.5. We consider a review to be negative it the score is equal to or lower than 2.5. On the other hand, if it is equal to or higher than 4, it is assumed to be positive. We have randomly selected 7,020 negative and 7,020 positive reviews and processed only them. The second Turkish dataset is the Twitter corpus which is formed of tweets about Turkish mobile network operators. Those tweets are mostly much noisier and shorter compared to the reviews in the movie corpus. In total, there are 1,716 tweets. 973 of them are negative and 743 of them are positive. These tweets are manually annotated by two humans, where the labels are either positive or negative. We measured the Cohen's Kappa inter-annotator agreement score to be 0.82. If there was a disagreement on the polarity of a tweet, we removed it. We also utilised two other datasets in English to test the cross-linguality of our approaches. One of them is a movie corpus collected from the web. There are 5,331 positive reviews and 5,331 negative reviews in this corpus. The other is a Twitter dataset, which has nearly 1.6 million tweets annotated through a distant supervised method BIBREF14. These tweets have positive, neutral, and negative labels. We have selected 7,020 positive tweets and 7,020 negative tweets randomly to generate a balanced dataset. Experiments ::: Hyperparameters We used the LibSVM utility of the WEKA tool. We chose the linear kernel option to classify the reviews. We trained word2vec embeddings on all the four corpora using the Gensim library BIBREF18 with the skip-gram method. The dimension size of these embeddings are set at 200. As mentioned, other embeddings, which are generated utilising the clustering and the SVD approach, are also of size 200. For c-mean clustering, we set the maximum number of iterations at 25, unless it converges. Conclusion We have demonstrated that using word vectors that capture only semantic and syntactic characteristics may be improved by taking into account their sentimental aspects as well. Our approaches are cross-lingual and cross-domain. They can be applied to other domains and other languages than Turkish and English with minor changes. Our study is one of the few ones that perform sentiment analysis in Turkish and leverages sentimental characteristics of words in generating word vectors and outperforms all the others. Any of the approaches we propose can be used independently of the others. 
Our approaches without using sentiment labels can be applied to other classification tasks, such as topic classification and concept mining. The experiments show that even unsupervised approaches, as in the corpus-based approach, can outperform supervised approaches in classification tasks. Combining some approaches, which can compensate for what others lack, can help us build better vectors. Our word vectors are created by conventional machine learning algorithms; however, they, as in the corpus-based model, produce state-of-the-art results. Although we preferred to use a classical machine learning algorithm, which is SVM, over a neural network classifier to predict the labels of reviews, we achieved accuracies of over 90 per cent for the Turkish movie corpus and about 88 per cent for the English Twitter dataset. We performed only binary sentiment classification in this study as most of the studies in the literature do. We will extend our system in future by using neutral reviews as well. We also plan to employ Turkish WordNet to enhance the generalisability of our embeddings as another future work. Acknowledgments This work was supported by Boğaziçi University Research Fund Grant Number 6980D, and by Turkish Ministry of Development under the TAM Project number DPT2007K12-0610. Cem Rıfkı Aydın is supported by TÜBİTAK BIDEB 2211E. Fig. 2. The flowchart of our system. Table 1. Accuracies for different feature sets fed as input into the SVM classifier in predicting the labels of reviews. The word2vec algorithm is the baseline method. Query: What details are given about the Twitter dataset? JSON Response:
{"answer": "one of the Twitter datasets is about Turkish mobile network operators, there are positive, neutral and negative labels and provide the total amount plus the distribution of labels", "evidence": ["The second Turkish dataset is the Twitter corpus which is formed of tweets about Turkish mobile network operators. Those tweets are mostly much noisier and shorter compared to the reviews in the movie corpus. In total, there are 1,716 tweets. 973 of them are negative and 743 of them are positive. These tweets are manually annotated by two humans, where the labels are either positive or negative. We measured the Cohen\u0027s Kappa inter-annotator agreement score to be 0.82. If there was a disagreement on the polarity of a tweet, we removed it.", "We also utilised two other datasets in English to test the cross-linguality of our approaches. One of them is a movie corpus collected from the web. There are 5,331 positive reviews and 5,331 negative reviews in this corpus. The other is a Twitter dataset, which has nearly 1.6 million tweets annotated through a distant supervised method BIBREF14. These tweets have positive, neutral, and negative labels. We have selected 7,020 positive tweets and 7,020 negative tweets randomly to generate a balanced dataset."]}
science.qasper_abstractive_qa.180.1
[ { "content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nGenerating Word and Document Embeddings for Sentiment Analysis\n\nSentiments of words differ from one corpus to another. Inducing general sentiment lexicons for languages and using them cannot, in general, produce meaningful results for different domains. In this paper, we combine contextual and supervised information with the general semantic representations of words occurring in the dictionary. Contexts of words help us capture the domain-specific information and supervised scores of words are indicative of the polarities of those words. When we combine supervised features of words with the features extracted from their dictionary definitions, we observe an increase in the success rates. We try out the combinations of contextual, supervised, and dictionary-based approaches, and generate original vectors. We also combine the word2vec approach with hand-crafted features. We induce domain-specific sentimental vectors for two corpora, which are the movie domain and the Twitter datasets in Turkish. When we thereafter generate document vectors and employ the support vector machines method utilising those vectors, our approaches perform better than the baseline studies for Turkish with a significant margin. We evaluated our models on two English corpora as well and these also outperformed the word2vec approach. It shows that our approaches are cross-lingual and cross-domain.\n\nMethodology\nWe generate several word vectors, which capture the sentimental, lexical, and contextual characteristics of words. In addition to these mostly original vectors, we also create word2vec embeddings to represent the corpus words by training the embedding model on these datasets. After generating these, we combine them with hand-crafted features to create document vectors and perform classification, as will be explained in Section 3.5.\n\nMethodology ::: Corpus-based Approach\nContextual information is informative in the sense that, in general, similar words tend to appear in the same contexts. For example, the word smart is more likely to cooccur with the word hardworking than with the word lazy. This similarity can be defined semantically and sentimentally. In the corpus-based approach, we capture both of these characteristics and generate word embeddings specific to a domain.\nFirstly, we construct a matrix whose entries correspond to the number of cooccurrences of the row and column words in sliding windows. Diagonal entries are assigned the number of sliding windows that the corresponding row word appears in the whole corpus. We then normalise each row by dividing entries in the row by the maximum score in it.\nSecondly, we perform the principal component analysis (PCA) method to reduce the dimensionality. It captures latent meanings and takes into account high-order cooccurrence removing noise. The attribute (column) number of the matrix is reduced to 200. We then compute cosine similarity between each row pair $w_i$ and $w_j$ as in (DISPLAY_FORM3) to find out how similar two word vectors (rows) are.\nThirdly, all the values in the matrix are subtracted from 1 to create a dissimilarity matrix. Then, we feed the matrix as input into the fuzzy c-means clustering algorithm. We chose the number of clusters as 200, as it is considered a standard for word embeddings in the literature. 
After clustering, the dimension i for a corresponding word indicates the degree to which this word belongs to cluster i. The intuition behind this idea is that if two words are similar in the VSM, they are more likely to belong to the same clusters with akin probabilities. In the end, each word in the corpus is represented by a 200-dimensional vector.\nIn addition to this method, we also perform singular value decomposition (SVD) on the cooccurrence matrices, where we compute the matrix $M^{PPMI} = U\\Sigma V^{T}$. Positive pointwise mutual information (PPMI) scores between words are calculated and the truncated singular value decomposition is computed. We take into account the U matrix only for each word. We have chosen the singular value number as 200. That is, each word in the corpus is represented by a 200-dimensional vector as follows.\n\nMethodology ::: Dictionary-based Approach\nIn Turkish, there do not exist well-established sentiment lexicons as in English. In this approach, we made use of the TDK (Türk Dil Kurumu - “Turkish Language Institution”) dictionary to obtain word polarities. Although it is not a sentiment lexicon, combining it with domain-specific polarity scores obtained from the corpus led us to have state-of-the-art results.\nWe first construct a matrix whose row entries are corpus words and column entries are the words in their dictionary definitions. We followed the boolean approach. For instance, for the word cat, the column words occurring in its dictionary definition are given a score of 1. Those column words not appearing in the definition of cat are assigned a score of 0 for that corresponding row entry.\nWhen we performed clustering on this matrix, we observed that those words having similar meanings are, in general, assigned to the same clusters. However, this similarity fails in capturing the sentimental characteristics. For instance, the words happy and unhappy are assigned to the same cluster, since they have the same words, such as feeling, in their dictionary definitions. However, they are of opposite polarities and should be discerned from each other.\nTherefore, we utilise a metric to move such words away from each other in the VSM, even though they have common words in their dictionary definitions. We multiply each value in a row with the corresponding row word's raw supervised score, thereby having more meaningful clusters. Using the training data only, the supervised polarity score per word is calculated as in (DISPLAY_FORM4).\nHere, $ w_{t}$ denotes the sentiment score of word $t$, $N_{t}$ is the number of documents (reviews or tweets) in which $t$ occurs in the dataset of positive polarity, $N$ is the number of all the words in the corpus of positive polarity. $N^{\\prime }$ denotes the corpus of negative polarity. $N^{\\prime }_{t}$ and $N^{\\prime }$ denote similar values for the negative polarity corpus. We perform normalisation to prevent the imbalance problem and add a small number to both numerator and denominator for smoothing.\nAs an alternative to multiplying with the supervised polarity scores, we also separately multiplied all the row scores with only +1 if the row word is a positive word, and with -1 if it is a negative word. We have observed it boosts the performance more compared to using raw scores.\nThe effect of this multiplication is exemplified in Figure FIGREF7, showing the positions of word vectors in the VSM. Those “x\" words are sentimentally negative words, those “o\" words are sentimentally positive ones. 
On the top coordinate plane, the words of opposite polarities are found to be close to each other, since they have common words in their dictionary definitions. Only the information concerned with the dictionary definitions are used there, discarding the polarity scores. However, when we utilise the supervised score (+1 or -1), words of opposite polarities (e.g. “happy\" and “unhappy\") get far away from each other as they are translated across coordinate regions. Positive words now appear in quadrant 1, whereas negative words appear in quadrant 3. Thus, in the VSM, words that are sentimentally similar to each other could be clustered more accurately. Besides clustering, we also employed the SVD method to perform dimensionality reduction on the unsupervised dictionary algorithm and used the newly generated matrix by combining it with other subapproaches. The number of dimensions is chosen as 200 again according to the $U$ matrix. The details are given in Section 3.4. When using and evaluating this subapproach on the English corpora, we used the SentiWordNet lexicon BIBREF13. We have achieved better results for the dictionary-based algorithm when we employed the SVD reduction method compared to the use of clustering.\n\nMethodology ::: Supervised Contextual 4-scores\nOur last component is a simple metric that uses four supervised scores for each word in the corpus. We extract these scores as follows. For a target word in the corpus, we scan through all of its contexts. In addition to the target word's polarity score (the self score), out of all the polarity scores of words occurring in the same contexts as the target word, minimum, maximum, and average scores are taken into consideration. The word polarity scores are computed using (DISPLAY_FORM4). Here, we obtain those scores from the training data.\nThe intuition behind this method is that those four scores are more indicative of a word's polarity rather than only one (the self score). This approach is fully supervised unlike the previous two approaches.\n\nMethodology ::: Combination of the Word Embeddings\nIn addition to using the three approaches independently, we also combined all the matrices generated in the previous approaches. That is, we concatenate the reduced forms (SVD - U) of corpus-based, dictionary-based, and the whole of 4-score vectors of each word, horizontally. Accordingly, each corpus word is represented by a 404-dimensional vector, since corpus-based and dictionary-based vector components are each composed of 200 dimensions, whereas the 4-score vector component is formed by four values.\nThe main intuition behind the ensemble method is that some approaches compensate for what the others may lack. For example, the corpus-based approach captures the domain-specific, semantic, and syntactic characteristics. On the other hand, the 4-scores method captures supervised features, and the dictionary-based approach is helpful in capturing the general semantic characteristics. That is, combining those three approaches makes word vectors more representative.\n\nMethodology ::: Generating Document Vectors\nAfter creating several embeddings as mentioned above, we create document (review or tweet) vectors. For each document, we sum all the vectors of words occurring in that document and take their average. In addition to it, we extract three hand-crafted polarity scores, which are minimum, mean, and maximum polarity scores, from each review. These polarity scores of words are computed as in (DISPLAY_FORM4). 
For example, if a review consists of five words, it would have five polarity scores and we utilise only three of these sentiment scores as mentioned. Lastly, we concatenate these three scores to the averaged word vector per review.\nThat is, each review is represented by the average word vector of its constituent word embeddings and three supervised scores. We then feed these inputs into the SVM approach. The flowchart of our framework is given in Figure FIGREF11. When combining the unsupervised features, which are word vectors created on a word basis, with supervised three scores extracted on a review basis, we have better state-of-the-art results.\n\nDatasets\nWe utilised two datasets for both Turkish and English to evaluate our methods.\nFor Turkish, as the first dataset, we utilised the movie reviews which are collected from a popular website. The number of reviews in this movie corpus is 20,244 and the average number of words in reviews is 39. Each of these reviews has a star-rating score which is indicative of sentiment. These polarity scores are between the values 0.5 and 5, at intervals of 0.5. We consider a review to be negative it the score is equal to or lower than 2.5. On the other hand, if it is equal to or higher than 4, it is assumed to be positive. We have randomly selected 7,020 negative and 7,020 positive reviews and processed only them.\nThe second Turkish dataset is the Twitter corpus which is formed of tweets about Turkish mobile network operators. Those tweets are mostly much noisier and shorter compared to the reviews in the movie corpus. In total, there are 1,716 tweets. 973 of them are negative and 743 of them are positive. These tweets are manually annotated by two humans, where the labels are either positive or negative. We measured the Cohen's Kappa inter-annotator agreement score to be 0.82. If there was a disagreement on the polarity of a tweet, we removed it.\nWe also utilised two other datasets in English to test the cross-linguality of our approaches. One of them is a movie corpus collected from the web. There are 5,331 positive reviews and 5,331 negative reviews in this corpus. The other is a Twitter dataset, which has nearly 1.6 million tweets annotated through a distant supervised method BIBREF14. These tweets have positive, neutral, and negative labels. We have selected 7,020 positive tweets and 7,020 negative tweets randomly to generate a balanced dataset.\n\nExperiments ::: Hyperparameters\nWe used the LibSVM utility of the WEKA tool. We chose the linear kernel option to classify the reviews. We trained word2vec embeddings on all the four corpora using the Gensim library BIBREF18 with the skip-gram method. The dimension size of these embeddings are set at 200. As mentioned, other embeddings, which are generated utilising the clustering and the SVD approach, are also of size 200. For c-mean clustering, we set the maximum number of iterations at 25, unless it converges.\n\nConclusion\nWe have demonstrated that using word vectors that capture only semantic and syntactic characteristics may be improved by taking into account their sentimental aspects as well. Our approaches are cross-lingual and cross-domain. They can be applied to other domains and other languages than Turkish and English with minor changes.\nOur study is one of the few ones that perform sentiment analysis in Turkish and leverages sentimental characteristics of words in generating word vectors and outperforms all the others. 
Any of the approaches we propose can be used independently of the others. Our approaches without using sentiment labels can be applied to other classification tasks, such as topic classification and concept mining.\nThe experiments show that even unsupervised approaches, as in the corpus-based approach, can outperform supervised approaches in classification tasks. Combining some approaches, which can compensate for what others lack, can help us build better vectors. Our word vectors are created by conventional machine learning algorithms; however, they, as in the corpus-based model, produce state-of-the-art results. Although we preferred to use a classical machine learning algorithm, which is SVM, over a neural network classifier to predict the labels of reviews, we achieved accuracies of over 90 per cent for the Turkish movie corpus and about 88 per cent for the English Twitter dataset.\nWe performed only binary sentiment classification in this study as most of the studies in the literature do. We will extend our system in future by using neutral reviews as well. We also plan to employ Turkish WordNet to enhance the generalisability of our embeddings as another future work.\n\nAcknowledgments\nThis work was supported by Boğaziçi University Research Fund Grant Number 6980D, and by Turkish Ministry of Development under the TAM Project number DPT2007K12-0610. Cem Rıfkı Aydın is supported by TÜBİTAK BIDEB 2211E.\n\nFig. 2. The flowchart of our system.\n\nTable 1. Accuracies for different feature sets fed as input into the SVM classifier in predicting the labels of reviews. The word2vec algorithm is the baseline method.\n\nQuery: What details are given about the Twitter dataset?\n\nJSON Response:", "role": "user" }, { "content": "{\"answer\": \"one of the Twitter datasets is about Turkish mobile network operators, there are positive, neutral and negative labels and provide the total amount plus the distribution of labels\", \"evidence\": [\"The second Turkish dataset is the Twitter corpus which is formed of tweets about Turkish mobile network operators. Those tweets are mostly much noisier and shorter compared to the reviews in the movie corpus. In total, there are 1,716 tweets. 973 of them are negative and 743 of them are positive. These tweets are manually annotated by two humans, where the labels are either positive or negative. We measured the Cohen\\u0027s Kappa inter-annotator agreement score to be 0.82. If there was a disagreement on the polarity of a tweet, we removed it.\", \"We also utilised two other datasets in English to test the cross-linguality of our approaches. One of them is a movie corpus collected from the web. There are 5,331 positive reviews and 5,331 negative reviews in this corpus. The other is a Twitter dataset, which has nearly 1.6 million tweets annotated through a distant supervised method BIBREF14. These tweets have positive, neutral, and negative labels. We have selected 7,020 positive tweets and 7,020 negative tweets randomly to generate a balanced dataset.\"]}", "role": "assistant" } ]
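The question-answering record above spells out how the review vectors are assembled: average the embeddings of a review's words, compute the per-word polarity scores, and append their minimum, mean, and maximum. Since the excerpt only points to its Equation (4) for the polarity score without reproducing it, the sketch below takes those scores as a precomputed dictionary; the function name, the zero default for unseen words, and the toy inputs are assumptions for illustration, not details from the paper.

import numpy as np

def document_vector(tokens, word_vecs, polarity):
    """Average the embeddings of the tokens and append the min, mean and max
    of their per-word polarity scores, as described for the review vectors."""
    known = [t for t in tokens if t in word_vecs]
    if not known:
        return None                                # nothing we can embed
    avg = np.mean([word_vecs[t] for t in known], axis=0)
    scores = [polarity.get(t, 0.0) for t in known]
    extra = np.array([min(scores), float(np.mean(scores)), max(scores)])
    return np.concatenate([avg, extra])            # in the combined setting above: 404 + 3 dims

# hypothetical toy inputs
word_vecs = {"great": np.ones(4), "movie": np.zeros(4)}
polarity = {"great": 0.9, "movie": 0.1}
print(document_vector(["great", "movie"], word_vecs, polarity))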
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: Anthrax spores remain deadly until affected areas are decontaminated. Research Abstract: CONTEXT Bioterrorist attacks involving letters and mail-handling systems in Washington, DC, resulted in Bacillus anthracis (anthrax) spore contamination in the Hart Senate Office Building and other facilities in the US Capitol's vicinity. OBJECTIVE To provide information about the nature and extent of indoor secondary aerosolization of B anthracis spores. DESIGN Stationary and personal air samples, surface dust, and swab samples were collected under semiquiescent (minimal activities) and then simulated active office conditions to estimate secondary aerosolization of B anthracis spores. Nominal size characteristics, airborne concentrations, and surface contamination of B anthracis particles (colony-forming units) were evaluated. RESULTS Viable B anthracis spores reaerosolized under semiquiescent conditions, with a marked increase in reaerosolization during simulated active office conditions. Increases were observed for B anthracis collected on open sheep blood agar plates (P<.001) and personal air monitors (P =.01) during active office conditions. More than 80% of the B anthracis particles collected on stationary monitors were within an alveolar respirable size range of 0.95 to 3.5 micro m. CONCLUSIONS Bacillus anthracis spores used in a recent terrorist incident reaerosolized under common office activities. These findings have important implications for appropriate respiratory protection, remediation, and reoccupancy of contaminated office environments.
{"verdict": "SUPPORT", "evidence": ["More than 80% of the B anthracis particles collected on stationary monitors were within an alveolar respirable size range of 0.95 to 3.5 micro m. CONCLUSIONS Bacillus anthracis spores used in a recent terrorist incident reaerosolized under common office activities."]}
science.scifact_entailment.88
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nAnthrax spores remain deadly until affected areas are decontaminated.\n\nResearch Abstract:\nCONTEXT Bioterrorist attacks involving letters and mail-handling systems in Washington, DC, resulted in Bacillus anthracis (anthrax) spore contamination in the Hart Senate Office Building and other facilities in the US Capitol's vicinity.\nOBJECTIVE To provide information about the nature and extent of indoor secondary aerosolization of B anthracis spores.\nDESIGN Stationary and personal air samples, surface dust, and swab samples were collected under semiquiescent (minimal activities) and then simulated active office conditions to estimate secondary aerosolization of B anthracis spores.\nNominal size characteristics, airborne concentrations, and surface contamination of B anthracis particles (colony-forming units) were evaluated.\nRESULTS Viable B anthracis spores reaerosolized under semiquiescent conditions, with a marked increase in reaerosolization during simulated active office conditions.\nIncreases were observed for B anthracis collected on open sheep blood agar plates (P<.001) and personal air monitors (P =.01) during active office conditions.\nMore than 80% of the B anthracis particles collected on stationary monitors were within an alveolar respirable size range of 0.95 to 3.5 micro m. CONCLUSIONS Bacillus anthracis spores used in a recent terrorist incident reaerosolized under common office activities.\nThese findings have important implications for appropriate respiratory protection, remediation, and reoccupancy of contaminated office environments.", "role": "user" }, { "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"More than 80% of the B anthracis particles collected on stationary monitors were within an alveolar respirable size range of 0.95 to 3.5 micro m. CONCLUSIONS Bacillus anthracis spores used in a recent terrorist incident reaerosolized under common office activities.\"]}", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT: Background: The procognitive actions of the nicotinic acetylcholine receptor (nAChR) agonist nicotine are believed, in part, to motivate the excessive cigarette smoking in schizophrenia, a disorder associated with deficits in multiple cognitive domains, including low-level auditory sensory processes and higher-order attention-dependent operations. Objectives: As N-methyl-d-aspartate receptor (NMDAR) hypofunction has been shown to contribute to these cognitive impairments, the primary aims of this healthy volunteer study were to: (a) to shed light on the separate and interactive roles of nAChR and NMDAR systems in the modulation of auditory sensory memory (and sustained attention), as indexed by the auditory event-related brain potential – mismatch negativity (MMN), and (b) to examine how these effects are moderated by a predisposition to auditory hallucinations/delusions (HD). Methods: In a randomized, double-blind, placebo-controlled design involving a low intravenous dose of ketamine (0.04 mg/kg) and a 4 mg dose of nicotine gum, MMN, and performance on a rapid visual information processing (RVIP) task of sustained attention were examined in 24 healthy controls psychometrically stratified as being lower (L-HD, n = 12) or higher (H-HD) for HD propensity. Results: Ketamine significantly slowed MMN, and reduced MMN in H-HD, with amplitude attenuation being blocked by the co-administration of nicotine. Nicotine significantly enhanced response speed [reaction time (RT)] and accuracy (increased % hits and d′ and reduced false alarms) on the RVIP, with improved performance accuracy being prevented when nicotine was administered with ketamine. Both % hits and d′, as well as RT were poorer in H-HD (vs. L-HD) and while hit rate and d′ was increased by nicotine in H-HD, RT was slowed by ketamine in L-HD. Conclusions: Nicotine alleviated ketamine-induced sensory memory impairment and improved attention, particularly in individuals prone to HD. Extracted Data:
[["(KI) low intravenous dose of ketamine (0.04 mg/kg)", "(NG) 4 mg dose of nicotine gum", "hit rate in high - hallucinations/delusions (H-HD)", "significantly decreased", "hit rate and d\u2032 was increased by nicotine in H-HD"], ["(KI) low intravenous dose of ketamine (0.04 mg/kg)", "(NG) 4 mg dose of nicotine gum", "hit rate in high - hallucinations/delusions (H-HD)", "significantly decreased", "Ketamine significantly slowed MMN, and reduced MMN in H-HD, with amplitude attenuation being blocked by the co-administration of nicotine."], ["(KI) low intravenous dose of ketamine (0.04 mg/kg)", "(NG) 4 mg dose of nicotine gum", "auditory event-related brain potential \u2013 mismatch negativity (MMN) in high - hallucinations/delusions (H-HD)", "significantly decreased", "Ketamine significantly slowed MMN, and reduced MMN in H-HD"], ["(KI) low intravenous dose of ketamine (0.04 mg/kg)", "(NG) 4 mg dose of nicotine gum", "d\u2032 (weight hits and false alarms into a single measure) in high - hallucinations/delusions (H-HD)", "significantly decreased", "hit rate and d\u2032 was increased by nicotine in H-HD"]]
science.evidence_inference.176
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT: Background: The procognitive actions of the nicotinic acetylcholine receptor (nAChR) agonist nicotine are believed, in part, to motivate the excessive cigarette smoking in schizophrenia, a disorder associated with deficits in multiple cognitive domains, including low-level auditory sensory processes and higher-order attention-dependent operations. Objectives: As N-methyl-d-aspartate receptor (NMDAR) hypofunction has been shown to contribute to these cognitive impairments, the primary aims of this healthy volunteer study were to: (a) to shed light on the separate and interactive roles of nAChR and NMDAR systems in the modulation of auditory sensory memory (and sustained attention), as indexed by the auditory event-related brain potential – mismatch negativity (MMN), and (b) to examine how these effects are moderated by a predisposition to auditory hallucinations/delusions (HD). Methods: In a randomized, double-blind, placebo-controlled design involving a low intravenous dose of ketamine (0.04 mg/kg) and a 4 mg dose of nicotine gum, MMN, and performance on a rapid visual information processing (RVIP) task of sustained attention were examined in 24 healthy controls psychometrically stratified as being lower (L-HD, n = 12) or higher (H-HD) for HD propensity. Results: Ketamine significantly slowed MMN, and reduced MMN in H-HD, with amplitude attenuation being blocked by the co-administration of nicotine. Nicotine significantly enhanced response speed [reaction time (RT)] and accuracy (increased % hits and d′ and reduced false alarms) on the RVIP, with improved performance accuracy being prevented when nicotine was administered with ketamine. Both % hits and d′, as well as RT were poorer in H-HD (vs. L-HD) and while hit rate and d′ was increased by nicotine in H-HD, RT was slowed by ketamine in L-HD. 
Conclusions: Nicotine alleviated ketamine-induced sensory memory impairment and improved attention, particularly in individuals prone to HD.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"(KI) low intravenous dose of ketamine (0.04 mg/kg)\", \"(NG) 4 mg dose of nicotine gum\", \"hit rate in high - hallucinations/delusions (H-HD)\", \"significantly decreased\", \"hit rate and d\\u2032 was increased by nicotine in H-HD\"], [\"(KI) low intravenous dose of ketamine (0.04 mg/kg)\", \"(NG) 4 mg dose of nicotine gum\", \"hit rate in high - hallucinations/delusions (H-HD)\", \"significantly decreased\", \"Ketamine significantly slowed MMN, and reduced MMN in H-HD, with amplitude attenuation being blocked by the co-administration of nicotine.\"], [\"(KI) low intravenous dose of ketamine (0.04 mg/kg)\", \"(NG) 4 mg dose of nicotine gum\", \"auditory event-related brain potential \\u2013 mismatch negativity (MMN) in high - hallucinations/delusions (H-HD)\", \"significantly decreased\", \"Ketamine significantly slowed MMN, and reduced MMN in H-HD\"], [\"(KI) low intravenous dose of ketamine (0.04 mg/kg)\", \"(NG) 4 mg dose of nicotine gum\", \"d\\u2032 (weight hits and false alarms into a single measure) in high - hallucinations/delusions (H-HD)\", \"significantly decreased\", \"hit rate and d\\u2032 was increased by nicotine in H-HD\"]]", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: Near-Optimal Offline Reinforcement Learning via Double Variance Reduction Main Content: 2 ) episodes of offline data in the finite-horizon stationary transition setting , where H is the horizon length and dm is the minimal marginal stateaction distribution induced by the behavior policy . This improves over the best known upper bound by a factor of H . Moreover , we establish an informationtheoretic lower bound of Ω ( H2/dm 2 ) which certifies that OPDVR is optimal up to logarithmic factors . Lastly , we show that OPDVR also achieves rate-optimal sample complexity under alternative settings such as the finite-horizon MDPs with non-stationary transitions and the infinite horizon MDPs with discounted rewards . 1 Introduction . Offline reinforcement learning ( offline RL ) aims at learning the near-optimal policy by using a static offline dataset that is collected by a certain behavior policy µ [ Lange et al. , 2012 ] . As offline RL agent has no access to interact with the environment , it is more widely applicable to problems where online interaction is infeasible , e.g . when trials-and-errors are expensive ( robotics , education ) , risky ( autonomous driving ) or even unethical ( healthcare ) [ see , e.g. , a recent survey Levine et al. , 2020 ] . Despite its practical significance , a precise theoretical understanding of offline RL has been lacking . Previous sample complexity bounds for RL has primarily focused on the online setting [ Azar et al. , 2017 , Jin et al. , 2018 , Zanette and Brunskill , 2019 , Simchowitz and Jamieson , 2019 , Efroni et al. , 2019 , Cai et al. , 2019 ] or the generative model ( simulator ) setting [ Azar et al. , 2013 , Sidford et al. , 2018a , b , Yang and Wang , 2019 , Agarwal et al. , 2020 , Wainwright , 2019 , Lattimore and Szepesvari , 2019 ] , both of which assuming interactive access to the environment and not applicable to offline RL . On the other hand , the sample complexity of offline RL remains unsettled even for environments with finitely many state and actions , a.k.a , the tabular MDPs ( Markov Decision Processes ) . One major line of work is concerned with the off-policy evaluation ( OPE ) problem [ Li et al. , 2015 , Jiang and Li , 2016 , Liu et al. , 2018 , Kallus and Uehara , 2019a , b , Uehara and Jiang , 2019 , Xie et al. , 2019 , Yin and Wang , 2020 , Duan and Wang , 2020 ] . These works provide sample complexity bounds for evaluating the performance of a fixed policy , and do not imply guarantees for policy optimization . Another line of work studies the sample complexity of offline policy optimization in conjunction with function 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . approximation [ Chen and Jiang , 2019 , Xie and Jiang , 2020b , a , Jin et al. , 2020 ] . These results apply to offline RL with general function classes , but when specialized to the tabular setting , they give rather loose sample complexity bounds with suboptimal dependencies on various parameters 1 . The recent work of Yin et al . [ 2021 ] showed that the optimal sample complexity for finding an -optimal policy in offline RL is Õ ( H3/dm 2 ) in the finite-horizon non-stationarysetting ( with matching upper and lower bounds ) , where H is the horizon length and dm is a constant related to the data coverage of the behavior policy in the given MDP . 
However , the optimal sample complexity in alternative settings such as stationary transition or infinite-horizon settings remains unknown . Further , the Õ ( H3/dm 2 ) sample complexity is achieved by an off-policy evaluation + uniform convergence type algorithm ; other more practical algorithms including ( stochastic ) optimal planning algorithms such as Q-Learning are not well understood in offline RL . Our Contributions In this paper , we propose an algorithm OPDVR ( Off-Policy Doubled Variance Reduction ) for offline reinforcement learning based on an extension of the variance reduction technique initiated in [ Sidford et al. , 2018a , Yang and Wang , 2019 ] . OPDVR performs stochastic ( minibatch style ) value iterations using the available offline data , and can be seen as a version of stochastic optimal planning that interpolates value iteration and Q-learning . Our main contributions are summarized as follows . • We show that OPDVR finds an -optimal policy with high probability using Õ ( H2/dm 2 ) episodes of offline data ( Section 4.1 ) . This improves upon the best known sample complexity by an H factor and to the best of our knowledge is the first that achieves an O ( H2 ) horizon dependence , thus separating the stationary case with the non-stationary case for offline RL . • We establish a sample ( episode ) complexity lower bound Ω ( H2/dm 2 ) for offline RL in the finite-horizon stationary setting ( Theorem 4.2 ) , showing that the sample complexity of OPDVR is optimal up to logarithmic factors . • In the finite-horizon non-stationary setting , and infinite horizon γ-discounted setting , we show that OPDVR achieves Õ ( H3/dm 2 ) sample ( episode ) complexity ( Section 3 ) and Õ ( ( 1−γ ) −3/dm 2 ) sample complexity ( Section 4.2 ) respectively . They are both optimal up to logarithmic factors and our infinite-horizon result improves over the best known results , e.g. , those derived for the fitted Q-iteration style algorithms [ Xie and Jiang , 2020b ] . • On the technical end , our algorithm presents a sharp analysis of offline RL with stationary transitions , and , importantly , the use of the doubling technique to resolve the initialization dependence defect which fails to make the original variance reduction algorithm of [ Sidford et al. , 2018a ] to be optimal , see Appendix F.4 . Running [ Sidford et al. , 2018a ] may not yield the desired accuracy as they stated and our result is robust in preserving the optimality . Related work . There is a large and growing body of work on the theory of offline RL and RL in general . We could not hope to provide a comprehensive survey , thus will instead highlight the few 1See Table 1 for a clear comparison . prior work that we depend upon on the technical level . The variance reduction techniques that we use in this paper builds upon the work of [ Sidford et al. , 2018a ] in the generative model setting , though it is nontrivial in adapting their techniques to the offline setting ; and our two-stage variance reduction appears essential for obtaining optimal rate for > 1 ( see Section 5 and Appendix F.4 for more detailed discussions ) . We also used a fictitious estimator technique that originates from the OPE literature [ Xie et al. , 2019 , Yin and Wang , 2020 ] , but extended it to the stationary-transition case , and to the policy optimization problem . As we mentioned earlier , the optimal sample complexity in offline RL in the tabular MDPs with stationary transitions was not settled . The result of [ Yin et al. 
, 2021 ] is optimal in the non-stationary case , but is suboptimal by a factor of H in the stationary case . Our lower bound is a variant of the construction of [ Yin et al. , 2021 ] that applies to the stationary case . Other existing work on offline RL has even weaker parameters ( sometimes due to their setting being more general , see details in Table 1 ) . We defer more detailed discussion related to the OPE literature and online RL / generative model literature to Appendix A due to space constraint . 2 Preliminaries . We consider reinforcement learning problems modeled by finite Markov Decision Processes ( MDPs ) ( we focus on the finite-horizon episodic setting , and defer the infinite-horizon discounted setting to Section 4.2 . ) An MDP is denoted by a tuple M = ( S , A , r , T , d1 , H ) , where S and A are the state and action spaces with finite cardinality |S| = S and |A| = A. Pt : S × A × S → [ 0 , 1 ] is the transition kernel with Pt ( s′|s , a ) be the probability of entering state s′ after taking action a at state s. We consider both the stationary and non-stationary transition setting : The stationary transition setting assumes Pt ≡ P is identical for all t ∈ [ H ] , and the non-stationary transition setting allows Pt to be different for different t. rt : S × A → [ 0 , 1 ] is the reward function which we assume to be deterministic2 . d1 is the initial state distribution , and H is the time horizon . A ( non-stationary ) policy π : S → PHA assigns to each state st ∈ S a distribution over actions at each time t. We use dπt ( s , a ) or d π t ( s ) to denote the marginal state-action/state distribution induced by policy π at time t , i.e . dπt ( s ) : = Pπ ( st = s ) and dπt ( s , a ) : = Pπ ( st = s , at = a ) . Q-value and Bellman operator . For any policy π and any fixed time t , the value function V πt ( · ) ∈ RS and Q-value function Qπt ( · , · ) ∈ RS×A , ∀s , a is defined as : V πt ( s ) = E [ ∑H i=t ri ∣∣∣st = s ] , Qπt ( s , a ) = E [ ∑Hi=t ri∣∣∣st , at = s , a ] . For the ease of exposition , we always enumerate Qπ as a column vector and similarly for Pt ( ·|s , a ) . Moreover , for any vector Q ∈ RS×A , the induced value vector and policy is defined in the greedy way : ∀st ∈ S , VQ ( st ) = maxat∈AQ ( st , at ) , πQ ( st ) = argmaxat∈AQ ( st , at ) . Given an MDP , for any vector V ∈ R S and any deterministic policy π , ∀t ∈ [ H ] the Bellman operator T πt : RS → RS is defined as : [ T πt ( V ) ] ( s ) : = r ( s , πt ( s ) ) + P > t ( ·|s , πt ( s ) ) V , and the corresponding Bellman optimality operator Tt : RS → RS , [ Tt ( V ) ] ( s ) : = maxa∈A [ r ( s , a ) + P > t ( ·|s , a ) V ] . Lastly , for a given value function Vt , we define backup function zt ( st , at ) : = P > t ( ·|st , at ) Vt+1 and the one-step variance as σVt+1 ( st , at ) : = Varst+1 [ Vt+1 ( st+1 ) |st , at ] . Offline learning problem . In this paper we investigate the offline learning problem , where we do not have interactive access to the MDP , and can only observe a static dataset D = { ( s ( i ) t , a ( i ) t , r ( i ) t , s ( i ) t+1 ) } t∈ [ H ] i∈ [ n ] .We assume that D is obtained by executing a pre-specified behavior policy µ ( also known as the logging policy ) for n episodes and collecting the trajectories τ ( i ) = ( s ( i ) 1 , a ( i ) 1 , r ( i ) 1 , . . . 
, s ( i ) H , a ( i ) H , r ( i ) H , s ( i ) H+1 ) , where each episode is rendered in the form : s ( i ) 1 ∼ d1 , a ( i ) t ∼ µt ( ·|s ( i ) t ) , r ( i ) t = r ( s ( i ) t , a ( i ) t ) , and s ( i ) t+1 ∼ Pt ( ·|s ( i ) t , a ( i ) t ) . Given the dataset D , our goal is to find an -optimal policy πout , in the sense that ||V π ? 1 − V πout 1 ||∞ < . Assumption on data coverage Due to the curse of distributional shift , efficient offline RL is only possible under certain data coverage properties for the behavior policy µ . Throughout this paper we assume the following : Assumption 2.1 ( Weak coverage ) . The behavior policy µ satisfies the following : There exists some optimal policy π ? such that dµt′ ( st′ , at′ ) > 0 if there exists t < t ′ such that dπ ? t : t′ ( st′ , at′ |st , at ) > 0 , where dπ ? t : t′ ( st′ , at′ |st , at ) is the conditional multi-step transition probability from step t to t′ . 2This is commonly assumed in the RL literature . The randomness in the reward will only cause a lower order error ( than the randomness in the transition ) for learning . Intuitively , Assumption 2.1 requires µ to “ cover ” certain optimal policy π ? , in the sense that any st′ , at′ is reachable by µ if it is attainable from a previous state-action pair by π ? . It is similar to [ Liu et al. , 2019 , Assumption 1 ] . Note that this is weaker than the standard “ concentrability ” assumption [ Munos , 2003 , Le et al. , 2019 , Chen and Jiang , 2019 ] : Concentrability defines βµ : = supπ∈Π ||dπ ( st , at ) /dµ ( st , at ) ||∞ < ∞ ( cf . [ Le et al. , 2019 , Assumption 1 & Example 4.1 ] ) , which requires the sufficient exploration for tabular case3 since we optimize over all policies ( see Section F.2 for a discussion ) . In contrast , our assumption only requires µ to “ trace ” one single optimal policy.4 With Assumption 2.1 , we define dm : = min t , st , at { dµt ( st , at ) : d µ t ( st , at ) > 0 } , ( 1 ) which is decided by the behavior policy µ and is required by offline learning ( see Theorem G.2 in Yin et al . [ 2021 ] ) . In general , dm is unknown yet we assume dm is known for the moment and will utilize dm in our algorithms . Indeed , in Lemma 5.1 , we show that estimating dm up to a multiplicative factor only requires Õ ( 1/dm ) episodes of offline data ; replacing the exact dm with this estimator suffices for our purpose and , importantly , will not affect our downstream sample complexities .
The papers presents an algorithms called OPDVR (Off-Policy Double Variance Reduction) based on variance reduction, which is seemingly designed with offline RL in mind, even though the main attribute of offline RL (data constraint and lack of support) is fully missed. The paper contains several theoretical results; their core contribution (among others) is for the stationary case, where transition probabilities are not function of time. They basically present a sample complexity that grows with $H^2$, an improvement over previous results by a factor of $H$. The authors provide similar result for the infinite-horizon (discounted) case as well, and expand their findings for the non-stationary case too. Main weakness: While this paper is a nice theoretical contribution, discussions on the "practical" impact of the basic assumptions, and how they can be relaxed to become useful in real-world, are missing, [see the comments below].
science.mup_single_document_summarization.11141
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nNear-Optimal Offline Reinforcement Learning via Double Variance Reduction\n\nMain Content:\n2 ) episodes of offline data in the finite-horizon stationary transition setting , where H is the horizon length and dm is the minimal marginal stateaction distribution induced by the behavior policy . This improves over the best known upper bound by a factor of H . Moreover , we establish an informationtheoretic lower bound of Ω ( H2/dm 2 ) which certifies that OPDVR is optimal up to logarithmic factors . Lastly , we show that OPDVR also achieves rate-optimal sample complexity under alternative settings such as the finite-horizon MDPs with non-stationary transitions and the infinite horizon MDPs with discounted rewards . 1 Introduction . Offline reinforcement learning ( offline RL ) aims at learning the near-optimal policy by using a static offline dataset that is collected by a certain behavior policy µ [ Lange et al. , 2012 ] . As offline RL agent has no access to interact with the environment , it is more widely applicable to problems where online interaction is infeasible , e.g . when trials-and-errors are expensive ( robotics , education ) , risky ( autonomous driving ) or even unethical ( healthcare ) [ see , e.g. , a recent survey Levine et al. , 2020 ] . Despite its practical significance , a precise theoretical understanding of offline RL has been lacking . Previous sample complexity bounds for RL has primarily focused on the online setting [ Azar et al. , 2017 , Jin et al. , 2018 , Zanette and Brunskill , 2019 , Simchowitz and Jamieson , 2019 , Efroni et al. , 2019 , Cai et al. , 2019 ] or the generative model ( simulator ) setting [ Azar et al. , 2013 , Sidford et al. , 2018a , b , Yang and Wang , 2019 , Agarwal et al. , 2020 , Wainwright , 2019 , Lattimore and Szepesvari , 2019 ] , both of which assuming interactive access to the environment and not applicable to offline RL . On the other hand , the sample complexity of offline RL remains unsettled even for environments with finitely many state and actions , a.k.a , the tabular MDPs ( Markov Decision Processes ) . One major line of work is concerned with the off-policy evaluation ( OPE ) problem [ Li et al. , 2015 , Jiang and Li , 2016 , Liu et al. , 2018 , Kallus and Uehara , 2019a , b , Uehara and Jiang , 2019 , Xie et al. , 2019 , Yin and Wang , 2020 , Duan and Wang , 2020 ] . These works provide sample complexity bounds for evaluating the performance of a fixed policy , and do not imply guarantees for policy optimization . Another line of work studies the sample complexity of offline policy optimization in conjunction with function 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . approximation [ Chen and Jiang , 2019 , Xie and Jiang , 2020b , a , Jin et al. , 2020 ] . These results apply to offline RL with general function classes , but when specialized to the tabular setting , they give rather loose sample complexity bounds with suboptimal dependencies on various parameters 1 . The recent work of Yin et al . [ 2021 ] showed that the optimal sample complexity for finding an -optimal policy in offline RL is Õ ( H3/dm 2 ) in the finite-horizon non-stationarysetting ( with matching upper and lower bounds ) , where H is the horizon length and dm is a constant related to the data coverage of the behavior policy in the given MDP . 
However , the optimal sample complexity in alternative settings such as stationary transition or infinite-horizon settings remains unknown . Further , the Õ ( H3/dm 2 ) sample complexity is achieved by an off-policy evaluation + uniform convergence type algorithm ; other more practical algorithms including ( stochastic ) optimal planning algorithms such as Q-Learning are not well understood in offline RL . Our Contributions In this paper , we propose an algorithm OPDVR ( Off-Policy Doubled Variance Reduction ) for offline reinforcement learning based on an extension of the variance reduction technique initiated in [ Sidford et al. , 2018a , Yang and Wang , 2019 ] . OPDVR performs stochastic ( minibatch style ) value iterations using the available offline data , and can be seen as a version of stochastic optimal planning that interpolates value iteration and Q-learning . Our main contributions are summarized as follows . • We show that OPDVR finds an -optimal policy with high probability using Õ ( H2/dm 2 ) episodes of offline data ( Section 4.1 ) . This improves upon the best known sample complexity by an H factor and to the best of our knowledge is the first that achieves an O ( H2 ) horizon dependence , thus separating the stationary case with the non-stationary case for offline RL . • We establish a sample ( episode ) complexity lower bound Ω ( H2/dm 2 ) for offline RL in the finite-horizon stationary setting ( Theorem 4.2 ) , showing that the sample complexity of OPDVR is optimal up to logarithmic factors . • In the finite-horizon non-stationary setting , and infinite horizon γ-discounted setting , we show that OPDVR achieves Õ ( H3/dm 2 ) sample ( episode ) complexity ( Section 3 ) and Õ ( ( 1−γ ) −3/dm 2 ) sample complexity ( Section 4.2 ) respectively . They are both optimal up to logarithmic factors and our infinite-horizon result improves over the best known results , e.g. , those derived for the fitted Q-iteration style algorithms [ Xie and Jiang , 2020b ] . • On the technical end , our algorithm presents a sharp analysis of offline RL with stationary transitions , and , importantly , the use of the doubling technique to resolve the initialization dependence defect which fails to make the original variance reduction algorithm of [ Sidford et al. , 2018a ] to be optimal , see Appendix F.4 . Running [ Sidford et al. , 2018a ] may not yield the desired accuracy as they stated and our result is robust in preserving the optimality . Related work . There is a large and growing body of work on the theory of offline RL and RL in general . We could not hope to provide a comprehensive survey , thus will instead highlight the few 1See Table 1 for a clear comparison . prior work that we depend upon on the technical level . The variance reduction techniques that we use in this paper builds upon the work of [ Sidford et al. , 2018a ] in the generative model setting , though it is nontrivial in adapting their techniques to the offline setting ; and our two-stage variance reduction appears essential for obtaining optimal rate for > 1 ( see Section 5 and Appendix F.4 for more detailed discussions ) . We also used a fictitious estimator technique that originates from the OPE literature [ Xie et al. , 2019 , Yin and Wang , 2020 ] , but extended it to the stationary-transition case , and to the policy optimization problem . As we mentioned earlier , the optimal sample complexity in offline RL in the tabular MDPs with stationary transitions was not settled . The result of [ Yin et al. 
, 2021 ] is optimal in the non-stationary case , but is suboptimal by a factor of H in the stationary case . Our lower bound is a variant of the construction of [ Yin et al. , 2021 ] that applies to the stationary case . Other existing work on offline RL has even weaker parameters ( sometimes due to their setting being more general , see details in Table 1 ) . We defer more detailed discussion related to the OPE literature and online RL / generative model literature to Appendix A due to space constraint . 2 Preliminaries . We consider reinforcement learning problems modeled by finite Markov Decision Processes ( MDPs ) ( we focus on the finite-horizon episodic setting , and defer the infinite-horizon discounted setting to Section 4.2 . ) An MDP is denoted by a tuple M = ( S , A , r , T , d1 , H ) , where S and A are the state and action spaces with finite cardinality |S| = S and |A| = A. Pt : S × A × S → [ 0 , 1 ] is the transition kernel with Pt ( s′|s , a ) be the probability of entering state s′ after taking action a at state s. We consider both the stationary and non-stationary transition setting : The stationary transition setting assumes Pt ≡ P is identical for all t ∈ [ H ] , and the non-stationary transition setting allows Pt to be different for different t. rt : S × A → [ 0 , 1 ] is the reward function which we assume to be deterministic2 . d1 is the initial state distribution , and H is the time horizon . A ( non-stationary ) policy π : S → PHA assigns to each state st ∈ S a distribution over actions at each time t. We use dπt ( s , a ) or d π t ( s ) to denote the marginal state-action/state distribution induced by policy π at time t , i.e . dπt ( s ) : = Pπ ( st = s ) and dπt ( s , a ) : = Pπ ( st = s , at = a ) . Q-value and Bellman operator . For any policy π and any fixed time t , the value function V πt ( · ) ∈ RS and Q-value function Qπt ( · , · ) ∈ RS×A , ∀s , a is defined as : V πt ( s ) = E [ ∑H i=t ri ∣∣∣st = s ] , Qπt ( s , a ) = E [ ∑Hi=t ri∣∣∣st , at = s , a ] . For the ease of exposition , we always enumerate Qπ as a column vector and similarly for Pt ( ·|s , a ) . Moreover , for any vector Q ∈ RS×A , the induced value vector and policy is defined in the greedy way : ∀st ∈ S , VQ ( st ) = maxat∈AQ ( st , at ) , πQ ( st ) = argmaxat∈AQ ( st , at ) . Given an MDP , for any vector V ∈ R S and any deterministic policy π , ∀t ∈ [ H ] the Bellman operator T πt : RS → RS is defined as : [ T πt ( V ) ] ( s ) : = r ( s , πt ( s ) ) + P > t ( ·|s , πt ( s ) ) V , and the corresponding Bellman optimality operator Tt : RS → RS , [ Tt ( V ) ] ( s ) : = maxa∈A [ r ( s , a ) + P > t ( ·|s , a ) V ] . Lastly , for a given value function Vt , we define backup function zt ( st , at ) : = P > t ( ·|st , at ) Vt+1 and the one-step variance as σVt+1 ( st , at ) : = Varst+1 [ Vt+1 ( st+1 ) |st , at ] . Offline learning problem . In this paper we investigate the offline learning problem , where we do not have interactive access to the MDP , and can only observe a static dataset D = { ( s ( i ) t , a ( i ) t , r ( i ) t , s ( i ) t+1 ) } t∈ [ H ] i∈ [ n ] .We assume that D is obtained by executing a pre-specified behavior policy µ ( also known as the logging policy ) for n episodes and collecting the trajectories τ ( i ) = ( s ( i ) 1 , a ( i ) 1 , r ( i ) 1 , . . . 
, s ( i ) H , a ( i ) H , r ( i ) H , s ( i ) H+1 ) , where each episode is rendered in the form : s ( i ) 1 ∼ d1 , a ( i ) t ∼ µt ( ·|s ( i ) t ) , r ( i ) t = r ( s ( i ) t , a ( i ) t ) , and s ( i ) t+1 ∼ Pt ( ·|s ( i ) t , a ( i ) t ) . Given the dataset D , our goal is to find an -optimal policy πout , in the sense that ||V π ? 1 − V πout 1 ||∞ < . Assumption on data coverage Due to the curse of distributional shift , efficient offline RL is only possible under certain data coverage properties for the behavior policy µ . Throughout this paper we assume the following : Assumption 2.1 ( Weak coverage ) . The behavior policy µ satisfies the following : There exists some optimal policy π ? such that dµt′ ( st′ , at′ ) > 0 if there exists t < t ′ such that dπ ? t : t′ ( st′ , at′ |st , at ) > 0 , where dπ ? t : t′ ( st′ , at′ |st , at ) is the conditional multi-step transition probability from step t to t′ . 2This is commonly assumed in the RL literature . The randomness in the reward will only cause a lower order error ( than the randomness in the transition ) for learning . Intuitively , Assumption 2.1 requires µ to “ cover ” certain optimal policy π ? , in the sense that any st′ , at′ is reachable by µ if it is attainable from a previous state-action pair by π ? . It is similar to [ Liu et al. , 2019 , Assumption 1 ] . Note that this is weaker than the standard “ concentrability ” assumption [ Munos , 2003 , Le et al. , 2019 , Chen and Jiang , 2019 ] : Concentrability defines βµ : = supπ∈Π ||dπ ( st , at ) /dµ ( st , at ) ||∞ < ∞ ( cf . [ Le et al. , 2019 , Assumption 1 & Example 4.1 ] ) , which requires the sufficient exploration for tabular case3 since we optimize over all policies ( see Section F.2 for a discussion ) . In contrast , our assumption only requires µ to “ trace ” one single optimal policy.4 With Assumption 2.1 , we define dm : = min t , st , at { dµt ( st , at ) : d µ t ( st , at ) > 0 } , ( 1 ) which is decided by the behavior policy µ and is required by offline learning ( see Theorem G.2 in Yin et al . [ 2021 ] ) . In general , dm is unknown yet we assume dm is known for the moment and will utilize dm in our algorithms . Indeed , in Lemma 5.1 , we show that estimating dm up to a multiplicative factor only requires Õ ( 1/dm ) episodes of offline data ; replacing the exact dm with this estimator suffices for our purpose and , importantly , will not affect our downstream sample complexities .", "role": "user" }, { "content": "The papers presents an algorithms called OPDVR (Off-Policy Double Variance Reduction) based on variance reduction, which is seemingly designed with offline RL in mind, even though the main attribute of offline RL (data constraint and lack of support) is fully missed. The paper contains several theoretical results; their core contribution (among others) is for the stationary case, where transition probabilities are not function of time. They basically present a sample complexity that grows with $H^2$, an improvement over previous results by a factor of $H$. The authors provide similar result for the infinite-horizon (discounted) case as well, and expand their findings for the non-stationary case too. Main weakness: While this paper is a nice theoretical contribution, discussions on the \"practical\" impact of the basic assumptions, and how they can be relaxed to become useful in real-world, are missing, [see the comments below].", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT: To compare energy expenditure during and after active and handheld video game drumming compared to walking and sitting. Ten experienced, college-aged men performed four protocols (one per week): no-exercise seated control (CTRL), virtual drumming on a handheld gaming device (HANDHELD), active drumming on drum pads (DRUM), and walking on a treadmill at ~30% of VO2max (WALK). Protocols were performed after an overnight fast, and expired air was collected continuously during (30min) and after (30min) exercise. DRUM and HANDHELD song lists, day of the week, and time of day were identical for each participant. Significant differences (p < 0.05) among the average rates of energy expenditure (kcal·min−1) during activity included WALK > DRUM > HANDHELD. No significant differences in the rates of energy expenditure among groups during recovery were observed. Total energy expenditure was significantly greater (p < 0.05) during WALK (149.5 ± 30.6 kcal) compared to DRUM (118.7 ± 18.8 kcal) and HANDHELD (44.9±11.6 kcal), and greater during DRUM compared to HANDHELD. Total energy expenditure was not significantly different between HANDHELD (44.9 ± 11.6 kcal) and CTRL (38.2 ± 6.0 kcal). Active video game drumming at expert-level significantly increased energy expenditure compared to handheld, but it hardly met moderate-intensity activity standards, and energy expenditure was greatest during walking. Energy expenditure with handheld video game drumming was not different from no-exercise control. Thus, traditional aerobic exercise remains at the forefront for achieving the minimum amount and intensity of physical activity for health, individuals desiring to use video games for achieving weekly physical activity recommendations should choose games that require significant involvement of lower-body musculature, and time spent playing sedentary games should be a limited part of an active lifestyle. Extracted Data:
[["Walking", "Drumming on drum pads", "Total energy expenditure", "significantly increased", "Total energy expenditure was significantly greater (p \u003c 0.05) during WALK (149.5 \u00b1 30.6 kcal) compared to DRUM (118.7 \u00b1 18.8 kcal) and HANDHELD (44.9\u00b111.6 kcal), and greater during DRUM compared to HANDHELD."], ["Walking", "Virtual drumming", "Total energy expenditure", "significantly increased", "Total energy expenditure was significantly greater (p \u003c 0.05) during WALK (149.5 \u00b1 30.6 kcal) compared to DRUM (118.7 \u00b1 18.8 kcal) and HANDHELD (44.9\u00b111.6 kcal), and greater during DRUM compared to HANDHELD."], ["Drumming on drum pads", "Virtual drumming ", "Total energy expenditure", "significantly increased", "Total energy expenditure was significantly greater (p \u003c 0.05) during WALK (149.5 \u00b1 30.6 kcal) compared to DRUM (118.7 \u00b1 18.8 kcal) and HANDHELD (44.9\u00b111.6 kcal), and greater during DRUM compared to HANDHELD."], ["Virtual drumming ", "Control", "Total energy expenditure", "no significant difference", "Total energy expenditure was not significantly different between HANDHELD (44.9 \u00b1 11.6 kcal) and CTRL (38.2 \u00b1 6.0 kcal)."]]
science.evidence_inference.185
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT: To compare energy expenditure during and after active and handheld video game drumming compared to walking and sitting. Ten experienced, college-aged men performed four protocols (one per week): no-exercise seated control (CTRL), virtual drumming on a handheld gaming device (HANDHELD), active drumming on drum pads (DRUM), and walking on a treadmill at ~30% of VO2max (WALK). Protocols were performed after an overnight fast, and expired air was collected continuously during (30min) and after (30min) exercise. DRUM and HANDHELD song lists, day of the week, and time of day were identical for each participant. Significant differences (p < 0.05) among the average rates of energy expenditure (kcal·min−1) during activity included WALK > DRUM > HANDHELD. No significant differences in the rates of energy expenditure among groups during recovery were observed. Total energy expenditure was significantly greater (p < 0.05) during WALK (149.5 ± 30.6 kcal) compared to DRUM (118.7 ± 18.8 kcal) and HANDHELD (44.9±11.6 kcal), and greater during DRUM compared to HANDHELD. Total energy expenditure was not significantly different between HANDHELD (44.9 ± 11.6 kcal) and CTRL (38.2 ± 6.0 kcal). Active video game drumming at expert-level significantly increased energy expenditure compared to handheld, but it hardly met moderate-intensity activity standards, and energy expenditure was greatest during walking. Energy expenditure with handheld video game drumming was not different from no-exercise control. 
Thus, traditional aerobic exercise remains at the forefront for achieving the minimum amount and intensity of physical activity for health, individuals desiring to use video games for achieving weekly physical activity recommendations should choose games that require significant involvement of lower-body musculature, and time spent playing sedentary games should be a limited part of an active lifestyle.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"Walking\", \"Drumming on drum pads\", \"Total energy expenditure\", \"significantly increased\", \"Total energy expenditure was significantly greater (p \\u003c 0.05) during WALK (149.5 \\u00b1 30.6 kcal) compared to DRUM (118.7 \\u00b1 18.8 kcal) and HANDHELD (44.9\\u00b111.6 kcal), and greater during DRUM compared to HANDHELD.\"], [\"Walking\", \"Virtual drumming\", \"Total energy expenditure\", \"significantly increased\", \"Total energy expenditure was significantly greater (p \\u003c 0.05) during WALK (149.5 \\u00b1 30.6 kcal) compared to DRUM (118.7 \\u00b1 18.8 kcal) and HANDHELD (44.9\\u00b111.6 kcal), and greater during DRUM compared to HANDHELD.\"], [\"Drumming on drum pads\", \"Virtual drumming \", \"Total energy expenditure\", \"significantly increased\", \"Total energy expenditure was significantly greater (p \\u003c 0.05) during WALK (149.5 \\u00b1 30.6 kcal) compared to DRUM (118.7 \\u00b1 18.8 kcal) and HANDHELD (44.9\\u00b111.6 kcal), and greater during DRUM compared to HANDHELD.\"], [\"Virtual drumming \", \"Control\", \"Total energy expenditure\", \"no significant difference\", \"Total energy expenditure was not significantly different between HANDHELD (44.9 \\u00b1 11.6 kcal) and CTRL (38.2 \\u00b1 6.0 kcal).\"]]", "role": "assistant" } ]
science.biored_ner
Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]. Ensure the output is strictly the JSON object. Source Text: Association of DRD2 polymorphisms and chlorpromazine-induced extrapyramidal syndrome in Chinese schizophrenic patients. AIM: Extrapyramidal syndrome (EPS) is most commonly affected by typical antipsychotic drugs that have a high affinity with the D2 receptor. Recently, many research groups have reported on the positive relationship between the genetic variations in the DRD2 gene and the therapeutic response in schizophrenia patients as a result of the role of variations in the receptor in modulating receptor expression. In this study, we evaluate the role DRD2 plays in chlorpromazine-induced EPS in schizophrenic patients. METHODS: We identified seven SNP(single nucleotide polymorphism) (-141Cins>del, TaqIB, TaqID, Ser311Cys, rs6275, rs6277 and TaqIA) in the DRD2 gene in 146 schizophrenic inpatients (59 with EPS and 87 without EPS according to the Simpson-Angus Scale) treated with chlorpromazine after 8 weeks. The alleles of all loci were determined by PCR (polymerase chain reaction). RESULTS: Polymorphisms TaqID, Ser311Cys and rs6277 were not polymorphic in the population recruited in the present study. No statistical significance was found in the allele distribution of -141Cins>del, TaqIB, rs6275 and TaqIA or in the estimated haplotypes (constituted by TaqIB, rs6275 and TaqIA) in linkage disequilibrium between the two groups. CONCLUSION: Our results did not lend strong support to the view that the genetic variation of the DRD2 gene plays a major role in the individually variable adverse effect induced by chlorpromazine, at least in Chinese patients with schizophrenia. Our results confirmed a previous study on the relationship between DRD2 and EPS in Caucasians.
{"CellLine": [], "Chemical": ["chlorpromazine", "antipsychotic drugs"], "Disease": ["Extrapyramidal syndrome", "schizophrenic", "schizophrenia", "extrapyramidal syndrome", "EPS"], "Gene": ["DRD2", "D2 receptor"], "Species": ["inpatients", "patients"], "Variant": ["rs6277", "rs6275", "-141Cins\u003edel", "Ser311Cys"]}
science.biored_ner.5
[ { "content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nAssociation of DRD2 polymorphisms and chlorpromazine-induced extrapyramidal syndrome in Chinese schizophrenic patients. AIM: Extrapyramidal syndrome (EPS) is most commonly affected by typical antipsychotic drugs that have a high affinity with the D2 receptor. Recently, many research groups have reported on the positive relationship between the genetic variations in the DRD2 gene and the therapeutic response in schizophrenia patients as a result of the role of variations in the receptor in modulating receptor expression. In this study, we evaluate the role DRD2 plays in chlorpromazine-induced EPS in schizophrenic patients. METHODS: We identified seven SNP(single nucleotide polymorphism) (-141Cins>del, TaqIB, TaqID, Ser311Cys, rs6275, rs6277 and TaqIA) in the DRD2 gene in 146 schizophrenic inpatients (59 with EPS and 87 without EPS according to the Simpson-Angus Scale) treated with chlorpromazine after 8 weeks. The alleles of all loci were determined by PCR (polymerase chain reaction). RESULTS: Polymorphisms TaqID, Ser311Cys and rs6277 were not polymorphic in the population recruited in the present study. No statistical significance was found in the allele distribution of -141Cins>del, TaqIB, rs6275 and TaqIA or in the estimated haplotypes (constituted by TaqIB, rs6275 and TaqIA) in linkage disequilibrium between the two groups. CONCLUSION: Our results did not lend strong support to the view that the genetic variation of the DRD2 gene plays a major role in the individually variable adverse effect induced by chlorpromazine, at least in Chinese patients with schizophrenia. Our results confirmed a previous study on the relationship between DRD2 and EPS in Caucasians.", "role": "user" }, { "content": "{\"CellLine\": [], \"Chemical\": [\"chlorpromazine\", \"antipsychotic drugs\"], \"Disease\": [\"Extrapyramidal syndrome\", \"schizophrenic\", \"schizophrenia\", \"extrapyramidal syndrome\", \"EPS\"], \"Gene\": [\"DRD2\", \"D2 receptor\"], \"Species\": [\"inpatients\", \"patients\"], \"Variant\": [\"rs6277\", \"rs6275\", \"-141Cins\\u003edel\", \"Ser311Cys\"]}", "role": "assistant" } ]
science.scierc_ner
Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context. Input Abstract: This paper presents an algorithm for computing optical flow, shape, motion, lighting, and albedo from an image sequence of a rigidly-moving Lambertian object under distant illumination. The problem is formulated in a manner that subsumes structure from motion, multi-view stereo, and photo-metric stereo as special cases. The algorithm utilizes both spatial and temporal intensity variation as cues: the former constrains flow and the latter constrains surface orientation; combining both cues enables dense reconstruction of both textured and texture-less surfaces. The algorithm works by iteratively estimating affine camera parameters, illumination, shape, and albedo in an alternating fashion. Results are demonstrated on videos of hand-held objects moving in front of a fixed light and camera.
Output Entities: {"Generic": ["former", "algorithm", "latter"], "Material": ["image sequence", "rigidly-moving Lambertian object", "photo-metric stereo", "motion", "multi-view stereo", "videos of hand-held objects"], "Method": ["estimating affine camera parameters , illumination , shape , and albedo"], "Metric": [], "OtherScientificTerm": ["flow", "surface orientation", "distant illumination", "spatial and temporal intensity variation"], "Task": ["computing optical flow , shape , motion , lighting , and albedo", "dense reconstruction of both textured and texture-less surfaces"]}
science.scierc_ner.0
[ { "content": "Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.\n\nInput Abstract:\n\nThis paper presents an algorithm for computing optical flow, shape, motion, lighting, and albedo from an image sequence of a rigidly-moving Lambertian object under distant illumination. The problem is formulated in a manner that subsumes structure from motion, multi-view stereo, and photo-metric stereo as special cases. The algorithm utilizes both spatial and temporal intensity variation as cues: the former constrains flow and the latter constrains surface orientation; combining both cues enables dense reconstruction of both textured and texture-less surfaces. The algorithm works by iteratively estimating affine camera parameters, illumination, shape, and albedo in an alternating fashion. Results are demonstrated on videos of hand-held objects moving in front of a fixed light and camera.", "role": "user" }, { "content": "Output Entities:\n\n{\"Generic\": [\"former\", \"algorithm\", \"latter\"], \"Material\": [\"image sequence\", \"rigidly-moving Lambertian object\", \"photo-metric stereo\", \"motion\", \"multi-view stereo\", \"videos of hand-held objects\"], \"Method\": [\"estimating affine camera parameters , illumination , shape , and albedo\"], \"Metric\": [], \"OtherScientificTerm\": [\"flow\", \"surface orientation\", \"distant illumination\", \"spatial and temporal intensity variation\"], \"Task\": [\"computing optical flow , shape , motion , lighting , and albedo\", \"dense reconstruction of both textured and texture-less surfaces\"]}", "role": "assistant" } ]
science.qasper_abstractive_qa
Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines. Document Content: Message Passing Attention Networks for Document Understanding Graph neural networks have recently emerged as a very effective framework for processing graph-structured data. These models have achieved state-of-the-art performance in many tasks. Most graph neural networks can be described in terms of message passing, vertex update, and readout functions. In this paper, we represent documents as word co-occurrence networks and propose an application of the message passing framework to NLP, the Message Passing Attention network for Document understanding (MPAD). We also propose several hierarchical variants of MPAD. Experiments conducted on 10 standard text classification datasets show that our architectures are competitive with the state-of-the-art. Ablation studies reveal further insights about the impact of the different components on performance. Code is publicly available at: https://github.com/giannisnik/mpad . Message Passing Neural Networks BIBREF13 proposed a MP framework under which many of the recently introduced GNNs can be reformulated. MP consists in an aggregation phase followed by a combination phase BIBREF14. More precisely, let $G(V,E)$ be a graph, and let us consider $v \in V$. At time $t+1$, a message vector $\mathbf {m}_v^{t+1}$ is computed from the representations of the neighbors $\mathcal {N}(v)$ of $v$: The new representation $\mathbf {h}^{t+1}_v$ of $v$ is then computed by combining its current feature vector $\mathbf {h}^{t}_v$ with the message vector $\mathbf {m}_v^{t+1}$: Messages are passed for $T$ time steps. Each step is implemented by a different layer of the MP network. Hence, iterations correspond to network depth. The final feature vector $\mathbf {h}_v^T$ of $v$ is based on messages propagated from all the nodes in the subtree of height $T$ rooted at $v$. It captures both the topology of the neighborhood of $v$ and the distribution of the vertex representations in it. If a graph-level feature vector is needed, e.g., for classification or regression, a READOUT pooling function, that must be invariant to permutations, is applied: Next, we present the MP network we developed for document understanding. Message Passing Attention network for Document understanding (MPAD) ::: Readout After passing messages and performing updates for $T$ iterations, we obtain a matrix $\mathbf {H}^T \in \mathbb {R}^{n \times d}$ containing the final vertex representations. Let $\hat{G}$ be graph $G$ without the special document node, and matrix $\mathbf {\hat{H}}^T \in \mathbb {R}^{(n-1) \times d}$ be the corresponding representation matrix (i.e., $\mathbf {H}^T$ without the row of the document node). We use as our READOUT function the concatenation of self-attention applied to $\mathbf {\hat{H}}^T$ with the final document node representation. More precisely, we apply a global self-attention mechanism BIBREF22 to the rows of $\mathbf {\hat{H}}^T$. As shown in Eq. DISPLAY_FORM17, $\mathbf {\hat{H}}^T$ is first passed to a dense layer parameterized by matrix $\mathbf {W}_A^T \in \mathbb {R}^{d \times d}$. An alignment vector $\mathbf {a}$ is then derived by comparing, via dot products, the rows of the output of the dense layer $\mathbf {Y}^T \in \mathbb {R}^{(n-1) \times d}$ with a trainable vector $\mathbf {v}^T \in \mathbb {R}^d$ (initialized randomly) and normalizing with a softmax. 
The normalized alignment coefficients are finally used to compute the attentional vector $\mathbf {u}^T \in \mathbb {R}^d$ as a weighted sum of the final representations $\mathbf {\hat{H}}^T$. Note that we tried with multiple context vectors, i.e., with a matrix $\mathbf {V}^T$ instead of a vector $\mathbf {v}^T$, like in BIBREF22, but results were not convincing, even when adding a regularization term to the loss to favor diversity among the rows of $\mathbf {V}^T$. Master node skip connection. $\mathbf {h}_G^T \in \mathbb {R}^{2d}$ is obtained by concatenating $\mathbf {u}^T$ and the final master node representation. That is, the master node vector bypasses the attention mechanism. This is equivalent to a skip or shortcut connection BIBREF23. The reason behind this choice is that we expect the special document node to learn a high-level summary about the document, such as its size, vocabulary, etc. (more details are given in subsection SECREF30). Therefore, by making the master node bypass the attention layer, we directly inject global information about the document into its final representation. Multi-readout. BIBREF14, inspired by Jumping Knowledge Networks BIBREF12, recommend to not only use the final representations when performing readout, but also that of the earlier steps. Indeed, as one iterates, node features capture more and more global information. However, retaining more local, intermediary information might be useful too. Thus, instead of applying the readout function only to $t=T$, we apply it to all time steps and concatenate the results, finally obtaining $\mathbf {h}_G \in \mathbb {R}^{T \times 2d}$ : In effect, with this modification, we take into account features based on information aggregated from subtrees of different heights (from 1 to $T$), corresponding to local and global features. Message Passing Attention network for Document understanding (MPAD) ::: Hierarchical variants of MPAD Through the successive MP iterations, it could be argued that MPAD implicitly captures some soft notion of the hierarchical structure of documents (words $\rightarrow $ bigrams $\rightarrow $ compositions of bigrams, etc.). However, it might be beneficial to explicitly capture document hierarchy. Hierarchical architectures have brought significant improvements to many NLP tasks, such as language modeling and generation BIBREF24, BIBREF25, sentiment and topic classification BIBREF26, BIBREF27, and spoken language understanding BIBREF28, BIBREF29. Inspired by this line of research, we propose several hierarchical variants of MPAD, detailed in what follows. In all of them, we represent each sentence in the document as a word co-occurrence network, and obtain an embedding for it by applying MPAD as previously described. MPAD-sentence-att. Here, the sentence embeddings are simply combined through self-attention. MPAD-clique. In this variant, we build a complete graph where each node represents a sentence. We then feed that graph to MPAD, where the feature vectors of the nodes are initialized with the sentence embeddings previously obtained. MPAD-path. This variant is similar to the clique one, except that instead of a complete graph, we build a path according to the natural flow of the text. That is, two nodes are linked by a directed edge if the two sentences they represent follow each other in the document. 
Experiments ::: Baselines We evaluate MPAD against multiple state-of-the-art baseline models, including hierarchical ones, to enable fair comparison with the hierarchical MPAD variants. doc2vec BIBREF37. Doc2vec (or paragraph vector) is an extension of word2vec that learns vectors for documents in a fully unsupervised manner. Document embeddings are then fed to a logistic regression classifier. CNN BIBREF38. The convolutional neural network architecture, well-known in computer vision, is applied to text. There is one spatial dimension and the word embeddings are used as channels (depth dimensions). DAN BIBREF39. The Deep Averaging Network passes the unweighted average of the embeddings of the input words through multiple dense layers and a final softmax. Tree-LSTM BIBREF40 is a generalization of the standard LSTM architecture to constituency and dependency parse trees. DRNN BIBREF41. Recursive neural networks are stacked and applied to parse trees. LSTMN BIBREF42 is an extension of the LSTM model where the memory cell is replaced by a memory network which stores word representations. C-LSTM BIBREF43 combines convolutional and recurrent neural networks. The region embeddings provided by a CNN are fed to a LSTM. SPGK BIBREF44 also models documents as word co-occurrence networks. It computes a graph kernel that compares shortest paths extracted from the word co-occurrence networks and then uses a SVM to categorize documents. WMD BIBREF45 is an application of the well-known Earth Mover's Distance to text. A k-nearest neighbor classifier is used. S-WMD BIBREF46 is a supervised extension of the Word Mover's Distance. Semantic-CNN BIBREF47. Here, a CNN is applied to semantic units obtained by clustering words in the embedding space. LSTM-GRNN BIBREF26 is a hierarchical model where sentence embeddings are obtained with a CNN and a GRU-RNN is fed the sentence representations to obtain a document vector. HN-ATT BIBREF27 is another hierarchical model, where the same encoder architecture (a bidirectional GRU-RNN) is used for both sentences and documents, with different parameters. A self-attention mechanism is applied to the RNN annotations at each level. Results and ablations ::: Ablation studies To understand the impact of some hyperparameters on performance, we conducted additional experiments on the Reuters, Polarity, and IMDB datasets, with the non-hierarchical version of MPAD. Results are shown in Table TABREF29. Number of MP iterations. First, we varied the number of message passing iterations from 1 to 4. We can clearly see in Table TABREF29 that having more iterations improves performance. We attribute this to the fact that we are reading out at each iteration from 1 to $T$ (see Eq. DISPLAY_FORM18), which enables the final graph representation to encode a mixture of low-level and high-level features. Indeed, in initial experiments involving readout at $t$=$T$ only, setting $T\ge 2$ was always decreasing performance, despite the GRU-based updates (Eq. DISPLAY_FORM14). These results were consistent with that of BIBREF53 and BIBREF9, who both are reading out only at $t$=$T$ too. We hypothesize that node features at $T\ge 2$ are too diffuse to be entirely relied upon during readout. More precisely, initially at $t$=0, node representations capture information about words, at $t$=1, about their 1-hop neighborhood (bigrams), at $t$=2, about compositions of bigrams, etc. Thus, pretty quickly, node features become general and diffuse. 
In such cases, considering also the lower-level, more precise features of the earlier iterations when reading out may be necessary. Undirected edges. On Reuters, using an undirected graph leads to better performance, while on Polarity and IMDB, it is the opposite. This can be explained by the fact that Reuters is a topic classification task, for which the presence or absence of some patterns is important, but not necessarily the order in which they appear, while Polarity and IMDB are sentiment analysis tasks. To capture sentiment, modeling word order is crucial, e.g., in detecting negation. No master node. Removing the master node deteriorates performance across all datasets, clearly showing the value of having such a node. We hypothesize that since the special document node is connected to all other nodes, it is able to encode during message passing a summary of the document. No renormalization. Here, we do not use the renormalization trick of BIBREF9 during MP (see subsection SECREF10). That is, Eq. DISPLAY_FORM11 becomes $\mathbf {M}^{t+1} = \textsc {MLP}^{t+1}\big (\mathbf {A}\mathbf {H}^{t}\big )$. In other words, instead of computing a weighted average of the incoming neighbors' feature vectors, we compute a weighted sum of them. Unlike the mean, which captures distributions, the sum captures structural information BIBREF14. As shown in Table TABREF29, using sum instead of mean decreases performance everywhere, suggesting that in our application, capturing the distribution of neighbor representations is more important that capturing their structure. We hypothesize that this is the case because statistical word co-occurrence networks tend to have similar structural properties, regardless of the topic, polarity, sentiment, etc. of the corresponding documents. Neighbors-only. In this experiment, we replaced the GRU combine function (see Eq. DISPLAY_FORM14) with the identity function. That is, we simply have $\mathbf {H}^{t+1}$=$\mathbf {M}^{t+1}$. Since $\mathbf {A}$ has zero diagonal, by doing so, we completely ignore the previous feature of the node itself when updating its representation. That is, the update is based entirely on its neighbors. Except on Reuters (almost no change), performance always suffers, stressing the need to take into account the root node during updates, not only its neighborhood. Acknowledgments We thank the NVidia corporation for the donation of a GPU as part of their GPU grant program. Table 1: Statistics of the datasets used in our experiments. CV indicates that cross-validation was used. # pretrained words refers to the number of words in the vocabulary having an entry in the Google News word vectors (except for Yelp2013). Table 3: Ablation results. The n in nMP refers to the number of message passing iterations. *vanilla model (MPAD in Table 2). Query: Which component has the greatest impact on performance? JSON Response:
{"answer": "Increasing number of message passing iterations showed consistent improvement in performance - around 1 point improvement compared between 1 and 4 iterations", "evidence": ["Results and ablations ::: Ablation studies", "To understand the impact of some hyperparameters on performance, we conducted additional experiments on the Reuters, Polarity, and IMDB datasets, with the non-hierarchical version of MPAD. Results are shown in Table TABREF29.", "Number of MP iterations. First, we varied the number of message passing iterations from 1 to 4. We can clearly see in Table TABREF29 that having more iterations improves performance. We attribute this to the fact that we are reading out at each iteration from 1 to $T$ (see Eq. DISPLAY_FORM18), which enables the final graph representation to encode a mixture of low-level and high-level features. Indeed, in initial experiments involving readout at $t$=$T$ only, setting $T\\ge 2$ was always decreasing performance, despite the GRU-based updates (Eq. DISPLAY_FORM14). These results were consistent with that of BIBREF53 and BIBREF9, who both are reading out only at $t$=$T$ too. We hypothesize that node features at $T\\ge 2$ are too diffuse to be entirely relied upon during readout. More precisely, initially at $t$=0, node representations capture information about words, at $t$=1, about their 1-hop neighborhood (bigrams), at $t$=2, about compositions of bigrams, etc. Thus, pretty quickly, node features become general and diffuse. In such cases, considering also the lower-level, more precise features of the earlier iterations when reading out may be necessary."]}
science.qasper_abstractive_qa.278.1
[ { "content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nMessage Passing Attention Networks for Document Understanding\n\nGraph neural networks have recently emerged as a very effective framework for processing graph-structured data. These models have achieved state-of-the-art performance in many tasks. Most graph neural networks can be described in terms of message passing, vertex update, and readout functions. In this paper, we represent documents as word co-occurrence networks and propose an application of the message passing framework to NLP, the Message Passing Attention network for Document understanding (MPAD). We also propose several hierarchical variants of MPAD. Experiments conducted on 10 standard text classification datasets show that our architectures are competitive with the state-of-the-art. Ablation studies reveal further insights about the impact of the different components on performance. Code is publicly available at: https://github.com/giannisnik/mpad .\n\nMessage Passing Neural Networks\nBIBREF13 proposed a MP framework under which many of the recently introduced GNNs can be reformulated. MP consists in an aggregation phase followed by a combination phase BIBREF14. More precisely, let $G(V,E)$ be a graph, and let us consider $v \\in V$. At time $t+1$, a message vector $\\mathbf {m}_v^{t+1}$ is computed from the representations of the neighbors $\\mathcal {N}(v)$ of $v$:\nThe new representation $\\mathbf {h}^{t+1}_v$ of $v$ is then computed by combining its current feature vector $\\mathbf {h}^{t}_v$ with the message vector $\\mathbf {m}_v^{t+1}$:\nMessages are passed for $T$ time steps. Each step is implemented by a different layer of the MP network. Hence, iterations correspond to network depth. The final feature vector $\\mathbf {h}_v^T$ of $v$ is based on messages propagated from all the nodes in the subtree of height $T$ rooted at $v$. It captures both the topology of the neighborhood of $v$ and the distribution of the vertex representations in it.\nIf a graph-level feature vector is needed, e.g., for classification or regression, a READOUT pooling function, that must be invariant to permutations, is applied:\nNext, we present the MP network we developed for document understanding.\n\nMessage Passing Attention network for Document understanding (MPAD) ::: Readout\nAfter passing messages and performing updates for $T$ iterations, we obtain a matrix $\\mathbf {H}^T \\in \\mathbb {R}^{n \\times d}$ containing the final vertex representations. Let $\\hat{G}$ be graph $G$ without the special document node, and matrix $\\mathbf {\\hat{H}}^T \\in \\mathbb {R}^{(n-1) \\times d}$ be the corresponding representation matrix (i.e., $\\mathbf {H}^T$ without the row of the document node).\nWe use as our READOUT function the concatenation of self-attention applied to $\\mathbf {\\hat{H}}^T$ with the final document node representation. More precisely, we apply a global self-attention mechanism BIBREF22 to the rows of $\\mathbf {\\hat{H}}^T$. As shown in Eq. DISPLAY_FORM17, $\\mathbf {\\hat{H}}^T$ is first passed to a dense layer parameterized by matrix $\\mathbf {W}_A^T \\in \\mathbb {R}^{d \\times d}$. 
An alignment vector $\\mathbf {a}$ is then derived by comparing, via dot products, the rows of the output of the dense layer $\\mathbf {Y}^T \\in \\mathbb {R}^{(n-1) \\times d}$ with a trainable vector $\\mathbf {v}^T \\in \\mathbb {R}^d$ (initialized randomly) and normalizing with a softmax. The normalized alignment coefficients are finally used to compute the attentional vector $\\mathbf {u}^T \\in \\mathbb {R}^d$ as a weighted sum of the final representations $\\mathbf {\\hat{H}}^T$.\nNote that we tried with multiple context vectors, i.e., with a matrix $\\mathbf {V}^T$ instead of a vector $\\mathbf {v}^T$, like in BIBREF22, but results were not convincing, even when adding a regularization term to the loss to favor diversity among the rows of $\\mathbf {V}^T$.\nMaster node skip connection. $\\mathbf {h}_G^T \\in \\mathbb {R}^{2d}$ is obtained by concatenating $\\mathbf {u}^T$ and the final master node representation. That is, the master node vector bypasses the attention mechanism. This is equivalent to a skip or shortcut connection BIBREF23. The reason behind this choice is that we expect the special document node to learn a high-level summary about the document, such as its size, vocabulary, etc. (more details are given in subsection SECREF30). Therefore, by making the master node bypass the attention layer, we directly inject global information about the document into its final representation.\nMulti-readout. BIBREF14, inspired by Jumping Knowledge Networks BIBREF12, recommend to not only use the final representations when performing readout, but also that of the earlier steps. Indeed, as one iterates, node features capture more and more global information. However, retaining more local, intermediary information might be useful too. Thus, instead of applying the readout function only to $t=T$, we apply it to all time steps and concatenate the results, finally obtaining $\\mathbf {h}_G \\in \\mathbb {R}^{T \\times 2d}$ :\nIn effect, with this modification, we take into account features based on information aggregated from subtrees of different heights (from 1 to $T$), corresponding to local and global features.\n\nMessage Passing Attention network for Document understanding (MPAD) ::: Hierarchical variants of MPAD\nThrough the successive MP iterations, it could be argued that MPAD implicitly captures some soft notion of the hierarchical structure of documents (words $\\rightarrow $ bigrams $\\rightarrow $ compositions of bigrams, etc.). However, it might be beneficial to explicitly capture document hierarchy. Hierarchical architectures have brought significant improvements to many NLP tasks, such as language modeling and generation BIBREF24, BIBREF25, sentiment and topic classification BIBREF26, BIBREF27, and spoken language understanding BIBREF28, BIBREF29. Inspired by this line of research, we propose several hierarchical variants of MPAD, detailed in what follows. In all of them, we represent each sentence in the document as a word co-occurrence network, and obtain an embedding for it by applying MPAD as previously described.\nMPAD-sentence-att. Here, the sentence embeddings are simply combined through self-attention.\nMPAD-clique. In this variant, we build a complete graph where each node represents a sentence. We then feed that graph to MPAD, where the feature vectors of the nodes are initialized with the sentence embeddings previously obtained.\nMPAD-path. 
This variant is similar to the clique one, except that instead of a complete graph, we build a path according to the natural flow of the text. That is, two nodes are linked by a directed edge if the two sentences they represent follow each other in the document.\n\nExperiments ::: Baselines\nWe evaluate MPAD against multiple state-of-the-art baseline models, including hierarchical ones, to enable fair comparison with the hierarchical MPAD variants.\ndoc2vec BIBREF37. Doc2vec (or paragraph vector) is an extension of word2vec that learns vectors for documents in a fully unsupervised manner. Document embeddings are then fed to a logistic regression classifier.\nCNN BIBREF38. The convolutional neural network architecture, well-known in computer vision, is applied to text. There is one spatial dimension and the word embeddings are used as channels (depth dimensions).\nDAN BIBREF39. The Deep Averaging Network passes the unweighted average of the embeddings of the input words through multiple dense layers and a final softmax.\nTree-LSTM BIBREF40 is a generalization of the standard LSTM architecture to constituency and dependency parse trees.\nDRNN BIBREF41. Recursive neural networks are stacked and applied to parse trees.\nLSTMN BIBREF42 is an extension of the LSTM model where the memory cell is replaced by a memory network which stores word representations.\nC-LSTM BIBREF43 combines convolutional and recurrent neural networks. The region embeddings provided by a CNN are fed to a LSTM.\nSPGK BIBREF44 also models documents as word co-occurrence networks. It computes a graph kernel that compares shortest paths extracted from the word co-occurrence networks and then uses a SVM to categorize documents.\nWMD BIBREF45 is an application of the well-known Earth Mover's Distance to text. A k-nearest neighbor classifier is used.\nS-WMD BIBREF46 is a supervised extension of the Word Mover's Distance.\nSemantic-CNN BIBREF47. Here, a CNN is applied to semantic units obtained by clustering words in the embedding space.\nLSTM-GRNN BIBREF26 is a hierarchical model where sentence embeddings are obtained with a CNN and a GRU-RNN is fed the sentence representations to obtain a document vector.\nHN-ATT BIBREF27 is another hierarchical model, where the same encoder architecture (a bidirectional GRU-RNN) is used for both sentences and documents, with different parameters. A self-attention mechanism is applied to the RNN annotations at each level.\n\nResults and ablations ::: Ablation studies\nTo understand the impact of some hyperparameters on performance, we conducted additional experiments on the Reuters, Polarity, and IMDB datasets, with the non-hierarchical version of MPAD. Results are shown in Table TABREF29.\nNumber of MP iterations. First, we varied the number of message passing iterations from 1 to 4. We can clearly see in Table TABREF29 that having more iterations improves performance. We attribute this to the fact that we are reading out at each iteration from 1 to $T$ (see Eq. DISPLAY_FORM18), which enables the final graph representation to encode a mixture of low-level and high-level features. Indeed, in initial experiments involving readout at $t$=$T$ only, setting $T\\ge 2$ was always decreasing performance, despite the GRU-based updates (Eq. DISPLAY_FORM14). These results were consistent with that of BIBREF53 and BIBREF9, who both are reading out only at $t$=$T$ too. We hypothesize that node features at $T\\ge 2$ are too diffuse to be entirely relied upon during readout. 
More precisely, initially at $t$=0, node representations capture information about words, at $t$=1, about their 1-hop neighborhood (bigrams), at $t$=2, about compositions of bigrams, etc. Thus, pretty quickly, node features become general and diffuse. In such cases, considering also the lower-level, more precise features of the earlier iterations when reading out may be necessary.\nUndirected edges. On Reuters, using an undirected graph leads to better performance, while on Polarity and IMDB, it is the opposite. This can be explained by the fact that Reuters is a topic classification task, for which the presence or absence of some patterns is important, but not necessarily the order in which they appear, while Polarity and IMDB are sentiment analysis tasks. To capture sentiment, modeling word order is crucial, e.g., in detecting negation.\nNo master node. Removing the master node deteriorates performance across all datasets, clearly showing the value of having such a node. We hypothesize that since the special document node is connected to all other nodes, it is able to encode during message passing a summary of the document.\nNo renormalization. Here, we do not use the renormalization trick of BIBREF9 during MP (see subsection SECREF10). That is, Eq. DISPLAY_FORM11 becomes $\\mathbf {M}^{t+1} = \\textsc {MLP}^{t+1}\\big (\\mathbf {A}\\mathbf {H}^{t}\\big )$. In other words, instead of computing a weighted average of the incoming neighbors' feature vectors, we compute a weighted sum of them. Unlike the mean, which captures distributions, the sum captures structural information BIBREF14. As shown in Table TABREF29, using sum instead of mean decreases performance everywhere, suggesting that in our application, capturing the distribution of neighbor representations is more important that capturing their structure. We hypothesize that this is the case because statistical word co-occurrence networks tend to have similar structural properties, regardless of the topic, polarity, sentiment, etc. of the corresponding documents.\nNeighbors-only. In this experiment, we replaced the GRU combine function (see Eq. DISPLAY_FORM14) with the identity function. That is, we simply have $\\mathbf {H}^{t+1}$=$\\mathbf {M}^{t+1}$. Since $\\mathbf {A}$ has zero diagonal, by doing so, we completely ignore the previous feature of the node itself when updating its representation. That is, the update is based entirely on its neighbors. Except on Reuters (almost no change), performance always suffers, stressing the need to take into account the root node during updates, not only its neighborhood.\n\nAcknowledgments\nWe thank the NVidia corporation for the donation of a GPU as part of their GPU grant program.\n\nTable 1: Statistics of the datasets used in our experiments. CV indicates that cross-validation was used. # pretrained words refers to the number of words in the vocabulary having an entry in the Google News word vectors (except for Yelp2013).\n\nTable 3: Ablation results. The n in nMP refers to the number of message passing iterations. 
*vanilla model (MPAD in Table 2).\n\nQuery: Which component has the greatest impact on performance?\n\nJSON Response:", "role": "user" }, { "content": "{\"answer\": \"Increasing number of message passing iterations showed consistent improvement in performance - around 1 point improvement compared between 1 and 4 iterations\", \"evidence\": [\"Results and ablations ::: Ablation studies\", \"To understand the impact of some hyperparameters on performance, we conducted additional experiments on the Reuters, Polarity, and IMDB datasets, with the non-hierarchical version of MPAD. Results are shown in Table TABREF29.\", \"Number of MP iterations. First, we varied the number of message passing iterations from 1 to 4. We can clearly see in Table TABREF29 that having more iterations improves performance. We attribute this to the fact that we are reading out at each iteration from 1 to $T$ (see Eq. DISPLAY_FORM18), which enables the final graph representation to encode a mixture of low-level and high-level features. Indeed, in initial experiments involving readout at $t$=$T$ only, setting $T\\\\ge 2$ was always decreasing performance, despite the GRU-based updates (Eq. DISPLAY_FORM14). These results were consistent with that of BIBREF53 and BIBREF9, who both are reading out only at $t$=$T$ too. We hypothesize that node features at $T\\\\ge 2$ are too diffuse to be entirely relied upon during readout. More precisely, initially at $t$=0, node representations capture information about words, at $t$=1, about their 1-hop neighborhood (bigrams), at $t$=2, about compositions of bigrams, etc. Thus, pretty quickly, node features become general and diffuse. In such cases, considering also the lower-level, more precise features of the earlier iterations when reading out may be necessary.\"]}", "role": "assistant" } ]
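The MPAD document in the record above describes its readout as a dense layer over the word-node representations, a softmax alignment against a trainable context vector, a weighted sum of the node representations, and concatenation with the master-node vector that bypasses attention. A rough NumPy sketch of those equations, under assumed variable names and shapes rather than the authors' released code, is:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def mpad_readout(H_hat, h_master, W_A, v):
    """Self-attention readout over the word nodes plus master-node skip connection.

    H_hat    : (n-1, d) final representations of the word nodes
    h_master : (d,)     final representation of the special document node
    W_A      : (d, d)   dense-layer weights of the attention mechanism
    v        : (d,)     trainable context vector
    """
    Y = H_hat @ W_A                        # dense layer applied to each row
    a = softmax(Y @ v)                     # alignment coefficients, one per word node
    u = a @ H_hat                          # attentional vector: weighted sum of node representations
    return np.concatenate([u, h_master])   # master node bypasses attention (skip connection)
```

The multi-readout variant described in the same passage would apply this function at every message-passing step and concatenate the results.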
science.bioasq_list_qa
Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array. Context: Dracorhodin perchlorate induced human breast cancer MCF-7 apoptosis through mitochondrial pathways. It was reported that DP could induce apoptosis in human prostate cancer, human gastric tumor cells and human melanoma, but the cytotoxic effect of DP on human breast cancer was not investigated. CONCLUSION: Therefore DP was a candidate for anti-breast cancer, DP induced apoptosis of MCF-7 through mitochondrial pathway. Dracorhodin perchlorate has been recently shown to induce apoptotic cell death in cancer cells. However, the molecular mechanisms underlying these effects are unknown in human gastric tumor cells. Dracorhodin perchlorate suppresses proliferation and induces apoptosis in human prostate cancer cell line PC-3. The growth inhibition and pro-apoptosis effects of dracorhodin perchlorate on human prostate cancer PC-3 cell line were examined. Dracorhodin perchlorate, an anthocyanin red pigment, induces human premyelocytic leukemia HL-60 cell death through apoptotic pathway. Dracorhodin perchlorate, an anthocyanin red pigment, induces human melanoma A375-S2 cell death through the apoptotic pathway. Dracorhodin perchlorate inhibited proliferation of several tumor cell lines. The growth inhibition and pro-apoptosis effects of dracorhodin perchlorate on human prostate cancer PC-3 cell line were examined. It was reported that DP could induce apoptosis in human prostate cancer, human gastric tumor cells and human melanoma, but the cytotoxic effect of DP on human breast cancer was not investigated. Dracorhodin perchlorate (DP) was a synthetic analogue of the antimicrobial anthocyanin red pigment dracorhodin. It was reported that DP could induce apoptosis in human prostate cancer, human gastric tumor cells and human melanoma, but the cytotoxic effect of DP on human breast cancer was not investigated. The growth inhibition and pro-apoptosis effects of dracorhodin perchlorate on human prostate cancer PC-3 cell line were examined. After administration of 10-80 μmol/L dracorhodin perchlorate for 12-48 h, cell viability of PC-3 cells was measured by MTT colorimetry. It was reported that DP could induce apoptosis in human prostate cancer, human gastric tumor cells and human melanoma, but the cytotoxic effect of DP on human breast cancer was not investigated. This study would investigate whether DP was a candidate chemical of anti-human breast cancer. Dracorhodin perchlorate (DP) was a synthetic analogue of the antimicrobial anthocyanin red pigment dracorhodin. It was reported that DP could induce apoptosis in human prostate cancer, human gastric tumor cells and human melanoma, but the cytotoxic effect of DP on human breast cancer was not investigated. The growth inhibition and pro-apoptosis effects of dracorhodin perchlorate on human prostate cancer PC-3 cell line were examined. After administration of 10-80 μmol/L dracorhodin perchlorate for 12-48 h, cell viability of PC-3 cells was measured by MTT colorimetry. Question: Dracorhodin perchlorate was tested for treatment of which cancers?
["prostate cancer", "gastric tumor", "melanoma", "premyelocytic leukemia"]
science.bioasq_list_qa.221
[ { "content": "Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.\n\nContext: Dracorhodin perchlorate induced human breast cancer MCF-7 apoptosis through mitochondrial pathways.\nIt was reported that DP could induce apoptosis in human prostate cancer, human gastric tumor cells and human melanoma, but the cytotoxic effect of DP on human breast cancer was not investigated.\nCONCLUSION: Therefore DP was a candidate for anti-breast cancer, DP induced apoptosis of MCF-7 through mitochondrial pathway.\nDracorhodin perchlorate has been recently shown to induce apoptotic cell death in cancer cells. However, the molecular mechanisms underlying these effects are unknown in human gastric tumor cells.\nDracorhodin perchlorate suppresses proliferation and induces apoptosis in human prostate cancer cell line PC-3.\nThe growth inhibition and pro-apoptosis effects of dracorhodin perchlorate on human prostate cancer PC-3 cell line were examined.\nDracorhodin perchlorate, an anthocyanin red pigment, induces human premyelocytic leukemia HL-60 cell death through apoptotic pathway.\nDracorhodin perchlorate, an anthocyanin red pigment, induces human melanoma A375-S2 cell death through the apoptotic pathway.\nDracorhodin perchlorate inhibited proliferation of several tumor cell lines.\nThe growth inhibition and pro-apoptosis effects of dracorhodin perchlorate on human prostate cancer PC-3 cell line were examined.\nIt was reported that DP could induce apoptosis in human prostate cancer, human gastric tumor cells and human melanoma, but the cytotoxic effect of DP on human breast cancer was not investigated.\nDracorhodin perchlorate (DP) was a synthetic analogue of the antimicrobial anthocyanin red pigment dracorhodin. It was reported that DP could induce apoptosis in human prostate cancer, human gastric tumor cells and human melanoma, but the cytotoxic effect of DP on human breast cancer was not investigated.\nThe growth inhibition and pro-apoptosis effects of dracorhodin perchlorate on human prostate cancer PC-3 cell line were examined. After administration of 10-80 μmol/L dracorhodin perchlorate for 12-48 h, cell viability of PC-3 cells was measured by MTT colorimetry.\nIt was reported that DP could induce apoptosis in human prostate cancer, human gastric tumor cells and human melanoma, but the cytotoxic effect of DP on human breast cancer was not investigated. This study would investigate whether DP was a candidate chemical of anti-human breast cancer.\nDracorhodin perchlorate (DP) was a synthetic analogue of the antimicrobial anthocyanin red pigment dracorhodin. It was reported that DP could induce apoptosis in human prostate cancer, human gastric tumor cells and human melanoma, but the cytotoxic effect of DP on human breast cancer was not investigated.\nThe growth inhibition and pro-apoptosis effects of dracorhodin perchlorate on human prostate cancer PC-3 cell line were examined. After administration of 10-80 μmol/L dracorhodin perchlorate for 12-48 h, cell viability of PC-3 cells was measured by MTT colorimetry.\n\nQuestion: Dracorhodin perchlorate was tested for treatment of which cancers?", "role": "user" }, { "content": "[\"prostate cancer\", \"gastric tumor\", \"melanoma\", \"premyelocytic leukemia\"]", "role": "assistant" } ]
science.discomat_te
| Bond | Bond energy (kcal/mol) | Relative probability at the following temperatures (degC) | Relative probability at the following temperatures (degC) |
| Bond | Bond energy (kcal/mol) | 27 | 1000 |
| Ge-Se | 49.1 | 1 | 1 |
| Se-Se | 44.0 | 1.93x10-4 | 1.33x10-1 |
| Se-Te | 40.6 | 6.50x10-7 | 3.47x10-2 |
| Ge-Ge | 37.6 | 4.26x10-9 | 1.06x10-2 |
| Ge-Te | 37.4 | 3.05x10-9 | 9.81x10-3 |
| Te-Te | 33.0 | 1.91x10-12 | 1.72x10-3 |

Caption: Bond energies and the relative probabilities of various bonds in Ge10Se90- x Te x (x =0, 10, 20, 30, 40, 50) glassy alloys

Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}

Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.

Output:
{"value": "27", "type": "Other"} {"value": "1000", "type": "Other"} {"value": "49.1", "type": "Other"} {"value": "1", "type": "Other"} {"value": "1", "type": "Other"} {"value": "44.0", "type": "Other"} {"value": "40.6", "type": "Other"} {"value": "37.6", "type": "Other"} {"value": "37.4", "type": "Other"} {"value": "33.0", "type": "Other"}
science.discomat_te.132
[ { "content": "| Bond | Bond energy (kcal/mol) | Relative probability at the following temperatures (degC) | Relative probability at the following temperatures (degC) |\n| Bond | Bond energy (kcal/mol) | 27 | 1000 |\n| Ge-Se | 49.1 | 1 | 1 |\n| Se-Se | 44.0 | 1.93x10-4 | 1.33x10-1 |\n| Se-Te | 40.6 | 6.50x10-7 | 3.47x10-2 |\n| Ge-Ge | 37.6 | 4.26x10-9 | 1.06x10-2 |\n| Ge-Te | 37.4 | 3.05x10-9 | 9.81x10-3 |\n| Te-Te | 33.0 | 1.91x10-12 | 1.72x10-3 |\n\nCaption: Bond energies and the relative probabilities of various bonds in Ge10Se90- x Te x (x =0, 10, 20, 30, 40, 50) glassy alloys\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"27\", \"type\": \"Other\"}\n{\"value\": \"1000\", \"type\": \"Other\"}\n{\"value\": \"49.1\", \"type\": \"Other\"}\n{\"value\": \"1\", \"type\": \"Other\"}\n{\"value\": \"1\", \"type\": \"Other\"}\n{\"value\": \"44.0\", \"type\": \"Other\"}\n{\"value\": \"40.6\", \"type\": \"Other\"}\n{\"value\": \"37.6\", \"type\": \"Other\"}\n{\"value\": \"37.4\", \"type\": \"Other\"}\n{\"value\": \"33.0\", \"type\": \"Other\"}", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: Effect of scale on catastrophic forgetting in neural networks Main Content: Continual learning is a current challenge in designing machine learning systems : when trained on multiple tasks in sequence , models tend to suffer performance losses on earlier tasks—this is known as catastrophic forgetting ( McCloskey & Cohen , 1989 ) . Catastrophic forgetting occurs in many settings , from supervised learning for classification ( Zenke et al. , 2017 ) to reinforcement learning for games ( Kirkpatrick et al. , 2017 ) and downstream applications such as translation ( Bapna & Firat , 2019 ) or medical diagnoses ( Gupta et al. , 2021 ) . While many methods have been proposed to help mitigate forgetting , none of these have fully solved the problem ; see Kemker et al . ( 2017 ) for a review . Increasing model and dataset size have been key to the success of deep learning ( Kolesnikov et al. , 2020 ; BIG-bench collaboration , 2021 ; Radford et al. , 2019 ; Zhai et al. , 2021 ) . A recent trend has been the use of increasingly large models pretrained on diverse datasets , which are subsequently fine-tuned or evaluated without fine-tuning on downstream tasks of interest ; see Bommasani et al . ( 2021 ) and references therein . A particularly striking realization of this has been large pretrained language models , which show consistent improvement across scales ( Kaplan et al. , 2020 ) and can perform well on diverse tasks with no finetuning ( Brown et al. , 2020 ) . From a continual learning perspective , this behavior is especially dramatic as , without finetuning , any training signal for these tasks appears mixed into a drastically larger pretraining dataset , and thus more than likely appeared many gradient steps before evaluation . This observation in large part inspired our work . Here , we empirically study continual learning in large pretrained models . Our results suggest that pretrained models are significantly more resistant to catastrophic forgetting than their randomlyinitialized counterparts , and that this resistance improves with the scale of both the model and pretraining data . Most of our experiments concern vision—specifically image classification ; but we also conduct a preliminary investigation of language models . Our key observations are as follows : • Resistance to catastrophic forgetting improves with model and pretraining dataset scale ( Figure 1 ) . • Large , pretrained image-classification models are significantly more resistant to forgetting than models trained from random initialization ( Figure 5 ) . • Both supervised and unsupervised , SimCLR ( Chen et al. , 2020 ) , pretraining improve forgetting ( Supplement , Figure 11 ) . • We perform initial experiments aimed at exploring why scale and pretraining reduce catastrophic forgetting , showing that distinct class representations in pretrained models are more orthogonal than in trained-from-scratch models ( Figure 8 ) . • We observe similar phenomena in language models ( Figure 9 ) . 1 RELATED WORK . Catastrophic forgetting Catastrophic forgetting was identified as a problem for neural networks as early as the late 1980s , as it occurred in the single-hidden-layer networks that were studied at the time ( see Delange et al . ( 2021 ) for a review ) . 
Within the catastrophic forgetting literature , a number of potential solutions to the problem have been devised ; some involve replaying previous-task examples during training on later tasks ( Rebuffi et al. , 2016 ; Lopez-Paz & Ranzato , 2017 ; Chaudhry et al. , 2018 ; Shin et al. , 2017 ; Riemer et al. , 2018 ) , while others like elastic weight consolidation ( Kirkpatrick et al. , 2017 ) limit the extent to which parameters can change once learned ( Zenke et al. , 2017 ; Lee et al. , 2017 ; Farajtabar et al. , 2019 ) . Perhaps most similar to our work are proposed mitigation methods which exploit the overparameterization of modern neural networks to perform continual learning of multiple tasks , e.g . by using pruning in Mallya & Lazebnik ( 2018 ) or task-specific subnetworks in Wortsman et al . ( 2020 ) and recent works studying the impact of pretraining Anonymous ( 2022 ) and overparameterization via network width Mirzadeh et al . ( 2021 ) on forgetting . Large-scale models The observation—initially in language models ( Kaplan et al. , 2020 ) —that model performance scales predictably with model and dataset size spurred an increased interest in training and deploying increasingly large pretrained models . Large-scale models emerged in natural-language processing with models such as GPT ( Radford et al. , 2019 ) and BERT ( Devlin et al. , 2019 ) , which are pretrained in an unsupervised manner and thus able to make use of the large corpora of natural text available on the internet . More recently , image-classification models have also benefited from pretraining on large datasets ( Kolesnikov et al. , 2020 ) , both supervised and unsupervised ; a key example is the Vision Transformer architecture of Dosovitskiy et al . ( 2020 ) , which—despite not using the inductive bias provided by convolutions—attains downstream performance comparable to convolutional networks after pretraining on ImageNet-21k ( Deng et al. , 2009 ) or JFT-300M ( Sun et al. , 2017 ) . On many current NLP and computer vision benchmarks , large pretrained models achieve top performance . All of the experiments in our work follow the pretrain-then-finetune method ; recently , another paradigm , side-tuning , was proposed by Zhang et al . ( 2020 ) , which avoids forgetting altogether by leaving the pretrained model fixed and employing auxiliary networks for downstream tasks . 2 EXPERIMENTAL SETUP . Tasks : Most of the experiments in this paper are conducted in the standard , task split setting ( Kirkpatrick et al. , 2017 ; Zenke et al. , 2017 ) . We consider split CIFAR-10 task sequences – the ten class dataset is split and sequentially trained on two tasks of 5 classes each . We also consider sequential 10 and 50 class splits of CIFAR-100 . We also consider the more recent input-distribution-shift CIFAR-100 setup ( Ramasesh et al. , 2021 ) . In this case each task has the same target types made up a fixed subset of the 20 CIFAR-100 superclasses ( e.g . aquatic mammals , fish , ... ) . The sequence of tasks consists of modifying which subclasses are sampled from . For example Task A may have aquatic mammals drawn from beaver and dolphin , while Task B may have otter and whale . This setup allows one to probe forgetting without changing the model head , in a setting where task identity is not known to the model . For our language model experiments , we consider next token prediction generative tasks and use the IMDb Reviews ( Maas et al. , 2011 ) and english Wikipedia ( Foundation ) datasets . 
Vision models : In most of our experiments we use the Vision Transformer ( ViT ) and ResNet architectures . For the Vision Transformers , we use the ViT-S and ViT-B configurations from Dosovitskiy et al . ( 2020 ) and a smaller configuration we call ViT-xS – ranging from 5.7-86.7 million parameters . For ResNets , we use five models : ResNet26 , ResNet50 , ResNet101 , ResNet152 , and ResNet200 – ranging from 14.0-62.6 million parameters ; as in Kolesnikov et al . ( 2020 ) , these models use group normalization ( Wu & He , 2018 ) and weight standarization ( Qiao et al. , 2019 ) in place of batch normalization . Details can be found in Tables 1 and 2 in Appendix A.1.2 . Pretraining : Unless otherwise specified , we pretrain our image models in a supervised manner on the ImageNet21k dataset , which contains approximately 14 million images in about 26000 distinct categories using the Adam optimizer ( Kingma & Ba , 2017 ) . Further details are in the Appendix . Finetuning : When fine-tuning on multiple tasks in sequence , we use the SGD optimizer with momentum β = 0.9 . The first task training employs a cosine decay schedule with a linear warmup ; later tasks in the sequence are trained with fixed learning rates . For the split-CIFAR-10 task sequences , we use a multi-head setup in which each head is initialized to zeros . 3 RESULTS . 3.1 FORGETTING IMPROVES WITH MODEL SCALE . Our main finding is that for both pretrained Vision Transformers and pretrained ResNets , catastrophic forgetting is mitigated to a large extent by scale : that is , larger models suffer less from forgetting . This is shown in Figure 1 for the two-task split CIFAR-10 sequence : Task A { airplane , automobile , bird , cat , deer } → Task B { dog , frog , horse , ship , truck } . In order to make a fair comparison , we plot the test accuracies of the model on both tasks at different steps of Task B fine-tuning for a variety of learning rates . This allows us to see , for each model , the maximum performance achievable on the new task for a given performance on the initial task . We call this the forgetting frontier . Figure 1 shows that the forgetting frontier for both ViT and ResNet models improves significantly with scale . While a 5.7M-parameter ViT-xS model loses about 6.5 % in Task A accuracy over the course of learning to perform Task B , an 86.7M-parameter ViT-B model loses less than half a percent . To put this in context , in a previous studies of split-CIFAR-10 task sequences , a ResNet18 model lost close to 40 % accuracy with no mitigation methods , 5 % when using a replay buffer , and 20 % when using elastic weight consolidation ( see Figure 1 of Ramasesh et al . ( 2021 ) for an example ) . It is also important to note that this effect of scale and pretraining on improving forgetting frontiers does not seem to be architecture-dependent . Both vision transformers—based on self-attention— and ResNets—based on convolutions—exhibit improvements in their forgetting frontiers as they grow larger , as long as they are pretrained . We see this improvement in forgetting performance beyond the simple two-task split-CIFAR-10 scenario . Across a 10-task sequence of 10-class split-CIFAR-100 tasks ( Figure 2 left ) , sequential 50-class splits of CIFAR-100 ( Figure 2 right ) , and the CIFAR-100 single-head input-distributionshift task ( Figure 3 ) , large-scale pretrained models exhibit resistance to forgetting , and continual learning ability appears to improve systematically with scale . 
For detailed results , scaling plots , and forgetting frontiers in the 10-task scenario , see Section B.2 in the Supplement . 3.2 DEPENDENCE ON FINETUNING DATASET . It is interesting to study how the improved resistance to forgetting with scale depends on the choice of finetuning dataset . In particular , it is worth noting that , though quite different in format , CIFAR10/100 are semantically quite similar to subsets of ImageNet21k , our pretraining set . To empirically investigate the dependence on finetuning dataset we have run additional experiments similar to those presented in Figure 1 for the Oxford Pet and Flowers , Caltech Birds , Stanford Cars , street view house numbers ( SVHN ) and DomainNet Clipart datasets . Results are shown in Figure 4 and in Figure 17 . We note in particular that SVHN classes are numerals and so distinct from ImageNet21k classes , while the Clipart dataset consists of synthetic , rather than natural images . See the Supplement for further details . 3.3 FORGETTING IN PRETRAINED MODELS VS. MODELS TRAINED FROM SCRATCH . In the previous section , we saw that the amount of catastrophic forgetting in models pretrained on large datasets decreased with increasing model size , to the point that the largest models ( ViT-B , with 86.7M parameters and ResNet200 with 62.6M parameters ) suffer very little performance loss from second-task training . In this section , we investigate whether this improvement in forgetting is simply a consequence of the improvement in performance on Task A . That is , do pretraining and scale have benefits with respect to forgetting beyond simply improving the fine-tuned performance on downstream tasks ? To check this , we compare the performance and amount of forgetting in pretrained ResNet models to models trained from scratch . We handicap the pretrained ResNets so that their maximum accuracy on on Task A matches that of the corresponding trained-from-scratch ResNet ; this is done simply by reducing the number of steps for which we fine-tune . Figure 5 shows the results of this experiment , with additional results and details in Appendix C.1 . While the trained-from-scratch ResNets achieve the same performance as their pretrained counterparts on Task A , their Task A performance degrades significantly more than the pretrained ResNets when trained on Task B ; pretrained ResNets forget less , and this improves with model scale . This result suggests that pretraining and model scale confer benefits upon models which go beyond simply improving performance . From Figure 5 it is also clear ( see also Figure 19 in the appendix , which plots the same data in a different way ) that pretraining is required to see the benefits of scale . Simply scaling up the model size in trained-from-scratch ResNets does not yield clear improvements in their forgetting performance . We focused in this section exclusively on ResNets ( rather than Vision Transformers ) because obtaining decent performance on image classification tasks with Vision Transformers seems to require pretraining . In our experiments , for example , a trained-from-scratch ViT-S model was unable to achieve better than 75 % test accuracy on five-class CIFAR-10 , while the same model , pretrained , achieved better than 98 % accuracy .
The paper explores the effect of scale on catastrophic forgetting. They conduct several different experiments (for both ResNets and Vision Transformers) showing:
- the size of the model for pretrained models reduces forgetting
- (Longer) pretraining reduces forgetting
- More pretraining data reduces forgetting
- A metric used to measure overlap between model representations showed that pretraining results in more orthogonal representations between classes
- The length of finetuning isn't as impactful on pretrained models as on randomly initialized models
They perform the majority of their experiments on a two-task, task-incremental Split CIFAR-10 dataset. Some experiments were done on 2-task class-incremental and 10-task task-incremental Split CIFAR-100. They also conduct a brief set of experiments showing that scaling model size reduces forgetting for language models as well.
science.mup_single_document_summarization.3708
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nEffect of scale on catastrophic forgetting in neural networks\n\nMain Content:\nContinual learning is a current challenge in designing machine learning systems : when trained on multiple tasks in sequence , models tend to suffer performance losses on earlier tasks—this is known as catastrophic forgetting ( McCloskey & Cohen , 1989 ) . Catastrophic forgetting occurs in many settings , from supervised learning for classification ( Zenke et al. , 2017 ) to reinforcement learning for games ( Kirkpatrick et al. , 2017 ) and downstream applications such as translation ( Bapna & Firat , 2019 ) or medical diagnoses ( Gupta et al. , 2021 ) . While many methods have been proposed to help mitigate forgetting , none of these have fully solved the problem ; see Kemker et al . ( 2017 ) for a review . Increasing model and dataset size have been key to the success of deep learning ( Kolesnikov et al. , 2020 ; BIG-bench collaboration , 2021 ; Radford et al. , 2019 ; Zhai et al. , 2021 ) . A recent trend has been the use of increasingly large models pretrained on diverse datasets , which are subsequently fine-tuned or evaluated without fine-tuning on downstream tasks of interest ; see Bommasani et al . ( 2021 ) and references therein . A particularly striking realization of this has been large pretrained language models , which show consistent improvement across scales ( Kaplan et al. , 2020 ) and can perform well on diverse tasks with no finetuning ( Brown et al. , 2020 ) . From a continual learning perspective , this behavior is especially dramatic as , without finetuning , any training signal for these tasks appears mixed into a drastically larger pretraining dataset , and thus more than likely appeared many gradient steps before evaluation . This observation in large part inspired our work . Here , we empirically study continual learning in large pretrained models . Our results suggest that pretrained models are significantly more resistant to catastrophic forgetting than their randomlyinitialized counterparts , and that this resistance improves with the scale of both the model and pretraining data . Most of our experiments concern vision—specifically image classification ; but we also conduct a preliminary investigation of language models . Our key observations are as follows : • Resistance to catastrophic forgetting improves with model and pretraining dataset scale ( Figure 1 ) . • Large , pretrained image-classification models are significantly more resistant to forgetting than models trained from random initialization ( Figure 5 ) . • Both supervised and unsupervised , SimCLR ( Chen et al. , 2020 ) , pretraining improve forgetting ( Supplement , Figure 11 ) . • We perform initial experiments aimed at exploring why scale and pretraining reduce catastrophic forgetting , showing that distinct class representations in pretrained models are more orthogonal than in trained-from-scratch models ( Figure 8 ) . • We observe similar phenomena in language models ( Figure 9 ) . 1 RELATED WORK . Catastrophic forgetting Catastrophic forgetting was identified as a problem for neural networks as early as the late 1980s , as it occurred in the single-hidden-layer networks that were studied at the time ( see Delange et al . ( 2021 ) for a review ) . 
Within the catastrophic forgetting literature , a number of potential solutions to the problem have been devised ; some involve replaying previous-task examples during training on later tasks ( Rebuffi et al. , 2016 ; Lopez-Paz & Ranzato , 2017 ; Chaudhry et al. , 2018 ; Shin et al. , 2017 ; Riemer et al. , 2018 ) , while others like elastic weight consolidation ( Kirkpatrick et al. , 2017 ) limit the extent to which parameters can change once learned ( Zenke et al. , 2017 ; Lee et al. , 2017 ; Farajtabar et al. , 2019 ) . Perhaps most similar to our work are proposed mitigation methods which exploit the overparameterization of modern neural networks to perform continual learning of multiple tasks , e.g . by using pruning in Mallya & Lazebnik ( 2018 ) or task-specific subnetworks in Wortsman et al . ( 2020 ) and recent works studying the impact of pretraining Anonymous ( 2022 ) and overparameterization via network width Mirzadeh et al . ( 2021 ) on forgetting . Large-scale models The observation—initially in language models ( Kaplan et al. , 2020 ) —that model performance scales predictably with model and dataset size spurred an increased interest in training and deploying increasingly large pretrained models . Large-scale models emerged in natural-language processing with models such as GPT ( Radford et al. , 2019 ) and BERT ( Devlin et al. , 2019 ) , which are pretrained in an unsupervised manner and thus able to make use of the large corpora of natural text available on the internet . More recently , image-classification models have also benefited from pretraining on large datasets ( Kolesnikov et al. , 2020 ) , both supervised and unsupervised ; a key example is the Vision Transformer architecture of Dosovitskiy et al . ( 2020 ) , which—despite not using the inductive bias provided by convolutions—attains downstream performance comparable to convolutional networks after pretraining on ImageNet-21k ( Deng et al. , 2009 ) or JFT-300M ( Sun et al. , 2017 ) . On many current NLP and computer vision benchmarks , large pretrained models achieve top performance . All of the experiments in our work follow the pretrain-then-finetune method ; recently , another paradigm , side-tuning , was proposed by Zhang et al . ( 2020 ) , which avoids forgetting altogether by leaving the pretrained model fixed and employing auxiliary networks for downstream tasks . 2 EXPERIMENTAL SETUP . Tasks : Most of the experiments in this paper are conducted in the standard , task split setting ( Kirkpatrick et al. , 2017 ; Zenke et al. , 2017 ) . We consider split CIFAR-10 task sequences – the ten class dataset is split and sequentially trained on two tasks of 5 classes each . We also consider sequential 10 and 50 class splits of CIFAR-100 . We also consider the more recent input-distribution-shift CIFAR-100 setup ( Ramasesh et al. , 2021 ) . In this case each task has the same target types made up a fixed subset of the 20 CIFAR-100 superclasses ( e.g . aquatic mammals , fish , ... ) . The sequence of tasks consists of modifying which subclasses are sampled from . For example Task A may have aquatic mammals drawn from beaver and dolphin , while Task B may have otter and whale . This setup allows one to probe forgetting without changing the model head , in a setting where task identity is not known to the model . For our language model experiments , we consider next token prediction generative tasks and use the IMDb Reviews ( Maas et al. , 2011 ) and english Wikipedia ( Foundation ) datasets . 
Vision models : In most of our experiments we use the Vision Transformer ( ViT ) and ResNet architectures . For the Vision Transformers , we use the ViT-S and ViT-B configurations from Dosovitskiy et al . ( 2020 ) and a smaller configuration we call ViT-xS – ranging from 5.7-86.7 million parameters . For ResNets , we use five models : ResNet26 , ResNet50 , ResNet101 , ResNet152 , and ResNet200 – ranging from 14.0-62.6 million parameters ; as in Kolesnikov et al . ( 2020 ) , these models use group normalization ( Wu & He , 2018 ) and weight standarization ( Qiao et al. , 2019 ) in place of batch normalization . Details can be found in Tables 1 and 2 in Appendix A.1.2 . Pretraining : Unless otherwise specified , we pretrain our image models in a supervised manner on the ImageNet21k dataset , which contains approximately 14 million images in about 26000 distinct categories using the Adam optimizer ( Kingma & Ba , 2017 ) . Further details are in the Appendix . Finetuning : When fine-tuning on multiple tasks in sequence , we use the SGD optimizer with momentum β = 0.9 . The first task training employs a cosine decay schedule with a linear warmup ; later tasks in the sequence are trained with fixed learning rates . For the split-CIFAR-10 task sequences , we use a multi-head setup in which each head is initialized to zeros . 3 RESULTS . 3.1 FORGETTING IMPROVES WITH MODEL SCALE . Our main finding is that for both pretrained Vision Transformers and pretrained ResNets , catastrophic forgetting is mitigated to a large extent by scale : that is , larger models suffer less from forgetting . This is shown in Figure 1 for the two-task split CIFAR-10 sequence : Task A { airplane , automobile , bird , cat , deer } → Task B { dog , frog , horse , ship , truck } . In order to make a fair comparison , we plot the test accuracies of the model on both tasks at different steps of Task B fine-tuning for a variety of learning rates . This allows us to see , for each model , the maximum performance achievable on the new task for a given performance on the initial task . We call this the forgetting frontier . Figure 1 shows that the forgetting frontier for both ViT and ResNet models improves significantly with scale . While a 5.7M-parameter ViT-xS model loses about 6.5 % in Task A accuracy over the course of learning to perform Task B , an 86.7M-parameter ViT-B model loses less than half a percent . To put this in context , in a previous studies of split-CIFAR-10 task sequences , a ResNet18 model lost close to 40 % accuracy with no mitigation methods , 5 % when using a replay buffer , and 20 % when using elastic weight consolidation ( see Figure 1 of Ramasesh et al . ( 2021 ) for an example ) . It is also important to note that this effect of scale and pretraining on improving forgetting frontiers does not seem to be architecture-dependent . Both vision transformers—based on self-attention— and ResNets—based on convolutions—exhibit improvements in their forgetting frontiers as they grow larger , as long as they are pretrained . We see this improvement in forgetting performance beyond the simple two-task split-CIFAR-10 scenario . Across a 10-task sequence of 10-class split-CIFAR-100 tasks ( Figure 2 left ) , sequential 50-class splits of CIFAR-100 ( Figure 2 right ) , and the CIFAR-100 single-head input-distributionshift task ( Figure 3 ) , large-scale pretrained models exhibit resistance to forgetting , and continual learning ability appears to improve systematically with scale . 
For detailed results , scaling plots , and forgetting frontiers in the 10-task scenario , see Section B.2 in the Supplement . 3.2 DEPENDENCE ON FINETUNING DATASET . It is interesting to study how the improved resistance to forgetting with scale depends on the choice of finetuning dataset . In particular , it is worth noting that , though quite different in format , CIFAR10/100 are semantically quite similar to subsets of ImageNet21k , our pretraining set . To empirically investigate the dependence on finetuning dataset we have run additional experiments similar to those presented in Figure 1 for the Oxford Pet and Flowers , Caltech Birds , Stanford Cars , street view house numbers ( SVHN ) and DomainNet Clipart datasets . Results are shown in Figure 4 and in Figure 17 . We note in particular that SVHN classes are numerals and so distinct from ImageNet21k classes , while the Clipart dataset consists of synthetic , rather than natural images . See the Supplement for further details . 3.3 FORGETTING IN PRETRAINED MODELS VS. MODELS TRAINED FROM SCRATCH . In the previous section , we saw that the amount of catastrophic forgetting in models pretrained on large datasets decreased with increasing model size , to the point that the largest models ( ViT-B , with 86.7M parameters and ResNet200 with 62.6M parameters ) suffer very little performance loss from second-task training . In this section , we investigate whether this improvement in forgetting is simply a consequence of the improvement in performance on Task A . That is , do pretraining and scale have benefits with respect to forgetting beyond simply improving the fine-tuned performance on downstream tasks ? To check this , we compare the performance and amount of forgetting in pretrained ResNet models to models trained from scratch . We handicap the pretrained ResNets so that their maximum accuracy on on Task A matches that of the corresponding trained-from-scratch ResNet ; this is done simply by reducing the number of steps for which we fine-tune . Figure 5 shows the results of this experiment , with additional results and details in Appendix C.1 . While the trained-from-scratch ResNets achieve the same performance as their pretrained counterparts on Task A , their Task A performance degrades significantly more than the pretrained ResNets when trained on Task B ; pretrained ResNets forget less , and this improves with model scale . This result suggests that pretraining and model scale confer benefits upon models which go beyond simply improving performance . From Figure 5 it is also clear ( see also Figure 19 in the appendix , which plots the same data in a different way ) that pretraining is required to see the benefits of scale . Simply scaling up the model size in trained-from-scratch ResNets does not yield clear improvements in their forgetting performance . We focused in this section exclusively on ResNets ( rather than Vision Transformers ) because obtaining decent performance on image classification tasks with Vision Transformers seems to require pretraining . In our experiments , for example , a trained-from-scratch ViT-S model was unable to achieve better than 75 % test accuracy on five-class CIFAR-10 , while the same model , pretrained , achieved better than 98 % accuracy .", "role": "user" }, { "content": "The paper explores the effect of scale on catstrophic. 
They conduct several different experiments (for both ResNets and Vision Transformers) showing: - larger model size reduces forgetting for pretrained models - longer pretraining reduces forgetting - larger pretraining datasets reduce forgetting - a metric measuring overlap between model representations shows that pretraining results in more orthogonal representations between classes - the length of finetuning is less impactful on pretrained models than on randomly initialized models. They perform the majority of their experiments on a two-task, task-incremental Split CIFAR-10 setup. Some experiments were done on 2-task class-incremental and 10-task task-incremental Split CIFAR-100. They also conduct a brief set of experiments showing that scaling model size reduces forgetting for language models as well.", "role": "assistant" } ]
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: The minor G allele of FOXO3 is related to less severe symptoms of Crohn's Disease. Research Abstract: The clinical course and eventual outcome, or prognosis, of complex diseases varies enormously between affected individuals. This variability critically determines the impact a disease has on a patient's life but is very poorly understood. Here, we exploit existing genome-wide association study data to gain insight into the role of genetics in prognosis. We identify a noncoding polymorphism in FOXO3A (rs12212067: T > G) at which the minor (G) allele, despite not being associated with disease susceptibility, is associated with a milder course of Crohn's disease and rheumatoid arthritis and with increased risk of severe malaria. Minor allele carriage is shown to limit inflammatory responses in monocytes via a FOXO3-driven pathway, which through TGFβ1 reduces production of proinflammatory cytokines, including TNFα, and increases production of anti-inflammatory cytokines, including IL-10. Thus, we uncover a shared genetic contribution to prognosis in distinct diseases that operates via a FOXO3-driven pathway modulating inflammatory responses.
{"verdict": "SUPPORT", "evidence": ["We identify a noncoding polymorphism in FOXO3A (rs12212067: T > G) at which the minor (G) allele, despite not being associated with disease susceptibility, is associated with a milder course of Crohn's disease and rheumatoid arthritis and with increased risk of severe malaria."]}
science.scifact_entailment.803
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nThe minor G allele of FOXO3 is related to less severe symptoms of Crohn's Disease.\n\nResearch Abstract:\nThe clinical course and eventual outcome, or prognosis, of complex diseases varies enormously between affected individuals.\nThis variability critically determines the impact a disease has on a patient's life but is very poorly understood.\nHere, we exploit existing genome-wide association study data to gain insight into the role of genetics in prognosis.\nWe identify a noncoding polymorphism in FOXO3A (rs12212067: T > G) at which the minor (G) allele, despite not being associated with disease susceptibility, is associated with a milder course of Crohn's disease and rheumatoid arthritis and with increased risk of severe malaria.\nMinor allele carriage is shown to limit inflammatory responses in monocytes via a FOXO3-driven pathway, which through TGFβ1 reduces production of proinflammatory cytokines, including TNFα, and increases production of anti-inflammatory cytokines, including IL-10.\nThus, we uncover a shared genetic contribution to prognosis in distinct diseases that operates via a FOXO3-driven pathway modulating inflammatory responses.", "role": "user" }, { "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"We identify a noncoding polymorphism in FOXO3A (rs12212067: T > G) at which the minor (G) allele, despite not being associated with disease susceptibility, is associated with a milder course of Crohn's disease and rheumatoid arthritis and with increased risk of severe malaria.\"]}", "role": "assistant" } ]
science.biored_ner
Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]. Ensure the output is strictly the JSON object. Source Text: Pallidal stimulation: an alternative to pallidotomy? A resurgence of interest in the surgical treatment of Parkinson's disease (PD) came with the rediscovery of posteroventral pallidotomy by Laitinen in 1985. Laitinen's procedure improved most symptoms in drug-resistant PD, which engendered wide interest in the neurosurgical community. Another lesioning procedure, ventrolateral thalamotomy, has become a powerful alternative to stimulate the nucleus ventralis intermedius, producing high long-term success rates and low morbidity rates. Pallidal stimulation has not met with the same success. According to the literature pallidotomy improves the "on" symptoms of PD, such as dyskinesias, as well as the "off" symptoms, such as rigidity, bradykinesia, and on-off fluctuations. Pallidal stimulation improves bradykinesia and rigidity to a minor extent; however, its strength seems to be in improving levodopa-induced dyskinesias. Stimulation often produces an improvement in the hyper- or dyskinetic upper limbs, but increases the "freezing" phenomenon in the lower limbs at the same time. Considering the small increase in the patient's independence, the high costs of bilateral implants, and the difficulty most patients experience in handling the devices, the question arises as to whether bilateral pallidal stimulation is a real alternative to pallidotomy.
{"CellLine": [], "Chemical": ["levodopa"], "Disease": ["PD", "rigidity", "Parkinson\u0027s disease", "bradykinesia", "dyskinesias", "hyper- or dyskinetic"], "Gene": [], "Species": ["patient", "patients"], "Variant": []}
science.biored_ner.1
[ { "content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nPallidal stimulation: an alternative to pallidotomy? A resurgence of interest in the surgical treatment of Parkinson's disease (PD) came with the rediscovery of posteroventral pallidotomy by Laitinen in 1985. Laitinen's procedure improved most symptoms in drug-resistant PD, which engendered wide interest in the neurosurgical community. Another lesioning procedure, ventrolateral thalamotomy, has become a powerful alternative to stimulate the nucleus ventralis intermedius, producing high long-term success rates and low morbidity rates. Pallidal stimulation has not met with the same success. According to the literature pallidotomy improves the \"on\" symptoms of PD, such as dyskinesias, as well as the \"off\" symptoms, such as rigidity, bradykinesia, and on-off fluctuations. Pallidal stimulation improves bradykinesia and rigidity to a minor extent; however, its strength seems to be in improving levodopa-induced dyskinesias. Stimulation often produces an improvement in the hyper- or dyskinetic upper limbs, but increases the \"freezing\" phenomenon in the lower limbs at the same time. Considering the small increase in the patient's independence, the high costs of bilateral implants, and the difficulty most patients experience in handling the devices, the question arises as to whether bilateral pallidal stimulation is a real alternative to pallidotomy.", "role": "user" }, { "content": "{\"CellLine\": [], \"Chemical\": [\"levodopa\"], \"Disease\": [\"PD\", \"rigidity\", \"Parkinson\\u0027s disease\", \"bradykinesia\", \"dyskinesias\", \"hyper- or dyskinetic\"], \"Gene\": [], \"Species\": [\"patient\", \"patients\"], \"Variant\": []}", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: Constrained Physical-Statistics Models for Dynamical System Identification and Prediction Main Content: 1 INTRODUCTION . Dynamical systems prediction and identification find crucial applications ranging from medicine and the study of tumors ( Hanahan & Weinberg , 2011 ; Lu & Fei , 2014 ) to oceanic and climate forecasting ( Oreskes et al. , 1994 ; Caers , 2011 ) . The modeling of such systems traditionally rely on ordinary or partial differential equations ( ODE/PDE ) ( Madec , 2008 ; Marti et al. , 2010 ) , and their resolution via numerical solvers and data assimilation ( Ghil & Malanotte-Rizzoli , 1991 ) . In real world applications , two main pitfalls occur : first the dynamics may only be partially known and thus do not fully represent the studied phenomena ( Rio & Santoleri , 2018 ) ; second , the system state may only be partially observed as in ocean models ( Gaultier et al. , 2013 ) . Machine learning ( ML ) has become a complementary approach to traditional physics based models ( denoted MB for model based ) ( Reichstein et al. , 2019 ; Dueben & Bauer , 2018 ) . Both offer advantages : whereas MB approaches generalize and extrapolate better , ML high expressivity approaches benefit from the ongoing explosive growth of available data such as satellite observations , with reduced costs compared to classical data assimilation . In that perspective , recent lines of work tackle the learning of hybrid models relying on prior physical knowledge and machine learning ( Yin et al. , 2021 ; Mehta et al. , 2020 ) . Efficiently learning such decompositions actually means solving two different tasks : system identification , i.e . estimating the parameters of the physical model , and prediction , i.e . recovering the trajectories associated to the dynamics . Both are essential for hybrid MB/ML models of dynamical systems . Whereas prediction aims at robust extrapolation , identification accounts for physical interpretability of the MB/ML model . While solving both problems using model-based formulation admits well-known numerical solutions , for example using the adjoint method ( Le Dimet & Talagrand , 1986 ; Courtier et al. , 1994 ) , the combination of physical models and deep learning is still an open area of research . In this context , ML applications mainly focus on the prediction task , at the expense of the system identification : Willard et al . ( 2020 ) underlines the lack of generalizability of black-box ML models and their inability to produce physically sound results . Indeed , Ayed et al . ( 2020 ) show that without any prior knowledge , the recovered estimates of a dynamical system states are not physically plausible despite accurate predictions . Moreover , as noted by Yin et al . ( 2021 ) , learning a linear MB/ML decompo- sition with the sole supervision on the system trajectories is ill-posed and admits an infinite number of decompositions . Such observations highlight the need to incorporate physically motivated constraints in the learning of hybrid models , e.g . through regularization penalties , and several works propose additional constraints to guide the model towards physical solutions ( Jia et al. , 2019 ; Yin et al. , 2021 ; Linial et al. , 2021 ) . 
Finally , to complete prior dynamical knowledge with a data-driven component and ensure interpretability of the decomposition , we work out a principled framework that generalizes previous attempts in the regularization of hybrid models . Our contributions are : • In section 3.1 , we introduce a novel way to recover well-posedness and interpretability in the learning of hybrid MB/ML models through the control of an upper bound . We further extend our framework to incorporate auxiliary data when available to handle complex realworld problems . • In section 3.2 , we propose a novel alternate-optimization algorithm to learn hybrid models . • In section 3.3 , we provide an analysis of the convergence of the proposed algorithm on a simplified case and experimentally evidence the soundness of our approach on more complex settings of increasing difficulty including challenging real world problems ( section 4 ) . 2 BACKGROUND AND PROBLEM SETUP . We consider a dynamical system with state at time t denoted Zt = Z ( t ) . Zt might be fully or only partially observed : we write Zt = ( Xt , Yt ) , where Xt is the observed component and Yt the unobserved one . The evolution of Z is governed by a differential equation with dynamics : dZt dt = d dt ( Xt Yt ) = ( fX ( Zt ) fY ( Zt ) ) ( 1 ) The objective is to predict trajectories of X , i.e . to model the evolution of the observable part following dXtdt = fX ( Zt ) . For simplicity , we omit the index X in fX and write f ( . ) ∆ = fX ( . ) . Dynamical Hypothesis We assume partial knowledge of the dynamics of the observed Xt : dXt dt = f ( Zt ) = fk ( Zt ) + fu ( Zt ) ( 2 ) where fk ∈ Hk is a known operator with unknown parameters θ∗ , and fu ∈ Hu is the unknown residual dynamics . Hk and Hu denote function spaces . Note that the additive hypothesis in eq . ( 2 ) is not restrictive ( see discussion in appendix B ) . Learning Problem Our objective is to approximate f with a function h learned from the observed data . According to eq . ( 2 ) , we assume h = hk +hu . hk ∈ Hk belongs to the same hypothesis space as fk , i.e . it has the same parametric form . Its parameters are denoted θk . Note that hk ( . , θ∗ ) = fk . hu ∈ Hu is represented by a free form functional with parameters θu , e.g . a neural network . The learning problem is then to estimate from data the parameters of hk so that they match the true physical ones and those of hu so as to approximate at best the unknown dynamics f . In this regard , an intuitive training objective is to minimize a distance d between h = hk + hu and f : d ( h , f ) = EZ∼pZ‖h ( Z ) − f ( Z ) ‖2 , ( 3 ) where pZ is the distribution of the state values Z that accounts for varying initial states . Each Z then defines a training sample . Minimizing eq . ( 3 ) with h = hk + hu enables to predict accurate trajectories but may have an infinite number of solutions . For instance , hu may bypass the physical hypothesis hk . Thus , interpretability is not guaranteed . We develop in the following section a method to overcome this ill-posedness . 3 METHOD . In hybrid modeling , two criteria are essentials : 1. identifiability , i.e . the estimated parameters of hk should correspond to the true physical ones ; 2. prediction power , i.e . the statistical component hu should complete hk so that h = hk + hu performs accurate prediction over the system states . To control the contribution of each term hk and hu , we work upper bounds out of eq . ( 3 ) ( section 3.1 ) . 
We then propose to minimize d ( h , f ) while constraining the upper bounds , which provide us with a well-posed learning framework ( section 3.2 ) . Besides , we show that several previous works that introduced constrained optimization to solve related problems are specific cases of our formulation ( Yin et al. , 2021 ; Jia et al. , 2019 ; Linial et al. , 2021 ) . Finally , we introduce an alternate optimization algorithm which convergence is shown in section 3.3 for a linear approximation of f . 3.1 STRUCTURAL CONSTRAINTS FOR DYNAMICAL SYSTEMS . To ensure identifiability , we derive regularizations on hk and hu flowing from the control of an upper bound of d ( h , f ) . In particular , to minimize d ( hk , fk ) would enable us to accurately interpret hk as the true fk , and thus hu as the residual dynamics fu . However , since we do not access the parameters of fk , computing d ( hk , fk ) is not tractable . We then consider two possible situations . In the first one , the only available information on the physical system is the parametric form of fk ( or equivalently of hk ) , training thus only relies on observed trajectories ( section 3.1.1 ) . In the second one , we consider that auxiliary information about fk is available and will be used to minimize the distance between hk and fk ( section 3.1.2 ) . While the first setting is the more general , the physical prior it relies on is often insufficient to effectively handle real world situations . The second setting makes use of more informative priors and better corresponds to real cases as will be shown in the experimental section ( see section 4.2 ) . 3.1.1 CONTROLLING THE ML COMPONENT AND THE MB HYPOTHESIS . We propose a general approach to constrain the learning of hybrid models when one solely access the functional form of hk . In this case , to make hk accountable in our observed phenomena , a solution is to minimize d ( hk , f ) . Following the triangle inequality we link up both errors d ( h , f ) and d ( hk , f ) ( computations available in appendix C.1 ) : d ( h , f ) ≤ d ( h , hk ) + d ( hk , f ) = d ( hu , 0 ) + d ( hk , f ) ( 4 ) We want the physical-statistical model h = hk + hu to provide high quality forecasts . Minimizing the sole upper bound does not ensure such aim , as hu is only penalized through d ( hu , 0 ) and is not optimized to contribute to predictions . We thus propose to minimize d ( h , f ) while controlling both d ( hu , 0 ) and d ( hk , f ) . Such a control of the upper bound of eq . ( 4 ) amounts to balancing the contribution of the ML and the MB components . This will be formally introduced in section 3.2 . Link to the Literature The least action principle on the ML component i.e . constraining d ( hu , 0 ) is invoked for a geometric argument in ( Yin et al. , 2021 ) , and appears as a co-product of the introduction of d ( hk , f ) in eq . ( 4 ) . Optimizing d ( hk , f ) to match the physical model with observations is investigated in ( Forssell & Lindskog , 1997 ) . The general approach of eq . ( 4 ) allows us to perform prediction ( via h ) and system identification ( via hk ) on simple problems ( see section 4.1 ) . The learning of real-world complex dynamics , via data-driven hybrid models , often fails at yielding a physically sound estimation , as illustrated in section 4.2 . This suggests that learning complex dynamics requires additional information . In many real-world cases , auxiliary information is available in the form of measurements providing complementary information on fk . 
Indeed , a common issue in physics is to infer an unobserved variable of interest ( in our case the fk parameters θ∗ ) from indirect or noisy measurements that we refer to as proxy data . For instance , one can access a physical quantity but only at a coarse resolution , as in ( Um et al. , 2020 ; Belbute-Peres et al. , 2020 ) and in the real world example detailed in section 4.2 . We show in the next subsection how to incorporate such information in order to approximate d ( hk , fk ) .
This paper studies the identification and prediction of dynamical systems from observed data by combining statistical approaches with physical priors. The main idea is to decompose the approximation of the unknown dynamics into two parts: a parametric physics-based model $h_k$ (used to incorporate prior knowledge, or to identify parameters), and a non-parametric statistical model $h_u$ (used to fit the unknown, or unobserved, degrees of freedom). Since the statistical part has a priori no restrictions, the overall problem is ill-posed in the sense that the statistical part can fit all of the dynamics. To ensure well-posedness, the authors propose to convert the joint learning problem into learning $h_k$ and $h_u$ subject to constraints: $h_k$ is given an allowed error tolerance and $h_u$ is given an allowed deviation from 0. The authors show that this modification implies well-posedness, and subsequently design an alternating minimization algorithm to solve for $h_k$ and $h_u$ iteratively. It is proved that this algorithm converges in the affine setting (via sequential projection to find the intersection of convex sets), and it is demonstrated experimentally in low- and high-dimensional systems that the method works well in practice outside of the convex setting.
science.mup_single_document_summarization.9270
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nConstrained Physical-Statistics Models for Dynamical System Identification and Prediction\n\nMain Content:\n1 INTRODUCTION . Dynamical systems prediction and identification find crucial applications ranging from medicine and the study of tumors ( Hanahan & Weinberg , 2011 ; Lu & Fei , 2014 ) to oceanic and climate forecasting ( Oreskes et al. , 1994 ; Caers , 2011 ) . The modeling of such systems traditionally rely on ordinary or partial differential equations ( ODE/PDE ) ( Madec , 2008 ; Marti et al. , 2010 ) , and their resolution via numerical solvers and data assimilation ( Ghil & Malanotte-Rizzoli , 1991 ) . In real world applications , two main pitfalls occur : first the dynamics may only be partially known and thus do not fully represent the studied phenomena ( Rio & Santoleri , 2018 ) ; second , the system state may only be partially observed as in ocean models ( Gaultier et al. , 2013 ) . Machine learning ( ML ) has become a complementary approach to traditional physics based models ( denoted MB for model based ) ( Reichstein et al. , 2019 ; Dueben & Bauer , 2018 ) . Both offer advantages : whereas MB approaches generalize and extrapolate better , ML high expressivity approaches benefit from the ongoing explosive growth of available data such as satellite observations , with reduced costs compared to classical data assimilation . In that perspective , recent lines of work tackle the learning of hybrid models relying on prior physical knowledge and machine learning ( Yin et al. , 2021 ; Mehta et al. , 2020 ) . Efficiently learning such decompositions actually means solving two different tasks : system identification , i.e . estimating the parameters of the physical model , and prediction , i.e . recovering the trajectories associated to the dynamics . Both are essential for hybrid MB/ML models of dynamical systems . Whereas prediction aims at robust extrapolation , identification accounts for physical interpretability of the MB/ML model . While solving both problems using model-based formulation admits well-known numerical solutions , for example using the adjoint method ( Le Dimet & Talagrand , 1986 ; Courtier et al. , 1994 ) , the combination of physical models and deep learning is still an open area of research . In this context , ML applications mainly focus on the prediction task , at the expense of the system identification : Willard et al . ( 2020 ) underlines the lack of generalizability of black-box ML models and their inability to produce physically sound results . Indeed , Ayed et al . ( 2020 ) show that without any prior knowledge , the recovered estimates of a dynamical system states are not physically plausible despite accurate predictions . Moreover , as noted by Yin et al . ( 2021 ) , learning a linear MB/ML decompo- sition with the sole supervision on the system trajectories is ill-posed and admits an infinite number of decompositions . Such observations highlight the need to incorporate physically motivated constraints in the learning of hybrid models , e.g . through regularization penalties , and several works propose additional constraints to guide the model towards physical solutions ( Jia et al. , 2019 ; Yin et al. , 2021 ; Linial et al. , 2021 ) . 
Finally , to complete prior dynamical knowledge with a data-driven component and ensure interpretability of the decomposition , we work out a principled framework that generalizes previous attempts in the regularization of hybrid models . Our contributions are : • In section 3.1 , we introduce a novel way to recover well-posedness and interpretability in the learning of hybrid MB/ML models through the control of an upper bound . We further extend our framework to incorporate auxiliary data when available to handle complex realworld problems . • In section 3.2 , we propose a novel alternate-optimization algorithm to learn hybrid models . • In section 3.3 , we provide an analysis of the convergence of the proposed algorithm on a simplified case and experimentally evidence the soundness of our approach on more complex settings of increasing difficulty including challenging real world problems ( section 4 ) . 2 BACKGROUND AND PROBLEM SETUP . We consider a dynamical system with state at time t denoted Zt = Z ( t ) . Zt might be fully or only partially observed : we write Zt = ( Xt , Yt ) , where Xt is the observed component and Yt the unobserved one . The evolution of Z is governed by a differential equation with dynamics : dZt dt = d dt ( Xt Yt ) = ( fX ( Zt ) fY ( Zt ) ) ( 1 ) The objective is to predict trajectories of X , i.e . to model the evolution of the observable part following dXtdt = fX ( Zt ) . For simplicity , we omit the index X in fX and write f ( . ) ∆ = fX ( . ) . Dynamical Hypothesis We assume partial knowledge of the dynamics of the observed Xt : dXt dt = f ( Zt ) = fk ( Zt ) + fu ( Zt ) ( 2 ) where fk ∈ Hk is a known operator with unknown parameters θ∗ , and fu ∈ Hu is the unknown residual dynamics . Hk and Hu denote function spaces . Note that the additive hypothesis in eq . ( 2 ) is not restrictive ( see discussion in appendix B ) . Learning Problem Our objective is to approximate f with a function h learned from the observed data . According to eq . ( 2 ) , we assume h = hk +hu . hk ∈ Hk belongs to the same hypothesis space as fk , i.e . it has the same parametric form . Its parameters are denoted θk . Note that hk ( . , θ∗ ) = fk . hu ∈ Hu is represented by a free form functional with parameters θu , e.g . a neural network . The learning problem is then to estimate from data the parameters of hk so that they match the true physical ones and those of hu so as to approximate at best the unknown dynamics f . In this regard , an intuitive training objective is to minimize a distance d between h = hk + hu and f : d ( h , f ) = EZ∼pZ‖h ( Z ) − f ( Z ) ‖2 , ( 3 ) where pZ is the distribution of the state values Z that accounts for varying initial states . Each Z then defines a training sample . Minimizing eq . ( 3 ) with h = hk + hu enables to predict accurate trajectories but may have an infinite number of solutions . For instance , hu may bypass the physical hypothesis hk . Thus , interpretability is not guaranteed . We develop in the following section a method to overcome this ill-posedness . 3 METHOD . In hybrid modeling , two criteria are essentials : 1. identifiability , i.e . the estimated parameters of hk should correspond to the true physical ones ; 2. prediction power , i.e . the statistical component hu should complete hk so that h = hk + hu performs accurate prediction over the system states . To control the contribution of each term hk and hu , we work upper bounds out of eq . ( 3 ) ( section 3.1 ) . 
We then propose to minimize d ( h , f ) while constraining the upper bounds , which provide us with a well-posed learning framework ( section 3.2 ) . Besides , we show that several previous works that introduced constrained optimization to solve related problems are specific cases of our formulation ( Yin et al. , 2021 ; Jia et al. , 2019 ; Linial et al. , 2021 ) . Finally , we introduce an alternate optimization algorithm which convergence is shown in section 3.3 for a linear approximation of f . 3.1 STRUCTURAL CONSTRAINTS FOR DYNAMICAL SYSTEMS . To ensure identifiability , we derive regularizations on hk and hu flowing from the control of an upper bound of d ( h , f ) . In particular , to minimize d ( hk , fk ) would enable us to accurately interpret hk as the true fk , and thus hu as the residual dynamics fu . However , since we do not access the parameters of fk , computing d ( hk , fk ) is not tractable . We then consider two possible situations . In the first one , the only available information on the physical system is the parametric form of fk ( or equivalently of hk ) , training thus only relies on observed trajectories ( section 3.1.1 ) . In the second one , we consider that auxiliary information about fk is available and will be used to minimize the distance between hk and fk ( section 3.1.2 ) . While the first setting is the more general , the physical prior it relies on is often insufficient to effectively handle real world situations . The second setting makes use of more informative priors and better corresponds to real cases as will be shown in the experimental section ( see section 4.2 ) . 3.1.1 CONTROLLING THE ML COMPONENT AND THE MB HYPOTHESIS . We propose a general approach to constrain the learning of hybrid models when one solely access the functional form of hk . In this case , to make hk accountable in our observed phenomena , a solution is to minimize d ( hk , f ) . Following the triangle inequality we link up both errors d ( h , f ) and d ( hk , f ) ( computations available in appendix C.1 ) : d ( h , f ) ≤ d ( h , hk ) + d ( hk , f ) = d ( hu , 0 ) + d ( hk , f ) ( 4 ) We want the physical-statistical model h = hk + hu to provide high quality forecasts . Minimizing the sole upper bound does not ensure such aim , as hu is only penalized through d ( hu , 0 ) and is not optimized to contribute to predictions . We thus propose to minimize d ( h , f ) while controlling both d ( hu , 0 ) and d ( hk , f ) . Such a control of the upper bound of eq . ( 4 ) amounts to balancing the contribution of the ML and the MB components . This will be formally introduced in section 3.2 . Link to the Literature The least action principle on the ML component i.e . constraining d ( hu , 0 ) is invoked for a geometric argument in ( Yin et al. , 2021 ) , and appears as a co-product of the introduction of d ( hk , f ) in eq . ( 4 ) . Optimizing d ( hk , f ) to match the physical model with observations is investigated in ( Forssell & Lindskog , 1997 ) . The general approach of eq . ( 4 ) allows us to perform prediction ( via h ) and system identification ( via hk ) on simple problems ( see section 4.1 ) . The learning of real-world complex dynamics , via data-driven hybrid models , often fails at yielding a physically sound estimation , as illustrated in section 4.2 . This suggests that learning complex dynamics requires additional information . In many real-world cases , auxiliary information is available in the form of measurements providing complementary information on fk . 
Indeed , a common issue in physics is to infer an unobserved variable of interest ( in our case the fk parameters θ∗ ) from indirect or noisy measurements that we refer to as proxy data . For instance , one can access a physical quantity but only at a coarse resolution , as in ( Um et al. , 2020 ; Belbute-Peres et al. , 2020 ) and in the real world example detailed in section 4.2 . We show in the next subsection how to incorporate such information in order to approximate d ( hk , fk ) .", "role": "user" }, { "content": "This paper studies the identification and prediction of dynamical systems from observed data by combining statistical approaches with physical priors. The main idea is to decompose the approximation of the unknown dynamics into two parts: a parametric physics-based model $h_k$ (used to incorporate prior knowledge, or to identify parameters), and a non-parametric statistical model $h_u$ (used to fit the unknown, or unobserved, degrees of freedom). Since the statistical part has a priori no restrictions, the overall problem is ill-posed in the sense that the statistical part can fit all of the dynamics. To ensure well-posedness, the authors propose to convert the joint learning problem into learning $h_k$ and $h_u$ subject to constraints: $h_k$ is given an allowed error tolerance and $h_u$ is given an allowed deviation from 0. The authors show that this modification implies well-posedness, and subsequently design an alternating minimization algorithm to solve for $h_k$ and $h_u$ iteratively. It is proved that this algorithm converges in the affine setting (via sequential projection to find the intersection of convex sets), and it is demonstrated experimentally in low- and high-dimensional systems that the method works well in practice outside of the convex setting.", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: The Deep Weight Prior Main Content: 1 INTRODUCTION . Bayesian inference is a tool that , after observing training data , allows to transforms a prior distribution over parameters of a machine learning model to a posterior distribution . Recently , stochastic variational inference ( Hoffman et al. , 2013 ) – a method for approximate Bayesian inference – has been successfully adopted to obtain a variational approximation of a posterior distribution over weights of a deep neural network ( Kingma et al. , 2015 ) . Currently , there are two major directions for the development of Bayesian deep learning . The first direction can be summarized as the improvement of approximate inference with richer variational approximations and tighter variational bounds ( Dikmen et al. , 2015 ) . The second direction is the design of probabilistic models , in particular , prior distributions , that widen the scope of applicability of the Bayesian approach . Prior distributions play an important role for sparsification ( Molchanov et al. , 2017 ; Neklyudov et al. , 2017 ) , quantization ( Ullrich et al. , 2017 ) and compression ( Louizos et al. , 2017 ; Federici et al. , 2017 ) of deep learning models . Although these prior distributions proved to be helpful , they are limited to fully-factorized structure . Thus , the often observed spatial structure of convolutional filters can not be enforced with such priors . Convolutional neural networks are an example of the model family , where a correlation of the weights plays an important role , thus it may benefit from more flexible prior distributions . Convolutional neural networks are known to learn similar convolutional kernels on different datasets from similar domains ( Sharif Razavian et al. , 2014 ; Yosinski et al. , 2014 ) . Based on this fact , within ∗Equal contribution a specific data domain , we consider a distribution of convolution kernels of trained convolutional networks . In the rest of the paper , we refer to this distribution as the source kernel distribution . Our main assumption is that within a specific domain the source kernel distribution can be efficiently approximated with convolutional kernels of models that were trained on a small subset of problems from this domain . For example , given a specific architecture , we expect that kernels of a model trained on notMNIST dataset – a dataset of grayscale images – come from the same distribution as kernels of the model trained on MNIST dataset . In this work , we propose a method that estimates the source kernel distribution in an implicit form and allows us to perform variational inference with the specific type of implicit priors . Our contributions can be summarized as follows : 1 . We propose deep weight prior , a framework that approximates the source kernel distribution and incorporates prior knowledge about the structure of convolutional filters into the prior distribution . We also propose to use an implicit form of this prior ( Section 3.1 ) . 2 . We develop a method for variational inference with the proposed type of implicit priors ( Section 3.2 ) . 3 . In experiments ( Section 4 ) , we show that variational inference with deep weight prior significantly improves classification performance upon a number of popular prior distributions in the case of limited training data . 
We also find that initialization of conventional convolution networks with samples from a deep weight prior leads to faster convergence and better feature extraction without training i.e. , using random weights . 2 DEEP BAYES . In Bayesian setting , after observing a dataset D = { x1 , . . . , xN } of N points , the goal is to transform our prior knowledge p ( ω ) of the unobserved distribution parameters ω to the posterior distribution p ( ω | D ) . However , computing the posterior distribution through Bayes rule p ( ω | D ) = p ( D |ω ) p ( ω ) /p ( D ) may involve computationally intractable integrals . This problem , nonetheless , can be solved approximately . Variational Inference ( Jordan et al. , 1999 ) is one of such approximation methods . It reduces the inference to an optimization problem , where we optimize parameters θ of a variational approximation qθ ( ω ) , so that KL-divergence between qθ ( ω ) and p ( ω | D ) is minimized . This divergence in practice is minimized by maximizing the variational lower bound L ( θ ) of the marginal log-likelihood of the data w.r.t parameters θ of the variational approximation qθ ( W ) . L ( θ ) = LD −DKL ( qθ ( ω ) ‖p ( ω ) ) → max θ ( 1 ) where LD = Eqθ ( ω ) log p ( D |ω ) ( 2 ) The variational lower bound L ( θ ) consists of two terms : 1 ) the ( conditional ) expected log likelihood LD , and 2 ) the regularizerDKL ( qθ ( ω ) ‖p ( ω ) ) . Since log p ( D ) = L ( θ ) +DKL ( qθ ( ω ) ‖p ( ω | D ) ) and p ( D ) does not depend on qθ ( w ) maximizing of L ( θ ) minimizes DKL ( qθ ( ω ) ‖p ( ω | D ) ) . However , in case of intractable expectations in equation 1 neither the variational lower bound L ( θ ) nor its gradients can be computed in a closed form . Recently , Kingma & Welling ( 2013 ) and Rezende et al . ( 2014 ) proposed an efficient mini-batch based approach to stochastic variational inference , so-called stochastic gradient variational Bayes or doubly stochastic variational inference . The idea behind this framework is reparamtetrization , that represents samples from a parametric distribution qθ ( ω ) as a deterministic differentiable function ω = f ( θ , ) of parameters θ and an ( auxiliary ) noise variable ∼ p ( ) . Using this trick we can efficiently compute an unbiased stochastic gradient ∇θL of the variational lower bound w.r.t the parameters of the variational approximation . Bayesian Neural Networks . The stochastic gradient variational Bayes framework has been applied to approximate posterior distributions over parameters of deep neural networks ( Kingma et al. , 2015 ) . We consider a discriminative problem , where dataset D consists of N object-label pairs D = { ( xi , yi ) } Ni=1 . For this problem we maximize the variational lower bound L ( θ ) with respect to parameters θ of a variational approximation qθ ( W ) : L ( θ ) = N∑ i=1 Eqθ ( W ) log p ( yi |xi , W ) −DKL ( qθ ( W ) ‖p ( W ) ) → max θ ( 3 ) where W denotes weights of a neural network , qθ ( W ) is a variational distribution , that allows reparametrization ( Kingma & Welling , 2013 ; Figurnov et al. , 2018 ) and p ( W ) is a prior distribution . In the simplest case qθ ( W ) can be a fully-factorized normal distribution . However , more expressive variational approximations may lead to better quality of variational inference ( Louizos & Welling , 2017 ; Yin & Zhou , 2018 ) . Typically , Bayesian neural networks use fully-factorized normal or log-uniform priors ( Kingma et al. , 2015 ; Molchanov et al. , 2017 ; Louizos & Welling , 2017 ) . 
Variational Auto-encoder . Stochastic gradient variational Bayes has also been applied for building generative models . The variational auto-encoder proposed by Kingma & Welling ( 2013 ) maximizes a variational lower bound L ( θ , φ ) on the marginal log-likelihood by amortized variational inference : L ( θ , φ ) = N∑ i=1 Eqθ ( zi | xi ) log pφ ( xi | zi ) −DKL ( qθ ( zi |xi ) ‖p ( zi ) ) → max θ , φ , ( 4 ) where an inference model qθ ( zi |xi ) approximates the posterior distribution over local latent variables zi , reconstruction model pφ ( xi | zi ) transforms the distribution over latent variables to a conditional distribution in object space and a prior distribution over latent variables p ( zi ) . The vanilla VAE defines qθ ( z |x ) , pφ ( x | z ) , p ( z ) as fully-factorized distributions , however , a number of richer variational approximations and prior distributions have been proposed ( Rezende & Mohamed , 2015 ; Kingma et al. , 2016 ; Tomczak & Welling , 2017 ) . The approximation of the data distribution can then be defined as an intractable integral p ( x ) ≈ ∫ pφ ( x | z ) p ( z ) dz which we will refer to as an implicit distribution . 3 DEEP WEIGHT PRIOR . In this section , we introduce the deep weight prior – an expressive prior distribution that is based on generative models . This prior distribution allows us to encode and favor the structure of learned convolutional filters . We consider a neural network with L convolutional layers and denote parameters of l-th convolutional layer as wl ∈ RIl×Ol×Hl×Wl , where Il is the number of input channels , Ol is the number of output channels , Hl and Wl are spatial dimensions of kernels . Parameters of the neural network are denoted as W = ( w1 , . . . wL ) . A variational approximation qθ ( W ) and a prior distribution p ( W ) have the following factorization over layers , filters and channels : qθ ( W ) = L∏ l=1 Il∏ i=1 Ol∏ j=1 q ( wlij | θlij ) p ( W ) = L∏ l=1 Il∏ i=1 Ol∏ j=1 pl ( w l ij ) , ( 5 ) where wlij ∈ RHl×Wl is a kernel of j-th channel in i-th filter of l-th convolutional layer . We also assume that qθ ( W ) allows reparametrization . The prior distribution p ( W ) , in contrast to popular prior distributions , is not factorized over spatial dimensions of the filters Hl , Wl . For a specific data domain and architecture , we define the source kernel distribution – the distribution of trained convolutional kernels of the l-th convolutional layer . The source kernel distribution favors learned kernels , and thus it is a very natural candidate to be the prior distribution pl ( wlij ) for convolutional kernels of the l-th layer . Unfortunately , we do not have access to its probability density function ( p.d.f . ) , that is needed for most approximate inference methods e.g. , variational inference . Therefore , we assume that the p.d.f . of the source kernel distribution can be approximated using kernels of models trained on external datasets from the same domain . For example , given a specific architecture , we expect that kernels of a model trained on CIFAR-100 dataset come from the same distribution as kernels of the model trained on CIFAR-10 dataset . In other words , the p.d.f . of the source kernel distribution can be approximated using a small subset of problems from a specific data domain . In the next subsection , we propose to approximate this intractable probability density function of the source kernel distribution using the framework of generative models . 
Algorithm 1 Stochastic Variational Inference With Implicit Prior Distribution
Require : dataset D = { ( xi , yi ) } Ni=1
Require : variational approximations q ( w | θlij ) and reverse models r ( z | w ; ψl )
Require : reconstruction models p ( w | z ; φl ) , priors for auxiliary variables pl ( z )
while not converged do
  M̂ ← mini-batch of objects from dataset D
  ŵlij ← sample weights from q ( w | θlij ) with reparametrization
  ẑlij ← sample auxiliary variables from r ( z | ŵlij ; ψl ) with reparametrization
  L̂aux ← LM̂ + ∑ l , i , j [ − log q ( ŵlij | θlij ) − log r ( ẑlij | ŵlij ; ψl ) + log pl ( ẑlij ) + log p ( ŵlij | ẑlij ; φl ) ]
  Obtain unbiased estimate ĝ with E [ ĝ ] = ∇Laux by differentiating L̂aux
  Update parameters θ and ψ using gradient ĝ and a stochastic optimization algorithm
end while
return Parameters θ , ψ
This paper considers learning informative priors for convolutional neural network models based on fits to datasets from similar problem domains. Using networks trained on related datasets, the authors fit autoencoders to the learned convolutional filters to obtain an expressive prior on the filter weights, with independence assumed between different layers. The resulting prior is generative and its density has no closed-form expression, and a novel variational method for dealing with this is described. Some empirical comparisons of the deep weight prior with alternative priors are considered, as well as a comparison of initialization from deep weight prior samples with alternative initialization schemes.
science.mup_single_document_summarization.10692
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nThe Deep Weight Prior\n\nMain Content:\n1 INTRODUCTION . Bayesian inference is a tool that , after observing training data , allows to transforms a prior distribution over parameters of a machine learning model to a posterior distribution . Recently , stochastic variational inference ( Hoffman et al. , 2013 ) – a method for approximate Bayesian inference – has been successfully adopted to obtain a variational approximation of a posterior distribution over weights of a deep neural network ( Kingma et al. , 2015 ) . Currently , there are two major directions for the development of Bayesian deep learning . The first direction can be summarized as the improvement of approximate inference with richer variational approximations and tighter variational bounds ( Dikmen et al. , 2015 ) . The second direction is the design of probabilistic models , in particular , prior distributions , that widen the scope of applicability of the Bayesian approach . Prior distributions play an important role for sparsification ( Molchanov et al. , 2017 ; Neklyudov et al. , 2017 ) , quantization ( Ullrich et al. , 2017 ) and compression ( Louizos et al. , 2017 ; Federici et al. , 2017 ) of deep learning models . Although these prior distributions proved to be helpful , they are limited to fully-factorized structure . Thus , the often observed spatial structure of convolutional filters can not be enforced with such priors . Convolutional neural networks are an example of the model family , where a correlation of the weights plays an important role , thus it may benefit from more flexible prior distributions . Convolutional neural networks are known to learn similar convolutional kernels on different datasets from similar domains ( Sharif Razavian et al. , 2014 ; Yosinski et al. , 2014 ) . Based on this fact , within ∗Equal contribution a specific data domain , we consider a distribution of convolution kernels of trained convolutional networks . In the rest of the paper , we refer to this distribution as the source kernel distribution . Our main assumption is that within a specific domain the source kernel distribution can be efficiently approximated with convolutional kernels of models that were trained on a small subset of problems from this domain . For example , given a specific architecture , we expect that kernels of a model trained on notMNIST dataset – a dataset of grayscale images – come from the same distribution as kernels of the model trained on MNIST dataset . In this work , we propose a method that estimates the source kernel distribution in an implicit form and allows us to perform variational inference with the specific type of implicit priors . Our contributions can be summarized as follows : 1 . We propose deep weight prior , a framework that approximates the source kernel distribution and incorporates prior knowledge about the structure of convolutional filters into the prior distribution . We also propose to use an implicit form of this prior ( Section 3.1 ) . 2 . We develop a method for variational inference with the proposed type of implicit priors ( Section 3.2 ) . 3 . In experiments ( Section 4 ) , we show that variational inference with deep weight prior significantly improves classification performance upon a number of popular prior distributions in the case of limited training data . 
We also find that initialization of conventional convolution networks with samples from a deep weight prior leads to faster convergence and better feature extraction without training i.e. , using random weights . 2 DEEP BAYES . In Bayesian setting , after observing a dataset D = { x1 , . . . , xN } of N points , the goal is to transform our prior knowledge p ( ω ) of the unobserved distribution parameters ω to the posterior distribution p ( ω | D ) . However , computing the posterior distribution through Bayes rule p ( ω | D ) = p ( D |ω ) p ( ω ) /p ( D ) may involve computationally intractable integrals . This problem , nonetheless , can be solved approximately . Variational Inference ( Jordan et al. , 1999 ) is one of such approximation methods . It reduces the inference to an optimization problem , where we optimize parameters θ of a variational approximation qθ ( ω ) , so that KL-divergence between qθ ( ω ) and p ( ω | D ) is minimized . This divergence in practice is minimized by maximizing the variational lower bound L ( θ ) of the marginal log-likelihood of the data w.r.t parameters θ of the variational approximation qθ ( W ) . L ( θ ) = LD −DKL ( qθ ( ω ) ‖p ( ω ) ) → max θ ( 1 ) where LD = Eqθ ( ω ) log p ( D |ω ) ( 2 ) The variational lower bound L ( θ ) consists of two terms : 1 ) the ( conditional ) expected log likelihood LD , and 2 ) the regularizerDKL ( qθ ( ω ) ‖p ( ω ) ) . Since log p ( D ) = L ( θ ) +DKL ( qθ ( ω ) ‖p ( ω | D ) ) and p ( D ) does not depend on qθ ( w ) maximizing of L ( θ ) minimizes DKL ( qθ ( ω ) ‖p ( ω | D ) ) . However , in case of intractable expectations in equation 1 neither the variational lower bound L ( θ ) nor its gradients can be computed in a closed form . Recently , Kingma & Welling ( 2013 ) and Rezende et al . ( 2014 ) proposed an efficient mini-batch based approach to stochastic variational inference , so-called stochastic gradient variational Bayes or doubly stochastic variational inference . The idea behind this framework is reparamtetrization , that represents samples from a parametric distribution qθ ( ω ) as a deterministic differentiable function ω = f ( θ , ) of parameters θ and an ( auxiliary ) noise variable ∼ p ( ) . Using this trick we can efficiently compute an unbiased stochastic gradient ∇θL of the variational lower bound w.r.t the parameters of the variational approximation . Bayesian Neural Networks . The stochastic gradient variational Bayes framework has been applied to approximate posterior distributions over parameters of deep neural networks ( Kingma et al. , 2015 ) . We consider a discriminative problem , where dataset D consists of N object-label pairs D = { ( xi , yi ) } Ni=1 . For this problem we maximize the variational lower bound L ( θ ) with respect to parameters θ of a variational approximation qθ ( W ) : L ( θ ) = N∑ i=1 Eqθ ( W ) log p ( yi |xi , W ) −DKL ( qθ ( W ) ‖p ( W ) ) → max θ ( 3 ) where W denotes weights of a neural network , qθ ( W ) is a variational distribution , that allows reparametrization ( Kingma & Welling , 2013 ; Figurnov et al. , 2018 ) and p ( W ) is a prior distribution . In the simplest case qθ ( W ) can be a fully-factorized normal distribution . However , more expressive variational approximations may lead to better quality of variational inference ( Louizos & Welling , 2017 ; Yin & Zhou , 2018 ) . Typically , Bayesian neural networks use fully-factorized normal or log-uniform priors ( Kingma et al. , 2015 ; Molchanov et al. , 2017 ; Louizos & Welling , 2017 ) . 
Variational Auto-encoder . Stochastic gradient variational Bayes has also been applied for building generative models . The variational auto-encoder proposed by Kingma & Welling ( 2013 ) maximizes a variational lower bound L ( θ , φ ) on the marginal log-likelihood by amortized variational inference : L ( θ , φ ) = N∑ i=1 Eqθ ( zi | xi ) log pφ ( xi | zi ) −DKL ( qθ ( zi |xi ) ‖p ( zi ) ) → max θ , φ , ( 4 ) where an inference model qθ ( zi |xi ) approximates the posterior distribution over local latent variables zi , reconstruction model pφ ( xi | zi ) transforms the distribution over latent variables to a conditional distribution in object space and a prior distribution over latent variables p ( zi ) . The vanilla VAE defines qθ ( z |x ) , pφ ( x | z ) , p ( z ) as fully-factorized distributions , however , a number of richer variational approximations and prior distributions have been proposed ( Rezende & Mohamed , 2015 ; Kingma et al. , 2016 ; Tomczak & Welling , 2017 ) . The approximation of the data distribution can then be defined as an intractable integral p ( x ) ≈ ∫ pφ ( x | z ) p ( z ) dz which we will refer to as an implicit distribution . 3 DEEP WEIGHT PRIOR . In this section , we introduce the deep weight prior – an expressive prior distribution that is based on generative models . This prior distribution allows us to encode and favor the structure of learned convolutional filters . We consider a neural network with L convolutional layers and denote parameters of l-th convolutional layer as wl ∈ RIl×Ol×Hl×Wl , where Il is the number of input channels , Ol is the number of output channels , Hl and Wl are spatial dimensions of kernels . Parameters of the neural network are denoted as W = ( w1 , . . . wL ) . A variational approximation qθ ( W ) and a prior distribution p ( W ) have the following factorization over layers , filters and channels : qθ ( W ) = L∏ l=1 Il∏ i=1 Ol∏ j=1 q ( wlij | θlij ) p ( W ) = L∏ l=1 Il∏ i=1 Ol∏ j=1 pl ( w l ij ) , ( 5 ) where wlij ∈ RHl×Wl is a kernel of j-th channel in i-th filter of l-th convolutional layer . We also assume that qθ ( W ) allows reparametrization . The prior distribution p ( W ) , in contrast to popular prior distributions , is not factorized over spatial dimensions of the filters Hl , Wl . For a specific data domain and architecture , we define the source kernel distribution – the distribution of trained convolutional kernels of the l-th convolutional layer . The source kernel distribution favors learned kernels , and thus it is a very natural candidate to be the prior distribution pl ( wlij ) for convolutional kernels of the l-th layer . Unfortunately , we do not have access to its probability density function ( p.d.f . ) , that is needed for most approximate inference methods e.g. , variational inference . Therefore , we assume that the p.d.f . of the source kernel distribution can be approximated using kernels of models trained on external datasets from the same domain . For example , given a specific architecture , we expect that kernels of a model trained on CIFAR-100 dataset come from the same distribution as kernels of the model trained on CIFAR-10 dataset . In other words , the p.d.f . of the source kernel distribution can be approximated using a small subset of problems from a specific data domain . In the next subsection , we propose to approximate this intractable probability density function of the source kernel distribution using the framework of generative models . 
Algorithm 1 Stochastic Variational Inference With Implicit Prior Distribution Require : dataset D = { ( xi , yi ) } Ni=1 Require : variational approximations q ( w | θlij ) and reverse models r ( z |w ; ψl ) Require : reconstruction models p ( w | z ; φl ) , priors for auxiliary variables pl ( z ) while not converged do M̂ ← mini-batch of objects form dataset D ŵlij ← sample weights from q ( w|θlij ) with reparametrization ẑlij ← sample auxiliary variables from r ( z | ŵlij ; ψl ) with reparametrization L̂aux ← LM̂+ ∑ l , i , j − log q ( ŵlij | θlij ) −log r ( ẑlij | ŵlij ; ψl ) +log pl ( ẑlij ) +log p ( ŵlij | ẑlij ; φl ) Obtain unbiased estimate ĝ with E [ ĝ ] = ∇Laux by differentiating L̂aux Update parameters θ and ψ using gradient ĝ and a stochastic optimization algorithm end while return Parameters θ , ψ", "role": "user" }, { "content": "This paper considers learning informative priors for convolutional neural network models based on fits to data sets from similar problem domains. For trained networks on related datasets the authors use autoencoders to obtain an expressive prior on the filter weights, with independence assumed between different layers. The resulting prior is generative and its density has no closed form expression, and a novel variational method for dealing with this is described. Some empirical comparisons of the deep weight prior with alternative priors is considered, as well as a comparison of deep weight samples for initialization with alternative initialization schemes.", "role": "assistant" } ]
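Editor's aside (not part of the dataset row above): the Deep Weight Prior input just quoted walks through the variational lower bound, the reparametrization trick, and Algorithm 1 for variational inference with an implicit prior. The sketch below is a minimal, hypothetical illustration of only the first two ingredients — a Monte Carlo ELBO estimate for a factorized Gaussian posterior under an explicit standard-normal prior on a toy regression problem. It is not the paper's method (which uses a learned implicit prior with auxiliary reverse and reconstruction models), and every name, shape, and constant here is an assumption chosen for illustration.

```python
# Hypothetical sketch: Monte Carlo ELBO with the reparametrization trick for a
# factorized Gaussian q(w) = N(mu, sigma^2) and an explicit prior p(w) = N(0, 1).
# Mirrors the L(theta) = L_D - KL(q || p) objective quoted above, NOT the implicit
# deep weight prior itself.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data standing in for the dataset D = {(x_i, y_i)}.
X = rng.normal(size=(64, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=64)

# Variational parameters theta = (mu, log_sigma) of q(w).
mu = np.zeros(3)
log_sigma = np.zeros(3)

def elbo_estimate(mu, log_sigma, n_samples=16, noise_std=0.1):
    sigma = np.exp(log_sigma)
    # Closed-form KL(q || p) for factorized Gaussians with a N(0, I) prior.
    kl = 0.5 * np.sum(sigma**2 + mu**2 - 1.0 - 2.0 * log_sigma)
    # Reparametrization: w = mu + sigma * eps with eps ~ N(0, I), so samples are a
    # deterministic, differentiable function of the variational parameters.
    eps = rng.normal(size=(n_samples, 3))
    w = mu + sigma * eps                      # (n_samples, 3)
    resid = y[None, :] - w @ X.T              # (n_samples, 64)
    # Gaussian log-likelihood log p(y | x, w), averaged over posterior samples.
    log_lik = (-0.5 * np.sum(resid**2, axis=1) / noise_std**2
               - 0.5 * y.size * np.log(2 * np.pi * noise_std**2))
    return log_lik.mean() - kl

print("ELBO estimate:", elbo_estimate(mu, log_sigma))
```

In the paper's setting the closed-form KL above is unavailable, because the implicit prior's density cannot be evaluated; Algorithm 1 instead optimizes the auxiliary bound built from the reverse model r(z|w) and reconstruction model p(w|z).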
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.BACKGROUND:: Functional endoscopic sinus surgery (FESS) is the mainstay therapeutic management for nasal pathologies. We evaluated flupirtine, a centrally acting analgesic, for producing perfect perioperative conditions in FESS for adults in a day-care setting. ABSTRACT.MATERIALS AND METHODS:: Sixty-two patients (25–40 years) scheduled for FESS under general anesthesia were randomly divided into Group F (n = 31) receiving preoperative flupirtine (100 mg) and Group C (n = 31) receiving identical-looking placebo capsule per oral 60 min before induction of anesthesia. Perioperative Nasal bleeding and surgeon's satisfaction score during operation; amount and number of patients receiving fentanyl, propofol, and esmolol infusion for analgesia; maintenance of desired bispectral index (BIS) and deliberate hypotension, respectively. Postanesthesia Care Unit (PACU) and hospital stay, hemodynamic parameters, and side effects were all recorded for each patient. ABSTRACT.RESULTS:: Significantly, less number of patients and less dosage of esmolol were required (P = 0.0040 and 0.0001, respectively) in Group F as compared to that in Group C. Again, number of patients requiring fentanyl and dosage of the same drug was significantly lower in Group F. Dose of propofol for the maintenance of BIS was significantly lower in Group F. However, the duration of controlled hypotension was almost similar in both groups. Group F patients suffered significantly less nasal bleeding and surgeon's satisfaction score was also high in this group. Discharge time from PACU and hospital was similar between two groups without any appreciable side effects. ABSTRACT.CONCLUSION:: Flupirtine as a premedication found to be providing more favorable perioperative hemodynamic conditions, analgesia and thus allowing less nasal bleeding as well as more surgeons' satisfaction score. Extracted Data:
[["Group F receiving preoperative flupirtine", "Group C receiving identical-looking placebo", "number of patients and dosage of esmolol", "significantly decreased", "Significantly, less number of patients and less dosage of esmolol were required (P = 0.0040 and 0.0001, respectively) in Group F as compared to that in Group C."], ["Group F receiving preoperative flupirtine", "Group C receiving identical-looking placebo", "Side effects ", "no significant difference", "Discharge time from PACU and hospital was similar between two groups without any appreciable side effects."]]
science.evidence_inference.36
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND:: Functional endoscopic sinus surgery (FESS) is the mainstay therapeutic management for nasal pathologies. We evaluated flupirtine, a centrally acting analgesic, for producing perfect perioperative conditions in FESS for adults in a day-care setting.\n\nABSTRACT.MATERIALS AND METHODS:: Sixty-two patients (25–40 years) scheduled for FESS under general anesthesia were randomly divided into Group F (n = 31) receiving preoperative flupirtine (100 mg) and Group C (n = 31) receiving identical-looking placebo capsule per oral 60 min before induction of anesthesia. Perioperative Nasal bleeding and surgeon's satisfaction score during operation; amount and number of patients receiving fentanyl, propofol, and esmolol infusion for analgesia; maintenance of desired bispectral index (BIS) and deliberate hypotension, respectively. Postanesthesia Care Unit (PACU) and hospital stay, hemodynamic parameters, and side effects were all recorded for each patient.\n\nABSTRACT.RESULTS:: Significantly, less number of patients and less dosage of esmolol were required (P = 0.0040 and 0.0001, respectively) in Group F as compared to that in Group C. Again, number of patients requiring fentanyl and dosage of the same drug was significantly lower in Group F. Dose of propofol for the maintenance of BIS was significantly lower in Group F. However, the duration of controlled hypotension was almost similar in both groups. Group F patients suffered significantly less nasal bleeding and surgeon's satisfaction score was also high in this group. Discharge time from PACU and hospital was similar between two groups without any appreciable side effects.\n\nABSTRACT.CONCLUSION:: Flupirtine as a premedication found to be providing more favorable perioperative hemodynamic conditions, analgesia and thus allowing less nasal bleeding as well as more surgeons' satisfaction score.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"Group F receiving preoperative flupirtine\", \"Group C receiving identical-looking placebo\", \"number of patients and dosage of esmolol\", \"significantly decreased\", \"Significantly, less number of patients and less dosage of esmolol were required (P = 0.0040 and 0.0001, respectively) in Group F as compared to that in Group C.\"], [\"Group F receiving preoperative flupirtine\", \"Group C receiving identical-looking placebo\", \"Side effects \", \"no significant difference\", \"Discharge time from PACU and hospital was similar between two groups without any appreciable side effects.\"]]", "role": "assistant" } ]
science.scierc_ner
Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context. Input Abstract: In this paper we evaluate four objective measures of speech with regards to intelligibility prediction of synthesized speech in diverse noisy situations. We evaluated three intel-ligibility measures, the Dau measure, the glimpse proportion and the Speech Intelligibility Index (SII) and a quality measure, the Perceptual Evaluation of Speech Quality (PESQ). For the generation of synthesized speech we used a state of the art HMM-based speech synthesis system. The noisy conditions comprised four additive noises. The measures were compared with subjective intelligibility scores obtained in listening tests. The results show the Dau and the glimpse measures to be the best predictors of intelligibility, with correlations of around 0.83 to subjective scores. All measures gave less accurate predictions of intelligibility for synthetic speech than have previously been found for natural speech; in particular the SII measure. In additional experiments, we processed the synthesized speech by an ideal binary mask before adding noise. The Glimpse measure gave the most accurate intelligibility predictions in this situation.
Output Entities: {"Generic": [], "Material": ["natural speech", "synthetic speech", "synthesized speech"], "Method": ["ideal binary mask", "predictors of intelligibility", "HMM-based speech synthesis system"], "Metric": ["quality measure", "correlations", "measures of speech", "Speech Intelligibility Index (SII)", "Dau measure", "glimpse proportion", "subjective intelligibility scores", "Perceptual Evaluation of Speech Quality (PESQ)", "intel-ligibility measures", "SII measure"], "OtherScientificTerm": ["additive noises", "subjective scores", "noisy conditions", "diverse noisy situations"], "Task": ["intelligibility prediction", "generation of synthesized speech"]}
science.scierc_ner.29
[ { "content": "Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.\n\nInput Abstract:\n\nIn this paper we evaluate four objective measures of speech with regards to intelligibility prediction of synthesized speech in diverse noisy situations. We evaluated three intel-ligibility measures, the Dau measure, the glimpse proportion and the Speech Intelligibility Index (SII) and a quality measure, the Perceptual Evaluation of Speech Quality (PESQ). For the generation of synthesized speech we used a state of the art HMM-based speech synthesis system. The noisy conditions comprised four additive noises. The measures were compared with subjective intelligibility scores obtained in listening tests. The results show the Dau and the glimpse measures to be the best predictors of intelligibility, with correlations of around 0.83 to subjective scores. All measures gave less accurate predictions of intelligibility for synthetic speech than have previously been found for natural speech; in particular the SII measure. In additional experiments, we processed the synthesized speech by an ideal binary mask before adding noise. The Glimpse measure gave the most accurate intelligibility predictions in this situation.", "role": "user" }, { "content": "Output Entities:\n\n{\"Generic\": [], \"Material\": [\"natural speech\", \"synthetic speech\", \"synthesized speech\"], \"Method\": [\"ideal binary mask\", \"predictors of intelligibility\", \"HMM-based speech synthesis system\"], \"Metric\": [\"quality measure\", \"correlations\", \"measures of speech\", \"Speech Intelligibility Index (SII)\", \"Dau measure\", \"glimpse proportion\", \"subjective intelligibility scores\", \"Perceptual Evaluation of Speech Quality (PESQ)\", \"intel-ligibility measures\", \"SII measure\"], \"OtherScientificTerm\": [\"additive noises\", \"subjective scores\", \"noisy conditions\", \"diverse noisy situations\"], \"Task\": [\"intelligibility prediction\", \"generation of synthesized speech\"]}", "role": "assistant" } ]
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: r-protein chaperones are found in complexes with their r-protein binding partners. Research Abstract: Biogenesis of ribosomes is an essential cellular process conserved across all eukaryotes and is known to require >170 genes for the assembly, modification, and trafficking of ribosome components through multiple cellular compartments. Despite intensive study, this pathway likely involves many additional genes. Here, we employ network-guided genetics-an approach for associating candidate genes with biological processes that capitalizes on recent advances in functional genomic and proteomic studies-to computationally identify additional ribosomal biogenesis genes. We experimentally evaluated >100 candidate yeast genes in a battery of assays, confirming involvement of at least 15 new genes, including previously uncharacterized genes (YDL063C, YIL091C, YOR287C, YOR006C/TSR3, YOL022C/TSR4). We associate the new genes with specific aspects of ribosomal subunit maturation, ribosomal particle association, and ribosomal subunit nuclear export, and we identify genes specifically required for the processing of 5S, 7S, 20S, 27S, and 35S rRNAs. These results reveal new connections between ribosome biogenesis and mRNA splicing and add >10% new genes-most with human orthologs-to the biogenesis pathway, significantly extending our understanding of a universally conserved eukaryotic process.
{"verdict": "NEI", "evidence": []}
science.scifact_entailment.910
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nr-protein chaperones are found in complexes with their r-protein binding partners.\n\nResearch Abstract:\nBiogenesis of ribosomes is an essential cellular process conserved across all eukaryotes and is known to require >170 genes for the assembly, modification, and trafficking of ribosome components through multiple cellular compartments.\nDespite intensive study, this pathway likely involves many additional genes.\nHere, we employ network-guided genetics-an approach for associating candidate genes with biological processes that capitalizes on recent advances in functional genomic and proteomic studies-to computationally identify additional ribosomal biogenesis genes.\nWe experimentally evaluated >100 candidate yeast genes in a battery of assays, confirming involvement of at least 15 new genes, including previously uncharacterized genes (YDL063C, YIL091C, YOR287C, YOR006C/TSR3, YOL022C/TSR4).\nWe associate the new genes with specific aspects of ribosomal subunit maturation, ribosomal particle association, and ribosomal subunit nuclear export, and we identify genes specifically required for the processing of 5S, 7S, 20S, 27S, and 35S rRNAs.\nThese results reveal new connections between ribosome biogenesis and mRNA splicing and add >10% new genes-most with human orthologs-to the biogenesis pathway, significantly extending our understanding of a universally conserved eukaryotic process.", "role": "user" }, { "content": "{\"verdict\": \"NEI\", \"evidence\": []}", "role": "assistant" } ]
science.discomat_te
| Wavenumber (cm-1) | Assignment | | 400-600 | Bending vibrations of the Al-O-Al or Al-O-Si bonds [18,20] Si-O in the silicon-oxygen framework of the glass [10] | | 691-720 | Bending vibration of B-O-B in BO3 triangles [36-38] | | 830-883, 928-1191 | Stretching vibration of BO4 units in various structural groups [36-38] | | 1200-1500 | B-O stretching vibrations of trigonal BO3 units only [36-38] | Caption: Assignment of absorption bands in the infrared spectra of the glass system xPbO-RNa2B4O7-(100- R - x)CAS with 0 x 50, and 50 R 75mol%. Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance: {"value": "xx", "type": "Other"} {"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"} Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line. Output:
{"value": "400", "type": "Other"} {"value": "600", "type": "Other"} {"value": "691", "type": "Other"} {"value": "720", "type": "Other"} {"value": "1200", "type": "Other"} {"value": "1500", "type": "Other"}
science.discomat_te.37
[ { "content": "| Wavenumber (cm-1) | Assignment |\n| 400-600 | Bending vibrations of the Al-O-Al or Al-O-Si bonds [18,20] Si-O in the silicon-oxygen framework of the glass [10] |\n| 691-720 | Bending vibration of B-O-B in BO3 triangles [36-38] |\n| 830-883, 928-1191 | Stretching vibration of BO4 units in various structural groups [36-38] |\n| 1200-1500 | B-O stretching vibrations of trigonal BO3 units only [36-38] |\n\nCaption: Assignment of absorption bands in the infrared spectra of the glass system xPbO-RNa2B4O7-(100- R - x)CAS with 0 x 50, and 50 R 75mol%.\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"400\", \"type\": \"Other\"}\n{\"value\": \"600\", \"type\": \"Other\"}\n{\"value\": \"691\", \"type\": \"Other\"}\n{\"value\": \"720\", \"type\": \"Other\"}\n{\"value\": \"1200\", \"type\": \"Other\"}\n{\"value\": \"1500\", \"type\": \"Other\"}", "role": "assistant" } ]
science.bioasq_list_qa
Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array. Context: Mucins, as the major salivary proteins the two thirds of identified spots corresponding to amylases, cystatins and immunoglobulins several protein spots of immunoglobulin chains The concentration of total protein and salivary amylase was significantly lower in NWS and SWS of psoriatic patients compared to the healthy control. Question: List the main proteins found in human saliva.
["Amylases", "Cystatins", "Immunoglobulins", "Mucins"]
science.bioasq_list_qa.755
[ { "content": "Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.\n\nContext: Mucins, as the major salivary proteins\nthe two thirds of identified spots corresponding to amylases, cystatins and immunoglobulins\nseveral protein spots of immunoglobulin chains\nThe concentration of total protein and salivary amylase was significantly lower in NWS and SWS of psoriatic patients compared to the healthy control.\n\nQuestion: List the main proteins found in human saliva.", "role": "user" }, { "content": "[\"Amylases\", \"Cystatins\", \"Immunoglobulins\", \"Mucins\"]", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.PURPOSE: Previous studies have not defined the role of telemonitoring with educational tools in outpatients with advanced cancers. We tested the effectiveness of standardized education and telemonitoring for improving pain, distress, anxiety, depression, quality of life (QoL), and performance in outpatients with advanced cancers. ABSTRACT.METHODS: A total of 108 patients were randomly assigned to receive pain education alone (control arm) or pain education plus telemonitoring (experimental arm). Nursing specialists provided video-assisted educational material in both arms and daily telemonitoring for the first week in the experimental arm. Assessment was performed at baseline and 1 week and included evaluations of pain (Brief Pain Inventory, BPI), distress (Distress Thermometer, DT), anxiety, and depression (Hospital Anxiety and Depression Scale, HADS), QoL (QLQ-C30), and a Karnofsky score. ABSTRACT.RESULTS: Overall (n = 108), pain intensity was significantly improved at 1 week, including worst pain (7.3 to 5.7, P < 0.01) and average pain (4.6 to 3.8, P < 0.01). Additionally, anxiety (HADS score ≥ 11, 75 % to 56 %, P < 0.01), depression (HADS score ≥ 11, 73 % to 51 %, P < 0.01), QoL (fatigue and insomnia), and the Karnofsky score (32 to 66, P < 0.01) were also significantly improved at 1 week. However, the level of distress did not improve. The telemonitoring plus standardized education group showed more significant improvement in portion of pain >4 on VAS scale (35 % vs. 19 %, P = 0.02). ABSTRACT.CONCLUSIONS: Standardized pain education using nursing specialists is an efficient way to improve not only pain itself but also anxiety, depression, performance, and QoL. The addition of telemonitoring helps to improve pain management in the outpatient setting. Extracted Data:
[["pain education plus telemonitoring (experimental arm)", "pain education alone (control arm)", "distress scale", "no significant difference", "the level of distress did not improve."]]
science.evidence_inference.229
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.PURPOSE: Previous studies have not defined the role of telemonitoring with educational tools in outpatients with advanced cancers. We tested the effectiveness of standardized education and telemonitoring for improving pain, distress, anxiety, depression, quality of life (QoL), and performance in outpatients with advanced cancers.\n\nABSTRACT.METHODS: A total of 108 patients were randomly assigned to receive pain education alone (control arm) or pain education plus telemonitoring (experimental arm). Nursing specialists provided video-assisted educational material in both arms and daily telemonitoring for the first week in the experimental arm. Assessment was performed at baseline and 1 week and included evaluations of pain (Brief Pain Inventory, BPI), distress (Distress Thermometer, DT), anxiety, and depression (Hospital Anxiety and Depression Scale, HADS), QoL (QLQ-C30), and a Karnofsky score.\n\nABSTRACT.RESULTS: Overall (n = 108), pain intensity was significantly improved at 1 week, including worst pain (7.3 to 5.7, P < 0.01) and average pain (4.6 to 3.8, P < 0.01). Additionally, anxiety (HADS score ≥ 11, 75 % to 56 %, P < 0.01), depression (HADS score ≥ 11, 73 % to 51 %, P < 0.01), QoL (fatigue and insomnia), and the Karnofsky score (32 to 66, P < 0.01) were also significantly improved at 1 week. However, the level of distress did not improve. The telemonitoring plus standardized education group showed more significant improvement in portion of pain >4 on VAS scale (35 % vs. 19 %, P = 0.02).\n\nABSTRACT.CONCLUSIONS: Standardized pain education using nursing specialists is an efficient way to improve not only pain itself but also anxiety, depression, performance, and QoL. The addition of telemonitoring helps to improve pain management in the outpatient setting.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"pain education plus telemonitoring (experimental arm)\", \"pain education alone (control arm)\", \"distress scale\", \"no significant difference\", \"the level of distress did not improve.\"]]", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.BACKGROUND: Unilateral spinal anesthesia (USpA) has been reported to potentiate spinal anaesthesia and is used in geriatric patients. The purpose of this study was to determine the median effective dose (ED50) of 0.5% hypobaric bupivacaine and 0.5% hypobaric ropivacaine USpA for geriatric patients (age ≥ 70 years) undergoing elective hip replacement surgery. ABSTRACT.METHODS: A total of 60 geriatric patients (age ≥ 70 years) undergoing elective hip replacement surgery were enrolled in this study. The patients were randomized into 2 groups to receive either intrathecal 0.5% hypobaric bupivacaine USpA (group B) or 0.5% hypobaric ropivacaine USpA (group R). Effective anesthesia was defined as a T10 sensory blockade level maintained for more than 60 min, and a Bromage score of 3 on the operation side within 10 min after injection with no additional epidural anesthetic required during surgery. The ED50 of 0.5% hypobaric bupivacaine and 0.5% hypobaric ropivacaine was calculated using the Dixon and Massey formula. ABSTRACT.RESULTS: No significant differences were found between the two groups in terms of demographic data. The ED50 of 0.5% hypobaric bupivacaine USpA was 4.66 mg (95% confidence interval CI 4.69–4.63 mg) mg and that of 0.5% hypobaric ropivacaine USpA was 6.43 mg (95% CI 6.47–6.39 mg) for geriatric patients undergoing hip replacement surgery. ABSTRACT.CONCLUSION: We find the ED50 were lower, and the ED50 of 0.5% hypobaric bupivacaine and ropivacaine was 4.66 mg (95% CI 4.69–4.63 mg) and 6.43 mg (95% CI 6.47–6.39 mg), respectively, for USpA in geriatric patients (age ≥ 70 years) undergoing elective hip replacement surgery. Extracted Data:
[["intrathecal 0.5% hypobaric bupivacaine USpA (group B)", "0.5% hypobaric ropivacaine USpA (group R)", "the median effective dose (ED50) of 0.5% hypobaric bupivacaine", "significantly decreased", "The ED50 of 0.5% hypobaric bupivacaine USpA was 4.66\u2009mg (95% confidence interval CI 4.69\u20134.63\u2009mg) mg and that of 0.5% hypobaric ropivacaine USpA was 6.43\u2009mg (95% CI 6.47\u20136.39\u2009mg) for geriatric patients undergoing hip replacement surgery."]]
science.evidence_inference.226
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: Unilateral spinal anesthesia (USpA) has been reported to potentiate spinal anaesthesia and is used in geriatric patients. The purpose of this study was to determine the median effective dose (ED50) of 0.5% hypobaric bupivacaine and 0.5% hypobaric ropivacaine USpA for geriatric patients (age ≥ 70 years) undergoing elective hip replacement surgery.\n\nABSTRACT.METHODS: A total of 60 geriatric patients (age ≥ 70 years) undergoing elective hip replacement surgery were enrolled in this study. The patients were randomized into 2 groups to receive either intrathecal 0.5% hypobaric bupivacaine USpA (group B) or 0.5% hypobaric ropivacaine USpA (group R). Effective anesthesia was defined as a T10 sensory blockade level maintained for more than 60 min, and a Bromage score of 3 on the operation side within 10 min after injection with no additional epidural anesthetic required during surgery. The ED50 of 0.5% hypobaric bupivacaine and 0.5% hypobaric ropivacaine was calculated using the Dixon and Massey formula.\n\nABSTRACT.RESULTS: No significant differences were found between the two groups in terms of demographic data. The ED50 of 0.5% hypobaric bupivacaine USpA was 4.66 mg (95% confidence interval CI 4.69–4.63 mg) mg and that of 0.5% hypobaric ropivacaine USpA was 6.43 mg (95% CI 6.47–6.39 mg) for geriatric patients undergoing hip replacement surgery.\n\nABSTRACT.CONCLUSION: \nWe find the ED50 were lower, and the ED50 of 0.5% hypobaric bupivacaine and ropivacaine was 4.66 mg (95% CI 4.69–4.63 mg) and 6.43 mg (95% CI 6.47–6.39 mg), respectively, for USpA in geriatric patients (age ≥ 70 years) undergoing elective hip replacement surgery.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"intrathecal 0.5% hypobaric bupivacaine USpA (group B)\", \"0.5% hypobaric ropivacaine USpA (group R)\", \"the median effective dose (ED50) of 0.5% hypobaric bupivacaine\", \"significantly decreased\", \"The ED50 of 0.5% hypobaric bupivacaine USpA was 4.66\\u2009mg (95% confidence interval CI 4.69\\u20134.63\\u2009mg) mg and that of 0.5% hypobaric ropivacaine USpA was 6.43\\u2009mg (95% CI 6.47\\u20136.39\\u2009mg) for geriatric patients undergoing hip replacement surgery.\"]]", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: Examining Scaling and Transfer of Language Model Architectures for Machine Translation Main Content: 1 INTRODUCTION . The popularity of large , general-purpose text generation models has skyrocketed in recent years due to their outstanding performance across a wide range of natural language processing tasks ( Brown et al. , 2020 ; Raffel et al. , 2020 ; Liu et al. , 2020 ; Xue et al. , 2021 ) . These studies , under large-scale pretraining and also model scaling , show the promise of moving to unified neural architectures and that dropping various task-specific inductive biases improves model generalization and/or performance ( Devlin et al. , 2019 ; Kaplan et al. , 2020 ) . In neural machine translation ( NMT ) , one essential task-specific inductive bias is to separately handle source sentence understanding and target sentence generation with the encoder-decoder paradigm ( EncDec ) . Although such bias significantly benefits translation ( Vaswani et al. , 2017 ; Chen et al. , 2018 ; Aharoni et al. , 2019 ; Barrault et al. , 2020 ; Ansari et al. , 2020 ) , some recent work shows insights challenging it , for example , aggressively simplifying the decoder yields little to no compromise on translation quality ( Kasai et al. , 2021 ) . This thereby inspires the question how the removal of this separation , i.e . using a single unified module for both encoding and decoding ( LMs ) , works for translation , and whether we can get any benefits out of that . Although some studies reported promising translation quality with LMs ( He et al. , 2018 ; Wang et al. , 2021 ) , they compare models under merely one configuration ( model size ) , neglecting that neural models follow scaling laws ( Kaplan et al. , 2020 ; Ghorbani et al. , 2021 ; Gordon et al. , 2021 ) where the impact of each added parameter on model performance might vary across different models . How the inductive biases of LMs and EncDec impact the model ’ s performance as we increase their size and the amounts of training data are intriguing yet missing in the literature . In this paper , we explore various configurations of LM architectures for translation as in Figure 1 , and compare them with EncDec on model scaling and cross-lingual transfer . We conduct a systematic study under a variety of data conditions , tasks ( bilingual , multilingual and zero-shot ) and model capacities . We examine architectural design choices associated with LMs , including causal masking ( CausalLM ) vs. full-visible masking ( PrefixLM ) for source sequences,1 layer-wise coordination ( He et al. , 2018 ) vs. final-layer source encodings ( TopOnly ) for target sequence generation , increasing LM depth vs. width , and also the effect of adding source language modeling loss for CausalLM . We evaluate how these architectural priors affect translation quality as we increase the number of parameters available to the model , under a diverse set of bilingual translation settings and also ( massively ) multilingual settings , with a special focus on transfer to low-resource languages and zero-shot directions . Our main findings are summarized below : • LMs show different scaling properties compared to EncDec . 
The architectural differences become less important as models scale , measured by reduced quality gap against EncDec , regardless of the language similarities , training data conditions and evaluation settings . • PrefixLM variants often outperform their CausalLM counterparts ; increasing LM depth benefits the translation task more than increasing the width ; and adding a source-side language modeling objective to CausalLM doesn ’ t affect translation quality . • Cross-lingual transfer also benefits from model scaling , where EncDec almost always dominates the quality Pareto frontier on supervised directions while zero-shot translation favors PrefixLM LMs . PrefixLM LMs significantly reduce off-target translations . • Although LMs could reach or even surpass the translation quality of EncDec , they still lag far behind EncDec with respect to computational efficiency as measured in FLOPs . 2 RELATED WORK . Using language models in the task of translation has a long history , particularly in the era of statistical machine translation ( SMT ) where LM was used as a separate yet crucial component ensuring the fluency of generation ( Stolcke , 2002 ; Heafield , 2011 ; Koehn , 2010 ) . With neural networks , NMT unified those isolated SMT components including LM under the encoder-decoder formulation ( Kalchbrenner & Blunsom , 2013 ; Cho et al. , 2014 ; Sutskever et al. , 2014 ; Bahdanau et al. , 2015 ) , which makes use of separate modules to process input and output . Further studies exploring architectural modifications by using LM alone as a translation model , nevertheless , got much less attention . He et al . ( 2018 ) proposed layer-wise coordination between encoder and decoder with tied weights , where each decoder layer attends to its corresponding encoder layer at the same depth as opposed to the conventional method of attending the top-most encoder representations . Later , 1Also known as unidirectional vs bidirectional language modelling , where in the unidirectional case a token representation takes into account only the preceding tokens and their representations , but the bidirectional case takes into account both preceding and following tokens in a sequence . Fonollosa et al . ( 2019 ) extended it with locality constraint . Dong et al . ( 2019 ) explored LMs for sequence generation under large-scale pretraining . Despite reporting promising results , these prior studies either focus only on bilingual tasks or do not consider the scaling properties of the models , leaving the picture incomplete : how the findings will change as we scale the models and how the languages benefit from/interfere each other as the architectural priors ( inductive biases ) change . Neural models follow some scaling laws . Kaplan et al . ( 2020 ) reported the test cross-entropy loss of LMs can be formulated as a power-law scaling function of either model size ( excluding embedding parameters ) or dataset size . Later on , researchers examined and confirmed such findings across different domains , including vision modeling ( Zhai et al. , 2021 ) , knowledge transfer from pretraining ( Hernandez et al. , 2021 ) , autoregressive generative modeling ( Henighan et al. , 2020 ) , and neural machine translation ( Gordon et al. , 2021 ; Ghorbani et al. , 2021 ) , to name a few . We find it essential to study the scaling behavior of new architectures and approaches given the recent evidence on the emergent properties of the models at scale ( Brown et al. , 2020 ) . 
Another critical component in machine translation is the number of languages being considered with the models , which is the very focus of multilingual NMT ( Firat et al. , 2016 ) . Cross-lingual transfer in multilingual NMT often results from parameter sharing across languages , which benefits low-resource languages and also enables zero-shot translation ( Johnson et al. , 2017 ) , although the quality on zero-shot directions is largely hindered by the off-target translation problem ( Arivazhagan et al. , 2019 ; Zhang et al. , 2020 ) . The structure of LMs further encourages parameter sharing , offering a chance to improve the transfer while magnifying the problem of interference ( negative-transfer ) ( Wang et al. , 2020 ; Zhang et al. , 2021 ) . Very recently , Wang et al . ( 2021 ) analyzed the cross-lingual transfer behavior of CausalLM , and reported encouraging zero-shot performance . However , we did not observe the same results likely because of data sampling , model architecture and optimization differences which zero-shot transfer is sensitive to . 3 LANGUAGE MODEL ARCHITECTURES FOR TRANSLATION . In this section , we briefly review EncDec and then present LM architectures for translation based on Transformer ( Vaswani et al. , 2017 ) . Table 1 compares different models . Given a source sequence X of length |X| and its target translation Y of length |Y | , EncDec performs translation via the following structure : Xl = FFN ◦ SAtt ( Xl−1 ) , Yl = FFN ◦ CAtt ◦ SAtt ( Yl−1 , XL ) , ( 1 ) where l denotes the layer index and ◦ indicates consecutive sublayers . Xl ∈ R|X|×d and Yl ∈ R|Y |×d are the layer representations of the source and target sequence respectively , with a model dimension of d. The first input layer ( X0 , Y0 ) is the summation of token embeddings and their positional encodings . We drop all the layer normalization and residual connections in our formulations for brevity . The encoder is a stack of L layers , each of which includes a multi-head self-attention sublayer ( SAtt ) followed by a feed-forward sublayer ( FFN ) . SAtt in the encoder is bidirectional with fullvisible masking that has full visibility to all source tokens , preceding and following . Its final-layer representations XL are fed to the decoder , which shares a similar structure to the encoder but with an additional ( multi-head ) cross-attention sublayer ( CAtt ) . Unlike encoder , SAtt in the decoder is unidirectional with causal masking , where attention to following tokens is disabled ( masked ) . CAtt can always access all source inputs , though . Note we set the encoder and decoder depth equally to L , and use dff to denote the intermediate dimension of FFN . EncDec is often optimized with the target translation objective based on YL : LEncDec ( X , Y ) = LTGT = − logP ( Y |X , YL ) . ( 2 ) Rather than separately modeling source and target sequences , LM handles both with a single module : [ Xl , Yl ] = FFN ◦ SAtt ( [ Xl−1 , Yl−1 ] , M ) , ( 3 ) where M ∈ { 0 , 1 } ( |X|+|Y | ) × ( |X|+|Y | ) is the attention mask that controls the information flow within the concatenated sequences ( [ · , · ] ) .2 Two LM variants explored by changing the structure of mask M , PrefixLM and CausalLM . PrefixLM merges different modules of EncDec , trained with LTGT . Its attention mask MPrefixLM ( i , j ) = 1 , if i ≥ j or j ≤ |X| ; otherwise 0 , ( 4 ) combines the encoder/decoder self-attention mask and the cross-attention mask of EncDec . 
CausalLM , by contrast , is a strict LM that applies causal masking to both sequences : MCausalLM ( i , j ) = 1 , if i ≥ j ; otherwise 0 . ( 5 ) Apart from LTGT , CausalLM also includes the source-side language modeling loss for training : LCausalLM ( X , Y ) = LSRC + LTGT = − logP ( X|XL ) − logP ( Y |X , YL ) . ( 6 ) To improve our understanding of LMs for translation , we further incorporate two extensions : PrefixLM + TopOnly The model defined in Equation 3 performs attention over the source and target sequence within the same layer . In contrast , EncDec always uses the topmost-layer source encodings for translation . We mimic this with TopOnly extension by feeding toplayer encodings , i.e . XL instead of Xl−1 , to each attention sublayer . It operates the same as EncDec but with the parameters of encoder and decoder tied . CausalLM + TgtOnly The inclusion of the source-side objective enriches CausalLM ’ s learning signal and encourages the model to absorb source language characteristics . However , it requires and occupies part of modeling capacity , which might negatively affect translation . To offset this impact , we add the TgtOnly extension that optimizes CausalLM with the target translation objective LTGTC alone , which also aligns better with EncDec and PrefixLM .
This paper gives a quite thorough comparison between language-model-based (LM) and encoder-decoder-based (EncDec) architectures for neural machine translation (NMT). The investigated LMs are varied: different masking strategies, different origins of source features, and so on. The authors conducted experiments on representative bilingual and multilingual benchmarks, summarizing some findings based on the experimental results.
science.mup_single_document_summarization.8057
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nExamining Scaling and Transfer of Language Model Architectures for Machine Translation\n\nMain Content:\n1 INTRODUCTION . The popularity of large , general-purpose text generation models has skyrocketed in recent years due to their outstanding performance across a wide range of natural language processing tasks ( Brown et al. , 2020 ; Raffel et al. , 2020 ; Liu et al. , 2020 ; Xue et al. , 2021 ) . These studies , under large-scale pretraining and also model scaling , show the promise of moving to unified neural architectures and that dropping various task-specific inductive biases improves model generalization and/or performance ( Devlin et al. , 2019 ; Kaplan et al. , 2020 ) . In neural machine translation ( NMT ) , one essential task-specific inductive bias is to separately handle source sentence understanding and target sentence generation with the encoder-decoder paradigm ( EncDec ) . Although such bias significantly benefits translation ( Vaswani et al. , 2017 ; Chen et al. , 2018 ; Aharoni et al. , 2019 ; Barrault et al. , 2020 ; Ansari et al. , 2020 ) , some recent work shows insights challenging it , for example , aggressively simplifying the decoder yields little to no compromise on translation quality ( Kasai et al. , 2021 ) . This thereby inspires the question how the removal of this separation , i.e . using a single unified module for both encoding and decoding ( LMs ) , works for translation , and whether we can get any benefits out of that . Although some studies reported promising translation quality with LMs ( He et al. , 2018 ; Wang et al. , 2021 ) , they compare models under merely one configuration ( model size ) , neglecting that neural models follow scaling laws ( Kaplan et al. , 2020 ; Ghorbani et al. , 2021 ; Gordon et al. , 2021 ) where the impact of each added parameter on model performance might vary across different models . How the inductive biases of LMs and EncDec impact the model ’ s performance as we increase their size and the amounts of training data are intriguing yet missing in the literature . In this paper , we explore various configurations of LM architectures for translation as in Figure 1 , and compare them with EncDec on model scaling and cross-lingual transfer . We conduct a systematic study under a variety of data conditions , tasks ( bilingual , multilingual and zero-shot ) and model capacities . We examine architectural design choices associated with LMs , including causal masking ( CausalLM ) vs. full-visible masking ( PrefixLM ) for source sequences,1 layer-wise coordination ( He et al. , 2018 ) vs. final-layer source encodings ( TopOnly ) for target sequence generation , increasing LM depth vs. width , and also the effect of adding source language modeling loss for CausalLM . We evaluate how these architectural priors affect translation quality as we increase the number of parameters available to the model , under a diverse set of bilingual translation settings and also ( massively ) multilingual settings , with a special focus on transfer to low-resource languages and zero-shot directions . Our main findings are summarized below : • LMs show different scaling properties compared to EncDec . 
The architectural differences become less important as models scale , measured by reduced quality gap against EncDec , regardless of the language similarities , training data conditions and evaluation settings . • PrefixLM variants often outperform their CausalLM counterparts ; increasing LM depth benefits the translation task more than increasing the width ; and adding a source-side language modeling objective to CausalLM doesn ’ t affect translation quality . • Cross-lingual transfer also benefits from model scaling , where EncDec almost always dominates the quality Pareto frontier on supervised directions while zero-shot translation favors PrefixLM LMs . PrefixLM LMs significantly reduce off-target translations . • Although LMs could reach or even surpass the translation quality of EncDec , they still lag far behind EncDec with respect to computational efficiency as measured in FLOPs . 2 RELATED WORK . Using language models in the task of translation has a long history , particularly in the era of statistical machine translation ( SMT ) where LM was used as a separate yet crucial component ensuring the fluency of generation ( Stolcke , 2002 ; Heafield , 2011 ; Koehn , 2010 ) . With neural networks , NMT unified those isolated SMT components including LM under the encoder-decoder formulation ( Kalchbrenner & Blunsom , 2013 ; Cho et al. , 2014 ; Sutskever et al. , 2014 ; Bahdanau et al. , 2015 ) , which makes use of separate modules to process input and output . Further studies exploring architectural modifications by using LM alone as a translation model , nevertheless , got much less attention . He et al . ( 2018 ) proposed layer-wise coordination between encoder and decoder with tied weights , where each decoder layer attends to its corresponding encoder layer at the same depth as opposed to the conventional method of attending the top-most encoder representations . Later , 1Also known as unidirectional vs bidirectional language modelling , where in the unidirectional case a token representation takes into account only the preceding tokens and their representations , but the bidirectional case takes into account both preceding and following tokens in a sequence . Fonollosa et al . ( 2019 ) extended it with locality constraint . Dong et al . ( 2019 ) explored LMs for sequence generation under large-scale pretraining . Despite reporting promising results , these prior studies either focus only on bilingual tasks or do not consider the scaling properties of the models , leaving the picture incomplete : how the findings will change as we scale the models and how the languages benefit from/interfere each other as the architectural priors ( inductive biases ) change . Neural models follow some scaling laws . Kaplan et al . ( 2020 ) reported the test cross-entropy loss of LMs can be formulated as a power-law scaling function of either model size ( excluding embedding parameters ) or dataset size . Later on , researchers examined and confirmed such findings across different domains , including vision modeling ( Zhai et al. , 2021 ) , knowledge transfer from pretraining ( Hernandez et al. , 2021 ) , autoregressive generative modeling ( Henighan et al. , 2020 ) , and neural machine translation ( Gordon et al. , 2021 ; Ghorbani et al. , 2021 ) , to name a few . We find it essential to study the scaling behavior of new architectures and approaches given the recent evidence on the emergent properties of the models at scale ( Brown et al. , 2020 ) . 
Another critical component in machine translation is the number of languages being considered with the models , which is the very focus of multilingual NMT ( Firat et al. , 2016 ) . Cross-lingual transfer in multilingual NMT often results from parameter sharing across languages , which benefits low-resource languages and also enables zero-shot translation ( Johnson et al. , 2017 ) , although the quality on zero-shot directions is largely hindered by the off-target translation problem ( Arivazhagan et al. , 2019 ; Zhang et al. , 2020 ) . The structure of LMs further encourages parameter sharing , offering a chance to improve the transfer while magnifying the problem of interference ( negative-transfer ) ( Wang et al. , 2020 ; Zhang et al. , 2021 ) . Very recently , Wang et al . ( 2021 ) analyzed the cross-lingual transfer behavior of CausalLM , and reported encouraging zero-shot performance . However , we did not observe the same results likely because of data sampling , model architecture and optimization differences which zero-shot transfer is sensitive to . 3 LANGUAGE MODEL ARCHITECTURES FOR TRANSLATION . In this section , we briefly review EncDec and then present LM architectures for translation based on Transformer ( Vaswani et al. , 2017 ) . Table 1 compares different models . Given a source sequence X of length |X| and its target translation Y of length |Y | , EncDec performs translation via the following structure : Xl = FFN ◦ SAtt ( Xl−1 ) , Yl = FFN ◦ CAtt ◦ SAtt ( Yl−1 , XL ) , ( 1 ) where l denotes the layer index and ◦ indicates consecutive sublayers . Xl ∈ R|X|×d and Yl ∈ R|Y |×d are the layer representations of the source and target sequence respectively , with a model dimension of d. The first input layer ( X0 , Y0 ) is the summation of token embeddings and their positional encodings . We drop all the layer normalization and residual connections in our formulations for brevity . The encoder is a stack of L layers , each of which includes a multi-head self-attention sublayer ( SAtt ) followed by a feed-forward sublayer ( FFN ) . SAtt in the encoder is bidirectional with fullvisible masking that has full visibility to all source tokens , preceding and following . Its final-layer representations XL are fed to the decoder , which shares a similar structure to the encoder but with an additional ( multi-head ) cross-attention sublayer ( CAtt ) . Unlike encoder , SAtt in the decoder is unidirectional with causal masking , where attention to following tokens is disabled ( masked ) . CAtt can always access all source inputs , though . Note we set the encoder and decoder depth equally to L , and use dff to denote the intermediate dimension of FFN . EncDec is often optimized with the target translation objective based on YL : LEncDec ( X , Y ) = LTGT = − logP ( Y |X , YL ) . ( 2 ) Rather than separately modeling source and target sequences , LM handles both with a single module : [ Xl , Yl ] = FFN ◦ SAtt ( [ Xl−1 , Yl−1 ] , M ) , ( 3 ) where M ∈ { 0 , 1 } ( |X|+|Y | ) × ( |X|+|Y | ) is the attention mask that controls the information flow within the concatenated sequences ( [ · , · ] ) .2 Two LM variants explored by changing the structure of mask M , PrefixLM and CausalLM . PrefixLM merges different modules of EncDec , trained with LTGT . Its attention mask MPrefixLM ( i , j ) = 1 , if i ≥ j or j ≤ |X| ; otherwise 0 , ( 4 ) combines the encoder/decoder self-attention mask and the cross-attention mask of EncDec . 
CausalLM , by contrast , is a strict LM that applies causal masking to both sequences : MCausalLM ( i , j ) = 1 , if i ≥ j ; otherwise 0 . ( 5 ) Apart from LTGT , CausalLM also includes the source-side language modeling loss for training : LCausalLM ( X , Y ) = LSRC + LTGT = − logP ( X|XL ) − logP ( Y |X , YL ) . ( 6 ) To improve our understanding of LMs for translation , we further incorporate two extensions : PrefixLM + TopOnly The model defined in Equation 3 performs attention over the source and target sequence within the same layer . In contrast , EncDec always uses the topmost-layer source encodings for translation . We mimic this with TopOnly extension by feeding toplayer encodings , i.e . XL instead of Xl−1 , to each attention sublayer . It operates the same as EncDec but with the parameters of encoder and decoder tied . CausalLM + TgtOnly The inclusion of the source-side objective enriches CausalLM ’ s learning signal and encourages the model to absorb source language characteristics . However , it requires and occupies part of modeling capacity , which might negatively affect translation . To offset this impact , we add the TgtOnly extension that optimizes CausalLM with the target translation objective LTGTC alone , which also aligns better with EncDec and PrefixLM .", "role": "user" }, { "content": "This paper gives a quite thorough comparison between language-model-based (LM) and encoder-decoder-based (EncDec) architectures for neural machine translation (NMT). The investigated LMs are varied: different masking strategies, different origins of source features, and so on. The authors conducted experiments on representative bilingual and multilingual benchmarks, summarizing some findings based on the experimental results.", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: Dissecting Local Properties of Adversarial Examples Main Content: 1 INTRODUCTION . Despite deep neural networks ( DNNs ) perform well in many fields ( He et al. , 2016 ; Devlin et al. , 2019 ) , their counter-intuitive vulnerability attracts increasing attention , both for safety-critical applications ( Sharif et al. , 2016 ) and the black-box mechanism of DNNs ( Fazlyab et al. , 2019 ) . DNNs have been found vulnerable to adversarial examples ( Szegedy et al. , 2014 ; Goodfellow et al. , 2015 ) , where small perturbations on the input can easily change the predictions of a well-trained DNN with high confidence . In computer vision , adversarial examples exhibit their destructiveness both in the digital world and the physical world ( Kurakin et al. , 2017 ) . Since then , how to alleviate the vulnerability of DNN so as to narrow the performance gap between adversarial/natural examples is another key issue . Existing methods including defensive distillation ( Papernot et al. , 2016 ) and pixel denoising ( Liao et al. , 2018 ) have shown their limitations due to follow-up attack strategies ( Carlini & Wagner , 2017 ) or gradient masking ( Athalye et al. , 2018 ) . Amongst them , adversarial training ( Goodfellow et al. , 2015 ; Madry et al. , 2018 ) and its variants ( Zhang et al. , 2019 ; Wang et al. , 2020b ) indicate their reliable robustness and outperform . Moreover , as a data augmentation method , adversarial training currently seems to rely on additional data ( Schmidt et al. , 2018 ; Rebuffi et al. , 2021 ) to further improve robustness , while is sensitive to some basic model hyper-parameters , e.g. , weight decay ( Pang et al. , 2021 ) . Apart from these , the effect of simply early stopping ( Rice et al. , 2020 ) even exceeds some promotion methods according to recent benchmarks ( Croce & Hein , 2020 ; Chen & Gu , 2020 ) . These studies arise our curiosity to further explore the relationship between adversarial examples and adversarial training , hoping to provide some new understanding . Recalling that high-frequency components can be potentially linked to adversarial examples ( Wang et al. , 2020a ; Yin et al. , 2019 ; Harder et al. , 2021 ) , however , few explorations discuss the relationship between high-frequency components and the destructiveness of adversarial examples . In this paper , we first demonstrate that high-frequency components of adversarial examples tend to mislead the standard DNNs , yet little impact on the adversarially robust models . We further show that adversarial examples statistically have more high-frequency components than natural ones , indicating relatively drastic local changes among pixels of adversarial examples . Since adversarial examples exhibit more semantically meaningful on robust models ( Tsipras et al. , 2019 ) , we further notice that adversarial examples show locally-consistent perturbations related to image shapes on adversariallytrained models , in contrast to disorderly perturbations on standard models . Both explorations on the frequency and spatial domain emphasize local properties of adversarial examples , and motivated by the local receptive field of the convolution kernels , we propose a locally intermediate response perspective to rethink the vulnerability of DNNs . Different from the existing global activation perspective ( Bai et al. , 2021 ; Xu et al. 
, 2019 ) , our local perspective reflects the joint effect of local features and the intermediate layers of the model . Based on the local perspective , we emphasize that large enough local response differences make it difficult for the network to treat an image and its potentially adversarial examples as one category , and demonstrate DNN models are naturally fragile at least attributed to it . Motivated by adversarially-trained models tend to have ‘ smooth ’ kernels ( Wang et al. , 2020a ) , we simply use the smoother kernels to alleviate local response differences on adversarially-trained models , which in turn affects the model robustness and reduces the robust overfitting ( Rice et al. , 2020 ) . To a certain extent , this explains why weight decay effectively affects model robustness ( Pang et al. , 2021 ) . Our main contributions are summarized as follows : • We first reveal some properties of adversarial examples in the frequency and spatial domain : 1 ) the high-frequency components of adversarial examples tend to mislead naturally-trained DNNs , yet have little impact on adversarially-trained models , and 2 ) adversarial examples have locally-consistent perturbations on adversarially-trained models , compared with disorderly local perturbations on naturally-trained models . • Then we introduce local response and emphasize its importance in the model adversarial robustness . That is , naturally-trained DNNs are often fragile , at least for non-ignorable local response differences through the same layer between potentially adversarial examples and natural ones . In contrast , adversarially-trained models effectively alleviate the local response differences . And the smoother adversarially-trained models show better adversarial robustness as they can reduce local response differences . • Finally we empirically study local response with generated adversarial examples . We further show that , compared with failed attacks , adversarial examples ( successful attacks ) statistically show larger local response differences with natural examples . Moreover , compared with adversarial examples generated by the model itself , those transferred by other models show markedly smaller local response differences . 2 RELATED WORK . Understandings of model vulnerability . Since the discovery of adversarial examples ( Szegedy et al. , 2014 ) , a number of understandings on model vulnerability has been developed . For instance , linear property of DNNs ( Goodfellow et al. , 2015 ) , submanifold ( Tanay & Griffin , 2016 ) and geometry of the manifold ( Gilmer et al. , 2018 ) were considered from the high-dimensional perspective ; the computing of Lipschitz constant ( Szegedy et al. , 2014 ; Fazlyab et al. , 2019 ) and lower/upper bounds ( Fawzi et al. , 2018 ; Weng et al. , 2018 ) were considered from the definition of model robustness ; non-robust features ( Ilyas et al. , 2019 ) , high-frequency components ( Wang et al. , 2020a ; Yin et al. , 2019 ) , high-rank features ( Jere et al. , 2020 ) and high-order interactions among pixels ( Ren et al. , 2021 ) were explored adversarial examples from the different perspectives on images , which imply our local perspective ; feature denosing ( Xie et al. , 2019 ) , robust pruning ( Madaan & Ju Hwang. , 2020 ) and activation suppressing ( Bai et al. , 2021 ) were focused on global intermediate activations of models . On the other hand , taken adversarial training into consideration , Tsipras et al . ( 2019 ) , Schmidt et al . ( 2018 ) and Zhang et al . 
( 2019 ) explored the trade-off between robustness and accuracy ; Wang et al . ( 2020a ) found adversarially-trained models tend to show smooth kernels ; Tsipras et al . ( 2019 ) and Zhang & Zhu ( 2018 ) argued adversarially robust models learned more shape-biased representations . Different from these studies , we characterize the adversarial examples from both frequency and spatial domain to emphasize local properties of adversarial examples , and propose a locally intermediate response to rethink the vulnerability of DNNs . Adversarial training . Adversarial training can be seen as a min-max optimization problem : min θ 1 n n∑ i=1 max x′i∈B ( x ) L ( fθ ( x ′ i ) , yi ) where f denotes a DNN model with parameters θ , and ( xi , yi ) denotes a pair of a natural example xi and its ground-truth label yi . Given a classification loss L , the inner maximization problem can be regarded as searching for suitable perturbations in boundary B to maximize loss , while the outer minimization problem is to optimize model parameters on adversarial examples { x′i } ni=1 generated from the inner maximization . 3 DIFFERENT PERTURBATIONS AFFECT THE VULNERABILITY OF THE MODEL . In this section , we first investigate adversarial examples in the frequency and spatial domain , and show the connections between models and their adversarial examples . Note that our threat model is a white-box model , and the fact that the adversarial example is only defined by the misleading result under the legal threat model , our findings suggest that the properties of the sampled examples can broadly reflect the fragile tendency of the model . Setup . We generate ` ∞ bounded adversarial examples by PGD-10 ( maximum perturbation = 8/255 and step size 2/255 ) with random start for the robust model ( Madry et al. , 2018 ) . Specifically , we use ResNet-18 ( He et al. , 2016 ) as the backbone to train the standard and adversarially-trained models for 100 epochs on CIFAR-10 ( Krizhevsky , 2009 ) . Following ( Pang et al. , 2021 ) , we use the SGD optimizer with momentum 0.9 , weight decay 5 × 10−4 and initial learning rate 0.1 , with a three-stage learning rate divided by 10 at 50 and 75 epoch respectively . The robustness of both models is evaluated by PGD-20 ( step size 1/255 ) . 3.1 THE DESTRUCTIVENESS OF HIGH-FREQUENCY COMPONENTS FROM ADVERSARIES . Inspired by ( Wang et al. , 2020a ) , we are naturally curious whether the adversarial examples cause considerable damage to the model mainly because of their high-frequency components . To answer this question , Figure 1 illustrates the trend of model performance and robustness on the test set with the high-frequency components increased ( the increase of the filtering scale denotes that more highfrequency components are added to the filtered images ) . Figure 1 ( a ) shows that , for standard models , the high-frequency components of natural examples promote classification and reach the model performance ( green line ) ; on the contrary , the performance of the filtered adversarial examples first rises to get the highest accuracy 47.5 % , and then drops rapidly to reach 0.0 % ( red line ) . Obviously , in the low-frequency range , the performance of natural and adversarial examples are quite close , yet more high-frequency components widen the difference . 
That is , the special high-frequency components caused by adversarial perturbations exhibit a clear destructive effect on standard models , and simply filter out them can effectively alleviate the destructiveness of adversaries even on standard models . However , for robust models , we show that the prediction performance finally reaches robustness without a rapid drop in Figure 1 ( b ) . But surprisingly , we find that the performance of filtered adversarial examples in some range exceeds the final robustness 47.5 % ( red line ) , reaching a maximum of 51.2 % . That is , although these high-frequency components do not exhibit a clear destructive effect , simply filtering out them has a positive impact on alleviating robust overfitting ( Rice et al. , 2020 ) . We then swap their high-frequency components between both examples controlled by a frequency threshold in Figure 1 ( c ) -1 ( d ) . For merged natural examples with high-frequency components from adversaries , the increase of the frequency threshold controls the accuracy increasing from the model robustness ( red line ) to the model performance ( green line ) , the opposite occurs on merged adversarial examples . These clearly illustrate the boost effect of the high-frequency components from natural examples and the destructive effect of the high-frequency components from adversarial examples . To further illustrate , we find that statistically , the main difference in the frequency domain between natural and adversarial examples is concentrated in the high-frequency region in Figure 2 . We visualize the logarithmic amplitude spectrum of both examples . Figure 2 ( a ) and 2 ( b ) show that , compare with natural examples ’ , the high-frequency components of the adversaries are hard to ignore . Figure 2 ( c ) further emphasizes that adversarial examples markedly show more high-frequency components , indicating relatively drastic local changes among pixels . This statistical difference explains the high detection rate of using Magnitude Fourier Spectrum ( Harder et al. , 2021 ) to detect adversarial examples . Furthermore , Figure 2 ( d ) and 2 ( e ) show that the high-frequency components of adversarial examples generated by robust models are less than those from standard models , yet still more than natural examples ’ . Besides , the analysis of filtering out low-frequency components in Figure 6 ( Appendix A ) also emphasizes our statement . That is , compared with natural examples ’ , the special high-frequency components of adversarial examples show their serious misleading effects on standard models , yet are not enough to be fully responsible for the destructiveness to robust models .
This paper studies the properties of adversarial examples from a spatial and frequency perspective and shows that naturally trained models are more vulnerable to high-frequency components in adversarial examples. Perturbations for naturally trained models are disordered, but perturbations for adversarially trained models are related to image shape. Based on these observations, the authors find that smoother adversarially trained models can achieve better robustness.
science.mup_single_document_summarization.3685
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nDissecting Local Properties of Adversarial Examples\n\nMain Content:\n1 INTRODUCTION . Despite deep neural networks ( DNNs ) perform well in many fields ( He et al. , 2016 ; Devlin et al. , 2019 ) , their counter-intuitive vulnerability attracts increasing attention , both for safety-critical applications ( Sharif et al. , 2016 ) and the black-box mechanism of DNNs ( Fazlyab et al. , 2019 ) . DNNs have been found vulnerable to adversarial examples ( Szegedy et al. , 2014 ; Goodfellow et al. , 2015 ) , where small perturbations on the input can easily change the predictions of a well-trained DNN with high confidence . In computer vision , adversarial examples exhibit their destructiveness both in the digital world and the physical world ( Kurakin et al. , 2017 ) . Since then , how to alleviate the vulnerability of DNN so as to narrow the performance gap between adversarial/natural examples is another key issue . Existing methods including defensive distillation ( Papernot et al. , 2016 ) and pixel denoising ( Liao et al. , 2018 ) have shown their limitations due to follow-up attack strategies ( Carlini & Wagner , 2017 ) or gradient masking ( Athalye et al. , 2018 ) . Amongst them , adversarial training ( Goodfellow et al. , 2015 ; Madry et al. , 2018 ) and its variants ( Zhang et al. , 2019 ; Wang et al. , 2020b ) indicate their reliable robustness and outperform . Moreover , as a data augmentation method , adversarial training currently seems to rely on additional data ( Schmidt et al. , 2018 ; Rebuffi et al. , 2021 ) to further improve robustness , while is sensitive to some basic model hyper-parameters , e.g. , weight decay ( Pang et al. , 2021 ) . Apart from these , the effect of simply early stopping ( Rice et al. , 2020 ) even exceeds some promotion methods according to recent benchmarks ( Croce & Hein , 2020 ; Chen & Gu , 2020 ) . These studies arise our curiosity to further explore the relationship between adversarial examples and adversarial training , hoping to provide some new understanding . Recalling that high-frequency components can be potentially linked to adversarial examples ( Wang et al. , 2020a ; Yin et al. , 2019 ; Harder et al. , 2021 ) , however , few explorations discuss the relationship between high-frequency components and the destructiveness of adversarial examples . In this paper , we first demonstrate that high-frequency components of adversarial examples tend to mislead the standard DNNs , yet little impact on the adversarially robust models . We further show that adversarial examples statistically have more high-frequency components than natural ones , indicating relatively drastic local changes among pixels of adversarial examples . Since adversarial examples exhibit more semantically meaningful on robust models ( Tsipras et al. , 2019 ) , we further notice that adversarial examples show locally-consistent perturbations related to image shapes on adversariallytrained models , in contrast to disorderly perturbations on standard models . Both explorations on the frequency and spatial domain emphasize local properties of adversarial examples , and motivated by the local receptive field of the convolution kernels , we propose a locally intermediate response perspective to rethink the vulnerability of DNNs . Different from the existing global activation perspective ( Bai et al. 
, 2021 ; Xu et al. , 2019 ) , our local perspective reflects the joint effect of local features and the intermediate layers of the model . Based on the local perspective , we emphasize that large enough local response differences make it difficult for the network to treat an image and its potentially adversarial examples as one category , and demonstrate DNN models are naturally fragile at least attributed to it . Motivated by adversarially-trained models tend to have ‘ smooth ’ kernels ( Wang et al. , 2020a ) , we simply use the smoother kernels to alleviate local response differences on adversarially-trained models , which in turn affects the model robustness and reduces the robust overfitting ( Rice et al. , 2020 ) . To a certain extent , this explains why weight decay effectively affects model robustness ( Pang et al. , 2021 ) . Our main contributions are summarized as follows : • We first reveal some properties of adversarial examples in the frequency and spatial domain : 1 ) the high-frequency components of adversarial examples tend to mislead naturally-trained DNNs , yet have little impact on adversarially-trained models , and 2 ) adversarial examples have locally-consistent perturbations on adversarially-trained models , compared with disorderly local perturbations on naturally-trained models . • Then we introduce local response and emphasize its importance in the model adversarial robustness . That is , naturally-trained DNNs are often fragile , at least for non-ignorable local response differences through the same layer between potentially adversarial examples and natural ones . In contrast , adversarially-trained models effectively alleviate the local response differences . And the smoother adversarially-trained models show better adversarial robustness as they can reduce local response differences . • Finally we empirically study local response with generated adversarial examples . We further show that , compared with failed attacks , adversarial examples ( successful attacks ) statistically show larger local response differences with natural examples . Moreover , compared with adversarial examples generated by the model itself , those transferred by other models show markedly smaller local response differences . 2 RELATED WORK . Understandings of model vulnerability . Since the discovery of adversarial examples ( Szegedy et al. , 2014 ) , a number of understandings on model vulnerability has been developed . For instance , linear property of DNNs ( Goodfellow et al. , 2015 ) , submanifold ( Tanay & Griffin , 2016 ) and geometry of the manifold ( Gilmer et al. , 2018 ) were considered from the high-dimensional perspective ; the computing of Lipschitz constant ( Szegedy et al. , 2014 ; Fazlyab et al. , 2019 ) and lower/upper bounds ( Fawzi et al. , 2018 ; Weng et al. , 2018 ) were considered from the definition of model robustness ; non-robust features ( Ilyas et al. , 2019 ) , high-frequency components ( Wang et al. , 2020a ; Yin et al. , 2019 ) , high-rank features ( Jere et al. , 2020 ) and high-order interactions among pixels ( Ren et al. , 2021 ) were explored adversarial examples from the different perspectives on images , which imply our local perspective ; feature denosing ( Xie et al. , 2019 ) , robust pruning ( Madaan & Ju Hwang. , 2020 ) and activation suppressing ( Bai et al. , 2021 ) were focused on global intermediate activations of models . On the other hand , taken adversarial training into consideration , Tsipras et al . ( 2019 ) , Schmidt et al . 
( 2018 ) and Zhang et al . ( 2019 ) explored the trade-off between robustness and accuracy ; Wang et al . ( 2020a ) found adversarially-trained models tend to show smooth kernels ; Tsipras et al . ( 2019 ) and Zhang & Zhu ( 2018 ) argued adversarially robust models learned more shape-biased representations . Different from these studies , we characterize the adversarial examples from both frequency and spatial domain to emphasize local properties of adversarial examples , and propose a locally intermediate response to rethink the vulnerability of DNNs . Adversarial training . Adversarial training can be seen as a min-max optimization problem : min θ 1 n n∑ i=1 max x′i∈B ( x ) L ( fθ ( x ′ i ) , yi ) where f denotes a DNN model with parameters θ , and ( xi , yi ) denotes a pair of a natural example xi and its ground-truth label yi . Given a classification loss L , the inner maximization problem can be regarded as searching for suitable perturbations in boundary B to maximize loss , while the outer minimization problem is to optimize model parameters on adversarial examples { x′i } ni=1 generated from the inner maximization . 3 DIFFERENT PERTURBATIONS AFFECT THE VULNERABILITY OF THE MODEL . In this section , we first investigate adversarial examples in the frequency and spatial domain , and show the connections between models and their adversarial examples . Note that our threat model is a white-box model , and the fact that the adversarial example is only defined by the misleading result under the legal threat model , our findings suggest that the properties of the sampled examples can broadly reflect the fragile tendency of the model . Setup . We generate ` ∞ bounded adversarial examples by PGD-10 ( maximum perturbation = 8/255 and step size 2/255 ) with random start for the robust model ( Madry et al. , 2018 ) . Specifically , we use ResNet-18 ( He et al. , 2016 ) as the backbone to train the standard and adversarially-trained models for 100 epochs on CIFAR-10 ( Krizhevsky , 2009 ) . Following ( Pang et al. , 2021 ) , we use the SGD optimizer with momentum 0.9 , weight decay 5 × 10−4 and initial learning rate 0.1 , with a three-stage learning rate divided by 10 at 50 and 75 epoch respectively . The robustness of both models is evaluated by PGD-20 ( step size 1/255 ) . 3.1 THE DESTRUCTIVENESS OF HIGH-FREQUENCY COMPONENTS FROM ADVERSARIES . Inspired by ( Wang et al. , 2020a ) , we are naturally curious whether the adversarial examples cause considerable damage to the model mainly because of their high-frequency components . To answer this question , Figure 1 illustrates the trend of model performance and robustness on the test set with the high-frequency components increased ( the increase of the filtering scale denotes that more highfrequency components are added to the filtered images ) . Figure 1 ( a ) shows that , for standard models , the high-frequency components of natural examples promote classification and reach the model performance ( green line ) ; on the contrary , the performance of the filtered adversarial examples first rises to get the highest accuracy 47.5 % , and then drops rapidly to reach 0.0 % ( red line ) . Obviously , in the low-frequency range , the performance of natural and adversarial examples are quite close , yet more high-frequency components widen the difference . 
That is , the special high-frequency components caused by adversarial perturbations exhibit a clear destructive effect on standard models , and simply filter out them can effectively alleviate the destructiveness of adversaries even on standard models . However , for robust models , we show that the prediction performance finally reaches robustness without a rapid drop in Figure 1 ( b ) . But surprisingly , we find that the performance of filtered adversarial examples in some range exceeds the final robustness 47.5 % ( red line ) , reaching a maximum of 51.2 % . That is , although these high-frequency components do not exhibit a clear destructive effect , simply filtering out them has a positive impact on alleviating robust overfitting ( Rice et al. , 2020 ) . We then swap their high-frequency components between both examples controlled by a frequency threshold in Figure 1 ( c ) -1 ( d ) . For merged natural examples with high-frequency components from adversaries , the increase of the frequency threshold controls the accuracy increasing from the model robustness ( red line ) to the model performance ( green line ) , the opposite occurs on merged adversarial examples . These clearly illustrate the boost effect of the high-frequency components from natural examples and the destructive effect of the high-frequency components from adversarial examples . To further illustrate , we find that statistically , the main difference in the frequency domain between natural and adversarial examples is concentrated in the high-frequency region in Figure 2 . We visualize the logarithmic amplitude spectrum of both examples . Figure 2 ( a ) and 2 ( b ) show that , compare with natural examples ’ , the high-frequency components of the adversaries are hard to ignore . Figure 2 ( c ) further emphasizes that adversarial examples markedly show more high-frequency components , indicating relatively drastic local changes among pixels . This statistical difference explains the high detection rate of using Magnitude Fourier Spectrum ( Harder et al. , 2021 ) to detect adversarial examples . Furthermore , Figure 2 ( d ) and 2 ( e ) show that the high-frequency components of adversarial examples generated by robust models are less than those from standard models , yet still more than natural examples ’ . Besides , the analysis of filtering out low-frequency components in Figure 6 ( Appendix A ) also emphasizes our statement . That is , compared with natural examples ’ , the special high-frequency components of adversarial examples show their serious misleading effects on standard models , yet are not enough to be fully responsible for the destructiveness to robust models .", "role": "user" }, { "content": "This paper studies the properties of adversarial examples from a spatial and frequency perspective and shows that naturally trained models are more vulnerable to high-frequency components in adversarial examples. Perturbations for naturally trained models are disordered, but perturbations for adv-trained models are image shape related. Based on these observations, the authors find that smoother adversarially-trained can achieve better robustness.", "role": "assistant" } ]
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: Energy balance requires hypothalamic glutamate neurotransmission. Research Abstract: The importance of neuropeptides in the hypothalamus has been experimentally established. Due to difficulties in assessing function in vivo, the roles of the fast-acting neurotransmitters glutamate and GABA are largely unknown. Synaptic vesicular transporters (VGLUTs for glutamate and VGAT for GABA) are required for vesicular uptake and, consequently, synaptic release of neurotransmitters. Ventromedial hypothalamic (VMH) neurons are predominantly glutamatergic and express VGLUT2. To evaluate the role of glutamate release from VMH neurons, we generated mice lacking VGLUT2 selectively in SF1 neurons (a major subset of VMH neurons). These mice have hypoglycemia during fasting secondary to impaired fasting-induced increases in the glucose-raising pancreatic hormone glucagon and impaired induction in liver of mRNAs encoding PGC-1alpha and the gluconeogenic enzymes PEPCK and G6Pase. Similarly, these mice have defective counterregulatory responses to insulin-induced hypoglycemia and 2-deoxyglucose (an antimetabolite). Thus, glutamate release from VMH neurons is an important component of the neurocircuitry that functions to prevent hypoglycemia.
{"verdict": "SUPPORT", "evidence": ["Thus, glutamate release from VMH neurons is an important component of the neurocircuitry that functions to prevent hypoglycemia."]}
science.scifact_entailment.251
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nEnergy balance requires hypothalamic glutamate neurotransmission.\n\nResearch Abstract:\nThe importance of neuropeptides in the hypothalamus has been experimentally established.\nDue to difficulties in assessing function in vivo, the roles of the fast-acting neurotransmitters glutamate and GABA are largely unknown.\nSynaptic vesicular transporters (VGLUTs for glutamate and VGAT for GABA) are required for vesicular uptake and, consequently, synaptic release of neurotransmitters.\nVentromedial hypothalamic (VMH) neurons are predominantly glutamatergic and express VGLUT2.\nTo evaluate the role of glutamate release from VMH neurons, we generated mice lacking VGLUT2 selectively in SF1 neurons (a major subset of VMH neurons).\nThese mice have hypoglycemia during fasting secondary to impaired fasting-induced increases in the glucose-raising pancreatic hormone glucagon and impaired induction in liver of mRNAs encoding PGC-1alpha and the gluconeogenic enzymes PEPCK and G6Pase.\nSimilarly, these mice have defective counterregulatory responses to insulin-induced hypoglycemia and 2-deoxyglucose (an antimetabolite).\nThus, glutamate release from VMH neurons is an important component of the neurocircuitry that functions to prevent hypoglycemia.", "role": "user" }, { "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"Thus, glutamate release from VMH neurons is an important component of the neurocircuitry that functions to prevent hypoglycemia.\"]}", "role": "assistant" } ]
science.discomat_te
| Glass | SiO2 (mol%) | P2O5 (mol%) | CaO (mol%) | MgO (mol%) | Na2O (mol%) | Y parameter | | CaMg25 | 25.37 | 10.87 | 32.62 | 31.14 | - | 0.82 | | CaMg33 | 33.60 | 6.40 | 19.00 | 41.00 | - | 1.14 | | CaNa46 | 46.00 | 2.60 | 27.00 | - | 24.40 | 1.90 | | CaNa54 | 54.00 | 1.70 | 22.00 | - | 22.30 | 2.39 | Caption: Composition of the bioactive glasses and the respective Stevels parameter Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance: {"value": "xx", "type": "Other"} {"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"} Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line. Output:
{"value": "25.37", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "CaMg25"} {"value": "10.87", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "CaMg25"} {"value": "32.62", "type": "Glass_Compound_Amount", "constituent": "CaO", "unit": "mol", "material": "CaMg25"} {"value": "31.14", "type": "Glass_Compound_Amount", "constituent": "MgO", "unit": "mol", "material": "CaMg25"} {"value": "0.82", "type": "Other"} {"value": "33.60", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "CaMg33"} {"value": "6.40", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "CaMg33"} {"value": "19.00", "type": "Glass_Compound_Amount", "constituent": "CaO", "unit": "mol", "material": "CaMg33"} {"value": "41.00", "type": "Glass_Compound_Amount", "constituent": "MgO", "unit": "mol", "material": "CaMg33"} {"value": "1.14", "type": "Other"} {"value": "46.00", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "CaNa46"} {"value": "2.60", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "CaNa46"} {"value": "27.00", "type": "Glass_Compound_Amount", "constituent": "CaO", "unit": "mol", "material": "CaNa46"} {"value": "24.40", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "CaNa46"} {"value": "1.90", "type": "Other"} {"value": "54.00", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "CaNa54"} {"value": "1.70", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "CaNa54"} {"value": "22.00", "type": "Glass_Compound_Amount", "constituent": "CaO", "unit": "mol", "material": "CaNa54"} {"value": "22.30", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "CaNa54"} {"value": "2.39", "type": "Other"}
science.discomat_te.32
[ { "content": "| Glass | SiO2 (mol%) | P2O5 (mol%) | CaO (mol%) | MgO (mol%) | Na2O (mol%) | Y parameter |\n| CaMg25 | 25.37 | 10.87 | 32.62 | 31.14 | - | 0.82 |\n| CaMg33 | 33.60 | 6.40 | 19.00 | 41.00 | - | 1.14 |\n| CaNa46 | 46.00 | 2.60 | 27.00 | - | 24.40 | 1.90 |\n| CaNa54 | 54.00 | 1.70 | 22.00 | - | 22.30 | 2.39 |\n\nCaption: Composition of the bioactive glasses and the respective Stevels parameter\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"25.37\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"CaMg25\"}\n{\"value\": \"10.87\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"CaMg25\"}\n{\"value\": \"32.62\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaO\", \"unit\": \"mol\", \"material\": \"CaMg25\"}\n{\"value\": \"31.14\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"MgO\", \"unit\": \"mol\", \"material\": \"CaMg25\"}\n{\"value\": \"0.82\", \"type\": \"Other\"}\n{\"value\": \"33.60\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"CaMg33\"}\n{\"value\": \"6.40\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"CaMg33\"}\n{\"value\": \"19.00\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaO\", \"unit\": \"mol\", \"material\": \"CaMg33\"}\n{\"value\": \"41.00\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"MgO\", \"unit\": \"mol\", \"material\": \"CaMg33\"}\n{\"value\": \"1.14\", \"type\": \"Other\"}\n{\"value\": \"46.00\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"CaNa46\"}\n{\"value\": \"2.60\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"CaNa46\"}\n{\"value\": \"27.00\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaO\", \"unit\": \"mol\", \"material\": \"CaNa46\"}\n{\"value\": \"24.40\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"CaNa46\"}\n{\"value\": \"1.90\", \"type\": \"Other\"}\n{\"value\": \"54.00\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"CaNa54\"}\n{\"value\": \"1.70\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"CaNa54\"}\n{\"value\": \"22.00\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaO\", \"unit\": \"mol\", \"material\": \"CaNa54\"}\n{\"value\": \"22.30\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"CaNa54\"}\n{\"value\": \"2.39\", \"type\": \"Other\"}", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.BACKGROUND: Both recombinant FSH (r-FSH) and highly-purified, urinary FSH (HP-uFSH) are frequently used in ovulation induction associated with timed sexual intercourse. Their effectiveness is reported to be similar, and therefore the costs of treatment represent a major issue to be considered. Although several studies about costs in IVF have been published, data obtained in low-technology infertility treatments are still scarce. ABSTRACT.METHODS: Two hundred and sixty infertile women (184 with unexplained infertility, 76 with CC-resistant polycystic ovary syndrome) at their first treatment cycle were randomized and included in the study. Ovulation induction was accomplished by daily administration of rFSH or HP-uFSH according to a low-dose, step-up regimen aimed to obtain a monofollicular ovulation. A bi- or tri-follicular ovulation was anyway accepted, whereas hCG was withdrawn and the cycle cancelled when more than three follicles greater than or equal to 18 mm diameter were seen at ultrasound. The primary outcome measure was the cost of therapy per delivered baby, estimated according to a cost-minimization analysis. Secondary outcomes were the following: monofollicular ovulation rate, total FSH dose, cycle cancellation rate, length of the follicular phase, number of developing follicles (>12 mm diameter), endometrial thickness at hCG, incidence of twinning and ovarian hyperstimulation syndrome, delivery rate. ABSTRACT.RESULTS: The overall FSH dose needed to achieve ovulation was significantly lower with r-FSH, whereas all the other studied variables did not significantly differ with either treatments. However, a trend toward a higher delivery rate with r-FSH was observed in the whole group and also when results were considered subgrouping patients according to the indication to treatment. ABSTRACT.CONCLUSION: Considering the significantly lower number of vials/patient and the slight (although non-significant) increase in the delivery rate with r-FSH, the cost-minimization analysis showed a 9.4% reduction in the overall therapy cost per born baby in favor of r-FSH. Extracted Data:
[["recombinant FSH (r-FSH)", "highly-purified, urinary FSH (HP-uFSH)", "total FSH dose", "significantly decreased", "The overall FSH dose needed to achieve ovulation was significantly lower with r-FSH,"]]
science.evidence_inference.186
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: Both recombinant FSH (r-FSH) and highly-purified, urinary FSH (HP-uFSH) are frequently used in ovulation induction associated with timed sexual intercourse. Their effectiveness is reported to be similar, and therefore the costs of treatment represent a major issue to be considered. Although several studies about costs in IVF have been published, data obtained in low-technology infertility treatments are still scarce.\n\nABSTRACT.METHODS: Two hundred and sixty infertile women (184 with unexplained infertility, 76 with CC-resistant polycystic ovary syndrome) at their first treatment cycle were randomized and included in the study. Ovulation induction was accomplished by daily administration of rFSH or HP-uFSH according to a low-dose, step-up regimen aimed to obtain a monofollicular ovulation. A bi- or tri-follicular ovulation was anyway accepted, whereas hCG was withdrawn and the cycle cancelled when more than three follicles greater than or equal to 18 mm diameter were seen at ultrasound. The primary outcome measure was the cost of therapy per delivered baby, estimated according to a cost-minimization analysis. Secondary outcomes were the following: monofollicular ovulation rate, total FSH dose, cycle cancellation rate, length of the follicular phase, number of developing follicles (>12 mm diameter), endometrial thickness at hCG, incidence of twinning and ovarian hyperstimulation syndrome, delivery rate.\n\nABSTRACT.RESULTS: The overall FSH dose needed to achieve ovulation was significantly lower with r-FSH, whereas all the other studied variables did not significantly differ with either treatments. However, a trend toward a higher delivery rate with r-FSH was observed in the whole group and also when results were considered subgrouping patients according to the indication to treatment.\n\nABSTRACT.CONCLUSION: Considering the significantly lower number of vials/patient and the slight (although non-significant) increase in the delivery rate with r-FSH, the cost-minimization analysis showed a 9.4% reduction in the overall therapy cost per born baby in favor of r-FSH.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"recombinant FSH (r-FSH)\", \"highly-purified, urinary FSH (HP-uFSH)\", \"total FSH dose\", \"significantly decreased\", \"The overall FSH dose needed to achieve ovulation was significantly lower with r-FSH,\"]]", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.BACKGROUND: To improve continuity of care at hospital admission and discharge and to decrease medication errors pharmaceutical care programs are developed. This study aims to determine the cost-effectiveness of the COACH program in comparison with usual care from a societal perspective. ABSTRACT.METHODS: A controlled clinical trial was performed at the Internal Medicine department of a general teaching hospital. All admitted patients using at least one prescription drug were included. The COACH program consisted of medication reconciliation, patient counselling at discharge, and communication to healthcare providers in primary care. The primary outcome was the proportion of patients with an unplanned rehospitalisation within three months after discharge. Also, the number of quality-adjusted life-years (QALYs) was assessed. Cost data were collected using cost diaries. Uncertainty surrounding cost differences and incremental cost-effectiveness ratios between the groups was estimated by bootstrapping. ABSTRACT.RESULTS: In the COACH program, 168 patients were included and in usual care 151 patients. There was no significant difference in the proportion of patients with unplanned rehospitalisations (mean difference 0.17%, 95% CI -8.85;8.51), and in QALYs (mean difference -0.0085, 95% CI -0.0170;0.0001). Total costs for the COACH program were non-significantly lower than usual care (-€1160, 95% CI -3168;847). Cost-effectiveness planes showed that the program was not cost-effective compared with usual care for unplanned rehospitalisations and QALYs gained. ABSTRACT.CONCLUSION: The COACH program was not cost-effective in comparison with usual care. Future studies should focus on high risk patients and include other outcomes (e.g. adverse drug events) as this may increase the chances of a cost-effective intervention. <p> Dutch trial register NTR1519 Extracted Data:
[["COACH program", "Control", "Quality-adjusted life-years ", "no significant difference", "There was no significant difference in the proportion of patients with unplanned rehospitalisations (mean difference 0.17%, 95% CI -8.85;8.51), and in QALYs (mean difference -0.0085, 95% CI -0.0170;0.0001)."], ["COACH program", "Control", "Unplanned rehospitalisations", "no significant difference", "There was no significant difference in the proportion of patients with unplanned rehospitalisations (mean difference 0.17%, 95% CI -8.85;8.51), and in QALYs (mean difference -0.0085, 95% CI -0.0170;0.0001)."], ["COACH program", "Control", "Total costs", "no significant difference", "Total costs for the COACH program were non-significantly lower than usual care (-\u20ac1160, 95% CI -3168;847)"]]
science.evidence_inference.108
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: To improve continuity of care at hospital admission and discharge and to decrease medication errors pharmaceutical care programs are developed. This study aims to determine the cost-effectiveness of the COACH program in comparison with usual care from a societal perspective.\n\nABSTRACT.METHODS: A controlled clinical trial was performed at the Internal Medicine department of a general teaching hospital. All admitted patients using at least one prescription drug were included. The COACH program consisted of medication reconciliation, patient counselling at discharge, and communication to healthcare providers in primary care. The primary outcome was the proportion of patients with an unplanned rehospitalisation within three months after discharge. Also, the number of quality-adjusted life-years (QALYs) was assessed. Cost data were collected using cost diaries. Uncertainty surrounding cost differences and incremental cost-effectiveness ratios between the groups was estimated by bootstrapping.\n\nABSTRACT.RESULTS: In the COACH program, 168 patients were included and in usual care 151 patients. There was no significant difference in the proportion of patients with unplanned rehospitalisations (mean difference 0.17%, 95% CI -8.85;8.51), and in QALYs (mean difference -0.0085, 95% CI -0.0170;0.0001). Total costs for the COACH program were non-significantly lower than usual care (-€1160, 95% CI -3168;847). Cost-effectiveness planes showed that the program was not cost-effective compared with usual care for unplanned rehospitalisations and QALYs gained.\n\nABSTRACT.CONCLUSION: The COACH program was not cost-effective in comparison with usual care. Future studies should focus on high risk patients and include other outcomes (e.g. adverse drug events) as this may increase the chances of a cost-effective intervention. 
<p> Dutch trial register NTR1519\n\nExtracted Data:", "role": "user" }, { "content": "[[\"COACH program\", \"Control\", \"Quality-adjusted life-years \", \"no significant difference\", \"There was no significant difference in the proportion of patients with unplanned rehospitalisations (mean difference 0.17%, 95% CI -8.85;8.51), and in QALYs (mean difference -0.0085, 95% CI -0.0170;0.0001).\"], [\"COACH program\", \"Control\", \"Unplanned rehospitalisations\", \"no significant difference\", \"There was no significant difference in the proportion of patients with unplanned rehospitalisations (mean difference 0.17%, 95% CI -8.85;8.51), and in QALYs (mean difference -0.0085, 95% CI -0.0170;0.0001).\"], [\"COACH program\", \"Control\", \"Total costs\", \"no significant difference\", \"Total costs for the COACH program were non-significantly lower than usual care (-\\u20ac1160, 95% CI -3168;847)\"]]", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT: An oral marine protein supplement (MPS) is designed to promote hair growth in women with temporary thinning hair (Viviscal Extra Strength; Lifes2good, Inc., Chicago, IL). This double-blind, placebo-controlled study assessed the ability of MPS to promote terminal hair growth in adult women with self-perceived thinning hair associated with poor diet, stress, hormonal influences, or abnormal menstrual cycles. Adult women with thinning hair were randomized to receive MPS (N = 30) or placebo (N = 30) twice daily for 90 days. Digital images were obtained from a 4 cm2 area scalp target area. Each subject's hair was washed and shed hairs were collected and counted. After 90 days, these measures were repeated and subjects completed Quality of Life and Self-Assessment Questionnaires. MPS-treated subjects achieved a significant increase in the number of terminal hairs within the target area (P < 0.0001) which was significantly greater than placebo (P < 0.0001). MPS use also resulted in significantly less hair shedding (P = 0.002) and higher total Self-Assessment (P = 0.006) and Quality of Life Questionnaires scores (P = 0.035). There were no reported adverse events. MPS promotes hair growth and decreases hair loss in women suffering from temporary thinning hair. This trial is registered with ClinicalTrials.gov Identifier: NCT02297360. Extracted Data:
[["oral marine protein supplement (MPS)", "placebo", "mean number of terminal hairs", "significantly increased", "MPS-treated subjects achieved a significant increase in the number of terminal hairs within the target area (P \u003c 0.0001) which was significantly greater than placebo (P \u003c 0.0001)."], ["oral marine protein supplement (MPS)", "placebo", "hair shedding counts following hair washing", "significantly decreased", "MPS use also resulted in significantly less hair shedding (P = 0.002) and higher total Self-Assessment (P = 0.006) and Quality of Life Questionnaires scores (P = 0.035)."], ["oral marine protein supplement (MPS)", "placebo", "total scores on the Quality of Life Questionnaire", "significantly increased", "MPS use also resulted in significantly less hair shedding (P = 0.002) and higher total Self-Assessment (P = 0.006) and Quality of Life Questionnaires scores (P = 0.035)."]]
science.evidence_inference.50
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT: An oral marine protein supplement (MPS) is designed to promote hair growth in women with temporary thinning hair (Viviscal Extra Strength; Lifes2good, Inc., Chicago, IL). This double-blind, placebo-controlled study assessed the ability of MPS to promote terminal hair growth in adult women with self-perceived thinning hair associated with poor diet, stress, hormonal influences, or abnormal menstrual cycles. Adult women with thinning hair were randomized to receive MPS (N = 30) or placebo (N = 30) twice daily for 90 days. Digital images were obtained from a 4 cm2 area scalp target area. Each subject's hair was washed and shed hairs were collected and counted. After 90 days, these measures were repeated and subjects completed Quality of Life and Self-Assessment Questionnaires. MPS-treated subjects achieved a significant increase in the number of terminal hairs within the target area (P < 0.0001) which was significantly greater than placebo (P < 0.0001). MPS use also resulted in significantly less hair shedding (P = 0.002) and higher total Self-Assessment (P = 0.006) and Quality of Life Questionnaires scores (P = 0.035). There were no reported adverse events. MPS promotes hair growth and decreases hair loss in women suffering from temporary thinning hair. This trial is registered with ClinicalTrials.gov Identifier: NCT02297360.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"oral marine protein supplement (MPS)\", \"placebo\", \"mean number of terminal hairs\", \"significantly increased\", \"MPS-treated subjects achieved a significant increase in the number of terminal hairs within the target area (P \\u003c 0.0001) which was significantly greater than placebo (P \\u003c 0.0001).\"], [\"oral marine protein supplement (MPS)\", \"placebo\", \"hair shedding counts following hair washing\", \"significantly decreased\", \"MPS use also resulted in significantly less hair shedding (P = 0.002) and higher total Self-Assessment (P = 0.006) and Quality of Life Questionnaires scores (P = 0.035).\"], [\"oral marine protein supplement (MPS)\", \"placebo\", \"total scores on the Quality of Life Questionnaire\", \"significantly increased\", \"MPS use also resulted in significantly less hair shedding (P = 0.002) and higher total Self-Assessment (P = 0.006) and Quality of Life Questionnaires scores (P = 0.035).\"]]", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.OBJECTIVES: Primary: To compare the effectiveness of intensive group and individual interventions for smoking cessation in a primary health care setting; secondary: to identify the variables associated with smoking cessation. ABSTRACT.METHODS: Three-pronged clinical trial with randomisation at the individual level. We performed the following: an intensive individual intervention (III), an intensive group intervention (IGI) and a minimal intervention (MI). Included in the study were smokers who were prepared to quit smoking. Excluded from the study were individuals aged less than 18 years or with severe mental conditions or terminal illnesses. The outcome measure was continued abstinence at 12 months confirmed through CO-oximetry (CO). The analysis was based on intention to treat. ABSTRACT.RESULTS: In total, 287 smokers were recruited: 81 in the III, 111 in the IGI, and 95 in the MI. Continued abstinence at 12 months confirmed through CO was 7.4% in the III, 5.4% in the IGI, and 1% in the MI. No significant differences were noted between III and MI on the one hand, and between IGI and MI on the other [RR 7.04 (0.9-7.2) and RR 5.1 (0.6-41.9), respectively]. No differences were noted between IGI and III [RR 0.7 (0.2-2.2)]. In multivariate analysis, only overall visit length showed a statistically significant association with smoking cessation. ABSTRACT.CONCLUSIONS: The effectiveness of intensive smoking interventions in this study was lower than expected. No statistically significant differences were found between the results of individual and group interventions. ABSTRACT.TRIAL REGISTRATION NUMBER: ISRCTN32323770 Extracted Data:
[["Intensive group intervention (IGI)", "Intensive individual intervention (III)", "Continued abstinence at 1 year", "no significant difference", "Continued abstinence at 12 months confirmed through CO was 7.4% in the III, 5.4% in the IGI, and 1% in the MI. No significant differences were noted between III and MI on the one hand, and between IGI and MI on the other [RR 7.04 (0.9-7.2) and RR 5.1 (0.6-41.9), respectively]."], ["Intensive group intervention (IGI)", "Intensive individual intervention (III)", "Continued abstinence at 1 year", "no significant difference", "No differences were noted between IGI and III [RR 0.7 (0.2-2.2)]."], ["Intensive group intervention (IGI)", "Minimal intervention (MI)", "Continued abstinence at 1 year", "no significant difference", "Continued abstinence at 12 months confirmed through CO was 7.4% in the III, 5.4% in the IGI, and 1% in the MI. No significant differences were noted between III and MI on the one hand, and between IGI and MI on the other [RR 7.04 (0.9-7.2) and RR 5.1 (0.6-41.9), respectively]."], ["Intensive individual intervention (III)", "Minimal intervention (MI)", "Continued abstinence at 1 year", "no significant difference", "Continued abstinence at 12 months confirmed through CO was 7.4% in the III, 5.4% in the IGI, and 1% in the MI. No significant differences were noted between III and MI on the one hand, and between IGI and MI on the other [RR 7.04 (0.9-7.2) and RR 5.1 (0.6-41.9), respectively]."]]
science.evidence_inference.38
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.OBJECTIVES: Primary: To compare the effectiveness of intensive group and individual interventions for smoking cessation in a primary health care setting; secondary: to identify the variables associated with smoking cessation.\n\nABSTRACT.METHODS: Three-pronged clinical trial with randomisation at the individual level. We performed the following: an intensive individual intervention (III), an intensive group intervention (IGI) and a minimal intervention (MI). Included in the study were smokers who were prepared to quit smoking. Excluded from the study were individuals aged less than 18 years or with severe mental conditions or terminal illnesses. The outcome measure was continued abstinence at 12 months confirmed through CO-oximetry (CO). The analysis was based on intention to treat.\n\nABSTRACT.RESULTS: In total, 287 smokers were recruited: 81 in the III, 111 in the IGI, and 95 in the MI. Continued abstinence at 12 months confirmed through CO was 7.4% in the III, 5.4% in the IGI, and 1% in the MI. No significant differences were noted between III and MI on the one hand, and between IGI and MI on the other [RR 7.04 (0.9-7.2) and RR 5.1 (0.6-41.9), respectively]. No differences were noted between IGI and III [RR 0.7 (0.2-2.2)]. In multivariate analysis, only overall visit length showed a statistically significant association with smoking cessation.\n\nABSTRACT.CONCLUSIONS: The effectiveness of intensive smoking interventions in this study was lower than expected. No statistically significant differences were found between the results of individual and group interventions.\n\nABSTRACT.TRIAL REGISTRATION NUMBER: ISRCTN32323770\n\nExtracted Data:", "role": "user" }, { "content": "[[\"Intensive group intervention (IGI)\", \"Intensive individual intervention (III)\", \"Continued abstinence at 1 year\", \"no significant difference\", \"Continued abstinence at 12 months confirmed through CO was 7.4% in the III, 5.4% in the IGI, and 1% in the MI. No significant differences were noted between III and MI on the one hand, and between IGI and MI on the other [RR 7.04 (0.9-7.2) and RR 5.1 (0.6-41.9), respectively].\"], [\"Intensive group intervention (IGI)\", \"Intensive individual intervention (III)\", \"Continued abstinence at 1 year\", \"no significant difference\", \"No differences were noted between IGI and III [RR 0.7 (0.2-2.2)].\"], [\"Intensive group intervention (IGI)\", \"Minimal intervention (MI)\", \"Continued abstinence at 1 year\", \"no significant difference\", \"Continued abstinence at 12 months confirmed through CO was 7.4% in the III, 5.4% in the IGI, and 1% in the MI. 
No significant differences were noted between III and MI on the one hand, and between IGI and MI on the other [RR 7.04 (0.9-7.2) and RR 5.1 (0.6-41.9), respectively].\"], [\"Intensive individual intervention (III)\", \"Minimal intervention (MI)\", \"Continued abstinence at 1 year\", \"no significant difference\", \"Continued abstinence at 12 months confirmed through CO was 7.4% in the III, 5.4% in the IGI, and 1% in the MI. No significant differences were noted between III and MI on the one hand, and between IGI and MI on the other [RR 7.04 (0.9-7.2) and RR 5.1 (0.6-41.9), respectively].\"]]", "role": "assistant" } ]
science.discomat_te
| Sample | Density in gcm-3 | Durability in ml HCl | T g in degC | a 100-300degC in 10-7 K-1 | | A | 2.4437+-0.0043 | 2.03+-0.12 | 569+-5 | 106+-5 | | B | 2.4444+-0.0003 | 2.67+-0.06 | 558+-5 | 107+-5 | | C | 2.4528+-0.0039 | 1.29+-0.03 | 553+-5 | 109+-5 | | D | 2.4591+-0.0005 | 1.95+-0.05 | 555+-5 | 105+-5 | | E | 2.4798+-0.0006 | 1.65+-0.05 | 554+-5 | 105+-5 | | F | 2.4629+-0.0008 | 1.93+-0.03 | 543+-5 | 117+-5 | | G | 2.5420+-0.0030 | 1.70+-0.06 | 535+-5 | 109+-5 | | H | 2.6155+-0.0027 | 1.58+-0.08 | 524+-5 | 92+-5 | | I | 2.5303+-0.0012 | 2.03+-0.16 | 546+-5 | 117+-5 | | K | 2.4889+-0.0014 | 1.85+-0.09 | 568+-5 | 107+-5 | | L | 2.5315+-0.0003 | 1.63+-0.10 | 540+-5 | 113+-5 | | M | 2.4993+-0.0012 | 1.51+-0.11 | 534+-5 | 115+-5 | Caption: Density, chemical durability (in ml HCl), glass transformation temperature, T g, and linear thermal expansion coefficient, a, of the glass samples studied Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance: {"value": "xx", "type": "Other"} {"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"} Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line. Output:
{"value": "2.4437", "type": "Other"} {"value": "2.03", "type": "Other"} {"value": "569", "type": "Other"} {"value": "106", "type": "Other"} {"value": "2.4444", "type": "Other"} {"value": "2.67", "type": "Other"} {"value": "558", "type": "Other"} {"value": "107", "type": "Other"} {"value": "2.4528", "type": "Other"} {"value": "1.29", "type": "Other"} {"value": "553", "type": "Other"} {"value": "109", "type": "Other"} {"value": "2.4591", "type": "Other"} {"value": "1.95", "type": "Other"} {"value": "555", "type": "Other"} {"value": "105", "type": "Other"} {"value": "2.4798", "type": "Other"} {"value": "1.65", "type": "Other"} {"value": "554", "type": "Other"} {"value": "105", "type": "Other"} {"value": "2.4629", "type": "Other"} {"value": "1.93", "type": "Other"} {"value": "543", "type": "Other"} {"value": "117", "type": "Other"} {"value": "2.5420", "type": "Other"} {"value": "1.70", "type": "Other"} {"value": "535", "type": "Other"} {"value": "109", "type": "Other"} {"value": "2.6155", "type": "Other"} {"value": "1.58", "type": "Other"} {"value": "524", "type": "Other"} {"value": "92", "type": "Other"} {"value": "2.5303", "type": "Other"} {"value": "2.03", "type": "Other"} {"value": "546", "type": "Other"} {"value": "117", "type": "Other"} {"value": "2.4889", "type": "Other"} {"value": "1.85", "type": "Other"} {"value": "568", "type": "Other"} {"value": "107", "type": "Other"} {"value": "2.5315", "type": "Other"} {"value": "1.63", "type": "Other"} {"value": "540", "type": "Other"} {"value": "113", "type": "Other"} {"value": "2.4993", "type": "Other"} {"value": "1.51", "type": "Other"} {"value": "534", "type": "Other"} {"value": "115", "type": "Other"}
science.discomat_te.145
[ { "content": "| Sample | Density in gcm-3 | Durability in ml HCl | T g in degC | a 100-300degC in 10-7 K-1 |\n| A | 2.4437+-0.0043 | 2.03+-0.12 | 569+-5 | 106+-5 |\n| B | 2.4444+-0.0003 | 2.67+-0.06 | 558+-5 | 107+-5 |\n| C | 2.4528+-0.0039 | 1.29+-0.03 | 553+-5 | 109+-5 |\n| D | 2.4591+-0.0005 | 1.95+-0.05 | 555+-5 | 105+-5 |\n| E | 2.4798+-0.0006 | 1.65+-0.05 | 554+-5 | 105+-5 |\n| F | 2.4629+-0.0008 | 1.93+-0.03 | 543+-5 | 117+-5 |\n| G | 2.5420+-0.0030 | 1.70+-0.06 | 535+-5 | 109+-5 |\n| H | 2.6155+-0.0027 | 1.58+-0.08 | 524+-5 | 92+-5 |\n| I | 2.5303+-0.0012 | 2.03+-0.16 | 546+-5 | 117+-5 |\n| K | 2.4889+-0.0014 | 1.85+-0.09 | 568+-5 | 107+-5 |\n| L | 2.5315+-0.0003 | 1.63+-0.10 | 540+-5 | 113+-5 |\n| M | 2.4993+-0.0012 | 1.51+-0.11 | 534+-5 | 115+-5 |\n\nCaption: Density, chemical durability (in ml HCl), glass transformation temperature, T g, and linear thermal expansion coefficient, a, of the glass samples studied\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"2.4437\", \"type\": \"Other\"}\n{\"value\": \"2.03\", \"type\": \"Other\"}\n{\"value\": \"569\", \"type\": \"Other\"}\n{\"value\": \"106\", \"type\": \"Other\"}\n{\"value\": \"2.4444\", \"type\": \"Other\"}\n{\"value\": \"2.67\", \"type\": \"Other\"}\n{\"value\": \"558\", \"type\": \"Other\"}\n{\"value\": \"107\", \"type\": \"Other\"}\n{\"value\": \"2.4528\", \"type\": \"Other\"}\n{\"value\": \"1.29\", \"type\": \"Other\"}\n{\"value\": \"553\", \"type\": \"Other\"}\n{\"value\": \"109\", \"type\": \"Other\"}\n{\"value\": \"2.4591\", \"type\": \"Other\"}\n{\"value\": \"1.95\", \"type\": \"Other\"}\n{\"value\": \"555\", \"type\": \"Other\"}\n{\"value\": \"105\", \"type\": \"Other\"}\n{\"value\": \"2.4798\", \"type\": \"Other\"}\n{\"value\": \"1.65\", \"type\": \"Other\"}\n{\"value\": \"554\", \"type\": \"Other\"}\n{\"value\": \"105\", \"type\": \"Other\"}\n{\"value\": \"2.4629\", \"type\": \"Other\"}\n{\"value\": \"1.93\", \"type\": \"Other\"}\n{\"value\": \"543\", \"type\": \"Other\"}\n{\"value\": \"117\", \"type\": \"Other\"}\n{\"value\": \"2.5420\", \"type\": \"Other\"}\n{\"value\": \"1.70\", \"type\": \"Other\"}\n{\"value\": \"535\", \"type\": \"Other\"}\n{\"value\": \"109\", \"type\": \"Other\"}\n{\"value\": \"2.6155\", \"type\": \"Other\"}\n{\"value\": \"1.58\", \"type\": \"Other\"}\n{\"value\": \"524\", \"type\": \"Other\"}\n{\"value\": \"92\", \"type\": \"Other\"}\n{\"value\": \"2.5303\", \"type\": \"Other\"}\n{\"value\": \"2.03\", \"type\": \"Other\"}\n{\"value\": \"546\", \"type\": \"Other\"}\n{\"value\": \"117\", \"type\": \"Other\"}\n{\"value\": \"2.4889\", \"type\": \"Other\"}\n{\"value\": \"1.85\", \"type\": \"Other\"}\n{\"value\": \"568\", \"type\": \"Other\"}\n{\"value\": \"107\", \"type\": \"Other\"}\n{\"value\": \"2.5315\", \"type\": \"Other\"}\n{\"value\": \"1.63\", \"type\": \"Other\"}\n{\"value\": \"540\", \"type\": \"Other\"}\n{\"value\": \"113\", \"type\": \"Other\"}\n{\"value\": \"2.4993\", \"type\": \"Other\"}\n{\"value\": \"1.51\", \"type\": \"Other\"}\n{\"value\": \"534\", \"type\": \"Other\"}\n{\"value\": \"115\", \"type\": \"Other\"}", "role": "assistant" } ]
science.discomat_te
| Name | SiO2 | Al2O3 | Na2O | S (cm2/g) | | A9 | 54.8 | 8.7 | 36.5 | 368 | | A11 | 52.6 | 10.8 | 36.6 | 497 | | A13 | 52.0 | 12.6 | 35.5 | 497 | | A15 | 49.2 | 14.7 | 36.0 | 434 | | A17 | 46.0 | 16.0 | 38.0 | 429 | | A19 | 43.6 | 19.2 | 37.2 | 426 | Caption: Composition (% molar oxide) and specific surface area of the aluminosilicate powders Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance: {"value": "xx", "type": "Other"} {"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"} Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line. Output:
{"value": "54.8", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "A9"} {"value": "8.7", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "mol", "material": "A9"} {"value": "36.5", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "A9"} {"value": "368", "type": "Other"} {"value": "52.6", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "A11"} {"value": "10.8", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "mol", "material": "A11"} {"value": "36.6", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "A11"} {"value": "497", "type": "Other"} {"value": "52.0", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "A13"} {"value": "12.6", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "mol", "material": "A13"} {"value": "35.5", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "A13"} {"value": "497", "type": "Other"} {"value": "49.2", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "A15"} {"value": "14.7", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "mol", "material": "A15"} {"value": "36.0", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "A15"} {"value": "434", "type": "Other"} {"value": "46.0", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "A17"} {"value": "16.0", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "mol", "material": "A17"} {"value": "38.0", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "A17"} {"value": "429", "type": "Other"} {"value": "43.6", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "A19"} {"value": "19.2", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "mol", "material": "A19"} {"value": "37.2", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "A19"} {"value": "426", "type": "Other"}
science.discomat_te.264
[ { "content": "| Name | SiO2 | Al2O3 | Na2O | S (cm2/g) |\n| A9 | 54.8 | 8.7 | 36.5 | 368 |\n| A11 | 52.6 | 10.8 | 36.6 | 497 |\n| A13 | 52.0 | 12.6 | 35.5 | 497 |\n| A15 | 49.2 | 14.7 | 36.0 | 434 |\n| A17 | 46.0 | 16.0 | 38.0 | 429 |\n| A19 | 43.6 | 19.2 | 37.2 | 426 |\n\nCaption: Composition (% molar oxide) and specific surface area of the aluminosilicate powders\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"54.8\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"A9\"}\n{\"value\": \"8.7\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"mol\", \"material\": \"A9\"}\n{\"value\": \"36.5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"A9\"}\n{\"value\": \"368\", \"type\": \"Other\"}\n{\"value\": \"52.6\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"A11\"}\n{\"value\": \"10.8\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"mol\", \"material\": \"A11\"}\n{\"value\": \"36.6\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"A11\"}\n{\"value\": \"497\", \"type\": \"Other\"}\n{\"value\": \"52.0\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"A13\"}\n{\"value\": \"12.6\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"mol\", \"material\": \"A13\"}\n{\"value\": \"35.5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"A13\"}\n{\"value\": \"497\", \"type\": \"Other\"}\n{\"value\": \"49.2\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"A15\"}\n{\"value\": \"14.7\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"mol\", \"material\": \"A15\"}\n{\"value\": \"36.0\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"A15\"}\n{\"value\": \"434\", \"type\": \"Other\"}\n{\"value\": \"46.0\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"A17\"}\n{\"value\": \"16.0\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"mol\", \"material\": \"A17\"}\n{\"value\": \"38.0\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"A17\"}\n{\"value\": \"429\", \"type\": \"Other\"}\n{\"value\": \"43.6\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"A19\"}\n{\"value\": \"19.2\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"mol\", \"material\": \"A19\"}\n{\"value\": \"37.2\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"A19\"}\n{\"value\": \"426\", \"type\": \"Other\"}", "role": "assistant" } ]
science.scierc_ner
Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context. Input Abstract: We apply a decision tree based approach to pronoun resolution in spoken dialogue. Our system deals with pronouns with NP-and non-NP-antecedents. We present a set of features designed for pronoun resolution in spoken dialogue and determine the most promising features. We evaluate the system on twenty Switchboard dialogues and show that it compares well to Byron 's (2002) manually tuned system.
Output Entities: {"Generic": [], "Material": ["Switchboard dialogues"], "Method": ["decision tree based approach", "Byron \u0027s (2002) manually tuned system"], "Metric": [], "OtherScientificTerm": ["features", "pronouns", "NP - and non-NP-antecedents"], "Task": ["spoken dialogue", "pronoun resolution"]}
science.scierc_ner.10
[ { "content": "Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.\n\nInput Abstract:\n\nWe apply a decision tree based approach to pronoun resolution in spoken dialogue. Our system deals with pronouns with NP-and non-NP-antecedents. We present a set of features designed for pronoun resolution in spoken dialogue and determine the most promising features. We evaluate the system on twenty Switchboard dialogues and show that it compares well to Byron 's (2002) manually tuned system.", "role": "user" }, { "content": "Output Entities:\n\n{\"Generic\": [], \"Material\": [\"Switchboard dialogues\"], \"Method\": [\"decision tree based approach\", \"Byron \\u0027s (2002) manually tuned system\"], \"Metric\": [], \"OtherScientificTerm\": [\"features\", \"pronouns\", \"NP - and non-NP-antecedents\"], \"Task\": [\"spoken dialogue\", \"pronoun resolution\"]}", "role": "assistant" } ]
science.scierc_ner
Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context. Input Abstract: In some auction domains, there is uncertainty regarding the final availability of the goods being auctioned off. For example, a government may auction off spectrum from its public safety network, but it may need this spectrum back in times of emergency. In such a domain, standard combinatorial auctions perform poorly because they lead to violations of individual rationality (IR), even in expectation, and to very low efficiency. In this paper, we study the design of core-selecting payment rules for such domains. Surprisingly, we show that in this new domain, there does not exist a payment rule with is guaranteed to be ex-post core-selecting. However, we show that by designing rules that are 'execution-contingent,' i.e., by charging payments that are conditioned on the realization of the availability of the goods, we can reduce IR violations. We design two core-selecting rules that always satisfy IR in expectation. To study the performance of our rules we perform a computational Bayes-Nash equilibrium analysis. We show that, in equilibrium, our new rules have better incentives, higher efficiency, and a lower rate of ex-post IR violations than standard core-selecting rules.
Output Entities: {"Generic": [], "Material": [], "Method": ["combinatorial auctions", "computational Bayes-Nash equilibrium analysis"], "Metric": ["rate of ex-post IR violations"], "OtherScientificTerm": ["core-selecting rules", "individual rationality (IR)", "payment rule", "violations of individual rationality (IR)"], "Task": ["design of core-selecting payment rules", "auction domains"]}
science.scierc_ner.36
[ { "content": "Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.\n\nInput Abstract:\n\nIn some auction domains, there is uncertainty regarding the final availability of the goods being auctioned off. For example, a government may auction off spectrum from its public safety network, but it may need this spectrum back in times of emergency. In such a domain, standard combinatorial auctions perform poorly because they lead to violations of individual rationality (IR), even in expectation, and to very low efficiency. In this paper, we study the design of core-selecting payment rules for such domains. Surprisingly, we show that in this new domain, there does not exist a payment rule with is guaranteed to be ex-post core-selecting. However, we show that by designing rules that are 'execution-contingent,' i.e., by charging payments that are conditioned on the realization of the availability of the goods, we can reduce IR violations. We design two core-selecting rules that always satisfy IR in expectation. To study the performance of our rules we perform a computational Bayes-Nash equilibrium analysis. We show that, in equilibrium, our new rules have better incentives, higher efficiency, and a lower rate of ex-post IR violations than standard core-selecting rules.", "role": "user" }, { "content": "Output Entities:\n\n{\"Generic\": [], \"Material\": [], \"Method\": [\"combinatorial auctions\", \"computational Bayes-Nash equilibrium analysis\"], \"Metric\": [\"rate of ex-post IR violations\"], \"OtherScientificTerm\": [\"core-selecting rules\", \"individual rationality (IR)\", \"payment rule\", \"violations of individual rationality (IR)\"], \"Task\": [\"design of core-selecting payment rules\", \"auction domains\"]}", "role": "assistant" } ]
science.discomat_te
| Conditions | i H c (kA/m) | M s (Am2/kg) | M r (Am2/kg) | | As-casting | 10 | 107 | 10 | | Heat-treated at 948K | 101 | 105 | 46 | | Heat-treated at 983K | 70 | 106 | 44 | | Heat-treated at 1153K | 56 | 109 | 42 | Caption: Magnetic properties for Fe61Co10Nd3Y6B20 alloy at different conditions Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance: {"value": "xx", "type": "Other"} {"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"} Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line. Output:
{"value": "10", "type": "Other"} {"value": "107", "type": "Other"} {"value": "10", "type": "Other"} {"value": "101", "type": "Other"} {"value": "105", "type": "Other"} {"value": "46", "type": "Other"} {"value": "70", "type": "Other"} {"value": "106", "type": "Other"} {"value": "44", "type": "Other"} {"value": "56", "type": "Other"} {"value": "109", "type": "Other"} {"value": "42", "type": "Other"}
science.discomat_te.211
[ { "content": "| Conditions | i H c (kA/m) | M s (Am2/kg) | M r (Am2/kg) |\n| As-casting | 10 | 107 | 10 |\n| Heat-treated at 948K | 101 | 105 | 46 |\n| Heat-treated at 983K | 70 | 106 | 44 |\n| Heat-treated at 1153K | 56 | 109 | 42 |\n\nCaption: Magnetic properties for Fe61Co10Nd3Y6B20 alloy at different conditions\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"10\", \"type\": \"Other\"}\n{\"value\": \"107\", \"type\": \"Other\"}\n{\"value\": \"10\", \"type\": \"Other\"}\n{\"value\": \"101\", \"type\": \"Other\"}\n{\"value\": \"105\", \"type\": \"Other\"}\n{\"value\": \"46\", \"type\": \"Other\"}\n{\"value\": \"70\", \"type\": \"Other\"}\n{\"value\": \"106\", \"type\": \"Other\"}\n{\"value\": \"44\", \"type\": \"Other\"}\n{\"value\": \"56\", \"type\": \"Other\"}\n{\"value\": \"109\", \"type\": \"Other\"}\n{\"value\": \"42\", \"type\": \"Other\"}", "role": "assistant" } ]
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: There are various works Cheng et al., 2018; Conneau et al., 2017; Fu et al., 2018; <cite>Hill et al., 2016</cite>; Iyyer et al., 2015; Kim, 2014; Kiros et al., 2015; Kusner et al., 2015; Le and Mikolov, 2014; Clinchant and Perronnin, 2013; Shen et al., 2018; Torki, 2018; Zhao et al., 2015; Zhou et al., 2018 ) that propose to build effective sentence-level or document-level representations based on word embeddings.
["Background"]
science.multicite_intent_classification.457
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nThere are various works Cheng et al., 2018; Conneau et al., 2017; Fu et al., 2018; <cite>Hill et al., 2016</cite>; Iyyer et al., 2015; Kim, 2014; Kiros et al., 2015; Kusner et al., 2015; Le and Mikolov, 2014; Clinchant and Perronnin, 2013; Shen et al., 2018; Torki, 2018; Zhao et al., 2015; Zhou et al., 2018 ) that propose to build effective sentence-level or document-level representations based on word embeddings.", "role": "user" }, { "content": "[\"Background\"]", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT: Background: Heavy exercise cause muscle damage associated with production of inflammatory agents. The purpose of present study was to determine the effect of acute and 14-day Coenzyme Q10 supplementation on inflammatory, blood lactate and muscle damage in male middle-distance runners. <p> Methods: Eighteen male middle-distance runners in a randomized and quasi experimental study were allocated into two equal groups: supplement group (n=9, Coenzyme Q10: 5mg/kg/day) and placebo group (n= 9, Dextrose: 5mg/kg/day). After acute (1day) and 14-day supplementation, all subjects were participated in a training like running (competitive 3000 meters). Blood samples were obtained in the four phases: one hour before and 18-24 hours after two running protocols. Lactate, serum interleukin-6 (IL-6), tumor necrosis factor-alpha (TNF-alpha), C-reactive protein (CRP) and creatine kinase (CK) were analyzed. Repeated ANOVA and Bonferuni as a post hoc tests were used to determine the changes in four stages. Differences between groups were determined by t-test. <p> Results: The results showed that acute and short-term Coenzyme Q10 supplementation had not significant effect on basal parameters. The acute coenzyme Q10 supplementation attenuated only the exercise-induced increase in response of the plasma CRP. The short-term (14-day) coenzyme Q10 supplementation attenuated the exercise-induced increase in response of the lactate, serum interleukin- 6, tumor necrosis factor-alpha, and CRP in male middle-distance runners. However, the acute and short-term coenzyme Q10 supplementation had not any significant effect on the exerciseinduced increase response of total serum creatine kinase. <p> Conclusion: Based on the present results, it can be concluded that the 14-day coenzyme Q10 supplementation (5mg.kg-1.day-1) is more effective than the acute supplementation to overcome the exercise-induced adverse responses in some oxidative, inflammatory and biochemical parameters. Therefore, short-term coenzyme Q10 supplementation is recommended to reduce exercise-induced adverse consequences. Extracted Data:
[["14-day coenzyme Q10 supplementation (5mg.kg-1.day-1)", "acute supplementation of Q10", "overcome the exercise-induced adverse responses", "significantly increased", "Based on the present results, it can be concluded that the 14-day coenzyme Q10 supplementation\r\r\r\r\n(5mg.kg-1.day-1) is more effective than the acute supplementation to overcome the\r\r\r\r\nexercise-induced adverse responses in some oxidative, inflammatory and biochemical parameters.\r\r\r\r\nTherefore, short-term coenzyme Q10 supplementation is recommended to reduce exercise-induced\r\r\r\r\nadverse consequences."], ["acute coenzyme Q10 supplementation ", "placebo", "exercise-induced increase in response of the plasma CRP", "significantly decreased", "The acute coenzyme Q10 supplementation attenuated only the\r\r\r\r\nexercise-induced increase in response of the plasma CRP. The short-term (14-day) coenzyme Q10\r\r\r\r\nsupplementation attenuated the exercise-induced increase in response of the lactate, serum interleukin-\r\r\r\r\n6, tumor necrosis factor-alpha, and CRP in male middle-distance runners. "], ["coenzyme Q10 supplementation ", "placebo", "levels of serum CK", "no significant difference", "the acute\r\r\r\r\nand short-term coenzyme Q10 supplementation had not any significant effect on the exerciseinduced\r\r\r\r\nincrease response of total serum creatine kinase."]]
science.evidence_inference.93
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT: \nBackground: Heavy exercise cause muscle damage associated with production of inflammatory\nagents. The purpose of present study was to determine the effect of acute and 14-day Coenzyme Q10\nsupplementation on inflammatory, blood lactate and muscle damage in male middle-distance runners.\n <p> \nMethods: Eighteen male middle-distance runners in a randomized and quasi experimental study\nwere allocated into two equal groups: supplement group (n=9, Coenzyme Q10: 5mg/kg/day) and\nplacebo group (n= 9, Dextrose: 5mg/kg/day). After acute (1day) and 14-day supplementation, all\nsubjects were participated in a training like running (competitive 3000 meters). Blood samples were\nobtained in the four phases: one hour before and 18-24 hours after two running protocols. Lactate,\nserum interleukin-6 (IL-6), tumor necrosis factor-alpha (TNF-alpha), C-reactive protein (CRP) and\ncreatine kinase (CK) were analyzed. Repeated ANOVA and Bonferuni as a post hoc tests were used\nto determine the changes in four stages. Differences between groups were determined by t-test.\n <p> \nResults: The results showed that acute and short-term Coenzyme Q10 supplementation had not\nsignificant effect on basal parameters. The acute coenzyme Q10 supplementation attenuated only the\nexercise-induced increase in response of the plasma CRP. The short-term (14-day) coenzyme Q10\nsupplementation attenuated the exercise-induced increase in response of the lactate, serum interleukin-\n6, tumor necrosis factor-alpha, and CRP in male middle-distance runners. 
However, the acute\nand short-term coenzyme Q10 supplementation had not any significant effect on the exerciseinduced\nincrease response of total serum creatine kinase.\n <p> \nConclusion: Based on the present results, it can be concluded that the 14-day coenzyme Q10 supplementation\n(5mg.kg-1.day-1) is more effective than the acute supplementation to overcome the\nexercise-induced adverse responses in some oxidative, inflammatory and biochemical parameters.\nTherefore, short-term coenzyme Q10 supplementation is recommended to reduce exercise-induced\nadverse consequences.\n\n\nExtracted Data:", "role": "user" }, { "content": "[[\"14-day coenzyme Q10 supplementation (5mg.kg-1.day-1)\", \"acute supplementation of Q10\", \"overcome the exercise-induced adverse responses\", \"significantly increased\", \"Based on the present results, it can be concluded that the 14-day coenzyme Q10 supplementation\\r\\r\\r\\r\\n(5mg.kg-1.day-1) is more effective than the acute supplementation to overcome the\\r\\r\\r\\r\\nexercise-induced adverse responses in some oxidative, inflammatory and biochemical parameters.\\r\\r\\r\\r\\nTherefore, short-term coenzyme Q10 supplementation is recommended to reduce exercise-induced\\r\\r\\r\\r\\nadverse consequences.\"], [\"acute coenzyme Q10 supplementation \", \"placebo\", \"exercise-induced increase in response of the plasma CRP\", \"significantly decreased\", \"The acute coenzyme Q10 supplementation attenuated only the\\r\\r\\r\\r\\nexercise-induced increase in response of the plasma CRP. The short-term (14-day) coenzyme Q10\\r\\r\\r\\r\\nsupplementation attenuated the exercise-induced increase in response of the lactate, serum interleukin-\\r\\r\\r\\r\\n6, tumor necrosis factor-alpha, and CRP in male middle-distance runners. \"], [\"coenzyme Q10 supplementation \", \"placebo\", \"levels of serum CK\", \"no significant difference\", \"the acute\\r\\r\\r\\r\\nand short-term coenzyme Q10 supplementation had not any significant effect on the exerciseinduced\\r\\r\\r\\r\\nincrease response of total serum creatine kinase.\"]]", "role": "assistant" } ]
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: Pro-inflammatory cytokines are up repressed during tumor development. Research Abstract: Paraneoplastic thrombocytosis is associated with many solid tumors and often correlates with reduced survival. Recent studies suggest that a pathogenic feed back loop may be operative between platelets and tumor cells, with reciprocal interactions between tumor growth/metastasis and thrombocytosis/platelet activation. Specific molecular pathways have been identified in which tumors can stimulate platelet production and activation; activated platelets can, in turn, promote tumor growth and metastasis. Taken together, these findings provide exciting new potential targets for therapeutic intervention.
{"verdict": "NEI", "evidence": []}
science.scifact_entailment.634
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nPro-inflammatory cytokines are up repressed during tumor development.\n\nResearch Abstract:\nParaneoplastic thrombocytosis is associated with many solid tumors and often correlates with reduced survival.\nRecent studies suggest that a pathogenic feed back loop may be operative between platelets and tumor cells, with reciprocal interactions between tumor growth/metastasis and thrombocytosis/platelet activation.\nSpecific molecular pathways have been identified in which tumors can stimulate platelet production and activation; activated platelets can, in turn, promote tumor growth and metastasis.\nTaken together, these findings provide exciting new potential targets for therapeutic intervention.", "role": "user" }, { "content": "{\"verdict\": \"NEI\", \"evidence\": []}", "role": "assistant" } ]
science.qasper_abstractive_qa
Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines. Document Content: Examining Gender and Race Bias in Two Hundred Sentiment Analysis Systems Automatic machine learning systems can inadvertently accentuate and perpetuate inappropriate human biases. Past work on examining inappropriate biases has largely focused on just individual systems. Further, there is no benchmark dataset for examining inappropriate biases in systems. Here for the first time, we present the Equity Evaluation Corpus (EEC), which consists of 8,640 English sentences carefully chosen to tease out biases towards certain races and genders. We use the dataset to examine 219 automatic sentiment analysis systems that took part in a recent shared task, SemEval-2018 Task 1 'Affect in Tweets'. We find that several of the systems show statistically significant bias; that is, they consistently provide slightly higher sentiment intensity predictions for one race or one gender. We make the EEC freely available. The Equity Evaluation Corpus We now describe how we compiled a dataset of thousands of sentences to determine whether automatic systems consistently give higher (or lower) sentiment intensity scores to sentences involving a particular race or gender. There are several ways in which such a dataset may be compiled. We present below the choices that we made. We decided to use sentences involving at least one race- or gender-associated word. The sentences were intended to be short and grammatically simple. We also wanted some sentences to include expressions of sentiment and emotion, since the goal is to test sentiment and emotion systems. We, the authors of this paper, developed eleven sentence templates after several rounds of discussion and consensus building. They are shown in Table TABREF3 . The templates are divided into two groups. The first type (templates 1–7) includes emotion words. The purpose of this set is to have sentences expressing emotions. The second type (templates 8–11) does not include any emotion words. The purpose of this set is to have non-emotional (neutral) sentences. The templates include two variables: INLINEFORM0 person INLINEFORM1 and INLINEFORM2 emotion word INLINEFORM3 . We generate sentences from the template by instantiating each variable with one of the pre-chosen values that the variable can take. Each of the eleven templates includes the variable INLINEFORM4 person INLINEFORM5 . INLINEFORM6 person INLINEFORM7 can be instantiated by any of the following noun phrases: For our study, we chose ten names of each kind from the study by Caliskan:2017 (see Table TABREF7 ). The full lists of noun phrases representing females and males, used in our study, are shown in Table TABREF8 . The second variable, INLINEFORM0 emotion word INLINEFORM1 , has two variants. Templates one through four include a variable for an emotional state word. The emotional state words correspond to four basic emotions: anger, fear, joy, and sadness. Specifically, for each of the emotions, we selected five words that convey that emotion in varying intensities. These words were taken from the categories in the Roget's Thesaurus corresponding to the four emotions: category #900 Resentment (for anger), category #860 Fear (for fear), category #836 Cheerfulness (for joy), and category #837 Dejection (for sadness). Templates five through seven include emotion words describing a situation or event. 
These words were also taken from the same thesaurus categories listed above. The full lists of emotion words (emotional state words and emotional situation/event words) are shown in Table TABREF10 . We generated sentences from the templates by replacing INLINEFORM0 person INLINEFORM1 and INLINEFORM2 emotion word INLINEFORM3 variables with the values they can take. In total, 8,640 sentences were generated with the various combinations of INLINEFORM4 person INLINEFORM5 and INLINEFORM6 emotion word INLINEFORM7 values across the eleven templates. We manually examined the sentences to make sure they were grammatically well-formed. Notably, one can derive pairs of sentences from the EEC such that they differ only in one word corresponding to gender or race (e.g., `My daughter feels devastated' and `My son feels devastated'). We refer to the full set of 8,640 sentences as Equity Evaluation Corpus. Results The two sub-sections below present the results from the analysis for gender bias and race bias, respectively. Gender Bias Results Individual submission results were communicated to the participants. Here, we present the summary results across all the teams. The goal of this analysis is to gain a better understanding of biases across a large number of current sentiment analysis systems. Thus, we partition the submissions into three groups according to the bias they show: F=M not significant: submissions that showed no statistically significant difference in intensity scores predicted for corresponding female and male noun phrase sentences, F INLINEFORM0 –M INLINEFORM1 significant: submissions that consistently gave higher scores for sentences with female noun phrases than for corresponding sentences with male noun phrases, F INLINEFORM0 –M INLINEFORM1 significant: submissions that consistently gave lower scores for sentences with female noun phrases than for corresponding sentences with male noun phrases. For each system and each sentence pair, we calculate the score difference INLINEFORM0 as the score for the female noun phrase sentence minus the score for the corresponding male noun phrase sentence. Table TABREF24 presents the summary results for each of the bias groups. It has the following columns: #Subm.: number of submissions in each group. If all the systems are unbiased, then the number of submissions for the group F=M not significant would be the maximum, and the number of submissions in all other groups would be zero. Avg. score difference F INLINEFORM0 –M INLINEFORM1 : the average INLINEFORM2 for only those pairs where the score for the female noun phrase sentence is higher. The greater the magnitude of this score, the stronger the bias in systems that consistently give higher scores to female-associated sentences. Avg. score difference F INLINEFORM0 –M INLINEFORM1 : the average INLINEFORM2 for only those pairs where the score for the female noun phrase sentence is lower. The greater the magnitude of this score, the stronger the bias in systems that consistently give lower scores to female-associated sentences. Note that these numbers were first calculated separately for each submission, and then averaged over all the submissions within each submission group. The results are reported separately for submissions to each task (anger, fear, joy, sadness, and sentiment/valence intensity prediction). Observe that on the four emotion intensity prediction tasks, only about 12 of the 46 submissions (about 25% of the submissions) showed no statistically significant score difference. 
On the valence prediction task, only 5 of the 36 submissions (14% of the submissions) showed no statistically significant score difference. Thus 75% to 86% of the submissions consistently marked sentences of one gender higher than another. When predicting anger, joy, or valence, the number of systems consistently giving higher scores to sentences with female noun phrases (21–25) is markedly higher than the number of systems giving higher scores to sentences with male noun phrases (8–13). (Recall that higher valence means more positive sentiment.) In contrast, on the fear task, most submissions tended to assign higher scores to sentences with male noun phrases (23) as compared to the number of systems giving higher scores to sentences with female noun phrases (12). When predicting sadness, the number of submissions that mostly assigned higher scores to sentences with female noun phrases (18) is close to the number of submissions that mostly assigned higher scores to sentences with male noun phrases (16). These results are in line with some common stereotypes, such as females are more emotional, and situations involving male agents are more fearful BIBREF27 . Figure FIGREF25 shows the score differences ( INLINEFORM0 ) for individual systems on the valence regression task. Plots for the four emotion intensity prediction tasks are shown in Figure FIGREF31 in the Appendix. Each point ( ▲, ▼, ●) on the plot corresponds to the difference in scores predicted by the system on one sentence pair. The systems are ordered by their rank (from first to last) on the task on the tweets test sets, as per the official evaluation metric (Spearman correlation with the gold intensity scores). We will refer to the difference between the maximal value of INLINEFORM1 and the minimal value of INLINEFORM2 for a particular system as the INLINEFORM3 –spread. Observe that the INLINEFORM4 –spreads for many systems are rather large, up to 0.57. Depending on the task, the top 10 or top 15 systems as well as some of the worst performing systems tend to have smaller INLINEFORM5 –spreads while the systems with medium to low performance show greater sensitivity to the gender-associated words. Also, most submissions that showed no statistically significant score differences (shown in green) performed poorly on the tweets test sets. Only three systems out of the top five on the anger intensity task and one system on the joy and sadness tasks showed no statistically significant score difference. This indicates that when considering only those systems that performed well on the intensity prediction task, the percentage of gender-biased systems are even higher than those indicated above. These results raise further questions such as `what exactly is the cause of such biases?' and `why is the bias impacted by the emotion task under consideration?'. Answering these questions will require further information on the resources that the teams used to develop their models, and we leave that for future work. Average score differences: For submissions that showed statistically significant score differences, the average score difference F INLINEFORM0 –M INLINEFORM1 and the average score difference F INLINEFORM2 –M INLINEFORM3 were INLINEFORM4 . Since the intensity scores range from 0 to 1, 0.03 is 3% of the full range. The maximal score difference ( INLINEFORM5 ) across all the submissions was as high as 0.34. Note, however, that these INLINEFORM6 s are the result of changing just one word in a sentence. 
In more complex sentences, several gender-associated words can appear, which may have a bigger impact. Also, whether consistent score differences of this magnitude will have significant repercussions in downstream applications, depends on the particular application. Analyses on only the neutral sentences in EEC and only the emotional sentences in EEC: We also performed a separate analysis using only those sentences from the EEC that included no emotion words. Recall that there are four templates that contain no emotion words. Tables TABREF26 shows these results. We observe similar trends as in the analysis on the full set. One noticeable difference is that the number of submissions that showed statistically significant score difference is much smaller for this data subset. However, the total number of comparisons on the subset (44) is much smaller than the total number of comparisons on the full set (1,584), which makes the statistical test less powerful. Note also that the average score differences on the subset (columns 3 and 4 in Table TABREF26 ) tend to be higher than the differences on the full set (columns 3 and 4 in Table TABREF24 ). This indicates that gender-associated words can have a bigger impact on system predictions for neutral sentences. We also performed an analysis by restricting the dataset to contain only the sentences with the emotion words corresponding to the emotion task (i.e., submissions to the anger intensity prediction task were evaluated only on sentences with anger words). The results (not shown here) were similar to the results on the full set. Discussion As mentioned in the introduction, bias can originate from any or several parts of a system: the labeled and unlabeled datasets used to learn different parts of the model, the language resources used (e.g., pre-trained word embeddings, lexicons), the learning method used (algorithm, features, parameters), etc. In our analysis, we found systems trained using a variety of algorithms (traditional as well as deep neural networks) and a variety of language resources showing gender and race biases. Further experiments may tease out the extent of bias in each of these parts. We also analyzed the output of our baseline SVM system trained using word unigrams (SVM-Unigrams). The system does not use any language resources other than the training data. We observe that this baseline system also shows small bias in gender and race. The INLINEFORM0 -spreads for this system were quite small: 0.09 to 0.2 on the gender sentence pairs and less than 0.002 on the race sentence pairs. The predicted intensity scores tended to be higher on the sentences with male noun phrases than on the sentences with female noun phrases for the tasks of anger, fear, and sadness intensity prediction. This tendency was reversed on the task of valence prediction. On the race sentence pairs, the system predicted higher intensity scores on the sentences with European American names for all four emotion intensity prediction tasks, and on the sentences with African American names for the task of valence prediction. This indicates that the training data contains some biases (in the form of some unigrams associated with a particular gender or race tending to appear in tweets labeled with certain emotions). The labeled datasets for the shared task were created using a fairly standard approach: polling Twitter with task-related query terms (in this case, emotion words) and then manually annotating the tweets with task-specific labels. 
The SVM-Unigram bias results show that data collected by distant supervision can be a source of bias. However, it should be noted that different learning methods in combination with different language resources can accentuate, reverse, or mask the bias present in the training data to different degrees. Table 3: Pairs of noun phrases representing a female or a male person used in this study. Table 4: Emotion words used in this study. Table 5: Analysis of gender bias: Summary results for 219 submissions from 50 teams on the Equity Evaluation Corpus (including both sentences with emotion words and sentences without emotion words). Figure 1: Analysis of gender bias: Box plot of the score differences on the gender sentence pairs for each system on the valence regression task. Each point on the plot corresponds to the difference in scores predicted by the system on one sentence pair. s represents F↑–M↓ significant group, t represents F↓–M↑ significant group, and l represents F=M not significant group. For each system, the bottom and top of a grey box are the first and third quartiles, and the band inside the box shows the second quartile (the median). The whiskers extend to 1.5 times the interquartile range (IQR = Q3 - Q1) from the edge of the box. The systems are ordered by rank (from first to last) on the task on the tweets test sets as per the official evaluation metric. Table 6: Analysis of gender bias: Summary results for 219 submissions from 50 teams on the subset of sentences from the Equity Evaluation Corpus that do not contain any emotion words. Table 7: Analysis of race bias: Summary results for 219 submissions from 50 teams on the Equity Evaluation Corpus (including both sentences with emotion words and sentences without emotion words). Query: What criteria are used to select the 8,640 English sentences? JSON Response:
{"answer": "Sentences involving at least one race- or gender-associated word, sentence have to be short and grammatically simple, sentence have to include expressions of sentiment and emotion.", "evidence": ["We decided to use sentences involving at least one race- or gender-associated word. The sentences were intended to be short and grammatically simple. We also wanted some sentences to include expressions of sentiment and emotion, since the goal is to test sentiment and emotion systems. We, the authors of this paper, developed eleven sentence templates after several rounds of discussion and consensus building. They are shown in Table TABREF3 . The templates are divided into two groups. The first type (templates 1\u20137) includes emotion words. The purpose of this set is to have sentences expressing emotions. The second type (templates 8\u201311) does not include any emotion words. The purpose of this set is to have non-emotional (neutral) sentences."]}
science.qasper_abstractive_qa.95.1
[ { "content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nExamining Gender and Race Bias in Two Hundred Sentiment Analysis Systems\n\nAutomatic machine learning systems can inadvertently accentuate and perpetuate inappropriate human biases. Past work on examining inappropriate biases has largely focused on just individual systems. Further, there is no benchmark dataset for examining inappropriate biases in systems. Here for the first time, we present the Equity Evaluation Corpus (EEC), which consists of 8,640 English sentences carefully chosen to tease out biases towards certain races and genders. We use the dataset to examine 219 automatic sentiment analysis systems that took part in a recent shared task, SemEval-2018 Task 1 'Affect in Tweets'. We find that several of the systems show statistically significant bias; that is, they consistently provide slightly higher sentiment intensity predictions for one race or one gender. We make the EEC freely available.\n\nThe Equity Evaluation Corpus\nWe now describe how we compiled a dataset of thousands of sentences to determine whether automatic systems consistently give higher (or lower) sentiment intensity scores to sentences involving a particular race or gender. There are several ways in which such a dataset may be compiled. We present below the choices that we made.\nWe decided to use sentences involving at least one race- or gender-associated word. The sentences were intended to be short and grammatically simple. We also wanted some sentences to include expressions of sentiment and emotion, since the goal is to test sentiment and emotion systems. We, the authors of this paper, developed eleven sentence templates after several rounds of discussion and consensus building. They are shown in Table TABREF3 . The templates are divided into two groups. The first type (templates 1–7) includes emotion words. The purpose of this set is to have sentences expressing emotions. The second type (templates 8–11) does not include any emotion words. The purpose of this set is to have non-emotional (neutral) sentences.\nThe templates include two variables: INLINEFORM0 person INLINEFORM1 and INLINEFORM2 emotion word INLINEFORM3 . We generate sentences from the template by instantiating each variable with one of the pre-chosen values that the variable can take. Each of the eleven templates includes the variable INLINEFORM4 person INLINEFORM5 . INLINEFORM6 person INLINEFORM7 can be instantiated by any of the following noun phrases:\nFor our study, we chose ten names of each kind from the study by Caliskan:2017 (see Table TABREF7 ). The full lists of noun phrases representing females and males, used in our study, are shown in Table TABREF8 .\nThe second variable, INLINEFORM0 emotion word INLINEFORM1 , has two variants. Templates one through four include a variable for an emotional state word. The emotional state words correspond to four basic emotions: anger, fear, joy, and sadness. Specifically, for each of the emotions, we selected five words that convey that emotion in varying intensities. These words were taken from the categories in the Roget's Thesaurus corresponding to the four emotions: category #900 Resentment (for anger), category #860 Fear (for fear), category #836 Cheerfulness (for joy), and category #837 Dejection (for sadness). Templates five through seven include emotion words describing a situation or event. 
These words were also taken from the same thesaurus categories listed above. The full lists of emotion words (emotional state words and emotional situation/event words) are shown in Table TABREF10 .\nWe generated sentences from the templates by replacing INLINEFORM0 person INLINEFORM1 and INLINEFORM2 emotion word INLINEFORM3 variables with the values they can take. In total, 8,640 sentences were generated with the various combinations of INLINEFORM4 person INLINEFORM5 and INLINEFORM6 emotion word INLINEFORM7 values across the eleven templates. We manually examined the sentences to make sure they were grammatically well-formed. Notably, one can derive pairs of sentences from the EEC such that they differ only in one word corresponding to gender or race (e.g., `My daughter feels devastated' and `My son feels devastated'). We refer to the full set of 8,640 sentences as Equity Evaluation Corpus.\n\nResults\nThe two sub-sections below present the results from the analysis for gender bias and race bias, respectively.\n\nGender Bias Results\nIndividual submission results were communicated to the participants. Here, we present the summary results across all the teams. The goal of this analysis is to gain a better understanding of biases across a large number of current sentiment analysis systems. Thus, we partition the submissions into three groups according to the bias they show:\nF=M not significant: submissions that showed no statistically significant difference in intensity scores predicted for corresponding female and male noun phrase sentences,\nF INLINEFORM0 –M INLINEFORM1 significant: submissions that consistently gave higher scores for sentences with female noun phrases than for corresponding sentences with male noun phrases,\nF INLINEFORM0 –M INLINEFORM1 significant: submissions that consistently gave lower scores for sentences with female noun phrases than for corresponding sentences with male noun phrases.\nFor each system and each sentence pair, we calculate the score difference INLINEFORM0 as the score for the female noun phrase sentence minus the score for the corresponding male noun phrase sentence. Table TABREF24 presents the summary results for each of the bias groups. It has the following columns:\n#Subm.: number of submissions in each group.\nIf all the systems are unbiased, then the number of submissions for the group F=M not significant would be the maximum, and the number of submissions in all other groups would be zero.\nAvg. score difference F INLINEFORM0 –M INLINEFORM1 : the average INLINEFORM2 for only those pairs where the score for the female noun phrase sentence is higher. The greater the magnitude of this score, the stronger the bias in systems that consistently give higher scores to female-associated sentences.\nAvg. score difference F INLINEFORM0 –M INLINEFORM1 : the average INLINEFORM2 for only those pairs where the score for the female noun phrase sentence is lower. The greater the magnitude of this score, the stronger the bias in systems that consistently give lower scores to female-associated sentences.\nNote that these numbers were first calculated separately for each submission, and then averaged over all the submissions within each submission group. 
The results are reported separately for submissions to each task (anger, fear, joy, sadness, and sentiment/valence intensity prediction).\nObserve that on the four emotion intensity prediction tasks, only about 12 of the 46 submissions (about 25% of the submissions) showed no statistically significant score difference. On the valence prediction task, only 5 of the 36 submissions (14% of the submissions) showed no statistically significant score difference. Thus 75% to 86% of the submissions consistently marked sentences of one gender higher than another.\nWhen predicting anger, joy, or valence, the number of systems consistently giving higher scores to sentences with female noun phrases (21–25) is markedly higher than the number of systems giving higher scores to sentences with male noun phrases (8–13). (Recall that higher valence means more positive sentiment.) In contrast, on the fear task, most submissions tended to assign higher scores to sentences with male noun phrases (23) as compared to the number of systems giving higher scores to sentences with female noun phrases (12). When predicting sadness, the number of submissions that mostly assigned higher scores to sentences with female noun phrases (18) is close to the number of submissions that mostly assigned higher scores to sentences with male noun phrases (16). These results are in line with some common stereotypes, such as females are more emotional, and situations involving male agents are more fearful BIBREF27 .\nFigure FIGREF25 shows the score differences ( INLINEFORM0 ) for individual systems on the valence regression task. Plots for the four emotion intensity prediction tasks are shown in Figure FIGREF31 in the Appendix. Each point ( ▲, ▼, ●) on the plot corresponds to the difference in scores predicted by the system on one sentence pair. The systems are ordered by their rank (from first to last) on the task on the tweets test sets, as per the official evaluation metric (Spearman correlation with the gold intensity scores). We will refer to the difference between the maximal value of INLINEFORM1 and the minimal value of INLINEFORM2 for a particular system as the INLINEFORM3 –spread. Observe that the INLINEFORM4 –spreads for many systems are rather large, up to 0.57. Depending on the task, the top 10 or top 15 systems as well as some of the worst performing systems tend to have smaller INLINEFORM5 –spreads while the systems with medium to low performance show greater sensitivity to the gender-associated words. Also, most submissions that showed no statistically significant score differences (shown in green) performed poorly on the tweets test sets. Only three systems out of the top five on the anger intensity task and one system on the joy and sadness tasks showed no statistically significant score difference. This indicates that when considering only those systems that performed well on the intensity prediction task, the percentage of gender-biased systems are even higher than those indicated above.\nThese results raise further questions such as `what exactly is the cause of such biases?' and `why is the bias impacted by the emotion task under consideration?'. 
Answering these questions will require further information on the resources that the teams used to develop their models, and we leave that for future work.\nAverage score differences: For submissions that showed statistically significant score differences, the average score difference F INLINEFORM0 –M INLINEFORM1 and the average score difference F INLINEFORM2 –M INLINEFORM3 were INLINEFORM4 . Since the intensity scores range from 0 to 1, 0.03 is 3% of the full range. The maximal score difference ( INLINEFORM5 ) across all the submissions was as high as 0.34. Note, however, that these INLINEFORM6 s are the result of changing just one word in a sentence. In more complex sentences, several gender-associated words can appear, which may have a bigger impact. Also, whether consistent score differences of this magnitude will have significant repercussions in downstream applications, depends on the particular application.\nAnalyses on only the neutral sentences in EEC and only the emotional sentences in EEC: We also performed a separate analysis using only those sentences from the EEC that included no emotion words. Recall that there are four templates that contain no emotion words. Tables TABREF26 shows these results. We observe similar trends as in the analysis on the full set. One noticeable difference is that the number of submissions that showed statistically significant score difference is much smaller for this data subset. However, the total number of comparisons on the subset (44) is much smaller than the total number of comparisons on the full set (1,584), which makes the statistical test less powerful. Note also that the average score differences on the subset (columns 3 and 4 in Table TABREF26 ) tend to be higher than the differences on the full set (columns 3 and 4 in Table TABREF24 ). This indicates that gender-associated words can have a bigger impact on system predictions for neutral sentences.\nWe also performed an analysis by restricting the dataset to contain only the sentences with the emotion words corresponding to the emotion task (i.e., submissions to the anger intensity prediction task were evaluated only on sentences with anger words). The results (not shown here) were similar to the results on the full set.\n\nDiscussion\nAs mentioned in the introduction, bias can originate from any or several parts of a system: the labeled and unlabeled datasets used to learn different parts of the model, the language resources used (e.g., pre-trained word embeddings, lexicons), the learning method used (algorithm, features, parameters), etc. In our analysis, we found systems trained using a variety of algorithms (traditional as well as deep neural networks) and a variety of language resources showing gender and race biases. Further experiments may tease out the extent of bias in each of these parts.\nWe also analyzed the output of our baseline SVM system trained using word unigrams (SVM-Unigrams). The system does not use any language resources other than the training data. We observe that this baseline system also shows small bias in gender and race. The INLINEFORM0 -spreads for this system were quite small: 0.09 to 0.2 on the gender sentence pairs and less than 0.002 on the race sentence pairs. The predicted intensity scores tended to be higher on the sentences with male noun phrases than on the sentences with female noun phrases for the tasks of anger, fear, and sadness intensity prediction. This tendency was reversed on the task of valence prediction. 
On the race sentence pairs, the system predicted higher intensity scores on the sentences with European American names for all four emotion intensity prediction tasks, and on the sentences with African American names for the task of valence prediction. This indicates that the training data contains some biases (in the form of some unigrams associated with a particular gender or race tending to appear in tweets labeled with certain emotions). The labeled datasets for the shared task were created using a fairly standard approach: polling Twitter with task-related query terms (in this case, emotion words) and then manually annotating the tweets with task-specific labels. The SVM-Unigram bias results show that data collected by distant supervision can be a source of bias. However, it should be noted that different learning methods in combination with different language resources can accentuate, reverse, or mask the bias present in the training data to different degrees.\n\nTable 3: Pairs of noun phrases representing a female or a male person used in this study.\n\nTable 4: Emotion words used in this study.\n\nTable 5: Analysis of gender bias: Summary results for 219 submissions from 50 teams on the Equity Evaluation Corpus (including both sentences with emotion words and sentences without emotion words).\n\nFigure 1: Analysis of gender bias: Box plot of the score differences on the gender sentence pairs for each system on the valence regression task. Each point on the plot corresponds to the difference in scores predicted by the system on one sentence pair. s represents F↑–M↓ significant group, t represents F↓–M↑ significant group, and l represents F=M not significant group. For each system, the bottom and top of a grey box are the first and third quartiles, and the band inside the box shows the second quartile (the median). The whiskers extend to 1.5 times the interquartile range (IQR = Q3 - Q1) from the edge of the box. The systems are ordered by rank (from first to last) on the task on the tweets test sets as per the official evaluation metric.\n\nTable 6: Analysis of gender bias: Summary results for 219 submissions from 50 teams on the subset of sentences from the Equity Evaluation Corpus that do not contain any emotion words.\n\nTable 7: Analysis of race bias: Summary results for 219 submissions from 50 teams on the Equity Evaluation Corpus (including both sentences with emotion words and sentences without emotion words).\n\nQuery: What criteria are used to select the 8,640 English sentences?\n\nJSON Response:", "role": "user" }, { "content": "{\"answer\": \"Sentences involving at least one race- or gender-associated word, sentence have to be short and grammatically simple, sentence have to include expressions of sentiment and emotion.\", \"evidence\": [\"We decided to use sentences involving at least one race- or gender-associated word. The sentences were intended to be short and grammatically simple. We also wanted some sentences to include expressions of sentiment and emotion, since the goal is to test sentiment and emotion systems. We, the authors of this paper, developed eleven sentence templates after several rounds of discussion and consensus building. They are shown in Table TABREF3 . The templates are divided into two groups. The first type (templates 1\\u20137) includes emotion words. The purpose of this set is to have sentences expressing emotions. The second type (templates 8\\u201311) does not include any emotion words. 
The purpose of this set is to have non-emotional (neutral) sentences.\"]}", "role": "assistant" } ]
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: CD28 initiates tonic signaling in conventional T cells, which causes an exhaustion phenotype and limited efficiency. Research Abstract: Chimeric antigen receptors (CARs) targeting CD19 have mediated dramatic antitumor responses in hematologic malignancies, but tumor regression has rarely occurred using CARs targeting other antigens. It remains unknown whether the impressive effects of CD19 CARs relate to greater susceptibility of hematologic malignancies to CAR therapies, or superior functionality of the CD19 CAR itself. We show that tonic CAR CD3-ζ phosphorylation, triggered by antigen-independent clustering of CAR single-chain variable fragments, can induce early exhaustion of CAR T cells that limits antitumor efficacy. Such activation is present to varying degrees in all CARs studied, except the highly effective CD19 CAR. We further determine that CD28 costimulation augments, whereas 4-1BB costimulation reduces, exhaustion induced by persistent CAR signaling. Our results provide biological explanations for the antitumor effects of CD19 CARs and for the observations that CD19 CAR T cells incorporating the 4-1BB costimulatory domain are more persistent than those incorporating CD28 in clinical trials.
{"verdict": "SUPPORT", "evidence": ["We further determine that CD28 costimulation augments, whereas 4-1BB costimulation reduces, exhaustion induced by persistent CAR signaling."]}
science.scifact_entailment.134
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nCD28 initiates tonic signaling in conventional T cells, which causes an exhaustion phenotype and limited efficiency.\n\nResearch Abstract:\nChimeric antigen receptors (CARs) targeting CD19 have mediated dramatic antitumor responses in hematologic malignancies, but tumor regression has rarely occurred using CARs targeting other antigens.\nIt remains unknown whether the impressive effects of CD19 CARs relate to greater susceptibility of hematologic malignancies to CAR therapies, or superior functionality of the CD19 CAR itself.\nWe show that tonic CAR CD3-ζ phosphorylation, triggered by antigen-independent clustering of CAR single-chain variable fragments, can induce early exhaustion of CAR T cells that limits antitumor efficacy.\nSuch activation is present to varying degrees in all CARs studied, except the highly effective CD19 CAR.\nWe further determine that CD28 costimulation augments, whereas 4-1BB costimulation reduces, exhaustion induced by persistent CAR signaling.\nOur results provide biological explanations for the antitumor effects of CD19 CARs and for the observations that CD19 CAR T cells incorporating the 4-1BB costimulatory domain are more persistent than those incorporating CD28 in clinical trials.", "role": "user" }, { "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"We further determine that CD28 costimulation augments, whereas 4-1BB costimulation reduces, exhaustion induced by persistent CAR signaling.\"]}", "role": "assistant" } ]
science.discomat_te
| Label | Ionic conc., N(x1023 cm-3)+-0.01 | Ionic conc., N(x1023 cm-3)+-0.01 | Inter-ionic distance, R (A)+-0.01 | Inter-ionic distance, R (A)+-0.01 |
| Label | Ti4+ | V5+ | Ti4+ | V5+ |
| TV-10 | 0.00 | 10.92 | 0.00 | 0.96 |
| TV-7.5 | 2.28 | 8.53 | 1.63 | 1.05 |
| TV-5.0 | 4.75 | 5.94 | 1.28 | 1.18 |
| TV-2.5 | 7.50 | 3.12 | 1.09 | 1.47 |
| TV-0.0 | 10.6 | 0.00 | 0.98 | 0.00 |

Caption: The ionic concentration of titanium (Ti4+) and vanadium (V5+) ions along with inter-ionic distance for the present glasses.

Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}

Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.

Output:
{"value": "0.00", "type": "Other"} {"value": "10.92", "type": "Other"} {"value": "0.00", "type": "Other"} {"value": "0.96", "type": "Other"} {"value": "2.28", "type": "Other"} {"value": "8.53", "type": "Other"} {"value": "1.63", "type": "Other"} {"value": "1.05", "type": "Other"} {"value": "4.75", "type": "Other"} {"value": "5.94", "type": "Other"} {"value": "1.28", "type": "Other"} {"value": "1.18", "type": "Other"} {"value": "7.50", "type": "Other"} {"value": "3.12", "type": "Other"} {"value": "1.09", "type": "Other"} {"value": "1.47", "type": "Other"} {"value": "10.6", "type": "Other"} {"value": "0.00", "type": "Other"} {"value": "0.98", "type": "Other"} {"value": "0.00", "type": "Other"}
science.discomat_te.25
[ { "content": "| Label | Ionic conc., N(x1023 cm-3)+-0.01 | Ionic conc., N(x1023 cm-3)+-0.01 | Inter-ionic distance, R (A)+-0.01 | Inter-ionic distance, R (A)+-0.01 |\n| Label | Ti4+ | V5+ | Ti4+ | V5+ |\n| TV-10 | 0.00 | 10.92 | 0.00 | 0.96 |\n| TV-7.5 | 2.28 | 8.53 | 1.63 | 1.05 |\n| TV-5.0 | 4.75 | 5.94 | 1.28 | 1.18 |\n| TV-2.5 | 7.50 | 3.12 | 1.09 | 1.47 |\n| TV-0.0 | 10.6 | 0.00 | 0.98 | 0.00 |\n\nCaption: The ionic concentration of titanium (Ti4+) and vanadium (V5+) ions along with inter-ionic distance for the present glasses.\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"0.00\", \"type\": \"Other\"}\n{\"value\": \"10.92\", \"type\": \"Other\"}\n{\"value\": \"0.00\", \"type\": \"Other\"}\n{\"value\": \"0.96\", \"type\": \"Other\"}\n{\"value\": \"2.28\", \"type\": \"Other\"}\n{\"value\": \"8.53\", \"type\": \"Other\"}\n{\"value\": \"1.63\", \"type\": \"Other\"}\n{\"value\": \"1.05\", \"type\": \"Other\"}\n{\"value\": \"4.75\", \"type\": \"Other\"}\n{\"value\": \"5.94\", \"type\": \"Other\"}\n{\"value\": \"1.28\", \"type\": \"Other\"}\n{\"value\": \"1.18\", \"type\": \"Other\"}\n{\"value\": \"7.50\", \"type\": \"Other\"}\n{\"value\": \"3.12\", \"type\": \"Other\"}\n{\"value\": \"1.09\", \"type\": \"Other\"}\n{\"value\": \"1.47\", \"type\": \"Other\"}\n{\"value\": \"10.6\", \"type\": \"Other\"}\n{\"value\": \"0.00\", \"type\": \"Other\"}\n{\"value\": \"0.98\", \"type\": \"Other\"}\n{\"value\": \"0.00\", \"type\": \"Other\"}", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: Tuformer: Data-Driven Design of Expressive Transformer by Tucker Tensor Representation Main Content: Transformers are neural network architectures that achieve remarkable performance in many areas . However , the core component of Transformers , multi-head selfattention ( MHSA ) , is mainly derived from heuristics , and the interactions across its components are not well understood . To address the problem , we first introduce a mathematically rigorous and yet intuitive tensor diagram representation of MHSA . Guided by tensor diagram representations , we formulate a design space where we can analyze the expressive power of the network structure , providing new directions and possibilities for enhanced performance . We then propose a novel design , namely Tunable Transformer ( Tuformers ) , with a guaranteed higher expressive power than MHSA by allowing learnable weight sharing across heads . Unlike vanilla Transformer models , where the number of heads is a pre-defined fixed constant , Tuformer ’ s structure is data-driven , and the number of heads is trainable . Training of Tuformers could be made very efficient as it allows initialization from existing pre-trained Transformer models . We test Tuformers on various tasks across multiple domains and show competitive results under a wide range of model sizes . 1 INTRODUCTION . Transformer models are first introduced by Vaswani et al . ( 2017 ) in the context of sequence modelling . They have demonstrated impressive results on a wide variety of tasks in many fields , such as language model pre-training ( Sanh et al. , 2019 ) , speech recognition ( Moritz et al. , 2020 ) , image classification ( Dosovitskiy et al. , 2020 ) , and generation ( Parmar et al. , 2018 ) . The core component of Transformer models is the multi-head self-attention ( MHSA ) which is extended from the standard attention mechanism ( Bahdanau et al. , 2014 ) . Each attention head in MHSA has a global receptive field , i.e. , each token ’ s representation is updated by attending to all other tokens , and H attention heads are computed in parallel and concatenated together . The current MHSA design is mainly derived from empirical studies or heuristics , leaving some unresolved challenges . ( 1 ) Lack of solid theoretical understanding . The theory behind MHSA is only starting to catch up with practice . The role of the components and their interactions in MHSA are not well understood rigorously from a theoretical perspective , which may lead to inefficiencies in the design . For example , Michel et al . ( 2019 ) show that most attention heads can be removed in the testing phase without much performance compromise , while Cordonnier et al . ( 2020 ) find empirical evidence of redundancy in key/query projections and propose a re-parameterization scheme . These works focus more on practical solutions , leading to questions about whether theoretical patterns exist in these designs . ( 2 ) The number of heads is not trainable . Intuitively , the heads in multi-heads are expected to capture different context information through each head . However , the number of heads is fixed in training . Thus , although we could tune the hyper-parameter , an exhaustive search would be time-consuming or not practical for large-scale tasks . ( 3 ) Hard to analyze the expressive power . 
Analyzing the expressive power ( as will be defined in Definition 2 ) of a neural network , i.e. , proving that some architectures are more expressive than others , is a non-trivial and challenging task . Such analysis has been done in Convolutional Neural Networks ( CNNs ) ( LeCun et al. , 1995 ; Cohen et al. , 2016 ) , Recurrent Neural Networks ( RNNs ) ( Mikolov et al. , 2010 ; Khrulkov et al. , 2018 ) , but no such work exists for interpreting MHSA or guiding the structural design . In response to the above challenges , we first interpret MHSA from a tensor representation perspective using the intuitive graphical tool , the tensor diagram . Current prevalent descriptions of MHSA use flow charts to convey high-level intuitions , which could cause ambiguities . Therefore , it is inevitable to pair those flow charts with mathematical formulas to understand the mechanism precisely . However , these two separated descriptions create difficulties for interpretations and inspections of the operations implemented in MHSA . To address this issue , we propose a graphical representation of MHSA , which is both semantically intuitive and mathematically rigorous . Specifically , we modify and extend the vanilla tensor diagrams ( Penrose , 1971 ) , which conveniently allow for rigorous graphical representation of multi-linear operations between multi-dimensional arrays ( i.e. , higher-order tensors ) to represent nonlinear operations . We also propose a novel data-driven structure that is guaranteed to have higher expressive power . Inspired from the tensor diagram representation of MHSA , we obtain a novel design , namely TunableHead Self-Attention ( THSA ) , which is a re-parameterization of the weight matrices to allow learnable weight sharing across heads . Transformers with THSA , named Tunable Transformers ( Tuformers ) , have several advantages compared against vanilla Transformers : ( 1 ) A guaranteed higher expressive power . We prove that MHSA is a special case of THSA . ( 2 ) The number of heads is trainable . The concept of the number of heads in THSA generalizes to the stable rank of the core matrix , allowing data-driven implicit training . ( 3 ) Tuformers allow initialization from pre-trained Transformers such as BERT ( Devlin et al. , 2019 ) and its variants . We experiment Tuformers with several tasks across multiple domains , from language modeling , machine translation to image generation under a wide range of model sizes . We demonstrate competitive results not only on Tuformers but also in cases where Tuformers are initialized with pre-trained Transformers for other downstream tasks and when combined with Linear Transformer ( Katharopoulos et al. , 2020 ) on the image generation task . Using tensor diagram representation , we project the current design of MHSA into its tensor form , which renders a holistic view of the weights in MHSA for better interpretation of the information flow and exchange among the components of MHSA . More importantly , we formulate a design space for MHSA that allows improving the expressive power of the network by finding better tensor representations of the weights through tensor representation theory in followup works . Summary of Contributions : . ( 1 ) We propose a mathematically rigorous and semantically intuitive tensor diagram representation of the multi-head self-attention , introducing a new tool to the ML community for future studies on interpretation and improvements of Transformers . 
( 2 ) We propose a novel design of the MHSA , Tunable-Head Self-Attention ( THSA ) , resulting in Tuformers , which has a theoretical guaranteed improvement in expressive power . ( 3 ) We provide a constructive re-parameterization scheme from vanilla Transformers to Tuformers so that Tuformers can be easily used in fine-tuning tasks with pre-trained models and efficient Transformers such as Linear Transformer ( Katharopoulos et al. , 2020 ) and Performer ( Choromanski et al. , 2020 ) with the state-of-the-art computation and memory complexities . ( 4 ) We formulate a design space for attention unit design and a framework for analyzing their expressive power using tensor representation theory under the intuitive guidance of tensor diagrams . 2 TENSOR DIAGRAM REPRESENTATION . Notations . We use lower case letters ( e.g. , v ) to denote vectors , upper case letters ( e.g. , M ) to denote matrices , and curly letters ( e.g. , T ) to denote general tensors . For a tensor T ∈ RI1×···IM , we refer to the number of indices as order , each individual index as mode , and the length of one mode as dimension . For instance , T is an M th order tensor that has dimension Im at its mth mode . We reserve superscripts to distinguish similar arrays ( e.g. , WQ , WK , W V are query/key/value weight matrices ) , and subscripts to index the elements in an array ( e.g. , Wij is the ( i , j ) th element of W ) . We use colon : to slice an array ( e.g. , Wi , : denotes the ith row of W ) . We propose to use tensor diagrams ( Penrose , 1971 ) , a commonly used rigorous/precise and intuitive graphical representation for multi-linear operations among higher-order arrays ( i.e. , tensors ) , to represent multi-head self-attention ( MHSA ) . Since MHSA consists of multilinear operations and a nonlinear softmax function , we will introduce tensor diagrams and our novel design extending tensor diagrams to denote the composition of nonlinear and multilinear operations in MHSA . 2.1 TENSOR DIAGRAM BASICS v M 𝓣 Vector order 1 tensor Matrix order 2 tensor 3 dimensional array order 3 tensor A BA A B C Figure 1 : Arrays in tensor diagram . A vector is denoted as a node with 1 leg , a matrix as a node with 2 legs and an N -dimensional array as a node with N legs . Arrays denoted as nodes with legs . An array is represented as a node with leg ( s ) in a tensor diagram as shown in Figure 1 . We denote the order ( the number of dimensions ) of the array by the number of legs extending from the node . Each labeled leg represents one mode of a tensor . Every mode of the tensor needs to be uniquely labeled . We usually use the dimension of the mode as the label ( i.e. , an associated positive integer written on top of each leg ) . The legs do not need to be straight lines , and their orientations do not matter . Matrices M ∈ RA×B and M > ∈ RB×A can be represented via the same tensor diagram as long as the M node has two legs ( to denote that it is a matrix , wherever the legs extend to ) , labeled as A and B ( to denote its size ) . Operations in tensor diagrams . There are three types of operations in the calculation of MHSA : , contraction , softmax , and batch multiplication , as shown in Figure 2 and explained in its caption . Evaluation of tensor diagrams : ( 1 ) Evaluation order . We can evaluate a tensor diagram in any pair-wise order except for the nodes in the softmax box . Since the softmax function is nonlinear , we must first evaluate the nodes in the softmax box with arbitrary order before those outside the box . 
( 2 ) Reading a tensor diagram . We can easily identify the output shape by the dangling edges . For instant , in Section 2.1 , there are three dangling legs M , N , R. Thus the output is a 3rd order tensor with dimensions M , N , R. Note that the softmax function does not change the shape of the tensor . Advantages of tensor diagrams : ( 1 ) Tensor diagram is orientation invariant , meaning that we can represent X and X > using the same tensor diagram . Thus in multi-head , we obtain a universal graphical representation regardless of whether we represent an embedding of each token in the sequence as rows or columns of the input embedding matrix X . ( 2 ) Multi-linear operations are concisely represented and interpreted . ( 3 ) The representation is both precise in math and intuitive , and the information flow is clear , making analysis of network structure more accessible . In addition , with the labels of the legs , we can read the model complexity explicitly . We include a comprehensive introduction to tensor diagram representation in Appendix A .
Focusing on the multi-head self-attention (MHSA) structure, this paper proposes an extension of the tensor diagram to denote self-attention (SA) structures more intuitively. Then inspired by the Tucker format, this paper also proposes a new form of SA named Tucker-Head Self-Attention (THSA), which can also be illustrated with the tensor diagram extension. Although THSA derives some success, THSA may lack theoretical guarantee and experiments seems insufficient.
science.mup_single_document_summarization.3642
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nTuformer: Data-Driven Design of Expressive Transformer by Tucker Tensor Representation\n\nMain Content:\nTransformers are neural network architectures that achieve remarkable performance in many areas . However , the core component of Transformers , multi-head selfattention ( MHSA ) , is mainly derived from heuristics , and the interactions across its components are not well understood . To address the problem , we first introduce a mathematically rigorous and yet intuitive tensor diagram representation of MHSA . Guided by tensor diagram representations , we formulate a design space where we can analyze the expressive power of the network structure , providing new directions and possibilities for enhanced performance . We then propose a novel design , namely Tunable Transformer ( Tuformers ) , with a guaranteed higher expressive power than MHSA by allowing learnable weight sharing across heads . Unlike vanilla Transformer models , where the number of heads is a pre-defined fixed constant , Tuformer ’ s structure is data-driven , and the number of heads is trainable . Training of Tuformers could be made very efficient as it allows initialization from existing pre-trained Transformer models . We test Tuformers on various tasks across multiple domains and show competitive results under a wide range of model sizes . 1 INTRODUCTION . Transformer models are first introduced by Vaswani et al . ( 2017 ) in the context of sequence modelling . They have demonstrated impressive results on a wide variety of tasks in many fields , such as language model pre-training ( Sanh et al. , 2019 ) , speech recognition ( Moritz et al. , 2020 ) , image classification ( Dosovitskiy et al. , 2020 ) , and generation ( Parmar et al. , 2018 ) . The core component of Transformer models is the multi-head self-attention ( MHSA ) which is extended from the standard attention mechanism ( Bahdanau et al. , 2014 ) . Each attention head in MHSA has a global receptive field , i.e. , each token ’ s representation is updated by attending to all other tokens , and H attention heads are computed in parallel and concatenated together . The current MHSA design is mainly derived from empirical studies or heuristics , leaving some unresolved challenges . ( 1 ) Lack of solid theoretical understanding . The theory behind MHSA is only starting to catch up with practice . The role of the components and their interactions in MHSA are not well understood rigorously from a theoretical perspective , which may lead to inefficiencies in the design . For example , Michel et al . ( 2019 ) show that most attention heads can be removed in the testing phase without much performance compromise , while Cordonnier et al . ( 2020 ) find empirical evidence of redundancy in key/query projections and propose a re-parameterization scheme . These works focus more on practical solutions , leading to questions about whether theoretical patterns exist in these designs . ( 2 ) The number of heads is not trainable . Intuitively , the heads in multi-heads are expected to capture different context information through each head . However , the number of heads is fixed in training . Thus , although we could tune the hyper-parameter , an exhaustive search would be time-consuming or not practical for large-scale tasks . ( 3 ) Hard to analyze the expressive power . 
Analyzing the expressive power ( as will be defined in Definition 2 ) of a neural network , i.e. , proving that some architectures are more expressive than others , is a non-trivial and challenging task . Such analysis has been done in Convolutional Neural Networks ( CNNs ) ( LeCun et al. , 1995 ; Cohen et al. , 2016 ) , Recurrent Neural Networks ( RNNs ) ( Mikolov et al. , 2010 ; Khrulkov et al. , 2018 ) , but no such work exists for interpreting MHSA or guiding the structural design . In response to the above challenges , we first interpret MHSA from a tensor representation perspective using the intuitive graphical tool , the tensor diagram . Current prevalent descriptions of MHSA use flow charts to convey high-level intuitions , which could cause ambiguities . Therefore , it is inevitable to pair those flow charts with mathematical formulas to understand the mechanism precisely . However , these two separated descriptions create difficulties for interpretations and inspections of the operations implemented in MHSA . To address this issue , we propose a graphical representation of MHSA , which is both semantically intuitive and mathematically rigorous . Specifically , we modify and extend the vanilla tensor diagrams ( Penrose , 1971 ) , which conveniently allow for rigorous graphical representation of multi-linear operations between multi-dimensional arrays ( i.e. , higher-order tensors ) to represent nonlinear operations . We also propose a novel data-driven structure that is guaranteed to have higher expressive power . Inspired from the tensor diagram representation of MHSA , we obtain a novel design , namely TunableHead Self-Attention ( THSA ) , which is a re-parameterization of the weight matrices to allow learnable weight sharing across heads . Transformers with THSA , named Tunable Transformers ( Tuformers ) , have several advantages compared against vanilla Transformers : ( 1 ) A guaranteed higher expressive power . We prove that MHSA is a special case of THSA . ( 2 ) The number of heads is trainable . The concept of the number of heads in THSA generalizes to the stable rank of the core matrix , allowing data-driven implicit training . ( 3 ) Tuformers allow initialization from pre-trained Transformers such as BERT ( Devlin et al. , 2019 ) and its variants . We experiment Tuformers with several tasks across multiple domains , from language modeling , machine translation to image generation under a wide range of model sizes . We demonstrate competitive results not only on Tuformers but also in cases where Tuformers are initialized with pre-trained Transformers for other downstream tasks and when combined with Linear Transformer ( Katharopoulos et al. , 2020 ) on the image generation task . Using tensor diagram representation , we project the current design of MHSA into its tensor form , which renders a holistic view of the weights in MHSA for better interpretation of the information flow and exchange among the components of MHSA . More importantly , we formulate a design space for MHSA that allows improving the expressive power of the network by finding better tensor representations of the weights through tensor representation theory in followup works . Summary of Contributions : . ( 1 ) We propose a mathematically rigorous and semantically intuitive tensor diagram representation of the multi-head self-attention , introducing a new tool to the ML community for future studies on interpretation and improvements of Transformers . 
( 2 ) We propose a novel design of the MHSA , Tunable-Head Self-Attention ( THSA ) , resulting in Tuformers , which has a theoretical guaranteed improvement in expressive power . ( 3 ) We provide a constructive re-parameterization scheme from vanilla Transformers to Tuformers so that Tuformers can be easily used in fine-tuning tasks with pre-trained models and efficient Transformers such as Linear Transformer ( Katharopoulos et al. , 2020 ) and Performer ( Choromanski et al. , 2020 ) with the state-of-the-art computation and memory complexities . ( 4 ) We formulate a design space for attention unit design and a framework for analyzing their expressive power using tensor representation theory under the intuitive guidance of tensor diagrams . 2 TENSOR DIAGRAM REPRESENTATION . Notations . We use lower case letters ( e.g. , v ) to denote vectors , upper case letters ( e.g. , M ) to denote matrices , and curly letters ( e.g. , T ) to denote general tensors . For a tensor T ∈ RI1×···IM , we refer to the number of indices as order , each individual index as mode , and the length of one mode as dimension . For instance , T is an M th order tensor that has dimension Im at its mth mode . We reserve superscripts to distinguish similar arrays ( e.g. , WQ , WK , W V are query/key/value weight matrices ) , and subscripts to index the elements in an array ( e.g. , Wij is the ( i , j ) th element of W ) . We use colon : to slice an array ( e.g. , Wi , : denotes the ith row of W ) . We propose to use tensor diagrams ( Penrose , 1971 ) , a commonly used rigorous/precise and intuitive graphical representation for multi-linear operations among higher-order arrays ( i.e. , tensors ) , to represent multi-head self-attention ( MHSA ) . Since MHSA consists of multilinear operations and a nonlinear softmax function , we will introduce tensor diagrams and our novel design extending tensor diagrams to denote the composition of nonlinear and multilinear operations in MHSA . 2.1 TENSOR DIAGRAM BASICS v M 𝓣 Vector order 1 tensor Matrix order 2 tensor 3 dimensional array order 3 tensor A BA A B C Figure 1 : Arrays in tensor diagram . A vector is denoted as a node with 1 leg , a matrix as a node with 2 legs and an N -dimensional array as a node with N legs . Arrays denoted as nodes with legs . An array is represented as a node with leg ( s ) in a tensor diagram as shown in Figure 1 . We denote the order ( the number of dimensions ) of the array by the number of legs extending from the node . Each labeled leg represents one mode of a tensor . Every mode of the tensor needs to be uniquely labeled . We usually use the dimension of the mode as the label ( i.e. , an associated positive integer written on top of each leg ) . The legs do not need to be straight lines , and their orientations do not matter . Matrices M ∈ RA×B and M > ∈ RB×A can be represented via the same tensor diagram as long as the M node has two legs ( to denote that it is a matrix , wherever the legs extend to ) , labeled as A and B ( to denote its size ) . Operations in tensor diagrams . There are three types of operations in the calculation of MHSA : , contraction , softmax , and batch multiplication , as shown in Figure 2 and explained in its caption . Evaluation of tensor diagrams : ( 1 ) Evaluation order . We can evaluate a tensor diagram in any pair-wise order except for the nodes in the softmax box . Since the softmax function is nonlinear , we must first evaluate the nodes in the softmax box with arbitrary order before those outside the box . 
( 2 ) Reading a tensor diagram . We can easily identify the output shape by the dangling edges . For instant , in Section 2.1 , there are three dangling legs M , N , R. Thus the output is a 3rd order tensor with dimensions M , N , R. Note that the softmax function does not change the shape of the tensor . Advantages of tensor diagrams : ( 1 ) Tensor diagram is orientation invariant , meaning that we can represent X and X > using the same tensor diagram . Thus in multi-head , we obtain a universal graphical representation regardless of whether we represent an embedding of each token in the sequence as rows or columns of the input embedding matrix X . ( 2 ) Multi-linear operations are concisely represented and interpreted . ( 3 ) The representation is both precise in math and intuitive , and the information flow is clear , making analysis of network structure more accessible . In addition , with the labels of the legs , we can read the model complexity explicitly . We include a comprehensive introduction to tensor diagram representation in Appendix A .", "role": "user" }, { "content": "Focusing on the multi-head self-attention (MHSA) structure, this paper proposes an extension of the tensor diagram to denote self-attention (SA) structures more intuitively. Then inspired by the Tucker format, this paper also proposes a new form of SA named Tucker-Head Self-Attention (THSA), which can also be illustrated with the tensor diagram extension. Although THSA derives some success, THSA may lack theoretical guarantee and experiments seems insufficient.", "role": "assistant" } ]
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: The approach in (<cite>Barzilay and McKeown, 2001</cite>) does not use deep linguistic analysis and therefore is suitable to noisy corpora like ours. The following provides a summary of <cite>their technique</cite>.
["Uses"]
science.multicite_intent_classification.387
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nThe approach in (<cite>Barzilay and McKeown, 2001</cite>) does not use deep linguistic analysis and therefore is suitable to noisy corpora like ours. The following provides a summary of <cite>their technique</cite>.", "role": "user" }, { "content": "[\"Uses\"]", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:

- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.

Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].

Abstract:
ABSTRACT.BACKGROUND: Appropriate fluid therapy affects morbidity and mortality rates. A conclusion is yet to be reached on the role of crystalloids and colloids in immediate fluid therapy. This study was done to determine the suitable solution in immediate resuscitation of patients with hemorrhagic shock caused by tissue trauma.

ABSTRACT.METHODS: One hundred trauma patients with hemorrhagic shock, who underwent fluid therapy in the emergency unit, were assigned randomly to two groups of hydroxyethyl starch (Voluven) and normal saline. Before and after fluid therapy, 1 cc of blood was taken from all patients in order to determine and compare base excess levels.

ABSTRACT.RESULTS: In hydroxyethyl starch (Voluven) and normal saline groups, base excess level after solution therapy increased about 9.65 and 5.46 volumes, respectively, in which augmentation in hydroxyethyl starch (Voluven) group is significantly higher than normal saline group (P ≤ 0.001).

ABSTRACT.CONCLUSION: By using hydroxyethyl starch (Voluven) for fluid therapy in hemorrhagic shock caused by trauma, serum base excess decreases and results in improvement in tissue perfusion and better balance in acid-base status and it seems to be superior over normal saline administration, but the building block of the ideal fluid therapy should still remain with the physician's final clinical judgment.

Extracted Data:
[["Hydroxyethyl starch (Voluven)", "Normal saline", "Level of base excess after therapy", "significantly increased", "In hydroxyethyl starch (Voluven) and normal saline groups, base excess level after solution therapy increased about 9.65 and 5.46 volumes, respectively, in which augmentation in hydroxyethyl starch (Voluven) group is significantly higher than normal saline group (P \u2264 0.001)."]]
science.evidence_inference.69
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: Appropriate fluid therapy affects morbidity and mortality rates. A conclusion is yet to be reached on the role of crystalloids and colloids in immediate fluid therapy. This study was done to determine the suitable solution in immediate resuscitation of patients with hemorrhagic shock caused by tissue trauma.\n\nABSTRACT.METHODS: One hundred trauma patients with hemorrhagic shock, who underwent fluid therapy in the emergency unit, were assigned randomly to two groups of hydroxyethyl starch (Voluven) and normal saline. Before and after fluid therapy, 1 cc of blood was taken from all patients in order to determine and compare base excess levels.\n\nABSTRACT.RESULTS: In hydroxyethyl starch (Voluven) and normal saline groups, base excess level after solution therapy increased about 9.65 and 5.46 volumes, respectively, in which augmentation in hydroxyethyl starch (Voluven) group is significantly higher than normal saline group (P ≤ 0.001).\n\nABSTRACT.CONCLUSION: By using hydroxyethyl starch (Voluven) for fluid therapy in hemorrhagic shock caused by trauma, serum base excess decreases and results in improvement in tissue perfusion and better balance in acid-base status and it seems to be superior over normal saline administration, but the building block of the ideal fluid therapy should still remain with the physician's final clinical judgment.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"Hydroxyethyl starch (Voluven)\", \"Normal saline\", \"Level of base excess after therapy\", \"significantly increased\", \"In hydroxyethyl starch (Voluven) and normal saline groups, base excess level after solution therapy increased about 9.65 and 5.46 volumes, respectively, in which augmentation in hydroxyethyl starch (Voluven) group is significantly higher than normal saline group (P \\u2264 0.001).\"]]", "role": "assistant" } ]
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: siRNA knockdown of A20 slows tumor progression in an in vivo murine xenograft model. Research Abstract: OBJECTIVE A20 is a TNF-inducible primary response gene, which has been found to have antiapoptotic function in several cancer cells. This study investigates A20 expression in human glioma tissues and four glioma cell lines, and its effect on tumorigenesis of glioma cells and a mouse tumor model. METHODS Human glioma tissue samples and cells were subject to reverse transcription-PCR (RT-PCR), western blotting and immunohistochemistry. Glioma cells was tested by flow cytometry. A xenograft tumor model in mice was utilized to examine the knock-down effect of specific A20 siRNAs on tumorigenesis. RESULTS A20 was overexpressed in clinical glioma tissue samples (63.9%) and correlated with clinical staging. All four human glioma cell lines expressed A20, among which U87 displayed the strongest expression signals. Inhibiting A20 expression by siRNAs in vitro reduced the growth rates of glioma cells and resulted in G1/S arrest and increased apoptosis. In a mouse tumor model, local administration of siRNA significantly suppressed solid tumor growth. CONCLUSIONS A20 was overexpressed both in human glioma tissues and cell lines, and inhibiting A20 expression greatly slowed tumor cell growth in culture and in mice. These findings indicated that A20 is involved in tumorigenesis of human glioma, and may serve as a future therapeutic target.
{"verdict": "SUPPORT", "evidence": ["Inhibiting A20 expression by siRNAs in vitro reduced the growth rates of glioma cells and resulted in G1/S arrest and increased apoptosis.", "In a mouse tumor model, local administration of siRNA significantly suppressed solid tumor growth.", "CONCLUSIONS A20 was overexpressed both in human glioma tissues and cell lines, and inhibiting A20 expression greatly slowed tumor cell growth in culture and in mice."]}
science.scifact_entailment.914
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nsiRNA knockdown of A20 slows tumor progression in an in vivo murine xenograft model.\n\nResearch Abstract:\nOBJECTIVE A20 is a TNF-inducible primary response gene, which has been found to have antiapoptotic function in several cancer cells.\nThis study investigates A20 expression in human glioma tissues and four glioma cell lines, and its effect on tumorigenesis of glioma cells and a mouse tumor model.\nMETHODS Human glioma tissue samples and cells were subject to reverse transcription-PCR (RT-PCR), western blotting and immunohistochemistry.\nGlioma cells was tested by flow cytometry.\nA xenograft tumor model in mice was utilized to examine the knock-down effect of specific A20 siRNAs on tumorigenesis.\nRESULTS A20 was overexpressed in clinical glioma tissue samples (63.9%) and correlated with clinical staging.\nAll four human glioma cell lines expressed A20, among which U87 displayed the strongest expression signals.\nInhibiting A20 expression by siRNAs in vitro reduced the growth rates of glioma cells and resulted in G1/S arrest and increased apoptosis.\nIn a mouse tumor model, local administration of siRNA significantly suppressed solid tumor growth.\nCONCLUSIONS A20 was overexpressed both in human glioma tissues and cell lines, and inhibiting A20 expression greatly slowed tumor cell growth in culture and in mice.\nThese findings indicated that A20 is involved in tumorigenesis of human glioma, and may serve as a future therapeutic target.", "role": "user" }, { "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"Inhibiting A20 expression by siRNAs in vitro reduced the growth rates of glioma cells and resulted in G1/S arrest and increased apoptosis.\", \"In a mouse tumor model, local administration of siRNA significantly suppressed solid tumor growth.\", \"CONCLUSIONS A20 was overexpressed both in human glioma tissues and cell lines, and inhibiting A20 expression greatly slowed tumor cell growth in culture and in mice.\"]}", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: Wavelet-Packet Powered Deepfake Image Detection Main Content: 1 INTRODUCTION . While GANs can improve traffic prediction ( Aigner & Körner , 2019 ) , extract useful representations from data ( Radford et al. , 2015 ) , translate textual descriptions into images ( Zhang et al. , 2017 ) , transfer scene and style information between images ( Zhu et al. , 2017 ) , and generate original art ( Tan et al. , 2017 ) , they can also enable abusive actors to quickly and easily generate potentially damaging , highly realistic fake images , colloquially called deepfakes ( Parkin ; Frank et al. , 2020 ) . As the internet becomes a more prominent virtual public space for political discourse and social media outlet ( Applebaum , 2020 ) , deepfakes present a looming threat to its integrity that must be met with techniques for differentiating the real and trustworthy from the fake . Previous algorithmic techniques for separating real from computer-hallucinated images of people have relied on identifying fingerprints in either the spatial ( Yu et al. , 2019 ; Marra et al. , 2019 ) or frequency ( Frank et al. , 2020 ; Dzanic et al. , 2020 ) domain . However , to the best of our knowledge , no techniques have jointly considered the two domains in a multi-scale fashion . We believe our virtual public spaces and social media outlets will benefit from a growing , diverse toolbox of techniques enabling automatic detection of GAN-generated content , where the introduced waveletbased approach provides a competitive and interpretable addition . In this paper , we make the following contributions : • We present a principled wavelet-packet-based analysis of GAN-generated images . Compared to existing work in the frequency domain , we examine the spatio-frequency properties of GAN-generated content for the first time . We find significant differences between real and synthetic images in both the wavelet-packet mean and standard deviation , with increasing frequency and at the edges . • To the best of our knowledge , we present the first application and implementation of boundary wavelets for image analysis in the deep learning context . Proper implementation of boundary-wavelet filters allows us to share identical network architectures across different wavelet lengths . • As a result of the aforementioned wavelet-based analysis , we build classifiers to identify image sources . We work with fixed seed values for reproducibility and report mean and standard deviations over five runs whenever possible . Our systematic analysis shows slightly improved or competitive performance . The source code for our wavelet toolbox , the first publicly available toolbox for fast boundary-wavelet transforms in the python world , and our experiments is available at https : //github.com/ wavelet-detection-review . 2 RELATED WORK . 2.1 GENERATIVE ADVERSARIAL NETS . The advent of GANs ( Goodfellow et al. , 2014 ) heralded several successful image generation projects . For a recent review on GAN theory , architecture , algorithms , and applications , we refer the reader to Gui et al . ( 2020 ) . We highlight the contribution of the progressive growth technique , in which a small network is trained on smaller images then gradually increased in complexity/number of weights and image size , on the ability of optimization to converge at high resolutions of 1024× 1024 pixels ( Karras et al. 
, 2018 ) . As a supplement , style transfer methods ( Gatys et al. , 2016 ) have been integrated into new style-based generators . The resulting style-GANs have increased the statistical variation of the hallucinated faces ( Karras et al. , 2019 ; 2020 ) . Finally , regularization ( e.g. , using spectral methods ) , has increased the stability of the optimization process ( Miyato et al. , 2018 ) . The recent advances have allowed large-scale training ( Brock et al. , 2019 ) , which in turn improves the quality further . 2.2 WAVELETS . Originally developed within applied mathematics ( Mallat , 1989 ; Daubechies , 1992 ) , wavelets are a long-established part of the image analysis and processing toolbox ( Taubman & Marcellin , 2002 ; Strang & Nguyen , 1996 ; Jensen & la Cour-Harbo , 2001 ) . In the world of traditional image coding , wavelet-based codes have been developed to replace techniques based on the discrete cosine transform ( DCT ) ( Taubman & Marcellin , 2002 ) . Early applications of wavelets in machine learning studied the integration of wavelets into neural networks for function learning ( Zhang et al. , 1995 ) . Within this line of work , wavelets have previously served as input features ( Chen et al. , 2015 ) , or as early layers of scatter nets ( Mallat , 2012 ; Cotter , 2020 ) . Deeper inside neural networks , wavelets have appeared within pooling layers using static Haar ( Wang et al. , 2020c ; Williams & Li , 2018 ) or adaptive wavelets ( Wolter & Garcke , 2021 ) . Within the subfield of generative networks , ongoing concurrent research ( Gal et al. , 2021 ) is exploring the use of wavelets to improve GAN generated image content . 2.3 DEEPFAKE DETECTION . Previous work on deepfake image detection broadly falls into two categories . The first group works in the frequency domain . Projects include the study of natural and deep network generated images in the frequency space created by the DCT , as well as detectors based on Fourier features ( Zhang et al. , 2019 ; Durall et al. , 2019 ; Dzanic et al. , 2020 ; Frank et al. , 2020 ; Durall et al. , 2020 ; Giudice et al. , 2021 ) . In particular , Frank et al . ( 2020 ) visualizes the DCT transformed images and identifies artifacts created by different upsampling methods . We learn that the transformed images are efficient classification features , which allow significant parameter reductions in GAN identification classifiers . Instead of relying on the DCT , Dzanic et al . ( 2020 ) studied the distribution of Fourier coefficients for real and GAN-generated images . After noting significant deviations of the mean frequencies for GAN generated- and real-images , classifiers are trained . Similarly , Giudice et al . ( 2021 ) studies statistics of the DCT coefficients and uses estimations for each GAN-engine for classification . Dzanic et al . ( 2020 ) found high-frequency discrepancies for GAN-generated imagery , built detectors , and spoofed the newly trained Fourier-classifiers by manually adapting the coefficients in Fourier-space . Finally , He et al . ( 2021 ) combine 2d-FFT and DCT features and observed improved performance . The second group of classifiers works in the spatial or pixel domain , among others , ( Yu et al. , 2019 ; Wang et al. , 2020a ; b ; Zhao et al. , 2021a ; b ) train ( convolutional ) neural networks directly on the raw images to identify various GAN architectures . Building upon pixel-CNN Wang & Deng ( 2021 ) adds neural attention . According to ( Yu et al. 
, 2019 ) , the classifier features in the final layers constitute the fingerprint for each GAN and are interesting to visualize . Instead of relying on deep-learning , ( Marra et al. , 2019 ) proposed GAN-fingerprints working exclusively with denoising-filter and mean operations , whereas ( Guarnera et al. , 2020 ) computes convolutional traces . To the best of our knowledge , we are the first to combine both schools of thought by proposing to work directly with a spatio-frequency wavelet packet representation in a multi-scale fashion . As we demonstrate in the next section , wavelet packets allow visualization of frequency information while , at the same time , preserving local information to some degree . Note that Tang et al . ( 2021 ) uses the first level of a discrete wavelet transform , this means no multi-scale representation as a pre-processing step to compute spectral correlations between the color bands of an image , which are then further processed to obtain features for a classifier . 3 METHODS . “ Wavelets are localized waves , instead of oscillating forever , they drop to zero. ” This observation , in the words of Strang & Nguyen ( 1996 ) , leads to the first motivation for using wavelet transforms . In contrast to the periodic sine and cosine waves in the Fourier-basis , which are localized in frequency , wavelets are localized in both space and frequency . Since images often contain sharp borders , where pixel values change rapidly , slowly decaying Fourier coefficients caused by such short pulses can fill up the spectrum . Reconstruction of the original pulse depends on the cancellation of most coefficients . The locality of wavelets makes it easier to deal with such pulses . While previous approaches based on the fast Fourier transform ( FFT ) by Dzanic et al . ( 2020 ) and DCT from Frank et al . ( 2020 ) and Giudice et al . ( 2021 ) provide abundant frequency space information , the global nature of these bases means all spatial relations are missing . The desire to extend our knowledge by exploiting spatial information is our second reason to work with wavelets . Wavelets allow the representation of hierarchical levels of detail in space and frequency . Before discussing the mechanics of the fast wavelet transform ( FWT ) and its packet variant , we present a proof of concept experiment in Figure 1 . We investigate the coefficients of the level 3 Haar wavelet packet transform . Computations use 5K 1024× 1024 pixel images from Flickr Faces High Quality ( FFHQ ) ( Karras et al. , 2019 ) and 5K 1024× 1024 pixel images generated by StyleGAN . The leftmost columns of Figure 1 show a sample from both FFHQ and a StyleGAN generated one . Mean wavelet packets appear in the second , and their standard deviation in the third column . Finally , the rightmost column plots the absolute mean packets and standard deviation differences . For improved visibility , we re-scale the absolute values of the packet representation , averaged first over the three color bands , using the natural logarithm loge . We find that the mean wavelet packets of GAN-generated images are often significantly brighter . The differences become more pronounced as the frequency increases from the top left to the bottom right . The differences in high-frequency packets independently confirm Dzanic et al . ( 2020 ) , who saw the most significant differences for high-frequency Fourier-coefficients . 
The locality of the wavelet filters allows face-like shapes to appear , but the image edges stand out , allowing us to pinpoint the frequency disparities . We now have an intuition regarding the local origin of the differences . We note that Haar wavelet packets do not use padding or orthogonalization and conclude that the differences stem from GAN generation . For the standard deviations , the picture is reversed . Instead of the GAN the FFHQ-packets appear brighter . The StyleGAN packets do not deviate as much as the data in the original dataset . The observation suggests that the GAN did not capture the complete variance of the original data . Our evidence indicates that GAN-generated backgrounds are of particular monotony across all frequency bands . In the next section , we discuss how the fast wavelet transform ( FWT ) and wavelet packets in Figure 1 are computed . For multi-channel color images , we transform each color separately , but for simplicity , we will only consider single channels in the following exposition . 3.1 FAST WAVELET TRANSFORM . Wavelets can be defined by low- and corresponding high-pass filter or by a wavelet function and a scaling function , where we follow the filter construction , as common in signal processing . The FWT ( Mallat , 1989 ; Strang & Nguyen , 1996 ; Jensen & la Cour-Harbo , 2001 ) utilizes convolutions to decompose an input signal into its frequency components , where repeated application results in a multi-scale analysis . The resulting wavelet coefficients reflect scale ( of level q ) , which is related to frequency , and location , i.e. , the position in the transformed and scaled image of level q . Even though 2D wavelets such as the Mexican hat wavelet exist , one dimensional wavelet pairs are often transformed to 2D quadruples using the outer products ( Vyas et al. , 2018 ) : fa = fLf T L , fh = fLf T H , fv = fHf T L , fd = fHf T H. ( 1 ) Common choices for 1D wavelets are the Daubechies-wavelets ( “ db ” ) and their less asymmetrical variant , the symlets ( “ sym ” ) . We refer the reader to Daubechies ( 1992 ) for an in-depth discussion . In the equations above , f denotes a filter vector . For one-dimensional filters , the subscripts L denote the one-dimensional low-pass and H the high pass filter , respectively . In the two-dimensional case , a denotes the approximation coefficients , h denotes the horizontal coefficients , v denotes vertical coefficients , and d denotes the diagonal coefficients . The 2D transformation at representation level q + 1 requires the input xq as well as a filter quadruple fk for k ∈ [ a , h , v , d ] and is computed by xq ∗ fk = kq+1 , ( 2 ) where ∗ denotes stride two convolution . Note the input image is at level zero , i.e . x0 = I , while for stage q the low pass result of q − 1 is used as input . The wavelet transform is invertible . The forward or analysis transform relies on the analysis filters often denoted as Hk ( Strang & Nguyen , 1996 ) . The inverse or synthesis transform filters are typically written as Fk . The down- ↓2 and up- ↑2 arrow symbols describe the sampling operations in what is known as stride two convolution and transposed convolution in today ’ s machine learning literature . In order to make the transform invertible and plots interpretable , not just any filter will do . The perfect reconstruction and anti-aliasing conditions must hold . We refer to Strang & Nguyen ( 1996 ) and Jensen & la Cour-Harbo ( 2001 ) for an excellent further discussion of these conditions .
This paper presents an approach to detect GAN generated Deepfake images using wavelet features. The paper also showed linear separation of GANs using the proposed approach. Experiments presented on many datasets showed that the approach is promising and presents a method that integrates both spatial and frequency domain information.
science.mup_single_document_summarization.7392
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nWavelet-Packet Powered Deepfake Image Detection\n\nMain Content:\n1 INTRODUCTION . While GANs can improve traffic prediction ( Aigner & Körner , 2019 ) , extract useful representations from data ( Radford et al. , 2015 ) , translate textual descriptions into images ( Zhang et al. , 2017 ) , transfer scene and style information between images ( Zhu et al. , 2017 ) , and generate original art ( Tan et al. , 2017 ) , they can also enable abusive actors to quickly and easily generate potentially damaging , highly realistic fake images , colloquially called deepfakes ( Parkin ; Frank et al. , 2020 ) . As the internet becomes a more prominent virtual public space for political discourse and social media outlet ( Applebaum , 2020 ) , deepfakes present a looming threat to its integrity that must be met with techniques for differentiating the real and trustworthy from the fake . Previous algorithmic techniques for separating real from computer-hallucinated images of people have relied on identifying fingerprints in either the spatial ( Yu et al. , 2019 ; Marra et al. , 2019 ) or frequency ( Frank et al. , 2020 ; Dzanic et al. , 2020 ) domain . However , to the best of our knowledge , no techniques have jointly considered the two domains in a multi-scale fashion . We believe our virtual public spaces and social media outlets will benefit from a growing , diverse toolbox of techniques enabling automatic detection of GAN-generated content , where the introduced waveletbased approach provides a competitive and interpretable addition . In this paper , we make the following contributions : • We present a principled wavelet-packet-based analysis of GAN-generated images . Compared to existing work in the frequency domain , we examine the spatio-frequency properties of GAN-generated content for the first time . We find significant differences between real and synthetic images in both the wavelet-packet mean and standard deviation , with increasing frequency and at the edges . • To the best of our knowledge , we present the first application and implementation of boundary wavelets for image analysis in the deep learning context . Proper implementation of boundary-wavelet filters allows us to share identical network architectures across different wavelet lengths . • As a result of the aforementioned wavelet-based analysis , we build classifiers to identify image sources . We work with fixed seed values for reproducibility and report mean and standard deviations over five runs whenever possible . Our systematic analysis shows slightly improved or competitive performance . The source code for our wavelet toolbox , the first publicly available toolbox for fast boundary-wavelet transforms in the python world , and our experiments is available at https : //github.com/ wavelet-detection-review . 2 RELATED WORK . 2.1 GENERATIVE ADVERSARIAL NETS . The advent of GANs ( Goodfellow et al. , 2014 ) heralded several successful image generation projects . For a recent review on GAN theory , architecture , algorithms , and applications , we refer the reader to Gui et al . ( 2020 ) . 
We highlight the contribution of the progressive growth technique , in which a small network is trained on smaller images then gradually increased in complexity/number of weights and image size , on the ability of optimization to converge at high resolutions of 1024× 1024 pixels ( Karras et al. , 2018 ) . As a supplement , style transfer methods ( Gatys et al. , 2016 ) have been integrated into new style-based generators . The resulting style-GANs have increased the statistical variation of the hallucinated faces ( Karras et al. , 2019 ; 2020 ) . Finally , regularization ( e.g. , using spectral methods ) , has increased the stability of the optimization process ( Miyato et al. , 2018 ) . The recent advances have allowed large-scale training ( Brock et al. , 2019 ) , which in turn improves the quality further . 2.2 WAVELETS . Originally developed within applied mathematics ( Mallat , 1989 ; Daubechies , 1992 ) , wavelets are a long-established part of the image analysis and processing toolbox ( Taubman & Marcellin , 2002 ; Strang & Nguyen , 1996 ; Jensen & la Cour-Harbo , 2001 ) . In the world of traditional image coding , wavelet-based codes have been developed to replace techniques based on the discrete cosine transform ( DCT ) ( Taubman & Marcellin , 2002 ) . Early applications of wavelets in machine learning studied the integration of wavelets into neural networks for function learning ( Zhang et al. , 1995 ) . Within this line of work , wavelets have previously served as input features ( Chen et al. , 2015 ) , or as early layers of scatter nets ( Mallat , 2012 ; Cotter , 2020 ) . Deeper inside neural networks , wavelets have appeared within pooling layers using static Haar ( Wang et al. , 2020c ; Williams & Li , 2018 ) or adaptive wavelets ( Wolter & Garcke , 2021 ) . Within the subfield of generative networks , ongoing concurrent research ( Gal et al. , 2021 ) is exploring the use of wavelets to improve GAN generated image content . 2.3 DEEPFAKE DETECTION . Previous work on deepfake image detection broadly falls into two categories . The first group works in the frequency domain . Projects include the study of natural and deep network generated images in the frequency space created by the DCT , as well as detectors based on Fourier features ( Zhang et al. , 2019 ; Durall et al. , 2019 ; Dzanic et al. , 2020 ; Frank et al. , 2020 ; Durall et al. , 2020 ; Giudice et al. , 2021 ) . In particular , Frank et al . ( 2020 ) visualizes the DCT transformed images and identifies artifacts created by different upsampling methods . We learn that the transformed images are efficient classification features , which allow significant parameter reductions in GAN identification classifiers . Instead of relying on the DCT , Dzanic et al . ( 2020 ) studied the distribution of Fourier coefficients for real and GAN-generated images . After noting significant deviations of the mean frequencies for GAN generated- and real-images , classifiers are trained . Similarly , Giudice et al . ( 2021 ) studies statistics of the DCT coefficients and uses estimations for each GAN-engine for classification . Dzanic et al . ( 2020 ) found high-frequency discrepancies for GAN-generated imagery , built detectors , and spoofed the newly trained Fourier-classifiers by manually adapting the coefficients in Fourier-space . Finally , He et al . ( 2021 ) combine 2d-FFT and DCT features and observed improved performance . The second group of classifiers works in the spatial or pixel domain , among others , ( Yu et al. 
, 2019 ; Wang et al. , 2020a ; b ; Zhao et al. , 2021a ; b ) train ( convolutional ) neural networks directly on the raw images to identify various GAN architectures . Building upon pixel-CNN Wang & Deng ( 2021 ) adds neural attention . According to ( Yu et al. , 2019 ) , the classifier features in the final layers constitute the fingerprint for each GAN and are interesting to visualize . Instead of relying on deep-learning , ( Marra et al. , 2019 ) proposed GAN-fingerprints working exclusively with denoising-filter and mean operations , whereas ( Guarnera et al. , 2020 ) computes convolutional traces . To the best of our knowledge , we are the first to combine both schools of thought by proposing to work directly with a spatio-frequency wavelet packet representation in a multi-scale fashion . As we demonstrate in the next section , wavelet packets allow visualization of frequency information while , at the same time , preserving local information to some degree . Note that Tang et al . ( 2021 ) uses the first level of a discrete wavelet transform , this means no multi-scale representation as a pre-processing step to compute spectral correlations between the color bands of an image , which are then further processed to obtain features for a classifier . 3 METHODS . “ Wavelets are localized waves , instead of oscillating forever , they drop to zero. ” This observation , in the words of Strang & Nguyen ( 1996 ) , leads to the first motivation for using wavelet transforms . In contrast to the periodic sine and cosine waves in the Fourier-basis , which are localized in frequency , wavelets are localized in both space and frequency . Since images often contain sharp borders , where pixel values change rapidly , slowly decaying Fourier coefficients caused by such short pulses can fill up the spectrum . Reconstruction of the original pulse depends on the cancellation of most coefficients . The locality of wavelets makes it easier to deal with such pulses . While previous approaches based on the fast Fourier transform ( FFT ) by Dzanic et al . ( 2020 ) and DCT from Frank et al . ( 2020 ) and Giudice et al . ( 2021 ) provide abundant frequency space information , the global nature of these bases means all spatial relations are missing . The desire to extend our knowledge by exploiting spatial information is our second reason to work with wavelets . Wavelets allow the representation of hierarchical levels of detail in space and frequency . Before discussing the mechanics of the fast wavelet transform ( FWT ) and its packet variant , we present a proof of concept experiment in Figure 1 . We investigate the coefficients of the level 3 Haar wavelet packet transform . Computations use 5K 1024× 1024 pixel images from Flickr Faces High Quality ( FFHQ ) ( Karras et al. , 2019 ) and 5K 1024× 1024 pixel images generated by StyleGAN . The leftmost columns of Figure 1 show a sample from both FFHQ and a StyleGAN generated one . Mean wavelet packets appear in the second , and their standard deviation in the third column . Finally , the rightmost column plots the absolute mean packets and standard deviation differences . For improved visibility , we re-scale the absolute values of the packet representation , averaged first over the three color bands , using the natural logarithm loge . We find that the mean wavelet packets of GAN-generated images are often significantly brighter . The differences become more pronounced as the frequency increases from the top left to the bottom right . 
The differences in high-frequency packets independently confirm Dzanic et al . ( 2020 ) , who saw the most significant differences for high-frequency Fourier-coefficients . The locality of the wavelet filters allows face-like shapes to appear , but the image edges stand out , allowing us to pinpoint the frequency disparities . We now have an intuition regarding the local origin of the differences . We note that Haar wavelet packets do not use padding or orthogonalization and conclude that the differences stem from GAN generation . For the standard deviations , the picture is reversed . Instead of the GAN the FFHQ-packets appear brighter . The StyleGAN packets do not deviate as much as the data in the original dataset . The observation suggests that the GAN did not capture the complete variance of the original data . Our evidence indicates that GAN-generated backgrounds are of particular monotony across all frequency bands . In the next section , we discuss how the fast wavelet transform ( FWT ) and wavelet packets in Figure 1 are computed . For multi-channel color images , we transform each color separately , but for simplicity , we will only consider single channels in the following exposition . 3.1 FAST WAVELET TRANSFORM . Wavelets can be defined by low- and corresponding high-pass filter or by a wavelet function and a scaling function , where we follow the filter construction , as common in signal processing . The FWT ( Mallat , 1989 ; Strang & Nguyen , 1996 ; Jensen & la Cour-Harbo , 2001 ) utilizes convolutions to decompose an input signal into its frequency components , where repeated application results in a multi-scale analysis . The resulting wavelet coefficients reflect scale ( of level q ) , which is related to frequency , and location , i.e. , the position in the transformed and scaled image of level q . Even though 2D wavelets such as the Mexican hat wavelet exist , one dimensional wavelet pairs are often transformed to 2D quadruples using the outer products ( Vyas et al. , 2018 ) : fa = fLf T L , fh = fLf T H , fv = fHf T L , fd = fHf T H. ( 1 ) Common choices for 1D wavelets are the Daubechies-wavelets ( “ db ” ) and their less asymmetrical variant , the symlets ( “ sym ” ) . We refer the reader to Daubechies ( 1992 ) for an in-depth discussion . In the equations above , f denotes a filter vector . For one-dimensional filters , the subscripts L denote the one-dimensional low-pass and H the high pass filter , respectively . In the two-dimensional case , a denotes the approximation coefficients , h denotes the horizontal coefficients , v denotes vertical coefficients , and d denotes the diagonal coefficients . The 2D transformation at representation level q + 1 requires the input xq as well as a filter quadruple fk for k ∈ [ a , h , v , d ] and is computed by xq ∗ fk = kq+1 , ( 2 ) where ∗ denotes stride two convolution . Note the input image is at level zero , i.e . x0 = I , while for stage q the low pass result of q − 1 is used as input . The wavelet transform is invertible . The forward or analysis transform relies on the analysis filters often denoted as Hk ( Strang & Nguyen , 1996 ) . The inverse or synthesis transform filters are typically written as Fk . The down- ↓2 and up- ↑2 arrow symbols describe the sampling operations in what is known as stride two convolution and transposed convolution in today ’ s machine learning literature . In order to make the transform invertible and plots interpretable , not just any filter will do . 
The perfect reconstruction and anti-aliasing conditions must hold . We refer to Strang & Nguyen ( 1996 ) and Jensen & la Cour-Harbo ( 2001 ) for an excellent further discussion of these conditions .", "role": "user" }, { "content": "This paper presents an approach to detect GAN generated Deepfake images using wavelet features. The paper also showed linear separation of GANs using the proposed approach. Experiments presented on many datasets showed that the approach is promising and presents a method that integrates both spatial and frequency domain information.", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: Algorithmic Instabilities of Accelerated Gradient Descent Main Content: We study the algorithmic stability of Nesterov ’ s accelerated gradient method . For convex quadratic objectives , Chen et al . [ 10 ] proved that the uniform stability of the method grows quadratically with the number of optimization steps , and conjectured that the same is true for the general convex and smooth case . We disprove this conjecture and show , for two notions of algorithmic stability ( including uniform stability ) , that the stability of Nesterov ’ s accelerated method in fact deteriorates exponentially fast with the number of gradient steps . This stands in sharp contrast to the bounds in the quadratic case , but also to known results for non-accelerated gradient methods where stability typically grows linearly with the number of steps . 1 Introduction . Algorithmic stability has emerged over the last two decades as a central tool for generalization analysis of learning algorithms . While the classical approach in generalization theory originating in the PAC learning framework appeal to uniform convergence arguments , more recent progress on stochastic convex optimization models , starting with the pioneering work of Bousquet and Elisseeff [ 6 ] and Shalev-Shwartz et al . [ 24 ] , has relied on stability analysis for deriving tight generalization results for convex risk minimizing algorithms . Perhaps the most common form of algorithmic stability is the so called uniform stability [ 6 ] . Roughly , the uniform stability of a learning algorithm is the worst-case change in its output model , in terms of its loss on an arbitrary example , when replacing a single sample in the data set used for training . Bousquet and Elisseeff [ 6 ] initially used uniform stability to argue about the generalization of empirical risk minimization with strongly convex losses . Shalev-Shwartz et al . [ 24 ] revisited this concept and studied the stability effect of regularization on the generalization of convex models . Their bounds were recently improved in a variety of ways [ 13 , 14 , 7 ] and their approach has been influential in a variety of settings ( e.g. , 18 , 16 , 9 ) . In fact , to this day , algorithmic stability is essentially the only general approach for obtaining tight ( dimension free ) generalization bounds for convex optimization algorithms applied to the empirical risk ( see 24 , 12 ) . Significant focus has been put recently on studying the stability properties of iterative optimization algorithms . Hardt et al . [ 17 ] considered stochastic gradient descent ( SGD ) and gave the first bounds on its uniform stability for a convex and smooth loss function , that grow linearly with the number of optimization steps . As observed by Feldman and Vondrak [ 13 ] and Chen et al . [ 10 ] , their arguments also apply with minor modifications to full-batch gradient descent ( GD ) . Bassily et al . [ 5 ] exhibited a significant gap in stability between the smooth and non-smooth cases , showing that non-smooth GD and SGD are inherently less stable than their smooth counterparts . Even further , algorithmic stability has also been used as an analysis technique in stochastic mini-batched iterative optimization ( e.g. 
, 25 , 1 ) , and has been proved crucial to the design and analysis of differentially private optimization algorithms [ 26 , 4 , 15 ] , both of which focusing primarily on smooth optimization . 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . Having identified smoothness as key to algorithmic stability of iterative optimization methods , the following fundamental question emerges : how stable are optimal methods for smooth convex optimization ? In particular , what is the algorithmic stability of the celebrated Nesterov accelerated gradient ( NAG ) method [ 22 ] —a cornerstone of optimal methods in convex optimization ? Besides being a basic and natural question in its own right , its resolution could have important implications to the design and analysis of optimization algorithms , as well as serve to deepen our understanding of the generalization properties of iterative gradient methods . Chen et al . [ 10 ] addressed this question in the case of convex quadratic objectives and derived bounds on the uniform stability of NAG that grow quadratically with the number of gradient steps ( as opposed to the linear growth known for GD ) . They conjectured that similar bounds hold true more broadly , but fell short of proving this for general convex and smooth objectives . Our work is aimed at filling this gap . 1.1 Our Results . We establish tight algorithmic stability bounds for the Nesterov accelerated gradient method ( NAG ) . We show that , somewhat surprisingly , the uniform stability of NAG grows exponentially fast with the number of steps in the general convex and smooth setting . Namely , the uniform stability of ) -steps NAG with respect to a dataset of = examples is in general exp ( Ω ( ) ) ) /= , and in particular , after merely ) = $ ( log = ) steps the stability becomes the trivial Ω ( 1 ) . This result demonstrates a sharp contrast between the stability of NAG in the quadratic case and in the general convex , and disproves the conjecture of Chen et al . [ 10 ] that the uniform stability of NAG in the general convex setting is $ ( ) 2/= ) , as in the case of a quadratic objective . Our results in fact apply to a simpler notion of stability—one that is arguably more fundamental in the context of iterative optimization methods—which we term initialization stability . The initialization stability of an algorithm ( formally defined in Section 2 below ) measures the sensitivity of ’ s output to an ²-perturbation in its initialization point . For this notion , we demonstrate a construction of a smooth and convex objective function such that , for sufficiently small ² , the stability of ) -steps NAG is lower bounded by exp ( Ω ( ) ) ) ² . Here again , we exhibit a dramatic gap between the quadratic and general convex cases : for quadratic objectives , we show that the initialization stability of NAG is upper bounded by $ ( ) ² ) . For completeness , we also prove initialization stability upper bounds in a few relevant convex optimization settings : for GD , we analyze both the smooth and non-smooth cases ; for NAG , we give bounds for quadratic objectives as well as for general smooth ones . Table 1 summarizes the stability bounds we establish compared to existing bounds in the literature . Note in particular the remarkable exponential gap between the stability bounds for GD and NAG in the general smooth case , with respect to both stability definitions . 
Stability lower bounds for NAG are discussed in Sections 3 and 4 ; initialization stability upper bounds for the various settings and additional uniform stability bounds are detailed in the full version of the paper [ 3 ] . Finally , we remark that our focus here is on the general convex ( and smooth ) case , and we do not provide formal results for the strongly convex case . However , we argue that stability analysis in the latter case is not as compelling as in the general case . Indeed , a strongly convex objective admits a unique minimum , and so NAG will converge to an ²-neighborhood of this minimum in $ ( log ( 1/² ) ) steps from any initialization , at which point its stability becomes $ ( ² ) ; thus , with strong convexity perturbations in initialization get quickly washed away as the algorithm rapidly converges to the unique optimum . ( A similar reasoning also applies to uniform stability with strongly convex losses . ) 1.2 Overview of Main Ideas and Techniques . We now provide some intuition to our constructions and highlight some of the key ideas leading to our results . We start by revisiting the analysis of the quadratic case which is simpler and better understood . Why NAG is stable for quadratics : Consider a quadratic function 5 with Hessian matrix 0 . For analyzing the initialization stability of NAG , let us consider two runs of the method initialized at G0 , G̃0 respectively , and let ( GC , HC ) , ( G̃C , H̃C ) denote the corresponding NAG iterates at step C. Further , let us denote by ∆GC , GC − G̃C the difference between the two sequences of iterates . Using the update rule of NAG ( see Eqs . ( 1 ) and ( 2 ) below ) and the fact that for a quadratic 5 , differences between gradients can be expressed as ∇ 5 ( G ) − ∇ 5 ( G ′ ) = ( G − G ′ ) for any G , G ′ ∈ ℝ3 , it is straightforward to show that the distance ∆GC evolves according to ∆GC+1 = ( − η ) ( ( 1 + γC ) ∆GC − γC∆GC−1 ) . This recursion can be naturally put in matrix form , leading to : ( ∆G C+1 ∆GC ) = C∏ : =1 ( ( 1 + γ : ) −γ : 0 ) ( ∆G1 ∆G0 ) , where here = − η . Thus , for a quadratic 5 , bounding the divergence ‖∆GC ‖ between the two NAG sequences reduces to controlling the operator norm of the matrix product above , namely C∏ : =1 ( ( 1 + γ : ) −γ : 0 ) . Remarkably , it can be shown that this norm is $ ( C ) for any 0 and any choice of −1 ≤ γ1 , . . . , γC ≤ 1 . ( This can be seen by writing the Schur decomposition of the involved matrices , as we show in the full version of the paper [ 3 ] .1 ) As a consequence , the initialization stability of NAG for a quadratic objective 5 is shown to grow only linearly with the number of steps C. What breaks down in the general convex case : For a general convex ( twice-differentiable and smooth ) 5 , the Hessian matrix is of course no longer fixed across the execution . Assuming for simplicity the one-dimensional case , similar arguments show that the relevant operator norm is of the form C∏ : =1 ( ( 1 + γ : ) : −γ : : 0 ) , where 0 ≤ 1 , . . . , C ≤ 1 are related to Hessians of 5 taken at suitable points along the optimization trajectory . However , if : are allowed to vary arbitrarily between steps , the matrix product above might explode exponentially fast , even in the one-dimensional case ! Indeed , fix γ : = 0.9 for all : , and set : = 0 whenever : mod 3 = 0 and : = 1 otherwise ; then using simple linear algebra the operator norm of interest can be shown to satisfy ( ( 0 01 0 ) ( 1.9 −0.91 0 ) ( 1.9 −0.91 0 ) ) C/3 = ( 0 02.71 −1.71 ) C/3 ≥ 1.15C . 
How a hard function should look like : The exponential blowup we exhibited above hinged on a worst-case sequence 1 , . . . , C that varies significantly between consecutive steps . It remains unclear , however , what does this imply for the actual optimization setup we care about , and whether such a sequence can be realized by Hessians of a convex and smooth function 5 . Our main results essentially answer the latter question on the affirmative and build upon a construction of such a function 5 that directly imitates such a bad sequence . Concretely , we generate a hard function inductively based on a running execution of NAG , where in each step we amend the construction with a “ gadget ” function having a piecewise-constant Hessian ( that equals either 0 or the maximal β ) ; see Fig . 1 for an illustration of this construct . The interval pieces are carefully chosen based on the NAG iterates computed so far in a way that a slightly perturbed 1Chen et al . [ 10 ] give an alternative argument based on Chebyshev polynomials . execution would traverse through intervals with an appropriate pattern of Hessians that induces a behaviour similar to the one exhibited by the matrix products above , leading to an exponential blowup in the stability terms . Fig . 2 shows a simulation of the divergence between the two trajectories of NAG on the objective function we construct , illustrating how the divergence fluctuates between positive and negative values , with its absolute value growing exponentially with time . More technical details on this construction can be found in Section 3 . From initialization stability to uniform stability : Finally , we employ a simple reduction to translate our results regarding initialization stability to relate to uniform stability in the context of empirical risk optimization , where one uses ( full-batch ) NAG to minimize the ( convex , smooth ) empirical risk induced by a sample ( of = examples . Concretely , we show that by replacing a single example in ( , we can arrive at a scenario where after one step of NAG on the original and modified samples the respective iterates are ² = Θ ( 1/= ) away from each other , whereas in the remaining steps both empirical risks simulate our bad function from before . Thus , we again end up with an exponential increase in the divergence between the two executions , that leads to a similar increase in the algorithmic ( uniform ) stability : the latter becomes as large as Ω ( 1 ) after merely ) = $ ( log = ) steps of full-batch NAG . The formal details of this reduction can be found in Section 4 .
It is very interesting and attractive to investigate the algorithmic instabilities for the famous Nesterov accelerated gradient descent method. This concepts are very related to ordinary differential equation, such as initialization stability corresponding to continuous dependence of solution about initialization for long time. The uniform stability is for the objective function. The authors tries to use the so-called uniform stability to describe the NAG under noise cannot obtain the desired result. The new concept --- uniform stability is a novel point.
science.mup_single_document_summarization.11373
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nAlgorithmic Instabilities of Accelerated Gradient Descent\n\nMain Content:\nWe study the algorithmic stability of Nesterov ’ s accelerated gradient method . For convex quadratic objectives , Chen et al . [ 10 ] proved that the uniform stability of the method grows quadratically with the number of optimization steps , and conjectured that the same is true for the general convex and smooth case . We disprove this conjecture and show , for two notions of algorithmic stability ( including uniform stability ) , that the stability of Nesterov ’ s accelerated method in fact deteriorates exponentially fast with the number of gradient steps . This stands in sharp contrast to the bounds in the quadratic case , but also to known results for non-accelerated gradient methods where stability typically grows linearly with the number of steps . 1 Introduction . Algorithmic stability has emerged over the last two decades as a central tool for generalization analysis of learning algorithms . While the classical approach in generalization theory originating in the PAC learning framework appeal to uniform convergence arguments , more recent progress on stochastic convex optimization models , starting with the pioneering work of Bousquet and Elisseeff [ 6 ] and Shalev-Shwartz et al . [ 24 ] , has relied on stability analysis for deriving tight generalization results for convex risk minimizing algorithms . Perhaps the most common form of algorithmic stability is the so called uniform stability [ 6 ] . Roughly , the uniform stability of a learning algorithm is the worst-case change in its output model , in terms of its loss on an arbitrary example , when replacing a single sample in the data set used for training . Bousquet and Elisseeff [ 6 ] initially used uniform stability to argue about the generalization of empirical risk minimization with strongly convex losses . Shalev-Shwartz et al . [ 24 ] revisited this concept and studied the stability effect of regularization on the generalization of convex models . Their bounds were recently improved in a variety of ways [ 13 , 14 , 7 ] and their approach has been influential in a variety of settings ( e.g. , 18 , 16 , 9 ) . In fact , to this day , algorithmic stability is essentially the only general approach for obtaining tight ( dimension free ) generalization bounds for convex optimization algorithms applied to the empirical risk ( see 24 , 12 ) . Significant focus has been put recently on studying the stability properties of iterative optimization algorithms . Hardt et al . [ 17 ] considered stochastic gradient descent ( SGD ) and gave the first bounds on its uniform stability for a convex and smooth loss function , that grow linearly with the number of optimization steps . As observed by Feldman and Vondrak [ 13 ] and Chen et al . [ 10 ] , their arguments also apply with minor modifications to full-batch gradient descent ( GD ) . Bassily et al . [ 5 ] exhibited a significant gap in stability between the smooth and non-smooth cases , showing that non-smooth GD and SGD are inherently less stable than their smooth counterparts . Even further , algorithmic stability has also been used as an analysis technique in stochastic mini-batched iterative optimization ( e.g. 
, 25 , 1 ) , and has been proved crucial to the design and analysis of differentially private optimization algorithms [ 26 , 4 , 15 ] , both of which focusing primarily on smooth optimization . 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . Having identified smoothness as key to algorithmic stability of iterative optimization methods , the following fundamental question emerges : how stable are optimal methods for smooth convex optimization ? In particular , what is the algorithmic stability of the celebrated Nesterov accelerated gradient ( NAG ) method [ 22 ] —a cornerstone of optimal methods in convex optimization ? Besides being a basic and natural question in its own right , its resolution could have important implications to the design and analysis of optimization algorithms , as well as serve to deepen our understanding of the generalization properties of iterative gradient methods . Chen et al . [ 10 ] addressed this question in the case of convex quadratic objectives and derived bounds on the uniform stability of NAG that grow quadratically with the number of gradient steps ( as opposed to the linear growth known for GD ) . They conjectured that similar bounds hold true more broadly , but fell short of proving this for general convex and smooth objectives . Our work is aimed at filling this gap . 1.1 Our Results . We establish tight algorithmic stability bounds for the Nesterov accelerated gradient method ( NAG ) . We show that , somewhat surprisingly , the uniform stability of NAG grows exponentially fast with the number of steps in the general convex and smooth setting . Namely , the uniform stability of ) -steps NAG with respect to a dataset of = examples is in general exp ( Ω ( ) ) ) /= , and in particular , after merely ) = $ ( log = ) steps the stability becomes the trivial Ω ( 1 ) . This result demonstrates a sharp contrast between the stability of NAG in the quadratic case and in the general convex , and disproves the conjecture of Chen et al . [ 10 ] that the uniform stability of NAG in the general convex setting is $ ( ) 2/= ) , as in the case of a quadratic objective . Our results in fact apply to a simpler notion of stability—one that is arguably more fundamental in the context of iterative optimization methods—which we term initialization stability . The initialization stability of an algorithm ( formally defined in Section 2 below ) measures the sensitivity of ’ s output to an ²-perturbation in its initialization point . For this notion , we demonstrate a construction of a smooth and convex objective function such that , for sufficiently small ² , the stability of ) -steps NAG is lower bounded by exp ( Ω ( ) ) ) ² . Here again , we exhibit a dramatic gap between the quadratic and general convex cases : for quadratic objectives , we show that the initialization stability of NAG is upper bounded by $ ( ) ² ) . For completeness , we also prove initialization stability upper bounds in a few relevant convex optimization settings : for GD , we analyze both the smooth and non-smooth cases ; for NAG , we give bounds for quadratic objectives as well as for general smooth ones . Table 1 summarizes the stability bounds we establish compared to existing bounds in the literature . Note in particular the remarkable exponential gap between the stability bounds for GD and NAG in the general smooth case , with respect to both stability definitions . 
Stability lower bounds for NAG are discussed in Sections 3 and 4 ; initialization stability upper bounds for the various settings and additional uniform stability bounds are detailed in the full version of the paper [ 3 ] . Finally , we remark that our focus here is on the general convex ( and smooth ) case , and we do not provide formal results for the strongly convex case . However , we argue that stability analysis in the latter case is not as compelling as in the general case . Indeed , a strongly convex objective admits a unique minimum , and so NAG will converge to an ²-neighborhood of this minimum in $ ( log ( 1/² ) ) steps from any initialization , at which point its stability becomes $ ( ² ) ; thus , with strong convexity perturbations in initialization get quickly washed away as the algorithm rapidly converges to the unique optimum . ( A similar reasoning also applies to uniform stability with strongly convex losses . ) 1.2 Overview of Main Ideas and Techniques . We now provide some intuition to our constructions and highlight some of the key ideas leading to our results . We start by revisiting the analysis of the quadratic case which is simpler and better understood . Why NAG is stable for quadratics : Consider a quadratic function 5 with Hessian matrix 0 . For analyzing the initialization stability of NAG , let us consider two runs of the method initialized at G0 , G̃0 respectively , and let ( GC , HC ) , ( G̃C , H̃C ) denote the corresponding NAG iterates at step C. Further , let us denote by ∆GC , GC − G̃C the difference between the two sequences of iterates . Using the update rule of NAG ( see Eqs . ( 1 ) and ( 2 ) below ) and the fact that for a quadratic 5 , differences between gradients can be expressed as ∇ 5 ( G ) − ∇ 5 ( G ′ ) = ( G − G ′ ) for any G , G ′ ∈ ℝ3 , it is straightforward to show that the distance ∆GC evolves according to ∆GC+1 = ( − η ) ( ( 1 + γC ) ∆GC − γC∆GC−1 ) . This recursion can be naturally put in matrix form , leading to : ( ∆G C+1 ∆GC ) = C∏ : =1 ( ( 1 + γ : ) −γ : 0 ) ( ∆G1 ∆G0 ) , where here = − η . Thus , for a quadratic 5 , bounding the divergence ‖∆GC ‖ between the two NAG sequences reduces to controlling the operator norm of the matrix product above , namely C∏ : =1 ( ( 1 + γ : ) −γ : 0 ) . Remarkably , it can be shown that this norm is $ ( C ) for any 0 and any choice of −1 ≤ γ1 , . . . , γC ≤ 1 . ( This can be seen by writing the Schur decomposition of the involved matrices , as we show in the full version of the paper [ 3 ] .1 ) As a consequence , the initialization stability of NAG for a quadratic objective 5 is shown to grow only linearly with the number of steps C. What breaks down in the general convex case : For a general convex ( twice-differentiable and smooth ) 5 , the Hessian matrix is of course no longer fixed across the execution . Assuming for simplicity the one-dimensional case , similar arguments show that the relevant operator norm is of the form C∏ : =1 ( ( 1 + γ : ) : −γ : : 0 ) , where 0 ≤ 1 , . . . , C ≤ 1 are related to Hessians of 5 taken at suitable points along the optimization trajectory . However , if : are allowed to vary arbitrarily between steps , the matrix product above might explode exponentially fast , even in the one-dimensional case ! Indeed , fix γ : = 0.9 for all : , and set : = 0 whenever : mod 3 = 0 and : = 1 otherwise ; then using simple linear algebra the operator norm of interest can be shown to satisfy ( ( 0 01 0 ) ( 1.9 −0.91 0 ) ( 1.9 −0.91 0 ) ) C/3 = ( 0 02.71 −1.71 ) C/3 ≥ 1.15C . 
How a hard function should look like : The exponential blowup we exhibited above hinged on a worst-case sequence 1 , . . . , C that varies significantly between consecutive steps . It remains unclear , however , what does this imply for the actual optimization setup we care about , and whether such a sequence can be realized by Hessians of a convex and smooth function 5 . Our main results essentially answer the latter question on the affirmative and build upon a construction of such a function 5 that directly imitates such a bad sequence . Concretely , we generate a hard function inductively based on a running execution of NAG , where in each step we amend the construction with a “ gadget ” function having a piecewise-constant Hessian ( that equals either 0 or the maximal β ) ; see Fig . 1 for an illustration of this construct . The interval pieces are carefully chosen based on the NAG iterates computed so far in a way that a slightly perturbed 1Chen et al . [ 10 ] give an alternative argument based on Chebyshev polynomials . execution would traverse through intervals with an appropriate pattern of Hessians that induces a behaviour similar to the one exhibited by the matrix products above , leading to an exponential blowup in the stability terms . Fig . 2 shows a simulation of the divergence between the two trajectories of NAG on the objective function we construct , illustrating how the divergence fluctuates between positive and negative values , with its absolute value growing exponentially with time . More technical details on this construction can be found in Section 3 . From initialization stability to uniform stability : Finally , we employ a simple reduction to translate our results regarding initialization stability to relate to uniform stability in the context of empirical risk optimization , where one uses ( full-batch ) NAG to minimize the ( convex , smooth ) empirical risk induced by a sample ( of = examples . Concretely , we show that by replacing a single example in ( , we can arrive at a scenario where after one step of NAG on the original and modified samples the respective iterates are ² = Θ ( 1/= ) away from each other , whereas in the remaining steps both empirical risks simulate our bad function from before . Thus , we again end up with an exponential increase in the divergence between the two executions , that leads to a similar increase in the algorithmic ( uniform ) stability : the latter becomes as large as Ω ( 1 ) after merely ) = $ ( log = ) steps of full-batch NAG . The formal details of this reduction can be found in Section 4 .", "role": "user" }, { "content": "It is very interesting and attractive to investigate the algorithmic instabilities for the famous Nesterov accelerated gradient descent method. This concepts are very related to ordinary differential equation, such as initialization stability corresponding to continuous dependence of solution about initialization for long time. The uniform stability is for the objective function. The authors tries to use the so-called uniform stability to describe the NAG under noise cannot obtain the desired result. The new concept --- uniform stability is a novel point.", "role": "assistant" } ]
science.qasper_abstractive_qa
Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines. Document Content: Abstractive Summarization for Low Resource Data using Domain Transfer and Data Synthesis Training abstractive summarization models typically requires large amounts of data, which can be a limitation for many domains. In this paper we explore using domain transfer and data synthesis to improve the performance of recent abstractive summarization methods when applied to small corpora of student reflections. First, we explored whether tuning state of the art model trained on newspaper data could boost performance on student reflection data. Evaluations demonstrated that summaries produced by the tuned model achieved higher ROUGE scores compared to model trained on just student reflection data or just newspaper data. The tuned model also achieved higher scores compared to extractive summarization baselines, and additionally was judged to produce more coherent and readable summaries in human evaluations. Second, we explored whether synthesizing summaries of student data could additionally boost performance. We proposed a template-based model to synthesize new data, which when incorporated into training further increased ROUGE scores. Finally, we showed that combining data synthesis with domain transfer achieved higher ROUGE scores compared to only using one of the two approaches. Reflection Summarization Dataset Student reflections are comments provided by students in response to a set of instructor prompts. The prompts are directed towards gathering students' feedback on course material. Student reflections are collected directly following each of a set of classroom lectures over a semester. In this paper, the set of reflections for each prompt in each lecture is considered a student reflection document. The objective of our work is to provide a comprehensive and meaningful abstractive summary of each student reflection document. Our dataset consists of documents and summaries from four course instantiations: ENGR (Introduction to Materials Science and Engineering), Stat2015 and Stat2016 (Statistics for Industrial Engineers, taught in 2015 and 2016, respectively), and CS (Data Structures in Computer Science). All reflections were collected in response to two pedagogically-motivated prompts BIBREF16: “Point of Interest (POI): Describe what you found most interesting in today's class” and “Muddiest Point (MP): Describe what was confusing or needed more detail.” For each reflection document, at least one human (either a TA or domain expert) created summaries. Table TABREF4 shows example reference summary produced by one annotator for the CS course. Table TABREF5 summarizes the dataset in terms of number of lectures, number of prompts per lecture, average number of reflections per prompt, and number of abstractive reference summaries for each set of reflections. Experiments Our experimental designs address the following hypotheses: Hypothesis 1 (H1) : Training complex abstractive models with limited in-domain or large quantities of out-of-domain data won't be enough to outperform extractive baselines. Hypothesis 2 (H2) : Domain transfer helps abstractive models even if in-domain and out-of-domain data are very different and the amount of in-domain data is very small. Hypothesis 3 (H3) : Enriching abstractive training data with synthetic data helps overcome in-domain data scarcity. 
Hypothesis 4 (H4) : The proposed template-based synthesis model outperforms a simple word replacement model. Hypothesis 5 (H5) : Combining domain transfer with data synthesis outperforms using each approach on its own. Hypothesis 6 (H6) : The synthesis model can be extended to perform reflection summarization directly. Extractive Baselines (for testing H1). While BIBREF0 used Lead-3 as an extractive baseline, in our data sentence order doesn't matter as reflections are independent. We thus use a similar in concept baseline: randomly select N reflections. Since the baseline is random we report the average result of 100 runs. Following BIBREF5, we compare results to MEAD BIBREF20 and to BIBREF5's extractive phrase-based model. Since these models extracted 5 phrases as extractive summary, we use N=5 for our three extractive baselines. Additionally we compare to running only the extractive part of Fast-RL. Domain Transfer (for testing H2, H5). To observe the impact of using out-of-domain (news) data for pretraining to compensate for low resource in-domain (reflection) data, we train 3 variants of PG-net: model training on CNN/DM; model training on reflections; and model training on CNN/DM then tuning using reflections. Table TABREF11 shows example summaries generated by the three variants of PG-net for a CS document. For all experiments where reflections are used for training/tuning, we train using a leave one course out approach (i.e, in each fold, three courses are used for training and the remaining course for testing). If the experiment involves tuning a combined dictionary of CNN/DM and reflections is used to avoid domain mismatch. To tune model parameters, the best number of steps for training, the learning rate, etc., a randomly selected 50% of the training data is used for validation. We choose the parameters that maximize ROUGE scores over this validation set. To implement PG-net we use OpenNMT BIBREF21 with the original set of parameters. The out-of-domain model is trained for 100k steps using the CNN/DM dataset. Following base model training, we tune the model by training it using student reflections. The tuning is done by lowering the LR from 0.15 to 0.1 and training the model for additional 500 steps. The in-domain model is trained only using reflections. We use the same model architecture as above and train the model for 20k steps using adagrad and LR of 0.15. Synthesis Baseline (for testing H3, H4). Following BIBREF12, we developed a data synthesis baseline using word replacement via WordNet. The baseline iterates over all words in a summary. If word $X$ has $N$ synonyms in WordNet, the model creates $N$ new versions of the summary and corresponding reflections by replacing the word $X$ with each of the $N$ synonyms. Template Synthesis Model (for testing H4, H5). To synthesize summaries, we use the same leave one course out approach. For each course, we use the data from the other three courses to train the rewriting module and tune the scoring language model. We can also use the summaries from CNN/DM data as additional samples to further train the rewriting module. We then start synthesizing data using that training data as input. First templates are constructed. The templates are then clustered into 8 clusters. We decided to use 8 to avoid clustering templates from POI with MP, as the templates from both prompts would contain very different supporting words. We also wanted to avoid a high level of dissimilarity within each cluster, and allow some diversity. 
Following the clustering, the rewriting model produces candidate summaries for each human summary. The rewriting model is another PG-net with the same exact parameters. After producing the candidate summaries, a language model is used to score them. The language model is a single layer LSTM language model trained on 36K sentences from Wikipedia and fine tuned using student reflections. In this work we decided to pick only the highest 3 scored candidate summaries as synthetic data, to avoid adding ill-formed summaries to the training data. Since we are adding $N$ synthetic summaries for each set of reflections, that means we are essentially duplicating the size of our original reflection training data by $N$, which is 3 in our case. Table TABREF11 shows a human summary, the keywords extracted, then the output of injecting keywords in a different template using rewriting. Template-based Summarization (for testing H6). While the proposed template-based model was intended for data synthesis, with minor modification it can be adapted for summarization itself. Because the modifications introduce few parameters, the model is suitable for small datasets. Recall that for data synthesis, the input to the template method is a summary. Since for summarization the input instead is a set of reflections, we perform keyword extraction over the set of reflections. We then add an extra logistic regression classifier that uses the set of reflections as input and predicts a cluster of templates constructed from other courses. Using the keywords and the predicted cluster of templates, we use the same rewriting model to produce candidate summaries. The last step in the pipeline is scoring. In data synthesis, a reference summary is used for scoring; however, in summarization we don't have such a reference. To score the candidate summaries, the model only uses the language model and produces the candidate with the highest score. Results ROUGE Evaluation Results. Table TABREF13 presents summarization performance results for the 4 extractive baselines, for the original and proposed variants of PG-net, and finally for template-summarization. Following BIBREF0, performance is evaluated using ROUGE (1, 2, and $L$) BIBREF22 on F1. The motivation for using domain transfer and data synthesis is our hypothesis (H1). Table TABREF13 supports this hypothesis. All ROUGE scores for PG-net that outperform all extractive baselines (in italics) involve tuning and/or use of synthesised data, except for one R-1 (row 18). As for our second hypothesis (H2), table TABREF13 shows that it is a valid one. For PG-net, comparing the CNN/DM out-of-domain and Student Reflection in-domain results in rows (5 and 6) and (17 and 18) with their corresponding tuned results in rows 9 and 21, we see that fine tuning improves R-1, R-2, and R-$L$ for all courses (rows 5, 6, 9 and 17, 18, 21). Qualitatively, the examples presented in Table TABREF11 clearly show that tuning yields a more coherent and relevant summary. Over all courses, the tuned version of PG-net consistently outperforms the best baseline result for each metric (rows 9 vs. 1, 2, 3, 4 and 21 vs. 13, 14, 15, 16) except for R-2 in Stat2016. To validate our next set of hypothesises (H3, H4. H5), we use the synthesized data in two settings: either using it for training (rows 7, 8 and 19, 20) or tuning (rows 10, 11 and 22, 23). 
Table TABREF13 supports H4 by showing that the proposed synthesis model outperforms the WordNet baseline in training (rows 7, 8 and 19, 20) except Stat2016, and tuning (10, 11 and 22, 23) over all courses. It also shows that while adding synthetic data from the baseline is not always helpful, adding synthetic data from the template model helps to improve both the training and the tuning process. In both CS and ENGR courses, tuning with synthetic data enhances all ROUGE scores compared to tuning with only the original data. (rows 9 and 11). As for Stat2015, R-1 and R-$L$ improved, while R-2 decreased. For Stat2016, R-2 and R-$L$ improved, and R-1 decreased (rows 21 and 23). Training with both student reflection data and synthetic data compared to training with only student reflection data yields similar improvements, supporting H3 (rows 6, 8 and 18, 20). While the increase in ROUGE scores is small, our results show that enriching training data with synthetic data can benefit both the training and tuning of other models. In general, the best results are obtained when using data synthesis for both training and tuning (rows 11 and 23), supporting H5. Finally, while the goal of our template model was to synthesize data, using it for summarization is surprisingly competitive, supporting H6. We believe that training the model with little data is doable due to the small number of parameters (logistic regression classifier only). While rows 12 and 24 are never the best results, they are close to the best involving tuning. This encourages us to enhance our template model and explore templates not so tailored to our data. Human Evaluation Results. While automated evaluation metrics like ROUGE measure lexical similarity between machine and human summaries, humans can better measure how coherent and readable a summary is. Our evaluation study investigates whether tuning the PG-net model increases summary coherence, by asking evaluators to select which of three summaries for the same document they like most: the PG-net model trained on CNN/DM; the model trained on student reflections; and finally the model trained on CNN/DM and tuned on student reflections. 20 evaluators were recruited from our institution and asked to each perform 20 annotations. Summaries are presented to evaluators in random order. Evaluators are then asked to select the summary they feel to be most readable and coherent. Unlike ROUGE, which measures the coverage of a generated summary relative to a reference summary, our evaluators don't read the reflections or reference summary. They choose the summary that is most coherent and readable, regardless of the source of the summary. For both courses, the majority of selected summaries were produced by the tuned model (49% for CS and 41% for Stat2015), compared to (31% for CS and 30.9% for Stat2015) for CNN/DM model, and (19.7% for CS and 28.5% for Stat2015) for student reflections model. These results again suggest that domain transfer can remedy the size of in-domain data and improve performance. Conclusions and Future Work We explored improving the performance of neural abstractive summarizers when applied to the low resource domain of student reflections using three approaches: domain transfer, data synthesis and the combination of both. For domain transfer, state of the art abstractive summarization model was pretrained using out-of-domain data (CNN/DM), then tuned using in-domain data (student reflections). 
The process of tuning improved ROUGE scores on the student reflection data, and at the same time produced more readable summaries. To incorporate synthetic data, we proposed a new template based synthesis model to synthesize new summaries. We showed that enriching the training data with this synthesized data can further increase the benefits of using domain transfer / tuning to increase ROUGE scores. We additionally showed that the proposed synthesis model outperformed a word replacement synthesis baseline. Future plans include trying domain adaptation, enhancing the synthesising process by using other models, further exploring template-based methods, and extending the analysis of the synthesis model to cover other types of data like reviews and opinions. Acknowledgments The research reported here was supported, in whole or in part, by the institute of Education Sciences, U.S. Department of Education, through Grant R305A180477 to the University of Pittsburgh. The opinons expressed are those of the authors and do not represent the views of the institute or the U.S. Department of Education Table 3: Summaries generated by the three variants of PGnet for the same CS reflection document, and synthetic sample generated by the proposed template model. Table 4: ROUGE results. Italics indicates outperforms baselines. Boldface indicates best result over all models. Underlining Query: Who were the human evaluators used? JSON Response:
{"answer": "20 annotators from the authors\u0027 institution", "evidence": ["Human Evaluation Results. While automated evaluation metrics like ROUGE measure lexical similarity between machine and human summaries, humans can better measure how coherent and readable a summary is. Our evaluation study investigates whether tuning the PG-net model increases summary coherence, by asking evaluators to select which of three summaries for the same document they like most: the PG-net model trained on CNN/DM; the model trained on student reflections; and finally the model trained on CNN/DM and tuned on student reflections. 20 evaluators were recruited from our institution and asked to each perform 20 annotations. Summaries are presented to evaluators in random order. Evaluators are then asked to select the summary they feel to be most readable and coherent. Unlike ROUGE, which measures the coverage of a generated summary relative to a reference summary, our evaluators don\u0027t read the reflections or reference summary. They choose the summary that is most coherent and readable, regardless of the source of the summary. For both courses, the majority of selected summaries were produced by the tuned model (49% for CS and 41% for Stat2015), compared to (31% for CS and 30.9% for Stat2015) for CNN/DM model, and (19.7% for CS and 28.5% for Stat2015) for student reflections model. These results again suggest that domain transfer can remedy the size of in-domain data and improve performance."]}
science.qasper_abstractive_qa.10.0
[ { "content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nAbstractive Summarization for Low Resource Data using Domain Transfer and Data Synthesis\n\nTraining abstractive summarization models typically requires large amounts of data, which can be a limitation for many domains. In this paper we explore using domain transfer and data synthesis to improve the performance of recent abstractive summarization methods when applied to small corpora of student reflections. First, we explored whether tuning state of the art model trained on newspaper data could boost performance on student reflection data. Evaluations demonstrated that summaries produced by the tuned model achieved higher ROUGE scores compared to model trained on just student reflection data or just newspaper data. The tuned model also achieved higher scores compared to extractive summarization baselines, and additionally was judged to produce more coherent and readable summaries in human evaluations. Second, we explored whether synthesizing summaries of student data could additionally boost performance. We proposed a template-based model to synthesize new data, which when incorporated into training further increased ROUGE scores. Finally, we showed that combining data synthesis with domain transfer achieved higher ROUGE scores compared to only using one of the two approaches.\n\nReflection Summarization Dataset\nStudent reflections are comments provided by students in response to a set of instructor prompts. The prompts are directed towards gathering students' feedback on course material. Student reflections are collected directly following each of a set of classroom lectures over a semester. In this paper, the set of reflections for each prompt in each lecture is considered a student reflection document. The objective of our work is to provide a comprehensive and meaningful abstractive summary of each student reflection document. Our dataset consists of documents and summaries from four course instantiations: ENGR (Introduction to Materials Science and Engineering), Stat2015 and Stat2016 (Statistics for Industrial Engineers, taught in 2015 and 2016, respectively), and CS (Data Structures in Computer Science). All reflections were collected in response to two pedagogically-motivated prompts BIBREF16: “Point of Interest (POI): Describe what you found most interesting in today's class” and “Muddiest Point (MP): Describe what was confusing or needed more detail.”\nFor each reflection document, at least one human (either a TA or domain expert) created summaries. Table TABREF4 shows example reference summary produced by one annotator for the CS course. 
Table TABREF5 summarizes the dataset in terms of number of lectures, number of prompts per lecture, average number of reflections per prompt, and number of abstractive reference summaries for each set of reflections.\n\nExperiments\nOur experimental designs address the following hypotheses:\nHypothesis 1 (H1) : Training complex abstractive models with limited in-domain or large quantities of out-of-domain data won't be enough to outperform extractive baselines.\nHypothesis 2 (H2) : Domain transfer helps abstractive models even if in-domain and out-of-domain data are very different and the amount of in-domain data is very small.\nHypothesis 3 (H3) : Enriching abstractive training data with synthetic data helps overcome in-domain data scarcity.\nHypothesis 4 (H4) : The proposed template-based synthesis model outperforms a simple word replacement model.\nHypothesis 5 (H5) : Combining domain transfer with data synthesis outperforms using each approach on its own.\nHypothesis 6 (H6) : The synthesis model can be extended to perform reflection summarization directly.\nExtractive Baselines (for testing H1). While BIBREF0 used Lead-3 as an extractive baseline, in our data sentence order doesn't matter as reflections are independent. We thus use a similar in concept baseline: randomly select N reflections. Since the baseline is random we report the average result of 100 runs. Following BIBREF5, we compare results to MEAD BIBREF20 and to BIBREF5's extractive phrase-based model. Since these models extracted 5 phrases as extractive summary, we use N=5 for our three extractive baselines. Additionally we compare to running only the extractive part of Fast-RL.\nDomain Transfer (for testing H2, H5).\nTo observe the impact of using out-of-domain (news) data for pretraining to compensate for low resource in-domain (reflection) data, we train 3 variants of PG-net: model training on CNN/DM; model training on reflections; and model training on CNN/DM then tuning using reflections. Table TABREF11 shows example summaries generated by the three variants of PG-net for a CS document. For all experiments where reflections are used for training/tuning, we train using a leave one course out approach (i.e, in each fold, three courses are used for training and the remaining course for testing). If the experiment involves tuning a combined dictionary of CNN/DM and reflections is used to avoid domain mismatch. To tune model parameters, the best number of steps for training, the learning rate, etc., a randomly selected 50% of the training data is used for validation. We choose the parameters that maximize ROUGE scores over this validation set.\nTo implement PG-net we use OpenNMT BIBREF21 with the original set of parameters. The out-of-domain model is trained for 100k steps using the CNN/DM dataset. Following base model training, we tune the model by training it using student reflections. The tuning is done by lowering the LR from 0.15 to 0.1 and training the model for additional 500 steps. The in-domain model is trained only using reflections. We use the same model architecture as above and train the model for 20k steps using adagrad and LR of 0.15.\nSynthesis Baseline (for testing H3, H4). Following BIBREF12, we developed a data synthesis baseline using word replacement via WordNet. The baseline iterates over all words in a summary. 
If word $X$ has $N$ synonyms in WordNet, the model creates $N$ new versions of the summary and corresponding reflections by replacing the word $X$ with each of the $N$ synonyms.\nTemplate Synthesis Model (for testing H4, H5). To synthesize summaries, we use the same leave one course out approach. For each course, we use the data from the other three courses to train the rewriting module and tune the scoring language model. We can also use the summaries from CNN/DM data as additional samples to further train the rewriting module. We then start synthesizing data using that training data as input. First templates are constructed. The templates are then clustered into 8 clusters. We decided to use 8 to avoid clustering templates from POI with MP, as the templates from both prompts would contain very different supporting words. We also wanted to avoid a high level of dissimilarity within each cluster, and allow some diversity. Following the clustering, the rewriting model produces candidate summaries for each human summary. The rewriting model is another PG-net with the same exact parameters. After producing the candidate summaries, a language model is used to score them. The language model is a single layer LSTM language model trained on 36K sentences from Wikipedia and fine tuned using student reflections. In this work we decided to pick only the highest 3 scored candidate summaries as synthetic data, to avoid adding ill-formed summaries to the training data. Since we are adding $N$ synthetic summaries for each set of reflections, that means we are essentially duplicating the size of our original reflection training data by $N$, which is 3 in our case. Table TABREF11 shows a human summary, the keywords extracted, then the output of injecting keywords in a different template using rewriting.\nTemplate-based Summarization (for testing H6). While the proposed template-based model was intended for data synthesis, with minor modification it can be adapted for summarization itself. Because the modifications introduce few parameters, the model is suitable for small datasets. Recall that for data synthesis, the input to the template method is a summary. Since for summarization the input instead is a set of reflections, we perform keyword extraction over the set of reflections. We then add an extra logistic regression classifier that uses the set of reflections as input and predicts a cluster of templates constructed from other courses. Using the keywords and the predicted cluster of templates, we use the same rewriting model to produce candidate summaries. The last step in the pipeline is scoring. In data synthesis, a reference summary is used for scoring; however, in summarization we don't have such a reference. To score the candidate summaries, the model only uses the language model and produces the candidate with the highest score.\n\nResults\nROUGE Evaluation Results.\nTable TABREF13 presents summarization performance results for the 4 extractive baselines, for the original and proposed variants of PG-net, and finally for template-summarization. Following BIBREF0, performance is evaluated using ROUGE (1, 2, and $L$) BIBREF22 on F1. The motivation for using domain transfer and data synthesis is our hypothesis (H1). Table TABREF13 supports this hypothesis. All ROUGE scores for PG-net that outperform all extractive baselines (in italics) involve tuning and/or use of synthesised data, except for one R-1 (row 18).\nAs for our second hypothesis (H2), table TABREF13 shows that it is a valid one. 
For PG-net, comparing the CNN/DM out-of-domain and Student Reflection in-domain results in rows (5 and 6) and (17 and 18) with their corresponding tuned results in rows 9 and 21, we see that fine tuning improves R-1, R-2, and R-$L$ for all courses (rows 5, 6, 9 and 17, 18, 21). Qualitatively, the examples presented in Table TABREF11 clearly show that tuning yields a more coherent and relevant summary. Over all courses, the tuned version of PG-net consistently outperforms the best baseline result for each metric (rows 9 vs. 1, 2, 3, 4 and 21 vs. 13, 14, 15, 16) except for R-2 in Stat2016.\nTo validate our next set of hypothesises (H3, H4. H5), we use the synthesized data in two settings: either using it for training (rows 7, 8 and 19, 20) or tuning (rows 10, 11 and 22, 23). Table TABREF13 supports H4 by showing that the proposed synthesis model outperforms the WordNet baseline in training (rows 7, 8 and 19, 20) except Stat2016, and tuning (10, 11 and 22, 23) over all courses. It also shows that while adding synthetic data from the baseline is not always helpful, adding synthetic data from the template model helps to improve both the training and the tuning process. In both CS and ENGR courses, tuning with synthetic data enhances all ROUGE scores compared to tuning with only the original data. (rows 9 and 11). As for Stat2015, R-1 and R-$L$ improved, while R-2 decreased. For Stat2016, R-2 and R-$L$ improved, and R-1 decreased (rows 21 and 23). Training with both student reflection data and synthetic data compared to training with only student reflection data yields similar improvements, supporting H3 (rows 6, 8 and 18, 20). While the increase in ROUGE scores is small, our results show that enriching training data with synthetic data can benefit both the training and tuning of other models. In general, the best results are obtained when using data synthesis for both training and tuning (rows 11 and 23), supporting H5.\nFinally, while the goal of our template model was to synthesize data, using it for summarization is surprisingly competitive, supporting H6. We believe that training the model with little data is doable due to the small number of parameters (logistic regression classifier only). While rows 12 and 24 are never the best results, they are close to the best involving tuning. This encourages us to enhance our template model and explore templates not so tailored to our data.\nHuman Evaluation Results. While automated evaluation metrics like ROUGE measure lexical similarity between machine and human summaries, humans can better measure how coherent and readable a summary is. Our evaluation study investigates whether tuning the PG-net model increases summary coherence, by asking evaluators to select which of three summaries for the same document they like most: the PG-net model trained on CNN/DM; the model trained on student reflections; and finally the model trained on CNN/DM and tuned on student reflections. 20 evaluators were recruited from our institution and asked to each perform 20 annotations. Summaries are presented to evaluators in random order. Evaluators are then asked to select the summary they feel to be most readable and coherent. Unlike ROUGE, which measures the coverage of a generated summary relative to a reference summary, our evaluators don't read the reflections or reference summary. They choose the summary that is most coherent and readable, regardless of the source of the summary. 
For both courses, the majority of selected summaries were produced by the tuned model (49% for CS and 41% for Stat2015), compared to (31% for CS and 30.9% for Stat2015) for CNN/DM model, and (19.7% for CS and 28.5% for Stat2015) for student reflections model. These results again suggest that domain transfer can remedy the size of in-domain data and improve performance.\n\nConclusions and Future Work\nWe explored improving the performance of neural abstractive summarizers when applied to the low resource domain of student reflections using three approaches: domain transfer, data synthesis and the combination of both. For domain transfer, state of the art abstractive summarization model was pretrained using out-of-domain data (CNN/DM), then tuned using in-domain data (student reflections). The process of tuning improved ROUGE scores on the student reflection data, and at the same time produced more readable summaries. To incorporate synthetic data, we proposed a new template based synthesis model to synthesize new summaries. We showed that enriching the training data with this synthesized data can further increase the benefits of using domain transfer / tuning to increase ROUGE scores. We additionally showed that the proposed synthesis model outperformed a word replacement synthesis baseline. Future plans include trying domain adaptation, enhancing the synthesising process by using other models, further exploring template-based methods, and extending the analysis of the synthesis model to cover other types of data like reviews and opinions.\n\nAcknowledgments\nThe research reported here was supported, in whole or in part, by the institute of Education Sciences, U.S. Department of Education, through Grant R305A180477 to the University of Pittsburgh. The opinons expressed are those of the authors and do not represent the views of the institute or the U.S. Department of Education\n\nTable 3: Summaries generated by the three variants of PGnet for the same CS reflection document, and synthetic sample generated by the proposed template model.\n\nTable 4: ROUGE results. Italics indicates outperforms baselines. Boldface indicates best result over all models. Underlining\n\nQuery: Who were the human evaluators used?\n\nJSON Response:", "role": "user" }, { "content": "{\"answer\": \"20 annotatos from author\\u0027s institution\", \"evidence\": [\"Human Evaluation Results. While automated evaluation metrics like ROUGE measure lexical similarity between machine and human summaries, humans can better measure how coherent and readable a summary is. Our evaluation study investigates whether tuning the PG-net model increases summary coherence, by asking evaluators to select which of three summaries for the same document they like most: the PG-net model trained on CNN/DM; the model trained on student reflections; and finally the model trained on CNN/DM and tuned on student reflections. 20 evaluators were recruited from our institution and asked to each perform 20 annotations. Summaries are presented to evaluators in random order. Evaluators are then asked to select the summary they feel to be most readable and coherent. Unlike ROUGE, which measures the coverage of a generated summary relative to a reference summary, our evaluators don\\u0027t read the reflections or reference summary. They choose the summary that is most coherent and readable, regardless of the source of the summary. 
For both courses, the majority of selected summaries were produced by the tuned model (49% for CS and 41% for Stat2015), compared to (31% for CS and 30.9% for Stat2015) for CNN/DM model, and (19.7% for CS and 28.5% for Stat2015) for student reflections model. These results again suggest that domain transfer can remedy the size of in-domain data and improve performance.\"]}", "role": "assistant" } ]
science.biored_ner
Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]. Ensure the output is strictly the JSON object. Source Text: Analysis of -1082 IL-10 gene polymorphism in Iranian patients with generalized aggressive periodontitis. BACKGROUND: Periodontitis is a multifactorial disease and its severe forms, such as aggressive periodontitis, are suggested to have a genetic basis. Among the genetic factors, polymorphisms in cytokine genes have recently been described in susceptibility to periodontitis. IL-10 is a multi-functional cytokine thought to play a role in the pathogenesis of periodontitis. A substitution G/A polymorphism in the promoter region of the IL-10 gene at position -1082 has been associated with different amounts of IL-10 production. The aim of the present study was to investigate the possible links between -1082(G/A) polymorphism of the IL-10 gene and the generalized form of aggressive periodontitis. MATERIAL/METHODS: This study included 52 Iranian Khorasanian (north-east province of Iran) subjects suffering from generalized aggressive periodontitis referred to the Periodontology Department of Mashhad Dental School. They were compared to 61 age and sex-matched healthy controls of the same race. DNA was isolated from peripheral blood cells and genotyping was performed by means of the amplification refractory mutation system polymerase chain reaction (ARMS-PCR) method. Data were analyzed using the chi-squared test. RESULTS: There was no marked difference in genotype frequencies between the controls and generalized aggressive periodontitis patients (p=0.585). Moreover, no association between patients and normal subjects was found in their allele frequency (p=0.329). CONCLUSIONS: We conclude that the polymorphic nucleotide A at position -1082 of the IL-10 gene is not associated with generalized aggressive periodontitis in the Iranian population.
{"CellLine": [], "Chemical": [], "Disease": ["periodontitis", "Periodontitis", "aggressive periodontitis"], "Gene": ["IL-10"], "Species": ["patients"], "Variant": ["-1082(G/A)", "A at position -1082", "G/A"]}
science.biored_ner.28
[ { "content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nAnalysis of -1082 IL-10 gene polymorphism in Iranian patients with generalized aggressive periodontitis. BACKGROUND: Periodontitis is a multifactorial disease and its severe forms, such as aggressive periodontitis, are suggested to have a genetic basis. Among the genetic factors, polymorphisms in cytokine genes have recently been described in susceptibility to periodontitis. IL-10 is a multi-functional cytokine thought to play a role in the pathogenesis of periodontitis. A substitution G/A polymorphism in the promoter region of the IL-10 gene at position -1082 has been associated with different amounts of IL-10 production. The aim of the present study was to investigate the possible links between -1082(G/A) polymorphism of the IL-10 gene and the generalized form of aggressive periodontitis. MATERIAL/METHODS: This study included 52 Iranian Khorasanian (north-east province of Iran) subjects suffering from generalized aggressive periodontitis referred to the Periodontology Department of Mashhad Dental School. They were compared to 61 age and sex-matched healthy controls of the same race. DNA was isolated from peripheral blood cells and genotyping was performed by means of the amplification refractory mutation system polymerase chain reaction (ARMS-PCR) method. Data were analyzed using the chi-squared test. RESULTS: There was no marked difference in genotype frequencies between the controls and generalized aggressive periodontitis patients (p=0.585). Moreover, no association between patients and normal subjects was found in their allele frequency (p=0.329). CONCLUSIONS: We conclude that the polymorphic nucleotide A at position -1082 of the IL-10 gene is not associated with generalized aggressive periodontitis in the Iranian population.", "role": "user" }, { "content": "{\"CellLine\": [], \"Chemical\": [], \"Disease\": [\"periodontitis\", \"Periodontitis\", \"aggressive periodontitis\"], \"Gene\": [\"IL-10\"], \"Species\": [\"patients\"], \"Variant\": [\"-1082(G/A)\", \"A at position -1082\", \"G/A\"]}", "role": "assistant" } ]
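Outputs in this NER format are easy to sanity-check mechanically. The Python sketch below is a rough illustration: the six category names come from the prompt above, while the helper name and the assumption that every mention should appear verbatim in the abstract are ours.

    import json

    CATEGORIES = {"CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"}

    def check_biored_output(abstract, output_json):
        # Collect human-readable problems; an empty list means the prediction passes.
        problems = []
        prediction = json.loads(output_json)
        if set(prediction) != CATEGORIES:
            problems.append("keys do not match the six required categories")
        for category, mentions in prediction.items():
            for mention in mentions:
                if mention not in abstract:
                    problems.append(category + " mention not found verbatim: " + mention)
        return problems

    # Example usage against the record above (abstract_text holds the IL-10 abstract):
    # check_biored_output(abstract_text, '{"CellLine": [], "Variant": ["-1082(G/A)"], ...}')

Whether partial or case-insensitive matches should also count is a design choice the prompt leaves open; the strict verbatim check above is the most conservative reading.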
science.bioasq_list_qa
Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array. Context: d extracellular matrix (ECM) proteins in the arterial wall apolipoprotein A-I (apoA-I) Among the proteins with increased abundance were prominent extracellular matrix proteins such as biglycan and lumican, whereas cytoskeletal markers for contractile smooth muscle cells (SMCs) were decreased. , a biomarker panel of FBLN1C, APOE and CDH13 was identifie Question: List proteins that are contained in atherosclerotic plaques?
["extracellular matrix proteins", "biglycan", "Lumican", "Apolipoprotein A-I"]
science.bioasq_list_qa.795
[ { "content": "Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.\n\nContext: d extracellular matrix (ECM) proteins in the arterial wall\napolipoprotein A-I (apoA-I)\nAmong the proteins with increased abundance were prominent extracellular matrix proteins such as biglycan and lumican, whereas cytoskeletal markers for contractile smooth muscle cells (SMCs) were decreased.\n, a biomarker panel of FBLN1C, APOE and CDH13 was identifie\n\nQuestion: List proteins that are contained in atherosclerotic plaques?", "role": "user" }, { "content": "[\"extracellular matrix proteins\", \"biglycan\", \"Lumican\", \"Apolipoprotein A-I\"]", "role": "assistant" } ]
science.discomat_te
| Sample code | P2O5 (mol%) | BaO(mol%) | CaF2 (mol%) | Tg (degC, +-3) | Tx (degC, +-3) | Tm (degC, +-3) | Tx -Tg (degC) | Kgl | | BaPF0 | 50 | 50 | 0 | 426 | 539 | 841 | 113 | 0.374 | | BaPF2 | 50 | 48 | 2 | 428 | 543 | 831 | 115 | 0.399 | | BaPF4 | 50 | 46 | 4 | 433 | 547 | 812 | 114 | 0.430 | | BaPF6 | 50 | 44 | 6 | 440 | 563 | 824 | 123 | 0.471 | | BaPF8 | 50 | 42 | 8 | 440 | 566 | 805 | 126 | 0.527 | | BaPF10 | 50 | 40 | 10 | 444 | 575 | 801 | 131 | 0.580 | Caption: Batch composition and differential thermal analysis results of BaPF glasses. Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance: {"value": "xx", "type": "Other"} {"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"} Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line. Output:
{"value": "50", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "BaPF0"} {"value": "50", "type": "Glass_Compound_Amount", "constituent": "BaO", "unit": "mol", "material": "BaPF0"} {"value": "0", "type": "Other"} {"value": "426", "type": "Other"} {"value": "539", "type": "Other"} {"value": "841", "type": "Other"} {"value": "113", "type": "Other"} {"value": "0.374", "type": "Other"} {"value": "50", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "BaPF2"} {"value": "48", "type": "Glass_Compound_Amount", "constituent": "BaO", "unit": "mol", "material": "BaPF2"} {"value": "2", "type": "Glass_Compound_Amount", "constituent": "CaF2", "unit": "mol", "material": "BaPF2"} {"value": "428", "type": "Other"} {"value": "543", "type": "Other"} {"value": "831", "type": "Other"} {"value": "115", "type": "Other"} {"value": "0.399", "type": "Other"} {"value": "50", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "BaPF4"} {"value": "46", "type": "Glass_Compound_Amount", "constituent": "BaO", "unit": "mol", "material": "BaPF4"} {"value": "4", "type": "Glass_Compound_Amount", "constituent": "CaF2", "unit": "mol", "material": "BaPF4"} {"value": "433", "type": "Other"} {"value": "547", "type": "Other"} {"value": "812", "type": "Other"} {"value": "114", "type": "Other"} {"value": "0.430", "type": "Other"} {"value": "50", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "BaPF6"} {"value": "44", "type": "Glass_Compound_Amount", "constituent": "BaO", "unit": "mol", "material": "BaPF6"} {"value": "6", "type": "Glass_Compound_Amount", "constituent": "CaF2", "unit": "mol", "material": "BaPF6"} {"value": "440", "type": "Other"} {"value": "563", "type": "Other"} {"value": "824", "type": "Other"} {"value": "123", "type": "Other"} {"value": "0.471", "type": "Other"} {"value": "50", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "BaPF8"} {"value": "42", "type": "Glass_Compound_Amount", "constituent": "BaO", "unit": "mol", "material": "BaPF8"} {"value": "8", "type": "Glass_Compound_Amount", "constituent": "CaF2", "unit": "mol", "material": "BaPF8"} {"value": "440", "type": "Other"} {"value": "566", "type": "Other"} {"value": "805", "type": "Other"} {"value": "126", "type": "Other"} {"value": "0.527", "type": "Other"} {"value": "50", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "BaPF10"} {"value": "40", "type": "Glass_Compound_Amount", "constituent": "BaO", "unit": "mol", "material": "BaPF10"} {"value": "10", "type": "Glass_Compound_Amount", "constituent": "CaF2", "unit": "mol", "material": "BaPF10"} {"value": "444", "type": "Other"} {"value": "575", "type": "Other"} {"value": "801", "type": "Other"} {"value": "131", "type": "Other"} {"value": "0.580", "type": "Other"}
science.discomat_te.371
[ { "content": "| Sample code | P2O5 (mol%) | BaO(mol%) | CaF2 (mol%) | Tg (degC, +-3) | Tx (degC, +-3) | Tm (degC, +-3) | Tx -Tg (degC) | Kgl |\n| BaPF0 | 50 | 50 | 0 | 426 | 539 | 841 | 113 | 0.374 |\n| BaPF2 | 50 | 48 | 2 | 428 | 543 | 831 | 115 | 0.399 |\n| BaPF4 | 50 | 46 | 4 | 433 | 547 | 812 | 114 | 0.430 |\n| BaPF6 | 50 | 44 | 6 | 440 | 563 | 824 | 123 | 0.471 |\n| BaPF8 | 50 | 42 | 8 | 440 | 566 | 805 | 126 | 0.527 |\n| BaPF10 | 50 | 40 | 10 | 444 | 575 | 801 | 131 | 0.580 |\n\nCaption: Batch composition and differential thermal analysis results of BaPF glasses.\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"50\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"BaPF0\"}\n{\"value\": \"50\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"BaO\", \"unit\": \"mol\", \"material\": \"BaPF0\"}\n{\"value\": \"0\", \"type\": \"Other\"}\n{\"value\": \"426\", \"type\": \"Other\"}\n{\"value\": \"539\", \"type\": \"Other\"}\n{\"value\": \"841\", \"type\": \"Other\"}\n{\"value\": \"113\", \"type\": \"Other\"}\n{\"value\": \"0.374\", \"type\": \"Other\"}\n{\"value\": \"50\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"BaPF2\"}\n{\"value\": \"48\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"BaO\", \"unit\": \"mol\", \"material\": \"BaPF2\"}\n{\"value\": \"2\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaF2\", \"unit\": \"mol\", \"material\": \"BaPF2\"}\n{\"value\": \"428\", \"type\": \"Other\"}\n{\"value\": \"543\", \"type\": \"Other\"}\n{\"value\": \"831\", \"type\": \"Other\"}\n{\"value\": \"115\", \"type\": \"Other\"}\n{\"value\": \"0.399\", \"type\": \"Other\"}\n{\"value\": \"50\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"BaPF4\"}\n{\"value\": \"46\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"BaO\", \"unit\": \"mol\", \"material\": \"BaPF4\"}\n{\"value\": \"4\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaF2\", \"unit\": \"mol\", \"material\": \"BaPF4\"}\n{\"value\": \"433\", \"type\": \"Other\"}\n{\"value\": \"547\", \"type\": \"Other\"}\n{\"value\": \"812\", \"type\": \"Other\"}\n{\"value\": \"114\", \"type\": \"Other\"}\n{\"value\": \"0.430\", \"type\": \"Other\"}\n{\"value\": \"50\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"BaPF6\"}\n{\"value\": \"44\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"BaO\", \"unit\": \"mol\", \"material\": \"BaPF6\"}\n{\"value\": \"6\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaF2\", \"unit\": \"mol\", \"material\": \"BaPF6\"}\n{\"value\": \"440\", \"type\": \"Other\"}\n{\"value\": \"563\", \"type\": \"Other\"}\n{\"value\": \"824\", \"type\": \"Other\"}\n{\"value\": \"123\", \"type\": \"Other\"}\n{\"value\": \"0.471\", \"type\": \"Other\"}\n{\"value\": \"50\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"BaPF8\"}\n{\"value\": 
\"42\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"BaO\", \"unit\": \"mol\", \"material\": \"BaPF8\"}\n{\"value\": \"8\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaF2\", \"unit\": \"mol\", \"material\": \"BaPF8\"}\n{\"value\": \"440\", \"type\": \"Other\"}\n{\"value\": \"566\", \"type\": \"Other\"}\n{\"value\": \"805\", \"type\": \"Other\"}\n{\"value\": \"126\", \"type\": \"Other\"}\n{\"value\": \"0.527\", \"type\": \"Other\"}\n{\"value\": \"50\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"BaPF10\"}\n{\"value\": \"40\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"BaO\", \"unit\": \"mol\", \"material\": \"BaPF10\"}\n{\"value\": \"10\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaF2\", \"unit\": \"mol\", \"material\": \"BaPF10\"}\n{\"value\": \"444\", \"type\": \"Other\"}\n{\"value\": \"575\", \"type\": \"Other\"}\n{\"value\": \"801\", \"type\": \"Other\"}\n{\"value\": \"131\", \"type\": \"Other\"}\n{\"value\": \"0.580\", \"type\": \"Other\"}", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT: Objectives were to evaluate effects of (1) repetitive arthrocentesis on biomarkers of inflammation (prostaglandin E2, PGE2) and aggrecan synthesis (chondroitin sulfate-846; CS) in synovial fluid (SF); (2) exercise and superoxide dismutase (SOD) supplementation on biomarkers of inflammation, antioxidant status, and aggrecan synthesis, in horses. Preliminary trial. Standardbreds underwent four arthrocentesis procedures within 48 h and exhibited elevated CS and no changes in PGE2. Exercise trial. this randomized crossover design used twelve Standardbred mares which received either treatment (3000 IU d−1 oral SOD powder) or placebo (cellulose powder) for 6 wks which culminated with them running a repeated sprint exercise test (RSET). Samples were collected before (PRE), during (PEAK), and following exercise (POST). Exercise resulted in increased (P < 0.05) antioxidant defenses including erythrocyte SOD, total glutathione, glutathione peroxidase, gene transcripts for interferon-gamma, interleukin-10, and interleukin-1β in blood, and decreased plasma nitric oxide. Exercise increased (P < 0.05) SF CS and adjusted-PGE2, and higher (P < 0.05) CS and PGE2 were found in hock versus carpus joints. No treatment effects were detected. Results suggest normal adaptive responses likely due to exercise-induced tissue microdamage and oxidative stress. Additional research is needed to identify benefit(s) of SOD supplementation in horses. Extracted Data:
[["exercise", "no exercise", "erythrocyte SOD, total glutathione, glutathione peroxidase, gene transcripts for interferon-gamma, interleukin-10, and interleukin-1\u03b2 in blood", "significantly increased", "Exercise resulted in increased (P \u003c 0.05) antioxidant defenses including erythrocyte SOD, total glutathione, glutathione peroxidase, gene transcripts for interferon-gamma, interleukin-10, and interleukin-1\u03b2 in blood, and decreased plasma nitric oxide."], ["exercise", "no exercise", "plasma nitric oxide", "significantly decreased", "Exercise resulted in increased (P \u003c 0.05) antioxidant defenses including erythrocyte SOD, total glutathione, glutathione peroxidase, gene transcripts for interferon-gamma, interleukin-10, and interleukin-1\u03b2 in blood, and decreased plasma nitric oxide."], ["exercise", "no exercise", "chondroitin sulfate-846 (CS) in synovial fluid (SF) and adjusted-PGE2, ", "significantly increased", "Exercise increased (P \u003c 0.05) SF CS and adjusted-PGE2, and higher (P \u003c 0.05) CS and PGE2 were found in hock versus carpus joints."], ["3000 IU d\u22121 oral superoxide dismutase (SOD) powder for 6 wks - treatment group (TRT)", "placebo (cellulose powder) for 6 wks", "erythrocyte SOD, total glutathione, glutathione peroxidase, gene transcripts for interferon-gamma, interleukin-10, and interleukin-1\u03b2 in blood, and decreased plasma nitric oxide.", "no significant difference", "There were no TRT effects detected for any parameter measured (P \u003e 0.05)."], ["3000 IU d\u22121 oral superoxide dismutase (SOD) powder for 6 wks - treatment group (TRT)", "placebo (cellulose powder) for 6 wks", "erythrocyte SOD, total glutathione, glutathione peroxidase, gene transcripts for interferon-gamma, interleukin-10, and interleukin-1\u03b2 in blood, and decreased plasma nitric oxide.", "no significant difference", "No treatment effects were detected."]]
science.evidence_inference.9
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT: Objectives were to evaluate effects of (1) repetitive arthrocentesis on biomarkers of inflammation (prostaglandin E2, PGE2) and aggrecan synthesis (chondroitin sulfate-846; CS) in synovial fluid (SF); (2) exercise and superoxide dismutase (SOD) supplementation on biomarkers of inflammation, antioxidant status, and aggrecan synthesis, in horses. Preliminary trial. Standardbreds underwent four arthrocentesis procedures within 48 h and exhibited elevated CS and no changes in PGE2. Exercise trial. this randomized crossover design used twelve Standardbred mares which received either treatment (3000 IU d−1 oral SOD powder) or placebo (cellulose powder) for 6 wks which culminated with them running a repeated sprint exercise test (RSET). Samples were collected before (PRE), during (PEAK), and following exercise (POST). Exercise resulted in increased (P < 0.05) antioxidant defenses including erythrocyte SOD, total glutathione, glutathione peroxidase, gene transcripts for interferon-gamma, interleukin-10, and interleukin-1β in blood, and decreased plasma nitric oxide. Exercise increased (P < 0.05) SF CS and adjusted-PGE2, and higher (P < 0.05) CS and PGE2 were found in hock versus carpus joints. No treatment effects were detected. Results suggest normal adaptive responses likely due to exercise-induced tissue microdamage and oxidative stress. Additional research is needed to identify benefit(s) of SOD supplementation in horses.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"exercise\", \"no exercise\", \"erythrocyte SOD, total glutathione, glutathione peroxidase, gene transcripts for interferon-gamma, interleukin-10, and interleukin-1? 
in blood\", \"significantly increased\", \"Exercise resulted in increased (P \\u003c 0.05) antioxidant defenses including erythrocyte SOD, total glutathione, glutathione peroxidase, gene transcripts for interferon-gamma, interleukin-10, and interleukin-1\\u03b2 in blood, and decreased plasma nitric oxide.\"], [\"exercise\", \"no exercise\", \"plasma nitric oxide\", \"significantly decreased\", \"Exercise resulted in increased (P \\u003c 0.05) antioxidant defenses including erythrocyte SOD, total glutathione, glutathione peroxidase, gene transcripts for interferon-gamma, interleukin-10, and interleukin-1\\u03b2 in blood, and decreased plasma nitric oxide.\"], [\"exercise\", \"no exercise\", \"chondroitin sulfate-846 (CS) in synovial fluid (SF) and adjusted-PGE2, \", \"significantly increased\", \"Exercise increased (P \\u003c 0.05) SF CS and adjusted-PGE2, and higher (P \\u003c 0.05) CS and PGE2 were found in hock versus carpus joints.\"], [\"3000 IU d?1 oral superoxide dismutase (SOD) powder for 6?wks - treatment group (TRT)\", \"placebo (cellulose powder) for 6?wks\", \"erythrocyte SOD, total glutathione, glutathione peroxidase, gene transcripts for interferon-gamma, interleukin-10, and interleukin-1? in blood, and decreased plasma nitric oxide.\", \"no significant difference\", \"There were no TRT effects detected for any parameter measured (P \\u003e 0.05).\"], [\"3000 IU d?1 oral superoxide dismutase (SOD) powder for 6?wks - treatment group (TRT)\", \"placebo (cellulose powder) for 6?wks\", \"erythrocyte SOD, total glutathione, glutathione peroxidase, gene transcripts for interferon-gamma, interleukin-10, and interleukin-1? in blood, and decreased plasma nitric oxide.\", \"no significant difference\", \"No treatment effects were detected.\"]]", "role": "assistant" } ]
science.biored_ner
Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]. Ensure the output is strictly the JSON object. Source Text: Cerebrospinal fluid penetration of high-dose daptomycin in suspected Staphylococcus aureus meningitis. OBJECTIVE: To report a case of methicillin-sensitive Staphylococcus aureus (MSSA) bacteremia with suspected MSSA meningitis treated with high-dose daptomycin assessed with concurrent serum and cerebrospinal fluid (CSF) concentrations. CASE SUMMARY: A 54-year-old male presented to the emergency department with generalized weakness and presumed health-care-associated pneumonia shown on chest radiograph. Treatment was empirically initiated with vancomycin, levofloxacin, and piperacillin/tazobactam. Blood cultures revealed S. aureus susceptible to oxacillin. Empiric antibiotic treatment was narrowed to nafcillin on day 4. On day 8, the patient developed acute renal failure (serum creatinine 1.9 mg/dL, increased from 1.2 mg/dL the previous day and 0.8 mg/dL on admission). The patient's Glasgow Coma Score was 3, with normal findings shown on computed tomography scan of the head 72 hours following an episode of cardiac arrest on day 10. The patient experienced relapsing MSSA bacteremia on day 9, increasing the suspicion for a central nervous system (CNS) infection. Nafcillin was discontinued and daptomycin 9 mg/kg daily was initiated for suspected meningitis and was continued until the patient's death on day 16. Daptomycin serum and CSF trough concentrations were 11.21 ug/mL and 0.52 ug/mL, respectively, prior to the third dose. Lumbar puncture results were inconclusive and no further blood cultures were positive for MSSA. Creatine kinase levels were normal prior to daptomycin therapy and were not reassessed. DISCUSSION: Daptomycin was initiated in our patient secondary to possible nafcillin-induced acute interstitial nephritis and relapsing bacteremia. At a dose of 9 mg/kg, resultant penetration of 5% was higher than in previous reports, more consistent with inflamed meninges. CONCLUSIONS: High-dose daptomycin may be an alternative option for MSSA bacteremia with or without a CNS source in patients who have failed or cannot tolerate standard therapy. Further clinical evaluation in patients with confirmed meningitis is warranted.
{"CellLine": [], "Chemical": ["Daptomycin", "oxacillin", "piperacillin/tazobactam", "antibiotic", "creatinine", "levofloxacin", "nafcillin", "daptomycin", "methicillin", "Nafcillin", "vancomycin"], "Disease": ["pneumonia", "Coma", "central nervous system (CNS) infection", "bacteremia", "weakness", "cardiac arrest", "interstitial nephritis", "meningitis", "acute renal failure", "death"], "Gene": ["Creatine kinase"], "Species": ["Staphylococcus aureus", "S. aureus", "patient", "patients"], "Variant": []}
science.biored_ner.70
[ { "content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nCerebrospinal fluid penetration of high-dose daptomycin in suspected Staphylococcus aureus meningitis. OBJECTIVE: To report a case of methicillin-sensitive Staphylococcus aureus (MSSA) bacteremia with suspected MSSA meningitis treated with high-dose daptomycin assessed with concurrent serum and cerebrospinal fluid (CSF) concentrations. CASE SUMMARY: A 54-year-old male presented to the emergency department with generalized weakness and presumed health-care-associated pneumonia shown on chest radiograph. Treatment was empirically initiated with vancomycin, levofloxacin, and piperacillin/tazobactam. Blood cultures revealed S. aureus susceptible to oxacillin. Empiric antibiotic treatment was narrowed to nafcillin on day 4. On day 8, the patient developed acute renal failure (serum creatinine 1.9 mg/dL, increased from 1.2 mg/dL the previous day and 0.8 mg/dL on admission). The patient's Glasgow Coma Score was 3, with normal findings shown on computed tomography scan of the head 72 hours following an episode of cardiac arrest on day 10. The patient experienced relapsing MSSA bacteremia on day 9, increasing the suspicion for a central nervous system (CNS) infection. Nafcillin was discontinued and daptomycin 9 mg/kg daily was initiated for suspected meningitis and was continued until the patient's death on day 16. Daptomycin serum and CSF trough concentrations were 11.21 ug/mL and 0.52 ug/mL, respectively, prior to the third dose. Lumbar puncture results were inconclusive and no further blood cultures were positive for MSSA. Creatine kinase levels were normal prior to daptomycin therapy and were not reassessed. DISCUSSION: Daptomycin was initiated in our patient secondary to possible nafcillin-induced acute interstitial nephritis and relapsing bacteremia. At a dose of 9 mg/kg, resultant penetration of 5% was higher than in previous reports, more consistent with inflamed meninges. CONCLUSIONS: High-dose daptomycin may be an alternative option for MSSA bacteremia with or without a CNS source in patients who have failed or cannot tolerate standard therapy. Further clinical evaluation in patients with confirmed meningitis is warranted.", "role": "user" }, { "content": "{\"CellLine\": [], \"Chemical\": [\"Daptomycin\", \"oxacillin\", \"piperacillin/tazobactam\", \"antibiotic\", \"creatinine\", \"levofloxacin\", \"nafcillin\", \"daptomycin\", \"methicillin\", \"Nafcillin\", \"vancomycin\"], \"Disease\": [\"pneumonia\", \"Coma\", \"central nervous system (CNS) infection\", \"bacteremia\", \"weakness\", \"cardiac arrest\", \"interstitial nephritis\", \"meningitis\", \"acute renal failure\", \"death\"], \"Gene\": [\"Creatine kinase\"], \"Species\": [\"Staphylococcus aureus\", \"S. aureus\", \"patient\", \"patients\"], \"Variant\": []}", "role": "assistant" } ]
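The NER-style records in this section expect a single JSON object whose keys are exactly the six entity categories named in the prompt, each mapped to a (possibly empty) list of surface strings. A minimal validation sketch follows; the six category names come from the task prompt, while the function name and return convention are assumptions for illustration.

```python
import json

# Illustrative validator for the BioRED-style NER outputs above.
NER_CATEGORIES = {"CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"}

def is_valid_ner_output(output_str: str) -> bool:
    try:
        data = json.loads(output_str)
    except json.JSONDecodeError:
        return False
    if not isinstance(data, dict) or set(data) != NER_CATEGORIES:
        return False
    # every category must map to a list of entity mention strings (possibly empty)
    return all(isinstance(v, list) and all(isinstance(m, str) for m in v)
               for v in data.values())
```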
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: CHOP is a general endoplasmic reticulum stress marker. Research Abstract: Endoplasmic reticulum (ER) stress causes pancreatic β-cell dysfunction and contributes to β-cell loss and the progression of type 2 diabetes. Wolfram syndrome 1 (WFS1) has been shown to be an important regulator of the ER stress signalling pathway; however, its role in β-cell function remains unclear. Here we provide evidence that WFS1 is essential for glucose- and glucagon-like peptide 1 (GLP-1)-stimulated cyclic AMP production and regulation of insulin biosynthesis and secretion. Stimulation with glucose causes WFS1 translocation from the ER to the plasma membrane, where it forms a complex with adenylyl cyclase 8 (AC8), an essential cAMP-generating enzyme in the β-cell that integrates glucose and GLP-1 signalling. ER stress and mutant WFS1 inhibit complex formation and activation of AC8, reducing cAMP synthesis and insulin secretion. These findings reveal that an ER-stress-related protein has a distinct role outside the ER regulating both insulin biosynthesis and secretion. The reduction of WFS1 protein on the plasma membrane during ER stress is a contributing factor for β-cell dysfunction and progression of type 2 diabetes.
{"verdict": "NEI", "evidence": []}
science.scifact_entailment.140
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nCHOP is a general endoplasmic reticulum stress marker.\n\nResearch Abstract:\nEndoplasmic reticulum (ER) stress causes pancreatic β-cell dysfunction and contributes to β-cell loss and the progression of type 2 diabetes.\nWolfram syndrome 1 (WFS1) has been shown to be an important regulator of the ER stress signalling pathway; however, its role in β-cell function remains unclear.\nHere we provide evidence that WFS1 is essential for glucose- and glucagon-like peptide 1 (GLP-1)-stimulated cyclic AMP production and regulation of insulin biosynthesis and secretion.\nStimulation with glucose causes WFS1 translocation from the ER to the plasma membrane, where it forms a complex with adenylyl cyclase 8 (AC8), an essential cAMP-generating enzyme in the β-cell that integrates glucose and GLP-1 signalling.\nER stress and mutant WFS1 inhibit complex formation and activation of AC8, reducing cAMP synthesis and insulin secretion.\nThese findings reveal that an ER-stress-related protein has a distinct role outside the ER regulating both insulin biosynthesis and secretion.\nThe reduction of WFS1 protein on the plasma membrane during ER stress is a contributing factor for β-cell dysfunction and progression of type 2 diabetes.", "role": "user" }, { "content": "{\"verdict\": \"NEI\", \"evidence\": []}", "role": "assistant" } ]
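The fact-verification records pair a claim with an abstract split into sentences and expect a JSON object holding a verdict plus a list of evidence sentences. Only the labels SUPPORT and NEI appear in the records shown in this section; any refuting label, and all names below, are assumptions made purely for illustration of a simple consistency check.

```python
import json

# Illustrative consistency check for the claim-verification outputs above.
# SUPPORT and NEI are the verdicts visible in these records; other labels are assumed.
def is_consistent_verdict(output_str: str, abstract_sentences: list[str]) -> bool:
    data = json.loads(output_str)
    if data["verdict"] == "NEI":
        return data["evidence"] == []          # no evidence when the abstract is silent
    # otherwise each evidence sentence should be copied verbatim from the abstract
    return all(sentence in abstract_sentences for sentence in data["evidence"])
```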
science.biored_ner
Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]. Ensure the output is strictly the JSON object. Source Text: Ginsenoside Rg1 restores the impairment of learning induced by chronic morphine administration in rats. Rg1, as a ginsenoside extracted from Panax ginseng, could ameliorate spatial learning impairment. Previous studies have demonstrated that Rg1 might be a useful agent for the prevention and treatment of the adverse effects of morphine. The aim of this study was to investigate the effect of Rg1 on learning impairment by chronic morphine administration and the mechanism responsible for this effect. Male rats were subcutaneously injected with morphine (10 mg/kg) twice a day at 12 hour intervals for 10 days, and Rg1 (30 mg/kg) was intraperitoneally injected 2 hours after the second injection of morphine once a day for 10 days. Spatial learning capacity was assessed in the Morris water maze. The results showed that rats treated with Morphine/Rg1 decreased escape latency and increased the time spent in platform quadrant and entering frequency. By implantation of electrodes and electrophysiological recording in vivo, the results showed that Rg1 restored the long-term potentiation (LTP) impaired by morphine in both freely moving and anaesthetised rats. The electrophysiological recording in vitro showed that Rg1 restored the LTP in slices from the rats treated with morphine, but not changed LTP in the slices from normal saline- or morphine/Rg1-treated rats; this restoration could be inhibited by N-methyl-D-aspartate (NMDA) receptor antagonist MK801. We conclude that Rg1 may significantly improve the spatial learning capacity impaired by chonic morphine administration and restore the morphine-inhibited LTP. This effect is NMDA receptor dependent.
{"CellLine": [], "Chemical": ["MK801", "Ginsenoside Rg1", "ginsenoside", "Morphine", "morphine", "Rg1"], "Disease": ["learning impairment", "impairment of learning"], "Gene": ["N-methyl-D-aspartate (NMDA) receptor", "NMDA receptor"], "Species": ["rats", "Panax ginseng"], "Variant": []}
science.biored_ner.90
[ { "content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nGinsenoside Rg1 restores the impairment of learning induced by chronic morphine administration in rats. Rg1, as a ginsenoside extracted from Panax ginseng, could ameliorate spatial learning impairment. Previous studies have demonstrated that Rg1 might be a useful agent for the prevention and treatment of the adverse effects of morphine. The aim of this study was to investigate the effect of Rg1 on learning impairment by chronic morphine administration and the mechanism responsible for this effect. Male rats were subcutaneously injected with morphine (10 mg/kg) twice a day at 12 hour intervals for 10 days, and Rg1 (30 mg/kg) was intraperitoneally injected 2 hours after the second injection of morphine once a day for 10 days. Spatial learning capacity was assessed in the Morris water maze. The results showed that rats treated with Morphine/Rg1 decreased escape latency and increased the time spent in platform quadrant and entering frequency. By implantation of electrodes and electrophysiological recording in vivo, the results showed that Rg1 restored the long-term potentiation (LTP) impaired by morphine in both freely moving and anaesthetised rats. The electrophysiological recording in vitro showed that Rg1 restored the LTP in slices from the rats treated with morphine, but not changed LTP in the slices from normal saline- or morphine/Rg1-treated rats; this restoration could be inhibited by N-methyl-D-aspartate (NMDA) receptor antagonist MK801. We conclude that Rg1 may significantly improve the spatial learning capacity impaired by chonic morphine administration and restore the morphine-inhibited LTP. This effect is NMDA receptor dependent.", "role": "user" }, { "content": "{\"CellLine\": [], \"Chemical\": [\"MK801\", \"Ginsenoside Rg1\", \"ginsenoside\", \"Morphine\", \"morphine\", \"Rg1\"], \"Disease\": [\"learning impairment\", \"impairment of learning\"], \"Gene\": [\"N-methyl-D-aspartate (NMDA) receptor\", \"NMDA receptor\"], \"Species\": [\"rats\", \"Panax ginseng\"], \"Variant\": []}", "role": "assistant" } ]
science.bioasq_list_qa
Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array. Context: Dysarthria is a motor speech disorder which can be classified according to the underlying neuropathology and is associated with disturbances of respiration, laryngeal function, airflow direction, and articulation resulting in difficulties of speech quality and intelligibility. There are six major types of dysarthria: flaccid dysarthria associated with lower motor neuron impairment, spastic dysarthria associated with damaged upper motor neurons linked to the motor areas of the cerebral cortex, ataxic dysarthria primarily caused by cerebellar dysfunction, and hyperkinetic dysarthria and hypokinetic dysarthria, which are related to a disorder of the extrapyramidal system. The sixth is generally termed a mixed dysarthria and is associated with damage in more than one area, resulting in speech characteristics of at least two groups. address voice problems of stroke patients with mixed dysarthria Characterizing intonation deficit in motor speech disorders: an autosegmental-metrical analysis of spontaneous speech in hypokinetic dysarthria, ataxic dysarthria ataxic dysarthria (AT) Acoustic analysis provides objective quantitative measures of speech that enable a comprehensive and accurate understanding of motor disorders and complement the traditional measures. This paper aims to distinguish between normal and pathological speech, more specifically between apraxia of speech and spastic dysarthria Long-term phonatory instability in ataxic dysarthria. The objective of this study is to evaluate phonation in ataxic dysarthria and a control group of normal speakers Dysprosody is a common feature in speakers with hypokinetic dysarthria. The prosodic profile of Cantonese speakers with hypokinetic dysarthria is similar to those of other languages (for example, English). Classification of dysarthria types comprises flaccid, spastic, ataxic, hypo- and hyperkinetic and mixed dysarthria. Dysarthria affects linguistic domains such as respiration, phonation, articulation, resonance and prosody due to upper motor neuron, lower motor neuron, cerebellar or extrapyramidal tract lesions. The dysarthria types were spastic (15 subjects), flaccid (10), mixed (12), hypokinetic (12), hyperkinetic (9) and ataxic (8). Dysarthria is a motor speech disorder which can be classified according to the underlying neuropathology and is associated with disturbances of respiration, laryngeal function, airflow direction, and articulation resulting in difficulties of speech quality and intelligibility. There are six major types of dysarthria: flaccid dysarthria associated with lower motor neuron impairment, spastic dysarthria associated with damaged upper motor neurons linked to the motor areas of the cerebral cortex, ataxic dysarthria primarily caused by cerebellar dysfunction, and hyperkinetic dysarthria and hypokinetic dysarthria, which are related to a disorder of the extrapyramidal system. There are six major types of dysarthria: flaccid dysarthria associated with lower motor neuron impairment, spastic dysarthria associated with damaged upper motor neurons linked to the motor areas of the cerebral cortex, ataxic dysarthria primarily caused by cerebellar dysfunction, and hyperkinetic dysarthria and hypokinetic dysarthria, which are related to a disorder of the extrapyramidal system. 
Motor neuron disease encompasses a group of terminal, demyelinating diseases affecting upper- and lower-motor neurons and producing muscular weakness resulting in a flaccid, spastic, or spastic-flaccid dysarthria of speech. Dysarthria is a frequently occurring motor speech disorder which can be caused by neurological trauma, cerebral palsy, or degenerative neurological diseases. Dysarthria is a motor disorder of speech characterized by abnormalities of the articulation and intelligibility of speech. There are six major types of dysarthria: flaccid dysarthria associated with lower motor neuron impairment, spastic dysarthria associated with damaged upper motor neurons linked to the motor areas of the cerebral cortex, ataxic dysarthria primarily caused by cerebellar dysfunction, and hyperkinetic dysarthria and hypokinetic dysarthria, which are related to a disorder of the extrapyramidal system Question: Which are the major types of the motor speech disorder dysarthria?
["flaccid dysarthria", "spastic dysarthria", "ataxic dysarthria", "hyperkinetic dysarthria", "hypokinetic dysarthria", "mixed dysarthria"]
science.bioasq_list_qa.60
[ { "content": "Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.\n\nContext: Dysarthria is a motor speech disorder which can be classified according to the underlying neuropathology and is associated with disturbances of respiration, laryngeal function, airflow direction, and articulation resulting in difficulties of speech quality and intelligibility. There are six major types of dysarthria: flaccid dysarthria associated with lower motor neuron impairment, spastic dysarthria associated with damaged upper motor neurons linked to the motor areas of the cerebral cortex, ataxic dysarthria primarily caused by cerebellar dysfunction, and hyperkinetic dysarthria and hypokinetic dysarthria, which are related to a disorder of the extrapyramidal system. The sixth is generally termed a mixed dysarthria and is associated with damage in more than one area, resulting in speech characteristics of at least two groups.\naddress voice problems of stroke patients with mixed dysarthria\nCharacterizing intonation deficit in motor speech disorders: an autosegmental-metrical analysis of spontaneous speech in hypokinetic dysarthria, ataxic dysarthria\nataxic dysarthria (AT)\nAcoustic analysis provides objective quantitative measures of speech that enable a comprehensive and accurate understanding of motor disorders and complement the traditional measures. This paper aims to distinguish between normal and pathological speech, more specifically between apraxia of speech and spastic dysarthria\nLong-term phonatory instability in ataxic dysarthria.\nThe objective of this study is to evaluate phonation in ataxic dysarthria and a control group of normal speakers\nDysprosody is a common feature in speakers with hypokinetic dysarthria.\nThe prosodic profile of Cantonese speakers with hypokinetic dysarthria is similar to those of other languages (for example, English).\nClassification of dysarthria types comprises flaccid, spastic, ataxic, hypo- and hyperkinetic and mixed dysarthria.\nDysarthria affects linguistic domains such as respiration, phonation, articulation, resonance and prosody due to upper motor neuron, lower motor neuron, cerebellar or extrapyramidal tract lesions.\nThe dysarthria types were spastic (15 subjects), flaccid (10), mixed (12), hypokinetic (12), hyperkinetic (9) and ataxic (8).\nDysarthria is a motor speech disorder which can be classified according to the underlying neuropathology and is associated with disturbances of respiration, laryngeal function, airflow direction, and articulation resulting in difficulties of speech quality and intelligibility.\nThere are six major types of dysarthria: flaccid dysarthria associated with lower motor neuron impairment, spastic dysarthria associated with damaged upper motor neurons linked to the motor areas of the cerebral cortex, ataxic dysarthria primarily caused by cerebellar dysfunction, and hyperkinetic dysarthria and hypokinetic dysarthria, which are related to a disorder of the extrapyramidal system.\nThere are six major types of dysarthria: flaccid dysarthria associated with lower motor neuron impairment, spastic dysarthria associated with damaged upper motor neurons linked to the motor areas of the cerebral cortex, ataxic dysarthria primarily caused by cerebellar dysfunction, and hyperkinetic dysarthria and hypokinetic dysarthria, which are related to a disorder of the extrapyramidal system.\nMotor neuron disease encompasses a 
group of terminal, demyelinating diseases affecting upper- and lower-motor neurons and producing muscular weakness resulting in a flaccid, spastic, or spastic-flaccid dysarthria of speech.\nDysarthria is a frequently occurring motor speech disorder which can be caused by neurological trauma, cerebral palsy, or degenerative neurological diseases.\nDysarthria is a motor disorder of speech characterized by abnormalities of the articulation and intelligibility of speech.\nThere are six major types of dysarthria: flaccid dysarthria associated with lower motor neuron impairment, spastic dysarthria associated with damaged upper motor neurons linked to the motor areas of the cerebral cortex, ataxic dysarthria primarily caused by cerebellar dysfunction, and hyperkinetic dysarthria and hypokinetic dysarthria, which are related to a disorder of the extrapyramidal system\n\nQuestion: Which are the major types of the motor speech disorder dysarthria?", "role": "user" }, { "content": "[\"flaccid dysarthria\", \"spastic dysarthria\", \"ataxic dysarthria\", \"hyperkinetic dysarthria\", \"hypokinetic dysarthria\", \"mixed dysarthria\"]", "role": "assistant" } ]
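The list-question record above returns its answer as a flat JSON array of strings drawn from the context snippets. A rough grounding check, returning any answer terms that never occur in the context, might look like the sketch below; it is purely illustrative, and a real check would need smarter matching than a case-insensitive substring test.

```python
import json

# Illustrative grounding check for list-style answers: report answer terms that do
# not occur anywhere in the provided context (case-insensitive substring match).
def ungrounded_terms(answer_str: str, context: str) -> list[str]:
    haystack = context.lower()
    return [term for term in json.loads(answer_str) if term.lower() not in haystack]
```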
science.discomat_te
| Samples | SiO2 | Al2O3 | P2O5 | Na2O | MgO | Li2O | K2O | Yb2O3 | Er2O3 | T g (K) | | Na | 59.5 | 3 | 3.5 | 18 | 16 | | | 0.5 | 0.125 | 837 | | NaLi | 59.5 | 3 | 3.5 | 9 | 16 | 9 | | 0.5 | 0.125 | 831 | | NaK | 59.5 | 3 | 3.5 | 9 | 16 | | 9 | 0.5 | 0.125 | 839 | Caption: The chemical composition (mol.%) of the glasses. Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance: {"value": "xx", "type": "Other"} {"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"} Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line. Output:
{"value": "59.5", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "Na"} {"value": "3", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "mol", "material": "Na"} {"value": "3.5", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "Na"} {"value": "18", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "Na"} {"value": "16", "type": "Glass_Compound_Amount", "constituent": "MgO", "unit": "mol", "material": "Na"} {"value": "0.5", "type": "Glass_Compound_Amount", "constituent": "Yb2O3", "unit": "mol", "material": "Na"} {"value": "0.125", "type": "Glass_Compound_Amount", "constituent": "Er2O3", "unit": "mol", "material": "Na"} {"value": "837", "type": "Other"} {"value": "59.5", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "NaLi"} {"value": "3", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "mol", "material": "NaLi"} {"value": "3.5", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "NaLi"} {"value": "9", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "NaLi"} {"value": "16", "type": "Glass_Compound_Amount", "constituent": "MgO", "unit": "mol", "material": "NaLi"} {"value": "9", "type": "Glass_Compound_Amount", "constituent": "Li2O", "unit": "mol", "material": "NaLi"} {"value": "0.5", "type": "Glass_Compound_Amount", "constituent": "Yb2O3", "unit": "mol", "material": "NaLi"} {"value": "0.125", "type": "Glass_Compound_Amount", "constituent": "Er2O3", "unit": "mol", "material": "NaLi"} {"value": "831", "type": "Other"} {"value": "59.5", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "NaK"} {"value": "3", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "mol", "material": "NaK"} {"value": "3.5", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "mol", "material": "NaK"} {"value": "9", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "NaK"} {"value": "16", "type": "Glass_Compound_Amount", "constituent": "MgO", "unit": "mol", "material": "NaK"} {"value": "9", "type": "Glass_Compound_Amount", "constituent": "K2O", "unit": "mol", "material": "NaK"} {"value": "0.5", "type": "Glass_Compound_Amount", "constituent": "Yb2O3", "unit": "mol", "material": "NaK"} {"value": "0.125", "type": "Glass_Compound_Amount", "constituent": "Er2O3", "unit": "mol", "material": "NaK"} {"value": "839", "type": "Other"}
science.discomat_te.242
[ { "content": "| Samples | SiO2 | Al2O3 | P2O5 | Na2O | MgO | Li2O | K2O | Yb2O3 | Er2O3 | T g (K) |\n| Na | 59.5 | 3 | 3.5 | 18 | 16 | | | 0.5 | 0.125 | 837 |\n| NaLi | 59.5 | 3 | 3.5 | 9 | 16 | 9 | | 0.5 | 0.125 | 831 |\n| NaK | 59.5 | 3 | 3.5 | 9 | 16 | | 9 | 0.5 | 0.125 | 839 |\n\nCaption: The chemical composition (mol.%) of the glasses.\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:", "role": "user" }, { "content": "{\"value\": \"59.5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"Na\"}\n{\"value\": \"3\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"mol\", \"material\": \"Na\"}\n{\"value\": \"3.5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"Na\"}\n{\"value\": \"18\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"Na\"}\n{\"value\": \"16\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"MgO\", \"unit\": \"mol\", \"material\": \"Na\"}\n{\"value\": \"0.5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Yb2O3\", \"unit\": \"mol\", \"material\": \"Na\"}\n{\"value\": \"0.125\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Er2O3\", \"unit\": \"mol\", \"material\": \"Na\"}\n{\"value\": \"837\", \"type\": \"Other\"}\n{\"value\": \"59.5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"NaLi\"}\n{\"value\": \"3\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"mol\", \"material\": \"NaLi\"}\n{\"value\": \"3.5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"NaLi\"}\n{\"value\": \"9\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"NaLi\"}\n{\"value\": \"16\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"MgO\", \"unit\": \"mol\", \"material\": \"NaLi\"}\n{\"value\": \"9\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Li2O\", \"unit\": \"mol\", \"material\": \"NaLi\"}\n{\"value\": \"0.5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Yb2O3\", \"unit\": \"mol\", \"material\": \"NaLi\"}\n{\"value\": \"0.125\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Er2O3\", \"unit\": \"mol\", \"material\": \"NaLi\"}\n{\"value\": \"831\", \"type\": \"Other\"}\n{\"value\": \"59.5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"NaK\"}\n{\"value\": \"3\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"mol\", \"material\": \"NaK\"}\n{\"value\": \"3.5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"mol\", \"material\": \"NaK\"}\n{\"value\": \"9\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"NaK\"}\n{\"value\": \"16\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"MgO\", \"unit\": \"mol\", \"material\": \"NaK\"}\n{\"value\": \"9\", \"type\": \"Glass_Compound_Amount\", 
\"constituent\": \"K2O\", \"unit\": \"mol\", \"material\": \"NaK\"}\n{\"value\": \"0.5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Yb2O3\", \"unit\": \"mol\", \"material\": \"NaK\"}\n{\"value\": \"0.125\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Er2O3\", \"unit\": \"mol\", \"material\": \"NaK\"}\n{\"value\": \"839\", \"type\": \"Other\"}", "role": "assistant" } ]
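The table-extraction record above emits one JSON object per numeric cell. Regrouping those per-cell objects back into one composition dictionary per glass sample is straightforward; the sketch below follows the field names given in the prompt ("type", "constituent", "unit", "material", "value"), and everything else in it is an assumption rather than part of the dataset.

```python
import json

# Illustrative regrouping of the per-cell JSON lines above into per-material
# compositions; cells typed "Other" (e.g. glass-transition temperatures) are skipped.
def compositions_from_cells(json_lines: str) -> dict[str, dict[str, float]]:
    compositions: dict[str, dict[str, float]] = {}
    for line in json_lines.strip().splitlines():
        cell = json.loads(line)
        if cell.get("type") != "Glass_Compound_Amount":
            continue
        compositions.setdefault(cell["material"], {})[cell["constituent"]] = float(cell["value"])
    return compositions
```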
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: Decentralized Learning for Overparameterized Problems: A Multi-Agent Kernel Approximation Approach Main Content: ( N2/M ) bits ( resp . O ( N √ N/M ) real values ) to achieve minimax optimal generalization performance . Further , we show that the proposed algorithms can significantly reduce the communication complexity compared with state-of-the-art algorithms , for distributedly training models to fit UCI benchmarking datasets . Moreover , each agent needs to share about 200N/M bits to closely match the performance of the centralized algorithms , and these numbers are independent of parameter and feature dimension . 1 INTRODUCTION . Recently , decentralized optimization has become a mainstay of the optimization research . In decentralized optimization , multiple local agents hold small to moderately sized private datasets , and collaborate by iteratively solving their local problems while sharing some information with other agents . Most of the existing decentralized learning algorithms are deeply rooted in classical consensus-based approaches ( Tsitsiklis , 1984 ) , where the agents repetitively share the local parameters with each other to reach an optimal consensual solution . However , the recent trend of using learning models in the overparameterized regime with very high-dimensional parameters ( He et al. , 2016 ; Vaswani et al. , 2017 ; Fedus et al. , 2021 ) poses a significant challenge to such parameter sharing approaches , mainly because sharing model parameters iteratively becomes excessively expensive as the parameter dimension grows . If the size of local data is much smaller than that of the parameters , perhaps a more efficient way is to directly share the local data . However , this approach raises privacy concerns , and it is rarely used in practice . Therefore , a fundamental question of decentralized learning in the overparameterized regime is : ( Q ) For overparameterized learning problems , how to design decentralized algorithms that achieve the best optimization/generalization performance by exchanging minimum amount of information ? We partially answer ( Q ) in the context of distributed kernel learning ( Vert et al. , 2004 ) . We depart from the popular consensus-based algorithms and propose an optimization framework that does not require the local agents to share model parameters or raw data . We focus on kernel learning because : ( i ) kernel methods provide an elegant way to model non-linear learning problems with complex data dependencies as simple linear problems ( Vert et al. , 2004 ; Hofmann et al. , 2008 ) , and ( ii ) kernelbased methods can be used to capture the behavior of a fully-trained deep network with large width ( Jacot et al. , 2018 ; Arora et al. , 2019 ; 2020 ) . Table 1 : Comparison of the total communication required per node by different algorithms for non-overparameterized ( NOP ) and overparameterized ( OP ) regimes . Please see Appendix B for a detailed discussion of the algorithms . Here N is entire sample size , UB on M denotes the upper bound on the number of nodes , M , d is the data dimension , β ≥ 2 is a constant , and T denotes the total communication ( iterations ) rounds utilized by the distributed algorithms . Algorithm Kernel UB on M Communication ( Real Values ) NOP OP DKRR-CM ( Lin et al. , 2020 ) Any O ( N T+1 2 ( T+2 ) ) dTN dTN DKRR-RF-CM ( Liu et al. 
, 2021 ) RF O ( N T+1 2 ( T+2 ) ) O ( T √ N ) O ( TNβ ) Decentralized-RF ( Richards et al. , 2020 ) RF O ( N 1 3 ) O ( T √ N ) O ( TNβ ) DKLA/COKE ( Xu et al. , 2020 ) RF Any M T √ N O ( TNβ ) RF N √ N M O ( N1+β M ) Algorithm 2 ( this work ) GIP Any M O ( N2 M ) O ( N2 M ) Distributed implementation of kernel learning problems is challenging . Current state-of-the-art algorithms for kernel learning either rely on sharing raw data among agents and/or imposing restrictions on the number of agents ( Zhang et al. , 2015 ; Lin et al. , 2017 ; Koppel et al. , 2018 ; Lin et al. , 2020 ; Hu et al. , 2020 ; Pradhan et al. , 2021 ; Predd et al. , 2006 ) . Some recent approaches rely on specific random feature ( RF ) kernels to alleviate some of the above problems . These algorithms reformulate the ( approximate ) problem in the parameter domain and solve it by iteratively sharing the ( potentially high-dimensional ) parameters ( Bouboulis et al. , 2017 ; Richards et al. , 2020 ; Xu et al. , 2020 ; Liu et al. , 2021 ) 1 . These algorithms suffer from excessive communication overhead , especially in the overparameterized regime where the number of parameters is larger than the data size N . For example , implementing the neural tangent kernel ( NTK ) with RF kernel requires at least O ( Nβ ) , β ≥ 2 , random features ( parameter dimension ) using ReLU activation ( Arora et al. , 2019 ; Han et al. , 2021 ) 2 . For such problems , in this work , we propose a novel algorithmic framework for decentralized kernel learning . Below , we list the major contributions of our work . [ GIP Kernel for Distributed Approximation ] We define a new class of kernels suitable for distributed implementation , Generalized inner-product ( GIP ) kernel , that is fully characterized by the angle between a pair of feature vectors and their respective norms . Many kernels of practical importance including the NTK can be represented as GIP kernel . Further , we propose a multi-agent kernel approximation method for estimating the GIP and the popular RF kernels at individual agents . [ One-shot and Iterative Scheme ] Based on the proposed kernel approximation , we develop two optimization algorithms , where the first one only needs one-shot information exchange , but requires sharing data labels among the agents ; the second one needs iterative information exchange , but does not need to share the data labels . A key feature of these algorithms is that neither the raw data features nor the ( high-dimensional ) parameters are exchanged among agents . [ Performance of the Approximation Framework ] We analyze the optimization and the generalization performance of the proposed approximation algorithms for ` 2 loss . We show that GIP kernel requires communicating O ( N2/M ) bits and the RF kernel requires communicating O ( N √ N/M ) real values per agent to achieve minimax optimal generalization performance . Importantly , the required communication is independent of the function class and the optimization algorithm . We validate the performance of our approximation algorithms on UCI benchmarking datasets . In Table 1 , we compare the communication requirements of the proposed approach to popular distributed kernel learning algorithms . Specifically , DKRR-CM ( Lin et al. , 2020 ) relies on sharing data and is therefore not preferred in practical settings . For the RF kernel , the proposed algorithm outperforms other algorithms in both non-overparameterized and the overparameterized regimes when T > N/M . 
In the overparameterized regime the GIP kernel is more communication efficient compared to other algorithms . Finally , note that since our analysis is developed using the multiagent-kernel-approximation , it does not impose any upper bounds on the number of machines in the network . Notations : We use R , Rd , and Rn×m to denote the sets of real numbers , d-dimensional Euclidean space , and real matrices of size n×m , respectively . We use N to denote the set of natural numbers . 1For a detailed literature review please see Appendix B . 2To achieve approximation error = O ( 1/ √ N ) . N ( 0 , Σ ) is multivariate normal distribution with zero mean and covariance Σ . Uniform distribution with support [ a , b ] is denoted by U [ a , b ] . 〈a , b〉 ( resp . 〈a , b〉H ) denotes the inner-product in Euclidean space ( resp . Hilbert space H ) . The inner-product defines the usual norms in corresponding spaces . Norm ‖A‖ of matrix A denotes the operator norm induced by ` 2 vector norm . We denote by [ a ] i or [ a ] ( i ) the ith element of a vector a . [ A · a ] ( i ) j denotes the ( i · j ) th element of vector A · a . Moreover , A ( : ,i ) is the ith column of A and [ A ] mk is the element corresponding to mth row and kth column . Notation m ∈ [ M ] denotes m ∈ { 1 , .. , M } . Finally , 1 [ E ] is the indicator function of event E . 2 PROBLEM STATEMENT . Given a probability distribution π ( x , y ) over X × R , we want to minimize the population loss L ( f ) = Ex , y∼π ( x , y ) [ ` ( f ( x ) , y ) ] , ( 1 ) where x ∈ X ⊂ Rd and y ∈ R denote the features and the labels , respectively . Here , f : X → R is an estimate of the true label y . We consider a distributed system of M agents , with each agent m ∈ [ M ] having access to a locally available independently and identically distributed ( i.i.d ) dataset Nm = { x ( i ) m , y ( i ) m } ni=1 with3 ( x ( i ) m , y ( i ) m ) ∼ π ( x , y ) . The total number of samples is N = nM . The goal of kernel learning with kernel function , k ( · , · ) : X × X → R , is to find a function f ∈ H ( whereH is the reproducing kernel Hilbert space ( RKHS ) associated with k ( Vert et al. , 2004 ) ) that minimizes ( 1 ) . We aim to solve the following ( decentralized ) empirical risk minimization problem min f∈H { R̂ ( f ) = L̂ ( f ) + λ 2 ‖f‖2H = 1 M M∑ m=1 L̂m ( f ) + λ 2 ‖f‖2H } , ( 2 ) where λ > 0 is the regularization parameter and L̂m ( f ) = 1n ∑ i∈Nm ` ( f ( x ( i ) m ) , y ( i ) m ) is the local loss at each m ∈ [ M ] . Problem ( 2 ) can be reformulated using the Representer theorem ( Schölkopf et al. , 2002 ) with L̂m ( α ) = 1n ∑ i∈Nm ` ( [ Kα ] ( i ) m , y ( i ) m ) , ∀m ∈ [ M ] , as min α∈RN { R̂ ( α ) = L̂ ( α ) + λ 2 ‖α‖2K = 1 M M∑ m=1 L̂m ( α ) + λ 2 ‖α‖2K } , ( 3 ) where K ∈ RN×N is the kernel matrix with elements k ( x ( i ) m , x ( j ) m̄ ) , ∀m , m̄ ∈ [ M ] , ∀i ∈ Nm and ∀j ∈ Nm̄ . The supervised ( centralized ) learning problem ( 3 ) is a classical problem in statistical learning ( Caponnetto & De Vito , 2007 ) and has been popularized recently due to connections with overparameterized neural network training ( Jacot et al. , 2018 ; Arora et al. , 2019 ) . An alternate way to solve problem ( 2 ) ( and ( 3 ) ) is by parameterizing f in ( 2 ) by θ ∈ RD as fD ( x ; θ ) = 〈θ , φD ( x ) 〉 where φD : X → RD is a finite dimensional feature map . Here , φD ( · ) is designed to approximate k ( · , · ) with kD ( x , x′ ) = 〈φD ( x ) , φD ( x′ ) 〉 ( Rahimi & Recht , 2008 ) . 
Using this approximation , problem ( 2 ) ( and ( 3 ) ) can be written in the parameter domain with L̂m , D ( θ ) = 1n ∑ i∈Nm ` ( 〈θ , φD ( x ( i ) m ) 〉 , y ( i ) m ) , ∀m ∈ [ M ] , as min θ∈RD { R̂D ( θ ) = L̂D ( θ ) + λ 2 ‖θ‖2 = 1 M M∑ m=1 L̂m , D ( θ ) + λ 2 ‖θ‖2 } . ( 4 ) Note that ( 4 ) is a D-dimensional problem , whereas ( 3 ) is an N -dimensional problem . Since ( 4 ) is in the standard finite-sum form , it can be solved using the standard parameter sharing decentralized optimization algorithms ( e.g. , DGD ( Richards et al. , 2020 ) or ADMM ( Xu et al. , 2020 ) ) , which share D-dimensional vectors iteratively . However , when ( 4 ) is overparameterized with very large D ( e.g. , D = O ( Nβ ) with β ≥ 2 for the NTK ) , such parameter sharing approaches are no longer feasible because of the increased communication complexity . An intuitive solution to avoid sharing these high-dimensional parameters is to directly solve ( 3 ) . However , it is by no means clear if and how one can efficiently solve ( 3 ) in a decentralized manner . The key challenge is that , unlike the conventional decentralized learning problems , here each loss term ` ( [ Kα ] ( i ) m , y ( i ) m ) is not separable 3The techniques presented in this work can be easily extended to unbalanced datasets , i.e. , when each agent has a dataset of different size . over the agents . Instead , each agent m ’ s local problem is dependent on k ( x ( i ) m , x ( j ) m̄ ) with m 6= m̄ . Importantly , without directly transmitting the data itself ( as has been done in Predd et al . ( 2006 ) ; Koppel et al . ( 2018 ) ; Lin et al . ( 2020 ) ) , it is not clear how one can obtain the required ( m·i ) th element of Kα . Therefore , to develop algorithms that avoid sharing high-dimensional parameters by directly ( approximately ) solving ( 3 ) , it is important to identify kernels that are suitable for decentralized implementation and propose efficient algorithms for learning with such kernels .
This work considers multi-agent optimization over RKHS together with random feature approximations. The crux is the development of two different techniques for decentralized computation through a novel introduction of a generalized inner-product kernel. Convergence analysis and numerical validation are provided.
science.mup_single_document_summarization.5874
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nDecentralized Learning for Overparameterized Problems: A Multi-Agent Kernel Approximation Approach\n\nMain Content:\n( N2/M ) bits ( resp . O ( N √ N/M ) real values ) to achieve minimax optimal generalization performance . Further , we show that the proposed algorithms can significantly reduce the communication complexity compared with state-of-the-art algorithms , for distributedly training models to fit UCI benchmarking datasets . Moreover , each agent needs to share about 200N/M bits to closely match the performance of the centralized algorithms , and these numbers are independent of parameter and feature dimension . 1 INTRODUCTION . Recently , decentralized optimization has become a mainstay of the optimization research . In decentralized optimization , multiple local agents hold small to moderately sized private datasets , and collaborate by iteratively solving their local problems while sharing some information with other agents . Most of the existing decentralized learning algorithms are deeply rooted in classical consensus-based approaches ( Tsitsiklis , 1984 ) , where the agents repetitively share the local parameters with each other to reach an optimal consensual solution . However , the recent trend of using learning models in the overparameterized regime with very high-dimensional parameters ( He et al. , 2016 ; Vaswani et al. , 2017 ; Fedus et al. , 2021 ) poses a significant challenge to such parameter sharing approaches , mainly because sharing model parameters iteratively becomes excessively expensive as the parameter dimension grows . If the size of local data is much smaller than that of the parameters , perhaps a more efficient way is to directly share the local data . However , this approach raises privacy concerns , and it is rarely used in practice . Therefore , a fundamental question of decentralized learning in the overparameterized regime is : ( Q ) For overparameterized learning problems , how to design decentralized algorithms that achieve the best optimization/generalization performance by exchanging minimum amount of information ? We partially answer ( Q ) in the context of distributed kernel learning ( Vert et al. , 2004 ) . We depart from the popular consensus-based algorithms and propose an optimization framework that does not require the local agents to share model parameters or raw data . We focus on kernel learning because : ( i ) kernel methods provide an elegant way to model non-linear learning problems with complex data dependencies as simple linear problems ( Vert et al. , 2004 ; Hofmann et al. , 2008 ) , and ( ii ) kernelbased methods can be used to capture the behavior of a fully-trained deep network with large width ( Jacot et al. , 2018 ; Arora et al. , 2019 ; 2020 ) . Table 1 : Comparison of the total communication required per node by different algorithms for non-overparameterized ( NOP ) and overparameterized ( OP ) regimes . Please see Appendix B for a detailed discussion of the algorithms . Here N is entire sample size , UB on M denotes the upper bound on the number of nodes , M , d is the data dimension , β ≥ 2 is a constant , and T denotes the total communication ( iterations ) rounds utilized by the distributed algorithms . Algorithm Kernel UB on M Communication ( Real Values ) NOP OP DKRR-CM ( Lin et al. 
, 2020 ) Any O ( N T+1 2 ( T+2 ) ) dTN dTN DKRR-RF-CM ( Liu et al. , 2021 ) RF O ( N T+1 2 ( T+2 ) ) O ( T √ N ) O ( TNβ ) Decentralized-RF ( Richards et al. , 2020 ) RF O ( N 1 3 ) O ( T √ N ) O ( TNβ ) DKLA/COKE ( Xu et al. , 2020 ) RF Any M T √ N O ( TNβ ) RF N √ N M O ( N1+β M ) Algorithm 2 ( this work ) GIP Any M O ( N2 M ) O ( N2 M ) Distributed implementation of kernel learning problems is challenging . Current state-of-the-art algorithms for kernel learning either rely on sharing raw data among agents and/or imposing restrictions on the number of agents ( Zhang et al. , 2015 ; Lin et al. , 2017 ; Koppel et al. , 2018 ; Lin et al. , 2020 ; Hu et al. , 2020 ; Pradhan et al. , 2021 ; Predd et al. , 2006 ) . Some recent approaches rely on specific random feature ( RF ) kernels to alleviate some of the above problems . These algorithms reformulate the ( approximate ) problem in the parameter domain and solve it by iteratively sharing the ( potentially high-dimensional ) parameters ( Bouboulis et al. , 2017 ; Richards et al. , 2020 ; Xu et al. , 2020 ; Liu et al. , 2021 ) 1 . These algorithms suffer from excessive communication overhead , especially in the overparameterized regime where the number of parameters is larger than the data size N . For example , implementing the neural tangent kernel ( NTK ) with RF kernel requires at least O ( Nβ ) , β ≥ 2 , random features ( parameter dimension ) using ReLU activation ( Arora et al. , 2019 ; Han et al. , 2021 ) 2 . For such problems , in this work , we propose a novel algorithmic framework for decentralized kernel learning . Below , we list the major contributions of our work . [ GIP Kernel for Distributed Approximation ] We define a new class of kernels suitable for distributed implementation , Generalized inner-product ( GIP ) kernel , that is fully characterized by the angle between a pair of feature vectors and their respective norms . Many kernels of practical importance including the NTK can be represented as GIP kernel . Further , we propose a multi-agent kernel approximation method for estimating the GIP and the popular RF kernels at individual agents . [ One-shot and Iterative Scheme ] Based on the proposed kernel approximation , we develop two optimization algorithms , where the first one only needs one-shot information exchange , but requires sharing data labels among the agents ; the second one needs iterative information exchange , but does not need to share the data labels . A key feature of these algorithms is that neither the raw data features nor the ( high-dimensional ) parameters are exchanged among agents . [ Performance of the Approximation Framework ] We analyze the optimization and the generalization performance of the proposed approximation algorithms for ` 2 loss . We show that GIP kernel requires communicating O ( N2/M ) bits and the RF kernel requires communicating O ( N √ N/M ) real values per agent to achieve minimax optimal generalization performance . Importantly , the required communication is independent of the function class and the optimization algorithm . We validate the performance of our approximation algorithms on UCI benchmarking datasets . In Table 1 , we compare the communication requirements of the proposed approach to popular distributed kernel learning algorithms . Specifically , DKRR-CM ( Lin et al. , 2020 ) relies on sharing data and is therefore not preferred in practical settings . 
For the RF kernel , the proposed algorithm outperforms other algorithms in both non-overparameterized and the overparameterized regimes when T > N/M . In the overparameterized regime the GIP kernel is more communication efficient compared to other algorithms . Finally , note that since our analysis is developed using the multiagent-kernel-approximation , it does not impose any upper bounds on the number of machines in the network . Notations : We use R , Rd , and Rn×m to denote the sets of real numbers , d-dimensional Euclidean space , and real matrices of size n×m , respectively . We use N to denote the set of natural numbers . 1For a detailed literature review please see Appendix B . 2To achieve approximation error = O ( 1/ √ N ) . N ( 0 , Σ ) is multivariate normal distribution with zero mean and covariance Σ . Uniform distribution with support [ a , b ] is denoted by U [ a , b ] . 〈a , b〉 ( resp . 〈a , b〉H ) denotes the inner-product in Euclidean space ( resp . Hilbert space H ) . The inner-product defines the usual norms in corresponding spaces . Norm ‖A‖ of matrix A denotes the operator norm induced by ` 2 vector norm . We denote by [ a ] i or [ a ] ( i ) the ith element of a vector a . [ A · a ] ( i ) j denotes the ( i · j ) th element of vector A · a . Moreover , A ( : ,i ) is the ith column of A and [ A ] mk is the element corresponding to mth row and kth column . Notation m ∈ [ M ] denotes m ∈ { 1 , .. , M } . Finally , 1 [ E ] is the indicator function of event E . 2 PROBLEM STATEMENT . Given a probability distribution π ( x , y ) over X × R , we want to minimize the population loss L ( f ) = Ex , y∼π ( x , y ) [ ` ( f ( x ) , y ) ] , ( 1 ) where x ∈ X ⊂ Rd and y ∈ R denote the features and the labels , respectively . Here , f : X → R is an estimate of the true label y . We consider a distributed system of M agents , with each agent m ∈ [ M ] having access to a locally available independently and identically distributed ( i.i.d ) dataset Nm = { x ( i ) m , y ( i ) m } ni=1 with3 ( x ( i ) m , y ( i ) m ) ∼ π ( x , y ) . The total number of samples is N = nM . The goal of kernel learning with kernel function , k ( · , · ) : X × X → R , is to find a function f ∈ H ( whereH is the reproducing kernel Hilbert space ( RKHS ) associated with k ( Vert et al. , 2004 ) ) that minimizes ( 1 ) . We aim to solve the following ( decentralized ) empirical risk minimization problem min f∈H { R̂ ( f ) = L̂ ( f ) + λ 2 ‖f‖2H = 1 M M∑ m=1 L̂m ( f ) + λ 2 ‖f‖2H } , ( 2 ) where λ > 0 is the regularization parameter and L̂m ( f ) = 1n ∑ i∈Nm ` ( f ( x ( i ) m ) , y ( i ) m ) is the local loss at each m ∈ [ M ] . Problem ( 2 ) can be reformulated using the Representer theorem ( Schölkopf et al. , 2002 ) with L̂m ( α ) = 1n ∑ i∈Nm ` ( [ Kα ] ( i ) m , y ( i ) m ) , ∀m ∈ [ M ] , as min α∈RN { R̂ ( α ) = L̂ ( α ) + λ 2 ‖α‖2K = 1 M M∑ m=1 L̂m ( α ) + λ 2 ‖α‖2K } , ( 3 ) where K ∈ RN×N is the kernel matrix with elements k ( x ( i ) m , x ( j ) m̄ ) , ∀m , m̄ ∈ [ M ] , ∀i ∈ Nm and ∀j ∈ Nm̄ . The supervised ( centralized ) learning problem ( 3 ) is a classical problem in statistical learning ( Caponnetto & De Vito , 2007 ) and has been popularized recently due to connections with overparameterized neural network training ( Jacot et al. , 2018 ; Arora et al. , 2019 ) . An alternate way to solve problem ( 2 ) ( and ( 3 ) ) is by parameterizing f in ( 2 ) by θ ∈ RD as fD ( x ; θ ) = 〈θ , φD ( x ) 〉 where φD : X → RD is a finite dimensional feature map . 
Here , φD ( · ) is designed to approximate k ( · , · ) with kD ( x , x′ ) = 〈φD ( x ) , φD ( x′ ) 〉 ( Rahimi & Recht , 2008 ) . Using this approximation , problem ( 2 ) ( and ( 3 ) ) can be written in the parameter domain with L̂m , D ( θ ) = 1n ∑ i∈Nm ` ( 〈θ , φD ( x ( i ) m ) 〉 , y ( i ) m ) , ∀m ∈ [ M ] , as min θ∈RD { R̂D ( θ ) = L̂D ( θ ) + λ 2 ‖θ‖2 = 1 M M∑ m=1 L̂m , D ( θ ) + λ 2 ‖θ‖2 } . ( 4 ) Note that ( 4 ) is a D-dimensional problem , whereas ( 3 ) is an N -dimensional problem . Since ( 4 ) is in the standard finite-sum form , it can be solved using the standard parameter sharing decentralized optimization algorithms ( e.g. , DGD ( Richards et al. , 2020 ) or ADMM ( Xu et al. , 2020 ) ) , which share D-dimensional vectors iteratively . However , when ( 4 ) is overparameterized with very large D ( e.g. , D = O ( Nβ ) with β ≥ 2 for the NTK ) , such parameter sharing approaches are no longer feasible because of the increased communication complexity . An intuitive solution to avoid sharing these high-dimensional parameters is to directly solve ( 3 ) . However , it is by no means clear if and how one can efficiently solve ( 3 ) in a decentralized manner . The key challenge is that , unlike the conventional decentralized learning problems , here each loss term ` ( [ Kα ] ( i ) m , y ( i ) m ) is not separable 3The techniques presented in this work can be easily extended to unbalanced datasets , i.e. , when each agent has a dataset of different size . over the agents . Instead , each agent m ’ s local problem is dependent on k ( x ( i ) m , x ( j ) m̄ ) with m 6= m̄ . Importantly , without directly transmitting the data itself ( as has been done in Predd et al . ( 2006 ) ; Koppel et al . ( 2018 ) ; Lin et al . ( 2020 ) ) , it is not clear how one can obtain the required ( m·i ) th element of Kα . Therefore , to develop algorithms that avoid sharing high-dimensional parameters by directly ( approximately ) solving ( 3 ) , it is important to identify kernels that are suitable for decentralized implementation and propose efficient algorithms for learning with such kernels .", "role": "user" }, { "content": "This work considers multi-agent optimization over RKHS together with random feature approximations. The crux is the development of two different techniques for decentralized computation through a novel introduction of a generalized inner-product kernel. Convergence analysis and numerical validation are provided.", "role": "assistant" } ]
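The kernel-learning paper excerpted in the record above leans on random-feature (RF) approximations, in which a kernel k(x, x′) is replaced by an inner product of finite feature maps, k_D(x, x′) = ⟨φ_D(x), φ_D(x′)⟩. The sketch below illustrates that general idea for a Gaussian (RBF) kernel using standard random Fourier features; it is not the paper's GIP kernel or its multi-agent approximation, and the constants, names, and dimensions are illustrative only.

```python
import numpy as np

# Generic random-Fourier-feature sketch of k(x, x') ~ <phi_D(x), phi_D(x')> for the
# Gaussian kernel exp(-gamma * ||x - x'||^2); not the GIP kernel from the paper.
def rff_map(X: np.ndarray, D: int, gamma: float = 1.0, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], D))  # spectral samples
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)                         # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

X = np.random.default_rng(1).normal(size=(5, 3))
K_exact = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))     # gamma = 1
Phi = rff_map(X, D=5000)
print(np.max(np.abs(K_exact - Phi @ Phi.T)))  # should be small, shrinking with D
```

The approximation error of such feature maps shrinks roughly as 1/√D, which is why the overparameterized regime (large D) discussed in the excerpt becomes communication-heavy when the D-dimensional parameters themselves must be exchanged.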
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: LDL cholesterol has a causal role in the development of cardiovascular disease. Research Abstract: BACKGROUND LDL cholesterol has a causal role in the development of cardiovascular disease. Improved understanding of the biological mechanisms that underlie the metabolism and regulation of LDL cholesterol might help to identify novel therapeutic targets. We therefore did a genome-wide association study of LDL-cholesterol concentrations. METHODS We used genome-wide association data from up to 11,685 participants with measures of circulating LDL-cholesterol concentrations across five studies, including data for 293 461 autosomal single nucleotide polymorphisms (SNPs) with a minor allele frequency of 5% or more that passed our quality control criteria. We also used data from a second genome-wide array in up to 4337 participants from three of these five studies, with data for 290,140 SNPs. We did replication studies in two independent populations consisting of up to 4979 participants. Statistical approaches, including meta-analysis and linkage disequilibrium plots, were used to refine association signals; we analysed pooled data from all seven populations to determine the effect of each SNP on variations in circulating LDL-cholesterol concentrations. FINDINGS In our initial scan, we found two SNPs (rs599839 [p=1.7x10(-15)] and rs4970834 [p=3.0x10(-11)]) that showed genome-wide statistical association with LDL cholesterol at chromosomal locus 1p13.3. The second genome screen found a third statistically associated SNP at the same locus (rs646776 [p=4.3x10(-9)]). Meta-analysis of data from all studies showed an association of SNPs rs599839 (combined p=1.2x10(-33)) and rs646776 (p=4.8x10(-20)) with LDL-cholesterol concentrations. SNPs rs599839 and rs646776 both explained around 1% of the variation in circulating LDL-cholesterol concentrations and were associated with about 15% of an SD change in LDL cholesterol per allele, assuming an SD of 1 mmol/L. INTERPRETATION We found evidence for a novel locus for LDL cholesterol on chromosome 1p13.3. These results potentially provide insight into the biological mechanisms that underlie the regulation of LDL cholesterol and might help in the discovery of novel therapeutic targets for cardiovascular disease.
{"verdict": "SUPPORT", "evidence": ["BACKGROUND LDL cholesterol has a causal role in the development of cardiovascular disease."]}
science.scifact_entailment.438
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nLDL cholesterol has a causal role in the development of cardiovascular disease.\n\nResearch Abstract:\nBACKGROUND LDL cholesterol has a causal role in the development of cardiovascular disease.\nImproved understanding of the biological mechanisms that underlie the metabolism and regulation of LDL cholesterol might help to identify novel therapeutic targets.\nWe therefore did a genome-wide association study of LDL-cholesterol concentrations.\nMETHODS We used genome-wide association data from up to 11,685 participants with measures of circulating LDL-cholesterol concentrations across five studies, including data for 293 461 autosomal single nucleotide polymorphisms (SNPs) with a minor allele frequency of 5% or more that passed our quality control criteria.\nWe also used data from a second genome-wide array in up to 4337 participants from three of these five studies, with data for 290,140 SNPs.\nWe did replication studies in two independent populations consisting of up to 4979 participants.\nStatistical approaches, including meta-analysis and linkage disequilibrium plots, were used to refine association signals; we analysed pooled data from all seven populations to determine the effect of each SNP on variations in circulating LDL-cholesterol concentrations.\nFINDINGS In our initial scan, we found two SNPs (rs599839 [p=1.7x10(-15)] and rs4970834 [p=3.0x10(-11)]) that showed genome-wide statistical association with LDL cholesterol at chromosomal locus 1p13.3.\nThe second genome screen found a third statistically associated SNP at the same locus (rs646776 [p=4.3x10(-9)]).\nMeta-analysis of data from all studies showed an association of SNPs rs599839 (combined p=1.2x10(-33)) and rs646776 (p=4.8x10(-20)) with LDL-cholesterol concentrations.\nSNPs rs599839 and rs646776 both explained around 1% of the variation in circulating LDL-cholesterol concentrations and were associated with about 15% of an SD change in LDL cholesterol per allele, assuming an SD of 1 mmol/L. INTERPRETATION We found evidence for a novel locus for LDL cholesterol on chromosome 1p13.3.\nThese results potentially provide insight into the biological mechanisms that underlie the regulation of LDL cholesterol and might help in the discovery of novel therapeutic targets for cardiovascular disease.", "role": "user" }, { "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"BACKGROUND LDL cholesterol has a causal role in the development of cardiovascular disease.\"]}", "role": "assistant" } ]
science.evidence_inference
Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as: - Treatment: The specific medical procedure or drug being evaluated. - Control: The standard or placebo used for comparison. If absent, use `null`. - Result: The health outcome being assessed. - Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change"). - Justification: The supporting details for the observed impact. Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>]. Abstract: ABSTRACT.BACKGROUND: The drug treatments of choice for obsessive-compulsive disorder (OCD) are serotonin reuptake inhibitors (SRIs). However, a correlation between the neuropeptide oxytocin in cerebrospinal fluid and the severity of OCD has previously been shown, and oxytocin and serotonin are interconnected within the brain. Few studies have investigated whether SRIs have any effect on oxytocin; thus, our aim was to explore the possibility that oxytocinergic mechanisms contribute to the anti-obsessive effect of SRIs. ABSTRACT.METHOD: In a randomized, double-blind trial, comparing SRIs (clomipramine and paroxetine) with placebo in 36 adults with OCD (characterized for subtypes), plasma oxytocin was measured with radioimmunoassay after plasma extraction, at baseline, after 1 week, and after 4 weeks of treatment, and related to baseline severity and clinical response after 12 weeks, as measured by the Yale-Brown Obsessive Compulsive Scale (Y-BOCS). ABSTRACT.RESULTS: Baseline oxytocin levels correlated positively with baseline Y-BOCS ratings, but only among the future SRI responders. Patients with early onset of OCD had higher baseline oxytocin. During treatment, plasma oxytocin did not differ between SRI and placebo treatment. In SRI responders, plasma oxytocin first decreased and then increased; in non-responders (to SRI as well as to placebo), the reverse was the case. After 4 weeks, treatment responders had attained higher oxytocin levels compared to non-responders. The intra-individual range (i.e. the variability) of plasma oxytocin between measurements was the measure that best differentiated responders from non-responders. This range was higher in responders than non-responders, and lower in patients with autistic traits. ABSTRACT.CONCLUSIONS: SRIs have highly variable effects on plasma oxytocin between individuals. The associations between baseline oxytocin and OCD severity and between oxytocin changes and treatment response support the notions that oxytocin is involved in OCD pathophysiology, and that the anti-obsessive effects of SRIs are partly exerted through oxytocinergic mechanisms. Extracted Data:
[["SRIs (clomipramine and paroxetine)", "placebo", "oxytocin levels", "no significant difference", "During treatment, plasma oxytocin did not differ between SRI and placebo treatment."]]
science.evidence_inference.150
[ { "content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: The drug treatments of choice for obsessive-compulsive disorder (OCD) are serotonin reuptake inhibitors (SRIs). However, a correlation between the neuropeptide oxytocin in cerebrospinal fluid and the severity of OCD has previously been shown, and oxytocin and serotonin are interconnected within the brain. Few studies have investigated whether SRIs have any effect on oxytocin; thus, our aim was to explore the possibility that oxytocinergic mechanisms contribute to the anti-obsessive effect of SRIs.\n\nABSTRACT.METHOD: In a randomized, double-blind trial, comparing SRIs (clomipramine and paroxetine) with placebo in 36 adults with OCD (characterized for subtypes), plasma oxytocin was measured with radioimmunoassay after plasma extraction, at baseline, after 1 week, and after 4 weeks of treatment, and related to baseline severity and clinical response after 12 weeks, as measured by the Yale-Brown Obsessive Compulsive Scale (Y-BOCS).\n\nABSTRACT.RESULTS: Baseline oxytocin levels correlated positively with baseline Y-BOCS ratings, but only among the future SRI responders. Patients with early onset of OCD had higher baseline oxytocin. During treatment, plasma oxytocin did not differ between SRI and placebo treatment. In SRI responders, plasma oxytocin first decreased and then increased; in non-responders (to SRI as well as to placebo), the reverse was the case. After 4 weeks, treatment responders had attained higher oxytocin levels compared to non-responders. The intra-individual range (i.e. the variability) of plasma oxytocin between measurements was the measure that best differentiated responders from non-responders. This range was higher in responders than non-responders, and lower in patients with autistic traits.\n\nABSTRACT.CONCLUSIONS: SRIs have highly variable effects on plasma oxytocin between individuals. The associations between baseline oxytocin and OCD severity and between oxytocin changes and treatment response support the notions that oxytocin is involved in OCD pathophysiology, and that the anti-obsessive effects of SRIs are partly exerted through oxytocinergic mechanisms.\n\nExtracted Data:", "role": "user" }, { "content": "[[\"SRIs (clomipramine and paroxetine)\", \"placebo\", \"oxytocin levels\", \"no significant difference\", \"During treatment, plasma oxytocin did not differ between SRI and placebo treatment.\"]]", "role": "assistant" } ]
science.mup_single_document_summarization
Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise. Title of the Paper: Pretext Tasks Selection for Multitask Self-Supervised Speech Representation Learning Main Content: Through solving pretext tasks , self-supervised learning leverages unlabeled data to extract useful latent representations replacing traditional input features in the downstream task . In audio/speech signal processing , a wide range of features where engineered through decades of research efforts . As it turns out , learning to predict such features ( a.k.a pseudo-labels ) has proven to be a particularly relevant pretext task , leading to useful self-supervised representations which prove to be effective for downstream tasks . However , methods and common practices for combining such pretext tasks for better performance on the downstream task have not been explored and understood properly . In fact , the process relies almost exclusively on a computationally heavy experimental procedure , which becomes intractable with the increase of the number of pretext tasks . This paper introduces a method to select a group of pretext tasks among a set of candidates . The method we propose estimates calibrated weights for the partial losses corresponding to the considered pretext tasks during the self-supervised training process . The experiments conducted on automatic speech recognition , speaker and emotion recognition validate our approach , as the groups selected and weighted with our method perform better than classic baselines , thus facilitating the selection and combination of relevant pseudo-labels for self-supervised representation learning . 1 INTRODUCTION . Self-supervised learning ( SSL ) methods usually rely on a supervision obtained from the data itself through solving specific pretext tasks leveraging the underlying structure of the considered data ( Doersch et al. , 2016 ; Arandjelovic & Zisserman , 2018 ) . This technique is used in various domains including image processing ( Misra & Maaten , 2020 ; Jing & Tian , 2020 ; Grill et al. , 2020 ) , natural language understanding ( Chen et al. , 2020b ; Du et al. , 2020 ; Lan et al. , 2019 ) or speech and audio processing ( Baevski et al. , 2020b ; Liu et al. , 2020 ; Jiang et al. , 2020 ) . It offers numerous advantages , such as the independence from labeled data , stronger performance on downstream tasks , more robust models and an easier transfer to low-resource setups ( e.g. , low-resource languages ) ( Baevski et al. , 2020b ; Jing & Tian , 2020 ) . The numerous existing SSL approaches are characterized by the nature of the pretext tasks they solve . For instance , common techniques include predictive coding ( Baevski et al. , 2020b ; Liu et al. , 2020 ; Song et al. , 2020 ; Zhang et al. , 2020 ; Hsu et al. , 2021 ) , pseudo-label learning ( Pascual et al. , 2019 ; Ravanelli et al. , 2020 ) , auto-encoding ( Renshaw et al. , 2015 ; Algayres et al. , 2020 ) , tripletloss learning ( Shor et al. , 2020 ; Peplinski et al. , 2020 ) , generative modelling ( Khurana et al. , 2020 ) or contrastive learning ( Saeed et al. , 2020 ; Jiang et al. , 2020 ) . More precisely , these pretext tasks may be defined through the choice of pretext labels , hereafter referred to as pseudo-labels . The automatic extraction of pseudo-labels for SSL ( i.e . from the data itself ) is common in many application domains , such as computer vision ( Noroozi & Favaro , 2017 ; Gidaris et al. 
, 2018 ) , music processing ( Hung et al. , 2019 ; Wu et al. , 2021 ) and speech processing ( Pascual et al. , 2019 ; Shukla et al. , 2020 ) , and is commonly referred to as multitask self supervised learning . In the specific context of speech processing , the process of designing pseudo-labels may benefit from decades of research in signal processing . For instance , potential candidates are pitch estimators , energy-based features , voicing state and many more . As demonstrated by Pascual et al . ( 2019 ) , multitask speech representation learning is a powerful tool to build representations that are beneficial for a wide range of distinct downstream tasks , by combining different pseudo-labels which “ intuitively ” correspond to these tasks . Unfortunately , there is no clear understanding of how these pseudo-labels may interact when optimised together , and therefore , no common practice of how to select groups of pseudo-labels to obtain better performance on a known downstream task . As a matter of fact , this design process has been essentially driven by empirical validation and there is therefore no evidence that the obtained model is even the best one . This empirical approach can rapidly become intractable with modern SSL architectures which may contain hundreds of millions or billions of parameters trained on thousands of hours of speech , not to mention the carbon footprint of such pseudo-label searches . For instance , the self-supervised training of a single state-of-the-art large wav2vec 2.0 model ( Baevski et al. , 2020b ) on 53.2k hours of speech requires 128 GPUs for 5.2 days . This work aims at providing a clear , efficient and theoretically motivated procedure for pseudolabel group selection and weighting based on conditional independence ( CI ) . The method presented allows one to design ahead of training the most adapted multitask self-supervised speech representation learning model which perfectly suits the considered downstream tasks . Such an approach may also enable researchers to save a substantial amount of time and computation usually devoted to pseudo-label search . Hence , the contributions of this work are fourfold : 1 . Introduce a theoretically motivated and computationally efficient method for the selection of pseudo-label groups among a set of candidates and with respect to the considered downstream tasks ( Sections 3 and 4 ) . 2 . Validate empirically the proposed approach with a first model SSL model relying on different sets of pseudo-labels corresponding to the ones obtained for three considered speech tasks . ( Sections 5 ) . 3 . Extend our method to the SOTA wav2vec 2.0 to enhance its performance ( Section 6 ) . 4 . Release the code base developed with SpeechBrain ( Ravanelli et al. , 2021 ) for replication and to encourage further investigations.1 The conducted experiments demonstrate that the proposed method allows for a more intelligent , i.e . better informed , pseudo-label group selection for multitask SSL settings . Indeed , we find that the models built with the proposed method obtain a word error rate and an equal error rate , respectively , 31.6 % and 27.4 % lower than the baseline , without the need for any empirical search . 2 RELATED WORKS AND MOTIVATIONS . SSL recently became a key component to achieve good performance on downstream tasks especially with low-resource setups , either in speech ( Baevski et al. , 2020b ; Conneau et al. , 2020 ) , natural language processing ( Lan et al. , 2019 ; Chen et al. 
, 2020b ) or computer vision ( Gidaris et al. , 2019 ; Misra & Maaten , 2020 ; Jing & Tian , 2020 ) . Due to its very nature , SSL relies on large amounts of unlabeled data used to train large deep neural networks over long periods of time . It it thus crucial to understand properly what makes a good SSL model to lower the amount of computation and time needed to obtain the best downstream performance . SSL for Speech . Self-supervised learning for speech has recently enabled researchers to reach stateof-the-art results on various speech processing tasks ( Fan et al. , 2021 ) . The most successful models rely on predictive and contrastive objectives ( Baevski et al. , 2020b ; Chung et al. , 2019 ; Hsu et al. , 2021 ; Shor et al. , 2021 ) performing well across the different tasks even in low-resource settings . This led to the design of different benchmarks evaluating the self-supervised representations in different languages ( Yang et al. , 2021 ; Evain et al. , 2021 ) . However , in contrast to this proposition , these works have not tried to motivate beforehand the different choices made in the self-supervision pipeline . Understanding SSL . A few works have tried to shed some theoretical light on the mainly empirical field of self-supervised learning . Following the different paradigms in SSL , various tracks have 1https : //github.com/iclrsubmitter22/iclr_2022_submission been followed to understand what makes for a good self-supervised representation , exploring different approaches ( Lee et al. , 2020 ; Arora et al. , 2019 ; Wei et al. , 2020 ) . On the one hand , contrastive learning ( Oord et al. , 2018 ; Chen et al. , 2020a ) has been advocated both theoretically and empirically to achieve a balance in the mutual information between alternative representations of the data , keeping just enough shared information to keep the class-related content ( Tschannen et al. , 2020 ; Tian et al. , 2020 ; Bachman et al. , 2019 ) . In a recent work from Li et al . ( 2021 ) , independence testing has been used to produce better transformations in contrastive learning settings for image representations . Predictive learning , on the other hand , requires the model to predict masked elements in sequential data . This technique is powerful on downstream tasks that can be reduced to a masking problem , as suggested by research on language modeling ( Saunshi et al. , 2020 ) . However , all these works have been focusing on computer vision or text-related applications , and none of them addressed the multi-tasked self supervision problem . Multi-task self-supervised learning . While the literature on multi-tasking in self-supervised learning remains scarce , it has been shown in classic supervised learning settings , that through estimates of similarity between tasks or thorough empirical testing , several tasks can take advantage of being solved with a common encoder ( Zamir et al. , 2018 ; Dwivedi & Roig , 2019 ; Shafey et al. , 2019 ; Chen et al. , 2015 ) . More specifically , combining pretext tasks with SSL has been mainly explored in computer vision and speech ( Pascual et al. , 2019 ; Ravanelli et al. , 2020 ) . Pretext tasks such as Jigsaw ( Doersch et al. , 2016 ) , colourisation and rotation ( Gidaris et al. , 2018 ) have been combined successfully to improve downstream performance ( Kim et al. , 2018 ; Shin ’ ya Yamaguchi et al. ) . The two closest works to our line of research are from Lee et al . ( 2020 ) and Doersch et al . ( 2017 ) . 
The former shows that a theoretical link can be established between conditional independence and an improvement of the performance on the downstream task , while the latter proposes to select layers from a multitask self-supervised encoder according to the pretext task to be solved . However , in both cases , the studies do not offer practical and theoretical solutions to select groups of pseudo-labels to build an adapted SSL model that will perform well on the considered downstream tasks . Group feature selection . Finally , feature selection , and especially group feature selection is another close and inspiring field given the problem we consider . The relationship and interactions between features have been largely investigated in the supervised learning literature ( Guyon & Elisseeff , 2003 ) . This led to multiple solutions to the feature group selection problem , including LASSObased techniques ( Yuan & Lin , 2006 ) , or multiple kernel formulations ( Sonnenburg et al. , 2006 ; Rakotomamonjy et al. , 2007 ) . However , these works do not involve any self-supervision , and links between feature selection and self-supervision design and pretext-task selection are yet to be proved . We will further consider these lines of works for concurrent baselines . With this work , we aim at shortening the process of designing SSL models while giving insights on the pseudo-label importance and the underlying mechanisms between pretext and downstream tasks at the same time . We decided to experiment with speech due to the lack of literature on this domain for multitask SSL , and for the various pseudo-labels available , which are based on decades of signal processing research . The whole pipeline starting from the acoustic feature extraction to the downstream task scoring follows three major steps summarized in Figure 1 . First , for every downstream task , our method produces a pretext task selection and weighting . Then , a SSL model is trained , before being used as a feature extractor front-end to one or many downstream tasks .
This paper proposes a method to select a group of pretext tasks from a given set of tasks for optimising the network training during self-supervised learning phase. The weights for the given set of tasks are learned based on Hilbert Schmidt Independence Criterion (HSIC) using a few data samples. Using 2 downstream tasks, the authors show that their approach could benefit the learning of features relevant for the downstream tasks. Thus, improves the accuracy of these downstream tasks.
science.mup_single_document_summarization.5380
[ { "content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nPretext Tasks Selection for Multitask Self-Supervised Speech Representation Learning\n\nMain Content:\nThrough solving pretext tasks , self-supervised learning leverages unlabeled data to extract useful latent representations replacing traditional input features in the downstream task . In audio/speech signal processing , a wide range of features where engineered through decades of research efforts . As it turns out , learning to predict such features ( a.k.a pseudo-labels ) has proven to be a particularly relevant pretext task , leading to useful self-supervised representations which prove to be effective for downstream tasks . However , methods and common practices for combining such pretext tasks for better performance on the downstream task have not been explored and understood properly . In fact , the process relies almost exclusively on a computationally heavy experimental procedure , which becomes intractable with the increase of the number of pretext tasks . This paper introduces a method to select a group of pretext tasks among a set of candidates . The method we propose estimates calibrated weights for the partial losses corresponding to the considered pretext tasks during the self-supervised training process . The experiments conducted on automatic speech recognition , speaker and emotion recognition validate our approach , as the groups selected and weighted with our method perform better than classic baselines , thus facilitating the selection and combination of relevant pseudo-labels for self-supervised representation learning . 1 INTRODUCTION . Self-supervised learning ( SSL ) methods usually rely on a supervision obtained from the data itself through solving specific pretext tasks leveraging the underlying structure of the considered data ( Doersch et al. , 2016 ; Arandjelovic & Zisserman , 2018 ) . This technique is used in various domains including image processing ( Misra & Maaten , 2020 ; Jing & Tian , 2020 ; Grill et al. , 2020 ) , natural language understanding ( Chen et al. , 2020b ; Du et al. , 2020 ; Lan et al. , 2019 ) or speech and audio processing ( Baevski et al. , 2020b ; Liu et al. , 2020 ; Jiang et al. , 2020 ) . It offers numerous advantages , such as the independence from labeled data , stronger performance on downstream tasks , more robust models and an easier transfer to low-resource setups ( e.g. , low-resource languages ) ( Baevski et al. , 2020b ; Jing & Tian , 2020 ) . The numerous existing SSL approaches are characterized by the nature of the pretext tasks they solve . For instance , common techniques include predictive coding ( Baevski et al. , 2020b ; Liu et al. , 2020 ; Song et al. , 2020 ; Zhang et al. , 2020 ; Hsu et al. , 2021 ) , pseudo-label learning ( Pascual et al. , 2019 ; Ravanelli et al. , 2020 ) , auto-encoding ( Renshaw et al. , 2015 ; Algayres et al. , 2020 ) , tripletloss learning ( Shor et al. , 2020 ; Peplinski et al. , 2020 ) , generative modelling ( Khurana et al. , 2020 ) or contrastive learning ( Saeed et al. , 2020 ; Jiang et al. , 2020 ) . More precisely , these pretext tasks may be defined through the choice of pretext labels , hereafter referred to as pseudo-labels . The automatic extraction of pseudo-labels for SSL ( i.e . from the data itself ) is common in many application domains , such as computer vision ( Noroozi & Favaro , 2017 ; Gidaris et al. 
, 2018 ) , music processing ( Hung et al. , 2019 ; Wu et al. , 2021 ) and speech processing ( Pascual et al. , 2019 ; Shukla et al. , 2020 ) , and is commonly referred to as multitask self supervised learning . In the specific context of speech processing , the process of designing pseudo-labels may benefit from decades of research in signal processing . For instance , potential candidates are pitch estimators , energy-based features , voicing state and many more . As demonstrated by Pascual et al . ( 2019 ) , multitask speech representation learning is a powerful tool to build representations that are beneficial for a wide range of distinct downstream tasks , by combining different pseudo-labels which “ intuitively ” correspond to these tasks . Unfortunately , there is no clear understanding of how these pseudo-labels may interact when optimised together , and therefore , no common practice of how to select groups of pseudo-labels to obtain better performance on a known downstream task . As a matter of fact , this design process has been essentially driven by empirical validation and there is therefore no evidence that the obtained model is even the best one . This empirical approach can rapidly become intractable with modern SSL architectures which may contain hundreds of millions or billions of parameters trained on thousands of hours of speech , not to mention the carbon footprint of such pseudo-label searches . For instance , the self-supervised training of a single state-of-the-art large wav2vec 2.0 model ( Baevski et al. , 2020b ) on 53.2k hours of speech requires 128 GPUs for 5.2 days . This work aims at providing a clear , efficient and theoretically motivated procedure for pseudolabel group selection and weighting based on conditional independence ( CI ) . The method presented allows one to design ahead of training the most adapted multitask self-supervised speech representation learning model which perfectly suits the considered downstream tasks . Such an approach may also enable researchers to save a substantial amount of time and computation usually devoted to pseudo-label search . Hence , the contributions of this work are fourfold : 1 . Introduce a theoretically motivated and computationally efficient method for the selection of pseudo-label groups among a set of candidates and with respect to the considered downstream tasks ( Sections 3 and 4 ) . 2 . Validate empirically the proposed approach with a first model SSL model relying on different sets of pseudo-labels corresponding to the ones obtained for three considered speech tasks . ( Sections 5 ) . 3 . Extend our method to the SOTA wav2vec 2.0 to enhance its performance ( Section 6 ) . 4 . Release the code base developed with SpeechBrain ( Ravanelli et al. , 2021 ) for replication and to encourage further investigations.1 The conducted experiments demonstrate that the proposed method allows for a more intelligent , i.e . better informed , pseudo-label group selection for multitask SSL settings . Indeed , we find that the models built with the proposed method obtain a word error rate and an equal error rate , respectively , 31.6 % and 27.4 % lower than the baseline , without the need for any empirical search . 2 RELATED WORKS AND MOTIVATIONS . SSL recently became a key component to achieve good performance on downstream tasks especially with low-resource setups , either in speech ( Baevski et al. , 2020b ; Conneau et al. , 2020 ) , natural language processing ( Lan et al. , 2019 ; Chen et al. 
, 2020b ) or computer vision ( Gidaris et al. , 2019 ; Misra & Maaten , 2020 ; Jing & Tian , 2020 ) . Due to its very nature , SSL relies on large amounts of unlabeled data used to train large deep neural networks over long periods of time . It it thus crucial to understand properly what makes a good SSL model to lower the amount of computation and time needed to obtain the best downstream performance . SSL for Speech . Self-supervised learning for speech has recently enabled researchers to reach stateof-the-art results on various speech processing tasks ( Fan et al. , 2021 ) . The most successful models rely on predictive and contrastive objectives ( Baevski et al. , 2020b ; Chung et al. , 2019 ; Hsu et al. , 2021 ; Shor et al. , 2021 ) performing well across the different tasks even in low-resource settings . This led to the design of different benchmarks evaluating the self-supervised representations in different languages ( Yang et al. , 2021 ; Evain et al. , 2021 ) . However , in contrast to this proposition , these works have not tried to motivate beforehand the different choices made in the self-supervision pipeline . Understanding SSL . A few works have tried to shed some theoretical light on the mainly empirical field of self-supervised learning . Following the different paradigms in SSL , various tracks have 1https : //github.com/iclrsubmitter22/iclr_2022_submission been followed to understand what makes for a good self-supervised representation , exploring different approaches ( Lee et al. , 2020 ; Arora et al. , 2019 ; Wei et al. , 2020 ) . On the one hand , contrastive learning ( Oord et al. , 2018 ; Chen et al. , 2020a ) has been advocated both theoretically and empirically to achieve a balance in the mutual information between alternative representations of the data , keeping just enough shared information to keep the class-related content ( Tschannen et al. , 2020 ; Tian et al. , 2020 ; Bachman et al. , 2019 ) . In a recent work from Li et al . ( 2021 ) , independence testing has been used to produce better transformations in contrastive learning settings for image representations . Predictive learning , on the other hand , requires the model to predict masked elements in sequential data . This technique is powerful on downstream tasks that can be reduced to a masking problem , as suggested by research on language modeling ( Saunshi et al. , 2020 ) . However , all these works have been focusing on computer vision or text-related applications , and none of them addressed the multi-tasked self supervision problem . Multi-task self-supervised learning . While the literature on multi-tasking in self-supervised learning remains scarce , it has been shown in classic supervised learning settings , that through estimates of similarity between tasks or thorough empirical testing , several tasks can take advantage of being solved with a common encoder ( Zamir et al. , 2018 ; Dwivedi & Roig , 2019 ; Shafey et al. , 2019 ; Chen et al. , 2015 ) . More specifically , combining pretext tasks with SSL has been mainly explored in computer vision and speech ( Pascual et al. , 2019 ; Ravanelli et al. , 2020 ) . Pretext tasks such as Jigsaw ( Doersch et al. , 2016 ) , colourisation and rotation ( Gidaris et al. , 2018 ) have been combined successfully to improve downstream performance ( Kim et al. , 2018 ; Shin ’ ya Yamaguchi et al. ) . The two closest works to our line of research are from Lee et al . ( 2020 ) and Doersch et al . ( 2017 ) . 
The former shows that a theoretical link can be established between conditional independence and an improvement of the performance on the downstream task , while the latter proposes to select layers from a multitask self-supervised encoder according to the pretext task to be solved . However , in both cases , the studies do not offer practical and theoretical solutions to select groups of pseudo-labels to build an adapted SSL model that will perform well on the considered downstream tasks . Group feature selection . Finally , feature selection , and especially group feature selection is another close and inspiring field given the problem we consider . The relationship and interactions between features have been largely investigated in the supervised learning literature ( Guyon & Elisseeff , 2003 ) . This led to multiple solutions to the feature group selection problem , including LASSObased techniques ( Yuan & Lin , 2006 ) , or multiple kernel formulations ( Sonnenburg et al. , 2006 ; Rakotomamonjy et al. , 2007 ) . However , these works do not involve any self-supervision , and links between feature selection and self-supervision design and pretext-task selection are yet to be proved . We will further consider these lines of works for concurrent baselines . With this work , we aim at shortening the process of designing SSL models while giving insights on the pseudo-label importance and the underlying mechanisms between pretext and downstream tasks at the same time . We decided to experiment with speech due to the lack of literature on this domain for multitask SSL , and for the various pseudo-labels available , which are based on decades of signal processing research . The whole pipeline starting from the acoustic feature extraction to the downstream task scoring follows three major steps summarized in Figure 1 . First , for every downstream task , our method produces a pretext task selection and weighting . Then , a SSL model is trained , before being used as a feature extractor front-end to one or many downstream tasks .", "role": "user" }, { "content": "This paper proposes a method to select a group of pretext tasks from a given set of tasks for optimising the network training during self-supervised learning phase. The weights for the given set of tasks are learned based on Hilbert Schmidt Independence Criterion (HSIC) using a few data samples. Using 2 downstream tasks, the authors show that their approach could benefit the learning of features relevant for the downstream tasks. Thus, improves the accuracy of these downstream tasks.", "role": "assistant" } ]
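The review attached to this record notes that the pretext-task weights are derived from the Hilbert-Schmidt Independence Criterion (HSIC). The snippet below is a generic sketch of the standard biased HSIC estimator (Gretton et al., 2005), not the paper's released code; the RBF kernel and its bandwidth are assumptions made for the example.

```python
import numpy as np

def rbf_gram(X, gamma=1.0):
    """Gram matrix of the RBF kernel exp(-gamma * ||x - x'||^2)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def hsic(X, Y, gamma=1.0):
    """Biased empirical HSIC estimate: larger values indicate stronger
    statistical dependence between the samples in X and Y."""
    n = X.shape[0]
    K, L = rbf_gram(X, gamma), rbf_gram(Y, gamma)
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 1))
print(hsic(X, X**2 + 0.1 * rng.normal(size=(300, 1))))  # dependent -> larger value
print(hsic(X, rng.normal(size=(300, 1))))               # independent -> near zero
```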
science.biored_ner
Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]. Ensure the output is strictly the JSON object. Source Text: Role of angiotensin II type 1a receptor in renal injury induced by deoxycorticosterone acetate-salt hypertension. The aim of this study was to investigate the in vivo role of angiotensin II type 1a (AT1a) receptor in renal damage as a result of hypertension by using transgenic mice with AT1a receptor gene disruption. Transgenic mice that express human liver-type fatty acid binding protein (L-FABP) with or without disruption of the AT1a receptor gene (L-FABP(+/-) AT1a(-/-), and L-FABP(+/-) AT1a(+/+), respectively) were used with urinary L-FABP as an indicator of tubulointerstitial damage. Those female mice were administered subcutaneously deoxycorticosterone acetate (DOCA)-salt tablets plus drinking water that contained 1% saline for 28 d after uninephrectomy. In L-FABP(+/-) AT1a(+/+) mice that received DOCA-salt treatment, hypertension was induced and slight expansion of glomerular area, glomerular sclerosis, and tubulointerstitial damage were observed. In L-FABP(+/-) AT1a(-/-) mice that received DOCA-salt treatment, hypertension was similarly induced and the degree of glomerular damage was significantly more severe than in L-FABP(+/-) AT1a(+/+)-DOCA mice. Urinary L-FABP levels were significantly higher in L-FABP(+/-) AT1a(-/-)-DOCA mice compared with those in L-FABP(+/-) AT1a(+/+)-DOCA mice. Hydralazine treatment significantly attenuated renal damage that was found in L-FABP(+/-) AT1a(-/-)-DOCA mice along with a reduction in blood pressure. In summary, activation of the AT1a receptor may contribute to maintenance of the glomerular structure against hypertensive renal damage.-Hisamichi, M., Kamijo-Ikemori, A., Sugaya, T., Ichikawa, D., Natsuki, T., Hoshino, S., Kimura, K., Shibagaki, Y. Role of angiotensin II type 1a receptor in renal injury induced by deoxycorticosterone acetate-salt hypertension.
{"CellLine": [], "Chemical": ["deoxycorticosterone acetate", "Hydralazine", "DOCA"], "Disease": ["glomerular damage", "glomerular sclerosis", "tubulointerstitial damage", "renal damage", "hypertensive renal damage", "hypertension", "renal injury"], "Gene": ["angiotensin II type 1a (AT1a) receptor", "angiotensin II type 1a receptor", "liver-type fatty acid binding protein", "L-FABP", "AT1a receptor", "AT1a"], "Species": ["mice", "human"], "Variant": []}
science.biored_ner.73
[ { "content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nRole of angiotensin II type 1a receptor in renal injury induced by deoxycorticosterone acetate-salt hypertension. The aim of this study was to investigate the in vivo role of angiotensin II type 1a (AT1a) receptor in renal damage as a result of hypertension by using transgenic mice with AT1a receptor gene disruption. Transgenic mice that express human liver-type fatty acid binding protein (L-FABP) with or without disruption of the AT1a receptor gene (L-FABP(+/-) AT1a(-/-), and L-FABP(+/-) AT1a(+/+), respectively) were used with urinary L-FABP as an indicator of tubulointerstitial damage. Those female mice were administered subcutaneously deoxycorticosterone acetate (DOCA)-salt tablets plus drinking water that contained 1% saline for 28 d after uninephrectomy. In L-FABP(+/-) AT1a(+/+) mice that received DOCA-salt treatment, hypertension was induced and slight expansion of glomerular area, glomerular sclerosis, and tubulointerstitial damage were observed. In L-FABP(+/-) AT1a(-/-) mice that received DOCA-salt treatment, hypertension was similarly induced and the degree of glomerular damage was significantly more severe than in L-FABP(+/-) AT1a(+/+)-DOCA mice. Urinary L-FABP levels were significantly higher in L-FABP(+/-) AT1a(-/-)-DOCA mice compared with those in L-FABP(+/-) AT1a(+/+)-DOCA mice. Hydralazine treatment significantly attenuated renal damage that was found in L-FABP(+/-) AT1a(-/-)-DOCA mice along with a reduction in blood pressure. In summary, activation of the AT1a receptor may contribute to maintenance of the glomerular structure against hypertensive renal damage.-Hisamichi, M., Kamijo-Ikemori, A., Sugaya, T., Ichikawa, D., Natsuki, T., Hoshino, S., Kimura, K., Shibagaki, Y. Role of angiotensin II type 1a receptor in renal injury induced by deoxycorticosterone acetate-salt hypertension.", "role": "user" }, { "content": "{\"CellLine\": [], \"Chemical\": [\"deoxycorticosterone acetate\", \"Hydralazine\", \"DOCA\"], \"Disease\": [\"glomerular damage\", \"glomerular sclerosis\", \"tubulointerstitial damage\", \"renal damage\", \"hypertensive renal damage\", \"hypertension\", \"renal injury\"], \"Gene\": [\"angiotensin II type 1a (AT1a) receptor\", \"angiotensin II type 1a receptor\", \"liver-type fatty acid binding protein\", \"L-FABP\", \"AT1a receptor\", \"AT1a\"], \"Species\": [\"mice\", \"human\"], \"Variant\": []}", "role": "assistant" } ]
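The NER rows above and below all ask for the same fixed set of entity categories and a strict JSON object as the answer. As an illustrative helper (not part of the dataset itself), a minimal validity check for such an answer could look like this, assuming Python:

```python
import json

EXPECTED_KEYS = {"CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"}

def check_ner_output(raw: str) -> bool:
    """Return True if `raw` is a JSON object whose keys are exactly the six
    entity categories and whose values are lists of strings."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return (isinstance(obj, dict)
            and set(obj) == EXPECTED_KEYS
            and all(isinstance(v, list) and all(isinstance(s, str) for s in v)
                    for v in obj.values()))

print(check_ner_output('{"CellLine": [], "Variant": [], "Species": ["mice"], '
                       '"Disease": [], "Chemical": ["DOX"], "Gene": []}'))  # True
```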
science.biored_ner
Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]. Ensure the output is strictly the JSON object. Source Text: Congenital hypothyroidism due to a new deletion in the sodium/iodide symporter protein. OBJECTIVE: Iodide transport defect (ITD) is a rare disorder characterised by an inability of the thyroid to maintain an iodide gradient across the basolateral membrane of thyroid follicular cells, that often results in congenital hypothyroidism. When present the defect is also found in the salivary glands and gastric mucosa and it has been shown to arise from abnormalities of the sodium/iodide symporter (NIS). PATIENT: We describe a woman with hypothyroidism identified at the 3rd month of life. The diagnosis of ITD was suspected because of nodular goitre, and little if any iodide uptake by the thyroid and salivary glands. Treatment with iodide partially corrected the hypothyroidism; however, long-term substitution therapy with L-thyroxine was started. MEASUREMENTS: Thyroid radioiodide uptake was only 1.4% and 0.3% at 1 and 24 h after the administration of recombinant human TSH. The saliva to plasma I- ratio was 1.1 indicating that the inability of the thyroid gland to concentrate I- was also present in the salivary glands. RESULTS: Analysis of the patient's NIS gene revealed a 15 nucleotide (nt) deletion of the coding sequence (nt 1314 through nt 1328) and the insertion of 15 nt duplicating the first 15 nt of the adjacent intron. The patient was homozygous for this insertion/deletion, while both consanguineous parents were heterozygous. This deletion predicts the production of a protein lacking the five terminal amino acids of exon XI (439-443) which are located in the 6th intracellular loop. COS-7 cells transfected with a vector expressing the mutant del-(439-443) NIS failed to concentrate iodide, suggesting that the mutation was the direct cause of the ITD in this patient. CONCLUSION: In conclusion we describe the first Italian case of congenital hypothyroidism due to a new deletion in the NIS gene.
{"CellLine": ["COS-7"], "Chemical": ["radioiodide", "I-", "iodide", "L-thyroxine"], "Disease": ["inability of the thyroid", "ITD", "nodular goitre", "hypothyroidism", "Congenital hypothyroidism", "Iodide transport defect", "congenital hypothyroidism"], "Gene": ["sodium/iodide symporter", "NIS", "TSH"], "Species": ["woman", "human", "PATIENT", "patient"], "Variant": ["insertion of 15 nt duplicating the first 15", "lacking the five terminal amino acids of exon XI (439-443)", "del-(439-443)", "deletion of the coding sequence (nt 1314 through nt 1328)"]}
science.biored_ner.0
[ { "content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nCongenital hypothyroidism due to a new deletion in the sodium/iodide symporter protein. OBJECTIVE: Iodide transport defect (ITD) is a rare disorder characterised by an inability of the thyroid to maintain an iodide gradient across the basolateral membrane of thyroid follicular cells, that often results in congenital hypothyroidism. When present the defect is also found in the salivary glands and gastric mucosa and it has been shown to arise from abnormalities of the sodium/iodide symporter (NIS). PATIENT: We describe a woman with hypothyroidism identified at the 3rd month of life. The diagnosis of ITD was suspected because of nodular goitre, and little if any iodide uptake by the thyroid and salivary glands. Treatment with iodide partially corrected the hypothyroidism; however, long-term substitution therapy with L-thyroxine was started. MEASUREMENTS: Thyroid radioiodide uptake was only 1.4% and 0.3% at 1 and 24 h after the administration of recombinant human TSH. The saliva to plasma I- ratio was 1.1 indicating that the inability of the thyroid gland to concentrate I- was also present in the salivary glands. RESULTS: Analysis of the patient's NIS gene revealed a 15 nucleotide (nt) deletion of the coding sequence (nt 1314 through nt 1328) and the insertion of 15 nt duplicating the first 15 nt of the adjacent intron. The patient was homozygous for this insertion/deletion, while both consanguineous parents were heterozygous. This deletion predicts the production of a protein lacking the five terminal amino acids of exon XI (439-443) which are located in the 6th intracellular loop. COS-7 cells transfected with a vector expressing the mutant del-(439-443) NIS failed to concentrate iodide, suggesting that the mutation was the direct cause of the ITD in this patient. CONCLUSION: In conclusion we describe the first Italian case of congenital hypothyroidism due to a new deletion in the NIS gene.", "role": "user" }, { "content": "{\"CellLine\": [\"COS-7\"], \"Chemical\": [\"radioiodide\", \"I-\", \"iodide\", \"L-thyroxine\"], \"Disease\": [\"inability of the thyroid\", \"ITD\", \"nodular goitre\", \"hypothyroidism\", \"Congenital hypothyroidism\", \"Iodide transport defect\", \"congenital hypothyroidism\"], \"Gene\": [\"sodium/iodide symporter\", \"NIS\", \"TSH\"], \"Species\": [\"woman\", \"human\", \"PATIENT\", \"patient\"], \"Variant\": [\"insertion of 15 nt duplicating the first 15\", \"lacking the five terminal amino acids of exon XI (439-443)\", \"del-(439-443)\", \"deletion of the coding sequence (nt 1314 through nt 1328)\"]}", "role": "assistant" } ]
science.multicite_intent_classification
Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text. Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose. Text with Citation: Universal schema, in particular, has found impressive accuracy gains by (1) treating the distant-supervision as a knowledge-base (KB) containing both structured relations such as bornIn * First two authors contributed equally to the paper. and surface form relations such as "was born in" extracted from text, and (2) by completing the entries in such a KB using joint and compact encoding of the dependencies between the relations <cite>(Riedel et al., 2013</cite>; Fan et al., 2014; .
["Background"]
science.multicite_intent_classification.1180
[ { "content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nUniversal schema, in particular, has found impressive accuracy gains by (1) treating the distant-supervision as a knowledge-base (KB) containing both structured relations such as bornIn * First two authors contributed equally to the paper. and surface form relations such as \"was born in\" extracted from text, and (2) by completing the entries in such a KB using joint and compact encoding of the dependencies between the relations <cite>(Riedel et al., 2013</cite>; Fan et al., 2014; .", "role": "user" }, { "content": "[\"Background\"]", "role": "assistant" } ]
science.biored_ner
Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]. Ensure the output is strictly the JSON object. Source Text: The influence of the time interval between monoHER and doxorubicin administration on the protection against doxorubicin-induced cardiotoxicity in mice. PURPOSE: Despite its well-known cardiotoxicity, the anthracyclin doxorubicin (DOX) continues to be an effective and widely used chemotherapeutic agent. DOX-induced cardiac damage presumably results from the formation of free radicals by DOX. Reactive oxygen species particularly affect the cardiac myocytes because these cells seem to have a relatively poor antioxidant defense system. The semisynthetic flavonoid monohydroxyethylrutoside (monoHER) showed cardioprotection against DOX-induced cardiotoxicity through its radical scavenging and iron chelating properties. Because of the relatively short final half-life of monoHER (about 30 min), it is expected that the time interval between monoHER and DOX might be of influence on the cardioprotective effect of monoHER. Therefore, the aim of the present study was to investigate this possible effect. METHODS: Six groups of 6 BALB/c mice were treated with saline, DOX alone or DOX (4 mg/kg i.v.) preceded by monoHER (500 mg/kg i.p.) with an interval of 10, 30, 60 or 120 min. After a 6-week treatment period and additional observation for 2 weeks, the mice were sacrificed. Their cardiac tissues were processed for light microscopy, after which cardiomyocyte damage was evaluated according to Billingham (in Cancer Treat Rep 62(6):865-872, 1978). Microscopic evaluation revealed that treatment with DOX alone induced significant cardiac damage in comparison to the saline control group (P<0.001). RESULTS: The number of damaged cardiomyocytes was 9.6-fold (95% CI 4.4-21.0) higher in mice treated with DOX alone than that in animals of the control group. The ratio of aberrant cardiomyocytes in mice treated with DOX preceded by monoHER and those in mice treated with saline ranged from 1.6 to 2.8 (mean 2.2, 95% CI 1.2-4.1, P=0.019). The mean protective effect by adding monoHER before DOX led to a significant 4.4-fold reduction (P<0.001, 95% CI 2.3-8.2) of abnormal cardiomyocytes. This protective effect did not depend on the time interval between monoHER and DOX administration (P=0.345). CONCLUSION: The results indicate that in an outpatient clinical setting monoHER may be administered shortly before DOX.
{"CellLine": [], "Chemical": ["oxygen", "flavonoid", "DOX", "monoHER", "monohydroxyethylrutoside", "doxorubicin", "anthracyclin", "iron"], "Disease": ["cardiomyocyte damage", "cardiotoxicity", "cardiac damage"], "Gene": [], "Species": ["mice", "outpatient"], "Variant": []}
science.biored_ner.80
[ { "content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nThe influence of the time interval between monoHER and doxorubicin administration on the protection against doxorubicin-induced cardiotoxicity in mice. PURPOSE: Despite its well-known cardiotoxicity, the anthracyclin doxorubicin (DOX) continues to be an effective and widely used chemotherapeutic agent. DOX-induced cardiac damage presumably results from the formation of free radicals by DOX. Reactive oxygen species particularly affect the cardiac myocytes because these cells seem to have a relatively poor antioxidant defense system. The semisynthetic flavonoid monohydroxyethylrutoside (monoHER) showed cardioprotection against DOX-induced cardiotoxicity through its radical scavenging and iron chelating properties. Because of the relatively short final half-life of monoHER (about 30 min), it is expected that the time interval between monoHER and DOX might be of influence on the cardioprotective effect of monoHER. Therefore, the aim of the present study was to investigate this possible effect. METHODS: Six groups of 6 BALB/c mice were treated with saline, DOX alone or DOX (4 mg/kg i.v.) preceded by monoHER (500 mg/kg i.p.) with an interval of 10, 30, 60 or 120 min. After a 6-week treatment period and additional observation for 2 weeks, the mice were sacrificed. Their cardiac tissues were processed for light microscopy, after which cardiomyocyte damage was evaluated according to Billingham (in Cancer Treat Rep 62(6):865-872, 1978). Microscopic evaluation revealed that treatment with DOX alone induced significant cardiac damage in comparison to the saline control group (P<0.001). RESULTS: The number of damaged cardiomyocytes was 9.6-fold (95% CI 4.4-21.0) higher in mice treated with DOX alone than that in animals of the control group. The ratio of aberrant cardiomyocytes in mice treated with DOX preceded by monoHER and those in mice treated with saline ranged from 1.6 to 2.8 (mean 2.2, 95% CI 1.2-4.1, P=0.019). The mean protective effect by adding monoHER before DOX led to a significant 4.4-fold reduction (P<0.001, 95% CI 2.3-8.2) of abnormal cardiomyocytes. This protective effect did not depend on the time interval between monoHER and DOX administration (P=0.345). CONCLUSION: The results indicate that in an outpatient clinical setting monoHER may be administered shortly before DOX.", "role": "user" }, { "content": "{\"CellLine\": [], \"Chemical\": [\"oxygen\", \"flavonoid\", \"DOX\", \"monoHER\", \"monohydroxyethylrutoside\", \"doxorubicin\", \"anthracyclin\", \"iron\"], \"Disease\": [\"cardiomyocyte damage\", \"cardiotoxicity\", \"cardiac damage\"], \"Gene\": [], \"Species\": [\"mice\", \"outpatient\"], \"Variant\": []}", "role": "assistant" } ]
science.bioasq_list_qa
Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array. Context: Focal segmental glomerulosclerosis and mild intellectual disability in a patient with a novel de novo truncating TRIM8 mutation. Mutations in the TRIM8 gene have been described in patients with severe developmental delay, intellectual disability and epilepsy. Only six patients have been described to date. All the previous mutations were truncating variants clustered in the C-terminus of the protein. A previous patient with TRIM8-related epileptic encephalopathy was reported to have nephrotic syndrome. Here we describe the clinical, radiological and histological features of an 8-year-old male patient with a TRIM8 mutation who, in contrast to previous patients, had only mild intellectual disability and well-controlled epilepsy. The patient was found to have proteinuria at 2 years of age. Renal biopsy findings were suggestive of focal segmental glomerulosclerosis. His kidney function declined and peritoneal dialysis was started at 5 years of age. He underwent renal transplant at 7 years of age. Trio-based whole genome sequencing identified a novel de novo heterozygous frameshift mutation in TRIM8 (NM_030912.2) c.1198_1220del, p.(Tyr400ArgfsTer2). This patient is further evidence that TRIM8 mutations cause a syndrome with both neurological and renal features. Our findings suggest the spectrum of TRIM8-related disease may be wider than previously thought with the possibility of milder neurodevelopmental problems and/or a more severe, progressive renal phenotype. We highlight the need for proteinuria screening in patients with TRIM8 mutations. Question: Which conditions are manifested by TRIM8 mutations?
["Focal segmental glomerulosclerosis", "Severe developmental delay", "Intellectual disability", "Epilepsy"]
science.bioasq_list_qa.724
[ { "content": "Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.\n\nContext: Focal segmental glomerulosclerosis and mild intellectual disability in a patient with a novel de novo truncating TRIM8 mutation.\nMutations in the TRIM8 gene have been described in patients with severe developmental delay, intellectual disability and epilepsy. Only six patients have been described to date. All the previous mutations were truncating variants clustered in the C-terminus of the protein. A previous patient with TRIM8-related epileptic encephalopathy was reported to have nephrotic syndrome. Here we describe the clinical, radiological and histological features of an 8-year-old male patient with a TRIM8 mutation who, in contrast to previous patients, had only mild intellectual disability and well-controlled epilepsy. The patient was found to have proteinuria at 2 years of age. Renal biopsy findings were suggestive of focal segmental glomerulosclerosis. His kidney function declined and peritoneal dialysis was started at 5 years of age. He underwent renal transplant at 7 years of age. Trio-based whole genome sequencing identified a novel de novo heterozygous frameshift mutation in TRIM8 (NM_030912.2) c.1198_1220del, p.(Tyr400ArgfsTer2). This patient is further evidence that TRIM8 mutations cause a syndrome with both neurological and renal features. Our findings suggest the spectrum of TRIM8-related disease may be wider than previously thought with the possibility of milder neurodevelopmental problems and/or a more severe, progressive renal phenotype. We highlight the need for proteinuria screening in patients with TRIM8 mutations.\n\nQuestion: Which conditions are manifested by TRIM8 mutations?", "role": "user" }, { "content": "[\"Focal segmental glomerulosclerosis\", \"Severe developmental delay\", \"Intellectual disability\", \"Epilepsy\"]", "role": "assistant" } ]
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: Migraine with aura is associated with ischemic stroke. Research Abstract: OBJECTIVE To evaluate the association between migraine and cardiovascular disease, including stroke, myocardial infarction, and death due to cardiovascular disease. DESIGN Systematic review and meta-analysis. DATA SOURCES Electronic databases (PubMed, Embase, Cochrane Library) and reference lists of included studies and reviews published until January 2009. Selection criteria Case-control and cohort studies investigating the association between any migraine or specific migraine subtypes and cardiovascular disease. Review methods Two investigators independently assessed eligibility of identified studies in a two step approach. Disagreements were resolved by consensus. Studies were grouped according to a priori categories on migraine and cardiovascular disease. DATA EXTRACTION Two investigators extracted data. Pooled relative risks and 95% confidence intervals were calculated. RESULTS Studies were heterogeneous for participant characteristics and definition of cardiovascular disease. Nine studies investigated the association between any migraine and ischaemic stroke (pooled relative risk 1.73, 95% confidence interval 1.31 to 2.29). Additional analyses indicated a significantly higher risk among people who had migraine with aura (2.16, 1.53 to 3.03) compared with people who had migraine without aura (1.23, 0.90 to 1.69; meta-regression for aura status P=0.02). Furthermore, results suggested a greater risk among women (2.08, 1.13 to 3.84) compared with men (1.37, 0.89 to 2.11). Age less than 45 years, smoking, and oral contraceptive use further increased the risk. Eight studies investigated the association between migraine and myocardial infarction (1.12, 0.95 to 1.32) and five between migraine and death due to cardiovascular disease (1.03, 0.79 to 1.34). Only one study investigated the association between women who had migraine with aura and myocardial infarction and death due to cardiovascular disease, showing a twofold increased risk. CONCLUSION Migraine is associated with a twofold increased risk of ischaemic stroke, which is only apparent among people who have migraine with aura. Our results also suggest a higher risk among women and risk was further magnified for people with migraine who were aged less than 45, smokers, and women who used oral contraceptives. We did not find an overall association between any migraine and myocardial infarction or death due to cardiovascular disease. Too few studies are available to reliably evaluate the impact of modifying factors, such as migraine aura, on these associations.
{"verdict": "SUPPORT", "evidence": ["Additional analyses indicated a significantly higher risk among people who had migraine with aura (2.16, 1.53 to 3.03) compared with people who had migraine without aura (1.23, 0.90 to 1.69; meta-regression for aura status P=0.02).", "CONCLUSION Migraine is associated with a twofold increased risk of ischaemic stroke, which is only apparent among people who have migraine with aura."]}
science.scifact_entailment.509
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nMigraine with aura is associated with ischemic stroke.\n\nResearch Abstract:\nOBJECTIVE To evaluate the association between migraine and cardiovascular disease, including stroke, myocardial infarction, and death due to cardiovascular disease.\nDESIGN Systematic review and meta-analysis.\nDATA SOURCES Electronic databases (PubMed, Embase, Cochrane Library) and reference lists of included studies and reviews published until January 2009.\nSelection criteria Case-control and cohort studies investigating the association between any migraine or specific migraine subtypes and cardiovascular disease.\nReview methods Two investigators independently assessed eligibility of identified studies in a two step approach.\nDisagreements were resolved by consensus.\nStudies were grouped according to a priori categories on migraine and cardiovascular disease.\nDATA EXTRACTION Two investigators extracted data.\nPooled relative risks and 95% confidence intervals were calculated.\nRESULTS Studies were heterogeneous for participant characteristics and definition of cardiovascular disease.\nNine studies investigated the association between any migraine and ischaemic stroke (pooled relative risk 1.73, 95% confidence interval 1.31 to 2.29).\nAdditional analyses indicated a significantly higher risk among people who had migraine with aura (2.16, 1.53 to 3.03) compared with people who had migraine without aura (1.23, 0.90 to 1.69; meta-regression for aura status P=0.02).\nFurthermore, results suggested a greater risk among women (2.08, 1.13 to 3.84) compared with men (1.37, 0.89 to 2.11).\nAge less than 45 years, smoking, and oral contraceptive use further increased the risk.\nEight studies investigated the association between migraine and myocardial infarction (1.12, 0.95 to 1.32) and five between migraine and death due to cardiovascular disease (1.03, 0.79 to 1.34).\nOnly one study investigated the association between women who had migraine with aura and myocardial infarction and death due to cardiovascular disease, showing a twofold increased risk.\nCONCLUSION Migraine is associated with a twofold increased risk of ischaemic stroke, which is only apparent among people who have migraine with aura.\nOur results also suggest a higher risk among women and risk was further magnified for people with migraine who were aged less than 45, smokers, and women who used oral contraceptives.\nWe did not find an overall association between any migraine and myocardial infarction or death due to cardiovascular disease.\nToo few studies are available to reliably evaluate the impact of modifying factors, such as migraine aura, on these associations.", "role": "user" }, { "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"Additional analyses indicated a significantly higher risk among people who had migraine with aura (2.16, 1.53 to 3.03) compared with people who had migraine without aura (1.23, 0.90 to 1.69; meta-regression for aura status P=0.02).\", \"CONCLUSION Migraine is associated with a twofold increased risk of ischaemic stroke, which is only apparent among people who have migraine with aura.\"]}", "role": "assistant" } ]
science.biored_ner
Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]. Ensure the output is strictly the JSON object. Source Text: Novel somatic MEN1 gene alterations in sporadic primary hyperparathyroidism and correlation with clinical characteristics. Primary hyperparathyroidism (pHPT) is a common endocrine disease that in more than 95% of cases is sporadic and only in some cases is caused by inherited disorders, isolated or as part of multiple endocrine neoplasia (MEN1 and 2). Somatic mutations of MEN1 gene have also been described in sporadic parathyroid tumors. In our study, we examined the presence of alterations in MEN1 gene in a series of 39 patients who had undergone surgery for sporadic pHPT (35 with parathyroid adenoma or hyperplasia, 4 with a carcinoma). A genotype-phenotype correlation was also analysed. After DNA extraction from paraffin-embedded tissues, we amplified by PCR and sequenced the exons 2-10 of the MEN1 gene. Somatic MEN1 mutations were detected in 6 of the 35 patients with a benign parathyroid lesion examined (17.1%), whereas no alterations were found in the carcinomas. Four novel MEN1 gene mutations were identified as follows: one frameshift mutation (222insT, exon 2), one frameshift deletion (912delTA, exon 5), one in-frame deletion (835del18, exon 4) and one missense mutation (P291A, exon 6). In addition, one missense mutation (L89R, exon 2) and one nonsense mutation (Q536X, exon 10) were previously reported. Moreover, two polymorphisms were also found: one allele carried a R171Q polymorphism (1/39 tumors), while a D418D polymorphism (GAC/GAT) was found in 15 and 8 tumors in hetero (CT) and homozygosity (TT), respectively. In no case (mutations and/or polymorphisms) did we find a genotype-phenotype correlation. In conclusion, our data demonstrate the presence of somatic alterations of the MEN1 tumor suppressor gene in about one fifth of benign sporadic parathyroid tumors. The absence of a genotype-phenotype correlation, however, suggests the involvement of other genetic/epigenetic factors for the full expression of the disease.
{"CellLine": [], "Chemical": ["paraffin"], "Disease": ["primary hyperparathyroidism", "inherited disorders", "MEN1 and 2", "parathyroid adenoma or hyperplasia", "pHPT", "parathyroid tumors", "endocrine disease", "multiple endocrine neoplasia", "carcinoma", "Primary hyperparathyroidism", "benign parathyroid lesion", "tumor", "carcinomas", "tumors"], "Gene": ["MEN1"], "Species": ["patients"], "Variant": ["R171Q", "L89R", "GAC/GAT", "D418D", "222insT", "P291A", "Q536X", "835del18", "912delTA"]}
science.biored_ner.56
[ { "content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nNovel somatic MEN1 gene alterations in sporadic primary hyperparathyroidism and correlation with clinical characteristics. Primary hyperparathyroidism (pHPT) is a common endocrine disease that in more than 95% of cases is sporadic and only in some cases is caused by inherited disorders, isolated or as part of multiple endocrine neoplasia (MEN1 and 2). Somatic mutations of MEN1 gene have also been described in sporadic parathyroid tumors. In our study, we examined the presence of alterations in MEN1 gene in a series of 39 patients who had undergone surgery for sporadic pHPT (35 with parathyroid adenoma or hyperplasia, 4 with a carcinoma). A genotype-phenotype correlation was also analysed. After DNA extraction from paraffin-embedded tissues, we amplified by PCR and sequenced the exons 2-10 of the MEN1 gene. Somatic MEN1 mutations were detected in 6 of the 35 patients with a benign parathyroid lesion examined (17.1%), whereas no alterations were found in the carcinomas. Four novel MEN1 gene mutations were identified as follows: one frameshift mutation (222insT, exon 2), one frameshift deletion (912delTA, exon 5), one in-frame deletion (835del18, exon 4) and one missense mutation (P291A, exon 6). In addition, one missense mutation (L89R, exon 2) and one nonsense mutation (Q536X, exon 10) were previously reported. Moreover, two polymorphisms were also found: one allele carried a R171Q polymorphism (1/39 tumors), while a D418D polymorphism (GAC/GAT) was found in 15 and 8 tumors in hetero (CT) and homozygosity (TT), respectively. In no case (mutations and/or polymorphisms) did we find a genotype-phenotype correlation. In conclusion, our data demonstrate the presence of somatic alterations of the MEN1 tumor suppressor gene in about one fifth of benign sporadic parathyroid tumors. The absence of a genotype-phenotype correlation, however, suggests the involvement of other genetic/epigenetic factors for the full expression of the disease.", "role": "user" }, { "content": "{\"CellLine\": [], \"Chemical\": [\"paraffin\"], \"Disease\": [\"primary hyperparathyroidism\", \"inherited disorders\", \"MEN1 and 2\", \"parathyroid adenoma or hyperplasia\", \"pHPT\", \"parathyroid tumors\", \"endocrine disease\", \"multiple endocrine neoplasia\", \"carcinoma\", \"Primary hyperparathyroidism\", \"benign parathyroid lesion\", \"tumor\", \"carcinomas\", \"tumors\"], \"Gene\": [\"MEN1\"], \"Species\": [\"patients\"], \"Variant\": [\"R171Q\", \"L89R\", \"GAC/GAT\", \"D418D\", \"222insT\", \"P291A\", \"Q536X\", \"835del18\", \"912delTA\"]}", "role": "assistant" } ]
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: Post-transcriptional handling of mitochondrial transcripts occurs in mitochondrial RNA granules. Research Abstract: RNA-binding proteins are at the heart of posttranscriptional gene regulation, coordinating the processing, storage, and handling of cellular RNAs. We show here that GRSF1, previously implicated in the binding and selective translation of influenza mRNAs, is targeted to mitochondria where it forms granules that colocalize with foci of newly synthesized mtRNA next to mitochondrial nucleoids. GRSF1 preferentially binds RNAs transcribed from three contiguous genes on the light strand of mtDNA, the ND6 mRNA, and the long noncoding RNAs for cytb and ND5, each of which contains multiple consensus binding sequences. RNAi-mediated knockdown of GRSF1 leads to alterations in mitochondrial RNA stability, abnormal loading of mRNAs and lncRNAs on the mitochondrial ribosome, and impaired ribosome assembly. This results in a specific protein synthesis defect and a failure to assemble normal amounts of the oxidative phosphorylation complexes. These data implicate GRSF1 as a key regulator of posttranscriptional mitochondrial gene expression.
{"verdict": "SUPPORT", "evidence": ["We show here that GRSF1, previously implicated in the binding and selective translation of influenza mRNAs, is targeted to mitochondria where it forms granules that colocalize with foci of newly synthesized mtRNA next to mitochondrial nucleoids.", "These data implicate GRSF1 as a key regulator of posttranscriptional mitochondrial gene expression."]}
science.scifact_entailment.612
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nPost-transcriptional handling of mitochondrial transcripts occurs in mitochondrial RNA granules.\n\nResearch Abstract:\nRNA-binding proteins are at the heart of posttranscriptional gene regulation, coordinating the processing, storage, and handling of cellular RNAs.\nWe show here that GRSF1, previously implicated in the binding and selective translation of influenza mRNAs, is targeted to mitochondria where it forms granules that colocalize with foci of newly synthesized mtRNA next to mitochondrial nucleoids.\nGRSF1 preferentially binds RNAs transcribed from three contiguous genes on the light strand of mtDNA, the ND6 mRNA, and the long noncoding RNAs for cytb and ND5, each of which contains multiple consensus binding sequences.\nRNAi-mediated knockdown of GRSF1 leads to alterations in mitochondrial RNA stability, abnormal loading of mRNAs and lncRNAs on the mitochondrial ribosome, and impaired ribosome assembly.\nThis results in a specific protein synthesis defect and a failure to assemble normal amounts of the oxidative phosphorylation complexes.\nThese data implicate GRSF1 as a key regulator of posttranscriptional mitochondrial gene expression.", "role": "user" }, { "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"We show here that GRSF1, previously implicated in the binding and selective translation of influenza mRNAs, is targeted to mitochondria where it forms granules that colocalize with foci of newly synthesized mtRNA next to mitochondrial nucleoids.\", \"These data implicate GRSF1 as a key regulator of posttranscriptional mitochondrial gene expression.\"]}", "role": "assistant" } ]
science.scifact_entailment
Determine the relationship between them and generate a JSON response. Scientific Claim: H4 G94P proteins induce degradation of free histones via Rad53-dependent mechanisms. Research Abstract: Histone levels are tightly regulated to prevent harmful effects such as genomic instability and hypersensitivity to DNA-damaging agents due to the accumulation of these highly basic proteins when DNA replication slows down or stops. Although chromosomal histones are stable, excess (non-chromatin bound) histones are rapidly degraded in a Rad53 (radiation sensitive 53) kinase-dependent manner in Saccharomyces cerevisiae. Here we demonstrate that excess histones associate with Rad53 in vivo and seem to undergo modifications such as tyrosine phosphorylation and polyubiquitylation, before their proteolysis by the proteasome. We have identified the Tyr 99 residue of histone H3 as being critical for the efficient ubiquitylation and degradation of this histone. We have also identified the ubiquitin conjugating enzymes (E2) Ubc4 and Ubc5, as well as the ubiquitin ligase (E3) Tom1 (temperature dependent organization in mitotic nucleus 1), as enzymes involved in the ubiquitylation of excess histones. Regulated histone proteolysis has major implications for the maintenance of epigenetic marks on chromatin, genomic stability and the packaging of sperm DNA.
{"verdict": "NEI", "evidence": []}
science.scifact_entailment.317
[ { "content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nH4 G94P proteins induce degradation of free histones via Rad53-dependent mechanisms.\n\nResearch Abstract:\nHistone levels are tightly regulated to prevent harmful effects such as genomic instability and hypersensitivity to DNA-damaging agents due to the accumulation of these highly basic proteins when DNA replication slows down or stops.\nAlthough chromosomal histones are stable, excess (non-chromatin bound) histones are rapidly degraded in a Rad53 (radiation sensitive 53) kinase-dependent manner in Saccharomyces cerevisiae.\nHere we demonstrate that excess histones associate with Rad53 in vivo and seem to undergo modifications such as tyrosine phosphorylation and polyubiquitylation, before their proteolysis by the proteasome.\nWe have identified the Tyr 99 residue of histone H3 as being critical for the efficient ubiquitylation and degradation of this histone.\nWe have also identified the ubiquitin conjugating enzymes (E2) Ubc4 and Ubc5, as well as the ubiquitin ligase (E3) Tom1 (temperature dependent organization in mitotic nucleus 1), as enzymes involved in the ubiquitylation of excess histones.\nRegulated histone proteolysis has major implications for the maintenance of epigenetic marks on chromatin, genomic stability and the packaging of sperm DNA.", "role": "user" }, { "content": "{\"verdict\": \"NEI\", \"evidence\": []}", "role": "assistant" } ]