dataset (string, 9 classes) | input (string, 351 to 18.6k chars) | output (string, 8 to 7.48k chars) | id (string, 20 to 47 chars) | messages (list, 2 items) |
---|---|---|---|---|
science.qasper_abstractive_qa | Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.
Document Content:
Learning Data Manipulation for Augmentation and Weighting
Manipulating data, such as weighting data examples or augmenting with new instances, has been increasingly used to improve model training. Previous work has studied various rule- or learning-based approaches designed for specific types of data manipulation. In this work, we propose a new method that supports learning different manipulation schemes with the same gradient-based algorithm. Our approach builds upon a recent connection of supervised learning and reinforcement learning (RL), and adapts an off-the-shelf reward learning algorithm from RL for joint data manipulation learning and model training. Different parameterization of the ``data reward'' function instantiates different manipulation schemes. We showcase data augmentation that learns a text transformation network, and data weighting that dynamically adapts the data sample importance. Experiments show the resulting algorithms significantly improve the image and text classification performance in low data regime and class-imbalance problems.
Related Work
Rich types of data manipulation have been increasingly used in modern machine learning pipelines. Previous works have each typically focused on a particular manipulation type. Data augmentation that perturbs examples without changing the labels is widely used, especially in the vision BIBREF10, BIBREF11 and speech BIBREF12, BIBREF13 domains. Common heuristic-based methods on images include cropping, mirroring, rotation BIBREF11, and so forth. Recent work has developed automated augmentation approaches BIBREF3, BIBREF2, BIBREF14, BIBREF15, BIBREF16. BIBREF17 additionally use large-scale unlabeled data. BIBREF3, BIBREF2 learn to induce the composition of data transformation operators. Instead of treating data augmentation as a policy in reinforcement learning BIBREF3, we formulate manipulation as a reward function and use efficient stochastic gradient descent to learn the manipulation parameters. Text data augmentation has also achieved impressive success, such as contextual augmentation BIBREF18, BIBREF19, back-translation BIBREF20, and manual approaches BIBREF21, BIBREF22. In addition to perturbing the input text as in classification tasks, text generation problems also expose opportunities to add noise in the output text, such as BIBREF23, BIBREF24. Recent work BIBREF6 shows that output noising in sequence generation can be treated as an intermediate approach between supervised learning and reinforcement learning, and develops a new sequence learning algorithm that interpolates across the spectrum of existing algorithms. We instantiate our approach for text contextual augmentation as in BIBREF18, BIBREF19, but enhance the previous work by additionally fine-tuning the augmentation network jointly with the target model.
Data weighting has been used in various algorithms, such as AdaBoost BIBREF0, self-paced learning BIBREF25, hard-example mining BIBREF26, and others BIBREF27, BIBREF28. These algorithms largely define sample weights based on training loss. Recent work BIBREF5, BIBREF29 learns a separate network to predict sample weights. Of particular relevance to our work is BIBREF4, which induces sample weights using a validation set. The data weighting mechanism instantiated by our framework has a key difference in that sample weights are treated as parameters that are updated iteratively, instead of re-estimated from scratch at each step. We show improved performance of our approach. In addition, our data manipulation approach is derived from a different perspective, namely reward learning, instead of meta-learning as in BIBREF4.
Another popular type of data manipulation involves data synthesis, which creates entire artificial samples from scratch. GAN-based approaches have achieved impressive results for synthesizing conditional image data BIBREF30, BIBREF31. In the text domain, controllable text generation BIBREF32 presents a way of co-training the data generator and classifier in a cyclic manner within a joint VAE BIBREF33 and wake-sleep BIBREF34 framework. It is interesting to explore the instantiation of the present approach for adaptive data synthesis in the future.
Background ::: Equivalence between Data and Reward
The recent work BIBREF6 introduced a unifying perspective of reformulating maximum likelihood supervised learning as a special instance of a policy optimization framework. In this perspective, data examples providing supervision signals are equivalent to a specialized reward function. Since the original framework BIBREF6 was derived for sequence generation problems, here we present a slightly adapted formulation for our context of data manipulation.
To connect the maximum likelihood supervised learning with policy optimization, consider the model $p_\theta (y|\mathbf {x})$ as a policy that takes “action” $y$ given the “state” $\mathbf {x}$. Let $R(\mathbf {x}, y |\mathcal {D})\in \mathbb {R}$ denote a reward function, and $p(\mathbf {x})$ be the empirical data distribution which is known given $\mathcal {D}$. Further assume a variational distribution $q(\mathbf {x}, y)$ that factorizes as $q(\mathbf {x},y)=p(\mathbf {x})q(y|\mathbf {x})$. A variational policy optimization objective is then written as:
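The display equation itself is omitted in this excerpt; a reconstruction consistent with the description below (the exact convention for the weights $\alpha$ and $\beta$ is assumed, not quoted from the original) is:

$$\mathcal{L}(q, \mathbf{\theta}) = \mathbb{E}_{q}\big[R(\mathbf{x}, y \mid \mathcal{D})\big] - \alpha\, \text{KL}\big(q(y \mid \mathbf{x}) \,\Vert\, p_\theta(y \mid \mathbf{x})\big) + \beta\, \text{H}(q)$$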
where $\text{KL}(\cdot \Vert \cdot )$ is the Kullback–Leibler divergence; $\text{H}(\cdot )$ is the Shannon entropy; and $\alpha ,\beta >0$ are balancing weights. The objective is of the same form as the RL-as-inference formalism of policy optimization BIBREF35, BIBREF36, BIBREF37. Intuitively, the objective maximizes the expected reward under $q$, and enforces the model $p_\theta $ to stay close to $q$, with a maximum entropy regularization over $q$. The problem is solved with an EM procedure that optimizes $q$ and $\mathbf {\theta }$ in an alternating fashion:
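The E-step and M-step updates are likewise omitted here. One standard form, obtained by maximizing the objective above over $q$ and $\mathbf{\theta}$ in turn (a reconstruction, not a quotation of the original), is:

$$\text{E-step:}\quad q^{n+1}(y \mid \mathbf{x}) = p_{\theta^n}(y \mid \mathbf{x})^{\alpha/(\alpha+\beta)}\, \exp\big\{ R(\mathbf{x}, y \mid \mathcal{D})/(\alpha+\beta) \big\} \,/\, Z$$

$$\text{M-step:}\quad \mathbf{\theta}^{n+1} = \operatorname{arg\,max}_{\mathbf{\theta}}\; \mathbb{E}_{p(\mathbf{x})\, q^{n+1}(y \mid \mathbf{x})}\big[ \log p_\theta(y \mid \mathbf{x}) \big]$$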
where $Z$ is the normalization term. With the established framework, it is easy to show that the above optimization procedure reduces to maximum likelihood learning by taking $\alpha \rightarrow 0, \beta =1$, and the reward function:
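The omitted reward definition, reconstructed directly from the description that follows, is:

$$R_\delta(\mathbf{x}, y \mid \mathcal{D}) = \begin{cases} 1 & \text{if } (\mathbf{x}, y) \in \mathcal{D} \\ -\infty & \text{otherwise} \end{cases}$$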
That is, a sample $(\mathbf {x}, y)$ receives a unit reward only when it matches a training example in the dataset, while the reward is negative infinity in all other cases. To make the equivalence to maximum likelihood learning clearer, note that the above M-step now reduces to
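Written out to match the discussion below (again a reconstruction rather than a quotation), the reduced M-step is:

$$\max_{\mathbf{\theta}}\; \mathbb{E}_{\,p(\mathbf{x}) \exp\lbrace R_\delta \rbrace / Z}\big[ \log p_\theta(y \mid \mathbf{x}) \big]$$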
where the joint distribution $p(\mathbf {x})\exp \lbrace R_\delta \rbrace /Z$ equals the empirical data distribution, which means the M-step is in fact maximizing the data log-likelihood of the model $p_\theta $.
Learning Data Manipulation ::: Instantiations: Augmentation & Weighting
As a case study, we show two parameterizations of $R_\phi $ which instantiate distinct data manipulation schemes. The first example learns augmentation for text data, a domain that has been less studied in the literature compared to vision and speech BIBREF18, BIBREF38. The second instance focuses on automated data weighting, which is applicable to any data domains.
Experiments
We empirically validate the proposed data manipulation approach through extensive experiments on learning augmentation and weighting. We study both text and image classification, in two difficult settings of low data regime and imbalanced labels.
Experiments ::: Low Data Regime ::: Setup
For text classification, we use the popular benchmark datasets, including SST-5 for 5-class sentence sentiment BIBREF41, IMDB for binary movie review sentiment BIBREF42, and TREC for 6-class question types BIBREF43. We subsample a small training set on each task by randomly picking 40 instances for each class. We further create small validation sets, i.e., 2 instances per class for SST-5, and 5 instances per class for IMDB and TREC, respectively. The reason we use slightly more validation examples on IMDB and TREC is that the model can easily achieve 100% validation accuracy if the validation sets are too small. Thus, the SST-5 task has 210 labeled examples in total, while IMDB has 90 and TREC has 270. Such extremely small datasets pose significant challenges for learning deep neural networks. Since the manipulation parameters are trained using the small validation sets, to avoid possible overfitting we restrict the training to a small number (e.g., 5 or 10) of epochs. For image classification, we similarly create a small subset of the CIFAR10 data, which includes 40 instances per class for training, and 2 instances per class for validation.
Experiments ::: Low Data Regime ::: Results
Table TABREF26 shows the manipulation results on text classification. For data augmentation, our approach significantly improves over the base model on all the three datasets. Besides, compared to both the conventional synonym substitution and the approach that keeps the augmentation network fixed, our adaptive method that fine-tunes the augmentation network jointly with model training achieves superior results. Indeed, the heuristic-based synonym approach can sometimes harm the model performance (e.g., SST-5 and IMDB), as also observed in previous work BIBREF19, BIBREF18. This can be because the heuristic rules do not fit the task or datasets well. In contrast, learning-based augmentation has the advantage of adaptively generating useful samples to improve model training.
Table TABREF26 also shows the data weighting results. Our weight learning consistently improves over the base model and the latest weighting method BIBREF4. In particular, instead of re-estimating sample weights from scratch in each iteration BIBREF4, our approach treats the weights as manipulation parameters maintained throughout the training. We speculate that the parametric treatment can adapt weights more smoothly and provide historical information, which is beneficial in the small-data context.
It is interesting to see from Table TABREF26 that our augmentation method consistently outperforms the weighting method, showing that data augmentation can be a more suitable technique than data weighting for manipulating small-size data. Our approach provides the generality to instantiate diverse manipulation types and learn with the same single procedure.
To investigate the augmentation model and how the fine-tuning affects the augmentation results, we show in Figure TABREF27 the top-5 most probable word substitutions predicted by the augmentation model for two masked tokens. Comparing the results of epoch 1 and epoch 3, we can see the augmentation model evolves and dynamically adjusts its augmentation behavior as the training proceeds. Through fine-tuning, the model seems to make substitutions that are more coherent with the conditioning label and relevant to the original words (e.g., replacing the word “striking” with “bland” in epoch 1 vs. “charming” in epoch 3).
Table TABREF27 shows the data weighting results on image classification. We evaluate two settings with the ResNet-34 base model being initialized randomly or with pretrained weights, respectively. Our data weighting consistently improves over the base model and BIBREF4 regardless of the initialization.
Experiments ::: Imbalanced Labels ::: Setup
Though the methods are broadly applicable to multi-way classification problems, here we only study binary classification tasks for simplicity. For text classification, we use the SST-2 sentiment analysis benchmark BIBREF41; while for image, we select class 1 and 2 from CIFAR10 for binary classification. We use the same processing on both datasets to build the class-imbalance setting. Specifically, we randomly select 1,000 training instances of class 2, and vary the number of class-1 instances in $\lbrace 20, 50, 100\rbrace $. For each dataset, we use 10 validation examples in each class. Trained models are evaluated on the full binary-class test set.
Experiments ::: Imbalanced Labels ::: Results
Table TABREF29 shows the classification results on SST-2 with varying imbalance ratios. We can see our data weighting performs best across all settings. In particular, the improvement over the base model increases as the data gets more imbalanced, ranging from around 6 accuracy points on 100:1000 to over 20 accuracy points on 20:1000. Our method is again consistently better than BIBREF4, validating that the parametric treatment is beneficial. The proportion-based data weighting provides only limited improvement, showing the advantage of adaptive data weighting. The base model trained on the joint training-validation data for fixed steps fails to perform well, partly due to the lack of a proper mechanism for selecting steps.
Table TABREF30 shows the results on imbalanced CIFAR10 classification. Similarly, our method outperforms other comparison approaches. In contrast, the fixed proportion-based method sometimes harms the performance as in the 50:1000 and 100:1000 settings.
We also tested the text augmentation LM on the SST-2 imbalanced data. Interestingly, the augmentation tends to hinder model training and yields an accuracy of around 50% (random guessing). This is because the augmentation LM is first fit to the imbalanced data, which makes label preservation inaccurate and introduces a lot of noise during augmentation. Though a more carefully designed augmentation mechanism can potentially help with imbalanced classification (e.g., augmenting only the rare classes), the above observation further shows that the varying data manipulation schemes have different applicable scopes. Our approach is thus favorable as the single algorithm can be instantiated to learn different schemes.
Table 1: Accuracy of Data Manipulation on Text Classification. All results are averaged over 15 runs ± one standard deviation. The numbers in parentheses next to the dataset names indicate the size of the datasets. For example, (40+2) denotes 40 training instances and 2 validation instances per class.
Table 4: Accuracy of Data Weighting on Imbalanced CIFAR10. The first row shows the number of training examples in each of the two classes.
Query: How much is classification performance improved in experiments for low data regime and class-imbalance problems?
JSON Response: | {"answer": "Low data: SST-5, TREC, IMDB around 1-2 accuracy points better than baseline\nImbalanced labels: the improvement over the base model increases as the data gets more imbalanced, ranging from around 6 accuracy points on 100:1000 to over 20 accuracy points on 20:1000", "evidence": ["Table TABREF26 shows the manipulation results on text classification. For data augmentation, our approach significantly improves over the base model on all the three datasets. Besides, compared to both the conventional synonym substitution and the approach that keeps the augmentation network fixed, our adaptive method that fine-tunes the augmentation network jointly with model training achieves superior results. Indeed, the heuristic-based synonym approach can sometimes harm the model performance (e.g., SST-5 and IMDB), as also observed in previous work BIBREF19, BIBREF18. This can be because the heuristic rules do not fit the task or datasets well. In contrast, learning-based augmentation has the advantage of adaptively generating useful samples to improve model training.", "Table TABREF29 shows the classification results on SST-2 with varying imbalance ratios. We can see our data weighting performs best across all settings. In particular, the improvement over the base model increases as the data gets more imbalanced, ranging from around 6 accuracy points on 100:1000 to over 20 accuracy points on 20:1000. Our method is again consistently better than BIBREF4, validating that the parametric treatment is beneficial. The proportion-based data weighting provides only limited improvement, showing the advantage of adaptive data weighting. The base model trained on the joint training-validation data for fixed steps fails to perform well, partly due to the lack of a proper mechanism for selecting steps."]} | science.qasper_abstractive_qa.235.0 | [
{
"content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nLearning Data Manipulation for Augmentation and Weighting\n\nManipulating data, such as weighting data examples or augmenting with new instances, has been increasingly used to improve model training. Previous work has studied various rule- or learning-based approaches designed for specific types of data manipulation. In this work, we propose a new method that supports learning different manipulation schemes with the same gradient-based algorithm. Our approach builds upon a recent connection of supervised learning and reinforcement learning (RL), and adapts an off-the-shelf reward learning algorithm from RL for joint data manipulation learning and model training. Different parameterization of the ``data reward'' function instantiates different manipulation schemes. We showcase data augmentation that learns a text transformation network, and data weighting that dynamically adapts the data sample importance. Experiments show the resulting algorithms significantly improve the image and text classification performance in low data regime and class-imbalance problems.\n\nRelated Work\nRich types of data manipulation have been increasingly used in modern machine learning pipelines. Previous work each has typically focused on a particular manipulation type. Data augmentation that perturbs examples without changing the labels is widely used especially in vision BIBREF10, BIBREF11 and speech BIBREF12, BIBREF13 domains. Common heuristic-based methods on images include cropping, mirroring, rotation BIBREF11, and so forth. Recent work has developed automated augmentation approaches BIBREF3, BIBREF2, BIBREF14, BIBREF15, BIBREF16. BIBREF17 additionally use large-scale unlabeled data. BIBREF3, BIBREF2 learn to induce the composition of data transformation operators. Instead of treating data augmentation as a policy in reinforcement learning BIBREF3, we formulate manipulation as a reward function and use efficient stochastic gradient descent to learn the manipulation parameters. Text data augmentation has also achieved impressive success, such as contextual augmentation BIBREF18, BIBREF19, back-translation BIBREF20, and manual approaches BIBREF21, BIBREF22. In addition to perturbing the input text as in classification tasks, text generation problems expose opportunities to adding noise also in the output text, such as BIBREF23, BIBREF24. Recent work BIBREF6 shows output nosing in sequence generation can be treated as an intermediate approach in between supervised learning and reinforcement learning, and developed a new sequence learning algorithm that interpolates between the spectrum of existing algorithms. We instantiate our approach for text contextual augmentation as in BIBREF18, BIBREF19, but enhance the previous work by additionally fine-tuning the augmentation network jointly with the target model.\nData weighting has been used in various algorithms, such as AdaBoost BIBREF0, self-paced learning BIBREF25, hard-example mining BIBREF26, and others BIBREF27, BIBREF28. These algorithms largely define sample weights based on training loss. Recent work BIBREF5, BIBREF29 learns a separate network to predict sample weights. Of particular relevance to our work is BIBREF4 which induces sample weights using a validation set. 
The data weighting mechanism instantiated by our framework has a key difference in that samples weights are treated as parameters that are updated iteratively, instead of re-estimated from scratch at each step. We show improved performance of our approach. Besides, our data manipulation approach is derived based on a different perspective of reward learning, instead of meta-learning as in BIBREF4.\nAnother popular type of data manipulation involves data synthesis, which creates entire artificial samples from scratch. GAN-based approaches have achieved impressive results for synthesizing conditional image data BIBREF30, BIBREF31. In the text domain, controllable text generation BIBREF32 presents a way of co-training the data generator and classifier in a cyclic manner within a joint VAE BIBREF33 and wake-sleep BIBREF34 framework. It is interesting to explore the instantiation of the present approach for adaptive data synthesis in the future.\n\nBackground ::: Equivalence between Data and Reward\nThe recent work BIBREF6 introduced a unifying perspective of reformulating maximum likelihood supervised learning as a special instance of a policy optimization framework. In this perspective, data examples providing supervision signals are equivalent to a specialized reward function. Since the original framework BIBREF6 was derived for sequence generation problems, here we present a slightly adapted formulation for our context of data manipulation.\nTo connect the maximum likelihood supervised learning with policy optimization, consider the model $p_\\theta (y|\\mathbf {x})$ as a policy that takes “action” $y$ given the “state” $\\mathbf {x}$. Let $R(\\mathbf {x}, y |\\mathcal {D})\\in \\mathbb {R}$ denote a reward function, and $p(\\mathbf {x})$ be the empirical data distribution which is known given $\\mathcal {D}$. Further assume a variational distribution $q(\\mathbf {x}, y)$ that factorizes as $q(\\mathbf {x},y)=p(\\mathbf {x})q(y|\\mathbf {x})$. A variational policy optimization objective is then written as:\nwhere $\\text{KL}(\\cdot \\Vert \\cdot )$ is the Kullback–Leibler divergence; $\\text{H}(\\cdot )$ is the Shannon entropy; and $\\alpha ,\\beta >0$ are balancing weights. The objective is in the same form with the RL-as-inference formalism of policy optimization BIBREF35, BIBREF36, BIBREF37. Intuitively, the objective maximizes the expected reward under $q$, and enforces the model $p_\\theta $ to stay close to $q$, with a maximum entropy regularization over $q$. The problem is solved with an EM procedure that optimizes $q$ and $\\mathbf {\\theta }$ alternatingly:\nwhere $Z$ is the normalization term. With the established framework, it is easy to show that the above optimization procedure reduces to maximum likelihood learning by taking $\\alpha \\rightarrow 0, \\beta =1$, and the reward function:\nThat is, a sample $(\\mathbf {x}, y)$ receives a unit reward only when it matches a training example in the dataset, while the reward is negative infinite in all other cases. To make the equivalence to maximum likelihood learning clearer, note that the above M-step now reduces to\nwhere the joint distribution $p(\\mathbf {x})\\exp \\lbrace R_\\delta \\rbrace /Z$ equals the empirical data distribution, which means the M-step is in fact maximizing the data log-likelihood of the model $p_\\theta $.\n\nLearning Data Manipulation ::: Instantiations: Augmentation & Weighting\nAs a case study, we show two parameterizations of $R_\\phi $ which instantiate distinct data manipulation schemes. 
The first example learns augmentation for text data, a domain that has been less studied in the literature compared to vision and speech BIBREF18, BIBREF38. The second instance focuses on automated data weighting, which is applicable to any data domains.\n\nExperiments\nWe empirically validate the proposed data manipulation approach through extensive experiments on learning augmentation and weighting. We study both text and image classification, in two difficult settings of low data regime and imbalanced labels.\n\nExperiments ::: Low Data Regime ::: Setup\nFor text classification, we use the popular benchmark datasets, including SST-5 for 5-class sentence sentiment BIBREF41, IMDB for binary movie review sentiment BIBREF42, and TREC for 6-class question types BIBREF43. We subsample a small training set on each task by randomly picking 40 instances for each class. We further create small validation sets, i.e., 2 instances per class for SST-5, and 5 instances per class for IMDB and TREC, respectively. The reason we use slightly more validation examples on IMDB and TREC is that the model can easily achieve 100% validation accuracy if the validation sets are too small. Thus, the SST-5 task has 210 labeled examples in total, while IMDB has 90 labels and TREC has 270. Such extremely small datasets pose significant challenges for learning deep neural networks. Since the manipulation parameters are trained using the small validation sets, to avoid possible overfitting we restrict the training to small number (e.g., 5 or 10) of epochs. For image classification, we similarly create a small subset of the CIFAR10 data, which includes 40 instances per class for training, and 2 instances per class for validation.\n\nExperiments ::: Low Data Regime ::: Results\nTable TABREF26 shows the manipulation results on text classification. For data augmentation, our approach significantly improves over the base model on all the three datasets. Besides, compared to both the conventional synonym substitution and the approach that keeps the augmentation network fixed, our adaptive method that fine-tunes the augmentation network jointly with model training achieves superior results. Indeed, the heuristic-based synonym approach can sometimes harm the model performance (e.g., SST-5 and IMDB), as also observed in previous work BIBREF19, BIBREF18. This can be because the heuristic rules do not fit the task or datasets well. In contrast, learning-based augmentation has the advantage of adaptively generating useful samples to improve model training.\nTable TABREF26 also shows the data weighting results. Our weight learning consistently improves over the base model and the latest weighting method BIBREF4. In particular, instead of re-estimating sample weights from scratch in each iteration BIBREF4, our approach treats the weights as manipulation parameters maintained throughout the training. We speculate that the parametric treatment can adapt weights more smoothly and provide historical information, which is beneficial in the small-data context.\nIt is interesting to see from Table TABREF26 that our augmentation method consistently outperforms the weighting method, showing that data augmentation can be a more suitable technique than data weighting for manipulating small-size data. 
Our approach provides the generality to instantiate diverse manipulation types and learn with the same single procedure.\nTo investigate the augmentation model and how the fine-tuning affects the augmentation results, we show in Figure TABREF27 the top-5 most probable word substitutions predicted by the augmentation model for two masked tokens, respectively. Comparing the results of epoch 1 and epoch 3, we can see the augmentation model evolves and dynamically adjusts the augmentation behavior as the training proceeds. Through fine-tuning, the model seems to make substitutions that are more coherent with the conditioning label and relevant to the original words (e.g., replacing the word “striking” with “bland” in epoch 1 v.s. “charming” in epoch 3).\nTable TABREF27 shows the data weighting results on image classification. We evaluate two settings with the ResNet-34 base model being initialized randomly or with pretrained weights, respectively. Our data weighting consistently improves over the base model and BIBREF4 regardless of the initialization.\n\nExperiments ::: Imbalanced Labels ::: Setup\nThough the methods are broadly applicable to multi-way classification problems, here we only study binary classification tasks for simplicity. For text classification, we use the SST-2 sentiment analysis benchmark BIBREF41; while for image, we select class 1 and 2 from CIFAR10 for binary classification. We use the same processing on both datasets to build the class-imbalance setting. Specifically, we randomly select 1,000 training instances of class 2, and vary the number of class-1 instances in $\\lbrace 20, 50, 100\\rbrace $. For each dataset, we use 10 validation examples in each class. Trained models are evaluated on the full binary-class test set.\n\nExperiments ::: Imbalanced Labels ::: Results\nTable TABREF29 shows the classification results on SST-2 with varying imbalance ratios. We can see our data weighting performs best across all settings. In particular, the improvement over the base model increases as the data gets more imbalanced, ranging from around 6 accuracy points on 100:1000 to over 20 accuracy points on 20:1000. Our method is again consistently better than BIBREF4, validating that the parametric treatment is beneficial. The proportion-based data weighting provides only limited improvement, showing the advantage of adaptive data weighting. The base model trained on the joint training-validation data for fixed steps fails to perform well, partly due to the lack of a proper mechanism for selecting steps.\nTable TABREF30 shows the results on imbalanced CIFAR10 classification. Similarly, our method outperforms other comparison approaches. In contrast, the fixed proportion-based method sometimes harms the performance as in the 50:1000 and 100:1000 settings.\nWe also tested the text augmentation LM on the SST-2 imbalanced data. Interestingly, the augmentation tends to hinder model training and yields accuracy of around 50% (random guess). This is because the augmentation LM is first fit to the imbalanced data, which makes label preservation inaccurate and introduces lots of noise during augmentation. Though a more carefully designed augmentation mechanism can potentially help with imbalanced classification (e.g., augmenting only the rare classes), the above observation further shows that the varying data manipulation schemes have different applicable scopes. 
Our approach is thus favorable as the single algorithm can be instantiated to learn different schemes.\n\nTable 1: Accuracy of Data Manipulation on Text Classification. All results are averaged over 15 runs ± one standard deviation. The numbers in parentheses next to the dataset names indicate the size of the datasets. For example, (40+2) denotes 40 training instances and 2 validation instances per class.\n\nTable 4: Accuracy of Data Weighting on Imbalanced CIFAR10. The first row shows the number of training examples in each of the two classes.\n\nQuery: How much is classification performance improved in experiments for low data regime and class-imbalance problems?\n\nJSON Response:",
"role": "user"
},
{
"content": "{\"answer\": \"Low data: SST-5, TREC, IMDB around 1-2 accuracy points better than baseline\\nImbalanced labels: the improvement over the base model increases as the data gets more imbalanced, ranging from around 6 accuracy points on 100:1000 to over 20 accuracy points on 20:1000\", \"evidence\": [\"Table TABREF26 shows the manipulation results on text classification. For data augmentation, our approach significantly improves over the base model on all the three datasets. Besides, compared to both the conventional synonym substitution and the approach that keeps the augmentation network fixed, our adaptive method that fine-tunes the augmentation network jointly with model training achieves superior results. Indeed, the heuristic-based synonym approach can sometimes harm the model performance (e.g., SST-5 and IMDB), as also observed in previous work BIBREF19, BIBREF18. This can be because the heuristic rules do not fit the task or datasets well. In contrast, learning-based augmentation has the advantage of adaptively generating useful samples to improve model training.\", \"Table TABREF29 shows the classification results on SST-2 with varying imbalance ratios. We can see our data weighting performs best across all settings. In particular, the improvement over the base model increases as the data gets more imbalanced, ranging from around 6 accuracy points on 100:1000 to over 20 accuracy points on 20:1000. Our method is again consistently better than BIBREF4, validating that the parametric treatment is beneficial. The proportion-based data weighting provides only limited improvement, showing the advantage of adaptive data weighting. The base model trained on the joint training-validation data for fixed steps fails to perform well, partly due to the lack of a proper mechanism for selecting steps.\"]}",
"role": "assistant"
}
] |
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
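A minimal sketch of how entries in this format could be checked programmatically is shown below (Python; the `check_entries` helper and the strictness of its checks are illustrative assumptions, not part of the task definition):

```python
import json

def check_entries(raw: str) -> list:
    """Parse a JSON response and verify that each entry lists
    [treatment, control, result, impact, justification] in that order."""
    entries = json.loads(raw)
    assert isinstance(entries, list), "top level must be a JSON array"
    for entry in entries:
        assert isinstance(entry, list) and len(entry) == 5, "each entry needs 5 elements"
        treatment, control, result, impact, justification = entry
        assert control is None or isinstance(control, str), "control is a string or null"
        for field in (treatment, result, impact, justification):
            assert isinstance(field, str), "remaining fields must be strings"
    return entries
```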
Abstract:
ABSTRACT.BACKGROUND: Cigarette smoking, both active and passive, is one of the leading causes of morbidity and mortality in cardiovascular disease. To assess the impact of brief smoking on the vasculature, we determined levels of circulating endothelial progenitor cells (EPCs) and circulating microparticles (MPs) following the smoking of one cigarette by young, healthy intermittent smokers.
ABSTRACT.MATERIALS AND METHODS: 12 healthy volunteers were randomized to either smoking or not smoking in a crossover fashion. Blood sampling was performed at baseline, 1, 4 and 24 hours following smoking/not smoking. The numbers of EPCs and MPs were determined by flow cytometry. MPs were measured from platelets, leukocytes and endothelial cells. Moreover, MPs were also labelled with anti-HMGB1 and SYTO 13 to assess the content of nuclear molecules.
ABSTRACT.RESULTS: Active smoking of one cigarette caused an immediate and significant increase in the numbers of circulating EPCs and MPs of platelet-, endothelial- and leukocyte origin. Levels of MPs containing nuclear molecules were increased, of which the majority were positive for CD41 and CD45 (platelet- and leukocyte origin). CD144 (VE-cadherin) or HMGB1 release did not significantly change during active smoking.
ABSTRACT.CONCLUSION: Brief active smoking of one cigarette generated an acute release of EPC and MPs, of which the latter contained nuclear matter. Together, these results demonstrate acute effects of cigarette smoke on endothelial, platelet and leukocyte function as well as injury to the vascular wall.
Extracted Data: | [["Smoking group", "Control", "Lactadherin+ CD144+", "no significant difference", "CD144 (VE-cadherin) or HMGB1 release did not significantly change during active smoking"]] | science.evidence_inference.146 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: Cigarette smoking, both active and passive, is one of the leading causes of morbidity and mortality in cardiovascular disease. To assess the impact of brief smoking on the vasculature, we determined levels of circulating endothelial progenitor cells (EPCs) and circulating microparticles (MPs) following the smoking of one cigarette by young, healthy intermittent smokers.\n\nABSTRACT.MATERIALS AND METHODS: 12 healthy volunteers were randomized to either smoking or not smoking in a crossover fashion. Blood sampling was performed at baseline, 1, 4 and 24 hours following smoking/not smoking. The numbers of EPCs and MPs were determined by flow cytometry. MPs were measured from platelets, leukocytes and endothelial cells. Moreover, MPs were also labelled with anti-HMGB1 and SYTO 13 to assess the content of nuclear molecules.\n\nABSTRACT.RESULTS: Active smoking of one cigarette caused an immediate and significant increase in the numbers of circulating EPCs and MPs of platelet-, endothelial- and leukocyte origin. Levels of MPs containing nuclear molecules were increased, of which the majority were positive for CD41 and CD45 (platelet- and leukocyte origin). CD144 (VE-cadherin) or HMGB1 release did not significantly change during active smoking.\n\nABSTRACT.CONCLUSION: Brief active smoking of one cigarette generated an acute release of EPC and MPs, of which the latter contained nuclear matter. Together, these results demonstrate acute effects of cigarette smoke on endothelial, platelet and leukocyte function as well as injury to the vascular wall.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"Smoking group\", \"Control\", \"Lactadherin+ CD144+\", \"no significant difference\", \"CD144 (VE-cadherin) or HMGB1 release did not significantly change during active smoking\"]]",
"role": "assistant"
}
] |
science.scierc_ner | Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.
Input Abstract:
In this paper, we will describe a search tool for a huge set of ngrams. The tool supports queries with an arbitrary number of wildcards. It takes a fraction of a second for a search, and can provide the fillers of the wildcards. The system runs on a single Linux PC with reasonable size memory (less than 4GB) and disk space (less than 400GB). This system can be a very useful tool for linguistic knowledge discovery and other NLP tasks. | Output Entities:
{"Generic": [], "Material": [], "Method": ["search tool"], "Metric": [], "OtherScientificTerm": ["disk space", "ngrams", "memory"], "Task": ["NLP tasks", "linguistic knowledge discovery"]} | science.scierc_ner.37 | [
{
"content": "Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.\n\nInput Abstract:\n\nIn this paper, we will describe a search tool for a huge set of ngrams. The tool supports queries with an arbitrary number of wildcards. It takes a fraction of a second for a search, and can provide the fillers of the wildcards. The system runs on a single Linux PC with reasonable size memory (less than 4GB) and disk space (less than 400GB). This system can be a very useful tool for linguistic knowledge discovery and other NLP tasks.",
"role": "user"
},
{
"content": "Output Entities:\n\n{\"Generic\": [], \"Material\": [], \"Method\": [\"search tool\"], \"Metric\": [], \"OtherScientificTerm\": [\"disk space\", \"ngrams\", \"memory\"], \"Task\": [\"NLP tasks\", \"linguistic knowledge discovery\"]}",
"role": "assistant"
}
] |
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
PKG-la does not have a large impact on expression of spinal long term potentiation in PGK-la knockout mice.
Research Abstract:
Synaptic long-term potentiation (LTP) at spinal neurons directly communicating pain-specific inputs from the periphery to the brain has been proposed to serve as a trigger for pain hypersensitivity in pathological states.
Previous studies have functionally implicated the NMDA receptor-NO pathway and the downstream second messenger, cGMP, in these processes.
Because cGMP can broadly influence diverse ion-channels, kinases, and phosphodiesterases, pre- as well as post-synaptically, the precise identity of cGMP targets mediating spinal LTP, their mechanisms of action, and their locus in the spinal circuitry are still unclear.
Here, we found that Protein Kinase G1 (PKG-I) localized presynaptically in nociceptor terminals plays an essential role in the expression of spinal LTP.
Using the Cre-lox P system, we generated nociceptor-specific knockout mice lacking PKG-I specifically in presynaptic terminals of nociceptors in the spinal cord, but not in post-synaptic neurons or elsewhere (SNS-PKG-I(-/-) mice).
Patch clamp recordings showed that activity-induced LTP at identified synapses between nociceptors and spinal neurons projecting to the periaqueductal grey (PAG) was completely abolished in SNS-PKG-I(-/-) mice, although basal synaptic transmission was not affected.
Analyses of synaptic failure rates and paired-pulse ratios indicated a role for presynaptic PKG-I in regulating the probability of neurotransmitter release.
Inositol 1,4,5-triphosphate receptor 1 and myosin light chain kinase were recruited as key phosphorylation targets of presynaptic PKG-I in nociceptive neurons.
Finally, behavioural analyses in vivo showed marked defects in SNS-PKG-I(-/-) mice in several models of activity-induced nociceptive hypersensitivity, and pharmacological studies identified a clear contribution of PKG-I expressed in spinal terminals of nociceptors.
Our results thus indicate that presynaptic mechanisms involving an increase in release probability from nociceptors are operational in the expression of synaptic LTP on spinal-PAG projection neurons and that PKG-I localized in presynaptic nociceptor terminals plays an essential role in this process to regulate pain sensitivity. | {"verdict": "CONTRADICT", "evidence": ["Patch clamp recordings showed that activity-induced LTP at identified synapses between nociceptors and spinal neurons projecting to the periaqueductal grey (PAG) was completely abolished in SNS-PKG-I(-/-) mice, although basal synaptic transmission was not affected."]} | science.scifact_entailment.574 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nPKG-la does not have a large impact on expression of spinal long term potentiation in PGK-la knockout mice.\n\nResearch Abstract:\nSynaptic long-term potentiation (LTP) at spinal neurons directly communicating pain-specific inputs from the periphery to the brain has been proposed to serve as a trigger for pain hypersensitivity in pathological states.\nPrevious studies have functionally implicated the NMDA receptor-NO pathway and the downstream second messenger, cGMP, in these processes.\nBecause cGMP can broadly influence diverse ion-channels, kinases, and phosphodiesterases, pre- as well as post-synaptically, the precise identity of cGMP targets mediating spinal LTP, their mechanisms of action, and their locus in the spinal circuitry are still unclear.\nHere, we found that Protein Kinase G1 (PKG-I) localized presynaptically in nociceptor terminals plays an essential role in the expression of spinal LTP.\nUsing the Cre-lox P system, we generated nociceptor-specific knockout mice lacking PKG-I specifically in presynaptic terminals of nociceptors in the spinal cord, but not in post-synaptic neurons or elsewhere (SNS-PKG-I(-/-) mice).\nPatch clamp recordings showed that activity-induced LTP at identified synapses between nociceptors and spinal neurons projecting to the periaqueductal grey (PAG) was completely abolished in SNS-PKG-I(-/-) mice, although basal synaptic transmission was not affected.\nAnalyses of synaptic failure rates and paired-pulse ratios indicated a role for presynaptic PKG-I in regulating the probability of neurotransmitter release.\nInositol 1,4,5-triphosphate receptor 1 and myosin light chain kinase were recruited as key phosphorylation targets of presynaptic PKG-I in nociceptive neurons.\nFinally, behavioural analyses in vivo showed marked defects in SNS-PKG-I(-/-) mice in several models of activity-induced nociceptive hypersensitivity, and pharmacological studies identified a clear contribution of PKG-I expressed in spinal terminals of nociceptors.\nOur results thus indicate that presynaptic mechanisms involving an increase in release probability from nociceptors are operational in the expression of synaptic LTP on spinal-PAG projection neurons and that PKG-I localized in presynaptic nociceptor terminals plays an essential role in this process to regulate pain sensitivity.",
"role": "user"
},
{
"content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"Patch clamp recordings showed that activity-induced LTP at identified synapses between nociceptors and spinal neurons projecting to the periaqueductal grey (PAG) was completely abolished in SNS-PKG-I(-/-) mice, although basal synaptic transmission was not affected.\"]}",
"role": "assistant"
}
] |
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.BACKGROUND:: Delayed-release dimethyl fumarate (DMF) demonstrated efficacy and safety in the Phase 3 DEFINE and CONFIRM trials.
ABSTRACT.OBJECTIVE:: To evaluate delayed-release DMF in newly diagnosed relapsing–remitting multiple sclerosis (RRMS) patients, in a post-hoc analysis of integrated data from DEFINE and CONFIRM.
ABSTRACT.METHODS:: Patients included in the analysis were diagnosed with RRMS within 1 year prior to study entry and naive to MS disease-modifying therapy.
ABSTRACT.RESULTS:: The newly diagnosed population comprised 678 patients treated with placebo (n = 223) or delayed-release DMF 240 mg BID (n = 221) or TID (n = 234). At 2 years, delayed-release DMF BID and TID reduced the annualized relapse rate by 56% and 60% (both p < 0.0001), risk of relapse by 54% and 57% (both p < 0.0001), and risk of 12-week confirmed disability progression by 71% (p < 0.0001) and 47% (p = 0.0085) versus placebo. In a subset of patients (MRI cohort), delayed-release DMF BID and TID reduced the mean number of new or enlarging T2-hyperintense lesions by 80% and 81%, gadolinium-enhancing lesion activity by 92% and 92%, and mean number of new non-enhancing T1-hypointense lesions by 68% and 70% (all p < 0.0001 versus placebo). Flushing and gastrointestinal events were associated with delayed-release DMF.
ABSTRACT.CONCLUSION:: Delayed-release DMF improved clinical and neuroradiological outcomes relative to placebo in newly diagnosed RRMS patients.
Extracted Data: | [["delayed-release DMF 240 mg TID", "placebo", "mean number of new or enlarging T2-hyperintense lesions - patients newly diagnosed with relapsing\u2013remitting multiple sclerosis (RRMS)", "significantly decreased", "In a subset of patients (MRI cohort), delayed-release DMF BID and TID reduced the mean number of new or enlarging T2-hyperintense lesions by 80% and 81%, gadolinium-enhancing lesion activity by 92% and 92%, and mean number of new non-enhancing T1-hypointense lesions by 68% and 70% (all p \u003c 0.0001 versus placebo)."], ["delayed-release DMF 240 mg BID", "placebo", "annualized relapse rate - patients newly diagnosed with relapsing\u2013remitting multiple sclerosis (RRMS)", "significantly decreased", "At 2 years, delayed-release DMF BID and TID reduced the annualized relapse rate by 56% and 60% (both p \u003c 0.0001)"], ["delayed-release DMF 240 mg BID", "placebo", "annualized relapse rate - patients newly diagnosed with relapsing\u2013remitting multiple sclerosis (RRMS)", "significantly decreased", "delayed-release DMF BID and TID reduced the annualized relapse rate by 56% and 60% (both p \u003c 0.0001),"], ["delayed-release DMF 240 mg TID", "placebo", "annualized relapse rate - patients newly diagnosed with relapsing\u2013remitting multiple sclerosis (RRMS)", "significantly decreased", "At 2 years, delayed-release DMF BID and TID reduced the annualized relapse rate by 56% and 60% (both p \u003c 0.0001)"], ["delayed-release DMF 240 mg TID", "placebo", "annualized relapse rate - patients newly diagnosed with relapsing\u2013remitting multiple sclerosis (RRMS)", "significantly decreased", "delayed-release DMF BID and TID reduced the annualized relapse rate by 56% and 60% (both p \u003c 0.0001),"], ["delayed-release DMF 240 mg BID", "placebo", "risk of relapse - patients newly diagnosed with relapsing\u2013remitting multiple sclerosis (RRMS)", "significantly decreased", "DMF BID and TID reduced the annualized relapse rate by 56% and 60% (both p \u003c 0.0001), risk of relapse by 54% and 57% (both p \u003c 0.0001)"], ["delayed-release DMF 240 mg BID", "placebo", "mean number of new or enlarging T2-hyperintense lesions - patients newly diagnosed with relapsing\u2013remitting multiple sclerosis (RRMS)", "significantly decreased", "In a subset of patients (MRI cohort), delayed-release DMF BID and TID reduced the mean number of new or enlarging T2-hyperintense lesions by 80% and 81%, gadolinium-enhancing lesion activity by 92% and 92%, and mean number of new non-enhancing T1-hypointense lesions by 68% and 70% (all p \u003c 0.0001 versus placebo)."]] | science.evidence_inference.154 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND:: Delayed-release dimethyl fumarate (DMF) demonstrated efficacy and safety in the Phase 3 DEFINE and CONFIRM trials.\n\nABSTRACT.OBJECTIVE:: To evaluate delayed-release DMF in newly diagnosed relapsing–remitting multiple sclerosis (RRMS) patients, in a post-hoc analysis of integrated data from DEFINE and CONFIRM.\n\nABSTRACT.METHODS:: Patients included in the analysis were diagnosed with RRMS within 1 year prior to study entry and naive to MS disease-modifying therapy.\n\nABSTRACT.RESULTS:: The newly diagnosed population comprised 678 patients treated with placebo (n = 223) or delayed-release DMF 240 mg BID (n = 221) or TID (n = 234). At 2 years, delayed-release DMF BID and TID reduced the annualized relapse rate by 56% and 60% (both p < 0.0001), risk of relapse by 54% and 57% (both p < 0.0001), and risk of 12-week confirmed disability progression by 71% (p < 0.0001) and 47% (p = 0.0085) versus placebo. In a subset of patients (MRI cohort), delayed-release DMF BID and TID reduced the mean number of new or enlarging T2-hyperintense lesions by 80% and 81%, gadolinium-enhancing lesion activity by 92% and 92%, and mean number of new non-enhancing T1-hypointense lesions by 68% and 70% (all p < 0.0001 versus placebo). Flushing and gastrointestinal events were associated with delayed-release DMF.\n\nABSTRACT.CONCLUSION:: Delayed-release DMF improved clinical and neuroradiological outcomes relative to placebo in newly diagnosed RRMS patients.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"delayed-release DMF 240 mg TID\", \"placebo\", \"mean number of new or enlarging T2-hyperintense lesions - patients newly diagnosed with relapsing\\u2013remitting multiple sclerosis (RRMS)\", \"significantly decreased\", \"In a subset of patients (MRI cohort), delayed-release DMF BID and TID reduced the mean number of new or enlarging T2-hyperintense lesions by 80% and 81%, gadolinium-enhancing lesion activity by 92% and 92%, and mean number of new non-enhancing T1-hypointense lesions by 68% and 70% (all p \\u003c 0.0001 versus placebo).\"], [\"delayed-release DMF 240 mg BID\", \"placebo\", \"annualized relapse rate - patients newly diagnosed with relapsing\\u2013remitting multiple sclerosis (RRMS)\", \"significantly decreased\", \"At 2 years, delayed-release DMF BID and TID reduced the annualized relapse rate by 56% and 60% (both p \\u003c 0.0001)\"], [\"delayed-release DMF 240 mg BID\", \"placebo\", \"annualized relapse rate - patients newly diagnosed with relapsing\\u2013remitting multiple sclerosis (RRMS)\", \"significantly decreased\", \"delayed-release DMF BID and TID reduced the annualized relapse rate by 56% and 60% (both p \\u003c 0.0001),\"], [\"delayed-release DMF 240 mg TID\", \"placebo\", \"annualized relapse rate - patients newly diagnosed with relapsing\\u2013remitting multiple sclerosis (RRMS)\", \"significantly decreased\", \"At 2 years, delayed-release DMF BID and TID reduced the annualized relapse rate by 56% and 60% (both p \\u003c 0.0001)\"], [\"delayed-release DMF 240 mg TID\", \"placebo\", \"annualized relapse rate - patients newly diagnosed with relapsing\\u2013remitting multiple sclerosis (RRMS)\", \"significantly decreased\", \"delayed-release DMF BID and TID reduced the annualized relapse rate by 56% and 60% (both p \\u003c 0.0001),\"], [\"delayed-release DMF 240 mg BID\", \"placebo\", \"risk of relapse - patients newly diagnosed with relapsing\\u2013remitting multiple sclerosis (RRMS)\", \"significantly decreased\", \"DMF BID and TID reduced the annualized relapse rate by 56% and 60% (both p \\u003c 0.0001), risk of relapse by 54% and 57% (both p \\u003c 0.0001)\"], [\"delayed-release DMF 240 mg BID\", \"placebo\", \"mean number of new or enlarging T2-hyperintense lesions - patients newly diagnosed with relapsing\\u2013remitting multiple sclerosis (RRMS)\", \"significantly decreased\", \"In a subset of patients (MRI cohort), delayed-release DMF BID and TID reduced the mean number of new or enlarging T2-hyperintense lesions by 80% and 81%, gadolinium-enhancing lesion activity by 92% and 92%, and mean number of new non-enhancing T1-hypointense lesions by 68% and 70% (all p \\u003c 0.0001 versus placebo).\"]]",
"role": "assistant"
}
] |
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
Revealing and Protecting Labels in Distributed Training
Main Content:
1 Introduction

Distributed learning paradigms such as federated learning [2] offer a mechanism for mobile devices to collaboratively train deep learning models via transmission of model updates over the network, while ensuring sensitive training data remains resident, locally, on device. Recent works have demonstrated that information about training data can be revealed from a gradient, or a model update in general. For instance, in the image classification domain, the leaked information ranges from some property of the data, like membership inference [3, 4], to pixel-wise accurate image reconstruction [1]. To reconstruct the samples given to a model, Zhu et al. [1] propose Deep Leakage from Gradients (DLG), or Gradients Matching (GM), a method that iteratively updates a dummy training sample to minimize the distance of its gradient to a target gradient sent from a client. This has been used to successfully reconstruct large images in a mini-batch [5, 6, 7]. The application of GM has also been demonstrated in other domains, such as language modeling [1] and automatic speech recognition [8]. As opposed to model inputs being continuous, training labels being discrete prevents common techniques like gradient descent from being trivially employed for reconstructing them. Instead, training labels can be reconstructed jointly with model inputs by optimizing for continuous "pseudo" labels (a probability distribution on labels) [1]. However, this method is not guaranteed to succeed, for instance, when the weight update is computed from a single sample. Zhao et al. [9] first observe that a label of a single training image can be analytically derived from a gradient with a 100% success rate. Their method, however, only applies to gradients computed from a single sample, and does not apply to gradients computed from a mini-batch, or aggregated from multiple training steps. Yin et al. [7] and Wainakh et al. [10] extend this method to demonstrate that labels of samples in a mini-batch can be recovered. The method leads to a high success rate in revealing labels of samples in the batch, but its applicability is restricted to classification models using a non-negative activation function before the fully-connected layer.

While many works have focused on sample reconstruction, the knowledge of training labels has been shown to be necessary for high-quality reconstruction [7]. In sequence models (e.g., speech and language domains), revealing the labels itself can result in a significant leak of privacy. Speech transcript leakage poses a significant threat to users' privacy and can lead to the exposure of speaker identities [8]. Text-form data may include secrets such as passwords, social security numbers, or sensitive words that reveal the context of the conversation. However, in sequence models that output natural language, the number of labels (e.g., characters, wordpieces, or words) can be substantially larger than in image classification models. For such large search spaces, the joint optimization approach adopted in [1] becomes ineffective.

In this work, we make the following contributions.

1. We propose a method to reveal the entire set of labels used in computing a weight update. We refer to the method as Revealing Labels from Gradients (RLG). The method is applicable to any model that uses a softmax activation with cross-entropy loss, and is thus applicable to a wide variety of deep learning architectures. We empirically demonstrate that RLG can be applied to a parameter update generated from a mini-batch (1-step, N-sample), or after several steps (K-step, 1-sample each), or even the most general case of a K-step update with N samples at each step.

2. We discuss the effects of gradient quantization and sparsification techniques, namely SignSGD and GradDrop, in the prevention of RLG. These methods have been used to reduce the communication cost in distributed training. We discover that they are also effective in mitigating RLG.

3. We further demonstrate that in sequence models, the knowledge about the set of labels can be used to fully reconstruct a sequence of labels in order, while GM has limited success. In addition to the parameter update, this requires the current state of the model.

We demonstrate our method on 1) an image classification task and 2) an automatic speech recognition (ASR) task. For image classification, we present our results on ResNet [11], which has often been the target of choice for attacks in previous works [5, 7], and EfficientNet [12], the state-of-the-art architecture for image classification. While the approach proposed in [7] works for ResNet, only our proposed method works for EfficientNet. For ASR, we demonstrate our method with the attention-based encoder-decoder architecture. We show that our method used in conjunction with GM can perform full transcript reconstruction, while none of the prior works succeed. Our code is published at https://github.com/googleinterns/learning-bag-of-words.

2 Revealing Training Labels from Gradients

2.1 Background & Notation

We consider a scenario of model training where an adversary gets access to some model updates (e.g., gradients). This can be possible in distributed learning settings through honest-but-curious participants, or an honest-but-curious server orchestrating the training, or an adversary that compromises some client-to-server communication channel. Note that we do not cover distributed training paradigms that share an intermediate representation, since they usually involve disclosing labels to the training agent [13] and require different types of defense techniques [14]. For simplicity, we consider the vanilla gradient-sharing approach, and assume that no label obfuscation is performed on the client side. We consider model architectures having a softmax activation with cross-entropy loss for classification, which is standard in deep learning based architectures (e.g., [11, 15]). The goal is to reveal labels $y$ for labeled samples of the form $(X, y)$. For each training sample, $y$ could be a single label (e.g., in classification models) or a sequence of labels (e.g., in sequence models).

Let $W \in \mathbb{R}^{d \times C}$ and $b \in \mathbb{R}^{C}$ be the weight and bias of the layer prior to the softmax function (referred to as the projection layer), where $d$ is the input dimension (same as the output dimension of the previous layer), and $C$ is the number of classes. The projection layer maps the latent representation $h \in \mathbb{R}^{1 \times d}$ extracted from the model input to $z = hW + b \in \mathbb{R}^{1 \times C}$, which represents the unnormalized log probabilities (logits) over the classes. We assume that the adversary can access $\Delta W$, i.e., the weight update of the projection layer, computed using back-propagation. This could be computed from a single sample, a batch of several samples, or several batches. Note that our setting crucially does not require the adversary to have access to any other part of the update except $\Delta W$; however, access to the full model update can be useful for sequence reconstruction, as discussed in Section 4. Additionally, the adversary is required to know the id-to-label mapping, so that columns in $\Delta W$ can be associated with actual labels.

For the target of the method, we aim to reveal a set of labels from samples used to compute $\Delta W$, i.e., $S = \lbrace y \mid y \in Y \rbrace$. The method does not return the number of samples for each class. In this section, we show that $\Delta W$ can be represented as the product of two matrices (Remark 1), one of which can be related to $Y$ (Remark 3). Based on this property, we propose a method using singular value decomposition (SVD) to reveal the set of labels $S$ given access to $\Delta W$.

2.2 Gradient of the Projection Layer

Before explaining our method, we want to derive some properties of $\Delta W$. We first consider the case when the gradient is computed from a single sample with a single label $(X, y)$. The probability distribution $\hat{y}$ over the classes is derived by applying the softmax function on $z$: $\hat{y}_i = \frac{\exp(z_i)}{\sum_j \exp(z_j)}$. Training a classification model involves minimizing the cross-entropy loss

$$L = -\sum_i [y = i] \log \hat{y}_i = -\log \frac{\exp z_y}{\sum_{j \in C} \exp z_j} \quad (1)$$

Since $z = hW + b$, we have $\frac{\partial z}{\partial W} = h^\top$. The weight update for the projection layer can be represented as

$$\Delta W = \frac{\partial L}{\partial W} = \frac{\partial L}{\partial z}\,\frac{\partial z}{\partial W} = h^\top g, \quad \text{where } g = \frac{\partial L}{\partial z} \quad (2)$$

(Note that $h$ and $g$ are row vectors.) We can generalize this property to the following settings, which involve more than one training sample and label.

Weight update by a mini-batch / sequence of labels. The weight update of an $N$-sample mini-batch or a sequence of length $N$ is averaged from the weight updates computed from each sample in the batch or each label in the sequence: $\Delta W = \frac{1}{N} \sum_{i=1}^{N} h_i^\top g_i = H^\top G$, where $H = \frac{1}{N}[h_1, \ldots, h_N]$ and $G = [g_1, \ldots, g_N]$.

Weight update by several training steps. The weight update after $K$ training steps is the sum of the weight updates at each step (we consider non-adaptive optimization methods, e.g., FedAvg [16], which are common in FL training): $\Delta W = \sum_{i=1}^{K} \alpha_i \Delta W_{(i)} = \sum_{i=1}^{K} \alpha_i H_{(i)}^\top G_{(i)} = H^\top G$, where $\Delta W_{(i)}$ and $\alpha_i$ are the softmax gradient and learning rate at time step $i$, respectively, $H = [\alpha_1 H_{(1)}, \ldots, \alpha_K H_{(K)}]$, and $G = [G_{(1)}, \ldots, G_{(K)}]$.

More generally, we have the following.

Remark 1. $\Delta W$ can be represented as the product of $H^\top \in \mathbb{R}^{d \times S}$ and $G \in \mathbb{R}^{S \times C}$, where $S$ is the number of labels used to compute the weight update.

We consider $S$ unknown to the adversary. If the weight update is computed from a batch, $S$ is the batch size. If the weight update is aggregated from several training steps, $S$ is the total number of samples at these steps. In fact, $S$ can be inferred from the rank of $\Delta W$ as follows.

Remark 2. Assume that $S < \min \lbrace d, C \rbrace$ and $H$, $G$ are full-rank matrices. We have $\operatorname{rank}(H) = \operatorname{rank}(G) = S$, and $S$ can be inferred from $S = \operatorname{rank}(\Delta W)$.

Now we want to derive a relation between $G$ and the labels $Y$. Differentiating $L$ w.r.t. $z$ yields

$$g_i^j = \nabla_{z_i^j} = \frac{\partial L}{\partial z_i^j} = \begin{cases} -1 + \mathrm{softmax}(z_i^j, z_i) & \text{if } j = y_i \\ \mathrm{softmax}(z_i^j, z_i) & \text{otherwise} \end{cases} \quad (3)$$

Since the softmax function always returns a value in $(0, 1)$, each row in $G$ has a unique negative coordinate corresponding to the ground-truth label.
Formally , we have Remark 3 . Let Neg ( u ) define the indices of negative coordinates in a vector u . For every row gi in G , Neg ( gi ) = { yi } . More intuitively , in order to minimize the loss , the probability of the ground-truth label should be pushed up to 1 ( by a negative gradient ) , and probabilities of other labels should be pushed down to 0 ( by a positive gradient ) . Hence , signs in the gradient matrix G can be directly linked to labels Y . Though G is not directly observable from ∆W , this property of G is critical for revealing the set of labels from ∆W , as explained in the next section . | The work shows that a set of labels used in a small batch of the training procedure is identifiable given the gradient of the last layer of the NN model with CE loss and soft-max activation. Also, they propose to modify gradients for defense. There are some theoretical and empirical gaps between this work and real-world settings. | science.mup_single_document_summarization.12464 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nRevealing and Protecting Labels in Distributed Training\n\nMain Content:\n1 Introduction . Distributed learning paradigms such as federated learning [ 2 ] offer a mechanism for mobile devices to collaboratively train deep learning models via transmission of model updates over the network , while ensuring sensitive training data remains resident , locally , on device . Recent works have demonstrated that information about training data can be revealed from a gradient , or a model update in general . For instance , in the image classification domain , the leaked information ranges from some property of the data like membership inference [ 3 , 4 ] , to pixel-wise accurate image reconstruction [ 1 ] . To reconstruct the samples given to a model , Zhu et al . [ 1 ] propose Deep Leakage from Gradients ( DLG ) , or Gradients Matching ( GM ) , a method that iteratively updates a dummy training sample to minimize the distance of its gradient to a target gradient sent from a client . This has been used to successfully reconstruct large images in a mini-batch [ 5 , 6 , 7 ] . The application of GM has also been demonstrated in other domains , such as language modeling [ 1 ] and automatic speech recognition [ 8 ] . As opposed to model inputs being continuous , training labels being discrete prevents common techniques like gradient descent from being trivially employed for reconstructing them . Instead , training labels can be reconstructed jointly with model inputs by optimizing for continuous `` pseudo '' labels ( a probability distribution on labels ) [ 1 ] . However , this method is not guaranteed to succeed , ∗Work performed while at Google . 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . for instance , when the weight update is computed from a single sample . Zhao et al . [ 9 ] first observe that a label of a single training image can be analytically derived from a gradient with 100 % success rate . Their method , however , only applies to gradients computed from a single sample , and does not apply to gradients computed from a mini-batch , or aggregated from multiple training steps . Yin et al . [ 7 ] and Wainakh et al . [ 10 ] extend this method to demonstrate that labels of samples in a mini-batch can be recovered . The method leads to high success rate on revealing labels of samples in the batch , but its applicability is restricted to classification models using a non-negative activation function before the fully-connected layer . While many works have focused on sample reconstruction , the knowledge of training labels has been shown to be necessary for high-quality reconstruction [ 7 ] . In sequence models ( e.g. , speech and language domains ) , revealing the labels itself can result in a significant leak of privacy . Speech transcript leakage imposes a significant threat to users ’ privacy and can lead to the exposure of speaker identities [ 8 ] . Text-form data may include secrets such as passwords , social security numbers , or sensitive words that reveal the context of the conversation . However , in sequence models that output natural language , the number of labels ( e.g. , characters , wordpieces , or words ) can be substantially larger than in image classification models . For such large search spaces , the joint optimization approach adopted in [ 1 ] becomes ineffective . 
In this work , we make the following contributions . 1 . We propose a method to reveal the entire set of labels used in computing a weight update . We refer to the method as Revealing Labels from Gradients ( RLG ) . The method is applicable with any model that uses a softmax activation with cross-entropy loss , thus is applicable to a wide variety of deep learning architectures . We empirically demonstrate that RLG can be applied to a parameter update generated from a mini-batch ( 1-step , N -sample ) , or after several steps ( K-step , 1-sample each ) , or even the most general case of K-step update of N -sample at each step . 2 . We discuss the effects of gradient quantization and sparsification techniques , namely SignSGD and GradDrop , in the prevention of RLG . These methods have been used to reduce the communication cost in distributed training . We discover that they are also effective in mitigating RLG . 3 . We further demonstrate that in sequence models , the knowledge about the set of labels can be used to fully reconstruct a sequence of labels in order , while GM has limited success . In addition to the parameter update , this requires the current state of the model . We demonstrate our method on 1 ) an image classification task and 2 ) an automatic speech recognition ( ASR ) task . For image classification , we present our results on ResNet [ 11 ] , which has often been the target of choice for attacks in previous works [ 5 , 7 ] , and EfficientNet [ 12 ] , the state-of-the-art architecture for image classification . While the approach proposed in [ 7 ] works for the ResNet , only our proposed method works for EfficientNet . For ASR , we demonstrate our method with the attention-based encoder-decoder architecture . We show that our method used in conjunction with GM can perform full transcript reconstruction , while none of prior works succeed . Our code is published at https : //github.com/googleinterns/learning-bag-of-words . 2 Revealing Training Labels from Gradients . 2.1 Background & Notation . We consider a scenario of model training where an adversary gets access to some model updates ( e.g. , gradients ) . This can be possible in distributed learning settings by honest-but-curious participants , or a honest-but-curious server orchestrating the training , or an adversary that compromises some client-to-server communication channel . Note that we do not cover distributed training paradigms that share an intermediate representation , since they usually involve disclosing labels to the training agent [ 13 ] and require different types of defense techniques [ 14 ] . For simplicity , we consider vanilla gradient sharing approach , and assume that no label obfuscation is performed on the client side . We consider model architectures having a softmax activation with cross-entropy loss for classification , which is standard in deep learning based architectures ( e.g . [ 11 , 15 ] ) . The goal is to reveal labels y for labeled samples of the form ( X , y ) . For each training sample , y could be a single label ( e.g. , in classification models ) , or a sequence of labels ( e.g. , in sequence models ) . Let W ∈ Rd×C and b ∈ RC be the weight and bias of the layer prior to the softmax function ( referred to as the projection layer ) , where d is the input dimension ( same as output dimension of the previous layer ) , and C is the number of classes . 
The projection layer maps the latent representation h ∈ R1×d extracted from the model input to z = hW + b ∈ R1×C , which represents the unnormalized log probabilities ( logits ) on the classes . We assume that the adversary can access ∆W , i.e . the weight update of the projection layer , computed using back-propagation . This could be computed from a single sample , a batch of several samples , or several batches . Note that our setting crucially does not require the adversary to have access to any other part of the update except ∆W ; however , accessing to the full model update can be useful for sequence reconstruction , as discussed in Section 4 . Additionally , the adversary is required to know the id to label mapping , so that columns in ∆W can be associated with actual labels . For the target of the method , we aim to reveal a set of labels from samples used to compute ∆W , i.e. , S = { y | y ∈ Y } . The method does not return the number of samples for each class . In this section , we show that ∆W can be represented as the product of two matrices ( Remark 1 ) , one of which can be related to Y ( Remark 3 ) . Based on this property , we propose a method using singular value decomposition ( SVD ) to reveal the set of labels S given access to ∆W . 2.2 Gradient of the Projection Layer . Before explaining our method , we want to derive some properties of ∆W . We first consider the case when the gradient is computed from a single sample with a single label ( X , y ) . The probability distribution ŷ over the classes is derived by applying the softmax function on z : ŷi = exp ( zi ) ∑ j exp ( zj ) . Training a classification model involves minimizing the cross-entropy loss L = − ∑ i [ y = i ] log ŷi = − log exp zyc∑ j∈C exp zj ( 1 ) Since z = hW + b , we have ∂z∂W = h > . The weight update for the projection layer can be represented as ∆W = ∂L ∂W = ∂L ∂z ∂z ∂W = h > g , where g = ∂L ∂z ( 2 ) ( Note that h and g are row vectors ) . We can generalize this property to the following settings , which involve more than one training samples and labels . Weight update by a mini-batch / sequence of labels The weight update of a N -sample minibatch or a sequence of length N is averaged from weight updates computed from each sample in the batch or each label in the sequence : ∆W = 1N ∑N i=1 h > i gi = H > G , where H = 1N [ h1 , . . . , hN ] , and G = [ g1 , . . . , gN ] Weight update by several training steps The weight update after K training steps is the sum of weight update at each step2 : ∆W = ∑K i=1 αi∆W ( i ) = ∑K i=1 αiH > ( i ) G ( i ) = H > G , where ∆W ( i ) and αi is the softmax gradient and learning rate at the time step i , respectively , H = [ α1H ( 1 ) , . . . , αKH ( K ) ] , and G = [ G ( 1 ) , . . . , G ( K ) ] . More generally , we have the following . Remark 1 . ∆W can be represented as the product of H > ∈ Rd×S and G ∈ RS×C , where S is the number of labels used to compute the weight update . We consider S unknown to the adversary . If the weight update is computed from a batch , S is the batch size . If the weight update is aggregated from several training steps , S is the total number of samples at these steps . In fact , S can be inferred from the rank of ∆W as follows . 2We consider non-adaptive optimization methods , e.g . FedAvg [ 16 ] , which are common in FL training . Remark 2 . Assume that S < min { d , C } and H , G are full-rank matrices . We have rank ( H ) = rank ( G ) = S , and S can be inferred from S = rank ( ∆W ) . 
Now we want to derive a relation between G and labels Y . Differentiating L w.r.t . z yields gji = ∇z j i = ∂L ∂zji = { −1 + softmax ( zji , zi ) if j = yi softmax ( zji , zi ) otherwise ( 3 ) Since the softmax function always returns a value in ( 0 , 1 ) , each row in G has a unique negative coordinate corresponding to the ground-truth label . Formally , we have Remark 3 . Let Neg ( u ) define the indices of negative coordinates in a vector u . For every row gi in G , Neg ( gi ) = { yi } . More intuitively , in order to minimize the loss , the probability of the ground-truth label should be pushed up to 1 ( by a negative gradient ) , and probabilities of other labels should be pushed down to 0 ( by a positive gradient ) . Hence , signs in the gradient matrix G can be directly linked to labels Y . Though G is not directly observable from ∆W , this property of G is critical for revealing the set of labels from ∆W , as explained in the next section .",
"role": "user"
},
{
"content": "The work shows that a set of labels used in a small batch of the training procedure is identifiable given the gradient of the last layer of the NN model with CE loss and soft-max activation. Also, they propose to modify gradients for defense. There are some theoretical and empirical gaps between this work and real-world settings.",
"role": "assistant"
}
] |
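As an illustrative aside to the summarization example above: the paper it summarizes argues (Remarks 1–3) that the projection-layer update ∆W factors as H^T G and that every row of the softmax/cross-entropy gradient G has exactly one negative coordinate, located at the ground-truth label. The minimal NumPy sketch below checks those properties on synthetic data. It is not the authors' RLG implementation; the dimensions, variable names, and random data are assumptions for illustration, and the actual method additionally searches the SVD row space of ∆W to recover the label set.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
d, C, N = 32, 10, 4                       # latent dim, #classes, batch size (hypothetical)
H = rng.normal(size=(N, d))               # latent representations h_i
W = 0.1 * rng.normal(size=(d, C))         # projection-layer weight
b = np.zeros(C)
y = rng.integers(0, C, size=N)            # ground-truth labels

Z = H @ W + b                             # logits z_i = h_i W + b
G = softmax(Z)                            # dL/dz for cross-entropy is softmax(z) - onehot(y)
G[np.arange(N), y] -= 1.0

# Remark 3: every row of G has exactly one negative coordinate, at the true label.
for i in range(N):
    assert set(np.flatnonzero(G[i] < 0)) == {y[i]}

# Remark 1: the observable update is dW = H^T G (up to the 1/N scaling).
dW = H.T @ G / N

# Remark 2: with S < min(d, C), the number of samples is the rank of dW.
S = np.linalg.matrix_rank(dW)
print("inferred number of samples:", S)   # prints 4 on this synthetic data

# RLG then works in the row space of dW: an SVD basis of that space is where the
# sign structure of G (and hence the label set) is searched for.
_, _, Vt = np.linalg.svd(dW, full_matrices=False)
row_space_basis = Vt[:S]                  # spans the same subspace of R^C as the rows of G
```

On such synthetic data the assertion passes and the inferred S equals the batch size, matching Remark 2; recovering the actual label set from `row_space_basis` is the part of RLG not sketched here.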
science.qasper_abstractive_qa | Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.
Document Content:
Multimodal Speech Emotion Recognition Using Audio and Text
Speech emotion recognition is a challenging task, and extensive reliance has been placed on models that use audio features in building well-performing classifiers. In this paper, we propose a novel deep dual recurrent encoder model that utilizes text data and audio signals simultaneously to obtain a better understanding of speech data. As emotional dialogue is composed of sound and spoken content, our model encodes the information from audio and text sequences using dual recurrent neural networks (RNNs) and then combines the information from these sources to predict the emotion class. This architecture analyzes speech data from the signal level to the language level, and it thus utilizes the information within the data more comprehensively than models that focus on audio features. Extensive experiments are conducted to investigate the efficacy and properties of the proposed model. Our proposed model outperforms previous state-of-the-art methods in assigning data to one of four emotion categories (i.e., angry, happy, sad and neutral) when the model is applied to the IEMOCAP dataset, as reflected by accuracies ranging from 68.8% to 71.8%.
Related work
Classical machine learning algorithms, such as hidden Markov models (HMMs), support vector machines (SVMs), and decision tree-based methods, have been employed in speech emotion recognition problems BIBREF11 , BIBREF12 , BIBREF13 . Recently, researchers have proposed various neural network-based architectures to improve the performance of speech emotion recognition. An initial study utilized deep neural networks (DNNs) to extract high-level features from raw audio data and demonstrated its effectiveness in speech emotion recognition BIBREF14 . With the advancement of deep learning methods, more complex neural-based architectures have been proposed. Convolutional neural network (CNN)-based models have been trained on information derived from raw audio signals using spectrograms or audio features such as Mel-frequency cepstral coefficients (MFCCs) and low-level descriptors (LLDs) BIBREF15 , BIBREF16 , BIBREF17 . These neural network-based models are combined to produce higher-complexity models BIBREF18 , BIBREF19 , and these models achieved the best-recorded performance when applied to the IEMOCAP dataset.
Another line of research has focused on adopting variant machine learning techniques combined with neural network-based models. One researcher utilized the multiobject learning approach and used gender and naturalness as auxiliary tasks so that the neural network-based model learned more features from a given dataset BIBREF20 . Another researcher investigated transfer learning methods, leveraging external data from related domains BIBREF21 .
As emotional dialogue is composed of sound and spoken content, researchers have also investigated the combination of acoustic features and language information, built belief network-based methods of identifying emotional key phrases, and assessed the emotional salience of verbal cues from both phoneme sequences and words BIBREF22 , BIBREF23 . However, none of these studies have utilized information from speech signals and text sequences simultaneously in an end-to-end learning neural network-based model to classify emotions.
Audio Recurrent Encoder (ARE)
Motivated by the architecture used in BIBREF24 , BIBREF25 , we build an audio recurrent encoder (ARE) to predict the class of a given audio signal. Once MFCC features have been extracted from an audio signal, a subset of the sequential features is fed into the RNN (i.e., gated recurrent units (GRUs)), which leads to the formation of the network's internal hidden state INLINEFORM0 to model the time series patterns. This internal hidden state is updated at each time step with the input data INLINEFORM1 and the hidden state of the previous time step INLINEFORM2 as follows: DISPLAYFORM0
where INLINEFORM0 is the RNN function with weight parameter INLINEFORM1 , INLINEFORM2 represents the hidden state at t- INLINEFORM3 time step, and INLINEFORM4 represents the t- INLINEFORM5 MFCC features in INLINEFORM6 . After encoding the audio signal INLINEFORM7 with the RNN, the last hidden state of the RNN, INLINEFORM8 , is considered to be the representative vector that contains all of the sequential audio data. This vector is then concatenated with another prosodic feature vector, INLINEFORM9 , to generate a more informative vector representation of the signal, INLINEFORM10 . The MFCC and the prosodic features are extracted from the audio signal using the openSMILE toolkit BIBREF26 , INLINEFORM11 , respectively. Finally, the emotion class is predicted by applying the softmax function to the vector INLINEFORM12 . For a given audio sample INLINEFORM13 , we assume that INLINEFORM14 is the true label vector, which contains all zeros but contains a one at the correct class, and INLINEFORM15 is the predicted probability distribution from the softmax layer. The training objective then takes the following form: DISPLAYFORM0
where INLINEFORM0 is the calculated representative vector of the audio signal with dimensionality INLINEFORM1 . The INLINEFORM2 and the bias INLINEFORM3 are learned model parameters. C is the total number of classes, and N is the total number of samples used in training. The upper part of Figure shows the architecture of the ARE model.
Text Recurrent Encoder (TRE)
We assume that speech transcripts can be extracted from audio signals with high accuracy, given the advancement of ASR technologies BIBREF7 . We attempt to use the processed textual information as another modality in predicting the emotion class of a given signal. To use textual information, a speech transcript is tokenized and indexed into a sequence of tokens using the Natural Language Toolkit (NLTK) BIBREF27 . Each token is then passed through a word-embedding layer that converts a word index to a corresponding 300-dimensional vector that contains additional contextual meaning between words. The sequence of embedded tokens is fed into a text recurrent encoder (TRE) in such a way that the audio MFCC features are encoded using the ARE represented by equation EQREF2 . In this case, INLINEFORM0 is the t- INLINEFORM1 embedded token from the text input. Finally, the emotion class is predicted from the last hidden state of the text-RNN using the softmax function.
We use the same training objective as the ARE model, and the predicted probability distribution for the target class is as follows: DISPLAYFORM0
where INLINEFORM0 is the last hidden state of the text-RNN, INLINEFORM1 , and the INLINEFORM2 and bias INLINEFORM3 are learned model parameters. The lower part of Figure indicates the architecture of the TRE model.
Multimodal Dual Recurrent Encoder (MDRE)
We present a novel architecture called the multimodal dual recurrent encoder (MDRE) to overcome the limitations of existing approaches. In this study, we consider multiple modalities, such as MFCC features, prosodic features and transcripts, which contain sequential audio information, statistical audio information and textual information, respectively. These types of data are the same as those used in the ARE and TRE cases. The MDRE model employs two RNNs to encode data from the audio signal and textual inputs independently. The audio-RNN encodes MFCC features from the audio signal using equation EQREF2 . The last hidden state of the audio-RNN is concatenated with the prosodic features to form the final vector representation INLINEFORM0 , and this vector is then passed through a fully connected neural network layer to form the audio encoding vector A. On the other hand, the text-RNN encodes the word sequence of the transcript using equation EQREF2 . The final hidden states of the text-RNN are also passed through another fully connected neural network layer to form a textual encoding vector T. Finally, the emotion class is predicted by applying the softmax function to the concatenation of the vectors A and T. We use the same training objective as the ARE model, and the predicted probability distribution for the target class is as follows: DISPLAYFORM0
where INLINEFORM0 is the feed-forward neural network with weight parameter INLINEFORM1 , and INLINEFORM2 , INLINEFORM3 are final encoding vectors from the audio-RNN and text-RNN, respectively. INLINEFORM4 and the bias INLINEFORM5 are learned model parameters.
Multimodal Dual Recurrent Encoder with Attention (MDREA)
Inspired by the concept of the attention mechanism used in neural machine translation BIBREF28 , we propose a novel multimodal attention method to focus on the specific parts of a transcript that contain strong emotional information, conditioning on the audio information. Figure shows the architecture of the MDREA model. First, the audio data and text data are encoded with the audio-RNN and text-RNN using equation EQREF2 . We then consider the final audio encoding vector INLINEFORM0 as a context vector. As seen in equation EQREF9 , during each time step t, the dot product between the context vector e and the hidden state of the text-RNN at each t-th sequence INLINEFORM1 is evaluated to calculate a similarity score INLINEFORM2 . Using this score INLINEFORM3 as a weight parameter, the weighted sum of the sequences of the hidden state of the text-RNN, INLINEFORM4 , is calculated to generate an attention-application vector Z. This attention-application vector is concatenated with the final encoding vector of the audio-RNN INLINEFORM5 (equation EQREF7 ), which will be passed through the softmax function to predict the emotion class. We use the same training objective as the ARE model, and the predicted probability distribution for the target class is as follows: DISPLAYFORM0
where INLINEFORM0 and the bias INLINEFORM1 are learned model parameters.
Dataset
We evaluate our model using the Interactive Emotional Dyadic Motion Capture (IEMOCAP) BIBREF18 dataset. This dataset was collected following theatrical theory in order to simulate natural dyadic interactions between actors. We use categorical evaluations with majority agreement. We use only four emotional categories happy, sad, angry, and neutral to compare the performance of our model with other research using the same categories. The IEMOCAP dataset includes five sessions, and each session contains utterances from two speakers (one male and one female). This data collection process resulted in 10 unique speakers. For consistent comparison with previous work, we merge the excitement dataset with the happiness dataset. The final dataset contains a total of 5531 utterances (1636 happy, 1084 sad, 1103 angry, 1708 neutral).
Feature extraction
To extract speech information from audio signals, we use MFCC values, which are widely used in analyzing audio signals. The MFCC feature set contains a total of 39 features, which include 12 MFCC parameters (1-12) from the 26 Mel-frequency bands and log-energy parameters, plus 13 delta and 13 acceleration coefficients. The frame size is set to 25 ms at a rate of 10 ms with the Hamming function. According to the length of each wave file, the sequential step of the MFCC features is varied. To extract additional information from the data, we also use prosodic features, which have shown effectiveness in affective computing. The prosodic features are composed of 35 features, which include the F0 frequency, the voicing probability, and the loudness contours. All of these MFCC and prosodic features are extracted from the data using the OpenSMILE toolkit BIBREF26 .
Performance evaluation
As the dataset is not explicitly split beforehand into training, development, and testing sets, we perform 5-fold cross validation to determine the overall performance of the model. The data in each fold are split into training, development, and testing datasets (8:0.5:1.5, respectively). After training the model, we measure the weighted average precision (WAP) over the 5-fold dataset. We train and evaluate the model 10 times per fold, and the model performance is assessed in terms of the mean score and standard deviation.
We examine the WAP values, which are shown in Table 1. First, our ARE model shows the baseline performance because we use minimal audio features, such as the MFCC and prosodic features with simple architectures. On the other hand, the TRE model shows higher performance gain compared to the ARE. From this result, we note that textual data are informative in emotion prediction tasks, and the recurrent encoder model is effective in understanding these types of sequential data. Second, the newly proposed model, MDRE, shows a substantial performance gain. It thus achieves the state-of-the-art performance with a WAP value of 0.718. This result shows that multimodal information is a key factor in affective computing. Lastly, the attention model, MDREA, also outperforms the best existing research results (WAP 0.690 to 0.688) BIBREF19 . However, the MDREA model does not match the performance of the MDRE model, even though it utilizes a more complex architecture. We believe that this result arises because insufficient data are available to properly determine the complex model parameters in the MDREA model. Moreover, we presume that this model will show better performance when the audio signals are aligned with the textual sequence while applying the attention mechanism. We leave the implementation of this point as a future research direction.
To investigate the practical performance of the proposed models, we conduct further experiments with the ASR-processed transcript data (see “-ASR” models in Table ). The processed transcripts have a word error rate (WER) of 5.53%. The TRE-ASR, MDRE-ASR and MDREA-ASR models reflect degraded performance compared to that of the TRE, MDRE and MDREA models. However, the performance of these models is still competitive; in particular, the MDRE-ASR model outperforms the previous best-performing model, 3CNN-LSTM10H (WAP 0.691 vs. 0.688).
Conclusions
In this paper, we propose a novel multimodal dual recurrent encoder model that simultaneously utilizes text data, as well as audio signals, to permit the better understanding of speech data. Our model encodes the information from audio and text sequences using dual RNNs and then combines the information from these sources using a feed-forward neural model to predict the emotion class. Extensive experiments show that our proposed model outperforms other state-of-the-art methods in classifying the four emotion categories, and accuracies ranging from 68.8% to 71.8% are obtained when the model is applied to the IEMOCAP dataset. In particular, it resolves the issue in which predictions frequently incorrectly yield the neutral class, as occurs in previous models that focus on audio features.
In the future work, we aim to extend the modalities to audio, text and video inputs. Furthermore, we plan to investigate the application of the attention mechanism to data derived from multiple modalities. This approach seems likely to uncover enhanced learning schemes that will increase performance in both speech emotion recognition and other multimodal classification tasks.
Query: Do they use datasets with transcribed text or do they determine text from the audio?
JSON Response: | {"answer": "They use text transcription.", "evidence": ["We assume that speech transcripts can be extracted from audio signals with high accuracy, given the advancement of ASR technologies BIBREF7 . We attempt to use the processed textual information as another modality in predicting the emotion class of a given signal. To use textual information, a speech transcript is tokenized and indexed into a sequence of tokens using the Natural Language Toolkit (NLTK) BIBREF27 . Each token is then passed through a word-embedding layer that converts a word index to a corresponding 300-dimensional vector that contains additional contextual meaning between words. The sequence of embedded tokens is fed into a text recurrent encoder (TRE) in such a way that the audio MFCC features are encoded using the ARE represented by equation EQREF2 . In this case, INLINEFORM0 is the t- INLINEFORM1 embedded token from the text input. Finally, the emotion class is predicted from the last hidden state of the text-RNN using the softmax function."]} | science.qasper_abstractive_qa.191.0 | [
{
"content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nMultimodal Speech Emotion Recognition Using Audio and Text\n\nSpeech emotion recognition is a challenging task, and extensive reliance has been placed on models that use audio features in building well-performing classifiers. In this paper, we propose a novel deep dual recurrent encoder model that utilizes text data and audio signals simultaneously to obtain a better understanding of speech data. As emotional dialogue is composed of sound and spoken content, our model encodes the information from audio and text sequences using dual recurrent neural networks (RNNs) and then combines the information from these sources to predict the emotion class. This architecture analyzes speech data from the signal level to the language level, and it thus utilizes the information within the data more comprehensively than models that focus on audio features. Extensive experiments are conducted to investigate the efficacy and properties of the proposed model. Our proposed model outperforms previous state-of-the-art methods in assigning data to one of four emotion categories (i.e., angry, happy, sad and neutral) when the model is applied to the IEMOCAP dataset, as reflected by accuracies ranging from 68.8% to 71.8%.\n\nRelated work\nClassical machine learning algorithms, such as hidden Markov models (HMMs), support vector machines (SVMs), and decision tree-based methods, have been employed in speech emotion recognition problems BIBREF11 , BIBREF12 , BIBREF13 . Recently, researchers have proposed various neural network-based architectures to improve the performance of speech emotion recognition. An initial study utilized deep neural networks (DNNs) to extract high-level features from raw audio data and demonstrated its effectiveness in speech emotion recognition BIBREF14 . With the advancement of deep learning methods, more complex neural-based architectures have been proposed. Convolutional neural network (CNN)-based models have been trained on information derived from raw audio signals using spectrograms or audio features such as Mel-frequency cepstral coefficients (MFCCs) and low-level descriptors (LLDs) BIBREF15 , BIBREF16 , BIBREF17 . These neural network-based models are combined to produce higher-complexity models BIBREF18 , BIBREF19 , and these models achieved the best-recorded performance when applied to the IEMOCAP dataset.\nAnother line of research has focused on adopting variant machine learning techniques combined with neural network-based models. One researcher utilized the multiobject learning approach and used gender and naturalness as auxiliary tasks so that the neural network-based model learned more features from a given dataset BIBREF20 . Another researcher investigated transfer learning methods, leveraging external data from related domains BIBREF21 .\nAs emotional dialogue is composed of sound and spoken content, researchers have also investigated the combination of acoustic features and language information, built belief network-based methods of identifying emotional key phrases, and assessed the emotional salience of verbal cues from both phoneme sequences and words BIBREF22 , BIBREF23 . 
However, none of these studies have utilized information from speech signals and text sequences simultaneously in an end-to-end learning neural network-based model to classify emotions.\n\nAudio Recurrent Encoder (ARE)\nMotivated by the architecture used in BIBREF24 , BIBREF25 , we build an audio recurrent encoder (ARE) to predict the class of a given audio signal. Once MFCC features have been extracted from an audio signal, a subset of the sequential features is fed into the RNN (i.e., gated recurrent units (GRUs)), which leads to the formation of the network's internal hidden state INLINEFORM0 to model the time series patterns. This internal hidden state is updated at each time step with the input data INLINEFORM1 and the hidden state of the previous time step INLINEFORM2 as follows: DISPLAYFORM0 \nwhere INLINEFORM0 is the RNN function with weight parameter INLINEFORM1 , INLINEFORM2 represents the hidden state at t- INLINEFORM3 time step, and INLINEFORM4 represents the t- INLINEFORM5 MFCC features in INLINEFORM6 . After encoding the audio signal INLINEFORM7 with the RNN, the last hidden state of the RNN, INLINEFORM8 , is considered to be the representative vector that contains all of the sequential audio data. This vector is then concatenated with another prosodic feature vector, INLINEFORM9 , to generate a more informative vector representation of the signal, INLINEFORM10 . The MFCC and the prosodic features are extracted from the audio signal using the openSMILE toolkit BIBREF26 , INLINEFORM11 , respectively. Finally, the emotion class is predicted by applying the softmax function to the vector INLINEFORM12 . For a given audio sample INLINEFORM13 , we assume that INLINEFORM14 is the true label vector, which contains all zeros but contains a one at the correct class, and INLINEFORM15 is the predicted probability distribution from the softmax layer. The training objective then takes the following form: DISPLAYFORM0 \nwhere INLINEFORM0 is the calculated representative vector of the audio signal with dimensionality INLINEFORM1 . The INLINEFORM2 and the bias INLINEFORM3 are learned model parameters. C is the total number of classes, and N is the total number of samples used in training. The upper part of Figure shows the architecture of the ARE model.\n\nText Recurrent Encoder (TRE)\nWe assume that speech transcripts can be extracted from audio signals with high accuracy, given the advancement of ASR technologies BIBREF7 . We attempt to use the processed textual information as another modality in predicting the emotion class of a given signal. To use textual information, a speech transcript is tokenized and indexed into a sequence of tokens using the Natural Language Toolkit (NLTK) BIBREF27 . Each token is then passed through a word-embedding layer that converts a word index to a corresponding 300-dimensional vector that contains additional contextual meaning between words. The sequence of embedded tokens is fed into a text recurrent encoder (TRE) in such a way that the audio MFCC features are encoded using the ARE represented by equation EQREF2 . In this case, INLINEFORM0 is the t- INLINEFORM1 embedded token from the text input. 
Finally, the emotion class is predicted from the last hidden state of the text-RNN using the softmax function.\nWe use the same training objective as the ARE model, and the predicted probability distribution for the target class is as follows: DISPLAYFORM0 \nwhere INLINEFORM0 is last hidden state of the text-RNN, INLINEFORM1 , and the INLINEFORM2 and bias INLINEFORM3 are learned model parameters. The lower part of Figure indicates the architecture of the TRE model.\n\nMultimodal Dual Recurrent Encoder (MDRE)\nWe present a novel architecture called the multimodal dual recurrent encoder (MDRE) to overcome the limitations of existing approaches. In this study, we consider multiple modalities, such as MFCC features, prosodic features and transcripts, which contain sequential audio information, statistical audio information and textual information, respectively. These types of data are the same as those used in the ARE and TRE cases. The MDRE model employs two RNNs to encode data from the audio signal and textual inputs independently. The audio-RNN encodes MFCC features from the audio signal using equation EQREF2 . The last hidden state of the audio-RNN is concatenated with the prosodic features to form the final vector representation INLINEFORM0 , and this vector is then passed through a fully connected neural network layer to form the audio encoding vector A. On the other hand, the text-RNN encodes the word sequence of the transcript using equation EQREF2 . The final hidden states of the text-RNN are also passed through another fully connected neural network layer to form a textual encoding vector T. Finally, the emotion class is predicted by applying the softmax function to the concatenation of the vectors A and T. We use the same training objective as the ARE model, and the predicted probability distribution for the target class is as follows: DISPLAYFORM0 \nwhere INLINEFORM0 is the feed-forward neural network with weight parameter INLINEFORM1 , and INLINEFORM2 , INLINEFORM3 are final encoding vectors from the audio-RNN and text-RNN, respectively. INLINEFORM4 and the bias INLINEFORM5 are learned model parameters.\n\nMultimodal Dual Recurrent Encoder with Attention (MDREA)\nInspired by the concept of the attention mechanism used in neural machine translation BIBREF28 , we propose a novel multimodal attention method to focus on the specific parts of a transcript that contain strong emotional information, conditioning on the audio information. Figure shows the architecture of the MDREA model. First, the audio data and text data are encoded with the audio-RNN and text-RNN using equation EQREF2 . We then consider the final audio encoding vector INLINEFORM0 as a context vector. As seen in equation EQREF9 , during each time step t, the dot product between the context vector e and the hidden state of the text-RNN at each t-th sequence INLINEFORM1 is evaluated to calculate a similarity score INLINEFORM2 . Using this score INLINEFORM3 as a weight parameter, the weighted sum of the sequences of the hidden state of the text-RNN, INLINEFORM4 , is calculated to generate an attention-application vector Z. This attention-application vector is concatenated with the final encoding vector of the audio-RNN INLINEFORM5 (equation EQREF7 ), which will be passed through the softmax function to predict the emotion class. 
We use the same training objective as the ARE model, and the predicted probability distribution for the target class is as follows: DISPLAYFORM0 \nwhere INLINEFORM0 and the bias INLINEFORM1 are learned model parameters.\n\nDataset\nWe evaluate our model using the Interactive Emotional Dyadic Motion Capture (IEMOCAP) BIBREF18 dataset. This dataset was collected following theatrical theory in order to simulate natural dyadic interactions between actors. We use categorical evaluations with majority agreement. We use only four emotional categories happy, sad, angry, and neutral to compare the performance of our model with other research using the same categories. The IEMOCAP dataset includes five sessions, and each session contains utterances from two speakers (one male and one female). This data collection process resulted in 10 unique speakers. For consistent comparison with previous work, we merge the excitement dataset with the happiness dataset. The final dataset contains a total of 5531 utterances (1636 happy, 1084 sad, 1103 angry, 1708 neutral).\n\nFeature extraction\nTo extract speech information from audio signals, we use MFCC values, which are widely used in analyzing audio signals. The MFCC feature set contains a total of 39 features, which include 12 MFCC parameters (1-12) from the 26 Mel-frequency bands and log-energy parameters, 13 delta and 13 acceleration coefficients The frame size is set to 25 ms at a rate of 10 ms with the Hamming function. According to the length of each wave file, the sequential step of the MFCC features is varied. To extract additional information from the data, we also use prosodic features, which show effectiveness in affective computing. The prosodic features are composed of 35 features, which include the F0 frequency, the voicing probability, and the loudness contours. All of these MFCC and prosodic features are extracted from the data using the OpenSMILE toolkit BIBREF26 .\n\nPerformance evaluation\nAs the dataset is not explicitly split beforehand into training, development, and testing sets, we perform 5-fold cross validation to determine the overall performance of the model. The data in each fold are split into training, development, and testing datasets (8:0.5:1.5, respectively). After training the model, we measure the weighted average precision (WAP) over the 5-fold dataset. We train and evaluate the model 10 times per fold, and the model performance is assessed in terms of the mean score and standard deviation.\nWe examine the WAP values, which are shown in Table 1. First, our ARE model shows the baseline performance because we use minimal audio features, such as the MFCC and prosodic features with simple architectures. On the other hand, the TRE model shows higher performance gain compared to the ARE. From this result, we note that textual data are informative in emotion prediction tasks, and the recurrent encoder model is effective in understanding these types of sequential data. Second, the newly proposed model, MDRE, shows a substantial performance gain. It thus achieves the state-of-the-art performance with a WAP value of 0.718. This result shows that multimodal information is a key factor in affective computing. Lastly, the attention model, MDREA, also outperforms the best existing research results (WAP 0.690 to 0.688) BIBREF19 . However, the MDREA model does not match the performance of the MDRE model, even though it utilizes a more complex architecture. 
We believe that this result arises because insufficient data are available to properly determine the complex model parameters in the MDREA model. Moreover, we presume that this model will show better performance when the audio signals are aligned with the textual sequence while applying the attention mechanism. We leave the implementation of this point as a future research direction.\nTo investigate the practical performance of the proposed models, we conduct further experiments with the ASR-processed transcript data (see “-ASR” models in Table ). The label accuracy of the processed transcripts is 5.53% WER. The TRE-ASR, MDRE-ASR and MDREA-ASR models reflect degraded performance compared to that of the TRE, MDRE and MDREA models. However, the performance of these models is still competitive; in particular, the MDRE-ASR model outperforms the previous best-performing model, 3CNN-LSTM10H (WAP 0.691 to 0.688).\n\nConclusions\nIn this paper, we propose a novel multimodal dual recurrent encoder model that simultaneously utilizes text data, as well as audio signals, to permit the better understanding of speech data. Our model encodes the information from audio and text sequences using dual RNNs and then combines the information from these sources using a feed-forward neural model to predict the emotion class. Extensive experiments show that our proposed model outperforms other state-of-the-art methods in classifying the four emotion categories, and accuracies ranging from 68.8% to 71.8% are obtained when the model is applied to the IEMOCAP dataset. In particular, it resolves the issue in which predictions frequently incorrectly yield the neutral class, as occurs in previous models that focus on audio features.\nIn the future work, we aim to extend the modalities to audio, text and video inputs. Furthermore, we plan to investigate the application of the attention mechanism to data derived from multiple modalities. This approach seems likely to uncover enhanced learning schemes that will increase performance in both speech emotion recognition and other multimodal classification tasks.\n\nQuery: Do they use datasets with transcribed text or do they determine text from the audio?\n\nJSON Response:",
"role": "user"
},
{
"content": "{\"answer\": \"They use text transcription.\", \"evidence\": [\"We assume that speech transcripts can be extracted from audio signals with high accuracy, given the advancement of ASR technologies BIBREF7 . We attempt to use the processed textual information as another modality in predicting the emotion class of a given signal. To use textual information, a speech transcript is tokenized and indexed into a sequence of tokens using the Natural Language Toolkit (NLTK) BIBREF27 . Each token is then passed through a word-embedding layer that converts a word index to a corresponding 300-dimensional vector that contains additional contextual meaning between words. The sequence of embedded tokens is fed into a text recurrent encoder (TRE) in such a way that the audio MFCC features are encoded using the ARE represented by equation EQREF2 . In this case, INLINEFORM0 is the t- INLINEFORM1 embedded token from the text input. Finally, the emotion class is predicted from the last hidden state of the text-RNN using the softmax function.\"]}",
"role": "assistant"
}
] |
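As a companion aside to the speech-emotion example above: the MDRE architecture it describes encodes MFCC frames with one GRU and word embeddings with another, concatenates the audio encoding (itself joined with the 35 prosodic features) with the text encoding, and classifies into four emotions. Below is a minimal PyTorch sketch of that dual-encoder idea, not the paper's released code; the hidden sizes, ReLU activations, vocabulary size, and the synthetic tensors at the bottom are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultimodalDualRecurrentEncoder(nn.Module):
    """Sketch of MDRE: GRU over MFCC frames + GRU over word embeddings,
    fused by concatenation and classified into 4 emotion classes."""

    def __init__(self, n_mfcc=39, n_prosodic=35, vocab_size=10000,
                 emb_dim=300, hidden=128, n_classes=4):
        super().__init__()
        self.audio_rnn = nn.GRU(n_mfcc, hidden, batch_first=True)
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.text_rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.audio_fc = nn.Linear(hidden + n_prosodic, hidden)  # e = [last audio state ; prosodic]
        self.text_fc = nn.Linear(hidden, hidden)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, mfcc, prosodic, tokens):
        _, h_audio = self.audio_rnn(mfcc)                  # h_audio: (1, batch, hidden)
        _, h_text = self.text_rnn(self.embedding(tokens))
        a = torch.relu(self.audio_fc(torch.cat([h_audio[-1], prosodic], dim=-1)))
        t = torch.relu(self.text_fc(h_text[-1]))
        return self.classifier(torch.cat([a, t], dim=-1))  # logits over the 4 emotions

# Hypothetical shapes: batch of 8 utterances, 300 MFCC frames, 50 tokens each.
model = MultimodalDualRecurrentEncoder()
logits = model(torch.randn(8, 300, 39),                    # MFCC sequences
               torch.randn(8, 35),                         # prosodic feature vectors
               torch.randint(0, 10000, (8, 50)))           # token ids
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 4, (8,)))
```

The MDREA variant described in the same example would replace the plain concatenation with attention weights obtained by dotting the audio encoding against each text-RNN hidden state; that refinement is omitted here to keep the sketch short.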
science.discomat_te | | Sample | [BaO] in mol% | [Al2O3] in mol% | [B2O3] in mol% | [TiO2] in mol% | [ZrO2] in mol% | T g in degC | T D in degC |
| A | 30.0 | 33.3 | 36.7 | - | - | 585 | 644 |
| B | 28.8 | 33.3 | 35.5 | 2.5 | - | 588 | 641 |
| C | 27.7 | 33.3 | 34.2 | 5.0 | - | 590 | 648 |
| D | 26.3 | 33.3 | 33.0 | 7.5 | - | 595 | 650 |
| E | 29.3 | 32.5 | 35.8 | 2.4 | - | 582 | 637 |
| F | 28.6 | 31.7 | 35.0 | 4.8 | - | 582 | 627 |
| G | 27.9 | 31.0 | 34.1 | 7.0 | - | 583 | 628 |
| H | 27.3 | 30.3 | 33.4 | 9.1 | - | 586 | 635 |
| I | 26.7 | 29.6 | 32.6 | 11.1 | - | 587 | 637 |
| K | 29.3 | 32.5 | 35.8 | 1.2 | 1.2 | 602 | 671 |
| L | 28.6 | 31.7 | 35.0 | 2.4 | 2.4 | 607 | 657 |
| M | 27.9 | 31.0 | 34.1 | 3.5 | 3.5 | 601 | 642 |
| N | 27.3 | 30.3 | 33.4 | 4.5 | 4.5 | 608 | 664 |
| O | 26.7 | 29.6 | 32.6 | 5.6 | 5.6 | 607 | 664 |
Caption: Chemical compositions, glass transformation temperatures, T g, and dilatometric softening points, T D
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "30.0", "type": "Glass_Compound_Amount", "constituent": "BaO", "unit": "mol", "material": "A"}
{"value": "33.3", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "mol", "material": "A"}
{"value": "36.7", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "A"}
{"value": "585", "type": "Other"}
{"value": "644", "type": "Other"}
{"value": "28.8", "type": "Glass_Compound_Amount", "constituent": "BaO", "unit": "mol", "material": "B"}
{"value": "33.3", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "mol", "material": "B"}
{"value": "35.5", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "B"}
{"value": "2.5", "type": "Glass_Compound_Amount", "constituent": "TiO2", "unit": "mol", "material": "B"}
{"value": "588", "type": "Other"}
{"value": "641", "type": "Other"}
{"value": "27.7", "type": "Glass_Compound_Amount", "constituent": "BaO", "unit": "mol", "material": "C"}
{"value": "33.3", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "mol", "material": "C"}
{"value": "34.2", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "C"}
{"value": "5.0", "type": "Glass_Compound_Amount", "constituent": "TiO2", "unit": "mol", "material": "C"}
{"value": "590", "type": "Other"}
{"value": "648", "type": "Other"}
{"value": "26.3", "type": "Glass_Compound_Amount", "constituent": "BaO", "unit": "mol", "material": "D"}
{"value": "33.3", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "mol", "material": "D"}
{"value": "33.0", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "D"}
{"value": "7.5", "type": "Glass_Compound_Amount", "constituent": "TiO2", "unit": "mol", "material": "D"}
{"value": "595", "type": "Other"}
{"value": "650", "type": "Other"}
{"value": "29.3", "type": "Glass_Compound_Amount", "constituent": "BaO", "unit": "mol", "material": "E"}
{"value": "32.5", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "mol", "material": "E"}
{"value": "35.8", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "E"}
{"value": "2.4", "type": "Glass_Compound_Amount", "constituent": "TiO2", "unit": "mol", "material": "E"}
{"value": "582", "type": "Other"}
{"value": "637", "type": "Other"}
{"value": "28.6", "type": "Glass_Compound_Amount", "constituent": "BaO", "unit": "mol", "material": "F"}
{"value": "31.7", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "mol", "material": "F"}
{"value": "35.0", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "F"}
{"value": "4.8", "type": "Glass_Compound_Amount", "constituent": "TiO2", "unit": "mol", "material": "F"}
{"value": "582", "type": "Other"}
{"value": "627", "type": "Other"}
{"value": "27.9", "type": "Glass_Compound_Amount", "constituent": "BaO", "unit": "mol", "material": "G"}
{"value": "31.0", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "mol", "material": "G"}
{"value": "34.1", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "G"}
{"value": "7.0", "type": "Glass_Compound_Amount", "constituent": "TiO2", "unit": "mol", "material": "G"}
{"value": "583", "type": "Other"}
{"value": "628", "type": "Other"}
{"value": "27.3", "type": "Glass_Compound_Amount", "constituent": "BaO", "unit": "mol", "material": "H"}
{"value": "30.3", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "mol", "material": "H"}
{"value": "33.4", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "H"}
{"value": "9.1", "type": "Glass_Compound_Amount", "constituent": "TiO2", "unit": "mol", "material": "H"}
{"value": "586", "type": "Other"}
{"value": "635", "type": "Other"}
{"value": "26.7", "type": "Glass_Compound_Amount", "constituent": "BaO", "unit": "mol", "material": "I"}
{"value": "29.6", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "mol", "material": "I"}
{"value": "32.6", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "I"}
{"value": "11.1", "type": "Glass_Compound_Amount", "constituent": "TiO2", "unit": "mol", "material": "I"}
{"value": "587", "type": "Other"}
{"value": "637", "type": "Other"}
{"value": "29.3", "type": "Glass_Compound_Amount", "constituent": "BaO", "unit": "mol", "material": "K"}
{"value": "32.5", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "mol", "material": "K"}
{"value": "35.8", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "K"}
{"value": "1.2", "type": "Glass_Compound_Amount", "constituent": "TiO2", "unit": "mol", "material": "K"}
{"value": "1.2", "type": "Glass_Compound_Amount", "constituent": "ZrO2", "unit": "mol", "material": "K"}
{"value": "602", "type": "Other"}
{"value": "671", "type": "Other"}
{"value": "28.6", "type": "Glass_Compound_Amount", "constituent": "BaO", "unit": "mol", "material": "L"}
{"value": "31.7", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "mol", "material": "L"}
{"value": "35.0", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "L"}
{"value": "2.4", "type": "Glass_Compound_Amount", "constituent": "TiO2", "unit": "mol", "material": "L"}
{"value": "2.4", "type": "Glass_Compound_Amount", "constituent": "ZrO2", "unit": "mol", "material": "L"}
{"value": "607", "type": "Other"}
{"value": "657", "type": "Other"}
{"value": "27.9", "type": "Glass_Compound_Amount", "constituent": "BaO", "unit": "mol", "material": "M"}
{"value": "31.0", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "mol", "material": "M"}
{"value": "34.1", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "M"}
{"value": "3.5", "type": "Glass_Compound_Amount", "constituent": "TiO2", "unit": "mol", "material": "M"}
{"value": "3.5", "type": "Glass_Compound_Amount", "constituent": "ZrO2", "unit": "mol", "material": "M"}
{"value": "601", "type": "Other"}
{"value": "642", "type": "Other"}
{"value": "27.3", "type": "Glass_Compound_Amount", "constituent": "BaO", "unit": "mol", "material": "N"}
{"value": "30.3", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "mol", "material": "N"}
{"value": "33.4", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "N"}
{"value": "4.5", "type": "Glass_Compound_Amount", "constituent": "TiO2", "unit": "mol", "material": "N"}
{"value": "4.5", "type": "Glass_Compound_Amount", "constituent": "ZrO2", "unit": "mol", "material": "N"}
{"value": "608", "type": "Other"}
{"value": "664", "type": "Other"}
{"value": "26.7", "type": "Glass_Compound_Amount", "constituent": "BaO", "unit": "mol", "material": "O"}
{"value": "29.6", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "mol", "material": "O"}
{"value": "32.6", "type": "Glass_Compound_Amount", "constituent": "B2O3", "unit": "mol", "material": "O"}
{"value": "5.6", "type": "Glass_Compound_Amount", "constituent": "TiO2", "unit": "mol", "material": "O"}
{"value": "5.6", "type": "Glass_Compound_Amount", "constituent": "ZrO2", "unit": "mol", "material": "O"}
{"value": "607", "type": "Other"}
{"value": "664", "type": "Other"} | science.discomat_te.234 |
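A minimal sketch of how the cell-to-JSON conversion requested above could be scripted; the header names, the mol unit, and the rule sending the two temperature columns to "Other" are assumptions read off the caption and the gold output:

```python
import json

# Illustrative sketch of the table-cell-to-JSON task shown above (assumed rules).
HEADER = ["Sample", "BaO", "Al2O3", "B2O3", "TiO2", "ZrO2", "Tg", "TD"]
CONSTITUENTS = {"BaO", "Al2O3", "B2O3", "TiO2", "ZrO2"}

def row_to_json(row: str):
    cells = [c.strip() for c in row.strip("|").split("|")]
    material, records = cells[0], []
    for name, value in zip(HEADER[1:], cells[1:]):
        if value in {"-", ""}:               # empty cells yield no record
            continue
        if name in CONSTITUENTS:             # composition cells keep full metadata
            records.append({"value": value, "type": "Glass_Compound_Amount",
                            "constituent": name, "unit": "mol", "material": material})
        else:                                # Tg / TD temperatures become "Other"
            records.append({"value": value, "type": "Other"})
    return records

for record in row_to_json("| C | 27.7 | 33.3 | 34.2 | 5.0 | - | 590 | 648 |"):
    print(json.dumps(record))
```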
science.discomat_te | | Stage | Description | Duration (sec) |
| Heating | Heat sample from room temperature to molding temperature. Glass sample rests on lower mold with a gap of 2.62mm from the upper mold. | 210 |
| Soaking | Molding temperature was maintained to achieve uniform temperature distribution within sample. The gap was maintained during this stage. | >500 |
| Main pressing | Force Control: A pressing force of 1500N was applied to the specimen until the desired center thickness value was achieved. | Variable |
| Slow cooling with a maintenance force | Controlled slow cooling with the applied "maintenance" force reduced to 500N. | 300 |
| Rapid cooling | Rapid cooling back to room temperature. | 650 |
Caption: Recommended duration times for the stages of the RCT used for Data Set #2 and the simulations unless noted otherwise. Specimen dimensions are always OD=19.15mm, ID=9.59mm inner diameter, and H=6.37mm.
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "210", "type": "Other"}
{"value": "300", "type": "Other"}
{"value": "650", "type": "Other"} | science.discomat_te.121 |
science.qasper_abstractive_qa | Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.
Document Content:
Multi-class Hierarchical Question Classification for Multiple Choice Science Exams
Prior work has demonstrated that question classification (QC), recognizing the problem domain of a question, can help answer it more accurately. However, developing strong QC algorithms has been hindered by the limited size and complexity of annotated data available. To address this, we present the largest challenge dataset for QC, containing 7,787 science exam questions paired with detailed classification labels from a fine-grained hierarchical taxonomy of 406 problem domains. We then show that a BERT-based model trained on this dataset achieves a large (+0.12 MAP) gain compared with previous methods, while also achieving state-of-the-art performance on benchmark open-domain and biomedical QC datasets. Finally, we show that using this model's predictions of question topic significantly improves the accuracy of a question answering system by +1.7% P@1, with substantial future gains possible as QC performance improves.
Introduction
Understanding what a question is asking is one of the first steps that humans use to work towards an answer. In the context of question answering, question classification allows automated systems to intelligently target their inference systems to domain-specific solvers capable of addressing specific kinds of questions and problem solving methods with high confidence and answer accuracy BIBREF0 , BIBREF1 .
To date, question classification has primarily been studied in the context of open-domain TREC questions BIBREF2 , with smaller recent datasets available in the biomedical BIBREF3 , BIBREF4 and education BIBREF5 domains. The open-domain TREC question corpus is a set of 5,952 short factoid questions paired with a taxonomy developed by Li and Roth BIBREF6 that includes 6 coarse answer types (such as entities, locations, and numbers), and 50 fine-grained types (e.g. specific kinds of entities, such as animals or vehicles). While a wide variety of syntactic, semantic, and other features and classification methods have been applied to this task, culminating in near-perfect classification performance BIBREF7 , recent work has demonstrated that QC methods developed on TREC questions generally fail to transfer to datasets with more complex questions such as those in the biomedical domain BIBREF3 , likely due in part to the simplicity and syntactic regularity of the questions, and the ability for simpler term-frequency models to achieve near-ceiling performance BIBREF8 . In this work we explore question classification in the context of multiple choice science exams. Standardized science exams have been proposed as a challenge task for question answering BIBREF9 , as most questions contain a variety of challenging inference problems BIBREF10 , BIBREF11 , require detailed scientific and common-sense knowledge to answer and explain the reasoning behind those answers BIBREF12 , and questions are often embedded in complex examples or other distractors. Question classification taxonomies and annotation are difficult and expensive to generate, and because of the unavailability of this data, to date most models for science questions use one or a small number of generic solvers that perform little or no question decomposition BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 . Our long-term interest is in developing methods that intelligently target their inferences to generate both correct answers and compelling human-readable explanations for the reasoning behind those answers. The lack of targeted solving – using the same methods for inferring answers to spatial questions about planetary motion, chemical questions about photosynthesis, and electrical questions about circuit continuity – is a substantial barrier to increasing performance (see Figure FIGREF1 ).
To address this need for developing methods of targeted inference, this work makes the following contributions:
Questions and Classification Taxonomy
Questions: We make use of the 7,787 science exam questions of the Aristo Reasoning Challenge (ARC) corpus BIBREF31 , which contains standardized 3rd to 9th grade science questions from 12 US states from the past decade. Each question is a 4-choice multiple choice question. Summary statistics comparing the complexity of ARC and TREC questions are shown in Table TABREF5 .
Taxonomy: Starting with the syllabus for the NY Regents exam, we identified 9 coarse question categories (Astronomy, Earth Science, Energy, Forces, Life Science, Matter, Safety, Scientific Method, Other), then through a data-driven analysis of 3 exam study guides and the 3,370 training questions, expanded the taxonomy to include 462 fine-grained categories across 6 hierarchical levels of granularity. The taxonomy is designed to allow categorizing questions into broad curriculum topics at its coarsest level, while labels at full specificity separate questions into narrow problem domains suitable for targeted inference methods. Because of its size, a subset of the classification taxonomy is shown in Table TABREF6 , with the full taxonomy and class definitions included in the supplementary material.
Annotation: Because of the complexity of the questions, it is possible for one question to bridge multiple categories – for example, a wind power generation question may span both renewable energy and energy conversion. We allow up to 2 labels per question, and found that 16% of questions required multiple labels. Each question was independently annotated by two annotators, with the lead annotator a domain expert in standardized exams. Annotators first independently annotated the entire question set, then questions without complete agreement were discussed until resolution. Before resolution, interannotator agreement (Cohen's Kappa) was INLINEFORM0 = 0.58 at the finest level of granularity, and INLINEFORM1 = 0.85 when considering only the coarsest 9 categories. This is considered moderate to strong agreement BIBREF32 . Based on the results of our error analysis (see Section SECREF21 ), we estimate the overall accuracy of the question classification labels after resolution to be approximately 96%. While the full taxonomy contains 462 fine-grained categories derived from both standardized questions, study guides, and exam syllabi, we observed only 406 of these categories are tested in the ARC question set.
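Cohen's kappa, the agreement statistic quoted above, can be computed from two annotators' label sequences as in the short sketch below; the label arrays are invented for illustration:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two equal-length label sequences (toy illustration)."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n                  # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[k] * cb[k] for k in set(a) | set(b)) / n ** 2   # chance agreement
    return (p_o - p_e) / (1 - p_e)

annotator_1 = ["Energy", "Matter", "Forces", "Matter", "Energy"]
annotator_2 = ["Energy", "Matter", "Matter", "Matter", "Energy"]
print(round(cohens_kappa(annotator_1, annotator_2), 2))          # 0.67
```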
Automating Error Analyses with QC
Detailed error analyses for question answering systems are typically labor intensive, often requiring hours or days to perform manually. As a result, error analyses are typically completed infrequently, in spite of their utility to key decisions in the algorithm or knowledge construction process. Here we show that having access to detailed question classification labels specifying fine-grained problem domains provides a mechanism to automatically generate error analyses in seconds instead of days.
To illustrate the utility of this approach, Table TABREF26 shows the performance of the BERT QA+QC model broken down by specific question classes. This allows automatically identifying a given model's strengths – for example, here questions about Human Health, Material Properties, and Earth's Inner Core are well addressed by the BERT-QA model, and achieve well above the average QA performance of 49%. Similarly, areas of deficit include Changes of State, Reproduction, and Food Chain Processes questions, which see below-average QA performance. The lowest performing class, Safety Procedures, demonstrates that while this model has strong performance in many areas of scientific reasoning, it is worse than chance at answering questions about safety, and would be inappropriate to deploy for safety-critical tasks.
While this analysis is shown at an intermediate (L2) level of specificity for space, more detailed analyses are possible. For example, overall QA performance on Scientific Inference questions is near average (47%), but increasing granularity to L3 we observe that questions addressing Experiment Design or Making Inferences – challenging questions even for humans – perform poorly (33% and 20%) when answered by the QA system. This allows a system designer to intelligently target problem-specific knowledge resources and inference methods to address deficits in specific areas.
Acknowledgements
The authors wish to thank Elizabeth Wainwright and Stephen Marmorstein for piloting an earlier version of the question classification annotation. We thank the Allen Institute for Artificial Intelligence and National Science Foundation (NSF 1815948 to PJ) for funding this work.
Question Classification
Because of space limitations the question classification results are reported in Table TABREF7 only using Mean Average Precision (MAP). We also include Precision@1 (P@1), the overall accuracy of the highest-ranked prediction for each question classification model, in Table TABREF30 .
CNN: We implemented the CNN sentence classifier of Kim BIBREF28 , which demonstrated near state-of-the-art performance on a number of sentence classification tasks (including TREC question classification) by using pre-trained word embeddings BIBREF40 as feature extractors in a CNN model. We adapted the original CNN non-static model to multi-label classification by changing the fully connected softmax layer to sigmoid layer to produce a sigmoid output for each label simultaneously. We followed the same parameter settings reported by Kim et al. except the learning rate, which was tuned based on the development set. Pilot experiments did not show a performance improvement over the baseline model.
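A rough PyTorch sketch of this sigmoid multi-label adaptation is shown below; the filter counts, kernel sizes and 9-way label space are illustrative guesses, and the pre-trained word-embedding initialization is omitted:

```python
import torch
import torch.nn as nn

class MultiLabelTextCNN(nn.Module):
    """Kim-style sentence CNN with a sigmoid head for multi-label output (sketch)."""
    def __init__(self, vocab_size, emb_dim=300, n_filters=100,
                 kernel_sizes=(3, 4, 5), n_labels=9):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, k) for k in kernel_sizes])
        self.fc = nn.Linear(n_filters * len(kernel_sizes), n_labels)

    def forward(self, tokens):                       # tokens: (batch, seq_len)
        x = self.embed(tokens).transpose(1, 2)       # (batch, emb_dim, seq_len)
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return torch.sigmoid(self.fc(torch.cat(pooled, dim=1)))

model = MultiLabelTextCNN(vocab_size=20000)
probs = model(torch.randint(0, 20000, (8, 40)))      # 8 questions, 40 tokens each
loss = nn.BCELoss()(probs, torch.zeros_like(probs))  # one sigmoid target per label
```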
Label Definitions: Question terms can be mapped to categories using manual heuristics BIBREF19 . To mitigate sparsity and limit heuristic use, here we generated a feature comparing the cosine similarity of composite embedding vectors BIBREF51 representing question text and category definition text, using pretrained GloVe embeddings BIBREF52 . Pilot experiments showed that performance did not significantly improve.
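That similarity feature might be computed along the lines of the sketch below, where `glove` is a hypothetical word-to-vector dictionary loaded from the pretrained GloVe files and simple averaging is an assumed composition scheme:

```python
import numpy as np

def composite_vector(text, glove, dim=300):
    """Average the pretrained vectors of in-vocabulary tokens (assumed scheme)."""
    vecs = [glove[w] for w in text.lower().split() if w in glove]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def definition_similarity(question, definition, glove):
    q = composite_vector(question, glove)
    d = composite_vector(definition, glove)
    denom = np.linalg.norm(q) * np.linalg.norm(d)
    return float(q @ d / denom) if denom else 0.0
```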
Question Expansion with Hypernyms (Probase Version): One of the challenges of hypernym expansion BIBREF22 , BIBREF19 , BIBREF3 is determining a heuristic for the termination depth of hypernym expansion, as in Van-tu et al. BIBREF24 . Because science exam questions are often grounded in specific examples (e.g. a car rolling down a hill coming to a stop due to friction), we hypothesized that knowing certain categories of entities can be important for identifying specific question types – for example, observing that a question contains a kind of animal may be suggestive of a Life Science question, where similarly vehicles or materials present in questions may suggest questions about Forces or Matter, respectively. The challenge with WordNet is that key hypernyms can be at very different depths from query terms – for example, “cat” is distance 10 away from living thing, “car” is distance 4 away from vehicle, and “copper” is distance 2 away from material. Choosing a static threshold (or decaying threshold, as in Van-tu et al. BIBREF24 ) will inherently reduce recall and limit the utility of this method of query expansion.
To address this, we piloted a hypernym expansion experiment using the Probase taxonomy BIBREF53 , a collection of 20.7M is-a pairs mined from the web, in place of WordNet. Because the taxonomic pairs in Probase come from use in naturalistic settings, links tend to jump levels in the WordNet taxonomy and be expressed in common forms. For example, INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 , are each distance 1 in the Probase taxonomy, and high-frequency (i.e. high-confidence) taxonomic pairs.
Similar to query expansion using WordNet Hypernyms, our pilot experiments did not observe a benefit to using Probase hypernyms over the baseline model. An error analysis suggested that the large number of noisy and out-of-context links present in Probase may have reduced performance, and in response we constructed a filtered list of 710 key hypernym categories manually filtered from a list of hypernyms seeded using high-frequency words from an in-house corpus of 250 in-domain science textbooks. We also did not observe a benefit to question classification over the baseline model when expanding only to this manually curated list of key hypernyms.
Topic words: We made use of the 77 TREC word lists of Li and Roth BIBREF6 , containing a total of 3,257 terms, as well as an in-house set of 144 word lists on general and elementary science topics mined from the web, such as ANIMALS, VEGETABLES, and VEHICLES, containing a total of 29,059 words. To mitigate sparsity, features take the form of counts for a specific topic – detecting the words turtle and giraffe in a question would provide a count of 2 for the ANIMAL feature. This provides a light form of domain-specific entity and action (e.g. types of changes) recognition. Pilot experiments showed that this wordlist feature did add a modest performance benefit of approximately 2% to question classification accuracy. Taken together with our results on hypernym expansion, this suggests that manually curated wordlists can show modest benefits for question classification performance, but at the expense of substantial effort in authoring or collecting these extensive wordlists.
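The wordlist feature reduces to counting hits per topic, roughly as below; the two toy lists stand in for the 77 TREC and 144 science lists described above:

```python
TOPIC_WORDLISTS = {                         # toy stand-ins for the curated lists
    "ANIMALS": {"turtle", "giraffe", "frog"},
    "VEHICLES": {"car", "truck", "rocket"},
}

def topic_counts(question: str):
    tokens = question.lower().split()
    return {topic: sum(t in words for t in tokens)
            for topic, words in TOPIC_WORDLISTS.items()}

print(topic_counts("a turtle and a giraffe race downhill"))   # {'ANIMALS': 2, 'VEHICLES': 0}
```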
Hyperparameters: For each layer of the class label hierarchy, we tune the hyperparameters based on the development set. We use the pretrained BERT-Base (uncased) checkpoint. We use the following hyperparameters: maximum sequence length = 256, batch size = 16, learning rates: 2e-5 (L1), 5e-5 (L2-L6), epochs: 5 (L1), 25 (L2-L6).
Statistics: We use non-parametric bootstrap resampling to compare the baseline (Li and Roth BIBREF6 model) to all experimental models to determine significance, using 10,000 bootstrap resamples.
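One common way to implement such a paired bootstrap comparison is sketched below; the one-sided p-value convention and the per-question score inputs are assumptions, since the exact procedure is not spelled out here:

```python
import numpy as np

def bootstrap_p_value(baseline_scores, model_scores, n_resamples=10_000, seed=0):
    """Paired nonparametric bootstrap over questions (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    diff = np.asarray(model_scores) - np.asarray(baseline_scores)
    idx = rng.integers(0, len(diff), size=(n_resamples, len(diff)))
    resampled_means = diff[idx].mean(axis=1)
    # Fraction of resamples in which the model fails to beat the baseline.
    return float((resampled_means <= 0).mean())
```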
Table 10: Interannotator Agreement at L6 (the native level the annotation was completed at), as well as agreement for truncated levels of the hierarchy from coarse to fine classification.
Query: How was the dataset collected?
JSON Response: | {"answer": "from 3rd to 9th grade science questions collected from 12 US states", "evidence": ["Questions: We make use of the 7,787 science exam questions of the Aristo Reasoning Challenge (ARC) corpus BIBREF31 , which contains standardized 3rd to 9th grade science questions from 12 US states from the past decade. Each question is a 4-choice multiple choice question. Summary statistics comparing the complexity of ARC and TREC questions are shown in Table TABREF5 ."]} | science.qasper_abstractive_qa.165.0 |
science.qasper_abstractive_qa | Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.
Document Content:
Deep Learning the EEG Manifold for Phonological Categorization from Active Thoughts
Speech-related Brain Computer Interfaces (BCI) aim primarily at finding an alternative vocal communication pathway for people with speaking disabilities. As a step towards full decoding of imagined speech from active thoughts, we present a BCI system for subject-independent classification of phonological categories exploiting a novel deep learning based hierarchical feature extraction scheme. To better capture the complex representation of high-dimensional electroencephalography (EEG) data, we compute the joint variability of EEG electrodes into a channel cross-covariance matrix. We then extract the spatio-temporal information encoded within the matrix using a mixed deep neural network strategy. Our model framework is composed of a convolutional neural network (CNN), a long-short term network (LSTM), and a deep autoencoder. We train the individual networks hierarchically, feeding their combined outputs in a final gradient boosting classification step. Our best models achieve an average accuracy of 77.9% across five different binary classification tasks, providing a significant 22.5% improvement over previous methods. As we also show visually, our work demonstrates that the speech imagery EEG possesses significant discriminative information about the intended articulatory movements responsible for natural speech synthesis.
Introduction
Decoding intended speech or motor activity from brain signals is one of the major research areas in Brain Computer Interface (BCI) systems BIBREF0 , BIBREF1 . In particular, speech-related BCI technologies attempt to provide effective vocal communication strategies for controlling external devices through speech commands interpreted from brain signals BIBREF2 . Not only do they provide neuro-prosthetic help for people with speaking disabilities and neuro-muscular disorders like locked-in-syndrome, nasopharyngeal cancer, and amytotropic lateral sclerosis (ALS), but also equip people with a better medium to communicate and express thoughts, thereby improving the quality of rehabilitation and clinical neurology BIBREF3 , BIBREF4 . Such devices also have applications in entertainment, preventive treatments, personal communication, games, etc. Furthermore, BCI technologies can be utilized in silent communication, as in noisy environments, or situations where any sort of audio-visual communication is infeasible.
Among the various brain activity-monitoring modalities in BCI, electroencephalography (EEG) BIBREF5 , BIBREF6 has demonstrated promising potential to differentiate between various brain activities through measurement of related electric fields. EEG is non-invasive, portable, low cost, and provides satisfactory temporal resolution. This makes EEG suitable to realize BCI systems. EEG data, however, is challenging: these data are high dimensional, have poor SNR, and suffer from low spatial resolution and a multitude of artifacts. For these reasons, it is not particularly obvious how to decode the desired information from raw EEG signals. Although the area of BCI based speech intent recognition has received increasing attention among the research community in the past few years, most research has focused on classification of individual speech categories in terms of discrete vowels, phonemes and words BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 . This includes categorization of imagined EEG signal into binary vowel categories like /a/, /u/ and rest BIBREF7 , BIBREF8 , BIBREF9 ; binary syllable classes like /ba/ and /ku/ BIBREF1 , BIBREF10 , BIBREF11 , BIBREF12 ; a handful of control words like 'up', 'down', 'left', 'right' and 'select' BIBREF15 or others like 'water', 'help', 'thanks', 'food', 'stop' BIBREF13 , Chinese characters BIBREF14 , etc. Such works mostly involve traditional signal processing or manual feature handcrafting along with linear classifiers (e.g., SVMs). In our recent work BIBREF16 , we introduced deep learning models for classification of vowels and words that achieved 23.45% improvement of accuracy over the baseline.
Production of articulatory speech is an extremely complicated process, thereby rendering understanding of the discriminative EEG manifold corresponding to imagined speech highly challenging. As a result, most of the existing approaches failed to achieve satisfactory accuracy on decoding speech tokens from the speech imagery EEG data. Perhaps, for these reasons, very little work has been devoted to relating the brain signals to the underlying articulation. The few exceptions include BIBREF17 , BIBREF18 . In BIBREF17 , Zhao et al. used manually handcrafted features from EEG data, combined with speech audio and facial features to achieve classification of the phonological categories varying based on the articulatory steps. However, the imagined speech classification accuracy based on EEG data alone, as reported in BIBREF17 , BIBREF18 , are not satisfactory in terms of accuracy and reliability. We now turn to describing our proposed models.
Proposed Framework
Cognitive learning process underlying articulatory speech production involves incorporation of intermediate feedback loops and utilization of past information stored in the form of memory as well as hierarchical combination of several feature extractors. To this end, we develop our mixed neural network architecture composed of three supervised and a single unsupervised learning step, discussed in the next subsections and shown in Fig. FIGREF1 . We formulate the problem of categorizing EEG data based on speech imagery as a non-linear mapping INLINEFORM0 of a multivariate time-series input sequence INLINEFORM1 to fixed output INLINEFORM2 , i.e, mathematically INLINEFORM3 : INLINEFORM4 , where c and t denote the EEG channels and time instants respectively.
Preprocessing step
We follow similar pre-processing steps on raw EEG data as reported in BIBREF17 (ocular artifact removal using blind source separation, bandpass filtering and subtracting mean value from each channel) except that we do not perform Laplacian filtering step since such high-pass filtering may decrease information content from the signals in the selected bandwidth.
Joint variability of electrodes
Multichannel EEG data is high dimensional multivariate time series data whose dimensionality depends on the number of electrodes. It is a major hurdle to optimally encode information from these EEG data into lower dimensional space. In fact, our investigation based on a development set (as we explain later) showed that well-known deep neural networks (e.g., fully connected networks such as convolutional neural networks, recurrent neural networks and autoencoders) fail to individually learn such complex feature representations from single-trial EEG data. Besides, we found that instead of using the raw multi-channel high-dimensional EEG requiring large training times and resource requirements, it is advantageous to first reduce its dimensionality by capturing the information transfer among the electrodes. Instead of the conventional approach of selecting a handful of channels as BIBREF17 , BIBREF18 , we address this by computing the channel cross-covariance, resulting in positive, semi-definite matrices encoding the connectivity of the electrodes. We define channel cross-covariance (CCV) between any two electrodes INLINEFORM0 and INLINEFORM1 as: INLINEFORM2 . Next, we reject the channels which have significantly lower cross-covariance than auto-covariance values (where auto-covariance implies CCV on same electrode). We found this measure to be essential as the higher cognitive processes underlying speech planning and synthesis involve frequent information exchange between different parts of the brain. Hence, such matrices often contain more discriminative features and hidden information than mere raw signals. This is essentially different than our previous work BIBREF16 where we extract per-channel 1-D covariance information and feed it to the networks. We present our sample 2-D EEG cross-covariance matrices (of two individuals) in Fig. FIGREF2 .
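Because the exact CCV formula is not reproduced in this excerpt, the sketch below assumes a plain zero-lag channel covariance and an arbitrary rejection ratio; the electrode count and segment length are likewise made up:

```python
import numpy as np

def channel_cross_covariance(eeg):
    """eeg: (channels, timesteps) single-trial segment -> (channels, channels) CCV."""
    centered = eeg - eeg.mean(axis=1, keepdims=True)
    return centered @ centered.T / eeg.shape[1]

def keep_informative_channels(ccv, ratio=0.1):
    """Drop channels whose average cross-covariance stays far below their
    auto-covariance; the 0.1 ratio is an illustrative threshold, not the paper's."""
    auto = np.diag(ccv)
    cross = (np.abs(ccv).sum(axis=1) - auto) / (ccv.shape[0] - 1)
    return np.where(cross >= ratio * auto)[0]

trial = np.random.randn(62, 5000)        # hypothetical 62-electrode trial segment
ccv = channel_cross_covariance(trial)    # symmetric (62, 62) matrix
keep = keep_informative_channels(ccv)    # indices of retained electrodes
```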
CNN & LSTM
In order to decode spatial connections between the electrodes from the channel covariance matrix, we use a CNN BIBREF19 , in particular a four-layered 2D CNN stacking two convolutional and two fully connected hidden layers. The INLINEFORM0 feature map at a given CNN layer with input INLINEFORM1 , weight matrix INLINEFORM2 and bias INLINEFORM3 is obtained as: INLINEFORM4 . At this first level of hierarchy, the network is trained with the corresponding labels as target outputs, optimizing a cross-entropy cost function. In parallel, we apply a four-layered recurrent neural network on the channel covariance matrices to explore the hidden temporal features of the electrodes. Namely, we exploit an LSTM BIBREF20 consisting of two fully connected hidden layers, stacked with two LSTM layers and trained in a similar manner as CNN.
Deep autoencoder for spatio-temporal information
As we found the individually-trained parallel networks (CNN and LSTM) to be useful (see Table TABREF12 ), we suspected the combination of these two networks could provide a more powerful discriminative spatial and temporal representation of the data than each independent network. As such, we concatenate the last fully-connected layer from the CNN with its counterpart in the LSTM to compose a single feature vector based on these two penultimate layers. Ultimately, this forms a joint spatio-temporal encoding of the cross-covariance matrix.
In order to further reduce the dimensionality of the spatio-temporal encodings and cancel background noise effects BIBREF21 , we train an unsupervised deep autoencoder (DAE) on the fused heterogeneous features produced by the combined CNN and LSTM information. The DAE forms our second level of hierarchy, with 3 encoding and 3 decoding layers, and mean squared error (MSE) as the cost function.
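A sketch of this fusion-plus-autoencoder stage in the same PyTorch setting as above: the two penultimate feature vectors are concatenated and compressed by an autoencoder with three encoding and three decoding layers trained on MSE. Layer widths and the latent size are placeholders:

```python
import torch
import torch.nn as nn

class DeepAutoencoder(nn.Module):
    """3 encoding + 3 decoding layers, trained with MSE on the fused CNN+LSTM features."""
    def __init__(self, in_dim=256, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, in_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# fused spatio-temporal encoding: concatenate the two penultimate feature vectors
cnn_feat, lstm_feat = torch.randn(8, 128), torch.randn(8, 128)   # stand-ins for the branch outputs
fused = torch.cat([cnn_feat, lstm_feat], dim=1)
dae = DeepAutoencoder(in_dim=fused.shape[1])
recon, latent = dae(fused)
loss = nn.MSELoss()(recon, fused)        # unsupervised reconstruction objective
```

The `latent` vectors are what the next stage classifies.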
Classification with Extreme Gradient Boost
At the third level of hierarchy, the discrete latent vector representation of the deep autoencoder is fed into an Extreme Gradient Boost based classification layer BIBREF22 , BIBREF23 motivated by BIBREF21 . It is a regularized gradient boosted decision tree that performs well on structured problems. Since our EEG-phonological pairwise classification has an internal structure involving individual phonemes and words, it seems to be a reasonable choice of classifier. The classifier receives its input from the latent vectors of the deep autoencoder and is trained in a supervised manner to output the final predicted classes corresponding to the speech imagery.
Dataset
We evaluate our model on a publicly available dataset, KARA ONE BIBREF17 , composed of multimodal data for stimulus-based, imagined and articulated speech states corresponding to 7 phonemic/syllabic prompts ( /iy/, /piy/, /tiy/, /diy/, /uw/, /m/, /n/ ) as well as 4 words (pat, pot, knew and gnaw). The dataset consists of 14 participants, with each prompt presented 11 times to each individual. Since our intention is to classify the phonological categories from human thoughts, we discard the facial and audio information and only consider the EEG data corresponding to imagined speech. It is noteworthy that, given the mixed nature of EEG signals, it is reportedly challenging to attain a pairwise EEG-phoneme mapping BIBREF18 . In order to explore the problem space, we thus specifically target the five binary classification problems addressed in BIBREF17 , BIBREF18 , i.e., presence/absence of consonants, phonemic nasal, bilabial, high-front vowels and high-back vowels.
Training and hyperparameter selection
We performed two sets of experiments with the single-trial EEG data. In PHASE-ONE, our goal was to identify the best architectures and hyperparameters for our networks with a reasonable number of runs. For PHASE-ONE, we randomly shuffled and divided the data (1913 signals from 14 individuals) into train (80%), development (10%) and test (10%) sets. In PHASE-TWO, in order to perform a fair comparison with the previous methods reported on the same dataset, we performed a leave-one-subject-out cross-validation experiment using the best settings learned from PHASE-ONE.
The architectural parameters and hyperparameters listed in Table TABREF6 were selected through an exhaustive grid-search based on the validation set of PHASE-ONE. We conducted a series of empirical studies starting from single hidden-layered networks for each of the blocks and, based on the validation accuracy, we increased the depth of each given network and selected the optimal parameter set from all possible combinations of parameters. For the gradient boosting classification, we fixed the maximum depth at 10, the number of estimators at 5000, the learning rate at 0.1, the regularization coefficient at 0.3, the subsample ratio at 0.8, and the column-sample/iteration at 0.4. We did not find any notable change in accuracy when varying the other hyperparameters of the gradient boosting classifier.
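Those fixed settings map naturally onto an XGBoost configuration; in the sketch below the objective and the mapping of "regularization coefficient" to `reg_lambda` and "column-sample/iteration" to `colsample_bytree` are assumptions about how the text translates into the library's parameter names:

```python
from xgboost import XGBClassifier

# Values quoted in the text; the parameter-name mapping is assumed.
clf = XGBClassifier(
    max_depth=10,            # "maximum depth at 10"
    n_estimators=5000,       # "number of estimators at 5000"
    learning_rate=0.1,       # "learning rate at 0.1"
    reg_lambda=0.3,          # "regularization coefficient at 0.3" (assumed to mean L2)
    subsample=0.8,           # "subsample ratio at 0.8"
    colsample_bytree=0.4,    # "column-sample/iteration at 0.4" (assumed mapping)
    objective="binary:logistic",
)

# latent_train / latent_test: DAE latent vectors; y_train: binary labels for one task
# clf.fit(latent_train, y_train)
# y_pred = clf.predict(latent_test)
```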
Conclusion and future direction
In an attempt to move a step towards understanding the speech information encoded in brain signals, we developed a novel mixed deep neural network scheme for a number of binary classification tasks from speech imagery EEG data. Unlike previous approaches, which mostly deal with subject-dependent classification of EEG into discrete vowel or word labels, this work investigates a subject-invariant mapping of EEG data with different phonological categories, varying widely in terms of the underlying articulator motions (e.g., involvement or non-involvement of the lips and velum, variation of tongue movements, etc.). Our model takes advantage of the feature extraction capabilities of the CNN and LSTM as well as the denoising and dimensionality-reduction benefits of deep autoencoders. We took BIBREF17 , BIBREF18 as the baseline works investigating the same problem and compared our performance with theirs. Our proposed method outperforms the existing methods across all five binary classification tasks by a large average margin of 22.51%.
Acknowledgments
This work was funded by the Natural Sciences and Engineering Research Council (NSERC) of Canada and Canadian Institutes for Health Research (CIHR).
Fig. 2. Cross covariance Matrices : Rows correspond to two different subjects; Columns (from left to right) correspond to sample examples for bilabial, nasal, vowel, /uw/, and /iy/.
Table 1. Selected parameter sets
Fig. 3. tSNE feature visualization for ±nasal (left) and V/C classification (right). Red and green colours indicate the distribution of two different types of features
Table 2. Results in accuracy on 10% test data in the first study
Fig. 4. Kappa coefficient values for above-chance accuracy based on Table 2
Query: How was the spatial aspect of the EEG signal computed?
JSON Response: | {"answer": "They use four-layered 2D CNN and two fully connected hidden layers on the channel covariance matrix to compute the spatial aspect.", "evidence": ["In order to decode spatial connections between the electrodes from the channel covariance matrix, we use a CNN BIBREF19 , in particular a four-layered 2D CNN stacking two convolutional and two fully connected hidden layers. The INLINEFORM0 feature map at a given CNN layer with input INLINEFORM1 , weight matrix INLINEFORM2 and bias INLINEFORM3 is obtained as: INLINEFORM4 . At this first level of hierarchy, the network is trained with the corresponding labels as target outputs, optimizing a cross-entropy cost function. In parallel, we apply a four-layered recurrent neural network on the channel covariance matrices to explore the hidden temporal features of the electrodes. Namely, we exploit an LSTM BIBREF20 consisting of two fully connected hidden layers, stacked with two LSTM layers and trained in a similar manner as CNN."]} | science.qasper_abstractive_qa.119.1 | [
{
"content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nDeep Learning the EEG Manifold for Phonological Categorization from Active Thoughts\n\nSpeech-related Brain Computer Interfaces (BCI) aim primarily at finding an alternative vocal communication pathway for people with speaking disabilities. As a step towards full decoding of imagined speech from active thoughts, we present a BCI system for subject-independent classification of phonological categories exploiting a novel deep learning based hierarchical feature extraction scheme. To better capture the complex representation of high-dimensional electroencephalography (EEG) data, we compute the joint variability of EEG electrodes into a channel cross-covariance matrix. We then extract the spatio-temporal information encoded within the matrix using a mixed deep neural network strategy. Our model framework is composed of a convolutional neural network (CNN), a long-short term network (LSTM), and a deep autoencoder. We train the individual networks hierarchically, feeding their combined outputs in a final gradient boosting classification step. Our best models achieve an average accuracy of 77.9% across five different binary classification tasks, providing a significant 22.5% improvement over previous methods. As we also show visually, our work demonstrates that the speech imagery EEG possesses significant discriminative information about the intended articulatory movements responsible for natural speech synthesis.\n\nIntroduction\nDecoding intended speech or motor activity from brain signals is one of the major research areas in Brain Computer Interface (BCI) systems BIBREF0 , BIBREF1 . In particular, speech-related BCI technologies attempt to provide effective vocal communication strategies for controlling external devices through speech commands interpreted from brain signals BIBREF2 . Not only do they provide neuro-prosthetic help for people with speaking disabilities and neuro-muscular disorders like locked-in-syndrome, nasopharyngeal cancer, and amytotropic lateral sclerosis (ALS), but also equip people with a better medium to communicate and express thoughts, thereby improving the quality of rehabilitation and clinical neurology BIBREF3 , BIBREF4 . Such devices also have applications in entertainment, preventive treatments, personal communication, games, etc. Furthermore, BCI technologies can be utilized in silent communication, as in noisy environments, or situations where any sort of audio-visual communication is infeasible.\nAmong the various brain activity-monitoring modalities in BCI, electroencephalography (EEG) BIBREF5 , BIBREF6 has demonstrated promising potential to differentiate between various brain activities through measurement of related electric fields. EEG is non-invasive, portable, low cost, and provides satisfactory temporal resolution. This makes EEG suitable to realize BCI systems. EEG data, however, is challenging: these data are high dimensional, have poor SNR, and suffer from low spatial resolution and a multitude of artifacts. For these reasons, it is not particularly obvious how to decode the desired information from raw EEG signals. 
Although the area of BCI based speech intent recognition has received increasing attention among the research community in the past few years, most research has focused on classification of individual speech categories in terms of discrete vowels, phonemes and words BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 . This includes categorization of imagined EEG signal into binary vowel categories like /a/, /u/ and rest BIBREF7 , BIBREF8 , BIBREF9 ; binary syllable classes like /ba/ and /ku/ BIBREF1 , BIBREF10 , BIBREF11 , BIBREF12 ; a handful of control words like 'up', 'down', 'left', 'right' and 'select' BIBREF15 or others like 'water', 'help', 'thanks', 'food', 'stop' BIBREF13 , Chinese characters BIBREF14 , etc. Such works mostly involve traditional signal processing or manual feature handcrafting along with linear classifiers (e.g., SVMs). In our recent work BIBREF16 , we introduced deep learning models for classification of vowels and words that achieved 23.45% improvement of accuracy over the baseline.\nProduction of articulatory speech is an extremely complicated process, thereby rendering understanding of the discriminative EEG manifold corresponding to imagined speech highly challenging. As a result, most of the existing approaches failed to achieve satisfactory accuracy on decoding speech tokens from the speech imagery EEG data. Perhaps, for these reasons, very little work has been devoted to relating the brain signals to the underlying articulation. The few exceptions include BIBREF17 , BIBREF18 . In BIBREF17 , Zhao et al. used manually handcrafted features from EEG data, combined with speech audio and facial features to achieve classification of the phonological categories varying based on the articulatory steps. However, the imagined speech classification accuracy based on EEG data alone, as reported in BIBREF17 , BIBREF18 , are not satisfactory in terms of accuracy and reliability. We now turn to describing our proposed models.\n\nProposed Framework\nCognitive learning process underlying articulatory speech production involves incorporation of intermediate feedback loops and utilization of past information stored in the form of memory as well as hierarchical combination of several feature extractors. To this end, we develop our mixed neural network architecture composed of three supervised and a single unsupervised learning step, discussed in the next subsections and shown in Fig. FIGREF1 . We formulate the problem of categorizing EEG data based on speech imagery as a non-linear mapping INLINEFORM0 of a multivariate time-series input sequence INLINEFORM1 to fixed output INLINEFORM2 , i.e, mathematically INLINEFORM3 : INLINEFORM4 , where c and t denote the EEG channels and time instants respectively.\n\nPreprocessing step\nWe follow similar pre-processing steps on raw EEG data as reported in BIBREF17 (ocular artifact removal using blind source separation, bandpass filtering and subtracting mean value from each channel) except that we do not perform Laplacian filtering step since such high-pass filtering may decrease information content from the signals in the selected bandwidth.\n\nJoint variability of electrodes\nMultichannel EEG data is high dimensional multivariate time series data whose dimensionality depends on the number of electrodes. It is a major hurdle to optimally encode information from these EEG data into lower dimensional space. 
In fact, our investigation based on a development set (as we explain later) showed that well-known deep neural networks (e.g., fully connected networks such as convolutional neural networks, recurrent neural networks and autoencoders) fail to individually learn such complex feature representations from single-trial EEG data. Besides, we found that instead of using the raw multi-channel high-dimensional EEG requiring large training times and resource requirements, it is advantageous to first reduce its dimensionality by capturing the information transfer among the electrodes. Instead of the conventional approach of selecting a handful of channels as BIBREF17 , BIBREF18 , we address this by computing the channel cross-covariance, resulting in positive, semi-definite matrices encoding the connectivity of the electrodes. We define channel cross-covariance (CCV) between any two electrodes INLINEFORM0 and INLINEFORM1 as: INLINEFORM2 . Next, we reject the channels which have significantly lower cross-covariance than auto-covariance values (where auto-covariance implies CCV on same electrode). We found this measure to be essential as the higher cognitive processes underlying speech planning and synthesis involve frequent information exchange between different parts of the brain. Hence, such matrices often contain more discriminative features and hidden information than mere raw signals. This is essentially different than our previous work BIBREF16 where we extract per-channel 1-D covariance information and feed it to the networks. We present our sample 2-D EEG cross-covariance matrices (of two individuals) in Fig. FIGREF2 .\n\nCNN & LSTM\nIn order to decode spatial connections between the electrodes from the channel covariance matrix, we use a CNN BIBREF19 , in particular a four-layered 2D CNN stacking two convolutional and two fully connected hidden layers. The INLINEFORM0 feature map at a given CNN layer with input INLINEFORM1 , weight matrix INLINEFORM2 and bias INLINEFORM3 is obtained as: INLINEFORM4 . At this first level of hierarchy, the network is trained with the corresponding labels as target outputs, optimizing a cross-entropy cost function. In parallel, we apply a four-layered recurrent neural network on the channel covariance matrices to explore the hidden temporal features of the electrodes. Namely, we exploit an LSTM BIBREF20 consisting of two fully connected hidden layers, stacked with two LSTM layers and trained in a similar manner as CNN.\n\nDeep autoencoder for spatio-temporal information\nAs we found the individually-trained parallel networks (CNN and LSTM) to be useful (see Table TABREF12 ), we suspected the combination of these two networks could provide a more powerful discriminative spatial and temporal representation of the data than each independent network. As such, we concatenate the last fully-connected layer from the CNN with its counterpart in the LSTM to compose a single feature vector based on these two penultimate layers. Ultimately, this forms a joint spatio-temporal encoding of the cross-covariance matrix.\nIn order to further reduce the dimensionality of the spatio-temporal encodings and cancel background noise effects BIBREF21 , we train an unsupervised deep autoenoder (DAE) on the fused heterogeneous features produced by the combined CNN and LSTM information. 
The DAE forms our second level of hierarchy, with 3 encoding and 3 decoding layers, and mean squared error (MSE) as the cost function.\n\nClassification with Extreme Gradient Boost\nAt the third level of hierarchy, the discrete latent vector representation of the deep autoencoder is fed into an Extreme Gradient Boost based classification layer BIBREF22 , BIBREF23 motivated by BIBREF21 . It is a regularized gradient boosted decision tree that performs well on structured problems. Since our EEG-phonological pairwise classification has an internal structure involving individual phonemes and words, it seems to be a reasonable choice of classifier. The classifier receives its input from the latent vectors of the deep autoencoder and is trained in a supervised manner to output the final predicted classes corresponding to the speech imagery.\n\nDataset\nWe evaluate our model on a publicly available dataset, KARA ONE BIBREF17 , composed of multimodal data for stimulus-based, imagined and articulated speech state corresponding to 7 phonemic/syllabic ( /iy/, /piy/, /tiy/, /diy/, /uw/, /m/, /n/ ) as well as 4 words(pat, pot, knew and gnaw). The dataset consists of 14 participants, with each prompt presented 11 times to each individual. Since our intention is to classify the phonological categories from human thoughts, we discard the facial and audio information and only consider the EEG data corresponding to imagined speech. It is noteworthy that given the mixed nature of EEG signals, it is reportedly challenging to attain a pairwise EEG-phoneme mapping BIBREF18 . In order to explore the problem space, we thus specifically target five binary classification problems addressed in BIBREF17 , BIBREF18 , i.e presence/absence of consonants, phonemic nasal, bilabial, high-front vowels and high-back vowels.\n\nTraining and hyperparameter selection\nWe performed two sets of experiments with the single-trial EEG data. In PHASE-ONE, our goals was to identify the best architectures and hyperparameters for our networks with a reasonable number of runs. For PHASE-ONE, we randomly shuffled and divided the data (1913 signals from 14 individuals) into train (80%), development (10%) and test sets (10%). In PHASE-TWO, in order to perform a fair comparison with the previous methods reported on the same dataset, we perform a leave-one-subject out cross-validation experiment using the best settings we learn from PHASE-ONE.\nThe architectural parameters and hyperparameters listed in Table TABREF6 were selected through an exhaustive grid-search based on the validation set of PHASE-ONE. We conducted a series of empirical studies starting from single hidden-layered networks for each of the blocks and, based on the validation accuracy, we increased the depth of each given network and selected the optimal parametric set from all possible combinations of parameters. For the gradient boosting classification, we fixed the maximum depth at 10, number of estimators at 5000, learning rate at 0.1, regularization coefficient at 0.3, subsample ratio at 0.8, and column-sample/iteration at 0.4. We did not find any notable change of accuracy while varying other hyperparameters while training gradient boost classifier.\n\nConclusion and future direction\nIn an attempt to move a step towards understanding the speech information encoded in brain signals, we developed a novel mixed deep neural network scheme for a number of binary classification tasks from speech imagery EEG data. 
Unlike previous approaches which mostly deal with subject-dependent classification of EEG into discrete vowel or word labels, this work investigates a subject-invariant mapping of EEG data with different phonological categories, varying widely in terms of underlying articulator motions (eg: involvement or non-involvement of lips and velum, variation of tongue movements etc). Our model takes an advantage of feature extraction capability of CNN, LSTM as well as the deep learning benefit of deep autoencoders. We took BIBREF17 , BIBREF18 as the baseline works investigating the same problem and compared our performance with theirs. Our proposed method highly outperforms the existing methods across all the five binary classification tasks by a large average margin of 22.51%.\n\nAcknowledgments\nThis work was funded by the Natural Sciences and Engineering Research Council (NSERC) of Canada and Canadian Institutes for Health Research (CIHR).\n\nFig. 2. Cross covariance Matrices : Rows correspond to two different subjects; Columns (from left to right) correspond to sample examples for bilabial, nasal, vowel, /uw/, and /iy/.\n\nTable 1. Selected parameter sets\n\nFig. 3. tSNE feature visualization for ±nasal (left) and V/C classification (right). Red and green colours indicate the distribution of two different types of features\n\nTable 2. Results in accuracy on 10% test data in the first study\n\nFig. 4. Kappa coefficient values for above-chance accuracy based on Table 2\n\nQuery: How was the spatial aspect of the EEG signal computed?\n\nJSON Response:",
"role": "user"
},
{
"content": "{\"answer\": \"They use four-layered 2D CNN and two fully connected hidden layers on the channel covariance matrix to compute the spatial aspect.\", \"evidence\": [\"In order to decode spatial connections between the electrodes from the channel covariance matrix, we use a CNN BIBREF19 , in particular a four-layered 2D CNN stacking two convolutional and two fully connected hidden layers. The INLINEFORM0 feature map at a given CNN layer with input INLINEFORM1 , weight matrix INLINEFORM2 and bias INLINEFORM3 is obtained as: INLINEFORM4 . At this first level of hierarchy, the network is trained with the corresponding labels as target outputs, optimizing a cross-entropy cost function. In parallel, we apply a four-layered recurrent neural network on the channel covariance matrices to explore the hidden temporal features of the electrodes. Namely, we exploit an LSTM BIBREF20 consisting of two fully connected hidden layers, stacked with two LSTM layers and trained in a similar manner as CNN.\"]}",
"role": "assistant"
}
] |
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
Towards Federated Learning on Time-Evolving Heterogeneous Data
Main Content:
1 INTRODUCTION . Federated Learning ( FL ) has recently emerged as a critical distributed machine learning paradigm to preserve user/client privacy . Clients engaged in the training process of FL only communicate their local model parameters , rather than their private local data , with the central server . As the workhorse algorithm in FL , FedAvg ( McMahan et al. , 2017 ) performs multiple local stochastic gradient descent ( SGD ) updates on the available clients before communicating with the server . Despite its success , FedAvg suffers from the large heterogeneity ( non-iid-ness ) in the data presented on the different clients , causing drift in each client ’ s updates and resulting in slow and unstable convergence ( Karimireddy et al. , 2020b ) . To address this issue , a new line of study has been suggested lately that either simulates the distribution of the whole dataset using preassigned weights of clients ( Wang et al. , 2020 ; Reisizadeh et al. , 2020 ; Mohri et al. , 2019 ; Li et al. , 2020a ) or adopts variance reduction methods ( Karimireddy et al. , 2020b ; a ; Das et al. , 2020 ; Haddadpour et al. , 2021 ) . However , the FL formulation in these approaches always assumes a fixed data distribution among clients throughout all training rounds , while in practice this assumption does not always hold because of the complex , uncontrolled , and unpredictable client behaviors . Local datasets , for example , often vary over time , and intermittent clients can join or depart during training without prior notice . The difficulty in addressing the time-varying heterogeneity of FL lies in the stateless nature of the local datasets , which includes the unpredictable future datasets and the impossibility of retaining all prior local datasets . To this end , we first provide a novel Continual Federated Learning ( CFL ) formulation and then propose a unified CFL framework as our solution . This framework encompasses a variety of design choices for approximating the local objective functions in the previous training rounds , with the difference between actual and estimated local object functions described as information loss for further analysis . To capture the time-varying data heterogeneity in the CFL framework , we expand the theoretical assumption on the client drift—which has been extensively utilized recently in Karimireddy et al . ( 2020b ) ; Khaled et al . ( 2020 ) ; Li et al . ( 2020b ) —to include both client drift and time drift . This allows us to quantify the difference between local client objective function and global objective function for the considered time-varying heterogeneous clients . We provide convergence rates for our unified CFL framework in conjunction with information loss and new models of both client and time drifts . Our analysis reveals a faster convergence of CFL framework than FedAvg , with the benefit dependent on approximation quality . The rate of FedAvg obtained from our framework is consistent with previous work , while a similarly simplified rate on Continual Learning ( CL ) ( French , 1999 ; Kirkpatrick et al. , 2017 ) for the convex and strongly convex cases is novel . Finally , in extensive empirical results , we demonstrate that CFL methods—stemmed from the CFL framework with different approximation techniques—significantly outperform the SOTA FL competitors on various realistic datasets . These numerical observations corroborate our theoretical findings . Contribution . We summarize our key contributions below . 
• We present a unified framework , termed Continual Federated Learning ( CFL ) , together with a novel client and time drift modeling approach , to capture complex FL scenarios involving timevarying heterogeneous data . This is the first theoretical study , to our knowledge , that describes the time-varying nature of FL . • We provide rigorous convergence analysis for the CFL methods . Our theoretical analysis explains the faster and more stabilized optimization of the CFL methods over that of FedAvg , and conjecture their variance reduction effect . In addition , we provide tight convergence rates for standalone CL methods on convex and strongly-convex problems : to the best of our knowledge , we are the first to provide such guarantees for CL on SGD . • We examine several approximation techniques for CFL framework : the insights therein offer a valuable practical guideline . We demonstrate the efficacy and necessity of CFL methods over the SOTA FL baselines across a range of time-varying heterogeneous data scenarios and datasets . 2 RELATED WORK . Federated Learning . FedAvg ( McMahan et al. , 2017 ; Lin et al. , 2020b ) is the de facto standard FL algorithm , in which multiple local SGD steps are executed on the available clients to alleviate the communication bottleneck . While communication efficient , heterogeneity , such as system heterogeneity ( Li et al. , 2018 ; Wang et al. , 2020 ; Mitra et al. , 2021 ; Diao et al. , 2021 ) and statistical/objective heterogeneity ( Li et al. , 2018 ; Wang et al. , 2020 ; Mitra et al. , 2021 ; Lin et al. , 2020a ; Karimireddy et al. , 2020b ; a ) , results in inconsistent optimization objectives and drifted clients models , impeding federated optimization considerably . A line of work has been proposed to address the heterogeneity in FL . FedProx ( Li et al. , 2018 ) adds the regularization term on the distance of local and global models when performing local training—similar formulations can be found in other recent FL works ( Hanzely & Richtárik , 2020 ; Dinh et al. , 2020 ; Li et al. , 2021 ) for various purposes . To address the issue of objective heterogeneity e.g . caused by heterogeneous data , works like SCAFFOLD ( Karimireddy et al. , 2020b ; Mitra et al. , 2021 ) introduce the idea of variance reduction on the client local update steps . FedNova ( Wang et al. , 2020 ) further proposes a general framework for unifying FedAvg and FedProx , and argues that while averaging , local updates should be normalized to minimize heterogeneity induced by different number of local update steps . However , most of these prior works focus on the fixed heterogeneity across clients and throughout the entire optimization procedure ; we instead consider the novel scenario with time-varying data heterogeneity . The theoretical study on the convergence of FedAvg can date back to the parallel SGD analysis on the identical functions Zinkevich et al . ( 2010 ) and recently is improved by Stich ( 2019 ) ; Stich & Karimireddy ( 2020 ) ; Patel & Dieuleveut ( 2019 ) ; Khaled et al . ( 2020 ) ; Woodworth et al . ( 2020b ) . For the analysis of heterogeneous data , Li et al . ( 2020b ) first give the convergence rate of FedAvg on non-iid datasets with random selection , assuming that the client optimum are -close . Woodworth et al . ( 2020a ) ; Khaled et al . ( 2020 ) give tighter convergence rates under the assumption of bounded gradient drift . All above works give a O ( 1/T ) convergence rate for convex local objective functions . 
More recently , Karimireddy et al . ( 2020b ) ; Koloskova et al . ( 2020 ) give the convergence analysis of local SGD for non-convex objective functions under bounded gradient noise assumptions , and obtain a O ( 1/ √ T ) convergence rate . Our theoretical analysis framework covers more challenging timevarying data heterogeneity in FL , which has not been considered in the community yet—our rate can be simplified to the standard FL scenario , matching the tight analysis in Karimireddy et al . ( 2020b ) . Continual Learning . Continual learning , also known as incremental learning or lifelong learning , aims to learn from ( time-varying ) sequential data while avoiding the problem of catastrophic forgetting ( French , 1999 ; Kirkpatrick et al. , 2017 ) . There exists a large amount of works , from the perspectives of regularization ( Kirkpatrick et al. , 2017 ; Li & Hoiem , 2017 ; Zenke et al. , 2017 ) , experience replay ( Castro et al. , 2018 ; Rebuffi et al. , 2017 ) , and dynamic architectures ( Maltoni & Lomonaco , 2019 ; Rusu et al. , 2016 ) . In this paper , we examine both regularization and experience replay based methods , and compare their empirical performance in depth . Despite the empirical success , the theoretical analysis of CL is limited : only a very recent preprint ( Yin et al. , 2020b ) provides a viewpoint of regularization-based continual learning , and only for the single worker scenario . In this work , we provide tight convergence analysis for both CL and CFL on SGD . Continual Federated Learning . To our knowledge , the scenario of CFL was originally described in Bui et al . ( 2018 ) in order to federated train Bayesian Neural Network and continually learn for Gaussian Process models—it is orthogonal to our optimization aspect in this paper . FedCurv ( Shoham et al. , 2019 ) blends EWC ( Kirkpatrick et al. , 2017 ) regularization with FedAvg , and empirically shows a faster convergence . FedWeIT ( Yoon et al. , 2021 ) extends the regularization idea , with a focus on selective inter-client knowledge transfer for task-adaptive parameters . In a very recent parallel ( and empirical ) work , CDA-FedAvg ( Casado et al. , 2021 ) uses a locally maintained long-term data samples , with asynchronous communication ; however , this design choice lacks sufficient numerical results to justify the performance gain ( even over FedAvg ) and it can not be applied to more general scenarios . Our theoretically sound CFL framework covers the regularization component of FedCurv and the core set part of CDA-FedAvg , and is orthogonal to the neural architecture manipulation idea in FedWeIT . 3 CONTINUAL FEDERATED LEARNING FRAMEWORK . 3.1 FORMULATION . Conventional FL Formulation . The standard FL typically considers a sum-structured distributed optimization problem as below : f ? = minω∈Rd [ f ( ω ) : = ∑N i=1 pifi ( ω ) ] , ( 1 ) where the objective function f ( ω ) : Rd → R is the weighted sum of the local objective functions fi ( ω ) : = EDi [ Fi ( ω ) ] of N nodes/clients , and pi is the weight of client i . In practice , it may be infeasible to select all clients each round , especially for cross-device setup ( McMahan et al. , 2017 ; Kairouz et al. , 2019 ) . The standard FedAvg then randomly selects S clients to receive the model parameters from the server ( S ≤ N ) in each communication round and performs K local SGD update steps in the form of ωt , i , k = ωt , i , k−1 − ηl ( ∇fi ( ωt , i , k−1 ) + νt , i , k−1 ) , with local step-size ηl and gradient noise νt , i , k−1 . 
The selected clients then communicate the updates ∆ωt , i = ωt , i , K − ωt with the server for the model aggregation : ωt+1 = ωt − ηgS ∑S i=1 ∆ωt , i . Continual FL Formulation . Despite its wide usage , the standard FL formulation given in Equation ( 1 ) can not properly reflect actual time-varying scenarios , such as local datasets changing over time or intermittent clients joining or leaving during training . To address this problem , we propose the Continual Federated Learning ( CFL ) formulation : f ? = minω∈Rd [ f ( ω ) : = ∑T t=1 ∑ i∈St pt , ift , i ( ω ) ] , ( CFL ) where ft , i ( ω ) represents the local objective function of client i at time t. St is a subset of clients sampled from all clients set Ω , where |St| = S. For the time-varying scenarios , client i could have different local objective functions ft , i ( ω ) due to the changing local datasets on different t . | This paper proposes to study time evolving heterogeneous data, and proposes Continual Federated Learning (CFL) to address this problem. Their analysis achieves this by introducing time-drift to capture data heterogeneity across time. Convergence results are presented followed by numerical results on time-varying and heterogeneous settings. | science.mup_single_document_summarization.8777 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nTowards Federated Learning on Time-Evolving Heterogeneous Data\n\nMain Content:\n1 INTRODUCTION . Federated Learning ( FL ) has recently emerged as a critical distributed machine learning paradigm to preserve user/client privacy . Clients engaged in the training process of FL only communicate their local model parameters , rather than their private local data , with the central server . As the workhorse algorithm in FL , FedAvg ( McMahan et al. , 2017 ) performs multiple local stochastic gradient descent ( SGD ) updates on the available clients before communicating with the server . Despite its success , FedAvg suffers from the large heterogeneity ( non-iid-ness ) in the data presented on the different clients , causing drift in each client ’ s updates and resulting in slow and unstable convergence ( Karimireddy et al. , 2020b ) . To address this issue , a new line of study has been suggested lately that either simulates the distribution of the whole dataset using preassigned weights of clients ( Wang et al. , 2020 ; Reisizadeh et al. , 2020 ; Mohri et al. , 2019 ; Li et al. , 2020a ) or adopts variance reduction methods ( Karimireddy et al. , 2020b ; a ; Das et al. , 2020 ; Haddadpour et al. , 2021 ) . However , the FL formulation in these approaches always assumes a fixed data distribution among clients throughout all training rounds , while in practice this assumption does not always hold because of the complex , uncontrolled , and unpredictable client behaviors . Local datasets , for example , often vary over time , and intermittent clients can join or depart during training without prior notice . The difficulty in addressing the time-varying heterogeneity of FL lies in the stateless nature of the local datasets , which includes the unpredictable future datasets and the impossibility of retaining all prior local datasets . To this end , we first provide a novel Continual Federated Learning ( CFL ) formulation and then propose a unified CFL framework as our solution . This framework encompasses a variety of design choices for approximating the local objective functions in the previous training rounds , with the difference between actual and estimated local object functions described as information loss for further analysis . To capture the time-varying data heterogeneity in the CFL framework , we expand the theoretical assumption on the client drift—which has been extensively utilized recently in Karimireddy et al . ( 2020b ) ; Khaled et al . ( 2020 ) ; Li et al . ( 2020b ) —to include both client drift and time drift . This allows us to quantify the difference between local client objective function and global objective function for the considered time-varying heterogeneous clients . We provide convergence rates for our unified CFL framework in conjunction with information loss and new models of both client and time drifts . Our analysis reveals a faster convergence of CFL framework than FedAvg , with the benefit dependent on approximation quality . The rate of FedAvg obtained from our framework is consistent with previous work , while a similarly simplified rate on Continual Learning ( CL ) ( French , 1999 ; Kirkpatrick et al. , 2017 ) for the convex and strongly convex cases is novel . 
Finally , in extensive empirical results , we demonstrate that CFL methods—stemmed from the CFL framework with different approximation techniques—significantly outperform the SOTA FL competitors on various realistic datasets . These numerical observations corroborate our theoretical findings . Contribution . We summarize our key contributions below . • We present a unified framework , termed Continual Federated Learning ( CFL ) , together with a novel client and time drift modeling approach , to capture complex FL scenarios involving timevarying heterogeneous data . This is the first theoretical study , to our knowledge , that describes the time-varying nature of FL . • We provide rigorous convergence analysis for the CFL methods . Our theoretical analysis explains the faster and more stabilized optimization of the CFL methods over that of FedAvg , and conjecture their variance reduction effect . In addition , we provide tight convergence rates for standalone CL methods on convex and strongly-convex problems : to the best of our knowledge , we are the first to provide such guarantees for CL on SGD . • We examine several approximation techniques for CFL framework : the insights therein offer a valuable practical guideline . We demonstrate the efficacy and necessity of CFL methods over the SOTA FL baselines across a range of time-varying heterogeneous data scenarios and datasets . 2 RELATED WORK . Federated Learning . FedAvg ( McMahan et al. , 2017 ; Lin et al. , 2020b ) is the de facto standard FL algorithm , in which multiple local SGD steps are executed on the available clients to alleviate the communication bottleneck . While communication efficient , heterogeneity , such as system heterogeneity ( Li et al. , 2018 ; Wang et al. , 2020 ; Mitra et al. , 2021 ; Diao et al. , 2021 ) and statistical/objective heterogeneity ( Li et al. , 2018 ; Wang et al. , 2020 ; Mitra et al. , 2021 ; Lin et al. , 2020a ; Karimireddy et al. , 2020b ; a ) , results in inconsistent optimization objectives and drifted clients models , impeding federated optimization considerably . A line of work has been proposed to address the heterogeneity in FL . FedProx ( Li et al. , 2018 ) adds the regularization term on the distance of local and global models when performing local training—similar formulations can be found in other recent FL works ( Hanzely & Richtárik , 2020 ; Dinh et al. , 2020 ; Li et al. , 2021 ) for various purposes . To address the issue of objective heterogeneity e.g . caused by heterogeneous data , works like SCAFFOLD ( Karimireddy et al. , 2020b ; Mitra et al. , 2021 ) introduce the idea of variance reduction on the client local update steps . FedNova ( Wang et al. , 2020 ) further proposes a general framework for unifying FedAvg and FedProx , and argues that while averaging , local updates should be normalized to minimize heterogeneity induced by different number of local update steps . However , most of these prior works focus on the fixed heterogeneity across clients and throughout the entire optimization procedure ; we instead consider the novel scenario with time-varying data heterogeneity . The theoretical study on the convergence of FedAvg can date back to the parallel SGD analysis on the identical functions Zinkevich et al . ( 2010 ) and recently is improved by Stich ( 2019 ) ; Stich & Karimireddy ( 2020 ) ; Patel & Dieuleveut ( 2019 ) ; Khaled et al . ( 2020 ) ; Woodworth et al . ( 2020b ) . For the analysis of heterogeneous data , Li et al . 
( 2020b ) first give the convergence rate of FedAvg on non-iid datasets with random selection , assuming that the client optimum are -close . Woodworth et al . ( 2020a ) ; Khaled et al . ( 2020 ) give tighter convergence rates under the assumption of bounded gradient drift . All above works give a O ( 1/T ) convergence rate for convex local objective functions . More recently , Karimireddy et al . ( 2020b ) ; Koloskova et al . ( 2020 ) give the convergence analysis of local SGD for non-convex objective functions under bounded gradient noise assumptions , and obtain a O ( 1/ √ T ) convergence rate . Our theoretical analysis framework covers more challenging timevarying data heterogeneity in FL , which has not been considered in the community yet—our rate can be simplified to the standard FL scenario , matching the tight analysis in Karimireddy et al . ( 2020b ) . Continual Learning . Continual learning , also known as incremental learning or lifelong learning , aims to learn from ( time-varying ) sequential data while avoiding the problem of catastrophic forgetting ( French , 1999 ; Kirkpatrick et al. , 2017 ) . There exists a large amount of works , from the perspectives of regularization ( Kirkpatrick et al. , 2017 ; Li & Hoiem , 2017 ; Zenke et al. , 2017 ) , experience replay ( Castro et al. , 2018 ; Rebuffi et al. , 2017 ) , and dynamic architectures ( Maltoni & Lomonaco , 2019 ; Rusu et al. , 2016 ) . In this paper , we examine both regularization and experience replay based methods , and compare their empirical performance in depth . Despite the empirical success , the theoretical analysis of CL is limited : only a very recent preprint ( Yin et al. , 2020b ) provides a viewpoint of regularization-based continual learning , and only for the single worker scenario . In this work , we provide tight convergence analysis for both CL and CFL on SGD . Continual Federated Learning . To our knowledge , the scenario of CFL was originally described in Bui et al . ( 2018 ) in order to federated train Bayesian Neural Network and continually learn for Gaussian Process models—it is orthogonal to our optimization aspect in this paper . FedCurv ( Shoham et al. , 2019 ) blends EWC ( Kirkpatrick et al. , 2017 ) regularization with FedAvg , and empirically shows a faster convergence . FedWeIT ( Yoon et al. , 2021 ) extends the regularization idea , with a focus on selective inter-client knowledge transfer for task-adaptive parameters . In a very recent parallel ( and empirical ) work , CDA-FedAvg ( Casado et al. , 2021 ) uses a locally maintained long-term data samples , with asynchronous communication ; however , this design choice lacks sufficient numerical results to justify the performance gain ( even over FedAvg ) and it can not be applied to more general scenarios . Our theoretically sound CFL framework covers the regularization component of FedCurv and the core set part of CDA-FedAvg , and is orthogonal to the neural architecture manipulation idea in FedWeIT . 3 CONTINUAL FEDERATED LEARNING FRAMEWORK . 3.1 FORMULATION . Conventional FL Formulation . The standard FL typically considers a sum-structured distributed optimization problem as below : f ? = minω∈Rd [ f ( ω ) : = ∑N i=1 pifi ( ω ) ] , ( 1 ) where the objective function f ( ω ) : Rd → R is the weighted sum of the local objective functions fi ( ω ) : = EDi [ Fi ( ω ) ] of N nodes/clients , and pi is the weight of client i . 
In practice , it may be infeasible to select all clients each round , especially for cross-device setup ( McMahan et al. , 2017 ; Kairouz et al. , 2019 ) . The standard FedAvg then randomly selects S clients to receive the model parameters from the server ( S ≤ N ) in each communication round and performs K local SGD update steps in the form of ωt , i , k = ωt , i , k−1 − ηl ( ∇fi ( ωt , i , k−1 ) + νt , i , k−1 ) , with local step-size ηl and gradient noise νt , i , k−1 . The selected clients then communicate the updates ∆ωt , i = ωt , i , K − ωt with the server for the model aggregation : ωt+1 = ωt − ηgS ∑S i=1 ∆ωt , i . Continual FL Formulation . Despite its wide usage , the standard FL formulation given in Equation ( 1 ) can not properly reflect actual time-varying scenarios , such as local datasets changing over time or intermittent clients joining or leaving during training . To address this problem , we propose the Continual Federated Learning ( CFL ) formulation : f ? = minω∈Rd [ f ( ω ) : = ∑T t=1 ∑ i∈St pt , ift , i ( ω ) ] , ( CFL ) where ft , i ( ω ) represents the local objective function of client i at time t. St is a subset of clients sampled from all clients set Ω , where |St| = S. For the time-varying scenarios , client i could have different local objective functions ft , i ( ω ) due to the changing local datasets on different t .",
"role": "user"
},
{
"content": "This paper proposes to study time evolving heterogeneous data, and proposes Continual Federated Learning (CFL) to address this problem. Their analysis achieves this by introducing time-drift to capture data heterogeneity across time. Convergence results are presented followed by numerical results on time-varying and heterogeneous settings.",
"role": "assistant"
}
] |
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.BACKGROUND: The best method of gestational age assessment is by ultrasound in the first trimester; however, this method is impractical in large field trials in rural areas. Our objective was to assess the validity of gestational age estimated from prospectively collected date of last menstrual period (LMP) using crown-rump length (CRL) measured in early pregnancy by ultrasound.
ABSTRACT.METHODS: As part of a large, cluster-randomized, controlled trial in rural Bangladesh, we collected dates of LMP by recall and as marked on a calendar every 5 weeks in women likely to become pregnant. Among those with a urine-test confirmed pregnancy, a subset with gestational age of <15 weeks (n = 353) were enrolled for ultrasound follow-up to measure CRL. We compared interview-assessed LMP with CRL gestational age estimates and classification of preterm, term, and post-term births.
ABSTRACT.RESULTS: LMP-based gestational age was higher than CRL by a mean (SD) of 2.8 (10.7) days; differences varied by maternal education and preterm birth (P < 0.05). Lin's concordance correlation coefficient was good at ultrasound [0.63 (95 % CI 0.56, 0.69)] and at birth [0.77 (95 % CI 0.73, 0.81)]. Validity of classifying preterm birth was high but post-term was lower, with specificity of 96 and 89 % and sensitivity of 86 and 67 %, respectively. Results were similar by parity.
ABSTRACT.CONCLUSIONS: Prospectively collected LMP provided a valid estimate of gestational age and preterm birth in a rural, low-income setting and may be a suitable alternative to ultrasound in programmatic settings and large field trials.
ABSTRACT.TRIAL REGISTRATION: ClinicalTrials.gov NCT00860470
ABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1186/s41043-016-0072-y) contains supplementary material, which is available to authorized users.
Extracted Data: | [["Last menstrual period method", "Crown-rump length method", "Gestational age at birth ", "significantly increased", "LMP-based gestational age was higher than CRL by a mean (SD) of 2.8 (10.7) days"]] | science.evidence_inference.183 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: The best method of gestational age assessment is by ultrasound in the first trimester; however, this method is impractical in large field trials in rural areas. Our objective was to assess the validity of gestational age estimated from prospectively collected date of last menstrual period (LMP) using crown-rump length (CRL) measured in early pregnancy by ultrasound.\n\nABSTRACT.METHODS: As part of a large, cluster-randomized, controlled trial in rural Bangladesh, we collected dates of LMP by recall and as marked on a calendar every 5 weeks in women likely to become pregnant. Among those with a urine-test confirmed pregnancy, a subset with gestational age of <15 weeks (n = 353) were enrolled for ultrasound follow-up to measure CRL. We compared interview-assessed LMP with CRL gestational age estimates and classification of preterm, term, and post-term births.\n\nABSTRACT.RESULTS: LMP-based gestational age was higher than CRL by a mean (SD) of 2.8 (10.7) days; differences varied by maternal education and preterm birth (P < 0.05). Lin's concordance correlation coefficient was good at ultrasound [0.63 (95 % CI 0.56, 0.69)] and at birth [0.77 (95 % CI 0.73, 0.81)]. Validity of classifying preterm birth was high but post-term was lower, with specificity of 96 and 89 % and sensitivity of 86 and 67 %, respectively. Results were similar by parity.\n\nABSTRACT.CONCLUSIONS: Prospectively collected LMP provided a valid estimate of gestational age and preterm birth in a rural, low-income setting and may be a suitable alternative to ultrasound in programmatic settings and large field trials.\n\nABSTRACT.TRIAL REGISTRATION: ClinicalTrials.gov NCT00860470\n\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1186/s41043-016-0072-y) contains supplementary material, which is available to authorized users.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"Last menstrual period method\", \"Crown-rump length method\", \"Gestational age at birth \", \"significantly increased\", \"LMP-based gestational age was higher than CRL by a mean (SD) of 2.8 (10.7) days\"]]",
"role": "assistant"
}
] |
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
Incidence of 10/66 dementia is lower than the incidence of DSM-IV dementia.
Research Abstract:
BACKGROUND Results of the few cohort studies from countries with low incomes or middle incomes suggest a lower incidence of dementia than in high-income countries.
We assessed incidence of dementia according to criteria from the 10/66 Dementia Research Group and Diagnostic and Statistical Manual of Mental Disorders (DSM) IV, the effect of dementia at baseline on mortality, and the independent effects of age, sex, socioeconomic position, and indicators of cognitive reserve.
METHODS We did a population-based cohort study of all people aged 65 years and older living in urban sites in Cuba, the Dominican Republic, and Venezuela, and rural and urban sites in Peru, Mexico, and China, with ascertainment of incident 10/66 and DSM-IV dementia 3-5 years after cohort inception.
We used questionnaires to obtain information about age in years, sex, educational level, literacy, occupational attainment, and number of household assets.
We obtained information about mortality from all sites.
For participants who had died, we interviewed a friend or relative to ascertain the likelihood that they had dementia before death.
FINDINGS 12,887 participants were interviewed at baseline.
11,718 were free of dementia, of whom 8137 (69%) were reinterviewed, contributing 34,718 person-years of follow-up.
Incidence for 10/66 dementia varied between 18·2 and 30·4 per 1000 person-years, and were 1·4-2·7 times higher than were those for DSM-IV dementia (9·9-15·7 per 1000 person-years).
Mortality hazards were 1·56-5·69 times higher in individuals with dementia at baseline than in those who were dementia-free.
Informant reports suggested a high incidence of dementia before death; overall incidence might be 4-19% higher if these data were included.
10/66 dementia incidence was independently associated with increased age (HR 1·67; 95% CI 1·56-1·79), female sex (0·72; 0·61-0·84), and low education (0·89; 0·81-0·97), but not with occupational attainment (1·04; 0·95-1·13).
INTERPRETATION Our results provide supportive evidence for the cognitive reserve hypothesis, showing that in middle-income countries as in high-income countries, education, literacy, verbal fluency, and motor sequencing confer substantial protection against the onset of dementia.
FUNDING Wellcome Trust Health Consequences of Population Change Programme, WHO, US Alzheimer's Association, FONACIT/ CDCH/ UCV. | {"verdict": "CONTRADICT", "evidence": ["Incidence for 10/66 dementia varied between 18\u00b72 and 30\u00b74 per 1000 person-years, and were 1\u00b74-2\u00b77 times higher than were those for DSM-IV dementia (9\u00b79-15\u00b77 per 1000 person-years)."]} | science.scifact_entailment.381 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nIncidence of 10/66 dementia is lower than the incidence of DSM-IV dementia.\n\nResearch Abstract:\nBACKGROUND Results of the few cohort studies from countries with low incomes or middle incomes suggest a lower incidence of dementia than in high-income countries.\nWe assessed incidence of dementia according to criteria from the 10/66 Dementia Research Group and Diagnostic and Statistical Manual of Mental Disorders (DSM) IV, the effect of dementia at baseline on mortality, and the independent effects of age, sex, socioeconomic position, and indicators of cognitive reserve.\nMETHODS We did a population-based cohort study of all people aged 65 years and older living in urban sites in Cuba, the Dominican Republic, and Venezuela, and rural and urban sites in Peru, Mexico, and China, with ascertainment of incident 10/66 and DSM-IV dementia 3-5 years after cohort inception.\nWe used questionnaires to obtain information about age in years, sex, educational level, literacy, occupational attainment, and number of household assets.\nWe obtained information about mortality from all sites.\nFor participants who had died, we interviewed a friend or relative to ascertain the likelihood that they had dementia before death.\nFINDINGS 12,887 participants were interviewed at baseline.\n11,718 were free of dementia, of whom 8137 (69%) were reinterviewed, contributing 34,718 person-years of follow-up.\nIncidence for 10/66 dementia varied between 18·2 and 30·4 per 1000 person-years, and were 1·4-2·7 times higher than were those for DSM-IV dementia (9·9-15·7 per 1000 person-years).\nMortality hazards were 1·56-5·69 times higher in individuals with dementia at baseline than in those who were dementia-free.\nInformant reports suggested a high incidence of dementia before death; overall incidence might be 4-19% higher if these data were included.\n10/66 dementia incidence was independently associated with increased age (HR 1·67; 95% CI 1·56-1·79), female sex (0·72; 0·61-0·84), and low education (0·89; 0·81-0·97), but not with occupational attainment (1·04; 0·95-1·13).\nINTERPRETATION Our results provide supportive evidence for the cognitive reserve hypothesis, showing that in middle-income countries as in high-income countries, education, literacy, verbal fluency, and motor sequencing confer substantial protection against the onset of dementia.\nFUNDING Wellcome Trust Health Consequences of Population Change Programme, WHO, US Alzheimer's Association, FONACIT/ CDCH/ UCV.",
"role": "user"
},
{
"content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"Incidence for 10/66 dementia varied between 18\\u00b72 and 30\\u00b74 per 1000 person-years, and were 1\\u00b74-2\\u00b77 times higher than were those for DSM-IV dementia (9\\u00b79-15\\u00b77 per 1000 person-years).\"]}",
"role": "assistant"
}
] |
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
We used the following probabilistic framework to this end<cite> [5]</cite> : | ["Uses"] | science.multicite_intent_classification.2054 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nWe used the following probabilistic framework to this end<cite> [5]</cite> :",
"role": "user"
},
{
"content": "[\"Uses\"]",
"role": "assistant"
}
] |
science.discomat_te | | Heat-treatment conditions | Heat-treatment conditions | N (m⁻³) [10] | T_C (degC) | c (mPa⁻¹) |
| Temperature | Time | | | |
| 455degC | 4.0 h | 1.53×10¹³ | 646.1 | 0.193 |
| | 8.0 h | 5.19×10¹³ | 639.0 | 0.164 |
| | 15.0 h | 11.60×10¹³ | 632.3 | 0.159 |
| | | | | |
| 464degC | 1.0 h | 0.27×10¹³ | 658.1 | 0.233 |
| | 2.0 h | 0.95×10¹³ | 647.1 | 0.219 |
| | 4.0 h | 2.31×10¹³ | 643.5 | 0.184 |
| | 8.0 h | 4.98×10¹³ | 636.1 | 0.184 |
| | 15.0 h | 9.65×10¹³ | 632.4 | 0.169 |
| | | | | |
| 481degC | 0.5 h | 0.15×10¹³ | 660.3 | 0.265 |
| | 1.0 h | 0.30×10¹³ | 655.4 | 0.245 |
| | 3.0 h | 0.86×10¹³ | 648.1 | 0.219 |
| | 20.0 h | 5.74×10¹³ | 635.9 | 0.177 |
| | | | | |
| 497degC | 5.0 h | 0.30×10¹³ | 651.9 | 0.274 |
| | 8.0 h | 0.49×10¹³ | 651.7 | 0.234 |
Caption: Relationship between the number density and T_C for Li2O*2SiO2 glass heat-treated under various conditions
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "646.1", "type": "Other"}
{"value": "0.193", "type": "Other"}
{"value": "639.0", "type": "Other"}
{"value": "0.164", "type": "Other"}
{"value": "632.3", "type": "Other"}
{"value": "0.159", "type": "Other"}
{"value": "658.1", "type": "Other"}
{"value": "0.233", "type": "Other"}
{"value": "647.1", "type": "Other"}
{"value": "0.219", "type": "Other"}
{"value": "643.5", "type": "Other"}
{"value": "0.184", "type": "Other"}
{"value": "636.1", "type": "Other"}
{"value": "0.184", "type": "Other"}
{"value": "632.4", "type": "Other"}
{"value": "0.169", "type": "Other"}
{"value": "660.3", "type": "Other"}
{"value": "0.265", "type": "Other"}
{"value": "655.4", "type": "Other"}
{"value": "0.245", "type": "Other"}
{"value": "648.1", "type": "Other"}
{"value": "0.219", "type": "Other"}
{"value": "635.9", "type": "Other"}
{"value": "0.177", "type": "Other"}
{"value": "651.9", "type": "Other"}
{"value": "0.274", "type": "Other"}
{"value": "651.7", "type": "Other"}
{"value": "0.234", "type": "Other"} | science.discomat_te.253 | [
{
"content": "| Heat-treatment conditions | Heat-treatment conditions | N(m -3) [10] | T C (degC) | c(mPa -1) |\n| Temperature | Time | | | |\n| 455degC | 4.0 h | 1.53x1013 | 646.1 | 0.193 |\n| | 8.0 h | 5.19x1013 | 639.0 | 0.164 |\n| | 15.0 h | 11.60x1013 | 632.3 | 0.159 |\n| | | | | |\n| 464degC | 1.0 h | 0.27x1013 | 658.1 | 0.233 |\n| | 2.0 h | 0.95x1013 | 647.1 | 0.219 |\n| | 4.0 h | 2.31x1013 | 643.5 | 0.184 |\n| | 8.0 h | 4.98x1013 | 636.1 | 0.184 |\n| | 15.0 h | 9.65x1013 | 632.4 | 0.169 |\n| | | | | |\n| 481degC | 0.5 h | 0.15x1013 | 660.3 | 0.265 |\n| | 1.0 h | 0.30x1013 | 655.4 | 0.245 |\n| | 3.0 h | 0.86x1013 | 648.1 | 0.219 |\n| | 20.0 h | 5.74x1013 | 635.9 | 0.177 |\n| | | | | |\n| 497degC | 5.0 h | 0.30x1013 | 651.9 | 0.274 |\n| | 8.0 h | 0.49x1013 | 651.7 | 0.234 |\n\nCaption: Relationship between the number density and T C for Li2O*2SiO2 glass heat-treated under various conditions\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"646.1\", \"type\": \"Other\"}\n{\"value\": \"0.193\", \"type\": \"Other\"}\n{\"value\": \"639.0\", \"type\": \"Other\"}\n{\"value\": \"0.164\", \"type\": \"Other\"}\n{\"value\": \"632.3\", \"type\": \"Other\"}\n{\"value\": \"0.159\", \"type\": \"Other\"}\n{\"value\": \"658.1\", \"type\": \"Other\"}\n{\"value\": \"0.233\", \"type\": \"Other\"}\n{\"value\": \"647.1\", \"type\": \"Other\"}\n{\"value\": \"0.219\", \"type\": \"Other\"}\n{\"value\": \"643.5\", \"type\": \"Other\"}\n{\"value\": \"0.184\", \"type\": \"Other\"}\n{\"value\": \"636.1\", \"type\": \"Other\"}\n{\"value\": \"0.184\", \"type\": \"Other\"}\n{\"value\": \"632.4\", \"type\": \"Other\"}\n{\"value\": \"0.169\", \"type\": \"Other\"}\n{\"value\": \"660.3\", \"type\": \"Other\"}\n{\"value\": \"0.265\", \"type\": \"Other\"}\n{\"value\": \"655.4\", \"type\": \"Other\"}\n{\"value\": \"0.245\", \"type\": \"Other\"}\n{\"value\": \"648.1\", \"type\": \"Other\"}\n{\"value\": \"0.219\", \"type\": \"Other\"}\n{\"value\": \"635.9\", \"type\": \"Other\"}\n{\"value\": \"0.177\", \"type\": \"Other\"}\n{\"value\": \"651.9\", \"type\": \"Other\"}\n{\"value\": \"0.274\", \"type\": \"Other\"}\n{\"value\": \"651.7\", \"type\": \"Other\"}\n{\"value\": \"0.234\", \"type\": \"Other\"}",
"role": "assistant"
}
] |
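Aside (editor's illustration): the science.discomat_te record above asks for every numeric table cell to be re-emitted as one JSON object per line, using the two schemas quoted in its instructions. The Python sketch below only mechanizes that serialization format; the function names and the crude header-based rule for choosing between "Other" and "Glass_Compound_Amount" are assumptions made for illustration, not part of the dataset or its gold output.

```python
import json
import re

def parse_pipe_table(text):
    """Split a pipe-delimited table into rows of stripped cell strings."""
    rows = []
    for line in text.strip().splitlines():
        line = line.strip()
        if line.startswith("|"):
            rows.append([c.strip() for c in line.strip("|").split("|")])
    return rows

def cell_records(rows, compound_markers=("mol%", "wt%", "at%")):
    """Yield one JSON line per numeric cell, following the record's two schemas."""
    header = rows[0]
    for row in rows[1:]:
        for col, cell in enumerate(row):
            if not re.fullmatch(r"-?\d+(\.\d+)?", cell):
                continue  # skip labels, blanks, and anything that is not a plain number
            col_name = header[col] if col < len(header) else ""
            # Assumed heuristic: call a cell a compound amount only when its
            # column header carries a composition unit such as mol% or wt%.
            if any(m in col_name for m in compound_markers):
                rec = {"value": cell, "type": "Glass_Compound_Amount",
                       "constituent": "xx", "unit": "xx", "material": "xx"}
            else:
                rec = {"value": cell, "type": "Other"}
            yield json.dumps(rec)

# Usage: for line in cell_records(parse_pipe_table(table_text)): print(line)
```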
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
To incorporate the different types of contexts, we propose to use a gate function similar to <cite>Noraset et al. (2017)</cite> to dynamically control how the global and local contexts influence the description. | ["Similarities"] | science.multicite_intent_classification.663 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nTo incorporate the different types of contexts, we propose to use a gate function similar to <cite>Noraset et al. (2017)</cite> to dynamically control how the global and local contexts influence the description.",
"role": "user"
},
{
"content": "[\"Similarities\"]",
"role": "assistant"
}
] |
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.BACKGROUND AND AIMS:: Low-volume high-concentration bupivacaine irrigation of the peritoneal cavity has been reported to be ineffective for short-term analgesia after laparoscopic cholecystectomy (LC). This study was conducted to evaluate the effectiveness of intraperitoneal instillation of high-volume low-concentration bupivacaine for post-operative analgesia in LC.
ABSTRACT.METHODS:: Sixty patients undergoing LC were included in this prospective, double-blind, randomised study. Patients were divided into two (n = 30) groups. In Group S, intraperitoneal irrigation was done with 500 ml of normal saline. In Group B, 20 ml of 0.5% (100 mg) bupivacaine was added to 480 ml of normal saline for intraperitoneal irrigation during and after surgery. Post-operative pain was assessed by numeric pain rating scale (NRS) at fixed time intervals. Duration of analgesia (DOA), total rescue analgesic requirement (intravenous tramadol), presence of shoulder pain, nausea and vomiting were recorded for the initial 24 h post-operatively.
ABSTRACT.RESULTS:: Mean DOA in Group S was 0.06 ± 0.172 h (3.6 ± 10.32 min) and that in Group B was 19.35 ± 8.64 h (P = 0.000). Cumulative requirement of rescue analgesic in 24 h in Group S was 123.33 ± 43.01 mg and that in Group B was 23.33 ± 43.01 mg (P = 0.000). There was no significant difference in incidence of shoulder pain, nausea and vomiting between the groups.
ABSTRACT.CONCLUSION:: High-volume low-concentration of intraperitoneal bupivacaine significantly increases post-operative DOA and reduces opioid requirement after LC.
Extracted Data: | [["20 ml of 0.5% (100 mg) bupivacaine was added to 480 ml of normal saline for intraperitoneal irrigation during and after surgery", "intraperitoneal irrigation was done with 500 ml of normal saline", "duration of analgesia", "significantly increased", "Mean DOA in Group S was 0.06 \u00b1 0.172 h (3.6 \u00b1 10.32 min) and that in Group B was 19.35 \u00b1 8.64 h (P = 0.000)."], ["20 ml of 0.5% (100 mg) bupivacaine was added to 480 ml of normal saline for intraperitoneal irrigation during and after surgery", "intraperitoneal irrigation was done with 500 ml of normal saline", "Incidence of shoulder pain or nausea and vomiting", "no significant difference", "There was no significant difference in incidence of shoulder pain, nausea and vomiting between the groups"]] | science.evidence_inference.240 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND AND AIMS:: Low-volume high-concentration bupivacaine irrigation of the peritoneal cavity has been reported to be ineffective for short-term analgesia after laparoscopic cholecystectomy (LC). This study was conducted to evaluate the effectiveness of intraperitoneal instillation of high-volume low-concentration bupivacaine for post-operative analgesia in LC.\n\nABSTRACT.METHODS:: Sixty patients undergoing LC were included in this prospective, double-blind, randomised study. Patients were divided into two (n = 30) groups. In Group S, intraperitoneal irrigation was done with 500 ml of normal saline. In Group B, 20 ml of 0.5% (100 mg) bupivacaine was added to 480 ml of normal saline for intraperitoneal irrigation during and after surgery. Post-operative pain was assessed by numeric pain rating scale (NRS) at fixed time intervals. Duration of analgesia (DOA), total rescue analgesic requirement (intravenous tramadol), presence of shoulder pain, nausea and vomiting were recorded for the initial 24 h post-operatively.\n\nABSTRACT.RESULTS:: Mean DOA in Group S was 0.06 ± 0.172 h (3.6 ± 10.32 min) and that in Group B was 19.35 ± 8.64 h (P = 0.000). Cumulative requirement of rescue analgesic in 24 h in Group S was 123.33 ± 43.01 mg and that in Group B was 23.33 ± 43.01 mg (P = 0.000). There was no significant difference in incidence of shoulder pain, nausea and vomiting between the groups.\n\nABSTRACT.CONCLUSION:: High-volume low-concentration of intraperitoneal bupivacaine significantly increases post-operative DOA and reduces opioid requirement after LC.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"20 ml of 0.5% (100 mg) bupivacaine was added to 480 ml of normal saline for intraperitoneal irrigation during and after surgery\", \"intraperitoneal irrigation was done with 500 ml of normal saline\", \"duration of analgesia\", \"significantly increased\", \"Mean DOA in Group S was 0.06 \\u00b1 0.172 h (3.6 \\u00b1 10.32 min) and that in Group B was 19.35 \\u00b1 8.64 h (P = 0.000).\"], [\"20 ml of 0.5% (100 mg) bupivacaine was added to 480 ml of normal saline for intraperitoneal irrigation during and after surgery\", \"intraperitoneal irrigation was done with 500 ml of normal saline\", \"Incidence of shoulder pain or nausea and vomiting\", \"no significant difference\", \"There was no significant difference in incidence of shoulder pain, nausea and vomiting between the groups\"]]",
"role": "assistant"
}
] |
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.BACKGROUND: Nosocomial infections are one of the main causes of mortality and morbidity in hospitals, especially in intensive care units (ICUs).
ABSTRACT.OBJECTIVE: The aim of this study was to examine the impact of garlic tablets on nosocomial infections in hospitalized patients in intensive care units.
ABSTRACT.METHODS: This clinical trial was carried out on 94 patients, admitted to the intensive care units in Kashani and Al-Zahra hospitals from January 21, 2014 to December 20, 2014. Firstly, the patients were randomly selected by simple sampling, then they were assigned into case and control groups. The case group administered one 400 mg garlic tablet daily for 6 days and the control group received placebo. During the study, inflammatory blood factors and infection occurrence in the two groups were compared. The Data were analyzed by SPSS software version 22 through descriptive tests such as independent t-test, Chi-square test, ANOVA and exact Fisher test for the analyses of primary and secondary outcomes.
ABSTRACT.RESULTS: During the study period, 78 cases of intravenous catheter tip were sent to laboratory for culture, of which, 37 cases were in the intervention group and 41 in the control group. Culture results of Catheter tips was positive in 5 cases and all five cases were in the control group. Frequency distribution of catheter tip culture was significantly higher in the control group than that of the intervention group (p=0.03).
ABSTRACT.CONCLUSION: Based on the results of our study, in people with weakened immune systems and in people with high incidence of opportunistic infections, it is necessary to strengthen their body's immune system stimulants before dealing with these infectious agents, causing a decrease in these diseases in susceptible people. It was suggested that garlic supplementation has been shown to be effective in patients admitted to the ICU, who are highly susceptible to nosocomial infection, and it can be used for the prevention of septicemia and urinary tract infections. However, further research with a larger sample size is needed.
ABSTRACT.TRIAL REGISTRATION: The trial was registered at the Iranian Registry of Clinical Trials (http://www.irct.ir) with the Irct ID: IRCT207406156480N6.
ABSTRACT.FUNDING: Shahrekord University of Medical Sciences financially supported this research.
Extracted Data: | [["400 mg garlic tablet daily for 6 days", "placebo", "Frequency distribution of catheter tip culture", "significantly decreased", "Frequency distribution of catheter tip culture was significantly higher in the control group than that of the intervention group (p=0.03)."]] | science.evidence_inference.99 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: Nosocomial infections are one of the main causes of mortality and morbidity in hospitals, especially in intensive care units (ICUs).\n\nABSTRACT.OBJECTIVE: The aim of this study was to examine the impact of garlic tablets on nosocomial infections in hospitalized patients in intensive care units.\n\nABSTRACT.METHODS: This clinical trial was carried out on 94 patients, admitted to the intensive care units in Kashani and Al-Zahra hospitals from January 21, 2014 to December 20, 2014. Firstly, the patients were randomly selected by simple sampling, then they were assigned into case and control groups. The case group administered one 400 mg garlic tablet daily for 6 days and the control group received placebo. During the study, inflammatory blood factors and infection occurrence in the two groups were compared. The Data were analyzed by SPSS software version 22 through descriptive tests such as independent t-test, Chi-square test, ANOVA and exact Fisher test for the analyses of primary and secondary outcomes.\n\nABSTRACT.RESULTS: During the study period, 78 cases of intravenous catheter tip were sent to laboratory for culture, of which, 37 cases were in the intervention group and 41 in the control group. Culture results of Catheter tips was positive in 5 cases and all five cases were in the control group. Frequency distribution of catheter tip culture was significantly higher in the control group than that of the intervention group (p=0.03).\n\nABSTRACT.CONCLUSION: Based on the results of our study, in people with weakened immune systems and in people with high incidence of opportunistic infections, it is necessary to strengthen their body's immune system stimulants before dealing with these infectious agents, and cause decrease in the diseases insusceptible people. It was suggested that garlic supplementation has shown to be effective in patients admitted to ICU, who are highly susceptible to nosocomial infection, and it can be used for the prevention of septicemia and urinary tract infections. However, further research with larger sample size is needed.\n\nABSTRACT.TRIAL REGISTRATION: The trial was registered at the Iranian Registry of Clinical Trials (http://www.irct.ir) with the Irct ID: IRCT207406156480N6.\n\nABSTRACT.FUNDING: Shahrekord University of Medical Sciences financially supported this research.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"400 mg garlic tablet daily for 6 days\", \"placebo\", \"Frequency distribution of catheter tip culture\", \"significantly decreased\", \"Frequency distribution of catheter tip culture was significantly higher in the control group than that of the intervention group (p=0.03).\"]]",
"role": "assistant"
}
] |
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
A Good Image Generator Is What You Need for High-Resolution Video Synthesis
Main Content:
1 INTRODUCTION . Video synthesis seeks to generate a sequence of moving pictures from noise . While its closely related counterpart—image synthesis—has seen substantial advances in recent years , allowing for synthesizing at high resolutions ( Karras et al. , 2017 ) , rendering images often indistinguishable from real ones ( Karras et al. , 2019 ) , and supporting multiple classes of image content ( Zhang et al. , 2019 ) , contemporary improvements in the domain of video synthesis have been comparatively modest . Due to the statistical complexity of videos and larger model sizes , video synthesis produces relatively low-resolution videos , yet requires longer training times . For example , scaling the image generator of Brock et al . ( 2019 ) to generate 256 × 256 videos requires a substantial computational budget1 . Can we use a similar method to attain higher resolutions ? We believe a different approach is needed . There are two desired properties for generated videos : ( i ) high quality for each individual frame , and ( ii ) the frame sequence should be temporally consistent , i.e . depicting the same content with plausible motion . Previous works ( Tulyakov et al. , 2018 ; Clark et al. , 2019 ) attempt to achieve both goals with a single framework , making such methods computationally demanding when high resolution is desired . We suggest a different perspective on this problem . We hypothesize that , given an image generator that has learned the distribution of video frames as independent images , a video can be represented as a sequence of latent codes from this generator . The problem of video synthesis can then be framed as discovering a latent trajectory that renders temporally consistent images . Hence , we demonstrate that ( i ) can be addressed by a pre-trained and fixed image generator , and ( ii ) can be achieved using the proposed framework to create appropriate image sequences . ∗Work done while at Snap Inc. 1We estimate that the cost of training a model such as DVD-GAN ( Clark et al. , 2019 ) once requires > $ 30K . To discover the appropriate latent trajectory , we introduce a motion generator , implemented via two recurrent neural networks , that operates on the initial content code to obtain the motion representation . We model motion as a residual between continuous latent codes that are passed to the image generator for individual frame generation . Such a residual representation can also facilitate the disentangling of motion and content . The motion generator is trained using the chosen image discriminator with contrastive loss to force the content to be temporally consistent , and a patch-based multi-scale video discriminator for learning motion patterns . Our framework supports contemporary image generators such as StyleGAN2 ( Karras et al. , 2019 ) and BigGAN ( Brock et al. , 2019 ) . We name our approach as MoCoGAN-HD ( Motion and Content decomposed GAN for HighDefinition video synthesis ) as it features several major advantages over traditional video synthesis pipelines . First , it transcends the limited resolutions of existing techniques , allowing for the generation of high-quality videos at resolutions up to 1024 × 1024 . Second , as we search for a latent trajectory in an image generator , our method is computationally more efficient , requiring an order of magnitude less training time than previous video-based works ( Clark et al. , 2019 ) . Third , as the image generator is fixed , it can be trained on a separate high-quality image dataset . 
Due to the disentangled representation of motion and content , our approach can learn motion from a video dataset and apply it to an image dataset , even in the case of two datasets belonging to different domains . It thus unleashes the power of an image generator to synthesize high quality videos when a domain ( e.g. , dogs ) contains many high-quality images but no corresponding high-quality videos ( see Fig . 4 ) . In this manner , our method can generate realistic videos of objects it has never seen moving during training ( such as generating realistic pet face videos using motions extracted from images of talking people ) . We refer to this new video generation task as cross-domain video synthesis . Finally , we quantitatively and qualitatively evaluate our approach , attaining state-of-the-art performance on each benchmark , and establish a challenging new baseline for video synthesis methods . 2 RELATED WORK . Video Synthesis . Approaches to image generation and translation using Generative Adversarial Networks ( GANs ) ( Goodfellow et al. , 2014 ) have demonstrated the ability to synthesize high quality images ( Radford et al. , 2016 ; Zhang et al. , 2019 ; Brock et al. , 2019 ; Donahue & Simonyan , 2019 ; Jin et al. , 2021 ) . Built upon image translation ( Isola et al. , 2017 ; Wang et al. , 2018b ) , works on video-to-video translation ( Bansal et al. , 2018 ; Wang et al. , 2018a ) are capable of converting an input video to a high-resolution output in another domain . However , the task of high-fidelity video generation , in the unconditional setting , is still a difficult and unresolved problem . Without the strong conditional inputs such as segmentation masks ( Wang et al. , 2019 ) or human poses ( Chan et al. , 2019 ; Ren et al. , 2020 ) that are employed by video-to-video translation works , generating videos following the distribution of training video samples is challenging . Earlier works on GANbased video modeling , including MDPGAN ( Yushchenko et al. , 2019 ) , VGAN ( Vondrick et al. , 2016 ) , TGAN ( Saito et al. , 2017 ) , MoCoGAN ( Tulyakov et al. , 2018 ) , ProgressiveVGAN ( Acharya et al. , 2018 ) , TGANv2 ( Saito et al. , 2020 ) show promising results on low-resolution datasets . Recent efforts demonstrate the capacity to generate more realistic videos , but with significantly more computation ( Clark et al. , 2019 ; Weissenborn et al. , 2020 ) . In this paper , we focus on generating realistic videos using manageable computational resources . LDVDGAN ( Kahembwe & Ramamoorthy , 2020 ) uses low dimensional discriminator to reduce model size and can generate videos with resolution up to 512× 512 , while we decrease training cost by utilizing a pre-trained image generator . The high-quality generation is achieved by using pre-trained image generators , while the motion trajectory is modeled within the latent space . Additionally , learning motion in the latent space allows us to easily adapt the video generation model to the task of video prediction ( Denton et al. , 2017 ) , in which the starting frame is given ( Denton & Fergus , 2018 ; Zhao et al. , 2018 ; Walker et al. , 2017 ; Villegas et al. , 2017b ; a ; Babaeizadeh et al. , 2017 ; Hsieh et al. , 2018 ; Byeon et al. , 2018 ) , by inverting the initial frame through the generator ( Abdal et al. , 2020 ) , instead of training an extra image encoder ( Tulyakov et al. , 2018 ; Zhang et al. , 2020 ) . Interpretable Latent Directions . 
The latent space of GANs is known to consist of semantically meaningful vectors for image manipulation . Both supervised methods , either using human annotations or pre-trained image classifiers ( Goetschalckx et al. , 2019 ; Shen et al. , 2020 ) , and unsupervised methods ( Jahanian et al. , 2020 ; Plumerault et al. , 2020 ) , are able to find interpretable directions for image editing , such as supervising directions for image rotation or background removal ( Voynov & Babenko , 2020 ; Shen & Zhou , 2020 ) . We further consider the motion vectors in the latent space . By disentangling the motion trajectories in an unsupervised fashion , we are able to transfer the motion information from a video dataset to an image dataset in which no temporal information is available . Contrastive Representation Learning is widely studied in unsupervised learning tasks ( He et al. , 2020 ; Chen et al. , 2020a ; b ; Hénaff et al. , 2020 ; Löwe et al. , 2019 ; Oord et al. , 2018 ; Misra & Maaten , 2020 ) . Related inputs , such as images ( Wu et al. , 2018 ) or latent representations ( Hjelm et al. , 2019 ) , which can vary while training due to data augmentation , are forced to be close by minimizing differences in their representation during training . Recent work ( Park et al. , 2020 ) applies noisecontrastive estimation ( Gutmann & Hyvärinen , 2010 ) to image generation tasks by learning the correspondence between image patches , achieving performance superior to that attained when using cycle-consistency constraints ( Zhu et al. , 2017 ; Yi et al. , 2017 ) . On the other hand , we learn an image discriminator to create videos with coherent content by leveraging contrastive loss ( Hadsell et al. , 2006 ) along with an adversarial loss ( Goodfellow et al. , 2014 ) . 3 METHOD . In this section , we introduce our method for high-resolution video generation . Our framework is built on top of a pre-trained image generator ( Karras et al. , 2020a ; b ; Zhao et al. , 2020a ; b ) , which helps to generate high-quality image frames and boosts the training efficiency with manageable computational resources . In addition , with the image generator fixed during training , we can disentangle video motion from image content , and enable video synthesis even when the image content and the video motion come from different domains . More specifically , our inference framework includes a motion generatorGM and an image generator GI . GM is implemented with two LSTM networks ( Hochreiter & Schmidhuber , 1997 ) and predicts the latent motion trajectory Z = { z1 , z2 , · · · , zn } , where n is the number of frames in the synthesized video . The image generator GI can thus synthesize each individual frame from the motion trajectory . The generated video sequence ṽ is given by ṽ = { x̃1 , x̃2 , · · · , x̃n } . For each synthesized frame x̃t , we have x̃t = GI ( zt ) for t = 1 , 2 , · · · , n. We also define the real video clip as v = { x1 , x2 , · · · , xn } and the training video distribution as pv . To train the motion generator GM to discover the desired motion trajectory , we apply a video discriminator to constrain the generated motion patterns to be similar to those of the training videos , and an image discriminator to force the frame content to be temporally consistent . Our framework is illustrated in Fig . 1 . We describe each component in more detail in the following sections . 3.1 MOTION GENERATOR . 
The motion generator GM predicts consecutive latent codes using an input code z1 ∈ Z , where the latent spaceZ is also shared by the image generator . For BigGAN ( Brock et al. , 2019 ) , we sample z1 from the normal distribution pz . For StyleGAN2 ( Karras et al. , 2020b ) , pz is the distribution after the multi-layer perceptron ( MLP ) , as the latent codes within this distribution can be semantically disentangled better than when using the normal distribution ( Shen et al. , 2020 ; Zhu et al. , 2020 ) . Formally , GM includes an LSTM encoder LSTMenc , which encodes z1 to get the initial hidden state , and a LSTM decoder LSTMdec , which estimates n− 1 continuous hidden states recursively : h1 , c1 = LSTMenc ( z1 ) , ht , ct = LSTMdec ( t , ( ht−1 , ct−1 ) ) , t = 2 , 3 , · · · , n , ( 1 ) where h and c denote the hidden state and cell state respectively , and t is a noise vector sampled from the normal distribution to model the motion diversity at timestamp t. Motion Disentanglement . Prior work ( Tulyakov et al. , 2018 ) applies ht as the motion code for the frame to be generated , while the content code is fixed for all frames . However , such a design requires a recurrent network to estimate the motion while preserving consistent content from the latent vector , which is difficult to learn in practice . Instead , we propose to use a sequence of motion residuals for estimating the motion trajectory . Specifically , we model the motion residual as the linear combination of a set of interpretable directions in the latent space ( Shen & Zhou , 2020 ; Härkönen et al. , 2020 ) . We first conduct principal component analysis ( PCA ) on m randomly sampled latent vectors from Z to get the basis V. Then , we estimate the motion direction from the previous frame zt−1 to the current frame zt by using ht and V as follows : zt = zt−1 + λ · ht ·V , t = 2 , 3 , · · · , n , ( 2 ) where the hidden state ht ∈ [ −1 , 1 ] , and λ controls the step given by the residual . With Eqn . 1 and Eqn . 2 , we have GM ( z1 ) = { z1 , z2 , · · · , zn } , and the generated video ṽ is given as ṽ = GI ( GM ( z1 ) ) . Motion Diversity . In Eqn . 1 , we introduce a noise vector t to control the diversity of motion . However , we observe that the LSTM decoder tends to neglect the t , resulting in motion mode collapse , meaning that GM can not capture the diverse motion patterns from training videos and generate distinct videos from one initial latent code with similar motion patterns for different sequences of noise vectors . To alleviate this issue , we introduce a mutual information loss Lm to maximize the mutual information between the hidden vector ht and the noise vector t. With sim ( u , v ) = uTv/ ‖u‖ ‖v‖ denoting the cosine similarity between vectors u and v , we define Lm as follows : Lm = 1 n− 1 n∑ t=2 sim ( H ( ht ) , t ) , ( 3 ) where H is a 2-layer MLP that serves as a mapping function . Learning . To learn the appropriate parameters for the motion generator GM , we apply a multi-scale video discriminator DV to tell whether a video sequence is real or synthesized . The discriminator is based on the architecture of PatchGAN ( Isola et al. , 2017 ) . However , we use 3D convolutional layers in DV , as they can model temporal dynamics better than 2D convolutional layers . We divide input video sequence into small 3D patches , and classify each patch as real or fake . The local responses for the input sequence are averaged to produce the final output . 
Additionally , each frame in the input video sequence is conditioned on the first frame , as it falls into the distribution of the pre-trained image generator , for more stable training . We thus optimize the following adversarial loss to learn GM and DV : LDV = Ev∼pv [ logDv ( v ) ] + Ez1∼pz [ log ( 1−DV ( GI ( GM ( z1 ) ) ) ) ] . ( 4 ) | This paper proposes a method to disentangle content and motion from videos for high-resolution video synthesis. The proposed method consists of a motion generator, pre-trained generator, image discriminator, and video discriminator. The motion generator predicts the latent motion trajectory z, which is residually updated over time. Then the image generator produces each individual frame from the motion trajectory. For training, five types of loss functions are combined. In experiments, video generation by the proposed method is performed on UCF-101, FaceForensics, and Sky time-Lapse datasets. Also, cross-domain video generation and more ablation studies were conducted to show the effectiveness of the proposed method. | science.mup_single_document_summarization.2561 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nA Good Image Generator Is What You Need for High-Resolution Video Synthesis\n\nMain Content:\n1 INTRODUCTION . Video synthesis seeks to generate a sequence of moving pictures from noise . While its closely related counterpart—image synthesis—has seen substantial advances in recent years , allowing for synthesizing at high resolutions ( Karras et al. , 2017 ) , rendering images often indistinguishable from real ones ( Karras et al. , 2019 ) , and supporting multiple classes of image content ( Zhang et al. , 2019 ) , contemporary improvements in the domain of video synthesis have been comparatively modest . Due to the statistical complexity of videos and larger model sizes , video synthesis produces relatively low-resolution videos , yet requires longer training times . For example , scaling the image generator of Brock et al . ( 2019 ) to generate 256 × 256 videos requires a substantial computational budget1 . Can we use a similar method to attain higher resolutions ? We believe a different approach is needed . There are two desired properties for generated videos : ( i ) high quality for each individual frame , and ( ii ) the frame sequence should be temporally consistent , i.e . depicting the same content with plausible motion . Previous works ( Tulyakov et al. , 2018 ; Clark et al. , 2019 ) attempt to achieve both goals with a single framework , making such methods computationally demanding when high resolution is desired . We suggest a different perspective on this problem . We hypothesize that , given an image generator that has learned the distribution of video frames as independent images , a video can be represented as a sequence of latent codes from this generator . The problem of video synthesis can then be framed as discovering a latent trajectory that renders temporally consistent images . Hence , we demonstrate that ( i ) can be addressed by a pre-trained and fixed image generator , and ( ii ) can be achieved using the proposed framework to create appropriate image sequences . ∗Work done while at Snap Inc. 1We estimate that the cost of training a model such as DVD-GAN ( Clark et al. , 2019 ) once requires > $ 30K . To discover the appropriate latent trajectory , we introduce a motion generator , implemented via two recurrent neural networks , that operates on the initial content code to obtain the motion representation . We model motion as a residual between continuous latent codes that are passed to the image generator for individual frame generation . Such a residual representation can also facilitate the disentangling of motion and content . The motion generator is trained using the chosen image discriminator with contrastive loss to force the content to be temporally consistent , and a patch-based multi-scale video discriminator for learning motion patterns . Our framework supports contemporary image generators such as StyleGAN2 ( Karras et al. , 2019 ) and BigGAN ( Brock et al. , 2019 ) . We name our approach as MoCoGAN-HD ( Motion and Content decomposed GAN for HighDefinition video synthesis ) as it features several major advantages over traditional video synthesis pipelines . First , it transcends the limited resolutions of existing techniques , allowing for the generation of high-quality videos at resolutions up to 1024 × 1024 . 
Second , as we search for a latent trajectory in an image generator , our method is computationally more efficient , requiring an order of magnitude less training time than previous video-based works ( Clark et al. , 2019 ) . Third , as the image generator is fixed , it can be trained on a separate high-quality image dataset . Due to the disentangled representation of motion and content , our approach can learn motion from a video dataset and apply it to an image dataset , even in the case of two datasets belonging to different domains . It thus unleashes the power of an image generator to synthesize high quality videos when a domain ( e.g. , dogs ) contains many high-quality images but no corresponding high-quality videos ( see Fig . 4 ) . In this manner , our method can generate realistic videos of objects it has never seen moving during training ( such as generating realistic pet face videos using motions extracted from images of talking people ) . We refer to this new video generation task as cross-domain video synthesis . Finally , we quantitatively and qualitatively evaluate our approach , attaining state-of-the-art performance on each benchmark , and establish a challenging new baseline for video synthesis methods . 2 RELATED WORK . Video Synthesis . Approaches to image generation and translation using Generative Adversarial Networks ( GANs ) ( Goodfellow et al. , 2014 ) have demonstrated the ability to synthesize high quality images ( Radford et al. , 2016 ; Zhang et al. , 2019 ; Brock et al. , 2019 ; Donahue & Simonyan , 2019 ; Jin et al. , 2021 ) . Built upon image translation ( Isola et al. , 2017 ; Wang et al. , 2018b ) , works on video-to-video translation ( Bansal et al. , 2018 ; Wang et al. , 2018a ) are capable of converting an input video to a high-resolution output in another domain . However , the task of high-fidelity video generation , in the unconditional setting , is still a difficult and unresolved problem . Without the strong conditional inputs such as segmentation masks ( Wang et al. , 2019 ) or human poses ( Chan et al. , 2019 ; Ren et al. , 2020 ) that are employed by video-to-video translation works , generating videos following the distribution of training video samples is challenging . Earlier works on GANbased video modeling , including MDPGAN ( Yushchenko et al. , 2019 ) , VGAN ( Vondrick et al. , 2016 ) , TGAN ( Saito et al. , 2017 ) , MoCoGAN ( Tulyakov et al. , 2018 ) , ProgressiveVGAN ( Acharya et al. , 2018 ) , TGANv2 ( Saito et al. , 2020 ) show promising results on low-resolution datasets . Recent efforts demonstrate the capacity to generate more realistic videos , but with significantly more computation ( Clark et al. , 2019 ; Weissenborn et al. , 2020 ) . In this paper , we focus on generating realistic videos using manageable computational resources . LDVDGAN ( Kahembwe & Ramamoorthy , 2020 ) uses low dimensional discriminator to reduce model size and can generate videos with resolution up to 512× 512 , while we decrease training cost by utilizing a pre-trained image generator . The high-quality generation is achieved by using pre-trained image generators , while the motion trajectory is modeled within the latent space . Additionally , learning motion in the latent space allows us to easily adapt the video generation model to the task of video prediction ( Denton et al. , 2017 ) , in which the starting frame is given ( Denton & Fergus , 2018 ; Zhao et al. , 2018 ; Walker et al. , 2017 ; Villegas et al. , 2017b ; a ; Babaeizadeh et al. 
, 2017 ; Hsieh et al. , 2018 ; Byeon et al. , 2018 ) , by inverting the initial frame through the generator ( Abdal et al. , 2020 ) , instead of training an extra image encoder ( Tulyakov et al. , 2018 ; Zhang et al. , 2020 ) . Interpretable Latent Directions . The latent space of GANs is known to consist of semantically meaningful vectors for image manipulation . Both supervised methods , either using human annotations or pre-trained image classifiers ( Goetschalckx et al. , 2019 ; Shen et al. , 2020 ) , and unsupervised methods ( Jahanian et al. , 2020 ; Plumerault et al. , 2020 ) , are able to find interpretable directions for image editing , such as supervising directions for image rotation or background removal ( Voynov & Babenko , 2020 ; Shen & Zhou , 2020 ) . We further consider the motion vectors in the latent space . By disentangling the motion trajectories in an unsupervised fashion , we are able to transfer the motion information from a video dataset to an image dataset in which no temporal information is available . Contrastive Representation Learning is widely studied in unsupervised learning tasks ( He et al. , 2020 ; Chen et al. , 2020a ; b ; Hénaff et al. , 2020 ; Löwe et al. , 2019 ; Oord et al. , 2018 ; Misra & Maaten , 2020 ) . Related inputs , such as images ( Wu et al. , 2018 ) or latent representations ( Hjelm et al. , 2019 ) , which can vary while training due to data augmentation , are forced to be close by minimizing differences in their representation during training . Recent work ( Park et al. , 2020 ) applies noisecontrastive estimation ( Gutmann & Hyvärinen , 2010 ) to image generation tasks by learning the correspondence between image patches , achieving performance superior to that attained when using cycle-consistency constraints ( Zhu et al. , 2017 ; Yi et al. , 2017 ) . On the other hand , we learn an image discriminator to create videos with coherent content by leveraging contrastive loss ( Hadsell et al. , 2006 ) along with an adversarial loss ( Goodfellow et al. , 2014 ) . 3 METHOD . In this section , we introduce our method for high-resolution video generation . Our framework is built on top of a pre-trained image generator ( Karras et al. , 2020a ; b ; Zhao et al. , 2020a ; b ) , which helps to generate high-quality image frames and boosts the training efficiency with manageable computational resources . In addition , with the image generator fixed during training , we can disentangle video motion from image content , and enable video synthesis even when the image content and the video motion come from different domains . More specifically , our inference framework includes a motion generatorGM and an image generator GI . GM is implemented with two LSTM networks ( Hochreiter & Schmidhuber , 1997 ) and predicts the latent motion trajectory Z = { z1 , z2 , · · · , zn } , where n is the number of frames in the synthesized video . The image generator GI can thus synthesize each individual frame from the motion trajectory . The generated video sequence ṽ is given by ṽ = { x̃1 , x̃2 , · · · , x̃n } . For each synthesized frame x̃t , we have x̃t = GI ( zt ) for t = 1 , 2 , · · · , n. We also define the real video clip as v = { x1 , x2 , · · · , xn } and the training video distribution as pv . 
To train the motion generator GM to discover the desired motion trajectory , we apply a video discriminator to constrain the generated motion patterns to be similar to those of the training videos , and an image discriminator to force the frame content to be temporally consistent . Our framework is illustrated in Fig . 1 . We describe each component in more detail in the following sections . 3.1 MOTION GENERATOR . The motion generator GM predicts consecutive latent codes using an input code z1 ∈ Z , where the latent spaceZ is also shared by the image generator . For BigGAN ( Brock et al. , 2019 ) , we sample z1 from the normal distribution pz . For StyleGAN2 ( Karras et al. , 2020b ) , pz is the distribution after the multi-layer perceptron ( MLP ) , as the latent codes within this distribution can be semantically disentangled better than when using the normal distribution ( Shen et al. , 2020 ; Zhu et al. , 2020 ) . Formally , GM includes an LSTM encoder LSTMenc , which encodes z1 to get the initial hidden state , and a LSTM decoder LSTMdec , which estimates n− 1 continuous hidden states recursively : h1 , c1 = LSTMenc ( z1 ) , ht , ct = LSTMdec ( t , ( ht−1 , ct−1 ) ) , t = 2 , 3 , · · · , n , ( 1 ) where h and c denote the hidden state and cell state respectively , and t is a noise vector sampled from the normal distribution to model the motion diversity at timestamp t. Motion Disentanglement . Prior work ( Tulyakov et al. , 2018 ) applies ht as the motion code for the frame to be generated , while the content code is fixed for all frames . However , such a design requires a recurrent network to estimate the motion while preserving consistent content from the latent vector , which is difficult to learn in practice . Instead , we propose to use a sequence of motion residuals for estimating the motion trajectory . Specifically , we model the motion residual as the linear combination of a set of interpretable directions in the latent space ( Shen & Zhou , 2020 ; Härkönen et al. , 2020 ) . We first conduct principal component analysis ( PCA ) on m randomly sampled latent vectors from Z to get the basis V. Then , we estimate the motion direction from the previous frame zt−1 to the current frame zt by using ht and V as follows : zt = zt−1 + λ · ht ·V , t = 2 , 3 , · · · , n , ( 2 ) where the hidden state ht ∈ [ −1 , 1 ] , and λ controls the step given by the residual . With Eqn . 1 and Eqn . 2 , we have GM ( z1 ) = { z1 , z2 , · · · , zn } , and the generated video ṽ is given as ṽ = GI ( GM ( z1 ) ) . Motion Diversity . In Eqn . 1 , we introduce a noise vector t to control the diversity of motion . However , we observe that the LSTM decoder tends to neglect the t , resulting in motion mode collapse , meaning that GM can not capture the diverse motion patterns from training videos and generate distinct videos from one initial latent code with similar motion patterns for different sequences of noise vectors . To alleviate this issue , we introduce a mutual information loss Lm to maximize the mutual information between the hidden vector ht and the noise vector t. With sim ( u , v ) = uTv/ ‖u‖ ‖v‖ denoting the cosine similarity between vectors u and v , we define Lm as follows : Lm = 1 n− 1 n∑ t=2 sim ( H ( ht ) , t ) , ( 3 ) where H is a 2-layer MLP that serves as a mapping function . Learning . To learn the appropriate parameters for the motion generator GM , we apply a multi-scale video discriminator DV to tell whether a video sequence is real or synthesized . 
The discriminator is based on the architecture of PatchGAN ( Isola et al. , 2017 ) . However , we use 3D convolutional layers in DV , as they can model temporal dynamics better than 2D convolutional layers . We divide input video sequence into small 3D patches , and classify each patch as real or fake . The local responses for the input sequence are averaged to produce the final output . Additionally , each frame in the input video sequence is conditioned on the first frame , as it falls into the distribution of the pre-trained image generator , for more stable training . We thus optimize the following adversarial loss to learn GM and DV : LDV = Ev∼pv [ logDv ( v ) ] + Ez1∼pz [ log ( 1−DV ( GI ( GM ( z1 ) ) ) ) ] . ( 4 )",
"role": "user"
},
{
"content": "This paper proposes a method to disentangle content and motion from videos for high-resolution video synthesis. The proposed method consists of a motion generator, pre-trained generator, image discriminator, and video discriminator. The motion generator predicts the latent motion trajectory z, which is residually updated over time. Then the image generator produces each individual frame from the motion trajectory. For training, five types of loss functions are combined. In experiments, video generation by the proposed method is performed on UCF-101, FaceForensics, and Sky time-Lapse datasets. Also, cross-domain video generation and more ablation studies were conducted to show the effectiveness of the proposed method.",
"role": "assistant"
}
] |
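Aside (editor's illustration): the science.mup_single_document_summarization record above describes MoCoGAN-HD's motion generator as an LSTM whose hidden states h_t, lying in (-1, 1), are projected onto a PCA basis V of the image generator's latent space and applied as residuals, z_t = z_{t-1} + λ · h_t · V, with per-frame noise ε_t injected for motion diversity. The PyTorch-style sketch below restates only that update rule; it collapses the paper's separate encoder/decoder LSTMs into a single cell, and every shape, hyperparameter, and the use of torch.pca_lowrank are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MotionResidualSketch(nn.Module):
    """Residual latent walk z_t = z_{t-1} + lam * h_t @ V (illustrative only)."""

    def __init__(self, latent_dim=512, n_basis=384, lam=0.1, n_samples=10000):
        super().__init__()
        # One LSTM cell stands in for the paper's encoder/decoder pair (simplification).
        self.lstm = nn.LSTMCell(latent_dim, n_basis)
        self.lam = lam
        # PCA basis of randomly sampled latent codes, a stand-in for the paper's V.
        z = torch.randn(n_samples, latent_dim)
        _, _, v = torch.pca_lowrank(z, q=n_basis)
        self.register_buffer("V", v.T)                # (n_basis, latent_dim)

    def forward(self, z1, n_frames=16):
        h, c = self.lstm(z1)                          # encode the first latent code
        codes = [z1]
        for _ in range(n_frames - 1):
            eps = torch.randn_like(z1)                # per-frame noise for motion diversity
            h, c = self.lstm(eps, (h, c))
            step = h @ self.V                         # h_t already lies in (-1, 1); project onto V
            codes.append(codes[-1] + self.lam * step) # z_t = z_{t-1} + lam * h_t . V
            # Each z_t would then be fed to a frozen image generator to render frame t.
        return torch.stack(codes, dim=1)              # (batch, n_frames, latent_dim)

# Usage sketch: MotionResidualSketch()(torch.randn(2, 512)) -> (2, 16, 512) trajectory
```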
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
The main differences with the approaches described in [13,<cite> 15]</cite> are the use of multi-layered GRUs, importance sampling, the cyclic learning rate, snapshot ensembling and the use of vectorial rather than scalar attention. | ["Differences"] | science.multicite_intent_classification.2161 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nThe main differences with the approaches described in [13,<cite> 15]</cite> are the use of multi-layered GRUs, importance sampling, the cyclic learning rate, snapshot ensembling and the use of vectorial rather than scalar attention.",
"role": "user"
},
{
"content": "[\"Differences\"]",
"role": "assistant"
}
] |
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
Combining Analogy with Language Models for Knowledge Extraction
Main Content:
1 . Introduction . Knowledge bases ( KBs ) have proven to be a very useful resource in many artificial intelligence applications including Dialogue , Question Answering , and Semantic Search [ Bradeško et al. , 2010 , Lukovnikov et al. , 2017 , Ribeiro et al. , 2019 ] . While KBs serve as a good source of explicit knowledge , they often suffer from missing relations and facts . For this reason , extracting knowledge from text with the intent of improving existing KBs has been a longstanding goal in the field of Natural Language Processing . This task is usually referred to as knowledge extraction and is considered challenging because of the inherent diversity and ambiguity of natural language text . In recent years , there has been substantial progress in automated knowledge extraction , but the resources built by these methods are typically limited by at least one of these four constraints : ( 1 ) They lack general world knowledge about common nouns , targeting named entities instead . Examples include NELL [ Carlson et al. , 2010 ] and FreeBase [ Bollacker et al. , 2008 ] ( 2 ) They work with a small set of semantic relations . For instance , WordNet [ Miller , 1998 ] has less than a dozen . ( 3 ) They are supervised in nature , requiring vast amounts of training data . Examples include KBP37 [ Zhang and Wang , 2015 ] and TACRED [ Zhang et al. , 2017 ] . ( 4 ) They represent entities and relations as text strings , which are often ambiguous and harder to manipulate . Examples include ConceptNet [ Speer et al. , 2017 ] , Aristo Tuple KB [ Mishra et al. , 2017 ] , ATOMIC [ Sap et al. , 2019 ] , Dice [ Chalier et al. , 2020 ] , and Ascent [ Nguyen et al. , 2021 ] . By contrast , our goal is to populate an existing ontology , seeded by the facts and schemas already defined in the KB . We use the Companion Cognitive Architecture [ Forbus et al. , 2017 ] which has a general-purpose semantic parser capable of generating semantic representations grounded by relations and entities present in the architecture ’ s KB . In general , KBs containing common-sense world knowledge are incomplete . Consequently , a system has to rely on a small amount of training samples to learn new facts ( in our KB around 94.8 % of relations are associated with less than 100 facts ) . To solve this problem , we propose a novel technique based on analogical Q/A training [ Crouse et al. , 2018 ] . Our approach efficiently creates query cases which are later analogically retrieved and combined to interpret new texts . The system uses distant supervision to automatically generate training data for analogical training which extracts general knowledge from Simple English Wikipedia1 articles . In this work we combine structural similarity with BERT-guided word sense disambiguation and fact classification for few-shot learning of common-sense facts . To the best of our knowledge , we are the first to combine these two techniques . 2 . Related Work . Our work relates to learning structured knowledge from natural language text with the goal of extending a KB without relying on labeled corpora . Such tasks are often classified as information extraction ( IE ) or relation extraction ( RE ) . Learning under distant supervision for RE without labeled data was initially introduced by Mintz et al . [ 2009 ] . They showed how text can be heuristically aligned with a knowledge graph ( KG ) to generate instances used to train a classifier . 
Subsequent systems incorporated not only syntactic textual features but also jointly learned KG information to perform extraction , either by syntactic-semantic rules from random walks [ Lao et al. , 2012 ] , scoring functions of low-dimensional embeddings [ Weston et al. , 2013 ] , mutual attention between KG and text [ Han et al. , 2018a ] , or neural encoder-decoder models [ Distiawan et al. , 2019 ] . Other recent methods have focused on using neural networks in the context on few-shot relation extraction [ Han et al. , 2018b , Gao et al. , 2019 , Qu et al. , 2020 , Wang et al. , 2021 ] . Such methods have achieved impressive results on RE but focused on resources that either require many hundreds of training examples or mostly extract facts about named entities . The work of Sharma et al . [ 2019 ] is more related to ours as they used distant supervision to learn Cyc style facts [ Lenat and Guha , 1993 ] and were able to leverage the structure of the KB ’ s ontology to improve accuracy . Their experiments however were limited to a small set of relations and they used more shallow textual patterns instead of semantic-parsed interpretation of sentences . To avoid dependency on pre-defined schemas , Open IE [ Yates et al. , 2007 , Fader et al. , 2011 , Angeli et al. , 2015 , Romero et al. , 2019 , Romero and Razniewski , 2020 ] extracts knowledge from text using an open set of relations and entities . Commonly , the relation names are represented as text linking two arguments . This representation leads systems to commonly generate multiple instances of the same relation . Riedel et al . [ 2013 ] tackled the problem of identifying equivalence between extracted relations by learning canonical schemas . Mishra et al . [ 2017 ] used this approach to create a large KB with subject-predicate- 1. http : //simple.wikipedia.org/ object triples of generic statements about the world . The advantage of our approach over Open IE is that we output symbolic , unambiguous representation of knowledge . Textual representations make reasoning harder , and even though some of the predicates are typed , they often suffer from word sense ambiguity . Analogy uses the relational structure from semantic representations of texts as a way of learning abstract concepts without vast amounts of labeled data . This approach has been previously applied to solve many natural language problems such as word sense disambiguation [ Barbella and Forbus , 2013 ] , question answering [ Crouse et al. , 2018 ] , multimodal dialog systems [ Wilson et al. , 2019 ] and process understanding [ Ribeiro et al. , 2019 ] . In our work , analogical similarity is used to retrieve rule-like constructs called query cases . Retrieval of stored cases has been extensively used by many case-based reasoning systems [ Daniels and Rissland , 1995 , Burke et al. , 1997 ] . However , these case-based reasoning approaches differ from analogical learning since they use shallow pattern matching or standard information retrieval methods instead of structural similarity applied to parsed semantics . 3 . Problem Definition . The task is to populate an existing KB with facts extracted from natural language text , constrained by the entities and relations already defined in such KB . The standard relation extraction task represents structured knowledge as graphs [ Weston et al. , 2013 , Zhang et al. , 2020 ] . Relations are edges in the graph and are stored as triples ( h , r , t ) , as in ( Spain , borders , France ) . 
Here , the knowledge is represented using the CycL higher-order logic and OpenCyc ontology [ Lenat and Guha , 1993 ] , which is more expressive . Further details on CycL can be found in Appendix A . Formally , we denote the knowledge base as KB = ( C , R , F ) , where C is the set of concepts ( in CycL they can be either collections , entities , or logical functions ) , R is the set of relations , and F is the set of facts . Each fact f ∈ F is a nested tuple , represented as a tree data structure , where the root node f1 ∈ R is the relation and each internal node f2 , · · · , fn are the nested arguments . A text corpus is used as input , where each sentence is a sequence of tokens s = t1 , · · · , tm . The final task is to find all arguments for relations r ∈ R ; that express common-sense beliefs extracted from sentences in the corpus . Note that our proposed task is harder than common RE tasks . They often only attempt to infer either r given ( h , ? , t ) or t given ( h , r , ? ) , instead of predicting all arguments given a relation . 4 . Background . We use the Companion Cognitive Architecture as a foundation for our system , as it contains key components that allow for efficient knowledge extraction . Namely , we use the NextKB knowledge base , the CNLU natural language processing system [ Tomai and Forbus , 2009 ] , and analogical capabilities provided by SME , the Structure Mapping Engine [ Forbus et al. , 2017 ] . We briefly explain each of these components below . Knowledge Base : The NextKB 2 knowledge base integrates material from OpenCyc and FrameNet [ Baker et al. , 1998 ] , as well as a large-scale lexicon [ McFate and Forbus , 2011 ] . 2. http : //www.qrg.northwestern.edu/nextkb/index.html All facts in NextKB are represented using the CycL ontology language and they are used as seeds for further inference of missing facts . Natural Language Understanding : The CNLU language system combines Allen ’ s ( 1995 ) bottom-up chart parser with ideas from Discourse Representation Theory [ Kamp and Reyle , 2013 ] to produce semantic representations from the input sentences . Specifically , CNLU produces predicate calculus interpretations represented in CycL language . CNLU is task agnostic and the syntactic parse and word sense ambiguities are explicitly represented as distinct and internally consistent choice sets . Choice sets are often derived from FrameNet semantic frames . Logical constraints among the statements are designed such that by choosing one semantic interpretation , other conflicting choices are ruled out . Analogy : We use the Structure-Mapping Engine ( SME ) by Forbus et al . [ 2017 ] to match stored query cases . This component compares two structured , relational representations and creates one or more mappings between them . These mappings consist of ( 1 ) a set of correspondences , specifying what entities and predicates in one description go with entities and predicates in the other , ( 2 ) a score indicating the similarity between the representations , and ( 3 ) candidate inferences that project information from one description to the other . For more computationally efficient retrieval , our approach uses MAC/FAC [ Forbus et al. , 1995 ] to apply query cases learned during training for extraction of structured knowledge . BERT Language Model : The Bidirectional Encoder Representations from Transformers ( BERT ) [ Devlin et al. 
, 2018 ] is a language model trained on a large unlabeled corpus ( around 3.3B words from English Wikipedia and BooksCorpus ) that jointly conditions on both left and right context for all layers . We use BERT for word sense disambiguation and classification of extracted facts , as described in the following sections . The first input token is always a special token [ CLS ] which is used for sentence classification while sentence boundaries are marked with the special token [ SEP ] . The final hidden vector of the special [ CLS ] token is defined as TCLS ∈ RH , and the final hidden vector for the ith input token ti as Ti ∈ RH , where H is the hidden state size . | This paper presents an approach for augmenting a knowledge base with general knowledge facts by extracting them from text. The system parses new sentences, converts them into semantic relations, ties them to the existing knowledge graph to compute a connectivity with the current knowledge base, and uses BERT to evaluate plausibility. An experiment shows that the precision of the proposed system in extracting facts is quite high, higher than the baselines. | science.mup_single_document_summarization.9473 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nCombining Analogy with Language Models for Knowledge Extraction\n\nMain Content:\n1 . Introduction . Knowledge bases ( KBs ) have proven to be a very useful resource in many artificial intelligence applications including Dialogue , Question Answering , and Semantic Search [ Bradeško et al. , 2010 , Lukovnikov et al. , 2017 , Ribeiro et al. , 2019 ] . While KBs serve as a good source of explicit knowledge , they often suffer from missing relations and facts . For this reason , extracting knowledge from text with the intent of improving existing KBs has been a longstanding goal in the field of Natural Language Processing . This task is usually referred to as knowledge extraction and is considered challenging because of the inherent diversity and ambiguity of natural language text . In recent years , there has been substantial progress in automated knowledge extraction , but the resources built by these methods are typically limited by at least one of these four constraints : ( 1 ) They lack general world knowledge about common nouns , targeting named entities instead . Examples include NELL [ Carlson et al. , 2010 ] and FreeBase [ Bollacker et al. , 2008 ] ( 2 ) They work with a small set of semantic relations . For instance , WordNet [ Miller , 1998 ] has less than a dozen . ( 3 ) They are supervised in nature , requiring vast amounts of training data . Examples include KBP37 [ Zhang and Wang , 2015 ] and TACRED [ Zhang et al. , 2017 ] . ( 4 ) They represent entities and relations as text strings , which are often ambiguous and harder to manipulate . Examples include ConceptNet [ Speer et al. , 2017 ] , Aristo Tuple KB [ Mishra et al. , 2017 ] , ATOMIC [ Sap et al. , 2019 ] , Dice [ Chalier et al. , 2020 ] , and Ascent [ Nguyen et al. , 2021 ] . By contrast , our goal is to populate an existing ontology , seeded by the facts and schemas already defined in the KB . We use the Companion Cognitive Architecture [ Forbus et al. , 2017 ] which has a general-purpose semantic parser capable of generating semantic representations grounded by relations and entities present in the architecture ’ s KB . In general , KBs containing common-sense world knowledge are incomplete . Consequently , a system has to rely on a small amount of training samples to learn new facts ( in our KB around 94.8 % of relations are associated with less than 100 facts ) . To solve this problem , we propose a novel technique based on analogical Q/A training [ Crouse et al. , 2018 ] . Our approach efficiently creates query cases which are later analogically retrieved and combined to interpret new texts . The system uses distant supervision to automatically generate training data for analogical training which extracts general knowledge from Simple English Wikipedia1 articles . In this work we combine structural similarity with BERT-guided word sense disambiguation and fact classification for few-shot learning of common-sense facts . To the best of our knowledge , we are the first to combine these two techniques . 2 . Related Work . Our work relates to learning structured knowledge from natural language text with the goal of extending a KB without relying on labeled corpora . Such tasks are often classified as information extraction ( IE ) or relation extraction ( RE ) . 
Learning under distant supervision for RE without labeled data was initially introduced by Mintz et al . [ 2009 ] . They showed how text can be heuristically aligned with a knowledge graph ( KG ) to generate instances used to train a classifier . Subsequent systems incorporated not only syntactic textual features but also jointly learned KG information to perform extraction , either by syntactic-semantic rules from random walks [ Lao et al. , 2012 ] , scoring functions of low-dimensional embeddings [ Weston et al. , 2013 ] , mutual attention between KG and text [ Han et al. , 2018a ] , or neural encoder-decoder models [ Distiawan et al. , 2019 ] . Other recent methods have focused on using neural networks in the context on few-shot relation extraction [ Han et al. , 2018b , Gao et al. , 2019 , Qu et al. , 2020 , Wang et al. , 2021 ] . Such methods have achieved impressive results on RE but focused on resources that either require many hundreds of training examples or mostly extract facts about named entities . The work of Sharma et al . [ 2019 ] is more related to ours as they used distant supervision to learn Cyc style facts [ Lenat and Guha , 1993 ] and were able to leverage the structure of the KB ’ s ontology to improve accuracy . Their experiments however were limited to a small set of relations and they used more shallow textual patterns instead of semantic-parsed interpretation of sentences . To avoid dependency on pre-defined schemas , Open IE [ Yates et al. , 2007 , Fader et al. , 2011 , Angeli et al. , 2015 , Romero et al. , 2019 , Romero and Razniewski , 2020 ] extracts knowledge from text using an open set of relations and entities . Commonly , the relation names are represented as text linking two arguments . This representation leads systems to commonly generate multiple instances of the same relation . Riedel et al . [ 2013 ] tackled the problem of identifying equivalence between extracted relations by learning canonical schemas . Mishra et al . [ 2017 ] used this approach to create a large KB with subject-predicate- 1. http : //simple.wikipedia.org/ object triples of generic statements about the world . The advantage of our approach over Open IE is that we output symbolic , unambiguous representation of knowledge . Textual representations make reasoning harder , and even though some of the predicates are typed , they often suffer from word sense ambiguity . Analogy uses the relational structure from semantic representations of texts as a way of learning abstract concepts without vast amounts of labeled data . This approach has been previously applied to solve many natural language problems such as word sense disambiguation [ Barbella and Forbus , 2013 ] , question answering [ Crouse et al. , 2018 ] , multimodal dialog systems [ Wilson et al. , 2019 ] and process understanding [ Ribeiro et al. , 2019 ] . In our work , analogical similarity is used to retrieve rule-like constructs called query cases . Retrieval of stored cases has been extensively used by many case-based reasoning systems [ Daniels and Rissland , 1995 , Burke et al. , 1997 ] . However , these case-based reasoning approaches differ from analogical learning since they use shallow pattern matching or standard information retrieval methods instead of structural similarity applied to parsed semantics . 3 . Problem Definition . The task is to populate an existing KB with facts extracted from natural language text , constrained by the entities and relations already defined in such KB . 
The standard relation extraction task represents structured knowledge as graphs [ Weston et al. , 2013 , Zhang et al. , 2020 ] . Relations are edges in the graph and are stored as triples ( h , r , t ) , as in ( Spain , borders , France ) . Here , the knowledge is represented using the CycL higher-order logic and OpenCyc ontology [ Lenat and Guha , 1993 ] , which is more expressive . Further details on CycL can be found in Appendix A . Formally , we denote the knowledge base as KB = ( C , R , F ) , where C is the set of concepts ( in CycL they can be either collections , entities , or logical functions ) , R is the set of relations , and F is the set of facts . Each fact f ∈ F is a nested tuple , represented as a tree data structure , where the root node f1 ∈ R is the relation and each internal node f2 , · · · , fn are the nested arguments . A text corpus is used as input , where each sentence is a sequence of tokens s = t1 , · · · , tm . The final task is to find all arguments for relations r ∈ R ; that express common-sense beliefs extracted from sentences in the corpus . Note that our proposed task is harder than common RE tasks . They often only attempt to infer either r given ( h , ? , t ) or t given ( h , r , ? ) , instead of predicting all arguments given a relation . 4 . Background . We use the Companion Cognitive Architecture as a foundation for our system , as it contains key components that allow for efficient knowledge extraction . Namely , we use the NextKB knowledge base , the CNLU natural language processing system [ Tomai and Forbus , 2009 ] , and analogical capabilities provided by SME , the Structure Mapping Engine [ Forbus et al. , 2017 ] . We briefly explain each of these components below . Knowledge Base : The NextKB 2 knowledge base integrates material from OpenCyc and FrameNet [ Baker et al. , 1998 ] , as well as a large-scale lexicon [ McFate and Forbus , 2011 ] . 2. http : //www.qrg.northwestern.edu/nextkb/index.html All facts in NextKB are represented using the CycL ontology language and they are used as seeds for further inference of missing facts . Natural Language Understanding : The CNLU language system combines Allen ’ s ( 1995 ) bottom-up chart parser with ideas from Discourse Representation Theory [ Kamp and Reyle , 2013 ] to produce semantic representations from the input sentences . Specifically , CNLU produces predicate calculus interpretations represented in CycL language . CNLU is task agnostic and the syntactic parse and word sense ambiguities are explicitly represented as distinct and internally consistent choice sets . Choice sets are often derived from FrameNet semantic frames . Logical constraints among the statements are designed such that by choosing one semantic interpretation , other conflicting choices are ruled out . Analogy : We use the Structure-Mapping Engine ( SME ) by Forbus et al . [ 2017 ] to match stored query cases . This component compares two structured , relational representations and creates one or more mappings between them . These mappings consist of ( 1 ) a set of correspondences , specifying what entities and predicates in one description go with entities and predicates in the other , ( 2 ) a score indicating the similarity between the representations , and ( 3 ) candidate inferences that project information from one description to the other . For more computationally efficient retrieval , our approach uses MAC/FAC [ Forbus et al. , 1995 ] to apply query cases learned during training for extraction of structured knowledge . 
BERT Language Model : The Bidirectional Encoder Representations from Transformers ( BERT ) [ Devlin et al. , 2018 ] is a language model trained on a large unlabeled corpus ( around 3.3B words from English Wikipedia and BooksCorpus ) that jointly conditions on both left and right context for all layers . We use BERT for word sense disambiguation and classification of extracted facts , as described in the following sections . The first input token is always a special token [ CLS ] which is used for sentence classification while sentence boundaries are marked with the special token [ SEP ] . The final hidden vector of the special [ CLS ] token is defined as TCLS ∈ RH , and the final hidden vector for the ith input token ti as Ti ∈ RH , where H is the hidden state size .",
"role": "user"
},
{
"content": "This paper presents an approach for augmenting a knowledge base with general knowledge facts by extracting them from text. The system parses new sentences, converts them into semantic relations, ties them to the existing knowledge graph to compute a connectivity with the current knowledge base, and uses BERT to evaluate plausibility. An experiment shows that the precision of the proposed system in extracting facts is quite high, higher than the baselines.",
"role": "assistant"
}
] |
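
The example above describes each knowledge-base fact as a nested tuple stored as a tree, with a relation at the root and (possibly nested) arguments below it. A minimal Python sketch of that representation follows; the class and field names are my own illustrative assumptions, not identifiers from NextKB or CycL.

```python
from dataclasses import dataclass, field
from typing import List, Union


@dataclass
class Fact:
    """A nested fact: a relation at the root, arguments as children.

    Arguments may be plain concept names (strings) or nested Facts,
    mirroring the tree-shaped tuples described in the example above.
    """
    relation: str
    args: List[Union[str, "Fact"]] = field(default_factory=list)

    def depth(self) -> int:
        """Tree depth: 1 for a flat fact, more when arguments are nested."""
        nested = [a.depth() for a in self.args if isinstance(a, Fact)]
        return 1 + (max(nested) if nested else 0)


# Hypothetical instances: a flat triple-like fact (as in (Spain, borders, France))
# and one with a nested argument, which is what goes beyond plain (h, r, t) triples.
flat = Fact("borders", ["Spain", "France"])
nested = Fact("capitalOf", [Fact("largestCityIn", ["Spain"]), "Spain"])
print(flat.depth(), nested.depth())  # 1 2
```

The nested arguments are exactly what makes the task harder than standard (h, r, t) relation extraction, as the text above notes: the system must predict all arguments of a relation, not just a missing head or tail.
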
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
One of earliest and pioneer work on the disambiguation of discourse connectives, Pitler and Nenkova<cite> [20]</cite> , showed that four syntactic features (see Section 3.4 for details about the features) and the connective itself can disambiguate discourse connectives with an accuracy of 95.04% within the PDTB [22] . | ["Background"] | science.multicite_intent_classification.2144 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nOne of earliest and pioneer work on the disambiguation of discourse connectives, Pitler and Nenkova<cite> [20]</cite> , showed that four syntactic features (see Section 3.4 for details about the features) and the connective itself can disambiguate discourse connectives with an accuracy of 95.04% within the PDTB [22] .",
"role": "user"
},
{
"content": "[\"Background\"]",
"role": "assistant"
}
] |
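
Since the prompt above marks the citation of interest with literal <cite> and </cite> tags, a small sketch of pulling out the cited span and its surrounding context can make the input format concrete. This is a plausible preprocessing step, not part of the dataset itself.

```python
import re

CITE_RE = re.compile(r"<cite>(.*?)</cite>", re.DOTALL)


def split_citation(text: str, window: int = 80):
    """Return (cited span, left context, right context) for the first
    <cite>...</cite> marker, or None if no marker is present."""
    m = CITE_RE.search(text)
    if m is None:
        return None
    left = text[max(0, m.start() - window):m.start()]
    right = text[m.end():m.end() + window]
    return m.group(1).strip(), left, right


example = ("One of earliest and pioneer work on the disambiguation of discourse "
           "connectives, Pitler and Nenkova<cite> [20]</cite> , showed that four "
           "syntactic features and the connective itself can disambiguate discourse "
           "connectives with an accuracy of 95.04% within the PDTB [22] .")
print(split_citation(example))  # ('[20]', '...Pitler and Nenkova', ' , showed that ...')
```
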
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
Both Mordatch & Abbeel (2018) and <cite>Kottur et al. (2017)</cite> find that limiting the vocabulary size so that there aren't too many more words than there are objects to refer to encourages compositionality, which follows earlier results in evolutionary linguistics (Nowak et al., 2000) . | ["Background"] | science.multicite_intent_classification.644 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nBoth Mordatch & Abbeel (2018) and <cite>Kottur et al. (2017)</cite> find that limiting the vocabulary size so that there aren't too many more words than there are objects to refer to encourages compositionality, which follows earlier results in evolutionary linguistics (Nowak et al., 2000) .",
"role": "user"
},
{
"content": "[\"Background\"]",
"role": "assistant"
}
] |
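
Each example above also carries a two-turn messages list: the task prompt as a user turn and the gold answer as an assistant turn. A short sketch of reading such records and checking that shape is given below; the field names follow the records shown above, while the file path and the one-record-per-line layout are assumptions.

```python
import json


def check_messages(record: dict) -> bool:
    """True if the record's messages are a user turn followed by an
    assistant turn, each with string content, as in the examples above."""
    msgs = record.get("messages", [])
    return (len(msgs) == 2
            and msgs[0].get("role") == "user"
            and msgs[1].get("role") == "assistant"
            and all(isinstance(m.get("content"), str) for m in msgs))


# Placeholder path; assumes one JSON record per line.
with open("examples.jsonl", encoding="utf-8") as fh:
    records = [json.loads(line) for line in fh]

bad = [r.get("id") for r in records if not check_messages(r)]
print(f"{len(records) - len(bad)} well-formed, {len(bad)} malformed: {bad}")
```
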
science.biored_ner | Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"].
Ensure the output is strictly the JSON object.
Source Text:
Polymorphisms in the FOXP3 gene in Han Chinese psoriasis patients. BACKGROUND: Psoriasis is a common dermatological disorder, in which autoimmunity plays an important role. CD4(+)CD25(+) regulatory T cells (T-regs) have been suggested to be involved in the pathogenesis of some autoimmune diseases. T-regs express the fork head/winged helix transcription factor, FOXP3, which appears to be of key importance in the development and function of T-regs. Studies have found that single-nucleotide polymorphisms (SNPs) in the FOXP3 gene contribute to susceptibility to some autoimmune disorders. However, information about FOXP3 gene in psoriasis is limited. OBJECTIVE: This study evaluated the association between FOXP3 gene SNPs and susceptibility to psoriasis in a Han Chinese population. METHODS: In a hospital-based case-control study, 524 patients with psoriasis and 549 psoriasis-free controls were recruited according to age and gender. We investigated four SNPs in the FOXP3 gene (-6054, deletion/ATT; -3279, A/C; -924, A/G; IVS9+459, A/G) in psoriatic patients, and assessed allele and genotype frequencies in psoriatic patients (237 females, 287 males) and normal controls (272 females, 277 males). The polymorphisms were genotyped using the PCR sequence-specific primer (PCR-SSP) technique and PCR-restriction fragment length polymorphism (RFLP) analysis. RESULTS: We found that increased risk of psoriasis was associated with the FOXP3 -3279 AC genotype (adjusted OR, 1.32; 95% CI, 1.01-1.74) and the combined AC+AA genotype (adjusted OR, 1.38; 95% CI, 1.07-1.78), compared with the -3279 CC genotype. We also found that an increased risk of psoriasis was associated with the FOXP3 IVS9+459 GG genotype (adjusted OR, 2.24; 95% CI, 1.41-3.58). However, the combined GA+GG genotype showed no such tendency (adjusted OR=1.28; 95% CI, 1.00-1.64), compared with the IVS9+459 AA genotype. There was no evidence of an increased risk associated with the FOXP3-6054 deletion/ATT or FOXP3-924 A/G genotype. In combined genotype analyses, the FOXP3-3279 AC+AA genotype was more obviously associated in males (adjusted OR=1.60, 95% CI=1.11-2.31) and severe psoriasis patients (PASI score >20; adjusted OR=1.97, 95% CI=1.41-2.75). Meanwhile, the FOXP3 IVS9+459 GA+GG genotype was also associated with severe psoriasis patients (adjusted OR=1.69, 95% CI=1.21-2.36). CONCLUSIONS: FOXP3 polymorphisms appear to contribute to the risk of psoriasis in a Han Chinese population. Larger studies are needed to confirm these findings. | {"CellLine": [], "Chemical": [], "Disease": ["psoriatic", "autoimmune disorders", "Psoriasis", "psoriasis", "dermatological disorder", "autoimmune diseases"], "Gene": ["CD25", "CD4", "FOXP3"], "Species": ["patients"], "Variant": ["-924 A/G", "IVS9+459 GA+GG", "-6054 deletion/ATT", "-3279 CC", "-3279 AC+AA", "-924, A/G", "-6054, deletion/ATT", "-3279 AC", "IVS9+459 GG", "IVS9+459 AA", "IVS9+459, A/G", "-3279, A/C"]} | science.biored_ner.42 | [
{
"content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nPolymorphisms in the FOXP3 gene in Han Chinese psoriasis patients. BACKGROUND: Psoriasis is a common dermatological disorder, in which autoimmunity plays an important role. CD4(+)CD25(+) regulatory T cells (T-regs) have been suggested to be involved in the pathogenesis of some autoimmune diseases. T-regs express the fork head/winged helix transcription factor, FOXP3, which appears to be of key importance in the development and function of T-regs. Studies have found that single-nucleotide polymorphisms (SNPs) in the FOXP3 gene contribute to susceptibility to some autoimmune disorders. However, information about FOXP3 gene in psoriasis is limited. OBJECTIVE: This study evaluated the association between FOXP3 gene SNPs and susceptibility to psoriasis in a Han Chinese population. METHODS: In a hospital-based case-control study, 524 patients with psoriasis and 549 psoriasis-free controls were recruited according to age and gender. We investigated four SNPs in the FOXP3 gene (-6054, deletion/ATT; -3279, A/C; -924, A/G; IVS9+459, A/G) in psoriatic patients, and assessed allele and genotype frequencies in psoriatic patients (237 females, 287 males) and normal controls (272 females, 277 males). The polymorphisms were genotyped using the PCR sequence-specific primer (PCR-SSP) technique and PCR-restriction fragment length polymorphism (RFLP) analysis. RESULTS: We found that increased risk of psoriasis was associated with the FOXP3 -3279 AC genotype (adjusted OR, 1.32; 95% CI, 1.01-1.74) and the combined AC+AA genotype (adjusted OR, 1.38; 95% CI, 1.07-1.78), compared with the -3279 CC genotype. We also found that an increased risk of psoriasis was associated with the FOXP3 IVS9+459 GG genotype (adjusted OR, 2.24; 95% CI, 1.41-3.58). However, the combined GA+GG genotype showed no such tendency (adjusted OR=1.28; 95% CI, 1.00-1.64), compared with the IVS9+459 AA genotype. There was no evidence of an increased risk associated with the FOXP3-6054 deletion/ATT or FOXP3-924 A/G genotype. In combined genotype analyses, the FOXP3-3279 AC+AA genotype was more obviously associated in males (adjusted OR=1.60, 95% CI=1.11-2.31) and severe psoriasis patients (PASI score >20; adjusted OR=1.97, 95% CI=1.41-2.75). Meanwhile, the FOXP3 IVS9+459 GA+GG genotype was also associated with severe psoriasis patients (adjusted OR=1.69, 95% CI=1.21-2.36). CONCLUSIONS: FOXP3 polymorphisms appear to contribute to the risk of psoriasis in a Han Chinese population. Larger studies are needed to confirm these findings.",
"role": "user"
},
{
"content": "{\"CellLine\": [], \"Chemical\": [], \"Disease\": [\"psoriatic\", \"autoimmune disorders\", \"Psoriasis\", \"psoriasis\", \"dermatological disorder\", \"autoimmune diseases\"], \"Gene\": [\"CD25\", \"CD4\", \"FOXP3\"], \"Species\": [\"patients\"], \"Variant\": [\"-924 A/G\", \"IVS9+459 GA+GG\", \"-6054 deletion/ATT\", \"-3279 CC\", \"-3279 AC+AA\", \"-924, A/G\", \"-6054, deletion/ATT\", \"-3279 AC\", \"IVS9+459 GG\", \"IVS9+459 AA\", \"IVS9+459, A/G\", \"-3279, A/C\"]}",
"role": "assistant"
}
] |
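
The gold answer above is a JSON object keyed by the six entity categories named in the prompt, each mapping to a list of surface strings. A small validator for that shape is sketched below; it is written from the example shown here, so treat it as an illustration rather than the dataset's official checker.

```python
import json

CATEGORIES = ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"]


def validate_ner_output(raw: str) -> dict:
    """Parse a model's JSON output and enforce the category-keyed shape
    seen in the example above. Raises ValueError on any deviation."""
    obj = json.loads(raw)
    if set(obj) != set(CATEGORIES):
        raise ValueError(f"expected keys {CATEGORIES}, got {sorted(obj)}")
    for cat, mentions in obj.items():
        if not isinstance(mentions, list) or not all(isinstance(m, str) for m in mentions):
            raise ValueError(f"{cat} must map to a list of strings")
    return obj


# Abbreviated version of the gold answer above, used here as a smoke test.
gold = ('{"CellLine": [], "Chemical": [], "Disease": ["psoriasis"], '
        '"Gene": ["FOXP3"], "Species": ["patients"], "Variant": ["-3279 AC"]}')
print(validate_ner_output(gold)["Gene"])  # ['FOXP3']
```
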
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT: Visually impaired children are challenged everyday in their everyday skills. Oral hygiene practices among visually impaired children require a special approach with time and patience. The purpose of this study was to evaluate the effect of oral health education and fluoridated dentifrices on the oral health status of visually impaired children.
ABSTRACT.MATERIALS AND METHODS:: Fifty visually impaired children between 8 and 12 years of age formed the study group. Oral health education and motivation was done with the help of Braille. Modified Bass method of brushing was taught to the children and the required dental treatment was done. Subjects were randomly divided into two equal groups fluoridated and non-fluoridated. Oral hygiene index -simplified, DMFT, deft index, and Streptococcus mutans and Lactobacillus count were assessed at baseline, immediately after the treatment and at 3, 6, and 12 month intervals. The oral health awareness was assessed using a questionnaire at the beginning and end of the study.
ABSTRACT.RESULTS:: At baseline, the mean OHI-S, DMFT and deft scores were 2.72, 0.47, and 0.51 respectively. At the end of 12 months there was a significant decrease in OHI-S scores in the fluoridated group. No significant difference was seen in DMFT and deft between the fluoridated and non-fluoridated groups at the different time interval. After the oral health education and comprehensive treatment there was a reduction in Streptococcus mutans and Lactobacillus counts in both groups; however, at the end of 3, 6, 12 months there was a significant decrease in fluoridated group as compared to the non-fluoridated. The oral health awareness increased significantly at the end of the study.
ABSTRACT.CONCLUSION:: The oral health education and motivation formulated for the visually impaired children was effective in improving their oral health status. Fluoridated dentifrices decreased the Streptococcus mutans and Lactobacillus counts and improved the oral hygiene status.
Extracted Data: | [["fluoridated dentifrices", "non-fluoridated dentifrices", "mean DMFT and deft", "no significant difference", "No significant difference was seen in DMFT and deft between the fluoridated and non-fluoridated groups"]] | science.evidence_inference.245 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT: Visually impaired children are challenged everyday in their everyday skills. Oral hygiene practices among visually impaired children require a special approach with time and patience. The purpose of this study was to evaluate the effect of oral health education and fluoridated dentifrices on the oral health status of visually impaired children.\n\nABSTRACT.MATERIALS AND METHODS:: Fifty visually impaired children between 8 and 12 years of age formed the study group. Oral health education and motivation was done with the help of Braille. Modified Bass method of brushing was taught to the children and the required dental treatment was done. Subjects were randomly divided into two equal groups fluoridated and non-fluoridated. Oral hygiene index -simplified, DMFT, deft index, and Streptococcus mutans and Lactobacillus count were assessed at baseline, immediately after the treatment and at 3, 6, and 12 month intervals. The oral health awareness was assessed using a questionnaire at the beginning and end of the study.\n\nABSTRACT.RESULTS:: At baseline, the mean OHI-S, DMFT and deft scores were 2.72, 0.47, and 0.51 respectively. At the end of 12 months there was a significant decrease in OHI-S scores in the fluoridated group. No significant difference was seen in DMFT and deft between the fluoridated and non-fluoridated groups at the different time interval. After the oral health education and comprehensive treatment there was a reduction in Streptococcus mutans and Lactobacillus counts in both groups; however, at the end of 3, 6, 12 months there was a significant decrease in fluoridated group as compared to the non-fluoridated. The oral health awareness increased significantly at the end of the study.\n\nABSTRACT.CONCLUSION:: The oral health education and motivation formulated for the visually impaired children was effective in improving their oral health status. Fluoridated dentifrices decreased the Streptococcus mutans and Lactobacillus counts and improved the oral hygiene status.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"fluoridated dentifrices\", \"non-fluoridated dentifrices\", \"mean DMFT and deft\", \"no significant difference\", \"No significant difference was seen in DMFT and deft between the fluoridated and non-fluoridated groups\"]]",
"role": "assistant"
}
] |
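
The extraction format above is a JSON array of five-element entries in a fixed order, with the impact nominally restricted to three labels and a null allowed for the control. Note that the gold entry in this row actually uses "no significant difference", which is not one of the three labels the prompt lists, so a strict checker would flag it. The sketch below parses such output into typed records and collects off-list labels instead of rejecting them; the class name is my own, not from the dataset.

```python
import json
from dataclasses import dataclass
from typing import List, Optional, Tuple

ALLOWED_IMPACTS = {"increased", "decreased", "no change"}


@dataclass
class Finding:
    treatment: str
    control: Optional[str]   # null in the JSON when no comparator is reported
    result: str
    impact: str
    justification: str


def parse_findings(raw: str) -> Tuple[List[Finding], List[str]]:
    """Parse the five-element JSON array format into Finding records,
    collecting (rather than rejecting) impact labels outside the
    three values the prompt lists."""
    findings, off_label = [], []
    for entry in json.loads(raw):
        f = Finding(*entry)          # relies on the fixed element order
        if f.impact not in ALLOWED_IMPACTS:
            off_label.append(f.impact)
        findings.append(f)
    return findings, off_label


gold = ('[["fluoridated dentifrices", "non-fluoridated dentifrices", '
        '"mean DMFT and deft", "no significant difference", '
        '"No significant difference was seen in DMFT and deft between the '
        'fluoridated and non-fluoridated groups"]]')
records, off_label = parse_findings(gold)
print(off_label)  # ['no significant difference'] -- outside the three listed labels
```
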
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
Insulin decreases risk of severe kidney failure.
Research Abstract:
OBJECTIVE To assess the risks of amputation, blindness, severe kidney failure, hyperglycaemia, and hypoglycaemia in patients with type 2 diabetes associated with prescribed diabetes drugs, particularly newer agents including gliptins or glitazones (thiazolidinediones).
DESIGN Open cohort study in primary care.
SETTING 1243 practices contributing data to the QResearch database in England.
PARTICIPANTS 469,688 patients with type 2 diabetes aged 25-84 years between 1 April 2007 and 31 January 2015.
EXPOSURES Hypoglycaemic agents (glitazones, gliptins, metformin, sulphonylureas, insulin, and other) alone and in combination.
MAIN OUTCOME MEASURES First recorded diagnoses of amputation, blindness, severe kidney failure, hyperglycaemia, and hypoglycaemia recorded on patients' primary care, mortality, or hospital records.
Cox models estimated hazard ratios for diabetes treatments adjusting for potential confounders.
RESULTS 21,308 (4.5%) and 32,533 (6.9%) patients received prescriptions for glitazones and gliptins during follow-up, respectively.
Compared with non-use, glitazones were associated with a decreased risk of blindness (adjusted hazard ratio 0.71, 95% confidence interval 0.57 to 0.89; rate 14.4 per 10,000 person years of exposure) and an increased risk of hypoglycaemia (1.22, 1.10 to 1.37; 65.1); gliptins were associated with a decreased risk of hypoglycaemia (0.86, 0.77 to 0.96; 45.8).
Although the numbers of patients prescribed gliptin monotherapy or glitazones monotherapy were relatively low, there were significantly increased risks of severe kidney failure compared with metformin monotherapy (adjusted hazard ratio 2.55, 95% confidence interval 1.13 to 5.74).
We found significantly lower risks of hyperglycaemia among patients prescribed dual therapy involving metformin with either gliptins (0.78, 0.62 to 0.97) or glitazones (0.60, 0.45 to 0.80) compared with metformin monotherapy.
Patients prescribed triple therapy with metformin, sulphonylureas, and either gliptins (adjusted hazard ratio 5.07, 95% confidence interval 4.28 to 6.00) or glitazones (6.32, 5.35 to 7.45) had significantly higher risks of hypoglycaemia than those prescribed metformin monotherapy, but these risks were similar to those involving dual therapy with metformin and sulphonylureas (6.03, 5.47 to 6.63).
Patients prescribed triple therapy with metformin, sulphonylureas, and glitazones had a significantly reduced risk of blindness compared with metformin monotherapy (0.67, 0.48 to 0.94).
CONCLUSIONS We have found lower risks of hyperglycaemia among patients prescribed dual therapy involving metformin with either gliptins or glitazones compared with metformin alone.
Compared with metformin monotherapy, triple therapy with metformin, sulphonylureas, and either gliptins or glitazones was associated with an increased risk of hypoglycaemia, which was similar to the risk for dual therapy with metformin and sulphonylureas.
Compared with metformin monotherapy, triple therapy with metformin, sulphonylureas, and glitazones was associated with a reduced risk of blindness.
These results, while subject to residual confounding, could have implications for the prescribing of hypoglycaemic drugs. | {"verdict": "NEI", "evidence": []} | science.scifact_entailment.411 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nInsulin decreases risk of severe kidney failure.\n\nResearch Abstract:\nOBJECTIVE To assess the risks of amputation, blindness, severe kidney failure, hyperglycaemia, and hypoglycaemia in patients with type 2 diabetes associated with prescribed diabetes drugs, particularly newer agents including gliptins or glitazones (thiazolidinediones).\nDESIGN Open cohort study in primary care.\nSETTING 1243 practices contributing data to the QResearch database in England.\nPARTICIPANTS 469,688 patients with type 2 diabetes aged 25-84 years between 1 April 2007 and 31 January 2015.\nEXPOSURES Hypoglycaemic agents (glitazones, gliptins, metformin, sulphonylureas, insulin, and other) alone and in combination.\nMAIN OUTCOME MEASURES First recorded diagnoses of amputation, blindness, severe kidney failure, hyperglycaemia, and hypoglycaemia recorded on patients' primary care, mortality, or hospital records.\nCox models estimated hazard ratios for diabetes treatments adjusting for potential confounders.\nRESULTS 21,308 (4.5%) and 32,533 (6.9%) patients received prescriptions for glitazones and gliptins during follow-up, respectively.\nCompared with non-use, glitazones were associated with a decreased risk of blindness (adjusted hazard ratio 0.71, 95% confidence interval 0.57 to 0.89; rate 14.4 per 10,000 person years of exposure) and an increased risk of hypoglycaemia (1.22, 1.10 to 1.37; 65.1); gliptins were associated with a decreased risk of hypoglycaemia (0.86, 0.77 to 0.96; 45.8).\nAlthough the numbers of patients prescribed gliptin monotherapy or glitazones monotherapy were relatively low, there were significantly increased risks of severe kidney failure compared with metformin monotherapy (adjusted hazard ratio 2.55, 95% confidence interval 1.13 to 5.74).\nWe found significantly lower risks of hyperglycaemia among patients prescribed dual therapy involving metformin with either gliptins (0.78, 0.62 to 0.97) or glitazones (0.60, 0.45 to 0.80) compared with metformin monotherapy.\nPatients prescribed triple therapy with metformin, sulphonylureas, and either gliptins (adjusted hazard ratio 5.07, 95% confidence interval 4.28 to 6.00) or glitazones (6.32, 5.35 to 7.45) had significantly higher risks of hypoglycaemia than those prescribed metformin monotherapy, but these risks were similar to those involving dual therapy with metformin and sulphonylureas (6.03, 5.47 to 6.63).\nPatients prescribed triple therapy with metformin, sulphonylureas, and glitazones had a significantly reduced risk of blindness compared with metformin monotherapy (0.67, 0.48 to 0.94).\nCONCLUSIONS We have found lower risks of hyperglycaemia among patients prescribed dual therapy involving metformin with either gliptins or glitazones compared with metformin alone.\nCompared with metformin monotherapy, triple therapy with metformin, sulphonylureas, and either gliptins or glitazones was associated with an increased risk of hypoglycaemia, which was similar to the risk for dual therapy with metformin and sulphonylureas.\nCompared with metformin monotherapy, triple therapy with metformin, sulphonylureas, and glitazones was associated with a reduced risk of blindness.\nThese results, while subject to residual confounding, could have implications for the prescribing of hypoglycaemic drugs.",
"role": "user"
},
{
"content": "{\"verdict\": \"NEI\", \"evidence\": []}",
"role": "assistant"
}
] |
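
The fact-checking example above answers with a JSON object holding a verdict and a list of evidence sentences, empty here because the verdict is NEI. The sketch below is a container for that response; the three-way verdict set {SUPPORT, CONTRADICT, NEI} is my assumption, since only NEI is visible in this row, and the "NEI carries no evidence" check simply mirrors the pattern in the example.

```python
import json
from dataclasses import dataclass, field
from typing import List

# Assumed verdict inventory; only "NEI" appears in the example above.
VERDICTS = {"SUPPORT", "CONTRADICT", "NEI"}


@dataclass
class FactCheck:
    verdict: str
    evidence: List[str] = field(default_factory=list)

    @classmethod
    def from_json(cls, raw: str) -> "FactCheck":
        obj = json.loads(raw)
        fc = cls(verdict=obj["verdict"], evidence=list(obj.get("evidence", [])))
        if fc.verdict not in VERDICTS:
            raise ValueError(f"unknown verdict: {fc.verdict!r}")
        if fc.verdict == "NEI" and fc.evidence:
            # Mirrors the example above, where an NEI verdict comes with empty evidence.
            raise ValueError("an NEI verdict should not cite evidence")
        return fc


print(FactCheck.from_json('{"verdict": "NEI", "evidence": []}'))
```
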
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
A Closer Look at Deep Learning Heuristics: Learning rate restarts, Warmup and Distillation
Main Content:
1 INTRODUCTION . The introduction of heuristics such as normalization layers ( Ioffe & Szegedy , 2015 ; Ba et al. , 2016 ) , residual connections ( He et al. , 2016 ) , and learning rate strategies ( Loshchilov & Hutter , 2016 ; Goyal et al. , 2017 ; Smith , 2017 ) have greatly accelerated progress in Deep Learning . Many of these ingredients are now commonplace in modern architectures , and some of them have also been buttressed with theoretical guarantees ( Balduzzi et al. , 2017 ; Poggio & Liao , 2017 ; Hardt & Ma , 2016 ) . However , despite their simplicity and efficacy , why some of these heuristics work is still relatively unknown . Existing attempts at explaining these strategies empirically have been limited to intuitive explanations and the use of tools such as spectrum analysis ( Sagun et al. , 2017 ) , linear interpolation between two models and low-dimensional visualizations ( Li et al. , 2017 ) of the loss surface . In our work , we instead use recent tools built specifically for analyzing deep networks , viz. , mode connectivity ( Garipov et al. , 2018 ) and singular value canonical correlation analysis ( SVCCA ) ( Raghu et al. , 2017 ) . We investigate three strategies in detail : ( a ) cosine learning rate decay , ( b ) learning rate warmup , and ( c ) knowledge distillation , and list the summary of our contributions at the end of this section . Cosine annealing ( Loshchilov & Hutter , 2016 ) , also known as stochastic gradient descent with restarts ( SGDR ) , and more generally cyclical learning rate strategies ( Smith , 2017 ) , have been recently proposed to accelerate training of deep networks ( Coleman et al. , 2018 ) . The strategy involves reductions and restarts of learning rates over the course of training , and was motivated as means to escape spurious local minima . Experimental results have shown that SGDR often improves convergence both from the standpoint of iterations needed for convergence and the final objective . ∗Work performed while interning at Salesforce Research Learning rate warmup ( Goyal et al. , 2017 ) also constitutes an important ingredient in training deep networks , especially in the presence of large or dynamic batch sizes . It involves increasing the learning rate to a large value over a certain number of training iterations followed by decreasing the learning rate , which can be performed using step-decay , exponential decay or other such schemes . The strategy was proposed out of the need to induce stability in the initial phase of training with large learning rates ( due to large batch sizes ) . It has been employed in training of several architectures at scale including ResNets and Transformer networks ( Vaswani et al. , 2017 ) . Further , we investigate knowledge distillation ( KD ) ( Hinton et al. , 2015 ) . This strategy involves first training a ( teacher ) model on a typical loss function on the available data . Next , a different ( student ) model ( typically much smaller than the teacher model ) is trained , but instead of optimizing the loss function defined using hard data labels , this student model is trained to mimic the teacher model . It has been empirically found that a student network trained in this fashion significantly outperforms an identical network trained with the hard data labels . We defer a detailed discussion of the three heuristics , and existing explanations for their efficacy to sections 3 , 4 and 5 respectively . 
Finally , we briefly describe the tools we employ for analyzing the aforementioned heuristics . Mode connectivity ( MC ) is a recent observation that shows that , under circumstances , it is possible to connect any two local minima of deep networks via a piecewise-linear curve ( Garipov et al. , 2018 ; Draxler et al. , 2018 ) . This shows that local optima obtained through different means , and exhibiting different local and generalization properties , are connected . The authors propose an algorithm that locates such a curve . While not proposed as such , we employ this framework to better understand loss surfaces but begin our analysis in Section 2 by first establishing its robustness as a framework . Deep network analyses focusing on the weights of a network are inherently limited since there are several invariances in this , such as permutation and scaling . Recently , Raghu et al . ( 2017 ) propose using CCA along with some pre-processing steps to analyze the activations of networks , such that the resulting comparison is not dependent on permutations and scaling of neurons . They also prove the computational gains of using CCA over alternatives ( ( Li et al. , 2015 ) ) for representational analysis and employ it to better understand many phenomenon in deep learning . Contributions : • We use mode connectivity and CCA to improve understanding of cosine annealing , learning rate warmup and knowledge distillation . For mode connectivity , we also establish the robustness of the approach across changes in training choices for obtaining the modes . • We demonstrate that the reasons often quoted for the success of cosine annealing are not substantiated by our experiments , and that the iterates move over barriers after restarts but the explanation of escaping local minima might be an oversimplification . • We show that learning rate warmup primarily limits weight changes in the deeper layers and that freezing them achieves similar outcomes as warmup . • We show that the latent knowledge shared by the teacher in knowledge distillation is primarily disbursed in the deeper layers . 2 EMPIRICAL TOOLS . 2.1 MODE CONNECTIVITY . Garipov et al . ( 2018 ) introduce a framework , called mode connectivity , to obtain a low loss ( or high accuracy , in the case of classification ) curve of simple form , such as a piecewise linear curve , that connects optima ( modes of the loss function ) found independently . This observation suggests that points at the same loss function depth are connected , somewhat contrary to several empirical results claiming that minima are isolated or have barriers between them1 . Let wa ∈ RD and wb ∈ RD be two modes in the D-dimensional parameter space obtained by optimizing a given loss function L ( w ) ( like the cross-entropy loss ) . We represent a curve connecting 1Draxler et al . ( 2018 ) independently report the same observation for neural network loss landscapes , and claim that this is suggestive of the resilience of neural networks to perturbations in model parameters . wa and wb by φθ ( t ) : [ 0 , 1 ] → RD , such that φθ ( 0 ) = wa and φθ ( 1 ) = wb . To find a low loss path , we find the set of parameters θ ∈ RD that minimizes the following loss : ` ( θ ) = ∫ 1 0 L ( φθ ( t ) ) dt = Et∼U ( 0,1 ) L ( φθ ( t ) ) where U ( 0 , 1 ) is the uniform distribution in the interval [ 0 , 1 ] . To optimize ` ( θ ) for θ , we first need to chose a parametric form for φθ ( t ) . One of the forms proposed by Garipov et al . 
( 2018 ) is a polygonal chain with a single bend at θ as follows φθ ( t ) = { 2 ( tθ + ( 0.5− t ) wa ) , if 0 ≤ t ≤ 0.5 2 ( ( t− 0.5 ) wb + ( 1− t ) θ ) if 0.5 < t ≤ 1 To minimize ` ( θ ) , we sample t ∼ U [ 0 , 1 ] at each iteration and use ∇θL ( φθ ( t ) ) as an unbiased estimate for the true gradient∇θ ` ( θ ) to perform updates on θ , where θ is initialized with 12 ( wa+wb ) . 2.1.1 RESILIENCE OF MODE CONNECTIVITY . To demonstrate that the curve-finding approach works in practice , Garipov et al . ( 2018 ) use two optima found using different initializations but a common training scheme which we detail below . We explore the limits of this procedure by connecting optima obtained from different training strategies . Our goal of this investigation is to first establish the robustness of the framework in order to seamlessly use it as a tool for analysis . In particular , we experiment with different initializations , optimizers , data augmentation choices , and hyperparameter settings including regularization , training batch sizes , and learning rate schemes . We note in passing that while the framework was proposed to connect two points in the parameter space that are at equal depth in the loss landscape , it is well-defined to also connect points at different depths ; in this case , the path corresponds to one that minimizes the average loss along the curve . Conventional wisdom suggests that the different training schemes mentioned above will converge to regions in the parameter space that are vastly different from each other . Examples of this include size of minibatches used during training ( Keskar et al. , 2016 ) , choice of optimizer ( Heusel et al. , 2017 ; Wilson et al. , 2017 ) , initialization ( Goodfellow et al. , 2016 ) and choice of regularizer . Having a high accuracy connection between these pairs would seem counterintuitive . For obtaining the reference model ( named mode G ) , we train the VGG-16 model architecture ( Simonyan & Zisserman , 2014 ) using CIFAR-10 training data ( Krizhevsky et al. , 2014 ) for 200 epochs with SGD . We then build 6 variants of the reference mode G as follows : we obtain mode A using a training batch size of 4000 , mode B by using the Adam optimizer instead of SGD , mode C with a linearly decaying learning rate instead of the step decay used in mode G , mode D using a smaller weight decay of 5 × 10−6 , mode E by increasing the variance of the initialization distribution to 3 × √ 2/n and mode F using no data augmentation . Note that for the set of modes { A , B , C , D , E , F } , all the other hyper-parameters and settings except the ones mentioned above are kept same as that for mode G. We use the mode connectivity algorithm on each of the 6 pairs of modes including G and another mode , resulting in curves GA , GB , GC , GD , GE , and GF . Figure 1 shows the validation accuracy for models on each of the 6 connecting curves during the 20th , 40th , 60th and 80th epochs of the mode connectivity training procedure and also for models on the line segment joining the two endpoints ( corresponding to the initialization for θ at epoch 0 ) . As described in Section 2.1 , for a polychain curve GX ( connecting modes G and X using the curve described by θ ) , model parameters φθ ( t ) on the curve are given by pφθ ( t ) = 2 ( tpθ + ( 0.5 − t ) pG ) if 0 ≤ t ≤ 0.5 and pφθ ( t ) = 2 ( ( t− 0.5 ) pX + ( 1− t ) pθ ) if 0.5 < t ≤ 1 where pG , pθ and pX are parameters of the models G , θ , and X respectively . Thus φθ ( 0 ) = G and φθ ( 1 ) = X . 
In a few epochs of the curve training , for all 6 pairs , we can find a curve such that each point on it generalizes almost as well as models from the pair that is being connected . Note that by virtue of existence of these 6 curves , there exists a high accuracy connecting curve ( albeit with multiple bends ) for each of the ( 7 2 ) pairs of modes . We refer the reader to Appendix 7 for a t-SNE plot of the modes and their connections , and also for additional plots and details . Having established the high likelihood of the existence of these curves , we use this procedure along with interpolation of the loss surface between parameters at different epochs as tools to analyze the dynamics of SGD and SGDR . | In this paper, authors propose a set of control experiments in order to get a better understanding of different deep learning heuristics: stochastic gradient with restart (SGDR), warmup and distillation. Authors leverage the recently proposed mode connectivity (which fits a simple piecewise linear curve to obtain a low loss path that connect two points in parameter space) and CCA is a way to compute a meaningful correlation of the networks activations. All the experiments are done using a VGG-16 networks on CIFAR10. | science.mup_single_document_summarization.10613 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nA Closer Look at Deep Learning Heuristics: Learning rate restarts, Warmup and Distillation\n\nMain Content:\n1 INTRODUCTION . The introduction of heuristics such as normalization layers ( Ioffe & Szegedy , 2015 ; Ba et al. , 2016 ) , residual connections ( He et al. , 2016 ) , and learning rate strategies ( Loshchilov & Hutter , 2016 ; Goyal et al. , 2017 ; Smith , 2017 ) have greatly accelerated progress in Deep Learning . Many of these ingredients are now commonplace in modern architectures , and some of them have also been buttressed with theoretical guarantees ( Balduzzi et al. , 2017 ; Poggio & Liao , 2017 ; Hardt & Ma , 2016 ) . However , despite their simplicity and efficacy , why some of these heuristics work is still relatively unknown . Existing attempts at explaining these strategies empirically have been limited to intuitive explanations and the use of tools such as spectrum analysis ( Sagun et al. , 2017 ) , linear interpolation between two models and low-dimensional visualizations ( Li et al. , 2017 ) of the loss surface . In our work , we instead use recent tools built specifically for analyzing deep networks , viz. , mode connectivity ( Garipov et al. , 2018 ) and singular value canonical correlation analysis ( SVCCA ) ( Raghu et al. , 2017 ) . We investigate three strategies in detail : ( a ) cosine learning rate decay , ( b ) learning rate warmup , and ( c ) knowledge distillation , and list the summary of our contributions at the end of this section . Cosine annealing ( Loshchilov & Hutter , 2016 ) , also known as stochastic gradient descent with restarts ( SGDR ) , and more generally cyclical learning rate strategies ( Smith , 2017 ) , have been recently proposed to accelerate training of deep networks ( Coleman et al. , 2018 ) . The strategy involves reductions and restarts of learning rates over the course of training , and was motivated as means to escape spurious local minima . Experimental results have shown that SGDR often improves convergence both from the standpoint of iterations needed for convergence and the final objective . ∗Work performed while interning at Salesforce Research Learning rate warmup ( Goyal et al. , 2017 ) also constitutes an important ingredient in training deep networks , especially in the presence of large or dynamic batch sizes . It involves increasing the learning rate to a large value over a certain number of training iterations followed by decreasing the learning rate , which can be performed using step-decay , exponential decay or other such schemes . The strategy was proposed out of the need to induce stability in the initial phase of training with large learning rates ( due to large batch sizes ) . It has been employed in training of several architectures at scale including ResNets and Transformer networks ( Vaswani et al. , 2017 ) . Further , we investigate knowledge distillation ( KD ) ( Hinton et al. , 2015 ) . This strategy involves first training a ( teacher ) model on a typical loss function on the available data . Next , a different ( student ) model ( typically much smaller than the teacher model ) is trained , but instead of optimizing the loss function defined using hard data labels , this student model is trained to mimic the teacher model . 
It has been empirically found that a student network trained in this fashion significantly outperforms an identical network trained with the hard data labels . We defer a detailed discussion of the three heuristics , and existing explanations for their efficacy to sections 3 , 4 and 5 respectively . Finally , we briefly describe the tools we employ for analyzing the aforementioned heuristics . Mode connectivity ( MC ) is a recent observation that shows that , under circumstances , it is possible to connect any two local minima of deep networks via a piecewise-linear curve ( Garipov et al. , 2018 ; Draxler et al. , 2018 ) . This shows that local optima obtained through different means , and exhibiting different local and generalization properties , are connected . The authors propose an algorithm that locates such a curve . While not proposed as such , we employ this framework to better understand loss surfaces but begin our analysis in Section 2 by first establishing its robustness as a framework . Deep network analyses focusing on the weights of a network are inherently limited since there are several invariances in this , such as permutation and scaling . Recently , Raghu et al . ( 2017 ) propose using CCA along with some pre-processing steps to analyze the activations of networks , such that the resulting comparison is not dependent on permutations and scaling of neurons . They also prove the computational gains of using CCA over alternatives ( ( Li et al. , 2015 ) ) for representational analysis and employ it to better understand many phenomenon in deep learning . Contributions : • We use mode connectivity and CCA to improve understanding of cosine annealing , learning rate warmup and knowledge distillation . For mode connectivity , we also establish the robustness of the approach across changes in training choices for obtaining the modes . • We demonstrate that the reasons often quoted for the success of cosine annealing are not substantiated by our experiments , and that the iterates move over barriers after restarts but the explanation of escaping local minima might be an oversimplification . • We show that learning rate warmup primarily limits weight changes in the deeper layers and that freezing them achieves similar outcomes as warmup . • We show that the latent knowledge shared by the teacher in knowledge distillation is primarily disbursed in the deeper layers . 2 EMPIRICAL TOOLS . 2.1 MODE CONNECTIVITY . Garipov et al . ( 2018 ) introduce a framework , called mode connectivity , to obtain a low loss ( or high accuracy , in the case of classification ) curve of simple form , such as a piecewise linear curve , that connects optima ( modes of the loss function ) found independently . This observation suggests that points at the same loss function depth are connected , somewhat contrary to several empirical results claiming that minima are isolated or have barriers between them1 . Let wa ∈ RD and wb ∈ RD be two modes in the D-dimensional parameter space obtained by optimizing a given loss function L ( w ) ( like the cross-entropy loss ) . We represent a curve connecting 1Draxler et al . ( 2018 ) independently report the same observation for neural network loss landscapes , and claim that this is suggestive of the resilience of neural networks to perturbations in model parameters . wa and wb by φθ ( t ) : [ 0 , 1 ] → RD , such that φθ ( 0 ) = wa and φθ ( 1 ) = wb . 
To find a low loss path , we find the set of parameters θ ∈ RD that minimizes the following loss : ` ( θ ) = ∫ 1 0 L ( φθ ( t ) ) dt = Et∼U ( 0,1 ) L ( φθ ( t ) ) where U ( 0 , 1 ) is the uniform distribution in the interval [ 0 , 1 ] . To optimize ` ( θ ) for θ , we first need to chose a parametric form for φθ ( t ) . One of the forms proposed by Garipov et al . ( 2018 ) is a polygonal chain with a single bend at θ as follows φθ ( t ) = { 2 ( tθ + ( 0.5− t ) wa ) , if 0 ≤ t ≤ 0.5 2 ( ( t− 0.5 ) wb + ( 1− t ) θ ) if 0.5 < t ≤ 1 To minimize ` ( θ ) , we sample t ∼ U [ 0 , 1 ] at each iteration and use ∇θL ( φθ ( t ) ) as an unbiased estimate for the true gradient∇θ ` ( θ ) to perform updates on θ , where θ is initialized with 12 ( wa+wb ) . 2.1.1 RESILIENCE OF MODE CONNECTIVITY . To demonstrate that the curve-finding approach works in practice , Garipov et al . ( 2018 ) use two optima found using different initializations but a common training scheme which we detail below . We explore the limits of this procedure by connecting optima obtained from different training strategies . Our goal of this investigation is to first establish the robustness of the framework in order to seamlessly use it as a tool for analysis . In particular , we experiment with different initializations , optimizers , data augmentation choices , and hyperparameter settings including regularization , training batch sizes , and learning rate schemes . We note in passing that while the framework was proposed to connect two points in the parameter space that are at equal depth in the loss landscape , it is well-defined to also connect points at different depths ; in this case , the path corresponds to one that minimizes the average loss along the curve . Conventional wisdom suggests that the different training schemes mentioned above will converge to regions in the parameter space that are vastly different from each other . Examples of this include size of minibatches used during training ( Keskar et al. , 2016 ) , choice of optimizer ( Heusel et al. , 2017 ; Wilson et al. , 2017 ) , initialization ( Goodfellow et al. , 2016 ) and choice of regularizer . Having a high accuracy connection between these pairs would seem counterintuitive . For obtaining the reference model ( named mode G ) , we train the VGG-16 model architecture ( Simonyan & Zisserman , 2014 ) using CIFAR-10 training data ( Krizhevsky et al. , 2014 ) for 200 epochs with SGD . We then build 6 variants of the reference mode G as follows : we obtain mode A using a training batch size of 4000 , mode B by using the Adam optimizer instead of SGD , mode C with a linearly decaying learning rate instead of the step decay used in mode G , mode D using a smaller weight decay of 5 × 10−6 , mode E by increasing the variance of the initialization distribution to 3 × √ 2/n and mode F using no data augmentation . Note that for the set of modes { A , B , C , D , E , F } , all the other hyper-parameters and settings except the ones mentioned above are kept same as that for mode G. We use the mode connectivity algorithm on each of the 6 pairs of modes including G and another mode , resulting in curves GA , GB , GC , GD , GE , and GF . Figure 1 shows the validation accuracy for models on each of the 6 connecting curves during the 20th , 40th , 60th and 80th epochs of the mode connectivity training procedure and also for models on the line segment joining the two endpoints ( corresponding to the initialization for θ at epoch 0 ) . 
As described in Section 2.1 , for a polychain curve GX ( connecting modes G and X using the curve described by θ ) , model parameters φθ ( t ) on the curve are given by pφθ ( t ) = 2 ( tpθ + ( 0.5 − t ) pG ) if 0 ≤ t ≤ 0.5 and pφθ ( t ) = 2 ( ( t− 0.5 ) pX + ( 1− t ) pθ ) if 0.5 < t ≤ 1 where pG , pθ and pX are parameters of the models G , θ , and X respectively . Thus φθ ( 0 ) = G and φθ ( 1 ) = X . In a few epochs of the curve training , for all 6 pairs , we can find a curve such that each point on it generalizes almost as well as models from the pair that is being connected . Note that by virtue of existence of these 6 curves , there exists a high accuracy connecting curve ( albeit with multiple bends ) for each of the ( 7 2 ) pairs of modes . We refer the reader to Appendix 7 for a t-SNE plot of the modes and their connections , and also for additional plots and details . Having established the high likelihood of the existence of these curves , we use this procedure along with interpolation of the loss surface between parameters at different epochs as tools to analyze the dynamics of SGD and SGDR .",
"role": "user"
},
{
"content": "In this paper, authors propose a set of control experiments in order to get a better understanding of different deep learning heuristics: stochastic gradient with restart (SGDR), warmup and distillation. Authors leverage the recently proposed mode connectivity (which fits a simple piecewise linear curve to obtain a low loss path that connect two points in parameter space) and CCA is a way to compute a meaningful correlation of the networks activations. All the experiments are done using a VGG-16 networks on CIFAR10.",
"role": "assistant"
}
] |
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
When we sacrifice parsing speed, our model achieved around 2.9 points higher F-score than 's model and around 0.8 points higher F-score than <cite>Ninomiya et al. (2006)</cite> <cite>'s model 3</cite>. | ["Differences"] | science.multicite_intent_classification.1778 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nWhen we sacrifice parsing speed, our model achieved around 2.9 points higher F-score than 's model and around 0.8 points higher F-score than <cite>Ninomiya et al. (2006)</cite> <cite>'s model 3</cite>.",
"role": "user"
},
{
"content": "[\"Differences\"]",
"role": "assistant"
}
] |
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
RAD52 is involved in break-induced DNA replication (BIR).
Research Abstract:
Alternative lengthening of telomeres (ALT) is a telomerase-independent telomere maintenance mechanism that occurs in a subset of cancers.
By analyzing telomerase-positive cells and their human TERC knockout-derived ALT human cell lines, we show that ALT cells harbor more fragile telomeres representing telomere replication problems.
ALT-associated replication defects trigger mitotic DNA synthesis (MiDAS) at telomeres in a RAD52-dependent, but RAD51-independent, manner.
Telomeric MiDAS is a conservative DNA synthesis process, potentially mediated by break-induced replication, similar to type II ALT survivors in Saccharomyces cerevisiae Replication stresses induced by ectopic oncogenic expression of cyclin E, G-quadruplexes, or R-loop formation facilitate the ALT pathway and lead to telomere clustering, a hallmark of ALT cancers.
The TIMELESS/TIPIN complex suppresses telomere clustering and telomeric MiDAS, whereas the SMC5/6 complex promotes them.
In summary, ALT cells exhibit more telomere replication defects that result in persistent DNA damage responses at telomeres, leading to the engagement of telomeric MiDAS (spontaneous mitotic telomere synthesis) that is triggered by DNA replication stress, a potential driver of genomic duplications in cancer. | {"verdict": "SUPPORT", "evidence": ["ALT-associated replication defects trigger mitotic DNA synthesis (MiDAS) at telomeres in a RAD52-dependent, but RAD51-independent, manner."]} | science.scifact_entailment.651 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nRAD52 is involved in break-induced DNA replication (BIR).\n\nResearch Abstract:\nAlternative lengthening of telomeres (ALT) is a telomerase-independent telomere maintenance mechanism that occurs in a subset of cancers.\nBy analyzing telomerase-positive cells and their human TERC knockout-derived ALT human cell lines, we show that ALT cells harbor more fragile telomeres representing telomere replication problems.\nALT-associated replication defects trigger mitotic DNA synthesis (MiDAS) at telomeres in a RAD52-dependent, but RAD51-independent, manner.\nTelomeric MiDAS is a conservative DNA synthesis process, potentially mediated by break-induced replication, similar to type II ALT survivors in Saccharomyces cerevisiae Replication stresses induced by ectopic oncogenic expression of cyclin E, G-quadruplexes, or R-loop formation facilitate the ALT pathway and lead to telomere clustering, a hallmark of ALT cancers.\nThe TIMELESS/TIPIN complex suppresses telomere clustering and telomeric MiDAS, whereas the SMC5/6 complex promotes them.\nIn summary, ALT cells exhibit more telomere replication defects that result in persistent DNA damage responses at telomeres, leading to the engagement of telomeric MiDAS (spontaneous mitotic telomere synthesis) that is triggered by DNA replication stress, a potential driver of genomic duplications in cancer.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"ALT-associated replication defects trigger mitotic DNA synthesis (MiDAS) at telomeres in a RAD52-dependent, but RAD51-independent, manner.\"]}",
"role": "assistant"
}
] |
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
We created a back-off model for the bracketing system of <cite>Ziering and Van der Plas (2014)</cite> and for AWDB that falls back to using χ 2 if no bracketing structure can be derived (system → χ 2 ). | ["Uses"] | science.multicite_intent_classification.2434 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nWe created a back-off model for the bracketing system of <cite>Ziering and Van der Plas (2014)</cite> and for AWDB that falls back to using χ 2 if no bracketing structure can be derived (system → χ 2 ).",
"role": "user"
},
{
"content": "[\"Uses\"]",
"role": "assistant"
}
] |
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT: Trigeminal neuralgia is the most common neuralgia. Its therapeutic approach is challenging as the first line treatment often does not help, or even causes intolerable side effects. The aim of our randomized double blind, placebo controlled, crossover study was to investigate in a prospective way the effect of lidocaine in patients with trigeminal neuralgia. Twenty patients met our inclusion criteria and completed the study. Each patient underwent four weekly sessions, two of which were with lidocaine (5 mgs/kg) and two with placebo infusions administered over 60 minutes. Intravenous lidocaine was superior regarding the reduction of the intensity of pain, the allodynia, and the hyperalgesia compared to placebo. Moreover, contrary to placebo, lidocaine managed to maintain its therapeutic results for the first 24 hours after intravenous infusion. Although, intravenous lidocaine is not a first line treatment, when first line medications fail to help, pain specialists may try it as an add-on treatment. This trial is registered with NCT01955967.
Extracted Data: | [["lidocaine (5 mgs/kg)", "placebo", "intensity of pain", "significantly decreased", "ntravenous lidocaine was superior regarding the reduction of the intensity of pain, the allodynia, and the hyperalgesia compared to placebo."]] | science.evidence_inference.49 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT: Trigeminal neuralgia is the most common neuralgia. Its therapeutic approach is challenging as the first line treatment often does not help, or even causes intolerable side effects. The aim of our randomized double blind, placebo controlled, crossover study was to investigate in a prospective way the effect of lidocaine in patients with trigeminal neuralgia. Twenty patients met our inclusion criteria and completed the study. Each patient underwent four weekly sessions, two of which were with lidocaine (5 mgs/kg) and two with placebo infusions administered over 60 minutes. Intravenous lidocaine was superior regarding the reduction of the intensity of pain, the allodynia, and the hyperalgesia compared to placebo. Moreover, contrary to placebo, lidocaine managed to maintain its therapeutic results for the first 24 hours after intravenous infusion. Although, intravenous lidocaine is not a first line treatment, when first line medications fail to help, pain specialists may try it as an add-on treatment. This trial is registered with NCT01955967.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"lidocaine (5 mgs/kg)\", \"placebo\", \"intensity of pain\", \"significantly decreased\", \"ntravenous lidocaine was superior regarding the reduction of the intensity of pain, the allodynia, and the hyperalgesia compared to placebo.\"]]",
"role": "assistant"
}
] |
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.INTRODUCTION: Adjunctive mealtime use of the amylin analog pramlintide improves postprandial hyperglycemia in patients with type 1 diabetes. This post hoc analysis of three randomized trials evaluated whether disease duration affected responses to pramlintide.
ABSTRACT.METHODS: Patients received mealtime pramlintide 30 or 60 μg (n = 714) or placebo (n = 537) as an adjunct to insulin and were stratified into tertiles by diabetes duration at baseline. Efficacy and safety end points were assessed at week 26 using analysis of covariance and logistic regression models.
ABSTRACT.RESULTS: Disease durations for tertiles 1, 2, and 3 were 6.7, 16.5, and 29.9 years, respectively. In all tertiles, pramlintide resulted in greater reductions in glycated hemoglobin (HbA1c) and weight than placebo, with greater weight reductions and insulin sparing in tertiles 2 and 3. Insulin dose and weight increased in the placebo group in all tertiles. Baseline HbA1c was a predictor of HbA1c lowering in both treatment groups (P < 0.0001); higher daily insulin predicted a smaller percent increase in insulin dose for placebo (P = 0.01); and higher body weight predicted greater weight loss in both pramlintide- and placebo-treated patients (P < 0.05). Event rates for severe hypoglycemia were similar for pramlintide and placebo and increased with longer duration of diabetes for both groups. Nausea with pramlintide increased with longer disease duration.
ABSTRACT.CONCLUSION: Mealtime pramlintide resulted in greater reductions in HbA1c than placebo, regardless of diabetes duration at baseline. Longer disease duration appeared to augment insulin sparing and weight loss with pramlintide, with a potential for increased incidence of hypoglycemia and nausea.
ABSTRACT.FUNDING: The design and conduct of the study were supported by Amylin Pharmaceuticals, San Diego, CA, USA.
ABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1007/s12325-016-0326-5) contains supplementary material, which is available to authorized users.
Extracted Data: | [["Pramlintide ", "Placebo", "Weight ", "significantly decreased", "pramlintide resulted in greater reductions in glycated hemoglobin (HbA1c) and weight than placebo"], ["Pramlintide ", "Placebo", "Severe hypoglycemia ", "no significant difference", "Event rates for severe hypoglycemia were similar for pramlintide and placebo"]] | science.evidence_inference.180 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.INTRODUCTION: Adjunctive mealtime use of the amylin analog pramlintide improves postprandial hyperglycemia in patients with type 1 diabetes. This post hoc analysis of three randomized trials evaluated whether disease duration affected responses to pramlintide.\n\nABSTRACT.METHODS: Patients received mealtime pramlintide 30 or 60 μg (n = 714) or placebo (n = 537) as an adjunct to insulin and were stratified into tertiles by diabetes duration at baseline. Efficacy and safety end points were assessed at week 26 using analysis of covariance and logistic regression models.\n\nABSTRACT.RESULTS: Disease durations for tertiles 1, 2, and 3 were 6.7, 16.5, and 29.9 years, respectively. In all tertiles, pramlintide resulted in greater reductions in glycated hemoglobin (HbA1c) and weight than placebo, with greater weight reductions and insulin sparing in tertiles 2 and 3. Insulin dose and weight increased in the placebo group in all tertiles. Baseline HbA1c was a predictor of HbA1c lowering in both treatment groups (P < 0.0001); higher daily insulin predicted a smaller percent increase in insulin dose for placebo (P = 0.01); and higher body weight predicted greater weight loss in both pramlintide- and placebo-treated patients (P < 0.05). Event rates for severe hypoglycemia were similar for pramlintide and placebo and increased with longer duration of diabetes for both groups. Nausea with pramlintide increased with longer disease duration.\n\nABSTRACT.CONCLUSION: Mealtime pramlintide resulted in greater reductions in HbA1c than placebo, regardless of diabetes duration at baseline. Longer disease duration appeared to augment insulin sparing and weight loss with pramlintide, with a potential for increased incidence of hypoglycemia and nausea.\n\nABSTRACT.FUNDING: The design and conduct of the study were supported by Amylin Pharmaceuticals, San Diego, CA, USA.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1007/s12325-016-0326-5) contains supplementary material, which is available to authorized users.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"Pramlintide \", \"Placebo\", \"Weight \", \"significantly decreased\", \"pramlintide resulted in greater reductions in glycated hemoglobin (HbA1c) and weight than placebo\"], [\"Pramlintide \", \"Placebo\", \"Severe hypoglycemia \", \"no significant difference\", \"Event rates for severe hypoglycemia were similar for pramlintide and placebo\"]]",
"role": "assistant"
}
] |
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
The data thus obtained is used for training a linear regression model (similarly to<cite> (Gardner et al., 2014)</cite> ), and tested on the provided test sets and evaluated using mean average precision (MAP). | ["Similarities"] | science.multicite_intent_classification.1654 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nThe data thus obtained is used for training a linear regression model (similarly to<cite> (Gardner et al., 2014)</cite> ), and tested on the provided test sets and evaluated using mean average precision (MAP).",
"role": "user"
},
{
"content": "[\"Similarities\"]",
"role": "assistant"
}
] |
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.BACKGROUND:: The challenge in providing analgesia for spine surgeries is to provide extended postoperative pain relief and simultaneously allow early neurological assessment and mobilization. Our study aimed to evaluate the analgesic efficacy of intravenous versus caudal dexamethasone in lumbosacral spine surgeries.
ABSTRACT.MATERIALS AND METHODS:: In this prospective double-blind study, a total of 96 patients undergoing lumbosacral spine surgery were randomized into three groups to receive 25 ml of preemptive caudal epidural injection of either injection ropivacaine 0.2% (Group A, n = 32), a 25 ml of injection ropivacaine 0.2%, and intravenous injection dexamethasone 8 mg (Group B, n = 32) or 25 ml mixture of injection ropivacaine 0.2% with injection dexamethasone 8 mg (Group C, n = 32) under general anesthesia. Visual analog scale (VAS), heart rate, blood pressures, blood sugar levels, and time to rescue analgesia were recorded at regular intervals for the first 24 h. Time to discharge was noted. Analysis of variance has been used to find the significance of study parameters between the groups of patients. Statistical software, namely, SAS 9.2 and SPSS 15.0, have been used for the analysis of the data.
ABSTRACT.RESULTS:: The mean VAS was significantly lower in the Group C for up to 24 h following the caudal block. No significant hemodynamic changes were noted in any of the groups. The intravenous dexamethasone group showed higher blood glucose levels at 24 h but was not clinically relevant
ABSTRACT.CONCLUSION:: These results suggest that injection dexamethasone is a safe adjunct to caudal ropivacaine in lumbosacral spine surgeries.
Extracted Data: | [["Ropivacaine 0.2% with injection dexamethasone 8 mg (Group C)", "Ropivacaine 0.2% alone (Group A) ", "Mean visual analog scale", "significantly decreased", "The mean VAS was significantly lower in the Group C for up to 24 h following the caudal block"]] | science.evidence_inference.7 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND:: The challenge in providing analgesia for spine surgeries is to provide extended postoperative pain relief and simultaneously allow early neurological assessment and mobilization. Our study aimed to evaluate the analgesic efficacy of intravenous versus caudal dexamethasone in lumbosacral spine surgeries.\n\nABSTRACT.MATERIALS AND METHODS:: In this prospective double-blind study, a total of 96 patients undergoing lumbosacral spine surgery were randomized into three groups to receive 25 ml of preemptive caudal epidural injection of either injection ropivacaine 0.2% (Group A, n = 32), a 25 ml of injection ropivacaine 0.2%, and intravenous injection dexamethasone 8 mg (Group B, n = 32) or 25 ml mixture of injection ropivacaine 0.2% with injection dexamethasone 8 mg (Group C, n = 32) under general anesthesia. Visual analog scale (VAS), heart rate, blood pressures, blood sugar levels, and time to rescue analgesia were recorded at regular intervals for the first 24 h. Time to discharge was noted. Analysis of variance has been used to find the significance of study parameters between the groups of patients. Statistical software, namely, SAS 9.2 and SPSS 15.0, have been used for the analysis of the data.\n\nABSTRACT.RESULTS:: The mean VAS was significantly lower in the Group C for up to 24 h following the caudal block. No significant hemodynamic changes were noted in any of the groups. The intravenous dexamethasone group showed higher blood glucose levels at 24 h but was not clinically relevant\n\nABSTRACT.CONCLUSION:: These results suggest that injection dexamethasone is a safe adjunct to caudal ropivacaine in lumbosacral spine surgeries.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"Ropivacaine 0.2% with injection dexamethasone 8 mg (Group C)\", \"Ropivacaine 0.2% alone (Group A) \", \"Mean visual analog scale\", \"significantly decreased\", \"The mean VAS was significantly lower in the Group C for up to 24 h following the caudal block\"]]",
"role": "assistant"
}
] |
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.BACKGROUND: Hydrotherapy is highly valued by people with rheumatoid arthritis yet few studies have compared the benefits of exercises in heated water against exercises on land. In particular, data on quality of life is rarely reported. This is especially important because patients treated with hydrotherapy often report an enhanced sense of well-being. We report a randomised controlled trial in which we compared the effects of hydrotherapy with exercises on land on overall response to treatment, physical function and quality of life in patients with rheumatoid arthritis.
ABSTRACT.METHODS: One hundred and fifteen patients with RA were randomised to receive a weekly 30-minute session of hydrotherapy or similar exercises on land for 6 weeks. Our primary outcome was a self-rated global impression of change – a measure of treatment effect on a 7-point scale ranging from 1(very much worse) to 7 (very much better) assessed immediately on completion of treatment. Secondary outcomes including EuroQol health related quality of life, EuroQol health status valuation, HAQ, 10 metre walk time and pain scores were collected at baseline, after treatment and 3 months later. Binary outcomes were analysed by Fisher's exact test and continuous variables by Wilcoxon or Mann-Whitney tests.
ABSTRACT.RESULTS: Baseline characteristics of the two groups were comparable. Significantly more patients treated with hydrotherapy (40/46, 87%) were much better or very much better than the patients treated with land exercise (19/40, 47.5%), p < 0.001 Fisher's exact test. Eleven patients allocated land exercise failed to complete treatment compared with 4 patients allocated hydrotherapy (p = 0.09). Sensitivity analyses confirmed an advantage for hydrotherapy if we assumed non-completers would all not have responded (response rates 70% versus 38%; p < 0.001) or if we assumed that non-completers would have had the same response as completers (response rates 82% versus 55% p = 0.002). Ten metre walk time improved after treatment in both cases (median pre-treatment time for both groups combined 10.9 seconds, post-treatment 9.1 s, and 3 months later 9.6 s). There was however no difference between treatment groups. Similarly there were no significant differences between groups in terms of changes to HAQ, EQ-5D utility score, EQ VAS and pain VAS.
ABSTRACT.CONCLUSION: Patients with RA treated with hydrotherapy are more likely to report feeling much better or very much better than those treated with land exercises immediately on completion of the treatment programme. This perceived benefit was not reflected by differences between groups in 10-metre walk times, functional scores, quality of life measures and pain scores.
Extracted Data: | [["receive a weekly 30-minute session of hydrotherapy for 6 weeks", "similar exercises on land for 6 weeks", "number of patients that feel much better", "significantly increased", "Significantly more patients treated with hydrotherapy (40/46, 87%) were much better or very much better than the patients treated with land exercise (19/40, 47.5%), p \u003c 0.001 Fisher\u0027s exact test."], ["receive a weekly 30-minute session of hydrotherapy for 6 weeks", "similar exercises on land for 6 weeks", "number of patient failed to complete treatment", "no significant difference", "Eleven patients allocated land exercise failed to complete treatment compared with 4 patients allocated hydrotherapy (p = 0.09)."], ["receive a weekly 30-minute session of hydrotherapy for 6 weeks", "similar exercises on land for 6 weeks", "Ten-metre walk time", "no significant difference", "Ten metre walk time improved after treatment in both cases (median pre-treatment time for both groups combined 10.9 seconds, post-treatment 9.1 s, and 3 months later 9.6 s). There was however no difference between treatment groups."], ["receive a weekly 30-minute session of hydrotherapy for 6 weeks", "similar exercises on land for 6 weeks", "HAQ, EQ-5D utility score, EQ VAS and pain VAS", "no significant difference", "there were no significant differences between groups in terms of changes to HAQ, EQ-5D utility score, EQ VAS and pain VAS."]] | science.evidence_inference.202 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: Hydrotherapy is highly valued by people with rheumatoid arthritis yet few studies have compared the benefits of exercises in heated water against exercises on land. In particular, data on quality of life is rarely reported. This is especially important because patients treated with hydrotherapy often report an enhanced sense of well-being. We report a randomised controlled trial in which we compared the effects of hydrotherapy with exercises on land on overall response to treatment, physical function and quality of life in patients with rheumatoid arthritis.\n\nABSTRACT.METHODS: One hundred and fifteen patients with RA were randomised to receive a weekly 30-minute session of hydrotherapy or similar exercises on land for 6 weeks. Our primary outcome was a self-rated global impression of change – a measure of treatment effect on a 7-point scale ranging from 1(very much worse) to 7 (very much better) assessed immediately on completion of treatment. Secondary outcomes including EuroQol health related quality of life, EuroQol health status valuation, HAQ, 10 metre walk time and pain scores were collected at baseline, after treatment and 3 months later. Binary outcomes were analysed by Fisher's exact test and continuous variables by Wilcoxon or Mann-Whitney tests.\n\nABSTRACT.RESULTS: Baseline characteristics of the two groups were comparable. Significantly more patients treated with hydrotherapy (40/46, 87%) were much better or very much better than the patients treated with land exercise (19/40, 47.5%), p < 0.001 Fisher's exact test. Eleven patients allocated land exercise failed to complete treatment compared with 4 patients allocated hydrotherapy (p = 0.09). Sensitivity analyses confirmed an advantage for hydrotherapy if we assumed non-completers would all not have responded (response rates 70% versus 38%; p < 0.001) or if we assumed that non-completers would have had the same response as completers (response rates 82% versus 55% p = 0.002). Ten metre walk time improved after treatment in both cases (median pre-treatment time for both groups combined 10.9 seconds, post-treatment 9.1 s, and 3 months later 9.6 s). There was however no difference between treatment groups. Similarly there were no significant differences between groups in terms of changes to HAQ, EQ-5D utility score, EQ VAS and pain VAS.\n\nABSTRACT.CONCLUSION: Patients with RA treated with hydrotherapy are more likely to report feeling much better or very much better than those treated with land exercises immediately on completion of the treatment programme. This perceived benefit was not reflected by differences between groups in 10-metre walk times, functional scores, quality of life measures and pain scores.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"receive a weekly 30-minute session of hydrotherapy for 6 weeks\", \"similar exercises on land for 6 weeks\", \"number of patients that feel much better\", \"significantly increased\", \"Significantly more patients treated with hydrotherapy (40/46, 87%) were much better or very much better than the patients treated with land exercise (19/40, 47.5%), p \\u003c 0.001 Fisher\\u0027s exact test.\"], [\"receive a weekly 30-minute session of hydrotherapy for 6 weeks\", \"similar exercises on land for 6 weeks\", \"number of patient failed to complete treatment\", \"no significant difference\", \"Eleven patients allocated land exercise failed to complete treatment compared with 4 patients allocated hydrotherapy (p = 0.09).\"], [\"receive a weekly 30-minute session of hydrotherapy for 6 weeks\", \"similar exercises on land for 6 weeks\", \"Ten-metre walk time\", \"no significant difference\", \"Ten metre walk time improved after treatment in both cases (median pre-treatment time for both groups combined 10.9 seconds, post-treatment 9.1 s, and 3 months later 9.6 s). There was however no difference between treatment groups.\"], [\"receive a weekly 30-minute session of hydrotherapy for 6 weeks\", \"similar exercises on land for 6 weeks\", \"HAQ, EQ-5D utility score, EQ VAS and pain VAS\", \"no significant difference\", \"there were no significant differences between groups in terms of changes to HAQ, EQ-5D utility score, EQ VAS and pain VAS.\"]]",
"role": "assistant"
}
] |
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
DFSSATTEN: Dynamic Fine-grained Structured Sparse Attention Mechanism
Main Content:
1 INTRODUCTION . Transformers ( Vaswani et al. , 2017 ) have achieved competitive performance across various domains like NLP ( Ott et al. , 2018 ) and Computer Vision ( Dosovitskiy et al. , 2021 ) . The key feature that sets them apart from traditional neural network architectures is the attention mechanism ( Vaswani et al. , 2017 ) , which allows the transformers to gather information from the embeddings of elements in the input sequence in an adaptive and learnable manner . Nevertheless , the high computation cost and memory footprint brought by the attention mechanism make it difficult to apply transformers to latency-sensitive tasks . Many efficient attention mechanisms ( Tay et al. , 2020b ; Zaheer et al. , 2020 ; Beltagy et al. , 2020 ; Tay et al. , 2020a ; Roy et al. , 2021 ; Kitaev et al. , 2020 ) have been proposed to address this issue . However , most of them drastically modify the original full attention mechanism and introduce a handful of hyper-parameters to tune . Therefore , they require tremendous engineering effort to deploy and optimize . Besides , they usually need to be trained from scratch instead of exploiting pretrained models like BERT ( Devlin et al. , 2019 ) . Some of them rely on fixed sparse patterns or extremely high sparsity to achieve wall-clock time speedup . Therefore , these methods usually require thousands of pretraining or fine-tuning steps on specific tasks and toilsome tuning of several hyper-parameters to reach good accuracy . Last but not least , previous methods usually introduce additional operators like top-k , sort that cause large overheads and offset their benefits at moderate sequence length . In this paper , we present DFSSATTEN , a simple and effective sparse attention mechanism that address the limitations mentioned above . DFSSATTEN dynamically prunes the full attention score matrix using 50 % fine-grained structured sparse patterns ( NVIDIA , 2020 ) . This pattern is GPU friendly and can leverage the new sparse tensor core on NVIDIA A100 GPU ( Mishra et al. , 2021 ) . Our DFSSATTEN offers several advantages over existing studies : • It requires minimal changes to the original full-attention with no hyper-parameters to tune . This makes it a drop-in replacement of the full attention that only requires to change a few lines of code . Moreover , it can directly exploit existing pretrained models like BERT ( Devlin et al. , 2019 ) and RoBERTa ( Liu et al. , 2020 ) . • We dynamically prune the attention scores based on their magnitude under only 50 % sparsity . This allows the pruned attention matrix to reserve the important entries , achieving on par model accuracy with full attention even without fine-tuning . • Our method introduces zero overhead on existing GPU hardware . As a result , we are able to achieve wall-clock time speedup and memory footprint reduction over the full attention in arbitrary sequence length . To conclude , our main contributions are summarized below : • We propose DFSSATTEN , a dynamic sparse attention mechanism that is a drop-in proxy of the full attention mechanism . Its effectiveness is justified by both empirical and theoretical evidence . • We present a dedicated CUDA kernel design to completely remove the pruning overhead . The pruning is implemented as an epilogue of the dense matrix multiplication which produces the attention score matrix . • We evaluate DFSSATTEN on tasks cross various domains and sequence lengths . It achieves 1.27 ∼ 1.89× speedup over the full attention with no accuracy loss . 
2 BACKGROUND AND MOTIVATION . We first introduce the preliminaries , notations , and background of our paper . 2.1 FULL ATTENTION MECHANISM . Given an input sequence X= ( x1 , .. , xn ) ∈ Rn×d , the full attention mechanism can be defined as O = Softmax ( QKT / √ d ) V , ( 1 ) where Q = XWq , K = XWk , and V = XWv are query , key , and value matrices . QKT forms a full-quadratic adjacency matrix , whose edge weights are the dot-product similarity between all the elements in the sequence . This adjacency matrix is standardized with 1/ √ d to keep the unit second moment and then normalized with softmax . At last , the row feature vectors in V are aggregated according to the normalized adjacency matrix by multiplying them together . In the rest of this paper , we denote A = Softmax ( QKT / √ d ) for simplicity . We refer QKT as the attention score matrix and A as the attention weight matrix . 2.2 EFFICIENT ATTENTION MECHANISM . The high computation cost and memory footprint in the full attention mechanism come from A , whose size grows quadratically with the sequence length n. To address this issue , various efficient attention mechanisms have been proposed ( Tay et al. , 2020b ) . Fixed Sparse Patterns . Zaheer et al . ( 2020 ) ; Beltagy et al . ( 2020 ) apply a set of fixed sparse attention patterns on A , like global attention and sliding window attention . These patterns are constructed from empirical observations and designed GPU-friendly to achieve wall-time speedup . However , as these patterns are designed empirically and fixed during inference , there is no guarantee that they can always capture the important entries in A or transfer easily across different tasks . Dynamic Sparse Patterns . Ham et al . ( 2021 ) dynamically generate fine-grained sparse attention patterns on A with low-cost binary hashing . However , this technique requires specialized hardware to achieve speedup , so it is not available on general-purpose hardware like GPU . Tay et al . ( 2020a ) ; Roy et al . ( 2021 ) ; Kitaev et al . ( 2020 ) apply various clustering methods and only compute the attention within each cluster . Although computing full attention in each cluster is more friendly to GPU compared with fine-grained sparsity , the clustering methods contain several GPU-unfriendly operators like top-k and sorting that offsets their benefits under moderate sequence length . Low Rank / Kernel . Wang et al . ( 2020 ) project A from n × n to n × k with linear projection . Choromanski et al . ( 2021 ) introduce the FAVOR+ which approximates the softmax with kernel method . This allows them to change the computation order and reduce the asymptotic complexity to linear . However , the low-rank projection and kernel construction also introduce considerable overhead . This makes these methods only effective under long sequence length . Besides , the previous studies drastically change the attention mechanisms , tens of thousands pretraining or finetuning steps are required to reach a comparable performance with the origin full attention mechanism . So they require tremendous engineering effort to deploy . 2.3 FINE-GRAINED STRUCTURED SPARSITY IN NVIDIA A100 GPU . NVIDIA introduces the fine-grained structured sparsity in the A100 GPU . As shown in Figure 1 , the dense input matrix is pruned with fine-grained structured pruning . If the data type is float , 1:2 sparsity is used which selects the larger one in two consecutive entries . 
If the data type is bfloat16 or float16 , the 2:4 sparsity is used which selects two larger ones among four consecutive elements . After the pruning , the result is compressed to nonzeros and metadata . The nonzeros contain the value of reserved data that is 50 % smaller than the original one . The metadata records the index of the nonzeros in the origins matrix . It takes 4 bit metadata to record the decision of each 1:2 or 2:4 selection . Therefore , the metadata is only 1/16 of the original dense matrix in terms of bits . This compressed sparse matrix can be multiplied with a dense matrix under the support of the sparse tensor core to achieve significant speedup . This fine-grained structured sparsity has been applied to the static weight matrices in various neural network models including transformer ( Mishra et al. , 2021 ) . It can effectively accelerate the feedforward part of the transformer up to 1.9× . However , to the best of our knowledge , no previous studies use it in the attention mechanism where the attention weight matrix is dynamically generated for each sequence . One plausible explanation is that during pruning , GPU must read the whole dense matrix to be pruned from the memory . Then , after selecting the elements to be reserved under the 1:2 and 2:4 pattern , it must also generate the metadata encoded in a special format such that the metadata can be used efficiently later . All these overheads will offset the benefit brought by the pruning if we do it on the fly . 3 DYNAMIC FINE-GRAINED STRUCTURED SPARSE ATTENTION MECHANISM . In this section , we first give an overview of our DFSSATTEN method . Then , we discuss the design considerations of exploring sparsity in attention and the choice of sparse granularity in our method for GPU-friendly implementation and effectiveness . Finally , we briefly introduce our GPU kernel design to remove pruning overhead . Our proposed DFSSATTEN mechanism is simple and effective , as illustrated in Figure 2 . Compared with the full-quadratic attention mechanism , our method dynamically prunes attention scores without incurring storage or computation overhead , while maintaining the effectiveness of attention . More importantly , our method can achieve practical speedups of attention on existing GPU hardware with customized CUDA kernels . Listing 1 shows all the modifications to be made to use DFSSATTEN . # Full attention mechanism def full_attention ( q , k , v ) : attn_weight = torch.bmm ( q , k.transpose ( 1 , 2 ) ) attn_weight = torch.nn.functional.softmax ( attn_weight , -1 ) return torch.bmm ( attn_weight , v ) # DFSS attention mechanism import dspattn def dfss_attention ( q , k , v ) : attn_weight , metadata = dspattn.bmm ( q , k ) attn_weight = torch.nn.functional.softmax ( attn_weight , -1 ) return dspattn.spmm ( attn_weight , metadata , v ) Listing 1 : Example of using DFSSATTEN . The “ dspattn ” is the package we developed . 3.1 DESIGN CONSIDERATIONS FOR EXPLOITING ATTENTION SPARSITY As illustrated in Figure 3 , the attention mechanism can be considered as three stages : QKT , Softmax , and AV . To design a sparse attention mechanism , the first decision to make is where should we induce pruning to sparsify the attention . 
If we start from 0 , all the three stages will be benefited from the sparsity given effective implementation on GPU : the dense matrix multiplication between Q and K will be replaced with the sampled dense-dense matrix multiplication ( SDDMM ) which only computes the entries identified by the sparse pattern . The Softmax only operates on the nonzero values in each row . The original dense matrix multiplication between A and V will be replaced with a sparse matrix-matrix multiplication ( SpMM ) which multiplies a sparse matrix with a dense matrix . However , as it is not possible to exactly know which entry in QKT has higher magnitude before computing QKT , starting from 0 usually requires some additional components to predict the location of important entries . Starting from 1 requires us to compute a dense matrix multiplication between Q and K. The benefit is that we can explicitly select important entries from QKT without prediction . As the softmax is a monotonically increasing function , starting from 2 does not offer any benefits over 1 but throws away the opportunity to accelerate Softmax . In this paper , we choose to start from 1 based on two considerations . First , replacing the dense matrix multiplication with SDDMM at QKT offers limited speedup even at high sparsity . Chen et al . ( 2021b ) show that it is difficult for SDDMM to achieve speedup over its dense counterpart under 80 % sparsity even with some structured design . Second , starting from 1 allows us to keep our design simple such that it does not introduce additional overhead or hyper-parameters to tune . | This paper aims at improving the computational efficiency of the attention mechanism by leveraging the specific sparse pattern supported by sparse tensor cores of NVIDIA A100, with a particular focus on delivering practical running speedup. To achieve this, the authors proposed DFSSATTEN, which shows both theoretical and empirical advantage in terms of performance and speedup compared to various baselines. In particular, DFSSATTEN yields 1.27~1.89x speedup over the vanilla attention network across different sequence lengths. | science.mup_single_document_summarization.4517 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nDFSSATTEN: Dynamic Fine-grained Structured Sparse Attention Mechanism\n\nMain Content:\n1 INTRODUCTION . Transformers ( Vaswani et al. , 2017 ) have achieved competitive performance across various domains like NLP ( Ott et al. , 2018 ) and Computer Vision ( Dosovitskiy et al. , 2021 ) . The key feature that sets them apart from traditional neural network architectures is the attention mechanism ( Vaswani et al. , 2017 ) , which allows the transformers to gather information from the embeddings of elements in the input sequence in an adaptive and learnable manner . Nevertheless , the high computation cost and memory footprint brought by the attention mechanism make it difficult to apply transformers to latency-sensitive tasks . Many efficient attention mechanisms ( Tay et al. , 2020b ; Zaheer et al. , 2020 ; Beltagy et al. , 2020 ; Tay et al. , 2020a ; Roy et al. , 2021 ; Kitaev et al. , 2020 ) have been proposed to address this issue . However , most of them drastically modify the original full attention mechanism and introduce a handful of hyper-parameters to tune . Therefore , they require tremendous engineering effort to deploy and optimize . Besides , they usually need to be trained from scratch instead of exploiting pretrained models like BERT ( Devlin et al. , 2019 ) . Some of them rely on fixed sparse patterns or extremely high sparsity to achieve wall-clock time speedup . Therefore , these methods usually require thousands of pretraining or fine-tuning steps on specific tasks and toilsome tuning of several hyper-parameters to reach good accuracy . Last but not least , previous methods usually introduce additional operators like top-k , sort that cause large overheads and offset their benefits at moderate sequence length . In this paper , we present DFSSATTEN , a simple and effective sparse attention mechanism that address the limitations mentioned above . DFSSATTEN dynamically prunes the full attention score matrix using 50 % fine-grained structured sparse patterns ( NVIDIA , 2020 ) . This pattern is GPU friendly and can leverage the new sparse tensor core on NVIDIA A100 GPU ( Mishra et al. , 2021 ) . Our DFSSATTEN offers several advantages over existing studies : • It requires minimal changes to the original full-attention with no hyper-parameters to tune . This makes it a drop-in replacement of the full attention that only requires to change a few lines of code . Moreover , it can directly exploit existing pretrained models like BERT ( Devlin et al. , 2019 ) and RoBERTa ( Liu et al. , 2020 ) . • We dynamically prune the attention scores based on their magnitude under only 50 % sparsity . This allows the pruned attention matrix to reserve the important entries , achieving on par model accuracy with full attention even without fine-tuning . • Our method introduces zero overhead on existing GPU hardware . As a result , we are able to achieve wall-clock time speedup and memory footprint reduction over the full attention in arbitrary sequence length . To conclude , our main contributions are summarized below : • We propose DFSSATTEN , a dynamic sparse attention mechanism that is a drop-in proxy of the full attention mechanism . Its effectiveness is justified by both empirical and theoretical evidence . • We present a dedicated CUDA kernel design to completely remove the pruning overhead . 
The pruning is implemented as an epilogue of the dense matrix multiplication which produces the attention score matrix . • We evaluate DFSSATTEN on tasks cross various domains and sequence lengths . It achieves 1.27 ∼ 1.89× speedup over the full attention with no accuracy loss . 2 BACKGROUND AND MOTIVATION . We first introduce the preliminaries , notations , and background of our paper . 2.1 FULL ATTENTION MECHANISM . Given an input sequence X= ( x1 , .. , xn ) ∈ Rn×d , the full attention mechanism can be defined as O = Softmax ( QKT / √ d ) V , ( 1 ) where Q = XWq , K = XWk , and V = XWv are query , key , and value matrices . QKT forms a full-quadratic adjacency matrix , whose edge weights are the dot-product similarity between all the elements in the sequence . This adjacency matrix is standardized with 1/ √ d to keep the unit second moment and then normalized with softmax . At last , the row feature vectors in V are aggregated according to the normalized adjacency matrix by multiplying them together . In the rest of this paper , we denote A = Softmax ( QKT / √ d ) for simplicity . We refer QKT as the attention score matrix and A as the attention weight matrix . 2.2 EFFICIENT ATTENTION MECHANISM . The high computation cost and memory footprint in the full attention mechanism come from A , whose size grows quadratically with the sequence length n. To address this issue , various efficient attention mechanisms have been proposed ( Tay et al. , 2020b ) . Fixed Sparse Patterns . Zaheer et al . ( 2020 ) ; Beltagy et al . ( 2020 ) apply a set of fixed sparse attention patterns on A , like global attention and sliding window attention . These patterns are constructed from empirical observations and designed GPU-friendly to achieve wall-time speedup . However , as these patterns are designed empirically and fixed during inference , there is no guarantee that they can always capture the important entries in A or transfer easily across different tasks . Dynamic Sparse Patterns . Ham et al . ( 2021 ) dynamically generate fine-grained sparse attention patterns on A with low-cost binary hashing . However , this technique requires specialized hardware to achieve speedup , so it is not available on general-purpose hardware like GPU . Tay et al . ( 2020a ) ; Roy et al . ( 2021 ) ; Kitaev et al . ( 2020 ) apply various clustering methods and only compute the attention within each cluster . Although computing full attention in each cluster is more friendly to GPU compared with fine-grained sparsity , the clustering methods contain several GPU-unfriendly operators like top-k and sorting that offsets their benefits under moderate sequence length . Low Rank / Kernel . Wang et al . ( 2020 ) project A from n × n to n × k with linear projection . Choromanski et al . ( 2021 ) introduce the FAVOR+ which approximates the softmax with kernel method . This allows them to change the computation order and reduce the asymptotic complexity to linear . However , the low-rank projection and kernel construction also introduce considerable overhead . This makes these methods only effective under long sequence length . Besides , the previous studies drastically change the attention mechanisms , tens of thousands pretraining or finetuning steps are required to reach a comparable performance with the origin full attention mechanism . So they require tremendous engineering effort to deploy . 2.3 FINE-GRAINED STRUCTURED SPARSITY IN NVIDIA A100 GPU . NVIDIA introduces the fine-grained structured sparsity in the A100 GPU . 
As shown in Figure 1 , the dense input matrix is pruned with fine-grained structured pruning . If the data type is float , 1:2 sparsity is used which selects the larger one in two consecutive entries . If the data type is bfloat16 or float16 , the 2:4 sparsity is used which selects two larger ones among four consecutive elements . After the pruning , the result is compressed to nonzeros and metadata . The nonzeros contain the value of reserved data that is 50 % smaller than the original one . The metadata records the index of the nonzeros in the origins matrix . It takes 4 bit metadata to record the decision of each 1:2 or 2:4 selection . Therefore , the metadata is only 1/16 of the original dense matrix in terms of bits . This compressed sparse matrix can be multiplied with a dense matrix under the support of the sparse tensor core to achieve significant speedup . This fine-grained structured sparsity has been applied to the static weight matrices in various neural network models including transformer ( Mishra et al. , 2021 ) . It can effectively accelerate the feedforward part of the transformer up to 1.9× . However , to the best of our knowledge , no previous studies use it in the attention mechanism where the attention weight matrix is dynamically generated for each sequence . One plausible explanation is that during pruning , GPU must read the whole dense matrix to be pruned from the memory . Then , after selecting the elements to be reserved under the 1:2 and 2:4 pattern , it must also generate the metadata encoded in a special format such that the metadata can be used efficiently later . All these overheads will offset the benefit brought by the pruning if we do it on the fly . 3 DYNAMIC FINE-GRAINED STRUCTURED SPARSE ATTENTION MECHANISM . In this section , we first give an overview of our DFSSATTEN method . Then , we discuss the design considerations of exploring sparsity in attention and the choice of sparse granularity in our method for GPU-friendly implementation and effectiveness . Finally , we briefly introduce our GPU kernel design to remove pruning overhead . Our proposed DFSSATTEN mechanism is simple and effective , as illustrated in Figure 2 . Compared with the full-quadratic attention mechanism , our method dynamically prunes attention scores without incurring storage or computation overhead , while maintaining the effectiveness of attention . More importantly , our method can achieve practical speedups of attention on existing GPU hardware with customized CUDA kernels . Listing 1 shows all the modifications to be made to use DFSSATTEN . # Full attention mechanism def full_attention ( q , k , v ) : attn_weight = torch.bmm ( q , k.transpose ( 1 , 2 ) ) attn_weight = torch.nn.functional.softmax ( attn_weight , -1 ) return torch.bmm ( attn_weight , v ) # DFSS attention mechanism import dspattn def dfss_attention ( q , k , v ) : attn_weight , metadata = dspattn.bmm ( q , k ) attn_weight = torch.nn.functional.softmax ( attn_weight , -1 ) return dspattn.spmm ( attn_weight , metadata , v ) Listing 1 : Example of using DFSSATTEN . The “ dspattn ” is the package we developed . 3.1 DESIGN CONSIDERATIONS FOR EXPLOITING ATTENTION SPARSITY As illustrated in Figure 3 , the attention mechanism can be considered as three stages : QKT , Softmax , and AV . To design a sparse attention mechanism , the first decision to make is where should we induce pruning to sparsify the attention . 
If we start from 0 , all the three stages will be benefited from the sparsity given effective implementation on GPU : the dense matrix multiplication between Q and K will be replaced with the sampled dense-dense matrix multiplication ( SDDMM ) which only computes the entries identified by the sparse pattern . The Softmax only operates on the nonzero values in each row . The original dense matrix multiplication between A and V will be replaced with a sparse matrix-matrix multiplication ( SpMM ) which multiplies a sparse matrix with a dense matrix . However , as it is not possible to exactly know which entry in QKT has higher magnitude before computing QKT , starting from 0 usually requires some additional components to predict the location of important entries . Starting from 1 requires us to compute a dense matrix multiplication between Q and K. The benefit is that we can explicitly select important entries from QKT without prediction . As the softmax is a monotonically increasing function , starting from 2 does not offer any benefits over 1 but throws away the opportunity to accelerate Softmax . In this paper , we choose to start from 1 based on two considerations . First , replacing the dense matrix multiplication with SDDMM at QKT offers limited speedup even at high sparsity . Chen et al . ( 2021b ) show that it is difficult for SDDMM to achieve speedup over its dense counterpart under 80 % sparsity even with some structured design . Second , starting from 1 allows us to keep our design simple such that it does not introduce additional overhead or hyper-parameters to tune .",
"role": "user"
},
{
"content": "This paper aims at improving the computational efficiency of the attention mechanism by leveraging the specific sparse pattern supported by sparse tensor cores of NVIDIA A100, with a particular focus on delivering practical running speedup. To achieve this, the authors proposed DFSSATTEN, which shows both theoretical and empirical advantage in terms of performance and speedup compared to various baselines. In particular, DFSSATTEN yields 1.27~1.89x speedup over the vanilla attention network across different sequence lengths.",
"role": "assistant"
}
] |
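The record above describes pruning the attention score matrix QK^T into the GPU's 1:2 / 2:4 fine-grained structured pattern before the softmax, so that only the kept entries are normalized and multiplied with V. The short PyTorch sketch below emulates that selection densely so the behaviour is easy to inspect; it is not the fused CUDA kernel or sparse-tensor-core path the paper relies on for its speedup, and the function names are mine.

import torch

def prune_2_to_4(scores: torch.Tensor) -> torch.Tensor:
    # scores: (..., L) attention scores with L divisible by 4. Keep the two
    # largest-magnitude entries in every group of four consecutive columns and
    # mask the rest to -inf, a software emulation of the 2:4 pattern.
    *lead, L = scores.shape
    groups = scores.reshape(*lead, L // 4, 4)
    top2 = groups.abs().topk(2, dim=-1).indices
    keep = torch.zeros_like(groups, dtype=torch.bool)
    keep = keep.scatter(-1, top2, torch.ones_like(top2, dtype=torch.bool))
    return scores.masked_fill(~keep.reshape(*lead, L), float("-inf"))

def dfss_like_attention(q, k, v):
    # q, k, v: (batch, heads, L, d). Dense emulation of the pipeline described
    # above: dense QK^T, magnitude-based 2:4 selection, softmax over the kept
    # entries only, then a dense product with V standing in for the SpMM step.
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    weights = torch.softmax(prune_2_to_4(scores), dim=-1)
    return weights @ v

q = k = v = torch.randn(1, 8, 128, 64)
out = dfss_like_attention(q, k, v)   # same shape as v

Running this with the random tensors above reproduces the sparsity pattern but not the wall-clock benefit, which in the record comes from fusing the selection into the QK^T epilogue and using sparse tensor cores.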
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
Systems that are trained on clean data generally perform poorly when faced with such errors at test time (Heigold et al., 2017; <cite>Belinkov and Bisk, 2018</cite>) . One potential solution is to introduce noise at training time, an approach that is similar in spirit to the use of adversarial examples in other areas of machine learning (Goodfellow et al., 2014) and natural language processing (Ebrahimi et al., 2018) . So far, using synthetic noise at training time has been found to only improve performance on test data with exactly the same kind of synthetic noise, while at the same time impairing performance on clean test data (Heigold et al., 2017; <cite>Belinkov and Bisk, 2018</cite>) . We desire methods that yield good performance on both clean text as well as naturally-occurring noise, but this is beyond the reach of current techniques. | ["Background", "Motivation"] | science.multicite_intent_classification.142 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nSystems that are trained on clean data generally perform poorly when faced with such errors at test time (Heigold et al., 2017; <cite>Belinkov and Bisk, 2018</cite>) . One potential solution is to introduce noise at training time, an approach that is similar in spirit to the use of adversarial examples in other areas of machine learning (Goodfellow et al., 2014) and natural language processing (Ebrahimi et al., 2018) . So far, using synthetic noise at training time has been found to only improve performance on test data with exactly the same kind of synthetic noise, while at the same time impairing performance on clean test data (Heigold et al., 2017; <cite>Belinkov and Bisk, 2018</cite>) . We desire methods that yield good performance on both clean text as well as naturally-occurring noise, but this is beyond the reach of current techniques.",
"role": "user"
},
{
"content": "[\"Background\", \"Motivation\"]",
"role": "assistant"
}
] |
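The citation-intent rows above pair a prompt with a JSON array of labels. The sketch below shows one way a consumer of this file might validate such an output before scoring; the label inventory is only inferred from answers visible in this dump (for example "Background", "Motivation", "Differences") and may not be the complete set used by the original dataset.

import json

# Label set inferred from outputs seen in this file; the full inventory used
# by the original dataset may differ.
ALLOWED_INTENTS = {"Background", "Motivation", "Uses", "Extends",
                   "Similarities", "Differences", "Future Work"}

def parse_intents(response: str) -> list[str]:
    # Parse a response expected to be a JSON array of intent labels,
    # dropping anything that is not a known label.
    try:
        labels = json.loads(response)
    except json.JSONDecodeError:
        return []
    if not isinstance(labels, list):
        return []
    return [x for x in labels if isinstance(x, str) and x in ALLOWED_INTENTS]

print(parse_intents('["Background", "Motivation"]'))   # ['Background', 'Motivation']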
science.discomat_te | | Excited state | 3 F 3(+3 F 2) | 3 H 4 |
| Excitation wavelength (nm) | 690 | 794 |
| Emission wavelength (nm) | 695 | 798 |
| Lifetime (mm) | 10.2+-1 | 115+-10 |
| Transition rate (s-1) | (9.8+-1)x104 | (8.7+-0.7)x103 |
| Calculated radiative transition rate (s-1) | 1.28x104 | 8.04x103 |
| Quantum efficiency of emission (%) | 13+-1 | 93+-7 |
| Nonradiative transition rate (s-1) | 8.47x104 | 644 |
| Energy gap to the next lower state (cm-1) | 1839 | 4258 |
Caption: Luminescence characteristics of excited states of Tm3+ in 60GaS3/2 *10GeS2 *30LaS3/2 glass
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "690", "type": "Other"}
{"value": "794", "type": "Other"}
{"value": "695", "type": "Other"}
{"value": "798", "type": "Other"}
{"value": "10.2", "type": "Other"}
{"value": "115", "type": "Other"}
{"value": "9.8", "type": "Other"}
{"value": "8.7", "type": "Other"}
{"value": "13", "type": "Other"}
{"value": "93", "type": "Other"}
{"value": "644", "type": "Other"}
{"value": "1839", "type": "Other"}
{"value": "4258", "type": "Other"} | science.discomat_te.362 | [
{
"content": "| Excited state | 3 F 3(+3 F 2) | 3 H 4 |\n| Excitation wavelength (nm) | 690 | 794 |\n| Emission wavelength (nm) | 695 | 798 |\n| Lifetime (mm) | 10.2+-1 | 115+-10 |\n| Transition rate (s-1) | (9.8+-1)x104 | (8.7+-0.7)x103 |\n| Calculated radiative transition rate (s-1) | 1.28x104 | 8.04x103 |\n| Quantum efficiency of emission (%) | 13+-1 | 93+-7 |\n| Nonradiative transition rate (s-1) | 8.47x104 | 644 |\n| Energy gap to the next lower state (cm-1) | 1839 | 4258 |\n\nCaption: Luminescence characteristics of excited states of Tm3+ in 60GaS3/2 *10GeS2 *30LaS3/2 glass\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"690\", \"type\": \"Other\"}\n{\"value\": \"794\", \"type\": \"Other\"}\n{\"value\": \"695\", \"type\": \"Other\"}\n{\"value\": \"798\", \"type\": \"Other\"}\n{\"value\": \"10.2\", \"type\": \"Other\"}\n{\"value\": \"115\", \"type\": \"Other\"}\n{\"value\": \"9.8\", \"type\": \"Other\"}\n{\"value\": \"8.7\", \"type\": \"Other\"}\n{\"value\": \"13\", \"type\": \"Other\"}\n{\"value\": \"93\", \"type\": \"Other\"}\n{\"value\": \"644\", \"type\": \"Other\"}\n{\"value\": \"1839\", \"type\": \"Other\"}\n{\"value\": \"4258\", \"type\": \"Other\"}",
"role": "assistant"
}
] |
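The discomat rows map each numeric table cell to a {"value", "type"} JSON line. The sketch below reproduces only the simplest part of that mapping, plain numbers typed as "Other" with the "+-" uncertainty suffix stripped; the regular expression and the decision to skip scientific-notation cells are my own approximation of the gold output above, not the dataset's official extraction rule.

import re

def table_cells_to_json(table: str) -> list[str]:
    # Emit one {"value": ..., "type": "Other"} line per plain numeric cell of a
    # pipe-delimited table, mirroring the simplest rows of the gold output above.
    out = []
    for row in table.strip().splitlines():
        cells = [c.strip() for c in row.strip().strip("|").split("|")]
        for cell in cells[1:]:                      # first column is the row label
            m = re.fullmatch(r"(-?\d+(?:\.\d+)?)(?:\+-\S+)?", cell)
            if m:                                   # plain number, optional "+-" error
                out.append('{"value": "%s", "type": "Other"}' % m.group(1))
    return out

demo = """| Excitation wavelength (nm) | 690 | 794 |
| Lifetime (mm) | 10.2+-1 | 115+-10 |"""
print("\n".join(table_cells_to_json(demo)))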
science.discomat_te | | Name | SiO2 | PbO | Others | S (cm2/g) |
| P9 | 77.6 | 8.9 | 13.5 | 460 |
| P19 | 70.3 | 18.9 | 10.8 | 360 |
| P37 | 58.0 | 36.9 | 5.7 | 340 |
Caption: Composition (% molar oxide) and specific surface area of the lead-silicate powders
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "77.6", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "P9"}
{"value": "8.9", "type": "Glass_Compound_Amount", "constituent": "PbO", "unit": "mol", "material": "P9"}
{"value": "13.5", "type": "Glass_Compound_Amount", "constituent": "Others", "unit": "mol", "material": "P9"}
{"value": "460", "type": "Other"}
{"value": "70.3", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "P19"}
{"value": "18.9", "type": "Glass_Compound_Amount", "constituent": "PbO", "unit": "mol", "material": "P19"}
{"value": "10.8", "type": "Glass_Compound_Amount", "constituent": "Others", "unit": "mol", "material": "P19"}
{"value": "360", "type": "Other"}
{"value": "58.0", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "P37"}
{"value": "36.9", "type": "Glass_Compound_Amount", "constituent": "PbO", "unit": "mol", "material": "P37"}
{"value": "5.7", "type": "Glass_Compound_Amount", "constituent": "Others", "unit": "mol", "material": "P37"}
{"value": "340", "type": "Other"} | science.discomat_te.265 | [
{
"content": "| Name | SiO2 | PbO | Others | S (cm2/g) |\n| P9 | 77.6 | 8.9 | 13.5 | 460 |\n| P19 | 70.3 | 18.9 | 10.8 | 360 |\n| P37 | 58.0 | 36.9 | 5.7 | 340 |\n\nCaption: Composition (% molar oxide) and specific surface area of the lead-silicate powders\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"77.6\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"P9\"}\n{\"value\": \"8.9\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"PbO\", \"unit\": \"mol\", \"material\": \"P9\"}\n{\"value\": \"13.5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Others\", \"unit\": \"mol\", \"material\": \"P9\"}\n{\"value\": \"460\", \"type\": \"Other\"}\n{\"value\": \"70.3\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"P19\"}\n{\"value\": \"18.9\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"PbO\", \"unit\": \"mol\", \"material\": \"P19\"}\n{\"value\": \"10.8\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Others\", \"unit\": \"mol\", \"material\": \"P19\"}\n{\"value\": \"360\", \"type\": \"Other\"}\n{\"value\": \"58.0\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"P37\"}\n{\"value\": \"36.9\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"PbO\", \"unit\": \"mol\", \"material\": \"P37\"}\n{\"value\": \"5.7\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Others\", \"unit\": \"mol\", \"material\": \"P37\"}\n{\"value\": \"340\", \"type\": \"Other\"}",
"role": "assistant"
}
] |
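A cheap consistency check for composition rows like P9/P19/P37 above is that the molar percentages of each glass should sum to roughly 100. The values below are transcribed from the table; the 1 mol% tolerance is an arbitrary choice on my part.

compositions = {
    # mol% values transcribed from the table above: SiO2, PbO, Others
    "P9":  {"SiO2": 77.6, "PbO": 8.9,  "Others": 13.5},
    "P19": {"SiO2": 70.3, "PbO": 18.9, "Others": 10.8},
    "P37": {"SiO2": 58.0, "PbO": 36.9, "Others": 5.7},
}

for glass, parts in compositions.items():
    total = sum(parts.values())
    status = "ok" if abs(total - 100.0) <= 1.0 else "check transcription"
    print(f"{glass}: total = {total:.1f} mol% ({status})")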
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
Expression of oncolytic virus antigens as peptides makes relapse more likely.
Research Abstract:
Tumor recurrence represents a major clinical challenge.
Our data show that emergent recurrent tumors acquire a phenotype radically different from that of their originating primary tumors.
This phenotype allows them to evade a host-derived innate immune response elicited by the progression from minimal residual disease (MRD) to actively growing recurrence.
Screening for this innate response predicted accurately in which mice recurrence would occur.
Premature induction of recurrence resensitized MRD to the primary therapy, suggesting a possible paradigm shift for clinical treatment of dormant disease in which the current expectant approach is replaced with active attempts to uncover MRD before evolution of the escape phenotype is complete.
By combining screening with second-line treatments targeting innate insensitivity, up to 100% of mice that would have otherwise relapsed were cured.
These data may open new avenues for early detection and appropriately timed, highly targeted treatment of tumor recurrence irrespective of tumor type or frontline treatment. | {"verdict": "NEI", "evidence": []} | science.scifact_entailment.269 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nExpression of oncolytic virus antigens as peptides makes relapse more likely.\n\nResearch Abstract:\nTumor recurrence represents a major clinical challenge.\nOur data show that emergent recurrent tumors acquire a phenotype radically different from that of their originating primary tumors.\nThis phenotype allows them to evade a host-derived innate immune response elicited by the progression from minimal residual disease (MRD) to actively growing recurrence.\nScreening for this innate response predicted accurately in which mice recurrence would occur.\nPremature induction of recurrence resensitized MRD to the primary therapy, suggesting a possible paradigm shift for clinical treatment of dormant disease in which the current expectant approach is replaced with active attempts to uncover MRD before evolution of the escape phenotype is complete.\nBy combining screening with second-line treatments targeting innate insensitivity, up to 100% of mice that would have otherwise relapsed were cured.\nThese data may open new avenues for early detection and appropriately timed, highly targeted treatment of tumor recurrence irrespective of tumor type or frontline treatment.",
"role": "user"
},
{
"content": "{\"verdict\": \"NEI\", \"evidence\": []}",
"role": "assistant"
}
] |
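Scifact-style outputs follow a small fixed schema: a verdict plus an evidence list of abstract sentences. The checker below assumes the verdict vocabulary {"SUPPORT", "CONTRADICT", "NEI"}, taken from examples such as the one above rather than from an official specification, so treat both the vocabulary and the NEI-implies-empty-evidence rule as assumptions.

import json

VERDICTS = {"SUPPORT", "CONTRADICT", "NEI"}   # assumed label set

def check_scifact_response(response: str, abstract_sentences: list[str]) -> bool:
    # Well-formed means: a JSON object with a known verdict and an evidence list
    # whose items are sentences copied from the abstract; an NEI verdict is
    # expected to come with empty evidence.
    try:
        obj = json.loads(response)
    except json.JSONDecodeError:
        return False
    if not isinstance(obj, dict) or obj.get("verdict") not in VERDICTS:
        return False
    evidence = obj.get("evidence")
    if not isinstance(evidence, list):
        return False
    if obj["verdict"] == "NEI":
        return evidence == []
    return all(s in abstract_sentences for s in evidence)

print(check_scifact_response('{"verdict": "NEI", "evidence": []}', []))   # True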
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
Relevance Readability Total Sentence Rewrite<cite> (Chen and Bansal, 2018)</cite> 56 59 115 BERTSUM (Liu, 2019) 58 60 118 BERT-ext + abs + RL + rerank (ours) 66 61 127 which has the highest summary-level ROUGE-L score, from all the possible combinations of sentences. | ["Differences"] | science.multicite_intent_classification.332 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nRelevance Readability Total Sentence Rewrite<cite> (Chen and Bansal, 2018)</cite> 56 59 115 BERTSUM (Liu, 2019) 58 60 118 BERT-ext + abs + RL + rerank (ours) 66 61 127 which has the highest summary-level ROUGE-L score, from all the possible combinations of sentences.",
"role": "user"
},
{
"content": "[\"Differences\"]",
"role": "assistant"
}
] |
science.discomat_te | | Frequency (cm-1) | Frequency (cm-1) | Frequency (cm-1) | Frequency (cm-1) | Frequency (cm-1) | Assignments |
| G1 | G2 | G3 | G4 | G5 | Assignments |
| 329+-8 | 346+-7 | 358+-5 | 348+-4 | 339+-1 | Coupled vibration of M-O and SiO4 network [a,b] |
| 425+-8 | 440+-8 | 425+-8 | 425+-5 | 437+-5 | Out-of-plane motion of bridging oxygen [a,c] |
| 463+-7 | 483+-7 | 494+-6 | - | - | Asymmetric deformation of Q4 (rocking of bridging oxygen) [c] |
| - | - | - | 502+-3 | 500+-4 | Si-O-Si bending of four-membered siloxane ring of SiO4 [a] |
| 579 | 605 | 624 | 636 | 646 | Rocking motion of bridging oxygen in structural units that contain NBO [b,c] |
| 789+-1 | 767+-2 | 750+-3 | - | - | Oxygen bending with some contribution from Si atom [b,c] |
| - | 886+-1 | 869+-2 | 860+-1 | 860+-1 | Symmetric stretching of Q0 [c,d] |
| - | 923+-1 | 931+-7 | 922+-8 | 902+-1 | Symmetric stretching of Q1 [c,d] |
| 961+-1 | 962+-1 | 984+-2 | 978+-2 | 956+-1 | Symmetric stretching of Q2 [c,d] |
| 1040+-5 | 1034+-3 | 1037+-6 | 1034+-6 | 1028+-3 | Asymmetric stretching of bridging oxygen in all Q species [b] |
| 1104+-2 | 1087+-1 | 1065+-7 | 1062+-7 | 1056+-6 | Symmetric stretching of Q3 [c,d] |
| 1170+-7 | 1146+-5 | 1165+-7 | 1121+-7 | - | Asymmetric stretching of Q4 with Si-O-Si =160deg [a,b,c] |
Caption: Frequencies and assignments of Raman bands for the xNa2O*MgO*4SiO2 glasses
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "329", "type": "Other"}
{"value": "346", "type": "Other"}
{"value": "358", "type": "Other"}
{"value": "348", "type": "Other"}
{"value": "339", "type": "Other"}
{"value": "425", "type": "Other"}
{"value": "440", "type": "Other"}
{"value": "425", "type": "Other"}
{"value": "425", "type": "Other"}
{"value": "437", "type": "Other"}
{"value": "463", "type": "Other"}
{"value": "483", "type": "Other"}
{"value": "494", "type": "Other"}
{"value": "502", "type": "Other"}
{"value": "500", "type": "Other"}
{"value": "579", "type": "Other"}
{"value": "605", "type": "Other"}
{"value": "624", "type": "Other"}
{"value": "636", "type": "Other"}
{"value": "646", "type": "Other"}
{"value": "789", "type": "Other"}
{"value": "767", "type": "Other"}
{"value": "750", "type": "Other"}
{"value": "886", "type": "Other"}
{"value": "869", "type": "Other"}
{"value": "860", "type": "Other"}
{"value": "860", "type": "Other"}
{"value": "923", "type": "Other"}
{"value": "931", "type": "Other"}
{"value": "922", "type": "Other"}
{"value": "902", "type": "Other"}
{"value": "961", "type": "Other"}
{"value": "962", "type": "Other"}
{"value": "984", "type": "Other"}
{"value": "978", "type": "Other"}
{"value": "956", "type": "Other"}
{"value": "1040", "type": "Other"}
{"value": "1034", "type": "Other"}
{"value": "1037", "type": "Other"}
{"value": "1034", "type": "Other"}
{"value": "1028", "type": "Other"}
{"value": "1104", "type": "Other"}
{"value": "1087", "type": "Other"}
{"value": "1065", "type": "Other"}
{"value": "1062", "type": "Other"}
{"value": "1056", "type": "Other"}
{"value": "1170", "type": "Other"}
{"value": "1146", "type": "Other"}
{"value": "1165", "type": "Other"}
{"value": "1121", "type": "Other"} | science.discomat_te.19 | [
{
"content": "| Frequency (cm-1) | Frequency (cm-1) | Frequency (cm-1) | Frequency (cm-1) | Frequency (cm-1) | Assignments |\n| G1 | G2 | G3 | G4 | G5 | Assignments |\n| 329+-8 | 346+-7 | 358+-5 | 348+-4 | 339+-1 | Coupled vibration of M-O and SiO4 network [a,b] |\n| 425+-8 | 440+-8 | 425+-8 | 425+-5 | 437+-5 | Out-of-plane motion of bridging oxygen [a,c] |\n| 463+-7 | 483+-7 | 494+-6 | - | - | Asymmetric deformation of Q4 (rocking of bridging oxygen) [c] |\n| - | - | - | 502+-3 | 500+-4 | Si-O-Si bending of four-membered siloxane ring of SiO4 [a] |\n| 579 | 605 | 624 | 636 | 646 | Rocking motion of bridging oxygen in structural units that contain NBO [b,c] |\n| 789+-1 | 767+-2 | 750+-3 | - | - | Oxygen bending with some contribution from Si atom [b,c] |\n| - | 886+-1 | 869+-2 | 860+-1 | 860+-1 | Symmetric stretching of Q0 [c,d] |\n| - | 923+-1 | 931+-7 | 922+-8 | 902+-1 | Symmetric stretching of Q1 [c,d] |\n| 961+-1 | 962+-1 | 984+-2 | 978+-2 | 956+-1 | Symmetric stretching of Q2 [c,d] |\n| 1040+-5 | 1034+-3 | 1037+-6 | 1034+-6 | 1028+-3 | Asymmetric stretching of bridging oxygen in all Q species [b] |\n| 1104+-2 | 1087+-1 | 1065+-7 | 1062+-7 | 1056+-6 | Symmetric stretching of Q3 [c,d] |\n| 1170+-7 | 1146+-5 | 1165+-7 | 1121+-7 | - | Asymmetric stretching of Q4 with Si-O-Si =160deg [a,b,c] |\n\nCaption: Frequencies and assignments of Raman bands for the xNa2O*MgO*4SiO2 glasses\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"329\", \"type\": \"Other\"}\n{\"value\": \"346\", \"type\": \"Other\"}\n{\"value\": \"358\", \"type\": \"Other\"}\n{\"value\": \"348\", \"type\": \"Other\"}\n{\"value\": \"339\", \"type\": \"Other\"}\n{\"value\": \"425\", \"type\": \"Other\"}\n{\"value\": \"440\", \"type\": \"Other\"}\n{\"value\": \"425\", \"type\": \"Other\"}\n{\"value\": \"425\", \"type\": \"Other\"}\n{\"value\": \"437\", \"type\": \"Other\"}\n{\"value\": \"463\", \"type\": \"Other\"}\n{\"value\": \"483\", \"type\": \"Other\"}\n{\"value\": \"494\", \"type\": \"Other\"}\n{\"value\": \"502\", \"type\": \"Other\"}\n{\"value\": \"500\", \"type\": \"Other\"}\n{\"value\": \"579\", \"type\": \"Other\"}\n{\"value\": \"605\", \"type\": \"Other\"}\n{\"value\": \"624\", \"type\": \"Other\"}\n{\"value\": \"636\", \"type\": \"Other\"}\n{\"value\": \"646\", \"type\": \"Other\"}\n{\"value\": \"789\", \"type\": \"Other\"}\n{\"value\": \"767\", \"type\": \"Other\"}\n{\"value\": \"750\", \"type\": \"Other\"}\n{\"value\": \"886\", \"type\": \"Other\"}\n{\"value\": \"869\", \"type\": \"Other\"}\n{\"value\": \"860\", \"type\": \"Other\"}\n{\"value\": \"860\", \"type\": \"Other\"}\n{\"value\": \"923\", \"type\": \"Other\"}\n{\"value\": \"931\", \"type\": \"Other\"}\n{\"value\": \"922\", \"type\": \"Other\"}\n{\"value\": \"902\", \"type\": \"Other\"}\n{\"value\": \"961\", \"type\": \"Other\"}\n{\"value\": \"962\", \"type\": \"Other\"}\n{\"value\": \"984\", \"type\": \"Other\"}\n{\"value\": \"978\", \"type\": \"Other\"}\n{\"value\": \"956\", \"type\": \"Other\"}\n{\"value\": \"1040\", \"type\": \"Other\"}\n{\"value\": \"1034\", \"type\": \"Other\"}\n{\"value\": \"1037\", \"type\": \"Other\"}\n{\"value\": \"1034\", \"type\": \"Other\"}\n{\"value\": \"1028\", \"type\": \"Other\"}\n{\"value\": \"1104\", \"type\": \"Other\"}\n{\"value\": \"1087\", \"type\": \"Other\"}\n{\"value\": \"1065\", \"type\": \"Other\"}\n{\"value\": \"1062\", \"type\": \"Other\"}\n{\"value\": \"1056\", \"type\": \"Other\"}\n{\"value\": \"1170\", \"type\": \"Other\"}\n{\"value\": \"1146\", \"type\": \"Other\"}\n{\"value\": \"1165\", \"type\": \"Other\"}\n{\"value\": \"1121\", \"type\": \"Other\"}",
"role": "assistant"
}
] |
science.bioasq_list_qa | Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.
Context: maternal immune activation (MIA) mouse model that is known to display features of ASD.
BTBR T+tf/J (BTBR), a model of ASD with cognitive deficits,
C58/J mouse strain, a model of ASD core symptoms.
the C58/J mouse model of autism
Both C58/J and Grin1 knockdown mice, another model of ASD-like behavior,
Mice injected subcutaneously with 600 mg/kg valproic acid (VPA600) at gestational day 12.5 show reduced social interaction in adulthood (at 8 weeks of age), and they have been proposed as a mouse model of autism.
BTBR T+tf/J (BTBR), a commonly-used model presenting all core autism-related phenotypes
The Autism ProSAP1/Shank2 mouse model
Shank3-deficient mice
a panel of Shank mutant mouse models
maternal immune activation (MIA) mouse model of gestational poly(IC) exposure
poly(IC) mouse model of autism spectrum disorders
BTBR T+ tf/J strain, a mouse model of autism.
The inbred BTBR T+ tf/J (BTBR) strain, a putative mouse model of autism,
C57BL/6J strain
SYNGAP1 mouse model of ID/ASD
Tsc2f/-;Cre mice demonstrated increased repetitive behavior as assessed with marble burying activity
BTBR T+ tf/J mouse model for autism spectrum disorders
Furthermore, maternal infection (maternal immune activation, MIA) is a risk factor for ASD.
NR1(neo-/-) mice
mice heterozygous for the Gabrb3 gene
Gestational immune activation and Tsc2 haploinsufficiency
novel transgenic mouse, MALTT,
sing a mouse model, CD38, a transmembrane protein expressed in immune cells but also in brain, was found to be critical for social behavior via regulation of oxytocin secretion.
fmr1 knockout mouse model
TS2-neo mouse
Congenic Tbx1 heterozygous (HT) mice
the valproic acid (VPA) mouse model of autism
indicates that the MeCP2+/- model may be useful for preclinical development targeting specific cortical processing abnormalities in RTT with potential relevance to ASD.
(Dp(11)17/+)
BTBR T+tf/J (BTBR) inbred mice are frequently used as a model of autism spectrum disorders (ASD)
Intact and impaired executive abilities in the BTBR mouse model of autism.
Question: List mouse models for autism spectrum disorder (ASD). | ["maternal immune activation (MIA) mouse model of gestational poly(IC) exposure", "BTBR T+tf/J (BTBR)"] | science.bioasq_list_qa.294 | [
{
"content": "Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.\n\nContext: maternal immune activation (MIA) mouse model that is known to display features of ASD.\nBTBR T+tf/J (BTBR), a model of ASD with cognitive deficits,\nC58/J mouse strain, a model of ASD core symptoms.\nthe C58/J mouse model of autism\nBoth C58/J and Grin1 knockdown mice, another model of ASD-like behavior,\nMice injected subcutaneously with 600 mg/kg valproic acid (VPA600) at gestational day 12.5 show reduced social interaction in adulthood (at 8 weeks of age), and they have been proposed as a mouse model of autism.\nBTBR T+tf/J (BTBR), a commonly-used model presenting all core autism-related phenotypes\nThe Autism ProSAP1/Shank2 mouse model\nShank3-deficient mice\na panel of Shank mutant mouse models\nmaternal immune activation (MIA) mouse model of gestational poly(IC) exposure\npoly(IC) mouse model of autism spectrum disorders\nBTBR T+ tf/J strain, a mouse model of autism.\nThe inbred BTBR T+ tf/J (BTBR) strain, a putative mouse model of autism,\nC57BL/6J strain\nSYNGAP1 mouse model of ID/ASD\nTsc2f/-;Cre mice demonstrated increased repetitive behavior as assessed with marble burying activity\nBTBR T+ tf/J mouse model for autism spectrum disorders\nFurthermore, maternal infection (maternal immune activation, MIA) is a risk factor for ASD.\nNR1(neo-/-) mice\nmice heterozygous for the Gabrb3 gene\nGestational immune activation and Tsc2 haploinsufficiency\nnovel transgenic mouse, MALTT,\nsing a mouse model, CD38, a transmembrane protein expressed in immune cells but also in brain, was found to be critical for social behavior via regulation of oxytocin secretion.\nfmr1 knockout mouse model\nTS2-neo mouse\nCongenic Tbx1 heterozygous (HT) mice\nthe valproic acid (VPA) mouse model of autism\nindicates that the MeCP2+/- model may be useful for preclinical development targeting specific cortical processing abnormalities in RTT with potential relevance to ASD.\n(Dp(11)17/+)\nBTBR T+tf/J (BTBR) inbred mice are frequently used as a model of autism spectrum disorders (ASD)\nIntact and impaired executive abilities in the BTBR mouse model of autism.\n\nQuestion: List mouse models for autism spectrum disorder (ASD).",
"role": "user"
},
{
"content": "[\"maternal immune activation (MIA) mouse model of gestational poly(IC) exposure\", \"BTBR T+tf/J (BTBR)\"]",
"role": "assistant"
}
] |
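List-type answers such as the mouse-model list above are typically scored by set overlap with the gold list. The sketch below computes a lenient precision/recall after minimal normalization; the normalization rules are illustrative only and stricter entity matching may be needed in practice.

def normalize(entry: str) -> str:
    # Lowercase and collapse whitespace so trivially different surface forms match.
    return " ".join(entry.lower().split())

def list_answer_scores(predicted: list[str], gold: list[str]) -> tuple[float, float]:
    pred = {normalize(p) for p in predicted}
    ref = {normalize(g) for g in gold}
    if not pred or not ref:
        return 0.0, 0.0
    hits = len(pred & ref)
    return hits / len(pred), hits / len(ref)   # precision, recall

gold = ["maternal immune activation (MIA) mouse model of gestational poly(IC) exposure",
        "BTBR T+tf/J (BTBR)"]
pred = ["BTBR T+tf/J (BTBR)", "valproic acid (VPA) mouse model"]
print(list_answer_scores(pred, gold))   # (0.5, 0.5)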
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.OBJECTIVE: To evaluate the efficacy, tolerability, and safety of once-daily 1200 mg and 2400 mg SPN-804 (Oxtellar XRTM, Supernus Pharmaceuticals), an extended-release tablet formulation of oxcarbazepine (OXC), added to 1-3 concomitant antiepileptic drugs (AEDs) in adults with refractory partial-onset seizures, with or without secondary generalization.
ABSTRACT.METHODS: The Prospective, Randomized Study of OXC XR in Subjects with Partial Epilepsy Refractory (PROSPER) study was a multinational, randomized, double-blind, parallel-group Phase 3 study. The primary efficacy endpoint was median percent reduction from baseline in monthly (28-day) seizure frequency for the 16-week double-blind treatment period in the intent-to-treat (ITT) population with analyzable seizure data. Other efficacy analyses included proportion of patients with ≥ 50% seizure reduction, proportion of patients seizure free, and the relationship between clinical response and plasma concentration.
ABSTRACT.RESULTS: Median percent reduction was -28.7% for placebo, −38.2% (P = 0.08 vs placebo) for once-daily SPN-804 1200 mg, and −42.9% (P = 0.003) for SPN-804 2400 mg. Responder rates were 28.1%, 36.1% (P = 0.08), and 40.7% (P = 0.02); 16-week seizure-free rates in a pragmatic ITT analysis were 3.3%, 4.9% (P = 0.59), and 11.4% (P = 0.008), respectively. When data were analyzed separately for study site clusters, a post hoc analysis demonstrated that both SPN-804 dosages were significantly superior to placebo in median percent seizure reduction (placebo: −13.3%; 1200 mg: −34.5%, P = 0.02; 2400 mg: −52.7%, P = 0.006) in the North American study site cluster. A concentration–response analysis also supported a clinically meaningful effect for 1200 mg. Adverse event types reflected the drug's established profile. Adverse event frequency was consistent with a pharmacokinetic profile in which SPN-804 produces lower peak plasma concentrations vs immediate-release OXC. Once-daily dosing was not associated with any new safety signals.
ABSTRACT.CONCLUSIONS: Adjunctive once-daily SPN-804 improved seizure control in patients with inadequately controlled partial-onset seizures. Adverse event occurrence and discontinuations due to adverse events suggest improved tolerability vs previously published data with immediate-release OXC.
Extracted Data: | [["Extended-release oxcarbazepine 2400 mg", "Control", "Overall seizure median percent improvement ", "significantly increased", "Median percent reduction was -28.7% for placebo, \u221238.2% (P = 0.08 vs placebo) for once-daily SPN-804 1200 mg, and \u221242.9% (P = 0.003) for SPN-804 2400 mg"], ["Extended-release oxcarbazepine 2400 mg", "Control", "Overall seizure median percent improvement ", "significantly increased", "When data were analyzed separately for study site clusters, a post hoc analysis demonstrated that both SPN-804 dosages were significantly superior to placebo in median percent seizure reduction (placebo: \u221213.3%; 1200 mg: \u221234.5%, P = 0.02; 2400 mg: \u221252.7%, P = 0.006) in the North American study site cluster."], ["Extended-release oxcarbazepine 1200 mg", "Control", "Overall seizure median percent reduction", "no significant difference", "Median percent reduction was -28.7% for placebo, \u221238.2% (P = 0.08 vs placebo) for once-daily SPN-804 1200 mg, and \u221242.9% (P = 0.003) for SPN-804 2400 mg"], ["Extended-release oxcarbazepine 1200 mg", "Control", "Seizure median percent improvement in North America", "significantly increased", "both SPN-804 dosages were significantly superior to placebo in median percent seizure reduction (placebo: \u221213.3%; 1200 mg: \u221234.5%, P = 0.02; 2400 mg: \u221252.7%, P = 0.006)"], ["Extended-release oxcarbazepine 2400 mg", "Control", "Seizure median percent improvement in North America", "significantly increased", "both SPN-804 dosages were significantly superior to placebo in median percent seizure reduction (placebo: \u221213.3%; 1200 mg: \u221234.5%, P = 0.02; 2400 mg: \u221252.7%, P = 0.006) in the North American study site cluster"]] | science.evidence_inference.109 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.OBJECTIVE: To evaluate the efficacy, tolerability, and safety of once-daily 1200 mg and 2400 mg SPN-804 (Oxtellar XRTM, Supernus Pharmaceuticals), an extended-release tablet formulation of oxcarbazepine (OXC), added to 1-3 concomitant antiepileptic drugs (AEDs) in adults with refractory partial-onset seizures, with or without secondary generalization.\n\nABSTRACT.METHODS: The Prospective, Randomized Study of OXC XR in Subjects with Partial Epilepsy Refractory (PROSPER) study was a multinational, randomized, double-blind, parallel-group Phase 3 study. The primary efficacy endpoint was median percent reduction from baseline in monthly (28-day) seizure frequency for the 16-week double-blind treatment period in the intent-to-treat (ITT) population with analyzable seizure data. Other efficacy analyses included proportion of patients with ≥ 50% seizure reduction, proportion of patients seizure free, and the relationship between clinical response and plasma concentration.\n\nABSTRACT.RESULTS: Median percent reduction was -28.7% for placebo, −38.2% (P = 0.08 vs placebo) for once-daily SPN-804 1200 mg, and −42.9% (P = 0.003) for SPN-804 2400 mg. Responder rates were 28.1%, 36.1% (P = 0.08), and 40.7% (P = 0.02); 16-week seizure-free rates in a pragmatic ITT analysis were 3.3%, 4.9% (P = 0.59), and 11.4% (P = 0.008), respectively. When data were analyzed separately for study site clusters, a post hoc analysis demonstrated that both SPN-804 dosages were significantly superior to placebo in median percent seizure reduction (placebo: −13.3%; 1200 mg: −34.5%, P = 0.02; 2400 mg: −52.7%, P = 0.006) in the North American study site cluster. A concentration–response analysis also supported a clinically meaningful effect for 1200 mg. Adverse event types reflected the drug's established profile. Adverse event frequency was consistent with a pharmacokinetic profile in which SPN-804 produces lower peak plasma concentrations vs immediate-release OXC. Once-daily dosing was not associated with any new safety signals.\n\nABSTRACT.CONCLUSIONS: Adjunctive once-daily SPN-804 improved seizure control in patients with inadequately controlled partial-onset seizures. Adverse event occurrence and discontinuations due to adverse events suggest improved tolerability vs previously published data with immediate-release OXC.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"Extended-release oxcarbazepine 2400 mg\", \"Control\", \"Overall seizure median percent improvement \", \"significantly increased\", \"Median percent reduction was -28.7% for placebo, \\u221238.2% (P = 0.08 vs placebo) for once-daily SPN-804 1200 mg, and \\u221242.9% (P = 0.003) for SPN-804 2400 mg\"], [\"Extended-release oxcarbazepine 2400 mg\", \"Control\", \"Overall seizure median percent improvement \", \"significantly increased\", \"When data were analyzed separately for study site clusters, a post hoc analysis demonstrated that both SPN-804 dosages were significantly superior to placebo in median percent seizure reduction (placebo: \\u221213.3%; 1200 mg: \\u221234.5%, P = 0.02; 2400 mg: \\u221252.7%, P = 0.006) in the North American study site cluster.\"], [\"Extended-release oxcarbazepine 1200 mg\", \"Control\", \"Overall seizure median percent reduction\", \"no significant difference\", \"Median percent reduction was -28.7% for placebo, \\u221238.2% (P = 0.08 vs placebo) for once-daily SPN-804 1200 mg, and \\u221242.9% (P = 0.003) for SPN-804 2400 mg\"], [\"Extended-release oxcarbazepine 1200 mg\", \"Control\", \"Seizure median percent improvement in North America\", \"significantly increased\", \"both SPN-804 dosages were significantly superior to placebo in median percent seizure reduction (placebo: \\u221213.3%; 1200 mg: \\u221234.5%, P = 0.02; 2400 mg: \\u221252.7%, P = 0.006)\"], [\"Extended-release oxcarbazepine 2400 mg\", \"Control\", \"Seizure median percent improvement in North America\", \"significantly increased\", \"both SPN-804 dosages were significantly superior to placebo in median percent seizure reduction (placebo: \\u221213.3%; 1200 mg: \\u221234.5%, P = 0.02; 2400 mg: \\u221252.7%, P = 0.006) in the North American study site cluster\"]]",
"role": "assistant"
}
] |
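Each evidence-inference finding above is stored as a five-element list (treatment, control, outcome, effect, justification). The sketch below loads such an output into named records for downstream analysis; the field names are mine, not part of the dataset.

import json
from dataclasses import dataclass

@dataclass
class Finding:
    treatment: str
    control: str
    outcome: str
    effect: str
    justification: str

def load_findings(extracted: str) -> list[Finding]:
    # Parse the JSON array of 5-element lists produced for an abstract and wrap
    # each entry in a named record; entries of the wrong shape are skipped.
    rows = json.loads(extracted)
    return [Finding(*row) for row in rows if isinstance(row, list) and len(row) == 5]

example = ('[["SPN-804 2400 mg", "placebo", "median percent seizure reduction", '
           '"significantly increased", "-42.9% (P = 0.003) vs -28.7% for placebo"]]')
for f in load_findings(example):
    print(f.treatment, "->", f.effect)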
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
Weighted model estimation for offline model-based reinforcement learning
Main Content:
1 Introduction . Reinforcement learning ( RL ) is a framework to learn a policy in an unknown environment [ 1 ] . Model-based RL ( MBRL ) is an approach that explicitly estimates a transition model and utilizes it to improve a policy . Compared to model-free RL , one major advantage of MBRL is the data efficiency [ 2–4 ] , which is important for applications where the data collection is expensive , such as robotics or healthcare . This paper focuses on offline MBRL , i.e . MBRL that learns a policy from previously collected datasets . One of the main challenges in offline MBRL is distribution shift [ 5 ] . The distribution of offline data is different from the distribution of data obtained in the future when applying an improved policy . This is a situation called covariate shift , where a model estimated using standard supervised learning techniques such as empirical risk minimization ( ERM ) may not generalize to test data . A major idea in MBRL is to estimate a model using such a standard method and to improve a policy using the estimated model in regions where the predictions are accurate [ 6–8 ] . A point to be improved here is model estimation without considering covariate shift . In supervised learning under covariate shift , importance-weighted ERM can obtain good predictive performance [ 9 ] . If a model in offline MBRL can be estimated in a similar way , then the estimated model will make more accurate predictions , resulting in increased policy improvement . Motivated by this , this paper tackles the issue of weighted model estimation considering covariate shift in offline MBRL . From the viewpoint of covariate shift , a natural idea is weighting with the ratio of the state-action distributions of offline data and real future data , as the agent uses a model to predict real future data . For convenience , this paper calls the ratio “ the natural weight. ” However , estimating the natural weight is one of the main challenges for off-policy evaluation [ 5 , 10 ] . Compared to density ratio estimation for supervised learning [ 11 ] , the difficulty of off-policy evaluation is the inaccessibility of samples from the distribution of real future data . 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . As an alternative , this paper considers weighting with the state-action distribution ratio of offline data and “ simulated ” future data . This paper calls the ratio “ the artificial weight , ” in contrast to the natural weight . Simulated future data can be generated in offline simulation , unlike real future data . Since samples from both distributions are available , the artificial weight can be relatively easily obtained by standard density ratio estimation techniques for supervised learning . Based on the artificial weight , this paper defines a loss function for offline MBRL in Section 4 . This paper presents an algorithm to optimize it in Section 5 . The question here is the validity of weighting with the artificial weight , as it may not be natural for covariate shift . This paper justifies it as evaluating an upper bound of the policy evaluation error in Section 4 . This paper demonstrates that it works in practice in numerical experiments in Section 6 . The above is the contribution of this paper . 2 Related Works . Most of offline MBRL researches [ 7 , 8 , 12 ] discuss how to learn a policy using a learned model . In contrast , this paper studies how to estimate a model considering covariate shift in MBRL , which depends on a policy to be applied . 
In principle , these can be combined as an algorithm that repeats between estimating a model and learning a policy , like Algorithm 3 . Importance-sampling-based methods estimate the natural weight , namely , the distribution ratio of offline data and real future data on a trajectory or state ( -action ) space [ 13–15 , 10 ] . The advantage is unbiasedness , while the disadvantage is high variance . Model-based methods , including this paper , evaluate policies utilizing models trained using data . Model-based methods have the opposite advantage and disadvantage of importance-sampling-based methods . Doubly robust methods combine importance-sampling-based and model-based methods [ 16–18 ] . In principle , incorporating the idea of this paper as a model-based component is possible . VAML [ 19 , 20 ] defines a state-action-wise loss function based on the expected error of Bellman operator and learns a model to optimize a weighted sum . Similar ideas are found in [ 21–23 ] . The main focus is the state-action-wise loss function , while there are few studies about how to weight it is . The few studies , e.g . GAMPS [ 24 ] , use the natural weight . In contrast , this paper simply uses the negative log likelihood loss function for a state-action pair and discusses the artificial weight , which is relatively easy to estimate . This paper is similar to VMBPO [ 25 ] in that an EM-style algorithm updates a model and a policy using a universal loss function . The differences is that this paper considers covariate shift but does not use the idea of RL as inference [ 26 ] . Formulation that explicitly incorporates RL as inference is also an interesting topic . Another approach is learning a model robust against covariate shift by reducing the difference between the distributions of offline data and future data on a feature space [ 22 , 27 , 28 ] . Since covariate shift measured in a feature space is encouraged to decrease , policy improvement increasing covariate shift is also discouraged . That is , such an idea implicitly limits policy improvement . This paper differs in that it is trying to utilize covariate shift . 3 Preliminaries . This paper discusses a discounted infinite horizon MDP [ 29 ] . Let s be a state . Let a be an action . Let ρ ( s ) be an initial state distribution . Let P ( s′|s , a ) be a transition probability function . Let r ( s , a ) be a reward function . Let γ be a discount factor . Let π be a policy . The value function is V πP ( s 0 ) = Eat∼π ( ·|st ) , st+1∼P ( ·|st , at ) [ ∞∑ t=0 γtr ( st , at ) ∣∣∣∣∣ s0 ] . The expected return is ηπP = Es0∼ρ [ V πP ( s ) ] . The state distribution is dπP ( s ) = ( 1− γ ) ∞∑ t=0 γtPr ( st = s ∣∣ ρ , π , P ) . The state-action distribution is dπP ( s , a ) = d π P ( s ) π ( a|s ) . Note that ηπP = E ( s , a ) ∼dπP [ r ( s , a ) 1−γ ] . Let P∗ denote the true transition probability , which is unknown to the agent . Let Pθ denote the transition probability model parameterized by θ . Let D = { ( sn , an , s′n ) } Nn=1 be offline data , which is obtained from P∗ . Let dDP∗ be the distribution of the input part of D , i.e . { ( sn , an ) } N n=1 . Let D̂πθ be simulation data generated given ( Pθ , π ) with state initialization with probability 1−γ , which is equal to sampling ( s , a ) from dπPθ . Let cθ ( s , a ) = Es′∼P∗ ( ·|s , a ) [ − lnPθ ( s ′|s , a ) ] be the cross entropy between P∗ ( ·|s , a ) and Pθ ( ·|s , a ) . Let h ( s , a ) = Es′∼P∗ ( ·|s , a ) [ − lnP∗ ( s′|s , a ) ] be the self-entropy of P∗ ( ·|s , a ) . 4 Loss Function . 
This paper discusses the theory for the case where the reward function is known . For the unknown reward function , if the prediction of a reward model is bounded , the same loss functions can be derived , see Appendix . 4.1 Policy Evaluation . The goal of policy evaluation is estimating ηπP∗ when given π . For a model-based setting , the goal reduces to finding θ decreasing |ηπP∗ − η π Pθ | , which is bounded as |ηπP∗ − η π Pθ | ≤ γE ( s , a ) ∼dπPθ [ ∣∣∑ s′ ( P∗ ( s′|s , a ) − Pθ ( s′|s , a ) ) V πP∗ ( s ′ ) ∣∣ ] 1− γ ≤ BE ( s , a ) ∼dπPθ [ ||P∗ ( ·|s , a ) − Pθ ( ·|s , a ) ||1 ] √ 2 ≤ B √ E ( s , a ) ∼dπPθ [ cθ ( s , a ) − h ( s , a ) ] = B √ E ( s , a ) ∼dDP∗ [ w π θ ( s , a ) cθ ( s , a ) ] − E ( s , a ) ∼dDP∗ [ w π θ ( s , a ) h ( s , a ) ] ≤ B √ E ( s , a ) ∼dDP∗ [ w π θ ( s , a ) cθ ( s , a ) ] − hmin , ( 1 ) where B = γmaxs , a |r ( s , a ) | √ 2 ( 1−γ ) 2 , w π θ ( s , a ) = dπPθ ( s , a ) dDP∗ ( s , a ) , and hmin = mins , a h ( s , a ) . The first inequality is the Telescoping lemma [ 21 ] . The second inequality holds from Holder ’ s inequality and ||V πP ||∞ ≤ maxs , a |r ( s , a ) | 1−γ . The third inequality holds from Pinsker ’ s inequality and Jensen ’ s inequality . When trying to optimize the third inequality , E ( s , a ) ∼dDP∗ [ w π θ ( s , a ) h ( s , a ) ] is difficult to handle , as h ( s , a ) is unknown . The fourth inequality ( 1 ) excludes the difficult part from the optimization . This paper considers decreasing the policy evaluation error by minimizing Equation ( 1 ) . What is to be minimized is Lπ ( θ ) = E ( s , a ) ∼dDP∗ [ w π θ ( s , a ) cθ ( s , a ) ] = E ( s , a ) ∼dDP∗ , s′∼P∗ ( ·|s , a ) [ −w π θ ( s , a ) lnPθ ( s′|s , a ) ] . The empirical average of Lπ ( θ ) is L̂π ( θ ) = − 1 N N∑ n=1 wπθ ( sn , an ) lnPθ ( s′n|sn , an ) , ( 2 ) Equation ( 2 ) is a loss function weighted with wπθ ( s , a ) , which is the artificial weight , namely , the distribution ratio of offline data D and simulated future data D̂πθ . Thus , weighting with the artificial weight is justified as evaluating an upper bound of the policy evaluation error . 4.2 Policy optimization . The goal of policy optimization is finding π increasing ηπP∗ . In offline MBRL , the agent utilizes η π Pθ for the optimization . A lower bound of ηπP∗ is ηπP∗ ≥ η π Pθ − |η π P∗ − η π Pθ | ≥ η π Pθ −B √ Lπ ( θ ) − hmin . The first term evaluates the expected return by model Pθ , and the second penalizes the policy evaluation error . Inspired from increasing ηπP∗ by maximizing the lower bound , this paper defines a generalization loss function for offline MBRL by J ( θ , π ) = −ηπPθ +B ′ √ Lπ ( θ ) − hmin ( 3 ) where B′ ∈ [ 0 , B ] is a tunable coefficient to scale the penalty for the policy evaluation error . The agent optimizes an estimate of Equation ( 3 ) , where empirical averages of ηπPθ and L π ( θ ) are weighted with the artificial weight . Similarly to the discussion in Section 4.1 , weighting with the artificial ratio is justified as evaluating the expected return with the penalty for the policy evaluation error . Remark . If using the Telescoping lemma with respect to dπP∗ instead of d π Pθ , the policy evaluation error is similarly bounded as |ηπP∗ − η π Pθ | ≤ B √√√√E ( s , a ) ∼dDP∗ [ dπP∗ ( s , a ) dDP∗ ( s , a ) cθ ( s , a ) ] − hmin , ( 4 ) where dπP∗ ( s , a ) dDP∗ ( s , a ) is the natural weight . There are two advantages to consider Equation ( 1 ) rather than Equation ( 4 ) . The first is that the artificial weight can be more easily ontained , as described in Section 1 . 
The second is that Equation ( 1 ) leads to a standard MDP optimization problem , as described in Section 5.2 , while Equation ( 4 ) results in a more complicated optimization problem related to both dπPθ and d π P∗ . | The paper addresses the problem of covariate shift in offline memory-based reinforcement learning by proposing a novel method for more accurate evaluation of the expected return of a given policy by means of simulation of a learned model of the real system that might not be entirely correct. Two algorithm that uses a weighting factor, called the artificial weight, are proposed for tuning the learned system's model to predict more accurately the parts of the state space visited by the policy being evaluated, and thus improve the estimate of the expected return of this policy. This improved estimate can further be used for policy improvement, as customary for RL algorithms. An empirical evaluation in simulated environments demonstrates the validity of the proposed approach, both for policy evaluation and for policy improvement. Update: I appreciate the clarification about the evaluation task the authors are solving (pendulum swing-up, not stabilization). I also noticed that the other reviewers also found the advantages of the proposed algorithm not all that overwhelming. This, combined with the generally unclear presentation, makes the paper borderline. I am revising my rating to 6, just above borderline - it would still be good to accept this paper, if possible, and depending on the ratings of other papers. | science.mup_single_document_summarization.15204 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nWeighted model estimation for offline model-based reinforcement learning\n\nMain Content:\n1 Introduction . Reinforcement learning ( RL ) is a framework to learn a policy in an unknown environment [ 1 ] . Model-based RL ( MBRL ) is an approach that explicitly estimates a transition model and utilizes it to improve a policy . Compared to model-free RL , one major advantage of MBRL is the data efficiency [ 2–4 ] , which is important for applications where the data collection is expensive , such as robotics or healthcare . This paper focuses on offline MBRL , i.e . MBRL that learns a policy from previously collected datasets . One of the main challenges in offline MBRL is distribution shift [ 5 ] . The distribution of offline data is different from the distribution of data obtained in the future when applying an improved policy . This is a situation called covariate shift , where a model estimated using standard supervised learning techniques such as empirical risk minimization ( ERM ) may not generalize to test data . A major idea in MBRL is to estimate a model using such a standard method and to improve a policy using the estimated model in regions where the predictions are accurate [ 6–8 ] . A point to be improved here is model estimation without considering covariate shift . In supervised learning under covariate shift , importance-weighted ERM can obtain good predictive performance [ 9 ] . If a model in offline MBRL can be estimated in a similar way , then the estimated model will make more accurate predictions , resulting in increased policy improvement . Motivated by this , this paper tackles the issue of weighted model estimation considering covariate shift in offline MBRL . From the viewpoint of covariate shift , a natural idea is weighting with the ratio of the state-action distributions of offline data and real future data , as the agent uses a model to predict real future data . For convenience , this paper calls the ratio “ the natural weight. ” However , estimating the natural weight is one of the main challenges for off-policy evaluation [ 5 , 10 ] . Compared to density ratio estimation for supervised learning [ 11 ] , the difficulty of off-policy evaluation is the inaccessibility of samples from the distribution of real future data . 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . As an alternative , this paper considers weighting with the state-action distribution ratio of offline data and “ simulated ” future data . This paper calls the ratio “ the artificial weight , ” in contrast to the natural weight . Simulated future data can be generated in offline simulation , unlike real future data . Since samples from both distributions are available , the artificial weight can be relatively easily obtained by standard density ratio estimation techniques for supervised learning . Based on the artificial weight , this paper defines a loss function for offline MBRL in Section 4 . This paper presents an algorithm to optimize it in Section 5 . The question here is the validity of weighting with the artificial weight , as it may not be natural for covariate shift . This paper justifies it as evaluating an upper bound of the policy evaluation error in Section 4 . This paper demonstrates that it works in practice in numerical experiments in Section 6 . 
The above is the contribution of this paper . 2 Related Works . Most of offline MBRL researches [ 7 , 8 , 12 ] discuss how to learn a policy using a learned model . In contrast , this paper studies how to estimate a model considering covariate shift in MBRL , which depends on a policy to be applied . In principle , these can be combined as an algorithm that repeats between estimating a model and learning a policy , like Algorithm 3 . Importance-sampling-based methods estimate the natural weight , namely , the distribution ratio of offline data and real future data on a trajectory or state ( -action ) space [ 13–15 , 10 ] . The advantage is unbiasedness , while the disadvantage is high variance . Model-based methods , including this paper , evaluate policies utilizing models trained using data . Model-based methods have the opposite advantage and disadvantage of importance-sampling-based methods . Doubly robust methods combine importance-sampling-based and model-based methods [ 16–18 ] . In principle , incorporating the idea of this paper as a model-based component is possible . VAML [ 19 , 20 ] defines a state-action-wise loss function based on the expected error of Bellman operator and learns a model to optimize a weighted sum . Similar ideas are found in [ 21–23 ] . The main focus is the state-action-wise loss function , while there are few studies about how to weight it is . The few studies , e.g . GAMPS [ 24 ] , use the natural weight . In contrast , this paper simply uses the negative log likelihood loss function for a state-action pair and discusses the artificial weight , which is relatively easy to estimate . This paper is similar to VMBPO [ 25 ] in that an EM-style algorithm updates a model and a policy using a universal loss function . The differences is that this paper considers covariate shift but does not use the idea of RL as inference [ 26 ] . Formulation that explicitly incorporates RL as inference is also an interesting topic . Another approach is learning a model robust against covariate shift by reducing the difference between the distributions of offline data and future data on a feature space [ 22 , 27 , 28 ] . Since covariate shift measured in a feature space is encouraged to decrease , policy improvement increasing covariate shift is also discouraged . That is , such an idea implicitly limits policy improvement . This paper differs in that it is trying to utilize covariate shift . 3 Preliminaries . This paper discusses a discounted infinite horizon MDP [ 29 ] . Let s be a state . Let a be an action . Let ρ ( s ) be an initial state distribution . Let P ( s′|s , a ) be a transition probability function . Let r ( s , a ) be a reward function . Let γ be a discount factor . Let π be a policy . The value function is V πP ( s 0 ) = Eat∼π ( ·|st ) , st+1∼P ( ·|st , at ) [ ∞∑ t=0 γtr ( st , at ) ∣∣∣∣∣ s0 ] . The expected return is ηπP = Es0∼ρ [ V πP ( s ) ] . The state distribution is dπP ( s ) = ( 1− γ ) ∞∑ t=0 γtPr ( st = s ∣∣ ρ , π , P ) . The state-action distribution is dπP ( s , a ) = d π P ( s ) π ( a|s ) . Note that ηπP = E ( s , a ) ∼dπP [ r ( s , a ) 1−γ ] . Let P∗ denote the true transition probability , which is unknown to the agent . Let Pθ denote the transition probability model parameterized by θ . Let D = { ( sn , an , s′n ) } Nn=1 be offline data , which is obtained from P∗ . Let dDP∗ be the distribution of the input part of D , i.e . { ( sn , an ) } N n=1 . 
Let $\hat{D}^\pi_\theta$ be simulation data generated given $(P_\theta,\pi)$ with state initialization with probability $1-\gamma$, which is equal to sampling $(s,a)$ from $d^\pi_{P_\theta}$. Let $c_\theta(s,a)=\mathbb{E}_{s'\sim P_*(\cdot\mid s,a)}[-\ln P_\theta(s'\mid s,a)]$ be the cross entropy between $P_*(\cdot\mid s,a)$ and $P_\theta(\cdot\mid s,a)$. Let $h(s,a)=\mathbb{E}_{s'\sim P_*(\cdot\mid s,a)}[-\ln P_*(s'\mid s,a)]$ be the self-entropy of $P_*(\cdot\mid s,a)$. 4 Loss Function. This paper discusses the theory for the case where the reward function is known. For an unknown reward function, if the prediction of a reward model is bounded, the same loss functions can be derived (see Appendix). 4.1 Policy Evaluation. The goal of policy evaluation is estimating $\eta^\pi_{P_*}$ when given $\pi$. For a model-based setting, the goal reduces to finding $\theta$ that decreases $|\eta^\pi_{P_*}-\eta^\pi_{P_\theta}|$, which is bounded as $|\eta^\pi_{P_*}-\eta^\pi_{P_\theta}| \le \frac{\gamma}{1-\gamma}\mathbb{E}_{(s,a)\sim d^\pi_{P_\theta}}[|\sum_{s'}(P_*(s'\mid s,a)-P_\theta(s'\mid s,a))V^\pi_{P_*}(s')|] \le \frac{B}{\sqrt{2}}\mathbb{E}_{(s,a)\sim d^\pi_{P_\theta}}[\|P_*(\cdot\mid s,a)-P_\theta(\cdot\mid s,a)\|_1] \le B\sqrt{\mathbb{E}_{(s,a)\sim d^\pi_{P_\theta}}[c_\theta(s,a)-h(s,a)]} = B\sqrt{\mathbb{E}_{(s,a)\sim d^D_{P_*}}[w^\pi_\theta(s,a)c_\theta(s,a)]-\mathbb{E}_{(s,a)\sim d^D_{P_*}}[w^\pi_\theta(s,a)h(s,a)]} \le B\sqrt{\mathbb{E}_{(s,a)\sim d^D_{P_*}}[w^\pi_\theta(s,a)c_\theta(s,a)]-h_{\min}}$ (1), where $B=\frac{\gamma\max_{s,a}|r(s,a)|\sqrt{2}}{(1-\gamma)^2}$, $w^\pi_\theta(s,a)=\frac{d^\pi_{P_\theta}(s,a)}{d^D_{P_*}(s,a)}$, and $h_{\min}=\min_{s,a}h(s,a)$. The first inequality is the Telescoping lemma [21]. The second inequality holds from Hölder's inequality and $\|V^\pi_P\|_\infty\le\frac{\max_{s,a}|r(s,a)|}{1-\gamma}$. The third inequality holds from Pinsker's inequality and Jensen's inequality. When trying to optimize the third inequality, $\mathbb{E}_{(s,a)\sim d^D_{P_*}}[w^\pi_\theta(s,a)h(s,a)]$ is difficult to handle, as $h(s,a)$ is unknown. The fourth inequality (1) excludes the difficult part from the optimization. This paper considers decreasing the policy evaluation error by minimizing Equation (1). What is to be minimized is $L^\pi(\theta)=\mathbb{E}_{(s,a)\sim d^D_{P_*}}[w^\pi_\theta(s,a)c_\theta(s,a)]=\mathbb{E}_{(s,a)\sim d^D_{P_*},\,s'\sim P_*(\cdot\mid s,a)}[-w^\pi_\theta(s,a)\ln P_\theta(s'\mid s,a)]$. The empirical average of $L^\pi(\theta)$ is $\hat{L}^\pi(\theta)=-\frac{1}{N}\sum_{n=1}^{N}w^\pi_\theta(s_n,a_n)\ln P_\theta(s'_n\mid s_n,a_n)$ (2). Equation (2) is a loss function weighted with $w^\pi_\theta(s,a)$, which is the artificial weight, namely, the distribution ratio of offline data $D$ and simulated future data $\hat{D}^\pi_\theta$. Thus, weighting with the artificial weight is justified as evaluating an upper bound of the policy evaluation error. 4.2 Policy optimization. The goal of policy optimization is finding $\pi$ that increases $\eta^\pi_{P_*}$. In offline MBRL, the agent utilizes $\eta^\pi_{P_\theta}$ for the optimization. A lower bound of $\eta^\pi_{P_*}$ is $\eta^\pi_{P_*}\ge\eta^\pi_{P_\theta}-|\eta^\pi_{P_*}-\eta^\pi_{P_\theta}|\ge\eta^\pi_{P_\theta}-B\sqrt{L^\pi(\theta)-h_{\min}}$. The first term evaluates the expected return by model $P_\theta$, and the second penalizes the policy evaluation error. Motivated by increasing $\eta^\pi_{P_*}$ through maximizing the lower bound, this paper defines a generalization loss function for offline MBRL by $J(\theta,\pi)=-\eta^\pi_{P_\theta}+B'\sqrt{L^\pi(\theta)-h_{\min}}$ (3), where $B'\in[0,B]$ is a tunable coefficient to scale the penalty for the policy evaluation error. The agent optimizes an estimate of Equation (3), where the empirical averages of $\eta^\pi_{P_\theta}$ and $L^\pi(\theta)$ are weighted with the artificial weight. Similarly to the discussion in Section 4.1, weighting with the artificial ratio is justified as evaluating the expected return with the penalty for the policy evaluation error. Remark.
If using the Telescoping lemma with respect to $d^\pi_{P_*}$ instead of $d^\pi_{P_\theta}$, the policy evaluation error is similarly bounded as $|\eta^\pi_{P_*}-\eta^\pi_{P_\theta}|\le B\sqrt{\mathbb{E}_{(s,a)\sim d^D_{P_*}}[\frac{d^\pi_{P_*}(s,a)}{d^D_{P_*}(s,a)}c_\theta(s,a)]-h_{\min}}$ (4), where $\frac{d^\pi_{P_*}(s,a)}{d^D_{P_*}(s,a)}$ is the natural weight. There are two advantages to considering Equation (1) rather than Equation (4). The first is that the artificial weight can be more easily obtained, as described in Section 1. The second is that Equation (1) leads to a standard MDP optimization problem, as described in Section 5.2, while Equation (4) results in a more complicated optimization problem related to both $d^\pi_{P_\theta}$ and $d^\pi_{P_*}$.",
"role": "user"
},
{
"content": "The paper addresses the problem of covariate shift in offline memory-based reinforcement learning by proposing a novel method for more accurate evaluation of the expected return of a given policy by means of simulation of a learned model of the real system that might not be entirely correct. Two algorithm that uses a weighting factor, called the artificial weight, are proposed for tuning the learned system's model to predict more accurately the parts of the state space visited by the policy being evaluated, and thus improve the estimate of the expected return of this policy. This improved estimate can further be used for policy improvement, as customary for RL algorithms. An empirical evaluation in simulated environments demonstrates the validity of the proposed approach, both for policy evaluation and for policy improvement. Update: I appreciate the clarification about the evaluation task the authors are solving (pendulum swing-up, not stabilization). I also noticed that the other reviewers also found the advantages of the proposed algorithm not all that overwhelming. This, combined with the generally unclear presentation, makes the paper borderline. I am revising my rating to 6, just above borderline - it would still be good to accept this paper, if possible, and depending on the ratings of other papers.",
"role": "assistant"
}
] |
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT: Food cues of palatable food are omnipresent, thereby simulating the intake of unhealthy snack food among children. As a consequence, this might lead to a higher intake of energy-dense snacks and less fruit and vegetables, a habit that increases the risk of developing chronic diseases. The aim of this experimental study is to examine whether playing a memory game with fruit affects fruit intake among young children. We used a randomized between-subject design with 127 children (age: 7–12 y) who played a memory-game, containing either fruit (n = 64) or non-food products (n = 63). While playing the memory-game in a separate room in school during school hours, free intake of fruit (mandarins, apples, bananas, and grapes) was measured. Afterwards, the children completed self-report measures, and length and weight were assessed. The main finding is that playing a memory-game containing fruit increases overall fruit intake (P = 0.016). Children who played the fruit version of the memory-game ate more bananas (P = 0.015) and mandarins (P = 0.036) than children who played the non-food memory-game; no effects were found for apples (P > 0.05) and grapes (P > 0.05). The findings suggest that playing a memory-game with fruit stimulates fruit intake among young children. This is an important finding because children eat insufficient fruit, according to international standards, and more traditional health interventions have limited success. Healthy eating habits of children maintain when they become adults, making it important to stimulate fruit intake among children in an enjoyable way. <p> Trial registration: Nederlands Trial Register TC = 5687
Extracted Data: | [["memory game with fruit", "no fruit game", "higher banana intake", "significantly increased", "Children who played the fruit version of the memory-game ate more bananas (P = 0.015) and mandarins (P = 0.036) than children who played the non-food memory-game;"], ["memory game with fruit", "no fruit game", "higher apple intake", "no significant difference", "no effects were found for apples (P \u003e 0.05) and grapes (P \u003e 0.05)"], ["memory game with fruit", "no fruit game", "overall fruit intake", "significantly increased", "The main finding is that playing a memory-game containing fruit increases overall fruit intake (P = 0.016)."], ["memory game with fruit", "no fruit game", "higher grape intake", "no significant difference", "Children who played the fruit version of the memory-game ate more bananas (P = 0.015) and mandarins (P = 0.036) than children who played the non-food memory-game; no effects were found for apples (P \u003e 0.05) and grapes (P \u003e 0.05)"]] | science.evidence_inference.140 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT: Food cues of palatable food are omnipresent, thereby simulating the intake of unhealthy snack food among children. As a consequence, this might lead to a higher intake of energy-dense snacks and less fruit and vegetables, a habit that increases the risk of developing chronic diseases. The aim of this experimental study is to examine whether playing a memory game with fruit affects fruit intake among young children. We used a randomized between-subject design with 127 children (age: 7–12 y) who played a memory-game, containing either fruit (n = 64) or non-food products (n = 63). While playing the memory-game in a separate room in school during school hours, free intake of fruit (mandarins, apples, bananas, and grapes) was measured. Afterwards, the children completed self-report measures, and length and weight were assessed. The main finding is that playing a memory-game containing fruit increases overall fruit intake (P = 0.016). Children who played the fruit version of the memory-game ate more bananas (P = 0.015) and mandarins (P = 0.036) than children who played the non-food memory-game; no effects were found for apples (P > 0.05) and grapes (P > 0.05). The findings suggest that playing a memory-game with fruit stimulates fruit intake among young children. This is an important finding because children eat insufficient fruit, according to international standards, and more traditional health interventions have limited success. Healthy eating habits of children maintain when they become adults, making it important to stimulate fruit intake among children in an enjoyable way. <p> Trial registration: Nederlands Trial Register TC = 5687\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"memory game with fruit\", \"no fruit game\", \"higher banana intake\", \"significantly increased\", \"Children who played the fruit version of the memory-game ate more bananas (P = 0.015) and mandarins (P = 0.036) than children who played the non-food memory-game;\"], [\"memory game with fruit\", \"no fruit game\", \"higher apple intake\", \"no significant difference\", \"no effects were found for apples (P \\u003e 0.05) and grapes (P \\u003e 0.05)\"], [\"memory game with fruit\", \"no fruit game\", \"overall fruit intake\", \"significantly increased\", \"The main finding is that playing a memory-game containing fruit increases overall fruit intake (P = 0.016).\"], [\"memory game with fruit\", \"no fruit game\", \"higher grape intake\", \"no significant difference\", \"Children who played the fruit version of the memory-game ate more bananas (P = 0.015) and mandarins (P = 0.036) than children who played the non-food memory-game; no effects were found for apples (P \\u003e 0.05) and grapes (P \\u003e 0.05)\"]]",
"role": "assistant"
}
] |
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
Primary cervical cytology screening with HPV detection has higher longitudinal sensitivity to detect severe cervical intraepithelial neoplasia than conventional cytology.
Research Abstract:
BACKGROUND Human papillomavirus (HPV) testing is more sensitive for the detection of high-grade cervical lesions than is cytology, but detection of HPV by DNA screening in two screening rounds 5 years apart has not been assessed.
The aim of this study was to assess whether HPV DNA testing in the first screen decreases detection of cervical intraepithelial neoplasia (CIN) grade 3 or worse, CIN grade 2 or worse, and cervical cancer in the second screening.
METHODS In this randomised trial, women aged 29-56 years participating in the cervical screening programme in the Netherlands were randomly assigned to receive HPV DNA (GP5+/6+-PCR method) and cytology co-testing or cytology testing alone, from January, 1999, to September, 2002.
Randomisation (in a 1:1 ratio) was done with computer-generated random numbers after the cervical specimen had been taken.
At the second screening 5 years later, HPV DNA and cytology co-testing was done in both groups; researchers were masked to the patient's assignment.
The primary endpoint was the number of CIN grade 3 or worse detected.
Analysis was done by intention to screen.
The trial is now finished and is registered, number ISRCTN20781131.
FINDINGS 22,420 women were randomly assigned to the intervention group and 22 518 to the control group; 19 999 in the intervention group and 20,106 in the control group were eligible for analysis at the first screen.
At the second screen, 19 579 women in the intervention group and 19,731 in the control group were eligible, of whom 16,750 and 16,743, respectively, attended the second screen.
In the second round, CIN grade 3 or worse was less common in the intervention group than in the control group (88 of 19 579 in the intervention group vs 122 of 19,731 in the control group; relative risk 0·73, 95% CI 0·55-0·96; p=0·023).
Cervical cancer was also less common in the intervention group than in the control group (four of 19 579 in the intervention group vs 14 of 19,731; 0·29, 0·10-0·87; p=0·031).
In the baseline round, detection of CIN grade 3 or worse did not differ significantly between groups (171 of 19 999 vs 150 of 20,106; 1·15, 0·92-1·43; p=0·239) but was significantly more common in women with normal cytology (34 of 19,286 vs 12 of 19,373; 2·85, 1·47-5·49; p=0·001).
Furthermore, significantly more cases of CIN grade 2 or worse were detected in the intervention group than in the control group (267 of 19 999 vs 215 of 20,106; 1·25, 1·05-1·50; p=0·015).
In the second screen, fewer HPV16-positive CIN grade 3 or worse were detected in the intervention group than in the control group (17 of 9481 vs 35 of 9354; 0·48, 0·27-0·85; p=0·012); detection of non-HPV16-positive CIN grade 3 or worse did not differ between groups (25 of 9481 vs 25 of 9354; 0·99, 0·57-1·72; p=1·00).
The cumulative detection of CIN grade 3 or worse and CIN grade 2 or worse did not differ significantly between study arms, neither for the whole study group (CIN grade 3 or worse: 259 of 19 999 vs 272 of 20,106; 0·96, 0·81-1·14, p=0·631; CIN grade 2 or worse: 427 of 19 999 vs 399 of 20,106; 1·08, 0·94-1·24; p=0·292), nor for subgroups of women invited for the first time (CIN grade 3 or worse in women aged 29-33 years: 102 of 3139 vs 105 of 3128; 0·97, 0·74-1·27; CIN grade 2 or worse in women aged 29-33 years: 153 of 3139 vs 151 of 3128; 1·01, 0·81-1·26; CIN grade 3 or worse in women aged 34-56 years: 157 of 16,860 vs 167 of 16 978; 0·95, 0·76-1·18; CIN grade 2 or worse in women aged 34-56 years: 274 of 16,860 vs 248 of 16 978; 1·11, 0·94-1·32).
INTERPRETATION Implementation of HPV DNA testing in cervical screening leads to earlier detection of clinically relevant CIN grade 2 or worse, which when adequately treated, improves protection against CIN grade 3 or worse and cervical cancer.
Early detection of high-grade cervical legions caused by HPV16 was a major component of this benefit.
Our results lend support to the use of HPV DNA testing for all women aged 29 years and older.
FUNDING Zorg Onderzoek Nederland (Netherlands Organisation for Health Research and Development). | {"verdict": "SUPPORT", "evidence": ["Furthermore, significantly more cases of CIN grade 2 or worse were detected in the intervention group than in the control group (267 of 19 999 vs 215 of 20,106; 1\u00b725, 1\u00b705-1\u00b750; p=0\u00b7015).", "INTERPRETATION Implementation of HPV DNA testing in cervical screening leads to earlier detection of clinically relevant CIN grade 2 or worse, which when adequately treated, improves protection against CIN grade 3 or worse and cervical cancer."]} | science.scifact_entailment.630 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nPrimary cervical cytology screening with HPV detection has higher longitudinal sensitivity to detect severe cervical intraepithelial neoplasia than conventional cytology.\n\nResearch Abstract:\nBACKGROUND Human papillomavirus (HPV) testing is more sensitive for the detection of high-grade cervical lesions than is cytology, but detection of HPV by DNA screening in two screening rounds 5 years apart has not been assessed.\nThe aim of this study was to assess whether HPV DNA testing in the first screen decreases detection of cervical intraepithelial neoplasia (CIN) grade 3 or worse, CIN grade 2 or worse, and cervical cancer in the second screening.\nMETHODS In this randomised trial, women aged 29-56 years participating in the cervical screening programme in the Netherlands were randomly assigned to receive HPV DNA (GP5+/6+-PCR method) and cytology co-testing or cytology testing alone, from January, 1999, to September, 2002.\nRandomisation (in a 1:1 ratio) was done with computer-generated random numbers after the cervical specimen had been taken.\nAt the second screening 5 years later, HPV DNA and cytology co-testing was done in both groups; researchers were masked to the patient's assignment.\nThe primary endpoint was the number of CIN grade 3 or worse detected.\nAnalysis was done by intention to screen.\nThe trial is now finished and is registered, number ISRCTN20781131.\nFINDINGS 22,420 women were randomly assigned to the intervention group and 22 518 to the control group; 19 999 in the intervention group and 20,106 in the control group were eligible for analysis at the first screen.\nAt the second screen, 19 579 women in the intervention group and 19,731 in the control group were eligible, of whom 16,750 and 16,743, respectively, attended the second screen.\nIn the second round, CIN grade 3 or worse was less common in the intervention group than in the control group (88 of 19 579 in the intervention group vs 122 of 19,731 in the control group; relative risk 0·73, 95% CI 0·55-0·96; p=0·023).\nCervical cancer was also less common in the intervention group than in the control group (four of 19 579 in the intervention group vs 14 of 19,731; 0·29, 0·10-0·87; p=0·031).\nIn the baseline round, detection of CIN grade 3 or worse did not differ significantly between groups (171 of 19 999 vs 150 of 20,106; 1·15, 0·92-1·43; p=0·239) but was significantly more common in women with normal cytology (34 of 19,286 vs 12 of 19,373; 2·85, 1·47-5·49; p=0·001).\nFurthermore, significantly more cases of CIN grade 2 or worse were detected in the intervention group than in the control group (267 of 19 999 vs 215 of 20,106; 1·25, 1·05-1·50; p=0·015).\nIn the second screen, fewer HPV16-positive CIN grade 3 or worse were detected in the intervention group than in the control group (17 of 9481 vs 35 of 9354; 0·48, 0·27-0·85; p=0·012); detection of non-HPV16-positive CIN grade 3 or worse did not differ between groups (25 of 9481 vs 25 of 9354; 0·99, 0·57-1·72; p=1·00).\nThe cumulative detection of CIN grade 3 or worse and CIN grade 2 or worse did not differ significantly between study arms, neither for the whole study group (CIN grade 3 or worse: 259 of 19 999 vs 272 of 20,106; 0·96, 0·81-1·14, p=0·631; CIN grade 2 or worse: 427 of 19 999 vs 399 of 20,106; 1·08, 0·94-1·24; p=0·292), nor for subgroups of women invited for the first time (CIN grade 3 or worse in women aged 29-33 years: 102 of 3139 vs 105 of 3128; 0·97, 
0·74-1·27; CIN grade 2 or worse in women aged 29-33 years: 153 of 3139 vs 151 of 3128; 1·01, 0·81-1·26; CIN grade 3 or worse in women aged 34-56 years: 157 of 16,860 vs 167 of 16 978; 0·95, 0·76-1·18; CIN grade 2 or worse in women aged 34-56 years: 274 of 16,860 vs 248 of 16 978; 1·11, 0·94-1·32).\nINTERPRETATION Implementation of HPV DNA testing in cervical screening leads to earlier detection of clinically relevant CIN grade 2 or worse, which when adequately treated, improves protection against CIN grade 3 or worse and cervical cancer.\nEarly detection of high-grade cervical legions caused by HPV16 was a major component of this benefit.\nOur results lend support to the use of HPV DNA testing for all women aged 29 years and older.\nFUNDING Zorg Onderzoek Nederland (Netherlands Organisation for Health Research and Development).",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"Furthermore, significantly more cases of CIN grade 2 or worse were detected in the intervention group than in the control group (267 of 19 999 vs 215 of 20,106; 1\\u00b725, 1\\u00b705-1\\u00b750; p=0\\u00b7015).\", \"INTERPRETATION Implementation of HPV DNA testing in cervical screening leads to earlier detection of clinically relevant CIN grade 2 or worse, which when adequately treated, improves protection against CIN grade 3 or worse and cervical cancer.\"]}",
"role": "assistant"
}
] |
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
Generating High-Fidelity Privacy-Conscious Synthetic Patient Data for Causal Effect Estimation with Multiple Treatments
Main Content:
1 INTRODUCTION . In health care , studying the causal effect of treatments on patients is critical to advance personalized medicine . Observing an association between a drug ( exposure or treatment ) and subsequent adverse or beneficial event ( outcome ) is not enough to claim the treatment is indeed the cause of the observed outcome due to the existence of confounding variables , defined as factors that affect both the treatments and outcomes . Randomized clinical trials ( RCTs ) have been the gold standard for estimating causal relationships between intervention and outcome . However , RCTs are sometimes not feasible due to logistical , ethical , or financial considerations . Further , randomized experiments may not always be generalizable , due to the restricted population used in the experiments . In the past decade , observational data has become a viable alternative to RCTs to infer causal treatment effects due to both the increasingly available patient data captured in Electronic Health Records ( EHRs ) ( Henry et al . ( 2016 ) ) and the remarkable advances of machine learning techniques and capabilities . Typically , EHRs capture potential confounding factors such as race , gender , geographic location , eventual proxies of social determinants of health , as well as medical characteristics such as comorbidities and laboratory results . Many causal inference models have been proposed to estimate treatment effects from observational data . Validation of these models with realistic benchmarks , however , remains a fundamental chal- lenge due to three reasons . First , the ground truth of treatment effects in a realistic setting is unknown . In real world , we can not compute the treatment effect by directly comparing the potential outcomes of different treatments because of the fundamental problem of causal inference : for a given patient and treatment , we can only observe the factual , defined as the patient outcome for the given treatment , but not the counterfactual , defined as the patient outcome if the treatment had been different . Second , legal and ethical issues around un-consented patient data and privacy created a significant barrier in accessing routinely EHRs by the machine learning community . In order to mitigate the legal and ethical risks of sharing sensitive information , de-identification of patient records is a commonly used practice . However , previous work has shown that de-identification is not sufficient for avoiding re-identification through linkage with other identifiable datasets Sweeney ( 1997 ) ; Emam et al . ( 2011 ) ; Malin & Sweeney ( 2004 ) . Third , most publicly available datasets support either binary or very few treatments , while there has been growing literature developing techniques with multiple treatments in recent years ( Lopez & Gutman ( 2017 ) ) . To address these challenges , in this work we generated a large-scale and realistic patient dataset that mimics real patient data distributions , supports multiple treatments , and provides ground truth for the effects of these treatments . The datasets we generated are synthetic patients with hypertension modeled on a massive nationwide cohort patients ’ history of diagnoses , medications , and laboratory values . We designed a data generation process by adapting an Anonymization Through Data Synthesis Using Generative Adversarial Networks ( ADS-GAN by Yoon et al . ( 2020 ) ) model for fictitious patient information generation and using a neural network for treatment outcome generation . 
The synthetic dataset demonstrates strong similarity to the original dataset as measured by the Wasserstein distance . In addition , we ensure that the original patients ’ privacy is preserved so that our dataset can be made available to the research community to evaluate causal inference models . We demonstrated the use of the synthetic data by applying our dataset to evaluate five models : one inverse probability treatment weighting ( IPTW ) model , one propensity matching model , one propensity score stratification model all introduced by Rosenbaum & Rubin ( 1983 ) , and two models in the doubly robust family ( Foster & Syrgkanis ( 2020 ) ) . To our knowledge , this is the first large scale dataset that mimics real data joint distributions with multiple treatments and known causal effects . The approach we used can be readily extended to other types of diseases in the clinical domain , and to datasets in other domains as well . The rest of this paper is organized as follows . In Section 2 , we discuss related works . The details of our method are presented in Section 4 . In Section 5 , we discuss the evaluation metrics for the quality of the data and results of evaluating several established causal inference models with the data . We discuss the limitations of the work in Section 6 and conclude the paper in Section 7 . 2 RELATED WORK . Our work is related to several existing works on publicly available databases , fictitious patient record creations , and data generation processes . 2.1 PUBLICLY AVAILABLE DATASETS . First used in ( Hill ( 2011 ) ) , the Infant Health and Development Program ( IHDP ) is a randomized controlled study designed to evaluate the effect of home visit from specialist doctors on the cognitive test scores of premature infants . It contains 747 subjects and 25 variables that describe the characteristics of the infants and their mothers . The Jobs dataset by LaLonde ( 1986 ) is a benchmark used by the causal inference community , where the treatment is job training and the outcomes are income and employment status after training . The Twins dataset , originally used for evaluating causal inference in ( Louizos et al . ( 2017 ) ; Yao et al . ( 2018 ) ) , consists of samples from twin births in the U.S. between the years 1989 and 1991 provided in ( Almond et al . ( 2005 ) ) . The Annual Atlantic Causal Inference Conference ( ACIC ) data challenge provides an opportunity to compare causal inference methodologies across a variety of data generating processes . Some of the above mentioned datasets do not provide true causal effects . Others are small in size so the models validated on such datasets may perform very differently in a more general real-world setting . All the datasets created for ACIC challenge were designed specifically for competitions . The covariates in the data are either drawn from publicly available databases , or simulated . In the former case , the datasets are limited by small populations ( Du et al . ( 2021 ) ) and arbitrarily designed data generation processes , which did not aim to capture any real-world causal relationships ( Karavani et al . ( 2019 ) ) . In the latter case , the distribution of the data is not realistic , i.e , dissimilar to the distribution of any real dataset . 2.2 EHR DATA GENERATION . Walonoski et al . ( 2018 ) generated synthetic EHRs based on publicly available information . The focus of their work is on generating the life cycle of a patient and how a disease evolves over time . Goncalves et al . 
( 2020 ) evaluated three synthetic data generation models–probabilistic models , classification-based imputation models , and generative adversarial neural networks–in generating realistic EHR data . Tucker et al . ( 2020 ) used a Bayesian network model to generate synthetic data based on the Clinical Practice Research Datalink ( CPRD ) in the UK . Benaim et al . ( 2020 ) evaluated synthetic data produced from 5 contemporary studies using MDClone . Wang et al . ( 2021 ) proposed a framework to generate and evaluate synthetic health care data , and the key requirements of synthetic data for multiple purposes . All of these works focus on EHR data generation producing patient variables but without ground truth for causal effects . In contrast , the focus of our work is not only on generating patient variables , but producing ground truth for causal effects as well . 2.3 POTENTIAL OUTCOME GENERATION . To validate their models , many researchers such as ( Schuler & Rose ( 2017 ) ) created synthetic covariates and produced potential outcomes with a designed data generation process . Such datasets are not designed to approximate any real data distributions . Franklin et al . ( 2014 ) ; Neal et al . ( 2020 ) generated potential outcomes from covariates with known causal effects , but without any regard to patient privacy . We addressed the critical issue of patient privacy concerns so our data can be made available for the research community to evaluate their models . 3 PATIENT CLAIM DATA AND INCLUSION EXCLUSION CRITERIA . To make our synthetic data realistic , we generate the data based on a real-world patient claim database from a large insurance company in the United States . This database contains 5 billion insurance claims ( diagnoses , procedures , and drug prescriptions or refills ) and lab test results from 56.4 million patients who subscribed to the company ’ s service within a 5-year time period between December 2014 and December 2020 . From this database , we extracted a subset of patients affected by hypertension . We chose hypertension because there are a large number of related claims , making it easier to learn the data distribution . In addition , since it is a condition affecting nearly half of adults in the United States ( 116 million , or 47 % ) , our generated dataset can be directly used for clinical researchers to develop and evaluate their models for this important disease . Patients are included in the dataset if they have a medical claim indicating hypertension ( ICD code I10 , I11.9 , I12.9 , and I13.10 ) or currently treated with anti-hypertensive medications . We exclude patients from the dataset if they are age < 18 or age > 85 , affected by white coat hypertension , secondary hypertension , malignant cancers , dementia , or are pregnant . After applying the above mentioned inclusion and exclusion criteria , we have about 1.6 million patients included in this study . We further exclude patients treated with a combination of drugs rather than a single drug . We then rank the drugs by the number of patients treated with each drug , and only keep patients treated with one of the 28 most popular drugs . These filtering steps produce about 262 , 000 patients in the study . The distribution of this dataset is then learned and used to generate synthetic patients , viewed as samples drawn from the learned distribution . 
The patients ’ diagnoses and treatment history and how their conditions evolve over time are captured by trajectory data consisting of labs , diagnoses and their corresponding dates . For the convenience of data processing and analysis , we convert the trajectory data into tabular data with rows representing different patients ( samples ) and columns representing patient features ( variables ) including patient demographics , diagnoses , medications and labs . In Table . 1 , we list and briefly describe these 60 patient variables : 2 variables ( F1 ) describing the systolic blood pressure before the treatment and the date it was measured , 2 variables ( F2 ) describing the same metric but after the treatment , 3 variables ( F3 ) indicating current and prior drug usage and refill information , 4 variables ( F4 ) describing patient basic information ( age , gender , ethnicity ) , 30 variables ( F5 ) indicating laboratory measurements , 2 variables ( F6 ) indicating the presence or absence of comorbid conditions defined by the Charlson Comorbidity Index ( Charlson et al . ( 1987 ) ) , 15 variables ( F7 ) describing the patient ’ s zip code , the racial makeup and income levels in the patient ’ s zip code tabulation area ( ZCTA ) , 2 variables ( F8 ) indicating meta information . In this study , we are interested in the causal effects of anti-hypertensive drugs ( current drugs of F3 ) on patient outcomes , measured as the difference between the first ( F1 ) and second lab results ( F2 ) . | The paper studies the problem of generating synthetic patient data for the evaluation of causal inference models. The generated patient data is expected to highly mimic the distribution of the original dataset while also taking patient privacy into consideration. Experiments are conducted on the synthetic dataset with three well-established causal inference models. | science.mup_single_document_summarization.3580 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nGenerating High-Fidelity Privacy-Conscious Synthetic Patient Data for Causal Effect Estimation with Multiple Treatments\n\nMain Content:\n1 INTRODUCTION . In health care , studying the causal effect of treatments on patients is critical to advance personalized medicine . Observing an association between a drug ( exposure or treatment ) and subsequent adverse or beneficial event ( outcome ) is not enough to claim the treatment is indeed the cause of the observed outcome due to the existence of confounding variables , defined as factors that affect both the treatments and outcomes . Randomized clinical trials ( RCTs ) have been the gold standard for estimating causal relationships between intervention and outcome . However , RCTs are sometimes not feasible due to logistical , ethical , or financial considerations . Further , randomized experiments may not always be generalizable , due to the restricted population used in the experiments . In the past decade , observational data has become a viable alternative to RCTs to infer causal treatment effects due to both the increasingly available patient data captured in Electronic Health Records ( EHRs ) ( Henry et al . ( 2016 ) ) and the remarkable advances of machine learning techniques and capabilities . Typically , EHRs capture potential confounding factors such as race , gender , geographic location , eventual proxies of social determinants of health , as well as medical characteristics such as comorbidities and laboratory results . Many causal inference models have been proposed to estimate treatment effects from observational data . Validation of these models with realistic benchmarks , however , remains a fundamental chal- lenge due to three reasons . First , the ground truth of treatment effects in a realistic setting is unknown . In real world , we can not compute the treatment effect by directly comparing the potential outcomes of different treatments because of the fundamental problem of causal inference : for a given patient and treatment , we can only observe the factual , defined as the patient outcome for the given treatment , but not the counterfactual , defined as the patient outcome if the treatment had been different . Second , legal and ethical issues around un-consented patient data and privacy created a significant barrier in accessing routinely EHRs by the machine learning community . In order to mitigate the legal and ethical risks of sharing sensitive information , de-identification of patient records is a commonly used practice . However , previous work has shown that de-identification is not sufficient for avoiding re-identification through linkage with other identifiable datasets Sweeney ( 1997 ) ; Emam et al . ( 2011 ) ; Malin & Sweeney ( 2004 ) . Third , most publicly available datasets support either binary or very few treatments , while there has been growing literature developing techniques with multiple treatments in recent years ( Lopez & Gutman ( 2017 ) ) . To address these challenges , in this work we generated a large-scale and realistic patient dataset that mimics real patient data distributions , supports multiple treatments , and provides ground truth for the effects of these treatments . 
The datasets we generated are synthetic patients with hypertension modeled on a massive nationwide cohort patients ’ history of diagnoses , medications , and laboratory values . We designed a data generation process by adapting an Anonymization Through Data Synthesis Using Generative Adversarial Networks ( ADS-GAN by Yoon et al . ( 2020 ) ) model for fictitious patient information generation and using a neural network for treatment outcome generation . The synthetic dataset demonstrates strong similarity to the original dataset as measured by the Wasserstein distance . In addition , we ensure that the original patients ’ privacy is preserved so that our dataset can be made available to the research community to evaluate causal inference models . We demonstrated the use of the synthetic data by applying our dataset to evaluate five models : one inverse probability treatment weighting ( IPTW ) model , one propensity matching model , one propensity score stratification model all introduced by Rosenbaum & Rubin ( 1983 ) , and two models in the doubly robust family ( Foster & Syrgkanis ( 2020 ) ) . To our knowledge , this is the first large scale dataset that mimics real data joint distributions with multiple treatments and known causal effects . The approach we used can be readily extended to other types of diseases in the clinical domain , and to datasets in other domains as well . The rest of this paper is organized as follows . In Section 2 , we discuss related works . The details of our method are presented in Section 4 . In Section 5 , we discuss the evaluation metrics for the quality of the data and results of evaluating several established causal inference models with the data . We discuss the limitations of the work in Section 6 and conclude the paper in Section 7 . 2 RELATED WORK . Our work is related to several existing works on publicly available databases , fictitious patient record creations , and data generation processes . 2.1 PUBLICLY AVAILABLE DATASETS . First used in ( Hill ( 2011 ) ) , the Infant Health and Development Program ( IHDP ) is a randomized controlled study designed to evaluate the effect of home visit from specialist doctors on the cognitive test scores of premature infants . It contains 747 subjects and 25 variables that describe the characteristics of the infants and their mothers . The Jobs dataset by LaLonde ( 1986 ) is a benchmark used by the causal inference community , where the treatment is job training and the outcomes are income and employment status after training . The Twins dataset , originally used for evaluating causal inference in ( Louizos et al . ( 2017 ) ; Yao et al . ( 2018 ) ) , consists of samples from twin births in the U.S. between the years 1989 and 1991 provided in ( Almond et al . ( 2005 ) ) . The Annual Atlantic Causal Inference Conference ( ACIC ) data challenge provides an opportunity to compare causal inference methodologies across a variety of data generating processes . Some of the above mentioned datasets do not provide true causal effects . Others are small in size so the models validated on such datasets may perform very differently in a more general real-world setting . All the datasets created for ACIC challenge were designed specifically for competitions . The covariates in the data are either drawn from publicly available databases , or simulated . In the former case , the datasets are limited by small populations ( Du et al . 
( 2021 ) ) and arbitrarily designed data generation processes , which did not aim to capture any real-world causal relationships ( Karavani et al . ( 2019 ) ) . In the latter case , the distribution of the data is not realistic , i.e , dissimilar to the distribution of any real dataset . 2.2 EHR DATA GENERATION . Walonoski et al . ( 2018 ) generated synthetic EHRs based on publicly available information . The focus of their work is on generating the life cycle of a patient and how a disease evolves over time . Goncalves et al . ( 2020 ) evaluated three synthetic data generation models–probabilistic models , classification-based imputation models , and generative adversarial neural networks–in generating realistic EHR data . Tucker et al . ( 2020 ) used a Bayesian network model to generate synthetic data based on the Clinical Practice Research Datalink ( CPRD ) in the UK . Benaim et al . ( 2020 ) evaluated synthetic data produced from 5 contemporary studies using MDClone . Wang et al . ( 2021 ) proposed a framework to generate and evaluate synthetic health care data , and the key requirements of synthetic data for multiple purposes . All of these works focus on EHR data generation producing patient variables but without ground truth for causal effects . In contrast , the focus of our work is not only on generating patient variables , but producing ground truth for causal effects as well . 2.3 POTENTIAL OUTCOME GENERATION . To validate their models , many researchers such as ( Schuler & Rose ( 2017 ) ) created synthetic covariates and produced potential outcomes with a designed data generation process . Such datasets are not designed to approximate any real data distributions . Franklin et al . ( 2014 ) ; Neal et al . ( 2020 ) generated potential outcomes from covariates with known causal effects , but without any regard to patient privacy . We addressed the critical issue of patient privacy concerns so our data can be made available for the research community to evaluate their models . 3 PATIENT CLAIM DATA AND INCLUSION EXCLUSION CRITERIA . To make our synthetic data realistic , we generate the data based on a real-world patient claim database from a large insurance company in the United States . This database contains 5 billion insurance claims ( diagnoses , procedures , and drug prescriptions or refills ) and lab test results from 56.4 million patients who subscribed to the company ’ s service within a 5-year time period between December 2014 and December 2020 . From this database , we extracted a subset of patients affected by hypertension . We chose hypertension because there are a large number of related claims , making it easier to learn the data distribution . In addition , since it is a condition affecting nearly half of adults in the United States ( 116 million , or 47 % ) , our generated dataset can be directly used for clinical researchers to develop and evaluate their models for this important disease . Patients are included in the dataset if they have a medical claim indicating hypertension ( ICD code I10 , I11.9 , I12.9 , and I13.10 ) or currently treated with anti-hypertensive medications . We exclude patients from the dataset if they are age < 18 or age > 85 , affected by white coat hypertension , secondary hypertension , malignant cancers , dementia , or are pregnant . After applying the above mentioned inclusion and exclusion criteria , we have about 1.6 million patients included in this study . 
We further exclude patients treated with a combination of drugs rather than a single drug . We then rank the drugs by the number of patients treated with each drug , and only keep patients treated with one of the 28 most popular drugs . These filtering steps produce about 262 , 000 patients in the study . The distribution of this dataset is then learned and used to generate synthetic patients , viewed as samples drawn from the learned distribution . The patients ’ diagnoses and treatment history and how their conditions evolve over time are captured by trajectory data consisting of labs , diagnoses and their corresponding dates . For the convenience of data processing and analysis , we convert the trajectory data into tabular data with rows representing different patients ( samples ) and columns representing patient features ( variables ) including patient demographics , diagnoses , medications and labs . In Table . 1 , we list and briefly describe these 60 patient variables : 2 variables ( F1 ) describing the systolic blood pressure before the treatment and the date it was measured , 2 variables ( F2 ) describing the same metric but after the treatment , 3 variables ( F3 ) indicating current and prior drug usage and refill information , 4 variables ( F4 ) describing patient basic information ( age , gender , ethnicity ) , 30 variables ( F5 ) indicating laboratory measurements , 2 variables ( F6 ) indicating the presence or absence of comorbid conditions defined by the Charlson Comorbidity Index ( Charlson et al . ( 1987 ) ) , 15 variables ( F7 ) describing the patient ’ s zip code , the racial makeup and income levels in the patient ’ s zip code tabulation area ( ZCTA ) , 2 variables ( F8 ) indicating meta information . In this study , we are interested in the causal effects of anti-hypertensive drugs ( current drugs of F3 ) on patient outcomes , measured as the difference between the first ( F1 ) and second lab results ( F2 ) .",
"role": "user"
},
{
"content": "The paper studies the problem of generating synthetic patient data for the evaluation of causal inference models. The generated patient data is expected to highly mimic the distribution of the original dataset while also taking patient privacy into consideration. Experiments are conducted on the synthetic dataset with three well-established causal inference models.",
"role": "assistant"
}
] |
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.BACKGROUND: Diabetes is now a global epidemic, but most cases are now in low- and middle-income countries. Diabetes self-management education (DSME) is key to enabling patients to manage their chronic condition and can reduce the occurrence of costly and devastating complications. However, there is limited evidence on the effectiveness of different DSME programmes in resource limited settings.
ABSTRACT.METHODS: We conducted an unblinded, parallel-group, individually-randomised controlled trial at the University of Calabar Teaching Hospital (Nigeria) to evaluate whether an intensive and systematic DSME programme, using structured guidelines, improved glycaemic control compared to the existing ad hoc patient education (clinical practice was unchanged). Eligible patients (≥18 years, HbA1c > 8.5% and physically able to participate) were randomly allocated by permuted block randomisation to participate for six months in either an intensive or conventional education group. The primary outcome was HbA1c (%) at six-months.
ABSTRACT.RESULTS: We randomised 59 participants to each group and obtained six-month HbA1c outcomes from 53 and 51 participants in the intensive and conventional education groups, respectively. Intensive group participants had a mean six-month HbA1c (%) of 8.4 (95% CI: 8 to 8.9), while participants in the conventional education group had a mean six-month HbA1c (%) of 10.2 (95% CI: 9.8 to 10.7). The difference was statistically (P < 0.0001) and clinically significant, with intensive group participants having HbA1c outcomes on average -1.8 (95% CI: -2.4 to -1.2) percentage points lower than conventional group participants. Results were robust to adjustment for a range of covariates and multiple imputation of missing outcome data.
ABSTRACT.CONCLUSIONS: This study demonstrates the effectiveness of a structured, guideline-based DSME intervention in a LMIC setting versus a pragmatic comparator. The intervention is potentially replicable at other levels of the Nigerian healthcare system and in other LMICs, where nurses/diabetes educators can run the programme.
ABSTRACT.TRIAL REGISTRATION: Pan African Clinical Trial Registry PACTR20130200047835
Extracted Data: | [["Intensive Patient Education", "Conventional Education", "HbA1c", "significantly decreased", "Intensive group participants had a mean six-month HbA1c (%) of 8.4 (95% CI: 8 to 8.9), while participants in the conventional education group had a mean six-month HbA1c (%) of 10.2 (95% CI: 9.8 to 10.7). The difference was statistically (P \u003c 0.0001) and clinically significant,"]] | science.evidence_inference.174 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: Diabetes is now a global epidemic, but most cases are now in low- and middle-income countries. Diabetes self-management education (DSME) is key to enabling patients to manage their chronic condition and can reduce the occurrence of costly and devastating complications. However, there is limited evidence on the effectiveness of different DSME programmes in resource limited settings.\n\nABSTRACT.METHODS: We conducted an unblinded, parallel-group, individually-randomised controlled trial at the University of Calabar Teaching Hospital (Nigeria) to evaluate whether an intensive and systematic DSME programme, using structured guidelines, improved glycaemic control compared to the existing ad hoc patient education (clinical practice was unchanged). Eligible patients (≥18 years, HbA1c > 8.5% and physically able to participate) were randomly allocated by permuted block randomisation to participate for six months in either an intensive or conventional education group. The primary outcome was HbA1c (%) at six-months.\n\nABSTRACT.RESULTS: We randomised 59 participants to each group and obtained six-month HbA1c outcomes from 53 and 51 participants in the intensive and conventional education groups, respectively. Intensive group participants had a mean six-month HbA1c (%) of 8.4 (95% CI: 8 to 8.9), while participants in the conventional education group had a mean six-month HbA1c (%) of 10.2 (95% CI: 9.8 to 10.7). The difference was statistically (P < 0.0001) and clinically significant, with intensive group participants having HbA1c outcomes on average -1.8 (95% CI: -2.4 to -1.2) percentage points lower than conventional group participants. Results were robust to adjustment for a range of covariates and multiple imputation of missing outcome data.\n\nABSTRACT.CONCLUSIONS: This study demonstrates the effectiveness of a structured, guideline-based DSME intervention in a LMIC setting versus a pragmatic comparator. The intervention is potentially replicable at other levels of the Nigerian healthcare system and in other LMICs, where nurses/diabetes educators can run the programme.\n\nABSTRACT.TRIAL REGISTRATION: Pan African Clinical Trial Registry PACTR20130200047835\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"Intensive Patient Education\", \"Conventional Education\", \"HbA1c\", \"significantly decreased\", \"Intensive group participants had a mean six-month HbA1c (%) of 8.4 (95% CI: 8 to 8.9), while participants in the conventional education group had a mean six-month HbA1c (%) of 10.2 (95% CI: 9.8 to 10.7). The difference was statistically (P \\u003c 0.0001) and clinically significant,\"]]",
"role": "assistant"
}
] |
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
Accelerated Variance Reduced Stochastic Extragradient Method for Sparse Machine Learning Problems
Main Content:
1 INTRODUCTION . In this paper , we mainly consider the following composite convex optimization problem : min x∈Rd { P ( x ) def = F ( x ) +R ( x ) = 1 n n∑ i=1 fi ( x ) +R ( x ) } ( 1 ) where F ( x ) : Rd→R is the average of smooth convex component functions fi ( x ) , and R ( x ) is a relatively simple convex function ( but may not be differentiable ) . In this paper , we use ‖·‖ to denote the standard Euclidean norm , and ‖·‖1 to denote the ` 1-norm . Moreover , we use P∗ to denote the real optimal value of P ( · ) , and P̂∗ to denote the optimal value obtained by algorithms . This form of optimization problems often appears in machine learning , signal processing , data science , statistics and operations research , and has a wide range of applications such as regularized empirical risk minimization ( ERM ) , sparse coding for image and video recovery , and representation learning for object recognition . Specifically , for a collection of given training examples { ( a1 , b1 ) , ... , ( an , bn ) } , where ai ∈ Rd , bi ∈ R ( i = 1 , 2 , ... , n ) and ai is a feature vector , while bi is the desired response . When fi ( x ) = 12 ( a T i x−bi ) 2 , we can obtain the ridge regression problem by setting R ( x ) = λ2 ‖x‖ 2 . We also get the Lasso or Elastic-Net problems by setting R ( x ) =λ‖x‖1 or R ( x ) = λ22 ‖x‖ 2+λ1‖x‖1 , respectively . Moreover , if we set fi ( x ) = log ( 1+exp ( −bixTai ) ) , we will get the regularized logistic regression problem . 1.1 RECENT RESEARCH PROGRESS . The proximal gradient descent ( PGD ) method is a standard and effective method for Problem ( 1 ) , and can achieve linear convergence for strongly convex problems . Its accelerated algorithms , e.g. , accelerated proximal gradient ( APG ) ( Tseng ( 2008 ) ; Beck & Teboulle ( 2009 ) ) , attain the convergence rate ofO ( 1/T 2 ) for non-strongly convex problems , where T denotes the number of iterations . In recent years , stochastic gradient descent ( SGD ) has been successfully applied to many large-scale learning problems , such as training for deep networks and linear prediction ( Tong ( 2004 ) ) , because of its significantly lower per-iteration complexity than deterministic methods , i.e. , O ( d ) vs. O ( nd ) . Besides , many tricks for SGD have also been proposed , such as Loshchilov & Hutter ( 2016 ) . However , the variance of the stochastic gradient may be large due to random sampling ( Johnson & Tong ( 2013 ) ) , which leads that the algorithm requires a gradually reduced step size , thus it will converge slow . Even under the strongly convex condition , SGD only achieves a sub-linear convergence rate O ( 1/T ) . Recently , many SGD methods with variance reduction have been proposed . For the case of R ( x ) = 0 , Roux et al . ( 2012 ) developed a stochastic average gradient descent ( SAG ) method , which is a randomized variant of the incremental aggregated gradient method proposed by Blatt et al . ( 2007 ) . Then stochastic variance reduced gradient ( SVRG ) ( Johnson & Tong ( 2013 ) ) was proposed , and has been widely introduced into various subsequent optimization algorithms , due to its lower storage space ( i.e. , O ( d ) ) than that of SAG ( i.e. , O ( nd ) ) . SVRG reduced the variance effectively by changing the estimation of stochastic gradients . The introduction of a snapshot point x̃ mainly has the effect of correcting the direction of gradient descent , and reduces the variance . 
Later , Konečný & Richtárik ( 2013 ) proposed the semi-stochastic gradient descent methods as well as their mini-batch version ( Konečný et al . ( 2014 ) ) . And their asynchronous distributed variant ( Ruiliang et al . ( 2016 ) ) is also been proposed later . More recently , Lin & Tong ( 2014 ) proposed the Prox-SVRG method , which introduced the proximal operator , and then applied the idea of SVRG to solve the non-smooth optimization problems . However , Prox-SVRG can only be used to solve the strongly convex optimization problems . In order to solve the non-strongly convex problems , Zeyuan & Yuan ( 2016 ) proposed the SVRG++ algorithm . Besides , to accelerate the algorithm and reducing the complexity , by combining the main ideas of APG and Prox-SVRG , Nitanda ( 2014 ) proposed an accelerated variance reduction proximal stochastic gradient descent ( Acc-Prox-SVRG ) method , which can effectively reduce the complexity of the algorithm compared to the two basic algorithms . Very recently , Zeyuan ( 2017 ) developed a novel Katyusha algorithm which introduced the Katyusha momentum to accelerate the algorithm . With the development of parallel and distributed computing which can effectively reduce computing time and improve performance , Ryu & Wotao ( 2017 ) came up with an algorithm called Proximal Proximal Gradient , which combined the proximal gradient method and ADMM ( Gabay & Mercier ( 1976 ) ) . Furthermore , it is easy to implement in parallel and distributed environments because of its innovative algorithm structure . 1.2 OUR MAIN CONTRIBUTIONS . We find that due to the introduction of proximal operator , there is a gap between P̂∗ and P∗ , and its theoretical derivation can be seen in Appendix A . To address this issue , Nguyen et al . ( 2017 ) proposed the idea of extragradient which can be seen as a guide during the process , and introduced it into the optimization problems . Intuitively , this additional iteration allows us to examine the geometry of the problem and consider its curvature information , which is one of the most important bottlenecks for first order methods . By using the idea of extragradient , we can get a better result in each inner-iteration . Therefore , the idea of extragradient is our main motivation . In this paper , we propose a novel algorithm for solving non-smooth optimization problems . The main contributions of this paper are summarized as follows . • In order to improve the result of the gap between P̂∗ and P∗ , and achieve fast convergence , a novel algorithm , which combines the idea of extragradient , Prox-SVRG and the trick of momentum acceleration , is proposed , called accelerated variance reduced stochastic extragradient descent ( AVR-SExtraGD ) . •We provide the convergence analysis of our algorithm , which shows that AVR-SExtraGD achieves linear convergence for strongly convex problems , and the convergence condition in the non-strongly convex case is also given . According to the convergence rate , we can know that AVR-SExtraGD has the same excellent result as the best-known algorithms , such as Katyusha . • Finally , we show by experiments that the performance of AVR-SExtraGD ( as well as VRSExtraGD , which is the basic algorithm of AVR-SExtraGD ) is obviously better than the popular algorithm , Prox-SVRG , which confirms the advantage of extragradient . For the widely used accelerated algorithm , Katyusha , the performance of our algorithm is still improved . 2 RELATED WORK . 2.1 BASIC ASSUMPTIONS . 
We first make the following assumptions to solve the problem ( 1 ) : Assumption 1 ( Smoothness ) . The convex function F ( · ) is L-smooth , i.e. , there exists a constant L > 0 such that for any x , y∈Rd , ‖∇F ( x ) −∇F ( y ) ‖≤L‖x−y‖ . Assumption 2 ( Lower Semi-continuity ) . The regularization function R ( · ) is a lower semicontinuous function , i.e. , ∀x0∈Rd , lim inf x→x0 R ( x ) ≥R ( x0 ) . But it is not necessarily differentiable or continuous .. Assumption 3 ( Strong Convexity ) . In Problem ( 1 ) , the functionR ( · ) is µ-strongly convex , i.e. , there exists a constant µ > 0 such that for all x , y∈Rd , it holds that R ( x ) ≥ R ( y ) + 〈G , x− y〉+ µ 2 ‖x− y‖2 , ( 2 ) where G∈∂R ( y ) which is the set of sub-gradient of R ( · ) at y . 2.2 PROX-SVRG AND EXTRAGRADIENT DESCENT METHODS . An effective method for solving Problem ( 1 ) is Prox-SVRG which improved Prox-FG ( Lions & Mercier ( 1979 ) ) and Prox-SG ( Langford et al . ( 2009 ) ) by introducing the stochastic gradient and combining the idea of SVRG , respectively . For strongly convex problems , Prox-SVRG can reach linear convergence with a constant step size , and its main update rules are ∇̃fik ( xk−1 ) = ∇fik ( xk−1 ) −∇fik ( x̃ ) +∇F ( x̃ ) ; xk = ProxRη ( xk−1 − η∇̃fik ( xk−1 ) ) , ( 3 ) where x̃ is the snapshot point used in SVRG , ∇̃fik ( xk−1 ) is the variance reduced stochastic gradient estimator , and ProxRη ( · ) is the proximal operator . Although Prox-SVRG can converge fast , because of proximal operator , the final solution has the deviation , which makes the solution inaccurate , thus Prox-SVRG still needs to be further improved , which is our important motivation . The extragradient method was first proposed by Korpelevič ( 1976 ) . It is a classical method for solving variational inequality problems , and it generates an estimation sequence by using two projection gradients in each iteration . By combining this idea with some first-order descent methods , Nguyen et al . ( 2017 ) proposed an extended extragradient method ( EEG ) which can effectively solve the problem ( 1 ) , and can also solve relatively more general problems as follows : min x∈Rd { P ( x ) def = F ( x ) +R ( x ) } where F ( x ) is not necessarily composed by multiple functions fi ( x ) . Unlike the classical extragradient method , EEG uses proximal gradient instead of orthogonal projection in each iteration . The main update rules of EEG are yk = ProxRsk ( xk − sk∇F ( xk ) ) ; xk+1 = Prox R αk ( xk − αk∇F ( yk ) ) , where sk and αk are two step sizes . From the update rules of EEG , we can see that in each iteration , EEG needs to calculate two gradients , which will definitely slow down the algorithm . Therefore , the algorithm needs to be further accelerated by an efficient technique . 2.3 MOMENTUM ACCELERATION AND MIG . Firstly , we introduce the momentum acceleration technique whose main update rules are vdwt = βvdwt−1 + ( 1− β ) dwt ; wt = wt−1 − αvdwt , where dw is the gradient of the objective function at w , β is a parameter , and α is a step size . The update rules take not only the gradient of the current position , but also the gradient of the past position into account , which makes the final descent direction of wt after using momentum Algorithm 1 AVR-SExtraGD Input : Initial vector x0 , the number of epochs S , the number of iterations m per epoch , the step sizes η1 , η2 , momentum parameter β , and the set K. Initialize : x̃0 = x10 = x0 , ρ = 1+ηµ . 1 : for s = 1 , 2 , . . . 
, S do 2 : Compute∇F ( x̃s−1 ) ; 3 : βs = β ( SC ) or βs = 2s+4 ( non-SC ) ; 4 : for k = 1 , 2 , . . . , m do 5 : Pick ik uniformly at random from { 1 , ... , n } ; 6 : if k ∈ K then 7 : xsk−1/2 = Prox R η1 ( xsk−1 − η1∇̃fik ( βsxsk−1 + ( 1−βs ) x̃s−1 ) ) ; 8 : xsk = Prox R η2 ( xsk−1/2 − η2∇̃fik ( βsx s k−1/2 + ( 1−βs ) x̃ s−1 ) ) ; 9 : else 10 : xsk = Prox R η1 ( x s k−1 − η1∇̃fik ( βsxsk−1 + ( 1−βs ) x̃s−1 ) ) ; 11 : end if 12 : end for 13 : x̃s = βs ( ∑m k=1 ρ k−1 ) −1 ∑m k=1 ρ k−1 x s k−1/2+x s k 2 + ( 1−βs ) x̃ s−1 ( SC ) or x̃s = βsm ∑m k=1 xsk−1/2+x s k 2 + ( 1−βs ) x̃ s−1 ( non-SC ) ; 14 : xs+10 = x s m ; 15 : end for Output : x̃S . reduce the oscillation of descent , thus this method can effectively accelerate the convergence of the algorithm . According to the Nesterov ’ s momentum , lots of accelerated algorithms were proposed , such as APG and Acc-Prox-SVRG . Later , Zeyuan ( 2017 ) proposed Katyusha to further accelerate the algorithm , and MiG ( Kaiwen et al . ( 2018 ) ) was proposed to simplify the structure of Katyusha , and the momentum acceleration of MiG is embodied in each iteration as follows : ysk−1 = βsx s k−1 + ( 1− βs ) x̃s−1 . Moreover , it is easy to get that the oracle complexity of MiG is less than that of Prox-SVRG and APG , which means that MiG can effectively accelerate the original Prox-SVRG algorithm . Therefore , we can also use this acceleration technique to accelerate our algorithm and address the issue of slow convergence due to the calculations of two different gradients . | The paper proposes an optimization method for solving unconstrained convex optimization problems where the objective function consists of a sum of several smooth components f_i and a (not necessarily smooth) convex function R. The proposed method AVR-SExtraGD is a stochastic descent method building on the previous algorithms Prox-SVRG (Lin 2014) and Katyusha (Zeyuan 2017). The previous Prox-SVRG method using a proximal operator is explained to converge fast but leads to inaccurate final solutions, while the Katyusha method is an algorithm based on momentum acceleration. The current paper builds on these two approaches and applies the momentum acceleration technique in a stochastic extragradient descent framework to achieve fast convergence. | science.mup_single_document_summarization.18566 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nAccelerated Variance Reduced Stochastic Extragradient Method for Sparse Machine Learning Problems\n\nMain Content:\n1 INTRODUCTION . In this paper , we mainly consider the following composite convex optimization problem : min x∈Rd { P ( x ) def = F ( x ) +R ( x ) = 1 n n∑ i=1 fi ( x ) +R ( x ) } ( 1 ) where F ( x ) : Rd→R is the average of smooth convex component functions fi ( x ) , and R ( x ) is a relatively simple convex function ( but may not be differentiable ) . In this paper , we use ‖·‖ to denote the standard Euclidean norm , and ‖·‖1 to denote the ` 1-norm . Moreover , we use P∗ to denote the real optimal value of P ( · ) , and P̂∗ to denote the optimal value obtained by algorithms . This form of optimization problems often appears in machine learning , signal processing , data science , statistics and operations research , and has a wide range of applications such as regularized empirical risk minimization ( ERM ) , sparse coding for image and video recovery , and representation learning for object recognition . Specifically , for a collection of given training examples { ( a1 , b1 ) , ... , ( an , bn ) } , where ai ∈ Rd , bi ∈ R ( i = 1 , 2 , ... , n ) and ai is a feature vector , while bi is the desired response . When fi ( x ) = 12 ( a T i x−bi ) 2 , we can obtain the ridge regression problem by setting R ( x ) = λ2 ‖x‖ 2 . We also get the Lasso or Elastic-Net problems by setting R ( x ) =λ‖x‖1 or R ( x ) = λ22 ‖x‖ 2+λ1‖x‖1 , respectively . Moreover , if we set fi ( x ) = log ( 1+exp ( −bixTai ) ) , we will get the regularized logistic regression problem . 1.1 RECENT RESEARCH PROGRESS . The proximal gradient descent ( PGD ) method is a standard and effective method for Problem ( 1 ) , and can achieve linear convergence for strongly convex problems . Its accelerated algorithms , e.g. , accelerated proximal gradient ( APG ) ( Tseng ( 2008 ) ; Beck & Teboulle ( 2009 ) ) , attain the convergence rate ofO ( 1/T 2 ) for non-strongly convex problems , where T denotes the number of iterations . In recent years , stochastic gradient descent ( SGD ) has been successfully applied to many large-scale learning problems , such as training for deep networks and linear prediction ( Tong ( 2004 ) ) , because of its significantly lower per-iteration complexity than deterministic methods , i.e. , O ( d ) vs. O ( nd ) . Besides , many tricks for SGD have also been proposed , such as Loshchilov & Hutter ( 2016 ) . However , the variance of the stochastic gradient may be large due to random sampling ( Johnson & Tong ( 2013 ) ) , which leads that the algorithm requires a gradually reduced step size , thus it will converge slow . Even under the strongly convex condition , SGD only achieves a sub-linear convergence rate O ( 1/T ) . Recently , many SGD methods with variance reduction have been proposed . For the case of R ( x ) = 0 , Roux et al . ( 2012 ) developed a stochastic average gradient descent ( SAG ) method , which is a randomized variant of the incremental aggregated gradient method proposed by Blatt et al . ( 2007 ) . Then stochastic variance reduced gradient ( SVRG ) ( Johnson & Tong ( 2013 ) ) was proposed , and has been widely introduced into various subsequent optimization algorithms , due to its lower storage space ( i.e. , O ( d ) ) than that of SAG ( i.e. , O ( nd ) ) . 
SVRG reduced the variance effectively by changing the estimation of stochastic gradients . The introduction of a snapshot point x̃ mainly has the effect of correcting the direction of gradient descent , and reduces the variance . Later , Konečný & Richtárik ( 2013 ) proposed the semi-stochastic gradient descent methods as well as their mini-batch version ( Konečný et al . ( 2014 ) ) . And their asynchronous distributed variant ( Ruiliang et al . ( 2016 ) ) is also been proposed later . More recently , Lin & Tong ( 2014 ) proposed the Prox-SVRG method , which introduced the proximal operator , and then applied the idea of SVRG to solve the non-smooth optimization problems . However , Prox-SVRG can only be used to solve the strongly convex optimization problems . In order to solve the non-strongly convex problems , Zeyuan & Yuan ( 2016 ) proposed the SVRG++ algorithm . Besides , to accelerate the algorithm and reducing the complexity , by combining the main ideas of APG and Prox-SVRG , Nitanda ( 2014 ) proposed an accelerated variance reduction proximal stochastic gradient descent ( Acc-Prox-SVRG ) method , which can effectively reduce the complexity of the algorithm compared to the two basic algorithms . Very recently , Zeyuan ( 2017 ) developed a novel Katyusha algorithm which introduced the Katyusha momentum to accelerate the algorithm . With the development of parallel and distributed computing which can effectively reduce computing time and improve performance , Ryu & Wotao ( 2017 ) came up with an algorithm called Proximal Proximal Gradient , which combined the proximal gradient method and ADMM ( Gabay & Mercier ( 1976 ) ) . Furthermore , it is easy to implement in parallel and distributed environments because of its innovative algorithm structure . 1.2 OUR MAIN CONTRIBUTIONS . We find that due to the introduction of proximal operator , there is a gap between P̂∗ and P∗ , and its theoretical derivation can be seen in Appendix A . To address this issue , Nguyen et al . ( 2017 ) proposed the idea of extragradient which can be seen as a guide during the process , and introduced it into the optimization problems . Intuitively , this additional iteration allows us to examine the geometry of the problem and consider its curvature information , which is one of the most important bottlenecks for first order methods . By using the idea of extragradient , we can get a better result in each inner-iteration . Therefore , the idea of extragradient is our main motivation . In this paper , we propose a novel algorithm for solving non-smooth optimization problems . The main contributions of this paper are summarized as follows . • In order to improve the result of the gap between P̂∗ and P∗ , and achieve fast convergence , a novel algorithm , which combines the idea of extragradient , Prox-SVRG and the trick of momentum acceleration , is proposed , called accelerated variance reduced stochastic extragradient descent ( AVR-SExtraGD ) . •We provide the convergence analysis of our algorithm , which shows that AVR-SExtraGD achieves linear convergence for strongly convex problems , and the convergence condition in the non-strongly convex case is also given . According to the convergence rate , we can know that AVR-SExtraGD has the same excellent result as the best-known algorithms , such as Katyusha . 
• Finally , we show by experiments that the performance of AVR-SExtraGD ( as well as VRSExtraGD , which is the basic algorithm of AVR-SExtraGD ) is obviously better than the popular algorithm , Prox-SVRG , which confirms the advantage of extragradient . For the widely used accelerated algorithm , Katyusha , the performance of our algorithm is still improved . 2 RELATED WORK . 2.1 BASIC ASSUMPTIONS . We first make the following assumptions to solve the problem ( 1 ) : Assumption 1 ( Smoothness ) . The convex function F ( · ) is L-smooth , i.e. , there exists a constant L > 0 such that for any x , y∈Rd , ‖∇F ( x ) −∇F ( y ) ‖≤L‖x−y‖ . Assumption 2 ( Lower Semi-continuity ) . The regularization function R ( · ) is a lower semicontinuous function , i.e. , ∀x0∈Rd , lim inf x→x0 R ( x ) ≥R ( x0 ) . But it is not necessarily differentiable or continuous .. Assumption 3 ( Strong Convexity ) . In Problem ( 1 ) , the functionR ( · ) is µ-strongly convex , i.e. , there exists a constant µ > 0 such that for all x , y∈Rd , it holds that R ( x ) ≥ R ( y ) + 〈G , x− y〉+ µ 2 ‖x− y‖2 , ( 2 ) where G∈∂R ( y ) which is the set of sub-gradient of R ( · ) at y . 2.2 PROX-SVRG AND EXTRAGRADIENT DESCENT METHODS . An effective method for solving Problem ( 1 ) is Prox-SVRG which improved Prox-FG ( Lions & Mercier ( 1979 ) ) and Prox-SG ( Langford et al . ( 2009 ) ) by introducing the stochastic gradient and combining the idea of SVRG , respectively . For strongly convex problems , Prox-SVRG can reach linear convergence with a constant step size , and its main update rules are ∇̃fik ( xk−1 ) = ∇fik ( xk−1 ) −∇fik ( x̃ ) +∇F ( x̃ ) ; xk = ProxRη ( xk−1 − η∇̃fik ( xk−1 ) ) , ( 3 ) where x̃ is the snapshot point used in SVRG , ∇̃fik ( xk−1 ) is the variance reduced stochastic gradient estimator , and ProxRη ( · ) is the proximal operator . Although Prox-SVRG can converge fast , because of proximal operator , the final solution has the deviation , which makes the solution inaccurate , thus Prox-SVRG still needs to be further improved , which is our important motivation . The extragradient method was first proposed by Korpelevič ( 1976 ) . It is a classical method for solving variational inequality problems , and it generates an estimation sequence by using two projection gradients in each iteration . By combining this idea with some first-order descent methods , Nguyen et al . ( 2017 ) proposed an extended extragradient method ( EEG ) which can effectively solve the problem ( 1 ) , and can also solve relatively more general problems as follows : min x∈Rd { P ( x ) def = F ( x ) +R ( x ) } where F ( x ) is not necessarily composed by multiple functions fi ( x ) . Unlike the classical extragradient method , EEG uses proximal gradient instead of orthogonal projection in each iteration . The main update rules of EEG are yk = ProxRsk ( xk − sk∇F ( xk ) ) ; xk+1 = Prox R αk ( xk − αk∇F ( yk ) ) , where sk and αk are two step sizes . From the update rules of EEG , we can see that in each iteration , EEG needs to calculate two gradients , which will definitely slow down the algorithm . Therefore , the algorithm needs to be further accelerated by an efficient technique . 2.3 MOMENTUM ACCELERATION AND MIG . Firstly , we introduce the momentum acceleration technique whose main update rules are vdwt = βvdwt−1 + ( 1− β ) dwt ; wt = wt−1 − αvdwt , where dw is the gradient of the objective function at w , β is a parameter , and α is a step size . 
The update rules take not only the gradient of the current position , but also the gradient of the past position into account , which makes the final descent direction of wt after using momentum Algorithm 1 AVR-SExtraGD Input : Initial vector x0 , the number of epochs S , the number of iterations m per epoch , the step sizes η1 , η2 , momentum parameter β , and the set K. Initialize : x̃0 = x10 = x0 , ρ = 1+ηµ . 1 : for s = 1 , 2 , . . . , S do 2 : Compute∇F ( x̃s−1 ) ; 3 : βs = β ( SC ) or βs = 2s+4 ( non-SC ) ; 4 : for k = 1 , 2 , . . . , m do 5 : Pick ik uniformly at random from { 1 , ... , n } ; 6 : if k ∈ K then 7 : xsk−1/2 = Prox R η1 ( xsk−1 − η1∇̃fik ( βsxsk−1 + ( 1−βs ) x̃s−1 ) ) ; 8 : xsk = Prox R η2 ( xsk−1/2 − η2∇̃fik ( βsx s k−1/2 + ( 1−βs ) x̃ s−1 ) ) ; 9 : else 10 : xsk = Prox R η1 ( x s k−1 − η1∇̃fik ( βsxsk−1 + ( 1−βs ) x̃s−1 ) ) ; 11 : end if 12 : end for 13 : x̃s = βs ( ∑m k=1 ρ k−1 ) −1 ∑m k=1 ρ k−1 x s k−1/2+x s k 2 + ( 1−βs ) x̃ s−1 ( SC ) or x̃s = βsm ∑m k=1 xsk−1/2+x s k 2 + ( 1−βs ) x̃ s−1 ( non-SC ) ; 14 : xs+10 = x s m ; 15 : end for Output : x̃S . reduce the oscillation of descent , thus this method can effectively accelerate the convergence of the algorithm . According to the Nesterov ’ s momentum , lots of accelerated algorithms were proposed , such as APG and Acc-Prox-SVRG . Later , Zeyuan ( 2017 ) proposed Katyusha to further accelerate the algorithm , and MiG ( Kaiwen et al . ( 2018 ) ) was proposed to simplify the structure of Katyusha , and the momentum acceleration of MiG is embodied in each iteration as follows : ysk−1 = βsx s k−1 + ( 1− βs ) x̃s−1 . Moreover , it is easy to get that the oracle complexity of MiG is less than that of Prox-SVRG and APG , which means that MiG can effectively accelerate the original Prox-SVRG algorithm . Therefore , we can also use this acceleration technique to accelerate our algorithm and address the issue of slow convergence due to the calculations of two different gradients .",
"role": "user"
},
{
"content": "The paper proposes an optimization method for solving unconstrained convex optimization problems where the objective function consists of a sum of several smooth components f_i and a (not necessarily smooth) convex function R. The proposed method AVR-SExtraGD is a stochastic descent method building on the previous algorithms Prox-SVRG (Lin 2014) and Katyusha (Zeyuan 2017). The previous Prox-SVRG method using a proximal operator is explained to converge fast but leads to inaccurate final solutions, while the Katyusha method is an algorithm based on momentum acceleration. The current paper builds on these two approaches and applies the momentum acceleration technique in a stochastic extragradient descent framework to achieve fast convergence.",
"role": "assistant"
}
] |
science.biored_ner | Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: ["CellLine", "Variant", "Species", "Disease", "Chemical", "Gene"].
Ensure the output is strictly the JSON object.
Source Text:
Glial activation and post-synaptic neurotoxicity: the key events in Streptozotocin (ICV) induced memory impairment in rats. In the present study the role of glial activation and post synaptic toxicity in ICV Streptozotocin (STZ) induced memory impaired rats was explored. In experiment set up 1: Memory deficit was found in Morris water maze test on 14-16 days after STZ (ICV; 3mg/Kg) administration. STZ causes increased expression of GFAP, CD11b and TNF-a indicating glial activation and neuroinflammation. STZ also significantly increased the level of ROS, nitrite, Ca(2+) and reduced the mitochondrial activity in synaptosomal preparation illustrating free radical generation and excitotoxicity. Increased expression and activity of Caspase-3 was also observed in STZ treated rat which specify apoptotic cell death in hippocampus and cortex. STZ treatment showed decrease expression of post synaptic markers CaMKIIa and PSD-95, while, expression of pre synaptic markers (synaptophysin and SNAP-25) remains unaltered indicating selective post synaptic neurotoxicity. Oral treatment with Memantine (10mg/kg) and Ibuprofen (50 mg/kg) daily for 13 days attenuated STZ induced glial activation, apoptotic cell death and post synaptic neurotoxicity in rat brain. Further, in experiment set up 2: where memory function was not affected i.e. 7-9 days after STZ treatment. The level of GFAP, CD11b, TNF-a, ROS and nitrite levels were increased. On the other hand, apoptotic marker, synaptic markers, mitochondrial activity and Ca(2+) levels remained unaffected. Collective data indicates that neuroinflammatory process and oxidative stress occurs earlier to apoptosis and does not affect memory function. Present study clearly suggests that glial activation and post synaptic neurotoxicity are the key factors in STZ induced memory impairment and neuronal cell death. | {"CellLine": [], "Chemical": ["nitrite", "Ca(2+)", "Ibuprofen", "ROS", "Memantine", "STZ", "Streptozotocin"], "Disease": ["neurotoxicity", "neuroinflammation", "toxicity", "Memory deficit", "excitotoxicity", "neuroinflammatory", "memory impaired", "memory impairment"], "Gene": ["TNF-a", "SNAP-25", "Caspase-3", "synaptophysin", "PSD-95", "GFAP", "CD11b", "CaMKIIa"], "Species": ["rats", "rat"], "Variant": []} | science.biored_ner.97 | [
{
"content": "Analyze the provided biomedical abstract and identify distinct entities. Focus on the specified categories: [\"CellLine\", \"Variant\", \"Species\", \"Disease\", \"Chemical\", \"Gene\"].\n\nEnsure the output is strictly the JSON object.\n\nSource Text:\n\nGlial activation and post-synaptic neurotoxicity: the key events in Streptozotocin (ICV) induced memory impairment in rats. In the present study the role of glial activation and post synaptic toxicity in ICV Streptozotocin (STZ) induced memory impaired rats was explored. In experiment set up 1: Memory deficit was found in Morris water maze test on 14-16 days after STZ (ICV; 3mg/Kg) administration. STZ causes increased expression of GFAP, CD11b and TNF-a indicating glial activation and neuroinflammation. STZ also significantly increased the level of ROS, nitrite, Ca(2+) and reduced the mitochondrial activity in synaptosomal preparation illustrating free radical generation and excitotoxicity. Increased expression and activity of Caspase-3 was also observed in STZ treated rat which specify apoptotic cell death in hippocampus and cortex. STZ treatment showed decrease expression of post synaptic markers CaMKIIa and PSD-95, while, expression of pre synaptic markers (synaptophysin and SNAP-25) remains unaltered indicating selective post synaptic neurotoxicity. Oral treatment with Memantine (10mg/kg) and Ibuprofen (50 mg/kg) daily for 13 days attenuated STZ induced glial activation, apoptotic cell death and post synaptic neurotoxicity in rat brain. Further, in experiment set up 2: where memory function was not affected i.e. 7-9 days after STZ treatment. The level of GFAP, CD11b, TNF-a, ROS and nitrite levels were increased. On the other hand, apoptotic marker, synaptic markers, mitochondrial activity and Ca(2+) levels remained unaffected. Collective data indicates that neuroinflammatory process and oxidative stress occurs earlier to apoptosis and does not affect memory function. Present study clearly suggests that glial activation and post synaptic neurotoxicity are the key factors in STZ induced memory impairment and neuronal cell death.",
"role": "user"
},
{
"content": "{\"CellLine\": [], \"Chemical\": [\"nitrite\", \"Ca(2+)\", \"Ibuprofen\", \"ROS\", \"Memantine\", \"STZ\", \"Streptozotocin\"], \"Disease\": [\"neurotoxicity\", \"neuroinflammation\", \"toxicity\", \"Memory deficit\", \"excitotoxicity\", \"neuroinflammatory\", \"memory impaired\", \"memory impairment\"], \"Gene\": [\"TNF-a\", \"SNAP-25\", \"Caspase-3\", \"synaptophysin\", \"PSD-95\", \"GFAP\", \"CD11b\", \"CaMKIIa\"], \"Species\": [\"rats\", \"rat\"], \"Variant\": []}",
"role": "assistant"
}
] |
science.discomat_te | | (mol%) | (mol%) | (mol%) | (mol%) | Density (g/cm3) | Number density of Er3+ ion (1020 cm-3) |
| GaS3/2 | GeS2 | LaS3/2 | ErS3/2 | Density (g/cm3) | Number density of Er3+ ion (1020 cm-3) |
| For emission decay curve measurements | For emission decay curve measurements | For emission decay curve measurements | For emission decay curve measurements | For emission decay curve measurements | For emission decay curve measurements |
| 50 | 20 | 29.5 | 0.5 | 3.83 | 8.14 |
| 50 | 20 | 25 | 5 | 3.88 | 0.809 |
| | | | | | |
| For absorption measurement | For absorption measurement | For absorption measurement | For absorption measurement | For absorption measurement | For absorption measurement |
| 50 | 20 | 28 | 2 | 3.85 | 3.25 |
Caption: Composition and number density of Er3+ ion of the glasses used in this study
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "50", "type": "Glass_Compound_Amount", "constituent": "GaS3/2", "unit": "mol", "material": "xx"}
{"value": "20", "type": "Glass_Compound_Amount", "constituent": "GeS2", "unit": "mol", "material": "xx"}
{"value": "29.5", "type": "Glass_Compound_Amount", "constituent": "LaS3/2", "unit": "mol", "material": "xx"}
{"value": "0.5", "type": "Glass_Compound_Amount", "constituent": "ErS3/2", "unit": "mol", "material": "xx"}
{"value": "3.83", "type": "Other"}
{"value": "8.14", "type": "Other"}
{"value": "50", "type": "Glass_Compound_Amount", "constituent": "GaS3/2", "unit": "mol", "material": "xx"}
{"value": "20", "type": "Glass_Compound_Amount", "constituent": "GeS2", "unit": "mol", "material": "xx"}
{"value": "25", "type": "Glass_Compound_Amount", "constituent": "LaS3/2", "unit": "mol", "material": "xx"}
{"value": "5", "type": "Glass_Compound_Amount", "constituent": "ErS3/2", "unit": "mol", "material": "xx"}
{"value": "3.88", "type": "Other"}
{"value": "0.809", "type": "Other"}
{"value": "50", "type": "Glass_Compound_Amount", "constituent": "GaS3/2", "unit": "mol", "material": "xx"}
{"value": "20", "type": "Glass_Compound_Amount", "constituent": "GeS2", "unit": "mol", "material": "xx"}
{"value": "28", "type": "Glass_Compound_Amount", "constituent": "LaS3/2", "unit": "mol", "material": "xx"}
{"value": "2", "type": "Glass_Compound_Amount", "constituent": "ErS3/2", "unit": "mol", "material": "xx"}
{"value": "3.85", "type": "Other"}
{"value": "3.25", "type": "Other"} | science.discomat_te.1 | [
{
"content": "| (mol%) | (mol%) | (mol%) | (mol%) | Density (g/cm3) | Number density of Er3+ ion (1020 cm-3) |\n| GaS3/2 | GeS2 | LaS3/2 | ErS3/2 | Density (g/cm3) | Number density of Er3+ ion (1020 cm-3) |\n| For emission decay curve measurements | For emission decay curve measurements | For emission decay curve measurements | For emission decay curve measurements | For emission decay curve measurements | For emission decay curve measurements |\n| 50 | 20 | 29.5 | 0.5 | 3.83 | 8.14 |\n| 50 | 20 | 25 | 5 | 3.88 | 0.809 |\n| | | | | | |\n| For absorption measurement | For absorption measurement | For absorption measurement | For absorption measurement | For absorption measurement | For absorption measurement |\n| 50 | 20 | 28 | 2 | 3.85 | 3.25 |\n\nCaption: Composition and number density of Er3+ ion of the glasses used in this study\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"50\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"GaS3/2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"20\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"GeS2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"29.5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"LaS3/2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"0.5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"ErS3/2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"3.83\", \"type\": \"Other\"}\n{\"value\": \"8.14\", \"type\": \"Other\"}\n{\"value\": \"50\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"GaS3/2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"20\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"GeS2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"25\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"LaS3/2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"ErS3/2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"3.88\", \"type\": \"Other\"}\n{\"value\": \"0.809\", \"type\": \"Other\"}\n{\"value\": \"50\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"GaS3/2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"20\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"GeS2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"28\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"LaS3/2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"2\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"ErS3/2\", \"unit\": \"mol\", \"material\": \"xx\"}\n{\"value\": \"3.85\", \"type\": \"Other\"}\n{\"value\": \"3.25\", \"type\": \"Other\"}",
"role": "assistant"
}
] |
science.discomat_te | | Level | Lifetime from Judd-Ofelt (ms) | Lifetime from Judd-Ofelt (ms) | Experimental lifetime (+-0.2ms) | Experimental lifetime (+-0.2ms) | Experimental lifetime (+-0.2ms) | Experimental lifetime (+-0.2ms) |
| Level | Glass (+-6%) | 'Crystal' (+-16%) | PG | GCB | GCC | GCD |
| 3H4 (l =1460nm) | 0.4 | 0.5 | <0.5 | 0.6 | 1.6 | 1.6 |
| 3H5 (l =1214nm) | 1.6 | 2.2 | 1.5 | 8.4 | 4.0 | 6.0 |
| 3F4 (l =1622nm) | 2.4 | 5.1 | 2.6 | 10.6 | 8.4 | 10.0 |
Caption: Calculated and t 1/e experimental lifetimes of the three first excited levels of Tm3+ ion (the wavelength of the measurements are indicated)
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "0.4", "type": "Other"}
{"value": "0.5", "type": "Other"}
{"value": "0.5", "type": "Other"}
{"value": "0.6", "type": "Other"}
{"value": "1.6", "type": "Other"}
{"value": "1.6", "type": "Other"}
{"value": "1.6", "type": "Other"}
{"value": "2.2", "type": "Other"}
{"value": "1.5", "type": "Other"}
{"value": "8.4", "type": "Other"}
{"value": "4.0", "type": "Other"}
{"value": "6.0", "type": "Other"}
{"value": "2.4", "type": "Other"}
{"value": "5.1", "type": "Other"}
{"value": "2.6", "type": "Other"}
{"value": "10.6", "type": "Other"}
{"value": "8.4", "type": "Other"}
{"value": "10.0", "type": "Other"} | science.discomat_te.345 | [
{
"content": "| Level | Lifetime from Judd-Ofelt (ms) | Lifetime from Judd-Ofelt (ms) | Experimental lifetime (+-0.2ms) | Experimental lifetime (+-0.2ms) | Experimental lifetime (+-0.2ms) | Experimental lifetime (+-0.2ms) |\n| Level | Glass (+-6%) | 'Crystal' (+-16%) | PG | GCB | GCC | GCD |\n| 3H4 (l =1460nm) | 0.4 | 0.5 | <0.5 | 0.6 | 1.6 | 1.6 |\n| 3H5 (l =1214nm) | 1.6 | 2.2 | 1.5 | 8.4 | 4.0 | 6.0 |\n| 3F4 (l =1622nm) | 2.4 | 5.1 | 2.6 | 10.6 | 8.4 | 10.0 |\n\nCaption: Calculated and t 1/e experimental lifetimes of the three first excited levels of Tm3+ ion (the wavelength of the measurements are indicated)\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"0.4\", \"type\": \"Other\"}\n{\"value\": \"0.5\", \"type\": \"Other\"}\n{\"value\": \"0.5\", \"type\": \"Other\"}\n{\"value\": \"0.6\", \"type\": \"Other\"}\n{\"value\": \"1.6\", \"type\": \"Other\"}\n{\"value\": \"1.6\", \"type\": \"Other\"}\n{\"value\": \"1.6\", \"type\": \"Other\"}\n{\"value\": \"2.2\", \"type\": \"Other\"}\n{\"value\": \"1.5\", \"type\": \"Other\"}\n{\"value\": \"8.4\", \"type\": \"Other\"}\n{\"value\": \"4.0\", \"type\": \"Other\"}\n{\"value\": \"6.0\", \"type\": \"Other\"}\n{\"value\": \"2.4\", \"type\": \"Other\"}\n{\"value\": \"5.1\", \"type\": \"Other\"}\n{\"value\": \"2.6\", \"type\": \"Other\"}\n{\"value\": \"10.6\", \"type\": \"Other\"}\n{\"value\": \"8.4\", \"type\": \"Other\"}\n{\"value\": \"10.0\", \"type\": \"Other\"}",
"role": "assistant"
}
] |
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
Annotation projection To build the taggers for new languages, we resort to annotation projection following<cite> Plank and Agić (2018)</cite> . | ["Uses"] | science.multicite_intent_classification.1237 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nAnnotation projection To build the taggers for new languages, we resort to annotation projection following<cite> Plank and Agić (2018)</cite> .",
"role": "user"
},
{
"content": "[\"Uses\"]",
"role": "assistant"
}
] |
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
Increased purity of cytoplasmic membranes isolated from overexpressors is indicated by stronger spots for OmpF in 2D BN-PAGE gels.
Research Abstract:
Gram-negative bacteria have an outer membrane (OM) that functions as a barrier to protect the cell from toxic compounds such as antibiotics and detergents.
The OM is a highly asymmetric bilayer composed of phospholipids, glycolipids, and proteins.
Assembly of this essential organelle occurs outside the cytoplasm in an environment that lacks obvious energy sources such as ATP, and the mechanisms involved are poorly understood.
We describe the identification of a multiprotein complex required for the assembly of proteins in the OM of Escherichia coli.
We also demonstrate genetic interactions between genes encoding components of this protein assembly complex and imp, which encodes a protein involved in the assembly of lipopolysaccharides (LPS) in the OM.
These genetic interactions suggest a role for YfgL, one of the lipoprotein components of the protein assembly complex, in a homeostatic control mechanism that coordinates the overall OM assembly process. | {"verdict": "NEI", "evidence": []} | science.scifact_entailment.398 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nIncreased purity of cytoplasmic membranes isolated from overexpressors is indicated by stronger spots for OmpF in 2D BN-PAGE gels.\n\nResearch Abstract:\nGram-negative bacteria have an outer membrane (OM) that functions as a barrier to protect the cell from toxic compounds such as antibiotics and detergents.\nThe OM is a highly asymmetric bilayer composed of phospholipids, glycolipids, and proteins.\nAssembly of this essential organelle occurs outside the cytoplasm in an environment that lacks obvious energy sources such as ATP, and the mechanisms involved are poorly understood.\nWe describe the identification of a multiprotein complex required for the assembly of proteins in the OM of Escherichia coli.\nWe also demonstrate genetic interactions between genes encoding components of this protein assembly complex and imp, which encodes a protein involved in the assembly of lipopolysaccharides (LPS) in the OM.\nThese genetic interactions suggest a role for YfgL, one of the lipoprotein components of the protein assembly complex, in a homeostatic control mechanism that coordinates the overall OM assembly process.",
"role": "user"
},
{
"content": "{\"verdict\": \"NEI\", \"evidence\": []}",
"role": "assistant"
}
] |
science.discomat_te | | Glasses | mol% | mol% | mol% | mol% | Density (g/cm3)+-0.002 | Total number of atoms in the simulation box |
| Glasses | SiO2 | ZnO | Na2O | CaO | Density (g/cm3)+-0.002 | Total number of atoms in the simulation box |
| NC2S x =0 | 50 | | 25 | 25 | 2.712 | 1188 |
| NZ5 x =0.2 | 50 | 5 | 20 | 25 | 2.794 | 1166 |
| NZ15 x =0.6 | 50 | 15 | 10 | 25 | 2.964 | 1113 |
| NZ25 x =1 | 50 | 25 | | 25 | 3.137 | 1080 |
Caption: Batch composition (mol%), density and total number of atoms in the simulation box for the glasses studied
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "50", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "NC2S x =0"}
{"value": "25", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "NC2S x =0"}
{"value": "25", "type": "Glass_Compound_Amount", "constituent": "CaO", "unit": "mol", "material": "NC2S x =0"}
{"value": "2.712", "type": "Other"}
{"value": "1188", "type": "Other"}
{"value": "50", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "NZ5 x =0.2"}
{"value": "5", "type": "Glass_Compound_Amount", "constituent": "ZnO", "unit": "mol", "material": "NZ5 x =0.2"}
{"value": "20", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "NZ5 x =0.2"}
{"value": "25", "type": "Glass_Compound_Amount", "constituent": "CaO", "unit": "mol", "material": "NZ5 x =0.2"}
{"value": "2.794", "type": "Other"}
{"value": "1166", "type": "Other"}
{"value": "50", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "NZ15 x =0.6"}
{"value": "15", "type": "Glass_Compound_Amount", "constituent": "ZnO", "unit": "mol", "material": "NZ15 x =0.6"}
{"value": "10", "type": "Glass_Compound_Amount", "constituent": "Na2O", "unit": "mol", "material": "NZ15 x =0.6"}
{"value": "25", "type": "Glass_Compound_Amount", "constituent": "CaO", "unit": "mol", "material": "NZ15 x =0.6"}
{"value": "2.964", "type": "Other"}
{"value": "1113", "type": "Other"}
{"value": "50", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "mol", "material": "NZ25 x =1"}
{"value": "25", "type": "Glass_Compound_Amount", "constituent": "ZnO", "unit": "mol", "material": "NZ25 x =1"}
{"value": "25", "type": "Glass_Compound_Amount", "constituent": "CaO", "unit": "mol", "material": "NZ25 x =1"}
{"value": "3.137", "type": "Other"}
{"value": "1080", "type": "Other"} | science.discomat_te.259 | [
{
"content": "| Glasses | mol% | mol% | mol% | mol% | Density (g/cm3)+-0.002 | Total number of atoms in the simulation box |\n| Glasses | SiO2 | ZnO | Na2O | CaO | Density (g/cm3)+-0.002 | Total number of atoms in the simulation box |\n| NC2S x =0 | 50 | | 25 | 25 | 2.712 | 1188 |\n| NZ5 x =0.2 | 50 | 5 | 20 | 25 | 2.794 | 1166 |\n| NZ15 x =0.6 | 50 | 15 | 10 | 25 | 2.964 | 1113 |\n| NZ25 x =1 | 50 | 25 | | 25 | 3.137 | 1080 |\n\nCaption: Batch composition (mol%), density and total number of atoms in the simulation box for the glasses studied\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"50\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"NC2S x =0\"}\n{\"value\": \"25\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"NC2S x =0\"}\n{\"value\": \"25\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaO\", \"unit\": \"mol\", \"material\": \"NC2S x =0\"}\n{\"value\": \"2.712\", \"type\": \"Other\"}\n{\"value\": \"1188\", \"type\": \"Other\"}\n{\"value\": \"50\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"NZ5 x =0.2\"}\n{\"value\": \"5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"ZnO\", \"unit\": \"mol\", \"material\": \"NZ5 x =0.2\"}\n{\"value\": \"20\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"NZ5 x =0.2\"}\n{\"value\": \"25\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaO\", \"unit\": \"mol\", \"material\": \"NZ5 x =0.2\"}\n{\"value\": \"2.794\", \"type\": \"Other\"}\n{\"value\": \"1166\", \"type\": \"Other\"}\n{\"value\": \"50\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"NZ15 x =0.6\"}\n{\"value\": \"15\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"ZnO\", \"unit\": \"mol\", \"material\": \"NZ15 x =0.6\"}\n{\"value\": \"10\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Na2O\", \"unit\": \"mol\", \"material\": \"NZ15 x =0.6\"}\n{\"value\": \"25\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaO\", \"unit\": \"mol\", \"material\": \"NZ15 x =0.6\"}\n{\"value\": \"2.964\", \"type\": \"Other\"}\n{\"value\": \"1113\", \"type\": \"Other\"}\n{\"value\": \"50\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"mol\", \"material\": \"NZ25 x =1\"}\n{\"value\": \"25\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"ZnO\", \"unit\": \"mol\", \"material\": \"NZ25 x =1\"}\n{\"value\": \"25\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"CaO\", \"unit\": \"mol\", \"material\": \"NZ25 x =1\"}\n{\"value\": \"3.137\", \"type\": \"Other\"}\n{\"value\": \"1080\", \"type\": \"Other\"}",
"role": "assistant"
}
] |
science.scierc_ner | Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.
Input Abstract:
In this paper we target at generating generic action proposals in unconstrained videos. Each action proposal corresponds to a temporal series of spatial bounding boxes, i.e., a spatio-temporal video tube, which has a good potential to locate one human action. Assuming each action is performed by a human with meaningful motion, both appearance and motion cues are utilized to measure the ac-tionness of the video tubes. After picking those spatiotem-poral paths of high actionness scores, our action proposal generation is formulated as a maximum set coverage problem, where greedy search is performed to select a set of action proposals that can maximize the overall actionness score. Compared with existing action proposal approaches, our action proposals do not rely on video segmentation and can be generated in nearly real-time. Experimental results on two challenging datasets, MSRII and UCF 101, validate the superior performance of our action proposals as well as competitive results on action detection and search. | Output Entities:
{"Generic": ["datasets"], "Material": ["unconstrained videos", "UCF 101", "video tubes", "MSRII"], "Method": ["action proposal approaches", "video segmentation", "greedy search"], "Metric": ["actionness score", "actionness scores", "ac-tionness"], "OtherScientificTerm": ["appearance and motion cues", "generic action proposals", "spatio-temporal video tube", "action proposals", "human action", "temporal series of spatial bounding boxes", "action proposal"], "Task": ["maximum set coverage problem", "action proposal generation", "action detection and search"]} | science.scierc_ner.44 | [
{
"content": "Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.\n\nInput Abstract:\n\nIn this paper we target at generating generic action proposals in unconstrained videos. Each action proposal corresponds to a temporal series of spatial bounding boxes, i.e., a spatio-temporal video tube, which has a good potential to locate one human action. Assuming each action is performed by a human with meaningful motion, both appearance and motion cues are utilized to measure the ac-tionness of the video tubes. After picking those spatiotem-poral paths of high actionness scores, our action proposal generation is formulated as a maximum set coverage problem, where greedy search is performed to select a set of action proposals that can maximize the overall actionness score. Compared with existing action proposal approaches, our action proposals do not rely on video segmentation and can be generated in nearly real-time. Experimental results on two challenging datasets, MSRII and UCF 101, validate the superior performance of our action proposals as well as competitive results on action detection and search.",
"role": "user"
},
{
"content": "Output Entities:\n\n{\"Generic\": [\"datasets\"], \"Material\": [\"unconstrained videos\", \"UCF 101\", \"video tubes\", \"MSRII\"], \"Method\": [\"action proposal approaches\", \"video segmentation\", \"greedy search\"], \"Metric\": [\"actionness score\", \"actionness scores\", \"ac-tionness\"], \"OtherScientificTerm\": [\"appearance and motion cues\", \"generic action proposals\", \"spatio-temporal video tube\", \"action proposals\", \"human action\", \"temporal series of spatial bounding boxes\", \"action proposal\"], \"Task\": [\"maximum set coverage problem\", \"action proposal generation\", \"action detection and search\"]}",
"role": "assistant"
}
] |
science.discomat_te | | Sample | Nitrogen content | Impurities (ppm) | Impurities (ppm) | Impurities (ppm) | Impurities (ppm) | Impurities (ppm) | Impurities (ppm) | Impurities (ppm) |
| | | Al | Li | Na | K | Ca | Fe | OH |
| N-doped | 3000 | <0.01 | <0.01 | <0.01 | <0.01 | <0.01 | <0.01 | 0 |
| VAD | 0 | <0.01 | <0.01 | <0.01 | <0.01 | <0.01 | <0.01 | 100 |
| Type I | 0 | 6.9 | 0.1 | 0.2 | 0.06 | 0.4 | 0.1 | 10 |
| Type II | 0 | 7.3 | 0.1 | 0.1 | 0.03 | 0.4 | 0.1 | 300 |
| Type III | 0 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05 | 800 |
Caption: Purity of nitrogen-incorporated silica glasses and representative pure silica glasses
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "3000", "type": "Other"}
{"value": "0.01", "type": "Other"}
{"value": "0.01", "type": "Other"}
{"value": "0.01", "type": "Other"}
{"value": "0.01", "type": "Other"}
{"value": "0.01", "type": "Other"}
{"value": "0.01", "type": "Other"}
{"value": "0", "type": "Other"}
{"value": "0", "type": "Other"}
{"value": "0.01", "type": "Other"}
{"value": "0.01", "type": "Other"}
{"value": "0.01", "type": "Other"}
{"value": "0.01", "type": "Other"}
{"value": "0.01", "type": "Other"}
{"value": "0.01", "type": "Other"}
{"value": "100", "type": "Other"}
{"value": "0", "type": "Other"}
{"value": "6.9", "type": "Other"}
{"value": "0.1", "type": "Other"}
{"value": "0.2", "type": "Other"}
{"value": "0.06", "type": "Other"}
{"value": "0.4", "type": "Other"}
{"value": "0.1", "type": "Other"}
{"value": "10", "type": "Other"}
{"value": "0", "type": "Other"}
{"value": "7.3", "type": "Other"}
{"value": "0.1", "type": "Other"}
{"value": "0.1", "type": "Other"}
{"value": "0.03", "type": "Other"}
{"value": "0.4", "type": "Other"}
{"value": "0.1", "type": "Other"}
{"value": "300", "type": "Other"}
{"value": "0", "type": "Other"}
{"value": "0.05", "type": "Other"}
{"value": "0.05", "type": "Other"}
{"value": "0.05", "type": "Other"}
{"value": "0.05", "type": "Other"}
{"value": "0.05", "type": "Other"}
{"value": "0.05", "type": "Other"}
{"value": "800", "type": "Other"} | science.discomat_te.296 | [
{
"content": "| Sample | Nitrogen content | Impurities (ppm) | Impurities (ppm) | Impurities (ppm) | Impurities (ppm) | Impurities (ppm) | Impurities (ppm) | Impurities (ppm) |\n| | | Al | Li | Na | K | Ca | Fe | OH |\n| N-doped | 3000 | <0.01 | <0.01 | <0.01 | <0.01 | <0.01 | <0.01 | 0 |\n| VAD | 0 | <0.01 | <0.01 | <0.01 | <0.01 | <0.01 | <0.01 | 100 |\n| Type I | 0 | 6.9 | 0.1 | 0.2 | 0.06 | 0.4 | 0.1 | 10 |\n| Type II | 0 | 7.3 | 0.1 | 0.1 | 0.03 | 0.4 | 0.1 | 300 |\n| Type III | 0 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05 | 800 |\n\nCaption: Purity of nitrogen-incorporated silica glasses and representative pure silica glasses\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"3000\", \"type\": \"Other\"}\n{\"value\": \"0.01\", \"type\": \"Other\"}\n{\"value\": \"0.01\", \"type\": \"Other\"}\n{\"value\": \"0.01\", \"type\": \"Other\"}\n{\"value\": \"0.01\", \"type\": \"Other\"}\n{\"value\": \"0.01\", \"type\": \"Other\"}\n{\"value\": \"0.01\", \"type\": \"Other\"}\n{\"value\": \"0\", \"type\": \"Other\"}\n{\"value\": \"0\", \"type\": \"Other\"}\n{\"value\": \"0.01\", \"type\": \"Other\"}\n{\"value\": \"0.01\", \"type\": \"Other\"}\n{\"value\": \"0.01\", \"type\": \"Other\"}\n{\"value\": \"0.01\", \"type\": \"Other\"}\n{\"value\": \"0.01\", \"type\": \"Other\"}\n{\"value\": \"0.01\", \"type\": \"Other\"}\n{\"value\": \"100\", \"type\": \"Other\"}\n{\"value\": \"0\", \"type\": \"Other\"}\n{\"value\": \"6.9\", \"type\": \"Other\"}\n{\"value\": \"0.1\", \"type\": \"Other\"}\n{\"value\": \"0.2\", \"type\": \"Other\"}\n{\"value\": \"0.06\", \"type\": \"Other\"}\n{\"value\": \"0.4\", \"type\": \"Other\"}\n{\"value\": \"0.1\", \"type\": \"Other\"}\n{\"value\": \"10\", \"type\": \"Other\"}\n{\"value\": \"0\", \"type\": \"Other\"}\n{\"value\": \"7.3\", \"type\": \"Other\"}\n{\"value\": \"0.1\", \"type\": \"Other\"}\n{\"value\": \"0.1\", \"type\": \"Other\"}\n{\"value\": \"0.03\", \"type\": \"Other\"}\n{\"value\": \"0.4\", \"type\": \"Other\"}\n{\"value\": \"0.1\", \"type\": \"Other\"}\n{\"value\": \"300\", \"type\": \"Other\"}\n{\"value\": \"0\", \"type\": \"Other\"}\n{\"value\": \"0.05\", \"type\": \"Other\"}\n{\"value\": \"0.05\", \"type\": \"Other\"}\n{\"value\": \"0.05\", \"type\": \"Other\"}\n{\"value\": \"0.05\", \"type\": \"Other\"}\n{\"value\": \"0.05\", \"type\": \"Other\"}\n{\"value\": \"0.05\", \"type\": \"Other\"}\n{\"value\": \"800\", \"type\": \"Other\"}",
"role": "assistant"
}
] |
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.BACKGROUND: New graduates report intense stress during the transition from school to their first work settings. Managing this transition is important to reduce turnover rates. This study compared the effects of an externship program and a corporate-academic cooperation program on enhancing junior college students' nursing competence and retention rates in the first 3 months and 1 year of initial employment.
ABSTRACT.METHODS: This two-phase study adopted a pretest and posttest quasi-experimental design. All participants were graduating students drawn from a 5-year junior nursing college in Taiwan. There were 19 and 24 students who participated in the phase I externship program and phase II corporate-academic cooperation program, respectively. The nursing competence of the students had to be evaluated by mentors within 48 hours of practicum training and after practicum training. The retention rate was also surveyed at 3 months and 1 year after beginning employment.
ABSTRACT.RESULTS: Students who participated in the corporate-academic cooperation program achieved a statistically significant improvement in nursing competence and retention rates relative to those who participated in the externship program (p < 0.01 and p < 0.05, respectively).
ABSTRACT.CONCLUSIONS: The corporate-academic cooperation program facilitates the transition of junior college nursing students into independent staff nurses, enhances their nursing competence, and boosts retention rates.
Extracted Data: | [["phase I externship program", "phase II corporate-academic cooperation program", "improvement in nursing", "significantly increased", "Students who participated in the corporate-academic cooperation program achieved a statistically significant improvement in nursing competence and retention rates relative to those who participated in the externship program (p \u0026lt; 0.01 and p \u0026lt; 0.05, respectively)."]] | science.evidence_inference.233 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: New graduates report intense stress during the transition from school to their first work settings. Managing this transition is important to reduce turnover rates. This study compared the effects of an externship program and a corporate-academic cooperation program on enhancing junior college students' nursing competence and retention rates in the first 3 months and 1 year of initial employment.\n\nABSTRACT.METHODS: This two-phase study adopted a pretest and posttest quasi-experimental design. All participants were graduating students drawn from a 5-year junior nursing college in Taiwan. There were 19 and 24 students who participated in the phase I externship program and phase II corporate-academic cooperation program, respectively. The nursing competence of the students had to be evaluated by mentors within 48 hours of practicum training and after practicum training. The retention rate was also surveyed at 3 months and 1 year after beginning employment.\n\nABSTRACT.RESULTS: Students who participated in the corporate-academic cooperation program achieved a statistically significant improvement in nursing competence and retention rates relative to those who participated in the externship program (p < 0.01 and p < 0.05, respectively).\n\nABSTRACT.CONCLUSIONS: The corporate-academic cooperation program facilitates the transition of junior college nursing students into independent staff nurses, enhances their nursing competence, and boosts retention rates.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"phase I externship program\", \"phase II corporate-academic cooperation program\", \"improvement in nursing\", \"significantly increased\", \"Students who participated in the corporate-academic cooperation program achieved a statistically significant improvement in nursing competence and retention rates relative to those who participated in the externship program (p \\u0026lt; 0.01 and p \\u0026lt; 0.05, respectively).\"]]",
"role": "assistant"
}
] |
science.discomat_te | | Sample | Sample appearance | I 2o |
| As-quenched | Transparent | 0.03 |
| 620degC-1h | Transparent | 0.05 |
| 632degC-16h | Transparent | 0.2 |
| 680degC-1h | Opalescent | 0.2 |
| 700degC-1h | Non-transparent | 0.1 |
| 727degC-1h | Non-transparent | 0.05 |
| 800degC-1h | Non-transparent | 0 |
Caption: The SHG values (I 2o ) compared to a-quartz powdered reference sample of 80KTP-20KBS glass samples. The reproducibility is inside +-5%
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "0.03", "type": "Other"}
{"value": "0.05", "type": "Other"}
{"value": "0.2", "type": "Other"}
{"value": "0.2", "type": "Other"}
{"value": "0.1", "type": "Other"}
{"value": "0.05", "type": "Other"}
{"value": "0", "type": "Other"} | science.discomat_te.110 | [
{
"content": "| Sample | Sample appearance | I 2o |\n| As-quenched | Transparent | 0.03 |\n| 620degC-1h | Transparent | 0.05 |\n| 632degC-16h | Transparent | 0.2 |\n| 680degC-1h | Opalescent | 0.2 |\n| 700degC-1h | Non-transparent | 0.1 |\n| 727degC-1h | Non-transparent | 0.05 |\n| 800degC-1h | Non-transparent | 0 |\n\nCaption: The SHG values (I 2o ) compared to a-quartz powdered reference sample of 80KTP-20KBS glass samples. The reproducibility is inside +-5%\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"0.03\", \"type\": \"Other\"}\n{\"value\": \"0.05\", \"type\": \"Other\"}\n{\"value\": \"0.2\", \"type\": \"Other\"}\n{\"value\": \"0.2\", \"type\": \"Other\"}\n{\"value\": \"0.1\", \"type\": \"Other\"}\n{\"value\": \"0.05\", \"type\": \"Other\"}\n{\"value\": \"0\", \"type\": \"Other\"}",
"role": "assistant"
}
] |
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
Indeed, Henaff et. al. <cite>[18]</cite> found that if <cite>they</cite> tie the key vectors to entities in the text then the memories contain information about the state of those entities. | ["Background"] | science.multicite_intent_classification.1357
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
Removal of H3K9me3 by ectopic expression of other H3K9 demethylases decreases reprogramming efficiency in SCNT experiments.
Research Abstract:
Aberrant epigenetic reprogramming often results in developmental defects in somatic cell nuclear transfer (SCNT) embryos during embryonic genome activation (EGA).
Bovine eight-cell SCNT embryos exhibit global hypermethylation of histone H3 lysine 9 tri- and di-methylation (H3K9me3/2), but the intrinsic reason for this remains elusive.
Here, we provide evidence that two H3K9 demethylase genes, lysine-specific demethylase 4D (KDM4D) and 4E (KDM4E), are related to active H3K9me3/2 demethylation in in vitro fertilized (IVF) embryos and are deficiently expressed in cloned embryos at the time of EGA.
Moreover, KDM4E plays a more crucial role in IVF and SCNT embryonic development, and overexpression of KDM4E can restore the global transcriptome, improve blastocyst formation and increase the cloning efficiency of SCNT embryos.
Our results thereby indicate that KDM4E can function as a crucial epigenetic regulator of EGA and as an internal defective factor responsible for persistent H3K9me3/2 barriers to SCNT-mediated reprogramming.
Furthermore, we show that interactions between RNA and KDM4E are essential for H3K9 demethylation during EGA.
These observations advance the understanding of incomplete nuclear reprogramming and are of great importance for transgenic cattle procreation. | {"verdict": "CONTRADICT", "evidence": ["Moreover, KDM4E plays a more crucial role in IVF and SCNT embryonic development, and overexpression of KDM4E can restore the global transcriptome, improve blastocyst formation and increase the cloning efficiency of SCNT embryos."]} | science.scifact_entailment.676
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
Our A-bots and Q-bots have the same architecture and hyperparameter variations as in <cite>Kottur et al. (2017)</cite> , but with our cultural transmission training procedure and some other differences identified below. | ["Similarities", "Differences"] | science.multicite_intent_classification.634
science.bioasq_list_qa | Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.
Context: the cannonical polyadenylation signal sequence AATAAA
Two AATAAA motifs are coded in the last exon of this gene
The results demonstrate that the intact AAUAAA is not required for RNA polyadenylation but is required for the cleavage step preceding polyadenylation to occur efficiently
the early poly(A) consensus signal was mutated from AAUAAA to UGUAAA
Functional polyadenylation [poly(A)] sites consist of two sequence elements, the AAUAAA and G/U box signals, that closely flank the site of mRNA 3'-end formation
the appropriate spacing of the AAUAAA and G/U box signals is critical for poly(A) site function
Two AATAAA hexanucleotide sequences are present in the 2092 nucleotide interval. The first one functions as the major polyA signal
Question: What is the most prominent sequence consensus for the polyadenylation site? | ["AATAAA", "AAUAAA"] | science.bioasq_list_qa.395
science.scierc_ner | Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.
Input Abstract:
In this work, we present a technique for robust estimation, which by explicitly incorporating the inherent uncertainty of the estimation procedure, results in a more efficient robust estimation algorithm. In addition, we build on recent work in randomized model verification, and use this to characterize the ` non-randomness ' of a solution. The combination of these two strategies results in a robust estimation procedure that provides a significant speed-up over existing RANSAC techniques, while requiring no prior information to guide the sampling process. In particular, our algorithm requires, on average, 3-10 times fewer samples than standard RANSAC, which is in close agreement with theoretical predictions. The efficiency of the algorithm is demonstrated on a selection of geometric estimation problems. | Output Entities:
{"Generic": ["technique", "strategies"], "Material": [], "Method": ["efficient robust estimation algorithm", "RANSAC techniques"], "Metric": [], "OtherScientificTerm": ["theoretical predictions", "inherent uncertainty of the estimation procedure", "prior information", "sampling process"], "Task": ["robust estimation", "randomized model verification", "geometric estimation problems"]} | science.scierc_ner.5 | [
{
"content": "Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.\n\nInput Abstract:\n\nIn this work, we present a technique for robust estimation, which by explicitly incorporating the inherent uncertainty of the estimation procedure, results in a more efficient robust estimation algorithm. In addition, we build on recent work in randomized model verification, and use this to characterize the ` non-randomness ' of a solution. The combination of these two strategies results in a robust estimation procedure that provides a significant speed-up over existing RANSAC techniques, while requiring no prior information to guide the sampling process. In particular, our algorithm requires, on average, 3-10 times fewer samples than standard RANSAC, which is in close agreement with theoretical predictions. The efficiency of the algorithm is demonstrated on a selection of geometric estimation problems.",
"role": "user"
},
{
"content": "Output Entities:\n\n{\"Generic\": [\"technique\", \"strategies\"], \"Material\": [], \"Method\": [\"efficient robust estimation algorithm\", \"RANSAC techniques\"], \"Metric\": [], \"OtherScientificTerm\": [\"theoretical predictions\", \"inherent uncertainty of the estimation procedure\", \"prior information\", \"sampling process\"], \"Task\": [\"robust estimation\", \"randomized model verification\", \"geometric estimation problems\"]}",
"role": "assistant"
}
] |
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
In particular, TR-R13 takes the output predictions of matrix factorization, and combines it with an entity-type aware RESCAL model . 3 Tensor factorization approaches perform poorly on this data. We present results for Model E, but other formulations such as PARAFAC, TransE, RESCAL, and Tucker2 achieved even lower accuracy; this is consistent with the results in . | ["Differences"] | science.multicite_intent_classification.1194
science.bioasq_list_qa | Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.
Context: In 19 of 142 patients (13%), the PLN founder mutation c.40_42delAGA (p.Arg14del) was identified. PLN mutation carriers more often had low-voltage electrocardiograms (p = 0.004), inverted T waves in leads V4 to V6 (p <0.001), and additional structural (p = 0.007) or functional (p = 0.017) left ventricular impairment, whereas desmosomal mutation carriers had more solitary right ventricular abnormalities.
Recently, we showed that the c.40_42delAGA (p.Arg14del) mutation in the phospholamban (PLN) gene can be identified in 10-15 % of Dutch patients with dilated cardiomyopathy or arrhythmogenic cardiomyopathy.
PLN mutation R14del was identified in 12 (12 %) ARVC patients and in 39 (15 %) DCM patients.
The PLN R14del founder mutation is present in a substantial number of patients clinically diagnosed with DCM or ARVC. R14del+ patients diagnosed with DCM showed an arrhythmogenic phenotype, and SCD at young age can be the presenting symptom. These findings support the concept of 'arrhythmogenic cardiomyopathy'.
Herein we reveal mechanistic insight into how four hereditary mutants of phospholamban, Arg(9) to Cys, Arg(9) to Leu, Arg(9) to His, and Arg(14) deletion, alter regulation of SERCA. Deletion of Arg(14) disrupts the protein kinase A recognition motif, which abrogates phospholamban phosphorylation and results in constitutive SERCA inhibition. Mutation of Arg(9) causes more complex changes in function, where hydrophobic substitutions such as cysteine and leucine eliminate both SERCA inhibition and phospholamban phosphorylation, whereas an aromatic substitution such as histidine selectively disrupts phosphorylation. We demonstrate that the role of Arg(9) in phospholamban function is multifaceted: it is important for inhibition of SERCA, it increases the efficiency of phosphorylation, and it is critical for protein kinase A recognition in the context of the phospholamban pentamer. Given the synergistic consequences on contractility, it is not surprising that the mutants cause lethal, hereditary dilated cardiomyopathy.
Phospholamban (PLN) is a key regulator of SR and cardiac function, and PLN mutations in humans have been associated with dilated cardiomyopathy (DCM). We previously reported the deletion of the highly conserved amino acid residue arginine 14 (nucleic acids 39, 40 and 41) in DCM patients.
To assess the function of this mutant PLN, we introduced the PLN-R14Del in cardiac myocytes of the PLN null mouse.
Thus, human PLN-R14Del is misrouted to the sarcolemma, in the absence of endogenous PLN, and alters NKA activity, leading to cardiac remodeling.
Mutations in the gene encoding PLN have been associated with idiopathic dilated cardiomyopathy; however, no systematic search for PLN mutations in heart failure has been conducted.
Two patients presented a G-T missense mutation at the G26 nucleotide, which encodes an Arg-Leu substitution at codon 9 (R9L).
One patient presented a G-A missense mutation at the same nucleotide, which encodes an Arg-His substitution at codon 9 (R9H). The fourth affected patient presented a T-G nonsense mutation at the nucleotide 116, substituting a termination codon for Leu-39 (L39stop).
The genetic analysis was focused on R9C mutation with the ability to block PLN phosphorylation leading to chronic inhibition of SERCA2a activity. Another analysed mutation causing the alteration of PLN level in cells was related to the substitution of a leucine residue at position 39 with a premature stop codon (L39X).
A missense mutation in PLN cytoplasmic domain (R9C) triggers dilated cardiomyopathy in humans, leading to premature death.
These effects are enhanced under oxidizing conditions, suggesting that oxidative stress may exacerbate the cardiotoxic effects of the PLN(R9C) mutant.
We identified 1 family with a deletion of arginine 14 in the PLN. Interestingly, unlike other individuals reported with the identical PLN mutation, these individuals were not diagnosed with dilated cardiomyopathy until their seventh decade when they were only mildly symptomatic with congestive heart failure.
Phospholamban R14 deletion results in late-onset, mild, hereditary dilated cardiomyopathy.
Through genetic screening of dilated cardiomyopathy patients, we identified a previously uncharacterized deletion of arginine 14 (PLN-R14Del) in the coding region of the phospholamban (PLN) gene in a large family with hereditary heart failure. No homozygous individuals were identified. By middle age, heterozygous individuals developed left ventricular dilation, contractile dysfunction, and episodic ventricular arrhythmias, with overt heart failure in some cases.
Transgenic mice overexpressing the mutant PLN-R14Del recapitulated human cardiomyopathy exhibiting similar histopathologic abnormalities and premature death.
A mutation in the human phospholamban gene, deleting arginine 14, results in lethal, hereditary cardiomyopathy.
Here, a T116G point mutation, substituting a termination codon for Leu-39 (L39stop), was identified in two families with hereditary heart failure. The heterozygous individuals exhibited hypertrophy without diminished contractile performance. Strikingly, both individuals homozygous for L39stop developed dilated cardiomyopathy and heart failure, requiring cardiac transplantation at ages 16 and 27.
Here we report that an inherited human dilated cardiomyopathy with refractory congestive heart failure is caused by a dominant Arg --> Cys missense mutation at residue 9 (R9C) in phospholamban (PLN), a transmembrane phosphoprotein that inhibits the cardiac sarcoplasmic reticular Ca2+-adenosine triphosphatase (SERCA2a) pump. Transgenic PLN(R9C) mice recapitulated human heart failure with premature death. Cellular and biochemical studies revealed that, unlike wild-type PLN, PLN(R9C) did not directly inhibit SERCA2a. Rather, PLN(R9C) trapped protein kinase A (PKA), which blocked PKA-mediated phosphorylation of wild-type PLN and in turn delayed decay of calcium transients in myocytes.
Phospholamban R14del mutation in patients diagnosed with dilated cardiomyopathy or arrhythmogenic right ventricular cardiomyopathy: evidence supporting the concept of arrhythmogenic cardiomyopathy.
The PLN -42 C>G mutation was found in one patient with late onset familial apical hypertrophic cardiomyopathy
Question: Which mutations of phopspholamban have been found in patients with cardiomyopathy? | ["c.40_42delAGA(p.Arg14del)", "PLN L39stop", "Arg(9) to Cys", "Arg(9) to Leu", "Arg(9) to His"] | science.bioasq_list_qa.39
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
A country's Vaccine Alliance (GAVI) eligibility is not indictivate of accelerated adoption of the Hub vaccine.
Research Abstract:
BACKGROUND Adoption of new and underutilized vaccines by national immunization programs is an essential step towards reducing child mortality.
Policy decisions to adopt new vaccines in high mortality countries often lag behind decisions in high-income countries.
Using the case of Haemophilus influenzae type b (Hib) vaccine, this paper endeavors to explain these delays through the analysis of country-level economic, epidemiological, programmatic and policy-related factors, as well as the role of the Global Alliance for Vaccines and Immunisation (GAVI Alliance).
METHODS AND FINDINGS Data for 147 countries from 1990 to 2007 were analyzed in accelerated failure time models to identify factors that are associated with the time to decision to adopt Hib vaccine.
In multivariable models that control for Gross National Income, region, and burden of Hib disease, the receipt of GAVI support speeded the time to decision by a factor of 0.37 (95% CI 0.18-0.76), or 63%.
The presence of two or more neighboring country adopters accelerated decisions to adopt by a factor of 0.50 (95% CI 0.33-0.75).
For each 1% increase in vaccine price, decisions to adopt are delayed by a factor of 1.02 (95% CI 1.00-1.04).
Global recommendations and local studies were not associated with time to decision.
CONCLUSIONS This study substantiates previous findings related to vaccine price and presents new evidence to suggest that GAVI eligibility is associated with accelerated decisions to adopt Hib vaccine.
The influence of neighboring country decisions was also highly significant, suggesting that approaches to support the adoption of new vaccines should consider supply- and demand-side factors. | {"verdict": "CONTRADICT", "evidence": ["CONCLUSIONS This study substantiates previous findings related to vaccine price and presents new evidence to suggest that GAVI eligibility is associated with accelerated decisions to adopt Hib vaccine."]} | science.scifact_entailment.22
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
Mouse models can be generated using "artificial spermatids."
Research Abstract:
Haploid cells are amenable for genetic analysis.
Recent success in the derivation of mouse haploid embryonic stem cells (haESCs) via parthenogenesis has enabled genetic screening in mammalian cells.
However, successful generation of live animals from these haESCs, which is needed to extend the genetic analysis to the organism level, has not been achieved.
Here, we report the derivation of haESCs from androgenetic blastocysts.
These cells, designated as AG-haESCs, partially maintain paternal imprints, express classical ESC pluripotency markers, and contribute to various tissues, including the germline, upon injection into diploid blastocysts.
Strikingly, live mice can be obtained upon injection of AG-haESCs into MII oocytes, and these mice bear haESC-carried genetic traits and develop into fertile adults.
Furthermore, gene targeting via homologous recombination is feasible in the AG-haESCs.
Our results demonstrate that AG-haESCs can be used as a genetically tractable fertilization agent for the production of live animals via injection into oocytes. | {"verdict": "SUPPORT", "evidence": ["Strikingly, live mice can be obtained upon injection of AG-haESCs into MII oocytes, and these mice bear haESC-carried genetic traits and develop into fertile adults."]} | science.scifact_entailment.520
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
The portions that are pre-trained are the same parts of the architecture as in<cite> Ammanabrolu and Riedl (2019)</cite> . | ["Similarities", "Uses"] | science.multicite_intent_classification.450
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.BACKGROUND: The improper control of self-care in hypertension imposes a lot of financial burden on the health-care system. On the other hand, the importance of participatory care and high effectiveness of self-management programs have been confirmed. This study was aimed to examine the effect of an educational intervention on self-efficacy, self-care behaviors and blood pressure (BP) of hypertensive obese or overweight women in the primary health-care setting in Isfahan, Iran.
ABSTRACT.METHODS: This randomized controlled trial was an educational intervention program. It was performed among 146 hypertensive women of 30-65 age range who referred to 6 health care centers of Isfahan that randomly assigned to a control and intervention groups. The interventional group participated in the 6 weekly sessions including exercises, weight control, medication adherence, and home self-monitoring based on goal setting, and promotion of self-efficacy. The control group received routine care from health-care center and any special intervention has been done for the control group. Output variables were analyzed after intervention, and 6-month follow-up.
ABSTRACT.RESULTS: There are no significant differences between age, weight, body mass index and BP and biochemical variables except lipids as well as behavioral factors at the baseline. After 6 months intervention self-efficacy (< 0.001) and physical activity (< 0.001) improvement of in the intervention group was significantly different in comparison with the control group. After 6 months, there was a significant reduction in systolic (P < 0.001) and diastolic BP (P = 0.010) in the intervention group.
ABSTRACT.CONCLUSION: Participatory method of education could help us to convince patients to have better self-care to control disease. Furthermore, since adherence to the treatment of hypertensive patients in our society is low, organizing such courses can teach essential knowledge and skills to lifestyle change and prevention of complications. Performing these courses is recommended for other chronic disease patients in health-care centers to assess self-management programs on self-care behavior.
Extracted Data: | [["6 weekly sessions including exercises, weight control, medication adherence, and home self-monitoring", "routine care from health-care center", "average of physical activity", "significantly increased", "After 6 months intervention self-efficacy (\u003c 0.001) and physical activity (\u003c 0.001) improvement of in the intervention group was significantly different in comparison with the control group."], ["6 weekly sessions including exercises, weight control, medication adherence, and home self-monitoring", "routine care from health-care center", "systolic and diastolic BP", "significantly decreased", "After 6 months, there was a significant reduction in systolic (P \u003c 0.001) and diastolic BP (P = 0.010) in the intervention group."]] | science.evidence_inference.65 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: The improper control of self-care in hypertension imposes a lot of financial burden on the health-care system. On the other hand, the importance of participatory care and high effectiveness of self-management programs have been confirmed. This study was aimed to examine the effect of an educational intervention on self-efficacy, self-care behaviors and blood pressure (BP) of hypertensive obese or overweight women in the primary health-care setting in Isfahan, Iran.\n\nABSTRACT.METHODS: This randomized controlled trial was an educational intervention program. It was performed among 146 hypertensive women of 30-65 age range who referred to 6 health care centers of Isfahan that randomly assigned to a control and intervention groups. The interventional group participated in the 6 weekly sessions including exercises, weight control, medication adherence, and home self-monitoring based on goal setting, and promotion of self-efficacy. The control group received routine care from health-care center and any special intervention has been done for the control group. Output variables were analyzed after intervention, and 6-month follow-up.\n\nABSTRACT.RESULTS: There are no significant differences between age, weight, body mass index and BP and biochemical variables except lipids as well as behavioral factors at the baseline. After 6 months intervention self-efficacy (< 0.001) and physical activity (< 0.001) improvement of in the intervention group was significantly different in comparison with the control group. After 6 months, there was a significant reduction in systolic (P < 0.001) and diastolic BP (P = 0.010) in the intervention group.\n\nABSTRACT.CONCLUSION: Participatory method of education could help us to convince patients to have better self-care to control disease. Furthermore, since adherence to the treatment of hypertensive patients in our society is low, organizing such courses can teach essential knowledge and skills to lifestyle change and prevention of complications. Performing these courses is recommended for other chronic disease patients in health-care centers to assess self-management programs on self-care behavior.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"6 weekly sessions including exercises, weight control, medication adherence, and home self-monitoring\", \"routine care from health-care center\", \"average of physical activity\", \"significantly increased\", \"After 6 months intervention self-efficacy (\\u003c 0.001) and physical activity (\\u003c 0.001) improvement of in the intervention group was significantly different in comparison with the control group.\"], [\"6 weekly sessions including exercises, weight control, medication adherence, and home self-monitoring\", \"routine care from health-care center\", \"systolic and diastolic BP\", \"significantly decreased\", \"After 6 months, there was a significant reduction in systolic (P \\u003c 0.001) and diastolic BP (P = 0.010) in the intervention group.\"]]",
"role": "assistant"
}
] |
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
AMP-activated protein kinase (AMPK) activation reduces inflammation-related fibrosis in the lungs.
Research Abstract:
Fibrosis is a pathological result of a dysfunctional repair response to tissue injury and occurs in a number of organs, including the lungs1.
Cellular metabolism regulates tissue repair and remodelling responses to injury2-4.
AMPK is a critical sensor of cellular bioenergetics and controls the switch from anabolic to catabolic metabolism5.
However, the role of AMPK in fibrosis is not well understood.
Here, we demonstrate that in humans with idiopathic pulmonary fibrosis (IPF) and in an experimental mouse model of lung fibrosis, AMPK activity is lower in fibrotic regions associated with metabolically active and apoptosis-resistant myofibroblasts.
Pharmacological activation of AMPK in myofibroblasts from lungs of humans with IPF display lower fibrotic activity, along with enhanced mitochondrial biogenesis and normalization of sensitivity to apoptosis.
In a bleomycin model of lung fibrosis in mice, metformin therapeutically accelerates the resolution of well-established fibrosis in an AMPK-dependent manner.
These studies implicate deficient AMPK activation in non-resolving, pathologic fibrotic processes, and support a role for metformin (or other AMPK activators) to reverse established fibrosis by facilitating deactivation and apoptosis of myofibroblasts. | {"verdict": "SUPPORT", "evidence": ["Pharmacological activation of AMPK in myofibroblasts from lungs of humans with IPF display lower fibrotic activity, along with enhanced mitochondrial biogenesis and normalization of sensitivity to apoptosis.", "In a bleomycin model of lung fibrosis in mice, metformin therapeutically accelerates the resolution of well-established fibrosis in an AMPK-dependent manner.", "These studies implicate deficient AMPK activation in non-resolving, pathologic fibrotic processes, and support a role for metformin (or other AMPK activators) to reverse established fibrosis by facilitating deactivation and apoptosis of myofibroblasts."]} | science.scifact_entailment.38 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nAMP-activated protein kinase (AMPK) activation reduces inflammation-related fibrosis in the lungs.\n\nResearch Abstract:\nFibrosis is a pathological result of a dysfunctional repair response to tissue injury and occurs in a number of organs, including the lungs1.\nCellular metabolism regulates tissue repair and remodelling responses to injury2-4.\nAMPK is a critical sensor of cellular bioenergetics and controls the switch from anabolic to catabolic metabolism5.\nHowever, the role of AMPK in fibrosis is not well understood.\nHere, we demonstrate that in humans with idiopathic pulmonary fibrosis (IPF) and in an experimental mouse model of lung fibrosis, AMPK activity is lower in fibrotic regions associated with metabolically active and apoptosis-resistant myofibroblasts.\nPharmacological activation of AMPK in myofibroblasts from lungs of humans with IPF display lower fibrotic activity, along with enhanced mitochondrial biogenesis and normalization of sensitivity to apoptosis.\nIn a bleomycin model of lung fibrosis in mice, metformin therapeutically accelerates the resolution of well-established fibrosis in an AMPK-dependent manner.\nThese studies implicate deficient AMPK activation in non-resolving, pathologic fibrotic processes, and support a role for metformin (or other AMPK activators) to reverse established fibrosis by facilitating deactivation and apoptosis of myofibroblasts.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"Pharmacological activation of AMPK in myofibroblasts from lungs of humans with IPF display lower fibrotic activity, along with enhanced mitochondrial biogenesis and normalization of sensitivity to apoptosis.\", \"In a bleomycin model of lung fibrosis in mice, metformin therapeutically accelerates the resolution of well-established fibrosis in an AMPK-dependent manner.\", \"These studies implicate deficient AMPK activation in non-resolving, pathologic fibrotic processes, and support a role for metformin (or other AMPK activators) to reverse established fibrosis by facilitating deactivation and apoptosis of myofibroblasts.\"]}",
"role": "assistant"
}
] |
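Illustrative aside (not part of the dataset): the `science.scifact_entailment` row above fixes a small output contract — a JSON object whose `verdict` is a single label and whose `evidence` entries must be sentences copied verbatim from the abstract. The sketch below shows one way a post-processing step might validate a model answer against that contract; the labels other than SUPPORT and the helper name are assumptions, since only SUPPORT appears in this row.

```python
import json

# Assumed label set: only "SUPPORT" is observed above; CONTRADICT and NEI are guesses
# at the task's remaining options.
ALLOWED_VERDICTS = {"SUPPORT", "CONTRADICT", "NEI"}

def check_scifact_output(raw, abstract_sentences):
    """Parse a JSON answer and verify it matches the verdict/evidence schema."""
    obj = json.loads(raw)
    if set(obj) != {"verdict", "evidence"}:
        raise ValueError(f"unexpected keys: {sorted(obj)}")
    if obj["verdict"] not in ALLOWED_VERDICTS:
        raise ValueError(f"unknown verdict: {obj['verdict']!r}")
    # Evidence must be quoted verbatim from the abstract sentences.
    missing = [s for s in obj["evidence"] if s not in abstract_sentences]
    if missing:
        raise ValueError(f"{len(missing)} evidence sentence(s) not found in the abstract")
    return obj

abstract = [
    "However, the role of AMPK in fibrosis is not well understood.",
    "In a bleomycin model of lung fibrosis in mice, metformin therapeutically "
    "accelerates the resolution of well-established fibrosis in an AMPK-dependent manner.",
]
answer = json.dumps({"verdict": "SUPPORT", "evidence": [abstract[1]]})
print(check_scifact_output(answer, abstract)["verdict"])  # SUPPORT
```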
science.bioasq_list_qa | Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.
Context: Due to side effects such as urinary tract and genital infections and decrease in blood pressure, proper patient selection for drug initiation and close monitoring will be important.
Urogenital side-effects related to the enhanced glycosuria can be troublesome, yet seldom lead to discontinuation.
Promotion of urinary loss of glucose by SGLT2 inhibitors embodies a new principle of control in type 2 diabetes that has several advantages with some urogenital side-effects, both of which are evaluated in this review.
Promotion of urinary loss of glucose by SGLT2 inhibitors embodies a new principle of control in type 2 diabetes that has several advantages with some urogenital side-effects, both of which are evaluated in this review
Effect of SGLT2 inhibitors in a murine model of urinary tract infection with Candida albicans.
Promotion of urinary loss of glucose by SGLT2 inhibitors embodies a new principle of control in type 2 diabetes that has several advantages with some urogenital side-effects,
Urogenital side-effects related to the enhanced glycosuria can be troublesome, yet seldom lead to discontinuation.
Promotion of urinary loss of glucose by SGLT2 inhibitors embodies a new principle of control in type 2 diabetes that has several advantages with some urogenital side-effects, both of which are evaluated in this review.
There are some side effects that warrant further investigation and establishing whether SGLT2 inhibition provides a renal benefit relies on future long-term studies with specific renal end-points.
It should be noted that the currently approved agents have side effects that include an increased incidence of genital infections, predominantly in women.
Question: List side effects of SGLT2 inhibitors? | ["urinary tract infections", "genital infections", "low blood pressure"] | science.bioasq_list_qa.20 | [
{
"content": "Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.\n\nContext: Due to side effects such as urinary tract and genital infections and decrease in blood pressure, proper patient selection for drug initiation and close monitoring will be important.\nUrogenital side-effects related to the enhanced glycosuria can be troublesome, yet seldom lead to discontinuation.\nPromotion of urinary loss of glucose by SGLT2 inhibitors embodies a new principle of control in type 2 diabetes that has several advantages with some urogenital side-effects, both of which are evaluated in this review.\nPromotion of urinary loss of glucose by SGLT2 inhibitors embodies a new principle of control in type 2 diabetes that has several advantages with some urogenital side-effects, both of which are evaluated in this review\nEffect of SGLT2 inhibitors in a murine model of urinary tract infection with Candida albicans.\nPromotion of urinary loss of glucose by SGLT2 inhibitors embodies a new principle of control in type 2 diabetes that has several advantages with some urogenital side-effects,\nUrogenital side-effects related to the enhanced glycosuria can be troublesome, yet seldom lead to discontinuation.\nPromotion of urinary loss of glucose by SGLT2 inhibitors embodies a new principle of control in type 2 diabetes that has several advantages with some urogenital side-effects, both of which are evaluated in this review.\nThere are some side effects that warrant further investigation and establishing whether SGLT2 inhibition provides a renal benefit relies on future long-term studies with specific renal end-points.\nIt should be noted that the currently approved agents have side effects that include an increased incidence of genital infections, predominantly in women.\n\nQuestion: List side effects of SGLT2 inhibitors?",
"role": "user"
},
{
"content": "[\"urinary tract infections\", \"genital infections\", \"low blood pressure\"]",
"role": "assistant"
}
] |
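Illustrative aside (not part of the dataset): the BioASQ-style context in the row above repeats several sentences verbatim. A minimal way to collapse such duplicates before building a prompt — keeping first-seen order — is sketched below; the snippet list and function name are assumptions of the example, not the dataset's.

```python
def dedupe_snippets(snippets):
    """Drop repeated snippets, comparing on normalised whitespace and case."""
    seen = set()
    unique = []
    for s in snippets:
        key = " ".join(s.split()).lower()  # normalise for comparison only
        if key not in seen:
            seen.add(key)
            unique.append(s)
    return unique

context = [
    "Urogenital side-effects related to the enhanced glycosuria can be troublesome, "
    "yet seldom lead to discontinuation.",
    "Urogenital side-effects related to the enhanced glycosuria can be troublesome, "
    "yet seldom lead to discontinuation.",
    "Due to side effects such as urinary tract and genital infections and decrease in "
    "blood pressure, proper patient selection for drug initiation and close monitoring "
    "will be important.",
]
print(len(dedupe_snippets(context)))  # 2
```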
science.discomat_te | | | BaCMAS1 | BaCMAS2 | BaCMAS3 | BaCMAS4 |
| L th | 0.595 | 0.597 | 0.600 | 0.599 |
| r (gcm-3) | 2.90+-0.007 | 2.94+-0.003 | 3.03+-0.001 | 3.01+-0.002 |
| V m (cm3 mol-1) | 20.16+-0.04 | 20.31+-0.03 | 20.33+-0.01 | 20.40+-0.01 |
| V o (cm3 mol-1) | 13.48+-0.03 | 13.58+-0.02 | 13.67+-0.01 | 13.64+-0.01 |
| V e | 0.82+-0.04 | 0.75+-0.03 | 0.51+-0.01 | 0.54+-0.01 |
| T dg (K) | 927 | 950 | 954 | 915 |
| T c (K) | 1172 | 1180 | 1181 | 1157 |
| T p (K) | 1197 | 1204 | 1203 | 1177 |
| Avrami parameter, <n> | 2.03+-0.04 | 2.12+-0.02 | 2.10+-0.05 | 2.11+-0.02 |
| E c (kJmol-1) | 437.12 | 426.43 | 378.228 | 400.477 |
Caption: Properties of the glasses.
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "0.595", "type": "Other"}
{"value": "0.597", "type": "Other"}
{"value": "0.600", "type": "Other"}
{"value": "0.599", "type": "Other"}
{"value": "2.90", "type": "Other"}
{"value": "2.94", "type": "Other"}
{"value": "3.03", "type": "Other"}
{"value": "3.01", "type": "Other"}
{"value": "20.16", "type": "Other"}
{"value": "20.31", "type": "Other"}
{"value": "20.33", "type": "Other"}
{"value": "20.40", "type": "Other"}
{"value": "13.48", "type": "Other"}
{"value": "13.58", "type": "Other"}
{"value": "13.67", "type": "Other"}
{"value": "13.64", "type": "Other"}
{"value": "0.82", "type": "Other"}
{"value": "0.75", "type": "Other"}
{"value": "0.51", "type": "Other"}
{"value": "0.54", "type": "Other"}
{"value": "927", "type": "Other"}
{"value": "950", "type": "Other"}
{"value": "954", "type": "Other"}
{"value": "915", "type": "Other"}
{"value": "1172", "type": "Other"}
{"value": "1180", "type": "Other"}
{"value": "1181", "type": "Other"}
{"value": "1157", "type": "Other"}
{"value": "1197", "type": "Other"}
{"value": "1204", "type": "Other"}
{"value": "1203", "type": "Other"}
{"value": "1177", "type": "Other"}
{"value": "2.03", "type": "Other"}
{"value": "2.12", "type": "Other"}
{"value": "2.10", "type": "Other"}
{"value": "2.11", "type": "Other"}
{"value": "437.12", "type": "Other"}
{"value": "426.43", "type": "Other"}
{"value": "378.228", "type": "Other"}
{"value": "400.477", "type": "Other"} | science.discomat_te.52 | [
{
"content": "| | BaCMAS1 | BaCMAS2 | BaCMAS3 | BaCMAS4 |\n| L th | 0.595 | 0.597 | 0.600 | 0.599 |\n| r (gcm-3) | 2.90+-0.007 | 2.94+-0.003 | 3.03+-0.001 | 3.01+-0.002 |\n| V m (cm3 mol-1) | 20.16+-0.04 | 20.31+-0.03 | 20.33+-0.01 | 20.40+-0.01 |\n| V o (cm3 mol-1) | 13.48+-0.03 | 13.58+-0.02 | 13.67+-0.01 | 13.64+-0.01 |\n| V e | 0.82+-0.04 | 0.75+-0.03 | 0.51+-0.01 | 0.54+-0.01 |\n| T dg (K) | 927 | 950 | 954 | 915 |\n| T c (K) | 1172 | 1180 | 1181 | 1157 |\n| T p (K) | 1197 | 1204 | 1203 | 1177 |\n| Avrami parameter, <n> | 2.03+-0.04 | 2.12+-0.02 | 2.10+-0.05 | 2.11+-0.02 |\n| E c (kJmol-1) | 437.12 | 426.43 | 378.228 | 400.477 |\n\nCaption: Properties of the glasses.\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"0.595\", \"type\": \"Other\"}\n{\"value\": \"0.597\", \"type\": \"Other\"}\n{\"value\": \"0.600\", \"type\": \"Other\"}\n{\"value\": \"0.599\", \"type\": \"Other\"}\n{\"value\": \"2.90\", \"type\": \"Other\"}\n{\"value\": \"2.94\", \"type\": \"Other\"}\n{\"value\": \"3.03\", \"type\": \"Other\"}\n{\"value\": \"3.01\", \"type\": \"Other\"}\n{\"value\": \"20.16\", \"type\": \"Other\"}\n{\"value\": \"20.31\", \"type\": \"Other\"}\n{\"value\": \"20.33\", \"type\": \"Other\"}\n{\"value\": \"20.40\", \"type\": \"Other\"}\n{\"value\": \"13.48\", \"type\": \"Other\"}\n{\"value\": \"13.58\", \"type\": \"Other\"}\n{\"value\": \"13.67\", \"type\": \"Other\"}\n{\"value\": \"13.64\", \"type\": \"Other\"}\n{\"value\": \"0.82\", \"type\": \"Other\"}\n{\"value\": \"0.75\", \"type\": \"Other\"}\n{\"value\": \"0.51\", \"type\": \"Other\"}\n{\"value\": \"0.54\", \"type\": \"Other\"}\n{\"value\": \"927\", \"type\": \"Other\"}\n{\"value\": \"950\", \"type\": \"Other\"}\n{\"value\": \"954\", \"type\": \"Other\"}\n{\"value\": \"915\", \"type\": \"Other\"}\n{\"value\": \"1172\", \"type\": \"Other\"}\n{\"value\": \"1180\", \"type\": \"Other\"}\n{\"value\": \"1181\", \"type\": \"Other\"}\n{\"value\": \"1157\", \"type\": \"Other\"}\n{\"value\": \"1197\", \"type\": \"Other\"}\n{\"value\": \"1204\", \"type\": \"Other\"}\n{\"value\": \"1203\", \"type\": \"Other\"}\n{\"value\": \"1177\", \"type\": \"Other\"}\n{\"value\": \"2.03\", \"type\": \"Other\"}\n{\"value\": \"2.12\", \"type\": \"Other\"}\n{\"value\": \"2.10\", \"type\": \"Other\"}\n{\"value\": \"2.11\", \"type\": \"Other\"}\n{\"value\": \"437.12\", \"type\": \"Other\"}\n{\"value\": \"426.43\", \"type\": \"Other\"}\n{\"value\": \"378.228\", \"type\": \"Other\"}\n{\"value\": \"400.477\", \"type\": \"Other\"}",
"role": "assistant"
}
] |
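Illustrative aside (not part of the dataset): the `science.discomat_te` row above asks for one JSON object per numeric cell, emitted in reading order. The sketch below shows how such per-cell lines could be produced mechanically from a small markdown table of the same shape; the parsing helper and the choice to type every cell as "Other" (no compound-amount inference) are assumptions of the example.

```python
import json
import re

def table_cells_to_json(markdown_table):
    """Yield one JSON line per numeric cell, skipping the header and row-label columns."""
    rows = [r for r in markdown_table.strip().splitlines() if r.strip()]
    for row in rows[1:]:                            # first line is the header row
        cells = [c.strip() for c in row.strip().strip("|").split("|")]
        for cell in cells[1:]:                      # first column holds the row label
            m = re.match(r"-?\d+(?:\.\d+)?", cell)  # leading number, e.g. "2.90+-0.007" -> "2.90"
            if m:
                yield json.dumps({"value": m.group(0), "type": "Other"})

table = """
| | BaCMAS1 | BaCMAS2 |
| L th | 0.595 | 0.597 |
| r (gcm-3) | 2.90+-0.007 | 2.94+-0.003 |
"""
for line in table_cells_to_json(table):
    print(line)
```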
science.discomat_te | | Fine | Fine | Fine | Fine | Coarse | Coarse | Coarse | Coarse |
| Using T x h | R 2 | Using T c h | R 2 | Using T x h | R 2 | Using T c h | R 2 |
| 3.71-23.5K W | 0.95 | 4.0-20.5K W | 0.96 | 3.74-21.4K W | 0.94 | 4.44-21.4K W | 0.96 |
| 3.16-4.11K H | 0.96 | 3.0-2.44K H | 0.96 | 3.03-3.13K H | 0.95 | 3.17-2.26K H | 0.96 |
| 16.6-34.1K LL | 0.91 | 16.7-33.1K LL | 0.97 | 15.6-31.4K LL | 0.91 | 17.7-34.6K LL | 0.97 |
| 3.58-0.0167K 2 | 0.82 | 3.84-0.014K 2 | 0.83 | 3.44-0.014K 2 | 0.78 | 4.11-0.014K 2 | 0.80 |
| No correlation | - | 13.9-16.7K 3 | 0.89 | 12.4-15.1K 3 | 0.81 | 14.4-17.0K 3 | 0.88 |
| | | | | | | | |
| 3.55-0.479K SP with R 2 =0.86 | 3.55-0.479K SP with R 2 =0.86 | 3.55-0.479K SP with R 2 =0.86 | 3.55-0.479K SP with R 2 =0.86 | 3.3-0.266K SP with R 2 =0.81 | 3.3-0.266K SP with R 2 =0.81 | 3.3-0.266K SP with R 2 =0.81 | 3.3-0.266K SP with R 2 =0.81 |
| 2.33-0.239K 4 with R 2 =0.78 | 2.33-0.239K 4 with R 2 =0.78 | 2.33-0.239K 4 with R 2 =0.78 | 2.33-0.239K 4 with R 2 =0.78 | 2.24-0.136K 4 with R 2 =0.76 | 2.24-0.136K 4 with R 2 =0.76 | 2.24-0.136K 4 with R 2 =0.76 | 2.24-0.136K 4 with R 2 =0.76 |
Caption: Linear fittings of log q cr = C + DK, where C and D are constants, K is the GS parameter utilized and R 2 is the corresponding correlation factor of some curves presented in Figs. 3 and 4
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "0.95", "type": "Other"}
{"value": "0.96", "type": "Other"}
{"value": "0.94", "type": "Other"}
{"value": "0.96", "type": "Other"}
{"value": "0.96", "type": "Other"}
{"value": "0.96", "type": "Other"}
{"value": "0.95", "type": "Other"}
{"value": "0.96", "type": "Other"}
{"value": "0.91", "type": "Other"}
{"value": "0.97", "type": "Other"}
{"value": "0.91", "type": "Other"}
{"value": "0.97", "type": "Other"}
{"value": "0.82", "type": "Other"}
{"value": "0.83", "type": "Other"}
{"value": "0.78", "type": "Other"}
{"value": "0.80", "type": "Other"}
{"value": "0.89", "type": "Other"}
{"value": "0.81", "type": "Other"}
{"value": "0.88", "type": "Other"} | science.discomat_te.426 | [
{
"content": "| Fine | Fine | Fine | Fine | Coarse | Coarse | Coarse | Coarse |\n| Using T x h | R 2 | Using T c h | R 2 | Using T x h | R 2 | Using T c h | R 2 |\n| 3.71-23.5K W | 0.95 | 4.0-20.5K W | 0.96 | 3.74-21.4K W | 0.94 | 4.44-21.4K W | 0.96 |\n| 3.16-4.11K H | 0.96 | 3.0-2.44K H | 0.96 | 3.03-3.13K H | 0.95 | 3.17-2.26K H | 0.96 |\n| 16.6-34.1K LL | 0.91 | 16.7-33.1K LL | 0.97 | 15.6-31.4K LL | 0.91 | 17.7-34.6K LL | 0.97 |\n| 3.58-0.0167K 2 | 0.82 | 3.84-0.014K 2 | 0.83 | 3.44-0.014K 2 | 0.78 | 4.11-0.014K 2 | 0.80 |\n| No correlation | - | 13.9-16.7K 3 | 0.89 | 12.4-15.1K 3 | 0.81 | 14.4-17.0K 3 | 0.88 |\n| | | | | | | | |\n| 3.55-0.479K SP with R 2 =0.86 | 3.55-0.479K SP with R 2 =0.86 | 3.55-0.479K SP with R 2 =0.86 | 3.55-0.479K SP with R 2 =0.86 | 3.3-0.266K SP with R 2 =0.81 | 3.3-0.266K SP with R 2 =0.81 | 3.3-0.266K SP with R 2 =0.81 | 3.3-0.266K SP with R 2 =0.81 |\n| 2.33-0.239K 4 with R 2 =0.78 | 2.33-0.239K 4 with R 2 =0.78 | 2.33-0.239K 4 with R 2 =0.78 | 2.33-0.239K 4 with R 2 =0.78 | 2.24-0.136K 4 with R 2 =0.76 | 2.24-0.136K 4 with R 2 =0.76 | 2.24-0.136K 4 with R 2 =0.76 | 2.24-0.136K 4 with R 2 =0.76 |\n\nCaption: Linear fittings of log q cr = C + DK, where C and D are constants, K is the GS parameter utilized and R 2 is the corresponding correlation factor of some curves presented in Figs. 3 and 4\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"0.95\", \"type\": \"Other\"}\n{\"value\": \"0.96\", \"type\": \"Other\"}\n{\"value\": \"0.94\", \"type\": \"Other\"}\n{\"value\": \"0.96\", \"type\": \"Other\"}\n{\"value\": \"0.96\", \"type\": \"Other\"}\n{\"value\": \"0.96\", \"type\": \"Other\"}\n{\"value\": \"0.95\", \"type\": \"Other\"}\n{\"value\": \"0.96\", \"type\": \"Other\"}\n{\"value\": \"0.91\", \"type\": \"Other\"}\n{\"value\": \"0.97\", \"type\": \"Other\"}\n{\"value\": \"0.91\", \"type\": \"Other\"}\n{\"value\": \"0.97\", \"type\": \"Other\"}\n{\"value\": \"0.82\", \"type\": \"Other\"}\n{\"value\": \"0.83\", \"type\": \"Other\"}\n{\"value\": \"0.78\", \"type\": \"Other\"}\n{\"value\": \"0.80\", \"type\": \"Other\"}\n{\"value\": \"0.89\", \"type\": \"Other\"}\n{\"value\": \"0.81\", \"type\": \"Other\"}\n{\"value\": \"0.88\", \"type\": \"Other\"}",
"role": "assistant"
}
] |
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
Mitochondria play a major role in energy production.
Research Abstract:
Mitochondria are the primary energy-generating system in most eukaryotic cells.
Additionally, they participate in intermediary metabolism, calcium signaling, and apoptosis.
Given these well-established functions, it might be expected that mitochondrial dysfunction would give rise to a simple and predictable set of defects in all tissues.
However, mitochondrial dysfunction has pleiotropic effects in multicellular organisms.
Clearly, much about the basic biology of mitochondria remains to be understood.
Here we discuss recent work that suggests that the dynamics (fusion and fission) of these organelles is important in development and disease. | {"verdict": "SUPPORT", "evidence": ["Mitochondria are the primary energy-generating system in most eukaryotic cells."]} | science.scifact_entailment.512 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nMitochondria play a major role in energy production.\n\nResearch Abstract:\nMitochondria are the primary energy-generating system in most eukaryotic cells.\nAdditionally, they participate in intermediary metabolism, calcium signaling, and apoptosis.\nGiven these well-established functions, it might be expected that mitochondrial dysfunction would give rise to a simple and predictable set of defects in all tissues.\nHowever, mitochondrial dysfunction has pleiotropic effects in multicellular organisms.\nClearly, much about the basic biology of mitochondria remains to be understood.\nHere we discuss recent work that suggests that the dynamics (fusion and fission) of these organelles is important in development and disease.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"Mitochondria are the primary energy-generating system in most eukaryotic cells.\"]}",
"role": "assistant"
}
] |
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
Characterizing and Measuring the Similarity of Neural Networks with Persistent Homology
Main Content:
1 INTRODUCTION . Machine learning practitioners can train different neural networks for the same task . Even for the same neural architecture , there are many hyperparameters , such as the number of neurons per layer or the number of layers . Moreover , the final weights for the same architecture and hyperparameters can vary depending on the initialization and the optimization process itself , which is stochastic . Thus , there is no direct way of comparing neural networks accounting for the fact that neural networks solving the same task should be measured as being similar , regardless of the specific weights . This also prevents one from finding and comparing modules inside neural networks ( e.g. , determining if a given sub-network does the same function as other sub-network in another model ) . Moreover , there are no well-known methods for effectively characterizing neural networks . This work aims to characterize neural networks such that they can be measured to be similar once trained for the same task , with independence of the particular architecture , initialization , or optimization process . To characterize neural networks and measuring their similarity , we assume two different similarities that should be related if the characterization is correct , namely , 1. similarity between the tasks a model is trained on , and 2. similarity between neural networks themselves considered as mathematical objects .. We do not provide a mathematical definition for the first one , rather an intuitive one , and understand is as qualitative similarity between semantically close tasks . For instance , we think that MNIST ( 10 labels ) is closer to MNIST restricted to 2 labels than to language identification , and that MNIST is closer to MNIST restricted to 8 labels than to MNIST restricted to 2 labels . For the second one , in this work we propose a similarity measure . Ideally , this similarity measure should then capture 1. the semantic similarity between the underlying tasks neural networks have been trained on ( understanding that networks performing similar tasks should be similar , regardless of their specific weights and architectures ) , and 2. intrinsic properties of the neural network should also be captured to some degree . Thus , we consider two scenarios , that is , comparing NNs trained for similar tasks , using the mentioned task similarity , and comparing neural networks trained on identical tasks but with different architectures . We focus on Multi-Layer Perceptrons ( MLPs ) for the sake of simplicity . We start by observing that we can represent a neural network as a directed weighted graph to which we can associate certain topological concepts.1 Considering it as a simplicial complex , we obtain its associated Persistent Diagram . Then , we can compute distances between Persistent Diagrams of different neural networks . 1See Jonsson ( 2007 ) for a complete reference on graph topology . The proposed experiments aim to show that the selected structural feature , Persistent Homology , serves to relate neural networks trained for similar problems and that such a comparison can be performed by means of a predefined measure between the associated Persistent Homology diagrams . 
To test the hypothesis , we study different classical problems ( MNIST , Fashion MNIST , CIFAR-10 , and language identification and text classification datasets ) , different architectures ( number and size of layers ) as well as a control experiment ( input order , to which neural network similarity should be invariant ) . The code and results are fully open source in the Supplementary Material under a MIT license . In summary , the main contributions of this work are the following : 1 . We propose an effective graph characterization strategy of neural networks based on Persistent Homology . 2 . Based on this characterization , we suggest a similarity measure of neural networks . 3 . We provide empirical evidence that this Persistent Homology framework captures valuable information from neural networks and that the proposed similarity measure is meaningful . The remainder of this paper is organized as follows . In Section 2 , we go through the related work . Then , in Section 3 we describe our proposal and the experimental framework to validate it . Finally , in sections 4 and 5 we report and discuss the results and arrive to conclusions , respectively . 2 RELATED WORK . One of the fundamental papers of Topological Data Analysis ( TDA ) is presented in Carlsson ( 2009 ) and suggests the use of Algebraic Topology to obtain qualitative information and deal with metrics for large amounts of data . For an extensive overview of simplicial topology on graphs , see Giblin ( 1977 ) ; Jonsson ( 2007 ) . Aktas et al . ( 2019 ) provide a thorough analysis of PH methods . More recently , a number of publications have dealt with the study of the capacity of neural networks using PH . Guss & Salakhutdinov ( 2018 ) characterize learnability of different neural architectures by computable measures of data complexity . Donier ( 2019 ) propose the concept of spatial capacity allocation analysis . Konuk & Smith ( 2019 ) propose an empirical study of how NNs handle changes in topological complexity of the input data . Rieck et al . ( 2019b ) introduce the neural persistence metric , a complexity measure based on TDA on weighted stratified graphs . This work suggests a representation of the neural network as a multipartite graph . They perform the filtering of the Persistent Homology diagrams independently for each layer . As the filtration contains at most 1-simplices ( edges ) , they only capture zero-dimensional topological information , i.e . connectivity information . When consecutive layer analysis is used , the global topology of the network is not taken into account making the strong assumption that the NNs encode the learned information layer pairwise exclusively . Additionally , there are trivial global transformations of a NN that are not captured by analyzing pairs of layers : • Superfluous cycle insertions : for example , add two neurons and connect their input to a single neuron in a lower layer and their two outputs to a single output neuron in an upper layer with opposite weights . • Identity layer insertions : for instance , insert an intermediate identity layer with neurons and trivially connect to the next layer . • Non-planar neural networks analysis : the analysis of neural networks that use multiple connections between non-consecutive layers require higher order topological analysis . In terms of pure neural network analysis , there are relevant works , like Hofer et al . ( 2020 ) , that study topological regularization . Clough et al . 
( 2020 ) introduce a method for training neural networks for image segmentation with prior topology knowledge , specifically via Betti numbers . Corneanu et al . ( 2020 ) try to estimate ( with limited success ) the performance gap between training and testing via neuron activations and linear regression of the Betti numbers . This type of representation depend on the input data and not only on the NN function under study . Instead , we are interested in characterising and comparing NNs as functions , independently of the data to which they are applied . On the other hand , topological analysis of decision boundaries has been a very prolific area . Ramamurthy et al . ( 2019 ) propose a labeled Vietoris-Rips complex to perform PH inference of decision boundaries for quantification of the complexity of neural networks . Naitzat et al . ( 2020 ) experiment on the PH of a wide range of point cloud input datasets for a binary classification problems to see that NNs transform a topologically rich dataset ( in terms of Betti numbers ) into a topologically simpler one as it passes through the layers . They also verify that the reduction in Betti numbers is significantly faster for ReLU activations than hyperbolic tangent activations . Regarding neural network representations , one of the most related works to ours , Gebhart et al . ( 2019 ) , focuses on topological representations of neural networks . They introduce a method for computing PH over the graphical activation structure of neural networks , which provides access to the task-relevant substructures activated throughout the network for a given input . Interestingly , in Watanabe & Yamana ( 2020 ) , authors work on neural network representations through simplicial complexes based on deep Taylor decomposition and they calculate the PH of neural networks in this representation . In Chowdhury et al . ( 2019 ) , they use directed homology to represent MLPs . They show that the path homology of these networks is non-trivial in higher dimensions and depends on the number and size of the network layers . They investigate homological differences between distinct neural network architectures . As far as neural network similarity measures are concerned , the literature is not especially prolific . In Kornblith et al . ( 2019 ) , authors examine similarity measures for representations ( meaning , outputs of different layers ) of neural networks based on canonical correlation analysis . However , note that this method compares neural network representations ( intermediate outputs ) , not the neural networks themselves . Remarkably , in Ashmore & Gashler ( 2015 ) , authors do deal with the intrinsic similarity of neural networks themselves based on Forward Bipartite Alignment . Specifically , they propose an algorithm for aligning the topological structures of two neural networks . Their algorithm finds optimal bipartite matches between the nodes of the two MLPs by solving the well-known graph cutting problem . The alignment enables applications such as visualizations or improving ensembles . However , the methods only works under very restrictive assumptions,2 and this line of work does not appear to have been followed up . Finally , we note that there has been a considerable growth of interest in applied topology in the recent years . This popularity increase and the development of new software libraries,3 along with the growth of computational capabilities , have empowered new works . Some of the most remarkable libraries are Ripser Tralie et al . 
( 2018 ) ; Bauer ( 2021 ) , and Flagser Lütgehetmann et al . ( 2019 ) . They are focused on the efficient computation of PH . For GPU-Accelerated computation of Vietoris-Rips PH , Ripser++ Zhang et al . ( 2020 ) offers an important speedup . The Python library we are using , Giotto-TDA Tauzin et al . ( 2020 ) , makes use of both above libraries underneath . We have seen that there is a trend towards the use of algebraic topology methods for having a better understanding of phenomena of neural networks and having more principled deep learning algorithms . Nevertheless , little to no works have proposed neural network characterizations or similarity measures based on intrinsic properties of the networks , which is what we intend to do . 3 METHODOLOGY . In this section , we propose our method , which is heavily based on concepts from algebraic topology . We refer the reader to the Supplementary Material for the mathematical definitions . In this section , we also describe the conducted experiments . Intrinsically characterizing and comparing neural networks is a difficult , unsolved problem . First , the network should be represented in an object that captures as much information as possible and then it should be compared with a measure depending on the latent structure . Due to the stochasticity of both the initialization and training procedure , networks are parameterized differently . For the same task , different functions that effectively solve it can be obtained . Being able to compare the trained networks can be helpful to detect similar neural structures . We want to obtain topological characterizations associated to neural networks trained on a given task . For doing so , we use the Persistence Homology ( from now on , PH ) of the graph associated to a 2For example , the two neural networks `` must have the same number of units in each of their corresponding layers '' , and the match is done layer by layer . 3https : //www.math.colostate.edu/~adams/advising/appliedTopologySoftware/ neural network . We compute the PH for various neural networks learned on different tasks . We then compare all the diagrams for each one of the task . More specifically , for each of the studied tasks ( image classification on MNIST , Fashion MNIST and CIFAR-10 ; language identification , and text classification on the Reuters dataset ) ,4 we proceed as follows : 1 . We train several neural network models on the particular problem . 2 . We create a directed graph from the weights of the trained neural networks ( after changing the direction of the negative edges and normalising the weights of the edges ) . 3 . We consider the directed graph as a simplicial complex and calculate its PH , using the weight of the edges as the filtering parameter , which range from 0 to 1 . This way we obtain the so-called Persistence Diagram . 4 . We compute the distances between the Persistence Diagrams ( prior discretization of the Persistence Diagram so that it can be computed ) of the different networks . 5 . Finally , we analyze the similarity between different neural networks trained for the same task , for a similar task , and for a completely different task , independently of the concrete architecture , to see whether there is topological similarity . As baselines , we set two standard matrix comparison methods that are the 1-Norm and the Frobenius norm . Having adjacency matrix A and B , we compute the difference as norm ( A−B ) . 
However , these methods only work for matrices of similar size and thus , they are not general enough . We could also have used the Fast Approximate Quadratic assignment algorithm suggested in Vogelstein et al . ( 2015 ) , but for large networks this method becomes unfeasible to compute . | In this work, the authors propose to characterize neural networks with Topological Data Analysis, more precisely with its main descriptor, the so-called persistence diagram, in order to be able to compare neural networks with different numbers of layers, different numbers of neurons, or trained on different data sets. More specifically, they show that the computational graphs corresponding to the neural networks can be filtered using the edge weights that are learnt during training, in order to produce filtered flag complexes, from which persistent homology can be computed. Then, the authors interpreted the distances between the persistence diagrams obtained from networks with varying parameters (number of layers, neurons, labels), and showed that, often, the distances have intuitive correlation with the complexity of the networks. | science.mup_single_document_summarization.5111 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nCharacterizing and Measuring the Similarity of Neural Networks with Persistent Homology \n\nMain Content:\n1 INTRODUCTION . Machine learning practitioners can train different neural networks for the same task . Even for the same neural architecture , there are many hyperparameters , such as the number of neurons per layer or the number of layers . Moreover , the final weights for the same architecture and hyperparameters can vary depending on the initialization and the optimization process itself , which is stochastic . Thus , there is no direct way of comparing neural networks accounting for the fact that neural networks solving the same task should be measured as being similar , regardless of the specific weights . This also prevents one from finding and comparing modules inside neural networks ( e.g. , determining if a given sub-network does the same function as other sub-network in another model ) . Moreover , there are no well-known methods for effectively characterizing neural networks . This work aims to characterize neural networks such that they can be measured to be similar once trained for the same task , with independence of the particular architecture , initialization , or optimization process . To characterize neural networks and measuring their similarity , we assume two different similarities that should be related if the characterization is correct , namely , 1. similarity between the tasks a model is trained on , and 2. similarity between neural networks themselves considered as mathematical objects .. We do not provide a mathematical definition for the first one , rather an intuitive one , and understand is as qualitative similarity between semantically close tasks . For instance , we think that MNIST ( 10 labels ) is closer to MNIST restricted to 2 labels than to language identification , and that MNIST is closer to MNIST restricted to 8 labels than to MNIST restricted to 2 labels . For the second one , in this work we propose a similarity measure . Ideally , this similarity measure should then capture 1. the semantic similarity between the underlying tasks neural networks have been trained on ( understanding that networks performing similar tasks should be similar , regardless of their specific weights and architectures ) , and 2. intrinsic properties of the neural network should also be captured to some degree . Thus , we consider two scenarios , that is , comparing NNs trained for similar tasks , using the mentioned task similarity , and comparing neural networks trained on identical tasks but with different architectures . We focus on Multi-Layer Perceptrons ( MLPs ) for the sake of simplicity . We start by observing that we can represent a neural network as a directed weighted graph to which we can associate certain topological concepts.1 Considering it as a simplicial complex , we obtain its associated Persistent Diagram . Then , we can compute distances between Persistent Diagrams of different neural networks . 1See Jonsson ( 2007 ) for a complete reference on graph topology . The proposed experiments aim to show that the selected structural feature , Persistent Homology , serves to relate neural networks trained for similar problems and that such a comparison can be performed by means of a predefined measure between the associated Persistent Homology diagrams . 
To test the hypothesis , we study different classical problems ( MNIST , Fashion MNIST , CIFAR-10 , and language identification and text classification datasets ) , different architectures ( number and size of layers ) as well as a control experiment ( input order , to which neural network similarity should be invariant ) . The code and results are fully open source in the Supplementary Material under a MIT license . In summary , the main contributions of this work are the following : 1 . We propose an effective graph characterization strategy of neural networks based on Persistent Homology . 2 . Based on this characterization , we suggest a similarity measure of neural networks . 3 . We provide empirical evidence that this Persistent Homology framework captures valuable information from neural networks and that the proposed similarity measure is meaningful . The remainder of this paper is organized as follows . In Section 2 , we go through the related work . Then , in Section 3 we describe our proposal and the experimental framework to validate it . Finally , in sections 4 and 5 we report and discuss the results and arrive to conclusions , respectively . 2 RELATED WORK . One of the fundamental papers of Topological Data Analysis ( TDA ) is presented in Carlsson ( 2009 ) and suggests the use of Algebraic Topology to obtain qualitative information and deal with metrics for large amounts of data . For an extensive overview of simplicial topology on graphs , see Giblin ( 1977 ) ; Jonsson ( 2007 ) . Aktas et al . ( 2019 ) provide a thorough analysis of PH methods . More recently , a number of publications have dealt with the study of the capacity of neural networks using PH . Guss & Salakhutdinov ( 2018 ) characterize learnability of different neural architectures by computable measures of data complexity . Donier ( 2019 ) propose the concept of spatial capacity allocation analysis . Konuk & Smith ( 2019 ) propose an empirical study of how NNs handle changes in topological complexity of the input data . Rieck et al . ( 2019b ) introduce the neural persistence metric , a complexity measure based on TDA on weighted stratified graphs . This work suggests a representation of the neural network as a multipartite graph . They perform the filtering of the Persistent Homology diagrams independently for each layer . As the filtration contains at most 1-simplices ( edges ) , they only capture zero-dimensional topological information , i.e . connectivity information . When consecutive layer analysis is used , the global topology of the network is not taken into account making the strong assumption that the NNs encode the learned information layer pairwise exclusively . Additionally , there are trivial global transformations of a NN that are not captured by analyzing pairs of layers : • Superfluous cycle insertions : for example , add two neurons and connect their input to a single neuron in a lower layer and their two outputs to a single output neuron in an upper layer with opposite weights . • Identity layer insertions : for instance , insert an intermediate identity layer with neurons and trivially connect to the next layer . • Non-planar neural networks analysis : the analysis of neural networks that use multiple connections between non-consecutive layers require higher order topological analysis . In terms of pure neural network analysis , there are relevant works , like Hofer et al . ( 2020 ) , that study topological regularization . Clough et al . 
( 2020 ) introduce a method for training neural networks for image segmentation with prior topology knowledge , specifically via Betti numbers . Corneanu et al . ( 2020 ) try to estimate ( with limited success ) the performance gap between training and testing via neuron activations and linear regression of the Betti numbers . This type of representation depend on the input data and not only on the NN function under study . Instead , we are interested in characterising and comparing NNs as functions , independently of the data to which they are applied . On the other hand , topological analysis of decision boundaries has been a very prolific area . Ramamurthy et al . ( 2019 ) propose a labeled Vietoris-Rips complex to perform PH inference of decision boundaries for quantification of the complexity of neural networks . Naitzat et al . ( 2020 ) experiment on the PH of a wide range of point cloud input datasets for a binary classification problems to see that NNs transform a topologically rich dataset ( in terms of Betti numbers ) into a topologically simpler one as it passes through the layers . They also verify that the reduction in Betti numbers is significantly faster for ReLU activations than hyperbolic tangent activations . Regarding neural network representations , one of the most related works to ours , Gebhart et al . ( 2019 ) , focuses on topological representations of neural networks . They introduce a method for computing PH over the graphical activation structure of neural networks , which provides access to the task-relevant substructures activated throughout the network for a given input . Interestingly , in Watanabe & Yamana ( 2020 ) , authors work on neural network representations through simplicial complexes based on deep Taylor decomposition and they calculate the PH of neural networks in this representation . In Chowdhury et al . ( 2019 ) , they use directed homology to represent MLPs . They show that the path homology of these networks is non-trivial in higher dimensions and depends on the number and size of the network layers . They investigate homological differences between distinct neural network architectures . As far as neural network similarity measures are concerned , the literature is not especially prolific . In Kornblith et al . ( 2019 ) , authors examine similarity measures for representations ( meaning , outputs of different layers ) of neural networks based on canonical correlation analysis . However , note that this method compares neural network representations ( intermediate outputs ) , not the neural networks themselves . Remarkably , in Ashmore & Gashler ( 2015 ) , authors do deal with the intrinsic similarity of neural networks themselves based on Forward Bipartite Alignment . Specifically , they propose an algorithm for aligning the topological structures of two neural networks . Their algorithm finds optimal bipartite matches between the nodes of the two MLPs by solving the well-known graph cutting problem . The alignment enables applications such as visualizations or improving ensembles . However , the methods only works under very restrictive assumptions,2 and this line of work does not appear to have been followed up . Finally , we note that there has been a considerable growth of interest in applied topology in the recent years . This popularity increase and the development of new software libraries,3 along with the growth of computational capabilities , have empowered new works . Some of the most remarkable libraries are Ripser Tralie et al . 
( 2018 ) ; Bauer ( 2021 ) , and Flagser Lütgehetmann et al . ( 2019 ) . They are focused on the efficient computation of PH . For GPU-Accelerated computation of Vietoris-Rips PH , Ripser++ Zhang et al . ( 2020 ) offers an important speedup . The Python library we are using , Giotto-TDA Tauzin et al . ( 2020 ) , makes use of both above libraries underneath . We have seen that there is a trend towards the use of algebraic topology methods for having a better understanding of phenomena of neural networks and having more principled deep learning algorithms . Nevertheless , little to no works have proposed neural network characterizations or similarity measures based on intrinsic properties of the networks , which is what we intend to do . 3 METHODOLOGY . In this section , we propose our method , which is heavily based on concepts from algebraic topology . We refer the reader to the Supplementary Material for the mathematical definitions . In this section , we also describe the conducted experiments . Intrinsically characterizing and comparing neural networks is a difficult , unsolved problem . First , the network should be represented in an object that captures as much information as possible and then it should be compared with a measure depending on the latent structure . Due to the stochasticity of both the initialization and training procedure , networks are parameterized differently . For the same task , different functions that effectively solve it can be obtained . Being able to compare the trained networks can be helpful to detect similar neural structures . We want to obtain topological characterizations associated to neural networks trained on a given task . For doing so , we use the Persistence Homology ( from now on , PH ) of the graph associated to a 2For example , the two neural networks `` must have the same number of units in each of their corresponding layers '' , and the match is done layer by layer . 3https : //www.math.colostate.edu/~adams/advising/appliedTopologySoftware/ neural network . We compute the PH for various neural networks learned on different tasks . We then compare all the diagrams for each one of the task . More specifically , for each of the studied tasks ( image classification on MNIST , Fashion MNIST and CIFAR-10 ; language identification , and text classification on the Reuters dataset ) ,4 we proceed as follows : 1 . We train several neural network models on the particular problem . 2 . We create a directed graph from the weights of the trained neural networks ( after changing the direction of the negative edges and normalising the weights of the edges ) . 3 . We consider the directed graph as a simplicial complex and calculate its PH , using the weight of the edges as the filtering parameter , which range from 0 to 1 . This way we obtain the so-called Persistence Diagram . 4 . We compute the distances between the Persistence Diagrams ( prior discretization of the Persistence Diagram so that it can be computed ) of the different networks . 5 . Finally , we analyze the similarity between different neural networks trained for the same task , for a similar task , and for a completely different task , independently of the concrete architecture , to see whether there is topological similarity . As baselines , we set two standard matrix comparison methods that are the 1-Norm and the Frobenius norm . Having adjacency matrix A and B , we compute the difference as norm ( A−B ) . 
However , these methods only work for matrices of similar size and thus , they are not general enough . We could also have used the Fast Approximate Quadratic assignment algorithm suggested in Vogelstein et al . ( 2015 ) , but for large networks this method becomes unfeasible to compute .",
"role": "user"
},
{
"content": "In this work, the authors propose to characterize neural networks with Topological Data Analysis, more precisely with its main descriptor, the so-called persistence diagram, in order to be able to compare neural networks with different numbers of layers, different numbers of neurons, or trained on different data sets. More specifically, they show that the computational graphs corresponding to the neural networks can be filtered using the edge weights that are learnt during training, in order to produce filtered flag complexes, from which persistent homology can be computed. Then, the authors interpreted the distances between the persistence diagrams obtained from networks with varying parameters (number of layers, neurons, labels), and showed that, often, the distances have intuitive correlation with the complexity of the networks.",
"role": "assistant"
}
] |
science.scierc_ner | Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.
Input Abstract:
We investigate the problem of fine-grained sketch-based image retrieval (SBIR), where free-hand human sketches are used as queries to perform instance-level retrieval of images. This is an extremely challenging task because (i) visual comparisons not only need to be fine-grained but also executed cross-domain, (ii) free-hand (finger) sketches are highly abstract, making fine-grained matching harder, and most importantly (iii) annotated cross-domain sketch-photo datasets required for training are scarce, challenging many state-of-the-art machine learning techniques. In this paper, for the first time, we address all these challenges, providing a step towards the capabilities that would underpin a commercial sketch-based image retrieval application. We introduce a new database of 1,432 sketch-photo pairs from two categories with 32,000 fine-grained triplet ranking annotations. We then develop a deep triplet-ranking model for instance-level SBIR with a novel data augmentation and staged pre-training strategy to alleviate the issue of insufficient fine-grained training data. Extensive experiments are carried out to contribute a variety of insights into the challenges of data sufficiency and over-fitting avoidance when training deep networks for fine-grained cross-domain ranking tasks. | Output Entities:
{"Generic": [], "Material": ["insufficient fine-grained training data", "annotated cross-domain sketch-photo datasets"], "Method": ["deep triplet-ranking model", "staged pre-training strategy", "deep networks", "data augmentation", "machine learning techniques"], "Metric": [], "OtherScientificTerm": ["free-hand (finger) sketches", "fine-grained triplet ranking annotations", "data sufficiency", "visual comparisons", "over-fitting avoidance", "free-hand human sketches"], "Task": ["instance-level retrieval of images", "fine-grained cross-domain ranking tasks", "sketch-based image retrieval application", "instance-level SBIR", "fine-grained sketch-based image retrieval (SBIR)", "fine-grained matching"]} | science.scierc_ner.43 | [
{
"content": "Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.\n\nInput Abstract:\n\nWe investigate the problem of fine-grained sketch-based image retrieval (SBIR), where free-hand human sketches are used as queries to perform instance-level retrieval of images. This is an extremely challenging task because (i) visual comparisons not only need to be fine-grained but also executed cross-domain, (ii) free-hand (finger) sketches are highly abstract, making fine-grained matching harder, and most importantly (iii) annotated cross-domain sketch-photo datasets required for training are scarce, challenging many state-of-the-art machine learning techniques. In this paper, for the first time, we address all these challenges, providing a step towards the capabilities that would underpin a commercial sketch-based image retrieval application. We introduce a new database of 1,432 sketch-photo pairs from two categories with 32,000 fine-grained triplet ranking annotations. We then develop a deep triplet-ranking model for instance-level SBIR with a novel data augmentation and staged pre-training strategy to alleviate the issue of insufficient fine-grained training data. Extensive experiments are carried out to contribute a variety of insights into the challenges of data sufficiency and over-fitting avoidance when training deep networks for fine-grained cross-domain ranking tasks.",
"role": "user"
},
{
"content": "Output Entities:\n\n{\"Generic\": [], \"Material\": [\"insufficient fine-grained training data\", \"annotated cross-domain sketch-photo datasets\"], \"Method\": [\"deep triplet-ranking model\", \"staged pre-training strategy\", \"deep networks\", \"data augmentation\", \"machine learning techniques\"], \"Metric\": [], \"OtherScientificTerm\": [\"free-hand (finger) sketches\", \"fine-grained triplet ranking annotations\", \"data sufficiency\", \"visual comparisons\", \"over-fitting avoidance\", \"free-hand human sketches\"], \"Task\": [\"instance-level retrieval of images\", \"fine-grained cross-domain ranking tasks\", \"sketch-based image retrieval application\", \"instance-level SBIR\", \"fine-grained sketch-based image retrieval (SBIR)\", \"fine-grained matching\"]}",
"role": "assistant"
}
] |
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.BACKGROUND: This study was designed to determine whether small diameter needles for oocyte retrieval alter oocyte yields in patients undergoing IVF in comparison to standard large diameter needles.
ABSTRACT.METHODS: We conducted a prospective pilot study of 21 consecutive favorable prognosis patients. In each patient one ovary was randomly allocated to retrieval with either a 20 G/ 35 mm (thin) or 17 G/ 35 mm (standard) needle, the other ovary was then retrieved with the opposite needle.
ABSTRACT.RESULTS: The standard diameter needle was used to collect a total of 215 oocytes from 355 aspirated follicles (60.6%) compared to 203 oocytes from 352 aspirated follicles (57.7%) with the thinner needle (p = 0.23). Stratifying outcomes by anti-Müllerian hormone (AMH), as indicator of ovarian reserve, and by body mass index (BMI) the oocyte yields, still, did not differ (AMH, r (17) = −0.20, p = 0.44; BMI, r (17) =0.02, p = 0.96). Outcomes also did not vary among women with diminished ovarian reserve (p = 0.17) and in women with normal ovarian reserve (p = 1.00). Operating time was, however, significantly increased by 3.3 minutes per ovary (z = −3.08, p = 0.002) with the thinner needle.
ABSTRACT.CONCLUSIONS: Needle diameter does not affect oocyte yield, including in obese patients and patients with diminished ovarian reserve. Thinner needles appear to significantly prolong operating time.
Extracted Data: | [["20 G/ 35 mm (thin) needle", "17 G/ 35 mm (standard) needle", "The number of oocytes per the number of aspirated follicles", "no significant difference", "The standard diameter needle was used to collect a total of 215 oocytes from 355 aspirated follicles (60.6%) compared to 203 oocytes from 352 aspirated follicles (57.7%) with the thinner needle (p = 0.23)."], ["20 G/ 35 mm (thin) needle", "17 G/ 35 mm (standard) needle", "oocyte yield among women with normal ovarian reserve", "no significant difference", "Stratifying outcomes by anti-Mu\u0308llerian hormone (AMH), as indicator of ovarian reserve, and by body mass index (BMI) the oocyte yields, still, did not differ (AMH, r (17) = \u22120.20, p = 0.44; BMI, r (17) =0.02, p = 0.96)."], ["20 G/ 35 mm (thin) needle", "17 G/ 35 mm (standard) needle", "Operating time", "significantly increased", "Operating time was, however, significantly increased by 3.3 minutes per ovary (z = \u22123.08, p = 0.002) with the thinner needle."], ["20 G/ 35 mm (thin) needle", "17 G/ 35 mm (standard) needle", "oocyte yields among women with diminished ovarian reserve", "no significant difference", "Stratifying outcomes by anti-Mu\u0308llerian hormone (AMH), as indicator of ovarian reserve, and by body mass index (BMI) the oocyte yields, still, did not differ (AMH, r (17) = \u22120.20, p = 0.44; BMI, r (17) =0.02, p = 0.96)."], ["20 G/ 35 mm (thin) needle", "17 G/ 35 mm (standard) needle", "Oocyte yield", "no significant difference", "the oocyte yields, still, did not differ (AMH, r (17) = \u22120.20, p = 0.44; BMI, r (17) =0.02, p = 0.96)."]] | science.evidence_inference.30 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND: This study was designed to determine whether small diameter needles for oocyte retrieval alter oocyte yields in patients undergoing IVF in comparison to standard large diameter needles.\n\nABSTRACT.METHODS: We conducted a prospective pilot study of 21 consecutive favorable prognosis patients. In each patient one ovary was randomly allocated to retrieval with either a 20 G/ 35 mm (thin) or 17 G/ 35 mm (standard) needle, the other ovary was then retrieved with the opposite needle.\n\nABSTRACT.RESULTS: The standard diameter needle was used to collect a total of 215 oocytes from 355 aspirated follicles (60.6%) compared to 203 oocytes from 352 aspirated follicles (57.7%) with the thinner needle (p = 0.23). Stratifying outcomes by anti-Müllerian hormone (AMH), as indicator of ovarian reserve, and by body mass index (BMI) the oocyte yields, still, did not differ (AMH, r (17) = −0.20, p = 0.44; BMI, r (17) =0.02, p = 0.96). Outcomes also did not vary among women with diminished ovarian reserve (p = 0.17) and in women with normal ovarian reserve (p = 1.00). Operating time was, however, significantly increased by 3.3 minutes per ovary (z = −3.08, p = 0.002) with the thinner needle.\n\nABSTRACT.CONCLUSIONS: Needle diameter does not affect oocyte yield, including in obese patients and patients with diminished ovarian reserve. Thinner needles appear to significantly prolong operating time.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"20 G/ 35 mm (thin) needle\", \"17 G/ 35 mm (standard) needle\", \"The number of oocytes per the number of aspirated follicles\", \"no significant difference\", \"The standard diameter needle was used to collect a total of 215 oocytes from 355 aspirated follicles (60.6%) compared to 203 oocytes from 352 aspirated follicles (57.7%) with the thinner needle (p = 0.23).\"], [\"20 G/ 35 mm (thin) needle\", \"17 G/ 35 mm (standard) needle\", \"oocyte yield among women with normal ovarian reserve\", \"no significant difference\", \"Stratifying outcomes by anti-Mu\\u0308llerian hormone (AMH), as indicator of ovarian reserve, and by body mass index (BMI) the oocyte yields, still, did not differ (AMH, r (17) = \\u22120.20, p = 0.44; BMI, r (17) =0.02, p = 0.96).\"], [\"20 G/ 35 mm (thin) needle\", \"17 G/ 35 mm (standard) needle\", \"Operating time\", \"significantly increased\", \"Operating time was, however, significantly increased by 3.3 minutes per ovary (z = \\u22123.08, p = 0.002) with the thinner needle.\"], [\"20 G/ 35 mm (thin) needle\", \"17 G/ 35 mm (standard) needle\", \"oocyte yields among women with diminished ovarian reserve\", \"no significant difference\", \"Stratifying outcomes by anti-Mu\\u0308llerian hormone (AMH), as indicator of ovarian reserve, and by body mass index (BMI) the oocyte yields, still, did not differ (AMH, r (17) = \\u22120.20, p = 0.44; BMI, r (17) =0.02, p = 0.96).\"], [\"20 G/ 35 mm (thin) needle\", \"17 G/ 35 mm (standard) needle\", \"Oocyte yield\", \"no significant difference\", \"the oocyte yields, still, did not differ (AMH, r (17) = \\u22120.20, p = 0.44; BMI, r (17) =0.02, p = 0.96).\"]]",
"role": "assistant"
}
] |
science.scierc_ner | Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.
Input Abstract:
For intelligent interactive systems to communicate with humans in a natural manner, they must have knowledge about the system users. This paper explores the role of user modeling in such systems. It begins with a characterization of what a user model is and how it can be used. The types of information that a user model may be required to keep about a user are then identified and discussed. User models themselves can vary greatly depending on the requirements of the situation and the implementation, so several dimensions along which they can be classified are presented. Since acquiring the knowledge for a user model is a fundamental problem in user modeling, a section is devoted to this topic. Next, the benefits and costs of implementing a user modeling component for a system are weighed in light of several aspects of the interaction requirements that may be imposed by the system. Finally, the current state of research in user modeling is summarized, and future research topics that must be addressed in order to achieve powerful, general user modeling systems are assessed. | Output Entities:
{"Generic": ["system"], "Material": [], "Method": ["user modeling component", "user model", "intelligent interactive systems"], "Metric": [], "OtherScientificTerm": [], "Task": ["user modeling"]} | science.scierc_ner.38 | [
{
"content": "Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.\n\nInput Abstract:\n\nFor intelligent interactive systems to communicate with humans in a natural manner, they must have knowledge about the system users. This paper explores the role of user modeling in such systems. It begins with a characterization of what a user model is and how it can be used. The types of information that a user model may be required to keep about a user are then identified and discussed. User models themselves can vary greatly depending on the requirements of the situation and the implementation, so several dimensions along which they can be classified are presented. Since acquiring the knowledge for a user model is a fundamental problem in user modeling, a section is devoted to this topic. Next, the benefits and costs of implementing a user modeling component for a system are weighed in light of several aspects of the interaction requirements that may be imposed by the system. Finally, the current state of research in user modeling is summarized, and future research topics that must be addressed in order to achieve powerful, general user modeling systems are assessed.",
"role": "user"
},
{
"content": "Output Entities:\n\n{\"Generic\": [\"system\"], \"Material\": [], \"Method\": [\"user modeling component\", \"user model\", \"intelligent interactive systems\"], \"Metric\": [], \"OtherScientificTerm\": [], \"Task\": [\"user modeling\"]}",
"role": "assistant"
}
] |
science.bioasq_list_qa | Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.
Context: D(-) and L(+) lactate isomers produced by Lactobacillus spp
Lactate contains a chiral carbon and thus has two optical isomers-d-lactate and l-lactate.
Mucosal cells of the gastrointestinal and female reproductive tract are constantly exposed to l- and d-lactate of bacterial origin.
Question: Which are the lactate isomers? | ["d-lactate", "l-lactate"] | science.bioasq_list_qa.804 | [
{
"content": "Examine the following excerpts from biomedical literature. Your objective is to extract relevant information to answer the posed question. Format your answer as a JSON array.\n\nContext: D(-) and L(+) lactate isomers produced by Lactobacillus spp\nLactate contains a chiral carbon and thus has two optical isomers-d-lactate and l-lactate.\nMucosal cells of the gastrointestinal and female reproductive tract are constantly exposed to l- and d-lactate of bacterial origin.\n\nQuestion: Which are the lactate isomers?",
"role": "user"
},
{
"content": "[\"d-lactate\", \"l-lactate\"]",
"role": "assistant"
}
] |
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
Suboptimal nutrition is a leading cause of premature death
Research Abstract:
BACKGROUND The Global Burden of Diseases, Injuries, and Risk Factors Study 2015 provides an up-to-date synthesis of the evidence for risk factor exposure and the attributable burden of disease.
By providing national and subnational assessments spanning the past 25 years, this study can inform debates on the importance of addressing risks in context.
METHODS We used the comparative risk assessment framework developed for previous iterations of the Global Burden of Disease Study to estimate attributable deaths, disability-adjusted life-years (DALYs), and trends in exposure by age group, sex, year, and geography for 79 behavioural, environmental and occupational, and metabolic risks or clusters of risks from 1990 to 2015.
This study included 388 risk-outcome pairs that met World Cancer Research Fund-defined criteria for convincing or probable evidence.
We extracted relative risk and exposure estimates from randomised controlled trials, cohorts, pooled cohorts, household surveys, census data, satellite data, and other sources.
We used statistical models to pool data, adjust for bias, and incorporate covariates.
We developed a metric that allows comparisons of exposure across risk factors-the summary exposure value.
Using the counterfactual scenario of theoretical minimum risk level, we estimated the portion of deaths and DALYs that could be attributed to a given risk.
We decomposed trends in attributable burden into contributions from population growth, population age structure, risk exposure, and risk-deleted cause-specific DALY rates.
We characterised risk exposure in relation to a Socio-demographic Index (SDI).
FINDINGS Between 1990 and 2015, global exposure to unsafe sanitation, household air pollution, childhood underweight, childhood stunting, and smoking each decreased by more than 25%.
Global exposure for several occupational risks, high body-mass index (BMI), and drug use increased by more than 25% over the same period.
All risks jointly evaluated in 2015 accounted for 57·8% (95% CI 56·6-58·8) of global deaths and 41·2% (39·8-42·8) of DALYs.
In 2015, the ten largest contributors to global DALYs among Level 3 risks were high systolic blood pressure (211·8 million [192·7 million to 231·1 million] global DALYs), smoking (148·6 million [134·2 million to 163·1 million]), high fasting plasma glucose (143·1 million [125·1 million to 163·5 million]), high BMI (120·1 million [83·8 million to 158·4 million]), childhood undernutrition (113·3 million [103·9 million to 123·4 million]), ambient particulate matter (103·1 million [90·8 million to 115·1 million]), high total cholesterol (88·7 million [74·6 million to 105·7 million]), household air pollution (85·6 million [66·7 million to 106·1 million]), alcohol use (85·0 million [77·2 million to 93·0 million]), and diets high in sodium (83·0 million [49·3 million to 127·5 million]).
From 1990 to 2015, attributable DALYs declined for micronutrient deficiencies, childhood undernutrition, unsafe sanitation and water, and household air pollution; reductions in risk-deleted DALY rates rather than reductions in exposure drove these declines.
Rising exposure contributed to notable increases in attributable DALYs from high BMI, high fasting plasma glucose, occupational carcinogens, and drug use.
Environmental risks and childhood undernutrition declined steadily with SDI; low physical activity, high BMI, and high fasting plasma glucose increased with SDI.
In 119 countries, metabolic risks, such as high BMI and fasting plasma glucose, contributed the most attributable DALYs in 2015.
Regionally, smoking still ranked among the leading five risk factors for attributable DALYs in 109 countries; childhood underweight and unsafe sex remained primary drivers of early death and disability in much of sub-Saharan Africa.
INTERPRETATION Declines in some key environmental risks have contributed to declines in critical infectious diseases.
Some risks appear to be invariant to SDI.
Increasing risks, including high BMI, high fasting plasma glucose, drug use, and some occupational exposures, contribute to rising burden from some conditions, but also provide opportunities for intervention.
Some highly preventable risks, such as smoking, remain major causes of attributable DALYs, even as exposure is declining.
Public policy makers need to pay attention to the risks that are increasingly major contributors to global burden.
FUNDING Bill & Melinda Gates Foundation. | {"verdict": "SUPPORT", "evidence": ["In 2015, the ten largest contributors to global DALYs among Level 3 risks were high systolic blood pressure (211\u00b78 million [192\u00b77 million to 231\u00b71 million] global DALYs), smoking (148\u00b76 million [134\u00b72 million to 163\u00b71 million]), high fasting plasma glucose (143\u00b71 million [125\u00b71 million to 163\u00b75 million]), high BMI (120\u00b71 million [83\u00b78 million to 158\u00b74 million]), childhood undernutrition (113\u00b73 million [103\u00b79 million to 123\u00b74 million]), ambient particulate matter (103\u00b71 million [90\u00b78 million to 115\u00b71 million]), high total cholesterol (88\u00b77 million [74\u00b76 million to 105\u00b77 million]), household air pollution (85\u00b76 million [66\u00b77 million to 106\u00b71 million]), alcohol use (85\u00b70 million [77\u00b72 million to 93\u00b70 million]), and diets high in sodium (83\u00b70 million [49\u00b73 million to 127\u00b75 million])."]} | science.scifact_entailment.722 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nSuboptimal nutrition is a leading cause of premature death\n\nResearch Abstract:\nBACKGROUND The Global Burden of Diseases, Injuries, and Risk Factors Study 2015 provides an up-to-date synthesis of the evidence for risk factor exposure and the attributable burden of disease.\nBy providing national and subnational assessments spanning the past 25 years, this study can inform debates on the importance of addressing risks in context.\nMETHODS We used the comparative risk assessment framework developed for previous iterations of the Global Burden of Disease Study to estimate attributable deaths, disability-adjusted life-years (DALYs), and trends in exposure by age group, sex, year, and geography for 79 behavioural, environmental and occupational, and metabolic risks or clusters of risks from 1990 to 2015.\nThis study included 388 risk-outcome pairs that met World Cancer Research Fund-defined criteria for convincing or probable evidence.\nWe extracted relative risk and exposure estimates from randomised controlled trials, cohorts, pooled cohorts, household surveys, census data, satellite data, and other sources.\nWe used statistical models to pool data, adjust for bias, and incorporate covariates.\nWe developed a metric that allows comparisons of exposure across risk factors-the summary exposure value.\nUsing the counterfactual scenario of theoretical minimum risk level, we estimated the portion of deaths and DALYs that could be attributed to a given risk.\nWe decomposed trends in attributable burden into contributions from population growth, population age structure, risk exposure, and risk-deleted cause-specific DALY rates.\nWe characterised risk exposure in relation to a Socio-demographic Index (SDI).\nFINDINGS Between 1990 and 2015, global exposure to unsafe sanitation, household air pollution, childhood underweight, childhood stunting, and smoking each decreased by more than 25%.\nGlobal exposure for several occupational risks, high body-mass index (BMI), and drug use increased by more than 25% over the same period.\nAll risks jointly evaluated in 2015 accounted for 57·8% (95% CI 56·6-58·8) of global deaths and 41·2% (39·8-42·8) of DALYs.\nIn 2015, the ten largest contributors to global DALYs among Level 3 risks were high systolic blood pressure (211·8 million [192·7 million to 231·1 million] global DALYs), smoking (148·6 million [134·2 million to 163·1 million]), high fasting plasma glucose (143·1 million [125·1 million to 163·5 million]), high BMI (120·1 million [83·8 million to 158·4 million]), childhood undernutrition (113·3 million [103·9 million to 123·4 million]), ambient particulate matter (103·1 million [90·8 million to 115·1 million]), high total cholesterol (88·7 million [74·6 million to 105·7 million]), household air pollution (85·6 million [66·7 million to 106·1 million]), alcohol use (85·0 million [77·2 million to 93·0 million]), and diets high in sodium (83·0 million [49·3 million to 127·5 million]).\nFrom 1990 to 2015, attributable DALYs declined for micronutrient deficiencies, childhood undernutrition, unsafe sanitation and water, and household air pollution; reductions in risk-deleted DALY rates rather than reductions in exposure drove these declines.\nRising exposure contributed to notable increases in attributable DALYs from high BMI, high fasting plasma glucose, occupational carcinogens, and drug use.\nEnvironmental risks and childhood undernutrition declined steadily 
with SDI; low physical activity, high BMI, and high fasting plasma glucose increased with SDI.\nIn 119 countries, metabolic risks, such as high BMI and fasting plasma glucose, contributed the most attributable DALYs in 2015.\nRegionally, smoking still ranked among the leading five risk factors for attributable DALYs in 109 countries; childhood underweight and unsafe sex remained primary drivers of early death and disability in much of sub-Saharan Africa.\nINTERPRETATION Declines in some key environmental risks have contributed to declines in critical infectious diseases.\nSome risks appear to be invariant to SDI.\nIncreasing risks, including high BMI, high fasting plasma glucose, drug use, and some occupational exposures, contribute to rising burden from some conditions, but also provide opportunities for intervention.\nSome highly preventable risks, such as smoking, remain major causes of attributable DALYs, even as exposure is declining.\nPublic policy makers need to pay attention to the risks that are increasingly major contributors to global burden.\nFUNDING Bill & Melinda Gates Foundation.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"In 2015, the ten largest contributors to global DALYs among Level 3 risks were high systolic blood pressure (211\\u00b78 million [192\\u00b77 million to 231\\u00b71 million] global DALYs), smoking (148\\u00b76 million [134\\u00b72 million to 163\\u00b71 million]), high fasting plasma glucose (143\\u00b71 million [125\\u00b71 million to 163\\u00b75 million]), high BMI (120\\u00b71 million [83\\u00b78 million to 158\\u00b74 million]), childhood undernutrition (113\\u00b73 million [103\\u00b79 million to 123\\u00b74 million]), ambient particulate matter (103\\u00b71 million [90\\u00b78 million to 115\\u00b71 million]), high total cholesterol (88\\u00b77 million [74\\u00b76 million to 105\\u00b77 million]), household air pollution (85\\u00b76 million [66\\u00b77 million to 106\\u00b71 million]), alcohol use (85\\u00b70 million [77\\u00b72 million to 93\\u00b70 million]), and diets high in sodium (83\\u00b70 million [49\\u00b73 million to 127\\u00b75 million]).\"]}",
"role": "assistant"
}
] |
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
Depletion of T-helper 17 (Th17) cells during simian immunodeficiency virus (SIV) infection decreases dissemination of Salmonella Typhimurium from the gut.
Research Abstract:
Salmonella typhimurium causes a localized enteric infection in immunocompetent individuals, whereas HIV-infected individuals develop a life-threatening bacteremia.
Here we show that simian immunodeficiency virus (SIV) infection results in depletion of T helper type 17 (TH17) cells in the ileal mucosa of rhesus macaques, thereby impairing mucosal barrier functions to S. typhimurium dissemination.
In SIV-negative macaques, the gene expression profile induced by S. typhimurium in ligated ileal loops was dominated by TH17 responses, including the expression of interleukin-17 (IL-17) and IL-22.
TH17 cells were markedly depleted in SIV-infected rhesus macaques, resulting in blunted TH17 responses to S. typhimurium infection and increased bacterial dissemination.
IL-17 receptor–deficient mice showed increased systemic dissemination of S. typhimurium from the gut, suggesting that IL-17 deficiency causes defects in mucosal barrier function.
We conclude that SIV infection impairs the IL-17 axis, an arm of the mucosal immune response preventing systemic microbial dissemination from the gastrointestinal tract. | {"verdict": "CONTRADICT", "evidence": ["Here we show that simian immunodeficiency virus (SIV) infection results in depletion of T helper type 17 (TH17) cells in the ileal mucosa of rhesus macaques, thereby impairing mucosal barrier functions to S. typhimurium dissemination.", "TH17 cells were markedly depleted in SIV-infected rhesus macaques, resulting in blunted TH17 responses to S. typhimurium infection and increased bacterial dissemination."]} | science.scifact_entailment.213 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nDepletion of T-helper 17 (Th17) cells during simian immunodeficiency virus (SIV) infection decreases dissemination of Salmonella Typhimurium from the gut.\n\nResearch Abstract:\nSalmonella typhimurium causes a localized enteric infection in immunocompetent individuals, whereas HIV-infected individuals develop a life-threatening bacteremia.\nHere we show that simian immunodeficiency virus (SIV) infection results in depletion of T helper type 17 (TH17) cells in the ileal mucosa of rhesus macaques, thereby impairing mucosal barrier functions to S. typhimurium dissemination.\nIn SIV-negative macaques, the gene expression profile induced by S. typhimurium in ligated ileal loops was dominated by TH17 responses, including the expression of interleukin-17 (IL-17) and IL-22.\nTH17 cells were markedly depleted in SIV-infected rhesus macaques, resulting in blunted TH17 responses to S. typhimurium infection and increased bacterial dissemination.\nIL-17 receptor–deficient mice showed increased systemic dissemination of S. typhimurium from the gut, suggesting that IL-17 deficiency causes defects in mucosal barrier function.\nWe conclude that SIV infection impairs the IL-17 axis, an arm of the mucosal immune response preventing systemic microbial dissemination from the gastrointestinal tract.",
"role": "user"
},
{
"content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"Here we show that simian immunodeficiency virus (SIV) infection results in depletion of T helper type 17 (TH17) cells in the ileal mucosa of rhesus macaques, thereby impairing mucosal barrier functions to S. typhimurium dissemination.\", \"TH17 cells were markedly depleted in SIV-infected rhesus macaques, resulting in blunted TH17 responses to S. typhimurium infection and increased bacterial dissemination.\"]}",
"role": "assistant"
}
] |
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
Model-Predictive Policy Learning with Uncertainty Regularization for Driving in Dense Traffic
Main Content:
1 INTRODUCTION . In recent years , model-free reinforcement learning methods using deep neural network controllers have proven effective on a wide range of tasks , from playing video or text-based games ( Mnih et al. , 2015 ; 2016 ; Narasimhan et al. , 2015 ) to learning algorithms ( Zaremba et al. , 2015 ) and complex locomotion tasks ( Lillicrap et al. , 2015 ; Zhang et al. , 2015 ) . However , these methods often require a large number of interactions with the environment in order to learn . While this is not a problem if the environment is simulated , it can limit the application of these methods in realistic environments where interactions with the environment are slow , expensive or potentially dangerous . Building a simulator where the agent can safely try out policies without facing real consequences can mitigate this problem , but requires human engineering effort which increases with the complexity of the environment being modeled . Model-based reinforcement learning approaches try to learn a model of the environment dynamics , and then use this model to plan actions or train a parameterized policy . A common setting is where an agent alternates between collecting experience by executing actions using its current policy or dynamics model , and then using these experiences to improve its dynamics model . This approach has been shown empirically to significantly reduce the required number of environment interactions needed to obtain an effective policy or planner ( Atkeson & Santamaria , 1997 ; Deisenroth & Rasmussen , 2011 ; Nagabandi et al. , 2017 ; Chua et al. , 2018 ) . Despite these improvements in sample complexity , there exist settings where even a single poor action executed by an agent in a real environment can have consequences which ∗Equal contribution . are not acceptable . At the same time , with data collection becoming increasingly inexpensive , there are many settings where observational data of an environment is abundant . This suggests a need for algorithms which can learn policies primarily from observational data , which can then perform well in a real environment . Autonomous driving is an example of such a setting : on one hand , trajectories of human drivers can be easily collected using traffic cameras ( Halkias & Colyar , 2006 ) , resulting in an abundance of observational data ; on the other hand , learning through interaction with the real environment is not a viable solution . However , learning policies from purely observational data is challenging because the data may only cover a small region of the space over which it is defined . If the observational data consists of stateaction pairs produced by an expert , one option is to use imitation learning ( Pomerleau , 1991 ) . However , this is well-known to suffer from a mismatch between the states seen at training and execution time ( Ross & Bagnell , 2010 ) . Another option is to learn a dynamics model from observational data , and then use it to train a policy ( Nguyen & Widrow , 1989 ) . However , the dynamics model may make arbitrary predictions outside the domain it was trained on , which may wrongly be associated with low cost ( or high reward ) as shown in Figure 1 . The policy network may then exploit these errors in the dynamics model and produce actions which lead to wrongly optimistic states . 
In the interactive setting , this problem is naturally self-correcting , since states where the model predictions are wrongly optimistic will be more likely to be experienced , and thus will correct the dynamics model . However , the problem persists if the dataset of environment interactions which the model is trained on is fixed . In this work , we propose to train a policy while explicitly penalizing the mismatch between the distribution of trajectories it induces and the one reflected in the training data . We use a learned dynamics model which is unrolled for multiple time steps , and train a policy network to minimize a differentiable cost over this rolled-out trajectory . This cost contains two terms : a policy cost which represents the objective the policy seeks to optimize , and an uncertainty cost which represents its divergence from the states it is trained on . We measure this second cost by using the uncertainty of the dynamics model about its own predictions , calculated using dropout . We apply our approach in the context of learning policies to drive an autonomous car in dense traffic , using a large-scale dataset of real-world driving trajectories which we also adapt into an environment for testing learned policies 1 . We show that model-based control using this additional uncertainty regularizer substantially outperforms unregularized control , and enables learning good driving policies using only observational data with no environment interaction or additional labeling by an expert . We also show how to effectively leverage an action-conditional stochastic forward model using a modified posterior distribution , which encourages the model to maintain sensitivity to input actions . 2 MODEL-PREDICTIVE POLICY LEARNING WITH UNCERTAINTY REGULARIZATION . We assume we are given a dataset of observational data which consists of state-action pairs D = { ( st , at ) } t. We first describe our general approach , which consists of two steps : learning an actionconditional dynamics model using the collected observational data , and then using this model to train a fast , feedforward policy network which minimizes both a policy cost and an uncertainty cost . 2.1 ACTION-CONDITIONAL FORWARD MODEL . In this work , we consider recent approaches for stochastic prediction based on Variational Autoencoders ( Kingma & Welling , 2013 ; Babaeizadeh et al. , 2017 ; Denton & Fergus , 2018 ) . The stochastic model fθ ( s1 : t , at , zt ) takes as input a sequence of observed or previously predicted states s1 : t , an action at , and a latent variable zt which represents the information about the next state st+1 which 1Dataset and environment can be found at https : //github.com/Atcold/pytorch-PPUU is not a deterministic function of the input . During training , latent variables are sampled from a distribution whose parameters are output by a posterior network qφ ( s1 : t , st+1 ) conditioned on the past inputs and true targets . This network is trained jointly with the rest of the model using the reparameterization trick , and a term is included in the loss to minimize the KL divergence between the posterior distribution and a fixed prior p ( z ) , which in our case is an isotropic Gaussian . 
The per-sample loss used for training the stochastic model is given by : L ( θ , φ ; s1 : t , st+1 , at ) = ‖st+1 − fθ ( s1 : t , at , zt ) ‖22 + βDKL ( qφ ( z|s1 : t , st+1 ) ‖p ( z ) ) ( 1 ) After training , different future predictions for a given sequence of frames can be generated by sampling different latent variables from the prior distribution . Recent models for stochastic video prediction ( Babaeizadeh et al. , 2017 ; Denton & Fergus , 2018 ) do not use their model for planning or training a policy network , and parameterize the posterior distribution over latent variables using a diagonal Gaussian . In our case , we are training an actionconditional video prediction model which we will later use to train a policy . This leads to an additional requirement : it is important for the prediction model to accurately respond to input actions , and not use the latent variables to encode factors of variation in the outputs which are due to the actions . To this end we propose to use a mixture of two Gaussians , with one component fixed to the prior , as our posterior distribution : ( µφ , σφ ) = qφ ( s1 : t , st+1 ) u ∼ B ( pu ) zt ∼ ( 1− u ) · N ( µφ , σφ ) + u · p ( z ) This can be seen as applying a form of global dropout to the latent variables at training time 2 , and forces the prediction model to extract as much information as possible from the input states and actions by making the latent variable independent of the output with some probability . In our experiments we will refer to this parameterization as z-dropout . 2.2 TRAINING A POLICY NETWORK WITH UNCERTAINTY MINIMIZATION . Once the forward model is trained , we use it to train a parameterized policy network πψ , which we assume to be stochastic . We first sample an initial state sequence s1 : t from the training set , unroll the forward model over T time steps , and backpropagate gradients of a differentiable objective function with respect to the parameters of the policy network ( shown in Figure 2 ) . During this process the weights of the forward model are fixed , and only the weights of the policy network are optimized . This objective function contains two terms : a policy cost C , which reflects the underlying objective the policy is trying to learn , and an uncertainty cost U , which reflects how close the predicted state induced by the policy network is to the manifold which the data D is drawn from . Training the policy using a stochastic forward model involves solving the following problem , where latent variables are sampled from the prior and input into the forward model at every time step : argmin ψ [ T∑ i=1 C ( ŝt+i ) + λU ( ŝt+i ) ] , such that : zt+i ∼ p ( z ) ât+i ∼ πψ ( ŝt+i−1 ) ŝt+i = f ( ŝt+i−1 , ât+i , zt+i ) The uncertainty cost U is applied to states predicted by the forward model , and could reflect any measure of their likelihood under the distribution the training data is drawn from . We propose here a general form based on the uncertainty of the dynamics model , which is calculated using dropout . Intuitively , if the dynamics model is given a state-action pair from the same distribution asD ( which 2If the variances of both Gaussians are zero , this becomes equivalent to applying dropout to the latent code where all units are either set to zero or unchanged with probability pu . it was trained on ) , it will have low uncertainty about its prediction . If it is given a state-action pair which is outside this distribution , it will have high uncertainty . Dropout ( Hinton et al. 
, 2012 ; Srivastava et al. , 2014 ) is a regularization technique which consists of randomly setting hidden units in a neural network to zero with some probability . The work of ( Gal & Ghahramani , 2016 ) showed that estimates of the neural network ’ s uncertainty for a given input can be obtained by calculating the covariance of its outputs taken over multiple dropout masks . We note that this uncertainty estimate is the composition of differentiable functions : each of the models induced by applying a different dropout mask is differentiable , as is the covariance operator . Furthermore , we can summarize the covariance matrix by taking its trace ( which is equal to the sum of its eigenvalues , or equivalently the sum of the variances of the outputs across each dimension ) , which is also a differentiable operation . This provides a scalar estimate of uncertainty which is differentiable with respect to the input . More precisely , let fθ1 , ... , fθK denote our prediction model with K different dropout masks applied to its hidden units ( this can also be viewed as changing its weights ) . We define our scalar measure of uncertainty U as follows : U ( ŝt+1 ) = tr [ Cov [ { fθk ( s1 : t , at , zt ) } Kk=1 ] ] = d∑ j=1 Var ( { fθk ( s1 : t , at , zt ) j } Kk=1 ) where d is the dimensionality of the output . Minimizing this quantity with respect to actions encourages the policy network to produce actions which , when plugged into the forward model , will produce predictions which the forward model is confident about 3 . A simple way to define U given an initial sequence of states s1 : t from D would be to set U ( ŝt+k ) = ‖ŝt+k − st+k‖2 , which would encourage the policy network to output actions which lead to a similar trajectory as the one observed in the dataset . This leads to a set of states which the model is presumably confident about , but may not be a trajectory which also satisfies the policy cost C unless the dataset D consists of expert trajectories . If this is the case , setting C ( ŝt+i ) = U ( ŝt+i ) = 1 2‖ŝt+k − st+k‖2 gives a model-based imitation learning objective which simultaneously optimizes the policy cost and the uncertainty cost . The problem then becomes : 3In practice , we apply an additional step to normalize this quantity across different modalities and rollout lengths , which is detailed in Appendix D. argmin ψ [ T∑ i=1 ‖ŝt+i − st+i‖2 ] , such that : zt+i ∼ qφ ( z|s1 : t , st+1 ) ât+i ∼ πψ ( ŝt+i−1 ) ŝt+i = f ( ŝt+i−1 , ât+i , zt+i ) We call the first approach MPUR , for Model-Predictive Policy with Uncertainty Regularization , and the second MPER , for Model-Predictive Policy with Expert Regularization . A key feature of both approaches is that we optimize the objective over T time steps , which is made possible by our learned dynamics model . This means that the actions will receive gradients from multiple time steps ahead , which will penalize actions which lead to large divergences from the training manifold further into the future , even if they only cause a small divergence at the next time step . | The paper addresses the difficulty of covariate shift in model-based reinforcement learning. Here, the distribution over trajectories during is significantly different for the behaviour or data-collecting policy and the target or optimised policy. As a mean to address this, the authors propose to add an uncertainty term to the cost, which is realised by the trace of the covariance of the outputs of a MC dropout forward model. 
The method is applied to driving in dense traffic, where even single wrong actions can be catastrophic. | science.mup_single_document_summarization.9822 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nModel-Predictive Policy Learning with Uncertainty Regularization for Driving in Dense Traffic\n\nMain Content:\n1 INTRODUCTION . In recent years , model-free reinforcement learning methods using deep neural network controllers have proven effective on a wide range of tasks , from playing video or text-based games ( Mnih et al. , 2015 ; 2016 ; Narasimhan et al. , 2015 ) to learning algorithms ( Zaremba et al. , 2015 ) and complex locomotion tasks ( Lillicrap et al. , 2015 ; Zhang et al. , 2015 ) . However , these methods often require a large number of interactions with the environment in order to learn . While this is not a problem if the environment is simulated , it can limit the application of these methods in realistic environments where interactions with the environment are slow , expensive or potentially dangerous . Building a simulator where the agent can safely try out policies without facing real consequences can mitigate this problem , but requires human engineering effort which increases with the complexity of the environment being modeled . Model-based reinforcement learning approaches try to learn a model of the environment dynamics , and then use this model to plan actions or train a parameterized policy . A common setting is where an agent alternates between collecting experience by executing actions using its current policy or dynamics model , and then using these experiences to improve its dynamics model . This approach has been shown empirically to significantly reduce the required number of environment interactions needed to obtain an effective policy or planner ( Atkeson & Santamaria , 1997 ; Deisenroth & Rasmussen , 2011 ; Nagabandi et al. , 2017 ; Chua et al. , 2018 ) . Despite these improvements in sample complexity , there exist settings where even a single poor action executed by an agent in a real environment can have consequences which ∗Equal contribution . are not acceptable . At the same time , with data collection becoming increasingly inexpensive , there are many settings where observational data of an environment is abundant . This suggests a need for algorithms which can learn policies primarily from observational data , which can then perform well in a real environment . Autonomous driving is an example of such a setting : on one hand , trajectories of human drivers can be easily collected using traffic cameras ( Halkias & Colyar , 2006 ) , resulting in an abundance of observational data ; on the other hand , learning through interaction with the real environment is not a viable solution . However , learning policies from purely observational data is challenging because the data may only cover a small region of the space over which it is defined . If the observational data consists of stateaction pairs produced by an expert , one option is to use imitation learning ( Pomerleau , 1991 ) . However , this is well-known to suffer from a mismatch between the states seen at training and execution time ( Ross & Bagnell , 2010 ) . Another option is to learn a dynamics model from observational data , and then use it to train a policy ( Nguyen & Widrow , 1989 ) . However , the dynamics model may make arbitrary predictions outside the domain it was trained on , which may wrongly be associated with low cost ( or high reward ) as shown in Figure 1 . 
The policy network may then exploit these errors in the dynamics model and produce actions which lead to wrongly optimistic states . In the interactive setting , this problem is naturally self-correcting , since states where the model predictions are wrongly optimistic will be more likely to be experienced , and thus will correct the dynamics model . However , the problem persists if the dataset of environment interactions which the model is trained on is fixed . In this work , we propose to train a policy while explicitly penalizing the mismatch between the distribution of trajectories it induces and the one reflected in the training data . We use a learned dynamics model which is unrolled for multiple time steps , and train a policy network to minimize a differentiable cost over this rolled-out trajectory . This cost contains two terms : a policy cost which represents the objective the policy seeks to optimize , and an uncertainty cost which represents its divergence from the states it is trained on . We measure this second cost by using the uncertainty of the dynamics model about its own predictions , calculated using dropout . We apply our approach in the context of learning policies to drive an autonomous car in dense traffic , using a large-scale dataset of real-world driving trajectories which we also adapt into an environment for testing learned policies 1 . We show that model-based control using this additional uncertainty regularizer substantially outperforms unregularized control , and enables learning good driving policies using only observational data with no environment interaction or additional labeling by an expert . We also show how to effectively leverage an action-conditional stochastic forward model using a modified posterior distribution , which encourages the model to maintain sensitivity to input actions . 2 MODEL-PREDICTIVE POLICY LEARNING WITH UNCERTAINTY REGULARIZATION . We assume we are given a dataset of observational data which consists of state-action pairs D = { ( st , at ) } t. We first describe our general approach , which consists of two steps : learning an actionconditional dynamics model using the collected observational data , and then using this model to train a fast , feedforward policy network which minimizes both a policy cost and an uncertainty cost . 2.1 ACTION-CONDITIONAL FORWARD MODEL . In this work , we consider recent approaches for stochastic prediction based on Variational Autoencoders ( Kingma & Welling , 2013 ; Babaeizadeh et al. , 2017 ; Denton & Fergus , 2018 ) . The stochastic model fθ ( s1 : t , at , zt ) takes as input a sequence of observed or previously predicted states s1 : t , an action at , and a latent variable zt which represents the information about the next state st+1 which 1Dataset and environment can be found at https : //github.com/Atcold/pytorch-PPUU is not a deterministic function of the input . During training , latent variables are sampled from a distribution whose parameters are output by a posterior network qφ ( s1 : t , st+1 ) conditioned on the past inputs and true targets . This network is trained jointly with the rest of the model using the reparameterization trick , and a term is included in the loss to minimize the KL divergence between the posterior distribution and a fixed prior p ( z ) , which in our case is an isotropic Gaussian . 
The per-sample loss used for training the stochastic model is given by : L ( θ , φ ; s1 : t , st+1 , at ) = ‖st+1 − fθ ( s1 : t , at , zt ) ‖22 + βDKL ( qφ ( z|s1 : t , st+1 ) ‖p ( z ) ) ( 1 ) After training , different future predictions for a given sequence of frames can be generated by sampling different latent variables from the prior distribution . Recent models for stochastic video prediction ( Babaeizadeh et al. , 2017 ; Denton & Fergus , 2018 ) do not use their model for planning or training a policy network , and parameterize the posterior distribution over latent variables using a diagonal Gaussian . In our case , we are training an actionconditional video prediction model which we will later use to train a policy . This leads to an additional requirement : it is important for the prediction model to accurately respond to input actions , and not use the latent variables to encode factors of variation in the outputs which are due to the actions . To this end we propose to use a mixture of two Gaussians , with one component fixed to the prior , as our posterior distribution : ( µφ , σφ ) = qφ ( s1 : t , st+1 ) u ∼ B ( pu ) zt ∼ ( 1− u ) · N ( µφ , σφ ) + u · p ( z ) This can be seen as applying a form of global dropout to the latent variables at training time 2 , and forces the prediction model to extract as much information as possible from the input states and actions by making the latent variable independent of the output with some probability . In our experiments we will refer to this parameterization as z-dropout . 2.2 TRAINING A POLICY NETWORK WITH UNCERTAINTY MINIMIZATION . Once the forward model is trained , we use it to train a parameterized policy network πψ , which we assume to be stochastic . We first sample an initial state sequence s1 : t from the training set , unroll the forward model over T time steps , and backpropagate gradients of a differentiable objective function with respect to the parameters of the policy network ( shown in Figure 2 ) . During this process the weights of the forward model are fixed , and only the weights of the policy network are optimized . This objective function contains two terms : a policy cost C , which reflects the underlying objective the policy is trying to learn , and an uncertainty cost U , which reflects how close the predicted state induced by the policy network is to the manifold which the data D is drawn from . Training the policy using a stochastic forward model involves solving the following problem , where latent variables are sampled from the prior and input into the forward model at every time step : argmin ψ [ T∑ i=1 C ( ŝt+i ) + λU ( ŝt+i ) ] , such that : zt+i ∼ p ( z ) ât+i ∼ πψ ( ŝt+i−1 ) ŝt+i = f ( ŝt+i−1 , ât+i , zt+i ) The uncertainty cost U is applied to states predicted by the forward model , and could reflect any measure of their likelihood under the distribution the training data is drawn from . We propose here a general form based on the uncertainty of the dynamics model , which is calculated using dropout . Intuitively , if the dynamics model is given a state-action pair from the same distribution asD ( which 2If the variances of both Gaussians are zero , this becomes equivalent to applying dropout to the latent code where all units are either set to zero or unchanged with probability pu . it was trained on ) , it will have low uncertainty about its prediction . If it is given a state-action pair which is outside this distribution , it will have high uncertainty . Dropout ( Hinton et al. 
, 2012 ; Srivastava et al. , 2014 ) is a regularization technique which consists of randomly setting hidden units in a neural network to zero with some probability . The work of ( Gal & Ghahramani , 2016 ) showed that estimates of the neural network ’ s uncertainty for a given input can be obtained by calculating the covariance of its outputs taken over multiple dropout masks . We note that this uncertainty estimate is the composition of differentiable functions : each of the models induced by applying a different dropout mask is differentiable , as is the covariance operator . Furthermore , we can summarize the covariance matrix by taking its trace ( which is equal to the sum of its eigenvalues , or equivalently the sum of the variances of the outputs across each dimension ) , which is also a differentiable operation . This provides a scalar estimate of uncertainty which is differentiable with respect to the input . More precisely , let fθ1 , ... , fθK denote our prediction model with K different dropout masks applied to its hidden units ( this can also be viewed as changing its weights ) . We define our scalar measure of uncertainty U as follows : U ( ŝt+1 ) = tr [ Cov [ { fθk ( s1 : t , at , zt ) } Kk=1 ] ] = d∑ j=1 Var ( { fθk ( s1 : t , at , zt ) j } Kk=1 ) where d is the dimensionality of the output . Minimizing this quantity with respect to actions encourages the policy network to produce actions which , when plugged into the forward model , will produce predictions which the forward model is confident about 3 . A simple way to define U given an initial sequence of states s1 : t from D would be to set U ( ŝt+k ) = ‖ŝt+k − st+k‖2 , which would encourage the policy network to output actions which lead to a similar trajectory as the one observed in the dataset . This leads to a set of states which the model is presumably confident about , but may not be a trajectory which also satisfies the policy cost C unless the dataset D consists of expert trajectories . If this is the case , setting C ( ŝt+i ) = U ( ŝt+i ) = 1 2‖ŝt+k − st+k‖2 gives a model-based imitation learning objective which simultaneously optimizes the policy cost and the uncertainty cost . The problem then becomes : 3In practice , we apply an additional step to normalize this quantity across different modalities and rollout lengths , which is detailed in Appendix D. argmin ψ [ T∑ i=1 ‖ŝt+i − st+i‖2 ] , such that : zt+i ∼ qφ ( z|s1 : t , st+1 ) ât+i ∼ πψ ( ŝt+i−1 ) ŝt+i = f ( ŝt+i−1 , ât+i , zt+i ) We call the first approach MPUR , for Model-Predictive Policy with Uncertainty Regularization , and the second MPER , for Model-Predictive Policy with Expert Regularization . A key feature of both approaches is that we optimize the objective over T time steps , which is made possible by our learned dynamics model . This means that the actions will receive gradients from multiple time steps ahead , which will penalize actions which lead to large divergences from the training manifold further into the future , even if they only cause a small divergence at the next time step .",
"role": "user"
},
{
"content": "The paper addresses the difficulty of covariate shift in model-based reinforcement learning. Here, the distribution over trajectories during is significantly different for the behaviour or data-collecting policy and the target or optimised policy. As a mean to address this, the authors propose to add an uncertainty term to the cost, which is realised by the trace of the covariance of the outputs of a MC dropout forward model. The method is applied to driving in dense traffic, where even single wrong actions can be catastrophic.",
"role": "assistant"
}
] |
science.discomat_te | | Glass type | P2O5 | Al2O3 | SiO2 | MgO |
| MVP1 | 60 | 15 | 15 | 10 |
| MVP2 | 60 | 10 | 15 | 15 |
| MVP3 | 60 | 10 | 10 | 20 |
| MVP5 | 60 | 10 | 5 | 25 |
| MVP6 | 40 | 5 | 5 | 30 |
| MVP8a | 45 | 5 | 5 | 45 |
| MVP8b | 43 | 4 | 10 | 43 |
| MVP8c | 42 | 4 | 12 | 42 |
| MVP9a | 41 | 3 | 15 | 41 |
| MVP9b | 42 | 10 | 4 | 44 |
| MVP9c | 42 | 12 | 4 | 42 |
| MVP9d | 41 | 15 | 3 | 41 |
Caption: Nominal glass composition (wt %).
Examine the table provided from a materials science study. Your task is to convert each numeric cell into JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON object should be on a new line.
Output: | {"value": "60", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "wt", "material": "MVP1"}
{"value": "15", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "wt", "material": "MVP1"}
{"value": "15", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "MVP1"}
{"value": "10", "type": "Glass_Compound_Amount", "constituent": "MgO", "unit": "wt", "material": "MVP1"}
{"value": "60", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "wt", "material": "MVP2"}
{"value": "10", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "wt", "material": "MVP2"}
{"value": "15", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "MVP2"}
{"value": "15", "type": "Glass_Compound_Amount", "constituent": "MgO", "unit": "wt", "material": "MVP2"}
{"value": "60", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "wt", "material": "MVP3"}
{"value": "10", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "wt", "material": "MVP3"}
{"value": "10", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "MVP3"}
{"value": "20", "type": "Glass_Compound_Amount", "constituent": "MgO", "unit": "wt", "material": "MVP3"}
{"value": "60", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "wt", "material": "MVP5"}
{"value": "10", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "wt", "material": "MVP5"}
{"value": "5", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "MVP5"}
{"value": "25", "type": "Glass_Compound_Amount", "constituent": "MgO", "unit": "wt", "material": "MVP5"}
{"value": "40", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "wt", "material": "MVP6"}
{"value": "5", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "wt", "material": "MVP6"}
{"value": "5", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "MVP6"}
{"value": "30", "type": "Glass_Compound_Amount", "constituent": "MgO", "unit": "wt", "material": "MVP6"}
{"value": "45", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "wt", "material": "MVP8a"}
{"value": "5", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "wt", "material": "MVP8a"}
{"value": "5", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "MVP8a"}
{"value": "45", "type": "Glass_Compound_Amount", "constituent": "MgO", "unit": "wt", "material": "MVP8a"}
{"value": "43", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "wt", "material": "MVP8b"}
{"value": "4", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "wt", "material": "MVP8b"}
{"value": "10", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "MVP8b"}
{"value": "43", "type": "Glass_Compound_Amount", "constituent": "MgO", "unit": "wt", "material": "MVP8b"}
{"value": "42", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "wt", "material": "MVP8c"}
{"value": "4", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "wt", "material": "MVP8c"}
{"value": "12", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "MVP8c"}
{"value": "42", "type": "Glass_Compound_Amount", "constituent": "MgO", "unit": "wt", "material": "MVP8c"}
{"value": "41", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "wt", "material": "MVP9a"}
{"value": "3", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "wt", "material": "MVP9a"}
{"value": "15", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "MVP9a"}
{"value": "41", "type": "Glass_Compound_Amount", "constituent": "MgO", "unit": "wt", "material": "MVP9a"}
{"value": "42", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "wt", "material": "MVP9b"}
{"value": "10", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "wt", "material": "MVP9b"}
{"value": "4", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "MVP9b"}
{"value": "44", "type": "Glass_Compound_Amount", "constituent": "MgO", "unit": "wt", "material": "MVP9b"}
{"value": "42", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "wt", "material": "MVP9c"}
{"value": "12", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "wt", "material": "MVP9c"}
{"value": "4", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "MVP9c"}
{"value": "42", "type": "Glass_Compound_Amount", "constituent": "MgO", "unit": "wt", "material": "MVP9c"}
{"value": "41", "type": "Glass_Compound_Amount", "constituent": "P2O5", "unit": "wt", "material": "MVP9d"}
{"value": "15", "type": "Glass_Compound_Amount", "constituent": "Al2O3", "unit": "wt", "material": "MVP9d"}
{"value": "3", "type": "Glass_Compound_Amount", "constituent": "SiO2", "unit": "wt", "material": "MVP9d"}
{"value": "41", "type": "Glass_Compound_Amount", "constituent": "MgO", "unit": "wt", "material": "MVP9d"} | science.discomat_te.285 | [
{
"content": "| Glass type | P2O5 | Al2O3 | SiO2 | MgO |\n| MVP1 | 60 | 15 | 15 | 10 |\n| MVP2 | 60 | 10 | 15 | 15 |\n| MVP3 | 60 | 10 | 10 | 20 |\n| MVP5 | 60 | 10 | 5 | 25 |\n| MVP6 | 40 | 5 | 5 | 30 |\n| MVP8a | 45 | 5 | 5 | 45 |\n| MVP8b | 43 | 4 | 10 | 43 |\n| MVP8c | 42 | 4 | 12 | 42 |\n| MVP9a | 41 | 3 | 15 | 41 |\n| MVP9b | 42 | 10 | 4 | 44 |\n| MVP9c | 42 | 12 | 4 | 42 |\n| MVP9d | 41 | 15 | 3 | 41 |\n\nCaption: Nominal glass composition (wt %).\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"60\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"wt\", \"material\": \"MVP1\"}\n{\"value\": \"15\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"wt\", \"material\": \"MVP1\"}\n{\"value\": \"15\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"MVP1\"}\n{\"value\": \"10\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"MgO\", \"unit\": \"wt\", \"material\": \"MVP1\"}\n{\"value\": \"60\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"wt\", \"material\": \"MVP2\"}\n{\"value\": \"10\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"wt\", \"material\": \"MVP2\"}\n{\"value\": \"15\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"MVP2\"}\n{\"value\": \"15\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"MgO\", \"unit\": \"wt\", \"material\": \"MVP2\"}\n{\"value\": \"60\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"wt\", \"material\": \"MVP3\"}\n{\"value\": \"10\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"wt\", \"material\": \"MVP3\"}\n{\"value\": \"10\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"MVP3\"}\n{\"value\": \"20\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"MgO\", \"unit\": \"wt\", \"material\": \"MVP3\"}\n{\"value\": \"60\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"wt\", \"material\": \"MVP5\"}\n{\"value\": \"10\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"wt\", \"material\": \"MVP5\"}\n{\"value\": \"5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"MVP5\"}\n{\"value\": \"25\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"MgO\", \"unit\": \"wt\", \"material\": \"MVP5\"}\n{\"value\": \"40\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"wt\", \"material\": \"MVP6\"}\n{\"value\": \"5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"wt\", \"material\": \"MVP6\"}\n{\"value\": \"5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"MVP6\"}\n{\"value\": \"30\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"MgO\", \"unit\": \"wt\", \"material\": \"MVP6\"}\n{\"value\": \"45\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"wt\", \"material\": \"MVP8a\"}\n{\"value\": \"5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"wt\", \"material\": \"MVP8a\"}\n{\"value\": \"5\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"MVP8a\"}\n{\"value\": \"45\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"MgO\", \"unit\": \"wt\", \"material\": \"MVP8a\"}\n{\"value\": \"43\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"wt\", \"material\": \"MVP8b\"}\n{\"value\": \"4\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"wt\", \"material\": \"MVP8b\"}\n{\"value\": \"10\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"MVP8b\"}\n{\"value\": \"43\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"MgO\", \"unit\": \"wt\", 
\"material\": \"MVP8b\"}\n{\"value\": \"42\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"wt\", \"material\": \"MVP8c\"}\n{\"value\": \"4\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"wt\", \"material\": \"MVP8c\"}\n{\"value\": \"12\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"MVP8c\"}\n{\"value\": \"42\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"MgO\", \"unit\": \"wt\", \"material\": \"MVP8c\"}\n{\"value\": \"41\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"wt\", \"material\": \"MVP9a\"}\n{\"value\": \"3\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"wt\", \"material\": \"MVP9a\"}\n{\"value\": \"15\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"MVP9a\"}\n{\"value\": \"41\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"MgO\", \"unit\": \"wt\", \"material\": \"MVP9a\"}\n{\"value\": \"42\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"wt\", \"material\": \"MVP9b\"}\n{\"value\": \"10\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"wt\", \"material\": \"MVP9b\"}\n{\"value\": \"4\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"MVP9b\"}\n{\"value\": \"44\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"MgO\", \"unit\": \"wt\", \"material\": \"MVP9b\"}\n{\"value\": \"42\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"wt\", \"material\": \"MVP9c\"}\n{\"value\": \"12\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"wt\", \"material\": \"MVP9c\"}\n{\"value\": \"4\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"MVP9c\"}\n{\"value\": \"42\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"MgO\", \"unit\": \"wt\", \"material\": \"MVP9c\"}\n{\"value\": \"41\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"P2O5\", \"unit\": \"wt\", \"material\": \"MVP9d\"}\n{\"value\": \"15\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"Al2O3\", \"unit\": \"wt\", \"material\": \"MVP9d\"}\n{\"value\": \"3\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"SiO2\", \"unit\": \"wt\", \"material\": \"MVP9d\"}\n{\"value\": \"41\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"MgO\", \"unit\": \"wt\", \"material\": \"MVP9d\"}",
"role": "assistant"
}
] |
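The conversion task in the discomat_te records (this one and the similar ones below) boils down to walking the numeric cells of the table row by row and emitting one JSON object per cell. The Python sketch below reproduces the Glass_Compound_Amount lines for the first row of the table above; it is only an illustration under the assumption that the first column names the material and the remaining columns are constituents in wt%, and the `table_to_json_lines` helper is hypothetical. Cells that are not compound amounts (as in the fiber-strength and NMR tables further down) would instead map to the flat {"value": "...", "type": "Other"} form.

```python
import json

def table_to_json_lines(header, rows, unit="wt"):
    """Emit one Glass_Compound_Amount JSON object per numeric cell.

    header: list like ["Glass type", "P2O5", "Al2O3", "SiO2", "MgO"]
    rows:   list of lists like ["MVP1", "60", "15", "15", "10"]
    The first column is treated as the material label; every remaining
    cell becomes one JSON line, mirroring the expected output format.
    """
    lines = []
    for row in rows:
        material, values = row[0], row[1:]
        for constituent, value in zip(header[1:], values):
            lines.append(json.dumps({
                "value": value,
                "type": "Glass_Compound_Amount",
                "constituent": constituent,
                "unit": unit,
                "material": material,
            }))
    return lines

header = ["Glass type", "P2O5", "Al2O3", "SiO2", "MgO"]
rows = [["MVP1", "60", "15", "15", "10"]]
print("\n".join(table_to_json_lines(header, rows)))
```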
science.discomat_te | | Sample no. | Ph, volume percent of crystals | Activation energy (kJ/mol) of the scale factor for normalised frequency | ln(t 0, s) of the scale factor | G unrelaxed shear modulus (GPa) |
| Glas 0 | 0 | 137+-18 | -50.15 | 24.3 |
| Glas 1 | 17 | 129+-13 | -47.68 | 23.8 |
| Glas 3 | 22 | 126+-16 | -45.72 | 24.7 |
| Glas 5 | 27 | 117+-10 | -42.25 | 25.0 |
Caption: Activation energies of shear stress relaxation and unrelaxed shear modulus of disilicate lithium glasses
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "0", "type": "Other"}
{"value": "137", "type": "Other"}
{"value": "24.3", "type": "Other"}
{"value": "17", "type": "Other"}
{"value": "129", "type": "Other"}
{"value": "23.8", "type": "Other"}
{"value": "22", "type": "Other"}
{"value": "126", "type": "Other"}
{"value": "24.7", "type": "Other"}
{"value": "27", "type": "Other"}
{"value": "117", "type": "Other"}
{"value": "25.0", "type": "Other"} | science.discomat_te.221 | [
{
"content": "| Sample no. | Ph, volume percent of crystals | Activation energy (kJ/mol) of the scale factor for normalised frequency | ln(t 0, s) of the scale factor | G unrelaxed shear modulus (GPa) |\n| Glas 0 | 0 | 137+-18 | -50.15 | 24.3 |\n| Glas 1 | 17 | 129+-13 | -47.68 | 23.8 |\n| Glas 3 | 22 | 126+-16 | -45.72 | 24.7 |\n| Glas 5 | 27 | 117+-10 | -42.25 | 25.0 |\n\nCaption: Activation energies of shear stress relaxation and unrelaxed shear modulus of disilicate lithium glasses\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"0\", \"type\": \"Other\"}\n{\"value\": \"137\", \"type\": \"Other\"}\n{\"value\": \"24.3\", \"type\": \"Other\"}\n{\"value\": \"17\", \"type\": \"Other\"}\n{\"value\": \"129\", \"type\": \"Other\"}\n{\"value\": \"23.8\", \"type\": \"Other\"}\n{\"value\": \"22\", \"type\": \"Other\"}\n{\"value\": \"126\", \"type\": \"Other\"}\n{\"value\": \"24.7\", \"type\": \"Other\"}\n{\"value\": \"27\", \"type\": \"Other\"}\n{\"value\": \"117\", \"type\": \"Other\"}\n{\"value\": \"25.0\", \"type\": \"Other\"}",
"role": "assistant"
}
] |
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
AARL: Automated Auxiliary Loss for Reinforcement Learning
Main Content:
1 INTRODUCTION . Reinforcement learning ( RL ) has been a hot research topic and has shown significant progress in many fields ( Mnih et al. , 2013 ; Silver et al. , 2016 ; Gu et al. , 2017 ; Vinyals et al. , 2019 ) . Recent RL research focuses on obtaining a good representation of states as it is shown to be the key to improve sample efficiency of RL ( Laskin et al. , 2020 ) . This is because in high-dimensional environments , applying RL directly on the complex state inputs is incredibly sample-inefficient ( Lake et al. , 2017 ; Kaiser et al. , 2019 ) , making RL hard to scale to real-world tasks where interacting with the environment is costly ( Dulac-Arnold et al. , 2019 ) . Standard RL paradigm learns the representation from critic loss ( value prediction ) and / or actor loss ( maximizing cumulative reward ) , which hardly extracts informative representations in challenging environments like pixel-based RL and complex robotics systems ( Lake et al. , 2017 ; Kober et al. , 2013 ) . This motivates adding auxiliary losses in support of learning better latent representations of states . The usage of auxiliary loss encodes prior knowledge of RL and its environments , and puts regularization on training ( Shelhamer et al. , 2016 ) . Typical auxiliary losses are manually designed by human experts , including observation reconstruction ( Yarats et al. , 2019 ) , reward prediction ( Jaderberg et al. , 2017 ) and environment dynamics prediction ( Shelhamer et al. , 2016 ; De Bruin et al. , 2018 ; Ota et al. , 2020 ) . However , such auxiliary loss designs and choices rely heavily on human knowledge of what might be helpful for RL training , which requires extensive human efforts to scale to new tasks and can be suboptimal for improving RL performance ( Yang & Nachum , 2021 ) . In this paper , we rigorously treat auxiliary loss functions for RL as a first-class problem to explicitly address the question : what is the universal approach to find a good auxiliary loss function for RL ? Considering that the automated machine learning ( AutoML ) community has shown promising results with automated loss search in computer vision tasks ( Li et al. , 2019 ) , we propose to automate the process of designing auxiliary loss functions of RL . Specifically , we formulate the task as a bi-level optimization , where we try to find the best auxiliary loss function , which , to the most extent , helps train a good RL agent . The inner-level problem is a typical RL training problem , while the outer-level can be seen as a search problem . To tackle this , we first design a novel search space of auxiliary loss functions , which is a combination of two components : loss input and loss operator . As shown in Table 1 , the search space covers many existing handcrafted losses , and it is also substantially large ( as large as 4.6 × 1019 ) . We then propose an efficient search strategy to explore the space . This is done in a two-step manner . We first finalize the loss operator specification and then use an evolutionary strategy that performs mutations on configurations of loss input , to identify the top-performing loss inputs quickly . We evaluate our search framework on both pixel-based and state-based environments in the DeepMind Control suite ( Tassa et al. , 2018 ) . 
Extensive experiments show that the searched auxiliary loss functions greatly outperform state-of-the-art methods and can easily transfer to unseen environments ( that are never used during the search ) , showing that our proposed method is robust and effective . We highlight the main contributions of this paper below : • We introduce a principled and universal approach for auxiliary loss design in RL . To the best of our knowledge , we are the first to derive the optimal auxiliary loss function with an automatic process . Our framework can be easily applied to arbitrary RL tasks to search for the best auxiliary loss function . • We demonstrate that AARL significantly outperforms state-of-the-art methods in both pixel-based and state-based environments . The searched auxiliary loss functions show strong generalization ability to transfer to unseen environments . • We analyze the derived auxiliary loss functions and deliver some insightful discussions that we hope will deepen the understanding of auxiliary losses in RL . We will also open source our codebase to facilitate future research . 2 METHODOLOGY . We consider the standard Markov Decision Process ( MDP ) setting where we specify the state , action and reward at time step t as ( st , at , rt ) . The sequence of interaction data is denoted as ( st , at , rt , · · · , st+k ) , where k represents the horizon of this sequence . Suppose we have an RL agent Mω parameterized by ω and R ( Mω ; E ) is the agent performance ( i.e. , cumulative discounted reward ) in environment E . The goal of AARL is to find the optimal auxiliary loss function L , such that , when ω is optimized under an arbitrary RL objective function LRL ( e.g. , actor-critic loss ) together with L , the agent achieves the best performance . Formally , we have max L R ( Mω∗ ( L ) ; E ) , s.t . ω∗ ( L ) = argmin ω ( LRL ( ω ; E ) + λL ( ω ; E ) ) , ( 1 ) where λ is a hyper-parameter that controls the relative weight of the auxiliary task . We solve this bi-level optimization problem with AutoML techniques . The inner optimization is a standard RL training procedure . For the outer one , we define a finite and discrete search space ( Section 2.1 ) , and use a variation of evolution strategy to explore the space ( Section 2.2 ) . We will explain the details in the rest of this section . 2.1 SEARCH SPACE . An auxiliary loss function L can be viewed as a combination of two components : 1 ) loss input I and 2 ) loss operator f . We define a search space as shown in Figure 1 , where I is a pair of binary masks ( m̂ , m ) that selects from interaction data to compute ŷ and y , and f is an operator that aggregates the selected inputs into an expression of the loss function with scalar output . Loss Input Unlike supervised machine learning tasks with an explicit prediction and ground truth for the loss function , auxiliary losses in RL have no ground truths beforehand . Instead , they are generated accordingly upon interaction with the environment itself in a self-supervised manner . As shown in Figure 1 , we take some tokens from the sequence ( st , at , rt , · · · , st+k ) with binary mask vector m̂ and feed them into the encoder θ̂ which maps states to latent representations . The predictor module then tries to predict targets ( encoded with a momentum encoder θ , as is typically done in recent works ) , which are another few tokens we select from the sequence with another binary mask vectorm . Details about momentum encoder are given in Appendix A.1.2 . 
In other words , the search space for loss input is a pair of binary masks ( m̂ , m ) , each of which is up to length ( 3k + 1 ) if the length of an interaction data sequence , i.e. , horizon , is limited to k steps . In our case , we set the maximum horizon length kmax = 10 . Loss Operator We optimize ŷ to be as similar as possible to y . Therefore , we make loss operator f cover commonly-used similarity measures , including inner product ( Inner ) ( He et al. , 2020 ; Stooke et al. , 2021 ) , bilinear inner product ( Bilinear ) ( Laskin et al. , 2020 ) , cosine similarity ( Cosine ) ( Chen et al. , 2020 ) , mean squared error ( MSE ) ( Ota et al. , 2020 ; De Bruin et al. , 2018 ) and normalized mean squared error ( N-MSE ) ( Schwarzer et al. , 2020 ) . Some existing works , e.g. , contrastive objectives like InfoNCE loss ( Oord et al. , 2018 ) , also incorporate the trick to sample un-paired predictions and targets as negative samples and maximize the distances between them . We find this technique applicable to all the loss operators mentioned above and thus incorporate the discriminative version of these operators in our search space . Search Space Complexity As shown in Table 1 , many existing manually designed auxiliary losses can naturally fit into our loss space thus are our special cases , which proves that this loss space is reasonable , flexible , and general . However , this is also at the cost that the space is substantially large . In total , the size of the entire space is 10× ∑10 i=1 2 6i+2 ≈ 4.6× 1019 . The detailed derivation can be found in Appendix C . 2.2 SEARCH STRATEGY . Search Space Pruning Considering that the loss space is extremely large , an effective optimization strategy is inevitably required . Directly grid-searching over the whole space is infeasible because of unacceptable computational cost . Thus some advanced techniques such as space pruning and an elaborate search strategy are necessary . Our search space can be seen as a combination of the space for the input I and the space for the operator f . Inspired by AutoML works ( Dai et al. , 2020 ; Ying et al. , 2019 ) that search for hyper-parameters first and then neural architectures , we approximate the joint search of input and operator in Equation ( 1 ) in a two-step manner . The optimal auxiliary loss { I∗ , f∗ } can be optimized as : max L R ( Mω∗ ( L ) ; E ) = maxI , f R ( Mω∗ ( I , f ) ; E ) ≈ maxI R ( Mω∗ ( I , f∗ ) ; E ) where f∗ ≈ argmax f EI [ R ( Mω∗ ( I , f ) ; E ) ] ( 2 ) To decide the best loss operator , for every f in the loss operator space , we estimate EI [ R ( Mω∗ ( I , f ) ; E ) ] with a random sampling strategy . We run 15 trials for each loss operator and calculate the average of their normalized return on three environments . More details are available in Appendix A.1.3 . Surprisingly , as summarized in Table 2 , the simplest MSE without negative samples outperforms all other loss operators with complex designs . Therefore , this loss operator is chosen for the rest of this paper . Evolution Even with the search space pruning , the rest of the space still remains large . To efficiently identify top candidates from the search space , we adopt an evolutionary algorithm ( Real et al. , 2019 ) . The pipeline of the evolution process is illustrated in Figure 2 . In each stage , we first train and evaluate a population of candidates with size P = 100 , where each candidate corresponds to a pair of binary masks ( m̂ , m ) . 
The candidates are then sorted by the approximated area under learning curve ( AULC ) ( Ghiassian et al. , 2020 ; Stadie et al. , 2015 ) , which well captures both convergence speed and final performance ( Viering & Loog , 2021 ) and reduces the variance of RL evaluation . The top-25 % candidates are selected after each training stage , and we perform four types of mutations on the selected candidates to form a new population for the next stage : replacement ( 50 % of the population ) ; crossover ( 20 % ) ; horizon decrease and horizon increase ( 10 % ) , as shown in Figure 3 . Moreover , the last 20 % of the new population is from random generation . As for each bit of masks , replacement operation flips the given bit with probability p = 12· ( 3k+1 ) , where k is the horizon length . Crossover generates a new candidate by randomly combining the mask bits of two candidates with the same horizon length in the population . We also incorporate the knowledge from the existing auxiliary loss designs by bootstrapping the initial population with a prior distribution ( Co-Reyes et al. , 2020 ) , so that we can quickly converge to a reasonable loss function . More implementation details are available in Appendix A.3 . | The paper introduces an approach to automatically discover optimal auxiliary loss for RL, the method is called (AARL). The authors claim that AARL outperform baselines on both pixel-based and state-based task on the DeepMind Control Suite. Also using their method the author analyse different auxiliary losses to identify common patterns. | science.mup_single_document_summarization.8511 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nAARL: Automated Auxiliary Loss for Reinforcement Learning\n\nMain Content:\n1 INTRODUCTION . Reinforcement learning ( RL ) has been a hot research topic and has shown significant progress in many fields ( Mnih et al. , 2013 ; Silver et al. , 2016 ; Gu et al. , 2017 ; Vinyals et al. , 2019 ) . Recent RL research focuses on obtaining a good representation of states as it is shown to be the key to improve sample efficiency of RL ( Laskin et al. , 2020 ) . This is because in high-dimensional environments , applying RL directly on the complex state inputs is incredibly sample-inefficient ( Lake et al. , 2017 ; Kaiser et al. , 2019 ) , making RL hard to scale to real-world tasks where interacting with the environment is costly ( Dulac-Arnold et al. , 2019 ) . Standard RL paradigm learns the representation from critic loss ( value prediction ) and / or actor loss ( maximizing cumulative reward ) , which hardly extracts informative representations in challenging environments like pixel-based RL and complex robotics systems ( Lake et al. , 2017 ; Kober et al. , 2013 ) . This motivates adding auxiliary losses in support of learning better latent representations of states . The usage of auxiliary loss encodes prior knowledge of RL and its environments , and puts regularization on training ( Shelhamer et al. , 2016 ) . Typical auxiliary losses are manually designed by human experts , including observation reconstruction ( Yarats et al. , 2019 ) , reward prediction ( Jaderberg et al. , 2017 ) and environment dynamics prediction ( Shelhamer et al. , 2016 ; De Bruin et al. , 2018 ; Ota et al. , 2020 ) . However , such auxiliary loss designs and choices rely heavily on human knowledge of what might be helpful for RL training , which requires extensive human efforts to scale to new tasks and can be suboptimal for improving RL performance ( Yang & Nachum , 2021 ) . In this paper , we rigorously treat auxiliary loss functions for RL as a first-class problem to explicitly address the question : what is the universal approach to find a good auxiliary loss function for RL ? Considering that the automated machine learning ( AutoML ) community has shown promising results with automated loss search in computer vision tasks ( Li et al. , 2019 ) , we propose to automate the process of designing auxiliary loss functions of RL . Specifically , we formulate the task as a bi-level optimization , where we try to find the best auxiliary loss function , which , to the most extent , helps train a good RL agent . The inner-level problem is a typical RL training problem , while the outer-level can be seen as a search problem . To tackle this , we first design a novel search space of auxiliary loss functions , which is a combination of two components : loss input and loss operator . As shown in Table 1 , the search space covers many existing handcrafted losses , and it is also substantially large ( as large as 4.6 × 1019 ) . We then propose an efficient search strategy to explore the space . This is done in a two-step manner . We first finalize the loss operator specification and then use an evolutionary strategy that performs mutations on configurations of loss input , to identify the top-performing loss inputs quickly . We evaluate our search framework on both pixel-based and state-based environments in the DeepMind Control suite ( Tassa et al. , 2018 ) . 
Extensive experiments show that the searched auxiliary loss functions greatly outperform state-of-the-art methods and can easily transfer to unseen environments ( that are never used during the search ) , showing that our proposed method is robust and effective . We highlight the main contributions of this paper below : • We introduce a principled and universal approach for auxiliary loss design in RL . To the best of our knowledge , we are the first to derive the optimal auxiliary loss function with an automatic process . Our framework can be easily applied to arbitrary RL tasks to search for the best auxiliary loss function . • We demonstrate that AARL significantly outperforms state-of-the-art methods in both pixel-based and state-based environments . The searched auxiliary loss functions show strong generalization ability to transfer to unseen environments . • We analyze the derived auxiliary loss functions and deliver some insightful discussions that we hope will deepen the understanding of auxiliary losses in RL . We will also open source our codebase to facilitate future research . 2 METHODOLOGY . We consider the standard Markov Decision Process ( MDP ) setting where we specify the state , action and reward at time step t as ( st , at , rt ) . The sequence of interaction data is denoted as ( st , at , rt , · · · , st+k ) , where k represents the horizon of this sequence . Suppose we have an RL agent Mω parameterized by ω and R ( Mω ; E ) is the agent performance ( i.e. , cumulative discounted reward ) in environment E . The goal of AARL is to find the optimal auxiliary loss function L , such that , when ω is optimized under an arbitrary RL objective function LRL ( e.g. , actor-critic loss ) together with L , the agent achieves the best performance . Formally , we have max L R ( Mω∗ ( L ) ; E ) , s.t . ω∗ ( L ) = argmin ω ( LRL ( ω ; E ) + λL ( ω ; E ) ) , ( 1 ) where λ is a hyper-parameter that controls the relative weight of the auxiliary task . We solve this bi-level optimization problem with AutoML techniques . The inner optimization is a standard RL training procedure . For the outer one , we define a finite and discrete search space ( Section 2.1 ) , and use a variation of evolution strategy to explore the space ( Section 2.2 ) . We will explain the details in the rest of this section . 2.1 SEARCH SPACE . An auxiliary loss function L can be viewed as a combination of two components : 1 ) loss input I and 2 ) loss operator f . We define a search space as shown in Figure 1 , where I is a pair of binary masks ( m̂ , m ) that selects from interaction data to compute ŷ and y , and f is an operator that aggregates the selected inputs into an expression of the loss function with scalar output . Loss Input Unlike supervised machine learning tasks with an explicit prediction and ground truth for the loss function , auxiliary losses in RL have no ground truths beforehand . Instead , they are generated accordingly upon interaction with the environment itself in a self-supervised manner . As shown in Figure 1 , we take some tokens from the sequence ( st , at , rt , · · · , st+k ) with binary mask vector m̂ and feed them into the encoder θ̂ which maps states to latent representations . The predictor module then tries to predict targets ( encoded with a momentum encoder θ , as is typically done in recent works ) , which are another few tokens we select from the sequence with another binary mask vectorm . Details about momentum encoder are given in Appendix A.1.2 . 
In other words , the search space for loss input is a pair of binary masks ( m̂ , m ) , each of which is up to length ( 3k + 1 ) if the length of an interaction data sequence , i.e. , horizon , is limited to k steps . In our case , we set the maximum horizon length kmax = 10 . Loss Operator We optimize ŷ to be as similar as possible to y . Therefore , we make loss operator f cover commonly-used similarity measures , including inner product ( Inner ) ( He et al. , 2020 ; Stooke et al. , 2021 ) , bilinear inner product ( Bilinear ) ( Laskin et al. , 2020 ) , cosine similarity ( Cosine ) ( Chen et al. , 2020 ) , mean squared error ( MSE ) ( Ota et al. , 2020 ; De Bruin et al. , 2018 ) and normalized mean squared error ( N-MSE ) ( Schwarzer et al. , 2020 ) . Some existing works , e.g. , contrastive objectives like InfoNCE loss ( Oord et al. , 2018 ) , also incorporate the trick to sample un-paired predictions and targets as negative samples and maximize the distances between them . We find this technique applicable to all the loss operators mentioned above and thus incorporate the discriminative version of these operators in our search space . Search Space Complexity As shown in Table 1 , many existing manually designed auxiliary losses can naturally fit into our loss space thus are our special cases , which proves that this loss space is reasonable , flexible , and general . However , this is also at the cost that the space is substantially large . In total , the size of the entire space is 10× ∑10 i=1 2 6i+2 ≈ 4.6× 1019 . The detailed derivation can be found in Appendix C . 2.2 SEARCH STRATEGY . Search Space Pruning Considering that the loss space is extremely large , an effective optimization strategy is inevitably required . Directly grid-searching over the whole space is infeasible because of unacceptable computational cost . Thus some advanced techniques such as space pruning and an elaborate search strategy are necessary . Our search space can be seen as a combination of the space for the input I and the space for the operator f . Inspired by AutoML works ( Dai et al. , 2020 ; Ying et al. , 2019 ) that search for hyper-parameters first and then neural architectures , we approximate the joint search of input and operator in Equation ( 1 ) in a two-step manner . The optimal auxiliary loss { I∗ , f∗ } can be optimized as : max L R ( Mω∗ ( L ) ; E ) = maxI , f R ( Mω∗ ( I , f ) ; E ) ≈ maxI R ( Mω∗ ( I , f∗ ) ; E ) where f∗ ≈ argmax f EI [ R ( Mω∗ ( I , f ) ; E ) ] ( 2 ) To decide the best loss operator , for every f in the loss operator space , we estimate EI [ R ( Mω∗ ( I , f ) ; E ) ] with a random sampling strategy . We run 15 trials for each loss operator and calculate the average of their normalized return on three environments . More details are available in Appendix A.1.3 . Surprisingly , as summarized in Table 2 , the simplest MSE without negative samples outperforms all other loss operators with complex designs . Therefore , this loss operator is chosen for the rest of this paper . Evolution Even with the search space pruning , the rest of the space still remains large . To efficiently identify top candidates from the search space , we adopt an evolutionary algorithm ( Real et al. , 2019 ) . The pipeline of the evolution process is illustrated in Figure 2 . In each stage , we first train and evaluate a population of candidates with size P = 100 , where each candidate corresponds to a pair of binary masks ( m̂ , m ) . 
The candidates are then sorted by the approximated area under learning curve ( AULC ) ( Ghiassian et al. , 2020 ; Stadie et al. , 2015 ) , which well captures both convergence speed and final performance ( Viering & Loog , 2021 ) and reduces the variance of RL evaluation . The top-25 % candidates are selected after each training stage , and we perform four types of mutations on the selected candidates to form a new population for the next stage : replacement ( 50 % of the population ) ; crossover ( 20 % ) ; horizon decrease and horizon increase ( 10 % ) , as shown in Figure 3 . Moreover , the last 20 % of the new population is from random generation . As for each bit of masks , replacement operation flips the given bit with probability p = 12· ( 3k+1 ) , where k is the horizon length . Crossover generates a new candidate by randomly combining the mask bits of two candidates with the same horizon length in the population . We also incorporate the knowledge from the existing auxiliary loss designs by bootstrapping the initial population with a prior distribution ( Co-Reyes et al. , 2020 ) , so that we can quickly converge to a reasonable loss function . More implementation details are available in Appendix A.3 .",
"role": "user"
},
{
"content": "The paper introduces an approach to automatically discover optimal auxiliary loss for RL, the method is called (AARL). The authors claim that AARL outperform baselines on both pixel-based and state-based task on the DeepMind Control Suite. Also using their method the author analyse different auxiliary losses to identify common patterns.",
"role": "assistant"
}
] |
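The AARL record above quotes the search-space size as 10 × Σ_{i=1}^{10} 2^{6i+2} ≈ 4.6 × 10^19. A two-line check of that arithmetic (not taken from the paper's code):

```python
# Search-space size quoted in the AARL record: 10 * sum_{i=1..10} 2^(6i+2)
size = 10 * sum(2 ** (6 * i + 2) for i in range(1, 11))
print(f"{size:.2e}")  # 4.68e+19, consistent with the quoted ~4.6e19
```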
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
While our model is not explicitly trained to recognise words or segment the speech signal, previous work has shown that such information can be extracted by visual grounding models<cite> [15,</cite> 28] . | ["Background"] | science.multicite_intent_classification.2162 |
science.discomat_te | | | As-drawn | Aged 2 h at 80 degC | Aged 24 h at 80 degC | Aged 2 h at 120 degC |
| Mean strength (MPa) | 94+-14 | 80+-14 | 57+-22 | 54+-13 |
Caption: Change of TAS fiber strength due to aging treatments in air (results averaged on 15 experiments)
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "94", "type": "Other"}
{"value": "80", "type": "Other"}
{"value": "57", "type": "Other"}
{"value": "54", "type": "Other"} | science.discomat_te.390 | [
{
"content": "| | As-drawn | Aged 2 h at 80 degC | Aged 24 h at 80 degC | Aged 2 h at 120 degC |\n| Mean strength (MPa) | 94+-14 | 80+-14 | 57+-22 | 54+-13 |\n\nCaption: Change of TAS fiber strength due to aging treatments in air (results averaged on 15 experiments)\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"94\", \"type\": \"Other\"}\n{\"value\": \"80\", \"type\": \"Other\"}\n{\"value\": \"57\", \"type\": \"Other\"}\n{\"value\": \"54\", \"type\": \"Other\"}",
"role": "assistant"
}
] |
science.discomat_te | | Sample | d (ppm) | FWHM (ppm) |
| RI-2670 | -110.5+-0.02 | 12.6+-0.03 |
| RI-2670B | -110.5+-0.02 | 11.8+-0.03 |
| RI-2730 | -110.5+-0.02 | 12.2+-0.03 |
Caption: 29Si chemical shifts and line widths obtained from the deconvolution of experimental spectra a
Examine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:
{"value": "xx", "type": "Other"}
{"value": "xx", "type": "Glass_Compound_Amount", "constituent": "xx", "unit": "xx", "material": "xx"}
Process each cell sequentially, and for any unknown attributes, use "xx" as a placeholder. Each JSON should be on a new line.
Output: | {"value": "12.6", "type": "Other"}
{"value": "11.8", "type": "Other"}
{"value": "12.2", "type": "Other"} | science.discomat_te.104 | [
{
"content": "| Sample | d (ppm) | FWHM (ppm) |\n| RI-2670 | -110.5+-0.02 | 12.6+-0.03 |\n| RI-2670B | -110.5+-0.02 | 11.8+-0.03 |\n| RI-2730 | -110.5+-0.02 | 12.2+-0.03 |\n\nCaption: 29Si chemical shifts and line widths obtained from the deconvolution of experimental spectra a\n\nExamine the table provided from a materials science study. Your task is to convert each numeric cell into a JSON format. Use the following structures for guidance:\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent\": \"xx\", \"unit\": \"xx\", \"material\": \"xx\"}\n\nProcess each cell sequentially, and for any unknown attributes, use \"xx\" as a placeholder. Each JSON should be on a new line.\n\nOutput:",
"role": "user"
},
{
"content": "{\"value\": \"12.6\", \"type\": \"Other\"}\n{\"value\": \"11.8\", \"type\": \"Other\"}\n{\"value\": \"12.2\", \"type\": \"Other\"}",
"role": "assistant"
}
] |
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
Robust Temporal Ensembling
Main Content:
1 INTRODUCTION . Deep neural networks have enjoyed considerable success across a variety of domains , and in particular computer vision , where the common theme is that more labeled training data yields improved model performance ( Hestness et al. , 2017 ; Mahajan et al. , 2018 ; Xie et al. , 2019b ; Kolesnikov et al. , 2019 ) . However , performance depends on the quality of the training data , which is expensive to collect and inevitably imperfect . For example , ImageNet ( Deng et al. , 2009 ) is one of the most widely-used datasets in the field of deep learning and despite over 2 years of labor from more than 49,000 human annotators across 167 countries , it still contains erroneous and ambiguous labels ( FeiFei & Deng , 2017 ; Karpathy , 2014 ) . It is therefore essential that learning algorithms in production workflows leverage noise robust methods . Noise robust learning has a long history and takes many forms ( Natarajan et al. , 2013 ; Frenay & Verleysen , 2014 ; Song et al. , 2020 ) . Common strategies include loss correction and reweighting ( Patrini et al. , 2016 ; Zhang & Sabuncu , 2018 ; Menon et al. , 2020 ) , label refurbishment ( Reed et al. , 2014 ; Song et al. , 2019 ) , abstention ( Thulasidasan et al. , 2019 ) , and relying on carefully constructed trusted subsets of human-verified labeled data ( Li et al. , 2017 ; Hendrycks et al. , 2018 ; Zhang et al. , 2020 ) . Additionally , recent methods such as SELF ( Nguyen et al. , 2020 ) and DivideMix ( Li et al. , 2020 ) convert the problem of learning with noise into a semi-supervised learning approach by splitting the corrupted training set into clean labeled data and noisy unlabeled data at which point semisupervised learning methods such as Mean Teacher ( Tarvainen & Valpola , 2017 ) and MixMatch ( Berthelot et al. , 2019 ) can be applied directly . In essence , these methods effectively discard a majority of the label information so as to side-step having to learning with noise at all . The problem here is that noisy label filtering tactics are imperfect resulting in corrupted data in the small labeled partition and valuable clean samples lost to the large pool of unlabeled data . Moreover , caution is needed when applying semi-supervised methods where the labeled data is not sampled i.i.d . from the pool of unlabeled data ( Oliver et al. ) . Indeed , filtering tactics can be biased and irregular , driven by specification error and the underlying noise process of the label corruption . Recognizing the success of semi-supervised approaches , we ask : can we leverage the underlying mechanisms of semi-supervised learning such as entropy regularization for learning with noise without discarding our most valuable asset , the labels ? 2 ROBUST TEMPORAL ENSEMBLING . 2.1 PRELIMINARIES . Adopting the notation of Zhang & Sabuncu ( 2018 ) , we consider the problem of classification where X ⊂ Rd is the feature space and Y = { 1 , . . . , c } is the label space where the classifier function is a deep neural network with a softmax output layer that maps input features to distributions over labels f : X → Rc . The dataset of training examples containing in-sample noise is defined as D = { ( xi , ỹi ) } ni=1 where ( xi , ỹi ) ∈ ( X × Y ) and ỹi is the noisy version of the true label yi such that p ( ỹi = k|yi = j , xi ) ≡ ηijk . We do not consider open-set noise ( Wang et al. , 2018 ) , in which there is a particular type of noise that occurs on inputs , x̃ , rather than labels . 
Following most prior work , we make the simplifying assumption that the noise is conditionally independent of the input , xi , given the true labels . In this setting , we can write ηijk = p ( ỹi = k|yi = j ) ≡ ηjk which is , in general , considered to be class dependent noise1,2 . To aid in a simple and precise corruption procedure , we now depart from traditional notation and further decompose ηjk as pj · cjk , where pj ∈ [ 0 , 1 ] is the probability of corruption of the j-th class and cjk ∈ [ 0 , 1 ] is the relative probability that corrupted samples of class j are labeled as class k , with ci6=j ≥ 0 , cjj = 0 and ∑ k cjk = 1 . A noisy dataset with m classes can then be described as transition probabilities specified by F = diag ( P ) · C + diag ( 1− P ) · I ( 1 ) where C ∈ Rm×m defines the system confusion or noise structure , P ∈ Rm defines the noise intensity or ratio for each class , and I is the identity matrix . When cjk = ckj the noise is said to be symmetric and is considered asymmetric otherwise . If ratio of noise is the same for all classes then pj = p and the dataset is said to exhibit uniform noise . For the case of uniform noise , equation ( 1 ) interestingly takes the familiar form of the Google matrix equation as Fp = p · C + ( 1− p ) · I ( 2 ) Note that , by this definition , ηjj = p · cjj = 0 which prohibits ỹi = yi . This ensures a true effective noise ratio of p. For example , suppose there are m = 10 classes and we wish to corrupt labels with 80 % probability . Then if corrupted labels are sampled from Y rather than Y \ { y } , 110 · 0.8 = 8 % of the corrupted samples will not actually be corrupted , leading to a true corruption rate of 72 % . Therefore , despite prescribing p = 0.8 , the true effective noise ratio would be 0.72 , which in turn yields a 0.081−0.8 = 40 % increase in clean labels , and this is indeed the case in many studies ( Zhang & Sabuncu , 2018 ; Nguyen et al. , 2020 ; Li et al. , 2020 ; Zhang et al. , 2020 ) . 2.2 METHODS . At a very high level , RTE is the combination of noise-robust task loss , augmentation , and pseudolabeling for consistency regularization . We unify generalized cross entropy ( Zhang & Sabuncu , 2018 ) , AugMix stochastic augmentation strategy ( Hendrycks et al. , 2020 ) , an exponential moving average of model weights for generating pseudo-labels ( Tarvainen & Valpola , 2017 ) , and an augmentation anchoring-like approach ( Berthelot et al. , 2020 ) to form a robust approach for learning with noisy labels . 2.2.1 NOISE-ROBUST TASK LOSS . Generalized cross entropy ( GCE ) ( Zhang & Sabuncu , 2018 ) is a theoretically grounded noise-robust loss function that can be seen as a generalization of mean absolute error ( MAE ) and categorical cross entropy ( CCE ) . The main idea is that CCE learns quickly , but more emphasis is put on difficult samples which is prone to overfit noisy labels , while MAE treats all samples equally , providing noise-robustness but learning slowly . To exploit the benefits of both MAE and CCE , a negative Box-Cox transformation ( Box & Cox , 1964 ) is used as the loss function Lq ( f ( xi ) , yi = j ) = ( 1− fj ( xi ) q ) q ( 3 ) 1See Lee et al . ( 2019 ) for treatment of conditionally dependent semantic noise such that ηijk 6= ηjk . 2Note that Patrini et al . ( 2016 ) define the noise transition matrix T such that Tjk ≡ ηjk . where q ∈ ( 0 , 1 ] , and fj denotes the j-th element of f . Note that GCE becomes CCE for limq→0 Lq and becomes MAE/unhinged loss when q = 1 . 
2.2.2 ENSEMBLE CONSISTENCY REGULARIZATION . Consistency regularization works under the assumption that a model should output similar predictions given augmented versions of the same input . This regularization strategy is a common component of semi-supervised learning algorithms with the general form of ‖pθ ( y|xaug1 ) −pθ ( y|xaug2 ) ‖ where pθ ( y|x ) is the predicted class distribution produced by the model having parameters θ for input x ( Zheng et al. , 2016 ; Sajjadi et al. , 2016 ) . We build upon numerous variations from semisupervised learning ( Laine & Alia , 2017 ; Tarvainen & Valpola , 2017 ; Berthelot et al. , 2019 ; 2020 ) and leverage an ensemble consistency regularization ( ECR ) strategy as ECR = 1 |Y|N∗ N∗∑ i=1 ‖pθ′ ( y|x ) − pθ ( y|A ( x ) ) ‖ ( 4 ) where x is the training example , A is stochastic augmentation function reevaluated for each term in the summation , θ′t = αθ ′ t−1 + ( 1 − α ) θt is a temporal moving average of model weights used to generate pseudo-label targets , and inputs are pre-processed with standard random horizontal flip and crop . In practice , this consists of initializing a copy of the initial model and maintaining an exponential moving average as training progresses . Some methods directly average multiple label predictions together at each optimization step to form a single pseudo-label target ( Berthelot et al. , 2019 ; Li et al. , 2020 ) but we find pseudo-label target distributions generated by θ′ to be better suited for the learning with noise problem due to the intrinsic ensemble nature of the weight averaging process over many optimization steps ( Tarvainen & Valpola , 2017 ) . In semi-supervised learning techniques , it is common to leverage a large batch-size of unlabeled data for consistency regularization . However , we found that modulating N∗ , rather than the batch size of the consistency term , yields a monotonic increase in model performance consistent with related works ( Berthelot et al. , 2020 ) . Moreover , in semi-supervised learning , different batches are used for between supervised and unsupervised loss terms but we find ( see section 4.3 ) that for the case of learning with noise , batches synchronized with GCE task loss term yields superior performance . 2.2.3 AUGMENTATION . AugMix ( Hendrycks et al. , 2020 ) is a data augmentation technique which utilizes stochasticity , diverse augmentations , a Jensen-Shannon divergence consistency loss , and a formulation to mix multiple augmented inputs . Other augmentation strategies such as RandAugment ( Cubuk et al. , 2020 ) , augmentations are applied sequentially with fixed intensity which can degrade input quickly . In AugMix , to mitigate input degradation but retain augmentation diversity , several stochastically sampled augmentation chains are layered together in a convex combination to generate highly diverse transformations . These mixing coefficients are randomly sampled from a Dirichlet distribution with shared concentration parameters , and the resulting augmented version of the input is combined with the original input through a second random convex combination sampled from a beta distribution , again with shared parameters . 2.2.4 JENSEN-SHANNON DIVERGENCE . The Jensen-Shannon consistency loss is used to enforce a flat response of the classifier by incentivizing the model to be stable , consistent , and insensitive across a diverse range of inputs ( Zheng et al. , 2016 ) . 
The Jensen-Shannon divergence ( JSD ) is minimized across distributions porig , paug1 , and paug2 of the original sample xorig and its augmented variants xaug1 and xaug2 which can be understood to measure the average information that the sample reveals about the identity of its originating distribution ( Hendrycks et al. , 2020 ) . This JSD term is computed withM = ( porig +paug1 +paug2 ) /3 and is then JSD = 1 3 ( KL ( porig ‖M ) + KL ( paug1 ‖M ) + KL ( paug2 ‖M ) ) ( 5 ) where KL ( p ‖ q ) is Kullback–Leibler divergence from q to p. In this way , the JSD term improves the stability of training in the presence of noisy labels and heavy data augmentation with a modest contribution to final classifier test accuracy as shown in Table 5 . | This submission deals with robust supervised learning in the presence of noisy labels. The label noise is modeled using a probabilistic (and conditionally independent) transition matrix that changes the label of one class to another one. In order to classify with noise, the network is trained with a mixture of three known losses including: 1) generalized cross entropy (GCE) rejects the outlier labels, 2) JSD divergence to assure the soft-max distribution matches the augmented data distributions, and 3) an ensemble consistency regularization (ECR) that penalizes the inconsistencies of the augmented data based on the mean teachers. Experiments with CIFAR-10, CIFAR-100, and ImageNet classification indicate substantial gains compared with state-of-the-art alternatives. | science.mup_single_document_summarization.710 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nRobust Temporal Ensembling\n\nMain Content:\n1 INTRODUCTION . Deep neural networks have enjoyed considerable success across a variety of domains , and in particular computer vision , where the common theme is that more labeled training data yields improved model performance ( Hestness et al. , 2017 ; Mahajan et al. , 2018 ; Xie et al. , 2019b ; Kolesnikov et al. , 2019 ) . However , performance depends on the quality of the training data , which is expensive to collect and inevitably imperfect . For example , ImageNet ( Deng et al. , 2009 ) is one of the most widely-used datasets in the field of deep learning and despite over 2 years of labor from more than 49,000 human annotators across 167 countries , it still contains erroneous and ambiguous labels ( FeiFei & Deng , 2017 ; Karpathy , 2014 ) . It is therefore essential that learning algorithms in production workflows leverage noise robust methods . Noise robust learning has a long history and takes many forms ( Natarajan et al. , 2013 ; Frenay & Verleysen , 2014 ; Song et al. , 2020 ) . Common strategies include loss correction and reweighting ( Patrini et al. , 2016 ; Zhang & Sabuncu , 2018 ; Menon et al. , 2020 ) , label refurbishment ( Reed et al. , 2014 ; Song et al. , 2019 ) , abstention ( Thulasidasan et al. , 2019 ) , and relying on carefully constructed trusted subsets of human-verified labeled data ( Li et al. , 2017 ; Hendrycks et al. , 2018 ; Zhang et al. , 2020 ) . Additionally , recent methods such as SELF ( Nguyen et al. , 2020 ) and DivideMix ( Li et al. , 2020 ) convert the problem of learning with noise into a semi-supervised learning approach by splitting the corrupted training set into clean labeled data and noisy unlabeled data at which point semisupervised learning methods such as Mean Teacher ( Tarvainen & Valpola , 2017 ) and MixMatch ( Berthelot et al. , 2019 ) can be applied directly . In essence , these methods effectively discard a majority of the label information so as to side-step having to learning with noise at all . The problem here is that noisy label filtering tactics are imperfect resulting in corrupted data in the small labeled partition and valuable clean samples lost to the large pool of unlabeled data . Moreover , caution is needed when applying semi-supervised methods where the labeled data is not sampled i.i.d . from the pool of unlabeled data ( Oliver et al. ) . Indeed , filtering tactics can be biased and irregular , driven by specification error and the underlying noise process of the label corruption . Recognizing the success of semi-supervised approaches , we ask : can we leverage the underlying mechanisms of semi-supervised learning such as entropy regularization for learning with noise without discarding our most valuable asset , the labels ? 2 ROBUST TEMPORAL ENSEMBLING . 2.1 PRELIMINARIES . Adopting the notation of Zhang & Sabuncu ( 2018 ) , we consider the problem of classification where X ⊂ Rd is the feature space and Y = { 1 , . . . , c } is the label space where the classifier function is a deep neural network with a softmax output layer that maps input features to distributions over labels f : X → Rc . 
The dataset of training examples containing in-sample noise is defined as D = { ( xi , ỹi ) } ni=1 where ( xi , ỹi ) ∈ ( X × Y ) and ỹi is the noisy version of the true label yi such that p ( ỹi = k|yi = j , xi ) ≡ ηijk . We do not consider open-set noise ( Wang et al. , 2018 ) , in which there is a particular type of noise that occurs on inputs , x̃ , rather than labels . Following most prior work , we make the simplifying assumption that the noise is conditionally independent of the input , xi , given the true labels . In this setting , we can write ηijk = p ( ỹi = k|yi = j ) ≡ ηjk which is , in general , considered to be class dependent noise1,2 . To aid in a simple and precise corruption procedure , we now depart from traditional notation and further decompose ηjk as pj · cjk , where pj ∈ [ 0 , 1 ] is the probability of corruption of the j-th class and cjk ∈ [ 0 , 1 ] is the relative probability that corrupted samples of class j are labeled as class k , with ci6=j ≥ 0 , cjj = 0 and ∑ k cjk = 1 . A noisy dataset with m classes can then be described as transition probabilities specified by F = diag ( P ) · C + diag ( 1− P ) · I ( 1 ) where C ∈ Rm×m defines the system confusion or noise structure , P ∈ Rm defines the noise intensity or ratio for each class , and I is the identity matrix . When cjk = ckj the noise is said to be symmetric and is considered asymmetric otherwise . If ratio of noise is the same for all classes then pj = p and the dataset is said to exhibit uniform noise . For the case of uniform noise , equation ( 1 ) interestingly takes the familiar form of the Google matrix equation as Fp = p · C + ( 1− p ) · I ( 2 ) Note that , by this definition , ηjj = p · cjj = 0 which prohibits ỹi = yi . This ensures a true effective noise ratio of p. For example , suppose there are m = 10 classes and we wish to corrupt labels with 80 % probability . Then if corrupted labels are sampled from Y rather than Y \\ { y } , 110 · 0.8 = 8 % of the corrupted samples will not actually be corrupted , leading to a true corruption rate of 72 % . Therefore , despite prescribing p = 0.8 , the true effective noise ratio would be 0.72 , which in turn yields a 0.081−0.8 = 40 % increase in clean labels , and this is indeed the case in many studies ( Zhang & Sabuncu , 2018 ; Nguyen et al. , 2020 ; Li et al. , 2020 ; Zhang et al. , 2020 ) . 2.2 METHODS . At a very high level , RTE is the combination of noise-robust task loss , augmentation , and pseudolabeling for consistency regularization . We unify generalized cross entropy ( Zhang & Sabuncu , 2018 ) , AugMix stochastic augmentation strategy ( Hendrycks et al. , 2020 ) , an exponential moving average of model weights for generating pseudo-labels ( Tarvainen & Valpola , 2017 ) , and an augmentation anchoring-like approach ( Berthelot et al. , 2020 ) to form a robust approach for learning with noisy labels . 2.2.1 NOISE-ROBUST TASK LOSS . Generalized cross entropy ( GCE ) ( Zhang & Sabuncu , 2018 ) is a theoretically grounded noise-robust loss function that can be seen as a generalization of mean absolute error ( MAE ) and categorical cross entropy ( CCE ) . The main idea is that CCE learns quickly , but more emphasis is put on difficult samples which is prone to overfit noisy labels , while MAE treats all samples equally , providing noise-robustness but learning slowly . 
To exploit the benefits of both MAE and CCE , a negative Box-Cox transformation ( Box & Cox , 1964 ) is used as the loss function Lq ( f ( xi ) , yi = j ) = ( 1− fj ( xi ) q ) q ( 3 ) 1See Lee et al . ( 2019 ) for treatment of conditionally dependent semantic noise such that ηijk 6= ηjk . 2Note that Patrini et al . ( 2016 ) define the noise transition matrix T such that Tjk ≡ ηjk . where q ∈ ( 0 , 1 ] , and fj denotes the j-th element of f . Note that GCE becomes CCE for limq→0 Lq and becomes MAE/unhinged loss when q = 1 . 2.2.2 ENSEMBLE CONSISTENCY REGULARIZATION . Consistency regularization works under the assumption that a model should output similar predictions given augmented versions of the same input . This regularization strategy is a common component of semi-supervised learning algorithms with the general form of ‖pθ ( y|xaug1 ) −pθ ( y|xaug2 ) ‖ where pθ ( y|x ) is the predicted class distribution produced by the model having parameters θ for input x ( Zheng et al. , 2016 ; Sajjadi et al. , 2016 ) . We build upon numerous variations from semisupervised learning ( Laine & Alia , 2017 ; Tarvainen & Valpola , 2017 ; Berthelot et al. , 2019 ; 2020 ) and leverage an ensemble consistency regularization ( ECR ) strategy as ECR = 1 |Y|N∗ N∗∑ i=1 ‖pθ′ ( y|x ) − pθ ( y|A ( x ) ) ‖ ( 4 ) where x is the training example , A is stochastic augmentation function reevaluated for each term in the summation , θ′t = αθ ′ t−1 + ( 1 − α ) θt is a temporal moving average of model weights used to generate pseudo-label targets , and inputs are pre-processed with standard random horizontal flip and crop . In practice , this consists of initializing a copy of the initial model and maintaining an exponential moving average as training progresses . Some methods directly average multiple label predictions together at each optimization step to form a single pseudo-label target ( Berthelot et al. , 2019 ; Li et al. , 2020 ) but we find pseudo-label target distributions generated by θ′ to be better suited for the learning with noise problem due to the intrinsic ensemble nature of the weight averaging process over many optimization steps ( Tarvainen & Valpola , 2017 ) . In semi-supervised learning techniques , it is common to leverage a large batch-size of unlabeled data for consistency regularization . However , we found that modulating N∗ , rather than the batch size of the consistency term , yields a monotonic increase in model performance consistent with related works ( Berthelot et al. , 2020 ) . Moreover , in semi-supervised learning , different batches are used for between supervised and unsupervised loss terms but we find ( see section 4.3 ) that for the case of learning with noise , batches synchronized with GCE task loss term yields superior performance . 2.2.3 AUGMENTATION . AugMix ( Hendrycks et al. , 2020 ) is a data augmentation technique which utilizes stochasticity , diverse augmentations , a Jensen-Shannon divergence consistency loss , and a formulation to mix multiple augmented inputs . Other augmentation strategies such as RandAugment ( Cubuk et al. , 2020 ) , augmentations are applied sequentially with fixed intensity which can degrade input quickly . In AugMix , to mitigate input degradation but retain augmentation diversity , several stochastically sampled augmentation chains are layered together in a convex combination to generate highly diverse transformations . 
These mixing coefficients are randomly sampled from a Dirichlet distribution with shared concentration parameters , and the resulting augmented version of the input is combined with the original input through a second random convex combination sampled from a beta distribution , again with shared parameters . 2.2.4 JENSEN-SHANNON DIVERGENCE . The Jensen-Shannon consistency loss is used to enforce a flat response of the classifier by incentivizing the model to be stable , consistent , and insensitive across a diverse range of inputs ( Zheng et al. , 2016 ) . The Jensen-Shannon divergence ( JSD ) is minimized across distributions porig , paug1 , and paug2 of the original sample xorig and its augmented variants xaug1 and xaug2 which can be understood to measure the average information that the sample reveals about the identity of its originating distribution ( Hendrycks et al. , 2020 ) . This JSD term is computed withM = ( porig +paug1 +paug2 ) /3 and is then JSD = 1 3 ( KL ( porig ‖M ) + KL ( paug1 ‖M ) + KL ( paug2 ‖M ) ) ( 5 ) where KL ( p ‖ q ) is Kullback–Leibler divergence from q to p. In this way , the JSD term improves the stability of training in the presence of noisy labels and heavy data augmentation with a modest contribution to final classifier test accuracy as shown in Table 5 .",
"role": "user"
},
{
"content": "This submission deals with robust supervised learning in the presence of noisy labels. The label noise is modeled using a probabilistic (and conditionally independent) transition matrix that changes the label of one class to another one. In order to classify with noise, the network is trained with a mixture of three known losses including: 1) generalized cross entropy (GCE) rejects the outlier labels, 2) JSD divergence to assure the soft-max distribution matches the augmented data distributions, and 3) an ensemble consistency regularization (ECR) that penalizes the inconsistencies of the augmented data based on the mean teachers. Experiments with CIFAR-10, CIFAR-100, and ImageNet classification indicate substantial gains compared with state-of-the-art alternatives.",
"role": "assistant"
}
] |
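A minimal sketch of the Jensen–Shannon consistency term (equation 5 in the paper excerpt above), assuming the three class-probability vectors are plain lists of floats; the function names are illustrative rather than taken from the paper's code:

```python
import math

def kl(p, q):
    # Kullback-Leibler divergence KL(p || q) for discrete distributions.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd_consistency(p_orig, p_aug1, p_aug2):
    # Mixture distribution M = (p_orig + p_aug1 + p_aug2) / 3, then the average KL to M.
    m = [(a + b + c) / 3.0 for a, b, c in zip(p_orig, p_aug1, p_aug2)]
    return (kl(p_orig, m) + kl(p_aug1, m) + kl(p_aug2, m)) / 3.0
```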
science.multicite_intent_classification | Examine the text below, which includes a citation marked by "<cite>" and "</cite>". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.
Provide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.
Text with Citation:
Even though the results obtained by <cite>Yuan et al. (2016)</cite> outperform the previous state-of-the-art, neither the used datasets nor the constructed models are available to the community. This is unfortunate because this makes the re-application of this technique a non-trivial process, and it hinders further studies for understanding which limitations prevent even higher accuracies. These could be, for instance, of algorithmic nature or relate to the input (either size or quality), and a deeper understanding is crucial for enabling further improvements. In addition, some details are not reported, and this could prevent other attempts from replicating the results. To address these issues, we reimplemented <cite>Yuan et al. (2016)</cite> 's method with the goal of: 1) reproducing and making available the code, trained models, and results and 2) understanding which are the main factors that constitute the strengths and weaknesses of this method. | ["Motivation"] | science.multicite_intent_classification.1401 | [
{
"content": "Examine the text below, which includes a citation marked by \"<cite>\" and \"</cite>\". Your objective is to identify the intent behind this citation. Focus on understanding the role the cited work plays in the context of the text.\n\nProvide your analysis as a JSON array. Ensure your response is concise and directly related to the citation's purpose.\n\nText with Citation:\nEven though the results obtained by <cite>Yuan et al. (2016)</cite> outperform the previous state-of-the-art, neither the used datasets nor the constructed models are available to the community. This is unfortunate because this makes the re-application of this technique a non-trivial process, and it hinders further studies for understanding which limitations prevent even higher accuracies. These could be, for instance, of algorithmic nature or relate to the input (either size or quality), and a deeper understanding is crucial for enabling further improvements. In addition, some details are not reported, and this could prevent other attempts from replicating the results. To address these issues, we reimplemented <cite>Yuan et al. (2016)</cite> 's method with the goal of: 1) reproducing and making available the code, trained models, and results and 2) understanding which are the main factors that constitute the strengths and weaknesses of this method.",
"role": "user"
},
{
"content": "[\"Motivation\"]",
"role": "assistant"
}
] |
science.mup_single_document_summarization | Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.
Title of the Paper:
Fair Exploration via Axiomatic Bargaining
Main Content:
Motivated by the consideration of fairly sharing the cost of exploration between multiple groups in learning problems , we develop the Nash bargaining solution in the context of multi-armed bandits . Specifically , the ‘ grouped ’ bandit associated with any multi-armed bandit problem associates , with each time step , a single group from some finite set of groups . The utility gained by a given group under some learning policy is naturally viewed as the reduction in that group ’ s regret relative to the regret that group would have incurred ‘ on its own ’ . We derive policies that yield the Nash bargaining solution relative to the set of incremental utilities possible under any policy . We show that on the one hand , the ‘ price of fairness ’ under such policies is limited , while on the other hand , regret optimal policies are arbitrarily unfair under generic conditions . Our theoretical development is complemented by a case study on contextual bandits for warfarin dosing where we are concerned with the cost of exploration across multiple races and age groups . 1 Introduction . Exploration in learning problems has an implicit cost , insomuch that exploring actions that are eventually revealed to be sub-optimal incurs regret . We study how this cost of exploration is shared in a system with multiple stakeholders . At the outset , we present two motivating examples . Personalized Medicine and Adaptive Trials : Multi-stage , adaptive designs [ 1 , 2 , 3 , 4 ] , are widely viewed as a frontier in clinical trials . More generally , the ability to collect detailed patient level data , and real time monitoring ( eg . glucose monitoring for diabetes [ 5 , 6 ] ) has raised the specter of learning personalized treatments . Among other formulations , such problems may be viewed as contextual bandits . For instance , for the problem of optimal warfarin dosing [ 7 ] , the context at each time step corresponds to a patient ’ s covariates , arms correspond to different dosages of warfarin , and the reward is the observed efficacy of the assigned dose . In examining such a study in retrospect , it is natural to measure the regret incurred by distinct groups of patients ( eg . by race or age ) . What makes a profile of regret across such groups fair or unfair ? Revenue Management for Search Advertising : Ad platforms enjoy a tremendous amount of flexibility in the the choice of ads served against search queries . Specifically , this flexibility exists both in selecting a slate of advertisers to compete for a specific search , and then in picking a winner from this slate . Now a key goal for the platform is learning the affinity of any given ad for a given search . In solving such a learning problem – for which many variants have been proposed [ 8 , 9 ] – we may again ask the question of who bears the cost of exploration , and whether the profile of such costs across various groups of advertisers is fair . ∗The full version of the paper can be found at https : //arxiv.org/abs/2106.02553 . 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . 1.1 Bandits , Groups and Axiomatic Bargaining . Delaying a formal development to later , any bandit problem has an associated ‘ grouped ’ variant . Specifically , we are given a finite set of groups ( eg . races or age groups in the warfarin example ) , and each group is associated with an arrival probability and a distribution over action sets . 
At each time step , a group and an action set is drawn from this distribution from which the learning algorithm must pick an action . Heterogeneity in groups is thus driven by differences in their respective distributions over feasible action sets . In addition to measuring overall regret , we also care about the regret incurred by specific groups , which we can view as the cost of exploration borne by that group . In reasoning about ‘ fair ’ regret profiles we turn to the theory of axiomatic bargaining . There , a central decision maker is concerned with the incremental utility earned by each group from collaborating , relative to the utility the group would earn on its own . Here this incremental utility is precisely the reduction in regret for any given group relative to the optimal regret that group would have incurred ‘ on its own ’ . A bargaining solution maximizes some objective function over the set of achievable incremental utilities . The utilitarian solution , for instance , maximizes the sum of incremental utilities which would reduce here to the usual objective of minimizing total regret . The Nash bargaining solution maximizes an alternative objective , the Nash Social Welfare ( SW ) function . This latter solution is the unique solution to satisfy a set of axioms any ‘ fair ’ solution would reasonably satisfy . This paper develops the Nash bargaining solution to the ( grouped ) bandit problem . 1.2 Contributions . In developing the Nash bargaining solution , we focus primarily on what is arguably the simplest non-trivial grouped bandit setting . Specifically , we consider the ‘ grouped ’ K-armed bandit model , wherein each group corresponds to a subset of the K arms . We make the following contributions relative to this problem : Regret Optimal Policies are Unfair ( Theorem 3.1 ) : We show that all regret optimal policies for the grouped K-armed bandit share a structural property that make them ‘ arbitrarily unfair ’ – in the sense that the Nash SW is−∞ for these solutions – under a broad set of conditions on the problem instance . Achievable Fairness ( Theorem 3.3 ) : We derive an instance-dependent upper bound on the Nash SW for the grouped K-armed bandit . This can be viewed as a ‘ fair ’ analogue to a regret lower bound ( e.g . [ 10 ] ) for the problem , since a lower bound on achievable regret ( forgoing any fairness concerns ) would in effect correspond to an upper bound on the utilitarian SW for the problem . Nash Solution ( Theorem 4.1 ) : We produce a policy that achieves the Nash solution . Specifically , the Nash SW under this policy achieves the upper bound we derive on the Nash SW for all instances of the grouped K-armed bandit . Price of Fairness for the Nash Solution ( Theorem 4.2 ) : We show that the ‘ price of fairness ’ for the Nash solution is small : if G is the number of groups , the Nash solution achieves at least O ( 1/ √ G ) of the reduction in regret achieved under a regret optimal solution relative to the regret incurred when groups operate separately . Taken together , these results establish a rigorous framework for the design of bandit algorithms that yield fair outcomes across groups at a low cost to total regret . As a final contribution , we extend our framework beyond the grouped K-armed bandit and undertake an empirical study : Linear Contextual Bandits and Warfarin Dosing : We extend our framework to grouped linear contextual bandits , yielding a candidate Nash solution there . 
Applied to a real-world dataset on warfarin dosing using race and age groups , we show ( a ) a regret optimal solution that ignores groups is dramatically unfair , and ( b ) the Nash solution balances out reductions in regret across groups at the cost of a small increase in total regret . 1.3 Related Literature . Two pieces of prior work have a motivation similar to our own . [ 11 ] studies a setting with multiple agents with a common bandit problem , where each agent can decide which action to take at each time . They show that ‘ free-riding ’ is possible — an agent that can access information from other agents can incur only O ( 1 ) regret in several classes of problems . This is consistent with our motivation . [ 12 ] studies a very similar grouped bandit model to ours , and provides a ‘ counterexample ’ in which a group can have a negative externality on another group . This example is somewhat pathological and stems from considering an instance-specific fixed time horizon ; instead , if T →∞ , all externalities become non-negative ( details in Appendix A.1 ) . Our grouped bandit model is also similar to sleeping bandits [ 13 ] , in which the set of available arms is adversarially chosen in each round . The known , fixed group structure in our model allows us to achieve tighter regret bounds than [ 13 ] . There have also been a handful of papers [ 14 , 15 , 16 , 17 ] that study ‘ fairness in bandits ’ in a completely different context . These works enforce a fairness criterion between arms , which is relevant in settings where a ‘ pull ’ represents some resource that is allocated to that arm , and these pulls should be distributed between arms in a fair manner . In these models , the decision maker ’ s objective ( maximize reward ) is distinct from that of a group ( obtain ‘ pulls ’ ) , unlike our setting ( and motivating examples ) where the groups and decision maker are aligned in their eventual objective . Our upper bound on Nash SW borrows classic techniques from the regret lower bound results of [ 10 ] and [ 18 ] . Our policy follows a similar pattern to recent work on regret-optimal , optimization-based policies for structured bandits [ 19 , 20 , 21 , 22 ] . Unlike those policies , our policy has no forced exploration . Further the optimization problem defining the Nash solution can generically have multiple solutions whereas the aforementioned approaches would require this solution to be unique ; our approach does not require a unique solution . Nonetheless , we believe that the framework in the aforementioned works can be fruitfully leveraged to construct Nash solutions for general grouped bandits , and we provide such a candidate solution as an extension . Our fairness framework is inspired by the literature on fairness in welfare economics — see [ 23 , 24 ] . Specifically , we study fairness in exploration through the lens of the axiomatic bargaining framework , first studied by [ 25 ] , who showed that enforcing four desirable axioms induces a unique fair solution . [ 26 ] is an excellent textbook reference for this topic . 2 The Axiomatic Bargaining Framework for Bandits . Let θ ∈ Θ be an unknown parameter and let A be the action set . For every arm a ∈ A , ( Yn ( a ) ) n≥1 is an i.i.d . sequence of rewards drawn from a distribution F ( θ , a ) parameterized by θ and a . We let µ ( a ) = E [ Y1 ( a ) ] be the expected reward of arm a . In defining a grouped bandit problem , we let G be a set of G groups . 
Each group g ∈ G is associated with a probability distribution P g over 2A , and a probability of arrival pg ; ∑ g pg = 1 . The identity of the group arriving at time t , gt , is chosen independently according to this latter distribution ; At is then drawn according to P gt . An instance of the grouped bandit problem is specified by I = ( A , G , p , P , F , θ ) , where all quantities except for θ are known . At each time t , a central decision maker observes gt and At , chooses an arm At ∈ At to pull and observes the reward YNt ( At ) +1 ( At ) , where Nt ( a ) is the total number of times arm a was pulled up to but not including time t. Let A∗t ∈ argmaxa∈At µ ( a ) be an optimal arm at time t. Given an instance I and a policy π , the total regret , and the group regret for group g ∈ G are respectively RT ( π , I ) = E [ T∑ t=1 ( µ ( A∗t ) − µ ( At ) ) ] and RgT ( π , I ) = E [ T∑ t=1 1 ( gt = g ) ( µ ( A ∗ t ) − µ ( At ) ) ] , where the expectation is over randomness in arrivals ( gt , At ) , rewards Yn ( a ) , and the policy π . Finally , so that the notion of an optimal policy for some class of instances , I , is well defined , we restrict attention to consistent policies which yield sub-polynomial regret for any instance in that class : Ψ = { π : RT ( π , I ) = o ( T b ) ∀I ∈ I , ∀b > 0 } . | This paper considers a grouped MAB formulation, wherein a random group of arms becomes available to pull at each time instant. The authors then consider the regret of each separate group, and show that policies that optimize global regret can be unfair in that certain groups no not `benefit' from the presence of the other groups. They use the Nash bargaining solution to define a `fair' allocation of the exploration cost, and propose a policy that meets this fairness criterion. The authors also bound the impact of the `fairness' consideration on regret by analysing the price of fairness. | science.mup_single_document_summarization.13026 | [
{
"content": "Summarize the key points and contributions of the following computer science research paper. Your summary should be clear and concise.\n\nTitle of the Paper:\nFair Exploration via Axiomatic Bargaining\n\nMain Content:\nMotivated by the consideration of fairly sharing the cost of exploration between multiple groups in learning problems , we develop the Nash bargaining solution in the context of multi-armed bandits . Specifically , the ‘ grouped ’ bandit associated with any multi-armed bandit problem associates , with each time step , a single group from some finite set of groups . The utility gained by a given group under some learning policy is naturally viewed as the reduction in that group ’ s regret relative to the regret that group would have incurred ‘ on its own ’ . We derive policies that yield the Nash bargaining solution relative to the set of incremental utilities possible under any policy . We show that on the one hand , the ‘ price of fairness ’ under such policies is limited , while on the other hand , regret optimal policies are arbitrarily unfair under generic conditions . Our theoretical development is complemented by a case study on contextual bandits for warfarin dosing where we are concerned with the cost of exploration across multiple races and age groups . 1 Introduction . Exploration in learning problems has an implicit cost , insomuch that exploring actions that are eventually revealed to be sub-optimal incurs regret . We study how this cost of exploration is shared in a system with multiple stakeholders . At the outset , we present two motivating examples . Personalized Medicine and Adaptive Trials : Multi-stage , adaptive designs [ 1 , 2 , 3 , 4 ] , are widely viewed as a frontier in clinical trials . More generally , the ability to collect detailed patient level data , and real time monitoring ( eg . glucose monitoring for diabetes [ 5 , 6 ] ) has raised the specter of learning personalized treatments . Among other formulations , such problems may be viewed as contextual bandits . For instance , for the problem of optimal warfarin dosing [ 7 ] , the context at each time step corresponds to a patient ’ s covariates , arms correspond to different dosages of warfarin , and the reward is the observed efficacy of the assigned dose . In examining such a study in retrospect , it is natural to measure the regret incurred by distinct groups of patients ( eg . by race or age ) . What makes a profile of regret across such groups fair or unfair ? Revenue Management for Search Advertising : Ad platforms enjoy a tremendous amount of flexibility in the the choice of ads served against search queries . Specifically , this flexibility exists both in selecting a slate of advertisers to compete for a specific search , and then in picking a winner from this slate . Now a key goal for the platform is learning the affinity of any given ad for a given search . In solving such a learning problem – for which many variants have been proposed [ 8 , 9 ] – we may again ask the question of who bears the cost of exploration , and whether the profile of such costs across various groups of advertisers is fair . ∗The full version of the paper can be found at https : //arxiv.org/abs/2106.02553 . 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . 1.1 Bandits , Groups and Axiomatic Bargaining . Delaying a formal development to later , any bandit problem has an associated ‘ grouped ’ variant . Specifically , we are given a finite set of groups ( eg . 
races or age groups in the warfarin example ) , and each group is associated with an arrival probability and a distribution over action sets . At each time step , a group and an action set is drawn from this distribution from which the learning algorithm must pick an action . Heterogeneity in groups is thus driven by differences in their respective distributions over feasible action sets . In addition to measuring overall regret , we also care about the regret incurred by specific groups , which we can view as the cost of exploration borne by that group . In reasoning about ‘ fair ’ regret profiles we turn to the theory of axiomatic bargaining . There , a central decision maker is concerned with the incremental utility earned by each group from collaborating , relative to the utility the group would earn on its own . Here this incremental utility is precisely the reduction in regret for any given group relative to the optimal regret that group would have incurred ‘ on its own ’ . A bargaining solution maximizes some objective function over the set of achievable incremental utilities . The utilitarian solution , for instance , maximizes the sum of incremental utilities which would reduce here to the usual objective of minimizing total regret . The Nash bargaining solution maximizes an alternative objective , the Nash Social Welfare ( SW ) function . This latter solution is the unique solution to satisfy a set of axioms any ‘ fair ’ solution would reasonably satisfy . This paper develops the Nash bargaining solution to the ( grouped ) bandit problem . 1.2 Contributions . In developing the Nash bargaining solution , we focus primarily on what is arguably the simplest non-trivial grouped bandit setting . Specifically , we consider the ‘ grouped ’ K-armed bandit model , wherein each group corresponds to a subset of the K arms . We make the following contributions relative to this problem : Regret Optimal Policies are Unfair ( Theorem 3.1 ) : We show that all regret optimal policies for the grouped K-armed bandit share a structural property that make them ‘ arbitrarily unfair ’ – in the sense that the Nash SW is−∞ for these solutions – under a broad set of conditions on the problem instance . Achievable Fairness ( Theorem 3.3 ) : We derive an instance-dependent upper bound on the Nash SW for the grouped K-armed bandit . This can be viewed as a ‘ fair ’ analogue to a regret lower bound ( e.g . [ 10 ] ) for the problem , since a lower bound on achievable regret ( forgoing any fairness concerns ) would in effect correspond to an upper bound on the utilitarian SW for the problem . Nash Solution ( Theorem 4.1 ) : We produce a policy that achieves the Nash solution . Specifically , the Nash SW under this policy achieves the upper bound we derive on the Nash SW for all instances of the grouped K-armed bandit . Price of Fairness for the Nash Solution ( Theorem 4.2 ) : We show that the ‘ price of fairness ’ for the Nash solution is small : if G is the number of groups , the Nash solution achieves at least O ( 1/ √ G ) of the reduction in regret achieved under a regret optimal solution relative to the regret incurred when groups operate separately . Taken together , these results establish a rigorous framework for the design of bandit algorithms that yield fair outcomes across groups at a low cost to total regret . 
As a final contribution , we extend our framework beyond the grouped K-armed bandit and undertake an empirical study : Linear Contextual Bandits and Warfarin Dosing : We extend our framework to grouped linear contextual bandits , yielding a candidate Nash solution there . Applied to a real-world dataset on warfarin dosing using race and age groups , we show ( a ) a regret optimal solution that ignores groups is dramatically unfair , and ( b ) the Nash solution balances out reductions in regret across groups at the cost of a small increase in total regret . 1.3 Related Literature . Two pieces of prior work have a motivation similar to our own . [ 11 ] studies a setting with multiple agents with a common bandit problem , where each agent can decide which action to take at each time . They show that ‘ free-riding ’ is possible — an agent that can access information from other agents can incur only O ( 1 ) regret in several classes of problems . This is consistent with our motivation . [ 12 ] studies a very similar grouped bandit model to ours , and provides a ‘ counterexample ’ in which a group can have a negative externality on another group . This example is somewhat pathological and stems from considering an instance-specific fixed time horizon ; instead , if T →∞ , all externalities become non-negative ( details in Appendix A.1 ) . Our grouped bandit model is also similar to sleeping bandits [ 13 ] , in which the set of available arms is adversarially chosen in each round . The known , fixed group structure in our model allows us to achieve tighter regret bounds than [ 13 ] . There have also been a handful of papers [ 14 , 15 , 16 , 17 ] that study ‘ fairness in bandits ’ in a completely different context . These works enforce a fairness criterion between arms , which is relevant in settings where a ‘ pull ’ represents some resource that is allocated to that arm , and these pulls should be distributed between arms in a fair manner . In these models , the decision maker ’ s objective ( maximize reward ) is distinct from that of a group ( obtain ‘ pulls ’ ) , unlike our setting ( and motivating examples ) where the groups and decision maker are aligned in their eventual objective . Our upper bound on Nash SW borrows classic techniques from the regret lower bound results of [ 10 ] and [ 18 ] . Our policy follows a similar pattern to recent work on regret-optimal , optimization-based policies for structured bandits [ 19 , 20 , 21 , 22 ] . Unlike those policies , our policy has no forced exploration . Further the optimization problem defining the Nash solution can generically have multiple solutions whereas the aforementioned approaches would require this solution to be unique ; our approach does not require a unique solution . Nonetheless , we believe that the framework in the aforementioned works can be fruitfully leveraged to construct Nash solutions for general grouped bandits , and we provide such a candidate solution as an extension . Our fairness framework is inspired by the literature on fairness in welfare economics — see [ 23 , 24 ] . Specifically , we study fairness in exploration through the lens of the axiomatic bargaining framework , first studied by [ 25 ] , who showed that enforcing four desirable axioms induces a unique fair solution . [ 26 ] is an excellent textbook reference for this topic . 2 The Axiomatic Bargaining Framework for Bandits . Let θ ∈ Θ be an unknown parameter and let A be the action set . For every arm a ∈ A , ( Yn ( a ) ) n≥1 is an i.i.d . 
sequence of rewards drawn from a distribution F ( θ , a ) parameterized by θ and a . We let µ ( a ) = E [ Y1 ( a ) ] be the expected reward of arm a . In defining a grouped bandit problem , we let G be a set of G groups . Each group g ∈ G is associated with a probability distribution P g over 2A , and a probability of arrival pg ; ∑ g pg = 1 . The identity of the group arriving at time t , gt , is chosen independently according to this latter distribution ; At is then drawn according to P gt . An instance of the grouped bandit problem is specified by I = ( A , G , p , P , F , θ ) , where all quantities except for θ are known . At each time t , a central decision maker observes gt and At , chooses an arm At ∈ At to pull and observes the reward YNt ( At ) +1 ( At ) , where Nt ( a ) is the total number of times arm a was pulled up to but not including time t. Let A∗t ∈ argmaxa∈At µ ( a ) be an optimal arm at time t. Given an instance I and a policy π , the total regret , and the group regret for group g ∈ G are respectively RT ( π , I ) = E [ T∑ t=1 ( µ ( A∗t ) − µ ( At ) ) ] and RgT ( π , I ) = E [ T∑ t=1 1 ( gt = g ) ( µ ( A ∗ t ) − µ ( At ) ) ] , where the expectation is over randomness in arrivals ( gt , At ) , rewards Yn ( a ) , and the policy π . Finally , so that the notion of an optimal policy for some class of instances , I , is well defined , we restrict attention to consistent policies which yield sub-polynomial regret for any instance in that class : Ψ = { π : RT ( π , I ) = o ( T b ) ∀I ∈ I , ∀b > 0 } .",
"role": "user"
},
{
"content": "This paper considers a grouped MAB formulation, wherein a random group of arms becomes available to pull at each time instant. The authors then consider the regret of each separate group, and show that policies that optimize global regret can be unfair in that certain groups no not `benefit' from the presence of the other groups. They use the Nash bargaining solution to define a `fair' allocation of the exploration cost, and propose a policy that meets this fairness criterion. The authors also bound the impact of the `fairness' consideration on regret by analysing the price of fairness.",
"role": "assistant"
}
] |
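The total and per-group regret defined in the paper excerpt above can be computed from a logged trajectory as in the following sketch; the data layout (tuples of group, available arm set, chosen arm) and the name `mu` for the map of expected rewards are assumptions made for illustration:

```python
def regrets(trajectory, mu):
    # trajectory: iterable of (group, available_arms, chosen_arm) tuples.
    # mu: dict mapping each arm to its expected reward.
    total, per_group = 0.0, {}
    for group, available_arms, chosen_arm in trajectory:
        # Gap between the best available arm and the arm actually pulled.
        gap = max(mu[a] for a in available_arms) - mu[chosen_arm]
        total += gap
        per_group[group] = per_group.get(group, 0.0) + gap
    return total, per_group
```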
science.evidence_inference | Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:
- Treatment: The specific medical procedure or drug being evaluated.
- Control: The standard or placebo used for comparison. If absent, use `null`.
- Result: The health outcome being assessed.
- Impact: The observed effect of the treatment on the result, categorized as: ("increased", "decreased", "no change").
- Justification: The supporting details for the observed impact.
Format the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].
Abstract:
ABSTRACT.BACKGROUND:: Nasal cannula, beside administering low-flow therapy, showed the capability for the administration of continuous positive airway pressure (CPAP) through high-flow nasal cannula (HFNC). Meeting specific physical criteria of 100% relative humidity (RH) and temperature of 37°C are the basic interventional requirements to administer oxygen for the newborns through a nasal cannula. Recently, two systems, MR850 and PMH7000, received the Food and Drug Administration (FDA) approval to administer heated, humidified HFNC (HHHFNC). These systems are evaluated in this study based on their humidifying and heating capabilities.
ABSTRACT.MATERIALS AND METHODS:: This study was done as an RCT on newborns weighing 1,000 to 1,500 g recovering from respiratory distress syndrome (RDS) while nCPAP was administered at CDP = 4 cmH2O, Fio2 <30%. Patients were randomized to two groups of 35 receiving HHHFNC after treatment with nCPAP, with one group using MR850 humidifier and the other PMH7000. The patients were compared according to the duration of HHHFNC administration, repeated need for nCPAP respiratory support, the need for invasive ventilation, apnea, chronic lung disease (CLD), nasal trauma, RH, and temperature of the gases.
ABSTRACT.RESULTS:: The average time of support with HHHNFC did not show any significant difference in the two groups. There was no significant difference between the groups in the need for nCPAP, invasive ventilation, apnea, nasal trauma, and CLD. The difference in the levels of average temperature and humidity was significant (P value <0.001).
ABSTRACT.CONCLUSION:: Although the records of temperature and RH in the PMH7000 system was lower than the records from the MR850 system, no clinical priority was observed for respiratory support with HHHNFC in the two systems.
Extracted Data: | [["PMH7000", "MR850", "severity of the mucosa injury", "no significant difference", "There was no significant difference between the groups in the need for nCPAP, invasive ventilation, apnea, nasal trauma, and CLD. T"], ["PMH7000", "MR850", "The need for a second administration of nCPAP", "no significant difference", "There was no significant difference between the groups in the need for nCPAP"]] | science.evidence_inference.235 | [
{
"content": "Analyze the provided abstract from a clinical trial. Extract relevant data points and compile them into a structured JSON format. Focus on identifying key components such as:\n\n- Treatment: The specific medical procedure or drug being evaluated.\n- Control: The standard or placebo used for comparison. If absent, use `null`.\n- Result: The health outcome being assessed.\n- Impact: The observed effect of the treatment on the result, categorized as: (\"increased\", \"decreased\", \"no change\").\n- Justification: The supporting details for the observed impact.\n\nFormat the extracted information as a JSON array, with each entry containing the elements in this sequence: [<treatment>, <control>, <result>, <impact>, <justification>].\n\nAbstract:\nABSTRACT.BACKGROUND:: Nasal cannula, beside administering low-flow therapy, showed the capability for the administration of continuous positive airway pressure (CPAP) through high-flow nasal cannula (HFNC). Meeting specific physical criteria of 100% relative humidity (RH) and temperature of 37°C are the basic interventional requirements to administer oxygen for the newborns through a nasal cannula. Recently, two systems, MR850 and PMH7000, received the Food and Drug Administration (FDA) approval to administer heated, humidified HFNC (HHHFNC). These systems are evaluated in this study based on their humidifying and heating capabilities.\n\nABSTRACT.MATERIALS AND METHODS:: This study was done as an RCT on newborns weighing 1,000 to 1,500 g recovering from respiratory distress syndrome (RDS) while nCPAP was administered at CDP = 4 cmH2O, Fio2 <30%. Patients were randomized to two groups of 35 receiving HHHFNC after treatment with nCPAP, with one group using MR850 humidifier and the other PMH7000. The patients were compared according to the duration of HHHFNC administration, repeated need for nCPAP respiratory support, the need for invasive ventilation, apnea, chronic lung disease (CLD), nasal trauma, RH, and temperature of the gases.\n\nABSTRACT.RESULTS:: The average time of support with HHHNFC did not show any significant difference in the two groups. There was no significant difference between the groups in the need for nCPAP, invasive ventilation, apnea, nasal trauma, and CLD. The difference in the levels of average temperature and humidity was significant (P value <0.001).\n\nABSTRACT.CONCLUSION:: Although the records of temperature and RH in the PMH7000 system was lower than the records from the MR850 system, no clinical priority was observed for respiratory support with HHHNFC in the two systems.\n\nExtracted Data:",
"role": "user"
},
{
"content": "[[\"PMH7000\", \"MR850\", \"severity of the mucosa injury\", \"no significant difference\", \"There was no significant difference between the groups in the need for nCPAP, invasive ventilation, apnea, nasal trauma, and CLD. T\"], [\"PMH7000\", \"MR850\", \"The need for a second administration of nCPAP\", \"no significant difference\", \"There was no significant difference between the groups in the need for nCPAP\"]]",
"role": "assistant"
}
] |
science.scierc_ner | Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.
Input Abstract:
Along with the increasing requirements, the hash-tag recommendation task for microblogs has been receiving considerable attention in recent years. Various researchers have studied the problem from different aspects. However, most of these methods usually need handcrafted features. Motivated by the successful use of convolutional neural networks (CNNs) for many natural language processing tasks, in this paper, we adopt CNNs to perform the hashtag recommendation problem. To incorporate the trigger words whose effectiveness have been experimentally evaluated in several previous works, we propose a novel architecture with an attention mechanism. The results of experiments on the data collected from a real world microblogging service demonstrated that the proposed model outperforms state-of-the-art methods. By incorporating trigger words into the consideration, the relative improvement of the proposed method over the state-of-the-art method is around 9.4% in the F1-score. | Output Entities:
{"Generic": ["state-of-the-art methods", "data", "architecture"], "Material": ["microblogs"], "Method": ["convolutional neural networks (CNNs)"], "Metric": ["F1-score"], "OtherScientificTerm": ["attention mechanism", "handcrafted features", "trigger words"], "Task": ["natural language processing tasks", "hash-tag recommendation task"]} | science.scierc_ner.25 | [
{
"content": "Analyze the given abstract from a computer science paper. Identify and list entities in a structured JSON format. Focus on capturing distinct elements relevant to the research context.\n\nInput Abstract:\n\nAlong with the increasing requirements, the hash-tag recommendation task for microblogs has been receiving considerable attention in recent years. Various researchers have studied the problem from different aspects. However, most of these methods usually need handcrafted features. Motivated by the successful use of convolutional neural networks (CNNs) for many natural language processing tasks, in this paper, we adopt CNNs to perform the hashtag recommendation problem. To incorporate the trigger words whose effectiveness have been experimentally evaluated in several previous works, we propose a novel architecture with an attention mechanism. The results of experiments on the data collected from a real world microblogging service demonstrated that the proposed model outperforms state-of-the-art methods. By incorporating trigger words into the consideration, the relative improvement of the proposed method over the state-of-the-art method is around 9.4% in the F1-score.",
"role": "user"
},
{
"content": "Output Entities:\n\n{\"Generic\": [\"state-of-the-art methods\", \"data\", \"architecture\"], \"Material\": [\"microblogs\"], \"Method\": [\"convolutional neural networks (CNNs)\"], \"Metric\": [\"F1-score\"], \"OtherScientificTerm\": [\"attention mechanism\", \"handcrafted features\", \"trigger words\"], \"Task\": [\"natural language processing tasks\", \"hash-tag recommendation task\"]}",
"role": "assistant"
}
] |
science.qasper_abstractive_qa | Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.
Document Content:
How big is big enough? Unsupervised word sense disambiguation using a very large corpus
In this paper, the problem of disambiguating a target word for Polish is approached by searching for related words with known meaning. These relatives are used to build a training corpus from unannotated text. This technique is improved by proposing new rich sources of replacements that substitute the traditional requirement of monosemy with heuristics based on wordnet relations. The naïve Bayesian classifier has been modified to account for an unknown distribution of senses. A corpus of 600 million web documents (594 billion tokens), gathered by the NEKST search engine allows us to assess the relationship between training set size and disambiguation accuracy. The classifier is evaluated using both a wordnet baseline and a corpus with 17,314 manually annotated occurrences of 54 ambiguous words.
Introduction
The focus of the word sense disambiguation (WSD) task is polysemy, i.e. words having several substantially different meanings. Two common examples are bank (riverside or financial institution) and bass (fish or musical instrument), but usually the meanings of a word are closely related, e.g. class may refer to: (a) a group of students, (b) the period when they meet to study or (c) a room where such meetings occur. Readers deal with this problem by using a word's context and in WSD we aim at doing it automatically.
The most effective solution, called supervised WSD, is to use a large number of sense-annotated occurrences of the target word to build a machine learning model to label test cases. However, this approach suffers from a knowledge acquisition bottleneck. The annotation of a separate training corpus for every target word demands a considerable amount of human labour. Therefore, this approach is unusable in applications that require WSD across a wide vocabulary, such as open-domain question answering BIBREF0 .
The method of monosemous relatives, which is the focus of this work, bypasses the bottleneck by gathering occurrences of words related to the target word, but free from ambiguity, and treating them as training cases of the respective senses. Human labour is eliminated at the expense of accuracy, as the context of each relative only approximately matches the context of the target word sense.
Monosemous relatives have been employed multiple times (see Section 2), but results remain unsatisfactory. The aim of my study is to explore the limitations of this technique by implementing and evaluating such a tool for Polish. Firstly, the method is expanded by waiving the requirement of monosemy and proposing several new sources of relatives. These previously unexplored sources are based on wordnet data and help gather many training cases from the corpus. Secondly, a well-known problem of uneven yet unknown distribution of word senses is alleviated by modifying a naïve Bayesian classifier. Thanks to this correction, the classifier is no longer biased towards senses that have more training data. Finally, a very large corpus (600 million documents), gathered from the web by a Polish search engine NEKST, is used to build models based on training corpora of different sizes. Those experiments show what amount of data is sufficient for such a task. The proposed solution is compared to baselines that use wordnet structure only, with no training corpora.
This paper is organised as follows. The next section reviews the previous research in the area, focusing on unsupervised WSD using monosemous relatives. Section 3 outlines the proposed solution by describing the new sources of relatives, the employed corpus, the features extracted from context and the modified Bayesian classifier. Section 4 describes the evaluation data and process, while section 5 quotes the results. Section 6 is devoted to discussing the outcomes and section 7 concludes the paper.
Method
The algorithm works as follows. First, a set of relatives is obtained for each sense of a target word using the Polish wordnet: plWordNet BIBREF18 . Some of the replacements may have multiple senses, however usually one of them covers most cases. Secondly, a set of context features is extracted from occurrences of relatives in the NEKST corpus. Finally, the aggregated feature values corresponding to target word senses are used to build a naïve Bayesian classifier adjusted to a situation of unknown a priori probabilities.
Corpus
As some of the relatives may be very rare, it is important to use a training corpus of significant size. In this case, we used 600 million webpages (594 billion tokens) indexed by a Polish search engine NEKST, developed at the Institute of Computer Science, Polish Academy of Sciences. Training sub-corpora were selected with size varying from 19,000 to 60 million documents. Instead of using snippets returned by a search interface, we use whole textual contents (with morphosyntactic annotation) of each document, taken from NEKST distributed infrastructure.
Unfortunately, a lot of text on web pages is not suitable for training a WSD classifier, for example elements of page structure or parts unrecognised by a tagger. Thus, each sentence has to satisfy the following requirements to qualify for training:
be at least 150-character long.
contain at least five words.
contain at least four different parts of speech (including punctuation).
These criteria help to filter out most of the web content of unsatisfactory quality.
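A minimal sketch of this filter, assuming sentences arrive as tagged token lists (pairs of word form and part-of-speech tag); the identifiers are illustrative and not taken from the original implementation:

```python
def qualifies_for_training(sentence_text, tagged_tokens):
    # tagged_tokens: list of (form, pos_tag) pairs from the morphosyntactic tagger;
    # punctuation tokens carry their own tag and therefore count as a part of speech.
    if len(sentence_text) < 150:
        return False
    if len(tagged_tokens) < 5:
        return False
    distinct_pos = {pos for _, pos in tagged_tokens}
    return len(distinct_pos) >= 4
```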
Classification
After gathering the values of features from occurrences of relatives, a naïve Bayesian classification model is built. However, as many researchers in the field have noticed BIBREF13 , BIBREF9 , BIBREF11 , there is a major drawback of using relatives instead of a target word: the number of occurrences of relatives is usually not proportional to the frequency of the target word sense. To bypass this problem, we modify the Bayesian model in this study. The basic classifier chooses the class that maximises the following probability: INLINEFORM0
In the case of binary features, which represent the occurrence of a certain word in context, we have INLINEFORM0 and: INLINEFORM1
Which could be rewritten as: INLINEFORM0
The expression has been formulated as a product of two factors: INLINEFORM0 , independent from observed features and corresponding to empty word context, and INLINEFORM1 that depends on observed context. To weaken the influence of improper distribution of training cases, we omit INLINEFORM2 , so that when no context features are observed, every word sense is considered equally probable.
Thus, for the given context features INLINEFORM0 , sense INLINEFORM1 is selected when: INLINEFORM2
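A sketch of the modified decision rule, assuming binary context features and per-sense estimates of how often each feature co-occurs with the sense's relatives; dropping the context-independent factor means an empty context leaves all senses tied, as described above. All names are illustrative:

```python
import math

def select_sense(observed_features, feature_prob):
    # feature_prob[sense][f] ~ estimated P(feature f present | sense), from relatives' contexts.
    # Only the context-dependent likelihood ratios are kept; the prior term is omitted.
    def score(sense):
        probs = feature_prob[sense]
        return sum(
            math.log(probs[f] / (1.0 - probs[f]))
            for f in observed_features
            if f in probs and 0.0 < probs[f] < 1.0
        )
    return max(feature_prob, key=score)
```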
The table with final results ( TABREF27 ) contains accuracies of both original and modified versions of the classifier.
Evaluation
For experiments and evaluation a sub-corpus of the National Corpus of Polish, NCP BIBREF6 was employed. The manually annotated sub-corpus contains sense labels of 106 different words: 50 nouns, 48 verbs and 8 adjectives. As verbs have much poorer connectivity in plWordNet, they have been ignored within this study.
The senses used for annotation are coarse-grained – with one sense covering a range of related usages. Each word has between two and four senses. To employ the method described in previous section, the NCP senses have been manually mapped to fine-grained plWordNet synsets. As NCP senses have been created independently from wordnet senses, a substantial part of the latter remain uncovered by the former. However, we only found four cases where an NCP sense has no counterpart in wordnet; those words are excluded from the test set.
In total, the test set includes 17,314 occurrences of 54 ambiguous words, having two to four coarse-grained meanings. Table TABREF26 contains a detailed summary.
The algorithm works using plWordNet synsets and its output is mapped to NCP meaning to measure its quality. Accuracy measures what percentage of the programme's guesses agree with manually assigned senses. To assess the general performance of a particular configuration, the accuracy has been averaged over all target words.
To properly judge the results, we need to start with the baselines. Without knowing the distribution of senses, three basic possibilities seem reasonable: we can either 1) select a meaning randomly, 2) rely on the sense numbering in NCP or 3) use plWordNet in the same way. To have a better comparison with ontology-based methods, the results also include a word similarity baseline configuration, which selects the sense with the strongest similarity to any of the words in context (sentence). For that purpose the Leacock&Chodorow similarity measure (implemented using all relations between synsets in plWordNet) is employed, as it has been previously used in WSD BIBREF21 and also correlates well with human judgement of similarity BIBREF22 . The baseline results, shown in Table TABREF27 , support the claim of intentional sense ordering in plWordNet.
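The word-similarity baseline can be sketched as below, with `similarity` standing in for a wordnet-based measure such as Leacock–Chodorow computed over plWordNet relations; the argument names are assumptions made for illustration:

```python
def similarity_baseline(candidate_senses, context_words, similarity):
    # Pick the sense whose synset is most similar to any single word in the sentence.
    return max(
        candidate_senses,
        key=lambda sense: max(similarity(sense, word) for word in context_words),
    )
```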
Results
The goal of the first experiment was to select an optimal feature set for this task. Several models with a common basic configuration, i.e. using all possible relatives and 6 million documents, have been built with different feature sets and evaluated. The results are shown in Table TABREF22 . As we can see, the lexical features give us more predictive power than morphological interpretations. The best solution, incorporated into the basic configuration for further experiments, includes all features except those based on the interpretation of the word in focus.
Secondly, it is necessary to evaluate different types of replacements, outlined in section SECREF2 . Table TABREF23 contains the average number of possible replacements per target word, the number of occurrences in the six-million corpus and the average classification accuracy. As we can see, the number of replacements rises after adding subsequent sources, but the largest increase is caused by including polysemous relatives with determiners. On the other hand, these compound relatives rarely appear in the corpus (13,126 occurrences), whereas employing polysemous words in the first sense results in 1,525,739 new training cases and a substantial growth of accuracy. What is more, although the profits from these sources of relatives differ, none of them decreases the accuracy.
The availability of the corpus containing 600 million documents helps to answer the question of sufficient corpus size for such a task. Figure FIGREF24 shows the mean classification accuracy for models built using different training corpora, created by randomly selecting a subset of the original document set. The considered sizes range from 19,000 to 60,000,000 documents. Additionally, a second family of corpora has been created by using only documents from the Polish Wikipedia (sizes 11,000 – 1,098,000). We see that after a certain point adding new data does not improve the accuracy. Surprisingly, the subcorpora of Wikipedia yield worse results than those of unrestricted origin.
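A minimal sketch of the subsampling procedure behind this experiment is given below. It is an assumption-laden illustration, not the authors' code: build_model and evaluate are hypothetical placeholders for the training and evaluation steps described earlier.

import random

def corpus_size_curve(documents, sizes, build_model, evaluate, seed=0):
    """Train on random subsets of increasing size and record the mean accuracy."""
    rng = random.Random(seed)
    curve = {}
    for size in sizes:
        subset = rng.sample(documents, size)   # random subset of the full collection
        model = build_model(subset)            # placeholder: train classifiers on the subset
        curve[size] = evaluate(model)          # placeholder: mean accuracy over target words
    return curve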
Table TABREF26 shows the accuracy of disambiguation in the configuration outlined above with respect to the target word. The easiest words have meanings corresponding to distinct physical objects, e.g. in Polish, piłka (100% accuracy) may mean a ball or a small saw. The hardest cases are those with many abstract and fuzzy meanings, e.g. in Polish, klasa has four meanings related to the English class: (1) a group of similar objects, (2) a level of quality, (3) a group of pupils or (4) a classroom. Meaning (1) could be hard to distinguish from (2) even for a human, whereas (3) and (4) may appear in very similar contexts.
Finally, Table TABREF27 contains the mean accuracy of the basic configuration of the classifier described in this work (with and without the modifications to the Bayesian model). It is compared to the four previously mentioned baselines.
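The compared modification can be illustrated with a simplified naive Bayes scorer in which the context-independent factor (in the simplest reading, the sense prior) is dropped, so senses backed by more training occurrences are not favoured when the context is uninformative. This is a hedged sketch of the idea, not the exact formula from the work: smoothing is omitted, a tiny probability floor is used instead, and the probability tables are assumed to be pre-estimated. All names and numbers below are hypothetical.

import math

def nb_score(sense, context_features, prior, likelihood, use_prior=True):
    """Log-score of one sense; likelihood[(sense, f)] approximates P(feature f | sense)."""
    score = math.log(prior[sense]) if use_prior else 0.0
    for f in context_features:
        score += math.log(likelihood.get((sense, f), 1e-9))  # floor for unseen pairs
    return score

def classify(senses, context_features, prior, likelihood, use_prior=True):
    return max(senses, key=lambda s: nb_score(s, context_features, prior, likelihood, use_prior))

senses = ["bank/river", "bank/finance"]
prior = {"bank/river": 0.2, "bank/finance": 0.8}
likelihood = {("bank/river", "water"): 0.3, ("bank/finance", "water"): 0.01}
print(classify(senses, {"water"}, prior, likelihood, use_prior=False))  # -> 'bank/river'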
Conclusions
In this paper the limitations and possible improvements of unsupervised word sense disambiguation have been investigated. The main problem, an insufficient number and quality of replacements, has been tackled by adding new, rich sources of replacements. The quality of the models has indeed improved, especially thanks to replacements based on the sense ordering in plWordNet. To deal with the problem of an unknown sense distribution, the Bayesian classifier has been modified, removing the bias towards frequent labels in the training data. Finally, the experiments with a very large corpus have shown that the amount of training data sufficient for this task is only 6 million documents.
Figure 1: A part of the plWordNet network used to extract replacements for the word język in its 6th sense, meaning tongue (an animal or human organ). The resulting replacements are underlined.
Table 1: Mean accuracy of the disambiguation algorithm with respect to the features involved in the classification (I0 – interpretation of a disambiguated word, I – interpretations of neighbouring words, Lp – lemmas of neighbouring words, L – lemmas present in the whole sentence).
Table 2: Strategies for generating replacements, each built by adding new elements to the previous step, with the resulting number of replacements (average per word), their occurrences in the corpus (total) and the mean accuracy of disambiguation.
Figure 2: Mean disambiguation accuracy of models built using corpora of different sizes, created by random selection from 600 million web documents from NEKST search engine: unrestricted or only from the Polish Wikipedia. The best of the baselines, which uses wordnet-based word similarity is also shown.
Table 3: Polysemous words used for evaluation with their number of meanings, test cases and obtained disambiguation accuracy in basic configuration.
Query: How do they deal with unknown distribution senses?
JSON Response: | {"answer": "The N\u00e4ive-Bayes classifier is corrected so it is not biased to most frequent classes", "evidence": ["Monosemous relatives have been employed multiple times (see Section 2), but results remain unsatisfactory. The aim of my study is to explore the limitations of this technique by implementing and evaluating such a tool for Polish. Firstly, the method is expanded by waiving the requirement of monosemy and proposing several new sources of relatives. These previously unexplored sources are based on wordnet data and help gather many training cases from the corpus. Secondly, a well-known problem of uneven yet unknown distribution of word senses is alleviated by modifying a na\u00efve Bayesian classifier. Thanks to this correction, the classifier is no longer biased towards senses that have more training data. Finally, a very large corpus (600 million documents), gathered from the web by a Polish search engine NEKST, is used to build models based on training corpora of different sizes. Those experiments show what amount of data is sufficient for such a task. The proposed solution is compared to baselines that use wordnet structure only, with no training corpora.", "The algorithm works as follows. First, a set of relatives is obtained for each sense of a target word using the Polish wordnet: plWordNet BIBREF18 . Some of the replacements may have multiple senses, however usually one of them covers most cases. Secondly, a set of context features is extracted from occurrences of relatives in the NEKST corpus. Finally, the aggregated feature values corresponding to target word senses are used to build a na\u00efve Bayesian classifier adjusted to a situation of unknown a priori probabilities."]} | science.qasper_abstractive_qa.35.0 | [
{
"content": "Examine the provided sections from a research document and address the accompanying query. The text is divided by newlines.\n\nDocument Content:\n\nHow big is big enough? Unsupervised word sense disambiguation using a very large corpus\n\nIn this paper, the problem of disambiguating a target word for Polish is approached by searching for related words with known meaning. These relatives are used to build a training corpus from unannotated text. This technique is improved by proposing new rich sources of replacements that substitute the traditional requirement of monosemy with heuristics based on wordnet relations. The na\\\"ive Bayesian classifier has been modified to account for an unknown distribution of senses. A corpus of 600 million web documents (594 billion tokens), gathered by the NEKST search engine allows us to assess the relationship between training set size and disambiguation accuracy. The classifier is evaluated using both a wordnet baseline and a corpus with 17,314 manually annotated occurrences of 54 ambiguous words.\n\nIntroduction\nThe focus of the word sense disambiguation (WSD) task is polysemy, i.e. words having several substantially different meanings. Two common examples are bank (riverside or financial institution) and bass (fish or musical instrument), but usually the meanings of a word are closely related, e.g. class may refer to: (a) a group of students, (b) the period when they meet to study or (c) a room where such meetings occur. Readers deal with this problem by using a word's context and in WSD we aim at doing it automatically.\nThe most effective solution, called supervised WSD, is to use a large number of sense-annotated occurrences of the target word to build a machine learning model to label test cases. However, this approach suffers from a knowledge acquisition bottleneck. The annotation of a separate training corpus for every target word demands a considerable amount of human labour. Therefore, this approach is unusable in applications that require WSD across a wide vocabulary, such as open-domain question answering BIBREF0 .\nThe method of monosemous relatives, which is the focus of this work, bypasses the bottleneck by gathering occurences of words related to the target word, but free from ambiguity, and treating them as training cases of the respective senses. Human labour is eliminated at the expense of accuracy, as the context of each relative only approximately matches the context of the target word sense.\nMonosemous relatives have been employed multiple times (see Section 2), but results remain unsatisfactory. The aim of my study is to explore the limitations of this technique by implementing and evaluating such a tool for Polish. Firstly, the method is expanded by waiving the requirement of monosemy and proposing several new sources of relatives. These previously unexplored sources are based on wordnet data and help gather many training cases from the corpus. Secondly, a well-known problem of uneven yet unknown distribution of word senses is alleviated by modifying a naïve Bayesian classifier. Thanks to this correction, the classifier is no longer biased towards senses that have more training data. Finally, a very large corpus (600 million documents), gathered from the web by a Polish search engine NEKST, is used to build models based on training corpora of different sizes. Those experiments show what amount of data is sufficient for such a task. 
The proposed solution is compared to baselines that use wordnet structure only, with no training corpora.\nThis paper is organised as follows. The next section reviews the previous research in the area, focusing on unsupervised WSD using monosemous relatives. Section 3 outlines the proposed solution by describing the new sources of relatives, the employed corpus, the features extracted from context and the modified Bayesian classifier. Section 4 describes the evaluation data and process, while section 5 quotes the results. Section 6 is devoted to discussing the outcomes and section 7 concludes the paper.\n\nMethod\nThe algorithm works as follows. First, a set of relatives is obtained for each sense of a target word using the Polish wordnet: plWordNet BIBREF18 . Some of the replacements may have multiple senses, however usually one of them covers most cases. Secondly, a set of context features is extracted from occurrences of relatives in the NEKST corpus. Finally, the aggregated feature values corresponding to target word senses are used to build a naïve Bayesian classifier adjusted to a situation of unknown a priori probabilities.\n\nCorpus\nAs some of the relatives may be very rare, it is important to use a training corpus of significant size. In this case, we used 600 million webpages (594 billion tokens) indexed by a Polish search engine NEKST, developed at the Institute of Computer Science, Polish Academy of Sciences. Training sub-corpora were selected with size varying from 19,000 to 60 million documents. Instead of using snippets returned by a search interface, we use whole textual contents (with morphosyntactic annotation) of each document, taken from NEKST distributed infrastructure.\nUnfortunately, a lot of text on web pages is not suitable for training a WSD classifier, for example elements of page structure or parts unrecognised by a tagger. Thus, each sentence has to satisfy the following requirements to qualify for training:\nbe at least 150-character long.\ncontain at least five words.\ncontain at least four different parts of speech (including punctuation).\nThese criteria help to filter out most of the web content of unsatisfactory quality.\n\nClassification\nAfter gathering the values of features from occurrences of relatives, a naïve Bayesian classification model is built. However, as many researchers in the field have noticed BIBREF13 , BIBREF9 , BIBREF11 , there is a major drawback of using relatives instead of a target word: the number of occurrences of relatives is usually not proportional to the frequency of the target word sense. To bypass this problem, we modify the Bayesian model in this study. The basic classifier chooses the class that maximises the following probability: INLINEFORM0 \nIn the case of binary features, which represent the occurrence of a certain word in context, we have INLINEFORM0 and: INLINEFORM1 \n Which could be rewritten as: INLINEFORM0 \n The expression has been formulated as a product of two factors: INLINEFORM0 , independent from observed features and corresponding to empty word context, and INLINEFORM1 that depends on observed context. 
To weaken the influence of improper distribution of training cases, we omit INLINEFORM2 , so that when no context features are observed, every word sense is considered equally probable.\nThus, for the given context features INLINEFORM0 , sense INLINEFORM1 is selected when: INLINEFORM2 \nThe table with final results ( TABREF27 ) contains accuracies of both original and modified versions of the classifier.\n\nEvaluation\nFor experiments and evaluation a sub-corpus of the National Corpus of Polish, NCP BIBREF6 was employed. The manually annotated sub-corpus contains sense labels of 106 different words: 50 nouns, 48 verbs and 8 adjectives. As verbs have much poorer connectivity in plWordNet, they have been ignored within this study.\nThe senses used for annotation are coarse-grained – with one sense covering a range of related usages. Each word has between two and four senses. To employ the method described in previous section, the NCP senses have been manually mapped to fine-grained plWordNet synsets. As NCP senses have been created independently from wordnet senses, a substantial part of the latter remain uncovered by the former. However, we only found four cases where an NCP sense has no counterpart in wordnet; those words are excluded from the test set.\nIn total, the test set includes 17,314 occurrences of 54 ambiguous words, having two to four coarse-grained meanings. Table TABREF26 contains a detailed summary.\nThe algorithm works using plWordNet synsets and its output is mapped to NCP meaning to measure its quality. Accuracy measures what percentage of the programme's guesses agree with manually assigned senses. To assess the general performance of a particular configuration, the accuracy has been averaged over all target words.\nTo properly judge the results, we need to start with the baselines. Without knowing the distribution of senses, three basic possibilities seem reasonable: we can either 1) select a meaning randomly, 2) base on the sense numbering in NCP or 3) use plWordNet in the same way. To have a better comparison with ontology-based methods, the results also include a word similarity baseline configuration, which selects the sense with the strongest similarity to any of the words in context (sentence). For that purpose the Leacock&Chodorow similarity measure (implemented using all relations between synsets in plWordNet) is employed, as it has been previously used in WSD BIBREF21 and also correlates well with human judgement of similarity BIBREF22 . The baseline results, shown in Table TABREF27 , support the claim of intentional sense ordering in plWordNet.\n\nResults\nThe goal of the first experiment was to select an optimal feature set for this task. Several models with a common basic configuration, i.e. using all possible relatives and 6 million documents, have been built with different feature sets and evaluated. The results are shown in Table TABREF22 . As we can see, the lexical features give us more predictive power than morphological interpretations. The best solution, incorporated into the basic configuration for further experiments, includes all features except these based on the interpretation of the word in focus.\nSecondly, it is necessary to evaluate different types of replacements, outlined in section SECREF2 . Table TABREF23 contains the average number of possible replacements per target word, the number of occurrences in the six-million corpus and the average classification accuracy. 
As we can see, the number of replacements rises after adding subsequent sources, but the largest increase is caused by including polysemous relatives with determiners. On the other hand, these compound relatives rarely appear in the corpus (13,126 occurences), whereas employing polysemous words in the first sense results in 1,525,739 new training cases and a substantial growth of accuracy. What is more, although the profits from these sources of relatives differ, none of them decreases the accuracy.\nThe availability of the corpus containing 600 million documents helps to answer the question of sufficient corpus size for such task. Figure FIGREF24 shows mean classification accuracy for models built using different training corpora, which have been created by randomly selecting a subset of the original document set. The considered sizes are between 19,000 and 60,000,000. Additionally, a different type of corpora has been created by using only documents from the Polish Wikipedia (sizes 11,000 – 1,098,000). We see that after a certain point adding in new data does not improve the accuracy. Surprisingly, the subcorpora of Wikipedia yield worse results than those of unlimited origin.\nTable TABREF26 shows the accuracy of disambiguation in the configuration outlined above with respect to the target word. The easiest words have meanings corresponding to distinct physical objects, e.g. in Polish piłka (100% accuracy) may mean a ball or a small saw. The hardest cases are those with many abstract and fuzzy meanings, e.g. in Polish klasa has four meanings related to English class: (1) group of similar objects, (2) level of quality, (3) group of pupils or (4) classroom. The meaning (1) could be hard to distinguish from (2) even for a human, whereas (3) and (4) may appear in very similar contexts.\nFinally, Table TABREF27 contains the mean accuracy of the basic configuration of the classifier described in this work (with and without modifications to Bayesian model). It is compared to the four previously mentioned baselines.\n\nConclusions\nIn this paper the limitations and improvements of unsupervised word sense disambiguation have been investigated. The main problem – insufficient number and quality of replacements has been tackled by adding new rich sources of replacements. The quality of the models has indeed improved, especially thanks to replacements based on sense ordering in plWordNet. To deal with the problem of unknown sense distribution, the Bayesian classifier has been modified, removing the bias towards frequent labels in the training data. Finally, the experiments with very large corpus have shown the sufficient amount of training data for this task, which is only 6 million documents.\n\nFigure 1: A part of plWordNet network used to extract replacements for the word język in its 6th sense, meaning tongue (animal or human organ). 
The resulting replacements are underlined.\n\nTable 1: Mean accuracy of the disambiguation algorithm with respect to the features involved in the classification (I0 – interpretation of a disambiguated word, I – interpretations of neighbouring words, Lp – lemmas of neighbouring words, L – lemmas present in the whole sentence).\n\nTable 2: Strategies for generating replacements, each built by adding new elements to the previous step, with the resulting number of replacements (average per word), their occurrences in the corpus (total) and the mean accuracy of disambiguation.\n\nFigure 2: Mean disambiguation accuracy of models built using corpora of different sizes, created by random selection from 600 million web documents from NEKST search engine: unrestricted or only from the Polish Wikipedia. The best of the baselines, which uses wordnet-based word similarity is also shown.\n\nTable 3: Polysemous words used for evaluation with their number of meanings, test cases and obtained disambiguation accuracy in basic configuration.\n\nQuery: How do they deal with unknown distribution senses?\n\nJSON Response:",
"role": "user"
},
{
"content": "{\"answer\": \"The N\\u00e4ive-Bayes classifier is corrected so it is not biased to most frequent classes\", \"evidence\": [\"Monosemous relatives have been employed multiple times (see Section 2), but results remain unsatisfactory. The aim of my study is to explore the limitations of this technique by implementing and evaluating such a tool for Polish. Firstly, the method is expanded by waiving the requirement of monosemy and proposing several new sources of relatives. These previously unexplored sources are based on wordnet data and help gather many training cases from the corpus. Secondly, a well-known problem of uneven yet unknown distribution of word senses is alleviated by modifying a na\\u00efve Bayesian classifier. Thanks to this correction, the classifier is no longer biased towards senses that have more training data. Finally, a very large corpus (600 million documents), gathered from the web by a Polish search engine NEKST, is used to build models based on training corpora of different sizes. Those experiments show what amount of data is sufficient for such a task. The proposed solution is compared to baselines that use wordnet structure only, with no training corpora.\", \"The algorithm works as follows. First, a set of relatives is obtained for each sense of a target word using the Polish wordnet: plWordNet BIBREF18 . Some of the replacements may have multiple senses, however usually one of them covers most cases. Secondly, a set of context features is extracted from occurrences of relatives in the NEKST corpus. Finally, the aggregated feature values corresponding to target word senses are used to build a na\\u00efve Bayesian classifier adjusted to a situation of unknown a priori probabilities.\"]}",
"role": "assistant"
}
] |
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
Lack of FGF21 in mice leads to reduced lifespan.
Research Abstract:
BACKGROUND Fibroblast growth factor 21 (FGF21) is a metabolic hormone with pleiotropic effects on glucose and lipid metabolism and insulin sensitivity.
It acts as a key downstream target of both peroxisome proliferator-activated receptor α and γ, the agonists of which have been used for lipid lowering and insulin sensitization, respectively.
However, the role of FGF21 in the cardiovascular system remains elusive.
METHODS AND RESULTS The roles of FGF21 in atherosclerosis were investigated by evaluating the impact of FGF21 deficiency and replenishment with recombinant FGF21 in apolipoprotein E(-/-) mice.
FGF21 deficiency causes a marked exacerbation of atherosclerotic plaque formation and premature death in apolipoprotein E(-/-) mice, which is accompanied by hypoadiponectinemia and severe hypercholesterolemia.
Replenishment of FGF21 protects against atherosclerosis in apolipoprotein E(-/-)mice via 2 independent mechanisms, inducing the adipocyte production of adiponectin, which in turn acts on the blood vessels to inhibit neointima formation and macrophage inflammation, and suppressing the hepatic expression of the transcription factor sterol regulatory element-binding protein-2, thereby leading to reduced cholesterol synthesis and attenuation of hypercholesterolemia.
Chronic treatment with adiponectin partially reverses atherosclerosis without obvious effects on hypercholesterolemia in FGF21-deficient apolipoprotein E(-/-) mice.
By contrast, the cholesterol-lowering effects of FGF21 are abrogated by hepatic expression of sterol regulatory element-binding protein-2.
CONCLUSIONS FGF21 protects against atherosclerosis via fine tuning the multiorgan crosstalk among liver, adipose tissue, and blood vessels. | {"verdict": "SUPPORT", "evidence": ["FGF21 deficiency causes a marked exacerbation of atherosclerotic plaque formation and premature death in apolipoprotein E(-/-) mice, which is accompanied by hypoadiponectinemia and severe hypercholesterolemia."]} | science.scifact_entailment.444 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nLack of FGF21 in mice leads to reduced lifespan.\n\nResearch Abstract:\nBACKGROUND Fibroblast growth factor 21 (FGF21) is a metabolic hormone with pleiotropic effects on glucose and lipid metabolism and insulin sensitivity.\nIt acts as a key downstream target of both peroxisome proliferator-activated receptor α and γ, the agonists of which have been used for lipid lowering and insulin sensitization, respectively.\nHowever, the role of FGF21 in the cardiovascular system remains elusive.\nMETHODS AND RESULTS The roles of FGF21 in atherosclerosis were investigated by evaluating the impact of FGF21 deficiency and replenishment with recombinant FGF21 in apolipoprotein E(-/-) mice.\nFGF21 deficiency causes a marked exacerbation of atherosclerotic plaque formation and premature death in apolipoprotein E(-/-) mice, which is accompanied by hypoadiponectinemia and severe hypercholesterolemia.\nReplenishment of FGF21 protects against atherosclerosis in apolipoprotein E(-/-)mice via 2 independent mechanisms, inducing the adipocyte production of adiponectin, which in turn acts on the blood vessels to inhibit neointima formation and macrophage inflammation, and suppressing the hepatic expression of the transcription factor sterol regulatory element-binding protein-2, thereby leading to reduced cholesterol synthesis and attenuation of hypercholesterolemia.\nChronic treatment with adiponectin partially reverses atherosclerosis without obvious effects on hypercholesterolemia in FGF21-deficient apolipoprotein E(-/-) mice.\nBy contrast, the cholesterol-lowering effects of FGF21 are abrogated by hepatic expression of sterol regulatory element-binding protein-2.\nCONCLUSIONS FGF21 protects against atherosclerosis via fine tuning the multiorgan crosstalk among liver, adipose tissue, and blood vessels.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"FGF21 deficiency causes a marked exacerbation of atherosclerotic plaque formation and premature death in apolipoprotein E(-/-) mice, which is accompanied by hypoadiponectinemia and severe hypercholesterolemia.\"]}",
"role": "assistant"
}
] |
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
A breast cancer patient's capacity to metabolize tamoxifen influences treatment outcome.
Research Abstract:
CONTEXT The growth inhibitory effect of tamoxifen, which is used for the treatment of hormone receptor-positive breast cancer, is mediated by its metabolites, 4-hydroxytamoxifen and endoxifen.
The formation of active metabolites is catalyzed by the polymorphic cytochrome P450 2D6 (CYP2D6) enzyme.
OBJECTIVE To determine whether CYP2D6 variation is associated with clinical outcomes in women receiving adjuvant tamoxifen.
DESIGN, SETTING, AND PATIENTS Retrospective analysis of German and US cohorts of patients treated with adjuvant tamoxifen for early stage breast cancer.
The 1325 patients had diagnoses between 1986 and 2005 of stage I through III breast cancer and were mainly postmenopausal (95.4%).
Last follow-up was in December 2008; inclusion criteria were hormone receptor positivity, no metastatic disease at diagnosis, adjuvant tamoxifen therapy, and no chemotherapy.
DNA from tumor tissue or blood was genotyped for CYP2D6 variants associated with reduced (*10, *41) or absent (*3, *4, *5) enzyme activity.
Women were classified as having an extensive (n=609), heterozygous extensive/intermediate (n=637), or poor (n=79) CYP2D6 metabolism.
MAIN OUTCOME MEASURES Time to recurrence, event-free survival, disease-free survival, and overall survival.
RESULTS Median follow-up was 6.3 years.
At 9 years of follow-up, the recurrence rates were 14.9% for extensive metabolizers, 20.9% for heterozygous extensive/intermediate metabolizers, and 29.0% for poor metabolizers, and all-cause mortality rates were 16.7%, 18.0%, and 22.8%, respectively.
Compared with extensive metabolizers, there was a significantly increased risk of recurrence for heterozygous extensive/intermediate metabolizers (time to recurrence adjusted hazard ratio [HR], 1.40; 95% confidence interval [CI], 1.04-1.90) and for poor metabolizers (time to recurrence HR, 1.90; 95% CI, 1.10-3.28).
Compared with extensive metabolizers, those with decreased CYP2D6 activity (heterozygous extensive/intermediate and poor metabolism) had worse event-free survival (HR, 1.33; 95% CI, 1.06-1.68) and disease-free survival (HR, 1.29; 95% CI, 1.03-1.61), but there was no significant difference in overall survival (HR, 1.15; 95% CI, 0.88-1.51).
CONCLUSION Among women with breast cancer treated with tamoxifen, there was an association between CYP2D6 variation and clinical outcomes, such that the presence of 2 functional CYP2D6 alleles was associated with better clinical outcomes and the presence of nonfunctional or reduced-function alleles with worse outcomes. | {"verdict": "SUPPORT", "evidence": ["At 9 years of follow-up, the recurrence rates were 14.9% for extensive metabolizers, 20.9% for heterozygous extensive/intermediate metabolizers, and 29.0% for poor metabolizers, and all-cause mortality rates were 16.7%, 18.0%, and 22.8%, respectively.", "Compared with extensive metabolizers, there was a significantly increased risk of recurrence for heterozygous extensive/intermediate metabolizers (time to recurrence adjusted hazard ratio [HR], 1.40; 95% confidence interval [CI], 1.04-1.90) and for poor metabolizers (time to recurrence HR, 1.90; 95% CI, 1.10-3.28).", "Compared with extensive metabolizers, those with decreased CYP2D6 activity (heterozygous extensive/intermediate and poor metabolism) had worse event-free survival (HR, 1.33; 95% CI, 1.06-1.68) and disease-free survival (HR, 1.29; 95% CI, 1.03-1.61), but there was no significant difference in overall survival (HR, 1.15; 95% CI, 0.88-1.51).", "CONCLUSION Among women with breast cancer treated with tamoxifen, there was an association between CYP2D6 variation and clinical outcomes, such that the presence of 2 functional CYP2D6 alleles was associated with better clinical outcomes and the presence of nonfunctional or reduced-function alleles with worse outcomes."]} | science.scifact_entailment.21 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nA breast cancer patient's capacity to metabolize tamoxifen influences treatment outcome.\n\nResearch Abstract:\nCONTEXT The growth inhibitory effect of tamoxifen, which is used for the treatment of hormone receptor-positive breast cancer, is mediated by its metabolites, 4-hydroxytamoxifen and endoxifen.\nThe formation of active metabolites is catalyzed by the polymorphic cytochrome P450 2D6 (CYP2D6) enzyme.\nOBJECTIVE To determine whether CYP2D6 variation is associated with clinical outcomes in women receiving adjuvant tamoxifen.\nDESIGN, SETTING, AND PATIENTS Retrospective analysis of German and US cohorts of patients treated with adjuvant tamoxifen for early stage breast cancer.\nThe 1325 patients had diagnoses between 1986 and 2005 of stage I through III breast cancer and were mainly postmenopausal (95.4%).\nLast follow-up was in December 2008; inclusion criteria were hormone receptor positivity, no metastatic disease at diagnosis, adjuvant tamoxifen therapy, and no chemotherapy.\nDNA from tumor tissue or blood was genotyped for CYP2D6 variants associated with reduced (*10, *41) or absent (*3, *4, *5) enzyme activity.\nWomen were classified as having an extensive (n=609), heterozygous extensive/intermediate (n=637), or poor (n=79) CYP2D6 metabolism.\nMAIN OUTCOME MEASURES Time to recurrence, event-free survival, disease-free survival, and overall survival.\nRESULTS Median follow-up was 6.3 years.\nAt 9 years of follow-up, the recurrence rates were 14.9% for extensive metabolizers, 20.9% for heterozygous extensive/intermediate metabolizers, and 29.0% for poor metabolizers, and all-cause mortality rates were 16.7%, 18.0%, and 22.8%, respectively.\nCompared with extensive metabolizers, there was a significantly increased risk of recurrence for heterozygous extensive/intermediate metabolizers (time to recurrence adjusted hazard ratio [HR], 1.40; 95% confidence interval [CI], 1.04-1.90) and for poor metabolizers (time to recurrence HR, 1.90; 95% CI, 1.10-3.28).\nCompared with extensive metabolizers, those with decreased CYP2D6 activity (heterozygous extensive/intermediate and poor metabolism) had worse event-free survival (HR, 1.33; 95% CI, 1.06-1.68) and disease-free survival (HR, 1.29; 95% CI, 1.03-1.61), but there was no significant difference in overall survival (HR, 1.15; 95% CI, 0.88-1.51).\nCONCLUSION Among women with breast cancer treated with tamoxifen, there was an association between CYP2D6 variation and clinical outcomes, such that the presence of 2 functional CYP2D6 alleles was associated with better clinical outcomes and the presence of nonfunctional or reduced-function alleles with worse outcomes.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"At 9 years of follow-up, the recurrence rates were 14.9% for extensive metabolizers, 20.9% for heterozygous extensive/intermediate metabolizers, and 29.0% for poor metabolizers, and all-cause mortality rates were 16.7%, 18.0%, and 22.8%, respectively.\", \"Compared with extensive metabolizers, there was a significantly increased risk of recurrence for heterozygous extensive/intermediate metabolizers (time to recurrence adjusted hazard ratio [HR], 1.40; 95% confidence interval [CI], 1.04-1.90) and for poor metabolizers (time to recurrence HR, 1.90; 95% CI, 1.10-3.28).\", \"Compared with extensive metabolizers, those with decreased CYP2D6 activity (heterozygous extensive/intermediate and poor metabolism) had worse event-free survival (HR, 1.33; 95% CI, 1.06-1.68) and disease-free survival (HR, 1.29; 95% CI, 1.03-1.61), but there was no significant difference in overall survival (HR, 1.15; 95% CI, 0.88-1.51).\", \"CONCLUSION Among women with breast cancer treated with tamoxifen, there was an association between CYP2D6 variation and clinical outcomes, such that the presence of 2 functional CYP2D6 alleles was associated with better clinical outcomes and the presence of nonfunctional or reduced-function alleles with worse outcomes.\"]}",
"role": "assistant"
}
] |
science.scifact_entailment | Determine the relationship between them and generate a JSON response. Scientific Claim:
β-sheet opening occurs during pleurotolysin pore formation.
Research Abstract:
Membrane attack complex/perforin-like (MACPF) proteins comprise the largest superfamily of pore-forming proteins, playing crucial roles in immunity and pathogenesis.
Soluble monomers assemble into large transmembrane pores via conformational transitions that remain to be structurally and mechanistically characterised.
Here we present an 11 Å resolution cryo-electron microscopy (cryo-EM) structure of the two-part, fungal toxin Pleurotolysin (Ply), together with crystal structures of both components (the lipid binding PlyA protein and the pore-forming MACPF component PlyB).
These data reveal a 13-fold pore 80 Å in diameter and 100 Å in height, with each subunit comprised of a PlyB molecule atop a membrane bound dimer of PlyA. The resolution of the EM map, together with biophysical and computational experiments, allowed confident assignment of subdomains in a MACPF pore assembly.
The major conformational changes in PlyB are a ∼70° opening of the bent and distorted central β-sheet of the MACPF domain, accompanied by extrusion and refolding of two α-helical regions into transmembrane β-hairpins (TMH1 and TMH2).
We determined the structures of three different disulphide bond-trapped prepore intermediates.
Analysis of these data by molecular modelling and flexible fitting allows us to generate a potential trajectory of β-sheet unbending.
The results suggest that MACPF conformational change is triggered through disruption of the interface between a conserved helix-turn-helix motif and the top of TMH2.
Following their release we propose that the transmembrane regions assemble into β-hairpins via top down zippering of backbone hydrogen bonds to form the membrane-inserted β-barrel.
The intermediate structures of the MACPF domain during refolding into the β-barrel pore establish a structural paradigm for the transition from soluble monomer to pore, which may be conserved across the whole superfamily.
The TMH2 region is critical for the release of both TMH clusters, suggesting why this region is targeted by endogenous inhibitors of MACPF function. | {"verdict": "SUPPORT", "evidence": ["The major conformational changes in PlyB are a \u223c70\u00b0 opening of the bent and distorted central \u03b2-sheet of the MACPF domain, accompanied by extrusion and refolding of two \u03b1-helical regions into transmembrane \u03b2-hairpins (TMH1 and TMH2)."]} | science.scifact_entailment.916 | [
{
"content": "Determine the relationship between them and generate a JSON response. Scientific Claim: \nβ-sheet opening occurs during pleurotolysin pore formation.\n\nResearch Abstract:\nMembrane attack complex/perforin-like (MACPF) proteins comprise the largest superfamily of pore-forming proteins, playing crucial roles in immunity and pathogenesis.\nSoluble monomers assemble into large transmembrane pores via conformational transitions that remain to be structurally and mechanistically characterised.\nHere we present an 11 Å resolution cryo-electron microscopy (cryo-EM) structure of the two-part, fungal toxin Pleurotolysin (Ply), together with crystal structures of both components (the lipid binding PlyA protein and the pore-forming MACPF component PlyB).\nThese data reveal a 13-fold pore 80 Å in diameter and 100 Å in height, with each subunit comprised of a PlyB molecule atop a membrane bound dimer of PlyA. The resolution of the EM map, together with biophysical and computational experiments, allowed confident assignment of subdomains in a MACPF pore assembly.\nThe major conformational changes in PlyB are a ∼70° opening of the bent and distorted central β-sheet of the MACPF domain, accompanied by extrusion and refolding of two α-helical regions into transmembrane β-hairpins (TMH1 and TMH2).\nWe determined the structures of three different disulphide bond-trapped prepore intermediates.\nAnalysis of these data by molecular modelling and flexible fitting allows us to generate a potential trajectory of β-sheet unbending.\nThe results suggest that MACPF conformational change is triggered through disruption of the interface between a conserved helix-turn-helix motif and the top of TMH2.\nFollowing their release we propose that the transmembrane regions assemble into β-hairpins via top down zippering of backbone hydrogen bonds to form the membrane-inserted β-barrel.\nThe intermediate structures of the MACPF domain during refolding into the β-barrel pore establish a structural paradigm for the transition from soluble monomer to pore, which may be conserved across the whole superfamily.\nThe TMH2 region is critical for the release of both TMH clusters, suggesting why this region is targeted by endogenous inhibitors of MACPF function.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"The major conformational changes in PlyB are a \\u223c70\\u00b0 opening of the bent and distorted central \\u03b2-sheet of the MACPF domain, accompanied by extrusion and refolding of two \\u03b1-helical regions into transmembrane \\u03b2-hairpins (TMH1 and TMH2).\"]}",
"role": "assistant"
}
] |