before_sent | before_sent_with_intent | after_sent | labels | doc_id | revision_depth |
---|---|---|---|---|---|
In this work, we extend this approach to the task of dialogue state tracking for goal-oriented dialogues, in which the data naturally exhibits a hierarchical structure over utterances and related annotations . | <fluency> In this work, we extend this approach to the task of dialogue state tracking for goal-oriented dialogues, in which the data naturally exhibits a hierarchical structure over utterances and related annotations . | In this work, we extend this approach to the task of dialog state tracking for goal-oriented dialogues, in which the data naturally exhibits a hierarchical structure over utterances and related annotations . | fluency | 2001.08604 | 1 |
In this work, we extend this approach to the task of dialogue state tracking for goal-oriented dialogues, in which the data naturally exhibits a hierarchical structure over utterances and related annotations . | <coherence> In this work, we extend this approach to the task of dialogue state tracking for goal-oriented dialogues, in which the data naturally exhibits a hierarchical structure over utterances and related annotations . | In this work, we extend this approach to the task of dialogue state tracking for goal-oriented dialogs. Since, goal-oriented dialogs naturally exhibit a hierarchical structure over utterances and related annotations . | coherence | 2001.08604 | 1 |
In this work, we extend this approach to the task of dialogue state tracking for goal-oriented dialogues, in which the data naturally exhibits a hierarchical structure over utterances and related annotations . Deep generative data augmentation for dialogue state tracking requires the generative model to be aware of the hierarchically structured data . | <fluency> In this work, we extend this approach to the task of dialogue state tracking for goal-oriented dialogues, in which the data naturally exhibits a hierarchical structure over utterances and related annotations . Deep generative data augmentation for dialogue state tracking requires the generative model to be aware of the hierarchically structured data . | In this work, we extend this approach to the task of dialogue state tracking for goal-oriented dialogues, in which the data naturally exhibits a hierarchical structure over utterances and related annotations , deep generative data augmentation for dialogue state tracking requires the generative model to be aware of the hierarchically structured data . | fluency | 2001.08604 | 1 |
Deep generative data augmentation for dialogue state tracking requires the generative model to be aware of the hierarchically structured data . | <clarity> Deep generative data augmentation for dialogue state tracking requires the generative model to be aware of the hierarchically structured data . | Deep generative data augmentation for the task requires the generative model to be aware of the hierarchically structured data . | clarity | 2001.08604 | 1 |
Deep generative data augmentation for dialogue state tracking requires the generative model to be aware of the hierarchically structured data . | <clarity> Deep generative data augmentation for dialogue state tracking requires the generative model to be aware of the hierarchically structured data . | Deep generative data augmentation for dialogue state tracking requires the generative model to be aware of the hierarchical nature . | clarity | 2001.08604 | 1 |
We propose Variational Hierarchical Dialog Autoencoder (VHDA) for modeling various aspects of goal-oriented dialogues , including linguistic and underlying annotation structures. | <fluency> We propose Variational Hierarchical Dialog Autoencoder (VHDA) for modeling various aspects of goal-oriented dialogues , including linguistic and underlying annotation structures. | We propose the Variational Hierarchical Dialog Autoencoder (VHDA) for modeling various aspects of goal-oriented dialogues , including linguistic and underlying annotation structures. | fluency | 2001.08604 | 1 |
We propose Variational Hierarchical Dialog Autoencoder (VHDA) for modeling various aspects of goal-oriented dialogues , including linguistic and underlying annotation structures. | <meaning-changed> We propose Variational Hierarchical Dialog Autoencoder (VHDA) for modeling various aspects of goal-oriented dialogues , including linguistic and underlying annotation structures. | We propose Variational Hierarchical Dialog Autoencoder (VHDA) for modeling complete aspects of goal-oriented dialogues , including linguistic and underlying annotation structures. | meaning-changed | 2001.08604 | 1 |
We propose Variational Hierarchical Dialog Autoencoder (VHDA) for modeling various aspects of goal-oriented dialogues , including linguistic and underlying annotation structures. | <fluency> We propose Variational Hierarchical Dialog Autoencoder (VHDA) for modeling various aspects of goal-oriented dialogues , including linguistic and underlying annotation structures. | We propose Variational Hierarchical Dialog Autoencoder (VHDA) for modeling various aspects of goal-oriented dialogs , including linguistic and underlying annotation structures. | fluency | 2001.08604 | 1 |
We propose Variational Hierarchical Dialog Autoencoder (VHDA) for modeling various aspects of goal-oriented dialogues , including linguistic and underlying annotation structures. Our experiments show that our model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialogue state trackers, ultimately improving their final dialogue state tracking performances on several datasets . | <meaning-changed> We propose Variational Hierarchical Dialog Autoencoder (VHDA) for modeling various aspects of goal-oriented dialogues , including linguistic and underlying annotation structures. Our experiments show that our model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialogue state trackers, ultimately improving their final dialogue state tracking performances on several datasets . | We propose Variational Hierarchical Dialog Autoencoder (VHDA) for modeling various aspects of goal-oriented dialogues , including linguistic features and underlying structured annotations, namely dialog acts and goals. We also propose two training policies to mitigate issues that arise with training VAE-based models. Experiments show that our model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialogue state trackers, ultimately improving their final dialogue state tracking performances on several datasets . | meaning-changed | 2001.08604 | 1 |
Our experiments show that our model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialogue state trackers, ultimately improving their final dialogue state tracking performances on several datasets . | <meaning-changed> Our experiments show that our model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialogue state trackers, ultimately improving their final dialogue state tracking performances on several datasets . | Our experiments show that our hierarchical model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialogue state trackers, ultimately improving their final dialogue state tracking performances on several datasets . | meaning-changed | 2001.08604 | 1 |
Our experiments show that our model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialogue state trackers, ultimately improving their final dialogue state tracking performances on several datasets . | <fluency> Our experiments show that our model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialogue state trackers, ultimately improving their final dialogue state tracking performances on several datasets . | Our experiments show that our model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialog state trackers, ultimately improving their final dialogue state tracking performances on several datasets . | fluency | 2001.08604 | 1 |
Our experiments show that our model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialogue state trackers, ultimately improving their final dialogue state tracking performances on several datasets . | <clarity> Our experiments show that our model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialogue state trackers, ultimately improving their final dialogue state tracking performances on several datasets . | Our experiments show that our model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialogue state trackers, ultimately improving the dialog state tracking performances on several datasets . | clarity | 2001.08604 | 1 |
Our experiments show that our model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialogue state trackers, ultimately improving their final dialogue state tracking performances on several datasets . | <meaning-changed> Our experiments show that our model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialogue state trackers, ultimately improving their final dialogue state tracking performances on several datasets . | Our experiments show that our model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialogue state trackers, ultimately improving their final dialogue state tracking performances on various dialog domains. Surprisingly, the ability to jointly generate dialog features enables our model to outperform previous state-of-the-arts in related subtasks, such as language generation and user simulation . | meaning-changed | 2001.08604 | 1 |
Recent works have shown that generative data augmentation, where synthetic samples generated from deep generative models are used to augment the training dataset, benefit certain NLP tasks. | <clarity> Recent works have shown that generative data augmentation, where synthetic samples generated from deep generative models are used to augment the training dataset, benefit certain NLP tasks. | Recent works have shown that generative data augmentation, where synthetic samples generated from deep generative models complement the training dataset, benefit certain NLP tasks. | clarity | 2001.08604 | 2 |
Recent works have shown that generative data augmentation, where synthetic samples generated from deep generative models are used to augment the training dataset, benefit certain NLP tasks. | <clarity> Recent works have shown that generative data augmentation, where synthetic samples generated from deep generative models are used to augment the training dataset, benefit certain NLP tasks. | Recent works have shown that generative data augmentation, where synthetic samples generated from deep generative models are used to augment the training dataset, benefit NLP tasks. | clarity | 2001.08604 | 2 |
Since, goal-oriented dialogs naturally exhibit a hierarchical structure over utterances and related annotations, deep generative data augmentation for the task requires the generative model to be aware of the hierarchical nature . | <clarity> Since, goal-oriented dialogs naturally exhibit a hierarchical structure over utterances and related annotations, deep generative data augmentation for the task requires the generative model to be aware of the hierarchical nature . | Due to the inherent hierarchical structure of goal-oriented dialogs naturally exhibit a hierarchical structure over utterances and related annotations, deep generative data augmentation for the task requires the generative model to be aware of the hierarchical nature . | clarity | 2001.08604 | 2 |
Since, goal-oriented dialogs naturally exhibit a hierarchical structure over utterances and related annotations, deep generative data augmentation for the task requires the generative model to be aware of the hierarchical nature . | <clarity> Since, goal-oriented dialogs naturally exhibit a hierarchical structure over utterances and related annotations, deep generative data augmentation for the task requires the generative model to be aware of the hierarchical nature . | Since, goal-oriented dialogs over utterances and related annotations, deep generative data augmentation for the task requires the generative model to be aware of the hierarchical nature . | clarity | 2001.08604 | 2 |
Since, goal-oriented dialogs naturally exhibit a hierarchical structure over utterances and related annotations, deep generative data augmentation for the task requires the generative model to be aware of the hierarchical nature . | <clarity> Since, goal-oriented dialogs naturally exhibit a hierarchical structure over utterances and related annotations, deep generative data augmentation for the task requires the generative model to be aware of the hierarchical nature . | Since, goal-oriented dialogs naturally exhibit a hierarchical structure over utterances and related annotations, the deep generative model must be capable of capturing the coherence among different hierarchies and types of dialog features . | clarity | 2001.08604 | 2 |
We propose the Variational Hierarchical Dialog Autoencoder (VHDA) for modeling complete aspects of goal-oriented dialogs, including linguistic features and underlying structured annotations, namely dialog acts and goals. | <fluency> We propose the Variational Hierarchical Dialog Autoencoder (VHDA) for modeling complete aspects of goal-oriented dialogs, including linguistic features and underlying structured annotations, namely dialog acts and goals. | We propose the Variational Hierarchical Dialog Autoencoder (VHDA) for modeling the complete aspects of goal-oriented dialogs, including linguistic features and underlying structured annotations, namely dialog acts and goals. | fluency | 2001.08604 | 2 |
We propose the Variational Hierarchical Dialog Autoencoder (VHDA) for modeling complete aspects of goal-oriented dialogs, including linguistic features and underlying structured annotations, namely dialog acts and goals. | <clarity> We propose the Variational Hierarchical Dialog Autoencoder (VHDA) for modeling complete aspects of goal-oriented dialogs, including linguistic features and underlying structured annotations, namely dialog acts and goals. | We propose the Variational Hierarchical Dialog Autoencoder (VHDA) for modeling complete aspects of goal-oriented dialogs, including linguistic features and underlying structured annotations, namely speaker information, dialog acts, and goals. | clarity | 2001.08604 | 2 |
We also propose two training policies to mitigate issues that arise with training VAE-based models. | <meaning-changed> We also propose two training policies to mitigate issues that arise with training VAE-based models. | The proposed architecture is designed to model each aspect of goal-oriented dialogs using inter-connected latent variables and learns to generate coherent goal-oriented dialogs from the latent spaces. To overcome training issues that arise with training VAE-based models. | meaning-changed | 2001.08604 | 2 |
We also propose two training policies to mitigate issues that arise with training VAE-based models. Experiments show that our hierarchical model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialog state trackers, ultimately improving the dialog state tracking performances on various dialog domains. | <clarity> We also propose two training policies to mitigate issues that arise with training VAE-based models. Experiments show that our hierarchical model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialog state trackers, ultimately improving the dialog state tracking performances on various dialog domains. | We also propose two training policies to mitigate issues that arise from training complex variational models, we propose appropriate training strategies. Experiments on various dialog datasets show that our hierarchical model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialog state trackers, ultimately improving the dialog state tracking performances on various dialog domains. | clarity | 2001.08604 | 2 |
Experiments show that our hierarchical model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialog state trackers, ultimately improving the dialog state tracking performances on various dialog domains. Surprisingly, the ability to jointly generate dialog features enables our model to outperform previous state-of-the-arts in related subtasks, such as language generation and user simulation . | <clarity> Experiments show that our hierarchical model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialog state trackers, ultimately improving the dialog state tracking performances on various dialog domains. Surprisingly, the ability to jointly generate dialog features enables our model to outperform previous state-of-the-arts in related subtasks, such as language generation and user simulation . | Experiments show that our model improves the downstream dialog trackers' robustness via generative data augmentation. We also discover additional benefits of our unified approach to modeling goal-oriented dialogs: dialog response generation and user simulation . | clarity | 2001.08604 | 2 |
Surprisingly, the ability to jointly generate dialog features enables our model to outperform previous state-of-the-arts in related subtasks, such as language generation and user simulation . | <meaning-changed> Surprisingly, the ability to jointly generate dialog features enables our model to outperform previous state-of-the-arts in related subtasks, such as language generation and user simulation . | Surprisingly, the ability to jointly generate dialog features enables our model to outperform previous state-of-the-arts in related subtasks, such as language generation and user simulation , where our model outperforms previous strong baselines . | meaning-changed | 2001.08604 | 2 |
How can neural models make sample-efficient generalizations from task-language combinations with available data to low-resource ones? | <fluency> How can neural models make sample-efficient generalizations from task-language combinations with available data to low-resource ones? | How can neural models make sample-efficient generalizations from task--language combinations with available data to low-resource ones? | fluency | 2001.11453 | 1 |
We infer the posteriors over such latent variables based on data from seen task-language combinations through variational inference. | <fluency> We infer the posteriors over such latent variables based on data from seen task-language combinations through variational inference. | We infer the posteriors over such latent variables based on data from seen task--language combinations through variational inference. | fluency | 2001.11453 | 1 |
In particular, we experiment with a typologically diverse sample of 33 languages from 4 continents and 11 families, and show that our model yields comparable or better results than state-of-the-art, zero-shot cross-lingual transfer methods ; it increases performance by 4.49 points for POS tagging and 7.73 points for NER on average compared to the strongest baseline . | <meaning-changed> In particular, we experiment with a typologically diverse sample of 33 languages from 4 continents and 11 families, and show that our model yields comparable or better results than state-of-the-art, zero-shot cross-lingual transfer methods ; it increases performance by 4.49 points for POS tagging and 7.73 points for NER on average compared to the strongest baseline . | In particular, we experiment with a typologically diverse sample of 33 languages from 4 continents and 11 families, and show that our model yields comparable or better results than state-of-the-art, zero-shot cross-lingual transfer methods . Moreover, we demonstrate that approximate Bayesian model averaging results in smoother predictive distributions, whose entropy strongly correlates with accuracy. Hence, the proposed framework also offers robust estimates of uncertainty . | meaning-changed | 2001.11453 | 1 |
How can neural models make sample-efficient generalizations from task--language combinations with available data to low-resource ones? | <fluency> How can neural models make sample-efficient generalizations from task--language combinations with available data to low-resource ones? | How can neural models make sample-efficient generalizations from task-language combinations with available data to low-resource ones? | fluency | 2001.11453 | 2 |
We infer the posteriors over such latent variables based on data from seen task--language combinations through variational inference. | <fluency> We infer the posteriors over such latent variables based on data from seen task--language combinations through variational inference. | We infer the posteriors over such latent variables based on data from seen task-language combinations through variational inference. | fluency | 2001.11453 | 2 |
Moreover, we demonstrate that approximate Bayesian model averaging results in smoother predictive distributions, whose entropy strongly correlates with accuracy. | <meaning-changed> Moreover, we demonstrate that approximate Bayesian model averaging results in smoother predictive distributions, whose entropy strongly correlates with accuracy. | Moreover, we demonstrate that approximate Bayesian model averaging results in smoother predictive distributions, whose entropy inversely correlates with accuracy. | meaning-changed | 2001.11453 | 2 |
Hence, the proposed framework also offers robust estimates of uncertainty. | <meaning-changed> Hence, the proposed framework also offers robust estimates of uncertainty. | Hence, the proposed framework also offers robust estimates of prediction uncertainty. Our code is located at github.com/cambridgeltl/parameter-factorization | meaning-changed | 2001.11453 | 2 |
We propose UniViLM: a Unified Video and Language pre-training Model for multimodal understanding and generation. Motivated by the recent success of BERT based pre-training technique for NLP and image-language tasks, VideoBERT and CBT are proposed to exploit BERT model for video and language pre-training using narrated instructional videos. | <coherence> We propose UniViLM: a Unified Video and Language pre-training Model for multimodal understanding and generation. Motivated by the recent success of BERT based pre-training technique for NLP and image-language tasks, VideoBERT and CBT are proposed to exploit BERT model for video and language pre-training using narrated instructional videos. | With the recent success of BERT based pre-training technique for NLP and image-language tasks, VideoBERT and CBT are proposed to exploit BERT model for video and language pre-training using narrated instructional videos. | coherence | 2002.06353 | 1 |
Motivated by the recent success of BERT based pre-training technique for NLP and image-language tasks, VideoBERT and CBT are proposed to exploit BERT model for video and language pre-training using narrated instructional videos. | <coherence> Motivated by the recent success of BERT based pre-training technique for NLP and image-language tasks, VideoBERT and CBT are proposed to exploit BERT model for video and language pre-training using narrated instructional videos. | Motivated by the recent success of pre-training technique for NLP and image-language tasks, VideoBERT and CBT are proposed to exploit BERT model for video and language pre-training using narrated instructional videos. | coherence | 2002.06353 | 1 |
Motivated by the recent success of BERT based pre-training technique for NLP and image-language tasks, VideoBERT and CBT are proposed to exploit BERT model for video and language pre-training using narrated instructional videos. | <clarity> Motivated by the recent success of BERT based pre-training technique for NLP and image-language tasks, VideoBERT and CBT are proposed to exploit BERT model for video and language pre-training using narrated instructional videos. | Motivated by the recent success of BERT based pre-training technique for NLP and image-linguistic tasks, there are still few works on video-linguistic pre-training using narrated instructional videos. | clarity | 2002.06353 | 1 |
Motivated by the recent success of BERT based pre-training technique for NLP and image-language tasks, VideoBERT and CBT are proposed to exploit BERT model for video and language pre-training using narrated instructional videos. Different from their works which only pre-train understanding task, we propose a unified video-language pre-training model for both understanding and generation tasks . | <coherence> Motivated by the recent success of BERT based pre-training technique for NLP and image-language tasks, VideoBERT and CBT are proposed to exploit BERT model for video and language pre-training using narrated instructional videos. Different from their works which only pre-train understanding task, we propose a unified video-language pre-training model for both understanding and generation tasks . | Motivated by the recent success of BERT based pre-training technique for NLP and image-language tasks, VideoBERT and CBT are proposed to exploit BERT model for video and language pre-training . Besides, most of the existing multimodal models are pre-trained for understanding task, we propose a unified video-language pre-training model for both understanding and generation tasks . | coherence | 2002.06353 | 1 |
Different from their works which only pre-train understanding task, we propose a unified video-language pre-training model for both understanding and generation tasks . | <meaning-changed> Different from their works which only pre-train understanding task, we propose a unified video-language pre-training model for both understanding and generation tasks . | Different from their works which only pre-train understanding task, which leads to a pretrain-finetune discrepency for generation tasks. In this paper, we propose UniViLM: a Unified Video and Language pre-training model for both understanding and generation tasks . | meaning-changed | 2002.06353 | 1 |
Different from their works which only pre-train understanding task, we propose a unified video-language pre-training model for both understanding and generation tasks . | <clarity> Different from their works which only pre-train understanding task, we propose a unified video-language pre-training model for both understanding and generation tasks . | Different from their works which only pre-train understanding task, we propose a unified video-language pre-training Model for both multimodal understanding and generation tasks . | clarity | 2002.06353 | 1 |
Different from their works which only pre-train understanding task, we propose a unified video-language pre-training model for both understanding and generation tasks . | <clarity> Different from their works which only pre-train understanding task, we propose a unified video-language pre-training model for both understanding and generation tasks . | Different from their works which only pre-train understanding task, we propose a unified video-language pre-training model for both understanding and generation . | clarity | 2002.06353 | 1 |
IMAGINE learns to represent goals by jointly learning a language model and a goal-conditioned reward function. | <clarity> IMAGINE learns to represent goals by jointly learning a language model and a goal-conditioned reward function. | IMAGINE learns to represent goals by jointly learning a language encoder and a goal-conditioned reward function. | clarity | 2002.09253 | 1 |
Just like humans, our agent uses language compositionality to generate new goals by composing known ones . Leveraging modular model architectures based on Deep Sets and gated-attention mechanisms, IMAGINE autonomously builds a repertoire of behaviors and shows good zero-shot generalization properties for various types of generalization. | <meaning-changed> Just like humans, our agent uses language compositionality to generate new goals by composing known ones . Leveraging modular model architectures based on Deep Sets and gated-attention mechanisms, IMAGINE autonomously builds a repertoire of behaviors and shows good zero-shot generalization properties for various types of generalization. | Just like humans, our agent uses language compositionality to generate new goals by composing known ones , using an algorithm grounded in construction grammar models of child language acquisition . Leveraging modular model architectures based on Deep Sets and gated-attention mechanisms, IMAGINE autonomously builds a repertoire of behaviors and shows good zero-shot generalization properties for various types of generalization. | meaning-changed | 2002.09253 | 1 |
Just like humans, our agent uses language compositionality to generate new goals by composing known ones . Leveraging modular model architectures based on Deep Sets and gated-attention mechanisms, IMAGINE autonomously builds a repertoire of behaviors and shows good zero-shot generalization properties for various types of generalization. | <fluency> Just like humans, our agent uses language compositionality to generate new goals by composing known ones . Leveraging modular model architectures based on Deep Sets and gated-attention mechanisms, IMAGINE autonomously builds a repertoire of behaviors and shows good zero-shot generalization properties for various types of generalization. | Just like humans, our agent uses language compositionality to generate new goals by composing known ones . Leveraging modular model architectures based on deepsets and gated attention mechanisms, IMAGINE autonomously builds a repertoire of behaviors and shows good zero-shot generalization properties for various types of generalization. | fluency | 2002.09253 | 1 |
Autonomous reinforcement learning agents must be intrinsically motivated to explore their environment, discover potential goals, represent them and learn how to achieve them. | <meaning-changed> Autonomous reinforcement learning agents must be intrinsically motivated to explore their environment, discover potential goals, represent them and learn how to achieve them. | Developmental machine learning studies how artificial agents can model the way children learn open-ended repertoires of skills. Such agents need to create and represent goals, select which ones to pursue and learn to achieve them. | meaning-changed | 2002.09253 | 2 |
As children do the same, they benefit from exposure to language, using it to formulate goals and imagine new ones as they learn their meaning. In our proposed learning architecture (IMAGINE), the agent freely explores its environment and turns natural language descriptions of interesting interactions from a social partner into potential goals. IMAGINE learns to represent goalsby jointly learning a language encoder and a goal-conditioned reward function . Just like humans, our agent uses language compositionality to generate new goals by composing known ones, using an algorithm grounded in construction grammar models of child language acquisition. | <meaning-changed> As children do the same, they benefit from exposure to language, using it to formulate goals and imagine new ones as they learn their meaning. In our proposed learning architecture (IMAGINE), the agent freely explores its environment and turns natural language descriptions of interesting interactions from a social partner into potential goals. IMAGINE learns to represent goalsby jointly learning a language encoder and a goal-conditioned reward function . Just like humans, our agent uses language compositionality to generate new goals by composing known ones, using an algorithm grounded in construction grammar models of child language acquisition. Leveraging modular model architectures based on deepsets and gated attention mechanisms, IMAGINE autonomously builds a repertoire of behaviors and shows good zero-shot generalization properties for various types of generalization. When imagining its own goals, the agent leverages zero-shot generalization of the reward function to further train on imagined goals and refine its behavior. We present experiments in a simulated domain where the agent interacts with procedurally generated scenes containing objects of various types and colors, discovers goals, imagines others and learns to achieve them . | Recent approaches have considered goal spaces that were either fixed and hand-defined or learned using generative models of states. This limited agents to sample goals within the distribution of known effects. We argue that the ability to imagine out-of-distribution goals is key to enable creative discoveries and open-ended learning. Children do so by leveraging the compositionality of language as a tool to imagine descriptions of outcomes they never experienced before, targeting them as goals during play. We introduce Imagine, an intrinsically motivated deep reinforcement learning architecture that models this ability. Such imaginative agents, like children, benefit from the guidance of a social peer who provides language descriptions. To take advantage of goal imagination, agents must be able to leverage these descriptions to interpret their imagined out-of-distribution goals. This generalization is made possible by modularity: a decomposition between learned goal-achievement reward function and policy relying on deep sets, gated attention and object-centered representations. We introduce the Playground environment and study how this form of goal imagination improves generalization and exploration over agents lacking this capacity. In addition, we identify the properties of goal imagination that enable these results and study the impacts of modularity and social interactions . | meaning-changed | 2002.09253 | 2
Producing natural and accurate responses like human beings is the ultimate goal of intelligent dialogue agents. So far, most of the past works concentrate on selecting or generating one pertinent and fluent response according to current query and its context. These models work on a one-to-one environment, making one response to one utterance each round.However, in real human-human conversations, human often sequentially sends several short messages for readability instead of a long message in one turn. | <clarity> Producing natural and accurate responses like human beings is the ultimate goal of intelligent dialogue agents. So far, most of the past works concentrate on selecting or generating one pertinent and fluent response according to current query and its context. These models work on a one-to-one environment, making one response to one utterance each round.However, in real human-human conversations, human often sequentially sends several short messages for readability instead of a long message in one turn. | Different people have different habits of describing their intents in conversations. Some people may tend to deliberate their full intents in several successive utterances, i.e., they use several consistent messages for readability instead of a long message in one turn. | clarity | 2002.09616 | 1 |
However, in real human-human conversations, human often sequentially sends several short messages for readability instead of a long message in one turn. Thus messages will not end with an explicit ending signal, which is crucial for agents to decide when to reply. So the first step for an intelligent dialogue agent is not replying but deciding if it should reply at the moment. To address this issue, in this paper , we propose a novel Imagine-then-Arbitrate (ITA) neural dialogue model to help the agent decide whether to wait or to make a response directly. | <meaning-changed> However, in real human-human conversations, human often sequentially sends several short messages for readability instead of a long message in one turn. Thus messages will not end with an explicit ending signal, which is crucial for agents to decide when to reply. So the first step for an intelligent dialogue agent is not replying but deciding if it should reply at the moment. To address this issue, in this paper , we propose a novel Imagine-then-Arbitrate (ITA) neural dialogue model to help the agent decide whether to wait or to make a response directly. | However, in real human-human conversations, human often sequentially sends several short messages for readability instead of a long sentence to express their question. This creates a predicament faced by dialogue systems' application, especially in real-world industrial scenarios, in which the dialogue system is unsure that whether it should answer the user's query immediately or wait for users' further supplementary input. Motivated by such interesting quandary, we define a novel task: Wait-or-Answer to better tackle this dilemma faced by dialogue systems. We shed light on a new research topic about how the dialogue system can be more competent to behave in this Wait-or-Answer quandary. Further , we propose a novel Imagine-then-Arbitrate (ITA) neural dialogue model to help the agent decide whether to wait or to make a response directly. | meaning-changed | 2002.09616 | 1
To address this issue, in this paper , we propose a novel Imagine-then-Arbitrate (ITA) neural dialogue model to help the agent decide whether to wait or to make a response directly. | <clarity> To address this issue, in this paper , we propose a novel Imagine-then-Arbitrate (ITA) neural dialogue model to help the agent decide whether to wait or to make a response directly. | To address this issue, in this paper , we propose a predictive approach dubbed Imagine-then-Arbitrate (ITA) neural dialogue model to help the agent decide whether to wait or to make a response directly. | clarity | 2002.09616 | 1 |
To address this issue, in this paper , we propose a novel Imagine-then-Arbitrate (ITA) neural dialogue model to help the agent decide whether to wait or to make a response directly. | <meaning-changed> To address this issue, in this paper , we propose a novel Imagine-then-Arbitrate (ITA) neural dialogue model to help the agent decide whether to wait or to make a response directly. | To address this issue, in this paper , we propose a novel Imagine-then-Arbitrate (ITA) to resolve this Wait-or-Answer task. More specifically, we take advantage of an arbitrator model to help the agent decide whether to wait or to make a response directly. | meaning-changed | 2002.09616 | 1 |
To address this issue, in this paper , we propose a novel Imagine-then-Arbitrate (ITA) neural dialogue model to help the agent decide whether to wait or to make a response directly. | <clarity> To address this issue, in this paper , we propose a novel Imagine-then-Arbitrate (ITA) neural dialogue model to help the agent decide whether to wait or to make a response directly. | To address this issue, in this paper , we propose a novel Imagine-then-Arbitrate (ITA) neural dialogue model to help the dialogue system decide to wait or to make a response directly. | clarity | 2002.09616 | 1 |
To address this issue, in this paper , we propose a novel Imagine-then-Arbitrate (ITA) neural dialogue model to help the agent decide whether to wait or to make a response directly. Our method has two imaginator modules and an arbitrator module. The two imaginators will learn the agent's and user's speaking style respectively, generate possible utterances as the input of the arbitrator, combining with dialogue history. And the arbitrator decides whether to wait or to make a response to the user directly. To verify the performance and effectiveness of our method, we prepared two dialogue datasets and compared our approach with several popular models. Experimental results show that our model performs well on addressing ending prediction issue and outperforms baseline models . | <meaning-changed> To address this issue, in this paper , we propose a novel Imagine-then-Arbitrate (ITA) neural dialogue model to help the agent decide whether to wait or to make a response directly. Our method has two imaginator modules and an arbitrator module. The two imaginators will learn the agent's and user's speaking style respectively, generate possible utterances as the input of the arbitrator, combining with dialogue history. And the arbitrator decides whether to wait or to make a response to the user directly. To verify the performance and effectiveness of our method, we prepared two dialogue datasets and compared our approach with several popular models. Experimental results show that our model performs well on addressing ending prediction issue and outperforms baseline models . | To address this issue, in this paper , we propose a novel Imagine-then-Arbitrate (ITA) neural dialogue model to help the agent decide whether to wait or answer. The arbitrator's decision is made with the assistance of two ancillary imaginator models: a wait imaginator and an answer imaginator. The wait imaginator tries to predict what the user would supplement and use its prediction to persuade the arbitrator that the user has some information to add, so the dialogue system should wait. The answer imaginator, nevertheless, struggles to predict the answer of the dialogue system and convince the arbitrator that it's a superior choice to answer the users' query immediately. To our best knowledge, our paper is the first work to explicitly define the Wait-or-Answer task in the dialogue system. Additionally, our proposed ITA approach significantly outperforms the existing models in solving this Wait-or-Answer problem . | meaning-changed | 2002.09616 | 1
Different people have different habits of describing their intents in conversations. Some people may tend to deliberate their full intents in several successive utterances, i. e., they use several consistent messages for readability instead of a long sentence to express their question. | <meaning-changed> Different people have different habits of describing their intents in conversations. Some people may tend to deliberate their full intents in several successive utterances, i. e., they use several consistent messages for readability instead of a long sentence to express their question. | Producing natural and accurate responses like human beings is the ultimate goal of intelligent dialogue agents. So far, most of the past works concentrate on selecting or generating one pertinent and fluent response according to current query and its context. These models work on a one-to-one environment, making one response to one utterance each round. However, in real human-human conversations, human often sequentially sends several short messages for readability instead of a long sentence to express their question. | meaning-changed | 2002.09616 | 2 |
Some people may tend to deliberate their full intents in several successive utterances, i. e., they use several consistent messages for readability instead of a long sentence to express their question. This creates a predicament faced by dialogue systems' application, especially in real-world industrial scenarios, in which the dialogue system is unsure that whether it should answer the user's query immediately or wait for users' further supplementary input. Motivated by such interesting quandary, we define a novel task: Wait-or-Answer to better tackle this dilemma faced by dialogue systems. We shed light on a new research topic about how the dialogue system can be more competent to behave in this Wait-or-Answer quandary. Further , we propose a predictive approach dubbed Imagine-then-Arbitrate (ITA) to resolve this Wait-or-Answer task. | <meaning-changed> Some people may tend to deliberate their full intents in several successive utterances, i. e., they use several consistent messages for readability instead of a long sentence to express their question. This creates a predicament faced by dialogue systems' application, especially in real-world industrial scenarios, in which the dialogue system is unsure that whether it should answer the user's query immediately or wait for users' further supplementary input. Motivated by such interesting quandary, we define a novel task: Wait-or-Answer to better tackle this dilemma faced by dialogue systems. We shed light on a new research topic about how the dialogue system can be more competent to behave in this Wait-or-Answer quandary. Further , we propose a predictive approach dubbed Imagine-then-Arbitrate (ITA) to resolve this Wait-or-Answer task. | Some people may tend to deliberate their full intents in several successive utterances, i. e., they use several consistent messages for readability instead of a long message in one turn. Thus messages will not end with an explicit ending signal, which is crucial for agents to decide when to reply. So the first step for an intelligent dialogue agent is not replying but deciding if it should reply at the moment. To address this issue, in this paper , we propose a predictive approach dubbed Imagine-then-Arbitrate (ITA) to resolve this Wait-or-Answer task. | meaning-changed | 2002.09616 | 2
Further , we propose a predictive approach dubbed Imagine-then-Arbitrate (ITA) to resolve this Wait-or-Answer task. | <clarity> Further , we propose a predictive approach dubbed Imagine-then-Arbitrate (ITA) to resolve this Wait-or-Answer task. | Further , we propose a novel Imagine-then-Arbitrate (ITA) to resolve this Wait-or-Answer task. | clarity | 2002.09616 | 2 |
Further , we propose a predictive approach dubbed Imagine-then-Arbitrate (ITA) to resolve this Wait-or-Answer task. More specifically, we take advantage of an arbitrator model to help the dialogue system decide to wait or answer. | <coherence> Further , we propose a predictive approach dubbed Imagine-then-Arbitrate (ITA) to resolve this Wait-or-Answer task. More specifically, we take advantage of an arbitrator model to help the dialogue system decide to wait or answer. | Further , we propose a predictive approach dubbed Imagine-then-Arbitrate (ITA) neural dialogue model to help the dialogue system decide to wait or answer. | coherence | 2002.09616 | 2 |
More specifically, we take advantage of an arbitrator model to help the dialogue system decide to wait or answer. | <clarity> More specifically, we take advantage of an arbitrator model to help the dialogue system decide to wait or answer. | More specifically, we take advantage of an arbitrator model to help the agent decide whether to wait or answer. | clarity | 2002.09616 | 2 |
More specifically, we take advantage of an arbitrator model to help the dialogue system decide to wait or answer. The arbitrator's decision is made with the assistance of two ancillary imaginator models: a wait imaginator and an answer imaginator. The wait imaginator tries to predict what the user would supplement and use its prediction to persuade the arbitrator that the userhas some information to add, so the dialogue system should wait. The answer imaginator, nevertheless, struggles to predict the answer of the dialogue system and convince the arbitrator that it's a superior choice to answer the users' query immediately. To our best knowledge, our paper is the first work to explicitly define the Wait-or-Answer task in the dialogue system. Additionally, our proposed ITA approach significantly outperforms the existing modelsin solving this Wait-or-Answer problem . | <meaning-changed> More specifically, we take advantage of an arbitrator model to help the dialogue system decide to wait or answer. The arbitrator's decision is made with the assistance of two ancillary imaginator models: a wait imaginator and an answer imaginator. The wait imaginator tries to predict what the user would supplement and use its prediction to persuade the arbitrator that the userhas some information to add, so the dialogue system should wait. The answer imaginator, nevertheless, struggles to predict the answer of the dialogue system and convince the arbitrator that it's a superior choice to answer the users' query immediately. To our best knowledge, our paper is the first work to explicitly define the Wait-or-Answer task in the dialogue system. Additionally, our proposed ITA approach significantly outperforms the existing modelsin solving this Wait-or-Answer problem . | More specifically, we take advantage of an arbitrator model to help the dialogue system decide to wait or to make a response directly. Our method has two imaginator modules and an arbitrator module. The two imaginators will learn the agent's and user's speaking style respectively, generate possible utterances as the input of the arbitrator, combining with dialogue history. And the arbitrator decides whether to wait or to make a response to the user directly. To verify the performance and effectiveness of our method, we prepared two dialogue datasets and compared our approach with several popular models. Experimental results show that our model performs well on addressing ending prediction issue and outperforms baseline models . | meaning-changed | 2002.09616 | 2
To this end, we used data gathered by the CrowdSource team at Google Research in 2019 and fine-tuned pre-trained BERT model on our problem. | <fluency> To this end, we used data gathered by the CrowdSource team at Google Research in 2019 and fine-tuned pre-trained BERT model on our problem. | To this end, we used data gathered by the CrowdSource team at Google Research in 2019 and a fine-tuned pre-trained BERT model on our problem. | fluency | 2002.10107 | 2 |
Based on evaluation by Mean-Squared-Error (MSE), model achieved the value of 0.046 after 2 epochs of training, which did not improve substantially in the next ones. | <fluency> Based on evaluation by Mean-Squared-Error (MSE), model achieved the value of 0.046 after 2 epochs of training, which did not improve substantially in the next ones. | Based on the evaluation by Mean-Squared-Error (MSE), model achieved the value of 0.046 after 2 epochs of training, which did not improve substantially in the next ones. | fluency | 2002.10107 | 2 |
Based on evaluation by Mean-Squared-Error (MSE), model achieved the value of 0.046 after 2 epochs of training, which did not improve substantially in the next ones. | <clarity> Based on evaluation by Mean-Squared-Error (MSE), model achieved the value of 0.046 after 2 epochs of training, which did not improve substantially in the next ones. | Based on evaluation by Mean-Squared-Error (MSE), the model achieved a value of 0.046 after 2 epochs of training, which did not improve substantially in the next ones. | clarity | 2002.10107 | 2 |
Previous attempts to learn variational auto-encoders for language data ? have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the ? occurrence of posterior collapse with VAEs. | <fluency> Previous attempts to learn variational auto-encoders for language data ? have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the ? occurrence of posterior collapse with VAEs. | Previous attempts to learn variational auto-encoders for language data have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the ? occurrence of posterior collapse with VAEs. | fluency | 2003.02645 | 1 |
Previous attempts to learn variational auto-encoders for language data ? have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the ? occurrence of posterior collapse with VAEs. | <fluency> Previous attempts to learn variational auto-encoders for language data ? have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the ? occurrence of posterior collapse with VAEs. | Previous attempts to learn variational auto-encoders for language data ? have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the occurrence of posterior collapse with VAEs. | fluency | 2003.02645 | 1 |
We introduce sentenceMIM, a probabilistic auto-encoder for language modelling , trained with Mutual Information Machine (MIM) learning . Previous attempts to learn variational auto-encoders for language data have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the occurrence of posterior collapsewith VAEs. | <clarity> We introduce sentenceMIM, a probabilistic auto-encoder for language modelling , trained with Mutual Information Machine (MIM) learning . Previous attempts to learn variational auto-encoders for language data have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the occurrence of posterior collapsewith VAEs. | SentenceMIM is a probabilistic auto-encoder for language modelling , trained with Mutual Information Machine (MIM) learning . Previous attempts to learn variational auto-encoders for language data have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the occurrence of posterior collapsewith VAEs. | clarity | 2003.02645 | 2 |
We introduce sentenceMIM, a probabilistic auto-encoder for language modelling , trained with Mutual Information Machine (MIM) learning . Previous attempts to learn variational auto-encoders for language data have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the occurrence of posterior collapsewith VAEs. | <clarity> We introduce sentenceMIM, a probabilistic auto-encoder for language modelling , trained with Mutual Information Machine (MIM) learning . Previous attempts to learn variational auto-encoders for language data have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the occurrence of posterior collapsewith VAEs. | We introduce sentenceMIM, a probabilistic auto-encoder for language data , trained with Mutual Information Machine (MIM) learning . Previous attempts to learn variational auto-encoders for language data have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the occurrence of posterior collapsewith VAEs. | clarity | 2003.02645 | 2 |
We introduce sentenceMIM, a probabilistic auto-encoder for language modelling , trained with Mutual Information Machine (MIM) learning . Previous attempts to learn variational auto-encoders for language data have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the occurrence of posterior collapsewith VAEs. | <meaning-changed> We introduce sentenceMIM, a probabilistic auto-encoder for language modelling , trained with Mutual Information Machine (MIM) learning . Previous attempts to learn variational auto-encoders for language data have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the occurrence of posterior collapsewith VAEs. | We introduce sentenceMIM, a probabilistic auto-encoder for language modelling , trained with Mutual Information Machine (MIM) learning to provide a fixed length representation of variable length language observations (ie, similar to VAE) . Previous attempts to learn variational auto-encoders for language data have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the occurrence of posterior collapsewith VAEs. | meaning-changed | 2003.02645 | 2 |
We introduce sentenceMIM, a probabilistic auto-encoder for language modelling , trained with Mutual Information Machine (MIM) learning . Previous attempts to learn variational auto-encoders for language data have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the occurrence of posterior collapsewith VAEs. | <clarity> We introduce sentenceMIM, a probabilistic auto-encoder for language modelling , trained with Mutual Information Machine (MIM) learning . Previous attempts to learn variational auto-encoders for language data have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the occurrence of posterior collapsewith VAEs. | We introduce sentenceMIM, a probabilistic auto-encoder for language modelling , trained with Mutual Information Machine (MIM) learning . Previous attempts to learn VAEs for language data have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the occurrence of posterior collapsewith VAEs. | clarity | 2003.02645 | 2 |
We introduce sentenceMIM, a probabilistic auto-encoder for language modelling , trained with Mutual Information Machine (MIM) learning . Previous attempts to learn variational auto-encoders for language data have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the occurrence of posterior collapsewith VAEs. The recently proposed MIM framework encourages high mutual information between observations and latent variables, and is more robust against posterior collapse. | <clarity> We introduce sentenceMIM, a probabilistic auto-encoder for language modelling , trained with Mutual Information Machine (MIM) learning . Previous attempts to learn variational auto-encoders for language data have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the occurrence of posterior collapsewith VAEs. The recently proposed MIM framework encourages high mutual information between observations and latent variables, and is more robust against posterior collapse. | We introduce sentenceMIM, a probabilistic auto-encoder for language modelling , trained with Mutual Information Machine (MIM) learning . Previous attempts to learn variational auto-encoders for language data faced challenges due to posterior collapse. MIM learning encourages high mutual information between observations and latent variables, and is more robust against posterior collapse. | clarity | 2003.02645 | 2 |
The recently proposed MIM framework encourages high mutual information between observations and latent variables, and is more robust against posterior collapse. | <clarity> The recently proposed MIM framework encourages high mutual information between observations and latent variables, and is more robust against posterior collapse. | The recently proposed MIM framework encourages high mutual information between observations and latent variables, and is robust against posterior collapse. | clarity | 2003.02645 | 2 |
This paper formulates a MIM model for text data, along with a corresponding learning algorithm. We demonstrate excellent perplexity (PPL) results on several datasets, and show that the framework learns a rich latent space, allowing for interpolation between sentences of different lengths with a fixed-dimensional latent representation. | <meaning-changed> This paper formulates a MIM model for text data, along with a corresponding learning algorithm. We demonstrate excellent perplexity (PPL) results on several datasets, and show that the framework learns a rich latent space, allowing for interpolation between sentences of different lengths with a fixed-dimensional latent representation. | As such, it learns informative representations whose dimension can be an order of magnitude higher than existing language VAEs. Importantly, the SentenceMIM loss has no hyper-parameters, simplifying optimization. We compare sentenceMIM with VAE, and AE on multiple datasets. SentenceMIM yields excellent reconstruction, comparable to AEs, with a rich structured latent space, allowing for interpolation between sentences of different lengths with a fixed-dimensional latent representation. | meaning-changed | 2003.02645 | 2 |
We demonstrate excellent perplexity (PPL) results on several datasets, and show that the framework learns a rich latent space, allowing for interpolation between sentences of different lengths with a fixed-dimensional latent representation. | <meaning-changed> We demonstrate excellent perplexity (PPL) results on several datasets, and show that the framework learns a rich latent space, allowing for interpolation between sentences of different lengths with a fixed-dimensional latent representation. | We demonstrate excellent perplexity (PPL) results on several datasets, and show that the framework learns a rich latent space, comparable to VAEs. The structured latent representation is demonstrated with interpolation between sentences of different lengths with a fixed-dimensional latent representation. | meaning-changed | 2003.02645 | 2 |
We demonstrate excellent perplexity (PPL) results on several datasets, and show that the framework learns a rich latent space, allowing for interpolation between sentences of different lengths with a fixed-dimensional latent representation. We also demonstrate the versatility of sentenceMIM by utilizing a trained model for question-answering , a transfer learningtask , without fine-tuning . | <clarity> We demonstrate excellent perplexity (PPL) results on several datasets, and show that the framework learns a rich latent space, allowing for interpolation between sentences of different lengths with a fixed-dimensional latent representation. We also demonstrate the versatility of sentenceMIM by utilizing a trained model for question-answering , a transfer learningtask , without fine-tuning . | We demonstrate excellent perplexity (PPL) results on several datasets, and show that the framework learns a rich latent space, allowing for interpolation between sentences of different lengths . We demonstrate the versatility of sentenceMIM by utilizing a trained model for question-answering , a transfer learningtask , without fine-tuning . | clarity | 2003.02645 | 2 |
We also demonstrate the versatility of sentenceMIM by utilizing a trained model for question-answering , a transfer learningtask , without fine-tuning . | <clarity> We also demonstrate the versatility of sentenceMIM by utilizing a trained model for question-answering , a transfer learningtask , without fine-tuning . | We also demonstrate the versatility of sentenceMIM by utilizing a trained model for question-answering and transfer learning , without fine-tuning . | clarity | 2003.02645 | 2 |
We also demonstrate the versatility of sentenceMIM by utilizing a trained model for question-answering , a transfer learningtask , without fine-tuning . To the best of our knowledge, this is the first latent variable model (LVM) for text modelling that achieves competitive performance with non-LVM models . | <meaning-changed> We also demonstrate the versatility of sentenceMIM by utilizing a trained model for question-answering , a transfer learningtask , without fine-tuning . To the best of our knowledge, this is the first latent variable model (LVM) for text modelling that achieves competitive performance with non-LVM models . | We also demonstrate the versatility of sentenceMIM by utilizing a trained model for question-answering , a transfer learningtask , without fine-tuning , outperforming VAE and AE with similar architectures . | meaning-changed | 2003.02645 | 2 |
Empathetic conversational models have been shown to improve user satisfaction and task outcomes in numerous domains. | <clarity> Empathetic conversational models have been shown to improve user satisfaction and task outcomes in numerous domains. | Empathetic dialogue systems have been shown to improve user satisfaction and task outcomes in numerous domains. | clarity | 2004.12316 | 1 |
In addition, our empirical analysis also suggests that persona plays an important role in empathetic conversations . | <clarity> In addition, our empirical analysis also suggests that persona plays an important role in empathetic conversations . | In addition, our empirical analysis also suggests that persona plays an important role in empathetic dialogues . | clarity | 2004.12316 | 1 |
To this end, we propose a new task towards persona-based empathetic conversations and present the first empirical study on the impacts of persona on empathetic responding. | <clarity> To this end, we propose a new task towards persona-based empathetic conversations and present the first empirical study on the impacts of persona on empathetic responding. | To this end, we propose a new task to endow empathetic dialogue systems with personas and present the first empirical study on the impacts of persona on empathetic responding. | clarity | 2004.12316 | 1 |
Specifically, we first present a novel large-scale multi-domain dataset for persona-based empathetic conversations . | <clarity> Specifically, we first present a novel large-scale multi-domain dataset for persona-based empathetic conversations . | Specifically, we first present a novel large-scale multi-domain dataset for empathetic dialogues with personas . | clarity | 2004.12316 | 1 |
Notably, our results show that persona improves empathetic responding more when CoBERT is trained on empathetic conversations than non-empathetic ones, establishing an empirical link between persona and empathy in human conversations . | <clarity> Notably, our results show that persona improves empathetic responding more when CoBERT is trained on empathetic conversations than non-empathetic ones, establishing an empirical link between persona and empathy in human conversations . | Notably, our results show that persona improves empathetic responding more when CoBERT is trained on empathetic dialogues than non-empathetic ones, establishing an empirical link between persona and empathy in human conversations . | clarity | 2004.12316 | 1 |
Notably, our results show that persona improves empathetic responding more when CoBERT is trained on empathetic conversations than non-empathetic ones, establishing an empirical link between persona and empathy in human conversations . | <clarity> Notably, our results show that persona improves empathetic responding more when CoBERT is trained on empathetic conversations than non-empathetic ones, establishing an empirical link between persona and empathy in human conversations . | Notably, our results show that persona improves empathetic responding more when CoBERT is trained on empathetic conversations than non-empathetic ones, establishing an empirical link between persona and empathy in human dialogues . | clarity | 2004.12316 | 1 |
Empathetic dialogue systems have been shown to improve user satisfaction and task outcomes in numerous domains. | <clarity> Empathetic dialogue systems have been shown to improve user satisfaction and task outcomes in numerous domains. | Empathetic conversational models have been shown to improve user satisfaction and task outcomes in numerous domains. | clarity | 2004.12316 | 2 |
In addition, our empirical analysis also suggests that persona plays an important role in empathetic dialogues . | <clarity> In addition, our empirical analysis also suggests that persona plays an important role in empathetic dialogues . | In addition, our empirical analysis also suggests that persona plays an important role in empathetic conversations . | clarity | 2004.12316 | 2 |
To this end, we propose a new task to endow empathetic dialogue systems with personas and present the first empirical study on the impacts of persona on empathetic responding. | <clarity> To this end, we propose a new task to endow empathetic dialogue systems with personas and present the first empirical study on the impacts of persona on empathetic responding. | To this end, we propose a new task towards persona-based empathetic conversations and present the first empirical study on the impacts of persona on empathetic responding. | clarity | 2004.12316 | 2 |
To this end, we propose a new task to endow empathetic dialogue systems with personas and present the first empirical study on the impacts of persona on empathetic responding. | <fluency> To this end, we propose a new task to endow empathetic dialogue systems with personas and present the first empirical study on the impacts of persona on empathetic responding. | To this end, we propose a new task to endow empathetic dialogue systems with personas and present the first empirical study on the impact of persona on empathetic responding. | fluency | 2004.12316 | 2 |
Specifically, we first present a novel large-scale multi-domain dataset for empathetic dialogues with personas . | <clarity> Specifically, we first present a novel large-scale multi-domain dataset for empathetic dialogues with personas . | Specifically, we first present a novel large-scale multi-domain dataset for persona-based empathetic conversations . | clarity | 2004.12316 | 2 |
Finally, we conduct extensive experiments to investigate the impacts of persona on empathetic responding. | <fluency> Finally, we conduct extensive experiments to investigate the impacts of persona on empathetic responding. | Finally, we conduct extensive experiments to investigate the impact of persona on empathetic responding. | fluency | 2004.12316 | 2 |
Notably, our results show that persona improves empathetic responding more when CoBERT is trained on empathetic dialogues than non-empathetic ones, establishing an empirical link between persona and empathy in human dialogues . | <clarity> Notably, our results show that persona improves empathetic responding more when CoBERT is trained on empathetic dialogues than non-empathetic ones, establishing an empirical link between persona and empathy in human dialogues . | Notably, our results show that persona improves empathetic responding more when CoBERT is trained on empathetic conversations than non-empathetic ones, establishing an empirical link between persona and empathy in human dialogues . | clarity | 2004.12316 | 2 |
Notably, our results show that persona improves empathetic responding more when CoBERT is trained on empathetic dialogues than non-empathetic ones, establishing an empirical link between persona and empathy in human dialogues . | <clarity> Notably, our results show that persona improves empathetic responding more when CoBERT is trained on empathetic dialogues than non-empathetic ones, establishing an empirical link between persona and empathy in human dialogues . | Notably, our results show that persona improves empathetic responding more when CoBERT is trained on empathetic dialogues than non-empathetic ones, establishing an empirical link between persona and empathy in human conversations . | clarity | 2004.12316 | 2 |
Automatic humor detection has interesting use cases in modern technologies, such as chatbots and personal assistants. In this paper, we describe a novel approach for detecting humor in short texts using BERT sentence embedding. | <meaning-changed> Automatic humor detection has interesting use cases in modern technologies, such as chatbots and personal assistants. In this paper, we describe a novel approach for detecting humor in short texts using BERT sentence embedding. | Automatic humor detection has interesting use cases in modern technologies, such as chatbots and virtual assistants. Based on the general linguistic structure of humor, in this paper, we describe a novel approach for detecting humor in short texts using BERT sentence embedding. | meaning-changed | 2004.12765 | 1 |
In this paper, we describe a novel approach for detecting humor in short texts using BERT sentence embedding. | <clarity> In this paper, we describe a novel approach for detecting humor in short texts using BERT sentence embedding. | In this paper, we propose a novel approach for detecting humor in short texts using BERT sentence embedding. | clarity | 2004.12765 | 1 |
In this paper, we describe a novel approach for detecting humor in short texts using BERT sentence embedding. | <fluency> In this paper, we describe a novel approach for detecting humor in short texts using BERT sentence embedding. | In this paper, we describe a novel approach for detecting humor in short texts by using BERT sentence embedding. | fluency | 2004.12765 | 1 |
Our proposed model uses BERT to generate tokens and sentence embedding for texts. | <clarity> Our proposed model uses BERT to generate tokens and sentence embedding for texts. | Our proposed method uses BERT to generate tokens and sentence embedding for texts. | clarity | 2004.12765 | 1 |
Our proposed model uses BERT to generate tokens and sentence embedding for texts. It sends embedding outputs as input to a two-layered neural networkthat predicts the target value. | <clarity> Our proposed model uses BERT to generate tokens and sentence embedding for texts. It sends embedding outputs as input to a two-layered neural networkthat predicts the target value. | Our proposed model uses BERT to generate embeddings for sentences of a given text and uses these embeddings as inputs for parallel lines of hidden layers in a neural network. These lines are finally concatenated to predict the target value. | clarity | 2004.12765 | 1 |
For evaluation , we created a new dataset for humor detection consisting of 200k formal short texts (100k positive , 100k negative). | <clarity> For evaluation , we created a new dataset for humor detection consisting of 200k formal short texts (100k positive , 100k negative). | For evaluation purposes , we created a new dataset for humor detection consisting of 200k formal short texts (100k positive , 100k negative). | clarity | 2004.12765 | 1 |
For evaluation , we created a new dataset for humor detection consisting of 200k formal short texts (100k positive , 100k negative). | <fluency> For evaluation , we created a new dataset for humor detection consisting of 200k formal short texts (100k positive , 100k negative). | For evaluation , we created a new dataset for humor detection consisting of 200k formal short texts (100k positive and 100k negative). | fluency | 2004.12765 | 1 |
Experimental results show an accuracy of 98.1 percent for the proposed method , 2.1 percent improvement compared to the best CNN and RNN models and 1.1 percent better than a fine-tuned BERT model. In addition, the combination of RNN-CNN was not successful in this task compared to the CNN model . | <clarity> Experimental results show an accuracy of 98.1 percent for the proposed method , 2.1 percent improvement compared to the best CNN and RNN models and 1.1 percent better than a fine-tuned BERT model. In addition, the combination of RNN-CNN was not successful in this task compared to the CNN model . | Experimental results show that our proposed method can determine humor in short texts with accuracy and an F1-score of 98.2 percent. Our 8-layer model with 110M parameters outperforms all baseline models with a large margin, showing the importance of utilizing linguistic structure in machine learning models . | clarity | 2004.12765 | 1
Arabic is a morphological rich language, posing many challenges for information extraction (IE) tasks, including Named Entity Recognition (NER) , Part-of-Speech tagging (POS), Argument Role Labeling (ARL), and Relation Extraction (RE). A few multilingual pre-trained models have been proposed and show good performance for Arabic, however, most experiment results are reported on language understanding tasks, such as natural language inference, question answering and sentiment analysis. | <clarity> Arabic is a morphological rich language, posing many challenges for information extraction (IE) tasks, including Named Entity Recognition (NER) , Part-of-Speech tagging (POS), Argument Role Labeling (ARL), and Relation Extraction (RE). A few multilingual pre-trained models have been proposed and show good performance for Arabic, however, most experiment results are reported on language understanding tasks, such as natural language inference, question answering and sentiment analysis. | Multilingual pre-trained models have been proposed and show good performance for Arabic, however, most experiment results are reported on language understanding tasks, such as natural language inference, question answering and sentiment analysis. | clarity | 2004.14519 | 2 |
A few multilingual pre-trained models have been proposed and show good performance for Arabic, however, most experiment results are reported on language understanding tasks, such as natural language inference, question answering and sentiment analysis. Their performance on the IE tasks is less known, in particular, the cross-lingual transfer capability from English to Arabic . | <meaning-changed> A few multilingual pre-trained models have been proposed and show good performance for Arabic, however, most experiment results are reported on language understanding tasks, such as natural language inference, question answering and sentiment analysis. Their performance on the IE tasks is less known, in particular, the cross-lingual transfer capability from English to Arabic . | A few multilingual pre-trained Transformers, such as mBERT (Devlin et al., 2019) and XLM-RoBERTa (Conneau et al., 2020a), have been shown to enable the effective cross-lingual transfer capability from English to Arabic . | meaning-changed | 2004.14519 | 2
Their performance on the IE tasks is less known, in particular, the cross-lingual transfer capability from English to Arabic . | <meaning-changed> Their performance on the IE tasks is less known, in particular, the cross-lingual transfer capability from English to Arabic . | Their performance on the IE tasks is less known, in particular, the cross-lingual zero-shot transfer. However, their performance on Arabic information extraction (IE) tasks is not very well studied . | meaning-changed | 2004.14519 | 2
In this work , we pre-train a Gigaword-based bilingual language model (GigaBERT) to study these two distant languages as well as zero-short transfer learning on various IE tasks. | <clarity> In this work , we pre-train a Gigaword-based bilingual language model (GigaBERT) to study these two distant languages as well as zero-short transfer learning on various IE tasks. | In this paper , we pre-train a Gigaword-based bilingual language model (GigaBERT) to study these two distant languages as well as zero-short transfer learning on various IE tasks. | clarity | 2004.14519 | 2
In this work , we pre-train a Gigaword-based bilingual language model (GigaBERT) to study these two distant languages as well as zero-short transfer learning on various IE tasks. Our GigaBERT outperforms multilingual BERT and and monolingual AraBERT on these tasks, in both supervised and zero-shot learning settings. footnote We have made our pre-trained models publicly available at URL | <meaning-changed> In this work , we pre-train a Gigaword-based bilingual language model (GigaBERT) to study these two distant languages as well as zero-short transfer learning on various IE tasks. Our GigaBERT outperforms multilingual BERT and and monolingual AraBERT on these tasks, in both supervised and zero-shot learning settings. footnote We have made our pre-trained models publicly available at URL | In this work , we pre-train a customized bilingual BERT, dubbed GigaBERT, that is designed specifically for Arabic NLP and English-to-Arabic zero-shot transfer learning. We study GigaBERT's effectiveness on zero-short transfer across four IE tasks: named entity recognition, part-of-speech tagging, argument role labeling, and relation extraction. Our best model significantly outperforms mBERT, XLM-RoBERTa, and AraBERT (Antoun et al., 2020) in both the supervised and zero-shot learning settings. footnote We have made our pre-trained models publicly available at URL | meaning-changed | 2004.14519 | 2
Our GigaBERT outperforms multilingual BERT and and monolingual AraBERT on these tasks, in both supervised and zero-shot learning settings. footnote We have made our pre-trained models publicly available at URL | <clarity> Our GigaBERT outperforms multilingual BERT and and monolingual AraBERT on these tasks, in both supervised and zero-shot learning settings. footnote We have made our pre-trained models publicly available at URL | Our GigaBERT outperforms multilingual BERT and and monolingual AraBERT on these tasks, in both supervised and zero-shot transfer settings. We have made our pre-trained models publicly available at URL | clarity | 2004.14519 | 2 |
We propose a novel methodology for analyzing the encoding of grammatical structure in neural language models through transfer learning. | <clarity> We propose a novel methodology for analyzing the encoding of grammatical structure in neural language models through transfer learning. | We propose transfer learning as a method for analyzing the encoding of grammatical structure in neural language models through transfer learning. | clarity | 2004.14601 | 1 |