## Autoregressive and Prefix Language Modelling
Language modelling, and text generation in particular, works on the principle of generating the next token based on the tokens that precede it.
This is the principle autoregressive modelling is based on: the model predicts the next token (here, a word) from the tokens preceding it. Formally, it estimates P(w_i | w_{i-1}), where w_i is the next word, w_{i-1} is the token preceding it, and P is the probability of generating w_i given w_{i-1}.
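As a minimal sketch of this idea, the conditional probability P(w_i | w_{i-1}) can be estimated from bigram counts over a corpus. The toy corpus and the `next_token_prob` helper below are illustrative assumptions, not part of this model; a real autoregressive model such as a Transformer learns these conditionals with a neural network rather than raw counts.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model learns these statistics from large text datasets.
corpus = "the cat sat on the mat the cat ran".split()

# Count how often each token follows each preceding token.
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def next_token_prob(prev, nxt):
    """Estimate P(w_i | w_{i-1}) as count(prev, nxt) / count(prev, anything)."""
    total = sum(bigram_counts[prev].values())
    return bigram_counts[prev][nxt] / total if total else 0.0

print(next_token_prob("the", "cat"))  # 2 of the 3 continuations of "the" are "cat"
```

Here "the" is followed by "cat" twice and "mat" once, so the estimated probability of "cat" given "the" is 2/3.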
Prefix language modelling, by contrast, takes an input sequence and uses it as context when generating the next tokens: it calculates the conditional probability of the next word given that context, P(w | x), where w is the next token, x is the context, and P is the probability of generating w given context x.
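A minimal sketch of conditioning generation on a prefix follows, reusing a toy bigram estimator (an assumption for illustration; it approximates P(w | x) by conditioning only on the last token of the context, whereas a real prefix language model attends to the whole input x).

```python
from collections import Counter, defaultdict

# Toy corpus and bigram counts, as in the autoregressive example above.
corpus = "the cat sat on the mat the cat ran".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(prefix, steps=3):
    """Greedily extend the prefix x, picking w = argmax P(w | x) at each step.
    P(w | x) is approximated here by conditioning on the last token of x."""
    tokens = prefix.split()
    for _ in range(steps):
        continuations = bigrams[tokens[-1]]
        if not continuations:  # no observed continuation: stop generating
            break
        tokens.append(continuations.most_common(1)[0][0])
    return " ".join(tokens)

print(generate("on the"))
```

The input prefix is kept fixed and only serves as context; generation appends tokens after it, one conditional choice at a time.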