Update spacy pipeline to 3.8.0
- README.md +10 -62
- config.cfg +4 -4
- edit_tree_lemmatizer.py +465 -465
- hu_core_news_md-any-py3-none-any.whl +2 -2
- lemma_postprocessing.py +113 -113
- lookup_lemmatizer.py +132 -132
- meta.json +189 -189
- morphologizer/model +1 -1
- ner/model +1 -1
- parser/model +1 -1
- senter/model +1 -1
- tagger/model +1 -1
- tok2vec/model +1 -1
- trainable_lemmatizer/model +1 -1
- vocab/strings.json +2 -2
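Since the packaged wheel and metadata now target spaCy 3.8, picking up this update locally amounts to reinstalling the wheel listed above and reloading the pipeline. A minimal sketch, assuming the wheel file from this commit has been downloaded and that the new build requires a 3.8.x spaCy (the exact version bound is an assumption):

```python
# Shell, using the wheel file name from the list above:
#   pip install "spacy>=3.8.0,<3.9.0" hu_core_news_md-any-py3-none-any.whl
import spacy

nlp = spacy.load("hu_core_news_md")   # loads the reinstalled package
print(spacy.__version__)              # expected: a 3.8.x release for this build
print(nlp.pipe_names)                 # tok2vec, senter, tagger, morphologizer, ...
```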
README.md
CHANGED
@@ -14,112 +14,60 @@ model-index:
    metrics:
    - name: NER Precision
      type: precision
      value: 0.
    - name: NER Recall
      type: recall
      value: 0.
    - name: NER F Score
      type: f_score
      value: 0.
  - task:
      name: TAG
      type: token-classification
    metrics:
    - name: TAG (XPOS) Accuracy
      type: accuracy
      value: 0.
  - task:
      name: POS
      type: token-classification
    metrics:
    - name: POS (UPOS) Accuracy
      type: accuracy
      value: 0.
  - task:
      name: MORPH
      type: token-classification
    metrics:
    - name: Morph (UFeats) Accuracy
      type: accuracy
      value: 0.
  - task:
      name: LEMMA
      type: token-classification
    metrics:
    - name: Lemma Accuracy
      type: accuracy
      value: 0.
  - task:
      name: UNLABELED_DEPENDENCIES
      type: token-classification
    metrics:
    - name: Unlabeled Attachment Score (UAS)
      type: f_score
      value: 0.
  - task:
      name: LABELED_DEPENDENCIES
      type: token-classification
    metrics:
    - name: Labeled Attachment Score (LAS)
      type: f_score
      value: 0.
  - task:
      name: SENTS
      type: token-classification
    metrics:
    - name: Sentences F-Score
      type: f_score
      value: 0.
---
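The scores declared in the model-index above (their numeric values are truncated in this view) also ship inside the packaged meta.json, which this commit updates. A minimal sketch of reading them at runtime, assuming the standard spaCy meta layout with a "performance" block and its usual key names:

```python
import spacy

nlp = spacy.load("hu_core_news_md")

# "performance" and the key names below follow spaCy's customary meta.json
# layout; treat them as assumptions rather than a guaranteed schema.
perf = nlp.meta.get("performance", {})
for key in ("ents_p", "ents_r", "ents_f", "tag_acc", "pos_acc",
            "morph_acc", "lemma_acc", "dep_uas", "dep_las", "sents_f"):
    print(f"{key}: {perf.get(key)}")
```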
Core Hungarian model for HuSpaCy. Components: tok2vec, senter, tagger, morphologizer, lemmatizer, parser, ner

| Feature | Description |
| --- | --- |
| **Name** | `hu_core_news_md` |
| **Version** | `3.7.0` |
| **spaCy** | `>=3.7.0,<3.8.0` |
| **Default Pipeline** | `tok2vec`, `senter`, `tagger`, `morphologizer`, `lookup_lemmatizer`, `trainable_lemmatizer`, `parser`, `ner` |
| **Components** | `tok2vec`, `senter`, `tagger`, `morphologizer`, `lookup_lemmatizer`, `trainable_lemmatizer`, `parser`, `ner` |
| **Vectors** | -1 keys, 200000 unique vectors (100 dimensions) |
| **Sources** | [UD Hungarian Szeged](https://universaldependencies.org/treebanks/hu_szeged/index.html) (Richárd Farkas, Katalin Simkó, Zsolt Szántó, Viktor Varga, Veronika Vincze (MTA-SZTE Research Group on Artificial Intelligence))<br>[NYTK-NerKor Corpus](https://github.com/nytud/NYTK-NerKor) (Eszter Simon, Noémi Vadász (Department of Language Technology and Applied Linguistics))<br>[Szeged NER Corpus](https://rgai.inf.u-szeged.hu/node/130) (György Szarvas, Richárd Farkas, László Felföldi, András Kocsor, János Csirik (MTA-SZTE Research Group on Artificial Intelligence))<br>[Hungarian lg Floret vectors](https://huggingface.co/huspacy/hu_vectors_web_lg) (Szeged AI) |
| **License** | `cc-by-sa-4.0` |
| **Author** | [SzegedAI, MILAB](https://github.com/huspacy/huspacy) |
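The default pipeline listed above maps onto the usual spaCy workflow: the tagger, morphologizer and the two lemmatizer components fill in token-level attributes, while the senter, parser and ner components provide sentences, dependencies and entities. A small usage sketch (the example sentence is arbitrary):

```python
import spacy

nlp = spacy.load("hu_core_news_md")
doc = nlp("A kormány 2020-ban jelentette be a programot Budapesten.")

# tagger / morphologizer / lemmatizer outputs
for token in doc:
    print(token.text, token.pos_, token.lemma_, token.morph)

# senter/parser- and ner-derived annotations
print([sent.text for sent in doc.sents])
print([(ent.text, ent.label_) for ent in doc.ents])
```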

### Label Scheme

<details>

<summary>View label scheme (1209 labels for 4 components)</summary>

| Component | Labels |
| --- | --- |
| **`tagger`** | `ADJ`, `ADP`, `ADV`, `AUX`, `CCONJ`, `DET`, `INTJ`, `NOUN`, `NUM`, `PART`, `PRON`, `PROPN`, `PUNCT`, `SCONJ`, `SYM`, `VERB`, `X` |
| **`morphologizer`** | `Definite=Def\|POS=DET\|PronType=Art`, `Case=Ine\|Number=Sing\|POS=NOUN`, `POS=ADV`, `Case=Nom\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|POS=ADJ\|VerbForm=PartPres`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=NOUN`, `Definite=Ind\|POS=DET\|PronType=Tot`, `Case=Ade\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `POS=PUNCT`, `Case=Nom\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Ind\|POS=DET\|PronType=Ind`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=ADP`, `POS=CCONJ`, `Case=Del\|Number=Sing\|POS=NOUN`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Sbl\|Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|POS=ADJ\|VerbForm=PartPast`, `Case=Del\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|POS=PROPN`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Sing\|POS=NOUN`, `Case=Sup\|Number=Sing\|POS=PROPN`, `Case=Ess\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Sup\|Number=Plur\|POS=NOUN`, `Degree=Pos\|POS=ADV`, `Case=Sup\|Number=Sing\|POS=NOUN`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Cau\|Number=Plur\|POS=NOUN`, `Case=Cau\|Number=Sing\|POS=NOUN`, `Case=Gen\|Number=Sing\|POS=NOUN`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Tra\|Number=Sing\|POS=ADJ\|VerbForm=PartPres`, `Case=Nom\|Number=Plur\|POS=NOUN`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Definite=Def\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ins\|Number=Sing\|POS=NOUN`, `POS=ADV\|PronType=Neg`, `Case=Ine\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `POS=SCONJ`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Sbl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Abl\|Number=Sing\|POS=NOUN`, `Case=Dat\|Number=Sing\|POS=NOUN`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|Voice=Act`, `POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Nom\|Degree=Sup\|Number=Sing\|POS=ADJ`, `POS=ADV\|PronType=Dem`, `Case=Ins\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ade\|Degree=Pos\|Number=Sing\|POS=ADJ`, `POS=ADV\|PronType=Int`, `Case=Tra\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, 
`Case=Sbl\|Number=Sing\|POS=PROPN`, `Case=Sbl\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Definite=Ind\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=PART`, `Case=Sup\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `POS=ADV\|PronType=Tot`, `Case=Ill\|Definite=Ind\|POS=DET\|PronType=Ind`, `Number=Sing\|POS=VERB\|Person=3\|VerbForm=Inf\|Voice=Act`, `Case=Ill\|Number=Sing\|POS=NOUN`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Nom\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|Voice=Act`, `Definite=Ind\|Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Sup\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Ade\|Number=Sing\|POS=ADJ\|VerbForm=PartPast`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ess\|Number=Sing\|POS=ADJ\|VerbForm=PartPres`, `Case=Acc\|Number=Sing\|POS=PROPN`, `Case=Nom\|Number=Sing\|POS=ADJ\|VerbForm=PartFut`, `Case=Ine\|NumType=Card\|Number=Sing\|POS=NUM`, `Definite=Ind\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Plur\|POS=NOUN`, `Case=Del\|Number=Plur\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Tra\|Number=Sing\|POS=NOUN`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Definite=Ind\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Definite=Ind\|POS=DET\|PronType=Art`, `Case=Dat\|Number=Plur\|POS=NOUN`, `Case=Ins\|Number=Plur\|POS=NOUN`, `Case=Sbl\|Number=Plur\|POS=NOUN`, `Case=Ela\|Number=Sing\|POS=NOUN`, `Definite=Ind\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=All\|Number=Sing\|POS=NOUN`, `Case=Ine\|Number=Plur\|POS=NOUN`, `Case=Dat\|Number=Plur\|POS=ADJ\|VerbForm=PartPres`, `Case=Ela\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Abl\|Number=Sing\|POS=PROPN`, `Case=Cau\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Ins\|Number=Sing\|POS=PROPN`, `Case=Ess\|Number=Sing\|POS=ADJ\|VerbForm=PartPast`, `Number=Plur\|POS=VERB\|Person=3\|VerbForm=Inf\|Voice=Act`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=All\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Abl\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, 
`Case=Dat\|Degree=Pos\|Number=Plur\|POS=ADJ`, `POS=ADV\|PronType=Rel`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Cau`, `Case=Del\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Gen\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Ill\|Number=Plur\|POS=NOUN`, `Case=Ela\|Number=Plur\|POS=NOUN`, `Case=Ill\|Number=Sing\|POS=PROPN`, `Case=Ela\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Definite=Def\|POS=DET\|PronType=Ind`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ter\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `POS=ADV\|VerbForm=Conv`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Sup\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Aspect=Iter\|Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Iter\|Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Ind\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Dis\|Number=Sing\|POS=NOUN`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Ade\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=All\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=ADJ\|VerbForm=PartPast`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Plur\|POS=PROPN`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Cau\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Dat\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ine\|Number=Sing\|POS=PROPN`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Cau`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Cau`, `Case=Abs\|Number=Sing\|POS=NOUN`, `Case=Ade\|Number=Sing\|POS=PROPN`, `Case=Ins\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Sup\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Number=Sing\|POS=PROPN`, `Case=Del\|Number=Sing\|POS=PROPN`, `Case=Sbl\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Loc\|Number=Sing\|POS=NOUN`, `Case=Acc\|Definite=Ind\|POS=DET\|PronType=Ind`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Definite=Ind\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, 
`Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Definite=Ind\|Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|Voice=Act`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Ter\|Number=Sing\|POS=NOUN`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `POS=X`, `Definite=Def\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Ind\|Mood=Imp\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Del\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Neg`, `Case=Tra\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Degree=Pos\|POS=ADV\|PronType=Dem`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|Reflex=Yes`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Definite=Ind\|Mood=Cnd,Pot\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ade\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Neg`, `Definite=Def\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Iter\|Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Ine\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Def\|Mood=Cnd,Pot\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Sbl\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=All\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ess\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Dat\|Number=Sing\|POS=PROPN`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Number=Plur\|POS=ADJ\|VerbForm=PartPres`, `Case=Sbl\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Ess\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3\|VerbForm=PartPast`, `Definite=Def\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Definite=Ind\|POS=DET\|PronType=Neg`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ter\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Definite=Def\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Definite=Def\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, 
`Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Nom\|Number=Plur\|POS=ADJ\|VerbForm=PartPast`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Definite=Ind\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Case=Acc\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Number=Plur\|POS=ADJ\|VerbForm=PartPast`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Cau\|Number=Sing\|POS=PROPN`, `Case=Abs\|Number=Sing\|POS=ADJ\|VerbForm=PartPres`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Ine\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ess\|Number=Sing\|POS=NOUN`, `Case=Ter\|Number=Plur\|POS=NOUN`, `Case=Tem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=INTJ`, `Case=Ine\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Number=Plur\|POS=VERB\|Person=1\|VerbForm=Inf\|Voice=Act`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Definite=Ind\|Mood=Pot\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=All\|Number=Sing\|POS=PROPN`, `Case=Ter\|Number=Sing\|POS=PROPN`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Sbl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|Voice=Act`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Neg`, `Case=Sbl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Definite=Ind\|Mood=Imp\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Definite=Ind\|Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Definite=Ind\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Sbl\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Definite=Def\|POS=DET\|PronType=Prs`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ill\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Del\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Acc\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, 
`Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Definite=Ind\|Mood=Imp,Pot\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Definite=Def\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Definite=Def\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Definite=Ind\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Cau\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ins\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Ins\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Definite=2\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Case=Sbl\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Definite=Ind\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=All\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=All\|Number=Plur\|POS=NOUN`, `Case=Ela\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Case=Abs\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ine\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Ine\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Case=Ins\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `POS=AUX\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Definite=Def\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=All\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Aspect=Iter\|Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Ter\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Gen\|Number=Plur\|POS=NOUN`, `Case=Tem\|Number=Sing\|POS=NOUN`, 
`Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ade\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ins\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `POS=ADV\|PronType=Ind`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Definite=Def\|POS=DET\|PronType=Int`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Definite=Ind\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Abs\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Del\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc\|Number=Plur\|POS=PROPN`, `Case=Abl\|NumType=Card\|Number=Sing\|POS=NUM`, `Definite=Def\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Abs\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Definite=Def\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Ela\|Number=Sing\|POS=PROPN`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ela\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Sbl\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Definite=Def\|Mood=Imp,Pot\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|POS=DET\|PronType=Tot`, `Definite=Def\|POS=DET\|PronType=Neg`, `Case=Ins\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Sup\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Sbl\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Dat\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|Voice=Act`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Cau`, `Case=Sbl\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Tra\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Ess\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ess\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Sup\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Case=Acc\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=ADJ\|VerbForm=PartPres`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=ADJ\|VerbForm=PartPast`, 
`Case=Ess\|Degree=Pos\|Number=Sing\|POS=ADJ\|VerbForm=PartPres`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Degree=Cmp\|POS=ADV`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=All\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ela\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ins\|NumType=Card\|Number=Sing\|POS=NUM`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=ADJ\|VerbForm=PartFut`, `Case=Dat\|Degree=Pos\|Number=Plur\|POS=ADJ\|VerbForm=PartPast`, `Degree=Sup\|POS=ADV`, `Case=Acc\|NumType=Card\|Number=Sing\|POS=NUM`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ade\|Number=Plur\|POS=NOUN`, `Case=Acc\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Tra\|Degree=Pos\|Number=Sing\|POS=ADJ\|VerbForm=PartPres`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ\|VerbForm=PartPres`, `Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Inf\|Voice=Act`, `Case=All\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Cau\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|Degree=Pos\|Number=Sing\|Number[psed]=Sing\|POS=ADJ`, `Case=Nom\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ela\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Ine\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Mood=Pot\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Ela\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ade\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Definite=Def\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Inf\|Voice=Act`, `Case=Ela\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Sbl\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Definite=Ind\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Case=Ade\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Dat\|Degree=Pos\|Number=Plur\|POS=ADJ\|VerbForm=PartPres`, `Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Inf\|Voice=Act`, `Case=Ine\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Number=Plur\|POS=ADV\|Person=1\|PronType=PrsPron`, `POS=ADV\|PronType=v`, `Definite=Ind\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Abl\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Number=Sing\|POS=ADV\|Person=3\|PronType=PrsPron`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|NumType[sem]=Time\|Number=Sing\|POS=NUM`, `Case=Tem\|NumType[sem]=Time\|Number=Sing\|POS=NUM`, 
`Case=Abl\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Number=Sing\|POS=ADV\|Person=1\|PronType=PrsPron`, `Case=Ter\|NumType[sem]=Time\|Number=Sing\|POS=NUM`, `Case=Ill\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Number=Sing\|POS=VERB\|Person=1\|VerbForm=Inf\|Voice=Act`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Ine\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Number=Plur\|POS=ADV\|Person=3\|PronType=PrsPron`, `Case=Ins\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ela\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ter\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Sbl\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Cas=6\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ela\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Sup\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Definite=Ind\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Sup\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Sbl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Del\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Nom\|NumType=Dist\|Number=Sing\|POS=NUM`, `Case=Sup\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Definite=Ind\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Sbl\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|Voice=Act`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|Voice=Act`, `Case=Acc\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=1`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Number=Plur\|POS=ADV\|Person=2\|PronType=PrsPron`, `Case=Ine\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|Voice=Act`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=All\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Dat\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Definite=Ind\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Sup\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Degree=Cmp\|Number=Plur\|POS=ADJ`, 
`Case=Nom\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Del\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Del\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Nom\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=All\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=All\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Abl\|Number=Plur\|POS=NOUN`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ade\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Cas=6\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ess\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Sbl\|NumType[sem]=Time\|Number=Sing\|POS=NUM`, `Case=All\|Number=Plur\|POS=PROPN`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|PronType=Ind`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Acc\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ine\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Ade\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Ins\|Number=Plur\|POS=PROPN`, `Case=Nom\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Definite=Def\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=All\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Nom\|NumType[sem]=Dot\|Number=Sing\|POS=NUM`, `Case=Sup\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Degree=Pos\|POS=ADV\|PronType=Ind`, `Case=Ela\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|Voice=Act`, `Case=Ade\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Sup\|NumType[sem]=Time\|Number=Sing\|POS=NUM`, `Case=Gen\|Number=Plur\|POS=PROPN`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Ins\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Ill\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, 
`Case=Del\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Number=Sing\|POS=ADV\|Person=2\|PronType=PrsPron`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Ill\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Degree=Cmp\|POS=ADV\|PronType=Dem`, `Case=Ins\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Ins\|NumType=Card\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ill\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Dat\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Del\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Tot`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Acc\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=1`, `Case=Sbl\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Tem\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Tem\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|NumType[sem]=Result\|Number=Sing\|POS=NUM`, `Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Acc\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Dat\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|PronType=Tot`, `Case=Sbl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ela\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Del\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Del\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Dat\|Degree=Cmp\|Number=Sing\|POS=ADJ`, 
`Definite=2\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Com\|Number=Sing\|POS=NOUN`, `Case=Tra\|Number=Plur\|POS=NOUN`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Tot`, `Case=Ade\|Number=Plur\|POS=PROPN`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Ess\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Dat\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Definite=Def\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Sbl\|NumType[sem]=Quotient\|Number=Sing\|POS=NUM`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int`, `Case=Del\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Del\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Ade\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ins\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Ess\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Del\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Cau\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Tem\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ill\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Ill\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Nom\|Degree=Pos\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person[psor]=1`, `Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Gen\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=1`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Dat\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Ins\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Dat\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=1`, `Case=Sbl\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Acc\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Definite=Def\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Sbl\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Sup\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Tem\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ins\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Tra\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Abs\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=All\|Number=Plur\|POS=PRON\|Person=1\|PronType=Tot`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|PronType=Ind`, `Case=Ine\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Sup\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=All\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Cas=1\|Number=Sing\|POS=PROPN`, 
`Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|Voice=Act`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=2\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=1`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Sbl\|NumType[sem]=Result\|Number=Sing\|POS=NUM`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Sbl\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|Voice=Act`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Ela\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Dat\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Tot`, `Definite=Ind\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Cau\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Acc\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|PronType=Tot`, `Case=Abl\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Tra\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Cau\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Sup\|Number=Plur\|POS=PROPN`, `Case=Ess\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Def\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dis\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ill\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Nom\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=1`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Cas=6\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int`, `Case=Sup\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Sbl\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Sup\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Gen\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Abs\|Number=Plur\|POS=NOUN`, `Case=Sup\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|PronType=Tot`, `Case=Ine\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Tra\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Sbl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, 
`Case=Ins\|Degree=Cmp\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Sbl\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=All\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Abl\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PROPN\|Person[psor]=1`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Del\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Del\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=All\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Ade\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ter\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|Reflexive=Yes`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Sup\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ine\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Sbl\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Definite=Ind\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Gen\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=All\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ill\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Definite=2\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Cas=6\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Cau\|Number=Plur\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Abl\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Abs\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Acc\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|NumType=Card\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Cau\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=All\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Ade\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Ill\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Del\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Cau\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Number=Sing\|POS=VERB\|Person=2\|VerbForm=Inf\|Voice=Act`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Cau\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Ine\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Ela\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Sup\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, 
`Case=Cau\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Sbl\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Ter\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Tra\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Definite=Ind\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Acc\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Dat\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Dat\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Tem\|Number=Plur\|POS=NOUN`, `Case=Abs\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Sbl\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ins\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=All\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Acc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=All\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Plur\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Ade\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ade\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=2\|PronType=Tot`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Cau\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Del\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Cau\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ill\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Ade\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|Reflexive=Yes`, `Case=Ins\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=All\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ess\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Cau\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Cas=6\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Tra\|NumType=Card\|Number=Sing\|POS=NUM`, `Number=Plur\|POS=VERB\|Person=2\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Cas=6\|Number=Sing\|POS=NOUN`, `Case=Ins\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Ela\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Sup\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Abl\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|PronType=Int`, 
`Case=Ill\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=Del\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Tra\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Sbl\|NumType=Card\|Number=Plur\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Acc\|NumType=Card\|Number=Plur\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Int`, `Case=Nom\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=2\|Reflexive=Yes`, `Case=Abl\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Gen\|NumType=Card\|Number=Plur\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=All\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Acc\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Tra\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Sbl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Ins\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|PronType=Dem`, `Case=Nom\|Degree=Cmp\|Number=Plur\|Number[psed]=Sing\|POS=ADJ`, `Case=Acc\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Cau\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=2\|PronType=Ind`, `Case=All\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Tem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Cau\|Number=Plur\|POS=PROPN`, `Case=Ade\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ade\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=All\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Del\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|Voice=Act`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|Voice=Act`, `Case=Sup\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=Tra\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Sup\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Nom\|NumType=Card\|Number=Plur\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Sbl\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Nom\|Number=Plur\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=All\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ill\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ela\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Ela\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Del\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, 
`Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Tra\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ter\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=All\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Ter\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=All\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|Voice=Act`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Cau\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Ins\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|Poss=Yes\|PronType=Prs`, `Case=Del\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Tem\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Del\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Sup\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Ter\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ine\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Abs\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=All\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Sup\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Cau\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Sup\|NumType=Ord\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Sup\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Int`, `Case=Ela\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Dat\|NumType=Ord\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Ill\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=All\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=1`, `Case=Ine\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=1`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=1`, `Case=Ela\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Ade\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ill\|Number=Sing\|Number[psor]=Plur\|POS=PROPN\|Person[psor]=1`, `Case=Ade\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=All\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Acc\|Degree=Pos\|Number=Plur\|Number[psor]=Plur\|POS=ADJ\|Person[psor]=3`, `Case=Dat\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Nom\|Number=Sing\|POS=SYM\|Type=w`, `Case=Gen\|Number=Sing\|POS=SYM\|Type=w`, `Case=Abl\|Number=Sing\|POS=SYM\|Type=w`, `Case=Acc\|Number=Sing\|POS=SYM\|Type=w`, `Case=Ade\|Degree=Pos\|Number=Plur\|POS=ADJ`, 
`Case=All\|Number=Sing\|POS=SYM\|Type=w`, `Case=Tra\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Ins\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Abl\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Plur\|Number[psed]=Sing\|POS=ADJ`, `Case=Sup\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Sup\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Nom\|NumType[sem]=Quotient\|Number=Sing\|POS=NUM`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=PROPN\|Person[psor]=1`, `Case=Ins\|Number=Sing\|Number[psed]=Plur\|POS=NOUN`, `Case=Gen\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Ine\|Degree=Pos\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person[psor]=3`, `Case=Abs\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ela\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Dat\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|POS=SYM\|Type=o`, `Case=Gen\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Sup\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|NumType[sem]=Signed\|Number=Sing\|POS=NUM`, `Case=Com\|Number=Sing\|POS=PROPN`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Ins\|Number=Sing\|Number[psed]=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ill\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Nom\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ins\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Gen\|NumType=Dist\|Number=Sing\|POS=NUM`, `Case=Nom\|NumType[sem]=Formula\|Number=Sing\|POS=NUM`, `Case=Del\|Number=Sing\|POS=SYM\|Type=w`, `Case=Ade\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Rel`, `Case=Ine\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person[psor]=3`, `Case=Ade\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc\|Number=Sing\|POS=SYM\|Type=o`, `Case=Ins\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ela\|Number=Sing\|POS=SYM\|Type=o`, `Case=Dat\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=All\|Number=Plur\|Number[psed]=Sing\|POS=SYM\|Type=w`, `Case=Ade\|Number=Sing\|POS=SYM\|Type=w`, `Case=Sbl\|Number=Sing\|POS=SYM\|Type=w`, `Case=Ade\|NumType=Card\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ill\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Acc\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Ill\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Acc\|Degree=Sup\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Sup\|NumType=Card\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Dat\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Ill\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Sup\|Number=Sing\|POS=SYM\|Type=w`, `Case=Ine\|Degree=Pos\|Number=Plur\|POS=ADJ`, 
`Case=Gen\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Ins\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Ela\|Number=Sing\|POS=SYM\|Type=w`, `Case=Sbl\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Nom\|Number=Sing\|POS=SYM\|Type=p`, `Case=Abl\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|NumType[sem]=Measure\|Number=Sing\|POS=NUM`, `Case=Abs\|Number=Sing\|POS=PROPN`, `Case=Ins\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Nom\|Number=Sing\|Number[psed]=Plur\|POS=NOUN`, `Case=Nom\|Number=Sing\|POS=SYM\|Type=m`, `Case=Acc\|Number=Sing\|POS=SYM\|Type=m`, `Case=Sup\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Ine\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=SYM\|Type=o`, `Case=Ins\|Number=Sing\|POS=SYM\|Type=o`, `Case=Ins\|Number=Sing\|POS=SYM\|Type=w`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int`, `Case=Acc\|Number=Sing\|Number[psed]=Plur\|POS=NOUN`, `Case=Gen\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Sbl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int`, `Case=Abl\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Acc\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Abs\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Ill\|Number=Sing\|POS=SYM\|Type=w`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Abl\|NumType[sem]=Time\|Number=Sing\|POS=NUM`, `Case=Gen\|Degree=Sup\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Abs\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|Poss=Yes\|PronType=Prs`, `Case=Sup\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Ins\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Sup\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Abs\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Acc\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Acc\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ter\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Dat\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Acc\|NumType[sem]=Percent\|Number=Sing\|POS=NUM`, `Case=Ter\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ade\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ins\|NumType[sem]=Percent\|Number=Sing\|POS=NUM`, `Case=Ins\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Gen\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Dat\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Sbl\|NumType[sem]=Percent\|Number=Sing\|POS=NUM`, `Case=Ine\|NumType[sem]=Percent\|Number=Sing\|POS=NUM`, `Case=All\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ade\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Nom\|NumType[sem]=Percent\|Number=Sing\|POS=NUM`, 
`Case=All\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Abl\|NumType=Card\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ter\|NumType=Card\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Acc\|NumType=Card\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ter\|NumType[sem]=Formula\|Number=Sing\|POS=NUM`, `Case=Sbl\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=All\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Del\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Cau\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Gen\|Number=Plur\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Ins\|NumType=Ord\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Ade\|NumType=Frac\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Ine\|NumType=Frac\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Sup\|NumType=Card\|Number=Plur\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Tra\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ine\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Gen\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Tra\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Gen\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Gen\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Tem\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Dat\|NumType[sem]=Dot\|Number=Sing\|POS=NUM`, `Case=Sbl\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=All\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ine\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=All\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Sbl\|Number=Plur\|POS=PROPN`, `Case=Tra\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Sup\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Dat\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Dat\|Number=Sing\|POS=SYM\|Type=w`, `Case=Ill\|Number=Plur\|POS=PROPN`, `Case=Loc\|Number=Sing\|POS=PROPN`, `Case=Ess\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Acc\|Degree=Pos\|Number=Plur\|Number[psed]=Sing\|POS=ADJ`, `Case=Abl\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=All\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Ade\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Ine\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Ine\|Number=Sing\|POS=SYM\|Type=w`, `Case=Cau\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ela\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Abs\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Sbl\|NumType[sem]=Dot\|Number=Sing\|POS=NUM`, `Case=Tem\|Number=Sing\|POS=PROPN`, `Case=Del\|NumType[sem]=Dot\|Number=Sing\|POS=NUM`, `Case=Ade\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Ine\|Number=Sing\|Number[psor]=Plur\|POS=PROPN\|Person[psor]=1`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Acc\|Degree=Sup\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Ade\|Number=Plur\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ela\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, 
`Case=Acc\|Number=Plur\|Number[psed]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Del\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|Degree=Sup\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Dat\|Number=Plur\|POS=PROPN`, `Case=Ill\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Sbl\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Ter\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ess\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Sup\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=2\|PronType=Tot`, `Case=Gen\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Int`, `Case=All\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Definite=Ind\|Mood=Imp\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|Voice=Act`, `Case=Tra\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ins\|NumType=Card\|Number=Plur\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Del\|Number=Sing\|POS=PRON\|Person=2\|Reflexive=Yes`, `Case=Sbl\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=1`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=2\|PronType=Ind`, `Case=All\|Number=Sing\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Sbl\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Ill\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Ine\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Del\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=2\|PronType=Tot`, `Case=Abl\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Int`, `Case=Ine\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Cau\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Del\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Nom\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=2`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Ine\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Definite=2\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Ela\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Sing\|POS=SYM\|Type=p`, `Case=Abl\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ine\|Number=Plur\|POS=PROPN`, `Case=Sbl\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Ins\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|Poss=Yes\|PronType=Prs`, `Case=Ter\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, 
`Case=All\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Tot` |
| **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `advmod:locy`, `advmod:mode`, `advmod:que`, `advmod:tfrom`, `advmod:tlocy`, `advmod:to`, `advmod:tto`, `amod:att`, `appos`, `aux`, `case`, `cc`, `ccomp`, `ccomp:obj`, `ccomp:obl`, `ccomp:pred`, `compound`, `compound:preverb`, `conj`, `cop`, `csubj`, `dep`, `det`, `flat:name`, `iobj`, `list`, `mark`, `nmod`, `nmod:att`, `nmod:obl`, `nsubj`, `nummod`, `obj`, `obj:lvc`, `obl`, `orphan`, `parataxis`, `punct`, `xcomp` |
| **`ner`** | `LOC`, `MISC`, `ORG`, `PER` |

</details>

### Accuracy

| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 99.99 |
| `TOKEN_P` | 99.86 |
| `TOKEN_R` | 99.93 |
| `TOKEN_F` | 99.89 |
| `SENTS_P` | 97.11 |
| `SENTS_R` | 97.33 |
| `SENTS_F` | 97.22 |
| `TAG_ACC` | 96.96 |
| `POS_ACC` | 96.89 |
| `MORPH_ACC` | 94.51 |
| `MORPH_MICRO_P` | 97.64 |
| `MORPH_MICRO_R` | 96.84 |
| `MORPH_MICRO_F` | 97.24 |
| `LEMMA_ACC` | 97.45 |
| `DEP_UAS` | 80.90 |
| `DEP_LAS` | 73.69 |
| `ENTS_P` | 84.41 |
| `ENTS_R` | 83.68 |
| `ENTS_F` | 84.05 |
+ value: 0.8499734936
+ value: 0.8456399437
+ value: 0.8478011809
+ value: 0.9710512465
+ value: 0.9685137334
+ value: 0.9431524548
+ value: 0.974069467
+ value: 0.818445411
+ value: 0.7425002788
+ value: 0.98
---
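
The updated scores above are also packaged with the model, so they can be checked against a local installation. A minimal sketch, assuming `hu_core_news_md` 3.8.0 is installed and that spaCy's usual `meta.json` layout (`nlp.meta["performance"]`) applies:

```python
# Minimal sketch: load the installed pipeline and read back its labels and reported scores.
# The performance key names ("ents_f", "lemma_acc", ...) are assumed to follow
# spaCy's standard meta.json layout.
import spacy

nlp = spacy.load("hu_core_news_md")
print(nlp.pipe_names)              # tok2vec, senter, tagger, morphologizer, ...
print(nlp.get_pipe("ner").labels)  # ('LOC', 'MISC', 'ORG', 'PER')
performance = nlp.meta.get("performance", {})
print(performance.get("ents_f"), performance.get("lemma_acc"))
```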
config.cfg
CHANGED
@@ -1,8 +1,8 @@
 [paths]
-parser_model = "models/hu_core_news_md-parser-3.
+parser_model = "models/hu_core_news_md-parser-3.8.0/model-best"
-ner_model = "models/hu_core_news_md-ner-3.
+ner_model = "models/hu_core_news_md-ner-3.8.0/model-best"
-lemmatizer_lookups = "models/hu_core_news_md-lookup-lemmatizer-3.
+lemmatizer_lookups = "models/hu_core_news_md-lookup-lemmatizer-3.8.0"
-tagger_model = "models/hu_core_news_md-tagger-3.
+tagger_model = "models/hu_core_news_md-tagger-3.8.0/model-best"
 train = null
 dev = null
 vectors = null
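
The `[paths]` entries above are only used when the pipeline is assembled or trained, to point spaCy at the individually trained component models. A minimal sketch of reading such a config and overriding one of the paths, assuming a local copy of this `config.cfg`; the override value is illustrative:

```python
# Minimal sketch: load the training/assembly config and override a sourced-model path.
# The file location and the override value are assumptions for illustration only.
from spacy import util

config = util.load_config(
    "config.cfg",
    overrides={"paths.tagger_model": "models/hu_core_news_md-tagger-3.8.0/model-best"},
)
print(config["paths"]["tagger_model"])
```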
edit_tree_lemmatizer.py
CHANGED
@@ -1,465 +1,465 @@
|
|
1 |
-
from functools import lru_cache
|
2 |
-
|
3 |
-
from typing import cast, Any, Callable, Dict, Iterable, List, Optional
|
4 |
-
from typing import Sequence, Tuple, Union
|
5 |
-
from collections import Counter
|
6 |
-
from copy import deepcopy
|
7 |
-
from itertools import islice
|
8 |
-
import numpy as np
|
9 |
-
|
10 |
-
import srsly
|
11 |
-
from thinc.api import Config, Model, SequenceCategoricalCrossentropy, NumpyOps
|
12 |
-
from thinc.types import Floats2d, Ints2d
|
13 |
-
|
14 |
-
from spacy.pipeline._edit_tree_internals.edit_trees import EditTrees
|
15 |
-
from spacy.pipeline._edit_tree_internals.schemas import validate_edit_tree
|
16 |
-
from spacy.pipeline.lemmatizer import lemmatizer_score
|
17 |
-
from spacy.pipeline.trainable_pipe import TrainablePipe
|
18 |
-
from spacy.errors import Errors
|
19 |
-
from spacy.language import Language
|
20 |
-
from spacy.tokens import Doc, Token
|
21 |
-
from spacy.training import Example, validate_examples, validate_get_examples
|
22 |
-
from spacy.vocab import Vocab
|
23 |
-
from spacy import util
|
24 |
-
|
25 |
-
|
26 |
-
TOP_K_GUARDRAIL = 20
|
27 |
-
|
28 |
-
|
29 |
-
default_model_config = """
|
30 |
-
[model]
|
31 |
-
@architectures = "spacy.Tagger.v2"
|
32 |
-
|
33 |
-
[model.tok2vec]
|
34 |
-
@architectures = "spacy.HashEmbedCNN.v2"
|
35 |
-
pretrained_vectors = null
|
36 |
-
width = 96
|
37 |
-
depth = 4
|
38 |
-
embed_size = 2000
|
39 |
-
window_size = 1
|
40 |
-
maxout_pieces = 3
|
41 |
-
subword_features = true
|
42 |
-
"""
|
43 |
-
DEFAULT_EDIT_TREE_LEMMATIZER_MODEL = Config().from_str(default_model_config)["model"]
|
44 |
-
|
45 |
-
|
46 |
-
@Language.factory(
|
47 |
-
"trainable_lemmatizer_v2",
|
48 |
-
assigns=["token.lemma"],
|
49 |
-
requires=[],
|
50 |
-
default_config={
|
51 |
-
"model": DEFAULT_EDIT_TREE_LEMMATIZER_MODEL,
|
52 |
-
"backoff": "orth",
|
53 |
-
"min_tree_freq": 3,
|
54 |
-
"overwrite": False,
|
55 |
-
"top_k": 1,
|
56 |
-
"overwrite_labels": True,
|
57 |
-
"scorer": {"@scorers": "spacy.lemmatizer_scorer.v1"},
|
58 |
-
},
|
59 |
-
default_score_weights={"lemma_acc": 1.0},
|
60 |
-
)
|
61 |
-
def make_edit_tree_lemmatizer(
|
62 |
-
nlp: Language,
|
63 |
-
name: str,
|
64 |
-
model: Model,
|
65 |
-
backoff: Optional[str],
|
66 |
-
min_tree_freq: int,
|
67 |
-
overwrite: bool,
|
68 |
-
top_k: int,
|
69 |
-
overwrite_labels: bool,
|
70 |
-
scorer: Optional[Callable],
|
71 |
-
):
|
72 |
-
"""Construct an EditTreeLemmatizer component."""
|
73 |
-
return EditTreeLemmatizer(
|
74 |
-
nlp.vocab,
|
75 |
-
model,
|
76 |
-
name,
|
77 |
-
backoff=backoff,
|
78 |
-
min_tree_freq=min_tree_freq,
|
79 |
-
overwrite=overwrite,
|
80 |
-
top_k=top_k,
|
81 |
-
overwrite_labels=overwrite_labels,
|
82 |
-
scorer=scorer,
|
83 |
-
)
|
84 |
-
|
85 |
-
|
86 |
-
# _f = open("lemmatizer.log", "w")
|
87 |
-
# def debug(*args):
|
88 |
-
# _f.write(" ".join(args) + "\n")
|
89 |
-
def debug(*args):
|
90 |
-
pass
|
91 |
-
|
92 |
-
|
93 |
-
class EditTreeLemmatizer(TrainablePipe):
|
94 |
-
"""
|
95 |
-
Lemmatizer that lemmatizes each word using a predicted edit tree.
|
96 |
-
"""
|
97 |
-
|
98 |
-
def __init__(
|
99 |
-
self,
|
100 |
-
vocab: Vocab,
|
101 |
-
model: Model,
|
102 |
-
name: str = "trainable_lemmatizer",
|
103 |
-
*,
|
104 |
-
backoff: Optional[str] = "orth",
|
105 |
-
min_tree_freq: int = 3,
|
106 |
-
overwrite: bool = False,
|
107 |
-
top_k: int = 1,
|
108 |
-
overwrite_labels,
|
109 |
-
scorer: Optional[Callable] = lemmatizer_score,
|
110 |
-
):
|
111 |
-
"""
|
112 |
-
Construct an edit tree lemmatizer.
|
113 |
-
|
114 |
-
backoff (Optional[str]): backoff to use when the predicted edit trees
|
115 |
-
are not applicable. Must be an attribute of Token or None (leave the
|
116 |
-
lemma unset).
|
117 |
-
min_tree_freq (int): prune trees that are applied less than this
|
118 |
-
frequency in the training data.
|
119 |
-
overwrite (bool): overwrite existing lemma annotations.
|
120 |
-
top_k (int): try to apply at most the k most probable edit trees.
|
121 |
-
"""
|
122 |
-
self.vocab = vocab
|
123 |
-
self.model = model
|
124 |
-
self.name = name
|
125 |
-
self.backoff = backoff
|
126 |
-
self.min_tree_freq = min_tree_freq
|
127 |
-
self.overwrite = overwrite
|
128 |
-
self.top_k = top_k
|
129 |
-
self.overwrite_labels = overwrite_labels
|
130 |
-
|
131 |
-
self.trees = EditTrees(self.vocab.strings)
|
132 |
-
self.tree2label: Dict[int, int] = {}
|
133 |
-
|
134 |
-
self.cfg: Dict[str, Any] = {"labels": []}
|
135 |
-
self.scorer = scorer
|
136 |
-
self.numpy_ops = NumpyOps()
|
137 |
-
|
138 |
-
def get_loss(
|
139 |
-
self, examples: Iterable[Example], scores: List[Floats2d]
|
140 |
-
) -> Tuple[float, List[Floats2d]]:
|
141 |
-
validate_examples(examples, "EditTreeLemmatizer.get_loss")
|
142 |
-
loss_func = SequenceCategoricalCrossentropy(normalize=False, missing_value=-1)
|
143 |
-
|
144 |
-
truths = []
|
145 |
-
for eg in examples:
|
146 |
-
eg_truths = []
|
147 |
-
for (predicted, gold_lemma, gold_pos, gold_sent_start) in zip(
|
148 |
-
eg.predicted,
|
149 |
-
eg.get_aligned("LEMMA", as_string=True),
|
150 |
-
eg.get_aligned("POS", as_string=True),
|
151 |
-
eg.get_aligned_sent_starts(),
|
152 |
-
):
|
153 |
-
if gold_lemma is None:
|
154 |
-
label = -1
|
155 |
-
else:
|
156 |
-
form = self._get_true_cased_form(
|
157 |
-
predicted.text, gold_sent_start, gold_pos
|
158 |
-
)
|
159 |
-
tree_id = self.trees.add(form, gold_lemma)
|
160 |
-
# debug(f"@get_loss: {predicted}/{gold_pos}[{gold_sent_start}]->{form}|{gold_lemma}[{tree_id}]")
|
161 |
-
label = self.tree2label.get(tree_id, 0)
|
162 |
-
eg_truths.append(label)
|
163 |
-
|
164 |
-
truths.append(eg_truths)
|
165 |
-
|
166 |
-
d_scores, loss = loss_func(scores, truths)
|
167 |
-
if self.model.ops.xp.isnan(loss):
|
168 |
-
raise ValueError(Errors.E910.format(name=self.name))
|
169 |
-
|
170 |
-
return float(loss), d_scores
|
171 |
-
|
172 |
-
def predict(self, docs: Iterable[Doc]) -> List[Ints2d]:
|
173 |
-
if self.top_k == 1:
|
174 |
-
scores2guesses = self._scores2guesses_top_k_equals_1
|
175 |
-
elif self.top_k <= TOP_K_GUARDRAIL:
|
176 |
-
scores2guesses = self._scores2guesses_top_k_greater_1
|
177 |
-
else:
|
178 |
-
scores2guesses = self._scores2guesses_top_k_guardrail
|
179 |
-
# The behaviour of *_scores2guesses_top_k_greater_1()* is efficient for values
|
180 |
-
# of *top_k>1* that are likely to be useful when the edit tree lemmatizer is used
|
181 |
-
# for its principal purpose of lemmatizing tokens. However, the code could also
|
182 |
-
# be used for other purposes, and with very large values of *top_k* the method
|
183 |
-
# becomes inefficient. In such cases, *_scores2guesses_top_k_guardrail()* is used
|
184 |
-
# instead.
|
185 |
-
n_docs = len(list(docs))
|
186 |
-
if not any(len(doc) for doc in docs):
|
187 |
-
# Handle cases where there are no tokens in any docs.
|
188 |
-
n_labels = len(self.cfg["labels"])
|
189 |
-
guesses: List[Ints2d] = [self.model.ops.alloc2i(0, n_labels) for _ in docs]
|
190 |
-
assert len(guesses) == n_docs
|
191 |
-
return guesses
|
192 |
-
scores = self.model.predict(docs)
|
193 |
-
assert len(scores) == n_docs
|
194 |
-
guesses = scores2guesses(docs, scores)
|
195 |
-
assert len(guesses) == n_docs
|
196 |
-
return guesses
|
197 |
-
|
198 |
-
def _scores2guesses_top_k_equals_1(self, docs, scores):
|
199 |
-
guesses = []
|
200 |
-
for doc, doc_scores in zip(docs, scores):
|
201 |
-
doc_guesses = doc_scores.argmax(axis=1)
|
202 |
-
doc_guesses = self.numpy_ops.asarray(doc_guesses)
|
203 |
-
|
204 |
-
doc_compat_guesses = []
|
205 |
-
for i, token in enumerate(doc):
|
206 |
-
tree_id = self.cfg["labels"][doc_guesses[i]]
|
207 |
-
form: str = self._get_true_cased_form_of_token(token)
|
208 |
-
if self.trees.apply(tree_id, form) is not None:
|
209 |
-
doc_compat_guesses.append(tree_id)
|
210 |
-
else:
|
211 |
-
doc_compat_guesses.append(-1)
|
212 |
-
guesses.append(np.array(doc_compat_guesses))
|
213 |
-
|
214 |
-
return guesses
|
215 |
-
|
216 |
-
def _scores2guesses_top_k_greater_1(self, docs, scores):
|
217 |
-
guesses = []
|
218 |
-
top_k = min(self.top_k, len(self.labels))
|
219 |
-
for doc, doc_scores in zip(docs, scores):
|
220 |
-
doc_scores = self.numpy_ops.asarray(doc_scores)
|
221 |
-
doc_compat_guesses = []
|
222 |
-
for i, token in enumerate(doc):
|
223 |
-
for _ in range(top_k):
|
224 |
-
candidate = int(doc_scores[i].argmax())
|
225 |
-
candidate_tree_id = self.cfg["labels"][candidate]
|
226 |
-
form: str = self._get_true_cased_form_of_token(token)
|
227 |
-
if self.trees.apply(candidate_tree_id, form) is not None:
|
228 |
-
doc_compat_guesses.append(candidate_tree_id)
|
229 |
-
break
|
230 |
-
doc_scores[i, candidate] = np.finfo(np.float32).min
|
231 |
-
else:
|
232 |
-
doc_compat_guesses.append(-1)
|
233 |
-
guesses.append(np.array(doc_compat_guesses))
|
234 |
-
|
235 |
-
return guesses
|
236 |
-
|
237 |
-
def _scores2guesses_top_k_guardrail(self, docs, scores):
|
238 |
-
guesses = []
|
239 |
-
for doc, doc_scores in zip(docs, scores):
|
240 |
-
doc_guesses = np.argsort(doc_scores)[..., : -self.top_k - 1 : -1]
|
241 |
-
doc_guesses = self.numpy_ops.asarray(doc_guesses)
|
242 |
-
|
243 |
-
doc_compat_guesses = []
|
244 |
-
for token, candidates in zip(doc, doc_guesses):
|
245 |
-
tree_id = -1
|
246 |
-
for candidate in candidates:
|
247 |
-
candidate_tree_id = self.cfg["labels"][candidate]
|
248 |
-
|
249 |
-
form: str = self._get_true_cased_form_of_token(token)
|
250 |
-
|
251 |
-
if self.trees.apply(candidate_tree_id, form) is not None:
|
252 |
-
tree_id = candidate_tree_id
|
253 |
-
break
|
254 |
-
doc_compat_guesses.append(tree_id)
|
255 |
-
|
256 |
-
guesses.append(np.array(doc_compat_guesses))
|
257 |
-
|
258 |
-
return guesses
|
259 |
-
|
260 |
-
def set_annotations(self, docs: Iterable[Doc], batch_tree_ids):
|
261 |
-
for i, doc in enumerate(docs):
|
262 |
-
doc_tree_ids = batch_tree_ids[i]
|
263 |
-
if hasattr(doc_tree_ids, "get"):
|
264 |
-
doc_tree_ids = doc_tree_ids.get()
|
265 |
-
for j, tree_id in enumerate(doc_tree_ids):
|
266 |
-
if self.overwrite or doc[j].lemma == 0:
|
267 |
-
# If no applicable tree could be found during prediction,
|
268 |
-
# the special identifier -1 is used. Otherwise the tree
|
269 |
-
# is guaranteed to be applicable.
|
270 |
-
if tree_id == -1:
|
271 |
-
if self.backoff is not None:
|
272 |
-
doc[j].lemma = getattr(doc[j], self.backoff)
|
273 |
-
else:
|
274 |
-
form = self._get_true_cased_form_of_token(doc[j])
|
275 |
-
lemma = self.trees.apply(tree_id, form) or form
|
276 |
-
# debug(f"@set_annotations: {doc[j]}/{doc[j].pos_}[{doc[j].is_sent_start}]->{form}|{lemma}[{tree_id}]")
|
277 |
-
doc[j].lemma_ = lemma
|
278 |
-
|
279 |
-
@property
|
280 |
-
def labels(self) -> Tuple[int, ...]:
|
281 |
-
"""Returns the labels currently added to the component."""
|
282 |
-
return tuple(self.cfg["labels"])
|
283 |
-
|
284 |
-
@property
|
285 |
-
def hide_labels(self) -> bool:
|
286 |
-
return True
|
287 |
-
|
288 |
-
@property
|
289 |
-
def label_data(self) -> Dict:
|
290 |
-
trees = []
|
291 |
-
for tree_id in range(len(self.trees)):
|
292 |
-
tree = self.trees[tree_id]
|
293 |
-
if "orig" in tree:
|
294 |
-
tree["orig"] = self.vocab.strings[tree["orig"]]
|
295 |
-
if "subst" in tree:
|
296 |
-
tree["subst"] = self.vocab.strings[tree["subst"]]
|
297 |
-
trees.append(tree)
|
298 |
-
return dict(trees=trees, labels=tuple(self.cfg["labels"]))
|
299 |
-
|
300 |
-
def initialize(
|
301 |
-
self,
|
302 |
-
get_examples: Callable[[], Iterable[Example]],
|
303 |
-
*,
|
304 |
-
nlp: Optional[Language] = None,
|
305 |
-
labels: Optional[Dict] = None,
|
306 |
-
):
|
307 |
-
validate_get_examples(get_examples, "EditTreeLemmatizer.initialize")
|
308 |
-
|
309 |
-
if self.overwrite_labels:
|
310 |
-
if labels is None:
|
311 |
-
self._labels_from_data(get_examples)
|
312 |
-
else:
|
313 |
-
self._add_labels(labels)
|
314 |
-
|
315 |
-
# Sample for the model.
|
316 |
-
doc_sample = []
|
317 |
-
label_sample = []
|
318 |
-
for example in islice(get_examples(), 10):
|
319 |
-
doc_sample.append(example.x)
|
320 |
-
gold_labels: List[List[float]] = []
|
321 |
-
for token in example.reference:
|
322 |
-
if token.lemma == 0:
|
323 |
-
gold_label = None
|
324 |
-
else:
|
325 |
-
gold_label = self._pair2label(token.text, token.lemma_)
|
326 |
-
|
327 |
-
gold_labels.append(
|
328 |
-
[
|
329 |
-
1.0 if label == gold_label else 0.0
|
330 |
-
for label in self.cfg["labels"]
|
331 |
-
]
|
332 |
-
)
|
333 |
-
|
334 |
-
gold_labels = cast(Floats2d, gold_labels)
|
335 |
-
label_sample.append(self.model.ops.asarray(gold_labels, dtype="float32"))
|
336 |
-
|
337 |
-
self._require_labels()
|
338 |
-
assert len(doc_sample) > 0, Errors.E923.format(name=self.name)
|
339 |
-
assert len(label_sample) > 0, Errors.E923.format(name=self.name)
|
340 |
-
|
341 |
-
self.model.initialize(X=doc_sample, Y=label_sample)
|
342 |
-
|
343 |
-
def from_bytes(self, bytes_data, *, exclude=tuple()):
|
344 |
-
deserializers = {
|
345 |
-
"cfg": lambda b: self.cfg.update(srsly.json_loads(b)),
|
346 |
-
"model": lambda b: self.model.from_bytes(b),
|
347 |
-
"vocab": lambda b: self.vocab.from_bytes(b, exclude=exclude),
|
348 |
-
"trees": lambda b: self.trees.from_bytes(b),
|
349 |
-
}
|
350 |
-
|
351 |
-
util.from_bytes(bytes_data, deserializers, exclude)
|
352 |
-
|
353 |
-
return self
|
354 |
-
|
355 |
-
def to_bytes(self, *, exclude=tuple()):
|
356 |
-
serializers = {
|
357 |
-
"cfg": lambda: srsly.json_dumps(self.cfg),
|
358 |
-
"model": lambda: self.model.to_bytes(),
|
359 |
-
"vocab": lambda: self.vocab.to_bytes(exclude=exclude),
|
360 |
-
"trees": lambda: self.trees.to_bytes(),
|
361 |
-
}
|
362 |
-
|
363 |
-
return util.to_bytes(serializers, exclude)
|
364 |
-
|
365 |
-
def to_disk(self, path, exclude=tuple()):
|
366 |
-
path = util.ensure_path(path)
|
367 |
-
serializers = {
|
368 |
-
"cfg": lambda p: srsly.write_json(p, self.cfg),
|
369 |
-
"model": lambda p: self.model.to_disk(p),
|
370 |
-
"vocab": lambda p: self.vocab.to_disk(p, exclude=exclude),
|
371 |
-
"trees": lambda p: self.trees.to_disk(p),
|
372 |
-
}
|
373 |
-
util.to_disk(path, serializers, exclude)
|
374 |
-
|
375 |
-
def from_disk(self, path, exclude=tuple()):
|
376 |
-
def load_model(p):
|
377 |
-
try:
|
378 |
-
with open(p, "rb") as mfile:
|
379 |
-
self.model.from_bytes(mfile.read())
|
380 |
-
except AttributeError:
|
381 |
-
raise ValueError(Errors.E149) from None
|
382 |
-
|
383 |
-
deserializers = {
|
384 |
-
"cfg": lambda p: self.cfg.update(srsly.read_json(p)),
|
385 |
-
"model": load_model,
|
386 |
-
"vocab": lambda p: self.vocab.from_disk(p, exclude=exclude),
|
387 |
-
"trees": lambda p: self.trees.from_disk(p),
|
388 |
-
}
|
389 |
-
|
390 |
-
util.from_disk(path, deserializers, exclude)
|
391 |
-
return self
|
392 |
-
|
393 |
-
def _add_labels(self, labels: Dict):
|
394 |
-
if "labels" not in labels:
|
395 |
-
raise ValueError(Errors.E857.format(name="labels"))
|
396 |
-
if "trees" not in labels:
|
397 |
-
raise ValueError(Errors.E857.format(name="trees"))
|
398 |
-
|
399 |
-
self.cfg["labels"] = list(labels["labels"])
|
400 |
-
trees = []
|
401 |
-
for tree in labels["trees"]:
|
402 |
-
errors = validate_edit_tree(tree)
|
403 |
-
if errors:
|
404 |
-
raise ValueError(Errors.E1026.format(errors="\n".join(errors)))
|
405 |
-
|
406 |
-
tree = dict(tree)
|
407 |
-
if "orig" in tree:
|
408 |
-
tree["orig"] = self.vocab.strings[tree["orig"]]
|
409 |
-
if "orig" in tree:
|
410 |
-
tree["subst"] = self.vocab.strings[tree["subst"]]
|
411 |
-
|
412 |
-
trees.append(tree)
|
413 |
-
|
414 |
-
self.trees.from_json(trees)
|
415 |
-
|
416 |
-
for label, tree in enumerate(self.labels):
|
417 |
-
self.tree2label[tree] = label
|
418 |
-
|
419 |
-
def _labels_from_data(self, get_examples: Callable[[], Iterable[Example]]):
|
420 |
-
# Count corpus tree frequencies in ad-hoc storage to avoid cluttering
|
421 |
-
# the final pipe/string store.
|
422 |
-
vocab = Vocab()
|
423 |
-
trees = EditTrees(vocab.strings)
|
424 |
-
tree_freqs: Counter = Counter()
|
425 |
-
repr_pairs: Dict = {}
|
426 |
-
for example in get_examples():
|
427 |
-
for token in example.reference:
|
428 |
-
if token.lemma != 0:
|
429 |
-
form = self._get_true_cased_form_of_token(token)
|
430 |
-
# debug("_labels_from_data", str(token) + "->" + form, token.lemma_)
|
431 |
-
tree_id = trees.add(form, token.lemma_)
|
432 |
-
tree_freqs[tree_id] += 1
|
433 |
-
repr_pairs[tree_id] = (form, token.lemma_)
|
434 |
-
|
435 |
-
# Construct trees that make the frequency cut-off using representative
|
436 |
-
# form - token pairs.
|
437 |
-
for tree_id, freq in tree_freqs.items():
|
438 |
-
if freq >= self.min_tree_freq:
|
439 |
-
form, lemma = repr_pairs[tree_id]
|
440 |
-
self._pair2label(form, lemma, add_label=True)
|
441 |
-
|
442 |
-
@lru_cache()
|
443 |
-
def _get_true_cased_form(self, token: str, is_sent_start: bool, pos: str) -> str:
|
444 |
-
if is_sent_start and pos != "PROPN":
|
445 |
-
return token.lower()
|
446 |
-
else:
|
447 |
-
return token
|
448 |
-
|
449 |
-
def _get_true_cased_form_of_token(self, token: Token) -> str:
|
450 |
-
return self._get_true_cased_form(token.text, token.is_sent_start, token.pos_)
|
451 |
-
|
452 |
-
def _pair2label(self, form, lemma, add_label=False):
|
453 |
-
"""
|
454 |
-
Look up the edit tree identifier for a form/label pair. If the edit
|
455 |
-
tree is unknown and "add_label" is set, the edit tree will be added to
|
456 |
-
the labels.
|
457 |
-
"""
|
458 |
-
tree_id = self.trees.add(form, lemma)
|
459 |
-
if tree_id not in self.tree2label:
|
460 |
-
if not add_label:
|
461 |
-
return None
|
462 |
-
|
463 |
-
self.tree2label[tree_id] = len(self.cfg["labels"])
|
464 |
-
self.cfg["labels"].append(tree_id)
|
465 |
-
return self.tree2label[tree_id]
|
|
|
1 |
+
from functools import lru_cache
|
2 |
+
|
3 |
+
from typing import cast, Any, Callable, Dict, Iterable, List, Optional
|
4 |
+
from typing import Sequence, Tuple, Union
|
5 |
+
from collections import Counter
|
6 |
+
from copy import deepcopy
|
7 |
+
from itertools import islice
|
8 |
+
import numpy as np
|
9 |
+
|
10 |
+
import srsly
|
11 |
+
from thinc.api import Config, Model, SequenceCategoricalCrossentropy, NumpyOps
|
12 |
+
from thinc.types import Floats2d, Ints2d
|
13 |
+
|
14 |
+
from spacy.pipeline._edit_tree_internals.edit_trees import EditTrees
|
15 |
+
from spacy.pipeline._edit_tree_internals.schemas import validate_edit_tree
|
16 |
+
from spacy.pipeline.lemmatizer import lemmatizer_score
|
17 |
+
from spacy.pipeline.trainable_pipe import TrainablePipe
|
18 |
+
from spacy.errors import Errors
|
19 |
+
from spacy.language import Language
|
20 |
+
from spacy.tokens import Doc, Token
|
21 |
+
from spacy.training import Example, validate_examples, validate_get_examples
|
22 |
+
from spacy.vocab import Vocab
|
23 |
+
from spacy import util
|
24 |
+
|
25 |
+
|
26 |
+
TOP_K_GUARDRAIL = 20
|
27 |
+
|
28 |
+
|
29 |
+
default_model_config = """
|
30 |
+
[model]
|
31 |
+
@architectures = "spacy.Tagger.v2"
|
32 |
+
|
33 |
+
[model.tok2vec]
|
34 |
+
@architectures = "spacy.HashEmbedCNN.v2"
|
35 |
+
pretrained_vectors = null
|
36 |
+
width = 96
|
37 |
+
depth = 4
|
38 |
+
embed_size = 2000
|
39 |
+
window_size = 1
|
40 |
+
maxout_pieces = 3
|
41 |
+
subword_features = true
|
42 |
+
"""
|
43 |
+
DEFAULT_EDIT_TREE_LEMMATIZER_MODEL = Config().from_str(default_model_config)["model"]
|
44 |
+
|
45 |
+
|
46 |
+
@Language.factory(
|
47 |
+
"trainable_lemmatizer_v2",
|
48 |
+
assigns=["token.lemma"],
|
49 |
+
requires=[],
|
50 |
+
default_config={
|
51 |
+
"model": DEFAULT_EDIT_TREE_LEMMATIZER_MODEL,
|
52 |
+
"backoff": "orth",
|
53 |
+
"min_tree_freq": 3,
|
54 |
+
"overwrite": False,
|
55 |
+
"top_k": 1,
|
56 |
+
"overwrite_labels": True,
|
57 |
+
"scorer": {"@scorers": "spacy.lemmatizer_scorer.v1"},
|
58 |
+
},
|
59 |
+
default_score_weights={"lemma_acc": 1.0},
|
60 |
+
)
|
61 |
+
def make_edit_tree_lemmatizer(
|
62 |
+
nlp: Language,
|
63 |
+
name: str,
|
64 |
+
model: Model,
|
65 |
+
backoff: Optional[str],
|
66 |
+
min_tree_freq: int,
|
67 |
+
overwrite: bool,
|
68 |
+
top_k: int,
|
69 |
+
overwrite_labels: bool,
|
70 |
+
scorer: Optional[Callable],
|
71 |
+
):
|
72 |
+
"""Construct an EditTreeLemmatizer component."""
|
73 |
+
return EditTreeLemmatizer(
|
74 |
+
nlp.vocab,
|
75 |
+
model,
|
76 |
+
name,
|
77 |
+
backoff=backoff,
|
78 |
+
min_tree_freq=min_tree_freq,
|
79 |
+
overwrite=overwrite,
|
80 |
+
top_k=top_k,
|
81 |
+
overwrite_labels=overwrite_labels,
|
82 |
+
scorer=scorer,
|
83 |
+
)
|
84 |
+
|
85 |
+
|
86 |
+
# _f = open("lemmatizer.log", "w")
|
87 |
+
# def debug(*args):
|
88 |
+
# _f.write(" ".join(args) + "\n")
|
89 |
+
def debug(*args):
|
90 |
+
pass
|
91 |
+
|
92 |
+
|
93 |
+
class EditTreeLemmatizer(TrainablePipe):
|
94 |
+
"""
|
95 |
+
Lemmatizer that lemmatizes each word using a predicted edit tree.
|
96 |
+
"""
|
97 |
+
|
98 |
+
def __init__(
|
99 |
+
self,
|
100 |
+
vocab: Vocab,
|
101 |
+
model: Model,
|
102 |
+
name: str = "trainable_lemmatizer",
|
103 |
+
*,
|
104 |
+
backoff: Optional[str] = "orth",
|
105 |
+
min_tree_freq: int = 3,
|
106 |
+
overwrite: bool = False,
|
107 |
+
top_k: int = 1,
|
108 |
+
overwrite_labels,
|
109 |
+
scorer: Optional[Callable] = lemmatizer_score,
|
110 |
+
):
|
111 |
+
"""
|
112 |
+
Construct an edit tree lemmatizer.
|
113 |
+
|
114 |
+
backoff (Optional[str]): backoff to use when the predicted edit trees
|
115 |
+
are not applicable. Must be an attribute of Token or None (leave the
|
116 |
+
lemma unset).
|
117 |
+
min_tree_freq (int): prune trees that are applied less than this
|
118 |
+
frequency in the training data.
|
119 |
+
overwrite (bool): overwrite existing lemma annotations.
|
120 |
+
top_k (int): try to apply at most the k most probable edit trees.
|
121 |
+
"""
|
122 |
+
self.vocab = vocab
|
123 |
+
self.model = model
|
124 |
+
self.name = name
|
125 |
+
self.backoff = backoff
|
126 |
+
self.min_tree_freq = min_tree_freq
|
127 |
+
self.overwrite = overwrite
|
128 |
+
self.top_k = top_k
|
129 |
+
self.overwrite_labels = overwrite_labels
|
130 |
+
|
131 |
+
self.trees = EditTrees(self.vocab.strings)
|
132 |
+
self.tree2label: Dict[int, int] = {}
|
133 |
+
|
134 |
+
self.cfg: Dict[str, Any] = {"labels": []}
|
135 |
+
self.scorer = scorer
|
136 |
+
self.numpy_ops = NumpyOps()
|
137 |
+
|
138 |
+
def get_loss(
|
139 |
+
self, examples: Iterable[Example], scores: List[Floats2d]
|
140 |
+
) -> Tuple[float, List[Floats2d]]:
|
141 |
+
validate_examples(examples, "EditTreeLemmatizer.get_loss")
|
142 |
+
loss_func = SequenceCategoricalCrossentropy(normalize=False, missing_value=-1)
|
143 |
+
|
144 |
+
truths = []
|
145 |
+
for eg in examples:
|
146 |
+
eg_truths = []
|
147 |
+
for (predicted, gold_lemma, gold_pos, gold_sent_start) in zip(
|
148 |
+
eg.predicted,
|
149 |
+
eg.get_aligned("LEMMA", as_string=True),
|
150 |
+
eg.get_aligned("POS", as_string=True),
|
151 |
+
eg.get_aligned_sent_starts(),
|
152 |
+
):
|
153 |
+
if gold_lemma is None:
|
154 |
+
label = -1
|
155 |
+
else:
|
156 |
+
form = self._get_true_cased_form(
|
157 |
+
predicted.text, gold_sent_start, gold_pos
|
158 |
+
)
|
159 |
+
tree_id = self.trees.add(form, gold_lemma)
|
160 |
+
# debug(f"@get_loss: {predicted}/{gold_pos}[{gold_sent_start}]->{form}|{gold_lemma}[{tree_id}]")
|
161 |
+
label = self.tree2label.get(tree_id, 0)
|
162 |
+
eg_truths.append(label)
|
163 |
+
|
164 |
+
truths.append(eg_truths)
|
165 |
+
|
166 |
+
d_scores, loss = loss_func(scores, truths)
|
167 |
+
if self.model.ops.xp.isnan(loss):
|
168 |
+
raise ValueError(Errors.E910.format(name=self.name))
|
169 |
+
|
170 |
+
return float(loss), d_scores
|
171 |
+
|
172 |
+
def predict(self, docs: Iterable[Doc]) -> List[Ints2d]:
|
173 |
+
if self.top_k == 1:
|
174 |
+
scores2guesses = self._scores2guesses_top_k_equals_1
|
175 |
+
elif self.top_k <= TOP_K_GUARDRAIL:
|
176 |
+
scores2guesses = self._scores2guesses_top_k_greater_1
|
177 |
+
else:
|
178 |
+
scores2guesses = self._scores2guesses_top_k_guardrail
|
179 |
+
# The behaviour of *_scores2guesses_top_k_greater_1()* is efficient for values
|
180 |
+
# of *top_k>1* that are likely to be useful when the edit tree lemmatizer is used
|
181 |
+
# for its principal purpose of lemmatizing tokens. However, the code could also
|
182 |
+
# be used for other purposes, and with very large values of *top_k* the method
|
183 |
+
# becomes inefficient. In such cases, *_scores2guesses_top_k_guardrail()* is used
|
184 |
+
# instead.
|
185 |
+
n_docs = len(list(docs))
|
186 |
+
if not any(len(doc) for doc in docs):
|
187 |
+
# Handle cases where there are no tokens in any docs.
|
188 |
+
n_labels = len(self.cfg["labels"])
|
189 |
+
guesses: List[Ints2d] = [self.model.ops.alloc2i(0, n_labels) for _ in docs]
|
190 |
+
assert len(guesses) == n_docs
|
191 |
+
return guesses
|
192 |
+
scores = self.model.predict(docs)
|
193 |
+
assert len(scores) == n_docs
|
194 |
+
guesses = scores2guesses(docs, scores)
|
195 |
+
assert len(guesses) == n_docs
|
196 |
+
return guesses
|
197 |
+
|
198 |
+
def _scores2guesses_top_k_equals_1(self, docs, scores):
|
199 |
+
guesses = []
|
200 |
+
for doc, doc_scores in zip(docs, scores):
|
201 |
+
doc_guesses = doc_scores.argmax(axis=1)
|
202 |
+
doc_guesses = self.numpy_ops.asarray(doc_guesses)
|
203 |
+
|
204 |
+
doc_compat_guesses = []
|
205 |
+
for i, token in enumerate(doc):
|
206 |
+
tree_id = self.cfg["labels"][doc_guesses[i]]
|
207 |
+
form: str = self._get_true_cased_form_of_token(token)
|
208 |
+
if self.trees.apply(tree_id, form) is not None:
|
209 |
+
doc_compat_guesses.append(tree_id)
|
210 |
+
else:
|
211 |
+
doc_compat_guesses.append(-1)
|
212 |
+
guesses.append(np.array(doc_compat_guesses))
|
213 |
+
|
214 |
+
return guesses
|
215 |
+
|
216 |
+
def _scores2guesses_top_k_greater_1(self, docs, scores):
|
217 |
+
guesses = []
|
218 |
+
top_k = min(self.top_k, len(self.labels))
|
219 |
+
for doc, doc_scores in zip(docs, scores):
|
220 |
+
doc_scores = self.numpy_ops.asarray(doc_scores)
|
221 |
+
doc_compat_guesses = []
|
222 |
+
for i, token in enumerate(doc):
|
223 |
+
for _ in range(top_k):
|
224 |
+
candidate = int(doc_scores[i].argmax())
|
225 |
+
candidate_tree_id = self.cfg["labels"][candidate]
|
226 |
+
form: str = self._get_true_cased_form_of_token(token)
|
227 |
+
if self.trees.apply(candidate_tree_id, form) is not None:
|
228 |
+
doc_compat_guesses.append(candidate_tree_id)
|
229 |
+
break
|
230 |
+
doc_scores[i, candidate] = np.finfo(np.float32).min
|
231 |
+
else:
|
232 |
+
doc_compat_guesses.append(-1)
|
233 |
+
guesses.append(np.array(doc_compat_guesses))
|
234 |
+
|
235 |
+
return guesses
|
236 |
+
|
237 |
+
def _scores2guesses_top_k_guardrail(self, docs, scores):
|
238 |
+
guesses = []
|
239 |
+
for doc, doc_scores in zip(docs, scores):
|
240 |
+
doc_guesses = np.argsort(doc_scores)[..., : -self.top_k - 1 : -1]
|
241 |
+
doc_guesses = self.numpy_ops.asarray(doc_guesses)
|
242 |
+
|
243 |
+
doc_compat_guesses = []
|
244 |
+
for token, candidates in zip(doc, doc_guesses):
|
245 |
+
tree_id = -1
|
246 |
+
for candidate in candidates:
|
247 |
+
candidate_tree_id = self.cfg["labels"][candidate]
|
248 |
+
|
249 |
+
form: str = self._get_true_cased_form_of_token(token)
|
250 |
+
|
251 |
+
if self.trees.apply(candidate_tree_id, form) is not None:
|
252 |
+
tree_id = candidate_tree_id
|
253 |
+
break
|
254 |
+
doc_compat_guesses.append(tree_id)
|
255 |
+
|
256 |
+
guesses.append(np.array(doc_compat_guesses))
|
257 |
+
|
258 |
+
return guesses
|
259 |
+
|
260 |
+
def set_annotations(self, docs: Iterable[Doc], batch_tree_ids):
|
261 |
+
for i, doc in enumerate(docs):
|
262 |
+
doc_tree_ids = batch_tree_ids[i]
|
263 |
+
if hasattr(doc_tree_ids, "get"):
|
264 |
+
doc_tree_ids = doc_tree_ids.get()
|
265 |
+
for j, tree_id in enumerate(doc_tree_ids):
|
266 |
+
if self.overwrite or doc[j].lemma == 0:
|
267 |
+
# If no applicable tree could be found during prediction,
|
268 |
+
# the special identifier -1 is used. Otherwise the tree
|
269 |
+
# is guaranteed to be applicable.
|
270 |
+
if tree_id == -1:
|
271 |
+
if self.backoff is not None:
|
272 |
+
doc[j].lemma = getattr(doc[j], self.backoff)
|
273 |
+
else:
|
274 |
+
form = self._get_true_cased_form_of_token(doc[j])
|
275 |
+
lemma = self.trees.apply(tree_id, form) or form
|
276 |
+
# debug(f"@set_annotations: {doc[j]}/{doc[j].pos_}[{doc[j].is_sent_start}]->{form}|{lemma}[{tree_id}]")
|
277 |
+
doc[j].lemma_ = lemma
|
278 |
+
|
279 |
+
@property
|
280 |
+
def labels(self) -> Tuple[int, ...]:
|
281 |
+
"""Returns the labels currently added to the component."""
|
282 |
+
return tuple(self.cfg["labels"])
|
283 |
+
|
284 |
+
@property
|
285 |
+
def hide_labels(self) -> bool:
|
286 |
+
return True
|
287 |
+
|
288 |
+
@property
|
289 |
+
def label_data(self) -> Dict:
|
290 |
+
trees = []
|
291 |
+
for tree_id in range(len(self.trees)):
|
292 |
+
tree = self.trees[tree_id]
|
293 |
+
if "orig" in tree:
|
294 |
+
tree["orig"] = self.vocab.strings[tree["orig"]]
|
295 |
+
if "subst" in tree:
|
296 |
+
tree["subst"] = self.vocab.strings[tree["subst"]]
|
297 |
+
trees.append(tree)
|
298 |
+
return dict(trees=trees, labels=tuple(self.cfg["labels"]))
|
299 |
+
|
300 |
+
def initialize(
|
301 |
+
self,
|
302 |
+
get_examples: Callable[[], Iterable[Example]],
|
303 |
+
*,
|
304 |
+
nlp: Optional[Language] = None,
|
305 |
+
labels: Optional[Dict] = None,
|
306 |
+
):
|
307 |
+
validate_get_examples(get_examples, "EditTreeLemmatizer.initialize")
|
308 |
+
|
309 |
+
if self.overwrite_labels:
|
310 |
+
if labels is None:
|
311 |
+
self._labels_from_data(get_examples)
|
312 |
+
else:
|
313 |
+
self._add_labels(labels)
|
314 |
+
|
315 |
+
# Sample for the model.
|
316 |
+
doc_sample = []
|
317 |
+
label_sample = []
|
318 |
+
for example in islice(get_examples(), 10):
|
319 |
+
doc_sample.append(example.x)
|
320 |
+
gold_labels: List[List[float]] = []
|
321 |
+
for token in example.reference:
|
322 |
+
if token.lemma == 0:
|
323 |
+
gold_label = None
|
324 |
+
else:
|
325 |
+
gold_label = self._pair2label(token.text, token.lemma_)
|
326 |
+
|
327 |
+
gold_labels.append(
|
328 |
+
[
|
329 |
+
1.0 if label == gold_label else 0.0
|
330 |
+
for label in self.cfg["labels"]
|
331 |
+
]
|
332 |
+
)
|
333 |
+
|
334 |
+
gold_labels = cast(Floats2d, gold_labels)
|
335 |
+
label_sample.append(self.model.ops.asarray(gold_labels, dtype="float32"))
|
336 |
+
|
337 |
+
self._require_labels()
|
338 |
+
assert len(doc_sample) > 0, Errors.E923.format(name=self.name)
|
339 |
+
assert len(label_sample) > 0, Errors.E923.format(name=self.name)
|
340 |
+
|
341 |
+
self.model.initialize(X=doc_sample, Y=label_sample)
|
342 |
+
|
343 |
+
def from_bytes(self, bytes_data, *, exclude=tuple()):
|
344 |
+
deserializers = {
|
345 |
+
"cfg": lambda b: self.cfg.update(srsly.json_loads(b)),
|
346 |
+
"model": lambda b: self.model.from_bytes(b),
|
347 |
+
"vocab": lambda b: self.vocab.from_bytes(b, exclude=exclude),
|
348 |
+
"trees": lambda b: self.trees.from_bytes(b),
|
349 |
+
}
|
350 |
+
|
351 |
+
util.from_bytes(bytes_data, deserializers, exclude)
|
352 |
+
|
353 |
+
return self
|
354 |
+
|
355 |
+
def to_bytes(self, *, exclude=tuple()):
|
356 |
+
serializers = {
|
357 |
+
"cfg": lambda: srsly.json_dumps(self.cfg),
|
358 |
+
"model": lambda: self.model.to_bytes(),
|
359 |
+
"vocab": lambda: self.vocab.to_bytes(exclude=exclude),
|
360 |
+
"trees": lambda: self.trees.to_bytes(),
|
361 |
+
}
|
362 |
+
|
363 |
+
return util.to_bytes(serializers, exclude)
|
364 |
+
|
365 |
+
def to_disk(self, path, exclude=tuple()):
|
366 |
+
path = util.ensure_path(path)
|
367 |
+
serializers = {
|
368 |
+
"cfg": lambda p: srsly.write_json(p, self.cfg),
|
369 |
+
"model": lambda p: self.model.to_disk(p),
|
370 |
+
"vocab": lambda p: self.vocab.to_disk(p, exclude=exclude),
|
371 |
+
"trees": lambda p: self.trees.to_disk(p),
|
372 |
+
}
|
373 |
+
util.to_disk(path, serializers, exclude)
|
374 |
+
|
375 |
+
    def from_disk(self, path, exclude=tuple()):
        def load_model(p):
            try:
                with open(p, "rb") as mfile:
                    self.model.from_bytes(mfile.read())
            except AttributeError:
                raise ValueError(Errors.E149) from None

        deserializers = {
            "cfg": lambda p: self.cfg.update(srsly.read_json(p)),
            "model": load_model,
            "vocab": lambda p: self.vocab.from_disk(p, exclude=exclude),
            "trees": lambda p: self.trees.from_disk(p),
        }

        util.from_disk(path, deserializers, exclude)
        return self

    def _add_labels(self, labels: Dict):
        if "labels" not in labels:
            raise ValueError(Errors.E857.format(name="labels"))
        if "trees" not in labels:
            raise ValueError(Errors.E857.format(name="trees"))

        self.cfg["labels"] = list(labels["labels"])
        trees = []
        for tree in labels["trees"]:
            errors = validate_edit_tree(tree)
            if errors:
                raise ValueError(Errors.E1026.format(errors="\n".join(errors)))

            tree = dict(tree)
            if "orig" in tree:
                tree["orig"] = self.vocab.strings[tree["orig"]]
            if "subst" in tree:
                tree["subst"] = self.vocab.strings[tree["subst"]]

            trees.append(tree)

        self.trees.from_json(trees)

        for label, tree in enumerate(self.labels):
            self.tree2label[tree] = label

    def _labels_from_data(self, get_examples: Callable[[], Iterable[Example]]):
        # Count corpus tree frequencies in ad-hoc storage to avoid cluttering
        # the final pipe/string store.
        vocab = Vocab()
        trees = EditTrees(vocab.strings)
        tree_freqs: Counter = Counter()
        repr_pairs: Dict = {}
        for example in get_examples():
            for token in example.reference:
                if token.lemma != 0:
                    form = self._get_true_cased_form_of_token(token)
                    # debug("_labels_from_data", str(token) + "->" + form, token.lemma_)
                    tree_id = trees.add(form, token.lemma_)
                    tree_freqs[tree_id] += 1
                    repr_pairs[tree_id] = (form, token.lemma_)

        # Construct trees that make the frequency cut-off using representative
        # form - token pairs.
        for tree_id, freq in tree_freqs.items():
            if freq >= self.min_tree_freq:
                form, lemma = repr_pairs[tree_id]
                self._pair2label(form, lemma, add_label=True)

    @lru_cache()
    def _get_true_cased_form(self, token: str, is_sent_start: bool, pos: str) -> str:
        if is_sent_start and pos != "PROPN":
            return token.lower()
        else:
            return token

    def _get_true_cased_form_of_token(self, token: Token) -> str:
        return self._get_true_cased_form(token.text, token.is_sent_start, token.pos_)

    def _pair2label(self, form, lemma, add_label=False):
        """
        Look up the edit tree identifier for a form/label pair. If the edit
        tree is unknown and "add_label" is set, the edit tree will be added to
        the labels.
        """
        tree_id = self.trees.add(form, lemma)
        if tree_id not in self.tree2label:
            if not add_label:
                return None

            self.tree2label[tree_id] = len(self.cfg["labels"])
            self.cfg["labels"].append(tree_id)
        return self.tree2label[tree_id]
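The true-casing helper above lowercases sentence-initial tokens unless they are tagged as proper nouns, so sentence-initial forms share edit trees with their lowercase counterparts. A minimal standalone sketch of that rule (re-implemented here for illustration only, not imported from the pipeline package):

    def true_cased_form(text: str, is_sent_start: bool, pos: str) -> str:
        # Sentence-initial tokens are lowercased unless they are proper nouns,
        # so "Alma" at the start of a sentence maps to the same edit tree as "alma".
        if is_sent_start and pos != "PROPN":
            return text.lower()
        return text

    print(true_cased_form("Alma", True, "NOUN"))       # -> "alma"
    print(true_cased_form("Budapest", True, "PROPN"))  # -> "Budapest"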
hu_core_news_md-any-py3-none-any.whl
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:0fd89c6ccf0efe1d7591910065c3bec4eadb1e25313d6ceea551150832b0f861
+size 127018056
lemma_postprocessing.py
CHANGED
@@ -1,113 +1,113 @@
"""
This module contains various rule-based components aiming to improve on baseline lemmatization tools.
"""

import re
from typing import List, Callable

from spacy.lang.hu import Hungarian
from spacy.pipeline import Pipe
from spacy.tokens import Token
from spacy.tokens.doc import Doc


@Hungarian.component(
    "lemma_case_smoother",
    assigns=["token.lemma"],
    requires=["token.lemma", "token.pos"],
)
def lemma_case_smoother(doc: Doc) -> Doc:
    """Smooth lemma casing by POS.

    DEPRECATED: This is not needed anymore, as the lemmatizer is now case-insensitive.

    Args:
        doc (Doc): Input document.

    Returns:
        Doc: Output document.
    """
    for token in doc:
        if token.is_sent_start and token.tag_ != "PROPN":
            token.lemma_ = token.lemma_.lower()

    return doc


class LemmaSmoother(Pipe):
    """Smooths lemmas by fixing common errors of the edit-tree lemmatizer."""

    _DATE_PATTERN = re.compile(r"(\d+)-j?[éá]?n?a?(t[őó]l)?")
    _NUMBER_PATTERN = re.compile(r"(\d+([-,/_.:]?(._)?\d+)*%?)")

    # noinspection PyUnusedLocal
    @staticmethod
    @Hungarian.factory("lemma_smoother", assigns=["token.lemma"], requires=["token.lemma", "token.pos"])
    def create_lemma_smoother(nlp: Hungarian, name: str) -> "LemmaSmoother":
        return LemmaSmoother()

    def __call__(self, doc: Doc) -> Doc:
        rules: List[Callable] = [
            self._remove_exclamation_marks,
            self._remove_question_marks,
            self._remove_date_suffixes,
            self._remove_suffix_after_numbers,
        ]

        for token in doc:
            for rule in rules:
                rule(token)

        return doc

    @classmethod
    def _remove_exclamation_marks(cls, token: Token) -> None:
        """Removes exclamation marks from the lemma.

        Args:
            token (Token): The original token.
        """

        if "!" != token.lemma_:
            exclamation_mark_index = token.lemma_.find("!")
            if exclamation_mark_index != -1:
                token.lemma_ = token.lemma_[:exclamation_mark_index]

    @classmethod
    def _remove_question_marks(cls, token: Token) -> None:
        """Removes question marks from the lemma.

        Args:
            token (Token): The original token.
        """

        if "?" != token.lemma_:
            question_mark_index = token.lemma_.find("?")
            if question_mark_index != -1:
                token.lemma_ = token.lemma_[:question_mark_index]

    @classmethod
    def _remove_date_suffixes(cls, token: Token) -> None:
        """Fixes the suffixes of dates.

        Args:
            token (Token): The original token.
        """

        if token.pos_ == "NOUN":
            match = cls._DATE_PATTERN.match(token.lemma_)
            if match is not None:
                token.lemma_ = match.group(1) + "."

    @classmethod
    def _remove_suffix_after_numbers(cls, token: Token) -> None:
        """Removes suffixes after numbers.

        Args:
            token (Token): The original token.
        """

        if token.pos_ == "NUM":
            match = cls._NUMBER_PATTERN.match(token.text)
            if match is not None:
                token.lemma_ = match.group(0)
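Both components above register themselves on the Hungarian language class, so they can be added to a loaded pipeline by name. A hedged usage sketch, assuming the packaged model is installed and exposes these factories (e.g. via the package's entry points), and that `lemma_smoother` is not already part of the default pipeline:

    import spacy

    nlp = spacy.load("hu_core_news_md")
    if "lemma_smoother" not in nlp.pipe_names:   # assumption: not in the default pipeline
        nlp.add_pipe("lemma_smoother", last=True)

    doc = nlp("A rendezvény 5-én kezdődik, 10%-os kedvezménnyel.")
    print([(t.text, t.lemma_) for t in doc])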
lookup_lemmatizer.py
CHANGED
@@ -1,132 +1,132 @@
import re
from collections import defaultdict
from operator import itemgetter
from pathlib import Path
from re import Pattern
from typing import Optional, Callable, Iterable, Dict, Tuple

from spacy.lang.hu import Hungarian
from spacy.language import Language
from spacy.lookups import Lookups, Table
from spacy.pipeline import Pipe
from spacy.pipeline.lemmatizer import lemmatizer_score
from spacy.tokens import Token
from spacy.tokens.doc import Doc

# noinspection PyUnresolvedReferences
from spacy.training.example import Example
from spacy.util import ensure_path


class LookupLemmatizer(Pipe):
    """
    LookupLemmatizer learns `(token, pos, morph. feat) -> lemma` mappings during training and applies them at
    prediction time.
    """

    _number_pattern: Pattern = re.compile(r"\d")

    # noinspection PyUnusedLocal
    @staticmethod
    @Hungarian.factory(
        "lookup_lemmatizer",
        assigns=["token.lemma"],
        requires=["token.pos"],
        default_config={"scorer": {"@scorers": "spacy.lemmatizer_scorer.v1"}, "source": ""},
    )
    def create(nlp: Language, name: str, scorer: Optional[Callable], source: str) -> "LookupLemmatizer":
        return LookupLemmatizer(None, source, scorer)

    def train(self, sentences: Iterable[Iterable[Tuple[str, str, str, str]]], min_occurrences: int = 1) -> None:
        """Learns `(form, pos + feats) -> lemma` mappings from the given sentences.

        Args:
            sentences (Iterable[Iterable[Tuple[str, str, str, str]]]): Sentences to learn the mappings from
            min_occurrences (int): mappings occurring less often than this threshold are not learned
        """

        # Lookup table which maps (upos, form) to (lemma -> frequency),
        # e.g. `{ ("NOUN", "alma"): { "alma" : 99, "alom": 1} }`
        lemma_lookup_table: Dict[Tuple[str, str], Dict[str, int]] = defaultdict(lambda: defaultdict(int))

        for sentence in sentences:
            for token, pos, feats, lemma in sentence:
                token = self.__mask_numbers(token)
                lemma = self.__mask_numbers(lemma)
                feats_str = ("|" + feats) if feats else ""
                key = (token, pos + feats_str)
                lemma_lookup_table[key][lemma] += 1
        lemma_lookup_table = dict(lemma_lookup_table)

        self._lookups = Lookups()
        table = Table(name="lemma_lookups")

        lemma_freq: Dict[str, int]
        for (form, pos), lemma_freq in dict(lemma_lookup_table).items():
            most_freq_lemma, freq = sorted(lemma_freq.items(), key=itemgetter(1), reverse=True)[0]
            if freq >= min_occurrences:
                if form not in table:
                    # lemma by pos
                    table[form]: Dict[str, str] = dict()
                table[form][pos] = most_freq_lemma

        self._lookups.set_table(name="lemma_lookups", table=table)

    def __init__(
        self,
        lookups: Optional[Lookups] = None,
        source: Optional[str] = None,
        scorer: Optional[Callable] = lemmatizer_score,
    ):
        self._lookups: Optional[Lookups] = lookups
        self.scorer = scorer
        self.source = source

    def __call__(self, doc: Doc) -> Doc:
        assert self._lookups is not None, "Lookup table should be initialized first"

        token: Token
        for token in doc:
            lemma_lookup_table = self._lookups.get_table("lemma_lookups")
            masked_token = self.__mask_numbers(token.text)

            if masked_token in lemma_lookup_table:
                lemma_by_pos: Dict[str, str] = lemma_lookup_table[masked_token]
                feats_str = ("|" + str(token.morph)) if str(token.morph) else ""
                key = token.pos_ + feats_str
                if key in lemma_by_pos:
                    if masked_token != token.text:
                        # If the token contains numbers, we need to replace the numbers in the lemma as well
                        token.lemma_ = self.__replace_numbers(lemma_by_pos[key], token.text)
                    else:
                        token.lemma_ = lemma_by_pos[key]
        return doc

    # noinspection PyUnusedLocal
    def to_disk(self, path, exclude=tuple()):
        assert self._lookups is not None, "Lookup table should be initialized first"

        path: Path = ensure_path(path)
        path.mkdir(exist_ok=True)
        self._lookups.to_disk(path)

    # noinspection PyUnusedLocal
    def from_disk(self, path, exclude=tuple()) -> "LookupLemmatizer":
        path: Path = ensure_path(path)
        lookups = Lookups()
        self._lookups = lookups.from_disk(path=path)
        return self

    def initialize(self, get_examples: Callable[[], Iterable[Example]], *, nlp: Language = None) -> None:
        lookups = Lookups()
        self._lookups = lookups.from_disk(path=self.source)

    @classmethod
    def __mask_numbers(cls, token: str) -> str:
        return cls._number_pattern.sub("0", token)

    @classmethod
    def __replace_numbers(cls, lemma: str, token: str) -> str:
        return cls._number_pattern.sub(lambda match: token[match.start()], lemma)
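For reference, `train()` above consumes sentences as sequences of `(token, upos, feats, lemma)` tuples and serializes the learned table with `to_disk()`. A minimal sketch, assuming `lookup_lemmatizer.py` is importable from the working directory; the example rows and the `lemma_lookups` output directory name are illustrative, not from the repository:

    from lookup_lemmatizer import LookupLemmatizer

    # Illustrative training rows: (token, UPOS, morphological features, lemma).
    sentences = [
        [
            ("almát", "NOUN", "Case=Acc|Number=Sing", "alma"),
            ("eszik", "VERB", "Mood=Ind|Number=Sing|Person=3|Tense=Pres", "eszik"),
        ],
    ]

    lemmatizer = LookupLemmatizer()
    lemmatizer.train(sentences, min_occurrences=1)
    lemmatizer.to_disk("lemma_lookups")  # writes the learned Lookups table to this directory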
meta.json
CHANGED
@@ -1,14 +1,14 @@
{
  "lang":"hu",
  "name":"core_news_md",
  "version":"3.8.0",
  "description":"Core Hungarian model for HuSpaCy. Components: tok2vec, senter, tagger, morphologizer, lemmatizer, parser, ner",
  "author":"SzegedAI, MILAB",
  "email":"[email protected]",
  "url":"https://github.com/huspacy/huspacy",
  "license":"cc-by-sa-4.0",
  "spacy_version":">=3.8.0,<3.9.0",
  "spacy_git_version":"63f1b53",
  "vectors":{
    "width":100,
    "vectors":200000,
@@ -1268,90 +1268,90 @@
    "token_p":0.998565417,
    "token_r":0.9993300153,
    "token_f":0.9989475698,
    "sents_p":0.977827051,
    "sents_r":0.9821826281,
    "sents_f":0.98,
    "tag_acc":0.9710512465,
    "pos_acc":0.9685137334,
    "morph_acc":0.9431524548,
    "morph_micro_p":0.9750909721,
    "morph_micro_r":0.9672969489,
    "morph_micro_f":0.9711783233,
    "morph_per_feat":{
      "Definite":{"p":0.9770920991,"r":0.9752683154,"f":0.9761793554},
      "PronType":{"p":0.9718387631,"r":0.9713024283,"f":0.9715705217},
      "Case":{"p":0.9834792994,"r":0.9762892709,"f":0.9798710957},
      "Degree":{"p":0.9336283186,"r":0.877703827,"f":0.9048027444},
      "Number":{"p":0.9897668176,"r":0.988771577,"f":0.989268947},
      "Mood":{"p":0.9351648352,"r":0.94345898,"f":0.9392935982},
      "Person":{"p":0.9555016181,"r":0.9712171053,"f":0.9632952692},
      "Tense":{"p":0.9747252747,"r":0.9801104972,"f":0.9774104683},
      "VerbForm":{"p":0.9723899914,"r":0.9037690457,"f":0.9368246052},
      "Voice":{"p":0.9665314402,"r":0.9744376278,"f":0.9704684318},
      "Number[psor]":{"p":0.9956268222,"r":0.9729344729,"f":0.9841498559},
      "Person[psor]":{"p":0.9941690962,"r":0.9728958631,"f":0.9834174477},
      "NumType":{"p":0.9376498801,"r":0.9536585366,"f":0.9455864571},
      "Poss":{"p":0.5,"r":1.0,"f":0.6666666667},
      "Reflex":{"p":1.0,"r":0.375,"f":0.5454545455},
      "Reflexive":{"p":0.0,
@@ -1375,118 +1375,118 @@
      },
      "Number[psed]":{"p":1.0,"r":0.2222222222,"f":0.3636363636}
    },
    "lemma_acc":0.974069467,
    "dep_uas":0.818445411,
    "dep_las":0.7425002788,
    "dep_las_per_type":{
      "det":{"p":0.8732394366,"r":0.8885350318,"f":0.8808208366},
      "amod:att":{"p":0.8492257539,"r":0.8520032706,"f":0.8506122449},
      "nsubj":{"p":0.7138413686,"r":0.7171875,"f":0.7155105222},
      "advmod:mode":{"p":0.5352422907,"r":0.5955882353,"f":0.5638051044},
      "nmod:att":{"p":0.7360655738,"r":0.7610169492,"f":0.7483333333},
      "obl":{"p":0.7799263352,"r":0.7623762376,"f":0.7710514338},
      "obj":{"p":0.8684807256,"r":0.8606741573,"f":0.8645598194},
      "root":{"p":0.844789357,"r":0.8485523385,"f":0.8466666667},
      "cc":{"p":0.7149028078,"r":0.6968421053,"f":0.7057569296},
      "conj":{"p":0.4516129032,"r":0.525,"f":0.4855491329},
      "advmod":{"p":0.8058252427,"r":0.8736842105,"f":0.8383838384},
      "flat:name":{"p":0.8434782609,"r":0.9065420561,"f":0.8738738739},
      "appos":{"p":0.5230769231,"r":0.3617021277,"f":0.427672956},
      "advcl":{"p":0.2247191011,"r":0.2040816327,"f":0.2139037433},
      "advmod:tlocy":{"p":0.6034482759,"r":0.6086956522,"f":0.6060606061},
      "ccomp:obj":{"p":0.1818181818,"r":0.303030303,"f":0.2272727273},
      "mark":{"p":0.821656051,"r":0.8164556962,"f":0.819047619},
      "compound:preverb":{"p":0.8606557377,"r":0.9633027523,"f":0.9090909091},
      "advmod:locy":{"p":0.6666666667,"r":0.3125,"f":0.4255319149},
      "cop":{"p":0.7857142857,"r":0.5365853659,"f":0.6376811594},
      "nmod:obl":{"p":0.1951219512,"r":0.2,"f":0.1975308642},
      "advmod:to":{"p":0.0,
@@ -1494,69 +1494,74 @@
        "f":0.0
      },
      "obj:lvc":{"p":0.1666666667,"r":0.0833333333,"f":0.1111111111},
      "ccomp:obl":{"p":0.36,"r":0.28125,"f":0.3157894737},
      "iobj":{"p":0.2,"r":0.1333333333,"f":0.16},
      "case":{"p":0.9336734694,"r":0.9336734694,"f":0.9336734694},
      "csubj":{"p":0.4166666667,"r":0.2702702703,"f":0.3278688525},
      "parataxis":{"p":0.2068965517,"r":0.0821917808,"f":0.1176470588},
      "xcomp":{"p":0.8904109589,"r":0.8783783784,"f":0.8843537415},
      "nummod":{"p":0.5242718447,"r":0.5806451613,"f":0.5510204082},
      "acl":{"p":0.3333333333,"r":0.2361111111,"f":0.2764227642},
      "advmod:tto":{"p":0.2,"r":0.1,"f":0.1333333333},
      "nmod":{"p":0.2857142857,"r":0.1818181818,"f":0.2222222222},
      "ccomp":{"p":0.25,"r":0.0769230769,"f":0.1176470588},
      "dep":{"p":0.0,"r":0.0,"f":0.0},
      "aux":{"p":0.8,"r":0.6666666667,"f":0.7272727273},
      "advmod:tfrom":{"p":0.0,
@@ -1564,9 +1569,9 @@
        "f":0.0
      },
      "list":{"p":0.0769230769,"r":0.1666666667,"f":0.1052631579},
      "goeswith":{"p":0.0,
@@ -1574,14 +1579,9 @@
        "f":0.0
      },
      "compound":{"p":0.95,"r":0.95,"f":0.95},
      "obl:lvc":{"p":0.0,
@@ -1600,8 +1600,8 @@
      },
      "advmod:que":{"p":1.0,"r":0.5,"f":0.6666666667},
      "ccomp:pred":{"p":0.0,
@@ -1609,32 +1609,32 @@
        "f":0.0
      }
    },
    "ents_p":0.8499734936,
    "ents_r":0.8456399437,
    "ents_f":0.8478011809,
    "ents_per_type":{
      "ORG":{"p":0.8741573034,"r":0.9017153454,"f":0.8877225011},
      "PER":{"p":0.8958333333,"r":0.8733572282,"f":0.8844525106},
      "LOC":{"p":0.8658865887,"r":0.8350694444,"f":0.8501988511},
      "MISC":{"p":0.6382054993,"r":0.6255319149,"f":0.6318051576}
    },
    "speed":4473.6366022181
  },
  "sources":[
    {
@@ -1663,6 +1663,6 @@
    }
  ],
  "requirements":[
    "spacy>=3.8.0,<3.9.0"
  ]
}
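The accuracy figures recorded in meta.json are also available at runtime through the loaded pipeline's metadata. A small sketch, assuming the model is installed and the scores sit under the conventional spaCy v3 "performance" key:

    import spacy

    nlp = spacy.load("hu_core_news_md")
    perf = nlp.meta.get("performance", {})  # assumption: metrics nested under "performance"
    for key in ("pos_acc", "morph_acc", "lemma_acc", "dep_uas", "dep_las", "ents_f"):
        print(key, perf.get(key))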
morphologizer/model
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:d2b94e9da7c6ae76ea19b0deeaa9244606a8c4aa610a3fd2e06aae8b0253d5ad
 size 463022
ner/model
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:05134e628d4094b7e167701fc1eb41db49f7ad1ada234facd1bfe8f8c10ae022
 size 9791307
parser/model
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:7f38aee8f61557fd7529a3d51c69188f8721a893e83e7bdb78dc1fd19dc89105
 size 25601129
senter/model
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:2aff0a005319472d364853603d0700f2f1f78268161842dd99c26d8d1571f180
 size 1237
tagger/model
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:87da5da8e02ab091f00a4110ce8e690a749926272f1e079e7ae7f5935003f64e
 size 7297
tok2vec/model
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:3b31693678a7fda755d8c9b9b313572c26086c7d7dfab29716eed5864b592655
 size 9659749
trainable_lemmatizer/model
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:c0c3842d4b4e28a704f66c24be343a2f27b12ee37fb0cbe5113e384196202694
 size 11281364
vocab/strings.json
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:1f7d6d928c4beb7598b0ccfb9c58c82a4629f9de30f75923eb92a864c2a6e65b
+size 6390466