---
tags:
- mteb
- llama-cpp
- gguf-my-repo
base_model: Classical/Yinka
model-index:
- name: checkpoint-1431
  results:
  - task:
      type: STS
    dataset:
      name: MTEB AFQMC
      type: C-MTEB/AFQMC
      config: default
      split: validation
      revision: None
    metrics:
    - type: cos_sim_pearson
      value: 56.306314279047875
    - type: cos_sim_spearman
      value: 61.020227685004016
    - type: euclidean_pearson
      value: 58.61821670933433
    - type: euclidean_spearman
      value: 60.131457106640674
    - type: manhattan_pearson
      value: 58.6189460369694
    - type: manhattan_spearman
      value: 60.126350618526224
  - task:
      type: STS
    dataset:
      name: MTEB ATEC
      type: C-MTEB/ATEC
      config: default
      split: test
      revision: None
    metrics:
    - type: cos_sim_pearson
      value: 55.8612958476143
    - type: cos_sim_spearman
      value: 59.01977664864512
    - type: euclidean_pearson
      value: 62.028094897243655
    - type: euclidean_spearman
      value: 58.6046814257705
    - type: manhattan_pearson
      value: 62.02580042431887
    - type: manhattan_spearman
      value: 58.60626890004892
  - task:
      type: Classification
    dataset:
      name: MTEB AmazonReviewsClassification (zh)
      type: mteb/amazon_reviews_multi
      config: zh
      split: test
      revision: 1399c76144fd37290681b995c656ef9b2e06e26d
    metrics:
    - type: accuracy
      value: 49.496
    - type: f1
      value: 46.673963383873065
  - task:
      type: STS
    dataset:
      name: MTEB BQ
      type: C-MTEB/BQ
      config: default
      split: test
      revision: None
    metrics:
    - type: cos_sim_pearson
      value: 70.73971622592535
    - type: cos_sim_spearman
      value: 72.76102992060764
    - type: euclidean_pearson
      value: 71.04525865868672
    - type: euclidean_spearman
      value: 72.4032852155075
    - type: manhattan_pearson
      value: 71.03693009336658
    - type: manhattan_spearman
      value: 72.39635701224252
  - task:
      type: Clustering
    dataset:
      name: MTEB CLSClusteringP2P
      type: C-MTEB/CLSClusteringP2P
      config: default
      split: test
      revision: None
    metrics:
    - type: v_measure
      value: 56.34751074520767
  - task:
      type: Clustering
    dataset:
      name: MTEB CLSClusteringS2S
      type: C-MTEB/CLSClusteringS2S
      config: default
      split: test
      revision: None
    metrics:
    - type: v_measure
      value: 48.4856662121073
  - task:
      type: Reranking
    dataset:
      name: MTEB CMedQAv1
      type: C-MTEB/CMedQAv1-reranking
      config: default
      split: test
      revision: None
    metrics:
    - type: map
      value: 89.26384109024997
    - type: mrr
      value: 91.27261904761905
  - task:
      type: Reranking
    dataset:
      name: MTEB CMedQAv2
      type: C-MTEB/CMedQAv2-reranking
      config: default
      split: test
      revision: None
    metrics:
    - type: map
      value: 90.0464058154547
    - type: mrr
      value: 92.06480158730159
  - task:
      type: Retrieval
    dataset:
      name: MTEB CmedqaRetrieval
      type: C-MTEB/CmedqaRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 27.236
    - type: map_at_10
      value: 40.778
    - type: map_at_100
      value: 42.692
    - type: map_at_1000
      value: 42.787
    - type: map_at_3
      value: 36.362
    - type: map_at_5
      value: 38.839
    - type: mrr_at_1
      value: 41.335
    - type: mrr_at_10
      value: 49.867
    - type: mrr_at_100
      value: 50.812999999999995
    - type: mrr_at_1000
      value: 50.848000000000006
    - type: mrr_at_3
      value: 47.354
    - type: mrr_at_5
      value: 48.718
    - type: ndcg_at_1
      value: 41.335
    - type: ndcg_at_10
      value: 47.642
    - type: ndcg_at_100
      value: 54.855
    - type: ndcg_at_1000
      value: 56.449000000000005
    - type: ndcg_at_3
      value: 42.203
    - type: ndcg_at_5
      value: 44.416
    - type: precision_at_1
      value: 41.335
    - type: precision_at_10
      value: 10.568
    - type: precision_at_100
      value: 1.6400000000000001
    - type: precision_at_1000
      value: 0.184
    - type: precision_at_3
      value: 23.998
    - type: precision_at_5
      value: 17.389
    - type: recall_at_1
      value: 27.236
    - type: recall_at_10
      value: 58.80800000000001
    - type: recall_at_100
      value: 88.411
    - type: recall_at_1000
      value: 99.032
    - type: recall_at_3
      value: 42.253
    - type: recall_at_5
      value: 49.118
  - task:
      type: PairClassification
    dataset:
      name: MTEB Cmnli
      type: C-MTEB/CMNLI
      config: default
      split: validation
      revision: None
    metrics:
    - type: cos_sim_accuracy
      value: 86.03728202044498
    - type: cos_sim_ap
      value: 92.49469583272597
    - type: cos_sim_f1
      value: 86.74095974528088
    - type: cos_sim_precision
      value: 84.43657294664601
    - type: cos_sim_recall
      value: 89.17465513210195
    - type: dot_accuracy
      value: 72.21888153938664
    - type: dot_ap
      value: 80.59377163340332
    - type: dot_f1
      value: 74.96686040583258
    - type: dot_precision
      value: 66.4737793851718
    - type: dot_recall
      value: 85.94809445873275
    - type: euclidean_accuracy
      value: 85.47203848466627
    - type: euclidean_ap
      value: 91.89152584749868
    - type: euclidean_f1
      value: 86.38105975197294
    - type: euclidean_precision
      value: 83.40953625081646
    - type: euclidean_recall
      value: 89.5721299976619
    - type: manhattan_accuracy
      value: 85.3758268190018
    - type: manhattan_ap
      value: 91.88989707722311
    - type: manhattan_f1
      value: 86.39767519839052
    - type: manhattan_precision
      value: 82.76231263383298
    - type: manhattan_recall
      value: 90.36707972878185
    - type: max_accuracy
      value: 86.03728202044498
    - type: max_ap
      value: 92.49469583272597
    - type: max_f1
      value: 86.74095974528088
  - task:
      type: Retrieval
    dataset:
      name: MTEB CovidRetrieval
      type: C-MTEB/CovidRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 74.34100000000001
    - type: map_at_10
      value: 82.49499999999999
    - type: map_at_100
      value: 82.64200000000001
    - type: map_at_1000
      value: 82.643
    - type: map_at_3
      value: 81.142
    - type: map_at_5
      value: 81.95400000000001
    - type: mrr_at_1
      value: 74.71
    - type: mrr_at_10
      value: 82.553
    - type: mrr_at_100
      value: 82.699
    - type: mrr_at_1000
      value: 82.70100000000001
    - type: mrr_at_3
      value: 81.279
    - type: mrr_at_5
      value: 82.069
    - type: ndcg_at_1
      value: 74.605
    - type: ndcg_at_10
      value: 85.946
    - type: ndcg_at_100
      value: 86.607
    - type: ndcg_at_1000
      value: 86.669
    - type: ndcg_at_3
      value: 83.263
    - type: ndcg_at_5
      value: 84.71600000000001
    - type: precision_at_1
      value: 74.605
    - type: precision_at_10
      value: 9.758
    - type: precision_at_100
      value: 1.005
    - type: precision_at_1000
      value: 0.101
    - type: precision_at_3
      value: 29.996000000000002
    - type: precision_at_5
      value: 18.736
    - type: recall_at_1
      value: 74.34100000000001
    - type: recall_at_10
      value: 96.523
    - type: recall_at_100
      value: 99.473
    - type: recall_at_1000
      value: 100.0
    - type: recall_at_3
      value: 89.278
    - type: recall_at_5
      value: 92.83500000000001
  - task:
      type: Retrieval
    dataset:
      name: MTEB DuRetrieval
      type: C-MTEB/DuRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 26.950000000000003
    - type: map_at_10
      value: 82.408
    - type: map_at_100
      value: 85.057
    - type: map_at_1000
      value: 85.09100000000001
    - type: map_at_3
      value: 57.635999999999996
    - type: map_at_5
      value: 72.48
    - type: mrr_at_1
      value: 92.15
    - type: mrr_at_10
      value: 94.554
    - type: mrr_at_100
      value: 94.608
    - type: mrr_at_1000
      value: 94.61
    - type: mrr_at_3
      value: 94.292
    - type: mrr_at_5
      value: 94.459
    - type: ndcg_at_1
      value: 92.15
    - type: ndcg_at_10
      value: 89.108
    - type: ndcg_at_100
      value: 91.525
    - type: ndcg_at_1000
      value: 91.82900000000001
    - type: ndcg_at_3
      value: 88.44
    - type: ndcg_at_5
      value: 87.271
    - type: precision_at_1
      value: 92.15
    - type: precision_at_10
      value: 42.29
    - type: precision_at_100
      value: 4.812
    - type: precision_at_1000
      value: 0.48900000000000005
    - type: precision_at_3
      value: 79.14999999999999
    - type: precision_at_5
      value: 66.64
    - type: recall_at_1
      value: 26.950000000000003
    - type: recall_at_10
      value: 89.832
    - type: recall_at_100
      value: 97.921
    - type: recall_at_1000
      value: 99.471
    - type: recall_at_3
      value: 59.562000000000005
    - type: recall_at_5
      value: 76.533
  - task:
      type: Retrieval
    dataset:
      name: MTEB EcomRetrieval
      type: C-MTEB/EcomRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 53.5
    - type: map_at_10
      value: 63.105999999999995
    - type: map_at_100
      value: 63.63100000000001
    - type: map_at_1000
      value: 63.641999999999996
    - type: map_at_3
      value: 60.617
    - type: map_at_5
      value: 62.132
    - type: mrr_at_1
      value: 53.5
    - type: mrr_at_10
      value: 63.105999999999995
    - type: mrr_at_100
      value: 63.63100000000001
    - type: mrr_at_1000
      value: 63.641999999999996
    - type: mrr_at_3
      value: 60.617
    - type: mrr_at_5
      value: 62.132
    - type: ndcg_at_1
      value: 53.5
    - type: ndcg_at_10
      value: 67.92200000000001
    - type: ndcg_at_100
      value: 70.486
    - type: ndcg_at_1000
      value: 70.777
    - type: ndcg_at_3
      value: 62.853
    - type: ndcg_at_5
      value: 65.59899999999999
    - type: precision_at_1
      value: 53.5
    - type: precision_at_10
      value: 8.309999999999999
    - type: precision_at_100
      value: 0.951
    - type: precision_at_1000
      value: 0.097
    - type: precision_at_3
      value: 23.1
    - type: precision_at_5
      value: 15.2
    - type: recall_at_1
      value: 53.5
    - type: recall_at_10
      value: 83.1
    - type: recall_at_100
      value: 95.1
    - type: recall_at_1000
      value: 97.39999999999999
    - type: recall_at_3
      value: 69.3
    - type: recall_at_5
      value: 76.0
  - task:
      type: Classification
    dataset:
      name: MTEB IFlyTek
      type: C-MTEB/IFlyTek-classification
      config: default
      split: validation
      revision: None
    metrics:
    - type: accuracy
      value: 51.773759138130046
    - type: f1
      value: 40.38600802756481
  - task:
      type: Classification
    dataset:
      name: MTEB JDReview
      type: C-MTEB/JDReview-classification
      config: default
      split: test
      revision: None
    metrics:
    - type: accuracy
      value: 88.48030018761726
    - type: ap
      value: 59.2732541555627
    - type: f1
      value: 83.58836007358619
  - task:
      type: STS
    dataset:
      name: MTEB LCQMC
      type: C-MTEB/LCQMC
      config: default
      split: test
      revision: None
    metrics:
    - type: cos_sim_pearson
      value: 73.67511194245922
    - type: cos_sim_spearman
      value: 79.43347759067298
    - type: euclidean_pearson
      value: 79.04491504318766
    - type: euclidean_spearman
      value: 79.14478545356785
    - type: manhattan_pearson
      value: 79.03365022867428
    - type: manhattan_spearman
      value: 79.13172717619908
  - task:
      type: Retrieval
    dataset:
      name: MTEB MMarcoRetrieval
      type: C-MTEB/MMarcoRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 67.184
    - type: map_at_10
      value: 76.24600000000001
    - type: map_at_100
      value: 76.563
    - type: map_at_1000
      value: 76.575
    - type: map_at_3
      value: 74.522
    - type: map_at_5
      value: 75.598
    - type: mrr_at_1
      value: 69.47
    - type: mrr_at_10
      value: 76.8
    - type: mrr_at_100
      value: 77.082
    - type: mrr_at_1000
      value: 77.093
    - type: mrr_at_3
      value: 75.29400000000001
    - type: mrr_at_5
      value: 76.24
    - type: ndcg_at_1
      value: 69.47
    - type: ndcg_at_10
      value: 79.81099999999999
    - type: ndcg_at_100
      value: 81.187
    - type: ndcg_at_1000
      value: 81.492
    - type: ndcg_at_3
      value: 76.536
    - type: ndcg_at_5
      value: 78.367
    - type: precision_at_1
      value: 69.47
    - type: precision_at_10
      value: 9.599
    - type: precision_at_100
      value: 1.026
    - type: precision_at_1000
      value: 0.105
    - type: precision_at_3
      value: 28.777
    - type: precision_at_5
      value: 18.232
    - type: recall_at_1
      value: 67.184
    - type: recall_at_10
      value: 90.211
    - type: recall_at_100
      value: 96.322
    - type: recall_at_1000
      value: 98.699
    - type: recall_at_3
      value: 81.556
    - type: recall_at_5
      value: 85.931
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (zh-CN)
      type: mteb/amazon_massive_intent
      config: zh-CN
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 76.96032279757901
    - type: f1
      value: 73.48052314033545
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (zh-CN)
      type: mteb/amazon_massive_scenario
      config: zh-CN
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 84.64357767316744
    - type: f1
      value: 83.58250539497922
  - task:
      type: Retrieval
    dataset:
      name: MTEB MedicalRetrieval
      type: C-MTEB/MedicalRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 56.00000000000001
    - type: map_at_10
      value: 62.066
    - type: map_at_100
      value: 62.553000000000004
    - type: map_at_1000
      value: 62.598
    - type: map_at_3
      value: 60.4
    - type: map_at_5
      value: 61.370000000000005
    - type: mrr_at_1
      value: 56.2
    - type: mrr_at_10
      value: 62.166
    - type: mrr_at_100
      value: 62.653000000000006
    - type: mrr_at_1000
      value: 62.699000000000005
    - type: mrr_at_3
      value: 60.5
    - type: mrr_at_5
      value: 61.47
    - type: ndcg_at_1
      value: 56.00000000000001
    - type: ndcg_at_10
      value: 65.199
    - type: ndcg_at_100
      value: 67.79899999999999
    - type: ndcg_at_1000
      value: 69.056
    - type: ndcg_at_3
      value: 61.814
    - type: ndcg_at_5
      value: 63.553000000000004
    - type: precision_at_1
      value: 56.00000000000001
    - type: precision_at_10
      value: 7.51
    - type: precision_at_100
      value: 0.878
    - type: precision_at_1000
      value: 0.098
    - type: precision_at_3
      value: 21.967
    - type: precision_at_5
      value: 14.02
    - type: recall_at_1
      value: 56.00000000000001
    - type: recall_at_10
      value: 75.1
    - type: recall_at_100
      value: 87.8
    - type: recall_at_1000
      value: 97.7
    - type: recall_at_3
      value: 65.9
    - type: recall_at_5
      value: 70.1
  - task:
      type: Reranking
    dataset:
      name: MTEB MMarcoReranking
      type: C-MTEB/Mmarco-reranking
      config: default
      split: dev
      revision: None
    metrics:
    - type: map
      value: 32.74158258279793
    - type: mrr
      value: 31.56071428571428
  - task:
      type: Classification
    dataset:
      name: MTEB MultilingualSentiment
      type: C-MTEB/MultilingualSentiment-classification
      config: default
      split: validation
      revision: None
    metrics:
    - type: accuracy
      value: 78.96666666666667
    - type: f1
      value: 78.82528563818045
  - task:
      type: PairClassification
    dataset:
      name: MTEB Ocnli
      type: C-MTEB/OCNLI
      config: default
      split: validation
      revision: None
    metrics:
    - type: cos_sim_accuracy
      value: 83.54087709799674
    - type: cos_sim_ap
      value: 87.26170197077586
    - type: cos_sim_f1
      value: 84.7609561752988
    - type: cos_sim_precision
      value: 80.20735155513667
    - type: cos_sim_recall
      value: 89.86272439281943
    - type: dot_accuracy
      value: 72.22523010286952
    - type: dot_ap
      value: 79.51975358187732
    - type: dot_f1
      value: 76.32183908045977
    - type: dot_precision
      value: 67.58957654723126
    - type: dot_recall
      value: 87.64519535374869
    - type: euclidean_accuracy
      value: 82.0249052517596
    - type: euclidean_ap
      value: 85.32829948726406
    - type: euclidean_f1
      value: 83.24924318869829
    - type: euclidean_precision
      value: 79.71014492753623
    - type: euclidean_recall
      value: 87.11721224920802
    - type: manhattan_accuracy
      value: 82.13318895506227
    - type: manhattan_ap
      value: 85.28856869288006
    - type: manhattan_f1
      value: 83.34946757018393
    - type: manhattan_precision
      value: 76.94369973190348
    - type: manhattan_recall
      value: 90.91869060190075
    - type: max_accuracy
      value: 83.54087709799674
    - type: max_ap
      value: 87.26170197077586
    - type: max_f1
      value: 84.7609561752988
  - task:
      type: Classification
    dataset:
      name: MTEB OnlineShopping
      type: C-MTEB/OnlineShopping-classification
      config: default
      split: test
      revision: None
    metrics:
    - type: accuracy
      value: 94.56
    - type: ap
      value: 92.80848436710805
    - type: f1
      value: 94.54951966576111
  - task:
      type: STS
    dataset:
      name: MTEB PAWSX
      type: C-MTEB/PAWSX
      config: default
      split: test
      revision: None
    metrics:
    - type: cos_sim_pearson
      value: 39.0866558287863
    - type: cos_sim_spearman
      value: 45.9211126233312
    - type: euclidean_pearson
      value: 44.86568743222145
    - type: euclidean_spearman
      value: 45.63882757207507
    - type: manhattan_pearson
      value: 44.89480036909126
    - type: manhattan_spearman
      value: 45.65929449046206
  - task:
      type: STS
    dataset:
      name: MTEB QBQTC
      type: C-MTEB/QBQTC
      config: default
      split: test
      revision: None
    metrics:
    - type: cos_sim_pearson
      value: 43.04701793979569
    - type: cos_sim_spearman
      value: 44.87491033760315
    - type: euclidean_pearson
      value: 36.2004061032567
    - type: euclidean_spearman
      value: 41.44823909683865
    - type: manhattan_pearson
      value: 36.136113427955095
    - type: manhattan_spearman
      value: 41.39225495993949
  - task:
      type: STS
    dataset:
      name: MTEB STS22 (zh)
      type: mteb/sts22-crosslingual-sts
      config: zh
      split: test
      revision: None
    metrics:
    - type: cos_sim_pearson
      value: 61.65611315777857
    - type: cos_sim_spearman
      value: 64.4067673105648
    - type: euclidean_pearson
      value: 61.814977248797184
    - type: euclidean_spearman
      value: 63.99473350700169
    - type: manhattan_pearson
      value: 61.684304629588624
    - type: manhattan_spearman
      value: 63.97831213239316
  - task:
      type: STS
    dataset:
      name: MTEB STSB
      type: C-MTEB/STSB
      config: default
      split: test
      revision: None
    metrics:
    - type: cos_sim_pearson
      value: 76.57324933064379
    - type: cos_sim_spearman
      value: 79.23602286949782
    - type: euclidean_pearson
      value: 80.28226284310948
    - type: euclidean_spearman
      value: 80.32210477608423
    - type: manhattan_pearson
      value: 80.27262188617811
    - type: manhattan_spearman
      value: 80.31619185039723
  - task:
      type: Reranking
    dataset:
      name: MTEB T2Reranking
      type: C-MTEB/T2Reranking
      config: default
      split: dev
      revision: None
    metrics:
    - type: map
      value: 67.05266891356277
    - type: mrr
      value: 77.1906333623497
  - task:
      type: Retrieval
    dataset:
      name: MTEB T2Retrieval
      type: C-MTEB/T2Retrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 28.212
    - type: map_at_10
      value: 78.932
    - type: map_at_100
      value: 82.51899999999999
    - type: map_at_1000
      value: 82.575
    - type: map_at_3
      value: 55.614
    - type: map_at_5
      value: 68.304
    - type: mrr_at_1
      value: 91.211
    - type: mrr_at_10
      value: 93.589
    - type: mrr_at_100
      value: 93.659
    - type: mrr_at_1000
      value: 93.662
    - type: mrr_at_3
      value: 93.218
    - type: mrr_at_5
      value: 93.453
    - type: ndcg_at_1
      value: 91.211
    - type: ndcg_at_10
      value: 86.24000000000001
    - type: ndcg_at_100
      value: 89.614
    - type: ndcg_at_1000
      value: 90.14
    - type: ndcg_at_3
      value: 87.589
    - type: ndcg_at_5
      value: 86.265
    - type: precision_at_1
      value: 91.211
    - type: precision_at_10
      value: 42.626
    - type: precision_at_100
      value: 5.043
    - type: precision_at_1000
      value: 0.517
    - type: precision_at_3
      value: 76.42
    - type: precision_at_5
      value: 64.045
    - type: recall_at_1
      value: 28.212
    - type: recall_at_10
      value: 85.223
    - type: recall_at_100
      value: 96.229
    - type: recall_at_1000
      value: 98.849
    - type: recall_at_3
      value: 57.30800000000001
    - type: recall_at_5
      value: 71.661
  - task:
      type: Classification
    dataset:
      name: MTEB TNews
      type: C-MTEB/TNews-classification
      config: default
      split: validation
      revision: None
    metrics:
    - type: accuracy
      value: 54.385000000000005
    - type: f1
      value: 52.38762400903556
  - task:
      type: Clustering
    dataset:
      name: MTEB ThuNewsClusteringP2P
      type: C-MTEB/ThuNewsClusteringP2P
      config: default
      split: test
      revision: None
    metrics:
    - type: v_measure
      value: 74.55283855942916
  - task:
      type: Clustering
    dataset:
      name: MTEB ThuNewsClusteringS2S
      type: C-MTEB/ThuNewsClusteringS2S
      config: default
      split: test
      revision: None
    metrics:
    - type: v_measure
      value: 68.55115316700493
  - task:
      type: Retrieval
    dataset:
      name: MTEB VideoRetrieval
      type: C-MTEB/VideoRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 58.8
    - type: map_at_10
      value: 69.035
    - type: map_at_100
      value: 69.52000000000001
    - type: map_at_1000
      value: 69.529
    - type: map_at_3
      value: 67.417
    - type: map_at_5
      value: 68.407
    - type: mrr_at_1
      value: 58.8
    - type: mrr_at_10
      value: 69.035
    - type: mrr_at_100
      value: 69.52000000000001
    - type: mrr_at_1000
      value: 69.529
    - type: mrr_at_3
      value: 67.417
    - type: mrr_at_5
      value: 68.407
    - type: ndcg_at_1
      value: 58.8
    - type: ndcg_at_10
      value: 73.395
    - type: ndcg_at_100
      value: 75.62
    - type: ndcg_at_1000
      value: 75.90299999999999
    - type: ndcg_at_3
      value: 70.11800000000001
    - type: ndcg_at_5
      value: 71.87400000000001
    - type: precision_at_1
      value: 58.8
    - type: precision_at_10
      value: 8.68
    - type: precision_at_100
      value: 0.9690000000000001
    - type: precision_at_1000
      value: 0.099
    - type: precision_at_3
      value: 25.967000000000002
    - type: precision_at_5
      value: 16.42
    - type: recall_at_1
      value: 58.8
    - type: recall_at_10
      value: 86.8
    - type: recall_at_100
      value: 96.89999999999999
    - type: recall_at_1000
      value: 99.2
    - type: recall_at_3
      value: 77.9
    - type: recall_at_5
      value: 82.1
  - task:
      type: Classification
    dataset:
      name: MTEB Waimai
      type: C-MTEB/waimai-classification
      config: default
      split: test
      revision: None
    metrics:
    - type: accuracy
      value: 89.42
    - type: ap
      value: 75.35978503182068
    - type: f1
      value: 88.01006394348263
---

# jimi0209/Yinka-Q5_K_M-GGUF
This model was converted to GGUF format from [`Classical/Yinka`](https://huggingface.co/Classical/Yinka) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Classical/Yinka) for more details on the model.
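
The MTEB scores in the metadata above are dominated by cosine-similarity metrics (`cos_sim_pearson`, `cos_sim_spearman`, and so on), which compare pairs of sentence embeddings produced by the model. As a minimal illustration (not part of the original card), cosine similarity between two embedding vectors can be computed like this:

```python
import math

def cosine_similarity(a, b):
    # cos(a, b) = (a . b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional vectors; a real Yinka embedding has many more dimensions.
query = [0.1, 0.3, 0.5]
doc = [0.2, 0.2, 0.6]
print(cosine_similarity(query, doc))
```

The result ranges from -1 (opposite) to 1 (identical direction); identical vectors score exactly 1.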

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo jimi0209/Yinka-Q5_K_M-GGUF --hf-file yinka-q5_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo jimi0209/Yinka-Q5_K_M-GGUF --hf-file yinka-q5_k_m.gguf -c 2048
```
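
Since Yinka is an embedding model (per the MTEB results above), one way to query a running `llama-server` is through its OpenAI-compatible embeddings endpoint. The sketch below only builds the JSON request body; the `/v1/embeddings` path, the default port 8080, and the need to start the server with the `--embedding` flag are assumptions to verify against your llama.cpp version.

```python
import json

def build_embedding_request(texts, model="yinka-q5_k_m"):
    # For a local single-model server the "model" field is informational;
    # "input" follows the OpenAI embeddings request shape.
    return json.dumps({"model": model, "input": texts})

payload = build_embedding_request(["你好，世界", "hello world"])
print(payload)
# Assuming the server is running with embeddings enabled, you could POST
# this payload to http://localhost:8080/v1/embeddings with curl or urllib.
```

The actual HTTP call is left out on purpose, since it requires a live server.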

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo jimi0209/Yinka-Q5_K_M-GGUF --hf-file yinka-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo jimi0209/Yinka-Q5_K_M-GGUF --hf-file yinka-q5_k_m.gguf -c 2048
```