• Original model: JEJUMA/JEJUMA-002 - bbd7ec2

• Quantized using llama.cpp - b3542

• The official JEJUMA quantization is JEJUMA/JEJUMA-002-GGUF

• After trying out this model, I noticed a few things:

  1. It behaves like a translation model: you can't chat with it, it only does translations.
  2. It can only handle one dialect (or standard Korean) at a time.
  3. Don't expect a conversation; it is strictly for translation. See the example below:

     system prompt
     Answer your questions using the Jeju dialect.

     user question
     Hello! How are you doing now?

     assistant answer
     ํ—์ฏค ํ—˜๊ณผ๊ฒŒ
     # ํ• ์ˆ˜ ๋งŽ์Šต๋‹ˆ๊นŒ
      

Prompt (LM Studio)

<|start_header_id|>system<|end_header_id|>

{System}
<|eot_id|><|start_header_id|>user<|end_header_id|>

{User}
<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{Assistant}
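
For use outside LM Studio, the same template can be assembled by hand. Below is a minimal sketch in Python; the helper name build_prompt and its default system prompt are illustrative choices, not something prescribed by the model card.

```
# Minimal sketch of the Llama-3-style prompt format shown above.
# build_prompt and its default system prompt are illustrative, not official.
def build_prompt(user: str, system: str = "Answer your questions using the Jeju dialect.") -> str:
    return (
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}\n"
        "<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}\n"
        "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(build_prompt("Hello! How are you doing now?"))
```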

Examples of User Prompts

Detect the following sentence or word is standard, jeju, chungcheong, gangwon, gyeongsang, or jeonla's dialect: 
```
{Enter the Jeju island dialect or standard Korean here}
```
Detect the following sentence or word is which dialect and convert the following sentence or word to standard Korean: 
```
{Enter Jeju island dialect or standard Korean here}
```
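
A rough sketch of running one of the prompts above against the GGUF file with llama-cpp-python (not from the model card; the quantization filename, context size, and token limit are assumptions):

```
# Sketch only: assumes llama-cpp-python is installed and a quantized file has
# already been downloaded. "JEJUMA-002-Q4_K_M.gguf" is a hypothetical filename.
from llama_cpp import Llama

llm = Llama(model_path="./JEJUMA-002-Q4_K_M.gguf", n_ctx=2048)

# In recent llama-cpp-python versions, create_chat_completion applies the chat
# template stored in the GGUF metadata, so the prompt need not be built by hand.
result = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": (
                "Detect the following sentence or word is which dialect and "
                "convert the following sentence or word to standard Korean: "
                "ํ—์ฏค ํ—˜๊ณผ๊ฒŒ"
            ),
        }
    ],
    max_tokens=128,
)
print(result["choices"][0]["message"]["content"])
```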
GGUF

• Model size: 8.03B params
• Architecture: llama
• Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
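
To fetch a specific quantization programmatically, huggingface_hub can be used as sketched below; the .gguf filename is a guess, so list the repository files first to see the actual names.

```
# Sketch: list the available files in the repo, then download one quant.
# The filename passed to hf_hub_download is hypothetical.
from huggingface_hub import hf_hub_download, list_repo_files

print(list_repo_files("joongi007/JEJUMA-002-GGUF"))

path = hf_hub_download(
    repo_id="joongi007/JEJUMA-002-GGUF",
    filename="JEJUMA-002-Q4_K_M.gguf",  # replace with an actual filename from the list
)
print(path)
```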

