# QuantFactory/Berghof-NSFW-7B-GGUF
This is a quantized version of Elizezen/Berghof-NSFW-7B, created using llama.cpp.
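As an illustration, a GGUF quantization like this one is typically produced with llama.cpp's conversion and quantization tools. The commands below are a sketch of that workflow, not the exact commands used for this repository; file names and the chosen quantization type are assumptions.

```shell
# Sketch of a typical llama.cpp quantization workflow
# (file names and quant type are illustrative, not those of this repo).

# 1. Convert the original Hugging Face checkpoint (downloaded locally)
#    to a full-precision GGUF file.
python convert_hf_to_gguf.py ./Berghof-NSFW-7B \
    --outfile berghof-nsfw-7b-f16.gguf --outtype f16

# 2. Quantize the f16 GGUF down to a smaller format, e.g. Q4_K_M.
./llama-quantize berghof-nsfw-7b-f16.gguf berghof-nsfw-7b-Q4_K_M.gguf Q4_K_M
```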
## Original Model Card

### Berghof NSFW 7B

#### Model Description

I think this one is probably the strongest.
#### Usage
Ensure you are using Transformers 4.34.0 or newer.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Elizezen/Berghof-NSFW-7B")
model = AutoModelForCausalLM.from_pretrained(
    "Elizezen/Berghof-NSFW-7B",
    torch_dtype="auto",
)
model.eval()

# Move the model to GPU if one is available.
if torch.cuda.is_available():
    model = model.to("cuda")

input_ids = tokenizer.encode(
    "吾輩は猫である。名前はまだない",
    add_special_tokens=True,
    return_tensors="pt",
)

tokens = model.generate(
    input_ids.to(device=model.device),
    max_new_tokens=512,
    temperature=1.0,
    top_p=0.95,
    do_sample=True,
)

# Decode only the newly generated tokens, skipping the prompt.
out = tokenizer.decode(tokens[0][input_ids.shape[1]:], skip_special_tokens=True).strip()
print(out)
```
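The snippet above runs the original full-precision model through Transformers. To run one of the GGUF quantizations from this repository instead, llama.cpp's command-line interface can be used directly. This is a minimal sketch; the `.gguf` file name is a placeholder for whichever quantization you download, and the sampling settings mirror the Python example above.

```shell
# Run a downloaded GGUF quantization with llama.cpp's CLI
# (the .gguf file name is a placeholder for the quant you downloaded).
./llama-cli -m ./Berghof-NSFW-7B.Q4_K_M.gguf \
    -p "吾輩は猫である。名前はまだない" \
    -n 512 --temp 1.0 --top-p 0.95
```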
#### Intended Use
The model is mainly intended for generating novels. It may not perform as well on instruction-following tasks.