MrRobotoAI/Thor-v2-8b-FANTASY-FICTION-Q4_K_M-GGUF

This model was converted to GGUF format from MrRobotoAI/Thor-v2-8b-FANTASY-FICTION using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Abstract

An LLM free from ethical direction; from social, societal, racial, or political allegiance; from concern for legality, morality, or ethics; and from regard for individual well-being. This unbiased and unaligned tool could be applied in domains such as novel creation, content generation, translation, and summarization without the constraints that shape other models. It thus becomes possible to develop more accurate and reliable text-based systems that unlock new possibilities for language processing and generation. Constructing such an AI requires drawing on a vast body of data from diverse fields, including psychology, philosophy, sociology, neuroscience, English and world literature, world languages, and grammar. By carefully designing traits, values, and beliefs, it is possible to shape the AI's worldview and thought processes through artificially constructed datasets and specific model-merging techniques. However, incremental testing will be necessary to refine the model and ensure its direction remains amoral and unaligned as it progresses toward a final product.

Use with llama.cpp

Install llama.cpp through Homebrew (works on Mac and Linux):

brew install llama.cpp
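
A quick sanity check that the binaries are on your PATH (assuming a recent llama.cpp build, which supports the --version flag):

llama-cli --version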

Invoke the llama.cpp server or the CLI.

CLI:

llama-cli --hf-repo MrRobotoAI/Thor-v2-8b-FANTASY-FICTION-Q4_K_M-GGUF --hf-file thor-v2-8b-fantasy-fiction-q4_k_m.gguf -p "The meaning to life and the universe is"
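
The CLI also accepts the usual llama.cpp generation flags; a sketch with an explicit context size, prediction limit, and temperature (the values here are illustrative, not tuned recommendations):

llama-cli --hf-repo MrRobotoAI/Thor-v2-8b-FANTASY-FICTION-Q4_K_M-GGUF --hf-file thor-v2-8b-fantasy-fiction-q4_k_m.gguf -c 4096 -n 256 --temp 0.8 -p "The meaning to life and the universe is"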

Server:

llama-server --hf-repo MrRobotoAI/Thor-v2-8b-FANTASY-FICTION-Q4_K_M-GGUF --hf-file thor-v2-8b-fantasy-fiction-q4_k_m.gguf -c 2048
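
Once running, the server exposes an HTTP API. A minimal sketch of a completion request, assuming the default bind address of 127.0.0.1:8080:

curl http://127.0.0.1:8080/completion -H "Content-Type: application/json" -d '{"prompt": "The meaning to life and the universe is", "n_predict": 128}'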

Note: You can also use this checkpoint directly by following the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

git clone https://github.com/ggerganov/llama.cpp

Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag, along with any hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).

cd llama.cpp && LLAMA_CURL=1 make
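
For example, a CUDA-enabled build on Linux might look like the following (assuming the CUDA toolkit is installed; note that newer llama.cpp versions have moved from make to CMake, where the equivalent option is -DGGML_CUDA=ON):

cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make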

Step 3: Run inference through the main binary.

./llama-cli --hf-repo MrRobotoAI/Thor-v2-8b-FANTASY-FICTION-Q4_K_M-GGUF --hf-file thor-v2-8b-fantasy-fiction-q4_k_m.gguf -p "The meaning to life and the universe is"

or

./llama-server --hf-repo MrRobotoAI/Thor-v2-8b-FANTASY-FICTION-Q4_K_M-GGUF --hf-file thor-v2-8b-fantasy-fiction-q4_k_m.gguf -c 2048

Model details

Format: GGUF
Parameters: 8.03B
Architecture: llama
Quantization: 4-bit (Q4_K_M)