
How to run this model, please help me

#40
by Agoogleuser - opened

Hey, please help me run this model on Windows.

How do I run this model locally on Windows?
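A minimal sketch of loading the model locally with the `transformers` library, assuming the placeholder repository ID below is replaced with this repo's actual name, that `transformers`, `torch`, and `accelerate` are installed, and that your machine has enough GPU/CPU memory for the chosen precision:

```python
# Sketch only: substitute the real repository ID and adjust dtype/memory
# settings to match your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/<this-model-repo>"  # placeholder, not the actual repo name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # use torch.float16 if your GPU lacks bf16
    device_map="auto",           # requires `accelerate`; spreads layers across GPU/CPU
)

messages = [{"role": "user", "content": "Hello, how are you?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```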

How do I run it on a Colab T4 with bitsandbytes 4-bit quantization?
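A minimal sketch of 4-bit loading with bitsandbytes on a Colab T4, again using a placeholder repository ID and assuming `pip install -U transformers accelerate bitsandbytes` has been run. Note that a T4 has roughly 16 GB of VRAM, so very large checkpoints may still not fit even at 4-bit:

```python
# Sketch only: substitute the real repository ID.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "nvidia/<this-model-repo>"  # placeholder, not the actual repo name

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # T4 has no bf16, so compute in fp16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # requires `accelerate`
)

messages = [{"role": "user", "content": "Summarize what this model does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```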

It is quite evident how to run the model from the instructions in the model card's Usage section. Where specifically do you need help?

