---
title: Llama 3.2 Reasoning WebGPU
emoji: 🧠
colorFrom: red
colorTo: blue
sdk: static
pinned: false
license: apache-2.0
short_description: Small and powerful reasoning LLM that runs in your browser
thumbnail: >-
  https://huggingface.co./spaces/webml-community/llama-3.2-reasoning-webgpu/resolve/main/banner.png
---

# Llama 3.2 Reasoning WebGPU

## Getting Started

Follow the steps below to set up and run the application. You'll need Node.js (which includes npm) installed, as well as a WebGPU-capable browser such as a recent version of Chrome or Edge.

### 1. Clone the Repository

Clone the examples repository from GitHub:

```bash
git clone https://github.com/huggingface/transformers.js-examples.git
```

### 2. Navigate to the Project Directory

Change your working directory to the `llama-3.2-reasoning-webgpu` folder:

```bash
cd transformers.js-examples/llama-3.2-reasoning-webgpu
```

### 3. Install Dependencies

Install the necessary dependencies using npm:

```bash
npm i
```

### 4. Run the Development Server

Start the development server:

```bash
npm run dev
```
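
Note that the demo needs a browser with WebGPU enabled (recent versions of Chrome and Edge ship it by default). As a quick sanity check, you can paste the snippet below, which uses only the standard `navigator.gpu` API, into your browser's DevTools console:

```js
// Quick WebGPU availability check -- run in the DevTools console.
if (!navigator.gpu) {
  console.log("WebGPU is not supported in this browser.");
} else {
  // requestAdapter() resolves to null if no suitable GPU is found.
  const adapter = await navigator.gpu.requestAdapter();
  console.log(adapter ? "WebGPU is ready." : "No suitable GPU adapter found.");
}
```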

The application should now be running locally. Open your browser and go to `http://localhost:5173` to see it in action.
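
Under the hood, the app uses Transformers.js to run the model entirely in the browser on WebGPU. The snippet below is a minimal sketch of that pattern, not the app's actual source; the model ID and generation parameters are placeholders, so check the project's `src/` directory for the real values:

```js
import { pipeline, TextStreamer } from "@huggingface/transformers";

// Placeholder model ID for illustration -- the Space loads its own reasoning model.
const generator = await pipeline("text-generation", "onnx-community/Llama-3.2-1B-Instruct", {
  device: "webgpu", // run inference on the GPU via WebGPU
  dtype: "q4f16",   // 4-bit quantized weights with fp16 activations
});

const messages = [{ role: "user", content: "What is 17 * 24? Think step by step." }];

// Stream tokens to the console as they are generated.
const output = await generator(messages, {
  max_new_tokens: 512,
  streamer: new TextStreamer(generator.tokenizer, { skip_prompt: true }),
});

// The pipeline returns the full chat history; the last message is the model's reply.
console.log(output[0].generated_text.at(-1).content);
```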