prithivMLmods committed on
Commit 0dd52c0 · verified · 1 Parent(s): cf28a46

Update README.md

Files changed (1)
  1. README.md +10 -52
README.md CHANGED
@@ -1,52 +1,10 @@
- ---
- title: Llama 3.2 Reasoning WebGPU
- emoji: 🧠
- colorFrom: red
- colorTo: blue
- sdk: static
- pinned: false
- license: apache-2.0
- short_description: Small and powerful reasoning LLM that runs in your browser
- thumbnail: >-
-   https://huggingface.co/spaces/webml-community/llama-3.2-reasoning-webgpu/resolve/main/banner.png
- ---
-
- # Llama 3.2 Reasoning WebGPU
-
- ## Getting Started
-
- Follow the steps below to set up and run the application.
-
- ### 1. Clone the Repository
-
- Clone the examples repository from GitHub:
-
- ```sh
- git clone https://github.com/huggingface/transformers.js-examples.git
- ```
-
- ### 2. Navigate to the Project Directory
-
- Change your working directory to the `llama-3.2-reasoning-webgpu` folder:
-
- ```sh
- cd transformers.js-examples/llama-3.2-reasoning-webgpu
- ```
-
- ### 3. Install Dependencies
-
- Install the necessary dependencies using npm:
-
- ```sh
- npm i
- ```
-
- ### 4. Run the Development Server
-
- Start the development server:
-
- ```sh
- npm run dev
- ```
-
- The application should now be running locally. Open your browser and go to `http://localhost:5173` to see it in action.
+ ---
+ title: BELLATRIX WEB
+ emoji: 🧨
+ colorFrom: red
+ colorTo: blue
+ sdk: static
+ pinned: false
+ license: apache-2.0
+ short_description: llama webgpu optimum
+ ---