Nemotron-4-340B-Instruct Tokenizer

A 🤗-compatible version of the Nemotron-4-340B-Instruct tokenizer (adapted from nvidia/Nemotron-4-340B-Instruct). This means it can be used with Hugging Face libraries including Transformers, Tokenizers, and Transformers.js.

Example usage:

Transformers/Tokenizers

from transformers import PreTrainedTokenizerFast

tokenizer = PreTrainedTokenizerFast.from_pretrained('Xenova/Nemotron-4-340B-Instruct-Tokenizer')
assert tokenizer.encode('hello world') == [38150, 2268]
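The standalone 🤗 Tokenizers library can also load the same files directly from the Hub. A minimal sketch follows; the printed ids are expected to match the Transformers example above, assuming the same post-processing defined in tokenizer.json:

from tokenizers import Tokenizer

# Load tokenizer.json straight from the Hub with the standalone Tokenizers library.
tokenizer = Tokenizer.from_pretrained('Xenova/Nemotron-4-340B-Instruct-Tokenizer')
encoding = tokenizer.encode('hello world')
print(encoding.ids)  # expected to match the Transformers example: [38150, 2268]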

Transformers.js

import { AutoTokenizer } from '@xenova/transformers';

const tokenizer = await AutoTokenizer.from_pretrained('Xenova/Nemotron-4-340B-Instruct-Tokenizer');
const tokens = tokenizer.encode('hello world'); // [38150, 2268]