Support for TensorFlow.js

#94
by bayang - opened

How can I use this model in JS? My goal is to load the quantized version. Has anyone done that already?

Sentence Transformers org

Hello!
I believe you might be able to use:

import { pipeline } from '@huggingface/transformers';

// Create a feature-extraction pipeline
const extractor = await pipeline('feature-extraction', 'sentence-transformers/all-MiniLM-L6-v2');

// Compute sentence embeddings
const sentences = ['This is an example sentence', 'Each sentence is converted'];
const output = await extractor(sentences, { pooling: 'mean', normalize: true });
console.log(output);

For quantized models, you can e.g. add { dtype: 'q8' }:

const extractor = await pipeline('feature-extraction', 'sentence-transformers/all-MiniLM-L6-v2', { dtype: 'q8' });
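As a small follow-up not in the original reply: with { pooling: 'mean', normalize: true } the returned embeddings are unit-length, so the dot product of two of them equals their cosine similarity. A minimal sketch, assuming the output variable from the first snippet above and using the Tensor's tolist() method to get plain arrays:

// Not from the original reply: compare the two example sentences.
// tolist() converts the output Tensor to nested plain JS arrays.
const [embA, embB] = output.tolist();

// The embeddings are normalized, so the dot product is the cosine similarity.
const similarity = embA.reduce((sum, value, i) => sum + value * embB[i], 0);
console.log(similarity);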
  • Tom Aarsen

@tomaarsen thanks for the quick reply. But I meant in the browser, as a Chrome extension.
This code works well in a classic Next.js or vanilla JS setup.

However, it seems impossible to bundle @huggingface/transformers into the extension. Is it true that @huggingface/transformers sometimes requires file system access?

The @xenova/transformers version runs well, but there is a loading issue.

import { pipeline } from '@xenova/transformers'; // the @xenova build mentioned above

let extractor = null; // the pipeline is created lazily on the first message

chrome.runtime.onMessage.addListener(async (message, sender, sendResponse) => {
    if (message.action === "extractPageText") {
        const { text } = message.payload; // for clarity: text is a single string, not an array
        console.log("---------------------");
        if (!extractor) {
            extractor = await pipeline(
                "feature-extraction", "sentence-transformers/all-MiniLM-L6-v2",
                { dtype: 'q8' }
            );
        }
        const startTime = performance.now();
        const embeddings = await extractor([text], {
            pooling: "mean",
            normalize: true,
        });
        const duration = performance.now() - startTime;
        console.log(`embedding computation took ${duration.toFixed(2)}ms`);
    }
});

That code throws this error:

background.bundle.js:1 Unable to load from local path "/models/sentence-transformers/all-MiniLM-L6-v2/onnx/model_quantized.onnx": "TypeError: Failed to fetch"

I'm getting close; I'm debugging it further to find a workaround.
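
One possible workaround, sketched under the assumption that the extension bundle simply cannot serve the local /models/... path that transformers.js checks first: the library's env settings can disable that local lookup so the model files are always fetched from the Hugging Face Hub (and cached in the browser).

import { pipeline, env } from '@xenova/transformers';

// Skip the local "/models/..." lookup that fails inside the extension
// and always fetch model files from the Hugging Face Hub instead.
env.allowLocalModels = false;

// Cache downloaded model files via the browser Cache API so they are
// only fetched once.
env.useBrowserCache = true;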

  • bayang :D
