Inference Endpoints (dedicated) documentation

FAQs

Q: In which regions are Inference Endpoints available?

A: Inference Endpoints are currently available on AWS in us-east-1 (N. Virginia) & eu-west-1 (Ireland), on Azure in eastus (Virginia), and on GCP in us-east4 (Virginia). If you need to deploy in a different region, please let us know.

Q: Can I access the instance my Endpoint is running on?

A: No, you cannot access the instance hosting your Endpoint. But if you are missing information or need more insights on the machine where the Endpoint is running, please contact us.

Q: Can I see my Private Endpoint running on my VPC account?

A: No, when creating a Private Endpoint (a Hugging Face Inference Endpoint linked to your VPC via AWS/Azure PrivateLink), you can only see the ENI in your VPC where the Endpoint is available.

Q: Can I run inference in batches?

A: It depends on the Task. Supported Tasks use the transformers, sentence-transformers, or diffusers pipelines under the hood. If the pipeline for your Task supports batching, e.g. Zero-Shot Classification, then batch inference is supported. In any case, you can always create your own inference handler and implement batching.
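
For example, a pipeline that supports batching accepts a list of inputs and returns one prediction per item. A minimal sketch (the checkpoint and labels below are only illustrative):

from transformers import pipeline

# Example checkpoint for Zero-Shot Classification; swap in your own model.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

texts = [
    "The new GPU instances cut our inference time in half.",
    "The invoice for last month has not arrived yet.",
]
candidate_labels = ["infrastructure", "billing", "support"]

# Passing a list runs the whole batch and returns one result per input.
results = classifier(texts, candidate_labels=candidate_labels)
for text, result in zip(texts, results):
    print(text, "->", result["labels"][0])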

Q: How can I scale my deployment?

A: Endpoints are scaled automatically for you; the only information you need to provide is a min replica target and a max replica target. The system then scales your Endpoint up and down based on the load. Scaling to zero is supported, with a variety of timing options.
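
As a sketch, the replica targets can also be set programmatically with the huggingface_hub client (parameter names assume a recent huggingface_hub release; "my-endpoint" is a placeholder):

from huggingface_hub import update_inference_endpoint

# Adjust the autoscaling bounds of an existing Endpoint.
endpoint = update_inference_endpoint(
    "my-endpoint",   # placeholder Endpoint name
    min_replica=1,   # keep at least one replica warm
    max_replica=4,   # allow scaling out under load
)
print(endpoint.status)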

Q: Will my endpoint still be running if no more requests are processed?

A: Yes, your Endpoint will always stay available/up with the number of min replicas defined in the Advanced configuration.

Q: I would like to deploy a model which is not in the supported tasks, is this possible?

A: Yes, you can deploy any repository from the Hugging Face Hub and if your task/model/framework is not supported out of the box, you can create your own inference handler and then deploy your model to an Endpoint.
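
As a rough sketch, a custom handler is a handler.py at the root of your model repository that exposes an EndpointHandler class; adapt the model loading and pre/post-processing to your own task:

# handler.py at the root of the model repository
from typing import Any, Dict, List

from transformers import pipeline


class EndpointHandler:
    def __init__(self, path: str = ""):
        # `path` points to the repository files available on the instance.
        self.pipeline = pipeline("text-classification", model=path)

    def __call__(self, data: Dict[str, Any]) -> List[Dict[str, Any]]:
        inputs = data["inputs"]
        # Accept a single string or a list of strings.
        if isinstance(inputs, str):
            inputs = [inputs]
        return self.pipeline(inputs)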

Q: How much does it cost to run my Endpoint?

A: Dedicated Endpoints are billed based on the compute hours of your Running Endpoints, and the associated instance types. We may add usage costs for load balancers and Private Links in the future.

Q: Is the data transiting to the Endpoint encrypted?

A: Yes, data is encrypted during transit with TLS/SSL.

Q: How can I reduce the latency of my Endpoint?

A: There are several ways to reduce the latency of your Endpoint. One is to deploy your Endpoint in a region close to your application to reduce the network overhead. Another is to optimize your model using Hugging Face Optimum before creating your Endpoint. If you need help or have more questions about reducing latency, please contact us.
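
As a sketch of the Optimum route, one option is exporting the model to ONNX before uploading it to the Hub and deploying it (the checkpoint below is illustrative; see the Optimum docs for the export path that matches your task and hardware):

from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint

# export=True converts the PyTorch checkpoint to ONNX on the fly.
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Save locally (then push to the Hub) so the optimized model can back an Endpoint.
model.save_pretrained("onnx-sst2")
tokenizer.save_pretrained("onnx-sst2")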

Q: How do I monitor my deployed Endpoint?

A: You can currently monitor your Endpoint through the 🤗 Inference Endpoints web application, where you have access to the Logs of your Endpoints as well as a metrics dashboard. If you need programmatic access or more information, please contact us.
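
For basic programmatic checks (status and URL rather than the full metrics dashboard), the huggingface_hub client can be used; a minimal sketch with a placeholder Endpoint name:

from huggingface_hub import get_inference_endpoint

endpoint = get_inference_endpoint("my-endpoint")  # placeholder name
print(endpoint.status)  # e.g. "running" or "scaledToZero"
print(endpoint.url)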

Q: What if I would like to deploy to a different instance type that is not listed?

A: Please contact us if you feel your model would do better on a different instance type than what is listed.

Q: I accidentally leaked my token. Do I need to delete my endpoint?

A: You can invalidate existing personal tokens and create new ones in your settings here: https://huggingface.co./settings/tokens. Note that fine-grained tokens are supported in Inference Endpoints - please consider using them!
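
Once rotated, the new token is passed the same way as before when calling the Endpoint; a minimal sketch with a placeholder URL and a token read from the environment:

import os

import requests

ENDPOINT_URL = "https://<your-endpoint>.endpoints.huggingface.cloud"  # placeholder
headers = {"Authorization": f"Bearer {os.environ['HF_TOKEN']}"}

response = requests.post(ENDPOINT_URL, headers=headers, json={"inputs": "Hello!"})
print(response.status_code, response.json())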

Q: I need to add a custom environment variable (default or secrets) to my endpoint. How can I do this?

A: This is now possible in the UI, or via the API:

{
  "model": {
    "image": {
      "huggingface": {
        "env": { "var1": "value" }
      }
    }
  }
}
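
As a sketch of sending that payload programmatically: the management API URL and HTTP method below are assumptions to verify against the Inference Endpoints API reference; namespace, endpoint name, and token are placeholders.

import os

import requests

# Assumed management API route -- double-check it in the API reference.
url = "https://api.endpoints.huggingface.cloud/v2/endpoint/<namespace>/<endpoint-name>"
headers = {"Authorization": f"Bearer {os.environ['HF_TOKEN']}"}

payload = {
    "model": {
        "image": {
            "huggingface": {
                "env": {"var1": "value"}
            }
        }
    }
}

response = requests.put(url, headers=headers, json=payload)
print(response.status_code)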

Q: I’m using the text-generation-inference container type for my Endpoint. Is there more information about using TGI?

A: Yes! Please check out our TGI documentation and this video on TGI deploys.
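
As a quick sketch, a TGI-backed Endpoint can be queried with huggingface_hub's InferenceClient (URL and token are placeholders):

from huggingface_hub import InferenceClient

client = InferenceClient(
    "https://<your-endpoint>.endpoints.huggingface.cloud",  # placeholder Endpoint URL
    token="<HF_TOKEN>",
)
print(client.text_generation("What is deep learning?", max_new_tokens=64))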

Q: I’m sometimes running into a 503 error on a running endpoint in production. What can I do?

A: To help mitigate service interruptions on an Endpoint that needs to be highly available, please make sure to use at least 2 replicas, i.e. set min replicas to 2.
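
On the client side, retries with backoff on 503 can also help ride out brief scaling events; a minimal sketch (URL and token are placeholders, and the allowed_methods argument assumes urllib3 >= 1.26):

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

retries = Retry(
    total=5,
    backoff_factor=1,            # exponential backoff between attempts
    status_forcelist=[503],
    allowed_methods=["POST"],    # retry POST requests too (urllib3 >= 1.26)
)
session = requests.Session()
session.mount("https://", HTTPAdapter(max_retries=retries))

response = session.post(
    "https://<your-endpoint>.endpoints.huggingface.cloud",  # placeholder
    headers={"Authorization": "Bearer <HF_TOKEN>"},
    json={"inputs": "Hello!"},
)
print(response.status_code)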

Q: What’s the difference between Dedicated and Serverless Endpoints?

A: The Inference API (Serverless) is a solution to easily explore and evaluate models. For larger volumes of requests, or if you need guaranteed latency/performance, use Inference Endpoints (Dedicated) to easily deploy your models on dedicated, fully-managed infrastructure.

Q: I can see from the logs that my endpoint is running but the status is stuck at “initializing”

A: This usually means that the port mapping is incorrect. Ensure your app is listening on port 80 and that the Docker container is exposing port 80 externally. If you’re deploying a custom container you can change these values, but make sure to keep them aligned.
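
As a rough sketch for a custom container (assuming a FastAPI app served with uvicorn), the server should listen on port 80 and the Dockerfile should expose the same port:

import uvicorn
from fastapi import FastAPI

app = FastAPI()


@app.post("/")
def predict(payload: dict):
    # Replace with your real inference logic.
    return {"received": payload}


if __name__ == "__main__":
    # Listen on port 80, matching the port exposed by the container.
    uvicorn.run(app, host="0.0.0.0", port=80)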

Q: I’m getting a 500 response at the beginning of my endpoint deployment or while scaling is happening

A: Confirm that you have a health route implemented in your app that returns a status code 200 when your application is ready to serve requests. Otherwise your app is considered ready as soon as the container has started, potentially resulting in 500s. You can configure the health route under the settings of your endpoint.
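
Continuing the FastAPI sketch from above, a health route can return 200 only once the model has actually been loaded:

from fastapi import FastAPI, Response

app = FastAPI()
model = None  # loaded at startup


@app.on_event("startup")
def load_model():
    global model
    model = "loaded"  # replace with your real model loading


@app.get("/health")
def health(response: Response):
    if model is None:
        response.status_code = 503  # not ready to serve yet
        return {"status": "loading"}
    return {"status": "ok"}  # 200 once the model is ready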
