Introduction
============

Since releasing the Answers endpoint in beta last year, we’ve developed new methods that achieve better results for this task. As a result, we’ll be removing the Answers endpoint from our documentation and removing access to this endpoint on December 3, 2022 for all organizations. New accounts created after June 3rd will not have access to this endpoint. We strongly encourage developers to switch over to newer techniques, outlined below, which produce better results.

Current documentation
---------------------

<https://beta.openai.com/docs/guides/answers>

<https://beta.openai.com/docs/api-reference/answers>

Options
=======

As a quick review, here are the high-level steps of the current Answers endpoint:

![](https://openai.intercom-attachments-7.com/i/o/524217540/51eda23e171f33f1b9d5acff/rM6ZVI3XZ2CpxcEStmG5mFy6ATBCskmX2g3_GPmeY3FicvrWfJCuFOtzsnbkpMQe-TQ6hi5j1BV9cFo7bCDcsz8VWxFfeOnC1Gb4QNaeVYtJq4Qtg76SBOLLk-jgHUA8mWZ0QgOuV636UgcvMA)

All of these options are also outlined [here](https://github.com/openai/openai-cookbook/tree/main/transition_guides_for_deprecated_API_endpoints).

Option 1: Transition to Embeddings-based search (recommended)
-------------------------------------------------------------

We believe that most use cases will be better served by moving the underlying search system to a vector-based embedding search. The major reason for this is that our current system used a bigram filter to narrow down the scope of candidates, whereas our embeddings system has much more contextual awareness. In general, using embeddings will also be considerably lower cost in the long run. If you’re not familiar with embeddings, you can learn more by visiting our [guide to embeddings](https://beta.openai.com/docs/guides/embeddings/use-cases).

If you’re using a small dataset (<10,000 documents), consider using the techniques described in that guide to find the best documents with which to construct a prompt similar to [this](#h_89196129b2).
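The core of an embeddings-based search is ranking documents by vector similarity to the query. The sketch below is a minimal, dependency-free illustration: it assumes the query and document embeddings have already been computed (in practice they would come from the Embeddings endpoint) and ranks documents by cosine similarity.

```python
from math import sqrt


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def rank_documents(query_embedding, doc_embeddings):
    """Return document indices sorted by similarity to the query, best first.

    The embeddings are assumed precomputed; fetching them from the
    Embeddings endpoint is left out of this sketch.
    """
    scored = [(cosine_similarity(query_embedding, emb), i)
              for i, emb in enumerate(doc_embeddings)]
    return [i for _, i in sorted(scored, reverse=True)]
```

The top-ranked documents can then be pasted into a prompt in the format linked above.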
Then, you can simply submit that prompt to our [Completions](https://beta.openai.com/docs/api-reference/completions) endpoint. If you have a larger dataset, consider using a vector search engine like [Pinecone](https://share.streamlit.io/pinecone-io/playground/beyond_search_openai/src/server.py) or [Weaviate](https://weaviate.io/developers/weaviate/current/retriever-vectorizer-modules/text2vec-openai.html) to power that search.

Option 2: Reimplement existing functionality
--------------------------------------------

If you’d like to recreate the functionality of the Answers endpoint, here’s how we did it. There is also a [script](https://github.com/openai/openai-cookbook/blob/main/transition_guides_for_deprecated_API_endpoints/answers_functionality_example.py) that replicates most of this functionality.

At a high level, there are two main ways you can use the Answers endpoint: you can source the data from an uploaded file or send it in with the request.

If you’re using the document parameter
--------------------------------------

There’s only one step if you provide the documents in the Answers API call. Here are roughly the steps we used:

* Construct the prompt [with this format](#h_89196129b2).
* Gather all of the provided documents. If they fit in the prompt, just use all of them.
* Do an [OpenAI search](https://beta.openai.com/docs/api-reference/searches) (note that this is also being deprecated and has a [transition guide](https://help.openai.com/en/articles/6272952-search-transition-guide)) where the documents are the user-provided documents and the query is the query from above. Rank the documents by score.
* In order of score, attempt to add documents until you run out of space in the context.
* Request a completion with the provided parameters (`logit_bias`, `n`, `stop`, etc.).

Throughout all of this, you’ll need to check that the prompt’s length doesn’t exceed [the model's token limit](https://beta.openai.com/docs/engines/gpt-3).
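The "add documents in order of score until you run out of space" step above can be sketched as a greedy fill against a token budget. This is an illustrative sketch only: the `count_tokens` default here is a whitespace-split approximation, not the real tokenizer recommended below.

```python
def fill_context(ranked_documents, max_tokens,
                 count_tokens=lambda text: len(text.split())):
    """Greedily add documents, best-ranked first, until the budget is spent.

    `count_tokens` is a stand-in: whitespace splitting only approximates a
    real tokenizer, which should be used to enforce the model's token limit.
    """
    chosen, used = [], 0
    for doc in ranked_documents:
        cost = count_tokens(doc)
        if used + cost > max_tokens:
            break  # context is full; stop adding documents
        chosen.append(doc)
        used += cost
    return chosen
```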
To count the number of tokens in a prompt, we recommend [transformers.GPT2TokenizerFast](https://huggingface.co./docs/transformers/model_doc/gpt2#transformers.GPT2TokenizerFast).

If you're using the file parameter
----------------------------------

### Step 1: Upload a jsonl file

Behind the scenes, we upload new files meant for answers to an Elasticsearch cluster. Each line of the jsonl is then submitted as a document. If you uploaded the file with the purpose “answers,” we additionally split the documents on newlines and upload each of those chunks as separate documents, to ensure that we can search across and reference the highest number of relevant text sections in the file. Each line requires a “text” field and an optional “metadata” field.

These are the Elasticsearch settings and mappings for our index:

[Elasticsearch mapping](https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping.html):

```
{
    "properties": {
        "document": {"type": "text", "analyzer": "standard_bigram_analyzer"},  # the "text" field
        "metadata": {"type": "object", "enabled": False},  # the "metadata" field
    }
}
```

[Elasticsearch analyzer](https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping.html):

```
{
    "analysis": {
        "analyzer": {
            "standard_bigram_analyzer": {
                "type": "custom",
                "tokenizer": "standard",
                "filter": ["lowercase", "english_stop", "shingle"],
            }
        },
        "filter": {"english_stop": {"type": "stop", "stopwords": "_english_"}},
    }
}
```

After that, we performed [standard Elasticsearch search calls](https://elasticsearch-py.readthedocs.io/en/v8.2.0/api.html#elasticsearch.Elasticsearch.search) and used `max_rerank` to determine the number of documents to return from Elasticsearch.

### Step 2: Search

Here are roughly the steps we used. Our end goal is to create a [Completions](https://beta.openai.com/docs/api-reference/completions) request [with this format](#h_89196129b2).
It will look very similar to [Documents](#h_cb1d8a8d3f). From there, our steps are:

* Start with the `experimental_alternative_question` or, if that's not provided, what’s in the `question` field. Call that the query.
* Query Elasticsearch for `max_rerank` documents, with the query as the search param.
* Take those documents and do an [OpenAI search](https://beta.openai.com/docs/api-reference/searches) on them, where the entries from Elasticsearch are the docs and the query is the query you used above. Use the score from the search to rank the documents.
* In order of score, attempt to add Elasticsearch documents until you run out of space in the prompt.
* Request an OpenAI completion with the provided parameters (`logit_bias`, `n`, `stop`, etc.). Return that answer to the user.

Completion Prompt
-----------------

```
===
Context: #{{ provided examples_context }}
===
Q: example 1 question
A: example 1 answer
---
Q: example 2 question
A: example 2 answer
(and so on for all examples provided in the request)
===
Context: #{{ what we return from Elasticsearch }}
===
Q: #{{ user provided question }}
A:
```
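The completion prompt above can be assembled with simple string formatting. A minimal sketch, where `examples` is a list of (question, answer) pairs and `documents_context` stands in for the text returned from the search step (the function name is illustrative, not part of any API):

```python
def build_answers_prompt(examples_context, examples, documents_context, question):
    """Assemble a prompt in the Answers completion format shown above."""
    lines = ["===", f"Context: {examples_context}", "==="]
    # Q/A example pairs, separated by "---" dividers
    qa_blocks = [f"Q: {q}\nA: {a}" for q, a in examples]
    lines.append("\n---\n".join(qa_blocks))
    # The retrieved context and the user's question, ending with an open "A:"
    lines += ["===", f"Context: {documents_context}", "===",
              f"Q: {question}", "A:"]
    return "\n".join(lines)
```

The resulting string is what gets sent as the `prompt` of the Completions request.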
Answers Transition Guide
6233728
https://help.openai.com/en/articles/6233728-answers-transition-guide
Introduction
============

Since releasing the Classifications endpoint in beta last year, we’ve developed new methods that achieve better results for this task. As a result, we’ll be removing the Classifications endpoint from our documentation and removing access to this endpoint on December 3, 2022 for all organizations. New accounts created after June 3rd will not have access to this endpoint. We strongly encourage developers to switch over to newer techniques, outlined below, which produce better results.

Current documentation
---------------------

<https://beta.openai.com/docs/guides/classifications>

<https://beta.openai.com/docs/api-reference/classifications>

Options
=======

All of these options are also outlined [here](https://github.com/openai/openai-cookbook/tree/main/transition_guides_for_deprecated_API_endpoints).

As a quick review, here are the high-level steps of the current Classifications endpoint:

![](https://openai.intercom-attachments-7.com/i/o/524219891/aa3136e9c7bcd8697c51ae9a/wDEz1wePRC3E7UyA1n0lsTPUvVakpPlMQ92SDnvEsScQFclIRW-bO2eKRhAp9_15j0vnyPYnhG71PjJj6Fttfwdpb1UnHZzMle9llSC76HQHN9lCzMNF6N2UDmeWzOldgwqRYYy-hzxBAD61Nw)

Option 1: Transition to fine-tuning (recommended)
-------------------------------------------------

We believe that most use cases will be better served by moving to a fine-tuned model. The major reason for this is that our current system used a bigram filter to narrow down the scope of candidates, whereas a fine-tuned model can take in an arbitrary amount of data and learn more nuance between examples. For more on creating a fine-tuned model, check out our [guide](https://beta.openai.com/docs/guides/fine-tuning/classification).

Option 2: Transition to Embeddings-based search
-----------------------------------------------

Another possible option, especially if your classification labels change frequently, is to use embeddings.
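One way embeddings can replace a classifier is nearest-label matching: embed the input text and each label (or labeled examples), then pick the most similar label. The sketch below assumes precomputed embedding vectors; in a real implementation both sides would be embedded with the Embeddings endpoint.

```python
from math import sqrt


def _cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))


def classify_by_embedding(text_embedding, label_embeddings):
    """Pick the label whose embedding is most similar to the input text.

    `label_embeddings` maps label -> embedding vector. Because labels are
    just keys here, they can change without retraining anything.
    """
    return max(label_embeddings,
               key=lambda lbl: _cosine(text_embedding, label_embeddings[lbl]))
```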
If you’re not familiar with embeddings, you can learn more by visiting our [guide to embeddings](https://beta.openai.com/docs/guides/embeddings/use-cases). If you’re using a small dataset (<10,000 documents), consider using the techniques described in that guide to find the best documents with which to construct a prompt similar to [this](#h_e63b71a5c8). Then, you can simply submit that prompt to our [Completions](https://beta.openai.com/docs/api-reference/completions) endpoint. If you have a larger dataset, consider using a vector search engine like [Pinecone](https://share.streamlit.io/pinecone-io/playground/beyond_search_openai/src/server.py) or [Weaviate](https://weaviate.io/developers/weaviate/current/retriever-vectorizer-modules/text2vec-openai.html) to power that search.

Option 3: Reimplement existing functionality
--------------------------------------------

If you’d like to recreate the functionality of the Classifications endpoint, here’s how we did it. Most of this functionality is also replicated in this [script](https://github.com/openai/openai-cookbook/blob/main/transition_guides_for_deprecated_API_endpoints/classification_functionality_example.py).

At a high level, there are two main ways you can use the Classifications endpoint: you can source the data from an uploaded file or send it in with the request.

If you're using the document parameter
--------------------------------------

There’s only one step if you provide the documents in the Classifications API call. Here are roughly the steps we used:

* Construct the prompt [with this format](#h_e63b71a5c8).
* Gather all of the provided documents. If they fit in the prompt, just use all of them.
* Do an [OpenAI search](https://beta.openai.com/docs/api-reference/searches) (also being deprecated; please see its [transition guide](https://help.openai.com/en/articles/6272952-search-transition-guide)) where the documents are the user-provided documents and the query is the query from above. Rank the documents by score.
* In order of score, attempt to add documents until you run out of space in the context. Try to maximize the number of distinct labels, as that will help the model understand the different labels that are available.
* Request a completion with the provided parameters (`logit_bias`, `n`, `stop`, etc.).

Throughout all of this, you’ll need to check that the prompt’s length doesn’t exceed [the model's token limit](https://beta.openai.com/docs/engines/gpt-3). To count the number of tokens in a prompt, we recommend [transformers.GPT2TokenizerFast](https://huggingface.co./docs/transformers/model_doc/gpt2#transformers.GPT2TokenizerFast).

If you're using the file parameter
----------------------------------

### Step 1: Upload a jsonl file

Behind the scenes, we upload new files meant for classifications to an Elasticsearch cluster. Each line of the jsonl is then submitted as a document. Each line requires a “text” field, a “label” field, and an optional “metadata” field.

These are the Elasticsearch settings and mappings for our index:

[Elasticsearch mapping](https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping.html):

```
{
    "properties": {
        "document": {"type": "text", "analyzer": "standard_bigram_analyzer"},  # the "text" field
        "label": {"type": "text", "analyzer": "standard_bigram_analyzer"},
        "metadata": {"type": "object", "enabled": False},  # the "metadata" field
    }
}
```

[Elasticsearch analyzer](https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping.html):

```
{
    "analysis": {
        "analyzer": {
            "standard_bigram_analyzer": {
                "type": "custom",
                "tokenizer": "standard",
                "filter": ["lowercase", "english_stop", "shingle"],
            }
        },
        "filter": {"english_stop": {"type": "stop", "stopwords": "_english_"}},
    }
}
```

After that, we performed [standard Elasticsearch search calls](https://elasticsearch-py.readthedocs.io/en/v8.2.0/api.html#elasticsearch.Elasticsearch.search) and used `max_examples` to determine the number of documents to return from Elasticsearch.

### Step 2: Search

Here are roughly the steps we used. Our end goal is to create a [Completions](https://beta.openai.com/docs/api-reference/completions) request [with this format](#h_e63b71a5c8). It will look very similar to [Documents](#h_51fe4aed6d). From there, our steps are:

* Start with the `experimental_alternative_question` or, if that's not provided, what’s in the `question` field. Call that the query.
* Query Elasticsearch for `max_examples` documents, with the query as the search param.
* Take those documents and do an [OpenAI search](https://beta.openai.com/docs/api-reference/searches) on them, where the entries from Elasticsearch are the docs and the query is the query you used above. Use the score from the search to rank the documents.
* In order of score, attempt to add Elasticsearch documents until you run out of space in the prompt. Try to maximize the number of distinct labels, as that will help the model understand the different labels that are available.
* Request an OpenAI completion with the provided parameters (`logit_bias`, `n`, `stop`, etc.). Return that generation to the user.

Completion Prompt
-----------------

```
#{{ an optional instruction }}

Text: #{{example 1 text}}
Category: #{{example 1 label}}
---
Text: #{{example 2 text}}
Category: #{{example 2 label}}
---
Text: #{{question}}
Category:
```
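The classification prompt above can be assembled with plain string formatting. A minimal sketch (the function name is illustrative), where `examples` is a list of (text, label) pairs:

```python
def build_classification_prompt(examples, question, instruction=None):
    """Assemble a prompt in the classification format shown above."""
    # Text/Category example blocks plus the open-ended question,
    # separated by "---" dividers
    body = "\n---\n".join(
        [f"Text: {text}\nCategory: {label}" for text, label in examples]
        + [f"Text: {question}\nCategory:"]
    )
    # The optional instruction goes first, set off by a blank line
    return f"{instruction}\n\n{body}" if instruction else body
```

The model's completion after the final `Category:` is the predicted label.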
Classifications Transition Guide
6272941
https://help.openai.com/en/articles/6272941-classifications-transition-guide
Introduction
============

Since releasing the Search endpoint, we’ve developed new methods that achieve better results for this task. As a result, we’ll be removing the Search endpoint from our documentation and removing access to this endpoint for all organizations on December 3, 2022. New accounts created after June 3rd will not have access to this endpoint. We strongly encourage developers to switch over to newer techniques, outlined below, which produce better results.

Current documentation
---------------------

<https://beta.openai.com/docs/guides/search>

<https://beta.openai.com/docs/api-reference/searches>

Options
=======

These options are also outlined [here](https://github.com/openai/openai-cookbook/tree/main/transition_guides_for_deprecated_API_endpoints).

Option 1: Transition to Embeddings-based search (recommended)
-------------------------------------------------------------

We believe that most use cases will be better served by moving the underlying search system to a vector-based embedding search. The major reason for this is that our current system used a bigram filter to narrow down the scope of candidates, whereas our embeddings system has much more contextual awareness. In general, using embeddings will also be considerably lower cost in the long run. If you’re not familiar with embeddings, you can learn more by visiting our [guide to embeddings](https://beta.openai.com/docs/guides/embeddings/use-cases). If you have a larger dataset (>10,000 documents), consider using a vector search engine like [Pinecone](https://www.pinecone.io) or [Weaviate](https://weaviate.io/developers/weaviate/current/retriever-vectorizer-modules/text2vec-openai.html) to power that search.
Option 2: Reimplement existing functionality
--------------------------------------------

If you’re using the document parameter
--------------------------------------

The current `openai.Search.create` and `openai.Engine.search` code can be replaced with this [snippet](https://github.com/openai/openai-cookbook/blob/main/transition_guides_for_deprecated_API_endpoints/search_functionality_example.py) (note this will only work with non-Codex engines, since Codex engines use a different tokenizer). We plan to move this snippet into the openai-python repo under `openai.Search.create_legacy`.

If you’re using the file parameter
----------------------------------

As a quick review, here are the high-level steps of the current Search endpoint with a file:

![](https://openai.intercom-attachments-7.com/i/o/524222854/57382ab799ebe9bb988c0a1f/_y63ycSmtiFAS3slJdbfW0Mz-0nx2DP4gNAjyknMAmTT1fQUE9d7nha5yfsXJLkWRFmM41uvjPxi2ToSW4vrF7EcasiQDG51CrKPNOpXPVG4WZXI8jC8orWSmuGhAGGC4KoUYucwJOh0bH9Nzw)

### Step 1: Upload a jsonl file

Behind the scenes, we upload new files meant for file search to an Elasticsearch cluster. Each line of the jsonl is then submitted as a document. Each line requires a “text” field and an optional “metadata” field.
These are the Elasticsearch settings and mappings for our index:

[Elasticsearch mapping](https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping.html):

```
{
    "properties": {
        "document": {"type": "text", "analyzer": "standard_bigram_analyzer"},  # the "text" field
        "metadata": {"type": "object", "enabled": False},  # the "metadata" field
    }
}
```

[Elasticsearch analyzer](https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping.html):

```
{
    "analysis": {
        "analyzer": {
            "standard_bigram_analyzer": {
                "type": "custom",
                "tokenizer": "standard",
                "filter": ["lowercase", "english_stop", "shingle"],
            }
        },
        "filter": {"english_stop": {"type": "stop", "stopwords": "_english_"}},
    }
}
```

After that, we performed [standard Elasticsearch search calls](https://elasticsearch-py.readthedocs.io/en/v8.2.0/api.html#elasticsearch.Elasticsearch.search) and used `max_rerank` to determine the number of documents to return from Elasticsearch.

### Step 2: Search

Once you have the candidate documents from Step 1, you can simply make a standard `openai.Search.create` or `openai.Engine.search` call to rerank the candidates. See [Document](#h_f6ab294756).
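Since the Search endpoint itself is being deprecated, the reranking in Step 2 can instead be done with embeddings, per Option 1. A minimal sketch, assuming the query and candidate embeddings are precomputed (in practice they would come from the Embeddings endpoint):

```python
from math import sqrt


def rerank(query_embedding, candidates):
    """Rerank Elasticsearch candidates by embedding similarity, best first.

    `candidates` is a list of (document_text, embedding) pairs.
    """
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

    return sorted(candidates,
                  key=lambda c: cosine(query_embedding, c[1]),
                  reverse=True)
```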
Search Transition Guide
6272952
https://help.openai.com/en/articles/6272952-search-transition-guide
*This article is only relevant if you started using the API before June 6, 2022.*

We are deprecating the term ‘engine’ in favor of ‘model’. Most people already use these terms interchangeably, and we consistently hear that ‘model’ is more intuitive. Moving forward, API requests will work by referencing a ‘model’ instead of an ‘engine’. If you have used a fine-tuned model, then you are already familiar with using ‘model’ instead of ‘engine’ when making an API request. Engine listing is also being replaced by Model listing, which will consolidate both base and fine-tuned models in a single place.

**We will maintain backward compatibility for requests using ‘engine’ as a parameter, but recommend updating your implementation as soon as you can to prevent future confusion.**

For example, a request to the completions endpoint would now be (full details in our [API reference](https://beta.openai.com/docs/api-reference)):

**Deprecated:**

```
response = openai.Completion.create(
    engine="text-davinci-002",
    prompt="Say hello world three times.",
    temperature=0.6)
```

**Current:**

```
response = openai.Completion.create(
    model="text-davinci-002",
    prompt="Say hello world three times.",
    temperature=0.6)
```

**Deprecated:**

```
openai api completions.create -e text-davinci-002 -p "Say hello world three times."
```

**Current:**

```
openai api completions.create -m text-davinci-002 -p "Say hello world three times."
```

**Deprecated:**

```
curl https://api.openai.com/v1/engines/text-davinci-002/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"prompt": "Say hello world three times", "temperature": 0.6}'
```

**Current:**

```
curl https://api.openai.com/v1/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"prompt": "Say hello world three times", "model": "text-davinci-002", "temperature": 0.6}'
```

We have updated endpoint URL paths accordingly (full details in our [API reference](https://beta.openai.com/docs/api-reference)):

| **Deprecated** | **Current** |
| --- | --- |
| `https://api.openai.com/v1/engines/{engine_id}/completions` | `https://api.openai.com/v1/completions` |
| `https://api.openai.com/v1/engines/{engine_id}/embeddings` | `https://api.openai.com/v1/embeddings` |
| `https://api.openai.com/v1/engines` | `https://api.openai.com/v1/models` |
| `https://api.openai.com/v1/engines/{engine_id}/edits` | `https://api.openai.com/v1/edits` |
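The URL migration above is mechanical: drop the `/engines/{engine_id}` path segment and move the engine id into the request body as `model`. A hypothetical helper (not part of the openai-python library) illustrating the rewrite for per-engine endpoints like completions:

```python
def migrate_request(url, body):
    """Rewrite a legacy engines URL + body to the current form.

    Turns https://api.openai.com/v1/engines/{engine_id}/completions into
    https://api.openai.com/v1/completions and moves the engine id into the
    request body as "model". Illustrative only; URLs already in the new
    style are returned unchanged.
    """
    prefix = "https://api.openai.com/v1/engines/"
    if not url.startswith(prefix):
        return url, body  # already in the new style
    engine_id, _, resource = url[len(prefix):].partition("/")
    new_body = dict(body, model=engine_id)
    return f"https://api.openai.com/v1/{resource}", new_body
```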
What happened to ‘engines’?
6283125
https://help.openai.com/en/articles/6283125-what-happened-to-engines
Thank you for trying our generative AI tools! In your usage, you must adhere to our [Content Policy](https://labs.openai.com/policies/content-policy):

**Do not attempt to create, upload, or share images that are not G-rated or that could cause harm.**

* **Hate:** hateful symbols, negative stereotypes, comparing certain groups to animals/objects, or otherwise expressing or promoting hate based on identity.
* **Harassment:** mocking, threatening, or bullying an individual.
* **Violence:** violent acts and the suffering or humiliation of others.
* **Self-harm:** suicide, cutting, eating disorders, and other attempts at harming oneself.
* **Sexual:** nudity, sexual acts, sexual services, or content otherwise meant to arouse sexual excitement.
* **Shocking:** bodily fluids, obscene gestures, or other profane subjects that may shock or disgust.
* **Illegal activity:** drug use, theft, vandalism, and other illegal activities.
* **Deception:** major conspiracies or events related to major ongoing geopolitical events.
* **Political:** politicians, ballot-boxes, protests, or other content that may be used to influence the political process or to campaign.
* **Public and personal health:** the treatment, prevention, diagnosis, or transmission of diseases, or people experiencing health ailments.
* **Spam:** unsolicited bulk content.

**Don’t mislead your audience about AI involvement.**

* When sharing your work, we encourage you to proactively disclose AI involvement in your work.
* You may remove the DALL·E signature if you wish, but you may not mislead others about the nature of the work. For example, you may not tell people that the work was entirely human generated or that the work is an unaltered photograph of a real event.

**Respect the rights of others.**

* Do not upload images of people without their consent.
* Do not upload images to which you do not hold appropriate usage rights.
* Do not create images of public figures.
Are there any restrictions to how I can use DALL·E 2? Is there a content policy?
6338764
https://help.openai.com/en/articles/6338764-are-there-any-restrictions-to-how-i-can-use-dall-e-2-is-there-a-content-policy
As we ramp up DALL·E access, safe usage of the platform is our highest priority. Our filters aim to detect generated content that could be sensitive or unsafe. We’ve built the filters to err on the side of caution, so innocent prompts will occasionally be flagged as unsafe. Although suspensions are automatic, we manually review each suspension to determine whether it was justified. If it wasn’t justified, we reinstate access right away. If you have any questions about your usage, please see our [Content Policy](https://labs.openai.com/policies/content-policy).
I received a warning while using DALL·E 2. Will I be banned?
6338765
https://help.openai.com/en/articles/6338765-i-received-a-warning-while-using-dall-e-2-will-i-be-banned
If your account access has been deactivated, it's likely due to a violation of our [content policy](https://labs.openai.com/policies/content-policy) or [terms of use](https://labs.openai.com/policies/terms). If you believe this happened in error, please start a conversation with us from the Messenger at the bottom right of the screen. Choose the "DALL·E" option, select "Banned User Appeal", and include a justification for why your account should be reactivated.
Why was my DALL·E 2 account deactivated?
6378378
https://help.openai.com/en/articles/6378378-why-was-my-dall-e-2-account-deactivated
### **Deleting your account is permanent and cannot be undone.**

**Deleting your account will prevent you from using the account to access OpenAI services, including ChatGPT, API, and DALL·E.** You will NOT be able to create a new account using the same email address. If you delete your account, we will delete your data within 30 days, except that we may retain a limited set of data for longer where required or permitted by law.

**Account Deletion**
====================

**Option 1: Use privacy.openai.com**
------------------------------------

You can request deletion of your account by submitting a “delete my data” request through [privacy.openai.com](https://privacy.openai.com/policies). On that page, click **Make a Privacy Request** in the top right corner:

![](https://downloads.intercomcdn.com/i/o/930061971/c44535b8da5bff44ad6d0e86/Screenshot+2024-01-10+at+11.30.49%E2%80%AFAM.png)

Then, in the popup that appears (below), choose **Delete my OpenAI account**:

![](https://downloads.intercomcdn.com/i/o/929930246/4ccae9023c591308b39da8ec/Screenshot+2024-01-09+at+2.56.54+PM.png)

**Option 2: Self-serve**
------------------------

1. [Sign in to ChatGPT](https://chat.openai.com/chat)
2. In the bottom left, click on Settings
3. Free: ![](https://openai.intercom-attachments-7.com/i/o/845964781/3b22386c5e0a934e189dfbfd/8KwrupjnqkkSX2oOHiVdgbxO6yWlb7XtwZoheFdQu1PLzXgQ39gLLurIEjWvoYwVBTrttaHjnDs8GgGeXKR5PiRdp97pr54myEkfN4qhvxFWpGY_OwmGJcWRnBgta1zCw8bW8T4usNO8JBRdjXPl7gQ)
4. Plus: ![](https://openai.intercom-attachments-7.com/i/o/845964790/0a936cd55abd10ffc72e7314/NmprSoTHRT2_T6gfKLzcPrwhvORkEIny1Hc3tbBY0LSunDSh6zUofXEca_7ubsLqC4AcsaSpFmUE_qKgR3ZwRsF0zMLOOkk8jnM0oJn8_dJBBobh5r6tBo0tPUIVgq3_8CBNVR4Chp58RRCZ8T3tAvM)
5. In the Settings modal, click on Data controls ![](https://openai.intercom-attachments-7.com/i/o/845964797/96de95d02407226fea1e7831/z1D7-qcFdMg-F14Oz5RAwUv0glyw2tyUtVtwYV-J-47GJ2ZrqdPaEhP4oWksdrc-DbV-EVTMyKMLgmmNrvT5ozzOZn0FZvRaIHLX8GWWov8JxPdevhqVxuRuhhVk7txi0i0Qv9DTn_ZuzZ9e8XCb0VI)
6. Under Delete account, click Delete
   * You may only delete your account if you have logged in within the last 10 minutes.
   * If you haven't logged in within the last 10 minutes, you'll see this modal, where you need to refresh your login (i.e., sign in again): ![](https://openai.intercom-attachments-7.com/i/o/845964809/2ec57583a8c7ba004e68842e/UWoPCqqR0iyVb83H8FbpQI5IYqIdDZZs3VAuGdNz4QKpweLHSKJDbmherTHn-PL272CZEfTHZTQCDc8j3AlkF0oGw9Z7Jmz9aG84IPyJ_Ovtg-n8IDfrwOQ0Lvwl2x18TPAzkshiibQaQkuSRbAG8SA)
7. A confirmation modal will appear, where you need to type your account email + "DELETE" into the input fields to unlock the "Permanently delete my account" button: ![](https://openai.intercom-attachments-7.com/i/o/845964813/b0a4ea33e195e827db5434ba/NhaR53ZYFKY8KE1414JY5Giv7nV4hen1ZSSJ-mCHBivLZHxnkbS1Uxkmxkzy7NyRkycq1L8raQ5KxlgQsuat58tW8aEkks2EvUumlDFweY1_soJg4-hg7k8EF9rQEBjo5XnebXQRVi74foWFq-iLS4Q)
8. Once the inputs are filled out, the "Permanently delete my account" button is unlocked.
9. Click "Permanently delete my account" to delete your account.

**Common issues**
=================

**Chat retention for deleted vs archived chats**
------------------------------------------------

**Deleted chats** are hard deleted from our systems within 30 days, unless they have been de-identified and disassociated from your account. If you have not [opted out](https://help.openai.com/en/articles/5722486-how-your-data-is-used-to-improve-model-performance), we may use these de-identified chats for training to improve model performance.

**Archived chats** are retained just like your unarchived chats. Archiving a chat simply removes it from your chat history sidebar.
Archived chats can be found and managed in your ChatGPT Settings. For more, see [How chat retention works in ChatGPT](https://help.openai.com/en/articles/8809935-how-chat-retention-works-in-chatgpt).

**User content opt-out**
------------------------

**ChatGPT, DALL·E, and our other services for individuals**

When you use ChatGPT, DALL·E, and our other services for individuals, we may use the content you provide to improve model performance. Learn more about your choices on how we use your content to improve model performance [here](https://help.openai.com/en/articles/5722486-how-your-data-is-used-to-improve-model-performance).

**Enterprise services (such as API and ChatGPT Enterprise)**

OpenAI does not train on business data. Learn more about our Enterprise privacy commitments [here](https://openai.com/enterprise-privacy).

**If I delete my account, can I create a new account with the same email?**
---------------------------------------------------------------------------

No. You cannot create a new account using the same email address.

![](https://downloads.intercomcdn.com/i/o/925080821/de3ef0750cb15fbef5602d66/Screenshot+2024-01-04+at+10.40.29%E2%80%AFAM.png)

**Can I reactivate my account after it's been deleted?**
--------------------------------------------------------

No. But you can create a new account with a different email address. Click “Sign up” on the [ChatGPT login page](https://chat.openai.com/auth/login) or our [API login page](https://platform.openai.com/login). There are a couple of caveats to be mindful of:

* Email addresses: **You'll need to use a new email address.**
  + Since every email address is unique per account, we require a different email address for new accounts. If you don't have an alternative email address, you can try using what's known as an email subaddress: instead of [[email protected]](mailto:[email protected]), try [[email protected]](mailto:[email protected]). Emails to this address should still go to the same inbox (everything after the + is typically ignored by your email provider), but we'll treat it as a unique email address.
* Phone numbers: New accounts are still subject to our limit of [3 accounts per phone number](https://help.openai.com/articles/6613520-phone-verification-faq#h_de13bb96c0). Deleted accounts also count toward this limit; deleting an account does not free up another spot. A phone number can only ever be used up to 3 times for verification to generate the first API key for your account on platform.openai.com.
  + Phone verification is **not** required to create an OpenAI account.
  + Phone verification is required for a new account to generate its first API key on platform.openai.com.

**We don't support unlinking a phone number from an existing account**
----------------------------------------------------------------------

We do not allow you to unlink phone numbers from existing accounts.

**How many times can I use my phone number to create OpenAI accounts?**
-----------------------------------------------------------------------

A phone number can only ever be used for phone verification up to 3 times. This means that if you have 3 OpenAI accounts, you can use the same number for all three when completing phone verification on each initial API key generation across those accounts. For anti-fraud and abuse reasons, we do **not** allow you to unlink phone numbers from OpenAI accounts to free up that number for reuse. This means deleting an OpenAI account does **not** free up the number to get around the limit; there is no workaround. See our [Phone Verification FAQ](https://help.openai.com/en/articles/6613520-phone-verification-faq).

Can I change my authentication method after account deletion?
-------------------------------------------------------------

⚠️ Deleting your account does **NOT** allow you to change your authentication method.
That said, if you originally signed up for OpenAI / ChatGPT **with an email and password**, you can choose Google or Apple login on future logins, which lets users in that situation sign in either way.
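The email subaddress workaround described above can be sketched as follows (a minimal illustration; the addresses are made up, and whether everything after the `+` is ignored depends on your email provider):

```python
def subaddress(email: str, tag: str) -> str:
    """Insert a +tag before the @ to form an email subaddress.

    Most providers deliver mail for the subaddress to the base inbox,
    while account systems treat it as a distinct address.
    """
    local, _, domain = email.partition("@")
    return f"{local}+{tag}@{domain}"

# Hypothetical example address:
print(subaddress("jane.doe@example.com", "openai"))  # jane.doe+openai@example.com
```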
How to delete your account
6378407
https://help.openai.com/en/articles/6378407-how-to-delete-your-account
`💡Note: DALL·E API is billed separately from labs.openai.com. Credits granted/purchased on labs.openai.com do not apply to DALL·E API. For the latest information on DALL·E API pricing, please see our [pricing page](https://openai.com/api/pricing).` **What’s a DALL·E Credit?** * You can use a DALL·E credit for a single request at labs.openai.com: generating images through a text prompt, an edit request, or a variation request. * Credits are deducted only for requests that return generations, so they won’t be deducted for content policy warnings and system errors. **What are free credits?** * Free credits are available to early adopters who signed up to use DALL·E before April 6, 2023. * They expire one month after they are granted. * Free credits replenish monthly. + For example, if you received credits on August 3rd, your free credits will refill on September 3rd. + If you joined on the 29th, 30th, or 31st of any month, your free credits will refill on the 28th of every month. **How do I buy DALL·E credits?** * You can buy DALL·E credits by using the “Buy Credits” button on your account page, or in the profile photo dropdown menu. **How do DALL·E credits work if I belong to a multi-person organization account?** * Both free and paid credits are shared within each org. * Only the owners of an org account can buy credits for the org. **What are the differences between free and paid credits?** * Free credits expire one month after they are granted, and paid credits expire 12 months from the date of purchase. * You currently get the same set of rights (including commercial use), regardless of whether an image was generated through a free or paid credit.
How DALL·E Credits Work
6399305
https://help.openai.com/en/articles/6399305-how-dall-e-credits-work
Yes! Please check out our [DALL·E API FAQ](https://help.openai.com/en/articles/6705023) for information about the API.
Is DALL·E available through an API?
6402865
https://help.openai.com/en/articles/6402865-is-dall-e-available-through-an-api
Subject to the [Content Policy](https://labs.openai.com/policies/content-policy) and [Terms](https://openai.com/api/policies/terms/), you own the images you create with DALL·E, including the right to reprint, sell, and merchandise – regardless of whether an image was generated through a free or paid credit.
Can I sell images I create with DALL·E?
6425277
https://help.openai.com/en/articles/6425277-can-i-sell-images-i-create-with-dall-e
You can log in to access DALL·E 2 by using the button below. [Login to DALL·E 2](http://labs.openai.com/auth/login)
Where can I access DALL·E 2?
6431339
https://help.openai.com/en/articles/6431339-where-can-i-access-dall-e-2
Unfortunately, it's not currently possible to change the email address or the sign-in method associated with your DALL·E 2 account. You will need to continue using the same email address to log in.
Can I change the email address I use to sign-in to DALL•E 2?
6431922
https://help.openai.com/en/articles/6431922-can-i-change-the-email-address-i-use-to-sign-in-to-dall-e-2
**Commercialization Questions** =============================== * **Can I use DALL·E for commercial uses, including NFTs and freelancing?** Yes. * **Can I sell DALL·E generations I created during the research preview?** Yes. * **Can I remove the watermark?** Yes. * **Are alternate payment options available?** At this time, we only accept payment via credit card. * **Where can I see how many credits I have?** You can see your credit amount by going to [labs.openai.com/account](https://labs.openai.com/account) or by selecting your icon in the top right corner. Note: DALL·E API is billed separately from labs.openai.com. Credits granted/purchased on labs.openai.com do not apply to DALL·E API. For the latest information on DALL·E API pricing, please see our [pricing page](https://openai.com/api/pricing). * **Do credits roll over month to month?** Free credits do not roll over month to month; please see "[How DALL·E Credits Work](https://help.openai.com/en/articles/6399305-how-dall-e-credits-work)" for details. **Product Questions** ===================== * **Why are parts of my images cropped?** In its current version, DALL·E can only produce square images. * **Can DALL·E transform the style of my image into another style?** We currently don't support transforming the style of an image into another style. However, you can edit parts of a generated image and recreate them in a style you define in the prompt. * **Is DALL·E available through an API?** Yes! Please see the [Image Generation guide](https://beta.openai.com/docs/guides/images/introduction) to learn more. * **Now that the credit system is in place, is there still a 50-image per day limit?** No, there's no longer a 50-image per day limit. **Policy Questions** ==================== * **Why did I receive a content filter warning?** Our filter aims to detect generated text that could be sensitive or unsafe. 
The filter sometimes makes mistakes; we have currently built it to err on the side of caution, which results in more false positives. We're working on improving our filters, so this should become less of an issue in the future.
DALL·E - Content Policy FAQ
6468065
https://help.openai.com/en/articles/6468065-dall-e-content-policy-faq
This article reflects a historical pricing update; please visit openai.com/api/pricing for the most up-to-date pricing. --- **1. What are the pricing changes?** We’re reducing the price per token for our standard GPT-3 and Embeddings models. Fine-tuned models are not affected. For details on this change, please see our pricing page: <https://openai.com/api/pricing/>

| **MODEL** | **BEFORE** | **ON SEPT 1** |
| --- | --- | --- |
| Davinci | $0.06 / 1k tokens | $0.02 / 1k tokens |
| Curie | $0.006 / 1k tokens | $0.002 / 1k tokens |
| Babbage | $0.0012 / 1k tokens | $0.0005 / 1k tokens |
| Ada | $0.0008 / 1k tokens | $0.0004 / 1k tokens |
| Davinci Embeddings | $0.6 / 1k tokens | $0.2 / 1k tokens |
| Curie Embeddings | $0.06 / 1k tokens | $0.02 / 1k tokens |
| Babbage Embeddings | $0.012 / 1k tokens | $0.005 / 1k tokens |
| Ada Embeddings | $0.008 / 1k tokens | $0.004 / 1k tokens |

**2. When will this price reduction take effect?** These changes will take effect on September 1, 2022 00:00:00 UTC. **3. What led you to drop the prices?** We have been looking forward to reducing pricing for a long time. Our teams have made incredible progress in making our models more efficient to run, which has reduced the cost of serving them, and we are now passing these savings along to our customers. **4. Which models are affected by this change?** The change affects our standard GPT-3 and Embeddings models. Fine-tuned models are not affected. 
As of August 2022, these models include:

* text-davinci-002
* text-curie-001
* text-babbage-001
* text-ada-001
* davinci
* curie
* babbage
* ada
* text-similarity-ada-001
* text-similarity-babbage-001
* text-similarity-curie-001
* text-similarity-davinci-001
* text-search-ada-doc-001
* text-search-ada-query-001
* text-search-babbage-doc-001
* text-search-babbage-query-001
* text-search-curie-doc-001
* text-search-curie-query-001
* text-search-davinci-doc-001
* text-search-davinci-query-001
* code-search-ada-code-001
* code-search-ada-text-001
* code-search-babbage-code-001
* code-search-babbage-text-001

**5. Can I get a refund for my previous usage?** Our new pricing is effective September 1, 2022 00:00:00 UTC. We will not be issuing refunds. **6. How does it affect my existing usage limits this month?** This change will not change the soft or hard usage limits configured on your account. If you would like to change your usage limits, you can adjust them anytime in your [account settings](https://beta.openai.com/account/billing/limits). **7. Are the changes going to be reflected on the October bill?** Changes will be reflected on the September invoice, which will be issued in October. You will also be able to see the changes in the usage panel in your account settings on September 1st. If you have any other questions about the pricing update, please log into your account and start a new conversation using the on-site chat tool.
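As a worked example of the rates above, a request's cost is (price per 1k tokens ÷ 1,000) × tokens used. A quick sketch (the model keys and rounding here are illustrative, not an official calculator):

```python
# USD per 1,000 tokens, using the "ON SEPT 1" column of the table above.
PRICE_PER_1K = {
    "davinci": 0.02,
    "curie": 0.002,
    "babbage": 0.0005,
    "ada": 0.0004,
}

def request_cost(model: str, total_tokens: int) -> float:
    """Prompt and completion tokens are billed at the same per-token rate."""
    return PRICE_PER_1K[model] / 1000 * total_tokens

print(round(request_cost("davinci", 1500), 6))  # 0.03 -> three cents for 1,500 tokens
```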
September 2022 - OpenAI API Pricing Update FAQ
6485334
https://help.openai.com/en/articles/6485334-september-2022-openai-api-pricing-update-faq
The Content filter preferences can be found in the [Playground](https://beta.openai.com/playground) page underneath the "..." menu button. ![](https://downloads.intercomcdn.com/i/o/569474034/375e088de97e9823f528a1ec/image.png) Once opened, you can toggle the settings on and off to stop the warning message from showing. ![](https://downloads.intercomcdn.com/i/o/569474316/c0433ad29b7c3a86c96e97c5/image.png) Please note that although the warnings will no longer show, the OpenAI [content policy](https://beta.openai.com/docs/usage-guidelines/content-policy) is still in effect.
How can I deactivate the content filter in the Playground?
6503842
https://help.openai.com/en/articles/6503842-how-can-i-deactivate-the-content-filter-in-the-playground
The DALL·E editor interface helps you edit images through inpainting and outpainting, giving you more control over your creative vision. ![](https://downloads.intercomcdn.com/i/o/571871271/eb4c662a2316d5cf2f753c60/Screen+Shot+2022-08-30+at+2.40.28+PM.png) The editor interface is in beta – there are a number of things to keep in mind while using this interface: * The newest editor experience is only available on desktop at the moment; we'll be rolling out these features to smaller screens in the coming months. * Expanded images are not currently saved automatically, so make sure to download your incremental work often to avoid losing anything. * You cannot yet save expanded images to a collection or view the full image in your history, but we hope to add this soon. * For very large images, your browser may experience lag while downloading. Make sure to download often to avoid losing work due to browser freezes! The FAQ below will help you learn how to get the most out of these new tools: How do I access the DALL·E editor? ================================== Once you're logged in on a desktop device, you can launch the editor in two ways: * **Start with an image**: From any image on the DALL·E website, you can click the "Edit" button to drop into an editor with that image as the starting point. * **Start with a blank canvas:** If you'd prefer to start from scratch, you can bookmark and use the following URL: https://labs.openai.com/editor While users on mobile devices don't have access to advanced editor features like outpainting, you can still inpaint images by tapping "Edit" on an existing image you've generated or uploaded. How much does usage of the DALL·E editor cost? ============================================== Like DALL·E's other functionality, each prompt you submit by clicking the "Generate" button will deduct one credit from your credit balance (regardless of how many pixels you are filling in). 
You can always purchase additional credits from the user dropdown at the top right of the application. How do I use the editor most effectively? ========================================= The **Generation frame** contains the image context that the model will see when you submit a text prompt, so make sure that it contains enough useful context for the area you are expanding into; otherwise, the style may drift from the rest of your image. ![](https://downloads.intercomcdn.com/i/o/571876595/9e431c455e24421079bee9d3/Screen+Shot+2022-08-30+at+2.55.38+PM.png) You can simultaneously **Erase** parts of your image to touch up or replace certain areas, and perfect the finer details. You can also **Upload** existing images, optionally resize them, and then place them within the canvas to bring additional imagery into the scene. This is a powerful feature that enables you to fuse images together, connect opposite ends of an image for loops, and "uncrop" images that you can combine with other tooling to create recursively expanding animations. The **Download** tool will export the latest state of the artwork as a .png file. We recommend downloading often to keep snapshots of your work. You can always re-upload previous snapshots to continue where you left off. What keyboard shortcuts are supported? ====================================== The editor supports keyboard shortcuts for zooming, switching tools, undo/redo, and more. Press **?** while using the editor to show the full list of keyboard shortcuts. Are there any other tips & tricks to be aware of? ================================================= * Start with the character before the landscape, if there are characters involved, so you can get the body morphology right before filling the rest. * Make sure you're keeping enough of the existing image in the generation frame to avoid the style drifting too much. 
* Ask DALL·E for a muted color palette, especially as you stray further from the center, to avoid oversaturation and color-blasting. * Consider what story you’re trying to tell when picking the direction you want to expand the image into.
DALL·E Editor Guide
6516417
https://help.openai.com/en/articles/6516417-dall-e-editor-guide
We want to assure you that you won't be penalized for a failed generation. You won't be charged a credit if DALL·E 2 is unable to successfully generate an image based on your request. We understand that not every request will be successful, and we don't want to punish our users for that. So rest assured, you can keep trying different requests without worrying about wasting your credits on failed generations. You're only charged for successful requests. If you're looking for your generation history, you can find them on your ["My Collection"](https://labs.openai.com/collection) page. ``` This article was generated with the help of GPT-3. ```
Am I charged for a credit when my generation fails?
6582257
https://help.openai.com/en/articles/6582257-am-i-charged-for-a-credit-when-my-generation-fails
While DALL·E is continually evolving and improving, there are a few things you can do to improve your images right now. To learn how to design the best prompts for DALL·E, or to find best practices for processing images, we currently recommend: * [Guy Parsons' DALL·E 2 Prompt Book](https://dallery.gallery/the-dalle-2-prompt-book/) for guidance on designing the best prompts. * [Joining our Discord server](https://discord.com/invite/openai) and engaging with the community in channels such as #tips-and-tricks, #prompt-help, and #questions, which can be a great way to get advice and feedback from other users. If you'd like to learn more about the new Outpainting feature, check out our DALL·E Editor Guide! [DALL·E Editor Guide](https://help.openai.com/en/articles/6516417-dall-e-editor-guide) ``` This article was generated with the help of GPT-3. ```
How can I improve my prompts with DALL·E?
6582391
https://help.openai.com/en/articles/6582391-how-can-i-improve-my-prompts-with-dall-e
When you have both free and paid credits in your account, our system will automatically use the credits that are going to expire first. In most cases, this will be your free credits. However, if you have paid credits that are expiring sooner than your free credits, those will be used first. Keep in mind that paid credits typically expire in one year, while free credits typically expire within a month. ``` This article was generated with the help of GPT-3. ```
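The expire-first behavior described above can be sketched in code. This is an illustrative model only; the data shapes and dates are made up, not OpenAI's actual billing schema:

```python
from datetime import date

# Hypothetical credit buckets for one account.
credits = [
    {"type": "paid", "amount": 50, "expires": date(2024, 6, 1)},
    {"type": "free", "amount": 15, "expires": date(2023, 8, 3)},
]

def spend(credits, n):
    """Deduct n credits, drawing from the soonest-expiring bucket first."""
    for bucket in sorted(credits, key=lambda c: c["expires"]):
        used = min(n, bucket["amount"])
        bucket["amount"] -= used
        n -= used
        if n == 0:
            break
    return credits

# Spending 20 credits drains the free bucket (expires first),
# then takes the remaining 5 from the paid bucket.
spend(credits, 20)
```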
How do my free and paid credits get used?
6584194
https://help.openai.com/en/articles/6584194-how-do-my-free-and-paid-credits-get-used
Every generation you create is automatically saved in the 'All generations' tab in '[My Collection](https://labs.openai.com/collection).' You can find past generations there, as well as your saved generations in the 'Favorites' tab. ``` This article was generated with the help of GPT-3. ```
Where can I find my old and/or saved generations?
6584249
https://help.openai.com/en/articles/6584249-where-can-i-find-my-old-and-or-saved-generations
**ChatGPT** Phone verification is no longer required for new OpenAI account creation or ChatGPT usage. **API** Phone verification is now required on platform.openai.com to generate your initial API key, though not for any subsequent API key generation after that. Why do I need to provide my phone number to generate my **first** API key on platform.openai.com? ----------------------------------------------------------------------------------------------------- When you generate your first API key on platform.openai.com, we do require a phone number for security reasons. This allows us to verify your account and ensure our platform remains secure. You only need to complete phone verification when generating the first API key, not for any subsequent API keys. We don't use your phone number for any other purposes, and take your privacy very seriously. Can I use a premium number, landline, Google Voice, or other VoIP phone number? ------------------------------------------------------------------------------- We do **not** support use of landlines, VoIP, Google Voice, or premium numbers at this time. All of those types of phone numbers are often associated with higher instances of fraud or abuse. For this reason, we only support completing phone verification via a mobile phone number over an SMS text message, no exceptions. Have you always blocked VoIP numbers? ------------------------------------- Yes, we have always blocked VoIP services in the United States to ensure the safety and security of our users. Recently, we have expanded our blocking policy to include VoIP services internationally. This means that VoIP services are now blocked in countries outside the United States as well. I don't want to receive the SMS; can I verify over email or a phone call instead? --------------------------------------------------------------------------- No. The phone verification can only be completed with a text message via SMS (or WhatsApp, if available in your country). 
The code cannot be sent via email or done over phone call. Why am I not receiving my phone verification code SMS? ------------------------------------------------------ If you're not receiving your phone verification code, it's possible that our system has temporarily blocked you due to too many verification attempts or an issue occurred during your first request. Please try again in a few hours and make sure you're within cellphone coverage, and you're not using any text-blocker applications. What does this error mean? "Detected suspicious behavior from phone numbers similar to yours" --------------------------------------------------------------------------------------------- This means our system has detected unusual activity or patterns from phone numbers that are similar to the one you're using for verification. This error is triggered as a security measure to prevent potential fraud or abuse of the platform. Remember that security measures like this are in place to protect your account and maintain the integrity of the platform. Ensure that your personal information is accurate and up-to-date. How many times can I use the same phone number to complete the phone verification associated with an OpenAI account's first API key generation? ----------------------------------------------------------------------------------------------------------------------------------------------- A phone number can only ever be used for phone verification up to 3 times. This means if you have 3 OpenAI accounts you can use the same number for all three when completing phone verification on each initial API key generation across those three accounts. For anti-fraud and abuse reasons, we do **not** allow you to unlink phone numbers from OpenAI accounts to free up that number for reuse. This means deleting an OpenAI account does **not** free up the number to get around the limit. There is no workaround. How do free trial tokens work? 
------------------------------ Free trial tokens for API users on platform.openai.com are granted only the first time you sign up and complete phone verification during the first API key generation. No accounts created after that get free trial tokens, no exceptions. How do I resolve an error that I can't sign up due to an "unsupported country"? ---------------------------------------------------------------------------------- This may mean that you're trying to complete phone verification on the initial API key generation on platform.openai.com using a phone number from a country or territory we do not support. See [Supported countries and territories](https://platform.openai.com/docs/supported-countries). Which countries do you support for WhatsApp phone verification? --------------------------------------------------------------- In certain countries you can complete phone verification with WhatsApp instead of via an SMS. As of Wednesday, September 27th, 2023, the countries we support for that include: * India (IN) * Indonesia (ID) * Pakistan (PK) * Nigeria (NG) * Israel (IL) * Saudi Arabia (SA) * United Arab Emirates (AE) * Ukraine (UA) * Malaysia (MY) * Turkey (TR) ### What will phone verification look like? Our default drop-down is set to the United States, which looks like this: ![](https://downloads.intercomcdn.com/i/o/658048438/d0ae000cb03c874071cc470a/phone+verification+step+1.png) Then if you select one of the countries in our list above which include the WhatsApp alternative phone verification option - using India as an example - you'll see this UI: ![](https://downloads.intercomcdn.com/i/o/658049199/9d36ef51ff688434496e9a60/phone+verification+step+2.png) Then to get your code sent to WhatsApp you can select "YES" and that option appears: ![](https://downloads.intercomcdn.com/i/o/658049679/e35901be2b3899487a0d7c46/phone+verification+step+3.png)
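The WhatsApp availability check above amounts to a simple set membership test. A sketch (the function name and flow are hypothetical; the set reflects the list as of September 27, 2023):

```python
# Countries where WhatsApp verification is offered (ISO codes from the list above).
WHATSAPP_COUNTRIES = {
    "IN",  # India
    "ID",  # Indonesia
    "PK",  # Pakistan
    "NG",  # Nigeria
    "IL",  # Israel
    "SA",  # Saudi Arabia
    "AE",  # United Arab Emirates
    "UA",  # Ukraine
    "MY",  # Malaysia
    "TR",  # Turkey
}

def offers_whatsapp(country_code: str) -> bool:
    """Return True if the WhatsApp verification option should be shown."""
    return country_code.upper() in WHATSAPP_COUNTRIES

print(offers_whatsapp("IN"))  # True  -> WhatsApp option shown
print(offers_whatsapp("US"))  # False -> SMS only
```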
Phone verification FAQ
6613520
https://help.openai.com/en/articles/6613520-phone-verification-faq
If you're not receiving your phone verification code, it's possible that our system has temporarily blocked you due to too many verification attempts or an issue occurred during your first request. Please try again in a few hours and make sure you're within cellphone coverage and you're not using any text-blocker applications. Please note we do not allow landlines or VoIP (including Google Voice) numbers at this time. ``` This article was generated with the help of GPT-3. ```
Why am I not receiving my phone verification code?
6613605
https://help.openai.com/en/articles/6613605-why-am-i-not-receiving-my-phone-verification-code
**If you can’t log in, after having successfully logged in before…** -------------------------------------------------------------------- * Refresh your browser’s cache and cookies. We recommend using a desktop device to [log in](https://beta.openai.com/login). * Ensure that you are using the correct authentication method. For example, if you signed up using ‘Continue with Google’, try using that method to [log in](https://chat.openai.com/auth/login) too. **If you see 'There is already a user with email ...' or 'Wrong authentication method'...** * You will see this error if you attempt to log in using a different authentication method from what you originally used to register your account. Your account can only be authenticated if you log in with the auth method that was used during initial registration. For example, if you registered using Google sign-in, please continue using the same method. * If you're unsure which method you originally used to sign up, please try [signing in](https://beta.openai.com/login) with each of the following methods from a non-Firefox incognito window: + Username and password + "Continue with Google" button + "Continue with Microsoft" button **If you are trying to sign up, and you see ‘This user already exists’...** * This likely means you already began the sign-up process, but did not complete it. Try to [log in](https://beta.openai.com/login) instead. **If you received a Welcome email, but no verification email…** * Register at <https://beta.openai.com/signup>. **In the event you still receive "Something went wrong" or "Oops..." errors, please try the following:** 1. Refresh your cache and cookies, then attempt the login with your chosen authentication method. 2. Try an incognito browser window to complete sign-in. 3. Try logging in from a different browser/computer to see if the issue still persists, as a security add-in or extension can occasionally cause this type of error. 4. Try another network (wired connection, home WiFi, work WiFi, library/cafe WiFi and/or cellular network). 
Why can't I log in to OpenAI platform?
6613629
https://help.openai.com/en/articles/6613629-why-can-t-i-log-in-to-openai-platform
You should be able to reset your password by clicking 'Forgot Password' [here](https://beta.openai.com/login) while logged out. If you can't log out, try from an incognito window. If you haven't received the reset email, make sure to check your spam folder. If it's not there, consider whether you originally signed in using a different authentication method such as 'Continue with Google.' If that's the case, there's no password to reset; simply log in using that authentication method. If you need to reset your Google or Microsoft password, you'll need to do so on their respective sites. ``` This article was generated with the help of GPT-3. ```
Why can't I reset my password?
6613657
https://help.openai.com/en/articles/6613657-why-can-t-i-reset-my-password
There are two ways to contact our support team, depending on whether you have an account with us. If you already have an account, simply login and use the "Help" button to start a conversation. If you don't have an account or can't login, you can still reach us by selecting the chat bubble icon in the bottom right of help.openai.com. ``` This article was generated with the help of GPT-3. ```
How can I contact support?
6614161
https://help.openai.com/en/articles/6614161-how-can-i-contact-support
There are two main options for checking your token usage: **1. [Usage dashboard](https://beta.openai.com/account/usage)** --------------------------------------------------------------- The [usage dashboard](https://beta.openai.com/account/usage) displays your API usage during the current and past monthly billing cycles. To display the usage of a particular user of your organizational account, you can use the dropdown next to "Daily usage breakdown". **2. Usage data from the API response** --------------------------------------- You can also access token usage data through the API. Token usage information is now included in responses from completions, edits, and embeddings endpoints. Information on prompt and completion tokens is contained in the "usage" key: ``` { "id": "cmpl-uqkvlQyYK7bGYrRHQ0eXlWia", "object": "text_completion", "created": 1589478378, "model": "text-davinci-003", "choices": [ { "text": "\n\nThis is a test", "index": 0, "logprobs": null, "finish_reason": "length" } ], "usage": { "prompt_tokens": 5, "completion_tokens": 5, "total_tokens": 10 } } ```
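For example, the `usage` key can be read directly from the parsed JSON response. A minimal sketch using the sample response above (in real code the dict would come from the API client's JSON response rather than being hard-coded):

```python
# Trimmed version of the sample completions response shown above.
response = {
    "model": "text-davinci-003",
    "choices": [{"text": "\n\nThis is a test", "index": 0}],
    "usage": {"prompt_tokens": 5, "completion_tokens": 5, "total_tokens": 10},
}

usage = response["usage"]
print(f"prompt={usage['prompt_tokens']} "
      f"completion={usage['completion_tokens']} "
      f"total={usage['total_tokens']}")  # prompt=5 completion=5 total=10
```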
How do I check my token usage?
6614209
https://help.openai.com/en/articles/6614209-how-do-i-check-my-token-usage
There are three reasons you might receive the "You've reached your usage limit" error: **If you're using a free trial account:** To set up a pay-as-you-go account using the API, you'll need to enter [billing information](https://platform.openai.com/account/billing) and upgrade to a paid plan. **If you're already on a paid plan,** you may need to increase your [monthly budget](https://platform.openai.com/account/limits). To set your limit over the approved usage limit (normally $120.00/month), please review your **[Usage Limits page](https://platform.openai.com/account/limits)** for information on advancing to the next tier. If your needs exceed what's available in the 'Increasing your limits' tier or you have a unique use case, click on 'Need help?' to submit a request for a higher limit. Our team will look into your request and respond as soon as we can. **Why did I get charged if I'm supposed to have free credits?** Free trial tokens for API users on platform.openai.com are only given the first time you sign up and complete phone verification during the first API key generation. No accounts created after that will receive free trial tokens.
Why am I getting an error message stating that I've reached my usage limit?
6614457
https://help.openai.com/en/articles/6614457-why-am-i-getting-an-error-message-stating-that-i-ve-reached-my-usage-limit
If you're wondering whether OpenAI models have knowledge of current events, the answer is that it depends on the specific model. The table below breaks down the different models and their respective training data ranges.

| **Model name** | **Training data** |
| --- | --- |
| text-davinci-003 | Up to Jun 2021 |
| text-davinci-002 | Up to Jun 2021 |
| text-curie-001 | Up to Oct 2019 |
| text-babbage-001 | Up to Oct 2019 |
| text-ada-001 | Up to Oct 2019 |
| code-davinci-002 | Up to Jun 2021 |
| [Embeddings](https://beta.openai.com/docs/guides/embeddings/what-are-embeddings) models (e.g. text-similarity-ada-001) | Up to Aug 2020 |
Do the OpenAI API models have knowledge of current events?
6639781
https://help.openai.com/en/articles/6639781-do-the-openai-api-models-have-knowledge-of-current-events
You'll be billed at the end of each calendar month for usage during that month, unless the parties have agreed to a different billing arrangement in writing. Invoices are typically issued within two weeks of the end of the billing cycle. For the latest information on pay-as-you-go pricing, please see our [pricing page](https://openai.com/pricing).
When can I expect to receive my OpenAI API invoice?
6640792
https://help.openai.com/en/articles/6640792-when-can-i-expect-to-receive-my-openai-api-invoice
**Note**: It may take up to 15 minutes for a name change made on platform.openai.com to be reflected in ChatGPT. You can change your name in your user settings on platform.openai.com under User -> Settings -> User profile -> Name. <https://platform.openai.com/account/user-settings> Here is what the settings look like: ![](https://downloads.intercomcdn.com/i/o/844048451/a904206d40d58034493cb2f6/Screenshot+2023-10-02+at+2.18.43+PM.png) ChatGPT ------- Change your name on [platform.openai.com](http://platform.openai.com/) and refresh ChatGPT to see the update. Requirements ------------ 1. Must have some name value. 2. Must be 96 characters or shorter. 3. Must be only letters, certain punctuation, and spaces. No numbers.
How do I change my name for my OpenAI account?
6640864
https://help.openai.com/en/articles/6640864-how-do-i-change-my-name-for-my-openai-account
When using DALL·E in your work, it is important to be transparent about AI involvement and adhere to our [Content Policy](https://labs.openai.com/policies/content-policy) and [Terms of Use](https://labs.openai.com/policies/terms). Primarily, **don't mislead your audience about AI involvement.** * When sharing your work, we encourage you to proactively disclose AI involvement in your work. * You may remove the DALL·E signature/watermark in the bottom right corner if you wish, but you may not mislead others about the nature of the work. For example, you may not tell people that the work was entirely human generated or that the work is an unaltered photograph of a real event. If you'd like to cite DALL·E, we'd recommend including wording such as "This image was created with the assistance of DALL·E 2" or "This image was generated with the assistance of AI." ``` This article was generated with the help of GPT-3. ```
How should I credit DALL·E in my work?
6640875
https://help.openai.com/en/articles/6640875-how-should-i-credit-dall-e-in-my-work
**Receipts for credit purchases made at labs.openai.com** are sent to the email address you used when making the purchase. You can also access invoices by clicking "View payment history" in your [Labs account settings](https://labs.openai.com/account). **Please note that [DALL·E API](https://help.openai.com/en/articles/6705023)** usage is offered on a pay-as-you-go basis and is billed separately from labs.openai.com. You'll be billed at the end of each calendar month for usage during that month. Invoices are typically issued within two weeks of the end of the billing cycle. For the latest information on pay-as-you-go pricing, please see: <https://beta.openai.com/pricing>. ``` This article was generated with the help of GPT-3. ```
Where can I find my invoice for DALL·E credit purchases?
6641048
https://help.openai.com/en/articles/6641048-where-can-i-find-my-invoice-for-dall-e-credit-purchases
When you use your [fine-tuned model](https://platform.openai.com/docs/guides/fine-tuning) for the first time in a while, it might take a little while for it to load. This sometimes causes the first few requests to fail with a 429 code and an error message that reads "the model is still being loaded". The amount of time it takes to load a model will depend on the shared traffic and the size of the model. A larger model like `gpt-4`, for example, might take up to a few minutes to load, while smaller models might load much faster. Once the model is loaded, ChatCompletion requests should be much faster and you're less likely to experience timeouts. We recommend handling these errors programmatically and implementing retry logic. The first few calls may fail while the model loads. Retry the first call with exponential backoff until it succeeds, then continue as normal (see the "Retrying with exponential backoff" section of this [notebook](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_handle_rate_limits.ipynb) for examples).
What is the "model is still being loaded" error?
6643004
https://help.openai.com/en/articles/6643004-what-is-the-model-is-still-being-loaded-error
**OpenAI API** - the [Sharing & Publication policy](https://openai.com/api/policies/sharing-publication/) outlines how users may share and publish content generated through their use of the API. **DALL·E** - see the [Content policy](https://labs.openai.com/policies/content-policy) for details on what images can be created and shared.
What are OpenAI's policies regarding sharing and publication of generated content?
6643036
https://help.openai.com/en/articles/6643036-what-are-openai-s-policies-regarding-sharing-and-publication-of-generated-content
The [Embeddings](https://platform.openai.com/docs/guides/embeddings) and [Chat](https://platform.openai.com/docs/guides/chat) endpoints are a great combination to use when building a question-answering or chatbot application. Here's how you can get started: 1. Gather all of the information you need for your knowledge base. Use our Embeddings endpoint to make document embeddings for each section. 2. When a user asks a question, turn it into a query embedding and use it to find the most relevant sections from your knowledge base. 3. Use the relevant context from your knowledge base to create a prompt for the Completions endpoint, which can generate an answer for your user. We encourage you to take a look at our **[detailed notebook](https://github.com/openai/openai-cookbook/blob/main/examples/Question_answering_using_embeddings.ipynb)** that provides step-by-step instructions. If you run into any issues or have questions, don't hesitate to join our [Community Forum](https://community.openai.com/) for help. We're excited to see what you build!
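The three steps above can be sketched as a toy pipeline. This is an illustration only: the hard-coded vectors stand in for real embeddings from the Embeddings endpoint, and the returned prompt would be sent to the Chat endpoint; only the cosine-similarity retrieval step is computed for real.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Step 1: pretend we embedded each knowledge-base section.
# In real code these vectors come from the Embeddings endpoint.
knowledge_base = {
    "Refunds are processed within 5 days.": [0.9, 0.1, 0.0],
    "Our office is in San Francisco.":      [0.1, 0.9, 0.2],
}

def answer_prompt(question, query_embedding):
    # Step 2: find the most relevant section for the query embedding.
    best = max(knowledge_base,
               key=lambda s: cosine_similarity(knowledge_base[s], query_embedding))
    # Step 3: build a prompt for the Chat endpoint using that context.
    return f"Answer using this context:\n{best}\n\nQuestion: {question}"

prompt = answer_prompt("How long do refunds take?", [0.8, 0.2, 0.1])
```

With the toy query embedding above, the refunds section scores highest, so it is the context placed into the prompt.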
How to Use OpenAI API for Q&A and Chatbot Apps
6643167
https://help.openai.com/en/articles/6643167-how-to-use-openai-api-for-q-a-and-chatbot-apps
If the [`temperature`](https://platform.openai.com/docs/api-reference/chat/create#chat-create-temperature) parameter is set above 0, the model will likely produce different results each time - this is expected behavior. If you're seeing unexpected differences in the quality of completions you receive from [Playground](https://platform.openai.com/playground) vs. the API with `temperature` set to 0, there are a few potential causes to consider. First, check that your prompt is exactly the same. Even slight differences, such as an extra space or newline character, can lead to different outputs. Next, ensure you're using the same parameters in both cases. For example, the `model` parameter set to `gpt-3.5-turbo` vs. `gpt-4` will produce different completions even with the same prompt, because `gpt-4` is a newer and more capable instruction-following [model](https://platform.openai.com/docs/models). If you've double-checked all of these things and are still seeing discrepancies, ask for help on the [Community Forum](https://community.openai.com/), where users may have experienced similar issues or may be able to assist in troubleshooting your specific case.
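When comparing Playground and API results at `temperature` 0, hidden whitespace is the most common culprit. This quick check (an illustration, not an official tool) makes invisible differences visible:

```python
def show_hidden_chars(prompt: str) -> str:
    """Render a prompt with whitespace made explicit, so an extra
    space or trailing newline copied from Playground becomes visible."""
    return prompt.replace(" ", "·").replace("\n", "\\n\n")

playground_prompt = "Summarize this text: \n"
api_prompt = "Summarize this text:"

# The two prompts look identical on screen but differ by a trailing
# space and newline - enough to change the completion even at temperature 0.
identical = playground_prompt == api_prompt
```

Printing `show_hidden_chars(...)` for both prompts makes the stray trailing space and newline obvious.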
Why am I getting different completions on Playground vs. the API?
6643200
https://help.openai.com/en/articles/6643200-why-am-i-getting-different-completions-on-playground-vs-the-api
**As an "Explore" free trial API user,** you receive an initial credit of $5 that expires after three months if this is your first OpenAI account. [Upgrading to the pay-as-you-go plan](https://beta.openai.com/account/billing) will increase your usage limit to $120/month. **If you're a current API customer looking to increase your usage limit beyond your existing tier**, please review your **[Usage Limits page](https://platform.openai.com/account/limits)** for information on advancing to the next tier. Should your needs exceed what's available in the 'Increasing your limits' tier or you have a unique use case, click on 'Need help?' to submit a request for a higher limit. Our team will assess your request and respond as soon as we can.
How do I get more tokens or increase my monthly usage limits?
6643435
https://help.openai.com/en/articles/6643435-how-do-i-get-more-tokens-or-increase-my-monthly-usage-limits
If you are interested in finding and reporting security vulnerabilities in OpenAI's services, please read and follow our [Coordinated Vulnerability Disclosure Policy](https://openai.com/security/disclosure/). This policy explains how to: * Request authorization for testing * Identify what types of testing are in-scope and out-of-scope * Communicate with us securely We appreciate your efforts to help us improve our security and protect our users and technology.
How to Report Security Vulnerabilities to OpenAI
6653653
https://help.openai.com/en/articles/6653653-how-to-report-security-vulnerabilities-to-openai
💡 `If you're just getting started with OpenAI API, we recommend reading the [Introduction](https://beta.openai.com/docs/introduction/introduction) and [Quickstart](https://beta.openai.com/docs/quickstart) tutorials first.` **How prompt engineering works** ================================ Due to the way the instruction-following [models](https://beta.openai.com/docs/models) are trained or the data they are trained on, there are specific prompt formats that work particularly well and align better with the tasks at hand. Below we present a number of prompt formats we find work reliably well, but feel free to explore different formats, which may fit your task best. **Rules of Thumb and Examples** =============================== **Note**: the "*{text input here}*" is a placeholder for actual text/context **1.** Use the latest model ---------------------------- For best results, we generally recommend using the latest, most capable models. As of November 2022, the best options are the **“text-davinci-003”** [model](https://beta.openai.com/docs/models) for text generation, and the **“code-davinci-002”** model for code generation. **2. Put instructions at the beginning of the prompt and use ### or """ to separate the instruction and context** ----------------------------------------------------------------------------------------------------------------- Less effective ❌: ``` Summarize the text below as a bullet point list of the most important points. {text input here} ``` Better ✅: ``` Summarize the text below as a bullet point list of the most important points. Text: """ {text input here} """ ``` **3. Be specific, descriptive and as detailed as possible about the desired context, outcome, length, format, style, etc** --------------------------------------------------------------------------------------------------------------------------- Be specific about the context, outcome, length, format, style, etc Less effective ❌: ``` Write a poem about OpenAI. 
``` Better ✅: ``` Write a short inspiring poem about OpenAI, focusing on the recent DALL-E product launch (DALL-E is a text to image ML model) in the style of a {famous poet} ``` **4. Articulate the desired output format through examples ([example 1](https://beta.openai.com/playground/p/DoMbgEMmkXJ5xOyunwFZDHdg), [example 2](https://beta.openai.com/playground/p/3U5Wx7RTIdNNC9Fg8fc44omi)).** ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Less effective ❌: ``` Extract the entities mentioned in the text below. Extract the following 4 entity types: company names, people names, specific topics and themes. Text: {text} ``` Show and tell - the models respond better when shown specific format requirements. This also makes it easier to programmatically parse out multiple outputs reliably. Better ✅: ``` Extract the important entities mentioned in the text below. First extract all company names, then extract all people names, then extract specific topics which fit the content and finally extract general overarching themes Desired format: Company names: <comma_separated_list_of_company_names> People names: -||- Specific topics: -||- General themes: -||- Text: {text} ``` **5. Start with zero-shot, then few-shot ([example](https://beta.openai.com/playground/p/Ts5kvNWlp7wtdgWEkIAbP1hJ)), and if neither of them works, then fine-tune** ------------------------------------------------------------------------------------------------------------------------------------------------------------- ✅ Zero-shot ``` Extract keywords from the below text. Text: {text} Keywords: ``` ✅ Few-shot - provide a couple of examples ``` Extract keywords from the corresponding texts below. Text 1: Stripe provides APIs that web developers can use to integrate payment processing into their websites and mobile applications.
Keywords 1: Stripe, payment processing, APIs, web developers, websites, mobile applications ## Text 2: OpenAI has trained cutting-edge language models that are very good at understanding and generating text. Our API provides access to these models and can be used to solve virtually any task that involves processing language. Keywords 2: OpenAI, language models, text processing, API. ## Text 3: {text} Keywords 3: ``` ✅Fine-tune: see fine-tune best practices [here](https://docs.google.com/document/d/1h-GTjNDDKPKU_Rsd0t1lXCAnHltaXTAzQ8K2HRhQf9U/edit#). **6. Reduce “fluffy” and imprecise descriptions** ------------------------------------------------- Less effective ❌: ``` The description for this product should be fairly short, a few sentences only, and not too much more. ``` Better ✅: ``` Use a 3 to 5 sentence paragraph to describe this product. ``` **7. Instead of just saying what not to do, say what to do instead** -------------------------------------------------------------------- Less effective ❌: ``` The following is a conversation between an Agent and a Customer. DO NOT ASK USERNAME OR PASSWORD. DO NOT REPEAT. Customer: I can’t log in to my account. Agent: ``` Better ✅: ``` The following is a conversation between an Agent and a Customer. The agent will attempt to diagnose the problem and suggest a solution, whilst refraining from asking any questions related to PII. Instead of asking for PII, such as username or password, refer the user to the help article www.samplewebsite.com/help/faq Customer: I can’t log in to my account. Agent: ``` **8. Code Generation Specific - Use “leading words” to nudge the model toward a particular pattern** ---------------------------------------------------------------------------------------------------- Less effective ❌: ``` # Write a simple python function that # 1. Ask me for a number in mile # 2. 
It converts miles to kilometers ``` In the code example below, adding “*import*” hints to the model that it should start writing in Python. (Similarly, “SELECT” is a good hint for the start of a SQL statement.) Better ✅: ``` # Write a simple python function that # 1. Ask me for a number in mile # 2. It converts miles to kilometers import ``` **Parameters** =============== Generally, we find that **`model`** and **`temperature`** are the most commonly used parameters to alter the model output. 1. **`model` -** Higher performance [models](https://beta.openai.com/docs/models) are more expensive and have higher latency. 2. **`temperature` -** A measure of how often the model outputs a less likely token. The higher the `temperature`, the more random (and usually creative) the output. This, however, is not the same as “truthfulness”. For most factual use cases, such as data extraction and truthful Q&A, a `temperature` of 0 is best. 3. **`max_tokens`** (**maximum length)** - Does not control the length of the output, but sets a hard cutoff limit for token generation. Ideally you won’t hit this limit often, as your model will stop either when it thinks it’s finished, or when it hits a stop sequence you defined. 4. **`stop` (stop sequences)** - A set of characters (tokens) that, when generated, will cause the text generation to stop. For other parameter descriptions see the [API reference](https://beta.openai.com/docs/api-reference/completions/create).
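The four parameters above come together in a single Completions request. A minimal sketch: the payload is built as a plain dict so each parameter's role is visible; in real code you would pass it to the `openai.Completion.create` call described in the API reference.

```python
# Parameters for a factual extraction task: deterministic output,
# a generous token cap, and a stop sequence to end the completion.
params = {
    "model": "text-davinci-003",  # higher-performance models cost more and are slower
    "prompt": "Extract keywords from the text below.\n\nText: {text}\n\nKeywords:",
    "temperature": 0,             # 0 = most deterministic; best for factual tasks
    "max_tokens": 64,             # hard cutoff, not a target length
    "stop": ["\n\n"],             # generation halts if this sequence is produced
}

# In real code: response = openai.Completion.create(**params)
```

Raising `temperature` toward 1 makes the same request progressively more varied and creative, which suits brainstorming better than extraction.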
**Additional Resources** ======================== If you're interested in additional resources, we recommend: * Guides + [Text completion](https://beta.openai.com/docs/guides/completion/text-completion) - learn how to generate or edit text using our models + [Code completion](https://beta.openai.com/docs/guides/code/code-completion-private-beta) - explore prompt engineering for Codex + [Fine-tuning](https://beta.openai.com/docs/guides/fine-tuning/fine-tuning) - Learn how to train a custom model for your use case + [Embeddings](https://beta.openai.com/docs/guides/embeddings/embeddings) - learn how to search, classify, and compare text + [Moderation](https://beta.openai.com/docs/guides/moderation/moderation) * [OpenAI cookbook repo](https://github.com/openai/openai-cookbook/tree/main/examples) - contains example code and prompts for accomplishing common tasks with the API, including Question-answering with Embeddings * [Community Forum](https://community.openai.com/)
Best practices for prompt engineering with OpenAI API
6654000
https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-openai-api
**If you can’t log in, after having successfully logged in before…** -------------------------------------------------------------------- * Clear your browser’s cache and cookies. We recommend using a desktop device to [log in](https://labs.openai.com/auth/login). * Ensure that you are using the correct authentication method. For example, if you signed up using ‘Continue with Google’, try using that method to [log in](https://chat.openai.com/auth/login) too. **If you see 'There is already a user with email ...' or 'Wrong authentication method'...** * You will see this error if you attempt to log in using a different authentication method from what you originally used to register your account. Your account can only be authenticated if you log in with the auth method that was used during initial registration. For example, if you registered using Google sign-in, please continue using the same method. * If you're unsure which method you originally used for signing up, please try [signing in](https://labs.openai.com/auth/login) with each of the following methods from a non-Firefox incognito window: + Username + Password + "Continue with Google" button + "Continue with Microsoft" button **If you are trying to sign up, and you see ‘This user already exists’...** * This likely means you already began the [sign up](https://labs.openai.com/auth/login) process, but did not complete it. Try to [log in](https://labs.openai.com/auth/login) instead. **If you received a Welcome email, but no verification email…** * Register at <https://labs.openai.com/auth/login> **In the event you still receive "Something went wrong" or "Oops..."** **errors, please try the following:** 1. Clear your cache and cookies, then attempt the login with your chosen authentication method. 2. Try an incognito browser window to complete sign-in. 3.
Try logging in from a different browser/computer to see if the issue still persists, as a security add-in or extension can occasionally cause this type of error. 4. Try another network (wired connection, home WiFi, work WiFi, library/cafe WiFi and/or cellular network).
Why can't I log in to Labs / DALL•E?
6654303
https://help.openai.com/en/articles/6654303-why-can-t-i-log-in-to-labs-dall-e
**Have you ever tried to solve for x using the OpenAI playground?** ------------------------------------------------------------------- For example, solve for x: 3 x + 4 = 66 First you'd isolate terms with *x* to the left-hand side like so: 3 x + (4 - 4) = 66 - 4 then: 3 x = 62 to get the result: x = 62 / 3 ... simple, right? Unfortunately, you won’t always get the same result from the [Playground](https://beta.openai.com/playground). **Our language models currently struggle with math** ---------------------------------------------------- The models are not yet capable of performing consistently when asked to solve math problems. In other words, if you try this example in our Playground using text-davinci-002, you will likely get inconsistent answers. Some generations will give the correct answer; however, we do not recommend depending on the GPT models for math tasks. **What you can do to improve output consistency in our Playground** ------------------------------------------------------------------- **Disclaimer**: Even implementing everything below, there is only so far we can push the current model. 1. The GPT models are great at recognizing patterns, but without enough data they’ll try their best to interpret and recreate a pattern that seems most probable. With minimal data, it’s likely to produce a wide variety of potential outputs. 2. A prompt designed like a homework assignment will generally have clear instructions on the task and expected output, and may include an example task to further establish the expectations around the task and output format. The text-davinci-002 model does best with an instruction, so the request should be presented in a format that starts with an instruction. Without this, the model may not understand your expectations and may be confused.
**Using the "solve for x where 3x + 4 = 66" example:** ------------------------------------------------------ To improve this [prompt](https://beta.openai.com/playground/p/undsPkd4LAdmFC4SILzvnJ6e) we can add the following: 1. Start with an instruction like, “Given the algebraic equation below, solve for the provided variable”, then test to see the results. 2. Append to the instruction a description of the expected output, “Provide the answer in the format of ‘x=<insert answer>’”, then test once more. 3. If results are still inconsistent, append an example problem to the instructions. This example will help establish the pattern that you want the model to recognize and follow: “Problem: 3x+4=66, solve for x. <newline> Answer: x=” 4. The final result will be a [prompt](https://beta.openai.com/playground/p/I4yzqABsUqjQASw6CwM1OftR) that looks like this: ``` Given the algebraic equation below, solve for the provided variable. Provide the answer in the format of ‘x=<insert answer>’. Problem1: y-1=0, solve for y Answer1: y=1 --- Problem2: 3x+4=66, solve for x. Answer2: x= ``` **Overall recommendation for math problems** We are aware our currently available models are not yet capable of performing consistently when asked to solve math problems. Consider relying on tools like <https://www.wolframalpha.com/> for now when doing math such as algebraic equations.
Doing Math in the Playground
6681258
https://help.openai.com/en/articles/6681258-doing-math-in-the-playground
OpenAI maintains a [Community Libraries](https://beta.openai.com/docs/libraries/community-libraries) page where we list API clients that developers can use to access the OpenAI API. If you've built an open source library that you'd like added to this page – thank you! We love to see developers build additional API tooling for other developers. We also want to make sure we are steering developers to good solutions that will make them successful long term, so we have a few criteria that we require before listing libraries on our website. Please make sure you meet the criteria listed below, and then fill out our [Community Libraries request form](https://share.hsforms.com/1y0Ixew-rQOOZisFfnhszVA4sk30). 1. **Standard open source license** To be listed, we require that community libraries use a [permissive open-source license](https://choosealicense.com/) such as MIT. This allows our customers to more easily fork libraries if necessary in the event that the owners stop maintaining it or adding features. 2. **Load API keys through environment variables** Code samples in the README must encourage the use of environment variables to load the OpenAI API key, instead of hardcoding it in the source code. 3. **Correct, high quality code that accurately reflects the API** Code should be easy to read/follow, and should generally adhere to our [OpenAPI spec](https://github.com/openai/openai-openapi/blob/master/openapi.yaml) – new libraries should **not** include endpoints marked as `deprecated: true` in this spec. 4. **State that it’s an unofficial library** Please state somewhere near the top of your README that it’s an “unofficial" or "community-maintained” library. 5. **Commit to maintaining the library** This primarily means addressing issues and reviewing and merging pull requests. It can also be a good idea to set up GitHub Issue & PR templates like we have in our [official node library](https://github.com/openai/openai-node/tree/master/.github/ISSUE_TEMPLATE).
Adding your API client to the Community Libraries page
6684216
https://help.openai.com/en/articles/6684216-adding-your-api-client-to-the-community-libraries-page
The default rate limit for the DALL·E API depends on which model you are using (DALL·E 2 vs. DALL·E 3) along with your usage tier. For example, with DALL·E 3 and usage tier 3, you can generate 7 images per minute. Learn more in our [rate limits guide](https://platform.openai.com/docs/guides/rate-limits/usage-tiers). You can also check the specific limits for your account on your [limits page](https://platform.openai.com/account/limits).
What's the rate limit for the DALL·E API?
6696591
https://help.openai.com/en/articles/6696591-what-s-the-rate-limit-for-the-dall-e-api
**1. What is the DALL·E API and how can I access it?** The DALL·E API allows you to integrate state-of-the-art image generation capabilities directly into your product. To get started, visit our [developer guide](https://beta.openai.com/docs/guides/images). **2. How do I pay for the DALL·E API?** The API usage is offered on a pay-as-you-go basis and is billed separately from labs.openai.com. You can find pricing information on our [pricing page](https://openai.com/api/pricing). For large volume discounts (>$5k/month), please [contact sales](https://openai.com/contact-sales/). **3. Can I use my OpenAI API trial credits ($5) or labs.openai.com credits on the DALL·E API?** You can use the OpenAI API free trial credits ($5) to make DALL·E API requests. DALL·E API is billed separately from labs.openai.com. Credits granted/purchased on labs.openai.com do not apply to DALL·E API. For the latest information on pricing, please see our [pricing page](https://openai.com/api/pricing). **4. Are there any API usage limits that I should be aware of?** The DALL·E API shares usage limits with other OpenAI API services, which you can find in your [Limits settings](https://platform.openai.com/account/limits). Additionally, org-level rate limits enforce a cap on the number of images you can generate per minute. To learn more, we encourage you to read our help article, "What's [the rate limit for the DALL·E API?](https://help.openai.com/en/articles/6696591)", which provides additional detail. **5. Are there any restrictions on the type of content I can generate?** Yes - please read our [content policy](https://labs.openai.com/policies/content-policy) to learn what's not allowed on the DALL·E API. **6. Can I sell the images I generate with the API?
Can I use it in my application?** Subject to the [Content Policy](https://labs.openai.com/policies/content-policy) and [Terms](https://openai.com/api/policies/terms/), you own the images you create with DALL·E, including the right to reprint, sell, and merchandise - regardless of whether an image was generated through a free or paid credit. **7. What do I need to do before I start serving API outputs to my users?** Before you launch your product, please make sure you're in compliance with our [use case policy](https://beta.openai.com/docs/usage-policies/use-case-policy) and include [end-user IDs](https://beta.openai.com/docs/usage-policies/end-user-ids) with requests. **8. How are images returned by the endpoint?** The API can output images as URLs (response\_format=url) or b64\_json. Our [developer guide](https://beta.openai.com/docs/guides/images) includes more details. **9. Which version of DALL·E is available via the API?** The API uses the latest version of DALL·E 2. **10. Are the Edit function and Variations features available in the API?** Yes - for more detailed instructions, please see our [developer guide](https://beta.openai.com/docs/guides/images). **11. Does it support outpainting?** Yes! There are many ways to use the /edits endpoint, including inpainting and outpainting. You can try it out firsthand in the [DALL·E Editor](https://labs.openai.com/editor). **12. How can I save output images as files?** The API can output images as URLs. You'll need to convert these to the format you need. Our [developer guide](https://beta.openai.com/docs/guides/images) includes more details. **13. How long do the generated URLs persist?** The URLs from the API will remain valid for one hour. **14. I'm stuck. How do I get help?** For general help, you can consult our [developer guide](https://beta.openai.com/docs/guides/images) and [help center](https://help.openai.com/en/), or ask questions on our [Community forum](https://community.openai.com/).
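For question 12, one way to save outputs as files is to request `b64_json` and decode it to disk. A minimal sketch: the `fake_png_bytes` payload below is a stand-in for the `b64_json` field the API actually returns.

```python
import base64

def save_b64_image(b64_data: str, path: str) -> None:
    """Decode a b64_json image payload from the API and write it to a file."""
    with open(path, "wb") as f:
        f.write(base64.b64decode(b64_data))

# Stand-in payload; in real code this comes from
# response["data"][0]["b64_json"].
fake_png_bytes = b"\x89PNG\r\n\x1a\n"
b64_data = base64.b64encode(fake_png_bytes).decode()
save_b64_image(b64_data, "generation.png")
```

If you requested URLs instead, download each one within the hour it remains valid and write the response bytes the same way.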
DALL·E API FAQ
6705023
https://help.openai.com/en/articles/6705023-dall-e-api-faq
While the OpenAI website is only available in English, you can use our models in other languages as well. The models are optimized for use in English, but many of them are robust enough to generate good results for a variety of languages. When thinking about how to adapt our models to different languages, we recommend starting with one of our pre-made prompts, such as this [English to French](https://beta.openai.com/examples/default-translate) prompt example. By replacing the English input and French output with the language you'd like to use, you can create a new prompt customized to your language. If you write your prompt in Spanish, you're more likely to receive a response in Spanish. We'd recommend experimenting to see what you can achieve with the models!
How do I use the OpenAI API in different languages?
6742369
https://help.openai.com/en/articles/6742369-how-do-i-use-the-openai-api-in-different-languages
If you want to download the images you generated with DALL·E, you might be wondering how to do it in bulk. Unfortunately, there is no option to download multiple images at once from the website. However, you can still download your images individually by following these steps: 1. Click on the image you want to save. This will open the image in a larger view, with some options to edit it, share it, or create variations. 2. To download the image, simply click on the download icon in the top right corner of the image. This looks like a downward arrow with a horizontal line under it. ``` This article was generated with the help of GPT-3. ``` ​
How can I bulk download my generations?
6781152
https://help.openai.com/en/articles/6781152-how-can-i-bulk-download-my-generations
If you want to save your outpainting as a single image, you need to download it at the time of creation. Once you exit outpainting mode, you will not be able to access the full image again (unless you stitch the generation frames together manually). This is because generation frames are stored individually, without the rest of the larger composition. If you want to download your outpainting as a single image while creating it, just click the download icon in the top-right corner. This looks like a downward arrow with a horizontal line under it. ``` This article was generated with the help of GPT-3. ```
How can I download my outpainting?
6781222
https://help.openai.com/en/articles/6781222-how-can-i-download-my-outpainting
You might be tempted to instruct DALL·E to generate text in your image, by giving it instructions like "a blue sky with white clouds and the word hello in skywriting". However, this is not a reliable or effective way to create text. DALL·E is not currently designed to produce text, but to generate realistic and artistic images based on your keywords or phrases. Right now, it does not have a specific understanding of writing, labels or any other common text and often produces distorted or unintelligible results. ``` This article was generated with the help of GPT-3. ```
How can I generate text in my image?
6781228
https://help.openai.com/en/articles/6781228-how-can-i-generate-text-in-my-image
1. **How much does it cost to use ChatGPT?** * The research preview of ChatGPT is free to use. 2. **How does ChatGPT work?** * ChatGPT is fine-tuned from GPT-3.5, a language model trained to produce text. ChatGPT was optimized for dialogue by using Reinforcement Learning from Human Feedback (RLHF) – a method that uses human demonstrations and preference comparisons to guide the model toward desired behavior. 3. **Why does the AI seem so real and lifelike?** * These models were trained on vast amounts of data from the internet written by humans, including conversations, so the responses it provides may sound human-like. It is important to keep in mind that this is a direct result of the system's design (i.e. maximizing the similarity between outputs and the dataset the models were trained on) and that such outputs may be inaccurate, untruthful, and otherwise misleading at times. 4. **Can I trust that the AI is telling me the truth?** * ChatGPT is not connected to the internet, and it can occasionally produce incorrect answers. It has limited knowledge of the world and events after 2021 and may also occasionally produce harmful instructions or biased content. We'd recommend checking whether responses from the model are accurate or not. If you find an answer is incorrect, please provide that feedback by using the "Thumbs Down" button. 5. **Who can view my conversations?** * As part of our commitment to safe and responsible AI, we review conversations to improve our systems and to ensure the content complies with our policies and safety requirements. 6. **Will you use my conversations for training?** * Yes. Your conversations may be reviewed by our AI trainers to improve our systems. 7. **Can you delete my data?** * Yes, please follow the [data deletion process](https://help.openai.com/en/articles/6378407-how-can-i-delete-my-account). 8. **Can you delete specific prompts?** * No, we are not able to delete specific prompts from your history.
Please don't share any sensitive information in your conversations. 9. **Can I see my history of threads? How can I save a conversation I’ve had?** * Yes, you can now view and continue your past conversations. 10. **Where do you save my personal and conversation data?** * For more information on how we handle data, please see our [Privacy Policy](https://openai.com/privacy/) and [Terms of Use](https://openai.com/api/policies/terms/). 11. **How can I implement this? Is there any implementation guide for this?** * Developers can [now](https://openai.com/blog/introducing-chatgpt-and-whisper-apis) integrate ChatGPT into their applications and products through our API. Users can expect continuous model improvements and the option to choose dedicated capacity for deeper control over the models. To learn more, please check out the documentation [here](https://platform.openai.com/docs/api-reference/chat). 12. **Do I need a new account if I already have a Labs or Playground account?** * If you have an existing account at [labs.openai.com](https://www.google.com/url?q=http://labs.openai.com&sa=D&source=docs&ust=1669833084818742&usg=AOvVaw3xrSlGIVLLVKjnchqinjLs) or [beta.openai.com](https://www.google.com/url?q=http://beta.openai.com&sa=D&source=docs&ust=1669833084818875&usg=AOvVaw11EJaho-h4CU4I-OMT7x3j), then you can login directly at [chat.openai.com](https://www.google.com/url?q=http://chat.openai.com&sa=D&source=docs&ust=1669833084818926&usg=AOvVaw13rLwSrAYiV5hOL5oPsYDq) using the same login information. If you don't have an account, you'll need to sign-up for a new account at [chat.openai.com](https://www.google.com/url?q=http://chat.openai.com&sa=D&source=docs&ust=1669833084818980&usg=AOvVaw3_WRKLYk-Z3bm-D1EABgkJ). 13. **Why did ChatGPT give me an answer that’s not related to my question?** * ChatGPT will occasionally make up facts or “hallucinate” outputs. If you find an answer is unrelated, please provide that feedback by using the "Thumbs Down" button 14. 
**Can I use output from ChatGPT for commercial uses?** * Subject to the [Content Policy](https://labs.openai.com/policies/content-policy) and [Terms](https://openai.com/api/policies/terms/), you own the output you create with ChatGPT, including the right to reprint, sell, and merchandise – regardless of whether output was generated through a free or paid plan. 15. **I accidentally provided incorrect information during sign-up and now I'm unable to complete the process. How can I fix this issue?** * Please reach out to our support team by initiating a new conversation using the on-site chat tool at help.openai.com. We'll be happy to help!
What is ChatGPT?
6783457
https://help.openai.com/en/articles/6783457-what-is-chatgpt
Fine-tuning with GPT-3.5
========================

Fine-tuning data provides the model with examples of how it should respond in a given conversation. We want these examples to match the input the model will see in production as closely as possible.

#### First, system instructions.

These tell the model how to act and supply any contextual information. You should use the same system prompt from your training dataset when calling the fine-tuned model.

```
{"role": "system", "content": "Marv is a factual chatbot that is also sarcastic."}
```

#### Second, conversation data.

We want to provide varied examples of conversations the model may run into, such as "What's the capital of France?" and "Who wrote 'Romeo and Juliet'?"

```
{"role": "user", "content": "What's the capital of France?"}
```

#### Next, the assistant response.

Here, we present the model with an example of how to respond to the previous message, given the system instruction. For our sarcastic assistant, we might choose a response like this:

```
{"role": "assistant", "content": "Paris, as if everyone doesn't know that already."}
```

#### Finally, putting it all together.

Once we have many examples, we can put them together and begin training. Our dataset should look as follows:

```
{"messages": [{"role": "system", "content": "Marv is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What's the capital of France?"}, {"role": "assistant", "content": "Paris, as if everyone doesn't know that already."}]}
{"messages": [{"role": "system", "content": "Marv is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who wrote 'Romeo and Juliet'?"}, {"role": "assistant", "content": "Oh, just some guy named William Shakespeare. Ever heard of him?"}]}
{"messages": [{"role": "system", "content": "Marv is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "How far is the Moon from Earth?"}, {"role": "assistant", "content": "Around 384,400 kilometers. Give or take a few, like that really matters."}]}
```

Fine-tuning with babbage and davinci
====================================

To fine-tune effectively without the Chat Completions format, you need to format your data properly to provide clues to the model about where to start and stop generating text.

**Indicator String**

The indicator string is a symbol or sequence of symbols that you append to the end of your prompt to tell the model that you want it to start generating text after this string. For example, if you want the model to categorize items as colors, you can use an indicator string like '->'. The prompts in your dataset would look like this:

* 'banana ->'
* 'lime ->'
* 'tomato ->'

You can use any string as an indicator string as long as it doesn't appear anywhere else in the dataset. We recommend using '\n###\n'.

**Stop Sequence**

The stop sequence is another special symbol or sequence of symbols that tells the model to stop generating text after that point. For example, if you want the model to generate one word as a completion, you can use a stop sequence such as "\n" (newline) or "." (period) to mark the end of the completion, like this:

* 'prompt': 'banana ->', 'completion': ' yellow \n'
* 'prompt': 'lime ->', 'completion': ' green \n'
* 'prompt': 'tomato ->', 'completion': ' red \n'

**Calling the model**

You should use the same symbols used in your dataset when calling the model. If you used the dataset above, you should use '\n' as a stop sequence. You should also append '->' to your prompts as an indicator string (e.g. prompt: 'lemon -> ').

It is important that you use consistent and unique symbols for the indicator string and the stop sequence, and that they don't appear anywhere else in your data. Otherwise, the model might get confused and generate unwanted or incorrect text.

**Extra Recommendations**

We also recommend starting each completion with a single space character.
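The indicator-string and stop-sequence conventions above can be sketched in a few lines of Python. This is a minimal sketch for illustration; the `make_example` helper is hypothetical, not part of any OpenAI library:

```python
import json

INDICATOR = " ->"  # appended to every prompt
STOP = "\n"        # marks the end of every completion

def make_example(item, label):
    # Completions begin with a single space and end with the stop
    # sequence, per the recommendations above.
    return {"prompt": item + INDICATOR, "completion": " " + label + STOP}

examples = [
    make_example("banana", "yellow"),
    make_example("lime", "green"),
    make_example("tomato", "red"),
]

# Fine-tuning data is uploaded as JSONL: one JSON object per line.
jsonl = "\n".join(json.dumps(e) for e in examples)

# At inference time, format queries the same way (e.g. 'lemon ->') and
# pass STOP ("\n") as the stop sequence so generation ends after one label.
assert examples[0]["prompt"] == "banana ->"
assert examples[0]["completion"] == " yellow\n"
```

Because the indicator string and stop sequence are defined once as constants, the same values can be reused when you later construct inference requests, which keeps the training and serving formats consistent.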
You can also use our [command line tool](https://beta.openai.com/docs/guides/fine-tuning/cli-data-preparation-tool) to help format your dataset after you have prepared it.
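The chat-format examples earlier in this article can likewise be assembled into a JSONL training file. A minimal sketch follows; the `build_example` helper is illustrative rather than part of any SDK, and note that the chat fine-tuning format expects the response role to be spelled `assistant`:

```python
import json

SYSTEM = "Marv is a factual chatbot that is also sarcastic."

def build_example(user_msg, assistant_msg):
    # Each example repeats the system prompt so the training input
    # matches what the model will see in production.
    return {"messages": [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": user_msg},
        {"role": "assistant", "content": assistant_msg},
    ]}

dataset = [
    build_example("What's the capital of France?",
                  "Paris, as if everyone doesn't know that already."),
    build_example("Who wrote 'Romeo and Juliet'?",
                  "Oh, just some guy named William Shakespeare. Ever heard of him?"),
]

# Training files are uploaded as JSONL: one JSON object per line.
jsonl = "\n".join(json.dumps(example) for example in dataset)

# Quick sanity check before uploading: every line must parse back into
# a dict with a "messages" list.
for line in jsonl.splitlines():
    record = json.loads(line)
    assert isinstance(record["messages"], list)
```

Validating each line locally like this catches malformed JSON (such as a missing comma between messages) before you spend time uploading the file.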
How do I format my fine-tuning data?
6811186
https://help.openai.com/en/articles/6811186-how-do-i-format-my-fine-tuning-data
How can I tell how many tokens a string will have before I try to embed it?
===========================================================================

For V2 embedding models, as of Dec 2022, there is not yet a way to split a string into tokens locally. The only way to get the total token count is to submit an API request.

* If the request succeeds, you can extract the number of tokens from the response: `response["usage"]["total_tokens"]`
* If the request fails for having too many tokens, you can extract the number of tokens from the error message, e.g.: `This model's maximum context length is 8191 tokens, however you requested 10000 tokens (10000 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.`

For V1 embedding models, which are based on GPT-2/GPT-3 tokenization, you can count tokens in a few ways:

* For one-off checks, the [OpenAI tokenizer](https://beta.openai.com/tokenizer) page is convenient
* In Python, [transformers.GPT2TokenizerFast](https://huggingface.co./docs/transformers/model_doc/gpt2#transformers.GPT2TokenizerFast) (the GPT-2 tokenizer is the same as the GPT-3 tokenizer)
* In JavaScript, [gpt-3-encoder](https://www.npmjs.com/package/gpt-3-encoder)

How can I retrieve K nearest embedding vectors quickly?
=======================================================

For searching over many vectors quickly, we recommend using a vector database. Vector database options include:

* [Pinecone](https://www.pinecone.io/), a fully managed vector database
* [Weaviate](https://weaviate.io/), an open-source vector search engine
* [Faiss](https://engineering.fb.com/2017/03/29/data-infrastructure/faiss-a-library-for-efficient-similarity-search/), a similarity search library from Facebook

Which distance function should I use?
=====================================

We recommend [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity). The choice of distance function typically doesn’t matter much.
OpenAI embeddings are normalized to length 1, which means that:

* Cosine similarity can be computed slightly faster using just a dot product
* Cosine similarity and Euclidean distance will result in identical rankings
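Both properties are easy to check in plain Python. This is a minimal sketch using toy unit-length vectors as stand-ins for real embeddings:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return [x / length for x in v]

def cosine_similarity(a, b):
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

def euclidean_distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Toy unit-length vectors standing in for OpenAI embeddings.
query = normalize([0.2, 0.9, 0.1])
docs = {
    "doc_a": normalize([0.1, 0.8, 0.3]),
    "doc_b": normalize([0.9, 0.1, 0.2]),
    "doc_c": normalize([0.3, 0.7, 0.2]),
}

# For unit vectors, cosine similarity reduces to a plain dot product.
for v in docs.values():
    assert abs(cosine_similarity(query, v) - dot(query, v)) < 1e-12

# Ranking by cosine similarity (descending) matches ranking by
# Euclidean distance (ascending).
by_cosine = sorted(docs, key=lambda name: dot(query, docs[name]), reverse=True)
by_euclid = sorted(docs, key=lambda name: euclidean_distance(query, docs[name]))
assert by_cosine == by_euclid
```

The second equivalence follows from the identity ||a − b||² = 2 − 2(a·b) for unit vectors, so sorting by distance and sorting by dot product always agree.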
Embeddings - Frequently Asked Questions
6824809
https://help.openai.com/en/articles/6824809-embeddings-frequently-asked-questions
**Introducing the GPT Store and ChatGPT Team plan (Jan 10, 2024)** ------------------------------------------------------------------ #### Discover what’s trending in the GPT Store The store features a diverse range of GPTs developed by our partners and the community. Browse popular and trending GPTs on the community leaderboard, with categories like DALL·E, writing, research, programming, education, and lifestyle. Explore GPTs at chat.openai.com/gpts. #### Use ChatGPT alongside your team We’re launching a new ChatGPT plan for teams of all sizes, which provides a secure, collaborative workspace to get the most out of ChatGPT at work. ChatGPT Team offers access to our advanced models like GPT-4 and DALL·E 3, and tools like Advanced Data Analysis. It additionally includes a dedicated collaborative workspace for your team and admin tools for team management. As with ChatGPT Enterprise, you own and control your business data — we do not train on your business data or conversations, and our models don’t learn from your usage. More details on our data privacy practices can be found on our [privacy page](https://openai.com/enterprise-privacy) and [Trust Portal](https://trust.openai.com/). You can learn more about the ChatGPT Team plan [here](https://openai.com/chatgpt/team). **ChatGPT with voice is available to all users (November 21, 2023)** -------------------------------------------------------------------- ChatGPT with voice is now available to all free users. Download the app on your phone and tap the headphones icon to start a conversation. **Introducing GPTs (November 6, 2023)** --------------------------------------- You can now create custom versions of ChatGPT that combine instructions, extra knowledge, and any combination of skills. Learn more [here](https://openai.com/blog/introducing-gpts). We’re rolling out custom versions of ChatGPT that you can create for a specific purpose—called GPTs. 
GPTs are a new way for anyone to create a tailored version of ChatGPT to be more helpful in their daily life, at specific tasks, at work, or at home—and then share that creation with others. For example, GPTs can help you [learn the rules to any board game, help teach your kids math, or design stickers](https://openai.com/chatgpt#do-more-with-gpts). Plus and Enterprise users can start creating GPTs this week. Later this month, we’ll launch the GPT Store, so people can feature and make money from their GPTs. We plan to offer GPTs to more users soon. **Browsing is now out of beta (October 17, 2023)** -------------------------------------------------- Browsing, which we re-launched a few weeks ago, is moving out of beta. Plus and Enterprise users no longer need to switch the beta toggle to use browse, and are able to choose "Browse with Bing" from the GPT-4 model selector. **DALL·E 3 is now rolling out in beta (October 16, 2023)** ---------------------------------------------------------- We’ve integrated DALL·E 3 with ChatGPT, allowing it to respond to your requests with images. From a simple sentence to a detailed paragraph, ask ChatGPT what you want to see and it will translate your ideas into exceptionally accurate images. To use DALL·E 3 on both web and mobile, choose DALL·E 3 in the selector under GPT-4. The message limit may vary based on capacity. **Browsing is rolling back out to Plus users (September 27, 2023)** ------------------------------------------------------------------- Browsing is rolling out to all Plus users. ChatGPT can now browse the internet to provide you with current and authoritative information, complete with direct links to sources. It is no longer limited to data before September 2021. To try it out, enable Browsing in your beta features setting. * Click on 'Profile & Settings’ * Select 'Beta features' * Toggle on ‘Browse with Bing’ Choose Browse with Bing in the selector under GPT-4. 
**New voice and image capabilities in ChatGPT (September 25, 2023)** -------------------------------------------------------------------- We are beginning to roll out new voice and image capabilities in ChatGPT. They offer a new, more intuitive type of interface by allowing you to have a voice conversation or show ChatGPT what you’re talking about. Learn more [here](https://openai.com/blog/chatgpt-can-now-see-hear-and-speak). #### Voice (Beta) is now rolling out to Plus users on iOS and Android You can now use voice to engage in a back-and-forth conversation with your agent. Speak with it on the go, request a bedtime story, or settle a dinner table debate. To get started with voice, head to Settings → New Features on the mobile app and opt into voice conversations. Then, tap the headphone button located in the top-right corner of the home screen and choose your preferred voice out of five different voices. #### Image input will be generally available to Plus users on all platforms You can now show ChatGPT one or more images. Troubleshoot why your grill won’t start, explore the contents of your fridge to plan a meal, or analyze a complex graph for work-related data. To focus on a specific part of the image, you can use the drawing tool in our mobile app. To get started, tap the photo button to capture or choose an image. You can also discuss multiple images or use our drawing tool to guide your agent. **ChatGPT language support - Alpha on web (September 11, 2023)** ---------------------------------------------------------------- ChatGPT now supports a limited selection of languages in the interface: * Chinese (zh-Hans) * Chinese (zh-TW) * French (fr-FR) * German (de-DE) * Italian (it-IT) * Japanese (ja-JP) * Portuguese (pt-BR) * Russian (ru-RU) * Spanish (es-ES) If you've configured your browser to use one of these supported languages, you'll see a banner in ChatGPT that enables you to switch your language settings. 
You can deactivate this option at any time through the settings menu. This feature is in alpha, requires opting in, and currently can only be used on the web at chat.openai.com. Learn more [here](https://help.openai.com/en/articles/8357869-chatgpt-language-support-beta-web). Introducing ChatGPT Enterprise (August 28, 2023) ------------------------------------------------ Today we’re launching [ChatGPT Enterprise](https://openai.com/blog/introducing-chatgpt-enterprise), which offers enterprise-grade security and privacy, unlimited higher-speed GPT-4 access, longer context windows for processing longer inputs, advanced data analysis capabilities, customization options, and much more. ChatGPT Enterprise also provides unlimited access to Advanced Data Analysis, previously known as [Code Interpreter](https://openai.com/blog/chatgpt-plugins). [Learn more on our website](https://openai.com/enterprise) and connect with our sales team to get started. Custom instructions are now available to users in the EU & UK (August 21, 2023) ------------------------------------------------------------------------------- Custom instructions are now available to users in the European Union & United Kingdom. To add your instructions: * Click on your name * Select ‘Custom instructions’ Custom instructions are now available to free users (August 9, 2023) -------------------------------------------------------------------- Custom instructions are now available to ChatGPT users on the free plan, except for in the EU & UK where we will be rolling it out soon! Customize your interactions with ChatGPT by providing specific details and guidelines for your chats. To add your instructions: * Click on your name * Select ‘Custom instructions’ Updates to ChatGPT (August 3, 2023) ----------------------------------- We’re rolling out a bunch of small updates to improve the ChatGPT experience. Shipping over the next week: **1. Prompt examples:** A blank page can be intimidating. 
At the beginning of a new chat, you’ll now see examples to help you get started. **2. Suggested replies:** Go deeper with a click. ChatGPT now suggests relevant ways to continue your conversation. **3. GPT-4 by default, finally:** When starting a new chat as a Plus user, ChatGPT will remember your previously selected model — no more defaulting back to GPT-3.5. **4. Upload multiple files:** You can now ask ChatGPT to analyze data and generate insights across multiple files. This is available with the Code Interpreter beta for all Plus users. **5. Stay logged in:** You’ll no longer be logged out every 2 weeks! When you do need to log in, you’ll be greeted with a much more welcoming page. **6. Keyboard shortcuts:** Work faster with shortcuts, like ⌘ (Ctrl) + Shift + ; to copy last code block. Try ⌘ (Ctrl) + / to see the complete list. Introducing the ChatGPT app for Android (July 25, 2023) ------------------------------------------------------- ChatGPT for Android is now available for download in the United States, India, Bangladesh, and Brazil from the [Google Play Store](https://play.google.com/store/apps/details?id=com.openai.chatgpt). We plan to expand the rollout to additional countries over the next week. You can track the Android rollout [here](https://help.openai.com/en/articles/7947663-chatgpt-supported-countries). Custom instructions are rolling out in beta (July 20, 2023) ----------------------------------------------------------- We’re starting to roll out custom instructions, giving you more control over ChatGPT’s responses. Set your preferences once, and they’ll steer future conversations. You can read more about custom instructions in the blogpost [here](https://openai.com/blog/custom-instructions-for-chatgpt). Custom instructions are available to all Plus users and expanding to all users in the coming weeks. 
To enable beta features: * Click on 'Profile & Settings’ * Select 'Beta features' * Toggle on 'Custom instructions' To add your instructions: * Click on your name * Select ‘Custom instructions’ This feature is not yet available in the UK and EU. Higher message limits for GPT-4 (July 19, 2023) ----------------------------------------------- We're doubling the number of messages ChatGPT Plus customers can send with GPT-4. Rolling out over the next week, the new message limit will be 50 every 3 hours. Code interpreter is now rolling out in beta on web (July 6, 2023) ----------------------------------------------------------------- We’re rolling out [code interpreter](https://openai.com/blog/chatgpt-plugins#code-interpreter) to all ChatGPT Plus users over the next week. It lets ChatGPT run code, optionally with access to files you've uploaded. You can ask ChatGPT to analyze data, create charts, edit files, perform math, etc. We’ll be making these features accessible to Plus users on the web via the beta panel in your settings over the course of the next week. To enable code interpreter: * Click on your name * Select beta features from your settings * Toggle on the beta features you’d like to try Browsing is temporarily disabled (July 3, 2023) ----------------------------------------------- We've [learned](https://help.openai.com/en/articles/8077698-how-do-i-use-chatgpt-browse-with-bing-to-search-the-web) that the browsing beta can occasionally display content in ways we don't want, e.g. if a user specifically asks for a URL's full text, it may inadvertently fulfill this request. We are temporarily disabling Browse while we fix this. Browsing and search on mobile (June 22, 2023) --------------------------------------------- We’ve made two updates to the mobile ChatGPT app: * Browsing: Plus users can now use Browsing to get comprehensive answers and current insights on events and information that extend beyond the model's original training data. 
To try it out, enable Browsing in the “new features” section of your app settings. Then select GPT-4 in the model switcher and choose “Browse with Bing” in the drop-down. * Search History Improvements: Tapping on a search result takes you directly to the respective point in the conversation. iOS app available in more countries, shared links in alpha, Bing Plugin, disable history on iOS (May 24, 2023) -------------------------------------------------------------------------------------------------------------- #### ChatGPT app for iOS in more countries Good news! We’re expanding availability of the [ChatGPT app for iOS](https://openai.com/blog/introducing-the-chatgpt-app-for-ios) to more countries and regions. In addition to the United States, users in 11 more countries and regions can now download the ChatGPT app from the [Apple App Store](https://apps.apple.com/app/chatgpt/id6448311069): Albania, Croatia, France, Germany, Ireland, Jamaica, Korea, New Zealand, Nicaragua, Nigeria, and the United Kingdom. We will continue to roll out to more countries and regions in the coming weeks. You can track the iOS app rollout [here](https://help.openai.com/en/articles/7947663-chatgpt-supported-countries). #### Shared Links We're excited to introduce a new feature: shared links. This feature allows you to create and share your ChatGPT conversations with others. Recipients of your shared link can either view the conversation or copy it to their own chats to continue the thread. This feature is currently rolling out to a small set of testers in alpha, with plans to expand to all users (including free) in the upcoming weeks. To share your conversations: 1. Click on the thread you’d like to share 2. Select the “Share” button 3. Click on “Copy Link” [Learn more](https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq). #### Bing Plugin Browse with Bing. We’ve integrated the browsing feature - currently in beta for paid users - more deeply with Bing. 
You can now click into queries that the model is performing. We look forward to expanding the integration soon. #### Disable chat history on iOS You can now disable your chat history on iOS. Conversations started on your device when chat history is disabled won’t be used to improve our models, won’t appear in your history on your other devices, and will only be stored for 30 days. Similar to the functionality on the web, this setting does not sync across browsers or devices. [Learn more](https://help.openai.com/en/articles/7730893-data-controls-faq). Web browsing and Plugins are now rolling out in beta (May 12, 2023) ------------------------------------------------------------------- If you are a ChatGPT Plus user, enjoy early access to experimental new features, which may change during development. We’ll be making these features accessible via a new beta panel in your settings, which is rolling out to all Plus users over the course of the next week. ![](https://downloads.intercomcdn.com/i/o/740734818/c7d818c221f5f023ab1a0c27/BetaPanel.png)Once the beta panel rolls out to you, you’ll be able to try two new features: * **Web browsing**: Try a new version of ChatGPT that knows when and how to browse the internet to answer questions about recent topics and events. * **Plugins:** Try a new version of ChatGPT that knows when and how to use third-party plugins that you enable. To use third-party plugins, follow these instructions: * Navigate to <https://chat.openai.com/> * Select “Plugins” from the model switcher * In the “Plugins” dropdown, click “Plugin Store” to install and enable new plugins To enable beta features: 1. Click on 'Profile & Settings' 2. Select 'Beta features' 3. Toggle on the features you’d like to try For more information on our rollout process, please check out the article [here](https://help.openai.com/en/articles/7897380-introducing-new-features-in-chatgpt). 
In addition to the beta panel, users can now choose to continue generating a message beyond the maximum token limit. Each continuation counts towards the message allowance. Updates to ChatGPT (May 3, 2023) -------------------------------- We’ve made several updates to ChatGPT! Here's what's new: * You can now turn off chat history and export your data from the ChatGPT settings. Conversations that are started when chat history is disabled won’t be used to train and improve our models, and won’t appear in the history sidebar. * We are deprecating the Legacy (GPT-3.5) model on May 10th. Users will be able to continue their existing conversations with this model, but new messages will use the default model. Introducing plugins in ChatGPT (March 23, 2023) ----------------------------------------------- We are announcing experimental support for AI plugins in ChatGPT — tools designed specifically for language models. Plugins can help ChatGPT access up-to-date information, run computations, or use third-party services. You can learn more about plugins [here](https://openai.com/blog/chatgpt-plugins). Today, we will begin extending plugin access to users and developers from our waitlist. The plugins we are rolling out with are: * Browsing: An experimental model that knows when and how to browse the internet * Code Interpreter: An experimental ChatGPT model that can use Python, and handles uploads and downloads * Third-party plugins: An experimental model that knows when and how to use external plugins. You can join the waitlist to try plugins here: * [ChatGPT Plugin Waitlist](https://share.hsforms.com/16C8k9E5FR5mRLYYkwohdiQ4sk30) Announcing GPT-4 in ChatGPT (March 14, 2023) -------------------------------------------- We’re excited to bring GPT-4, our latest model, to our ChatGPT Plus subscribers. 
GPT-4 has enhanced capabilities in:

* Advanced reasoning
* Complex instructions
* More creativity

To give every Plus subscriber a chance to try the model, we'll dynamically adjust the cap for GPT-4 usage based on demand. You can learn more about GPT-4 [here](https://openai.com/product/gpt-4). For this release, there are no updates to free accounts.

Updates to ChatGPT (Feb 13, 2023)
---------------------------------

We’ve made several updates to ChatGPT! Here's what's new:

* We’ve improved the performance of the ChatGPT model on our free plan in order to serve more users.
* Based on user feedback, we are now defaulting Plus users to a faster version of ChatGPT, formerly known as “Turbo”. We’ll keep the previous version around for a while.
* We rolled out the ability to purchase [ChatGPT Plus](https://openai.com/blog/chatgpt-plus/) internationally.

Introducing ChatGPT Plus (Feb 9, 2023)
--------------------------------------

As we recently announced, our Plus plan comes with early access to new, experimental features. We are beginning to roll out a way for Plus users to choose between different versions of ChatGPT:

* Default: the standard ChatGPT model
* Turbo: optimized for speed (alpha)

Version selection is made easy with a dedicated dropdown menu at the top of the page. Depending on feedback, we may roll out this feature (or just Turbo) to all users soon.

Factuality and mathematical improvements (Jan 30, 2023)
-------------------------------------------------------

We’ve upgraded the ChatGPT model with improved factuality and mathematical capabilities.

Updates to ChatGPT (Jan 9, 2023)
--------------------------------

We're excited to announce several updates to ChatGPT! Here's what's new:

1. We made more improvements to the ChatGPT model! It should be generally better across a wide range of topics and has improved factuality.
2. Stop generating: Based on your feedback, we've added the ability to stop generating ChatGPT's response.

Performance updates to ChatGPT (Dec 15, 2022)
---------------------------------------------

We're excited to announce several updates to ChatGPT! Here's what's new:

1. General performance: Among other improvements, users will notice that ChatGPT is now less likely to refuse to answer questions.
2. Conversation history: You’ll soon be able to view past conversations with ChatGPT, rename your saved conversations and delete the ones you don’t want to keep. We are gradually rolling out this feature.
3. Daily limit: To ensure a high-quality experience for all ChatGPT users, we are experimenting with a daily message cap. If you’re included in this group, you’ll be presented with an option to extend your access by providing feedback to ChatGPT.

To see if you’re using the updated version, look for “ChatGPT Dec 15 Version” at the bottom of the screen.
ChatGPT — Release Notes
6825453
https://help.openai.com/en/articles/6825453-chatgpt-release-notes
For details on our data policy, please see our [Terms of Use](https://openai.com/terms/) and [Privacy Policy](https://openai.com/privacy/).
Terms of Use
6837156
https://help.openai.com/en/articles/6837156-terms-of-use
### Please read our **[rate limit documentation](https://beta.openai.com/docs/guides/rate-limits)** in its entirety. If you would like to increase your rate limits, please note that you can do so by [increasing your usage tier](https://platform.openai.com/docs/guides/rate-limits/usage-tiers). You can view your current rate limits, your current usage tier, and how to raise your usage tier/limits in the [Limits section](https://platform.openai.com/account/limits) of your account settings.
Rate Limits and 429: 'Too Many Requests' Errors
6843909
https://help.openai.com/en/articles/6843909-rate-limits-and-429-too-many-requests-errors
Here's an [article](https://help.openai.com/en/articles/6783457-chatgpt-faq) answering frequently asked questions about ChatGPT.
ChatGPT general questions
6843914
https://help.openai.com/en/articles/6843914-chatgpt-general-questions
When you get the error message: ``` Incorrect API key provided: API_KEY*********************************ZXY. You can find your API key at https://beta.openai.com ``` Here are a few simple steps you can take to resolve this issue. Step 1: Clear your browser's cache The first step is to clear your browser's cache. Sometimes, your browser may hold onto an outdated version of your API key, which can cause this error message to appear. To clear your browser's cache, follow the instructions for your specific browser: * For Google Chrome, click on the three dots in the top-right corner of the browser and select "History." Then, click on "Clear browsing data" and select "Cookies and other site data" and "Cached images and files." * For Firefox, click on the three lines in the top-right corner of the browser and select "Options." Then, click on "Privacy & Security" and scroll down to "Cookies and Site Data." Click on "Clear Data" and select "Cookies and Site Data" and "Cached Web Content." * For Safari, click on "Safari" in the top menu bar and select "Preferences." Then, click on the "Privacy" tab and click on "Manage Website Data." Select "Remove All" to clear your browser's cache. Step 2: Retry your request After clearing your browser's cache, try your request again. If the error message still appears, then move to the next step. Step 3: Check your API key Check your API key at **[https://beta.openai.com](https://beta.openai.com/)** and verify it with the API key shown in the error message. Sometimes, the error message may include an old or incorrect API key that you no longer use. Double-check that you are using the correct API key for the request you're making. Step 4: Verify that you're not using two different API keys Another possibility is that you may have accidentally used two different API keys. Make sure that you are using the same API key throughout your application or script and not switching between different keys. 
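As a sketch of steps 3 and 4, you can compare the visible prefix and suffix of the redacted key in the error message against the key your code is actually sending. The `matches_redacted` helper below is purely illustrative and not part of any OpenAI library:

```python
import re

def matches_redacted(configured_key, error_message):
    """Check whether the redacted key in an 'Incorrect API key provided'
    error could be the key you think you're using (illustrative helper,
    not part of any SDK)."""
    match = re.search(r"Incorrect API key provided: (\S+)", error_message)
    if match is None:
        return None  # not an incorrect-API-key error
    redacted = match.group(1).rstrip(".")
    # Redacted keys keep a visible prefix and suffix around the asterisks.
    prefix, _, rest = redacted.partition("*")
    suffix = rest.strip("*")
    return configured_key.startswith(prefix) and configured_key.endswith(suffix)

error = ("Incorrect API key provided: API_KEY"
         "*********************************ZXY. "
         "You can find your API key at https://beta.openai.com")

# A key with matching prefix/suffix could be the one in the error;
# a completely different key definitely is not.
assert matches_redacted("API_KEY-not-the-real-one-ZXY", error) is True
assert matches_redacted("sk-a-completely-different-key", error) is False
```

If the check returns False, your application is sending a different key than the one you expect, which usually points to a stale environment variable or a second key defined elsewhere in your code.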
If you still need help please reach out to our support team, and they will assist you with resolving the issue. ​
Incorrect API key provided
6882433
https://help.openai.com/en/articles/6882433-incorrect-api-key-provided
Every organization is bound by rate limits, which determine how many requests can be sent per second. This rate limit has been hit by the request. Rate limits can be quantized, meaning they are enforced over shorter periods of time (e.g. 60,000 requests/minute may be enforced as 1,000 requests/second). Sending short bursts of requests, or contexts (prompts + max\_tokens) that are too long, can lead to rate limit errors even when you are technically below the rate limit per minute.

**How can I fix it?**

* Include [exponential back-off](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_handle_rate_limits.ipynb) logic in your code. This will catch and retry failed requests.
* For token limits:
	+ Reduce the [max\_tokens](https://beta.openai.com/docs/api-reference/completions/create#completions/create-max_tokens) value to match the size of your completions. Usage needs are estimated from this value, so reducing it decreases the chance that you unexpectedly receive a rate limit error. For example, if your prompt typically produces completions around 400 tokens, set max\_tokens to around that size.
	+ [Optimize your prompts](https://github.com/openai/openai-cookbook/tree/main#more-prompt-advice) by making your instructions shorter, removing extra words, and getting rid of extra examples. You may need to re-test your prompt after these changes to make sure it still works well. The added benefit of a shorter prompt is reduced cost to you. If you need help, let us know.
* If none of the previous steps work and you are consistently hitting a rate limit error, you can increase your rate limits by [increasing your usage tier](https://platform.openai.com/docs/guides/rate-limits/usage-tiers). You can view your current rate limits, your current usage tier, and how to raise your usage tier/limits in the [Limits section](https://platform.openai.com/account/limits) of your account settings.
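The exponential back-off approach above can be sketched in plain Python. This is only a sketch: `request_fn` is a hypothetical stand-in for whatever call hits the API, and the retry counts and delays are illustrative assumptions, not official values.

```python
import random
import time


def with_exponential_backoff(request_fn, max_retries=5, base_delay=1.0):
    """Call request_fn, retrying with exponential back-off plus jitter.

    request_fn is any zero-argument callable that raises on failure
    (e.g. a rate-limit error). Waits roughly base_delay, 2*base_delay,
    4*base_delay, ... between attempts, with a little random jitter.
    """
    for attempt in range(max_retries):
        try:
            return request_fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the original error
            delay = base_delay * (2 ** attempt + random.random())
            time.sleep(delay)
```

In real code you would catch only your client library's rate-limit exception class rather than a bare `Exception`, so that unrelated errors still surface immediately.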
If you'd like to know more, please check out our updated guidance [here](https://beta.openai.com/docs/guides/rate-limits).
Rate Limit Advice
6891753
https://help.openai.com/en/articles/6891753-rate-limit-advice
This error message indicates that your authentication credentials are invalid. This could happen for several reasons, such as:

* You are using a revoked API key.
* You are using a different API key than the one under the requesting organization.
* You are using an API key that does not have the required permissions for the endpoint you are calling.

To resolve this error, please follow these steps:

* Check that you are using the correct API key and organization ID in your request header. You can find your API key and organization ID in your account settings [here](https://platform.openai.com/account/api-keys).
* If you are unsure whether your API key is valid, you can generate a new one [here](https://platform.openai.com/account/api-keys). Make sure to replace your old API key with the new one in your requests and follow our [best practices](https://help.openai.com/en/articles/5112595-best-practices-for-api-key-safety).
Error Code 401 - Invalid Authentication
6891767
https://help.openai.com/en/articles/6891767-error-code-401-invalid-authentication
This error message indicates that the API key you are using in your request is not correct. This could happen for several reasons, such as:

* There is a typo or an extra space in your API key.
* You are using an API key that belongs to a different organization.
* You are using an API key that has been deleted or deactivated.
* Your API key might be cached.

To resolve this error, please follow these steps:

* Try clearing your browser's cache and cookies, then try again.
* Check that you are using the correct API key in your request header. Follow the instructions in our [Authentication](https://platform.openai.com/docs/api-reference/authentication) section to ensure your key is correctly formatted (i.e. `Bearer <API_KEY>`).
* If you are unsure whether your API key is correct, you can generate a new one [here](https://platform.openai.com/account/api-keys). Make sure to replace your old API key in your codebase and follow our [best practices](https://help.openai.com/en/articles/5112595-best-practices-for-api-key-safety).
Error Code 401 - Incorrect API key provided
6891781
https://help.openai.com/en/articles/6891781-error-code-401-incorrect-api-key-provided
This error message indicates that your account is not part of an organization. This could happen for several reasons, such as:

* You have left or been removed from your previous organization.
* Your organization has been deleted.

To resolve this error, please follow these steps:

* If you have left or been removed from your previous organization, you can either request a new organization or get invited to an existing one.
* To request a new organization, reach out to us via help.openai.com.
* Existing organization owners can invite you to join their organization via the [Members Panel](https://beta.openai.com/account/members).
Error Code 404 - You must be a member of an organization to use the API
6891827
https://help.openai.com/en/articles/6891827-error-code-404-you-must-be-a-member-of-an-organization-to-use-the-api
This error message indicates that you have hit your assigned rate limit for the API. This means that you have submitted too many tokens or requests in a short period of time and have exceeded the number of requests allowed. This could happen for several reasons, such as:

* You are using a loop or a script that makes frequent or concurrent requests.
* You are sharing your API key with other users or applications.
* You are using a free plan that has a low rate limit.

To resolve this error, please follow these steps:

* Pace your requests and avoid making unnecessary or redundant calls.
* If you are using a loop or a script, make sure to implement a back-off mechanism or retry logic that respects the rate limit and the response headers. You can read more about our rate limiting policy and best practices [here](https://help.openai.com/en/articles/6891753-rate-limit-advice).
* If you are sharing your organization with other users, note that limits are applied per organization, not per user. It is worth checking the usage of the rest of your team, as this will contribute to the limit.
* If you are using a free or low-tier plan, consider upgrading to a pay-as-you-go plan that offers a higher rate limit.
* If you would like to increase your rate limits, you can do so by [increasing your usage tier](https://platform.openai.com/docs/guides/rate-limits/usage-tiers). You can view your current rate limits, your current usage tier, and how to raise your usage tier/limits in the [Limits section](https://platform.openai.com/account/limits) of your account settings.
Error Code 429 - Rate limit reached for requests
6891829
https://help.openai.com/en/articles/6891829-error-code-429-rate-limit-reached-for-requests
This error message indicates that you have hit your maximum monthly budget for the API. This means that you have consumed all the credits or units allocated to your plan and have reached the limit of your billing cycle. This could happen for several reasons, such as:

* You are using a high-volume or complex service that consumes a lot of credits or units per request.
* You are using a large or diverse data set that requires a lot of requests to process.
* Your limit is set too low for your organization's usage.

To resolve this error, please follow these steps:

* Check your usage limit and monthly budget in your account settings [here](https://platform.openai.com/account/limits). You can see how many tokens your requests have consumed [here](https://platform.openai.com/account/usage).
* If you are using a free plan, consider upgrading to a pay-as-you-go plan that offers a higher quota.
* If you need a usage limit increase, you can apply for one under the Usage Limits section [here](https://platform.openai.com/account/limits). We will review your request and get back to you as soon as possible.
Error Code 429 - You exceeded your current quota, please check your plan and billing details.
6891831
https://help.openai.com/en/articles/6891831-error-code-429-you-exceeded-your-current-quota-please-check-your-plan-and-billing-details
This error message indicates that our servers are experiencing high traffic and are unable to process your request at the moment. This could happen for several reasons, such as:

* There is a sudden spike or surge in demand for our services.
* There is scheduled or unscheduled maintenance or an update on our servers.
* There is an unexpected or unavoidable outage or incident on our servers.

To resolve this error, please follow these steps:

* Retry your request after a brief wait. We recommend using an exponential back-off strategy or retry logic that respects the response headers and the rate limit. You can read more about our best practices [here](https://help.openai.com/en/articles/6891753-rate-limit-advice).
* Check our [status page](https://status.openai.com/) for any updates or announcements regarding our services and servers.
* If you are still getting this error after a reasonable amount of time, please contact us for further assistance.

We apologize for any inconvenience and appreciate your patience and understanding.
Error Code 429 - The engine is currently overloaded. Please try again later.
6891834
https://help.openai.com/en/articles/6891834-error-code-429-the-engine-is-currently-overloaded-please-try-again-later
This section outlines the main error codes returned by the OpenAI API, including both the cause of each error and how to resolve it.

**Status Code Summaries**
-------------------------

| Error | Cause and Solution |
| --- | --- |
| [401 - Invalid Authentication](https://help.openai.com/en/articles/6891767-error-code-401-invalid-authentication) | **Cause:** Invalid authentication. **Solution:** Ensure the correct API key and requesting organization are being used. |
| [401 - Incorrect API key provided](https://help.openai.com/en/articles/6891781-error-code-401-incorrect-api-key-provided) | **Cause:** The requesting API key is not correct. **Solution:** Ensure the API key used is correct, or [generate a new API key](https://beta.openai.com/account/api-keys). |
| [404 - You must be a member of an organization to use the API](https://help.openai.com/en/articles/6891827-error-code-404-you-must-be-a-member-of-an-organization-to-use-the-api) | **Cause:** Your account is not part of an organization. **Solution:** Contact us to get added to a new organization, or ask your organization manager to invite you to an organization [here](https://beta.openai.com/account/members). |
| [429 - Rate limit reached for requests](https://help.openai.com/en/articles/6891829-error-code-429-rate-limit-reached-for-requests) | **Cause:** You have hit your assigned rate limit. **Solution:** Pace your requests. Read more [here](https://help.openai.com/en/articles/6891753-rate-limit-advice). |
| [429 - You exceeded your current quota, please check your plan and billing details.](https://help.openai.com/en/articles/6891831-error-code-429-you-exceeded-your-current-quota-please-check-your-plan-and-billing-details) | **Cause:** For customers with prepaid billing, you have consumed all the [credits in your account](https://platform.openai.com/account/billing). For customers with monthly billing, you have exceeded your [monthly budget](https://platform.openai.com/account/limits). **Solution:** Buy additional credits or [increase your limits](https://platform.openai.com/account/limits). |
| [429 - The engine is currently overloaded. Please try again later.](https://help.openai.com/en/articles/6891834-error-code-429-the-engine-is-currently-overloaded-please-try-again-later) | **Cause:** Our servers are experiencing high traffic. **Solution:** Retry your request after a brief wait. |
| 500 - The server had an error while processing your request. | **Cause:** Issue on our servers. **Solution:** Retry your request after a brief wait and contact us if the issue persists. Check the [status page](https://status.openai.com/). |
API Error Code Guidance
6891839
https://help.openai.com/en/articles/6891839-api-error-code-guidance
An APIError indicates that something went wrong on our side when processing your request. This could be due to a temporary glitch, a bug, or a system outage. We apologize for any inconvenience, and we are working hard to resolve any issues as soon as possible. You can check our status page for more information [here](https://status.openai.com/).

If you encounter an APIError, please try the following steps:

* Wait a few seconds and retry your request. Sometimes, the issue may be resolved quickly and your request may succeed on the second attempt.
* Check our [status page](https://status.openai.com/) for any ongoing incidents or maintenance that may affect our services. If there is an active incident, please follow the updates and wait until it is resolved before retrying your request.
* If the issue persists, contact our support team and provide them with the following information:
	+ The model you were using
	+ The error message and code you received
	+ The request data and headers you sent
	+ The timestamp and timezone of your request
	+ Any other relevant details that may help us diagnose the issue

Our support team will investigate the issue and get back to you as soon as possible.
APIError
6897179
https://help.openai.com/en/articles/6897179-apierror
A Timeout error indicates that your request took too long to complete and our server closed the connection. This could be due to a network issue, a heavy load on our services, or a complex request that requires more processing time.

If you encounter a Timeout error, please try the following steps:

* Wait a few seconds and retry your request. Sometimes, network congestion or the load on our services may be reduced and your request may succeed on the second attempt.
* Check your network settings and make sure you have a stable and fast internet connection. You may need to switch to a different network, use a wired connection, or reduce the number of devices or applications using your bandwidth.
* You may also need to adjust your timeout parameter to allow more time for your request to complete.
* If the issue persists, contact our support team and provide them with the following information:
	+ The model you were using
	+ The error message and code you received
	+ The request data and headers you sent
	+ The timestamp and timezone of your request
	+ Any other relevant details that may help us diagnose the issue

Our support team will investigate the issue and get back to you as soon as possible.
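As a rough sketch of the "adjust your timeout parameter" step using only the Python standard library: the helper below builds an HTTPS request for the completions endpoint, and the timeout is then passed to `urlopen`. The endpoint URL and payload shown are illustrative; most client libraries expose an equivalent timeout option.

```python
import json
import urllib.request


def build_completion_request(api_key: str, payload: dict) -> urllib.request.Request:
    """Build an HTTPS POST request for the completions endpoint (illustrative URL)."""
    return urllib.request.Request(
        "https://api.openai.com/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )


# A longer timeout gives slow or complex requests more time before the
# connection is dropped client-side; pick a value suited to your workload:
# response = urllib.request.urlopen(build_completion_request(key, body), timeout=60)
```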
Timeout
6897186
https://help.openai.com/en/articles/6897186-timeout
An APIConnectionError indicates that your request could not reach our servers or establish a secure connection. This could be due to a network issue, a proxy configuration, an SSL certificate, or a firewall rule.

If you encounter an APIConnectionError, please try the following steps:

* Check your network settings and make sure you have a stable and fast internet connection. You may need to switch to a different network, use a wired connection, or reduce the number of devices or applications using your bandwidth.
* Check your proxy configuration and make sure it is compatible with our services. You may need to update your proxy settings, use a different proxy, or bypass the proxy altogether.
* Check your SSL certificates and make sure they are valid and up to date. You may need to install or renew your certificates, use a different certificate authority, or disable SSL verification.
* Check your firewall rules and make sure they are not blocking or filtering our services. You may need to modify your firewall settings.
* If the issue persists, contact our support team and provide them with the following information:
	+ The model you were using
	+ The error message and code you received
	+ The request data and headers you sent
	+ The timestamp and timezone of your request
	+ Any other relevant details that may help us diagnose the issue
APIConnectionError
6897191
https://help.openai.com/en/articles/6897191-apiconnectionerror
An InvalidRequestError indicates that your request was malformed or missing some required parameters, such as a token or an input. This could be due to a typo, a formatting error, or a logic error in your code.

If you encounter an InvalidRequestError, please try the following steps:

* Read the error message carefully and identify the specific mistake. The error message should tell you which parameter was invalid or missing, and what value or format was expected.
* Check the documentation for the specific API method you were calling and make sure you are sending valid and complete parameters. You may need to review the parameter names, types, values, and formats, and ensure they match the documentation.
* Check the encoding, format, or size of your request data and make sure it is compatible with our services. You may need to encode your data in UTF-8, format your data as JSON, or compress your data if it is too large.
* Test your request using a tool like Postman or curl and make sure it works as expected. You may need to debug your code and fix any errors or inconsistencies in your request logic.
* If the issue persists, contact our support team and provide them with:
	+ The model you were using
	+ The error message and code you received
	+ The request data and headers you sent
	+ The timestamp and timezone of your request
	+ Any other relevant details that may help us diagnose the issue

Our support team will investigate the issue and get back to you as soon as possible.
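The "check your parameters" advice above can be automated with a small pre-flight validator. This is only a sketch: the required fields and limits it checks are illustrative assumptions, not the API's actual schema.

```python
def validate_completion_params(params: dict) -> list:
    """Return human-readable problems found in a completions request body.

    The checks mirror common InvalidRequestError causes: missing fields,
    wrong types, and out-of-range values. Adjust to the real API schema.
    """
    errors = []
    if not params.get("model"):
        errors.append("missing required parameter: model")
    if "prompt" in params and not isinstance(params["prompt"], (str, list)):
        errors.append("prompt must be a string or a list of strings")
    max_tokens = params.get("max_tokens")
    if max_tokens is not None:
        if not isinstance(max_tokens, int) or max_tokens <= 0:
            errors.append("max_tokens must be a positive integer")
    return errors
```

Running the validator before sending the request lets you fail fast locally instead of spending a round trip to get the same message back from the server.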
InvalidRequestError
6897194
https://help.openai.com/en/articles/6897194-invalidrequesterror
An AuthenticationError indicates that your API key or token was invalid, expired, or revoked. This could be due to a typo, a formatting error, or a security breach.

If you encounter an AuthenticationError, please try the following steps:

* Check your API key or token and make sure it is correct and active. You may need to generate a new key from the API key dashboard, ensure there are no extra spaces or characters, or use a different key or token if you have multiple ones.
* Ensure that you have followed the correct [formatting](https://beta.openai.com/docs/api-reference/authentication).
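A tiny sanity check for the header formatting mentioned above can catch the usual typos before a request is sent. This is a sketch: the `Bearer ` prefix requirement comes from the Authentication docs linked above, while the whitespace checks just cover common copy-paste mistakes.

```python
def check_auth_header(value: str) -> list:
    """Return a list of problems with an Authorization header value."""
    problems = []
    if value != value.strip():
        problems.append("leading or trailing whitespace in the header value")
    if not value.startswith("Bearer "):
        problems.append("missing 'Bearer ' prefix")
    else:
        key = value[len("Bearer "):]
        if not key:
            problems.append("empty API key")
        elif " " in key:
            problems.append("extra space inside the API key")
    return problems
```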
AuthenticationError
6897198
https://help.openai.com/en/articles/6897198-authenticationerror
A PermissionError indicates that your API key or token does not have the required scope or role to perform the requested action. This could be due to a misconfiguration, a limitation, or a policy change.

If you encounter a PermissionError, please contact our support team and provide them with the following information:

* The model you were using
* The error message and code you received
* The request data and headers you sent
* The timestamp and timezone of your request
* Any other relevant details that may help us diagnose the issue

Our support team will investigate the issue and get back to you as soon as possible.
PermissionError
6897199
https://help.openai.com/en/articles/6897199-permissionerror
A RateLimitError indicates that you have hit your assigned rate limit. This means that you have sent too many tokens or requests in a given period of time, and our services have temporarily blocked you from sending more. We impose rate limits to ensure fair and efficient use of our resources and to prevent abuse or overload of our services.

If you encounter a RateLimitError, please try the following steps:

* Wait until your rate limit resets (one minute) and retry your request. The error message should give you a sense of your usage rate and permitted usage.
* Send fewer tokens or requests, or slow down. You may need to reduce the frequency or volume of your requests, batch your tokens, or implement exponential back-off. You can read our rate limit guidance [here](https://help.openai.com/en/articles/6891753-rate-limit-advice).
* You can also check your usage statistics from your account dashboard.
RateLimitError
6897202
https://help.openai.com/en/articles/6897202-ratelimiterror
A ServiceUnavailableError indicates that our servers are temporarily unable to handle your request. This could be due to planned or unplanned maintenance, a system upgrade, or a server failure. These errors can also be returned during periods of high traffic. We apologize for any inconvenience, and we are working hard to restore our services as soon as possible.

If you encounter a ServiceUnavailableError, please try the following steps:

* Wait a few minutes and retry your request. Sometimes, the issue may be resolved quickly and your request may succeed on the next attempt.
* Check our [status page](https://status.openai.com/) for any ongoing incidents or maintenance that may affect our services. If there is an active incident, please follow the updates and wait until it is resolved before retrying your request.
* If the issue persists, contact our support team and provide them with the following information:
	+ The model you were using
	+ The error message and code you received
	+ The request data and headers you sent
	+ The timestamp and timezone of your request
	+ Any other relevant details that may help us diagnose the issue

Our support team will investigate the issue and get back to you as soon as possible.
ServiceUnavailableError
6897204
https://help.openai.com/en/articles/6897204-serviceunavailableerror
This article outlines the error types returned when using the OpenAI Python library. Read a summary of the cause and solution below, or click through to each article for more.

| Error type | Cause and Solution |
| --- | --- |
| [APIError](https://help.openai.com/en/articles/6897179-apierror) | **Cause:** Issue on our side. **Solution:** Retry your request after a brief wait and contact us if the issue persists. |
| [Timeout](https://help.openai.com/en/articles/6897186-timeout) | **Cause:** Request timed out. **Solution:** Retry your request after a brief wait and contact us if the issue persists. |
| [APIConnectionError](https://help.openai.com/en/articles/6897191-apiconnectionerror) | **Cause:** Issue connecting to our services. **Solution:** Check your network settings, proxy configuration, SSL certificates, or firewall rules. |
| [InvalidRequestError](https://help.openai.com/en/articles/6897194-invalidrequesterror) | **Cause:** Your request was malformed or missing some required parameters, such as a token or an input. **Solution:** The error message should advise you on the specific mistake. Check the documentation for the specific API method you are calling and make sure you are sending valid and complete parameters. You may also need to check the encoding, format, or size of your request data. |
| [AuthenticationError](https://help.openai.com/en/articles/6897198-authenticationerror) | **Cause:** Your API key or token was invalid, expired, or revoked. **Solution:** Check your API key or token and make sure it is correct and active. You may need to generate a new one from your account dashboard. |
| [PermissionError](https://help.openai.com/en/articles/6897199-permissionerror) | **Cause:** Your API key or token does not have the required scope or role to perform the requested action. **Solution:** Make sure your API key has the appropriate permissions for the action or model accessed. |
| [RateLimitError](https://help.openai.com/en/articles/6897202-ratelimiterror) | **Cause:** You have hit your assigned rate limit. **Solution:** Pace your requests. Read more [here](https://help.openai.com/en/articles/6891753-rate-limit-advice). |
| [ServiceUnavailableError](https://help.openai.com/en/articles/6897204-serviceunavailableerror) | **Cause:** Issue on our servers. **Solution:** Retry your request after a brief wait and contact us if the issue persists. |

We advise you to handle errors returned by the API programmatically. To do so, you may wish to use a code snippet like the one below:

```
try:
    # Make your OpenAI API request here
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt="Hello world",
    )
except openai.error.Timeout as e:
    # Handle timeout error, e.g. retry or log
    print(f"OpenAI API request timed out: {e}")
except openai.error.APIError as e:
    # Handle API error, e.g. retry or log
    print(f"OpenAI API returned an API Error: {e}")
except openai.error.APIConnectionError as e:
    # Handle connection error, e.g. check network or log
    print(f"OpenAI API request failed to connect: {e}")
except openai.error.InvalidRequestError as e:
    # Handle invalid request error, e.g. validate parameters or log
    print(f"OpenAI API request was invalid: {e}")
except openai.error.AuthenticationError as e:
    # Handle authentication error, e.g. check credentials or log
    print(f"OpenAI API request was not authorized: {e}")
except openai.error.PermissionError as e:
    # Handle permission error, e.g. check scope or log
    print(f"OpenAI API request was not permitted: {e}")
except openai.error.RateLimitError as e:
    # Handle rate limit error, e.g. wait or log
    print(f"OpenAI API request exceeded rate limit: {e}")
```
OpenAI Library Error Types Guidance
6897213
https://help.openai.com/en/articles/6897213-openai-library-error-types-guidance
The latency of a completion request is mostly influenced by two factors: the model and the number of tokens generated. Please read our updated documentation for [guidance on improving latencies.](https://beta.openai.com/docs/guides/production-best-practices/improving-latencies)
Guidance on improving latencies
6901266
https://help.openai.com/en/articles/6901266-guidance-on-improving-latencies
1. **What is ChatGPT Plus?**
	1. ChatGPT Plus is a subscription plan for ChatGPT. It offers availability even when demand is high, faster response speed, and priority access to new features.
2. **Is the free version still available?**
	1. Yes, free access to ChatGPT will still be provided. By offering this subscription pricing, we will be able to help support free access availability to as many people as possible. See our [general ChatGPT article](https://help.openai.com/en/articles/6783457-chatgpt-faq) for more information on our free offering.
3. **How can I cancel my subscription?**
	1. You may cancel your subscription at any time. Click "My Account" in the [sidebar](https://chat.openai.com/chat), then click "Manage my subscription" in the pop-up window. You'll be directed to a Stripe checkout page where you can select "Cancel Plan". Your cancellation will take effect the day after the next billing date, and you can continue using our services until then. To avoid being charged for your next billing period, cancel your subscription at least 24 hours before your next billing date. Subscription fees are non-refundable.
4. **What is the refund policy?**
	1. If you live in the EU, UK, or Turkey, you're eligible for a refund if you cancel your subscription within 14 days of purchase. Please send us a message via the chat widget in the bottom right of your screen in our [Help Center](https://help.openai.com/en/), select the "Billing" option, and then select "I need a refund".
5. **How can I request a VAT tax refund?**
	1. Please send us a message via the chat widget in the bottom right of your screen in our [Help Center](https://help.openai.com/en/), select the "Billing" option, and then select "VAT exemption request". Be sure to include your billing information (name, email, and billing address) so we can process your request faster.
6. **My account got terminated. Can I get a refund?**
	1. If we terminate your account for violating our Terms of Use, you still owe any unpaid fees and will not be given a refund for any remaining credit or prepaid service.
7. **How can I opt out of my data being used to improve model performance?**
	1. Please fill out [this form](https://docs.google.com/forms/d/e/1FAIpQLScrnC-_A7JFs4LbIuzevQ_78hVERlNqqCPCt3d8XqnKOfdRdQ/viewform). Additionally, you may request that your account be [deleted](https://help.openai.com/en/articles/6378407-how-can-i-delete-my-account) at any time.
8. **Where can I find my invoice for ChatGPT Plus?**
	1. Receipts for credit purchases are sent to the email address you used when making the purchase. You may also view your invoices from the sidebar by clicking "My Account" and then "Manage my subscription".
9. **Are alternate payment options available?**
	1. At this time, we only accept payment via credit card.
10. **I want to use ChatGPT Plus with sensitive data. Who can view my conversations?**
	1. As part of our commitment to safe and responsible AI, we may review conversations to improve our systems and to ensure the content complies with our policies and safety requirements. For more information on how we handle data, please see our [Privacy Policy](https://openai.com/privacy/) and [Terms of Use](https://openai.com/terms/).
11. **Is the ChatGPT API included in the ChatGPT Plus subscription?**
	1. No, the ChatGPT API and the ChatGPT Plus subscription are billed separately. The API has its own pricing, which can be found at <https://openai.com/pricing>. The ChatGPT Plus subscription covers usage on chat.openai.com only and costs $20/month.
12. **I am using the free version of ChatGPT, so can I use the ChatGPT API for free too?**
	1. No, API usage is its own separate cost. The ChatGPT API is not available for free. See our [Pricing](https://openai.com/pricing) page for details.
What is ChatGPT Plus?
6950777
https://help.openai.com/en/articles/6950777-what-is-chatgpt-plus
