{"Chunk ID": "How to count tokens with tiktoken", "Chunk": "tiktoken is a fast open-source tokenizer by OpenAI.\n\nGiven a text string (e.g., \"tiktoken is great!\") and an encoding (e.g., \"cl100k_base\"), a tokenizer can split the text string into a list of tokens (e.g., [\"t\", \"ik\", \"token\", \" is\", \" great\", \"!\"]).\n\nSplitting text strings into tokens is useful because GPT models see text in the form of tokens. Knowing how many tokens are in a text string can tell you (a) whether the string is too long for a text model to process and (b) how much an OpenAI API call costs (as usage is priced by token).", "Source": "https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken"} {"Chunk ID": "How to count tokens with tiktoken", "Chunk": "Encodings\nEncodings specify how text is converted into tokens. Different models use different encodings.\n\ntiktoken supports three encodings used by OpenAI models:\n\nEncoding name\tOpenAI models\ncl100k_base\tgpt-4, gpt-3.5-turbo, text-embedding-ada-002\np50k_base\tCodex models, text-davinci-002, text-davinci-003\nr50k_base (or gpt2)\tGPT-3 models like davinci\nYou can retrieve the encoding for a model using tiktoken.encoding_for_model() as follows:", "Source": "https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken"} {"Chunk ID": "How to count tokens with tiktoken", "Chunk": "0. Install tiktoken\nIf needed, install tiktoken with pip:\n\n%pip install --upgrade tiktoken\n\n1. Import tiktoken\nimport tiktoken\n\n2. Load an encoding\nUse tiktoken.get_encoding() to load an encoding by name.\n\nThe first time this runs, it will require an internet connection to download. Later runs won't need an internet connection.", "Source": "https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken"} {"Chunk ID": "How to count tokens with tiktoken", "Chunk": "3. Turn text into tokens with encoding.encode()\nThe .encode() method converts a text string into a list of token integers.\n\nencoding.encode(\"tiktoken is great!\")\n\n[83, 1609, 5963, 374, 2294, 0]\nCount tokens by counting the length of the list returned by .encode().", "Source": "https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken"} {"Chunk ID": "How to count tokens with tiktoken", "Chunk": "4. Turn tokens into text with encoding.decode()\n.decode() converts a list of token integers to a string.\n\nencoding.decode([83, 1609, 5963, 374, 2294, 0])\n\n'tiktoken is great!'\nWarning: although .decode() can be applied to single tokens, beware that it can be lossy for tokens that aren't on utf-8 boundaries.\n\nFor single tokens, .decode_single_token_bytes() safely converts a single integer token to the bytes it represents.", "Source": "https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken"} {"Chunk ID": "How to count tokens with tiktoken", "Chunk": "5. Comparing encodings\nDifferent encodings vary in how they split words, group spaces, and handle non-English characters. Using the methods above, we can compare different encodings on a few example strings.", "Source": "https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken"} {"Chunk ID": "How to count tokens with tiktoken", "Chunk": "6. 
Counting tokens for chat completions API calls\nChatGPT models like gpt-3.5-turbo and gpt-4 use tokens in the same way as older completions models, but because of their message-based formatting, it's more difficult to count how many tokens will be used by a conversation.", "Source": "https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken"} {"Chunk ID": "How to count tokens with tiktoken", "Chunk": "Below is an example function for counting tokens for messages passed to gpt-3.5-turbo or gpt-4.\n\nNote that the exact way that tokens are counted from messages may change from model to model. Consider the counts from the function below an estimate, not a timeless guarantee.\n\nIn particular, requests that use the optional functions input will consume extra tokens on top of the estimates calculated below:", "Source": "https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken"} {"Chunk ID": "How to count tokens with tiktoken", "Chunk": "def num_tokens_from_messages(messages, model=\"gpt-3.5-turbo-0613\"):\n \"\"\"Return the number of tokens used by a list of messages.\"\"\"\n try:\n encoding = tiktoken.encoding_for_model(model)\n except KeyError:\n print(\"Warning: model not found. Using cl100k_base encoding.\")\n encoding = tiktoken.get_encoding(\"cl100k_base\")\n if model in {\n \"gpt-3.5-turbo-0613\",\n \"gpt-3.5-turbo-16k-0613\",\n \"gpt-4-0314\",\n \"gpt-4-32k-0314\",\n \"gpt-4-0613\",\n \"gpt-4-32k-0613\",\n }:\n tokens_per_message = 3\n tokens_per_name = 1\n elif model == \"gpt-3.5-turbo-0301\":\n tokens_per_message = 4 # every message follows {role/name}\\n{content}\\n\n tokens_per_name = -1 # if there's a name, the role is omitted\n elif \"gpt-3.5-turbo\" in model:\n print(\"Warning: gpt-3.5-turbo may update over time. Returning num tokens assuming gpt-3.5-turbo-0613.\")\n return num_tokens_from_messages(messages, model=\"gpt-3.5-turbo-0613\")\n elif \"gpt-4\" in model:\n print(\"Warning: gpt-4 may update over time. Returning num tokens assuming gpt-4-0613.\")\n return num_tokens_from_messages(messages, model=\"gpt-4-0613\")\n else:\n raise NotImplementedError(\n f\"\"\"num_tokens_from_messages() is not implemented for model {model}. 
See https://github.com/openai/openai-python/blob/main/chatml.md for information on how messages are converted to tokens.\"\"\"\n )\n num_tokens = 0\n for message in messages:\n num_tokens += tokens_per_message\n for key, value in message.items():\n num_tokens += len(encoding.encode(value))\n if key == \"name\":\n num_tokens += tokens_per_name\n num_tokens += 3 # every reply is primed with assistant\n return num_tokens\n\n# let's verify the function above matches the OpenAI API response\n\nimport openai\n\nexample_messages = [\n {\n \"role\": \"system\",\n \"content\": \"You are a helpful, pattern-following assistant that translates corporate jargon into plain English.\",\n },\n {\n \"role\": \"system\",\n \"name\": \"example_user\",\n \"content\": \"New synergies will help drive top-line growth.\",\n },\n {\n \"role\": \"system\",\n \"name\": \"example_assistant\",\n \"content\": \"Things working well together will increase revenue.\",\n },\n {\n \"role\": \"system\",\n \"name\": \"example_user\",\n \"content\": \"Let's circle back when we have more bandwidth to touch base on opportunities for increased leverage.\",\n },\n {\n \"role\": \"system\",\n \"name\": \"example_assistant\",\n \"content\": \"Let's talk later when we're less busy about how to do better.\",\n },\n {\n \"role\": \"user\",\n \"content\": \"This late pivot means we don't have time to boil the ocean for the client deliverable.\",\n },\n]\n\nfor model in [\n \"gpt-3.5-turbo-0301\",\n \"gpt-3.5-turbo-0613\",\n \"gpt-3.5-turbo\",\n \"gpt-4-0314\",\n \"gpt-4-0613\",\n \"gpt-4\",\n ]:\n print(model)\n # example token count from the function defined above\n print(f\"{num_tokens_from_messages(example_messages, model)} prompt tokens counted by num_tokens_from_messages().\")\n # example token count from the OpenAI API\n response = openai.ChatCompletion.create(\n model=model,\n messages=example_messages,\n temperature=0,\n max_tokens=1, # we're only counting input tokens here, so let's not waste tokens on the output\n )\n print(f'{response[\"usage\"][\"prompt_tokens\"]} prompt tokens counted by the OpenAI API.')\n print()\n", "Source": "https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken"} {"Chunk ID": "How to call functions with chat models - Part 1", "Chunk": "This notebook covers how to use the Chat Completions API in combination with external functions to extend the capabilities of GPT models. functions is an optional parameter in the Chat Completion API which can be used to provide function specifications. The purpose of this is to enable models to generate function arguments which adhere to the provided specifications. Note that the API will not actually execute any function calls. It is up to developers to execute function calls using model outputs.", "Source": "https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models"} {"Chunk ID": "How to call functions with chat models - Part 2", "Chunk": "If the functions parameter is provided then by default the model will decide when it is appropriate to use one of the functions. The API can be forced to use a specific function by setting the function_call parameter to {'name': ''}. The API can also be forced to not use any function by setting the function_call parameter to 'none'. 
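For concreteness, the three modes can be exercised with requests along these lines. This is a hedged sketch, not a cell from the notebook: it uses the legacy openai 0.x ChatCompletion interface that the surrounding examples use, and a deliberately stripped-down stand-in function spec rather than the weather functions defined later.

```python
# Hedged sketch: the three function_call modes with the legacy openai 0.x interface.
import openai

functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    }
]
messages = [{"role": "user", "content": "What's the weather like in Boston?"}]

# default: the model decides whether to call a function
openai.ChatCompletion.create(model="gpt-3.5-turbo-0613", messages=messages, functions=functions)

# force the named function: the reply will contain generated arguments for it
openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=messages,
    functions=functions,
    function_call={"name": "get_current_weather"},
)

# disallow function calls entirely: the reply will be ordinary text
openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=messages,
    functions=functions,
    function_call="none",
)
```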
If a function is used, the output will contain 'finish_reason': 'function_call' in the response, as well as a function_call object that has the name of the function and the generated function arguments.", "Source": "https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models"} {"Chunk ID": "How to call functions with chat models - Overview", "Chunk": "Overview This notebook contains the following 2 sections: How to generate function arguments: Specify a set of functions and use the API to generate function arguments. How to call functions with model generated arguments: Close the loop by actually executing functions with model generated arguments.", "Source": "https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models"} {"Chunk ID": "How to call functions with chat models - Installation", "Chunk": "!pip install scipy !pip install tenacity !pip install tiktoken !pip install termcolor !pip install openai !pip install requests import json import openai import requests from tenacity import retry, wait_random_exponential, stop_after_attempt from termcolor import colored GPT_MODEL = 'gpt-3.5-turbo-0613'", "Source": "https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models"} {"Chunk ID": "How to call functions with chat models - Utilities", "Chunk": "Utilities First let's define a few utilities for making calls to the Chat Completions API and for maintaining and keeping track of the conversation state. @retry(wait=wait_random_exponential(multiplier=1, max=40), stop=stop_after_attempt(3)) def chat_completion_request(messages, functions=None, function_call=None, model=GPT_MODEL): headers = { 'Content-Type': 'application/json', 'Authorization': 'Bearer ' + openai.api_key, } json_data = {'model': model, 'messages': messages} if functions is not None: json_data.update({'functions': functions}) if function_call is not None: json_data.update({'function_call': function_call}) try: response = requests.post( 'https://api.openai.com/v1/chat/completions', headers=headers, json=json_data, ) return response except Exception as e: print('Unable to generate ChatCompletion response') print(f'Exception: {e}') return e", "Source": "https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models"} {"Chunk ID": "How to call functions with chat models - Pretty Print", "Chunk": "def pretty_print_conversation(messages): role_to_color = { 'system': 'red', 'user': 'green', 'assistant': 'blue', 'function': 'magenta', } for message in messages: if message['role'] == 'system': print(colored(f'system: {message['content']}\n', role_to_color[message['role']])) elif message['role'] == 'user': print(colored(f'user: {message['content']}\n', role_to_color[message['role']])) elif message['role'] == 'assistant' and message.get('function_call'): print(colored(f'assistant: {message['function_call']}\n', role_to_color[message['role']])) elif message['role'] == 'assistant' and not message.get('function_call'): print(colored(f'assistant: {message['content']}\n', role_to_color[message['role']])) elif message['role'] == 'function': print(colored(f'function ({message['name']}): {message['content']}\n', role_to_color[message['role']]))", "Source": "https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models"} {"Chunk ID": "How to call functions with chat models - Basic Concepts", "Chunk": "Basic concepts Let's create some function specifications to interface with a hypothetical weather API. 
We'll pass these function specification to the Chat Completions API in order to generate function arguments that adhere to the specification. functions = [ { 'name': 'get_current_weather', 'description': 'Get the current weather', 'parameters': { 'type': 'object', 'properties': { 'location': { 'type': 'string', 'description': 'The city and state, e.g. San Francisco, CA', }, 'format': { 'type': 'string', 'enum': ['celsius', 'fahrenheit'], 'description': 'The temperature unit to use. Infer this from the users location.', }, }, 'required': ['location', 'format'], }, }, { 'name': 'get_n_day_weather_forecast', 'description': 'Get an N-day weather forecast', 'parameters': { 'type': 'object', 'properties': { 'location': { 'type': 'string', 'description': 'The city and state, e.g. San Francisco, CA', }, 'format': { 'type': 'string', 'enum': ['celsius', 'fahrenheit'], 'description': 'The temperature unit to use. Infer this from the users location.', }, 'num_days': { 'type': 'integer', 'description': 'The number of days to forecast', } }, 'required': ['location', 'format', 'num_days'] }, }, ]", "Source": "https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models"} {"Chunk ID": "How to call functions with chat models - Weather Query", "Chunk": "If we prompt the model about the current weather, it will respond with some clarifying questions. messages = [] messages.append({'role': 'system', 'content': 'Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous.'}) messages.append({'role': 'user', 'content': 'What's the weather like today'}) chat_response = chat_completion_request( messages, functions=functions ) assistant_message = chat_response.json()['choices'][0]['message'] messages.append(assistant_message) assistant_message", "Source": "https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models"} {"Chunk ID": "How to call functions with chat models - Weather Query Response", "Chunk": "{'role': 'assistant', 'content': 'In which city and state would you like to know the current weather?'} Once we provide the missing information, it will generate the appropriate function arguments for us. messages.append({'role': 'user', 'content': 'I'm in Glasgow, Scotland.'}) chat_response = chat_completion_request( messages, functions=functions ) assistant_message = chat_response.json()['choices'][0]['message'] messages.append(assistant_message) assistant_message", "Source": "https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models"} {"Chunk ID": "How to call functions with chat models - Weather Query Different Prompt", "Chunk": "If we prompt it differently, we can get it to target the other function we've told it about. messages = [] messages.append({'role': 'system', 'content': 'Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous.'}) messages.append({'role': 'user', 'content': 'what is the weather going to be like in Glasgow, Scotland over the next x days'}) chat_response = chat_completion_request( messages, functions=functions ) assistant_message = chat_response.json()['choices'][0]['message'] messages.append(assistant_message) assistant_message", "Source": "https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models"} {"Chunk ID": "How to call functions with chat models - Weather Query Response 2", "Chunk": "{'role': 'assistant', 'content': 'Sure, I can help you with that. 
Please provide me with the number of days you want to forecast for.'} Once again, the model is asking us for clarification because it doesn't have enough information yet. In this case it already knows the location for the forecast, but it needs to know how many days are required in the forecast. messages.append({'role': 'user', 'content': '5 days'}) chat_response = chat_completion_request( messages, functions=functions ) chat_response.json()['choices'][0]", "Source": "https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models"} {"Chunk ID": "How to call functions with chat models - Force Specific Function", "Chunk": "Forcing the use of specific functions or no function We can force the model to use a specific function, for example get_n_day_weather_forecast by using the function_call argument. By doing so, we force the model to make assumptions about how to use it. # in this cell we force the model to use get_n_day_weather_forecast messages = [] messages.append({'role': 'system', 'content': 'Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous.'}) messages.append({'role': 'user', 'content': 'Give me a weather report for Toronto, Canada.'}) chat_response = chat_completion_request( messages, functions=functions, function_call={'name': 'get_n_day_weather_forecast'} ) chat_response.json()['choices'][0]['message']", "Source": "https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models"} {"Chunk ID": "How to call functions with chat models - Without Forcing Function", "Chunk": "# if we don't force the model to use get_n_day_weather_forecast it may not messages = [] messages.append({'role': 'system', 'content': 'Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous.'}) messages.append({'role': 'user', 'content': 'Give me a weather report for Toronto, Canada.'}) chat_response = chat_completion_request( messages, functions=functions ) chat_response.json()['choices'][0]['message']", "Source": "https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models"} {"Chunk ID": "How to call functions with chat models - Force No Function", "Chunk": "We can also force the model to not use a function at all. By doing so we prevent it from producing a proper function call. messages = [] messages.append({'role': 'system', 'content': 'Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous.'}) messages.append({'role': 'user', 'content': 'Give me the current weather (use Celcius) for Toronto, Canada.'}) chat_response = chat_completion_request( messages, functions=functions, function_call='none' ) chat_response.json()['choices'][0]['message']", "Source": "https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models"} {"Chunk ID": "How to call functions with chat models - SQL Functions", "Chunk": "How to call functions with model generated arguments In our next example, we'll demonstrate how to execute functions whose inputs are model-generated, and use this to implement an agent that can answer questions for us about a database. For simplicity we'll use the Chinook sample database. 
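Before wiring this up to the database, it may help to sketch the generic close-the-loop step that the weather examples above stopped short of: reading the model's function_call, parsing its JSON-encoded arguments, and appending the function's result as a new message. This is a hedged sketch only; the weather helper below is a hypothetical stand-in, and chat_response and messages are assumed to come from the forecast cells above. The database example that follows implements the same pattern concretely.

```python
import json

def get_n_day_weather_forecast(location, format, num_days):
    # hypothetical stand-in; a real implementation would call a weather API
    return f"{num_days}-day forecast for {location} in {format}: mild and breezy."

assistant_message = chat_response.json()["choices"][0]["message"]
if assistant_message.get("function_call"):
    # the generated arguments arrive as a JSON string, so decode them before calling
    args = json.loads(assistant_message["function_call"]["arguments"])
    results = get_n_day_weather_forecast(**args)
    # append the result as a 'function' message so the model can use it in its next reply
    messages.append(
        {
            "role": "function",
            "name": assistant_message["function_call"]["name"],
            "content": results,
        }
    )
```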
Note: SQL generation can be high-risk in a production environment since models are not perfectly reliable at generating correct SQL.", "Source": "https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models"} {"Chunk ID": "How to call functions with chat models - SQL Function Specification", "Chunk": "Specifying a function to execute SQL queries First let's define some helpful utility functions to extract data from a SQLite database. import sqlite3 conn = sqlite3.connect('data/Chinook.db') print('Opened database successfully')", "Source": "https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models"} {"Chunk ID": "How to call functions with chat models - Database Schema Functions", "Chunk": "def get_table_names(conn): 'Return a list of table names' table_names = [] tables = conn.execute('SELECT name FROM sqlite_master WHERE type='table';') for table in tables.fetchall(): table_names.append(table[0]) return table_names def get_column_names(conn, table_name): 'Return a list of column names.' column_names = [] columns = conn.execute(f'PRAGMA table_info({table_name});').fetchall() for col in columns: column_names.append(col[1]) return column_names def get_database_info(conn): 'Return a list of dicts containing the table name and columns for each table in the database' table_dicts = [] for table_name in get_table_names(conn): columns_names = get_column_names(conn, table_name) table_dicts.append({'table_name': table_name, 'column_names': columns_names}) return table_dicts", "Source": "https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models"} {"Chunk ID": "How to call functions with chat models - Database Schema Representation", "Chunk": "database_schema_dict = get_database_info(conn) database_schema_string = '\\n'.join( [ f'Table: {table['table_name']}'\\nColumns: {', '.join(table['column_names'])}' for table in database_schema_dict ] )", "Source": "https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models"} {"Chunk ID": "How to call functions with chat models - SQL Function Specification 2", "Chunk": "As before, we'll define a function specification for the function we'd like the API to generate arguments for. Notice that we are inserting the database schema into the function specification. This will be important for the model to know about. functions = [ { 'name': 'ask_database', 'description': 'Use this function to answer user questions about music. Input should be a fully formed SQL query.', 'parameters': { 'type': 'object', 'properties': { 'query': { 'type': 'string', 'description': f' SQL query extracting info to answer the user's question. SQL should be written using this database schema: {database_schema_string} The query should be returned in plain text, not in JSON.', }, }, 'required': ['query'], }, }, ]", "Source": "https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models"} {"Chunk ID": "How to call functions with chat models - Execute SQL Queries", "Chunk": "Executing SQL queries Now let's implement the function that will actually excute queries against the database. def ask_database(conn, query): 'Function to query SQLite database with a provided SQL query.' 
try: results = str(conn.execute(query).fetchall()) except Exception as e: results = f'query failed with error: {e}' return results def execute_function_call(message): if message['function_call']['name'] == 'ask_database': query = json.loads(message['function_call']['arguments'])['query'] results = ask_database(conn, query) else: results = f'Error: function {message['function_call']['name']} does not exist' return results", "Source": "https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models"} {"Chunk ID": "How to call functions with chat models - SQL Query Example", "Chunk": "messages = [] messages.append({'role': 'system', 'content': 'Answer user questions by generating SQL queries against the Chinook Music Database.'}) messages.append({'role': 'user', 'content': 'Hi, who are the top 5 artists by number of tracks?'}) chat_response = chat_completion_request(messages, functions) assistant_message = chat_response.json()['choices'][0]['message'] messages.append(assistant_message) if assistant_message.get('function_call'): results = execute_function_call(assistant_message) messages.append({'role': 'function', 'name': assistant_message['function_call']['name'], 'content': results}) pretty_print_conversation(messages)", "Source": "https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models"} {"Chunk ID": "How to call functions with chat models - SQL Query Example 2", "Chunk": "messages.append({'role': 'user', 'content': 'What is the name of the album with the most tracks?'}) chat_response = chat_completion_request(messages, functions) assistant_message = chat_response.json()['choices'][0]['message'] messages.append(assistant_message) if assistant_message.get('function_call'): results = execute_function_call(assistant_message) messages.append({'role': 'function', 'content': results, 'name': assistant_message['function_call']['name']}) pretty_print_conversation(messages)", "Source": "https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models"} {"Chunk ID": "Data preparation and analysis for chat model fine-tuning", "Chunk": "This notebook serves as a tool to preprocess and analyze the chat dataset used for fine-tuning a chat model. It checks for format errors, provides basic statistics, and estimates token counts for fine-tuning costs. The method shown here corresponds to legacy fine-tuning for models like babbage-002 and davinci-002. For fine-tuning gpt-3.5-turbo, see the current fine-tuning page.", "Source": "https://cookbook.openai.com/examples/chat_finetuning_data_prep"} {"Chunk ID": "Data preparation and analysis for chat model fine-tuning", "Chunk": "We first load the chat dataset from an example JSONL file.\n\ndata_path = \"data/toy_chat_fine_tuning.jsonl\"\n\n# Load the dataset\nwith open(data_path, 'r', encoding='utf-8') as f:\n dataset = [json.loads(line) for line in f]\n\n# Initial dataset stats\nprint(\"Num examples:\", len(dataset))\nprint(\"First example:\")\nfor message in dataset[0][\"messages\"]:\n print(message)", "Source": "https://cookbook.openai.com/examples/chat_finetuning_data_prep"} {"Chunk ID": "Data preparation and analysis for chat model fine-tuning", "Chunk": "We can perform a variety of error checks to validate that each conversation in the dataset adheres to the format expected by the fine-tuning API. Errors are categorized based on their nature for easier debugging.\n\nData Type Check: Checks whether each entry in the dataset is a dictionary (dict). 
Error type: data_type.\nPresence of Message List: Checks if a messages list is present in each entry. Error type: missing_messages_list.\nMessage Keys Check: Validates that each message in the messages list contains the keys role and content. Error type: message_missing_key.\nUnrecognized Keys in Messages: Logs if a message has keys other than role, content, and name. Error type: message_unrecognized_key.\nRole Validation: Ensures the role is one of \"system\", \"user\", or \"assistant\". Error type: unrecognized_role.\nContent Validation: Verifies that content has textual data and is a string. Error type: missing_content.\nAssistant Message Presence: Checks that each conversation has at least one message from the assistant. Error type: example_missing_assistant_message.\nThe code below performs these checks, and outputs counts for each type of error found are printed. This is useful for debugging and ensuring the dataset is ready for the next steps.", "Source": "https://cookbook.openai.com/examples/chat_finetuning_data_prep"} {"Chunk ID": "Data preparation and analysis for chat model fine-tuning", "Chunk": "Lets define a few helpful utilities to be used in the rest of the notebook.\n\nencoding = tiktoken.get_encoding(\"cl100k_base\")\n\n# not exact!\n# simplified from https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb\ndef num_tokens_from_messages(messages, tokens_per_message=3, tokens_per_name=1):\n num_tokens = 0\n for message in messages:\n num_tokens += tokens_per_message\n for key, value in message.items():\n num_tokens += len(encoding.encode(value))\n if key == \"name\":\n num_tokens += tokens_per_name\n num_tokens += 3\n return num_tokens", "Source": "https://cookbook.openai.com/examples/chat_finetuning_data_prep"} {"Chunk ID": "Data preparation and analysis for chat model fine-tuning", "Chunk": "def num_assistant_tokens_from_messages(messages):\n num_tokens = 0\n for message in messages:\n if message[\"role\"] == \"assistant\":\n num_tokens += len(encoding.encode(message[\"content\"]))\n return num_tokens", "Source": "https://cookbook.openai.com/examples/chat_finetuning_data_prep"} {"Chunk ID": "Data preparation and analysis for chat model fine-tuning", "Chunk": "def print_distribution(values, name):\n print(f\"\\n#### Distribution of {name}:\")\n print(f\"min / max: {min(values)}, {max(values)}\")\n print(f\"mean / median: {np.mean(values)}, {np.median(values)}\")\n print(f\"p5 / p95: {np.quantile(values, 0.1)}, {np.quantile(values, 0.9)}\")", "Source": "https://cookbook.openai.com/examples/chat_finetuning_data_prep"} {"Chunk ID": "Data preparation and analysis for chat model fine-tuning", "Chunk": "With some lightweight analysis we can identify potential issues in the dataset, like missing messages, and provide statistical insights into message and token counts.\n\nMissing System/User Messages: Counts the number of conversations missing a \"system\" or \"user\" message. Such messages are critical for defining the assistant's behavior and initiating the conversation.\nNumber of Messages Per Example: Summarizes the distribution of the number of messages in each conversation, providing insight into dialogue complexity.\nTotal Tokens Per Example: Calculates and summarizes the distribution of the total number of tokens in each conversation. Important for understanding fine-tuning costs.\nTokens in Assistant's Messages: Calculates the number of tokens in the assistant's messages per conversation and summarizes this distribution. 
Useful for understanding the assistant's verbosity.\nToken Limit Warnings: Checks if any examples exceed the maximum token limit (4096 tokens), as such examples will be truncated during fine-tuning, potentially resulting in data loss.", "Source": "https://cookbook.openai.com/examples/chat_finetuning_data_prep"} {"Chunk ID": "Data preparation and analysis for chat model fine-tuning", "Chunk": "# Warnings and tokens counts\nn_missing_system = 0\nn_missing_user = 0\nn_messages = []\nconvo_lens = []\nassistant_message_lens = []\n\nfor ex in dataset:\n messages = ex[\"messages\"]\n if not any(message[\"role\"] == \"system\" for message in messages):\n n_missing_system += 1\n if not any(message[\"role\"] == \"user\" for message in messages):\n n_missing_user += 1\n n_messages.append(len(messages))\n convo_lens.append(num_tokens_from_messages(messages))\n assistant_message_lens.append(num_assistant_tokens_from_messages(messages))\n\nprint(\"Num examples missing system message:\", n_missing_system)\nprint(\"Num examples missing user message:\", n_missing_user)\nprint_distribution(n_messages, \"num_messages_per_example\")\nprint_distribution(convo_lens, \"num_total_tokens_per_example\")\nprint_distribution(assistant_message_lens, \"num_assistant_tokens_per_example\")\nn_too_long = sum(l > 4096 for l in convo_lens)\nprint(f\"\\n{n_too_long} examples may be over the 4096 token limit, they will be truncated during fine-tuning\")", "Source": "https://cookbook.openai.com/examples/chat_finetuning_data_prep"} {"Chunk ID": "Data preparation and analysis for chat model fine-tuning", "Chunk": "In this final section, we estimate the total number of tokens that will be used for fine-tuning, which allows us to approximate the cost. It is worth noting that the duration of the fine-tuning jobs will also increase with the token count.\n\n# Pricing and default n_epochs estimate\nMAX_TOKENS_PER_EXAMPLE = 4096\n\nTARGET_EPOCHS = 3\nMIN_TARGET_EXAMPLES = 100\nMAX_TARGET_EXAMPLES = 25000\nMIN_DEFAULT_EPOCHS = 1\nMAX_DEFAULT_EPOCHS = 25\n\nn_epochs = TARGET_EPOCHS\nn_train_examples = len(dataset)\nif n_train_examples * TARGET_EPOCHS < MIN_TARGET_EXAMPLES:\n n_epochs = min(MAX_DEFAULT_EPOCHS, MIN_TARGET_EXAMPLES // n_train_examples)\nelif n_train_examples * TARGET_EPOCHS > MAX_TARGET_EXAMPLES:\n n_epochs = max(MIN_DEFAULT_EPOCHS, MAX_TARGET_EXAMPLES // n_train_examples)\n\nn_billing_tokens_in_dataset = sum(min(MAX_TOKENS_PER_EXAMPLE, length) for length in convo_lens)\nprint(f\"Dataset has ~{n_billing_tokens_in_dataset} tokens that will be charged for during training\")\nprint(f\"By default, you'll train for {n_epochs} epochs on this dataset\")\nprint(f\"By default, you'll be charged for ~{n_epochs * n_billing_tokens_in_dataset} tokens\")\n\nDataset has ~4306 tokens that will be charged for during training\nBy default, you'll train for 20 epochs on this dataset\nBy default, you'll be charged for ~86120 tokens\nSee https://openai.com/pricing to estimate total costs.", "Source": "https://cookbook.openai.com/examples/chat_finetuning_data_prep"} {"Chunk ID": "Data preparation and analysis for chat model fine-tuning", "Chunk": "Wikipedia article section:\nCurling at the 2022 Winter Olympics\n\n==Results summary==\n\n===Women's tournament===\n\n====Playoffs====\n\n=====Gold medal game=====\n''Sunday, 20 February, 9:05''\n{{#lst:Curling at the 2022 Winter Olympics – Women's tournament|GM}}\n{{Player percentages\n| team1 = {{flagIOC|JPN|2022 Winter}}\n| [[Yurika Yoshida]] | 97%\n| [[Yumi Suzuki]] | 82%\n| [[Chinami Yoshida]] | 
64%\n| [[Satsuki Fujisawa]] | 69%\n| teampct1 = 78%\n| team2 = {{flagIOC|GBR|2022 Winter}}\n| [[Hailey Duff]] | 90%\n| [[Jennifer Dodds]] | 89%\n| [[Vicky Wright]] | 89%\n| [[Eve Muirhead]] | 88%\n| teampct2 = 89%\n}}", "Source": "https://cookbook.openai.com/examples/chat_finetuning_data_prep"} {"Chunk ID": "Curling at the 2022 Winter Olympics - Medal summary", "Chunk": "Wikipedia article section:\nCurling at the 2022 Winter Olympics\n\n==Medal summary==\n\n===Medal table===\n{{Medals table\n| caption =\n| host =\n| flag_template = flagIOC\n| event = 2022 Winter\n| team =\n| gold_CAN = 0 | silver_CAN = 0 | bronze_CAN = 1\n| gold_ITA = 1 | silver_ITA = 0 | bronze_ITA = 0\n| gold_NOR = 0 | silver_NOR = 1 | bronze_NOR = 0\n| gold_SWE = 1 | silver_SWE = 0 | bronze_SWE = 2\n| gold_GBR = 1 | silver_GBR = 1 | bronze_GBR = 0\n| gold_JPN = 0 | silver_JPN = 1 | bronze_JPN = 0\n}}", "Source": "https://cookbook.openai.com/examples/chat_finetuning_data_prep"} {"Chunk ID": "Curling at the 2022 Winter Olympics - Men's tournament", "Chunk": "Wikipedia article section:\nCurling at the 2022 Winter Olympics\n\n==Results summary==\n\n===Men's tournament===\n\n====Playoffs====\n\n=====Gold medal game=====\n''Saturday, 19 February, 14:50''\n{{#lst:Curling at the 2022 Winter Olympics – Men's tournament|GM}}\n{{Player percentages\n| team1 = {{flagIOC|GBR|2022 Winter}}\n| [[Hammy McMillan Jr.]] | 95%\n| [[Bobby Lammie]] | 80%\n| [[Grant Hardie]] | 94%\n| [[Bruce Mouat]] | 89%\n| teampct1 = 90%\n| team2 = {{flagIOC|SWE|2022 Winter}}\n| [[Christoffer Sundgren]] | 99%\n| [[Rasmus Wranå]] | 95%\n| [[Oskar Eriksson]] | 93%\n| [[Niklas Edin]] | 87%\n| teampct2 = 94%\n}}", "Source": "https://cookbook.openai.com/examples/chat_finetuning_data_prep"} {"Chunk ID": "Curling at the 2022 Winter Olympics - Medalists", "Chunk": "Wikipedia article section:\nCurling at the 2022 Winter Olympics\n\n==Medal summary==\n\n===Medalists===\n{| {{MedalistTable|type=Event|columns=1}}\n|-\n|Men
{{DetailsLink|Curling at the 2022 Winter Olympics – Men's tournament}}\n|{{flagIOC|SWE|2022 Winter}}
[[Niklas Edin]]
[[Oskar Eriksson]]
[[Rasmus Wranå]]
[[Christoffer Sundgren]]
[[Daniel Magnusson (curler)|Daniel Magnusson]]\n|{{flagIOC|GBR|2022 Winter}}
[[Bruce Mouat]]
[[Grant Hardie]]
[[Bobby Lammie]]
[[Hammy McMillan Jr.]]
[[Ross Whyte]]\n|{{flagIOC|CAN|2022 Winter}}
[[Brad Gushue]]
[[Mark Nichols (curler)|Mark Nichols]]
[[Brett Gallant]]
[[Geoff Walker (curler)|Geoff Walker]]
[[Marc Kennedy]]\n|-\n|Women
{{DetailsLink|Curling at the 2022 Winter Olympics – Women's tournament}}\n|{{flagIOC|GBR|2022 Winter}}
[[Eve Muirhead]]
[[Vicky Wright]]
[[Jennifer Dodds]]
[[Hailey Duff]]
[[Mili Smith]]\n|{{flagIOC|JPN|2022 Winter}}
[[Satsuki Fujisawa]]
[[Chinami Yoshida]]
[[Yumi Suzuki]]
[[Yurika Yoshida]]
[[Kotomi Ishizaki]]\n|{{flagIOC|SWE|2022 Winter}}
[[Anna Hasselborg]]
[[Sara McManus]]
[[Agnes Knochenhauer]]
[[Sofia Mabergs]]
[[Johanna Heldin]]\n|-\n|Mixed doubles
{{DetailsLink|Curling at the 2022 Winter Olympics – Mixed doubles tournament}}\n|{{flagIOC|ITA|2022 Winter}}
[[Stefania Constantini]]
[[Amos Mosaner]]\n|{{flagIOC|NOR|2022 Winter}}
[[Kristin Skaslien]]
[[Magnus Nedregotten]]\n|{{flagIOC|SWE|2022 Winter}}
[[Almida de Val]]
[[Oskar Eriksson]]\n|}", "Source": "https://cookbook.openai.com/examples/chat_finetuning_data_prep"} {"Chunk ID": "Curling at the 2022 Winter Olympics - Men's tournament (Bronze medal game)", "Chunk": "Wikipedia article section:\nCurling at the 2022 Winter Olympics\n\n==Results summary==\n\n===Men's tournament===\n\n====Playoffs====\n\n=====Bronze medal game=====\n''Friday, 18 February, 14:05''\n{{#lst:Curling at the 2022 Winter Olympics – Men's tournament|BM}}\n{{Player percentages\n| team1 = {{flagIOC|USA|2022 Winter}}\n| [[John Landsteiner]] | 80%\n| [[Matt Hamilton (curler)|Matt Hamilton]] | 86%\n| [[Chris Plys]] | 74%\n| [[John Shuster]] | 69%\n| teampct1 = 77%\n| team2 = {{flagIOC|CAN|2022 Winter}}\n| [[Geoff Walker (curler)|Geoff Walker]] | 84%\n| [[Brett Gallant]] | 86%\n| [[Mark Nichols (curler)|Mark Nichols]] | 78%\n| [[Brad Gushue]] | 78%\n| teampct2 = 82%\n}}", "Source": "https://cookbook.openai.com/examples/chat_finetuning_data_prep"} {"Chunk ID": "Curling at the 2022 Winter Olympics - Teams (Mixed doubles)", "Chunk": "Wikipedia article section:\nCurling at the 2022 Winter Olympics\n\n==Teams==\n\n===Mixed doubles===\n{| class=wikitable\n|-\n!width=200|{{flagIOC|AUS|2022 Winter}}\n!width=200|{{flagIOC|CAN|2022 Winter}}\n!width=200|{{flagIOC|CHN|2022 Winter}}\n!width=200|{{flagIOC|CZE|2022 Winter}}\n!width=200|{{flagIOC|GBR|2022 Winter}}\n|-\n|\n'''Female:''' [[Tahli Gill]]
\n'''Male:''' [[Dean Hewitt]]\n|\n'''Female:''' [[Rachel Homan]]
Male: [[John Morris (curler)|John Morris]]\n|\n'''Female:''' Fan S
Male: L Z\n|\n'''Female:''' Z Paulov
Male: T Paul\n|\n'''Female:''' Jennifer Dodds
Male: Bruce Mouat\n|-\n!width=200|{{flagIOC|ITA|2022 Winter}}\n!width=200|{{flagIOC|NOR|2022 Winter}}\n!width=200|{{flagIOC|SWE|2022 Winter}}\n!width=200|{{flagIOC|SUI|2022 Winter}}\n!width=200|{{flagIOC|USA|2022 Winter}}\n|-\n|\n'''Female:''' Stefania Constantini
'''Male:''' Amos Mosaner\n|\n'''Female:''' Kristin Skaslien
Male: Magnus Nedregotten\n|\n'''Female:''' Almida de Val
Male: Oskar Eriksson\n|\n'''Female:''' Jenny Perret
Male: Martin Rios\n|\n'''Female:''' Vicky Persinger
Male: Chris Plys\n|}", "Source": "https://cookbook.openai.com/examples/chat_finetuning_data_prep"} {"Chunk ID": "Curling at the 2022 Winter Olympics - Women's tournament (Bronze medal game)", "Chunk": "Wikipedia article section:\nCurling at the 2022 Winter Olympics\n\n==Results summary==\n\n===Women's tournament===\n\n====Playoffs====\n\n=====Bronze medal game=====\n''Saturday, 19 February, 20:05''\n{{#lst:Curling at the 2022 Winter Olympics – Women's tournament|BM}}\n{{Player percentages\n| team1 = {{flagIOC|SUI|2022 Winter}}\n| [[Melanie Barbezat]] | 79%\n| [[Esther Neuenschwander]] | 75%\n| [[Silvana Tirinzoni]] | 81%\n| [[Alina Pätz]] | 64%\n| teampct1 = 75%\n| team2 = {{flagIOC|SWE|2022 Winter}}\n| [[Sofia Mabergs]] | 89%\n| [[Agnes Knochenhauer]] | 80%\n| [[Sara McManus]] | 81%\n| [[Anna Hasselborg]] | 76%\n| teampct2 = 82%\n}}", "Source": "https://cookbook.openai.com/examples/chat_finetuning_data_prep"} {"Chunk ID": "Curling at the 2022 Winter Olympics - Mixed doubles tournament (Gold medal game)", "Chunk": "Wikipedia article section:\nCurling at the 2022 Winter Olympics\n\n==Results summary==\n\n===Mixed doubles tournament===\n\n====Playoffs====\n\n=====Gold medal game=====\n''Tuesday, 8 February, 20:05''\n{{#lst:Curling at the 2022 Winter Olympics – Mixed doubles tournament|GM}}\n{| class=\"wikitable\"\n!colspan=4 width=400|Player percentages\n|-\n!colspan=2 width=200 style=\"white-space:nowrap;\"| {{flagIOC|ITA|2022 Winter}}\n!colspan=2 width=200 style=\"white-space:nowrap;\"| {{flagIOC|NOR|2022 Winter}}\n|-\n| [[Stefania Constantini]] || 83%\n| [[Kristin Skaslien]] || 70%\n|-\n| [[Amos Mosaner]] || 90%\n| [[Magnus Nedregotten]] || 69%\n|-\n| '''Total''' || 87%\n| '''Total''' || 69%\n|}", "Source": "https://cookbook.openai.com/examples/chat_finetuning_data_prep"} {"Chunk ID": "How to format inputs to ChatGPT models", "Chunk": "ChatGPT is powered by gpt-3.5-turbo and gpt-4, OpenAI's most advanced models. You can build your own applications with gpt-3.5-turbo or gpt-4 using the OpenAI API. Chat models take a series of messages as input, and return an AI-written message as output. This guide illustrates the chat format with a few example API calls.", "Source": "https://cookbook.openai.com/examples/how_to_format_inputs_to_chatgpt_models"} {"Chunk ID": "Import the openai library", "Chunk": "# if needed, install and/or upgrade to the latest version of the OpenAI Python library %pip install --upgrade openai import the OpenAI Python library for calling the OpenAI API import openai", "Source": "https://cookbook.openai.com/examples/how_to_format_inputs_to_chatgpt_models"} {"Chunk ID": "An example chat API call", "Chunk": "A chat API call has two required inputs: model: the name of the model you want to use (e.g., gpt-3.5-turbo, gpt-4, gpt-3.5-turbo-0613, gpt-3.5-turbo-16k-0613) messages: a list of message objects, where each object has two required fields: role: the role of the messenger (either system, user, or assistant) content: the content of the message (e.g., Write me a beautiful poem) Messages can also contain an optional name field, which gives the messenger a name. E.g., example-user, Alice, BlackbeardBot. Names may not contain spaces.", "Source": "https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken"} {"Chunk ID": "Extract just the reply with:", "Chunk": "response['choices'][0]['message']['content'] 'Orange who?' 
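For reference, a call that produces a reply like the 'Orange who?' output above might look like the following. This is a hedged reconstruction in the legacy openai 0.x style used throughout these chunks, with the knock-knock exchange assumed as the input messages.

```python
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Knock knock."},
        {"role": "assistant", "content": "Who's there?"},
        {"role": "user", "content": "Orange."},
    ],
    temperature=0,
)
# extract just the reply text
print(response["choices"][0]["message"]["content"])  # e.g. "Orange who?"
```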
Even non-conversation-based tasks can fit into the chat format, by placing the instruction in the first user message.", "Source": "https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken"} {"Chunk ID": "Tips for instructing gpt-3.5-turbo-0301", "Chunk": "Best practices for instructing models may change from model version to model version. The advice that follows applies to gpt-3.5-turbo-0301 and may not apply to future models. System messages The system message can be used to prime the assistant with different personalities or behaviors.", "Source": "https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken"} {"Chunk ID": "Few-shot prompting", "Chunk": "In some cases, it's easier to show the model what you want rather than tell the model what you want. One way to show the model what you want is with faked example messages.", "Source": "https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken"} {"Chunk ID": "Transforming the few-shot example above, we could write:", "Chunk": "The business jargon translation example, but with example names for the example messages", "Source": "https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken"} {"Chunk ID": "Not every attempt at engineering conversations will succeed at first.", "Chunk": "If your first attempts fail, don't be afraid to experiment with different ways of priming or conditioning the model.", "Source": "https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken"} {"Chunk ID": "Counting tokens", "Chunk": "When you submit your request, the API transforms the messages into a sequence of tokens. The number of tokens used affects: the cost of the request the time it takes to generate the response when the reply gets cut off from hitting the maximum token limit (4,096 for gpt-3.5-turbo or 8,192 for gpt-4)", "Source": "https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken"} {"Chunk ID": "Token Counting Function", "Chunk": "You can use the following function to count the number of tokens that a list of messages will use. Note that the exact way that tokens are counted from messages may change from model to model. Consider the counts from the function below an estimate, not a timeless guarantee.", "Source": "https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken"} {"Chunk ID": "Token Counting Function Code", "Chunk": "```python\ndef num_tokens_from_messages(messages, model=\"gpt-3.5-turbo-0613\"):\n \"\"\"Return the number of tokens used by a list of messages.\"\"\"\n try:\n encoding = tiktoken.encoding_for_model(model)\n except KeyError:\n print(\"Warning: model not found. Using cl100k_base encoding.\")\n encoding = tiktoken.get_encoding(\"cl100k_base\")\n if model in {\n \"gpt-3.5-turbo-0613\",\n \"gpt-3.5-turbo-16k-0613\",\n \"gpt-4-0314\",\n \"gpt-4-32k-0314\",\n \"gpt-4-0613\",\n \"gpt-4-32k-0613\",\n }:\n tokens_per_message = 3\n tokens_per_name = 1\n elif model == \"gpt-3.5-turbo-0301\":\n tokens_per_message = 4 # every message follows {role/name}\\n{content}\\n\n tokens_per_name = -1 # if there's a name, the role is omitted\n elif \"gpt-3.5-turbo\" in model:\n print(\"Warning: gpt-3.5-turbo may update over time. Returning num tokens assuming gpt-3.5-turbo-0613.\")\n return num_tokens_from_messages(messages, model=\"gpt-3.5-turbo-0613\")\n elif \"gpt-4\" in model:\n print(\"Warning: gpt-4 may update over time. 
Returning num tokens assuming gpt-4-0613.\")\n return num_tokens_from_messages(messages, model=\"gpt-4-0613\")\n else:\n raise NotImplementedError(\n f\"\"\"num_tokens_from_messages() is not implemented for model {model}. See https://github.com/openai/openai-python/blob/main/chatml.md for information on how messages are converted to tokens.\"\"\"\n )\n num_tokens = 0\n for message in messages:\n num_tokens += tokens_per_message\n for key, value in message.items():\n num_tokens += len(encoding.encode(value))\n if key == \"name\":\n num_tokens += tokens_per_name\n num_tokens += 3 # every reply is primed with assistant\n return num_tokens\n```", "Source": "https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken"} {"Chunk ID": "How to stream completions", "Chunk": "By default, when you request a completion from the OpenAI, the entire completion is generated before being sent back in a single response.\n\nIf you're generating long completions, waiting for the response can take many seconds.\n\nTo get responses sooner, you can 'stream' the completion as it's being generated. This allows you to start printing or processing the beginning of the completion before the full completion is finished.", "Source": "https://cookbook.openai.com/examples/how_to_stream_completions"} {"Chunk ID": "How to stream completions (cont.)", "Chunk": "To stream completions, set stream=True when calling the chat completions or completions endpoints. This will return an object that streams back the response as data-only server-sent events. Extract chunks from the delta field rather than the message field.", "Source": "https://cookbook.openai.com/examples/how_to_stream_completions"} {"Chunk ID": "Downsides", "Chunk": "Note that using stream=True in a production application makes it more difficult to moderate the content of the completions, as partial completions may be more difficult to evaluate. which has implications for approved usage.\n\nAnother small drawback of streaming responses is that the response no longer includes the usage field to tell you how many tokens were consumed. After receiving and combining all of the responses, you can calculate this yourself using tiktoken.", "Source": "https://cookbook.openai.com/examples/how_to_stream_completions"} {"Chunk ID": "Example code", "Chunk": "Below, this notebook shows:\n\nWhat a typical chat completion response looks like\nWhat a streaming chat completion response looks like\nHow much time is saved by streaming a chat completion\nHow to stream non-chat completions (used by older models like text-davinci-003)", "Source": "https://cookbook.openai.com/examples/how_to_stream_completions"} {"Chunk ID": "Example code (cont.)", "Chunk": "# imports\nimport openai # for OpenAI API calls\nimport time # for measuring time duration of API calls\n\n1. 
What a typical chat completion response looks like\nWith a typical ChatCompletions API call, the response is first computed and then returned all at once.", "Source": "https://cookbook.openai.com/examples/how_to_stream_completions"} {"Chunk ID": "Example code (cont.)", "Chunk": "The reply can be extracted with response['choices'][0]['message'].\n\nThe content of the reply can be extracted with response['choices'][0]['message']['content'].\n\nreply = response['choices'][0]['message']\nprint(f\"Extracted reply: \n{reply}\")\n\nreply_content = response['choices'][0]['message']['content']\nprint(f\"Extracted content: \n{reply_content}\")", "Source": "https://cookbook.openai.com/examples/how_to_stream_completions"} {"Chunk ID": "How to stream a chat completion", "Chunk": "With a streaming API call, the response is sent back incrementally in chunks via an event stream. In Python, you can iterate over these events with a for loop.", "Source": "https://cookbook.openai.com/examples/how_to_stream_completions"} {"Chunk ID": "How to stream a chat completion (cont.)", "Chunk": "As you can see above, streaming responses have a delta field rather than a message field. delta can hold things like:\n\na role token (e.g., {'role': 'assistant'})\na content token (e.g., {'content': '\\n\\n'})\nnothing (e.g., {}), when the stream is over", "Source": "https://cookbook.openai.com/examples/how_to_stream_completions"} {"Chunk ID": "How to stream a chat completion (cont.)", "Chunk": "3. How much time is saved by streaming a chat completion\nNow let's ask gpt-3.5-turbo to count to 100 again, and see how long it takes.", "Source": "https://cookbook.openai.com/examples/how_to_stream_completions"} {"Chunk ID": "How to stream a chat completion (cont.)", "Chunk": "# Example of an OpenAI ChatCompletion request with stream=True\n# https://platform.openai.com/docs/guides/chat\n\n# record the time before the request is sent\nstart_time = time.time()", "Source": "https://cookbook.openai.com/examples/how_to_stream_completions"} {"Chunk ID": "How to stream a chat completion (cont.)", "Chunk": "# send a ChatCompletion request to count to 100\nresponse = openai.ChatCompletion.create(\n model='gpt-3.5-turbo',\n messages=[\n {'role': 'user', 'content': 'Count to 100, with a comma between each number and no newlines. 
E.g., 1, 2, 3, ...'}\n ],\n temperature=0,\n)", "Source": "https://cookbook.openai.com/examples/how_to_stream_completions"} {"Chunk ID": "How to stream a chat completion (cont.)", "Chunk": "# calculate the time it took to receive the response\nresponse_time = time.time() - start_time\n\n# print the time delay and text received\nprint(f\"Full response received {response_time:.2f} seconds after request\")\nprint(f\"Full response received:\n{response}\")", "Source": "https://cookbook.openai.com/examples/how_to_stream_completions"} {"Chunk ID": "How to stream a chat completion (cont.)", "Chunk": "The reply can be extracted with response['choices'][0]['message'].\n\nThe content of the reply can be extracted with response['choices'][0]['message']['content'].\n\nreply = response['choices'][0]['message']\nprint(f\"Extracted reply: \n{reply}\")\n\nreply_content = response['choices'][0]['message']['content']\nprint(f\"Extracted content: \n{reply_content}\")", "Source": "https://cookbook.openai.com/examples/how_to_stream_completions"} {"Chunk ID": "How to stream non-chat completions (used by older models like text-davinci-003)", "Chunk": "A typical completion request\nWith a typical Completions API call, the text is first computed and then returned all at once.", "Source": "https://cookbook.openai.com/examples/how_to_stream_completions"} {"Chunk ID": "How to stream non-chat completions (used by older models like text-davinci-003) (cont.)", "Chunk": "# Example of an OpenAI Completion request\n# https://beta.openai.com/docs/api-reference/completions/create\n\n# record the time before the request is sent\nstart_time = time.time()", "Source": "https://cookbook.openai.com/examples/how_to_stream_completions"} {"Chunk ID": "How to stream non-chat completions (used by older models like text-davinci-003) (cont.)", "Chunk": "# send a Completion request to count to 100\nresponse = openai.Completion.create(\n model='text-davinci-002',\n prompt='1,2,3,',\n max_tokens=193,\n temperature=0,\n)", "Source": "https://cookbook.openai.com/examples/how_to_stream_completions"} {"Chunk ID": "How to stream non-chat completions (used by older models like text-davinci-003) (cont.)", "Chunk": "# calculate the time it took to receive the response\nresponse_time = time.time() - start_time\n\n# extract the text from the response\ncompletion_text = response['choices'][0]['text']", "Source": "https://cookbook.openai.com/examples/how_to_stream_completions"} {"Chunk ID": "How to stream non-chat completions (used by older models like text-davinci-003) (cont.)", "Chunk": "# print the time delay and text received\nprint(f\"Full response received {response_time:.2f} seconds after request\")\nprint(f\"Full text received: {completion_text}\")", "Source": "https://cookbook.openai.com/examples/how_to_stream_completions"} {"Chunk ID": "How to stream non-chat completions (used by older models like text-davinci-003) (cont.)", "Chunk": "A streaming completion request\nWith a streaming Completions API call, the text is sent back via a series of events. 
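A minimal, self-contained sketch of this streaming pattern, assuming the same legacy openai 0.x Completion interface and counting prompt used above, looks like this:

```python
import time
import openai

# record the time before the request is sent
start_time = time.time()

response = openai.Completion.create(
    model="text-davinci-002",
    prompt="1,2,3,",
    max_tokens=193,
    temperature=0,
    stream=True,  # stream back partial progress as server-sent events
)

# collect the stream of events and accumulate the generated text
collected_events = []
completion_text = ""
for event in response:
    event_time = time.time() - start_time      # time since the request was sent
    collected_events.append(event)             # save the raw event
    event_text = event["choices"][0]["text"]   # extract the text of this event
    completion_text += event_text              # build up the full completion
    print(f"Text received: {event_text} ({event_time:.2f} seconds after request)")

print(f"Full text received: {completion_text}")
```

The key differences from the non-streaming call are stream=True and the for loop that accumulates each event's choices[0]['text'].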
In Python, you can iterate over these events with a for loop.", "Source": "https://cookbook.openai.com/examples/how_to_stream_completions"} {"Chunk ID": "How to stream non-chat completions (used by older models like text-davinci-003) (cont.)", "Chunk": "# Example of an OpenAI Completion request, using the stream=True option\n# https://beta.openai.com/docs/api-reference/completions/create\n\n# record the time before the request is sent\nstart_time = time.time()", "Source": "https://cookbook.openai.com/examples/how_to_stream_completions"} {"Chunk ID": "How to stream non-chat completions (used by older models like text-davinci-003) (cont.)", "Chunk": "# send a Completion request to count to 100\nresponse = openai.Completion.create(\n model='text-davinci-002',\n prompt='1,2,3,',\n max_tokens=193,\n temperature=0,\n stream=True, # this time, we set stream=True\n)", "Source": "https://cookbook.openai.com/examples/how_to_stream_completions"} {"Chunk ID": "How to stream non-chat completions (used by older models like text-davinci-003) (cont.)", "Chunk": "# create variables to collect the stream of events\ncollected_events = []\ncompletion_text = ''\n# iterate through the stream of events\nfor event in response:\n event_time = time.time() - start_time # calculate the time delay of the event\n collected_events.append(event) # save the event response/...sponse received {event_time:.2f} seconds after request\")\nprint(f\"Full text received: {completion_text}\n However, with the streaming request, we received the first token after 0.18 seconds, and subsequent tokens every ~0.01-0.02 seconds\")", "Source": "https://cookbook.openai.com/examples/how_to_stream_completions"} {"Chunk ID": "Techniques to improve reliability", "Chunk": "When GPT-3 fails on a task, what should you do?\n\nSearch for a better prompt that elicits more reliable answers?\nInvest in thousands of examples to fine-tune a custom model?\nAssume the model is incapable of the task, and move on?\nThere is no simple answer - it depends. However, if your task involves logical reasoning or complexity, consider trying the techniques in this article to build more reliable, high-performing prompts.", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Why GPT-3 fails on complex tasks", "Chunk": "If you were asked to multiply 13 by 17, would the answer pop immediately into your mind? For most of us, probably not. Yet, that doesn't mean humans are incapable of two-digit multiplication. With a few seconds, and some pen and paper, it's not too taxing to work out that 13 x 17 = 130 + 70 + 21 = 221.\n\nSimilarly, if you give GPT-3 a task that's too complex to do in the time it takes to calculate its next token, it may confabulate an incorrect guess. Yet, akin to humans, that doesn't necessarily mean the model is incapable of the task. With some time and space to reason things out, the model still may be able to answer reliably.", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "As an example, if you ask text-davinci-002...", "Chunk": "As an example, if you ask text-davinci-002 the following math problem about juggling balls, it answers incorrectly:\n\nQ: A juggler has 16 balls. Half of the balls are golf balls and half of the golf balls are blue. How many blue golf balls are there?\nA:\nThere are 8 blue golf balls.\nDoes this mean that GPT-3 cannot do simple math problems? 
No; in fact, it turns out that by prompting the model with Let's think step by step, the model solves the problem reliably:\n\nQ: A juggler has 16 balls. Half of the balls are golf balls and half of the golf balls are blue. How many blue golf balls are there?\nA: Let's think step by step.\nThere are 16 balls in total.\nHalf of the balls are golf balls.\nThat means that there are 8 golf balls.\nHalf of the golf balls are blue.\nThat means that there are 4 blue golf balls.\nOf course, it's hard to tell from only a single example whether this Let's think step by step trick actually works in general or just got lucky on this particular problem. But it really does work. On a benchmark of word math problems, the Let's think step by step trick raised GPT-3's solve rate massively, from a worthless 18% to a decent 79%!", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Model capabilities depend on context", "Chunk": "When learning to work with GPT-3, one common conceptual mistake is to believe that its capabilities are fixed across all contexts. E.g., if GPT-3 gets a simple logic question wrong, then it must be incapable of simple logic.\n\nBut as the Let's think step by step example illustrates, apparent failures of GPT-3 can sometimes be remedied with a better prompt that helps the model steer itself toward the correct output.", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "How to improve reliability on complex tasks", "Chunk": "The rest of this article shares techniques for improving reliability of large language models on complex tasks. Although some of the techniques are specific to certain types of problems, many of them are built upon general principles that can be applied to a wide range of tasks, e.g.:\n\nGive clearer instructions\nSplit complex tasks into simpler subtasks\nStructure the instruction to keep the model on task\nPrompt the model to explain before answering\nAsk for justifications of many possible answers, and then synthesize\nGenerate many outputs, and then use the model to pick the best one\nFine-tune custom models to maximize performance\nSplit complex tasks into simpler tasks\nOne way to give a model more time and space to think is to break tasks into simpler pieces.", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Techniques to improve reliability - Part 1", "Chunk": "As an example, consider a task where we ask the model a multiple-choice question about some text - in this case, a game of Clue. When asked directly, text-davinci-002 isn't able to put clues 3 & 5 together, and answers incorrectly:", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Techniques to improve reliability - Part 2", "Chunk": "Use the following clues to answer the following multiple-choice question.\n\nClues:\n1. Miss Scarlett was the only person in the lounge.\n2. The person with the pipe was in the kitchen.\n3. Colonel Mustard was the only person in the observatory.\n4. Professor Plum was not in the library nor the billiard room.\n5. 
The person with the candlestick was in the observatory.", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Techniques to improve reliability - Part 3", "Chunk": "Question: Was Colonel Mustard in the observatory with the candlestick?\n(a) Yes; Colonel Mustard was in the observatory with the candlestick\n(b) No; Colonel Mustard was not in the observatory with the candlestick\n(c) Unknown; there is not enough information to determine whether Colonel Mustard was in the observatory with the candlestick\n\nSolution:", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Techniques to improve reliability - Part 4", "Chunk": "(c) Unknown; there is not enough information to determine whether Colonel Mustard was in the observatory with the candlestick\nAlthough clues 3 and 5 establish that Colonel Mustard was the only person in the observatory and that the person in the observatory had the candlestick, the model fails to combine them into a correct answer of (a) Yes.", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Techniques to improve reliability - Part 5", "Chunk": "However, instead of asking for the answer directly, we can split the task into three pieces:", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Techniques to improve reliability - Part 6", "Chunk": "First, go through the clues one by one and consider whether the clue is potentially relevant\nSecond, combine the relevant clues to reason out the answer to the question\nThird, write the final answer: either (a), (b), or (c)", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Techniques to improve reliability - Part 7", "Chunk": "Use the following clues to answer the following multiple-choice question, using the following procedure:\n(1) First, go through the clues one by one and consider whether the clue is potentially relevant\n(2) Second, combine the relevant clues to reason out the answer to the question\n(3) Third, map the answer to one of the multiple choice answers: either (a), (b), or (c)", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Techniques to improve reliability - Part 8", "Chunk": "Clues:\n1. Miss Scarlett was the only person in the lounge.\n2. The person with the pipe was in the kitchen.\n3. Colonel Mustard was the only person in the observatory.\n4. Professor Plum was not in the library nor the billiard room.\n5. The person with the candlestick was in the observatory.", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Techniques to improve reliability - Part 9", "Chunk": "Question: Was Colonel Mustard in the observatory with the candlestick?\n(a) Yes; Colonel Mustard was in the observatory with the candlestick\n(b) No; Colonel Mustard was not in the observatory with the candlestick\n(c) Unknown; there is not enough information to determine whether Colonel Mustard was in the observatory with the candlestick", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Techniques to improve reliability - Part 10", "Chunk": "Solution:\n(1) First, go through the clues one by one and consider whether the clue is potentially relevant:\nClue 1: Miss Scarlett was the only person in the lounge. This clue is not relevant.\nClue 2: The person with the pipe was in the kitchen. 
This clue is not relevant.\nClue 3: Colonel Mustard was the only person in the observatory. This clue is relevant.", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Techniques to improve reliability - Part 11", "Chunk": "Clue 4: Professor Plum was not in the library nor the billiard room. This clue is not relevant.\nClue 5: The person with the candlestick was in the observatory. This clue is relevant.\n\n(2) Second, combine the relevant clues to reason out the answer to the question:\nClue 3 and Clue 5 both say that the person with the candlestick was in the observatory. Colonel Mustard was the only person in the observatory, so Colonel Mustard was the person with the candlestick.", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Techniques to improve reliability - Part 12", "Chunk": "(3) Third, write the final answer:\n(a) Yes; Colonel Mustard was in the observatory with the candlestick\nBy giving the model more time and space to think, and guiding it along a reasoning plan, it's able to figure out the correct answer of (a) Yes.", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Techniques to improve reliability - Part 13", "Chunk": "Another benefit of splitting complex instructions into smaller subtasks is that it can help keep the model focused on each subtask.", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Techniques to improve reliability - Part 14", "Chunk": "For example, if we ask text-davinci-002 to summarize a text in its original language, the model can lapse back into English:", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Techniques to improve reliability - Part 15", "Chunk": "Summarize the text using the original language of the text. The summary should be one sentence long.", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Techniques to improve reliability - Part 16", "Chunk": "Text:\n\"La estadística (la forma femenina del término alemán Statistik, derivado a su vez del italiano statista, 'hombre de Estado')​ es una ciencia que estudia la variabilidad, colección, organización, análisis, interpretación, y presentación de los datos, así como el proceso aleatorio que los genera siguiendo las leyes de la probabilidad.", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Techniques to improve reliability - Part 17", "Chunk": "Summary:\nThe text explains that statistics is a science that studies the variability, collection, organization, analysis, interpretation, and presentation of data, as well as the random process that generates them following the laws of probability.", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Techniques to improve reliability - Part 18", "Chunk": "However, if we first ask the model to identify the language of the text, and then summarize the text, it becomes more reliable:", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Techniques to improve reliability - Part 19", "Chunk": "First, identify the language of the text. Second, summarize the text using the original language of the text. 
The summary should be one sentence long.", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Techniques to improve reliability - Part 20", "Chunk": "Language:\nSpanish\n\nLa estadística es una ciencia que estudia la variabilidad, colección, organización, análisis, interpretación, y presentación de los datos, así como el proceso aleatorio que los genera siguiendo las leyes de la probabilidad.", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Techniques to improve reliability - Part 21", "Chunk": "Prompt the model to explain before answering\nAnother powerful technique for improving the reliability of answers is to prompt the model to gradually reason out the answer rather than jumping immediately to the final answer. By 'thinking aloud' the model can be far more likely to arrive at the correct answer.", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Techniques to improve reliability - Part 22", "Chunk": "Zero-shot\nMethod\nPublished by Takeshi Kojima et al. in 2022, the easiest way to prompt a model to reason out the answer is to simply prepend answers with Let's think step by step", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Techniques to improve reliability", "Chunk": "Source: Large Language Models are Zero-Shot Reasoners by Takeshi Kojima et al. (2022).\n\nResults\nApplying this simple trick to the MultiArith math dataset, the authors found Let's think step by step quadrupled the accuracy, from 18% to 79%!\n\nzero-shot reasoning example", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Techniques to improve reliability (continued)", "Chunk": "Implications\nAlthough the Let's think step by step trick works well on math problems, it's not effective on all tasks. The authors found that it was most helpful for multi-step arithmetic problems, symbolic reasoning problems, strategy problems, and other reasoning problems. It didn't help with simple math problems or common sense questions, and presumably wouldn't help with many other non-reasoning tasks either.\n\nzero-shot reasoning example", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Techniques to improve reliability (continued)", "Chunk": "To learn more, read the full paper.\n\nIf you apply this technique to your own tasks, don't be afraid to experiment with customizing the instruction. Let's think step by step is rather generic, so you may find better performance with instructions that hew to a stricter format customized to your use case. For example, you can try more structured variants like First, think step by step about why X might be true. Second, think step by step about why Y might be true. Third, think step by step about whether X or Y makes more sense.. And you can even give the model an example format to help keep it on track, e.g.:", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Question: Can I claim a federal tax credit for my Toyota Prius Prime bought in 2021?", "Chunk": "(1) For each criterion, determine whether it is met by the vehicle purchase\n- Does the vehicle have at least four wheels? Let's think step by step.\n The Toyota Prius Prime has four wheels, so the answer is yes.\n- Does the vehicle weigh less than 14,000 pounds? Let's think step by step. 
The Toyota Prius Prime weighs less than 14,000 pounds, so the answer is yes.\n- Does the vehicle draw energy from a battery with at least 4 kilowatt hours that may be recharged from an external source? Let's think step by step. The Toyota Prius Prime has a battery with at least 4 kilowatt hours that may be recharged from an external source, so the answer is yes.\n- Was the vehicle purchased in a year before 2022? Let's think step by step. The Toyota Prius Prime was purchased in 2021, which is before 2022, so the answer is yes.\n- Was the vehicle purchased in a year after 2022? N/A\n- If so, is the vehicle present in the following list of North American-assembled vehicles? N/A\n\n(2) After considering each criterion in turn, phrase the final answer as \"Because of {reasons}, the answer is likely {yes or no}.\"\n\nBecause the Toyota Prius Prime meets all of the criteria for a federal tax credit, the answer is likely yes.", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Method: Few-shot examples", "Chunk": "Method\nPrompting the model to reason out its answers can be done in many ways. One way is to demonstrate with a few examples ('few-shot'), as studied by Jason Wei and Denny Zhou et al. from Google. Here's an example few-shot chain-of-thought prompt:", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Method: Few-shot examples (continued)", "Chunk": "Results\nTesting on grade school math problems, the authors found that chain of thought prompting tripled the solve rate, from 18% to 57%.\n\nIn addition to math problems, chain of thought prompting also lifted performance on questions related to sports understanding, coin flip tracking, and last letter concatenation. In most cases, not many examples were need to saturate the performance gains (less than 8 or so).\n\nTo learn more, read the full paper.", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Method: Few-shot examples (continued)", "Chunk": "Implications\nOne advantage of the few-shot example-based approach relative to the Let's think step by step technique is that you can more easily specify the format, length, and style of reasoning that you want the model to perform before landing on its final answer. This can be particularly helpful in cases where the model isn't initially reasoning in the right way or depth.", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Method: Fine-tuned", "Chunk": "Method\nIn general, to eke out maximum performance on a task, you'll need to fine-tune a custom model. However, fine-tuning a model using explanations may take thousands of example explanations, which are costly to write.\n\nIn 2022, Eric Zelikman and Yuhuai Wu et al. published a clever procedure for using a few-shot prompt to generate a dataset of explanations that could be used to fine-tune a model. The idea is to use a few-shot prompt to generate candidate explanations, and only keep the explanations that produce the correct answer. Then, to get additional explanations for some of the incorrect answers, retry the few-shot prompt but with correct answers given as part of the question. 
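As a concrete illustration of the bootstrap-and-filter procedure just described, here is a minimal sketch: generate an explanation and answer with a few-shot prompt, keep the explanation only if the answer is correct, and otherwise retry with the correct answer given as a hint. The helpers few_shot_prompt and extract_answer are hypothetical placeholders, and the openai<1.0-style Completion API is assumed; this is an illustration, not the authors' code.

```python
import openai

def solve(prompt: str) -> str:
    # Sample one explanation + answer from the model at a positive temperature.
    resp = openai.Completion.create(
        model="text-davinci-002", prompt=prompt, max_tokens=256, temperature=0.7
    )
    return resp["choices"][0]["text"]

def bootstrap_explanations(problems, few_shot_prompt, extract_answer):
    """Keep explanations whose final answer is correct; retry wrong ones with the answer as a hint."""
    dataset = []
    for problem in problems:  # each problem: {"question": ..., "answer": ...}
        completion = solve(few_shot_prompt + problem["question"])
        if extract_answer(completion) == problem["answer"]:
            dataset.append({"question": problem["question"], "explanation": completion})
        else:
            # Rationalization step: provide the correct answer and ask for a supporting explanation.
            hinted = f'{few_shot_prompt}{problem["question"]} (the correct answer is {problem["answer"]})'
            completion = solve(hinted)
            if extract_answer(completion) == problem["answer"]:
                dataset.append({"question": problem["question"], "explanation": completion})
    return dataset  # the filtered explanations can then be used to fine-tune a custom model
```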
The authors called their procedure STaR (Self-taught Reasoner):", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Techniques to improve reliability", "Chunk": "Results\nWhen the authors applied this technique to a Common Sense Q&A dataset, they found that STaR outperformed both chain-of-thought prompting alone (73% > 37%) and fine-tuning alone (73% > 60%):\n\nSTaR results\nSource: STaR: Bootstrapping Reasoning With Reasoning by Eric Zelikman and Yujuai Wu et al. (2022)\n\nTo learn more, read the full paper.", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Implications", "Chunk": "Using a few-shot prompt to extend or modify a fine-tuning dataset is an idea that can be generalized beyond explanation writing. For example, if you have large quantities of unstructured text that you want to train on, you may find opportunities to use a prompt to extract a structured dataset from your unstructured text, and then fine-tune a custom model on that structured dataset.\n\nExtensions to chain-of-thought prompting\nA number of extensions of chain-of-thought prompting have been published as well.\n\nSelection-inference prompting\nMethod\nPublished by Antonia Creswell et al., one extension of the chain-of-thought technique is to split the single prompt for generating explanations and answers into smaller parts. First, a prompt selects a relevant subset of facts from the text ('selection prompt'). Then, a second prompt infers a conclusion from the selected facts ('inference prompt'). These prompts are then alternated in a loop to generate multiple steps of reasoning and eventually land on a final answer. The authors illustrate the idea in the following figure:", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Results", "Chunk": "Selection-inference prompting\nSource: Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning by Antonia Creswell et al. (2022)\n\nWhen applied to a 7B-parameter model, the authors found that selection-inference prompting substantially improved performance relative to chain-of-thought prompting on the bAbi and Proof Writer benchmark tasks (both of which require longer sequences of reasoning steps). The best performance they achieved combined both selection-inference prompting with fine-tuning.\n\nSelection-inference prompting\nSource: Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning by Antonia Creswell et al. (2022)", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Implications", "Chunk": "Although the gains on these benchmarks were large, these benchmarks were specifically chosen because they required longer sequences of reasoning. On problems that don't require reasoning with many steps, the gains are likely smaller.\n\nThe results highlight a couple of general lessons for working with large language models. One, splitting up complex tasks into smaller tasks is a great way to improve reliability and performance; the more atomic the task, the less room there is for the model to err. 
Two, getting maximum performance often means combining fine-tuning with whatever approach you've chosen.\n\nTo learn more, read the full paper.", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Faithful reasoning architecture", "Chunk": "A few months after publishing the selection-inference prompting technique, the authors extended the technique in a follow-up paper, with ideas for:\n\nfiguring out when the selection-inference cycle should stop or continue\nadding a value function to help search over multiple reasoning paths\nreducing hallucination of fake facts by fine-tuning a model to reason about sentence labels (e.g., sen1) rather than writing out the sentences themselves\nMethod\nIn the original selection-inference technique, specialized 'selection' and 'inference' prompts are alternated to select facts and make inferences from those facts, combining to generate a sequence of reasoning steps.", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Results", "Chunk": "The authors evaluated their technique on two benchmarks: the ProofWriter task (not shown) and EntailmentBankQA (shown). The technique increased accuracy substantially, especially on harder reasoning problems.\n\nImplications\nThis paper illustrates a number of helpful lessons for improving the reliability of large language models:\n\nSplit complex tasks into smaller, more reliable subtasks\nGenerate your answer in a step-by-step fashion, evaluating it along the way\nGenerate many possible answers and use another model or function to pick the ones that look best\nReduce hallucination by constraining what the model can say (e.g., by using sentence labels instead of sentences)\nMaximize performance of models by fine-tuning them on specialized tasks\nTo learn more, read the full paper.", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Least-to-most prompting", "Chunk": "Least-to-most prompting\nSource: Least-to-most Prompting Enables Complex Reasoning in Large Language Models by Denny Zhou et al. (2022)\n\nResults\nWhen applied to benchmarks involving long reasoning chains using code-davinci-002 (which is optimized for code but can still understand text), the authors measured gains as large as 16% -> 99.7%!\n\nImplications\nAlthough the above gains from least-to-most prompting are impressive, they are measured on a very narrow set of tasks that require long reasoning chains.\n\nStill, they illustrate a common theme: increase reliability by (a) breaking complex tasks into smaller subtasks and (b) giving the model more time and space to work out the answer.\n\nTo learn more, read the full paper.", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Related ideas", "Chunk": "Maieutic prompting\nMethod\nIn contrast to the previous techniques, which try to maximize the likelihood of correct answers, another approach is to use GPT-3 to generate a tree of possible explanations (both correct and incorrect), and then analyze their relationships to guess at which set is correct. This technique was coined maieutic prompting by Jaehun Jung et al. in May 2022 (maieutic means relating to the Socratic method of asking questions to elicit ideas).\n\nThe method is complicated, and works as follows:\n\nFirst, build a maieutic tree, where each node is a statement that could be true or false:\nStart with a multiple-choice question or true/false statement (e.g. 
War cannot have a tie)\nFor each possible answer to the question, use the model to generate a corresponding explanation (with a prompt like War cannot have a tie? True, because)\nThen, prompt the model with the question and the generated explanation, and ask it to produce the answer. If reversing the explanation (with a prefix like It is wrong to say that {explanation}) reverses the answer, then the explanation is considered 'logically integral.'", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Results", "Chunk": "Maieutic prompting results\nSource: Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations by Jaehun Jung et al. (2022)\n\nImplications\nBeyond the complexity, one limitation of this method is that it appears to only apply to questions that can be posed as multiple-choice.\n\nTo learn more, read the full paper.", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Extensions", "Chunk": "Self-consistency\nMethod\nFor tasks with a discrete set of answers, one simple way to improve reliability is to sample multiple explanations & answers from the model (using a positive temperature) and then pick the final answer that appears most often.", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Results", "Chunk": "This technique lifted accuracies by anywhere from 1 to 24 percentage points on a suite of math and reasoning benchmarks. (Plotted below are results from Google's LaMDA model; using Google's larger PaLM model, the baselines were higher but the gains were a bit smaller.)\n\nImplications\nAlthough this technique is simple to implement, it can be costly. Generating a set of 10 answers will increase your costs by 10x.\n\nAlso, as with many of these techniques, it applies only to tasks with a limited set of answers. For open-ended tasks where each answer is unique (such as writing a poem), it's not obvious what it would mean to pick the most common answer.\n\nLastly, this technique ought to be most beneficial when there are multiple paths or phrasings to reach an answer; if there's only one path, then the technique may not help at all. An extreme example: If the task was to generate a single token answer, then taking the most common token from 100 generations would be no different than taking the token with the highest logprobs (which you can get with a single generation at temperature=0).", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Verifiers", "Chunk": "Another key technique for improving task performance is to train a verifier or discriminator model to evaluate the outputs of the main generative model. If the discriminator rejects the output, then you can resample the generative model until you get an acceptable output. 
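Here is a minimal sketch of the sample-then-select idea behind the self-consistency technique described above: sample several reasoned answers at a positive temperature and keep the most common one. A trained verifier, as just introduced, could replace the majority vote as the selection step. The extract_final_answer helper is a hypothetical placeholder, and the openai<1.0-style Completion API is assumed.

```python
import openai
from collections import Counter

def self_consistent_answer(prompt, extract_final_answer, n_samples=10):
    """Sample several chain-of-thought completions and return the most common final answer."""
    completions = openai.Completion.create(
        model="text-davinci-002",
        prompt=prompt,       # e.g. a few-shot chain-of-thought prompt ending with "A: Let's think step by step."
        max_tokens=256,
        temperature=0.7,     # positive temperature so the samples differ
        n=n_samples,
    )
    answers = [extract_final_answer(choice["text"]) for choice in completions["choices"]]
    # Majority vote over the discrete answers (a trained verifier could rank candidates instead).
    most_common_answer, _count = Counter(answers).most_common(1)[0]
    return most_common_answer
```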
In many cases, it's easier to judge an answer than it is to create an answer, which helps explain the power of this method.\n\nMethod\nIn 2021, OpenAI researchers applied this technique to grade school math problems, using the following procedure:", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Results", "Chunk": "With a 175B GPT-3 model and 8,000 training examples, this technique substantially lifted grade school math accuracy from ~33% to ~55%.\n\nImplications\nSimilar to the self-consistency technique, this method can get expensive, as generating, say, 100 solutions per task will increase your costs by roughly ~100x.", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Theories of reliability", "Chunk": "Although the techniques above vary in their approach, they all share the goal of improving reliability on complex tasks. Mainly they do this by:\n\ndecomposing unreliable operations into smaller, more reliable operations (e.g., selection-inference prompting)\nusing multiple steps or multiple relationships to make the system's reliability greater than any individual component (e.g., maieutic prompting)\nProbabilistic graphical models\nThis paradigm of trying to build a reliable system out of less reliable components is reminiscent of probabilistic programming, and many of the analysis techniques of that field can be applied to this one.\n\nIn the paper Language Model Cascades, David Dohan et al. interpret the above techniques in the paradigm of probabilistic graphical models:", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "Implications", "Chunk": "Although formulating these techniques as probabilistic graphical models may not be immediately useful for solving any particular problem, the framework may be helpful in selecting, combining, and discovering new techniques.\n\nClosing thoughts\nResearch into large language models is very active and evolving rapidly. Not only do researchers continue to improve the models, they also continue to improve our understanding of how to best employ the models. To underscore the pace of these developments, note that all of the papers shared above were published within the past 12 months (as I write in Sep 2022).\n\nIn the future, expect better models and better techniques to be published. Even if the specific techniques here are eclipsed by future best practices, the general principles behind them will likely remain a key part of any expert user's toolkit.", "Source": "https://cookbook.openai.com/techniques_to_improve_reliability"} {"Chunk ID": "How large language models work", "Chunk": "Large language models are functions that map text to text. Given an input string of text, a large language model predicts the text that should come next.\n\nThe magic of large language models is that by being trained to minimize this prediction error over vast quantities of text, the models end up learning concepts useful for these predictions. 
For example, they learn:\n\nhow to spell\nhow grammar works\nhow to paraphrase\nhow to answer questions\nhow to hold a conversation\nhow to write in many languages\nhow to code\netc.\nNone of these capabilities are explicitly programmed in—they all emerge as a result of training.", "Source": "https://cookbook.openai.com/how_to_work_with_large_language_models"} {"Chunk ID": "How to control a large language model", "Chunk": "Of all the inputs to a large language model, by far the most influential is the text prompt.\n\nLarge language models can be prompted to produce output in a few ways:\n\nInstruction: Tell the model what you want\nCompletion: Induce the model to complete the beginning of what you want\nDemonstration: Show the model what you want, with either:\nA few examples in the prompt\nMany hundreds or thousands of examples in a fine-tuning training dataset\nAn example of each is shown below.", "Source": "https://cookbook.openai.com/how_to_work_with_large_language_models"} {"Chunk ID": "Instruction prompts", "Chunk": "Instruction-following models (e.g., text-davinci-003 or any model beginning with text-) are specially designed to follow instructions. Write your instruction at the top of the prompt (or at the bottom, or both), and the model will do its best to follow the instruction and then stop. Instructions can be detailed, so don't be afraid to write a paragraph explicitly detailing the output you want.\n\nExample instruction prompt:\n\nExtract the name of the author from the quotation below.\n\n“Some humans theorize that intelligent species go extinct before they can expand into outer space. If they're correct, then the hush of the night sky is the silence of the graveyard.”\n― Ted Chiang, Exhalation\nOutput:\n\nTed Chiang", "Source": "https://cookbook.openai.com/how_to_work_with_large_language_models"} {"Chunk ID": "Completion prompt example", "Chunk": "Completion-style prompts take advantage of how large language models try to write text they think is mostly likely to come next. To steer the model, try beginning a pattern or sentence that will be completed by the output you want to see. Relative to direct instructions, this mode of steering large language models can take more care and experimentation. In addition, the models won't necessarily know where to stop, so you will often need stop sequences or post-processing to cut off text generated beyond the desired output.\n\nExample completion prompt:\n\n“Some humans theorize that intelligent species go extinct before they can expand into outer space. If they're correct, then the hush of the night sky is the silence of the graveyard.”\n― Ted Chiang, Exhalation\n\nThe author of this quote is\nOutput:\n\nTed Chiang", "Source": "https://cookbook.openai.com/how_to_work_with_large_language_models"} {"Chunk ID": "Demonstration prompt example (few-shot learning)", "Chunk": "Similar to completion-style prompts, demonstrations can show the model what you want it to do. This approach is sometimes called few-shot learning, as the model learns from a few examples provided in the prompt.\n\nExample demonstration prompt:\n\nQuote:\n“When the reasoning mind is forced to confront the impossible again and again, it has no choice but to adapt.”\n― N.K. Jemisin, The Fifth Season\nAuthor: N.K. Jemisin\n\nQuote:\n“Some humans theorize that intelligent species go extinct before they can expand into outer space. 
If they're correct, then the hush of the night sky is the silence of the graveyard.”\n― Ted Chiang, Exhalation\nAuthor:\nOutput:\n\nTed Chiang", "Source": "https://cookbook.openai.com/how_to_work_with_large_language_models"} {"Chunk ID": "Fine-tuned prompt example", "Chunk": "With enough training examples, you can fine-tune a custom model. In this case, instructions become unnecessary, as the model can learn the task from the training data provided. However, it can be helpful to include separator sequences (e.g., -> or ### or any string that doesn't commonly appear in your inputs) to tell the model when the prompt has ended and the output should begin. Without separator sequences, there is a risk that the model continues elaborating on the input text rather than starting on the answer you want to see.\n\nExample fine-tuned prompt (for a model that has been custom trained on similar prompt-completion pairs):\n\n“Some humans theorize that intelligent species go extinct before they can expand into outer space. If they're correct, then the hush of the night sky is the silence of the graveyard.”\n― Ted Chiang, Exhalation\n###\nOutput:\n\nTed Chiang", "Source": "https://cookbook.openai.com/how_to_work_with_large_language_models"} {"Chunk ID": "Code Capabilities", "Chunk": "Large language models aren't only great at text - they can be great at code too. OpenAI's specialized code model is called Codex.\n\nCodex powers more than 70 products, including:\n\nGitHub Copilot (autocompletes code in VS Code and other IDEs)\nPygma (turns Figma designs into code)\nReplit (has an 'Explain code' button and other features)\nWarp (a smart terminal with AI command search)\nMachinet (writes Java unit test templates)\nNote that unlike instruction-following text models (e.g., text-davinci-002), Codex is not trained to follow instructions. As a result, designing good prompts can take more care.", "Source": "https://cookbook.openai.com/how_to_work_with_large_language_models"} {"Chunk ID": "More prompt advice", "Chunk": "For more prompt examples, visit OpenAI Examples.\n\nIn general, the input prompt is the best lever for improving model outputs. You can try tricks like:\n\nGive more explicit instructions. E.g., if you want the output to be a comma separated list, ask it to return a comma separated list. If you want it to say \"I don't know\" when it doesn't know the answer, tell it 'Say \"I don't know\" if you do not know the answer.'\n\nSupply better examples. If you're demonstrating examples in your prompt, make sure that your examples are diverse and high quality.\n\nAsk the model to answer as if it was an expert. Explicitly asking the model to produce high quality output or output as if it was written by an expert can induce the model to give higher quality answers that it thinks an expert would write. E.g., \"The following answer is correct, high-quality, and written by an expert.\"\n\nPrompt the model to write down the series of steps explaining its reasoning. E.g., prepend your answer with something like \"Let's think step by step.\" Prompting the model to give an explanation of its reasoning before its final answer can increase the likelihood that its final answer is consistent and correct.", "Source": "https://cookbook.openai.com/how_to_work_with_large_language_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 1)", "Chunk": "This notebook provides a step-by-step guide for our new gpt-3.5-turbo fine-tuning. 
We'll perform entity extraction using the RecipeNLG dataset, which provides various recipes and a list of extracted generic ingredients for each. This is a common dataset for named entity recognition (NER) tasks.", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 2)", "Chunk": "We will go through the following steps:\n\nSetup: Loading our dataset and filtering down to one domain to fine-tune on.\nData preparation: Preparing your data for fine-tuning by creating training and validation examples, and uploading them to the Files endpoint.", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 3)", "Chunk": "Fine-tuning: Creating your fine-tuned model.\nInference: Using your fine-tuned model for inference on new inputs.\nBy the end of this you should be able to train, evaluate and deploy a fine-tuned gpt-3.5-turbo model.", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 4)", "Chunk": "For more information on fine-tuning, you can refer to our documentation guide, API reference or blog post\n\nSetup\n# make sure to use the latest version of the openai python package\n!pip install --upgrade openai\n\nimport json\nimport openai\nimport os\nimport pandas as pd\nfrom pprint", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 5)", "Chunk": "OPENAI_API_KEY = os.getenv(\"OPENAI_API_KEY\", \"\")\n\nFine-tuning works best when focused on a particular domain. It's important to make sure your dataset is both focused enough for the model to learn, but general enough that unseen examples won't be missed. Having this in mind, we have extracted a subset from the RecipesNLG dataset to only contain documents from www.cookbooks.com.", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 6)", "Chunk": "# Read in the dataset we'll use for this task.\n# This will be the RecipesNLG dataset, which we've cleaned to only contain documents from www.cookbooks.com\nrecipe_df = pd.read_csv(\"data/cookbook_recipes_nlg_10k.csv\")\n\nrecipe_df.head()", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 7)", "Chunk": "Data preparation\nWe'll begin by preparing our data. When fine-tuning with the ChatCompletion format, each training example is a simple list of messages. For example, an entry could look like:", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 8)", "Chunk": "[{'role': 'system',\n 'content': 'You are a helpful recipe assistant. You are to extract the generic ingredients from each of the recipes provided.'},", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 9)", "Chunk": "{'role': 'user',\n 'content': 'Title: No-Bake Nut Cookies\\n\\nIngredients: [\"1 c. firmly packed brown sugar\", \"1/2 c. evaporated milk\", \"1/2 tsp. vanilla\", \"1/2 c. broken nuts (pecans)\", \"2 Tbsp. butter or margarine\", \"3 1/2 c. 
bite size shredded rice biscuits\"]\\n\\nGeneric ingredients: '},", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 10)", "Chunk": "{'role': 'assistant',\n 'content': '[\"brown sugar\", \"milk\", \"vanilla\", \"nuts\", \"butter\", \"bite size shredded rice biscuits\"]'}]", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 11)", "Chunk": "During the training process this conversation will be split, with the final entry being the completion that the model will produce, and the remainder of the messages acting as the prompt. Consider this when building your training examples - if your model will act on multi-turn conversations, then please provide representative examples so it doesn't perform poorly when the conversation starts to expand.", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 12)", "Chunk": "Please note that currently there is a 4096 token limit for each training example. Anything longer than this will be truncated at 4096 tokens.", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 13)", "Chunk": "training_data = []\n\nsystem_message = \"You are a helpful recipe assistant. You are to extract the generic ingredients from each of the recipes provided.\"", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 14)", "Chunk": "def create_user_message(row):\n return f\"Title: {row['title']}\\n\\nIngredients: {row['ingredients']}\\n\\nGeneric ingredients: \"", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 15)", "Chunk": "def prepare_example_conversation(row):\n messages = []\n messages.append({\"role\": \"system\", \"content\": system_message})", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 16)", "Chunk": "user_message = create_user_message(row)\n messages.append({\"role\": \"user\", \"content\": user_message})\n\n messages.append({\"role\": \"assistant\", \"content\": row[\"NER\"]})", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 17)", "Chunk": "return {\"messages\": messages}\n\npprint(prepare_example_conversation(recipe_df.iloc[0]))", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 18)", "Chunk": "Let's now do this for a subset of the dataset to use as our training data. You can begin with even 30-50 well-pruned examples. 
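Putting the data-preparation snippets above together, a minimal self-contained sketch might look like the following (including the pprint import that was cut off in the setup cell); the column names title, ingredients, and NER follow the RecipeNLG example shown above.

```python
import pandas as pd
from pprint import pprint

recipe_df = pd.read_csv("data/cookbook_recipes_nlg_10k.csv")

system_message = (
    "You are a helpful recipe assistant. "
    "You are to extract the generic ingredients from each of the recipes provided."
)

def create_user_message(row):
    return f"Title: {row['title']}\n\nIngredients: {row['ingredients']}\n\nGeneric ingredients: "

def prepare_example_conversation(row):
    # One training example = system + user + assistant messages in the ChatCompletion format.
    return {
        "messages": [
            {"role": "system", "content": system_message},
            {"role": "user", "content": create_user_message(row)},
            {"role": "assistant", "content": row["NER"]},
        ]
    }

training_df = recipe_df.loc[0:100]
training_data = training_df.apply(prepare_example_conversation, axis=1).tolist()
pprint(training_data[0])
```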
You should see performance continue to scale linearly as you increase the size of the training set, but your jobs will also take longer.", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 19)", "Chunk": "# use the first 100 rows of the dataset for training\ntraining_df = recipe_df.loc[0:100]", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 20)", "Chunk": "# apply the prepare_example_conversation function to each row of the training_df\ntraining_data = training_df.apply(prepare_example_conversation, axis=1).tolist()", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 21)", "Chunk": "for example in training_data[:5]:\n print(example)", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 22)", "Chunk": "In addition to training data, we can also optionally provide validation data, which will be used to make sure that the model does not overfit your training set.", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 23)", "Chunk": "validation_df = recipe_df.loc[101:200]\nvalidation_data = validation_df.apply(prepare_example_conversation, axis=1).tolist()", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 24)", "Chunk": "We then need to save our data as .jsonl files, with each line being one training example conversation.", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 25)", "Chunk": "def write_jsonl(data_list: list, filename: str) -> None:\n with open(filename, \"w\") as out:\n for ddict in data_list:\n jout = json.dumps(ddict) + \"\\n\"\n out.write(jout)", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 26)", "Chunk": "training_file_name = \"tmp_recipe_finetune_training.jsonl\"\nwrite_jsonl(training_data, training_file_name)", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 27)", "Chunk": "validation_file_name = \"tmp_recipe_finetune_validation.jsonl\"\nwrite_jsonl(validation_data, validation_file_name)", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 28)", "Chunk": "This is what the first 5 lines of our training .jsonl file look like:\n\n# print the first 5 lines of the training file\n!head -n 5 tmp_recipe_finetune_training.jsonl", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 29)", "Chunk": "Upload files\nYou can now upload the files to our Files endpoint to be used by the fine-tuned model.", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 30)", "Chunk": "training_response = openai.File.create(\n file=open(training_file_name, \"rb\"), purpose=\"fine-tune\"\n)\ntraining_file_id = training_response[\"id\"]\n\nvalidation_response = openai.File.create(\n file=open(validation_file_name, \"rb\"), purpose=\"fine-tune\"\n)\nvalidation_file_id = 
validation_response[\"id\"]", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 31)", "Chunk": "print(\"Training file ID:\", training_file_id)\nprint(\"Validation file ID:\", validation_file_id)", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 32)", "Chunk": "Fine-tuning\nNow we can create our fine-tuning job with the generated files and an optional suffix to identify the model. The response will contain an id which you can use to retrieve updates on the job.", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 33)", "Chunk": "Note: The files have to first be processed by our system, so you might get a File not ready error. In that case, simply retry a few minutes later.", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 34)", "Chunk": "response = openai.FineTuningJob.create(\n training_file=training_file_id,\n validation_file=validation_file_id,\n model=\"gpt-3.5-turbo\",\n suffix=\"recipe-ner\",\n)", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 35)", "Chunk": "job_id = response[\"id\"]\n\nprint(\"Job ID:\", response[\"id\"])\nprint(\"Status:\", response[\"status\"])", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 36)", "Chunk": "Check job status\nYou can make a GET request to the https://api.openai.com/v1/alpha/fine-tunes endpoint to list your alpha fine-tune jobs. In this instance you'll want to check that the ID you got from the previous step ends up as status: succeeded.", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 37)", "Chunk": "Once it is completed, you can use the result_files to sample the results from the validation set (if you uploaded one), and use the ID from the fine_tuned_model parameter to invoke your trained model.", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 38)", "Chunk": "response = openai.FineTuningJob.retrieve(job_id)\n\nprint(\"Job ID:\", response[\"id\"])", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 39)", "Chunk": "print(\"Status:\", response[\"status\"])\nprint(\"Trained Tokens:\", response[\"trained_tokens\"])", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 40)", "Chunk": "We can track the progress of the fine-tune with the events endpoint. 
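If you'd rather wait programmatically than rerun a cell by hand, a small polling loop can block until the job reaches a terminal status. This is a minimal sketch assuming the openai<1.0 FineTuningJob interface used above.

```python
import time
import openai

def wait_for_fine_tune(job_id, poll_seconds=60):
    """Poll the fine-tuning job until it succeeds, fails, or is cancelled."""
    while True:
        job = openai.FineTuningJob.retrieve(job_id)
        status = job["status"]
        print(f"Status: {status}")
        if status in ("succeeded", "failed", "cancelled"):
            return job
        time.sleep(poll_seconds)

# job = wait_for_fine_tune(job_id)
# fine_tuned_model_id = job["fine_tuned_model"]
```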
You can rerun the cell below a few times until the fine-tune is ready.", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 41)", "Chunk": "response = openai.FineTuningJob.list_events(id=job_id, limit=50)\n\nevents = response[\"data\"]\nevents.reverse()", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 42)", "Chunk": "for event in events:\n print(event[\"message\"])", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 43)", "Chunk": "Now that it's done, we can get a fine-tuned model ID from the job:\n\nresponse = openai.FineTuningJob.retrieve(job_id)\nfine_tuned_model_id = response[\"fine_tuned_model\"]\n\nif fine_tuned_model_id is None: \n raise RuntimeError(\"Fine-tuned model ID not found. Your job has likely not been completed yet.\")", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 44)", "Chunk": "print(\"Fine-tuned model ID:\", fine_tuned_model_id)", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 45)", "Chunk": "Inference\nThe last step is to use your fine-tuned model for inference. Similar to the classic FineTuning, you simply call ChatCompletions with your new fine-tuned model name filling the model parameter.", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 46)", "Chunk": "test_df = recipe_df.loc[201:300]\ntest_row = test_df.iloc[0]\ntest_messages = []\ntest_messages.append({\"role\": \"system\", \"content\": system_message})", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 47)", "Chunk": "user_message = create_user_message(test_row)\ntest_messages.append({\"role\": \"user\", \"content\": create_user_message(test_row)})", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 48)", "Chunk": "pprint(test_messages)\nresponse = openai.ChatCompletion.create(\n model=fine_tuned_model_id, messages=test_messages, temperature=0, max_tokens=500\n)", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 49)", "Chunk": "print(response[\"choices\"][0][\"message\"][\"content\"])", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "How to fine-tune chat models (Chunk 50)", "Chunk": "Conclusion\nCongratulations, you are now ready to fine-tune your own models using the ChatCompletion format! We look forward to seeing what you build", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "Evaluating Abstractive Summarization - Part 1", "Chunk": "In this notebook we delve into the evaluation techniques for abstractive summarization tasks using a simple example. 
We explore traditional evaluation methods like ROUGE and BERTScore, in addition to showcasing a more novel approach using LLMs as evaluators.", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating Abstractive Summarization - Part 2", "Chunk": "Evaluating the quality of summaries is a time-consuming process, as it involves different quality metrics such as coherence, conciseness, readability and content. Traditional automatic evaluation metrics such as ROUGE and BERTScore and others are concrete and reliable, but they may not correlate well with the actual quality of summaries. They show relatively low correlation with human judgments, especially for open-ended generation tasks (Liu et al., 2023). There's a growing need to lean on human evaluations, user feedback, or model-based metrics while being vigilant about potential biases. While human judgment provides invaluable insights, it is often not scalable and can be cost-prohibitive.", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating Abstractive Summarization - Part 3", "Chunk": "In addition to these traditional metrics, we showcase a method (G-Eval) that leverages Large Language Models (LLMs) as a novel, reference-free metric for assessing abstractive summaries. In this case, we use gpt-4 to score candidate outputs. gpt-4 has effectively learned an internal model of language quality that allows it to differentiate between fluent, coherent text and low-quality text. Harnessing this internal scoring mechanism allows auto-evaluation of new candidate outputs generated by an LLM.", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating Abstractive Summarization - Part 4", "Chunk": "# Installing necessary packages for the evaluation\n# rouge: For evaluating with ROUGE metric\n# bert_score: For evaluating with BERTScore\n# openai: To interact with OpenAI's API\n!pip install rouge --quiet\n!pip install bert_score --quiet\n!pip install openai --quiet", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating Abstractive Summarization - Part 5", "Chunk": "import openai\nimport os\nimport re\nimport pandas as pd\n\n# Python Implementation of the ROUGE Metric\nfrom rouge import Rouge\n\n# BERTScore leverages the pre-trained contextual embeddings from BERT and matches words in candidate and reference sentences by cosine similarity.\nfrom bert_score import BERTScorer\n\nopenai.api_key = os.environ.get(\"OPENAI_API_KEY\")", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating Abstractive Summarization - Part 6", "Chunk": "For the purposes of this notebook we'll use the example summarization below. Notice that we provide two generated summaries to compare, and a reference human-written summary, which evaluation metrics like ROUGE and BERTScore require.", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating Abstractive Summarization - Part 7", "Chunk": "Excerpt (excerpt):\n\nOpenAI's mission is to ensure that artificial general intelligence (AGI) benefits all of humanity. OpenAI will build safe and beneficial AGI directly, but will also consider its mission fulfilled if its work aids others to achieve this outcome. 
OpenAI follows several key principles for this purpose. First, broadly distributed benefits - any influence over AGI's deployment will be used for the benefit of all, and to avoid harmful uses or undue concentration of power. Second, long-term safety - OpenAI is committed to doing the research to make AGI safe, and to promote the adoption of such research across the AI community. Third, technical leadership - OpenAI aims to be at the forefront of AI capabilities. Fourth, a cooperative orientation - OpenAI actively cooperates with other research and policy institutions, and seeks to create a global community working together to address AGI's global challenges.", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating Abstractive Summarization - Part 8", "Chunk": "Summaries:\n\nReference Summary /ref_summary (human generated)\nEval Summary 1 / eval_summary_1 (system generated)\nEval Summary 2 / eval_summary_2 (system generated)", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating Abstractive Summarization - Part 9", "Chunk": "Take a moment to figure out which summary you'd personally prefer and the one that captures OpenAI's mission really well.", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating Abstractive Summarization (Chunk 1)", "Chunk": "OpenAI's mission is to ensure that artificial general intelligence (AGI) benefits all of humanity. OpenAI will build safe and beneficial AGI directly, but will also consider its mission fulfilled if its work aids others to achieve this outcome. OpenAI follows several key principles for this purpose. First, broadly distributed benefits - any influence over AGI's deployment will be used for the benefit of all, and to avoid harmful uses or undue concentration of power. Second, long-term safety - OpenAI is committed to doing the research to make AGI safe, and to promote the adoption of such research across the AI community. Third, technical leadership - OpenAI aims to be at the forefront of AI capabilities. Fourth, a cooperative orientation - OpenAI actively cooperates with other research and policy institutions, and seeks to create a global community working together to address AGI's global challenges.", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating Abstractive Summarization (Chunk 2)", "Chunk": "ROUGE, which stands for Recall-Oriented Understudy for Gisting Evaluation, primarily gauges the overlap of words between a generated output and a reference text. It's a prevalent metric for evaluating automatic summarization tasks. Among its variants, ROUGE-L offers insights into the longest contiguous match between system-generated and reference summaries, gauging how well the system retains the original summary's essence.", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating Abstractive Summarization (Chunk 3)", "Chunk": "The table shows the ROUGE scores for evaluating two different summaries against a reference text. 
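The scoring code itself is not reproduced in this excerpt; a minimal sketch of how such ROUGE scores can be computed with the rouge package imported earlier is shown below. The "..." strings are placeholders for the ref_summary, eval_summary_1, and eval_summary_2 texts introduced above.

```python
from rouge import Rouge

ref_summary = "..."      # the human-written reference summary from above
eval_summary_1 = "..."   # system-generated summary 1
eval_summary_2 = "..."   # system-generated summary 2

rouge = Rouge()
for name, candidate in [("Summary 1", eval_summary_1), ("Summary 2", eval_summary_2)]:
    # get_scores returns rouge-1, rouge-2, and rouge-l with recall, precision, and F1 per candidate
    scores = rouge.get_scores(candidate, ref_summary)[0]
    print(name, {metric: round(values["f"], 3) for metric, values in scores.items()})
```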
In the case of rouge-1, Summary 2 outperforms Summary 1, indicating a better overlap of individual words and for rouge-l, Summary 2 has a higher score, implying a closer match in the longest common subsequences, and thus a potentially better overall summarization in capturing the main content and order of the original text. Since Summary 2 has many words and short phrases directly lifted from the excerpt, its overlap with the reference summary would likely be higher, leading to higher ROUGE scores.", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating Abstractive Summarization (Chunk 4)", "Chunk": "While ROUGE and similar metrics, such as BLEU and METEOR, offer quantitative measures, they often fail to capture the true essence of a well-generated summary. They also correlate worse with human scores. Given the advancements in LLMs, which are adept at producing fluent and coherent summaries, traditional metrics like ROUGE may inadvertently penalize these models. This is especially true if the summaries are articulated differently but still encapsulate the core information accurately.", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating Abstractive Summarization (Chunk 5)", "Chunk": "Evaluating using BERTScore\nROUGE relies on the exact presence of words in both the predicted and reference texts, failing to interpret the underlying semantics. This is where BERTScore comes in and leverages the contextual embeddings from the BERT model, aiming to evaluate the similarity between a predicted and a reference sentence in the context of machine-generated text. By comparing embeddings from both sentences, BERTScore captures semantic similarities that might be missed by traditional n-gram based metrics.", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating Abstractive Summarization (Chunk 6)", "Chunk": "The close F1 Scores between the summaries indicate that they may perform similarly in capturing the key information. However, this small difference should be interpreted with caution. Since BERTScore may not fully grasp subtleties and high-level concepts that a human evaluator might understand, reliance solely on this metric could lead to misinterpreting the actual quality and nuances of the summary. An integrated approach combining BERTScore with human judgment and other metrics could offer a more reliable evaluation.", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating Abstractive Summarization (Chunk 7)", "Chunk": "Evaluating using GPT-4\nHere we implement an example reference-free text evaluator using gpt-4, inspired by the G-Eval framework which evaluates the quality of generated text using large language models. Unlike metrics like ROUGE or BERTScore that rely on comparison to reference summaries, the gpt-4 based evaluator assesses the quality of generated content based solely on the input prompt and text, without any ground truth references. 
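Before turning to the gpt-4 evaluator, here is a minimal sketch of the BERTScore computation discussed above, using the BERTScorer import from the setup cell; the summary variables are the same placeholders as in the ROUGE sketch, and the model choice (lang="en") is an assumption rather than the notebook's exact configuration.

```python
from bert_score import BERTScorer

# BERTScorer compares contextual embeddings of candidate and reference sentences.
scorer = BERTScorer(lang="en")

P1, R1, F1_1 = scorer.score([eval_summary_1], [ref_summary])
P2, R2, F1_2 = scorer.score([eval_summary_2], [ref_summary])
print(f"Summary 1 F1: {F1_1.mean().item():.3f}")
print(f"Summary 2 F1: {F1_2.mean().item():.3f}")
```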
This makes it applicable to new datasets and tasks where human references are sparse or unavailable.", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating Abstractive Summarization (Chunk 8)", "Chunk": "In this demonstration, we're using a direct scoring function where gpt-4 generates a discrete score (1-5) for each metric. Normalizing the scores and taking a weighted sum could result in more robust, continuous scores that better reflect the quality and diversity of the summaries.", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating Abstractive Summarization - Part 1", "Chunk": "Overall, the Summary 1 appears to outperform Summary 2 in three of the four categories (Coherence, Relevance and Fluency). Both summaries are found to be consistent with each other. The result might suggest that Summary 1 is generally preferable based on the given evaluation criteria.", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating Abstractive Summarization - Part 2", "Chunk": "Limitations\nNote that LLM-based metrics could have a bias towards preferring LLM-generated texts over human-written texts. Additionally LLM based metrics are sensitive to system messages/prompts. We recommend experimenting with other techniques that can help improve performance and/or get consistent scores, striking the right balance between high-quality expensive evaluation and automated evaluations. It is also worth noting that this scoring methodology is currently limited by gpt-4's context window.", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating Abstractive Summarization - Part 3", "Chunk": "Conclusion\nEvaluating abstractive summarization remains an open area for further improvement. Traditional metrics like ROUGE, BLEU, and BERTScore provide useful automatic evaluation but have limitations in capturing semantic similarity and nuanced aspects of summarization quality. Moreover, they require reference outputs which can be expensive to collect/label. LLM-based metrics offer promise as a reference-free method of evaluating coherence, fluency, and relevance. However, they too have potential biases favoring text generated by LLMs. Ultimately, a combination of automatic metrics and human evaluation is ideal for reliably assessing abstractive summarization systems. While human evaluation is indispensable for gaining a comprehensive understanding of summary quality, it should be complemented with automated evaluation to enable efficient, large-scale testing. The field will continue to evolve more robust evaluation techniques, balancing quality, scalability, and fairness. Advancing evaluation methods is crucial for driving progress in production applications.", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Question answering using a search API and re-ranking", "Chunk": "Searching for relevant information can sometimes feel like looking for a needle in a haystack, but don’t despair, GPTs can actually do a lot of this work for us. 
In this guide we explore a way to augment existing search systems with various AI techniques, helping us sift through the noise.", "Source": "https://cookbook.openai.com/examples/question_answering_using_a_search_api"} {"Chunk ID": "Mimicking Human Browsing", "Chunk": "Two ways of retrieving information for GPT are:\n\nMimicking Human Browsing: GPT triggers a search, evaluates the results, and modifies the search query if necessary. It can also follow up on specific search results to form a chain of thought, much like a human user would do.", "Source": "https://cookbook.openai.com/examples/question_answering_using_a_search_api"} {"Chunk ID": "Retrieval with Embeddings", "Chunk": "Retrieval with Embeddings: Calculate embeddings for your content and a user query, and then retrieve the content most related as measured by cosine similarity. This technique is used heavily by search engines like Google.", "Source": "https://cookbook.openai.com/examples/question_answering_using_a_search_api"} {"Chunk ID": "Combining Approaches", "Chunk": "By combining these approaches, and drawing inspiration from re-ranking methods, we identify an approach that sits in the middle. This approach can be implemented on top of any existing search system, like the Slack search API, or an internal ElasticSearch instance with private data. Here’s how it works:", "Source": "https://cookbook.openai.com/examples/question_answering_using_a_search_api"} {"Chunk ID": "Step 1: Search", "Chunk": "Step 1: Search\nUser asks a question.\nGPT generates a list of potential queries.\nSearch queries are executed in parallel.", "Source": "https://cookbook.openai.com/examples/question_answering_using_a_search_api"} {"Chunk ID": "Step 2: Re-rank", "Chunk": "Step 2: Re-rank\nEmbeddings for each result are used to calculate semantic similarity to a generated hypothetical ideal answer to the user question.\nResults are ranked and filtered based on this similarity metric.", "Source": "https://cookbook.openai.com/examples/question_answering_using_a_search_api"} {"Chunk ID": "Step 3: Answer", "Chunk": "Step 3: Answer\nGiven the top search results, the model generates an answer to the user’s question, including references and links.\nThis hybrid approach offers relatively low latency and can be integrated into any existing search endpoint, without requiring the upkeep of a vector database. Let's dive into it! We will use the News API as an example domain to search over.", "Source": "https://cookbook.openai.com/examples/question_answering_using_a_search_api"} {"Chunk ID": "Setup", "Chunk": "Setup\nIn addition to your OPENAI_API_KEY, you'll have to include a NEWS_API_KEY in your environment. You can get an API key here.", "Source": "https://cookbook.openai.com/examples/question_answering_using_a_search_api"} {"Chunk ID": "User Asks a Question", "Chunk": "User asks a question. GPT generates a list of potential queries. Search queries are executed in parallel.", "Source": "https://cookbook.openai.com/examples/question_answering_using_a_search_api"} {"Chunk ID": "Re-rank", "Chunk": "Re-rank\nDrawing inspiration from HyDE (Gao et al.), we first generate a hypothetical ideal answer to rerank our results against. This helps prioritize results that look like good answers, rather than those similar to our question. 
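Before looking at the prompts, here is a minimal sketch of the re-ranking computation itself: embed the hypothetical answer and each search result, then score each result by dot product (equivalent to cosine similarity here because the OpenAI embeddings come back unit-normalized). The articles list and the text fields chosen are illustrative assumptions, not the exact code used later.

import numpy as np
import openai

def embed_texts(texts, model="text-embedding-ada-002"):
    # Batch-embed a list of strings with the OpenAI embeddings endpoint.
    response = openai.Embedding.create(model=model, input=texts)
    return [item["embedding"] for item in response["data"]]

articles = [  # illustrative results from the search step; real code would use the News API responses
    {"title": "Article Title 1", "description": "Article Description 1"},
    {"title": "Article Title 2", "description": "Article Description 2"},
]

hypothetical_answer = "NAME won the championship and NAME was named MVP..."  # placeholder text
article_texts = [f"{a['title']} {a['description']}" for a in articles]

answer_embedding = embed_texts([hypothetical_answer])[0]
article_embeddings = embed_texts(article_texts)

# Dot product is enough because the returned embeddings are already unit-normalized.
cosine_similarities = np.dot(np.array(article_embeddings), np.array(answer_embedding))

Using the dot product here avoids an extra normalization pass without changing the ranking.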
Here’s the prompt we use to generate our hypothetical answer.", "Source": "https://cookbook.openai.com/examples/question_answering_using_a_search_api"} {"Chunk ID": "Generate a Hypothetical Answer", "Chunk": "Generate a hypothetical answer to the user's question. This answer will be used to rank search results. Pretend you have all the information you need to answer, but don't use any actual facts. Instead, use placeholders like NAME did something, or NAME said something at PLACE.", "Source": "https://cookbook.openai.com/examples/question_answering_using_a_search_api"} {"Chunk ID": "Calculating Cosine Similarity", "Chunk": "Now, let's generate embeddings for the search results and the hypothetical answer. We then calculate the cosine distance between these embeddings, giving us a semantic similarity metric. Note that we can simply calculate the dot product in lieu of doing a full cosine similarity calculation since the OpenAI embeddings are returned normalized in our API.", "Source": "https://cookbook.openai.com/examples/question_answering_using_a_search_api"} {"Chunk ID": "Re-rank Results", "Chunk": "Finally, we use these similarity scores to sort and filter the results.", "Source": "https://cookbook.openai.com/examples/question_answering_using_a_search_api"} {"Chunk ID": "Top 5 Articles", "Chunk": "Print top 5 articles", "Source": "https://cookbook.openai.com/examples/question_answering_using_a_search_api"} {"Chunk ID": "Display Top Results", "Chunk": "These results look a lot more relevant to our original query. Now, let's use the top 5 results to generate a final answer.", "Source": "https://cookbook.openai.com/examples/question_answering_using_a_search_api"} {"Chunk ID": "Generate a Final Answer", "Chunk": "Generate an answer to the user's question based on the given search results. TOP_RESULTS: [{'title': 'Article Title 1', 'description': 'Article Description 1', 'url': 'https://example.com/article1'}, ...] USER_QUESTION: Who won the NBA championship? And who was the MVP? Tell me a bit about the last game.", "Source": "https://cookbook.openai.com/examples/question_answering_using_a_search_api"} {"Chunk ID": "Question answering using a search API and re-ranking", "Chunk": "Now, in order to be as exhaustive as possible, we use the model to generate a list of diverse queries based on this question. QUERIES_INPUT = f\"\"\"\nYou have access to a search API that returns recent news articles. Generate an array of search queries that are relevant to this question. Use a variation of related keywords for the queries, trying to be as general as possible. Include as many queries as you can think of, including and excluding terms. For example, include queries like ['keyword_1 keyword_2', 'keyword_1', 'keyword_2']. Be creative. The more queries you include, the more likely you are to find relevant results. User question: {USER_QUESTION} Format: {\"queries\": [\"query_1\", \"query_2\", \"query_3\"]} queries = json_gpt(QUERIES_INPUT)[\"queries\"] # Let's include the original question as well for good measure queries.append(USER_QUESTION) queries The queries look good, so let's run the searches.", "Source": "https://cookbook.openai.com/examples/question_answering_using_a_search_api"} {"Chunk ID": "Re-rank", "Chunk": "As we can see, oftentimes, the search queries will return a large number of results, many of which are not relevant to the original question asked by the user. In order to improve the quality of the final answer, we use embeddings to re-rank and filter the results. 2. 
Re-rank Drawing inspiration from HyDE (Gao et al.), we first generate a hypothetical ideal answer to rerank our compare our results against. This helps prioritize results that look like good answers, rather than those similar to our question. Here’s the prompt we use to generate our hypothetical answer. HA_INPUT = f\"\"\"\nGenerate a hypothetical answer to the user's question. This answer will be used to rank search results. Pretend you have all the information you need to answer, but don't use any actual facts. Instead, use placeholders like NAME did something, or NAME said something at PLACE. User question: {USER_QUESTION} Format: {\"hypotheticalAnswer\": \"hypothetical answer text\"} hypothetical_answer = json_gpt(HA_INPUT)[\"hypotheticalAnswer\"] hypothetical_answer Now, let's generate embeddings for the search results and the hypothetical answer. We then calculate the cosine distance between these embeddings, giving us a semantic similarity metric. Note that we can simply calculate the dot product in lieu of doing a full cosine similarity calculation since the OpenAI embeddings are returned normalized in our API.", "Source": "https://cookbook.openai.com/examples/question_answering_using_a_search_api"} {"Chunk ID": "Answer", "Chunk": "Finally, we use these similarity scores to sort and filter the results. scored_articles = zip(articles, cosine_similarities) # Sort articles by cosine similarity sorted_articles = sorted(scored_articles, key=lambda x: x[1], reverse=True) # Print top 5 articles print(\"Top 5 articles:\", \"\\n\") for article, score in sorted_articles[0:5]: print(\"Title:\", article[\"title\"]) print(\"Description:\", article[\"description\"]) print(\"Content:\", article[\"content\"][0:100] + \"...\") print(\"Score:\", score) print() Awesome! These results look a lot more relevant to our original query. Now, let's use the top 5 results to generate a final answer. 3. Answer formatted_top_results = [ { \"title\": article[\"title\"], \"description\": article[\"description\"], \"url\": article[\"url\"], } for article, _score in sorted_articles[0:5] ] ANSWER_INPUT = f\"\"\"\nGenerate an answer to the user's question based on the given search results. TOP_RESULTS: {formatted_top_results} USER_QUESTION: {USER_QUESTION} Include as much information as possible in the answer. Reference the relevant search result urls as markdown links.\"\"\" completion = openai.ChatCompletion.create( model=GPT_MODEL, messages=[{\"role\": \"user\", \"content\": ANSWER_INPUT}], temperature=0.5, stream=True, ) text = \"\" for chunk in completion: text += chunk.choices[0].delta.get(\"content\", \"\") display.clear_output(wait=True) display.display(display.Markdown(text))", "Source": "https://cookbook.openai.com/examples/question_answering_using_a_search_api"} {"Chunk ID": "Related resources - Part 1", "Chunk": "People are writing great tools and papers for improving outputs from GPT. 
Here are some cool ones we've seen:", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Prompting libraries & tools - Part 1", "Chunk": "Guidance: A handy looking Python library from Microsoft that uses Handlebars templating to interleave generation, prompting, and logical control.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Prompting libraries & tools - Part 2", "Chunk": "LangChain: A popular Python/JavaScript library for chaining sequences of language model prompts.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Prompting libraries & tools - Part 3", "Chunk": "FLAML (A Fast Library for Automated Machine Learning & Tuning): A Python library for automating selection of models, hyperparameters, and other tunable choices.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Prompting libraries & tools - Part 4", "Chunk": "Chainlit: A Python library for making chatbot interfaces.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Prompting libraries & tools - Part 5", "Chunk": "Guardrails.ai: A Python library for validating outputs and retrying failures. Still in alpha, so expect sharp edges and bugs.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Prompting libraries & tools - Part 6", "Chunk": "Semantic Kernel: A Python/C#/Java library from Microsoft that supports prompt templating, function chaining, vectorized memory, and intelligent planning.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Prompting libraries & tools - Part 7", "Chunk": "Prompttools: Open-source Python tools for testing and evaluating models, vector DBs, and prompts.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Prompting libraries & tools - Part 8", "Chunk": "Outlines: A Python library that provides a domain-specific language to simplify prompting and constrain generation.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Prompting libraries & tools - Part 9", "Chunk": "Promptify: A small Python library for using language models to perform NLP tasks.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Prompting libraries & tools - Part 10", "Chunk": "Scale Spellbook: A paid product for building, comparing, and shipping language model apps.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Prompting libraries & tools - Part 11", "Chunk": "PromptPerfect: A paid product for testing and improving prompts.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Prompting libraries & tools - Part 12", "Chunk": "Weights & Biases: A paid product for tracking model training and prompt engineering experiments.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Prompting libraries & tools - Part 13", "Chunk": "OpenAI Evals: An open-source library for evaluating task performance of language models and prompts.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Prompting libraries & tools - Part 14", "Chunk": "LlamaIndex: A Python library for augmenting LLM apps with data.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Prompting libraries & tools - Part 15", "Chunk": "Arthur Shield: A paid product for detecting toxicity, hallucination, prompt injection, etc.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Prompting libraries & tools - 
Part 16", "Chunk": "LMQL: A programming language for LLM interaction with support for typed prompting, control flow, constraints, and tools.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Prompting guides", "Chunk": "Brex's Prompt Engineering Guide: Brex's introduction to language models and prompt engineering.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Prompting guides", "Chunk": "promptingguide.ai: A prompt engineering guide that demonstrates many techniques.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Prompting guides", "Chunk": "OpenAI Cookbook: Techniques to improve reliability: A slightly dated (Sep 2022) review of techniques for prompting language models.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Prompting guides", "Chunk": "Lil'Log Prompt Engineering: An OpenAI researcher's review of the prompt engineering literature (as of March 2023).", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Prompting guides", "Chunk": "learnprompting.org: An introductory course to prompt engineering.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Video courses", "Chunk": "Andrew Ng's DeepLearning.AI: A short course on prompt engineering for developers.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Video courses", "Chunk": "Andrej Karpathy's Let's build GPT: A detailed dive into the machine learning underlying GPT.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Video courses", "Chunk": "Prompt Engineering by DAIR.AI: A one-hour video on various prompt engineering techniques.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Papers on advanced prompting to improve reasoning - Part 1", "Chunk": "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (2022): Using few-shot prompts to ask models to think step by step improves their reasoning. PaLM's score on math word problems (GSM8K) rises from 18% to 57%.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Papers on advanced prompting to improve reasoning - Part 2", "Chunk": "Self-Consistency Improves Chain of Thought Reasoning in Language Models (2022): Taking votes from multiple outputs improves accuracy even more. Voting across 40 outputs raises PaLM's score on math word problems further, from 57% to 74%, and code-davinci-002's from 60% to 78%.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Papers on advanced prompting to improve reasoning - Part 3", "Chunk": "Tree of Thoughts: Deliberate Problem Solving with Large Language Models (2023): Searching over trees of step by step reasoning helps even more than voting over chains of thought. It lifts GPT-4's scores on creative writing and crosswords.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Papers on advanced prompting to improve reasoning - Part 4", "Chunk": "Language Models are Zero-Shot Reasoners (2022): Telling instruction-following models to think step by step improves their reasoning. 
It lifts text-davinci-002's score on math word problems (GSM8K) from 13% to 41%.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Papers on advanced prompting to improve reasoning - Part 5", "Chunk": "Large Language Models Are Human-Level Prompt Engineers (2023): Automated searching over possible prompts found a prompt that lifts scores on math word problems (GSM8K) to 43%, 2 percentage points above the human-written prompt in Language Models are Zero-Shot Reasoners.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Papers on advanced prompting to improve reasoning - Part 6", "Chunk": "Reprompting: Automated Chain-of-Thought Prompt Inference Through Gibbs Sampling (2023): Automated searching over possible chain-of-thought prompts improved ChatGPT's scores on a few benchmarks by 0–20 percentage points.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Papers on advanced prompting to improve reasoning - Part 7", "Chunk": "Faithful Reasoning Using Large Language Models (2022): Reasoning can be improved by a system that combines: chains of thought generated by alternative selection and inference prompts, a halter model that chooses when to halt selection-inference loops, a value function to search over multiple reasoning paths, and sentence labels that help avoid hallucination.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Papers on advanced prompting to improve reasoning - Part 8", "Chunk": "STaR: Bootstrapping Reasoning With Reasoning (2022): Chain of thought reasoning can be baked into models via fine-tuning. For tasks with an answer key, example chains of thought can be generated by language models.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Papers on advanced prompting to improve reasoning - Part 9", "Chunk": "ReAct: Synergizing Reasoning and Acting in Language Models (2023): For tasks with tools or an environment, chain of thought works better if you prescriptively alternate between Reasoning steps (thinking about what to do) and Acting (getting information from a tool or environment).", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Papers on advanced prompting to improve reasoning - Part 10", "Chunk": "Reflexion: an autonomous agent with dynamic memory and self-reflection (2023): Retrying tasks with memory of prior failures improves subsequent performance.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Papers on advanced prompting to improve reasoning - Part 11", "Chunk": "Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP (2023): Models augmented with knowledge via a 'retrieve-then-read' pipeline can be improved with multi-hop chains of searches.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Papers on advanced prompting to improve reasoning - Part 12", "Chunk": "Improving Factuality and Reasoning in Language Models through Multiagent Debate (2023): Generating debates between a few ChatGPT agents over a few rounds improves scores on various benchmarks. 
Math word problem scores rise from 77% to 85%.", "Source": "https://cookbook.openai.com/related_resources"} {"Chunk ID": "Fine-Tuning OpenAI Models for Retrieval Augmented Generation (RAG) with Qdrant and Few-Shot Learning", "Chunk": "The aim of this notebook is to walk through a comprehensive example of how to fine-tune OpenAI models for Retrieval Augmented Generation (RAG).\n\nWe will also be integrating Qdrant and Few-Shot Learning to boost the model's performance and reduce hallucinations. This could serve as a practical guide for ML practitioners, data scientists, and AI Engineers interested in leveraging the power of OpenAI models for specific use-cases. 🤩", "Source": "https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant"} {"Chunk ID": "Setting up the Environment", "Chunk": "Install and Import Dependencies\n!pip install pandas openai tqdm tenacity scikit-learn tiktoken python-dotenv seaborn --upgrade --quiet\n\nimport json\nimport os\nimport time\n\nimport pandas as pd\nimport openai\nimport tiktoken\nimport seaborn as sns\nfrom tenacity import retry, wait_exponential\nfrom tqdm import tqdm\nfrom collections import defaultdict\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom sklearn.metrics import confusion_matrix\n\nimport warnings\nwarnings.filterwarnings('ignore')\n\ntqdm.pandas()\n\nSet your keys\nGet your OpenAI keys here and Qdrant keys after making a free cluster here.", "Source": null} {"Chunk ID": "Data Preparation: SQuADv2 Data Subsets", "Chunk": "For the purpose of demonstration, we'll make small slices from the train and validation splits of the SQuADv2 dataset. This dataset has questions and contexts where the answer is not present in the context, to help us evaluate how LLM handles this case.\n\nWe'll read the data from the JSON files and create a dataframe with the following columns: question, context, answer, is_impossible.\n\nDownload the Data\n# !mkdir -p local_cache\n# !wget https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v2.0.json -O local_cache/train.json\n# !wget https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json -O local_cache/dev.json\n\nRead JSON to DataFrame\ndef json_to_dataframe_with_titles(json_data):\n qas = []\n context = []\n is_impossible = []\n answers = []\n titles = []\n\n for article in json_data['data']:\n title = article['title']\n for paragraph in article['paragraphs']:\n for qa in paragraph['qas']:\n qas.append(qa['question'].strip())\n context.append(paragraph['context'])\n is_impossible.append(qa['is_impossible'])\n\n ans_list = []\n for ans in qa['answers']:\n ans_list.append(ans['text'])\n answers.append(ans_list)\n titles.append(title)\n\n df = pd.DataFrame({'title': titles, 'question': qas, 'context': context, 'is_impossible': is_impossible, 'answers': answers})\n return df\n\ndef get_diverse_sample(df, sample_size=100, random_state=42):\n \"\"\"\n Get a diverse sample of the dataframe by sampling from each title\n \"\"\"\n sample_df = df.groupby(['title', 'is_impossible']).apply(lambda x: x.sample(min(len(x), max(1, sample_size // 50)), random_state=random_state)).reset_index(drop=True)\n \n if len(sample_df) < sample_size:\n remaining_sample_size = sample_size - len(sample_df)\n remaining_df = df.drop(sample_df.index).sample(remaining_sample_size, random_state=random_state)\n sample_df = pd.concat([sample_df, remaining_df]).sample(frac=1, random_state=random_state).reset_index(drop=True)\n\n return sample_df.sample(min(sample_size, 
len(sample_df)), random_state=random_state).reset_index(drop=True)\n\ntrain_df = json_to_dataframe_with_titles(json.load(open('local_cache/train.json')))\nval_df = json_to_dataframe_with_titles(json.load(open('local_cache/dev.json')))", "Source": null} {"Chunk ID": "Answering using Base gpt-3.5-turbo-0613 model", "Chunk": "3.1 Zero Shot Prompt\nLet's start by using the base gpt-3.5-turbo-0613 model to answer the questions. This prompt is a simple concatenation of the question and context, with a separator token in between: \n\n. We've a simple instruction part of the prompt:\n\nAnswer the following Question based on the Context only. Only answer from the Context. If you don't know the answer, say 'I don't know'.\n\nOther prompts are possible, but this is a good starting point. We'll use this prompt to answer the questions in the validation set.", "Source": null} {"Chunk ID": "Answering using Zero Shot Prompt", "Chunk": "3.2 Answering using Zero Shot Prompt\nNext, you'll need some re-usable functions which make an OpenAI API Call and return the answer. You'll use the ChatCompletion.create endpoint of the API, which takes a prompt and returns the completed text.\n\n# Function with tenacity for retries\n@retry(wait=wait_exponential(multiplier=1, min=2, max=6))\ndef api_call(messages, model):\n return openai.ChatCompletion.create(\n model=model,\n messages=messages,\n stop=[\"\\n\\n\"],\n max_tokens=100,\n temperature=0.0,\n )\n\n\n# Main function to answer question\ndef answer_question(row, prompt_func=get_prompt, model=\"gpt-3.5-turbo-0613\"):\n messages = prompt_func(row)\n response = api_call(messages, model)\n return response[\"choices\"][0][\"message\"][\"content\"]\n\n⏰ Time to run: ~3 min, 🛜 Needs Internet Connection\n\n# Use progress_apply with tqdm for progress bar\ndf[\"generated_answer\"] = df.progress_apply(answer_question, axis=1)\ndf.to_json(\"local_cache/100_val.json\", orient=\"records\", lines=True)\ndf = pd.read_json(\"local_cache/100_val.json\", orient=\"records\", lines=True)\ndf", "Source": null} {"Chunk ID": "Fine-Tuning OpenAI Models for Retrieval Augmented Generation (RAG) with Qdrant and Few-Shot Learning", "Chunk": "Notice that the fine-tuned model skips questions more often -- and makes fewer mistakes. This is because the fine-tuned model is more conservative and skips questions when it's not sure.", "Source": "https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant"} {"Chunk ID": "Fine-Tuning OpenAI Models for Retrieval Augmented Generation (RAG) with Qdrant and Few-Shot Learning", "Chunk": "evaluator.plot_model_comparison([\"generated_answer\", \"ft_generated_answer\"], scenario=\"idk_expected\", nice_names=[\"Baseline\", \"Fine-Tuned\"])", "Source": "https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant"} {"Chunk ID": "Fine-Tuning OpenAI Models for Retrieval Augmented Generation (RAG) with Qdrant and Few-Shot Learning", "Chunk": "Notice that the fine-tuned model has learned to say \"I don't know\" a lot better than the prompt. Or, the model has gotten good at skipping questions.", "Source": "https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant"} {"Chunk ID": "Fine-Tuning OpenAI Models for Retrieval Augmented Generation (RAG) with Qdrant and Few-Shot Learning", "Chunk": "Observations The fine-tuned model is better at saying \"I don't know\" Hallucinations drop from 100% to 0% with fine-tuning. Wrong answers drop from 17% to 6% with fine-tuning. 
Correct answers also drop from 83% to 60% with fine-tuning - this is because the fine-tuned model is more conservative and says \"I don't know\" more often. This is a good thing because it's better to say \"I don't know\" than to give a wrong answer.", "Source": "https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant"} {"Chunk ID": "Fine-Tuning OpenAI Models for Retrieval Augmented Generation (RAG) with Qdrant and Few-Shot Learning", "Chunk": "That said, we want to improve the correctness of the model, even if that increases the hallucinations. We're looking for a model that is both correct and conservative, striking a balance between the two. We'll use Qdrant and Few-Shot Learning to achieve this.", "Source": "https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant"} {"Chunk ID": "Fine-Tuning OpenAI Models for Retrieval Augmented Generation (RAG) with Qdrant and Few-Shot Learning", "Chunk": "💪 You're 2/3rds of the way there! Keep reading!", "Source": "https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant"} {"Chunk ID": "Fine-Tuning OpenAI Models for Retrieval Augmented Generation (RAG) with Qdrant and Few-Shot Learning", "Chunk": "Section B: Few Shot Learning We'll select a few examples from the dataset, including cases where the answer is not present in the context. We'll then use these examples to create a prompt that we can use to fine-tune the model. We'll then measure the performance of the fine-tuned model.", "Source": "https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant"} {"Chunk ID": "Fine-Tuning OpenAI Models for Retrieval Augmented Generation (RAG) with Qdrant and Few-Shot Learning", "Chunk": "What is next? Fine-Tuning OpenAI Model with Qdrant 6.1 Embed the Fine-Tuning Data 6.2 Embedding the Questions Using Qdrant to Improve RAG Prompt 6. Fine-Tuning OpenAI Model with Qdrant So far, we've been using the OpenAI model to answer questions without using examples of the answer. The previous step made it work better on in-context examples, while this one helps it generalize to unseen data, and attempt to learn when to say \"I don't know\" and when to give an answer.", "Source": "https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant"} {"Chunk ID": "Fine-Tuning OpenAI Models for Retrieval Augmented Generation (RAG) with Qdrant and Few-Shot Learning", "Chunk": "This is where few-shot learning comes in! Few-shot learning is a type of transfer learning that allows us to answer questions where the answer is not present in the context. We can do this by providing a few examples of the answer we're looking for, and the model will learn to answer questions where the answer is not present in the context.", "Source": "https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant"} {"Chunk ID": "Fine-Tuning OpenAI Models for Retrieval Augmented Generation (RAG) with Qdrant and Few-Shot Learning", "Chunk": "5.1 Embed the Training Data Embeddings are a way to represent sentences as an array of floats. 
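As a quick illustration of question-to-question similarity with the same DefaultEmbedding model used below (the paraphrased second question is made up for this sketch):

import numpy as np
from fastembed.embedding import DefaultEmbedding

embedding_model = DefaultEmbedding()  # produces 384-dimensional vectors, matching the Qdrant collection config below

q1 = "Where did Maharana Pratap die?"
q2 = "What was Rana Pratap's place of death?"  # illustrative paraphrase
v1, v2 = list(embedding_model.embed([q1, q2]))

# Cosine similarity between the two question vectors; close paraphrases score near 1.0.
similarity = float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
print(round(similarity, 3))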
We'll use the embeddings to find the most similar questions to the ones we're looking for.", "Source": "https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant"} {"Chunk ID": "Fine-Tuning OpenAI Models for Retrieval Augmented Generation (RAG) with Qdrant and Few-Shot Learning", "Chunk": "import os from qdrant_client import QdrantClient from qdrant_client.http import models from qdrant_client.http.models import PointStruct from qdrant_client.http.models import Distance, VectorParams Now that we've the Qdrant imports in place, qdrant_client = QdrantClient( url=os.getenv(\"QDRANT_URL\"), api_key=os.getenv(\"QDRANT_API_KEY\"), timeout=6000, prefer_grpc=True ) collection_name = \"squadv2-cookbook\" # # Create the collection, run this only once # qdrant_client.recreate_collection( # collection_name=collection_name, # vectors_config=VectorParams(size=384, distance=Distance.COSINE), # ) from fastembed.embedding import DefaultEmbedding from typing import List import numpy as np import pandas as pd from tqdm.notebook import tqdm tqdm.pandas() embedding_model = DefaultEmbedding()", "Source": "https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant"} {"Chunk ID": "Fine-Tuning OpenAI Models for Retrieval Augmented Generation (RAG) with Qdrant and Few-Shot Learning", "Chunk": "5.2 Embedding the Questions Next, you'll embed the entire training set questions. You'll use the question to question similarity to find the most similar questions to the question we're looking for. This is a workflow which is used in RAG to leverage the OpenAI model ability of in-context learning with more examples. This is what we call Few Shot Learning here. ❗️⏰ Important Note: This step can take up to 3 hours to complete. Please be patient. If you see Out of Memory errors or Kernel Crashes, please reduce the batch size to 32, restart the kernel and run the notebook again. This code needs to be run only ONCE. Function Breakdown for generate_points_from_dataframe Initialization: batch_size = 512 and total_batches set the stage for how many questions will be processed in one go. This is to prevent memory issues. If your machine can handle more, feel free to increase the batch size. If your kernel crashes, reduce the batch size to 32 and try again. Progress Bar: tqdm gives you a nice progress bar so you don't fall asleep. Batch Loop: The for-loop iterates through batches. start_idx and end_idx define the slice of the DataFrame to process. Generate Embeddings: batch_embeddings = embedding_model.embed(batch, batch_size=batch_size) - This is where the magic happens. Your questions get turned into embeddings. PointStruct Generation: Using .progress_apply, it turns each row into a PointStruct object. This includes an ID, the embedding vector, and other metadata. 
Returns the list of PointStruct objects, which can be used to create a collection in Qdrant.", "Source": "https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant"} {"Chunk ID": "Fine-Tuning OpenAI Models for Retrieval Augmented Generation (RAG) with Qdrant and Few-Shot Learning", "Chunk": "def generate_points_from_dataframe(df: pd.DataFrame) -> List[PointStruct]: batch_size = 512 questions = df[\"question\"].tolist() total_batches = len(questions) // batch_size + 1 pbar = tqdm(total=len(questions), desc=\"Generating embeddings\") # Generate embeddings in batches to improve performance embeddings = [] for i in range(total_batches): start_idx = i * batch_size end_idx = min((i + 1) * batch_size, len(questions)) batch = questions[start_idx:end_idx] batch_embeddings = embedding_model.embed(batch, batch_size=batch_size) embeddings.extend(batch_embeddings) pbar.update(len(batch)) pbar.close() # Convert embeddings to list of lists embeddings_list = [embedding.tolist() for embedding in embeddings] # Create a temporary DataFrame to hold the embeddings and existing DataFrame columns temp_df = df.copy() temp_df[\"embeddings\"] = embeddings_list temp_df[\"id\"] = temp_df.index # Generate PointStruct objects using DataFrame apply method points = temp_df.progress_apply( lambda row: PointStruct( id=row[\"id\"], vector=row[\"embeddings\"], payload={ \"question\": row[\"question\"], \"title\": row[\"title\"], \"context\": row[\"context\"], \"is_impossible\": row[\"is_impossible\"], \"answers\": row[\"answers\"], }, ), axis=1, ).tolist() return points points = generate_points_from_dataframe(train_df) Upload the Embeddings to Qdrant Note that configuring Qdrant is outside the scope of this notebook. Please refer to the Qdrant for more information. We used a timeout of 600 seconds for the upload, and grpc compression to speed up the upload. operation_info = qdrant_client.upsert( collection_name=collection_name, wait=True, points=points ) print(operation_info)", "Source": "https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant"} {"Chunk ID": "Fine-Tuning OpenAI Models for Retrieval Augmented Generation (RAG) with Qdrant and Few-Shot Learning", "Chunk": "6. Using Qdrant to Improve RAG Prompt\nNow that we've uploaded the embeddings to Qdrant, we can use Qdrant to find the most similar questions to the question we're looking for. We'll use the top 5 most similar questions to create a prompt that we can use to fine-tune the model. We'll then measure the performance of the fine-tuned model on the same validation set, but with few shot prompting!\n\nOur main function get_few_shot_prompt serves as the workhorse for generating prompts for few-shot learning. It does this by retrieving similar questions from Qdrant - a vector search engine, using an embeddings model. Here is the high-level workflow:\n\nRetrieve similar questions from Qdrant where the answer is present in the context\nRetrieve similar questions from Qdrant where the answer is IMPOSSIBLE i.e. 
the expected answer is \"I don't know\" to find in the context\nCreate a prompt using the retrieved questions\nFine-tune the model using the prompt\nEvaluate the fine-tuned model on the validation set with the same prompting technique\ndef get_few_shot_prompt(row):\n\n query, row_context = row[\"question\"], row[\"context\"]\n\n embeddings = list(embedding_model.embed([query]))\n query_embedding = embeddings[0].tolist()\n\n num_of_qa_to_retrieve = 5\n\n # Query Qdrant for similar questions that have an answer\n q1 = qdrant_client.search(\n collection_name=collection_name,\n query_vector=query_embedding,\n with_payload=True,\n limit=num_of_qa_to_retrieve,\n query_filter=models.Filter(\n must=[\n models.FieldCondition(\n key=\"is_impossible\",\n match=models.MatchValue(\n value=False,\n ),\n ),\n ],\n )\n )\n\n # Query Qdrant for similar questions that are IMPOSSIBLE to answer\n q2 = qdrant_client.search(\n collection_name=collection_name,\n query_vector=query_embedding,\n query_filter=models.Filter(\n must=[\n models.FieldCondition(\n key=\"is_impossible\",\n match=models.MatchValue(\n value=True,\n ),\n ),\n ]\n ),\n with_payload=True,\n limit=num_of_qa_to_retrieve,\n )\n\n instruction = \"\"\"Answer the following Question based on the Context only. Only answer from the Context. If you don't know the answer, say 'I don't know'.\n\n\"\"\"\n # If there is a next best question, add it to the prompt\n \n def q_to_prompt(q):\n question, context = q.payload[\"question\"], q.payload[\"context\"]\n answer = q.payload[\"answers\"][0] if len(q.payload[\"answers\"]) > 0 else \"I don't know\"\n return [\n {\n \"role\": \"user\", \n \"content\": f\"Question: {question}\\n\\nContext: {context}\\n\\nAnswer:\"\n },\n {\"role\": \"assistant\", \"content\": answer},\n ]\n\n rag_prompt = []\n \n if len(q1) >= 1:\n rag_prompt += q_to_prompt(q1[1])\n if len(q2) >= 1:\n rag_prompt += q_to_prompt(q2[1])\n if len(q1) >= 1:\n rag_prompt += q_to_prompt(q1[2])\n \n \n rag_prompt += [\n {\n \"role\": \"user\",\n \"content\": f\"Question: {query}\\n\\nContext: {row_context}\\n\\nAnswer:\"\n },\n ]\n\n rag_prompt = [{\"role\": \"system\", \"content\": instruction}] + rag_prompt\n return rag_prompt\n\n# ⏰ Time: 2 min\ntrain_sample[\"few_shot_prompt\"] = train_sample.progress_apply(get_few_shot_prompt, axis=1)\n", "Source": "https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant"} {"Chunk ID": "7. Fine-Tuning OpenAI Model with Qdrant", "Chunk": "7.1 Upload the Fine-Tuning Data to OpenAI\n# Prepare the OpenAI File format i.e. JSONL from train_sample\ndef dataframe_to_jsonl(df):\n def create_jsonl_entry(row):\n messages = row[\"few_shot_prompt\"]\n return json.dumps({\"messages\": messages})\n\n jsonl_output = df.progress_apply(create_jsonl_entry, axis=1)\n return \"\\n\".join(jsonl_output)\n\nwith open(\"local_cache/100_train_few_shot.jsonl\", \"w\") as f:\n f.write(dataframe_to_jsonl(train_sample))\n\n7.2 Fine-Tune the Model\n⏰ Time to run: ~15-30 minutes\n\nfine_tuner = OpenAIFineTuner(\n training_file_path=\"local_cache/100_train_few_shot.jsonl\",\n model_name=\"gpt-3.5-turbo\",\n suffix=\"trnfewshot20230907\"\n )\n\nmodel_id = fine_tuner.fine_tune_model()\nmodel_id\n\n# Let's try this out\ncompletion = openai.ChatCompletion.create(\n model=model_id,\n messages=[\n {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n {\n \"role\": \"user\",\n \"content\": \"Can you answer the following question based on the given context? 
If not, say, I don't know:\\n\\nQuestion: What is the capital of France?\\n\\nContext: The capital of Mars is Gaia. Answer:\",\n },\n {\n \"role\": \"assistant\",\n \"content\": \"I don't know\",\n },\n {\n \"role\": \"user\",\n \"content\": \"Question: Where did Maharana Pratap die?\\n\\nContext: Rana Pratap's defiance of the mighty Mughal empire, almost alone and unaided by the other Rajput states, constitute a glorious saga of Rajput valour and the spirit of self-sacrifice for cherished principles. Rana Pratap's methods of guerrilla warfare were later elaborated further by Malik Ambar, the Deccani general, and by Emperor Shivaji.\\nAnswer:\",\n },\n {\n \"role\": \"assistant\",\n \"content\": \"I don't know\",\n },\n {\n \"role\": \"user\",\n \"content\": \"Question: Who did Rana Pratap fight against?\\n\\nContext: In stark contrast to other Rajput rulers who accommodated and formed alliances with the various Muslim dynasties in the subcontinent, by the time Pratap ascended to the throne, Mewar was going through a long-standing conflict with the Mughals, which started with the defeat of his grandfather Rana Sanga in the Battle of Khanwa in 1527 and continued with the defeat of his father Udai Singh II in the Siege of Chittorgarh in 1568. Pratap Singh, gained distinction for his refusal to form any political alliance with the Mughal Empire and his resistance to Muslim domination. The conflicts between Pratap Singh and Akbar led to the Battle of Haldighati. Answer:\",\n },\n {\n \"role\": \"assistant\",\n \"content\": \"Akbar\",\n },\n {\n \"role\": \"user\",\n \"content\": \"Question: Which state is Chittorgarh in?\\n\\nContext: Chittorgarh, located in the southern part of the state of Rajasthan, 233 km (144.8 mi) from Ajmer, midway between Delhi and Mumbai on the National Highway 8 (India) in the road network of Golden Quadrilateral. Chittorgarh is situated where National Highways No. 76 & 79 intersect. Answer:\",\n },\n ],\n)\nprint(\"Correct Answer: Rajasthan\\nModel Answer:\")\nprint(completion.choices[0].message)\n\n⏰ Time to run: 5-15 min\n\ndf[\"ft_generated_answer_few_shot\"] = df.progress_apply(answer_question, model=model_id, prompt_func=get_few_shot_prompt, axis=1)\ndf.to_json(\"local_cache/100_val_ft_few_shot.json\", orient=\"records\", lines=True)\n", "Source": "https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant"} {"Chunk ID": "8. Evaluation", "Chunk": "But how well does the model perform? Let's compare the results from the 3 different models we've looked at so far:\n\nevaluator = Evaluator(df)\nevaluator.plot_model_comparison([\"generated_answer\", \"ft_generated_answer\", \"ft_generated_answer_few_shot\"], scenario=\"answer_expected\", nice_names=[\"Baseline\", \"Fine-Tuned\", \"Fine-Tuned with Few-Shot\"])\n\nThis is quite amazing -- we're able to get the best of both worlds! We're able to get the model to be both correct and conservative:\n\nThe model is correct 83% of the time -- this is the same as the base model\nThe model gives the wrong answer only 8% of the time -- down from 17% with the base model\nNext, let's look at the hallucinations. We want to reduce the hallucinations, but not at the cost of correctness. We want to strike a balance between the two. 
We've struck a good balance here:\n\nThe model hallucinates 53% of the time -- down from 100% with the base model\nThe model says \"I don't know\" 47% of the time -- up from NEVER with the base model\nevaluator.plot_model_comparison([\"generated_answer\", \"ft_generated_answer\", \"ft_generated_answer_few_shot\"], scenario=\"idk_expected\", nice_names=[\"Baseline\", \"Fine-Tuned\", \"Fine-Tuned with Few-Shot\"])\n\nFew Shot Fine-Tuning with Qdrant is a great way to control and steer the performance of your RAG system. Here, we made the model less conservative compared to zero shot and more confident by using Qdrant to find similar questions.\n\nYou can also use Qdrant to make the model more conservative. We did this by giving examples of questions where the answer is not present in the context.\nThis is biasing the model to say \"I don't know\" more often.\n\nSimilarly, one can also use Qdrant to make the model more confident by giving examples of questions where the answer is present in the context. This biases the model to give an answer more often. The trade-off is that the model will also hallucinate more often.\n\nYou can make this trade-off by adjusting the training data: distribution of questions and examples, as well as the kind and number of examples you retrieve from Qdrant.\n", "Source": "https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant"} {"Chunk ID": "9. Conclusion", "Chunk": "In this notebook, we've demonstrated how to fine-tune OpenAI models for specific use-cases. We've also demonstrated how to use Qdrant and Few-Shot Learning to improve the performance of the model.\n\nAggregate Results\nSo far, we've looked at the results for each scenario separately, i.e. each scenario summed to 100. Let's look at the results as an aggregate to get a broader sense of how the model is performing:\n\nCategory\tBase\tFine-Tuned\tFine-Tuned with Qdrant\nCorrect\t44%\t32%\t44%\nSkipped\t0%\t18%\t5%\nWrong\t9%\t3%\t4%\nHallucination\t47%\t7%\t25%\nI don't know\t0%\t40%\t22%\nObservations\nCompared to base model\nThe few shot fine-tuned with Qdrant model is as good as the base model at answering questions where the answer is present in the context.\nThe few shot fine-tuned with Qdrant model is better at saying \"I don't know\" when the answer is not present in the context.\nThe few shot fine-tuned with Qdrant model is better at reducing hallucinations.\nCompared to fine-tuned model\nThe few shot fine-tuned with Qdrant model gets more correct answers than the fine-tuned model: 83% of the questions are answered correctly vs 60% for the fine-tuned model\nThe few shot fine-tuned with Qdrant model is better at deciding when to say \"I don't know\" when the answer is not present in the context. 34% skip rate for the plain fine-tuning mode, vs 9% for the few shot fine-tuned with Qdrant model\nNow, you should be able to:\n\nNotice the trade-offs between the number of correct answers and hallucinations -- and how training dataset choice influences that!\nFine-tune OpenAI models for specific use-cases and use Qdrant to improve the performance of your RAG model\nGet started on how to evaluate the performance of your RAG model", "Source": "https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant"} {"Chunk ID": "Azure chat completion models with your own data (preview)_1", "Chunk": "This example shows how to use Azure OpenAI service models with your own data. 
The feature is currently in preview.", "Source": "https://cookbook.openai.com/examples/azure/chat_with_your_own_data"} {"Chunk ID": "Azure chat completion models with your own data (preview)_2", "Chunk": "Azure OpenAI on your data enables you to run supported chat models such as GPT-3.5-Turbo and GPT-4 on your data without needing to train or fine-tune models. Running models on your data enables you to chat on top of, and analyze your data with greater accuracy and speed.", "Source": "https://cookbook.openai.com/examples/azure/chat_with_your_own_data"} {"Chunk ID": "Azure chat completion models with your own data (preview)_3", "Chunk": "One of the key benefits of Azure OpenAI on your data is its ability to tailor the content of conversational AI. Because the model has access to, and can reference specific sources to support its responses, answers are not only based on its pretrained knowledge but also on the latest information available in the designated data source.", "Source": "https://cookbook.openai.com/examples/azure/chat_with_your_own_data"} {"Chunk ID": "Azure chat completion models with your own data (preview)_4", "Chunk": "This grounding data also helps the model avoid generating responses based on outdated or incorrect information.", "Source": "https://cookbook.openai.com/examples/azure/chat_with_your_own_data"} {"Chunk ID": "Azure chat completion models with your own data (preview)_5", "Chunk": "Azure OpenAI on your own data with Azure Cognitive Search provides a customizable, pre-built solution for knowledge retrieval, from which a conversational AI application can be built. To see alternative methods for knowledge retrieval and semantic search, check out the cookbook examples for vector databases.", "Source": "https://cookbook.openai.com/examples/azure/chat_with_your_own_data"} {"Chunk ID": "Azure chat completion models with your own data (preview)_6", "Chunk": "How it works\nAzure OpenAI on your own data connects the model with your data, giving it the ability to retrieve and utilize data in a way that enhances the model's output.", "Source": "https://cookbook.openai.com/examples/azure/chat_with_your_own_data"} {"Chunk ID": "Azure chat completion models with your own data (preview)_7", "Chunk": "Together with Azure Cognitive Search, data is retrieved from designated data sources based on the user input and provided conversation history. 
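Concretely, a chat request that uses this feature attaches a dataSources entry pointing at the Cognitive Search index. The sketch below shows the rough shape of such a request against the 2023-08-01-preview extensions endpoint using plain HTTP; it is an illustration of the request format rather than the exact code used later in this example (which configures the openai SDK instead), and the deployment name and question are placeholders. The environment variables are the ones described in the Setup section below.

import os
import requests

deployment = "gpt-35-turbo"  # placeholder: the name of your chat model deployment

url = (
    f"{os.environ['OPENAI_API_BASE']}/openai/deployments/{deployment}"
    "/extensions/chat/completions?api-version=2023-08-01-preview"
)
body = {
    "dataSources": [
        {
            "type": "AzureCognitiveSearch",
            "parameters": {
                "endpoint": os.environ["SEARCH_ENDPOINT"],
                "key": os.environ["SEARCH_KEY"],
                "indexName": os.environ["SEARCH_INDEX_NAME"],
            },
        }
    ],
    "messages": [{"role": "user", "content": "What are my data sources about?"}],  # placeholder question
}
response = requests.post(url, headers={"api-key": os.environ["OPENAI_API_KEY"]}, json=body)
# The response contains the assistant's grounded answer along with tool messages holding the retrieved citations.
print(response.json())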
The data is then augmented and resubmitted as a prompt to the model, giving the model contextual information it can use to generate a response.", "Source": "https://cookbook.openai.com/examples/azure/chat_with_your_own_data"} {"Chunk ID": "Azure chat completion models with your own data (preview)_8", "Chunk": "See the Data, privacy, and security for Azure OpenAI Service for more information.", "Source": "https://cookbook.openai.com/examples/azure/chat_with_your_own_data"} {"Chunk ID": "Azure chat completion models with your own data (preview)_9", "Chunk": "Prerequisites\nTo get started, we'll cover a few prerequisites.", "Source": "https://cookbook.openai.com/examples/azure/chat_with_your_own_data"} {"Chunk ID": "Azure chat completion models with your own data (preview)_10", "Chunk": "To properly access the Azure OpenAI Service, we need to create the proper resources at the Azure Portal (you can check a detailed guide on how to do this in the Microsoft Docs)", "Source": "https://cookbook.openai.com/examples/azure/chat_with_your_own_data"} {"Chunk ID": "Azure chat completion models with your own data (preview)_11", "Chunk": "To use your own data with Azure OpenAI models, you will need:", "Source": "https://cookbook.openai.com/examples/azure/chat_with_your_own_data"} {"Chunk ID": "Azure chat completion models with your own data (preview)_12", "Chunk": "Azure OpenAI access and a resource with a chat model deployed (for example, GPT-3 or GPT-4)", "Source": "https://cookbook.openai.com/examples/azure/chat_with_your_own_data"} {"Chunk ID": "Azure chat completion models with your own data (preview)_13", "Chunk": "Azure Cognitive Search resource", "Source": "https://cookbook.openai.com/examples/azure/chat_with_your_own_data"} {"Chunk ID": "Azure chat completion models with your own data (preview)_14", "Chunk": "Azure Blob Storage resource", "Source": "https://cookbook.openai.com/examples/azure/chat_with_your_own_data"} {"Chunk ID": "Azure chat completion models with your own data (preview)_15", "Chunk": "Your documents to be used as data (See data source options)", "Source": "https://cookbook.openai.com/examples/azure/chat_with_your_own_data"} {"Chunk ID": "Azure chat completion models with your own data (preview)_16", "Chunk": "For a full walk-through on how to upload your documents to blob storage and create an index using the Azure AI Studio, see this Quickstart.", "Source": "https://cookbook.openai.com/examples/azure/chat_with_your_own_data"} {"Chunk ID": "Azure chat completion models with your own data (preview)_17", "Chunk": "Setup\nFirst, we install the necessary dependencies.", "Source": "https://cookbook.openai.com/examples/azure/chat_with_your_own_data"} {"Chunk ID": "Azure chat completion models with your own data (preview)_18", "Chunk": "! pip install \"openai>=0.27.6\"\n! pip install python-dotenv", "Source": "https://cookbook.openai.com/examples/azure/chat_with_your_own_data"} {"Chunk ID": "Azure chat completion models with your own data (preview)_19", "Chunk": "In this example, we'll use dotenv to load our environment variables. To connect with Azure OpenAI and the Search index, the following variables should be added to a .env file in KEY=VALUE format:", "Source": "https://cookbook.openai.com/examples/azure/chat_with_your_own_data"} {"Chunk ID": "Azure chat completion models with your own data (preview)_20", "Chunk": "OPENAI_API_BASE - the Azure OpenAI endpoint. 
This can be found under \"Keys and Endpoints\" for your Azure OpenAI resource in the Azure Portal.", "Source": "https://cookbook.openai.com/examples/azure/chat_with_your_own_data"} {"Chunk ID": "Azure chat completion models with your own data (preview)_21", "Chunk": "OPENAI_API_KEY - the Azure OpenAI API key. This can be found under \"Keys and Endpoints\" for your Azure OpenAI resource in the Azure Portal. Omit if using Azure Active Directory authentication (see below Authentication using Microsoft Active Directory)", "Source": "https://cookbook.openai.com/examples/azure/chat_with_your_own_data"} {"Chunk ID": "Azure chat completion models with your own data (preview)_22", "Chunk": "SEARCH_ENDPOINT - the Cognitive Search endpoint. This URL can be found on the \"Overview\" of your Search resource on the Azure Portal.", "Source": "https://cookbook.openai.com/examples/azure/chat_with_your_own_data"} {"Chunk ID": "Azure chat completion models with your own data (preview)_23", "Chunk": "SEARCH_KEY - the Cognitive Search API key. Found under \"Keys\" for your Search resource in the Azure Portal.", "Source": "https://cookbook.openai.com/examples/azure/chat_with_your_own_data"} {"Chunk ID": "Azure chat completion models with your own data (preview)_24", "Chunk": "SEARCH_INDEX_NAME - the name of the index you created with your own data.", "Source": "https://cookbook.openai.com/examples/azure/chat_with_your_own_data"} {"Chunk ID": "Azure chat completion models with your own data (preview)_25", "Chunk": "import os\nimport openai\nimport dotenv", "Source": "https://cookbook.openai.com/examples/azure/chat_with_your_own_data"} {"Chunk ID": "Azure chat completion models with your own data (preview)_26", "Chunk": "dotenv.load_dotenv()", "Source": "https://cookbook.openai.com/examples/azure/chat_with_your_own_data"} {"Chunk ID": "Azure chat completion models with your own data (preview)_27", "Chunk": "openai.api_base = os.environ[\"OPENAI_API_BASE\"]", "Source": "https://cookbook.openai.com/examples/azure/chat_with_your_own_data"} {"Chunk ID": "Azure chat completion models with your own data (preview)_28", "Chunk": "# Azure OpenAI on your own data is only supported by the 2023-08-01-preview API version", "Source": "https://cookbook.openai.com/examples/azure/chat_with_your_own_data"} {"Chunk ID": "Azure chat completion models with your own data (preview)_29", "Chunk": "openai.api_version = \"2023-08-01-preview\"", "Source": "https://cookbook.openai.com/examples/azure/chat_with_your_own_data"} {"Chunk ID": "Azure chat completion models with your own data (preview)_30", "Chunk": "Authentication\nThe Azure OpenAI service supports multiple authentication mechanisms that include API keys and Azure credentials.", "Source": "https://cookbook.openai.com/examples/azure/chat_with_your_own_data"} {"Chunk ID": "Azure chat completion models with your own data (preview)_31", "Chunk": "use_azure_active_directory = False # Set this flag to True if you are using Azure Active Directory", "Source": "https://cookbook.openai.com/examples/azure/chat_with_your_own_data"} {"Chunk ID": "Azure Cognitive Search as a vector database for OpenAI embeddings", "Chunk": "This notebook provides step-by-step instructions on using Azure Cognitive Search as a vector database with OpenAI embeddings. 
Azure Cognitive Search (formerly known as \"Azure Search\") is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.", "Source": "https://cookbook.openai.com/examples/vector_databases/azuresearch/getting_started_with_azure_cognitive_search_and_openai"} {"Chunk ID": "Prerequisites", "Chunk": "For the purposes of this exercise you must have the following:\n\nAzure Cognitive Search Service\nOpenAI Key or Azure OpenAI credentials\n! pip install wget\n! pip install azure-search-documents --pre", "Source": "https://cookbook.openai.com/examples/vector_databases/azuresearch/getting_started_with_azure_cognitive_search_and_openai"} {"Chunk ID": "Import required libraries", "Chunk": "import openai\nimport json\nimport openai\nimport wget\nimport pandas as pd\nimport zipfile\nfrom azure.core.credentials import AzureKeyCredential\nfrom azure.search.documents import SearchClient\nfrom azure.search.documents.indexes import SearchIndexClient\nfrom azure.search.documents.models import Vector\nfrom azure.search.documents.indexes.models import (\n SearchIndex,\n SearchField,\n SearchFieldDataType,\n SimpleField,\n SearchableField,\n SearchIndex,\n SemanticConfiguration,\n PrioritizedFields,\n SemanticField,\n SearchField,\n SemanticSettings,\n VectorSearch,\n HnswVectorSearchAlgorithmConfiguration,\n)", "Source": "https://cookbook.openai.com/examples/vector_databases/azuresearch/getting_started_with_azure_cognitive_search_and_openai"} {"Chunk ID": "Configure OpenAI settings", "Chunk": "Configure your OpenAI or Azure OpenAI settings. For this example, we use Azure OpenAI.\n\nopenai.api_type = \"azure\"\nopenai.api_base = \"YOUR_AZURE_OPENAI_ENDPOINT\"\nopenai.api_version = \"2023-05-15\"\nopenai.api_key = \"YOUR_AZURE_OPENAI_KEY\"\nmodel: str = \"text-embedding-ada-002\"", "Source": "https://cookbook.openai.com/examples/vector_databases/azuresearch/getting_started_with_azure_cognitive_search_and_openai"} {"Chunk ID": "Configure Azure Cognitive Search Vector Store settings", "Chunk": "You can find this in the Azure Portal or using the Search Management SDK\n\nsearch_service_endpoint: str = \"YOUR_AZURE_SEARCH_ENDPOINT\"\nsearch_service_api_key: str = \"YOUR_AZURE_SEARCH_ADMIN_KEY\"\nindex_name: str = \"azure-cognitive-search-vector-demo\"\ncredential = AzureKeyCredential(search_service_api_key)", "Source": "https://cookbook.openai.com/examples/vector_databases/azuresearch/getting_started_with_azure_cognitive_search_and_openai"} {"Chunk ID": "Load data", "Chunk": "# The file is ~700 MB so this will take some time\nwget.download(embeddings_url)\nwith zipfile.ZipFile(\"vector_database_wikipedia_articles_embedded.zip\",\"r\") as zip_ref:\n zip_ref.extractall(\"../../data\")\n\narticle_df = pd.read_csv('../../data/vector_database_wikipedia_articles_embedded.csv')\n\n# Read vectors from strings back into a list using json.loads\narticle_df[\"title_vector\"] = article_df.title_vector.apply(json.loads)\narticle_df[\"content_vector\"] = article_df.content_vector.apply(json.loads)\narticle_df['vector_id'] = article_df['vector_id'].apply(str)\narticle_df.head()", "Source": "https://cookbook.openai.com/examples/vector_databases/azuresearch/getting_started_with_azure_cognitive_search_and_openai"} {"Chunk ID": "Create an index", "Chunk": "# Configure a search index\nindex_client = SearchIndexClient(\n endpoint=search_service_endpoint, credential=credential)\nfields = [\n 
SimpleField(name=\"id\", type=SearchFieldDataType.String),\n SimpleField(name=\"vector_id\", type=SearchFieldDataType.String, key=True),\n SimpleField(name=\"url\", type=SearchFieldDataType.String),\n SearchableField(name=\"title\", type=SearchFieldDataType.String),\n SearchableField(name=\"text\", type=SearchFieldDataType.String),\n SearchField(name=\"title_vector\", type=SearchFieldDataType.Collection(SearchFieldDataType.Single),\n searchable=True, vector_search_dimensions=1536, vector_search_configuration=\"my-vector-config\"),\n SearchField(name=\"content_vector\", type=SearchFieldDataType.Collection(SearchFieldDataType.Single),\n searchable=True, vector_search_dimensions=1536, vector_search_configuration=\"my-vector-config\"),\n]\n\n# Configure the vector search configuration\nvector_search = VectorSearch(\n algorithm_configurations=[\n HnswVectorSearchAlgorithmConfiguration(\n name=\"my-vector-config\",\n kind=\"hnsw\",\n parameters={\n \"m\": 4,\n \"efConstruction\": 400,\n \"efSearch\": 500,\n \"metric\": \"cosine\"\n }\n )\n ]\n)\n\n# Optional: configure semantic reranking by passing your title, keywords, and content fields\nsemantic_config = SemanticConfiguration(\n name=\"my-semantic-config\",\n prioritized_fields=PrioritizedFields(\n title_field=SemanticField(field_name=\"title\"),\n prioritized_keywords_fields=[SemanticField(field_name=\"url\")],\n prioritized_content_fields=[SemanticField(field_name=\"text\")]\n )\n)\n# Create the semantic settings with the configuration\nsemantic_settings = SemanticSettings(configurations=[semantic_config])\n\n# Create the index\nindex = SearchIndex(name=index_name, fields=fields,\n vector_search=vector_search, semantic_settings=semantic_settings)\nresult = index_client.create_or_update_index(index)\nprint(f'{result.name} created')", "Source": "https://cookbook.openai.com/examples/vector_databases/azuresearch/getting_started_with_azure_cognitive_search_and_openai"} {"Chunk ID": "Insert text and embeddings into vector store", "Chunk": "In this notebook, the wikipedia articles dataset provided by OpenAI, the embeddings are pre-computed. 
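Note that the wget.download(embeddings_url) call in the data-loading step above relies on an embeddings_url variable that is not defined in these excerpts; elsewhere in this document the same pre-embedded Wikipedia dataset is fetched from the OpenAI CDN, so a plausible definition is:

# Pre-embedded Wikipedia articles (same archive used by the Elasticsearch notebooks later in this document)
embeddings_url = "https://cdn.openai.com/API/examples/data/vector_database_wikipedia_articles_embedded.zip"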
The code below takes the data frame and converts it into a dictionary list to upload to your Azure Search index.\n\n# Convert the 'id' and 'vector_id' columns to string so one of them can serve as our key field\narticle_df['id'] = article_df['id'].astype(str)\narticle_df['vector_id'] = article_df['vector_id'].astype(str)\n\n# Convert the DataFrame to a list of dictionaries\ndocuments = article_df.to_dict(orient='records')\n\nsearch_client = SearchClient(endpoint=search_service_endpoint, index_name=index_name, credential=credential)\n\n# Define the batch upload size\nbatch_size = 250\n\n# Split the documents into batches\nbatches = [documents[i:i + batch_size] for i in range(0, len(documents), batch_size)]\n\n# Upload each batch of documents\nfor batch in batches:\n result = search_client.upload_documents(batch)\n\nprint(f\"Uploaded {len(documents)} documents in total\")", "Source": "https://cookbook.openai.com/examples/vector_databases/azuresearch/getting_started_with_azure_cognitive_search_and_openai"} {"Chunk ID": "Perform a vector similarity search", "Chunk": "# Function to generate query embedding\ndef generate_embeddings(text):\n response = openai.Embedding.create(\n input=text, engine=model)\n embeddings = response['data'][0]['embedding']\n return embeddings\n\n# Pure Vector Search\nquery = \"modern art in Europe\"\n\nsearch_client = SearchClient(search_service_endpoint, index_name, AzureKeyCredential(search_service_api_key))\nvector = Vector(value=generate_embeddings(query), k=3, fields=\"content_vector\")\n\nresults = search_client.search(\n search_text=None,\n vectors=[vector],\n select=[\"title\", \"text\", \"url\"]\n)\n\nfor result in results:\n print(f\"Title: {result['title']}\")\n print(f\"Score: {result['@search.score']}\")\n print(f\"URL: {result['url']}\n\")", "Source": "https://cookbook.openai.com/examples/vector_databases/azuresearch/getting_started_with_azure_cognitive_search_and_openai"} {"Chunk ID": "Perform a Hybrid Search", "Chunk": "# Hybrid Search\nquery = \"Famous battles in Scottish history\"\n\nsearch_client = SearchClient(search_service_endpoint, index_name, AzureKeyCredential(search_service_api_key))\nvector = Vector(value=generate_embeddings(query), k=3, fields=\"content_vector\")\n\nresults = search_client.search(\n search_text=query,\n vectors=[vector],\n select=[\"title\", \"text\", \"url\"],\n top=3\n)\n\nfor result in results:\n print(f\"Title: {result['title']}\")\n print(f\"Score: {result['@search.score']}\")\n print(f\"URL: {result['url']}\n\")", "Source": "https://cookbook.openai.com/examples/vector_databases/azuresearch/getting_started_with_azure_cognitive_search_and_openai"} {"Chunk ID": "Perform a Hybrid Search with Reranking (powered by Bing)", "Chunk": "# Semantic Hybrid Search\nquery = \"Famous battles in Scottish history\"\n\nsearch_client = SearchClient(search_service_endpoint, index_name, AzureKeyCredential(search_service_api_key))\nvector = Vector(value=generate_embeddings(query), k=3, fields=\"content_vector\")\n\nresults = search_client.search(\n search_text=query,\n vectors=[vector],\n select=[\"title\", \"text\", \"url\"],\n query_type=\"semantic\", query_language=\"en-us\", semantic_configuration_name='my-semantic-config', query_caption=\"extractive\", query_answer=\"extractive\",\n top=3\n)\n\nsemantic_answers = results.get_answers()\nfor answer in semantic_answers:\n if answer.highlights:\n print(f\"Semantic Answer: {answer.highlights}\")\n else:\n print(f\"Semantic Answer: {answer.text}\")\n print(f\"Semantic Answer Score: 
{answer.score}\\n\")\n\nfor result in results:\n print(f\"Title: {result['title']}\")\n print(f\"URL: {result['url']}\")\n captions = result[\"@search.captions\"]\n if captions:\n caption = captions[0]\n if caption.highlights:\n print(f\"Caption: {caption.highlights}\\n\")\n else:\n print(f\"Caption: {caption.text}\\n\")", "Source": "https://cookbook.openai.com/examples/vector_databases/azuresearch/getting_started_with_azure_cognitive_search_and_openai"} {"Chunk ID": "Using Tair as a vector database for OpenAI embeddings", "Chunk": "This notebook guides you step by step on using Tair as a vector database for OpenAI embeddings.", "Source": "https://cookbook.openai.com/examples/vector_databases/tair/getting_started_with_tair_and_openai"} {"Chunk ID": "Using precomputed embeddings created by OpenAI API", "Chunk": "Using precomputed embeddings created by OpenAI API. Storing the embeddings in a cloud instance of Tair.", "Source": "https://cookbook.openai.com/examples/vector_databases/tair/getting_started_with_tair_and_openai"} {"Chunk ID": "Converting raw text query to an embedding with OpenAI API", "Chunk": "Converting raw text query to an embedding with OpenAI API. Using Tair to perform the nearest neighbor search in the created collection.", "Source": "https://cookbook.openai.com/examples/vector_databases/tair/getting_started_with_tair_and_openai"} {"Chunk ID": "What is Tair", "Chunk": "Tair is a cloud-native in-memory database service that is developed by Alibaba Cloud. Tair is compatible with open-source Redis and provides a variety of data models and enterprise-class capabilities to support your real-time online scenarios. Tair also introduces persistent memory-optimized instances that are based on the new non-volatile memory (NVM) storage medium. These instances can reduce costs by 30%, ensure data persistence, and provide almost the same performance as in-memory databases. Tair has been widely used in areas such as government affairs, finance, manufacturing, healthcare, and pan-Internet to meet their high-speed query and computing requirements.", "Source": "https://cookbook.openai.com/examples/vector_databases/tair/getting_started_with_tair_and_openai"} {"Chunk ID": "TairVector is an in-house data structure", "Chunk": "TairVector is an in-house data structure that provides high-performance real-time storage and retrieval of vectors. TairVector provides two indexing algorithms: Hierarchical Navigable Small World (HNSW) and Flat Search. Additionally, TairVector supports multiple distance functions, such as Euclidean distance, inner product, and Jaccard distance. Compared with traditional vector retrieval services, TairVector has the following advantages:", "Source": "https://cookbook.openai.com/examples/vector_databases/tair/getting_started_with_tair_and_openai"} {"Chunk ID": "Deployment options", "Chunk": "Deployment options: Using Tair Cloud Vector Database. Click here to fast deploy it.", "Source": "https://cookbook.openai.com/examples/vector_databases/tair/getting_started_with_tair_and_openai"} {"Chunk ID": "Prerequisites", "Chunk": "Prerequisites: For the purposes of this exercise, we need to prepare a couple of things: Tair cloud server instance. The 'tair' library to interact with the tair database. An OpenAI API key. 
Install requirements.", "Source": "https://cookbook.openai.com/examples/vector_databases/tair/getting_started_with_tair_and_openai"} {"Chunk ID": "Install requirements", "Chunk": "Install requirements: This notebook obviously requires the openai and tair packages, but there are also some other additional libraries we will use. The following command installs them all: ! pip install openai redis tair pandas wget", "Source": "https://cookbook.openai.com/examples/vector_databases/tair/getting_started_with_tair_and_openai"} {"Chunk ID": "Prepare your OpenAI API key", "Chunk": "Prepare your OpenAI API key: The OpenAI API key is used for vectorization of the documents and queries. If you don't have an OpenAI API key, you can get one from https://beta.openai.com/account/api-keys. Once you get your key, please add it by getpass. import getpass openai.api_key = getpass.getpass('Input your OpenAI API key:')", "Source": "https://cookbook.openai.com/examples/vector_databases/tair/getting_started_with_tair_and_openai"} {"Chunk ID": "Connect to Tair", "Chunk": "Connect to Tair: First, add it to your environment variables. Connecting to a running instance of Tair server is easy with the official Python library. # The format of URL: redis://[[username]:[password]]@localhost:6379/0 TAIR_URL = getpass.getpass('Input your tair URL:') from tair import Tair as TairClient url = TAIR_URL client = TairClient.from_url(url) We can test the connection by ping: client.ping()", "Source": "https://cookbook.openai.com/examples/vector_databases/tair/getting_started_with_tair_and_openai"} {"Chunk ID": "Load data", "Chunk": "Load data: In this section, we are going to load the data prepared previously for this session, so you don't have to recompute the embeddings of Wikipedia articles with your own credits. import pandas as pd from ast import literal_eval # Path to your local CSV file csv_file_path = '../../data/vector_database_wikipedia_articles_embedded.csv' article_df = pd.read_csv(csv_file_path) # Read vectors from strings back into a list article_df['title_vector'] = article_df.title_vector.apply(literal_eval).values article_df['content_vector'] = article_df.content_vector.apply(literal_eval).values", "Source": "https://cookbook.openai.com/examples/vector_databases/tair/getting_started_with_tair_and_openai"} {"Chunk ID": "Create Index", "Chunk": "Create Index: Tair stores data in indexes where each object is described by one key. Each key contains a vector and multiple attribute_keys. We will start with creating two indexes, one for title_vector and one for content_vector, and then we will fill it with our precomputed embeddings. # set index parameters index = 'openai_test' embedding_dim = 1536 distance_type = 'L2' index_type = 'HNSW' data_type = 'FLOAT32' # Create two indexes, one for title_vector and one for content_vector, skip if already exists index_names = [index + '_title_vector', index+'_content_vector'] for index_name in index_names: index_connection = client.tvs_get_index(index_name) if index_connection is not None: print('Index already exists') else: client.tvs_create_index(name=index_name, dim=embedding_dim, distance_type=distance_type, index_type=index_type, data_type=data_type)", "Source": "https://cookbook.openai.com/examples/vector_databases/tair/getting_started_with_tair_and_openai"} {"Chunk ID": "Search data", "Chunk": "Search data: Once the data is put into Tair, we will start querying the collection for the closest vectors. 
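Between creating the two indexes and querying them, the embeddings still have to be written into Tair. The following is only a sketch of that load step, assuming the tair client's tvs_hset write command and the article_df loaded above (the attribute name is illustrative):

# Write each title embedding into the title-vector index, keeping the article title as an attribute
# (tvs_hset usage and its keyword arguments are an assumption based on the tair client's documented API)
for _, row in article_df.iterrows():
    client.tvs_hset(
        index=index + "_title_vector",
        key=str(row["id"]),
        vector=row["title_vector"],
        title=row["title"],
    )
# The content_vector index would be filled the same way with row["content_vector"].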
We may provide an additional parameter vector_name to switch from title to content-based search. Since the precomputed embeddings were created with text-embedding-ada-002 OpenAI model, we also have to use it during search. def query_tair(client, query, vector_name='title_vector', top_k=5): # Creates an embedding vector from the user query embedded_query = openai.Embedding.create( input=query, model='text-embedding-ada-002', )['data'][0]['embedding'] embedded_query = np.array(embedded_query) # Search for the top k approximate nearest neighbors of the vector in an index query_result = client.tvs_knnsearch(index=index+'_'+vector_name, k=top_k, vector=embedded_query) return query_result import openai import numpy as np query_result = query_tair(client=client, query='modern art in Europe', vector_name='title_vector') for i in range(len(query_result)): title = client.tvs_hmget(index+'_'+'content_vector', query_result[i][0].decode('utf-8'), 'title') print(f'{i + 1}. {title[0].decode('utf-8')} (Distance: {round(query_result[i][1],3)})') # This time we'll query using content vector query_result = query_tair(client=client, query='Famous battles in Scottish history', vector_name='content_vector') for i in range(len(query_result)): title = client.tvs_hmget(index+'_'+'content_vector', query_result[i][0].decode('utf-8'), 'title') print(f'{i + 1}. {title[0].decode('utf-8')} (Distance: {round(query_result[i][1],3)})')", "Source": "https://cookbook.openai.com/examples/vector_databases/tair/getting_started_with_tair_and_openai"} {"Chunk ID": "Question Answering with Langchain, Tair and OpenAI", "Chunk": "This notebook presents how to implement a Question Answering system with Langchain, Tair as a knowledge based and OpenAI embeddings. If you are not familiar with Tair, it’s better to check out the Getting_started_with_Tair_and_OpenAI.ipynb notebook.", "Source": "https://cookbook.openai.com/examples/vector_databases/tair/qa_with_langchain_tair_and_openai"} {"Chunk ID": "Calculating the embeddings with OpenAI API", "Chunk": "This notebook presents an end-to-end process of:\n\nCalculating the embeddings with OpenAI API.\nStoring the embeddings in a Tair instance to build a knowledge base.\nConverting raw text query to an embedding with OpenAI API.\nUsing Tair to perform the nearest neighbour search in the created collection to find some context.\nAsking LLM to find the answer in a given context.\nAll the steps will be simplified to calling some corresponding Langchain methods.", "Source": "https://cookbook.openai.com/examples/vector_databases/tair/qa_with_langchain_tair_and_openai"} {"Chunk ID": "Prerequisites", "Chunk": "For the purposes of this exercise, we need to prepare a couple of things: Tair cloud instance. Langchain as a framework. An OpenAI API key.\n\nInstall requirements\nThis notebook requires the following Python packages: openai, tiktoken, langchain, and tair.\n\nopenai provides convenient access to the OpenAI API.\ntiktoken is a fast BPE tokeniser for use with OpenAI's models.\nlangchain helps us to build applications with LLM more easily.\ntair library is used to interact with the tair vector database.\n! 
pip install openai tiktoken langchain tair", "Source": "https://cookbook.openai.com/examples/vector_databases/tair/qa_with_langchain_tair_and_openai"} {"Chunk ID": "Prepare your OpenAI API key", "Chunk": "The OpenAI API key is used for vectorization of the documents and queries.\n\nIf you don't have an OpenAI API key, you can get one from [https://platform.openai.com/account/api-keys ).\n\nOnce you get your key, please add it by getpass.\n\nimport getpass\n\nopenai_api_key = getpass.getpass(\"Input your OpenAI API key:\")", "Source": "https://cookbook.openai.com/examples/vector_databases/tair/qa_with_langchain_tair_and_openai"} {"Chunk ID": "Prepare your Tair URL", "Chunk": "To build the Tair connection, you need to have TAIR_URL.\n\n# The format of url: redis://[[username]:[password]]@localhost:6379/0\nTAIR_URL = getpass.getpass(\"Input your tair url:\")", "Source": "https://cookbook.openai.com/examples/vector_databases/tair/qa_with_langchain_tair_and_openai"} {"Chunk ID": "Load data", "Chunk": "In this section, we are going to load the data containing some natural questions and answers to them. All the data will be used to create a Langchain application with Tair being the knowledge base.\n\nimport wget\n\n# All the examples come from https://ai.google.com/research/NaturalQuestions\n# This is a sample of the training set that we download and extract for some\n# further processing.\nwget.download(\"https://storage.googleapis.com/dataset-natural-questions/questions.json\")\nwget.download(\"https://storage.googleapis.com/dataset-natural-questions/answers.json\")\n\nimport json\n\nwith open(\"questions.json\", \"r\") as fp:\n questions = json.load(fp)\n\nwith open(\"answers.json\", \"r\") as fp:\n answers = json.load(fp)\n\nprint(questions[0])\nprint(answers[0])", "Source": "https://cookbook.openai.com/examples/vector_databases/tair/qa_with_langchain_tair_and_openai"} {"Chunk ID": "Chain definition", "Chunk": "Langchain is already integrated with Tair and performs all the indexing for a given list of documents. In our case, we are going to store the set of answers we have.\n\nfrom langchain.vectorstores import Tair\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain import VectorDBQA, OpenAI\n\nembeddings = OpenAIEmbeddings(openai_api_key=openai_api_key)\ndoc_store = Tair.from_texts(\n texts=answers, embedding=embeddings, tair_url=TAIR_URL,\n)\n\nAt this stage, all the possible answers are already stored in Tair, so we can define the whole QA chain.\n\nllm = OpenAI(openai_api_key=openai_api_key)\nqa = VectorDBQA.from_chain_type(\n llm=llm,\n chain_type=\"stuff\",\n vectorstore=doc_store,\n return_source_documents=False,\n)", "Source": "https://cookbook.openai.com/examples/vector_databases/tair/qa_with_langchain_tair_and_openai"} {"Chunk ID": "Search data", "Chunk": "Once the data is put into Tair, we can start asking some questions. A question will be automatically vectorized by the OpenAI model, and the created vector will be used to find some possibly matching answers in Tair. 
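Before asking the chain full questions, it can help to sanity-check retrieval on its own. A small sketch using the doc_store defined above (similarity_search is the generic LangChain vector-store method; the query string is just an example):

# Look at the raw nearest-neighbour answers for a query, without any LLM step
hits = doc_store.similarity_search("Who wrote the declaration of independence?", k=3)
for doc in hits:
    print(doc.page_content[:100])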
Once retrieved, the most similar answers will be incorporated into the prompt sent to the OpenAI Large Language Model.\n\nimport random\n\nrandom.seed(52)\nselected_questions = random.choices(questions, k=5)\n\nimport time\nfor question in selected_questions:\n print(\">\", question)\n print(qa.run(question), end=\"\\n\\n\")\n # wait 20 seconds because of the rate limit\n time.sleep(20)", "Source": "https://cookbook.openai.com/examples/vector_databases/tair/qa_with_langchain_tair_and_openai"} {"Chunk ID": "Custom prompt templates", "Chunk": "The stuff chain type in Langchain uses a specific prompt with question and context documents incorporated. This is what the default prompt looks like:\n\nUse the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n{context}\nQuestion: {question}\nHelpful Answer:\nWe can, however, provide our prompt template and change the behavior of the OpenAI LLM, while still using the stuff chain type. It is important to keep {context} and {question} as placeholders.\nExperimenting with custom prompts\nWe can try using a different prompt template, so the model:\n\nResponds with a single-sentence answer if it knows it.\nSuggests a random song title if it doesn't know the answer to our question.", "Source": "https://cookbook.openai.com/examples/vector_databases/tair/qa_with_langchain_tair_and_openai"} {"Chunk ID": "Experimenting with custom prompts", "Chunk": "from langchain.prompts import PromptTemplate\ncustom_prompt = \"\"\"\nUse the following pieces of context to answer the question at the end. Please provide\na short single-sentence summary answer only. If you don't know the answer or if it's\nnot present in given context, don't try to make up an answer, but suggest me a random\nunrelated song title I could listen to.\nContext: {context}\nQuestion: {question}\nHelpful Answer:\n\"\"\"\n\ncustom_prompt_template = PromptTemplate(\n template=custom_prompt, input_variables=[\"context\", \"question\"]\n)\n\ncustom_qa = VectorDBQA.from_chain_type(\n llm=llm,\n chain_type=\"stuff\",\n vectorstore=doc_store,\n return_source_documents=False,\n chain_type_kwargs={\"prompt\": custom_prompt_template},\n)\n\nrandom.seed(41)\nfor question in random.choices(questions, k=5):\n print(\">\", question)\n print(custom_qa.run(question), end=\"\\n\\n\")\n # wait 20 seconds because of the rate limit\n time.sleep(20)", "Source": "https://cookbook.openai.com/examples/vector_databases/tair/qa_with_langchain_tair_and_openai"} {"Chunk ID": "CQL Version", "Chunk": "In this quickstart you will learn how to build a \"philosophy quote finder & generator\" using OpenAI's vector embeddings and DataStax Astra DB (or a vector-capable Apache Cassandra® cluster, if you prefer) as the vector store for data persistence.\n\nThe basic workflow of this notebook is outlined below. You will evaluate and store the vector embeddings for a number of quotes by famous philosophers, use them to build a powerful search engine and, after that, even a generator of new quotes!", "Source": "https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql"} {"Chunk ID": "Choose-your-framework", "Chunk": "Please note that this notebook uses the Cassandra drivers and runs CQL (Cassandra Query Language) statements directly, but we cover other choices of technology to accomplish the same task. Check out this folder's README for other options. 
This notebook can run either as a Colab notebook or as a regular Jupyter notebook.", "Source": "https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql"} {"Chunk ID": "Setup", "Chunk": "First install some required packages:\n\n!pip install cassandra-driver openai", "Source": "https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql"} {"Chunk ID": "Get DB connection", "Chunk": "A couple of secrets are required to create a Session object (a connection to your Astra DB instance).\n\n(Note: some steps will be slightly different on Google Colab and on local Jupyter, that's why the notebook will detect the runtime type.)", "Source": "https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql"} {"Chunk ID": "Creation of the DB connection", "Chunk": "This is how you create a connection to Astra DB:\n\n(Incidentally, you could also use any Cassandra cluster (as long as it provides Vector capabilities), just by changing the parameters to the following Cluster instantiation.)", "Source": "https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql"} {"Chunk ID": "Creation of the Vector table in CQL", "Chunk": "You need a table which support vectors and is equipped with metadata. Call it \"philosophers_cql\".\n\nEach row will store: a quote, its vector embedding, the quote author and a set of \"tags\". You also need a primary key to ensure uniqueness of rows.\n\nThe following is the full CQL command that creates the table (check out this page for more on the CQL syntax of this and the following statements):", "Source": "https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql"} {"Chunk ID": "Add a vector index for ANN search", "Chunk": "In order to run ANN (approximate-nearest-neighbor) searches on the vectors in the table, you need to create a specific index on the embedding_vector column.\n\nWhen creating the index, you can optionally choose the \"similarity function\" used to compute vector distances: since for unit-length vectors (such as those from OpenAI) the \"cosine difference\" is the same as the \"dot product\", you'll use the latter which is computationally less expensive.\n\nRun this CQL statement:", "Source": "https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql"} {"Chunk ID": "Add indexes for author and tag filtering", "Chunk": "That is enough to run vector searches on the table ... but you want to be able to optionally specify an author and/or some tags to restrict the quote search. 
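The exact statements are not reproduced in these excerpts, so the following is only a rough sketch of what the table and ANN index described above could look like, assuming Astra's vector CQL syntax (VECTOR<FLOAT, 1536> for ada-002 embeddings, a storage-attached index) and the session and keyspace from the connection step; the column names other than embedding_vector are hypothetical:

# Hypothetical reconstruction of the quotes table: text, embedding, author, tags, and a unique key
session.execute(f"""
CREATE TABLE IF NOT EXISTS {keyspace}.philosophers_cql (
    quote_id UUID PRIMARY KEY,
    body TEXT,
    embedding_vector VECTOR<FLOAT, 1536>,
    author TEXT,
    tags SET<TEXT>
);
""")

# Hypothetical ANN index on the embedding column, using dot product as discussed above
session.execute(f"""
CREATE CUSTOM INDEX IF NOT EXISTS eidx_embedding_vector
ON {keyspace}.philosophers_cql (embedding_vector)
USING 'StorageAttachedIndex'
WITH OPTIONS = {{'similarity_function': 'DOT_PRODUCT'}};
""")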
Create two other indexes to support this:", "Source": "https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql"} {"Chunk ID": "Connect to OpenAI", "Chunk": "Set up your secret key\n\nOPENAI_API_KEY = getpass(\"Please enter your OpenAI API Key: \")\nimport openai\n\nopenai.api_key = OPENAI_API_KEY\n\nA test call for embeddings\nQuickly check how one can get the embedding vectors for a list of input texts:\n\nembedding_model_name = \"text-embedding-ada-002\"\n\nresult = openai.Embedding.create(\n input=[\n \"This is a sentence\",\n \"A second sentence\"\n ],\n engine=embedding_model_name,\n)\n\nprint(f\"len(result.data) = {len(result.data)}\")\nprint(f\"result.data[1].embedding = {str(result.data[1].embedding)[:55]}...\")\nprint(f\"len(result.data[1].embedding) = {len(result.data[1].embedding)}\")", "Source": "https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql"} {"Chunk ID": "Load quotes into the Vector Store", "Chunk": "Get a JSON file containing our quotes. We already prepared this collection and put it into this repo for quick loading.\n\n(Note: we adapted the following from a Kaggle dataset -- which we acknowledge -- and also added a few tags to each quote.)", "Source": "https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql"} {"Chunk ID": "A quick inspection of the input data structure:", "Chunk": "print(quote_dict[\"source\"])\n\ntotal_quotes = sum(len(quotes) for quotes in quote_dict[\"quotes\"].values())\nprint(f\"\\nQuotes loaded: {total_quotes}.\nBy author:\")\nprint(\"\\n\".join(f\" {author} ({len(quotes)})\" for author, quotes in quote_dict[\"quotes\"].items()))\n\nprint(\"\\nSome examples:\")\nfor author, quotes in list(quote_dict[\"quotes\"].items())[:2]:\n print(f\" {author}:\")\n for quote in quotes[:2]:\n print(f\" {quote['body'][:50]} ... (tags: {', '.join(quote['tags'])})\")", "Source": "https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql"} {"Chunk ID": "Insert quotes into vector store", "Chunk": "You will compute the embeddings for the quotes and save them into the Vector Store, along with the text itself and the metadata planned for later use.\n\nTo optimize speed and reduce the calls, you'll perform batched calls to the embedding OpenAI service, with one batch per author.\n\nThe DB write is accomplished with a CQL statement. 
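Continuing the hypothetical column names from the sketch above, a single write could look like the following; as the next note explains, the notebook prepares this statement once and reuses it rather than re-parsing it for every row:

from uuid import uuid4

# Embed one quote with the same model as the test call above, then insert a row
# (sample values; list-valued vectors require a cassandra-driver version that understands the VECTOR type)
emb = openai.Embedding.create(input=["Sample quote text"], engine=embedding_model_name).data[0].embedding
session.execute(
    f"INSERT INTO {keyspace}.philosophers_cql (quote_id, body, embedding_vector, author, tags) "
    "VALUES (%s, %s, %s, %s, %s)",
    (uuid4(), "Sample quote text", emb, "plato", {"knowledge"}),
)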
But since you'll run this particular insertion several times (albeit with different values), it's best to prepare the statement and then just run it over and over.\n\n(Note: for faster execution, the Cassandra drivers would let you do concurrent inserts, which we don't do here for a more straightforward demo code.)", "Source": "https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql"} {"Chunk ID": "Use case 1: quote search engine", "Chunk": "For the quote-search functionality, you need first to make the input quote into a vector, and then use it to query the store (besides handling the optional metadata into the search call, that is).\n\nEncapsulate the search-engine functionality into a function for ease of re-use:", "Source": "https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql"} {"Chunk ID": "Putting search to test", "Chunk": "Passing just a quote:\n\nfind_quote_and_author(\"We struggle all our life for nothing\", 3)", "Source": "https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql"} {"Chunk ID": "Chunk 1", "Chunk": "find_quote_and_author(\"We struggle all our life for nothing\", 2, author=\"nietzsche\")\nSearch constrained to a tag (out of those saved earlier with the quotes):", "Source": "https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql"} {"Chunk ID": "Chunk 2", "Chunk": "quote = \"Animals are our equals.\"\n# quote = \"Be good.\"\n# quote = \"This teapot is strange.\"\n\nsimilarity_threshold = 0.9\n\nquote_vector = openai.Embedding.create(\n input=[quote],\n engine=embedding_model_name,\n).data[0].embedding", "Source": "https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql"} {"Chunk ID": "Chunk 3", "Chunk": "Use case 2: quote generator\nFor this task you need another component from OpenAI, namely an LLM to generate the quote for us (based on input obtained by querying the Vector Store).\n\nYou also need a template for the prompt that will be filled for the generate-quote LLM completion task:", "Source": "https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql"} {"Chunk ID": "Chunk 4", "Chunk": "q_topic = generate_quote(\"politics and virtue\")\nprint(\"\\nA new generated quote:\")\nprint(q_topic)\n\nUse inspiration from just a single philosopher:\n\nq_topic = generate_quote(\"animals\", author=\"schopenhauer\")\nprint(\"\\nA new generated quote:\")\nprint(q_topic)", "Source": "https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql"} {"Chunk ID": "Chunk 5", "Chunk": "(Optional) Partitioning\nThere's an interesting topic to examine before completing this quickstart. While, generally, tags and quotes can be in any relationship (e.g. a quote having multiple tags), authors are effectively an exact grouping (they define a \"disjoint partitioning\" on the set of quotes): each quote has exactly one author (for us, at least).", "Source": "https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql"} {"Chunk ID": "Chunk 6", "Chunk": "Now, suppose you know in advance your application will usually (or always) run queries on a single author. 
Then you can take full advantage of the underlying database structure: if you group quotes in partitions (one per author), vector queries on just an author will use fewer resources and return much faster.", "Source": "https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql"} {"Chunk ID": "Chunk 7", "Chunk": "Conclusion\nCongratulations! You have learned how to use OpenAI for vector embeddings and Astra DB / Cassandra for storage in order to build a sophisticated philosophical search engine and quote generator.", "Source": "https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql"} {"Chunk ID": "Chunk 8", "Chunk": "Cleanup\nIf you want to remove all resources used for this demo, run this cell (warning: this will delete the tables and the data inserted in them!):\n\nsession.execute(f\"DROP TABLE IF EXISTS {keyspace}.philosophers_cql;\")\nsession.execute(f\"DROP TABLE IF EXISTS {keyspace}.philosophers_cql_partitioned;\")", "Source": "https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql"} {"Chunk ID": "Cassandra / Astra DB - Chunk 1", "Chunk": "The example notebooks in this directory show how to use the Vector Search capabilities available today in DataStax Astra DB, a serverless Database-as-a-Service built on Apache Cassandra®.", "Source": "https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/readme"} {"Chunk ID": "Cassandra / Astra DB - Chunk 2", "Chunk": "Moreover, support for vector-oriented workloads is making its way to the next major release of Cassandra, so that the code examples in this folder are designed to work equally well on it as soon as the vector capabilities get released.", "Source": "https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/readme"} {"Chunk ID": "Cassandra / Astra DB - Chunk 3", "Chunk": "If you want to know more about Astra DB and its Vector Search capabilities, head over to astra.datastax.com or try out one of these hands-on notebooks straight away:", "Source": "https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/readme"} {"Chunk ID": "Cassandra / Astra DB - Chunk 4", "Chunk": "Search/generate quotes: CassIO colab url: https://colab.research.google.com/github/openai/openai-cookbook/blob/main/examples/vector_databases/cassandra_astradb/Philosophical_Quotes_cassIO.ipynb#scrollTo=08435bae-1bb9-4c14-ba21-7b4a7bdee3f5", "Source": "https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/readme"} {"Chunk ID": "Cassandra / Astra DB - Chunk 5", "Chunk": "Plain Cassandra colab url: https://colab.research.google.com/github/openai/openai-cookbook/blob/main/examples/vector_databases/cassandra_astradb/Philosophical_Quotes_CQL.ipynb", "Source": "https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/readme"} {"Chunk ID": "Retrieval augmented generation using Elasticsearch and OpenAI - Part 1", "Chunk": "This notebook demonstrates how to:\n\nIndex the OpenAI Wikipedia vector dataset into Elasticsearch\nEmbed a question with the OpenAI embeddings endpoint\nPerform semantic search on the Elasticsearch index using the encoded question\nSend the top search results to the OpenAI Chat Completions API endpoint for retrieval augmented generation (RAG)\nℹ️ If you've already worked through our semantic search notebook, you can skip ahead to the final step!", "Source": 
"https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation"} {"Chunk ID": "Retrieval augmented generation using Elasticsearch and OpenAI - Part 2", "Chunk": "Install packages and import modules\n# install packages\n\n!python3 -m pip install -qU openai pandas wget elasticsearch\n\n# import modules\n\nfrom getpass import getpass\nfrom elasticsearch import Elasticsearch, helpers\nimport wget\nimport zipfile\nimport pandas as pd\nimport json\nimport openai", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation"} {"Chunk ID": "Retrieval augmented generation using Elasticsearch and OpenAI - Part 3", "Chunk": "Connect to Elasticsearch\nℹ️ We're using an Elastic Cloud deployment of Elasticsearch for this notebook. If you don't already have an Elastic deployment, you can sign up for a free Elastic Cloud trial.\n\nTo connect to Elasticsearch, you need to create a client instance with the Cloud ID and password for your deployment.", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation"} {"Chunk ID": "Retrieval augmented generation using Elasticsearch and OpenAI - Part 4", "Chunk": "Find the Cloud ID for your deployment by going to https://cloud.elastic.co/deployments and selecting your deployment.\n\nCLOUD_ID = getpass(\"Elastic deployment Cloud ID\")\nCLOUD_PASSWORD = getpass(\"Elastic deployment Password\")\nclient = Elasticsearch(\n cloud_id = CLOUD_ID,\n basic_auth=(\"elastic\", CLOUD_PASSWORD) # Alternatively use `api_key` instead of `basic_auth`\n)\n\n# Test connection to Elasticsearch\nprint(client.info())", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation"} {"Chunk ID": "Retrieval augmented generation using Elasticsearch and OpenAI - Part 5", "Chunk": "Download the dataset\nIn this step we download the OpenAI Wikipedia embeddings dataset, and extract the zip file.\n\nembeddings_url = 'https://cdn.openai.com/API/examples/data/vector_database_wikipedia_articles_embedded.zip'\nwget.download(embeddings_url)", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation"} {"Chunk ID": "Retrieval augmented generation using Elasticsearch and OpenAI - Part 6", "Chunk": "with zipfile.ZipFile(\"vector_database_wikipedia_articles_embedded.zip\",\n\"r\") as zip_ref:\n zip_ref.extractall(\"data\")", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation"} {"Chunk ID": "Retrieval augmented generation using Elasticsearch and OpenAI - Part 7", "Chunk": "Read CSV file into a Pandas DataFrame.\nNext we use the Pandas library to read the unzipped CSV file into a DataFrame. This step makes it easier to index the data into Elasticsearch in bulk.\n\n\nwikipedia_dataframe = pd.read_csv(\"data/vector_database_wikipedia_articles_embedded.csv\")", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation"} {"Chunk ID": "Retrieval augmented generation using Elasticsearch and OpenAI - Part 8", "Chunk": "Create index with mapping\nNow we need to create an Elasticsearch index with the necessary mappings. 
This will enable us to index the data into Elasticsearch.", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation"} {"Chunk ID": "Retrieval augmented generation using Elasticsearch and OpenAI - Part 9", "Chunk": "We use the dense_vector field type for the title_vector and content_vector fields. This is a special field type that allows us to store dense vectors in Elasticsearch.\n\nLater, we'll need to target the dense_vector field for kNN search.\n\nindex_mapping= {\n \"properties\": {\n \"title_vector\": {\n \"type\": \"dense_vector\",\n \"dims\": 1536,\n \"index\": \"true\",\n \"similarity\": \"cosine\"\n },\n \"content_vector\": {\n \"type\": \"dense_vector\",\n \"dims\": 1536,\n \"index\": \"true\",\n \"similarity\": \"cosine\"\n },\n \"text\": {\"type\": \"text\"},\n \"title\": {\"type\": \"text\"},\n \"url\": { \"type\": \"keyword\"},\n \"vector_id\": {\"type\": \"long\"}\n \n }\n}\n\nclient.indices.create(index=\"wikipedia_vector_index\", mappings=index_mapping)", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation"} {"Chunk ID": "Retrieval augmented generation using Elasticsearch and OpenAI - Part 10", "Chunk": "Index data into Elasticsearch\nThe following function generates the required bulk actions that can be passed to Elasticsearch's Bulk API, so we can index multiple documents efficiently in a single request.\n\nFor each row in the DataFrame, the function yields a dictionary representing a single document to be indexed.", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation"} {"Chunk ID": "Retrieval augmented generation using Elasticsearch and OpenAI - Part 11", "Chunk": "def dataframe_to_bulk_actions(df):\n for index, row in df.iterrows():\n yield {\n \"_index\": 'wikipedia_vector_index',\n \"_id\": row['id'],\n \"_source\": {\n 'url' : row[\"url\"],\n 'title' : row[\"title\"],\n 'text' : row[\"text\"],\n 'title_vector' : json.loads(row[\"title_vector\"]),\n 'content_vector' : json.loads(row[\"content_vector\"]),\n 'vector_id' : row[\"vector_id\"]\n }\n }", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation"} {"Chunk ID": "Retrieval augmented generation using Elasticsearch and OpenAI - Part 12", "Chunk": "As the dataframe is large, we will index data in batches of 100. 
We index the data into Elasticsearch using the Python client's helpers for the bulk API.\n\nstart = 0\nend = len(wikipedia_dataframe)\nbatch_size = 100\nfor batch_start in range(start, end, batch_size):\n batch_end = min(batch_start + batch_size, end)\n batch_dataframe = wikipedia_dataframe.iloc[batch_start:batch_end]\n actions = dataframe_to_bulk_actions(batch_dataframe)\n helpers.bulk(client, actions)", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation"} {"Chunk ID": "Retrieval augmented generation using Elasticsearch and OpenAI - Part 13", "Chunk": "Let's test the index with a simple match query.\n\nprint(client.search(index=\"wikipedia_vector_index\", body={\n \"_source\": {\n \"excludes\": [\"title_vector\", \"content_vector\"]\n },\n \"query\": {\n \"match\": {\n \"text\": {\n \"query\": \"Hummingbird\"\n }\n }\n }\n}))", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation"} {"Chunk ID": "Retrieval augmented generation using Elasticsearch and OpenAI - Part 14", "Chunk": "Encode a question with OpenAI embedding model\nTo perform kNN search, we need to encode queries with the same embedding model used to encode the documents at index time. In this example, we need to use the text-embedding-ada-002 model.", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation"} {"Chunk ID": "Retrieval augmented generation using Elasticsearch and OpenAI - Part 15", "Chunk": "You'll need your OpenAI API key to generate the embeddings.\n\n# Get OpenAI API key\nOPENAI_API_KEY = getpass(\"Enter OpenAI API key\")\n\n# Set API key\nopenai.api_key = OPENAI_API_KEY\n\n# Define model\nEMBEDDING_MODEL = \"text-embedding-ada-002\"", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation"} {"Chunk ID": "Retrieval augmented generation using Elasticsearch and OpenAI - Part 16", "Chunk": "# Define question\nquestion = 'Is the Atlantic the biggest ocean in the world?'\n\n# Create embedding\nquestion_embedding = openai.Embedding.create(input=question, model=EMBEDDING_MODEL)", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation"} {"Chunk ID": "Retrieval augmented generation using Elasticsearch and OpenAI - Part 17", "Chunk": "Run semantic search queries\nNow we're ready to run queries against our Elasticsearch index using our encoded question. 
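Since the index mapping above declares 1536-dimensional dense_vector fields, it is worth confirming the freshly created query embedding has the matching dimensionality before running the kNN search:

# The query vector must match the "dims" declared in the index mapping
query_vector = question_embedding["data"][0]["embedding"]
assert len(query_vector) == 1536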
We'll be doing a k-nearest neighbors search, using the Elasticsearch kNN query option.", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation"} {"Chunk ID": "Retrieval augmented generation using Elasticsearch and OpenAI - Part 18", "Chunk": "First, we define a small function to pretty print the results.\n\n# Function to pretty print Elasticsearch results\n\ndef pretty_response(response):\n for hit in response['hits']['hits']:\n id = hit['_id']\n score = hit['_score']\n title = hit['_source']['title']\n text = hit['_source']['text']\n pretty_output = (f\"\\nID: {id}\\nTitle: {title}\\nSummary: {text}\\nScore: {score}\")\n print(pretty_output)", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation"} {"Chunk ID": "Retrieval augmented generation using Elasticsearch and OpenAI - Part 19", "Chunk": "Now let's run our kNN query.\n\nresponse = client.search(\n index = \"wikipedia_vector_index\",\n knn={\n \"field\": \"content_vector\",\n \"query_vector\": question_embedding[\"data\"][0][\"embedding\"],\n \"k\": 10,\n \"num_candidates\": 100\n }\n)\npretty_response(response)", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation"} {"Chunk ID": "Retrieval augmented generation using Elasticsearch and OpenAI - Part 20", "Chunk": "top_hit_summary = response['hits']['hits'][0]['_source']['text'] # Store content of top hit for final step\nSuccess! We've used kNN to perform semantic search over our dataset and found the top results.", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation"} {"Chunk ID": "Retrieval augmented generation using Elasticsearch and OpenAI - Part 21", "Chunk": "Now we can use the Chat Completions API to work some generative AI magic using the top search result as additional context.", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation"} {"Chunk ID": "Retrieval augmented generation using Elasticsearch and OpenAI - Part 22", "Chunk": "Use Chat Completions API for retrieval augmented generation\nNow we can send the question and the text to OpenAI's chat completion API.\n\nUsing a LLM model together with a retrieval model is known as retrieval augmented generation (RAG). We're using Elasticsearch to do what it does best, retrieve relevant documents. Then we use the LLM to do what it does best, tasks like generating summaries and answering questions, using the retrieved documents as context.", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation"} {"Chunk ID": "Retrieval augmented generation using Elasticsearch and OpenAI - Part 23", "Chunk": "The model will generate a response to the question, using the top kNN hit as context. Use the messages list to shape your prompt to the model. 
In this example, we're using the gpt-3.5-turbo model.", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation"} {"Chunk ID": "Retrieval augmented generation using Elasticsearch and OpenAI - Part 24", "Chunk": "summary = openai.ChatCompletion.create(\n model=\"gpt-3.5-turbo\",\n messages=[\n {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n {\"role\": \"user\", \"content\": \"Answer the following question:\" \n + question \n + \"by using the following text:\" \n + top_hit_summary},\n ]\n)\n\nchoices = summary.choices\n\nfor choice in choices:\n print(\"------------------------------------------------------------\")\n print(choice.message.content)\n print(\"------------------------------------------------------------\")", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation"} {"Chunk ID": "Retrieval augmented generation using Elasticsearch and OpenAI - Part 25", "Chunk": "Here's what that code does:Uses OpenAI's model to generate a response, Sends a conversation containing a system message and a user message to the model. The system message sets the assistant's role as 'helpful assistant'. The user message contains a question as specified in the original kNN query and some input text. The response from the model is stored in the summary.choices variable. Next steps. That was just one example of how to combine Elasticsearch with the power of OpenAI's models, to enable retrieval augmented generation. RAG allows you to avoid the costly and complex process of training or fine-tuning models, by leveraging out-of-the-box models, enhanced with additional context. Use this as a blueprint for your own experiments. To adapt the conversation for different use cases, customize the system message to define the assistant's behavior or persona. Adjust the user message to specify the task, such as summarization or question answering, along with the desired format of the response.", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation"} {"Chunk ID": "Semantic search using Elasticsearch and OpenAI (Part 1)", "Chunk": "This notebook demonstrates how to:\n\nIndex the OpenAI Wikipedia vector dataset into Elasticsearch\nEmbed a question with the OpenAI embeddings endpoint\nPerform semantic search on the Elasticsearch index using the encoded question\nInstall packages and import modules\n# install packages\n\n!python3 -m pip install -qU openai pandas wget elasticsearch\n\n# import modules\n\nfrom getpass import getpass\nfrom elasticsearch import Elasticsearch, helpers\nimport wget\nimport zipfile\nimport pandas as pd\nimport json\nimport openai", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-semantic-search"} {"Chunk ID": "Semantic search using Elasticsearch and OpenAI (Part 2)", "Chunk": "Connect to Elasticsearch\nℹ️ We're using an Elastic Cloud deployment of Elasticsearch for this notebook. 
If you don't already have an Elastic deployment, you can sign up for a free Elastic Cloud trial.\n\nTo connect to Elasticsearch, you need to create a client instance with the Cloud ID and password for your deployment.\n\nFind the Cloud ID for your deployment by going to https://cloud.elastic.co/deployments and selecting your deployment.\n\nCLOUD_ID = getpass(\"Elastic deployment Cloud ID\")\nCLOUD_PASSWORD = getpass(\"Elastic deployment Password\")\nclient = Elasticsearch(\n cloud_id = CLOUD_ID,\n basic_auth=(\"elastic\", CLOUD_PASSWORD) # Alternatively use `api_key` instead of `basic_auth`\n)", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-semantic-search"} {"Chunk ID": "Semantic search using Elasticsearch and OpenAI (Part 3)", "Chunk": "# Test connection to Elasticsearch\nprint(client.info())\nDownload the dataset\nIn this step we download the OpenAI Wikipedia embeddings dataset, and extract the zip file.\n\nembeddings_url = 'https://cdn.openai.com/API/examples/data/vector_database_wikipedia_articles_embedded.zip'\nwget.download(embeddings_url)\n\nwith zipfile.ZipFile(\"vector_database_wikipedia_articles_embedded.zip\",\n\"r\") as zip_ref:\n zip_ref.extractall(\"data\")", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-semantic-search"} {"Chunk ID": "Semantic search using Elasticsearch and OpenAI (Part 4)", "Chunk": "Read CSV file into a Pandas DataFrame\nNext we use the Pandas library to read the unzipped CSV file into a DataFrame. This step makes it easier to index the data into Elasticsearch in bulk.\n\n\nwikipedia_dataframe = pd.read_csv(\"data/vector_database_wikipedia_articles_embedded.csv\")", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-semantic-search"} {"Chunk ID": "Semantic search using Elasticsearch and OpenAI (Part 5)", "Chunk": "Create index with mapping\nNow we need to create an Elasticsearch index with the necessary mappings. This will enable us to index the data into Elasticsearch.\n\nWe use the dense_vector field type for the title_vector and content_vector fields. 
This is a special field type that allows us to store dense vectors in Elasticsearch.\n\nLater, we'll need to target the dense_vector field for kNN search.\n\nindex_mapping= {\n \"properties\": {\n \"title_vector\": {\n \"type\": \"dense_vector\",\n \"dims\": 1536,\n \"index\": \"true\",\n \"similarity\": \"cosine\"\n },", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-semantic-search"} {"Chunk ID": "Semantic search using Elasticsearch and OpenAI (Part 6)", "Chunk": "\"content_vector\": {\n \"type\": \"dense_vector\",\n \"dims\": 1536,\n \"index\": \"true\",\n \"similarity\": \"cosine\"\n },\n \"text\": {\"type\": \"text\"},\n \"title\": {\"type\": \"text\"},\n \"url\": { \"type\": \"keyword\"},\n \"vector_id\": {\"type\": \"long\"}\n }\n}\n\nclient.indices.create(index=\"wikipedia_vector_index\", mappings=index_mapping)\nIndex data into Elasticsearch\nThe following function generates the required bulk actions that can be passed to Elasticsearch's Bulk API, so we can index multiple documents efficiently in a single request.\n\nFor each row in the DataFrame, the function yields a dictionary representing a single document to be indexed.", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-semantic-search"} {"Chunk ID": "Semantic search using Elasticsearch and OpenAI (Part 7)", "Chunk": "def dataframe_to_bulk_actions(df):\n for index, row in df.iterrows():\n yield {\n \"_index\": 'wikipedia_vector_index',\n \"_id\": row['id'],\n \"_source\": {\n 'url' : row[\"url\"],\n 'title' : row[\"title\"],\n 'text' : row[\"text\"],\n 'title_vector' : json.loads(row[\"title_vector\"]),\n 'content_vector' : json.loads(row[\"content_vector\"]),\n 'vector_id' : row[\"vector_id\"]\n }\n }\n\nAs the dataframe is large, we will index data in batches of 100. We index the data into Elasticsearch using the Python client's helpers for the bulk API.", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-semantic-search"} {"Chunk ID": "Semantic search using Elasticsearch and OpenAI (Part 8)", "Chunk": "start = 0\nend = len(wikipedia_dataframe)\nbatch_size = 100\nfor batch_start in range(start, end, batch_size):\n batch_end = min(batch_start + batch_size, end)\n batch_dataframe = wikipedia_dataframe.iloc[batch_start:batch_end]\n actions = dataframe_to_bulk_actions(batch_dataframe)\n helpers.bulk(client, actions)\n\nLet's test the index with a simple match query.\n\nprint(client.search(index=\"wikipedia_vector_index\", body={\n \"_source\": {\n \"excludes\": [\"title_vector\", \"content_vector\"]\n },\n \"query\": {\n \"match\": {\n \"text\": {\n \"query\": \"Hummingbird\"\n }\n }\n }\n}))", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-semantic-search"} {"Chunk ID": "Semantic search using Elasticsearch and OpenAI (Part 9)", "Chunk": "Encode a question with OpenAI embedding model\nTo perform semantic search, we need to encode queries with the same embedding model used to encode the documents at index time. 
In this example, we need to use the text-embedding-ada-002 model.\n\nYou'll need your OpenAI API key to generate the embeddings.\n\n# Get OpenAI API key\nOPENAI_API_KEY = getpass(\"Enter OpenAI API key\")\n\n# Set API key\nopenai.api_key = OPENAI_API_KEY\n\n# Define model\nEMBEDDING_MODEL = \"text-embedding-ada-002\"\n\n# Define question\nquestion = 'Is the Atlantic the biggest ocean in the world?'\n\n# Create embedding\nquestion_embedding = openai.Embedding.create(input=question, model=EMBEDDING_MODEL)", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-semantic-search"} {"Chunk ID": "Semantic search using Elasticsearch and OpenAI (Part 10)", "Chunk": "Run semantic search queries\nNow we're ready to run queries against our Elasticsearch index using our encoded question. We'll be doing a k-nearest neighbors search, using the Elasticsearch kNN query option.\n\nFirst, we define a small function to pretty print the results.\n\n# Function to pretty print Elasticsearch results\n\ndef pretty_response(response):\n for hit in response['hits']['hits']:\n id = hit['_id']\n score = hit['_score']\n title = hit['_source']['title']\n text = hit['_source']['text']\n pretty_output = (f\"\\nID: {id}\\nTitle: {title}\\nSummary: {text}\\nScore: {score}\")\n print(pretty_output)\n\nNow let's run our kNN query.\n\nresponse = client.search(\n index = \"wikipedia_vector_index\",\n knn={\n \"field\": \"content_vector\",\n \"query_vector\": question_embedding[\"data\"][0][\"embedding\"],\n \"k\": 10,\n \"num_candidates\": 100\n }\n)\npretty_response(response)", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-semantic-search"} {"Chunk ID": "Semantic search using Elasticsearch and OpenAI (Part 11)", "Chunk": "Next steps\nSuccess! Now you know how to use Elasticsearch as a vector database to store embeddings, encode queries by calling the OpenAI embeddings endpoint, and run semantic search.\n\nPlay around with different queries, and if you want to try with your own data, you can experiment with different embedding models.\n\nℹ️ Check out our other notebook Retrieval augmented generation using Elasticsearch and OpenAI. That notebook builds on this example to demonstrate how to use Elasticsearch together with the OpenAI chat completions API for retrieval augmented generation (RAG).", "Source": "https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-semantic-search"} {"Chunk ID": "Data preparation and analysis for chat model fine-tuning", "Chunk": "This notebook serves as a tool to preprocess and analyze the chat dataset used for fine-tuning a chat model. It checks for format errors, provides basic statistics, and estimates token counts for fine-tuning costs. The method shown here corresponds to the chat-format fine-tuning used by models like gpt-3.5-turbo; for legacy fine-tuning of models like babbage-002 and davinci-002, see the legacy fine-tuning guide.", "Source": "https://cookbook.openai.com/examples/chat_finetuning_data_prep"} {"Chunk ID": "Data loading", "Chunk": "We first load the chat dataset from an example JSONL file.
data_path = \"data/toy_chat_fine_tuning.jsonl\" # Load the dataset with open(data_path, 'r', encoding='utf-8') as f: dataset = [json.loads(line) for line in f] # Initial dataset stats print(\"Num examples:\", len(dataset)) print(\"First example:\") for message in dataset[0][\"messages\"]: print(message)", "Source": "https://cookbook.openai.com/examples/chat_finetuning_data_prep"} {"Chunk ID": "Format validation", "Chunk": "We can perform a variety of error checks to validate that each conversation in the dataset adheres to the format expected by the fine-tuning API. Errors are categorized based on their nature for easier debugging. Data Type Check: Checks whether each entry in the dataset is a dictionary (dict). Error type: data_type. Presence of Message List: Checks if a messages list is present in each entry. Error type: missing_messages_list. Message Keys Check: Validates that each message in the messages list contains the keys role and content. Error type: message_missing_key. Unrecognized Keys in Messages: Logs if a message has keys other than role, content, and name. Error type: message_unrecognized_key. Role Validation: Ensures the role is one of \"system\", \"user\", or \"assistant\". Error type: unrecognized_role. Content Validation: Verifies that content has textual data and is a string. Error type: missing_content. Assistant Message Presence: Checks that each conversation has at least one message from the assistant. Error type: example_missing_assistant_message. The code below performs these checks, and outputs counts for each type of error found are printed. This is useful for debugging and ensuring the dataset is ready for the next steps.", "Source": "https://cookbook.openai.com/examples/chat_finetuning_data_prep"} {"Chunk ID": "Token Counting Utilities", "Chunk": "Lets define a few helpful utilities to be used in the rest of the notebook. encoding = tiktoken.get_encoding(\"cl100k_base\") # not exact! # simplified from https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb def num_tokens_from_messages(messages, tokens_per_message=3, tokens_per_name=1): num_tokens = 0 for message in messages: num_tokens += tokens_per_message for key, value in message.items(): num_tokens += len(encoding.encode(value)) if key == \"name\": num_tokens += tokens_per_name num_tokens += 3 return num_tokens def num_assistant_tokens_from_messages(messages): num_tokens = 0 for message in messages: if message[\"role\"] == \"assistant\": num_tokens += len(encoding.encode(message[\"content\"])) return num_tokens def print_distribution(values, name): print(f\"\\n#### Distribution of {name}:\") print(f\"min / max: {min(values)}, {max(values)}\") print(f\"mean / median: {np.mean(values)}, {np.median(values)}\") print(f\"p5 / p95: {np.quantile(values, 0.1)}, {np.quantile(values, 0.9)}\")", "Source": "https://cookbook.openai.com/examples/chat_finetuning_data_prep"} {"Chunk ID": "Data Warnings and Token Counts", "Chunk": "With some lightweight analysis we can identify potential issues in the dataset, like missing messages, and provide statistical insights into message and token counts. Missing System/User Messages: Counts the number of conversations missing a \"system\" or \"user\" message. Such messages are critical for defining the assistant's behavior and initiating the conversation. Number of Messages Per Example: Summarizes the distribution of the number of messages in each conversation, providing insight into dialogue complexity. 
Total Tokens Per Example: Calculates and summarizes the distribution of the total number of tokens in each conversation. Important for understanding fine-tuning costs.\nTokens in Assistant's Messages: Calculates the number of tokens in the assistant's messages per conversation and summarizes this distribution. Useful for understanding the assistant's verbosity.\nToken Limit Warnings: Checks if any examples exceed the maximum token limit (4096 tokens), as such examples will be truncated during fine-tuning, potentially resulting in data loss.", "Source": "https://cookbook.openai.com/examples/chat_finetuning_data_prep"} {"Chunk ID": "Warnings and tokens counts", "Chunk": "n_missing_system = 0\nn_missing_user = 0\nn_messages = []\nconvo_lens = []\nassistant_message_lens = []\n\nfor ex in dataset:\n    messages = ex[\"messages\"]\n    if not any(message[\"role\"] == \"system\" for message in messages):\n        n_missing_system += 1\n    if not any(message[\"role\"] == \"user\" for message in messages):\n        n_missing_user += 1\n    n_messages.append(len(messages))\n    convo_lens.append(num_tokens_from_messages(messages))\n    assistant_message_lens.append(num_assistant_tokens_from_messages(messages))\n\nprint(\"Num examples missing system message:\", n_missing_system)\nprint(\"Num examples missing user message:\", n_missing_user)\nprint_distribution(n_messages, \"num_messages_per_example\")\nprint_distribution(convo_lens, \"num_total_tokens_per_example\")\nprint_distribution(assistant_message_lens, \"num_assistant_tokens_per_example\")\nn_too_long = sum(l > 4096 for l in convo_lens)\nprint(f\"\\n{n_too_long} examples may be over the 4096 token limit, they will be truncated during fine-tuning\")", "Source": "https://cookbook.openai.com/examples/chat_finetuning_data_prep"} {"Chunk ID": "Cost Estimation", "Chunk": "In this final section, we estimate the total number of tokens that will be used for fine-tuning, which allows us to approximate the cost. It is worth noting that the duration of the fine-tuning jobs will also increase with the token count.\n\n# Pricing and default n_epochs estimate\nMAX_TOKENS_PER_EXAMPLE = 4096\nTARGET_EPOCHS = 3\nMIN_TARGET_EXAMPLES = 100\nMAX_TARGET_EXAMPLES = 25000\nMIN_DEFAULT_EPOCHS = 1\nMAX_DEFAULT_EPOCHS = 25\n\nn_epochs = TARGET_EPOCHS\nn_train_examples = len(dataset)\nif n_train_examples * TARGET_EPOCHS < MIN_TARGET_EXAMPLES:\n    n_epochs = min(MAX_DEFAULT_EPOCHS, MIN_TARGET_EXAMPLES // n_train_examples)\nelif n_train_examples * TARGET_EPOCHS > MAX_TARGET_EXAMPLES:\n    n_epochs = max(MIN_DEFAULT_EPOCHS, MAX_TARGET_EXAMPLES // n_train_examples)\n\nn_billing_tokens_in_dataset = sum(min(MAX_TOKENS_PER_EXAMPLE, length) for length in convo_lens)\nprint(f\"Dataset has ~{n_billing_tokens_in_dataset} tokens that will be charged for during training\")\nprint(f\"By default, you'll train for {n_epochs} epochs on this dataset\")\nprint(f\"By default, you'll be charged for ~{n_epochs * n_billing_tokens_in_dataset} tokens\")", "Source": "https://cookbook.openai.com/examples/chat_finetuning_data_prep"} {"Chunk ID": "How to fine-tune chat models", "Chunk": "This notebook provides a step-by-step guide for our new gpt-3.5-turbo fine-tuning. We'll perform entity extraction using the RecipeNLG dataset, which provides various recipes and a list of extracted generic ingredients for each. This is a common dataset for named entity recognition (NER) tasks.", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "Data preparation", "Chunk": "We'll begin by preparing our data.
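Before looking at the example entry below, here is a minimal sketch of what this preparation step might look like in code. The CSV path and column names are assumptions (they are not shown in this document); recipe_df, system_message, and create_user_message are the names the inference snippet later in this document relies on, and the system message text is quoted from the example entry that follows.

import pandas as pd

# Load the RecipeNLG-style recipe data (path and column names are assumed)
recipe_df = pd.read_csv("data/recipes.csv")

# System message quoted verbatim from the example training entry shown below
system_message = (
    "You are a helpful recipe assistant. "
    "You are to extract the generic ingredients from each of the recipes provided."
)

def create_user_message(row):
    # Combine a recipe's title and ingredient list into the user prompt format shown below
    return f"Title: {row['title']}\n\nIngredients: {row['ingredients']}\n\nGeneric ingredients: "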
When fine-tuning with the ChatCompletion format, each training example is a simple list of messages. For example, an entry could look like:\n\n[{'role': 'system', 'content': 'You are a helpful recipe assistant. You are to extract the generic ingredients from each of the recipes provided.'},\n\n{'role': 'user', 'content': 'Title: No-Bake Nut Cookies\\n\\nIngredients: [\"1 c. firmly packed brown sugar\", \"1/2 c. evaporated milk\", \"1/2 tsp. vanilla\", \"1/2 c. broken nuts (pecans)\", \"2 Tbsp. butter or margarine\", \"3 1/2 c. bite size shredded rice biscuits\"]\\n\\nGeneric ingredients: '},\n\n{'role': 'assistant', 'content': '[\"brown sugar\", \"milk\", \"vanilla\", \"nuts\", \"butter\", \"bite size shredded rice biscuits\"]'}]\n\nDuring the training process this conversation will be split, with the final entry being the completion that the model will produce, and the remainder of the messages acting as the prompt. Consider this when building your training examples - if your model will act on multi-turn conversations, then please provide representative examples so it doesn't perform poorly when the conversation starts to expand.", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "Upload files", "Chunk": "You can now upload the files to our Files endpoint to be used by the fine-tuned model.\n\ntraining_response = openai.File.create(\n file=open(training_file_name, \"rb\"), purpose=\"fine-tune\"\n)\ntraining_file_id = training_response[\"id\"]\n\nvalidation_response = openai.File.create(\n file=open(validation_file_name, \"rb\"), purpose=\"fine-tune\"\n)\nvalidation_file_id = validation_response[\"id\"]\n\nprint(\"Training file ID:\", training_file_id)\nprint(\"Validation file ID:\", validation_file_id)", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "Fine-tuning", "Chunk": "Now we can create our fine-tuning job with the generated files and an optional suffix to identify the model. The response will contain an id which you can use to retrieve updates on the job.\n\nNote: The files have to first be processed by our system, so you might get a File not ready error. In that case, simply retry a few minutes later.", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "Check job status", "Chunk": "You can make a GET request to the https://api.openai.com/v1/alpha/fine-tunes endpoint to list your alpha fine-tune jobs. In this instance you'll want to check that the ID you got from the previous step ends up as status: succeeded.\n\nOnce it is completed, you can use the result_files to sample the results from the validation set (if you uploaded one), and use the ID from the fine_tuned_model parameter to invoke your trained model.", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "Inference", "Chunk": "The last step is to use your fine-tuned model for inference. 
As with classic fine-tuning, you simply call ChatCompletion with your new fine-tuned model name in the model parameter.\n\ntest_df = recipe_df.loc[201:300]\ntest_row = test_df.iloc[0]\ntest_messages = []\ntest_messages.append({\"role\": \"system\", \"content\": system_message})\nuser_message = create_user_message(test_row)\ntest_messages.append({\"role\": \"user\", \"content\": user_message})\n\npprint(test_messages)\nresponse = openai.ChatCompletion.create(\n model=fine_tuned_model_id, messages=test_messages, temperature=0, max_tokens=500\n)\nprint(response[\"choices\"][0][\"message\"][\"content\"])", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "Conclusion", "Chunk": "Congratulations, you are now ready to fine-tune your own models using the ChatCompletion format! We look forward to seeing what you build.", "Source": "https://cookbook.openai.com/examples/how_to_finetune_chat_models"} {"Chunk ID": "Evaluating Abstractive Summarization (Part 1)", "Chunk": "In this notebook we delve into the evaluation techniques for abstractive summarization tasks using a simple example. We explore traditional evaluation methods like ROUGE and BERTScore, in addition to showcasing a more novel approach using LLMs as evaluators.", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating Abstractive Summarization (Part 2)", "Chunk": "Evaluating the quality of summaries is a time-consuming process, as it involves different quality metrics such as coherence, conciseness, readability and content. Traditional automatic evaluation metrics such as ROUGE and BERTScore, among others, are concrete and reliable, but they may not correlate well with the actual quality of summaries. They show relatively low correlation with human judgments, especially for open-ended generation tasks (Liu et al., 2023). There's a growing need to lean on human evaluations, user feedback, or model-based metrics while being vigilant about potential biases. While human judgment provides invaluable insights, it is often not scalable and can be cost-prohibitive.", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating Abstractive Summarization (Part 3)", "Chunk": "In addition to these traditional metrics, we showcase a method (G-Eval) that leverages Large Language Models (LLMs) as a novel, reference-free metric for assessing abstractive summaries. In this case, we use gpt-4 to score candidate outputs. gpt-4 has effectively learned an internal model of language quality that allows it to differentiate between fluent, coherent text and low-quality text. Harnessing this internal scoring mechanism allows auto-evaluation of new candidate outputs generated by an LLM.", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Setup (Part 1)", "Chunk": "For the purposes of this notebook we'll use the example summarization below. Notice that we provide two generated summaries to compare, and a reference human-written summary, which evaluation metrics like ROUGE and BERTScore require.", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Setup (Part 2)", "Chunk": "Excerpt (excerpt):\n\nOpenAI's mission is to ensure that artificial general intelligence (AGI) benefits all of humanity.
OpenAI will build safe and beneficial AGI directly, but will also consider its mission fulfilled if its work aids others to achieve this outcome. OpenAI follows several key principles for this purpose. First, broadly distributed benefits - any influence over AGI's deployment will be used for the benefit of all, and to avoid harmful uses or undue concentration of power. Second, long-term safety - OpenAI is committed to doing the research to make AGI safe, and to promote the adoption of such research across the AI community. Third, technical leadership - OpenAI aims to be at the forefront of AI capabilities. Fourth, a cooperative orientation - OpenAI actively cooperates with other research and policy institutions, and seeks to create a global community working together to address AGI's global challenges.", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Summaries (Part 1)", "Chunk": "Reference Summary / ref_summary (human generated)\nEval Summary 1 / eval_summary_1 (system generated)\nEval Summary 2 / eval_summary_2 (system generated)", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Summaries (Part 2)", "Chunk": "Reference Summary (ref_summary): OpenAI aims to ensure artificial general intelligence (AGI) is used for everyone's benefit, avoiding harmful uses or undue power concentration. It is committed to researching AGI safety, promoting such studies among the AI community. OpenAI seeks to lead in AI capabilities and cooperates with global research and policy institutions to address AGI's challenges.\n\nEval Summary 1 (eval_summary_1): OpenAI aims to AGI benefits all humanity, avoiding harmful uses and power concentration. It pioneers research into safe and beneficial AGI and promotes adoption globally. OpenAI maintains technical leadership in AI while cooperating with global institutions to address AGI challenges. It seeks to lead a collaborative worldwide effort developing AGI for collective good.\n\nEval Summary 2 (eval_summary_2): OpenAI aims to ensure AGI is for everyone's use, totally avoiding harmful stuff or big power concentration. Committed to researching AGI's safe side, promoting these studies in AI folks. OpenAI wants to be top in AI things and works with worldwide research, policy groups to figure AGI's stuff.", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating using ROUGE", "Chunk": "ROUGE, which stands for Recall-Oriented Understudy for Gisting Evaluation, primarily gauges the overlap of words between a generated output and a reference text. It's a prevalent metric for evaluating automatic summarization tasks. Among its variants, ROUGE-L offers insights into the longest contiguous match between system-generated and reference summaries, gauging how well the system retains the original summary's essence.", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating using BERTScore", "Chunk": "ROUGE relies on the exact presence of words in both the predicted and reference texts, failing to interpret the underlying semantics. This is where BERTScore comes in and leverages the contextual embeddings from the BERT model, aiming to evaluate the similarity between a predicted and a reference sentence in the context of machine-generated text.
By comparing embeddings from both sentences, BERTScore captures semantic similarities that might be missed by traditional n-gram based metrics.", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating using GPT-4 (Part 1)", "Chunk": "Here we implement an example reference-free text evaluator using gpt-4, inspired by the G-Eval framework which evaluates the quality of generated text using large language models. Unlike metrics like ROUGE or BERTScore that rely on comparison to reference summaries, the gpt-4 based evaluator assesses the quality of generated content based solely on the input prompt and text, without any ground truth references. This makes it applicable to new datasets and tasks where human references are sparse or unavailable.", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating using GPT-4 (Part 2)", "Chunk": "We craft prompts for each of these criteria, taking the original document and the summary as inputs, and leveraging chain-of-thought generation and guiding the model to output a numeric score from 1-5 for each criteria. We generate scores from gpt-4 with the defined prompts, comparing them across summaries. In this demonstration, we're using a direct scoring function where gpt-4 generates a discrete score (1-5) for each metric. Normalizing the scores and taking a weighted sum could result in more robust, continuous scores that better reflect the quality and diversity of the summaries.", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating using GPT-4 (Part 3)", "Chunk": "Evaluation prompt template based on G-Eval\nEVALUATION_PROMPT_TEMPLATE = \"\"\"\nYou will be given one summary written for an article. Your task is to rate the summary on one metric.\nPlease make sure you read and understand these instructions very carefully. \nPlease keep this document open while reviewing, and refer to it as needed.", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating using GPT-4 (Part 4)", "Chunk": "Metric 1: Relevance\n\nRELEVANCY_SCORE_CRITERIA = \"\"\"\nRelevance(1-5) - selection of important content from the source. The summary should include only important information from the source document. Annotators were instructed to penalize summaries which contained redundancies and excess information.\"\"\"", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating using GPT-4 (Part 5)", "Chunk": "Metric 2: Coherence\n\nCOHERENCE_SCORE_CRITERIA = \"\"\"\nCoherence(1-5) - the collective quality of all sentences. We align this dimension with the DUC quality question of structure and coherence whereby \"the summary should be well-structured and well-organized. The summary should not just be a heap of related information, but should build from sentence to a coherent body of information about a topic.\"\"\"", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating using GPT-4 (Part 6)", "Chunk": "Metric 3: Consistency\n\nCONSISTENCY_SCORE_CRITERIA = \"\"\"\nConsistency(1-5) - the factual alignment between the summary and the summarized source. A factually consistent summary contains only statements that are entailed by the source document. 
Annotators were also asked to penalize summaries that contained hallucinated facts.\"\"\"", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating using GPT-4 (Part 7)", "Chunk": "Metric 4: Fluency\n\nFLUENCY_SCORE_CRITERIA = \"\"\"\nFluency(1-3): the quality of the summary in terms of grammar, spelling, punctuation, word choice, and sentence structure.\n1: Poor. The summary has many errors that make it hard to understand or sound unnatural.\n2: Fair. The summary has some errors that affect the clarity or smoothness of the text, but the main points are still comprehensible.\n3: Good. The summary has few or no errors and is easy to read and follow.\"\"\"", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating using GPT-4 (Part 8)", "Chunk": "def get_geval_score(\n criteria: str, steps: str, document: str, summary: str, metric_name: str\n):\n prompt = EVALUATION_PROMPT_TEMPLATE.format(\n criteria=criteria,\n steps=steps,\n metric_name=metric_name,\n document=document,\n summary=summary,\n )", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating using GPT-4 (Part 9)", "Chunk": "response = openai.ChatCompletion.create(\n model=\"gpt-4\",\n messages=[{\"role\": \"user\", \"content\": prompt}],\n temperature=0,\n max_tokens=5,\n top_p=1,\n frequency_penalty=0,\n presence_penalty=0,\n)", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating using GPT-4 (Part 10)", "Chunk": "return response.choices[0].message.content", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating using GPT-4 (Part 11)", "Chunk": "evaluation_metrics = {\n \"Relevance\": (RELEVANCY_SCORE_CRITERIA, RELEVANCY_SCORE_STEPS),\n \"Coherence\": (COHERENCE_SCORE_CRITERIA, COHERENCE_SCORE_STEPS),\n \"Consistency\": (CONSISTENCY_SCORE_CRITERIA, CONSISTENCY_SCORE_STEPS),\n \"Fluency\": (FLUENCY_SCORE_CRITERIA, FLUENCY_SCORE_STEPS),\n}", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating using GPT-4 (Part 12)", "Chunk": "summaries = {\"Summary 1\": eval_summary_1, \"Summary 2\": eval_summary_2}", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating using GPT-4 (Part 13)", "Chunk": "data = {\"Evaluation Type\": [], \"Summary Type\": [], \"Score\": []}", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating using GPT-4 (Part 14)", "Chunk": "for eval_type, (criteria, steps) in evaluation_metrics.items():\n for summ_type, summary in summaries.items():\n data[\"Evaluation Type\"].append(eval_type)\n data[\"Summary Type\"].append(summ_type)\n result = get_geval_score(criteria, steps, excerpt, summary, eval_type)\n score_num = int(result.strip())\n data[\"Score\"].append(score_num)", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating using GPT-4 (Part 15)", "Chunk": "pivot_df = pd.DataFrame(data, index=None).pivot(\n index=\"Evaluation Type\", columns=\"Summary Type\", values=\"Score\"\n)", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Evaluating using 
GPT-4 (Part 16)", "Chunk": "styled_pivot_df = pivot_df.style.apply(highlight_max, axis=1)\ndisplay(styled_pivot_df)", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Limitations", "Chunk": "Note that LLM-based metrics could have a bias towards preferring LLM-generated texts over human-written texts. Additionally LLM based metrics are sensitive to system messages/prompts. We recommend experimenting with other techniques that can help improve performance and/or get consistent scores, striking the right balance between high-quality expensive evaluation and automated evaluations. It is also worth noting that this scoring methodology is currently limited by gpt-4's context window.", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Conclusion", "Chunk": "Conclusion\nEvaluating abstractive summarization remains an open area for further improvement. Traditional metrics like ROUGE, BLEU, and BERTScore provide useful automatic evaluation but have limitations in capturing semantic similarity and nuanced aspects of summarization quality. Moreover, they require reference outputs which can be expensive to collect/label. LLM-based metrics offer promise as a reference-free method of evaluating coherence, fluency, and relevance. However, they too have potential biases favoring text generated by LLMs. Ultimately, a combination of automatic metrics and human evaluation is ideal for reliably assessing abstractive summarization systems. While human evaluation is indispensable for gaining a comprehensive understanding of summary quality, it should be complemented with automated evaluation to enable efficient, large-scale testing. The field will continue to evolve more robust evaluation techniques, balancing quality, scalability, and fairness. Advancing evaluation methods is crucial for driving progress in production applications.", "Source": "https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization"} {"Chunk ID": "Function calling for nearby places: Leveraging the Google Places API and customer profiles", "Chunk": "This notebook is centered around the integration of the Google Places API and custom user profiles to enhance location-based searches. Our approach involves using the Google Places API in combination with user preferences, aiming to make location discovery more personal and relevant. Please note that while we focus on the Google Places API in this instance, there are numerous other APIs you could explore and apply in a similar fashion.", "Source": "https://cookbook.openai.com/examples/function_calling_finding_nearby_places"} {"Chunk ID": "Customer profile: This mock profile captures individual preferences for types of places (e.g., restaurants, parks, museums), budget, preferred ratings, and other specific requirements.", "Chunk": "We'll explore the application of three main components:\n\nCustomer profile: This mock profile captures individual preferences for types of places (e.g., restaurants, parks, museums), budget, preferred ratings, and other specific requirements.\n\nGoogle Places API: This API provides real-time data about nearby places. 
It factors in various data points such as ratings, types of venues, costs, and more from the locations around you.\n\nFunction calling: A single command such as 'I'm hungry' or 'I want to visit a museum' activates the function, which combines the user profile data and Google Places API to identify suitable venues.", "Source": "https://cookbook.openai.com/examples/function_calling_finding_nearby_places"} {"Chunk ID": "Profile-based recommendations: Learn how to create a user profile and make place recommendations based on individual preferences.", "Chunk": "This notebook introduces two primary use cases:\n\nProfile-based recommendations: Learn how to create a user profile and make place recommendations based on individual preferences.\n\nAPI integration with function calling: Understand how to integrate and call Google Places API effectively to source real-time data of various places using function calling.\n\nPlease note that while this system is highly versatile, its effectiveness may vary based on user preferences and available place data. For the purposes of this notebook, the customer data is fake and the location is hardcoded.", "Source": "https://cookbook.openai.com/examples/function_calling_finding_nearby_places"} {"Chunk ID": "Setup Google Places API", "Chunk": "To use the Google Places API, you'll need two things:\n\nGoogle Account: If you don't already have one, you will need to create a Google account.\n\nGoogle Places API Key: The API key is a unique identifier that is used to authenticate requests associated with your project for usage and billing purposes. You can get your API key from the Google Cloud Console.\n\nPlease note that the Google Places API is a paid service, and the cost is associated with the number of API calls made. Keep track of your usage to avoid any unexpected charges.\n\nThe requests library is also needed; you can install it with the following command:", "Source": "https://cookbook.openai.com/examples/function_calling_finding_nearby_places"} {"Chunk ID": "In this code snippet, we are defining a function fetch_customer_profile that accepts a user_id and returns a mock user profile.", "Chunk": "In this code snippet, we are defining a function fetch_customer_profile that accepts a user_id and returns a mock user profile.\n\nThis function simulates an API call that fetches user data from a database. For this demo, we're using hard-coded data. The user profile contains various details such as the user's location (set to the coordinates of the Golden Gate Bridge for this example), preferences in food and activities, app usage metrics, recent interactions, and user rank.\n\nIn a production environment, you would replace this hard-coded data with a real API call to your user database.", "Source": "https://cookbook.openai.com/examples/function_calling_finding_nearby_places"} {"Chunk ID": "Using Qdrant for Embeddings Search", "Chunk": "This notebook takes you through a simple flow to download some data, embed it, and then index and search it using a selection of vector databases. This is a common requirement for customers who want to store and search our embeddings with their own data in a secure environment to support production use cases such as chatbots, topic modelling and more.", "Source": "https://cookbook.openai.com/examples/vector_databases/qdrant/using_qdrant_for_embeddings_search"} {"Chunk ID": "What is a Vector Database", "Chunk": "A vector database is a database made to store, manage and search embedding vectors.
The use of embeddings to encode unstructured data (text, audio, video and more) as vectors for consumption by machine-learning models has exploded in recent years, due to the increasing effectiveness of AI in solving use cases involving natural language, image recognition and other unstructured forms of data. Vector databases have emerged as an effective solution for enterprises to deliver and scale these use cases.", "Source": "https://cookbook.openai.com/examples/vector_databases/qdrant/using_qdrant_for_embeddings_search"} {"Chunk ID": "Why use a Vector Database", "Chunk": "Vector databases enable enterprises to take many of the embeddings use cases we've shared in this repo (question and answering, chatbot and recommendation services, for example), and make use of them in a secure, scalable environment. Many of our customers use embeddings to solve their problems at small scale, but performance and security hold them back from going into production - we see vector databases as a key component in solving that, and in this guide we'll walk through the basics of embedding text data, storing it in a vector database and using it for semantic search.", "Source": "https://cookbook.openai.com/examples/vector_databases/qdrant/using_qdrant_for_embeddings_search"} {"Chunk ID": "Demo Flow", "Chunk": "The demo flow is:\n\nSetup: Import packages and set any required variables\nLoad data: Load a dataset and embed it using OpenAI embeddings\nQdrant\nSetup: Here we'll set up the Python client for Qdrant. For more details go here\nIndex Data: We'll create a collection with vectors for titles and content\nSearch Data: We'll run a few searches to confirm it works\nOnce you've run through this notebook you should have a basic understanding of how to set up and use vector databases, and can move on to more complex use cases making use of our embeddings.", "Source": "https://cookbook.openai.com/examples/vector_databases/qdrant/using_qdrant_for_embeddings_search"} {"Chunk ID": "Setup", "Chunk": "Import the required libraries and set the embedding model that we'd like to use.\n\n# We'll need to install Qdrant client\n!pip install qdrant-client\n\n#Install wget to pull zip file\n!pip install wget\nimport openai\n\nfrom typing import ...", "Source": "https://cookbook.openai.com/examples/vector_databases/qdrant/using_qdrant_for_embeddings_search"} {"Chunk ID": "Load data", "Chunk": "In this section we'll load embedded data that we've prepared prior to this session.\n\nembeddings_url = 'https://cdn.openai.com/API/examples/data/vector_database_wikipedia_articles_embedded.zip'\n\n# The file is ~700 MB so this will take some time\nwget.download(embeddings_url)...", "Source": "https://cookbook.openai.com/examples/vector_databases/qdrant/using_qdrant_for_embeddings_search"} {"Chunk ID": "Qdrant", "Chunk": "Qdrant is a high-performance vector search database written in Rust. It offers both on-premise and cloud versions, but for the purposes of this example we're going to use the local deployment mode.\n\nSetting everything up will require:\n\nSpinning up a local instance of Qdrant\nConfiguring the collection and storing the data in it\nTrying out with some queries\nSetup\nFor the local deployment, we are going to use Docker, according to the Qdrant documentation: https://qdrant.tech/documentation/quick_start/.
Qdrant requires just a single container, but an example of the docker-compose.yaml file is available at ./qdrant/docker-compose.yaml in this repo.\n\nYou can start a Qdrant instance locally by navigating to this directory and running docker-compose up -d\n\nqdrant = qdrant_client.QdrantClient(host='localhost', prefer_grpc=True)\n\nqdrant.get_collections()", "Source": "https://cookbook.openai.com/examples/vector_databases/qdrant/using_qdrant_for_embeddings_search"} {"Chunk ID": "Index data", "Chunk": "Qdrant stores data in collections, where each object is described by at least one vector and may contain additional metadata called a payload. Our collection will be called Articles and each object will be described by both title and content vectors.\n\nWe'll be using the official qdrant-client package, which has all the utility methods already built in.\n\nfrom qdrant_client.http import models as rest\n\nvector_size = len(article_df['content_vector'][0])\n\nqdrant.recreate_collection(\n collection_name='Articles',\n vectors_config={\n 'title': rest.VectorParams(\n distance=rest.Distance.COSINE,\n size=vector_size,\n ),\n 'content': rest.VectorParams(\n distance=rest.Distance.COSINE,\n size=vector_size,\n ),\n }\n)\nqdrant.upsert(\n collection_name='Articles',\n points=[\n rest.PointStruct(\n id=k,\n vector={\n 'title': v['title_vector'],\n 'content': v['content_vector'],\n },\n payload=v.to_dict(),\n )\n for k, v in article_df.iterrows()\n ],\n)\n\n# Check the collection size to make sure all the points have been stored\nqdrant.count(collection_name='Articles')", "Source": "https://cookbook.openai.com/examples/vector_databases/qdrant/using_qdrant_for_embeddings_search"} {"Chunk ID": "Search Data", "Chunk": "Once the data is put into Qdrant we will start querying the collection for the closest vectors. We may provide an additional parameter vector_name to switch from title to content-based search.\n\ndef query_qdrant(query, collection_name, vector_name='title', top_k=20):\n\n # Creates embedding vector from user query\n embedded_query = openai.Embedding.create(\n input=query,\n model=EMBEDDING_MODEL,\n )['data'][0]['embedding']\n \n query_results = qdrant.search(\n collection_name=collection_name,\n query_vector=(\n vector_name, embedded_query\n ),\n limit=top_k,\n )\n \n return query_results\n\nquery_results = query_qdrant('modern art in Europe', 'Articles')\nfor i, article in enumerate(query_results):\n print(f'{i + 1}. {article.payload[\"title\"]} (Score: {round(article.score, 3)})')\n\n# This time we'll query using content vector\nquery_results = query_qdrant('Famous battles in Scottish history', 'Articles', 'content')\nfor i, article in enumerate(query_results):\n print(f'{i + 1}. {article.payload[\"title\"]} (Score: {round(article.score, 3)})')", "Source": "https://cookbook.openai.com/examples/vector_databases/qdrant/using_qdrant_for_embeddings_search"}
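One loose end worth closing: the Qdrant indexing and search snippets above reference article_df and EMBEDDING_MODEL without showing how they are created. A minimal sketch of that setup follows; the extraction path and the detail that the vector columns are stored as JSON strings in the CSV are assumptions carried over from the Elasticsearch walkthrough earlier in this document, which uses the same Wikipedia embeddings dataset.

import json
import zipfile

import pandas as pd

# The dataset was embedded with text-embedding-ada-002, so queries must use the same model
EMBEDDING_MODEL = "text-embedding-ada-002"

# Extract the downloaded archive and load the CSV (path assumed)
with zipfile.ZipFile("vector_database_wikipedia_articles_embedded.zip", "r") as zip_ref:
    zip_ref.extractall("data")
article_df = pd.read_csv("data/vector_database_wikipedia_articles_embedded.csv")

# The vector columns are serialized as JSON strings in the CSV; parse them into Python lists
article_df["title_vector"] = article_df["title_vector"].apply(json.loads)
article_df["content_vector"] = article_df["content_vector"].apply(json.loads)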