Azure chat completion models with your own data (preview)_14
Azure Blob Storage resource
https://cookbook.openai.com/examples/azure/chat_with_your_own_data
Azure chat completion models with your own data (preview)_15
Your documents to be used as data (See data source options)
https://cookbook.openai.com/examples/azure/chat_with_your_own_data
Azure chat completion models with your own data (preview)_16
For a full walk-through on how to upload your documents to blob storage and create an index using the Azure AI Studio, see this Quickstart.
https://cookbook.openai.com/examples/azure/chat_with_your_own_data
Azure chat completion models with your own data (preview)_17
Setup
First, we install the necessary dependencies.
https://cookbook.openai.com/examples/azure/chat_with_your_own_data
Azure chat completion models with your own data (preview)_18
! pip install "openai>=0.27.6"
! pip install python-dotenv
https://cookbook.openai.com/examples/azure/chat_with_your_own_data
Azure chat completion models with your own data (preview)_19
In this example, we'll use dotenv to load our environment variables. To connect with Azure OpenAI and the Search index, the following variables should be added to a .env file in KEY=VALUE format:
https://cookbook.openai.com/examples/azure/chat_with_your_own_data
Azure chat completion models with your own data (preview)_20
OPENAI_API_BASE - the Azure OpenAI endpoint. This can be found under "Keys and Endpoints" for your Azure OpenAI resource in the Azure Portal.
https://cookbook.openai.com/examples/azure/chat_with_your_own_data
Azure chat completion models with your own data (preview)_21
OPENAI_API_KEY - the Azure OpenAI API key. This can be found under "Keys and Endpoints" for your Azure OpenAI resource in the Azure Portal. Omit if using Azure Active Directory authentication (see the Authentication section below).
https://cookbook.openai.com/examples/azure/chat_with_your_own_data
Azure chat completion models with your own data (preview)_22
SEARCH_ENDPOINT - the Cognitive Search endpoint. This URL can be found on the "Overview" of your Search resource on the Azure Portal.
https://cookbook.openai.com/examples/azure/chat_with_your_own_data
Azure chat completion models with your own data (preview)_23
SEARCH_KEY - the Cognitive Search API key. Found under "Keys" for your Search resource in the Azure Portal.
https://cookbook.openai.com/examples/azure/chat_with_your_own_data
Azure chat completion models with your own data (preview)_24
SEARCH_INDEX_NAME - the name of the index you created with your own data.
https://cookbook.openai.com/examples/azure/chat_with_your_own_data
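Taken together, a minimal .env file might look like the following sketch (the values are placeholders, not real endpoints or keys):

OPENAI_API_BASE=https://my-openai-resource.openai.azure.com/
OPENAI_API_KEY=<azure-openai-key, omit when using Azure AD>
SEARCH_ENDPOINT=https://my-search-resource.search.windows.net
SEARCH_KEY=<cognitive-search-key>
SEARCH_INDEX_NAME=my-index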
Azure chat completion models with your own data (preview)_25
import os
import openai
import dotenv
https://cookbook.openai.com/examples/azure/chat_with_your_own_data
Azure chat completion models with your own data (preview)_26
dotenv.load_dotenv()
https://cookbook.openai.com/examples/azure/chat_with_your_own_data
Azure chat completion models with your own data (preview)_27
openai.api_base = os.environ["OPENAI_API_BASE"]
https://cookbook.openai.com/examples/azure/chat_with_your_own_data
Azure chat completion models with your own data (preview)_28
# Azure OpenAI on your own data is only supported by the 2023-08-01-preview API version
https://cookbook.openai.com/examples/azure/chat_with_your_own_data
Azure chat completion models with your own data (preview)_29
openai.api_version = "2023-08-01-preview"
https://cookbook.openai.com/examples/azure/chat_with_your_own_data
Azure chat completion models with your own data (preview)_30
Authentication
The Azure OpenAI service supports multiple authentication mechanisms, including API keys and Azure credentials.
https://cookbook.openai.com/examples/azure/chat_with_your_own_data
Azure chat completion models with your own data (preview)_31
use_azure_active_directory = False # Set this flag to True if you are using Azure Active Directory
https://cookbook.openai.com/examples/azure/chat_with_your_own_data
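As a sketch of how that flag might drive the two authentication paths (the azure-identity usage and the token scope URL are assumptions based on common Azure AD token flows, not taken from this excerpt):

import os
import openai

if use_azure_active_directory:
    # Assumption: token-based auth via the azure-identity package.
    from azure.identity import DefaultAzureCredential

    token = DefaultAzureCredential().get_token(
        "https://cognitiveservices.azure.com/.default"
    )
    openai.api_type = "azure_ad"
    openai.api_key = token.token
else:
    # Key-based auth: read the key from the .env file loaded earlier.
    openai.api_type = "azure"
    openai.api_key = os.environ["OPENAI_API_KEY"]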
Azure Cognitive Search as a vector database for OpenAI embeddings
This notebook provides step-by-step instructions on using Azure Cognitive Search as a vector database with OpenAI embeddings. Azure Cognitive Search (formerly known as "Azure Search") is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.
https://cookbook.openai.com/examples/vector_databases/azuresearch/getting_started_with_azure_cognitive_search_and_openai
Prerequisites
For the purposes of this exercise you must have the following:
- Azure Cognitive Search Service
- OpenAI Key or Azure OpenAI credentials

! pip install wget
! pip install azure-search-documents --pre
https://cookbook.openai.com/examples/vector_databases/azuresearch/getting_started_with_azure_cognitive_search_and_openai
Import required libraries
import json
import zipfile

import openai
import pandas as pd
import wget
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.models import Vector
from azure.search.documents.indexes.models import (
    SearchIndex,
    SearchField,
    SearchFieldDataType,
    SimpleField,
    SearchableField,
    SemanticConfiguration,
    PrioritizedFields,
    SemanticField,
    SemanticSettings,
    VectorSearch,
    HnswVectorSearchAlgorithmConfiguration,
)
https://cookbook.openai.com/examples/vector_databases/azuresearch/getting_started_with_azure_cognitive_search_and_openai
Configure OpenAI settings
Configure your OpenAI or Azure OpenAI settings. For this example, we use Azure OpenAI.

openai.api_type = "azure"
openai.api_base = "YOUR_AZURE_OPENAI_ENDPOINT"
openai.api_version = "2023-05-15"
openai.api_key = "YOUR_AZURE_OPENAI_KEY"
model: str = "text-embedding-ada-002"
https://cookbook.openai.com/examples/vector_databases/azuresearch/getting_started_with_azure_cognitive_search_and_openai
Configure Azure Cognitive Search Vector Store settings
You can find this in the Azure Portal or using the Search Management SDK.

search_service_endpoint: str = "YOUR_AZURE_SEARCH_ENDPOINT"
search_service_api_key: str = "YOUR_AZURE_SEARCH_ADMIN_KEY"
index_name: str = "azure-cognitive-search-vector-demo"
credential = AzureKeyCredential(search_service_api_key)
https://cookbook.openai.com/examples/vector_databases/azuresearch/getting_started_with_azure_cognitive_search_and_openai
Load data
embeddings_url = "https://cdn.openai.com/API/examples/data/vector_database_wikipedia_articles_embedded.zip"

# The file is ~700 MB so this will take some time
wget.download(embeddings_url)

with zipfile.ZipFile("vector_database_wikipedia_articles_embedded.zip", "r") as zip_ref:
    zip_ref.extractall("../../data")

article_df = pd.read_csv('../../data/vector_database_wikipedia_articles_embedded.csv')

# Read vectors from strings back into a list using json.loads
article_df["title_vector"] = article_df.title_vector.apply(json.loads)
article_df["content_vector"] = article_df.content_vector.apply(json.loads)
article_df['vector_id'] = article_df['vector_id'].apply(str)
article_df.head()
https://cookbook.openai.com/examples/vector_databases/azuresearch/getting_started_with_azure_cognitive_search_and_openai
Create an index
# Configure a search index
index_client = SearchIndexClient(
    endpoint=search_service_endpoint, credential=credential)

fields = [
    SimpleField(name="id", type=SearchFieldDataType.String),
    SimpleField(name="vector_id", type=SearchFieldDataType.String, key=True),
    SimpleField(name="url", type=SearchFieldDataType.String),
    SearchableField(name="title", type=SearchFieldDataType.String),
    SearchableField(name="text", type=SearchFieldDataType.String),
    SearchField(name="title_vector", type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
                searchable=True, vector_search_dimensions=1536,
                vector_search_configuration="my-vector-config"),
    SearchField(name="content_vector", type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
                searchable=True, vector_search_dimensions=1536,
                vector_search_configuration="my-vector-config"),
]

# Configure the vector search configuration
vector_search = VectorSearch(
    algorithm_configurations=[
        HnswVectorSearchAlgorithmConfiguration(
            name="my-vector-config",
            kind="hnsw",
            parameters={
                "m": 4,
                "efConstruction": 400,
                "efSearch": 500,
                "metric": "cosine"
            }
        )
    ]
)

# Optional: configure semantic reranking by passing your title, keywords, and content fields
semantic_config = SemanticConfiguration(
    name="my-semantic-config",
    prioritized_fields=PrioritizedFields(
        title_field=SemanticField(field_name="title"),
        prioritized_keywords_fields=[SemanticField(field_name="url")],
        prioritized_content_fields=[SemanticField(field_name="text")]
    )
)

# Create the semantic settings with the configuration
semantic_settings = SemanticSettings(configurations=[semantic_config])

# Create the index
index = SearchIndex(name=index_name, fields=fields,
                    vector_search=vector_search,
                    semantic_settings=semantic_settings)
result = index_client.create_or_update_index(index)
print(f'{result.name} created')
https://cookbook.openai.com/examples/vector_databases/azuresearch/getting_started_with_azure_cognitive_search_and_openai
Insert text and embeddings into vector store
In this notebook we use the Wikipedia articles dataset provided by OpenAI, for which the embeddings are pre-computed. The code below takes the DataFrame and converts it into a list of dictionaries to upload to your Azure Search index.

# Convert the 'id' and 'vector_id' columns to string so one of them can serve as our key field
article_df['id'] = article_df['id'].astype(str)
article_df['vector_id'] = article_df['vector_id'].astype(str)

# Convert the DataFrame to a list of dictionaries
documents = article_df.to_dict(orient='records')

search_client = SearchClient(endpoint=search_service_endpoint, index_name=index_name, credential=credential)

# Define the batch upload size
batch_size = 250

# Split the documents into batches
batches = [documents[i:i + batch_size] for i in range(0, len(documents), batch_size)]

# Upload each batch of documents
for batch in batches:
    result = search_client.upload_documents(batch)

print(f"Uploaded {len(documents)} documents in total")
https://cookbook.openai.com/examples/vector_databases/azuresearch/getting_started_with_azure_cognitive_search_and_openai
Perform a vector similarity search
# Function to generate query embedding
def generate_embeddings(text):
    response = openai.Embedding.create(
        input=text, engine=model)
    embeddings = response['data'][0]['embedding']
    return embeddings

# Pure Vector Search
query = "modern art in Europe"

search_client = SearchClient(search_service_endpoint, index_name, AzureKeyCredential(search_service_api_key))

vector = Vector(value=generate_embeddings(query), k=3, fields="content_vector")

results = search_client.search(
    search_text=None,
    vectors=[vector],
    select=["title", "text", "url"]
)

for result in results:
    print(f"Title: {result['title']}")
    print(f"Score: {result['@search.score']}")
    print(f"URL: {result['url']}\n")
https://cookbook.openai.com/examples/vector_databases/azuresearch/getting_started_with_azure_cognitive_search_and_openai
Perform a Hybrid Search
# Hybrid Search
query = "Famous battles in Scottish history"

search_client = SearchClient(search_service_endpoint, index_name, AzureKeyCredential(search_service_api_key))

vector = Vector(value=generate_embeddings(query), k=3, fields="content_vector")

results = search_client.search(
    search_text=query,
    vectors=[vector],
    select=["title", "text", "url"],
    top=3
)

for result in results:
    print(f"Title: {result['title']}")
    print(f"Score: {result['@search.score']}")
    print(f"URL: {result['url']}\n")
https://cookbook.openai.com/examples/vector_databases/azuresearch/getting_started_with_azure_cognitive_search_and_openai
Perform a Hybrid Search with Reranking (powered by Bing)
# Semantic Hybrid Search
query = "Famous battles in Scottish history"

search_client = SearchClient(search_service_endpoint, index_name, AzureKeyCredential(search_service_api_key))

vector = Vector(value=generate_embeddings(query), k=3, fields="content_vector")

results = search_client.search(
    search_text=query,
    vectors=[vector],
    select=["title", "text", "url"],
    query_type="semantic",
    query_language="en-us",
    semantic_configuration_name='my-semantic-config',
    query_caption="extractive",
    query_answer="extractive",
    top=3
)

semantic_answers = results.get_answers()
for answer in semantic_answers:
    if answer.highlights:
        print(f"Semantic Answer: {answer.highlights}")
    else:
        print(f"Semantic Answer: {answer.text}")
    print(f"Semantic Answer Score: {answer.score}\n")

for result in results:
    print(f"Title: {result['title']}")
    print(f"URL: {result['url']}")
    captions = result["@search.captions"]
    if captions:
        caption = captions[0]
        if caption.highlights:
            print(f"Caption: {caption.highlights}\n")
        else:
            print(f"Caption: {caption.text}\n")
https://cookbook.openai.com/examples/vector_databases/azuresearch/getting_started_with_azure_cognitive_search_and_openai
Using Tair as a vector database for OpenAI embeddings
This notebook guides you step by step on using Tair as a vector database for OpenAI embeddings.
https://cookbook.openai.com/examples/vector_databases/tair/getting_started_with_tair_and_openai
Using precomputed embeddings created by OpenAI API
Using precomputed embeddings created by OpenAI API. Storing the embeddings in a cloud instance of Tair.
https://cookbook.openai.com/examples/vector_databases/tair/getting_started_with_tair_and_openai
Converting raw text query to an embedding with OpenAI API
Converting raw text query to an embedding with OpenAI API. Using Tair to perform the nearest neighbor search in the created collection.
https://cookbook.openai.com/examples/vector_databases/tair/getting_started_with_tair_and_openai
What is Tair
Tair is a cloud-native in-memory database service that is developed by Alibaba Cloud. Tair is compatible with open-source Redis and provides a variety of data models and enterprise-class capabilities to support your real-time online scenarios. Tair also introduces persistent memory-optimized instances that are based on the new non-volatile memory (NVM) storage medium. These instances can reduce costs by 30%, ensure data persistence, and provide almost the same performance as in-memory databases. Tair has been widely used in areas such as government affairs, finance, manufacturing, healthcare, and pan-Internet to meet their high-speed query and computing requirements.
https://cookbook.openai.com/examples/vector_databases/tair/getting_started_with_tair_and_openai
TairVector is an in-house data structure
TairVector is an in-house data structure that provides high-performance real-time storage and retrieval of vectors. TairVector provides two indexing algorithms: Hierarchical Navigable Small World (HNSW) and Flat Search. Additionally, TairVector supports multiple distance functions, such as Euclidean distance, inner product, and Jaccard distance. Compared with traditional vector retrieval services, TairVector has the following advantages:
https://cookbook.openai.com/examples/vector_databases/tair/getting_started_with_tair_and_openai
Deployment options
Deployment options: use the Tair Cloud Vector Database. Click here to deploy it quickly.
https://cookbook.openai.com/examples/vector_databases/tair/getting_started_with_tair_and_openai
Prerequisites
Prerequisites: For the purposes of this exercise, we need to prepare a couple of things:
- A Tair cloud server instance.
- The 'tair' library to interact with the Tair database.
- An OpenAI API key.
Then, install the requirements.
https://cookbook.openai.com/examples/vector_databases/tair/getting_started_with_tair_and_openai
Install requirements
Install requirements: This notebook requires the openai and tair packages, plus a few additional libraries we will use. The following command installs them all:

! pip install openai redis tair pandas wget
https://cookbook.openai.com/examples/vector_databases/tair/getting_started_with_tair_and_openai
Prepare your OpenAI API key
Prepare your OpenAI API key: The OpenAI API key is used for vectorization of the documents and queries. If you don't have an OpenAI API key, you can get one from https://beta.openai.com/account/api-keys. Once you have your key, add it via getpass:

import getpass
import openai

openai.api_key = getpass.getpass('Input your OpenAI API key:')
https://cookbook.openai.com/examples/vector_databases/tair/getting_started_with_tair_and_openai
Connect to Tair
Connect to Tair: First, add the Tair URL to your environment variables. Connecting to a running Tair server instance is easy with the official Python library.

# The format of the URL: redis://[[username]:[password]]@localhost:6379/0
TAIR_URL = getpass.getpass('Input your tair URL:')

from tair import Tair as TairClient

url = TAIR_URL
client = TairClient.from_url(url)

We can test the connection with a ping:

client.ping()
https://cookbook.openai.com/examples/vector_databases/tair/getting_started_with_tair_and_openai
Load data
Load data: In this section, we are going to load the data prepared previously for this session, so you don't have to recompute the embeddings of Wikipedia articles with your own credits.

import pandas as pd
from ast import literal_eval

# Path to your local CSV file
csv_file_path = '../../data/vector_database_wikipedia_articles_embedded.csv'
article_df = pd.read_csv(csv_file_path)

# Read vectors from strings back into a list
article_df['title_vector'] = article_df.title_vector.apply(literal_eval).values
article_df['content_vector'] = article_df.content_vector.apply(literal_eval).values
https://cookbook.openai.com/examples/vector_databases/tair/getting_started_with_tair_and_openai
Create Index
Create Index: Tair stores data in indexes where each object is described by one key. Each key contains a vector and multiple attribute_keys. We will start by creating two indexes, one for title_vector and one for content_vector, and then fill them with our precomputed embeddings.

# Set index parameters
index = 'openai_test'
embedding_dim = 1536
distance_type = 'L2'
index_type = 'HNSW'
data_type = 'FLOAT32'

# Create two indexes, one for title_vector and one for content_vector; skip if they already exist
index_names = [index + '_title_vector', index + '_content_vector']
for index_name in index_names:
    index_connection = client.tvs_get_index(index_name)
    if index_connection is not None:
        print('Index already exists')
    else:
        client.tvs_create_index(name=index_name, dim=embedding_dim,
                                distance_type=distance_type,
                                index_type=index_type,
                                data_type=data_type)
https://cookbook.openai.com/examples/vector_databases/tair/getting_started_with_tair_and_openai
Search data
Search data: Once the data is in Tair, we can start querying the collection for the closest vectors. We may provide an additional parameter vector_name to switch from title-based to content-based search. Since the precomputed embeddings were created with the text-embedding-ada-002 OpenAI model, we also have to use it during search.

import openai
import numpy as np

def query_tair(client, query, vector_name='title_vector', top_k=5):
    # Create an embedding vector from the user query
    embedded_query = openai.Embedding.create(
        input=query,
        model='text-embedding-ada-002',
    )['data'][0]['embedding']
    embedded_query = np.array(embedded_query)

    # Search for the top k approximate nearest neighbors of the vector in an index
    query_result = client.tvs_knnsearch(index=index + '_' + vector_name,
                                        k=top_k, vector=embedded_query)
    return query_result

query_result = query_tair(client=client, query='modern art in Europe', vector_name='title_vector')
for i in range(len(query_result)):
    title = client.tvs_hmget(index + '_' + 'content_vector',
                             query_result[i][0].decode('utf-8'), 'title')
    print(f"{i + 1}. {title[0].decode('utf-8')} (Distance: {round(query_result[i][1], 3)})")

# This time we'll query using the content vector
query_result = query_tair(client=client, query='Famous battles in Scottish history', vector_name='content_vector')
for i in range(len(query_result)):
    title = client.tvs_hmget(index + '_' + 'content_vector',
                             query_result[i][0].decode('utf-8'), 'title')
    print(f"{i + 1}. {title[0].decode('utf-8')} (Distance: {round(query_result[i][1], 3)})")
https://cookbook.openai.com/examples/vector_databases/tair/getting_started_with_tair_and_openai
Question Answering with Langchain, Tair and OpenAI
This notebook presents how to implement a Question Answering system with Langchain, Tair as a knowledge base, and OpenAI embeddings. If you are not familiar with Tair, it's better to check out the Getting_started_with_Tair_and_OpenAI.ipynb notebook first.
https://cookbook.openai.com/examples/vector_databases/tair/qa_with_langchain_tair_and_openai
Calculating the embeddings with OpenAI API
This notebook presents an end-to-end process of:
- Calculating the embeddings with the OpenAI API.
- Storing the embeddings in a Tair instance to build a knowledge base.
- Converting a raw text query to an embedding with the OpenAI API.
- Using Tair to perform a nearest neighbour search in the created collection to find some context.
- Asking an LLM to find the answer in the given context.
All the steps are simplified to calling corresponding Langchain methods.
https://cookbook.openai.com/examples/vector_databases/tair/qa_with_langchain_tair_and_openai
Prerequisites
For the purposes of this exercise, we need to prepare a couple of things: a Tair cloud instance, Langchain as a framework, and an OpenAI API key.

Install requirements: This notebook requires the following Python packages: openai, tiktoken, langchain, and tair.
- openai provides convenient access to the OpenAI API.
- tiktoken is a fast BPE tokeniser for use with OpenAI's models.
- langchain helps us build applications with LLMs more easily.
- tair is used to interact with the Tair vector database.

! pip install openai tiktoken langchain tair
https://cookbook.openai.com/examples/vector_databases/tair/qa_with_langchain_tair_and_openai
Prepare your OpenAI API key
The OpenAI API key is used for vectorization of the documents and queries. If you don't have an OpenAI API key, you can get one from https://platform.openai.com/account/api-keys. Once you have your key, add it via getpass:

import getpass

openai_api_key = getpass.getpass("Input your OpenAI API key:")
https://cookbook.openai.com/examples/vector_databases/tair/qa_with_langchain_tair_and_openai
Prepare your Tair URL
To build the Tair connection, you need to have TAIR_URL.

# The format of the URL: redis://[[username]:[password]]@localhost:6379/0
TAIR_URL = getpass.getpass("Input your tair url:")
https://cookbook.openai.com/examples/vector_databases/tair/qa_with_langchain_tair_and_openai
Load data
In this section, we are going to load the data containing some natural questions and answers to them. All the data will be used to create a Langchain application with Tair being the knowledge base.

import json
import wget

# All the examples come from https://ai.google.com/research/NaturalQuestions
# This is a sample of the training set that we download and extract for some
# further processing.
wget.download("https://storage.googleapis.com/dataset-natural-questions/questions.json")
wget.download("https://storage.googleapis.com/dataset-natural-questions/answers.json")

with open("questions.json", "r") as fp:
    questions = json.load(fp)
with open("answers.json", "r") as fp:
    answers = json.load(fp)

print(questions[0])
print(answers[0])
https://cookbook.openai.com/examples/vector_databases/tair/qa_with_langchain_tair_and_openai
Chain definition
Langchain is already integrated with Tair and performs all the indexing for a given list of documents. In our case, we are going to store the set of answers we have.

from langchain.vectorstores import Tair
from langchain.embeddings import OpenAIEmbeddings
from langchain import VectorDBQA, OpenAI

embeddings = OpenAIEmbeddings(openai_api_key=openai_api_key)

doc_store = Tair.from_texts(
    texts=answers,
    embedding=embeddings,
    tair_url=TAIR_URL,
)

At this stage, all the possible answers are already stored in Tair, so we can define the whole QA chain.

llm = OpenAI(openai_api_key=openai_api_key)
qa = VectorDBQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    vectorstore=doc_store,
    return_source_documents=False,
)
https://cookbook.openai.com/examples/vector_databases/tair/qa_with_langchain_tair_and_openai
Search data
Once the data is put into Tair, we can start asking some questions. A question will be automatically vectorized by the OpenAI model, and the created vector will be used to find some possibly matching answers in Tair. Once retrieved, the most similar answers will be incorporated into the prompt sent to the OpenAI Large Language Model.

import random
import time

random.seed(52)
selected_questions = random.choices(questions, k=5)

for question in selected_questions:
    print(">", question)
    print(qa.run(question), end="\n\n")
    # wait 20 seconds because of the rate limit
    time.sleep(20)
https://cookbook.openai.com/examples/vector_databases/tair/qa_with_langchain_tair_and_openai
Custom prompt templates
The stuff chain type in Langchain uses a specific prompt with the question and context documents incorporated. This is what the default prompt looks like:

Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.

{context}

Question: {question}
Helpful Answer:

We can, however, provide our own prompt template and change the behavior of the OpenAI LLM, while still using the stuff chain type. It is important to keep {context} and {question} as placeholders.

Experimenting with custom prompts: We can try using a different prompt template, so the model:
- Responds with a single-sentence answer if it knows it.
- Suggests a random song title if it doesn't know the answer to our question.
https://cookbook.openai.com/examples/vector_databases/tair/qa_with_langchain_tair_and_openai
Experimenting with custom prompts
from langchain.prompts import PromptTemplate

custom_prompt = """
Use the following pieces of context to answer the question at the end. Please provide
a short single-sentence summary answer only. If you don't know the answer or if it's
not present in given context, don't try to make up an answer, but suggest me a random
unrelated song title I could listen to.
Context: {context}
Question: {question}
Helpful Answer:
"""

custom_prompt_template = PromptTemplate(
    template=custom_prompt, input_variables=["context", "question"]
)

custom_qa = VectorDBQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    vectorstore=doc_store,
    return_source_documents=False,
    chain_type_kwargs={"prompt": custom_prompt_template},
)

random.seed(41)
for question in random.choices(questions, k=5):
    print(">", question)
    print(custom_qa.run(question), end="\n\n")
    # wait 20 seconds because of the rate limit
    time.sleep(20)
https://cookbook.openai.com/examples/vector_databases/tair/qa_with_langchain_tair_and_openai
CQL Version
In this quickstart you will learn how to build a "philosophy quote finder & generator" using OpenAI's vector embeddings and DataStax Astra DB (or a vector-capable Apache Cassandra® cluster, if you prefer) as the vector store for data persistence. The basic workflow of this notebook is outlined below. You will evaluate and store the vector embeddings for a number of quotes by famous philosophers, use them to build a powerful search engine and, after that, even a generator of new quotes!
https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql
Choose-your-framework
Please note that this notebook uses the Cassandra drivers and runs CQL (Cassandra Query Language) statements directly, but we cover other choices of technology to accomplish the same task. Check out this folder's README for other options. This notebook can run either as a Colab notebook or as a regular Jupyter notebook.
https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql
Setup
First install some required packages:

!pip install cassandra-driver openai
https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql
Get DB connection
A couple of secrets are required to create a Session object (a connection to your Astra DB instance). (Note: some steps will be slightly different on Google Colab and on local Jupyter, that's why the notebook will detect the runtime type.)
https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql
Creation of the DB connection
This is how you create a connection to Astra DB. (Incidentally, you could also use any Cassandra cluster, as long as it provides vector capabilities, just by changing the parameters of the following Cluster instantiation.)
https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql
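A minimal connection sketch, assuming key-based Astra DB credentials (the bundle path, token, and keyspace variables are placeholders you would fill in):

from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

# Placeholders: path to your secure-connect bundle and an Astra DB application token.
ASTRA_DB_SECURE_BUNDLE_PATH = "/path/to/secure-connect-bundle.zip"
ASTRA_DB_APPLICATION_TOKEN = "AstraCS:..."
keyspace = "my_keyspace"  # the keyspace used by the CQL statements below

cluster = Cluster(
    cloud={"secure_connect_bundle": ASTRA_DB_SECURE_BUNDLE_PATH},
    auth_provider=PlainTextAuthProvider("token", ASTRA_DB_APPLICATION_TOKEN),
)
session = cluster.connect()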
Creation of the Vector table in CQL
You need a table which supports vectors and is equipped with metadata. Call it "philosophers_cql". Each row will store: a quote, its vector embedding, the quote author and a set of "tags". You also need a primary key to ensure uniqueness of rows. The following is the full CQL command that creates the table (check out this page for more on the CQL syntax of this and the following statements):
https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql
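The statement itself is not reproduced in this excerpt; a plausible sketch follows (column names other than embedding_vector, which is referenced in the next step, are assumptions; 1536 is the text-embedding-ada-002 dimension):

session.execute(f"""CREATE TABLE IF NOT EXISTS {keyspace}.philosophers_cql (
    quote_id UUID PRIMARY KEY,
    body TEXT,
    embedding_vector VECTOR<FLOAT, 1536>,
    author TEXT,
    tags SET<TEXT>
);""")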
Add a vector index for ANN search
In order to run ANN (approximate-nearest-neighbor) searches on the vectors in the table, you need to create a specific index on the embedding_vector column. When creating the index, you can optionally choose the "similarity function" used to compute vector distances: since for unit-length vectors (such as those from OpenAI) the "cosine difference" is the same as the "dot product", you'll use the latter which is computationally less expensive. Run this CQL statement:
https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql
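The statement is omitted from this excerpt; a sketch using the standard storage-attached-index (SAI) syntax, with dot_product as reasoned above (the index name is an assumption):

session.execute(f"""CREATE CUSTOM INDEX IF NOT EXISTS idx_embedding_vector
    ON {keyspace}.philosophers_cql (embedding_vector)
    USING 'org.apache.cassandra.index.sai.StorageAttachedIndex'
    WITH OPTIONS = {{'similarity_function': 'dot_product'}};""")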
Add indexes for author and tag filtering
That is enough to run vector searches on the table ... but you want to be able to optionally specify an author and/or some tags to restrict the quote search. Create two other indexes to support this:
https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql
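A sketch of the two filtering indexes (the index names are assumptions; the tags column, being a set, is indexed on its values):

session.execute(f"""CREATE CUSTOM INDEX IF NOT EXISTS idx_author
    ON {keyspace}.philosophers_cql (author)
    USING 'org.apache.cassandra.index.sai.StorageAttachedIndex';""")
session.execute(f"""CREATE CUSTOM INDEX IF NOT EXISTS idx_tags
    ON {keyspace}.philosophers_cql (VALUES(tags))
    USING 'org.apache.cassandra.index.sai.StorageAttachedIndex';""")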
Connect to OpenAI
Set up your secret key:

OPENAI_API_KEY = getpass("Please enter your OpenAI API Key: ")

import openai
openai.api_key = OPENAI_API_KEY

A test call for embeddings: quickly check how one can get the embedding vectors for a list of input texts.

embedding_model_name = "text-embedding-ada-002"

result = openai.Embedding.create(
    input=[
        "This is a sentence",
        "A second sentence"
    ],
    engine=embedding_model_name,
)

print(f"len(result.data) = {len(result.data)}")
print(f"result.data[1].embedding = {str(result.data[1].embedding)[:55]}...")
print(f"len(result.data[1].embedding) = {len(result.data[1].embedding)}")
https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql
Load quotes into the Vector Store
Get a JSON file containing our quotes. We already prepared this collection and put it into this repo for quick loading. (Note: we adapted the following from a Kaggle dataset -- which we acknowledge -- and also added a few tags to each quote.)
https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql
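A sketch of loading the file once obtained (the local filename is an assumption; adjust to wherever you saved the quotes JSON):

import json

# Assumption: the quotes JSON from the repo has been saved locally as "philo_quotes.json".
with open("philo_quotes.json", "r") as f:
    quote_dict = json.load(f)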
A quick inspection of the input data structure:
print(quote_dict["source"])

total_quotes = sum(len(quotes) for quotes in quote_dict["quotes"].values())
print(f"\nQuotes loaded: {total_quotes}. By author:")
print("\n".join(f"    {author} ({len(quotes)})" for author, quotes in quote_dict["quotes"].items()))

print("\nSome examples:")
for author, quotes in list(quote_dict["quotes"].items())[:2]:
    print(f"  {author}:")
    for quote in quotes[:2]:
        print(f"    {quote['body'][:50]} ... (tags: {', '.join(quote['tags'])})")
https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql
Insert quotes into vector store
You will compute the embeddings for the quotes and save them into the Vector Store, along with the text itself and the metadata planned for later use. To optimize speed and reduce the number of API calls, you'll send batched requests to the OpenAI embedding service, with one batch per author. The DB write is accomplished with a CQL statement. But since you'll run this particular insertion several times (albeit with different values), it's best to prepare the statement and then just run it over and over. (Note: for faster execution, the Cassandra drivers would let you do concurrent inserts, which we don't do here for more straightforward demo code.)
https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql
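A sketch of that prepare-once, execute-many pattern, with one embeddings call per author's batch of quotes (column names follow the table sketch above; all identifiers are illustrative):

from uuid import uuid4

# Prepare the INSERT once; the prepared statement is then executed
# repeatedly with different bound values.
prepared_insertion = session.prepare(
    f"INSERT INTO {keyspace}.philosophers_cql "
    "(quote_id, author, body, embedding_vector, tags) VALUES (?, ?, ?, ?, ?)"
)

for author, quotes in quote_dict["quotes"].items():
    # One batched embeddings call per author keeps API round-trips low.
    result = openai.Embedding.create(
        input=[quote["body"] for quote in quotes],
        engine=embedding_model_name,
    )
    for quote, emb in zip(quotes, result.data):
        session.execute(
            prepared_insertion,
            (uuid4(), author, quote["body"], emb.embedding, set(quote["tags"])),
        )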
Use case 1: quote search engine
For the quote-search functionality, you first need to turn the input quote into a vector, and then use it to query the store (besides passing the optional metadata into the search call, that is). Encapsulate the search-engine functionality into a function for ease of re-use:
https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql
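The function body is not included in this excerpt; the following sketch is consistent with the calls shown below, using the standard CQL ANN ORDER BY clause (the filter handling is deliberately naive, demo only):

def find_quote_and_author(query_quote, n, author=None, tags=None):
    # Vectorize the query with the same model used at insertion time.
    query_vector = openai.Embedding.create(
        input=[query_quote],
        engine=embedding_model_name,
    ).data[0].embedding

    # Optional metadata filters (plain string interpolation, no escaping).
    clauses = []
    if author:
        clauses.append(f"author = '{author}'")
    for tag in tags or []:
        clauses.append(f"tags CONTAINS '{tag}'")
    where_cql = ("WHERE " + " AND ".join(clauses)) if clauses else ""

    rows = session.execute(
        f"SELECT body, author FROM {keyspace}.philosophers_cql "
        f"{where_cql} ORDER BY embedding_vector ANN OF %s LIMIT %s",
        (query_vector, n),
    )
    return [(row.body, row.author) for row in rows]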
Putting search to test
Passing just a quote:

find_quote_and_author("We struggle all our life for nothing", 3)
https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql
Chunk 1
find_quote_and_author("We struggle all our life for nothing", 2, author="nietzsche")

Search constrained to a tag (out of those saved earlier with the quotes):
https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql
Chunk 2
quote = "Animals are our equals."
# quote = "Be good."
# quote = "This teapot is strange."

similarity_threshold = 0.9

quote_vector = openai.Embedding.create(
    input=[quote],
    engine=embedding_model_name,
).data[0].embedding
https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql
Chunk 3
Use case 2: quote generator
For this task you need another component from OpenAI, namely an LLM to generate the quote for us (based on input obtained by querying the Vector Store). You also need a template for the prompt that will be filled for the generate-quote LLM completion task:
https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql
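A sketch of what the template and the generator function might look like (the wording, model name, and parameters are assumptions; the shape — retrieved quotes plus a topic filled into placeholders — is the point):

completion_model_name = "gpt-3.5-turbo"  # assumed chat model

generation_prompt_template = """Generate a single short philosophical quote on the given topic,
similar in spirit and form to the provided example quotes.
Do not exceed 20-30 words in your quote.

REFERENCE TOPIC: "{topic}"

ACTUAL EXAMPLES:
{examples}
"""

def generate_quote(topic, n=2, author=None, tags=None):
    # Retrieve similar quotes from the vector store as inspiration.
    quotes = find_quote_and_author(topic, n, author=author, tags=tags)
    prompt = generation_prompt_template.format(
        topic=topic,
        examples="\n".join(f"  - {body}" for body, _ in quotes),
    )
    response = openai.ChatCompletion.create(
        model=completion_model_name,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
        max_tokens=320,
    )
    return response.choices[0].message.content.strip()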
Chunk 4
q_topic = generate_quote("politics and virtue")
print("\nA new generated quote:")
print(q_topic)

Use inspiration from just a single philosopher:

q_topic = generate_quote("animals", author="schopenhauer")
print("\nA new generated quote:")
print(q_topic)
https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql
Chunk 5
(Optional) Partitioning
There's an interesting topic to examine before completing this quickstart. While, generally, tags and quotes can be in any relationship (e.g. a quote having multiple tags), authors are effectively an exact grouping (they define a "disjoint partitioning" on the set of quotes): each quote has exactly one author (for us, at least).
https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql
Chunk 6
Now, suppose you know in advance your application will usually (or always) run queries on a single author. Then you can take full advantage of the underlying database structure: if you group quotes in partitions (one per author), vector queries on just an author will use less resources and return much faster.
https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql
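A sketch of the partitioned variant (the table name matches the one dropped in the cleanup cell below; making author the partition key is the whole idea):

session.execute(f"""CREATE TABLE IF NOT EXISTS {keyspace}.philosophers_cql_partitioned (
    author TEXT,
    quote_id UUID,
    body TEXT,
    embedding_vector VECTOR<FLOAT, 1536>,
    tags SET<TEXT>,
    PRIMARY KEY ((author), quote_id)
);""")

Queries restricted to a single author then hit one partition instead of scanning the whole table.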
Chunk 7
Conclusion
Congratulations! You have learned how to use OpenAI for vector embeddings and Astra DB / Cassandra for storage in order to build a sophisticated philosophical search engine and quote generator.
https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql
Chunk 8
Cleanup
If you want to remove all resources used for this demo, run this cell (warning: this will delete the tables and the data inserted in them!):

session.execute(f"DROP TABLE IF EXISTS {keyspace}.philosophers_cql;")
session.execute(f"DROP TABLE IF EXISTS {keyspace}.philosophers_cql_partitioned;")
https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/philosophical_quotes_cql
Cassandra / Astra DB - Chunk 1
The example notebooks in this directory show how to use the Vector Search capabilities available today in DataStax Astra DB, a serverless Database-as-a-Service built on Apache Cassandra®.
https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/readme
Cassandra / Astra DB - Chunk 2
Moreover, support for vector-oriented workloads is making its way into the next major release of Cassandra; the code examples in this folder are designed to work equally well on it as soon as the vector capabilities are released.
https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/readme
Cassandra / Astra DB - Chunk 3
If you want to know more about Astra DB and its Vector Search capabilities, head over to astra.datastax.com or try out one of these hands-on notebooks straight away:
https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/readme
Cassandra / Astra DB - Chunk 4
Search/generate quotes: CassIO colab url: https://colab.research.google.com/github/openai/openai-cookbook/blob/main/examples/vector_databases/cassandra_astradb/Philosophical_Quotes_cassIO.ipynb#scrollTo=08435bae-1bb9-4c14-ba21-7b4a7bdee3f5
https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/readme
Cassandra / Astra DB - Chunk 5
Plain Cassandra colab url: https://colab.research.google.com/github/openai/openai-cookbook/blob/main/examples/vector_databases/cassandra_astradb/Philosophical_Quotes_CQL.ipynb
https://cookbook.openai.com/examples/vector_databases/cassandra_astradb/readme
Retrieval augmented generation using Elasticsearch and OpenAI - Part 1
This notebook demonstrates how to:
- Index the OpenAI Wikipedia vector dataset into Elasticsearch
- Embed a question with the OpenAI embeddings endpoint
- Perform semantic search on the Elasticsearch index using the encoded question
- Send the top search results to the OpenAI Chat Completions API endpoint for retrieval augmented generation (RAG)

ℹ️ If you've already worked through our semantic search notebook, you can skip ahead to the final step!
https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation
Retrieval augmented generation using Elasticsearch and OpenAI - Part 2
Install packages and import modules

# install packages
!python3 -m pip install -qU openai pandas wget elasticsearch

# import modules
from getpass import getpass
from elasticsearch import Elasticsearch, helpers
import wget
import zipfile
import pandas as pd
import json
import openai
https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation
Retrieval augmented generation using Elasticsearch and OpenAI - Part 3
Connect to Elasticsearch
ℹ️ We're using an Elastic Cloud deployment of Elasticsearch for this notebook. If you don't already have an Elastic deployment, you can sign up for a free Elastic Cloud trial. To connect to Elasticsearch, you need to create a client instance with the Cloud ID and password for your deployment.
https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation
Retrieval augmented generation using Elasticsearch and OpenAI - Part 4
Find the Cloud ID for your deployment by going to https://cloud.elastic.co/deployments and selecting your deployment.

CLOUD_ID = getpass("Elastic deployment Cloud ID")
CLOUD_PASSWORD = getpass("Elastic deployment Password")

client = Elasticsearch(
    cloud_id=CLOUD_ID,
    basic_auth=("elastic", CLOUD_PASSWORD)  # Alternatively use `api_key` instead of `basic_auth`
)

# Test connection to Elasticsearch
print(client.info())
https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation
Retrieval augmented generation using Elasticsearch and OpenAI - Part 5
Download the dataset
In this step we download the OpenAI Wikipedia embeddings dataset and extract the zip file.

embeddings_url = 'https://cdn.openai.com/API/examples/data/vector_database_wikipedia_articles_embedded.zip'
wget.download(embeddings_url)
https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation
Retrieval augmented generation using Elasticsearch and OpenAI - Part 6
with zipfile.ZipFile("vector_database_wikipedia_articles_embedded.zip", "r") as zip_ref:
    zip_ref.extractall("data")
https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation
Retrieval augmented generation using Elasticsearch and OpenAI - Part 7
Read CSV file into a Pandas DataFrame. Next we use the Pandas library to read the unzipped CSV file into a DataFrame. This step makes it easier to index the data into Elasticsearch in bulk.

wikipedia_dataframe = pd.read_csv("data/vector_database_wikipedia_articles_embedded.csv")
https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation
Retrieval augmented generation using Elasticsearch and OpenAI - Part 8
Create index with mapping
Now we need to create an Elasticsearch index with the necessary mappings. This will enable us to index the data into Elasticsearch.
https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation
Retrieval augmented generation using Elasticsearch and OpenAI - Part 9
We use the dense_vector field type for the title_vector and content_vector fields. This is a special field type that allows us to store dense vectors in Elasticsearch. Later, we'll need to target the dense_vector field for kNN search.

index_mapping = {
    "properties": {
        "title_vector": {
            "type": "dense_vector",
            "dims": 1536,
            "index": "true",
            "similarity": "cosine"
        },
        "content_vector": {
            "type": "dense_vector",
            "dims": 1536,
            "index": "true",
            "similarity": "cosine"
        },
        "text": {"type": "text"},
        "title": {"type": "text"},
        "url": {"type": "keyword"},
        "vector_id": {"type": "long"}
    }
}

client.indices.create(index="wikipedia_vector_index", mappings=index_mapping)
https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation
Retrieval augmented generation using Elasticsearch and OpenAI - Part 10
Index data into Elasticsearch
The following function generates the required bulk actions that can be passed to Elasticsearch's Bulk API, so we can index multiple documents efficiently in a single request. For each row in the DataFrame, the function yields a dictionary representing a single document to be indexed.
https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation
Retrieval augmented generation using Elasticsearch and OpenAI - Part 11
def dataframe_to_bulk_actions(df):
    for index, row in df.iterrows():
        yield {
            "_index": 'wikipedia_vector_index',
            "_id": row['id'],
            "_source": {
                'url': row["url"],
                'title': row["title"],
                'text': row["text"],
                'title_vector': json.loads(row["title_vector"]),
                'content_vector': json.loads(row["content_vector"]),
                'vector_id': row["vector_id"]
            }
        }
https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation
Retrieval augmented generation using Elasticsearch and OpenAI - Part 12
As the dataframe is large, we will index data in batches of 100. We index the data into Elasticsearch using the Python client's helpers for the bulk API.

start = 0
end = len(wikipedia_dataframe)
batch_size = 100
for batch_start in range(start, end, batch_size):
    batch_end = min(batch_start + batch_size, end)
    batch_dataframe = wikipedia_dataframe.iloc[batch_start:batch_end]
    actions = dataframe_to_bulk_actions(batch_dataframe)
    helpers.bulk(client, actions)
https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation
Retrieval augmented generation using Elasticsearch and OpenAI - Part 13
Let's test the index with a simple match query.

print(client.search(index="wikipedia_vector_index", body={
    "_source": {
        "excludes": ["title_vector", "content_vector"]
    },
    "query": {
        "match": {
            "text": {
                "query": "Hummingbird"
            }
        }
    }
}))
https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation
Retrieval augmented generation using Elasticsearch and OpenAI - Part 14
Encode a question with the OpenAI embedding model
To perform kNN search, we need to encode queries with the same embedding model used to encode the documents at index time. In this example, we need to use the text-embedding-ada-002 model.
https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation
Retrieval augmented generation using Elasticsearch and OpenAI - Part 15
You'll need your OpenAI API key to generate the embeddings.

# Get OpenAI API key
OPENAI_API_KEY = getpass("Enter OpenAI API key")

# Set API key
openai.api_key = OPENAI_API_KEY

# Define model
EMBEDDING_MODEL = "text-embedding-ada-002"
https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation
Retrieval augmented generation using Elasticsearch and OpenAI - Part 16
# Define question
question = 'Is the Atlantic the biggest ocean in the world?'

# Create embedding
question_embedding = openai.Embedding.create(input=question, model=EMBEDDING_MODEL)
https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation
Retrieval augmented generation using Elasticsearch and OpenAI - Part 17
Run semantic search queries
Now we're ready to run queries against our Elasticsearch index using our encoded question. We'll be doing a k-nearest neighbors search, using the Elasticsearch kNN query option.
https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation
Retrieval augmented generation using Elasticsearch and OpenAI - Part 18
First, we define a small function to pretty print the results.

# Function to pretty print Elasticsearch results
def pretty_response(response):
    for hit in response['hits']['hits']:
        id = hit['_id']
        score = hit['_score']
        title = hit['_source']['title']
        text = hit['_source']['text']
        pretty_output = (f"\nID: {id}\nTitle: {title}\nSummary: {text}\nScore: {score}")
        print(pretty_output)
https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation
Retrieval augmented generation using Elasticsearch and OpenAI - Part 19
Now let's run our kNN query.

response = client.search(
    index="wikipedia_vector_index",
    knn={
        "field": "content_vector",
        "query_vector": question_embedding["data"][0]["embedding"],
        "k": 10,
        "num_candidates": 100
    }
)
pretty_response(response)
https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation
Retrieval augmented generation using Elasticsearch and OpenAI - Part 20
top_hit_summary = response['hits']['hits'][0]['_source']['text']  # Store content of top hit for final step

Success! We've used kNN to perform semantic search over our dataset and found the top results.
https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation
Retrieval augmented generation using Elasticsearch and OpenAI - Part 21
Now we can use the Chat Completions API to work some generative AI magic using the top search result as additional context.
https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation
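A sketch of that final step (the prompt wording and model choice are assumptions; question and top_hit_summary come from the previous steps):

summary = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Answer the following question: "
            + question
            + "\nby using the following text:\n"
            + top_hit_summary},
    ],
)
print(summary["choices"][0]["message"]["content"])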