oci_generative_ai
Oracle Cloud Infrastructure Generative AI
Oracle Cloud Infrastructure (OCI) Generative AI is a fully managed service that provides a set of state-of-the-art, customizable large language models (LLMs) covering a wide range of use cases, all available through a single API. Using the OCI Generative AI service, you can access ready-to-use pretrained models, or create and host your own fine-tuned custom models based on your own data on dedicated AI clusters. Detailed documentation of the service and API is available here and here.
This notebook explains how to use OCI's Generative AI models with LangChain.
Prerequisites
We will need to install the oci SDK:
!pip install -U oci
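The retrieval example at the end of this notebook also uses the FAISS vector store; to run it, install the faiss-cpu package as well (an optional dependency, not needed for the basic LLM calls):
!pip install -U faiss-cpu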
OCI Generative AI API endpoint
The service endpoint is region-specific; the examples in this notebook use the Chicago region:
https://inference.generativeai.us-chicago-1.oci.oraclecloud.com
Authentication
The authentication methods supported for this LangChain integration are:
- API Key
- Session token
- Instance principal
- Resource principal
These follow the standard SDK authentication methods detailed here; a sketch of the non-default options appears below.
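For example, to authenticate with an instance principal instead of a local API key, set the auth_type parameter accordingly (a minimal sketch reusing the placeholder model and compartment IDs from the examples below; the same pattern applies to resource principals):
from langchain_community.llms import OCIGenAI

# Instance principal auth needs no local OCI config file; it relies on the
# dynamic-group policies granted to the compute instance running this code.
llm = OCIGenAI(
    model_id="MY_MODEL",
    service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
    compartment_id="MY_OCID",
    auth_type="INSTANCE_PRINCIPAL",  # or "RESOURCE_PRINCIPAL"
)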
Usage
from langchain_community.llms import OCIGenAI
# use the default authentication method (API Key)
llm = OCIGenAI(
model_id="MY_MODEL",
service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
compartment_id="MY_OCID",
)
response = llm.invoke("Tell me one fact about earth", temperature=0.7)
print(response)
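OCIGenAI implements LangChain's standard Runnable interface, so you can also send several prompts in one call with batch (a minimal sketch reusing the llm object above):
# batch() runs invoke() over each input, concurrently where possible
responses = llm.batch(
    [
        "Tell me one fact about earth",
        "Tell me one fact about the moon",
    ]
)
for r in responses:
    print(r)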
from langchain.chains import LLMChain
from langchain_core.prompts import PromptTemplate
# Use a session token to authenticate
llm = OCIGenAI(
model_id="MY_MODEL",
service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
compartment_id="MY_OCID",
auth_type="SECURITY_TOKEN",
auth_profile="MY_PROFILE", # replace with your profile name
model_kwargs={"temperature": 0.7, "top_p": 0.75, "max_tokens": 200},
)
prompt = PromptTemplate(input_variables=["query"], template="{query}")
llm_chain = LLMChain(llm=llm, prompt=prompt)
response = llm_chain.invoke("what is the capital of france?")
print(response)
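LLMChain is a legacy interface; the same chain can be expressed with the LangChain Expression Language (LCEL) by piping the prompt into the model, which is the pattern the retrieval example below also uses:
# Equivalent LCEL chain: the prompt output feeds directly into the LLM
chain = prompt | llm
response = chain.invoke({"query": "what is the capital of france?"})
print(response)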
from langchain_community.embeddings import OCIGenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
embeddings = OCIGenAIEmbeddings(
model_id="MY_EMBEDDING_MODEL",
service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
compartment_id="MY_OCID",
)
vectorstore = FAISS.from_texts(
[
"Larry Ellison co-founded Oracle Corporation in 1977 with Bob Miner and Ed Oates.",
"Oracle Corporation is an American multinational computer technology company headquartered in Austin, Texas, United States.",
],
embedding=embeddings,
)
retriever = vectorstore.as_retriever()
template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
prompt = PromptTemplate.from_template(template)
llm = OCIGenAI(
model_id="MY_MODEL",
service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
compartment_id="MY_OCID",
)
chain = (
{"context": retriever, "question": RunnablePassthrough()}
| prompt
| llm
| StrOutputParser()
)
print(chain.invoke("when was oracle founded?"))
print(chain.invoke("where is oracle headquartered?"))