OpenAI
You are currently on a page documenting the use of OpenAI text completion models. The latest and most popular OpenAI models are chat completion models.
Unless you are specifically using gpt-3.5-turbo-instruct, you are probably looking for this page instead.
OpenAI offers a spectrum of models with different levels of power suitable for different tasks.
This example goes over how to use LangChain to interact with OpenAI models.
Overview
Integration details
Class | Package | Local | Serializable | JS support | Package downloads | Package latest |
---|---|---|---|---|---|---|
OpenAI | langchain-openai | ❌ | beta | ✅ | | |
Setup
To access OpenAI models you'll need to create an OpenAI account, get an API key, and install the langchain-openai integration package.
Credentials
Head to https://platform.openai.com to sign up for OpenAI and generate an API key. Once you've done this, set the OPENAI_API_KEY environment variable:
```python
import getpass
import os

# Prompt for the key only if it isn't already set in the environment
if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter your OpenAI API key: ")
```
Enter your OpenAI API key: ········
If you want to get automated best-in-class tracing of your model calls, you can also set your LangSmith API key by uncommenting below:
# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
# os.environ["LANGSMITH_TRACING"] = "true"
Installation
The LangChain OpenAI integration lives in the langchain-openai package:
```python
%pip install -qU langchain-openai
```
Should you need to specify your organization ID, you can use the following cell. However, it is not required if you are only part of a single organization or intend to use your default organization. You can check your default organization in your OpenAI account settings.
To specify your organization, you can use this:
```python
# Read the organization ID without echoing it, then expose it to the SDK
OPENAI_ORGANIZATION = getpass.getpass("Enter your OpenAI organization ID: ")
os.environ["OPENAI_ORGANIZATION"] = OPENAI_ORGANIZATION
```
Instantiation
Now we can instantiate our model object and generate completions:
```python
from langchain_openai import OpenAI

llm = OpenAI()
```
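If you want more control over generation, common options such as the model name, sampling temperature, and token limit can be passed at construction time. A minimal sketch, assuming these constructor parameters are supported by your version of langchain-openai; check the API reference linked below for the full list:

```python
from langchain_openai import OpenAI

llm = OpenAI(
    model="gpt-3.5-turbo-instruct",  # completion (non-chat) model
    temperature=0.7,                 # sampling temperature
    max_tokens=256,                  # cap on generated tokens per call
)
```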
Invocation
```python
llm.invoke("Hello how are you?")
```
"\n\nI'm an AI language model created by OpenAI, so I don't have feelings or emotions. But thank you for asking! How can I assist you today?"
Chaining
We can chain our completion model with a prompt template like so:

```python
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template("How to say {input} in {output_language}:\n")

chain = prompt | llm
chain.invoke(
    {
        "output_language": "German",
        "input": "I love programming.",
    }
)
```
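Because the chain is itself a Runnable, you can also run several inputs in a single call with batch. A minimal sketch:

```python
# Run the same prompt/model chain over multiple inputs
chain.batch(
    [
        {"output_language": "German", "input": "I love programming."},
        {"output_language": "French", "input": "Good morning."},
    ]
)
```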
Using a proxy
If you are behind an explicit proxy, you can specify the http_client parameter to route requests through it:
```python
%pip install httpx
```
```python
import httpx

openai = OpenAI(
    model_name="gpt-3.5-turbo-instruct",
    # Recent httpx releases use `proxy=`; older releases used `proxies=`
    http_client=httpx.Client(proxy="http://proxy.yourcompany.com:8080"),
)
```
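For async usage, an equivalent async client can usually be supplied as well. A minimal sketch, assuming the `http_async_client` parameter is available in your version of langchain-openai:

```python
import httpx

from langchain_openai import OpenAI

# `http_async_client` is assumed here; it is only used for async calls such as
# `await openai.ainvoke(...)`, while `http_client` covers synchronous calls.
openai = OpenAI(
    model_name="gpt-3.5-turbo-instruct",
    http_async_client=httpx.AsyncClient(proxy="http://proxy.yourcompany.com:8080"),
)
```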
API reference
For detailed documentation of all OpenAI LLM features and configurations, head to the API reference: https://python.langchain.com/v0.2/api_reference/openai/llms/langchain_openai.llms.base.OpenAI.html
Related
- LLM conceptual guide
- LLM how-to guides