Chat models and prompts: Build a simple LLM application with prompt templates and chat models.
First, install LangChain:
pip install langchain
Then we can add LangSmith to start logging traces of our LLM calls. To do this, we set the LangChain API key and enable LangChain tracing via environment variables.
Once tracing is enabled, each run is logged to LangSmith and we can inspect its trace. The LangSmith trace reports token usage, latency, standard model parameters (such as temperature), and other information.
import getpass
import os
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()
Let's use a large language model. We are using the OpenAI API, so first install the provider-specific LangChain package:
pip install -qU langchain-openai
Then set the OpenAI API key and create a model. ChatOpenAI is a chat model, and chat models are instances of LangChain Runnables.
ChatModels receive message objects as input and generate message objects as output.
import getpass
import os
if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter API key for OpenAI: ")
from langchain_openai import ChatOpenAI
model = ChatOpenAI(model="gpt-4o-mini")
We can use the model directly. To simply call the model, pass a list of messages to its .invoke method.
from langchain_core.messages import HumanMessage, SystemMessage
messages = [
    SystemMessage("Translate the following from English into Italian"),
    HumanMessage("hi!"),
]
model.invoke(messages)
Message objects convey conversational roles and hold important data, such as tool calls and token usage counts.
LangChain also supports several equivalent input formats, shown below:
model.invoke("Hello")
model.invoke([{"role": "user", "content": "Hello"}])
model.invoke([HumanMessage("Hello")])
We can also stream individual tokens from a chat model:
for token in model.stream(messages):
    print(token.content, end="|")
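The streaming pattern itself can be sketched without a live model. Below is a minimal illustration, assuming each streamed chunk exposes a .content attribute; the Chunk class and fake_stream generator are stand-ins for this sketch, not part of LangChain, and no API call is made:

```python
# Sketch of the streaming pattern: iterating yields chunks, and the final
# text is the concatenation of each chunk's content.
class Chunk:
    def __init__(self, content):
        self.content = content

def fake_stream():
    # Pretend the model emits the reply a few characters at a time.
    for piece in ["Ci", "ao", "!"]:
        yield Chunk(piece)

parts = []
for token in fake_stream():
    parts.append(token.content)
    print(token.content, end="|")  # prints: Ci|ao|!|

print()
print("".join(parts))  # prints: Ciao!
```

Accumulating the chunks like this is how you recover the full response while still rendering tokens as they arrive.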
Prompt templates are a concept in LangChain designed to help transform raw user input into a prompt that is ready to pass into a language model.
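Conceptually, a chat prompt template is a list of (role, template-string) pairs whose placeholders get filled from a dictionary. A minimal pure-Python sketch of the idea (an illustration only, not LangChain's actual implementation; format_prompt is a hypothetical helper):

```python
# Illustrative sketch: format each (role, template) pair with the
# user-supplied variables, producing role/content message dicts.
def format_prompt(pairs, variables):
    return [
        {"role": role, "content": template.format(**variables)}
        for role, template in pairs
    ]

pairs = [
    ("system", "Translate the following from English into {language}"),
    ("user", "{text}"),
]
messages = format_prompt(pairs, {"language": "Italian", "text": "hi!"})
print(messages[0]["content"])  # -> Translate the following from English into Italian
print(messages[1])             # -> {'role': 'user', 'content': 'hi!'}
```

LangChain's real ChatPromptTemplate adds validation, message classes, and the Runnable interface on top of this basic substitution step.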
Let's create a prompt template here. It will take in two user variables:
language: The language to translate text into, e.g. Nepali
text: The text to translate, e.g. hi!
from langchain_core.prompts import ChatPromptTemplate
system_template = "Translate the following from English into {language}"
prompt_template = ChatPromptTemplate.from_messages(
    [("system", system_template), ("user", "{text}")]
)
Note that ChatPromptTemplate supports multiple message roles in a single template.
The input to this prompt template is a dictionary.
prompt = prompt_template.invoke({"language": "Italian", "text": "hi!"})
prompt
If we want to access the messages (i.e., the complete formatted prompt) directly, we can call:
prompt.to_messages()
Output:
[SystemMessage(content='Translate the following from English into Italian', additional_kwargs={}, response_metadata={}),
HumanMessage(content='hi!', additional_kwargs={}, response_metadata={})]
Finally, we can invoke the chat model on the formatted prompt:
response = model.invoke(prompt)
print(response.content)
Message content can contain both text and content blocks with additional structure.
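For instance, content may arrive either as a plain string or as a list of typed blocks. A hedged sketch of handling both shapes follows; the {"type": "text", "text": ...} block shape used here is an assumption for illustration, and as_text is a hypothetical helper:

```python
# Sketch: extract plain text from message content that is either a string
# or a list of typed content blocks. The block shape
# ({"type": "text", "text": ...}) is assumed for this illustration.
def as_text(content):
    if isinstance(content, str):
        return content
    return "".join(
        block["text"] for block in content if block.get("type") == "text"
    )

print(as_text("Ciao!"))                              # -> Ciao!
print(as_text([{"type": "text", "text": "Ciao!"}]))  # -> Ciao!
```

Normalizing both shapes through one helper keeps downstream code from branching on the content type everywhere.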