In recent years, language models have become far more capable, allowing us to tackle complex tasks and extract information from large documents. However, these models can only consider a limited amount of context at once, which becomes a problem when working with large volumes of information. LLM chains emerged to address this challenge: they simplify the process of chaining multiple LLM calls together, making it easier to handle large amounts of data. LLM chains combine different language-model components to process information and generate responses in a unified way. In this article, we will discuss the main components and conventions in LangChain.
LLMs are large deep-learning models, pre-trained on massive amounts of data, that respond to user queries by answering questions or generating images from text prompts. LangChain is a framework for building applications powered by LLMs; it gives AI developers tools to connect language models with external data sources. It provides:
- Chains: Composable sequences of LLM calls
- Prompts: Template management and optimization
- Memory: Conversation and context management
- Tools: Integration with external APIs and functions
- Agents: Autonomous decision-making workflows
Why LangChain? While you can build AI apps with raw APIs, LangChain provides abstractions that make production systems easier to build, test, and maintain.
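Before diving into LangChain's own APIs, the chaining idea itself can be shown with a short library-free sketch (plain Python, not LangChain code): each step is a callable, and a chain simply pipes one step's output into the next. The step names here are made up for illustration.

```python
def make_chain(*steps):
    """Compose steps left to right: the output of one step feeds the next."""
    def run(value):
        for step in steps:
            value = step(value)
        return value
    return run

# Hypothetical stand-ins for a prompt template, a model call, and an output parser
format_prompt = lambda q: f"Answer concisely: {q}"
fake_llm = lambda p: f"LLM says: <{p}>"  # pretends to be a model call
parse_output = lambda r: r.removeprefix("LLM says: ").strip("<>")

chain = make_chain(format_prompt, fake_llm, parse_output)
print(chain("What is LangChain?"))  # → Answer concisely: What is LangChain?
```

LangChain's real chains follow the same shape, with prompt templates, models, and parsers as the composable steps.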
```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

# System + user prompt
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant. Answer in {style}."),
    ("user", "{question}")
])

# With conversation history
conversation_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="history"),
    ("user", "{input}")
])
```
Few-Shot Examples:
```python
from langchain_core.prompts import ChatPromptTemplate, FewShotChatMessagePromptTemplate

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
]

example_prompt = ChatPromptTemplate.from_messages([
    ("human", "{input}"),
    ("ai", "{output}"),
])

few_shot_prompt = FewShotChatMessagePromptTemplate(
    example_prompt=example_prompt,
    examples=examples,
)

final_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that gives antonyms."),
    few_shot_prompt,
    ("human", "{input}"),
])
```
Prompt Management:
```python
# Save prompts to file
prompt.save("prompts/translation.yaml")

# Load from file
from langchain_core.prompts import load_prompt
loaded_prompt = load_prompt("prompts/translation.yaml")
```
```python
from langchain_core.tools import tool

@tool
def search_web(query: str) -> str:
    """Search the web for information."""
    # Implementation here
    return f"Results for {query}"

@tool
def calculator(expression: str) -> str:
    """Evaluate a mathematical expression."""
    try:
        # Note: eval() is unsafe on untrusted input; use a math parser in production
        result = eval(expression)
        return str(result)
    except Exception:
        return "Invalid expression"

tools = [search_web, calculator]
```
Using Tools with LLM:
```python
from langchain_core.messages import HumanMessage

# llm is a chat model (e.g. ChatOpenAI) initialized earlier
llm_with_tools = llm.bind_tools(tools)

response = llm_with_tools.invoke([
    HumanMessage(content="What's 15 * 23? Then search for Python tutorials.")
])

# Check for tool calls
if response.tool_calls:
    for tool_call in response.tool_calls:
        print(f"Tool: {tool_call['name']}")
        print(f"Args: {tool_call['args']}")
```
Structured Tools:
```python
from langchain_core.tools import StructuredTool
from pydantic import BaseModel, Field

class SearchInput(BaseModel):
    query: str = Field(description="Search query")
    max_results: int = Field(default=5, description="Maximum results")

def search_function(query: str, max_results: int = 5) -> str:
    # Implementation
    return f"Found {max_results} results for {query}"

structured_tool = StructuredTool.from_function(
    func=search_function,
    args_schema=SearchInput,
    name="web_search",
    description="Search the web for information"
)
```