Building GenAI Applications using LangChain
LangChain is an open-source framework designed to simplify the development of applications powered by large language models (LLMs) such as Google's Gemini or OpenAI's GPT series. It provides abstractions and utilities for chaining LLM calls, integrating with external data sources, and building complex workflows.
In this blog, we'll explore the core concepts of LangChain and demonstrate how to use it with TypeScript/JavaScript, featuring Google Gemini as the LLM provider.
In the coming days, I'll continue to post more articles about AI, GenAI, LangChain, LangGraph, and other related capabilities. I hope these articles will help you understand the basic concepts and learn more about AI and GenAI.
Getting Started
Prerequisites
Before you begin, make sure you have:
- Node.js installed: Download and install Node.js from nodejs.org.
- Google Gemini API Key: Sign up for access and create an API key at Google AI Studio.
- Basic knowledge of TypeScript: Familiarity with TypeScript syntax and concepts is recommended.
- Understanding of Large Language Models (LLMs): You should know what LLMs are and their typical use cases. I am planning to write a separate article to talk more about LLMs in general.
Example Project
You can access the GitHub repo with the working example explained in this article: Sample Project
Installation
Clone the sample project and install packages.
npm install
Examples
There are two examples in the project:
- A simple usage of LLMs with LangChain: `npm run start`
- An example that shows how to create a `Tool` and an `Agent` using LangChain: `npm run start-tools-example`
Why LangChain?
While LLMs are powerful, building production-ready applications with them requires more than just sending prompts and receiving responses. You often need to:
- Chain multiple LLM calls together
- Integrate with APIs, databases, or files
- Manage memory and context
- Handle user interactions
LangChain provides tools to address these needs, making it easier to build robust, scalable LLM-powered apps.
Core Concepts
1. Models
LangChain supports various LLM providers. Here's how to use Google's Gemini:
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
const model = new ChatGoogleGenerativeAI({
apiKey: process.env.GOOGLE_API_KEY,
model: "gemini-2.0-flash-lite",
temperature: 0.7,
});
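Once the model instance is created, you can call it directly. Here's a minimal sketch, assuming `GOOGLE_API_KEY` is set in your environment and reusing the `model` defined above; the prompt text is just an illustration:

// Invoke the model defined above with a plain string prompt and log the reply.
model.invoke("Explain LangChain in one sentence.")
  .then((response) => console.log(response.content))
  .catch((error) => console.error(error));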
2. Prompts
Prompts are templates for interacting with LLMs. There are situations when prompts need to be augmented with contextual information at runtime. This can be achieved with the help of placeholders in `PromptTemplate`.
import { PromptTemplate } from "@langchain/core/prompts";
const prompt = new PromptTemplate({
template: "Translate the following English text to French: {text}",
inputVariables: ["text"],
});
const formattedPrompt = await prompt.format({ text: "Hello, world!" });
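The `format` call returns the template with the placeholder filled in, so `formattedPrompt` is just a plain string:

// Prints: Translate the following English text to French: Hello, world!
console.log(formattedPrompt);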
3. Chains
Chains allow you to combine models, prompts, and other logic into a workflow. For example, you can create a translation chain that takes user input, formats it with a prompt, and sends it to the LLM for processing.
In simple terms, think of a chain as a series of actions linked together to get the desired result.
Here's a more detailed example:
import "dotenv/config"; // This will load environment variables from .env
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { PromptTemplate } from "@langchain/core/prompts";
// Initialize the Gemini model
const model = new ChatGoogleGenerativeAI({
apiKey: process.env.GOOGLE_API_KEY,
model: "gemini-2.0-flash-lite",
temperature: 0.7,
});
// Create a prompt template for translation
const prompt = new PromptTemplate({
template: "Translate the following English text to French: {text}",
inputVariables: ["text"],
});
// Create a chain that combines the model and the prompt
const chain = prompt.pipe(model);
// Chain will handle the formatting of the prompt automatically (see below), so you don't need to call `format` separately.
// const formattedPrompt = await prompt.format({ text: "Hello, world!" });
// Function to translate text using the chain.
// Note that we are not formatting the prompt here, as the chain handles it.
async function translateToFrench(text: string) {
const response = await chain.invoke({ text });
console.log("French translation:", response.text);
}
// Example usage
translateToFrench("How are you?").then(() => {
console.log("Translation completed.");
}).catch((error) => {
console.error("Error during translation:", error);
});
This example demonstrates how to:
- Define a prompt template with placeholders.
- Initialize a chain that links the prompt and the LLM.
- Call the chain with input data and receive the processed output.
Chains can be extended to include multiple steps, such as data retrieval, formatting, or post-processing, enabling you to build sophisticated LLM-powered workflows.
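As a small illustration of a multi-step chain, here's a sketch that adds an output parser as a post-processing step so the chain returns a plain string instead of a message object. It reuses the `prompt` and `model` from the example above:

import { StringOutputParser } from "@langchain/core/output_parsers";

// prompt and model are the same instances created in the translation example above.
// The parser extracts the text content from the model's message response.
const translationChain = prompt.pipe(model).pipe(new StringOutputParser());

translationChain.invoke({ text: "Good morning!" })
  .then((translated) => console.log("French translation:", translated))
  .catch((error) => console.error("Error during translation:", error));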
4. Tools
LangChain tools are modular components that allow your LLM applications to interact with external systems and APIs, or perform specific tasks beyond text generation. Tools can include web search, calculators, database queries, external API calls, or any custom logic you define.
For example, you can use built-in tools like SerpAPI for web search, or create your own tool to fetch data from an API. In the example below, we create our own custom tool to read TODO tasks. We read the tasks from a constant, but in a real-world scenario you might fetch this data from a database or an API.
Steps to use a tool with an LLM:
- Create, or reuse, the tools that you want to use.
- Create an `Agent` (explained below).
- Create a system message to provide context to the model about what it should do (this is called a `SystemMessage`).
- Invoke the `Agent` with the system prompt and the user query (the user query is also called a `HumanMessage`).
- Based on the user query, the `Agent` decides whether it needs to call a tool. If a tool call is needed, the `Agent` calls the required tool(s).
- Finally, the `Agent` returns a formatted response based on the user query and the response received from the tool call.
import "dotenv/config"; // This will load environment variables from .env
import { AIMessage, HumanMessage, SystemMessage, ToolMessage } from "@langchain/core/messages";
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { Tool } from "@langchain/core/tools";
export type TodoStatus = "pending" | "in-progress" | "completed";
export interface TodoTask {
taskId: string;
description: string;
dayOfWeek: string; // e.g., "Monday", "Tuesday", etc.
status: TodoStatus;
}
const todos: Array<TodoTask> = [
{
"taskId": "1",
"description": "Pay electricity bill",
"dayOfWeek": "Monday",
"status": "pending"
},
{
"taskId": "2",
"description": "Watch an online course on data analysis",
"dayOfWeek": "Tuesday",
"status": "in-progress"
},
{
"taskId": "9",
"description": "Organize workspace",
"dayOfWeek": "Tuesday",
"status": "completed"
},
];
// Custom tool to return todo tasks
export class TodoTool extends Tool {
name = 'todo_tool';
  description = "Returns a list of todo tasks";
  /**
   * Returns all todo tasks.
   */
protected _call(): Promise<string> {
// In a real-world scenario, you might fetch this data from a database or an API.
return Promise.resolve(JSON.stringify(todos));
}
}
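Before wiring the tool into an agent, you can sanity-check it on its own. A quick sketch; like any LangChain tool, it exposes the standard `invoke` method:

// Invoke the custom tool directly; it resolves to the todo list as a JSON string.
const todoTool = new TodoTool();
todoTool.invoke("").then((tasks) => console.log(tasks));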
Tools are especially powerful when combined with agents, enabling your application to reason about which tool to use for a given user query. This makes your LLM-powered app more dynamic and capable of handling complex, real-world tasks.
4.1 Why are Tools Needed?
While LLMs are excellent at understanding and generating text, they have limitations when it comes to accessing real-time information, performing calculations, or interacting with external systems. Tools bridge this gap by allowing your application to:
- Retrieve up-to-date data from the web or APIs
- Perform operations like math, search, or database queries
- Extend the LLM's capabilities beyond its training data
By integrating tools, you enable your LLM-powered app to provide more accurate, actionable, and context-aware responses, making it far more useful for real-world scenarios.
5. Agents
By themselves, language models can't take actions - they just output text. Agents are systems that take a high-level task and use an LLM as a reasoning engine to decide what actions to take and execute those actions.
Agents can make decisions and use tools (APIs, search engines, etc.) to answer complex queries.
The Agent example below uses the tool that we created above. The tool returns a list of TODO tasks.
import { AgentExecutor } from "langchain/agents";
import { createToolCallingAgent } from "langchain/agents";
import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";
import { SystemMessage } from "@langchain/core/messages";
// `model` is the ChatGoogleGenerativeAI instance and `TodoTool` is the custom tool
// defined in the earlier snippets.
const tools = [new TodoTool()];
// An example system message; adjust the wording to suit your use case.
const systemPrompt = new SystemMessage(
  "You are a helpful assistant that answers questions about the user's todo tasks."
);
async function executeUserQueryWithAgent(userQuery: string) {
const prompt = ChatPromptTemplate.fromMessages([
systemPrompt,
new MessagesPlaceholder("chat_history"),
new MessagesPlaceholder("user_query"),
new MessagesPlaceholder("agent_scratchpad"),
]);
// Create the agent with the model, tools, and prompt
const agent = await createToolCallingAgent({ llm: model, tools, prompt });
// Create an AgentExecutor to handle the agent's execution
const agentExecutor = new AgentExecutor({
agent,
tools,
});
// Invoke the agent with the user's query
const agentResponse = await agentExecutor.invoke({user_query: userQuery, chat_history: []});
console.log("Agent response:", agentResponse);
}
// Example usage
const userQuery = "Show my pending tasks?";
executeUserQueryWithAgent(userQuery).then(() => {
console.log("Execution completed.");
}).catch((error) => {
console.error("Error during Execution:", error);
});
6. Memory
Agents do not remember past conversations. Memory helps Agents remember past conversations and creates context for the next interaction. LangChain supports memory to maintain context across interactions.
I'll explain how to use Memory in detail in an upcoming article. Here, I'll show a quick example of how to provide the past conversation/chat history as Memory to an agent.
In this example, since we are providing `chat_history` as context/Memory, the agent knows what we mean by 'first task'.
const prompt = ChatPromptTemplate.fromMessages([
systemPrompt,
new MessagesPlaceholder("chat_history"), // This is Memory for Agent
new MessagesPlaceholder("user_query"),
new MessagesPlaceholder("agent_scratchpad"),
]);
// Since we are providing chat_history as context/Memory, the agent knows what we mean by 'first task'
const agentResponse = await agentExecutor.invoke({
user_query: 'Mark my first task as completed',
chat_history: [
{ role: "user", content: "Show my pending tasks?" },
{ role: "assistant", content: "Here are your pending tasks: ...." },
]}
);
Example: Simple Q&A Bot with Gemini
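Here's a minimal sketch of a simple Q&A bot that keeps the running conversation in a list and sends it to Gemini on every turn. The `askQuestion` helper and the prompt texts are illustrative and not part of the sample project:

import "dotenv/config";
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { BaseMessage, HumanMessage, SystemMessage } from "@langchain/core/messages";

const model = new ChatGoogleGenerativeAI({
  apiKey: process.env.GOOGLE_API_KEY,
  model: "gemini-2.0-flash-lite",
  temperature: 0.7,
});

// The conversation so far; passing it on every call gives the bot context.
const history: BaseMessage[] = [
  new SystemMessage("You are a concise assistant that answers the user's questions."),
];

// Illustrative helper: send a question along with the chat history and record the answer.
async function askQuestion(question: string): Promise<string> {
  history.push(new HumanMessage(question));
  const response = await model.invoke(history);
  history.push(response);
  return response.text;
}

// Example usage
askQuestion("What is LangChain?")
  .then((answer) => console.log("Bot:", answer))
  .catch((error) => console.error("Error:", error));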
Conclusion
LangChain empowers developers to build advanced LLM applications with ease. By providing abstractions for models, prompts, chains, agents, and memory, it accelerates the development of chatbots, assistants, and knowledge-based systems.
Explore more at LangChain Documentation.