Engineering

How to Use LangChain.js in TypeScript (And When You Shouldn't)

Author

Eddie Hudson


LangChain is everywhere. If you've spent any time researching how to build LLM-powered applications, you've seen it recommended in blog posts, YouTube tutorials, and Stack Overflow answers. It's become the default framework for wiring up language models.

But default doesn't always mean right.

This guide shows you how to actually use LangChain.js in a TypeScript project, with code you can run today. More importantly, we'll talk about when it's worth the complexity and when you should skip it entirely.

What is LangChain.js?

LangChain.js is an open-source framework for building applications powered by large language models. If you've used the Python version, the JavaScript/TypeScript implementation follows the same architecture and aims for feature parity.

At its core, LangChain gives you three things:

A unified interface to LLMs. Whether you're calling OpenAI, Anthropic, Cohere, or running a local model, you write the same code. Swap providers by changing an import and a model class; the rest of your code stays the same.

Prompt templating. Instead of concatenating strings (and inevitably screwing up the formatting), you define templates with variables that get filled in at runtime.

Chains and agents. This is the main event. Chains let you compose multiple steps: fetch data, format a prompt, call the model, parse the output. Agents take it further by letting the model decide which tools to call and in what order.

The GitHub repo is active, and the docs are decent. The Python version lives in its own repo if you need to cross-reference.

Setting Up a TypeScript Project

You'll need Node.js 18+ and npm. If you don't have TypeScript installed globally:

BASH
npm install -g typescript

Create your project:

BASH
mkdir langchain-ts-project
cd langchain-ts-project
npm init -y
npm install langchain @langchain/openai @langchain/core zod ts-node typescript @types/node

Add a tsconfig.json:

JSON
{
  "compilerOptions": {
    "target": "es2020",
    "module": "commonjs",
    "rootDir": "./src",
    "outDir": "./dist",
    "esModuleInterop": true,
    "strict": true,
    "skipLibCheck": true
  },
  "include": ["src/**/*"]
}

That's the setup. Now the actual code.

A Working Example

Create src/index.ts. We'll build a simple chain that answers geography questions.

Initialize the Model

TYPESCRIPT
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  modelName: "gpt-3.5-turbo",
  temperature: 0.7,
});

Make sure OPENAI_API_KEY is set in your environment. The temperature parameter controls randomness. Lower values give more predictable outputs.
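
If you haven't set the key yet, a typical shell export looks like this (replace the placeholder with your own key):

BASH
export OPENAI_API_KEY=sk-your-key-here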

Create a Prompt Template

TYPESCRIPT
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant that gives concise answers."],
  ["user", "What is the capital of {country}?"],
]);

The {country} placeholder gets replaced when you invoke the chain. Cleaner than string interpolation, and you get validation if you forget a variable.
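
If you want to sanity-check the substitution without calling the model, you can format the messages directly; a quick sketch:

TYPESCRIPT
// Preview the filled-in messages without hitting the API
const preview = await prompt.formatMessages({ country: "France" });
console.log(preview);
// Omitting the variable (e.g. passing {}) throws instead of silently sending a broken prompt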

Build and Run the Chain

TYPESCRIPT
import { StringOutputParser } from "@langchain/core/output_parsers";

async function main() {
  const chain = prompt.pipe(model).pipe(new StringOutputParser());

  const france = await chain.invoke({ country: "France" });
  console.log(`France: ${france}`);

  const japan = await chain.invoke({ country: "Japan" });
  console.log(`Japan: ${japan}`);
}

main();

The .pipe() method connects each step. Data flows left to right: prompt formats the input, model generates a response, parser extracts the text.

Run it:

BASH
npx ts-node src/index.ts

You should see the capitals printed to your terminal.

Structured Output with Schemas

Raw text is fine for simple cases, but most applications need structured data. LangChain supports Zod schemas for typed outputs:

TYPESCRIPT
import { z } from "zod";

const responseSchema = z.object({
  capital: z.string(),
  population: z.number(),
  funFact: z.string(),
});

// Pair this with withStructuredOutput() on a function-calling model

This gives you TypeScript type safety on LLM responses. When the model returns malformed data, you catch it immediately instead of debugging downstream failures.
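
To wire the schema in, chat models that support tool or function calling (ChatOpenAI included) expose withStructuredOutput. A minimal sketch:

TYPESCRIPT
// Sketch: bind the Zod schema so responses come back as parsed, typed objects
const structuredModel = model.withStructuredOutput(responseSchema);

const info = await structuredModel.invoke("Tell me about the capital of Japan.");
// info is typed as { capital: string; population: number; funFact: string }
console.log(info.capital, info.population, info.funFact);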

Beyond the Basics

The simple chain above barely scratches the surface. Here's where LangChain starts to justify its complexity.

Tools and Function Calling

You can give the model access to external data and services:

TYPESCRIPT
import { DynamicTool } from "@langchain/core/tools";

const weatherTool = new DynamicTool({
  name: "get_weather",
  description: "Gets current weather for a city",
  func: async (city: string) => {
    // Hit your weather API here
    return `${city}: 22°C, partly cloudy`;
  },
});

Attach tools to an agent, and the model decides when to call them. This is how you build AI agents that can actually do things: check databases, call APIs, run calculations.
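
Here's a minimal sketch of the first half of that loop: bind the tool to a tool-calling chat model and inspect which calls the model requests. A full agent also executes the tool and feeds the result back, which is what the agent helpers and LangGraph handle for you.

TYPESCRIPT
// Sketch: the model decides whether the weather tool is needed
const modelWithTools = model.bindTools([weatherTool]);

const aiMessage = await modelWithTools.invoke("What's the weather in Paris?");
// If the model chose the tool, its requested calls show up here
console.log(aiMessage.tool_calls);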

Conversation Memory

Chat applications need to remember what was said. LangChain has several memory implementations:

TYPESCRIPT
import { BufferMemory } from "langchain/memory";
import { ConversationChain } from "langchain/chains";

const memory = new BufferMemory();
const conversation = new ConversationChain({
  llm: model,
  memory: memory,
});

await conversation.call({ input: "My name is Alice." });
await conversation.call({ input: "What's my name?" });
// The second response includes "Alice"

BufferMemory stores the full conversation. For longer conversations, ConversationSummaryMemory compresses older messages. For production apps with multiple users, you'll want to persist this to a database.
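
Swapping in summary memory is a small change; a sketch (it uses the LLM itself to compress older turns, so it costs extra tokens):

TYPESCRIPT
import { ConversationSummaryMemory } from "langchain/memory";

// Summarizes older messages with the model instead of storing them verbatim
const summaryMemory = new ConversationSummaryMemory({ llm: model });
const longConversation = new ConversationChain({
  llm: model,
  memory: summaryMemory,
});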

LangGraph for Complex Agents

When you need more than linear chains, LangGraph adds stateful, graph-based orchestration. Think: cycles, conditional branches, multiple agents coordinating.

If you're building AI agents that need to loop back, retry failed steps, or hand off between specialized sub-agents, LangGraph handles the state management that would otherwise be a nightmare.
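
A minimal sketch, assuming you've also installed @langchain/langgraph (it's a separate package from the ones above): define a shared state, add nodes that update it, and connect them with edges.

TYPESCRIPT
import { StateGraph, Annotation, START, END } from "@langchain/langgraph";

// Shared state that every node can read and update
const GraphState = Annotation.Root({
  country: Annotation<string>(),
  answer: Annotation<string>(),
});

const graph = new StateGraph(GraphState)
  .addNode("answer", async (state) => {
    const reply = await model.invoke(`What is the capital of ${state.country}?`);
    return { answer: reply.content as string };
  })
  .addEdge(START, "answer")
  .addEdge("answer", END)
  .compile();

const result = await graph.invoke({ country: "France" });
console.log(result.answer);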

LangSmith for Debugging

LangSmith provides tracing and observability. Set a few environment variables and every chain execution gets logged with full input/output visibility.

BASH
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=your_key

No code changes. When a chain fails in production, you can see exactly what happened at each step. Worth setting up before you need it.

When LangChain.js Makes Sense

You're prototyping. LangChain gets you from idea to working demo fast. The abstractions handle boilerplate so you can focus on the interesting parts.

You need complex orchestration. Multi-step reasoning, tool use, conditional logic, RAG pipelines with embeddings and vector stores (there's a quick sketch at the end of this section). This is LangChain's strength. Building this from scratch is tedious and error-prone.

You want provider flexibility. Planning to test multiple models? LangChain's unified interface means you can swap OpenAI for Anthropic or a local model without rewriting your application logic.
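
To make the RAG point concrete, here's a toy sketch using the in-memory vector store bundled with the langchain package and OpenAI embeddings. A real pipeline would use a proper vector database and a document loader, but the shape is the same:

TYPESCRIPT
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

// Index a few documents in memory
const vectorStore = await MemoryVectorStore.fromTexts(
  ["Paris is the capital of France.", "Tokyo is the capital of Japan."],
  [{ source: "notes" }, { source: "notes" }],
  new OpenAIEmbeddings()
);
const retriever = vectorStore.asRetriever();

// Retrieve relevant context, stuff it into the prompt, and ask the model
const question = "What is the capital of France?";
const docs = await retriever.invoke(question);
const context = docs.map((d) => d.pageContent).join("\n");

const ragPrompt = ChatPromptTemplate.fromMessages([
  ["system", "Answer using only this context:\n{context}"],
  ["user", "{question}"],
]);
const ragChain = ragPrompt.pipe(model).pipe(new StringOutputParser());
console.log(await ragChain.invoke({ context, question }));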

When to Skip It

Your use case is simple. One API call, one prompt, one response. LangChain adds dependencies, abstractions, and concepts you don't need. Just call the OpenAI SDK directly.
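
For comparison, the direct route with the official openai package (installed separately) is only a few lines:

TYPESCRIPT
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

const completion = await client.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "What is the capital of France?" }],
});
console.log(completion.choices[0].message.content);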

Latency matters. Every abstraction layer adds overhead. If you're optimizing for response time, direct API calls will generally be faster.

You want a lean dependency tree. Check your node_modules after installing LangChain. If supply chain security or bundle size matters to you, that dependency graph might be a problem.

The framework is fighting you. This is the big one. If you're spending more time working around LangChain than working with it, that's a sign. Sometimes raw API calls plus a few helper functions is all you need.

Frequently Asked Questions

Is LangChain still relevant?

Yes, but the conversation has matured. Early on, everyone reached for LangChain by default. Now teams are more selective. Complex orchestration? LangChain is solid. Simple integration? Probably overkill. The framework is actively maintained, and LangGraph is genuinely useful for agent workflows.

Why do developers find it difficult?

Too many abstractions doing too many things. Chains, agents, tools, memory, callbacks, output parsers. Each concept is reasonable on its own. Together, they create a learning curve that frustrates people who just want to call an API. The docs have improved, but density is still an issue. Start with basic chains. Add complexity only when you need it.

How are companies using it in production?

The patterns we see most: RAG systems over internal documents, support chatbots with tool access, and document processing pipelines. Teams typically adopt LangChain for orchestration while keeping other parts of their stack simple. Very few use every feature.

Is it open source?

Fully. MIT license for both Python and JavaScript. LangSmith has paid tiers, but the core framework is free to use and modify.

Can I use it with my existing Node.js app?

Same as any npm package. Install, import, call from your routes or services. Works with Express, Fastify, Hono, or whatever you're running. Most operations are async, so handle your promises appropriately.
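
For example, exposing the geography chain from earlier through an Express route might look like this sketch (error handling and validation omitted):

TYPESCRIPT
import express from "express";

const app = express();
app.use(express.json());

// Assumes the prompt -> model -> parser chain from the earlier example is in scope
app.post("/capital", async (req, res) => {
  const answer = await chain.invoke({ country: req.body.country });
  res.json({ answer });
});

app.listen(3000);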

What about GenAI with JavaScript more broadly?

JavaScript is a solid choice for GenAI work. LangChain.js is one option. You can also call provider APIs directly, use Vercel's AI SDK, or build on lighter abstractions. The ecosystem is maturing fast.

The Bottom Line

LangChain.js solves real problems. If you're building AI agents with multi-step reasoning, tool use, and complex state management, it saves significant development time.

But it's not the only way to build LLM applications, and it's not always the best way. Simple use cases don't need a framework. Performance-critical paths benefit from direct API access. And sometimes the right answer is purpose-built tooling that does one thing well.

Start with what your application actually needs. Add LangChain if you're writing the same patterns repeatedly. Drop it if it's getting in your way.

The best tool is the one that disappears into the background and lets you ship.
