Commit eb0632d: docs: add memoization recipe (#4116)

---
title: Markdown Chatbot with Memoization
description: Learn how to build a performant chatbot that renders Markdown responses and uses memoization for optimal performance with Next.js and the AI SDK.
tags: ['next', 'streaming', 'chatbot', 'markdown']
---

# Markdown Chatbot with Memoization

When building a chatbot with Next.js and the AI SDK, you'll likely want to render the model's responses in Markdown using a library like `react-markdown`. However, this can hurt performance, because the Markdown is re-parsed and re-rendered on every new token received from the streaming response.

As conversations grow longer and more complex, this cost compounds: the entire conversation history is re-rendered with each new token.

This recipe uses memoization, a performance optimization technique in which the results of expensive function calls are cached and reused to avoid unnecessary re-computation. Here, parsed Markdown blocks are memoized so they are not re-parsed and re-rendered on each token update: once a block is fully parsed, it is cached and reused rather than regenerated. This significantly improves rendering performance for long conversations by eliminating redundant parsing and rendering work.
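To make the idea concrete before applying it to React, here is a minimal, framework-free sketch of memoization. The `memoize` helper and the toy `parse` function are illustrative only (they are not part of the recipe's code): the expensive function runs once per distinct input, and repeated calls are served from a cache.

```typescript
// Illustrative sketch: cache results of an expensive function keyed by input.
function memoize<T>(fn: (input: string) => T): (input: string) => T {
  const cache = new Map<string, T>();
  return (input: string) => {
    if (!cache.has(input)) {
      cache.set(input, fn(input));
    }
    return cache.get(input)!;
  };
}

// A toy "expensive" parse; `calls` tracks how often it actually runs.
let calls = 0;
const parse = memoize((s: string) => {
  calls++;
  return s.toUpperCase();
});

parse('# Hello');
parse('# Hello'); // second call hits the cache; calls stays at 1
```

React's `memo` and `useMemo` apply the same principle to component rendering: as long as the inputs (props or dependencies) are unchanged, the cached result is reused.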

## Server

On the server, you use a simple route handler that streams the response from the language model.

```tsx filename='app/api/chat/route.ts'
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export const maxDuration = 60;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    system:
      'You are a helpful assistant. Respond to the user in Markdown format.',
    model: openai('gpt-4o'),
    messages,
  });

  return result.toDataStreamResponse();
}
```

## Memoized Markdown Component

Next, create a memoized Markdown component that splits raw Markdown text into blocks and only re-renders a block when its content actually changes. The component uses the `marked` library's lexer to split the content into discrete Markdown elements, then relies on React's memoization features (`memo` and `useMemo`) so that only changed blocks are re-rendered.

```tsx filename='components/memoized-markdown.tsx'
import { marked } from 'marked';
import { memo, useMemo } from 'react';
import ReactMarkdown from 'react-markdown';

function parseMarkdownIntoBlocks(markdown: string): string[] {
  const tokens = marked.lexer(markdown);
  return tokens.map(token => token.raw);
}

const MemoizedMarkdownBlock = memo(
  ({ content }: { content: string }) => {
    return <ReactMarkdown>{content}</ReactMarkdown>;
  },
  // Only re-render a block when its content changes.
  (prevProps, nextProps) => prevProps.content === nextProps.content,
);

MemoizedMarkdownBlock.displayName = 'MemoizedMarkdownBlock';

export const MemoizedMarkdown = memo(
  ({ content, id }: { content: string; id: string }) => {
    const blocks = useMemo(() => parseMarkdownIntoBlocks(content), [content]);

    return blocks.map((block, index) => (
      <MemoizedMarkdownBlock content={block} key={`${id}-block_${index}`} />
    ));
  },
);

MemoizedMarkdown.displayName = 'MemoizedMarkdown';
```
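To build intuition for the block-splitting step, here is a naive splitter that breaks Markdown on blank lines. This is a simplified stand-in, not what the recipe uses: `marked.lexer` understands real Markdown structure (for example, fenced code blocks may themselves contain blank lines), so the `naiveSplitIntoBlocks` name and implementation below are purely illustrative.

```typescript
// Naive illustration: split Markdown into blocks on runs of blank lines.
// The recipe uses marked.lexer instead, which is Markdown-aware.
function naiveSplitIntoBlocks(markdown: string): string[] {
  return markdown
    .split(/\n{2,}/)
    .filter(block => block.trim().length > 0);
}

const doc = '# Title\n\nFirst paragraph.\n\n- item one\n- item two';
const blocks = naiveSplitIntoBlocks(doc);
// blocks: ['# Title', 'First paragraph.', '- item one\n- item two']
```

The key point is stability: while a response streams, only the final block keeps changing; earlier blocks keep producing identical strings, so their memoized components never re-render.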

## Client

Finally, on the client, use the `useChat` hook to manage the chat state and render the chat interface. You can use the `MemoizedMarkdown` component to render the message contents in Markdown format without compromising on performance. Additionally, you can render the form in its own component so as not to trigger unnecessary re-renders of the chat messages. You can also use the `experimental_throttle` option, which throttles data updates to a specified interval, helping to manage rendering performance.

```tsx filename='app/page.tsx'
'use client';

import { useChat } from 'ai/react';
import { MemoizedMarkdown } from '@/components/memoized-markdown';

export default function Page() {
  const { messages } = useChat({
    id: 'chat',
    // Throttle the messages and data updates to 50ms:
    experimental_throttle: 50,
  });

  return (
    <div className="flex flex-col w-full max-w-xl py-24 mx-auto stretch">
      <div className="space-y-8 mb-4">
        {messages.map(message => (
          <div key={message.id}>
            <div className="font-bold mb-2">
              {message.role === 'user' ? 'You' : 'Assistant'}
            </div>
            <div className="prose space-y-2">
              <MemoizedMarkdown id={message.id} content={message.content} />
            </div>
          </div>
        ))}
      </div>
      <MessageInput />
    </div>
  );
}

const MessageInput = () => {
  const { input, handleSubmit, handleInputChange } = useChat({ id: 'chat' });
  return (
    <form onSubmit={handleSubmit}>
      <input
        className="fixed bottom-0 w-full max-w-xl p-2 mb-8 dark:bg-zinc-900 border border-zinc-300 dark:border-zinc-800 rounded shadow-xl"
        placeholder="Say something..."
        value={input}
        onChange={handleInputChange}
      />
    </form>
  );
};
```

<Note>
  The chat state is shared between both components by using the same `id`
  value. This allows you to split the form and chat messages into separate
  components while maintaining synchronized state.
</Note>
