Commit 82b4d58

feat: fix final errors

1 parent 021fa04 commit 82b4d58
File tree

7 files changed: +156 additions, −68 deletions

docs/agents/using-tools.md

Lines changed: 4 additions & 2 deletions

@@ -72,7 +72,7 @@ import { registry } from '../ai/setup-registry'
 import { environment } from '../environment.mjs'
 import { conversationRepository } from '../repositories/conversation'
 import { buildDisplaySelectionButtons } from '../tools/display-selection-buttons'
-import { getFreeAppointments } from '../tools/get-free-appointments'
+import { buildGetFreeAppointments } from '../tools/get-free-appointments'
 
 export const onMessage = new Composer()
 
@@ -109,7 +109,7 @@ onMessage.on('message:text', async (context) => {
     system: PROMPT,
     tools: {
       displaySelectionButtons: buildDisplaySelectionButtons(context),
-      getFreeAppointments,
+      getFreeAppointments: buildGetFreeAppointments(),
     },
   })
 
@@ -124,3 +124,5 @@ onMessage.on('message:text', async (context) => {
   await context.reply(text)
 })
 ```
+
+Now try asking for an appointment and you will see how both tools are called in a row.
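The change above replaces a plain `getFreeAppointments` export with a `buildGetFreeAppointments()` factory, matching the existing `buildDisplaySelectionButtons(context)`: building a tool per request lets it close over request-scoped state instead of being a shared singleton. A minimal sketch of that builder pattern, with purely hypothetical names (`SketchTool` and `buildGreetingTool` are illustrative, not part of the project):

```typescript
// Hypothetical sketch of the tool-builder pattern this commit adopts.
// A factory built per request can capture request-scoped values in its
// closure, which a module-level singleton export cannot.
interface SketchTool {
  description: string
  execute: () => Promise<string>
}

export const buildGreetingTool = (userName: string): SketchTool => ({
  description: 'Greets the current user',
  // The closure captures userName from the build call
  execute: async () => `Hello, ${userName}!`,
})
```

Each handler invocation would call `buildGreetingTool(...)` with its own state, the same way the diff calls `buildGetFreeAppointments()` inside the `tools` object.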

docs/bot/running.md

Lines changed: 3 additions & 5 deletions

@@ -24,11 +24,8 @@ export async function start(context: CommandContext<Context>): Promise<void> {
 This is where the echo functionality comes into play. The bot listens for incoming text messages from users and, upon receiving a message, responds by sending back the same message. This demonstrates the basic capability of the bot to handle and reply to user input.
 
 ```ts title="src/lib/handlers/on-message.ts"
-import { generateText } from 'ai'
 import { Composer } from 'grammy'
 
-import { environment } from '../environment.mjs'
-
 export const onMessage = new Composer()
 
 onMessage.on('message:text', async (context) => {
@@ -49,14 +46,12 @@ Additionally, we ensure the bot shuts down properly when the process receives te
 ```ts title="src/main.ts"
 import process from 'node:process'
 
-import { environment } from './lib/environment.mjs'
 import { Bot } from 'grammy'
 
 import { start } from './lib/commands/start'
 import { environment } from './lib/environment.mjs'
 import { onMessage } from './lib/handlers/on-message'
 
-
 async function main(): Promise<void> {
   const bot = new Bot(environment.BOT_TOKEN)
 
@@ -97,6 +92,9 @@ This final section provides a step-by-step guide on how to set up and run the bo
    pnpm install
    ```
 
+   !!! info
+       This step is done automatically if you are using our devcontainer.
+
 4. **Run the bot**:
    Start the bot in development mode:
    ```bash

docs/chatbot/basic.md

Lines changed: 3 additions & 0 deletions

@@ -41,6 +41,9 @@ At this stage, the bot can respond intelligently using AI but lacks conversation
 
 ## Full code
 
+!!! example
+
+    Update the following file to add the integration with the Vercel AI SDK.
 
 ```ts title="src/lib/handlers/on-message.ts"
 import { generateText } from 'ai'

docs/chatbot/memory.md

Lines changed: 14 additions & 3 deletions

@@ -11,14 +11,23 @@ We will create a table named `messages` to store the conversations.
 To create this table using **Drizzle ORM**, we can follow a similar structure to other tables created in the project. Below is a basic template:
 
 ```ts title="src/lib/db/schema/messages.ts"
-import { bigint, pgTable, serial, text, varchar, timestamp } from 'drizzle-orm/pg-core'
+import {
+  bigint,
+  pgTable,
+  serial,
+  text,
+  timestamp,
+  varchar,
+} from 'drizzle-orm/pg-core'
 
 export const messages = pgTable('messages', {
   chatId: bigint({ mode: 'number' }).notNull(),
   content: text('content').notNull(),
   messageId: serial('message_id').primaryKey(),
   occurredOn: timestamp('occurred_on').defaultNow().notNull(),
-  role: varchar('role', { length: 50 }).$type<'user' | 'assistant' | 'system' | 'tool'>().notNull(),
+  role: varchar('role', { length: 50 })
+    .$type<'user' | 'assistant' | 'system' | 'tool'>()
+    .notNull(),
 })
 ```
 
@@ -99,6 +108,8 @@ export class ConversationRepository {
     await database.delete(messages).where(eq(messages.chatId, chatId))
   }
 }
+
+export const conversationRepository = new ConversationRepository()
 ```
 
 
@@ -117,7 +128,7 @@ import { conversationRepository } from '../repositories/conversation'
 export async function start(context: CommandContext<Context>): Promise<void> {
   const chatId = context.chat.id
   // Clear the conversation
-  await conversationRepository.clearConversation(chatId)
+  await conversationRepository.clear(chatId)
 
   const content = 'Welcome, how can I help you?'
   // Store the assistant's welcome message

docs/chatbot/register.md

Lines changed: 43 additions & 35 deletions

@@ -2,13 +2,47 @@
 
 In this section, you will learn how to create registries for AI models in Vercel SDK. This setup allows you to register multiple AI providers and language models to be used in your project.
 
+## Full code
+
 !!! info
 
-    This step is usually not necessary: if you are only going to use one AI provider, you can use it directly instead of creating a registry.
+    This step is usually not necessary: if you are only going to use one AI provider, you can use it directly instead of creating a registry. For convenience, this code is already included in the template, but we will explain it here.
+
+```ts title="src/lib/ai/setup-registry.ts"
+import { openai as originalOpenAI } from '@ai-sdk/openai'
+import {
+  experimental_createProviderRegistry as createProviderRegistry,
+  experimental_customProvider as customProvider,
+} from 'ai'
+import { ollama as originalOllama } from 'ollama-ai-provider'
+
+const ollama = customProvider({
+  fallbackProvider: originalOllama,
+  languageModels: {
+    'qwen-2_5': originalOllama('qwen2.5'),
+  },
+})
+
+export const openai = customProvider({
+  fallbackProvider: originalOpenAI,
+  languageModels: {
+    'gpt-4o-mini': originalOpenAI('gpt-4o-mini', {
+      structuredOutputs: true,
+    }),
+  },
+})
+
+export const registry = createProviderRegistry({
+  ollama,
+  openai,
+})
+```
 
 ## Setting up the Registry
 
-The following steps will guide you on how to register AI providers such as OpenAI and Ollama using the Vercel SDK.
+The following steps will guide you on how to register AI providers such as OpenAI and Ollama using the Vercel SDK. Open the file `src/lib/ai/setup-registry.ts` to see how it works.
 
 ### Step 1: Import Required Modules
 
@@ -45,6 +79,13 @@ export const openai = customProvider({
 })
 ```
 
+!!! info
+
+    You can add more models and providers if you wish. Just update the `MODEL*` environment variables in the `.env` file to activate them. If you need an API token, you will also need to update the `src/lib/environment.mjs` file.
+
+    The model name should be `PROVIDER:MODEL_NAME`, just as _Ollama_ and _OpenAI_ do.
+
 ### Step 3: Create the Registry
 
 Once the providers are defined, create the registry that will include these custom providers:
@@ -69,36 +110,3 @@ Now, to use one or the other, edit the .env file and configure which provider an
 !!! warning
 
     Free models may not work as well as proprietary ones in the examples that use tools, especially if they are small, since locally we usually cannot run models with more than 12B parameters. New and better open models may appear after this tutorial is published, so try other options to see if they work better. If not, you can always fall back to a commercial model.
-
-
-## Full code
-
-```ts title="src/lib/ai/setup-registry.ts"
-import { openai as originalOpenAI } from '@ai-sdk/openai'
-import {
-  experimental_createProviderRegistry as createProviderRegistry,
-  experimental_customProvider as customProvider,
-} from 'ai'
-import { ollama as originalOllama } from 'ollama-ai-provider'
-
-const ollama = customProvider({
-  fallbackProvider: originalOllama,
-  languageModels: {
-    'qwen-2_5': originalOllama('qwen2.5'),
-  },
-})
-
-export const openai = customProvider({
-  fallbackProvider: originalOpenAI,
-  languageModels: {
-    'gpt-4o-mini': originalOpenAI('gpt-4o-mini', {
-      structuredOutputs: true,
-    }),
-  },
-})
-
-export const registry = createProviderRegistry({
-  ollama,
-  openai,
-})
-```
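As a reference for the provider selection described in this section, here is a hedged sketch of what the `.env` entry could look like. The variable name `MODEL` matches the `environment.MODEL` used by the handlers elsewhere in this commit, and the values follow the `PROVIDER:MODEL_NAME` convention with the model ids registered in `setup-registry.ts`; the exact set of `MODEL*` variables depends on the template.

```bash title=".env"
# Sketch only: select provider and model in PROVIDER:MODEL_NAME form.
MODEL=ollama:qwen-2_5
# MODEL=openai:gpt-4o-mini   # commercial alternative (requires an API token)
```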

docs/rag/rag.md

Lines changed: 40 additions & 19 deletions

@@ -44,39 +44,43 @@ export async function learn(context: CommandContext<Context>): Promise<void> {
 ## Creating the `/ask` command
 
 ```ts title="src/lib/commands/ask.ts"
-bot.command("ask", async (context) => {
-  const userQuery = context.match;
+import { generateText } from 'ai'
+import type { CommandContext, Context } from 'grammy'
+
+import { findRelevantContent } from '../ai/embeddings'
+import { registry } from '../ai/setup-registry'
+import { environment } from '../environment.mjs'
+
+export async function ask(context: CommandContext<Context>): Promise<void> {
+  const userQuery = context.match
 
   // Find relevant content using embeddings
-  const relevantContent = await findRelevantContent(userQuery);
+  const relevantContent = await findRelevantContent(userQuery)
 
   if (relevantContent.length === 0) {
-    await context.reply("Sorry, I couldn't find any relevant information.");
-    return;
+    await context.reply("Sorry, I couldn't find any relevant information.")
+    return
   }
 
   // Generate the response with the RAG-enhanced prompt
   const { text } = await generateText({
-    messages: [{ content: userQuery, role: "user" }],
+    messages: [{ content: userQuery, role: 'user' }],
     model: registry.languageModel(environment.MODEL),
     // Combine the relevant content into the system prompt
     system: `
-      You are a chatbot designed to help users book hair salon appointments.
-      Here is some additional information relevant to your query:
-
-      ${relevantContent.map((content) => content.name).join("\n")}
-
-      Answer the user's question based on this information.
-      If a user asks for information outside of these details,
-      please respond with: "I'm sorry, but I cannot assist with that.
-      For more information, please call us at (555) 456-7890 or email
+      You are a chatbot designed to help users book hair salon appointments.
+      Here is some additional information relevant to your query:
+
+      ${relevantContent.map((content) => content.name).join('\n')}
+
+      Answer the user's question based on this information.
+      If a user asks for information outside of these details, please respond with: "I'm sorry, but I cannot assist with that. For more information, please call us at (555) 456-7890 or email us at [email protected]."
     `,
-  });
+  })
 
   // Reply with the generated text
-  await context.reply(text);
-});
+  await context.reply(text)
+}
 ```
 
 
@@ -129,3 +133,20 @@ main().catch((error) => console.error(error))
 The generateText method now includes the additional content in the system prompt. This augments the bot’s ability to respond in a contextually aware manner by incorporating specific information from the retrieved data.
 
 With RAG, our bot can learn new information dynamically and retrieve relevant content to enhance its responses. By leveraging embeddings and prompt injection, the bot becomes more capable of answering user questions accurately. This setup demonstrates how RAG can be applied to improve interactions, making the bot more flexible and intelligent while still being grounded in specific data sources.
+
+!!! exercise
+
+    Add the information we had in the prompt:
+
+    1. `/learn Our salon offers a haircut service for $25.`
+    2. `/learn Our salon provides hair color services for $50.`
+    3. `/learn We also offer a manicure service for $15.`
+    4. `/learn Our opening hours are Monday to Saturday from 9 AM to 7 PM.`
+    5. `/learn Our salon is closed on Sundays.`
+
+    Then ask some questions:
+
+    1. `/ask What are your opening hours?`
+    2. `/ask How much is a haircut?`
+    3. `/ask Say my name`

docs/rag/tools.md

Lines changed: 49 additions & 4 deletions

@@ -12,7 +12,14 @@ This step involves defining the database schema for storing appointments, includ
 
 
 ```ts title="src/lib/db/schema/appointments.ts"
-import { bigint, date, pgTable, serial, time, uniqueIndex } from "drizzle-orm/pg-core";
+import {
+  bigint,
+  date,
+  pgTable,
+  serial,
+  time,
+  uniqueIndex,
+} from 'drizzle-orm/pg-core'
 
 export const appointments = pgTable(
   'appointments',
@@ -42,7 +49,7 @@ The repository class contains methods for interacting with the database. It hand
 
 This logic ensures that the chatbot can always return relevant appointment data, even if none have been pre-created for that day. The dynamic nature of this repository is key to making the system respond to real-world conditions.
 
-```ts
+```ts title="src/lib/repositories/appointments.ts"
 import { eq } from 'drizzle-orm/expressions'
 
 import { db as database } from '../db/index'
@@ -100,6 +107,44 @@ export const appointmentsRepository = new AppointmentRepository()
 
 Here, we define a tool (`getFreeAppointments`) that fetches free appointments for the next day using the repository. The tool returns a markdown list of available time slots, which can be directly integrated into the chatbot’s responses. This tool encapsulates the repository logic, ensuring that the chatbot can retrieve dynamic appointment data without direct interaction with the database.
 
+```ts title="src/lib/tools/get-free-appointments.ts"
+import { type CoreTool, tool } from 'ai'
+import { format } from 'date-fns'
+import { z } from 'zod'
+
+import { appointmentsRepository } from '../repositories/appointments'
+import { tomorrow } from '../utils'
+
+export const buildGetFreeAppointments = (): CoreTool =>
+  tool({
+    description:
+      'Use this tool to search for available appointment times for tomorrow. Returns the response',
+    execute: async () => {
+      console.log(`Called getFreeAppointments tool`)
+
+      const freeAppointments =
+        await appointmentsRepository.getFreeAppointmentsForDay(tomorrow())
+
+      if (freeAppointments.length === 0) {
+        return `Sorry, there are no available appointments for tomorrow.`
+      }
+
+      const availableSlots = freeAppointments
+        .map(
+          (app) =>
+            `- ${format(new Date(`1970-01-01T${app.timeSlot}`), 'HH:mm')}`,
+        )
+        .join('\n')
+
+      return `Available appointments are:\n${availableSlots}.`
+    },
+    parameters: z.object({}),
+  })
+```
+
+### Step 4: Adding the tool
+
+Finally, we incorporate the tool into our bot's context.
 
 ```ts title="src/lib/handlers/on-message.ts"
 import { generateText } from 'ai'
@@ -108,7 +153,7 @@ import { Composer } from 'grammy'
 import { registry } from '../ai/setup-registry'
 import { environment } from '../environment.mjs'
 import { conversationRepository } from '../repositories/conversation'
-import { getFreeAppointments } from '../tools/get-free-appointments'
+import { buildGetFreeAppointments } from '../tools/get-free-appointments'
 
 export const onMessage = new Composer()
 
@@ -138,7 +183,7 @@ onMessage.on('message:text', async (context) => {
   model: registry.languageModel(environment.MODEL),
   system: PROMPT,
   tools: {
-    getFreeAppointments,
+    getFreeAppointments: buildGetFreeAppointments(),
   },
 })
