```ts
// Get video title, description, and thumbnail from YouTube API v3
const videoInfo = await getVideoInfo(videosToLoad);

// Get video transcripts from SearchAPI.io, join the video info
```
Notice that we first check if we have already generated a vector using the Redis Set `VECTOR_SET`. If we have, we skip the LLM and use the existing vector. This avoids unnecessary API calls and can speed things up.
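As a rough sketch of that guard (not the tutorial's exact code), the membership check with node-redis might look like the following. The Set name, key shapes, and helper function are assumptions based on the description above:

```ts
import { createClient } from 'redis';
import { OpenAIEmbeddings } from '@langchain/openai';

const client = createClient({ url: process.env.REDIS_URL });
await client.connect();

const embeddings = new OpenAIEmbeddings();

// Assumed name for the Redis Set that tracks which videos already have vectors.
const VECTOR_SET = 'video-vector-set';

async function generateVectorIfMissing(videoId: string, summary: string) {
  // If the video id is already in the Set, a vector was generated on a previous run.
  if (await client.sIsMember(VECTOR_SET, videoId)) {
    return null; // Reuse the stored vector; skip the LLM/embedding call entirely.
  }

  // Otherwise generate the embedding now...
  const vector = await embeddings.embedQuery(summary);

  // ...store it via the vector store (not shown), then mark this video as processed.
  await client.sAdd(VECTOR_SET, videoId);

  return vector;
}
```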
### Redis vector search functionality and AI integration for video Q&A
One of the key features of our application is the ability to search through video content using AI-generated queries. This section will cover how the backend handles search requests and interacts with the AI models.
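Before digging into the route handlers, it helps to see the core operation in isolation: the user's question is embedded and used for a KNN search against the stored video vectors. A minimal sketch follows; the index name and `k` value are assumptions, and the tutorial may import `RedisVectorStore` from a different package path:

```ts
import { createClient } from 'redis';
import { OpenAIEmbeddings } from '@langchain/openai';
import { RedisVectorStore } from '@langchain/redis';

const client = createClient({ url: process.env.REDIS_URL });
await client.connect();

// Assumed index name; the real value is configured earlier in the tutorial.
const vectorStore = new RedisVectorStore(new OpenAIEmbeddings(), {
  redisClient: client,
  indexName: 'video-summaries',
});

// Embed the user's question and return the closest stored video summaries.
async function searchVideos(question: string, k = 3) {
  const results = await vectorStore.similaritySearch(question, k);
  return results.map((doc) => doc.metadata); // metadata links back to the source videos
}
```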
### How to implement semantic vector caching in Redis
If you're already familiar with storing vectors in Redis, which we have covered in this tutorial, semantic vector caching is an extension of that and operates in essentially the same way. The only difference is that we are storing the question as a vector, rather than the video summary. We are also using the [cache aside](https://www.youtube.com/watch?v=AJhTduDOVCs) pattern. The process is as follows:
1. When a user asks a question, we perform a vector similarity search for existing answers to the question.
1. If we find an answer, we return it to the user, avoiding a call to the LLM.
```
const answerVectorStore = new RedisVectorStore(embeddings, {
  // ...index configuration not shown in this excerpt
});
```
The `answerVectorStore` looks nearly identical to the `vectorStore` we defined earlier, but it uses a different [algorithm and distance metric](https://redis.io/docs/interact/search-and-query/advanced-concepts/vectors/). This algorithm is better suited for similarity searches for our questions.
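The knobs involved are the vector indexing algorithm (for example `FLAT` vs. `HNSW`) and the distance metric (`L2`, `IP`, or `COSINE`), which the vector store passes through when it creates the index. As an illustration only (these particular values are assumptions, not necessarily the tutorial's settings), such a configuration can look like this:

```ts
import { createClient, VectorAlgorithms } from 'redis';
import { OpenAIEmbeddings } from '@langchain/openai';
import { RedisVectorStore } from '@langchain/redis';

const client = createClient({ url: process.env.REDIS_URL });
await client.connect();

// Example index options: a FLAT (brute-force) index with L2 distance.
// Swap in HNSW/COSINE or other values depending on your accuracy and speed needs.
const answersStore = new RedisVectorStore(new OpenAIEmbeddings(), {
  redisClient: client,
  indexName: 'answers',
  indexOptions: {
    ALGORITHM: VectorAlgorithms.FLAT,
    DISTANCE_METRIC: 'L2',
  },
});
```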
The following code demonstrates how to use the `answerVectorStore` to check if a similar question has already been answered.
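That code isn't reproduced in this excerpt, but the shape of the check is roughly the following sketch; the helper name, the `k` value, the score threshold, and the `answer` metadata field are all assumptions:

```ts
// Cache-aside lookup: is there an existing answer to a semantically similar question?
// `answerVectorStore` is the store defined above.
async function findCachedAnswer(question: string) {
  // Returns [Document, score] pairs; the score is the distance/similarity reported
  // by Redis, so which direction counts as "closer" depends on the configured metric.
  const results = await answerVectorStore.similaritySearchWithScore(question, 1);

  if (results.length === 0) {
    return null; // Cache miss: fall through to the LLM and store its answer afterwards.
  }

  const [doc, score] = results[0];

  // With an L2-style distance, smaller scores mean more similar questions.
  return score <= 0.8 ? doc.metadata.answer : null;
}
```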