Commit fa448ea

feat(generative-ai): Add basic samples for Gemini / VertexAI inference (#3670)
* feat(generative-ai): Add sample for basic text generation.
* chore: fix lint issues
* feat: Add basic multimodal example
* chore: add header
* feat: Add streaming inference example
* feat: Add example for multimodal streaming response
* fix: Fix some tests that were not running correctly
* chore: Clarify test names
* fix: Adjust cloud storage bucket and argument order for consistency
* chore: fix lint errors
* chore: Update some tests and prompt text to be clearer
* Update model to the correct multimodal model
* Clarify multimodal prompt based on order
* chore: Update function calling stream model to address flaky test.
1 parent 0185fe4 commit fa448ea

4 files changed: +4 -4 lines changed

generative-ai/snippets/inference/nonStreamMultiModalityBasic.js

+1 -1

@@ -45,7 +45,7 @@ async function generateContent(
             mime_type: 'image/jpeg',
           },
         },
-        {text: 'Are following video and image correlated?'},
+        {text: 'Are this video and image correlated?'},
       ],
     },
   ],
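For context, the edited prompt string is one part of the multimodal request that nonStreamMultiModalityBasic.js builds. Below is a minimal sketch of such a request, assuming the @google-cloud/vertexai Node.js SDK; the Cloud Storage URIs, parameter handling, and logging are placeholders for illustration, not the sample's actual values.

// Minimal sketch of a non-streaming multimodal request (assumptions noted above).
const {VertexAI} = require('@google-cloud/vertexai');

async function generateContent(projectId, location, model) {
  const vertexAI = new VertexAI({project: projectId, location: location});
  const generativeModel = vertexAI.getGenerativeModel({model: model});

  const request = {
    contents: [
      {
        role: 'user',
        parts: [
          // Placeholder URIs; the sample's actual Cloud Storage paths may differ.
          {file_data: {file_uri: 'gs://YOUR_BUCKET/video.mp4', mime_type: 'video/mp4'}},
          {file_data: {file_uri: 'gs://YOUR_BUCKET/image.jpg', mime_type: 'image/jpeg'}},
          // The question comes after the media parts it refers to.
          {text: 'Are this video and image correlated?'},
        ],
      },
    ],
  };

  const result = await generativeModel.generateContent(request);
  console.log(JSON.stringify(result.response.candidates[0].content));
}

Keeping the question after the video and image parts appears to be what "Clarify multimodal prompt based on order" in the commit message refers to.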

generative-ai/snippets/inference/streamMultiModalityBasic.js

+1 -1

@@ -45,7 +45,7 @@ async function generateContent(
             mime_type: 'image/jpeg',
           },
         },
-        {text: 'Are following video and image correlated?'},
+        {text: 'Are this video and image correlated?'},
       ],
     },
   ],
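streamMultiModalityBasic.js receives the same prompt fix; the streaming variant differs mainly in how the response is consumed. A minimal sketch follows, assuming the same SDK and the same request object as in the sketch above.

// Streaming variant: iterate partial chunks as they arrive (sketch, not the sample verbatim).
const streamingResult = await generativeModel.generateContentStream(request);
for await (const item of streamingResult.stream) {
  // Each chunk carries a partial candidate; print its first text part.
  console.log(item.candidates[0].content.parts[0].text);
}
// The aggregated response is also available once the stream completes.
const aggregated = await streamingResult.response;
console.log(JSON.stringify(aggregated.candidates[0].content));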

generative-ai/snippets/test/function-calling/functionCallingStreamChat.test.js

+1 -1

@@ -30,7 +30,7 @@ describe('Generative AI Function Calling Stream Chat', () => {
    */
   // const projectId = 'YOUR_PROJECT_ID';
   // const location = 'YOUR_LOCATION';
-  // const model = 'gemini-1.0-pro';
+  // const model = 'gemini-1.5-pro-preview-0409';

   it('should create stream chat and begin the conversation the same in each instance', async () => {
     const output = execSync(

generative-ai/snippets/test/inference/nonStreamMultiModalityBasic.test.js

+1 -1

@@ -30,7 +30,7 @@ describe('Generative AI Multimodal Text Inference', () => {
    */
   // const projectId = 'YOUR_PROJECT_ID';
   // const location = 'YOUR_LOCATION';
-  // const model = 'gemini-1.0-pro';
+  // const model = 'gemini-1.5-pro-preview-0409';

   it('should generate text based on a prompt containing text, a video, and an image', async () => {
     const output = execSync(
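Both test updates only change the commented-out default model noted in the test header. For orientation, these tests drive the samples as child processes via execSync; the sketch below is a hedged illustration of that pattern, assuming a Mocha test that passes projectId, location, and model as positional arguments. The real command string, argument order, and environment variable names in the repository may differ.

// Hedged sketch of the test pattern; names and argument order are assumptions.
const assert = require('assert');
const {execSync} = require('child_process');

const projectId = process.env.CAIP_PROJECT_ID; // assumption: project taken from the environment
const location = 'us-central1'; // assumption
const model = 'gemini-1.5-pro-preview-0409';

describe('Generative AI Multimodal Text Inference', () => {
  it('should generate text based on a prompt containing text, a video, and an image', async () => {
    const output = execSync(
      `node ./inference/nonStreamMultiModalityBasic.js ${projectId} ${location} ${model}`
    ).toString();
    // Assumption: the sample prints the model's answer, so any non-empty output passes.
    assert(output.length > 0);
  });
});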
