feat: generativeaionvertexai_embedding_batch #3913

Merged 3 commits on Dec 5, 2024
ai-platform/snippets/create-batch-embedding.js (new file: 105 additions, 0 deletions)
/*
* Copyright 2024 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

'use strict';

async function main(projectId, inputUri, outputUri, jobName) {
// [START generativeaionvertexai_embedding_batch]
// Imports the aiplatform library
const aiplatformLib = require('@google-cloud/aiplatform');
const aiplatform = aiplatformLib.protos.google.cloud.aiplatform.v1;

/**
* TODO(developer): Uncomment/update these variables before running the sample.
*/
Review comment on lines +25 to +27 (Contributor, severity: medium):

The TODO comment asks the developer to uncomment/update these variables. It would be helpful to provide more specific instructions or examples of how these variables should be updated, including valid formats and constraints. For instance, specify the expected format for inputUri and outputUri, and whether jobName can contain special characters.

Suggested change:

/**
 * TODO(developer): Uncomment and replace the following placeholders before running the sample.
 * - projectId: Replace with your Google Cloud project ID.
 * - inputUri: Replace with the URI of the input dataset. This can be a BigQuery table or a Google Cloud Storage file.
 *   Examples: "gs://[BUCKET]/[DATASET].jsonl" or "bq://[PROJECT].[DATASET].[TABLE]".
 * - outputUri: Replace with the URI where the output will be stored. This can be a BigQuery table or a Google Cloud Storage file.
 *   Examples: "gs://[BUCKET]/[OUTPUT].jsonl" or "bq://[PROJECT].[DATASET].[TABLE]".
 * - jobName: Replace with a descriptive name for the batch embedding job. Avoid special characters.
 */

Reply (Contributor):

This doesn't follow our style guidelines and would make sample management unwieldy.

Reply (Contributor):

I understand your concern about style guidelines. Could you please clarify which specific aspects of the suggested changes violate your style guide? Knowing the specific rules will help me refine the suggestion to better align with your team's preferences. For example, are there length restrictions on comments, preferred ways to format URIs, or specific naming conventions for variables that are being violated?

// projectId = 'YOUR_PROJECT_ID';

// Optional: URI of the input dataset.
// Could be a BigQuery table or a Google Cloud Storage file.
// E.g. "gs://[BUCKET]/[DATASET].jsonl" OR "bq://[PROJECT].[DATASET].[TABLE]"
// inputUri =
// 'gs://cloud-samples-data/generative-ai/embeddings/embeddings_input.jsonl';

// Optional: URI where the output will be stored.
// Could be a BigQuery table or a Google Cloud Storage file.
// E.g. "gs://[BUCKET]/[OUTPUT].jsonl" OR "bq://[PROJECT].[DATASET].[TABLE]"
// outputUri = 'gs://your_bucket/embedding_batch_output';

// The name of the job
// jobName = `Batch embedding job: ${new Date().getMilliseconds()}`;

const textEmbeddingModel = 'text-embedding-005';
const location = 'us-central1';
Review comment on lines +44 to +45 (Contributor, severity: low):

Consider making textEmbeddingModel and location configurable parameters passed to the main function. This would make the snippet more flexible and reusable.


// Configure the parent resource
const parent = `projects/${projectId}/locations/${location}`;
const modelName = `projects/${projectId}/locations/${location}/publishers/google/models/${textEmbeddingModel}`;

// Specifies the location of the api endpoint
const clientOptions = {
apiEndpoint: `${location}-aiplatform.googleapis.com`,
};

// Instantiates a client
const jobServiceClient = new aiplatformLib.JobServiceClient(clientOptions);

// Generates embeddings from text using batch processing.
// Read more: https://cloud.google.com/vertex-ai/generative-ai/docs/embeddings/batch-prediction-genai-embeddings
async function callBatchEmbedding() {
const gcsSource = new aiplatform.GcsSource({
uris: [inputUri],
});

const inputConfig = new aiplatform.BatchPredictionJob.InputConfig({
gcsSource,
instancesFormat: 'jsonl',
});

const gcsDestination = new aiplatform.GcsDestination({
outputUriPrefix: outputUri,
});

const outputConfig = new aiplatform.BatchPredictionJob.OutputConfig({
gcsDestination,
predictionsFormat: 'jsonl',
});

const batchPredictionJob = new aiplatform.BatchPredictionJob({
displayName: jobName,
model: modelName,
inputConfig,
outputConfig,
});

const request = {
parent,
batchPredictionJob,
};

// Create batch prediction job request
const [response] = await jobServiceClient.createBatchPredictionJob(request);

console.log('Raw response: ', JSON.stringify(response, null, 2));
Review comment (Contributor, severity: low):

Instead of just logging the raw response, consider logging specific fields that are relevant to the user, such as the job ID, status, and output URI. This would make it easier for the user to track the job's progress and access the results.

Suggested change:

console.log('Batch prediction job created:', response.name);
console.log('Job ID:', response.name.split('/').pop());
console.log('Output URI:', response.outputConfig.gcsDestination.outputUriPrefix);

}

await callBatchEmbedding();
// [END generativeaionvertexai_embedding_batch]
}

main(...process.argv.slice(2)).catch(err => {
console.error(err.message);
process.exitCode = 1;
});
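For context, the dataset at inputUri must be JSONL with one embedding instance per line; per the batch prediction docs linked in the snippet, a text embedding instance is an object with a content field. A minimal sketch of building that payload — toEmbeddingJsonl is a hypothetical helper name, not part of the sample:

```javascript
// Hypothetical helper: serializes an array of strings into the JSONL
// payload a batch embedding job reads from its input URI. Each output
// line is one instance object with a `content` field (assumed field
// name; check the batch prediction docs for the model you use).
function toEmbeddingJsonl(texts) {
  return texts.map(text => JSON.stringify({content: text})).join('\n');
}

console.log(toEmbeddingJsonl(['hello', 'world']));
```

A file built this way would be uploaded to the Cloud Storage location referenced by inputUri before the job is created.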
ai-platform/snippets/test/create-batch-embedding.test.js (new file: 84 additions, 0 deletions)
/*
* Copyright 2024 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

'use strict';

const {assert} = require('chai');
const {after, before, describe, it} = require('mocha');
const uuid = require('uuid').v4;
const cp = require('child_process');
const {JobServiceClient} = require('@google-cloud/aiplatform');
const {Storage} = require('@google-cloud/storage');

const execSync = cmd => cp.execSync(cmd, {encoding: 'utf-8'});

describe('Batch embedding', () => {
const displayName = `batch-embedding-job-${uuid()}`;
const location = 'us-central1';
const inputUri =
'gs://cloud-samples-data/generative-ai/embeddings/embeddings_input.jsonl';
let outputUri = 'gs://ucaip-samples-test-output/';
Review comment (Contributor, severity: medium):

It's good practice to use a dedicated test project for integration tests to avoid conflicts with other resources. Consider using a separate project for testing and setting the project ID explicitly in the test configuration.

const jobServiceClient = new JobServiceClient({
apiEndpoint: `${location}-aiplatform.googleapis.com`,
});
const projectId = process.env.CAIP_PROJECT_ID;
const storage = new Storage({
projectId,
});
let batchPredictionJobId;
let bucket;

before(async () => {
const bucketName = `test-bucket-${uuid()}`;
// Create a Google Cloud Storage bucket for embedding output
[bucket] = await storage.createBucket(bucketName);
outputUri = `gs://${bucketName}/embedding_batch_output`;
});

after(async () => {
// Cancel the batch prediction job, then request its deletion.
// Both calls are awaited so cleanup failures surface in the test run.
const name = jobServiceClient.batchPredictionJobPath(
projectId,
location,
batchPredictionJobId
);

const cancelRequest = {
name,
};

await jobServiceClient.cancelBatchPredictionJob(cancelRequest);

const deleteRequest = {
name,
};

await jobServiceClient.deleteBatchPredictionJob(deleteRequest);
// Delete the Google Cloud Storage bucket created for embedding output.
await bucket.delete();
});

it('should create batch prediction job', async () => {
const response = execSync(
`node ./create-batch-embedding.js ${projectId} ${inputUri} ${outputUri} ${displayName}`
);

assert.match(response, new RegExp(displayName));
batchPredictionJobId = response
.split(`/locations/${location}/batchPredictionJobs/`)[1]
.split('\n')[0];
});
});
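The test above recovers the job ID by splitting the CLI output on the job's resource path. That parsing can be expressed as a small standalone helper — extractJobId is a hypothetical name, a sketch mirroring the test's logic rather than part of the PR:

```javascript
// Hypothetical helper mirroring the parsing in the test above: pulls the
// job ID out of a batch prediction job resource name such as
// projects/123/locations/us-central1/batchPredictionJobs/456, tolerating
// trailing output after a newline when the name comes from raw CLI text.
function extractJobId(resourceName, location = 'us-central1') {
  const marker = `/locations/${location}/batchPredictionJobs/`;
  const tail = resourceName.split(marker)[1];
  return tail ? tail.split('\n')[0] : undefined;
}

console.log(
  extractJobId('projects/123/locations/us-central1/batchPredictionJobs/456')
); // 456
```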