feat(genai): Sample/batch prediction #4181
base: main
Conversation
Here is the summary of changes. You are about to add 3 region tags.
This comment is generated by snippet-bot.
Summary of Changes
Hello @Guiners, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request significantly expands the Node.js Generative AI samples by adding robust examples for batch prediction. It provides clear demonstrations of how to initiate and monitor batch prediction jobs using various models (text embeddings and Gemini) and how to direct their outputs to either Google Cloud Storage or BigQuery. This enhancement makes it easier for developers to implement large-scale, asynchronous AI inference workflows.
Highlights
- New Batch Prediction Samples: Introduced three new Node.js samples demonstrating batch prediction capabilities for Google Cloud's Generative AI services.
- Diverse Output Destinations: The new samples cover storing batch prediction results in both Google Cloud Storage (GCS) and BigQuery (BQ); see the usage sketch after this list.
- Model Support: Samples include batch prediction for text embedding models ('text-embedding-005') and Gemini models ('gemini-2.5-flash').
- Dependency Updates: Added @google-cloud/bigquery and @google-cloud/storage to package.json to support BigQuery and GCS interactions within the samples and tests.
- Comprehensive Testing: Accompanying test files were added for each new batch prediction sample, including cleanup logic for generated GCS and BQ resources.
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.
You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes
[1] Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.
Code Review
This pull request adds new samples for GenAI batch prediction. My review found a critical issue in all three new sample files: the job status polling loop has an inverted condition, which prevents it from waiting for job completion. I've also identified that none of the new tests clean up the resources they create (GCS objects, BigQuery tables), which could lead to issues in the test environment. I've provided suggestions to fix these problems, including using try...finally for cleanup and strengthening test assertions. Additionally, there are a couple of minor issues, such as incorrect region tags and redundant code, that I've pointed out.
  'JOB_STATE_PAUSED',
]);

while (completedStates.has(job.state)) {
The polling loop condition is inverted. It currently checks while (completedStates.has(job.state)), which means the loop will only run if the job is already in a completed state. Since the initial state is JOB_STATE_PENDING, the loop will not execute at all, and the function will return prematurely. The condition should be negated to poll until the job reaches a completed state.
Suggested change:
- while (completedStates.has(job.state)) {
+ while (!completedStates.has(job.state)) {
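For context, here is a compact sketch of the corrected polling pattern end to end. refreshJob stands in for whatever "get job" call the sample actually uses (not visible in this diff), the 30-second interval is an assumption, and the full contents of completedStates are assumed, since the diff only shows 'JOB_STATE_PAUSED'.

```js
// Sketch only: `refreshJob` is a placeholder for the sample's real "get job"
// call, and `job` is the object returned when the batch prediction job is created.
const completedStates = new Set([
  'JOB_STATE_SUCCEEDED',
  'JOB_STATE_FAILED',
  'JOB_STATE_CANCELLED',
  'JOB_STATE_PAUSED',
]);

const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

async function waitForCompletion(job) {
  // Poll until the job leaves its pending/running states.
  while (!completedStates.has(job.state)) {
    await sleep(30000);                // assumed polling interval
    job = await refreshJob(job.name);  // placeholder: re-fetch the job resource
    console.log(`Job state: ${job.state}`);
  }
  return job.state; // e.g. 'JOB_STATE_SUCCEEDED'
}
```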
  'JOB_STATE_PAUSED',
]);

while (completedStates.has(job.state)) {
The polling loop condition is inverted. It currently checks while (completedStates.has(job.state)), which means the loop will only run if the job is already in a completed state. Since the initial state is JOB_STATE_PENDING, the loop will not execute at all, and the function will return prematurely. The condition should be negated to poll until the job reaches a completed state.
Suggested change:
- while (completedStates.has(job.state)) {
+ while (!completedStates.has(job.state)) {
  'JOB_STATE_PAUSED',
]);

while (completedStates.has(job.state)) {
The polling loop condition is inverted. It currently checks while (completedStates.has(job.state)), which means the loop will only run if the job is already in a completed state. Since the initial state is JOB_STATE_PENDING, the loop will not execute at all, and the function will return prematurely. The condition should be negated to poll until the job reaches a completed state.
Suggested change:
- while (completedStates.has(job.state)) {
+ while (!completedStates.has(job.state)) {
it('should return the batch job state', async function () {
  this.timeout(50000);
  const gscOutput = gcs_output_uri();
  const gscUri = (await gscOutput).uri;
  const output = await sample.runBatchPredictionJob(
    gscUri,
    projectId,
    location
  );
  assert.notEqual(output, undefined);
});
This test has two issues:
- It doesn't clean up the GCS resources it creates, which can lead to leftover artifacts in the test bucket. Using a try...finally block will ensure cleanup happens even if the test fails.
- The assertion assert.notEqual(output, undefined) is too weak. With the bug in the sample file, it would pass by returning JOB_STATE_PENDING. A better assertion is to check for JOB_STATE_SUCCEEDED to confirm the job completed successfully.

Also, there's a minor typo: gscOutput could be gcsOutput for consistency.
it('should return the batch job state', async function () {
  this.timeout(50000);
  const gcsOutput = await gcs_output_uri();
  try {
    const output = await sample.runBatchPredictionJob(
      gcsOutput.uri,
      projectId,
      location
    );
    assert.strictEqual(output, 'JOB_STATE_SUCCEEDED');
  } finally {
    await gcsOutput.cleanup();
  }
});
it('should return the batch job state', async function () {
  this.timeout(50000);
  const bqOutput = bq_output_uri();
  const bqUri = (await bqOutput).uri;
  const output = await sample.runBatchPredictionJob(
    bqUri,
    projectId,
    location
  );
  assert.notEqual(output, undefined);
});
This test has two issues:
- It doesn't clean up the BigQuery table it creates, which can lead to leftover artifacts. Using a try...finally block will ensure cleanup happens even if the test fails.
- The assertion assert.notEqual(output, undefined) is too weak. With the bug in the sample file, it would pass by returning JOB_STATE_PENDING. A better assertion is to check for JOB_STATE_SUCCEEDED to confirm the job completed successfully.
it('should return the batch job state', async function () {
  this.timeout(50000);
  const bqOutput = await bq_output_uri();
  try {
    const output = await sample.runBatchPredictionJob(
      bqOutput.uri,
      projectId,
      location
    );
    assert.strictEqual(output, 'JOB_STATE_SUCCEEDED');
  } finally {
    await bqOutput.cleanup();
  }
});
it('should return the batch job state', async function () {
  this.timeout(50000);
  const gscOutput = gcs_output_uri();
  const gscUri = (await gscOutput).uri;
  const output = await sample.runBatchPredictionJob(
    gscUri,
    projectId,
    location
  );
  assert.notEqual(output, undefined);
});
This test has two issues:
- It doesn't clean up the GCS resources it creates, which can lead to leftover artifacts in the test bucket. Using a try...finally block will ensure cleanup happens even if the test fails.
- The assertion assert.notEqual(output, undefined) is too weak. With the bug in the sample file, it would pass by returning JOB_STATE_PENDING. A better assertion is to check for JOB_STATE_SUCCEEDED to confirm the job completed successfully.

Also, there's a minor typo: gscOutput could be gcsOutput for consistency.
it('should return the batch job state', async function () {
  this.timeout(50000);
  const gcsOutput = await gcs_output_uri();
  try {
    const output = await sample.runBatchPredictionJob(
      gcsOutput.uri,
      projectId,
      location
    );
    assert.strictEqual(output, 'JOB_STATE_SUCCEEDED');
  } finally {
    await gcsOutput.cleanup();
  }
});
if (job.state === 'JOB_STATE_FAILED') {
  console.log(`Job state: ${job.state}`);
  break;
}
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Description
Fixes #
Note: Before submitting a pull request, please open an issue for discussion if you are not associated with Google.
Checklist
- Tests pass: npm test (see Testing)
- Lint pass: npm run lint (see Style)
- This pull request is from a branch created directly off of GoogleCloudPlatform/nodejs-docs-samples. Not a fork.