I've been using the Vertex API fairly successfully for the past few months, but I've noticed that when the text part of the prompt becomes extremely long, for example 130,000 characters or so, the API seems to malfunction.
I've tried two approaches to integrating with Vertex:

1. The vertexai-preview package that ships with Firebase
2. The VertexAI Node.js package in a Cloud Function with increased (1 GB) memory allocation and a longer (120 sec) max runtime

All of my calls to Vertex follow the documentation's pattern, where the "files" sent to the LLM are included via a Cloud Storage URI and the text portions of the prompt are text parts. Like this:
async function multiPartContent() {
  const filePart = {fileData: {fileUri: "gs://generativeai-downloads/images/scones.jpg", mimeType: "image/jpeg"}};
  const textPart = {text: 'What is this picture about?'};
  const request = {
    contents: [{role: 'user', parts: [textPart, filePart]}],
  };
  const streamingResult = await generativeVisionModel.generateContentStream(request);
  for await (const item of streamingResult.stream) {
    console.log('stream chunk: ', JSON.stringify(item));
  }
  const aggregatedResponse = await streamingResult.response;
  console.log(aggregatedResponse.candidates[0].content);
}
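The snippet above assumes generativeVisionModel has already been created elsewhere, roughly like this (the project ID, location, and model name are placeholders):

const {VertexAI} = require('@google-cloud/vertexai');

// Placeholders: substitute your own project, location, and model.
const vertexAI = new VertexAI({project: 'my-project-id', location: 'us-central1'});
const generativeVisionModel = vertexAI.getGenerativeModel({model: 'gemini-1.5-pro'});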
In my case, I am using the generateContentStream approach.
Given the massive context window, I expect to be able to send lots of information to the LLM and then get a response back.
vertexai-preview client-side package
When using the vertexai-preview package, I get a FirebaseError with no message property as I start pushing larger requests that include more files and text.
I can confirm that my usage is nowhere near the 2M-token context window. Usually, these heavier requests are around 200k tokens.
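A rough way to verify that is the countTokens method on the generative model object (both SDKs expose one, as far as I can tell); here parts is the same array passed to generateContentStream:

// Rough size check before sending a heavy request.
// `model` is the same generative model instance used for generation.
const tokenInfo = await model.countTokens({
  contents: [{role: 'user', parts}],
});
console.log(`Estimated tokens for this request: ${tokenInfo.totalTokens}`);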
VertexAI server-side approach
Here's a relevant code block from my Cloud Function:
const req = {
  contents: [{role: "user", parts}],
};
console.log(`Initiating content generation for docId: ${docId}`);
const streamingResp = await generativeModel.generateContentStream(req);
This logic works for non-large requests, but heavier requests fail. In the Cloud Logging output I'll see the "Initiating content generation" line and then, even though I'm catching errors in my code (the block above sits inside a try / catch block), no further logs appear. The process literally just ends, poof.
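For context, the structure around that call looks roughly like this (simplified; the catch block below is the one that never seems to fire on heavy requests):

try {
  console.log(`Initiating content generation for docId: ${docId}`);
  const streamingResp = await generativeModel.generateContentStream(req);

  // Log each chunk as it arrives so partial progress shows up in Cloud Logging.
  for await (const chunk of streamingResp.stream) {
    console.log('stream chunk received');
  }

  const aggregatedResponse = await streamingResp.response;
  console.log(`Generation complete for docId: ${docId}`);
} catch (err) {
  // On the heavy requests, this line never shows up in the logs.
  console.error(`Generation failed for docId: ${docId}`, err);
}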
Multiple smaller text parts
I've tried converting long text strings into multiple smaller (e.g. ~50k-character) text parts, so that the parts array I send the LLM contains several shorter text parts instead of one very long one.
This didn't work at all.
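To be concrete, the splitting was along these lines (the 50k figure and the helper name are just illustrative):

// Illustrative only: split one long string into several smaller text parts.
function toTextParts(longText, chunkSize = 50000) {
  const parts = [];
  for (let i = 0; i < longText.length; i += chunkSize) {
    parts.push({text: longText.slice(i, i + chunkSize)});
  }
  return parts;
}

// e.g. const parts = [...toTextParts(longPromptText), filePart];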
text part as a fileUri part
I've tried converting long text strings into stored plain-text files, then sending them as fileUri parts.
This approach does seem to improve reliability, but here I run into something of a prompt-engineering problem, because the prompt itself now lives in the stored text file that I've sent to the LLM.
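Roughly, that conversion looks like this (the bucket name is a placeholder, and this assumes the @google-cloud/storage client):

const {Storage} = require('@google-cloud/storage');
const storage = new Storage();

// Store the long text as a plain-text object, then reference it by gs:// URI.
async function textToFilePart(longText, objectPath) {
  const bucketName = 'my-bucket'; // placeholder
  await storage.bucket(bucketName).file(objectPath).save(longText, {contentType: 'text/plain'});
  return {fileData: {fileUri: `gs://${bucketName}/${objectPath}`, mimeType: 'text/plain'}};
}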
Overall, I'm finding it difficult to work with the Vertex API on these larger requests. The API claims to be able to process them, but in practice these higher-token requests fail completely, with errors that are non-descriptive.
Specifically, how should I structure the long text part of my prompt? (Also, if you happen to know what limitations, if any, there are on the text part of Vertex prompts, please let me know.) I'd love to know how to approach this.
After extensive testing, I discovered several key insights about working with Vertex AI for large-scale AI operations. Here's what I learned and how I solved it:
The vertexai-preview package, while convenient for simple implementations, has significant limitations when handling large, long-running requests like the ones described above.
Instead of this:
// Client directly waiting for AI response
const response = await vertexai.generateContent(prompt);
Use this pattern:
// Client
// 1. Initialize request
const requestId = await initiateAIProcess(prompt);

// 2. Listen for updates
onSnapshot(doc(db, 'aiResponses', requestId), (snapshot) => {
  const response = snapshot.data();
  if (response.status === 'complete') {
    // Handle completion
  }
});

// Server
exports.processAIRequest = functions
  .runWith({
    timeoutSeconds: 540, // 9 minutes
    memory: '1GB'
  })
  .https.onCall(async (data) => {
    // Process AI request
    // Write results to Firestore
  });
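Here, initiateAIProcess is just a thin client-side helper; a sketch of what it might look like (the callable name and return shape are whatever your app uses):

import {getFunctions, httpsCallable} from 'firebase/functions';

// Hypothetical helper: kicks off the server-side function and returns the
// requestId that the onSnapshot listener above watches.
async function initiateAIProcess(prompt) {
  const processAIRequest = httpsCallable(getFunctions(), 'processAIRequest');
  const result = await processAIRequest({prompt});
  return result.data.requestId; // assumes the function returns {requestId}
}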
Rather than relying on the Preview package's default function configuration, set the timeout and memory on your Cloud Function explicitly, as in the runWith call above.
For long-running operations, keep the client out of the critical path: have it kick off the request and listen on Firestore for the result rather than waiting on the model directly.
The Vertex AI Preview package is best suited for simple, smaller requests made straight from the client.
For production applications, especially those handling large amounts of data, move the Vertex calls server-side, and consider breaking large requests into smaller, manageable chunks if possible (a rough sketch follows).
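For example, one straightforward way to chunk is to send the pieces as separate requests and stitch the responses together; the helper below is illustrative, and whether it works depends on whether your prompt can be split meaningfully:

// Illustrative: process oversized input as several smaller requests.
async function generateInChunks(generativeModel, chunks, instruction) {
  const answers = [];
  for (const chunk of chunks) {
    const result = await generativeModel.generateContent({
      contents: [{role: 'user', parts: [{text: instruction}, {text: chunk}]}],
    });
    answers.push(result.response.candidates[0].content.parts[0].text);
  }
  return answers.join('\n');
}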
This approach has proven much more reliable for handling complex AI operations with large datasets.