I'm building a generative AI corpus and AI vocal-analysis tool for training people's voices and giving feedback. I'm using qwen4b-instruct as the LLM and the prompting works well. The pipeline goes: Corpus > Wikidata > LLM, with rerouting, fallbacks, and safeguards. I'm 99.5% done, I swear, and I have a job interview where I said I'd showcase this project. Everything works except that the JS prints out of order. I have two versions: a buggy but more consistent backup that writes messages in the correct order, and a fully functional AI version that keeps replacing the same word bubble over and over.
Both versions use this send handler:
function onSendMessage(textRaw) {
  if (!textRaw || !textRaw.trim()) return;

  appendMessage(textRaw, 'user');
  showTypingIndicator();

  const interp = interpretUserText(textRaw);
  const artist = extractArtistName(textRaw);

  setTimeout(async () => {
    removeTypingIndicator();

    if (!CORPUS_READY && corpusPromise) await corpusPromise;

    if (artist) {
      const facts =
        (await fetchArtistFactsWikidataSearch(artist)) ||
        getArtistFactsMock(artist);

      const intent = interp.wants_style ? "style_compare" : "artist_info";
      const corpusLine = getResponseByIntent(intent, {
        artist: facts?.name || artist
      });

      appendMessage(corpusLine, "coach");

      if (USE_LLM) {
        const prompt = buildLLMPrompt({
          userText: textRaw,
          intent,
          corpusLine,
          artistFacts: facts
        });

        // Optional: shadow prompt logging (your flag)
        if (SHADOW_LLM) {
          console.group("🜂 Shadow LLM Prompt");
          console.log(prompt);
          console.groupEnd();
        }

        callLLM(prompt).then(llmReply => {
          if (llmReply && llmReply.trim()) {
            updateLastCoachMessage(
              [
                corpusLine,
                facts?.description ? `\n\n${facts.description}` : "",
                llmReply
              ].filter(Boolean).join("\n\n")
            );
          }
        });
      }
      return;
    }

    if (interp.technique) {
      const corpusLine = getResponseByIntent("technique_coaching", {
        technique: interp.technique
      });

      appendMessage(corpusLine, "coach");

      if (USE_LLM) {
        const prompt = buildLLMPrompt({
          userText: textRaw,
          intent: "technique_coaching",
          corpusLine,
          artistFacts: null
        });

        if (SHADOW_LLM) {
          console.group("🜂 Shadow LLM Prompt");
          console.log(prompt);
          console.groupEnd();
        }

        callLLM(prompt).then(llmReply => {
          if (llmReply && llmReply.trim()) {
            updateLastCoachMessage(llmReply);
          }
        });
      }
      return;
    }

    const corpusLine = getResponseByIntent("metal_fallback");
    appendMessage(corpusLine, "coach");

    if (USE_LLM) {
      const prompt = buildLLMPrompt({
        userText: textRaw,
        intent: "metal_fallback",
        corpusLine,
        artistFacts: null
      });

      if (SHADOW_LLM) {
        console.group("🜂 Shadow LLM Prompt");
        console.log(prompt);
        console.groupEnd();
      }

      callLLM(prompt).then(llmReply => {
        if (llmReply && llmReply.trim()) {
          updateLastCoachMessage(llmReply);
        }
      });
    }
  }, 900);
}
Your bug is not the LLM. It is the UI update logic.
Every send starts several async operations (the setTimeout, the awaited Wikidata fetch, the callLLM().then(...) callbacks), and whenever one of them finishes it calls updateLastCoachMessage(). That function always targets whichever coach bubble is last at that moment, so a reply that resolves late overwrites a bubble belonging to a newer message, and the chat appears out of order.
Fix: stop updating "the last message." When you append a coach message, store its id or DOM reference and update that specific bubble when its LLM reply returns. Switching to await callLLM() makes each branch easier to read, but the real fix is stable message targeting, as in the sketch below.
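A minimal sketch of that targeting, assuming appendMessage can be changed to return the element it creates; updateCoachMessage, respondWithCoachBubble, and the #chat container are illustrative names, not your existing API:

function appendMessage(text, role) {
  // Assumption: a single container with id="chat" holds the message bubbles.
  const bubble = document.createElement("div");
  bubble.className = `message ${role}`;
  bubble.textContent = text;
  document.querySelector("#chat").appendChild(bubble);
  return bubble; // stable handle for later updates
}

function updateCoachMessage(bubble, text) {
  bubble.textContent = text; // updates this specific bubble, never "the last one"
}

// Each coach branch of onSendMessage then reduces to roughly:
async function respondWithCoachBubble({ corpusLine, prompt }) {
  const bubble = appendMessage(corpusLine, "coach"); // capture the bubble just created
  if (!USE_LLM) return;
  const llmReply = await callLLM(prompt);            // await keeps the branch linear
  if (llmReply && llmReply.trim()) {
    updateCoachMessage(bubble, llmReply);            // a late reply can only touch its own bubble
  }
}

All three branches can call respondWithCoachBubble with their own corpusLine and prompt, and updateLastCoachMessage can go away; in the artist branch you can still join corpusLine, facts.description, and the reply before updating that same bubble.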