Tags: python, chatbot, mathjax, py-shiny

How to get the timing of Shiny Chat's equation display right?


The code below implements a chatbot that is supposed to format equations nicely in LaTeX. However, the equations in the current response are not rendered, while the preceding response is formatted correctly. In other words, request 1 only renders after I submit request 2, and request 2 only renders after I submit request 3.

Below are the code and a sample output demonstrating the issue. As the screenshot shows, the second answer (to "solve x^2-4=0") displays the raw LaTeX code of the equations instead of properly rendered output. What could be causing this behavior?

This code is based on assistance I received from this question.

import os

from app_utils import load_dotenv
from openai import AsyncOpenAI
from langchain_openai import ChatOpenAI

load_dotenv()
llm = AsyncOpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

from shiny import App, Inputs, Outputs, Session, ui

mathjax = ui.head_content(
    ui.tags.script(
        src="https://mathjax.rstudio.com/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML"
    ),
    ui.tags.script("""if (window.MathJax) {
        MathJax.Hub.Config({
            tex2jax: {
                inlineMath: [["$", "$"], ["\\(", "\\)"]],
                displayMath: [["$$", "$$"], ["\\[", "\\]"]],
                processEscapes: true
            },
            config: ["MMLorHTML.js"],
            jax: ["input/TeX", "output/HTML-CSS", "output/NativeMML"],
            extensions: ["MathMenu.js", "MathZoom.js"]
        });
        $(function() {
            Shiny.addCustomMessageHandler("typeset", function(message) {
                MathJax.Hub.Queue(['Typeset', MathJax.Hub]);
            });
        });
        }
        """),
)

app_ui = ui.page_fillable(
    mathjax,
    ui.panel_title("Demo Shiny Chat"),
    ui.chat_ui("chat"),  
    fillable_mobile=True,
)

def server(input: Inputs, output: Outputs, session: Session):
    # Create a chat instance and display it

    system_message = {
        "role": "system",
        "content": "act like a math teacher",
    }
    chat = ui.Chat(
        id="chat",
        messages=[
            system_message,
            {"content": "I am here to answer your math questions!", "role": "assistant"},
        ],
    )


    model_params = {
        "model": "gpt-4o",
        "temperature": 1,
        "max_tokens": 4096,
    }

    llm = ChatOpenAI(**model_params)
    
    @chat.on_user_submit
    async def _():
        messages = chat.messages(format="langchain")
        response = llm.astream(messages)
        await chat.append_message_stream(response)
        await session.send_custom_message("typeset", {"msg": "dummy"})
    
app = App(app_ui, server)

(Screenshot: the second answer shows the raw LaTeX source instead of rendered equations.)


Solution

  • We can define a reactive.effect that takes a dependency (via req) on Chat.messages(), the reactive value containing the chat messages. The effect triggers whenever an answer has been added, and we can then send the typeset message to render the math. Note that reactive and req must be imported from shiny.

    from shiny import reactive, req  # additional imports needed for this

    @reactive.effect
    async def __():
        req(chat.messages())  # re-runs whenever the chat messages change
        # add a small delay here only if rendering still lags behind
        await session.send_custom_message("typeset", {"msg": "dummy"})
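
    Putting it together, the server function could look like the sketch below. This is a minimal sketch, not a definitive implementation: it reuses the question's model parameters and the `"typeset"` custom message handler from the MathJax head content, and it assumes `OPENAI_API_KEY` is set in the environment.

    ```python
    from shiny import Inputs, Outputs, Session, reactive, req, ui
    from langchain_openai import ChatOpenAI


    def server(input: Inputs, output: Outputs, session: Session):
        # Create the chat with the system prompt and a greeting
        chat = ui.Chat(
            id="chat",
            messages=[
                {"role": "system", "content": "act like a math teacher"},
                {"role": "assistant", "content": "I am here to answer your math questions!"},
            ],
        )

        llm = ChatOpenAI(model="gpt-4o", temperature=1, max_tokens=4096)

        @chat.on_user_submit
        async def _():
            messages = chat.messages(format="langchain")
            # Only stream the response here; do NOT send the typeset
            # message in this handler, because it would fire before
            # the streamed answer has finished arriving.
            await chat.append_message_stream(llm.astream(messages))

        # This effect re-runs whenever chat.messages() is invalidated,
        # i.e. after the streamed answer has been fully appended, so
        # MathJax typesets the completed message rather than a partial one.
        @reactive.effect
        async def __():
            req(chat.messages())
            await session.send_custom_message("typeset", {"msg": "dummy"})
    ```

    The key difference from the original code is that the typeset trigger moves out of the submit handler and into a reactive effect, so it runs after the message list has actually been updated.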