Tags: python, css, frontend, streamlit

Can I remove the big box behind a chat input at the bottom of a Streamlit website?


I'm currently building a website that involves a chatbot, and behind the chat message input there's a black box that covers part of my background image. Is there a way to remove it? I tried finding it in the devtools, but that didn't work.

Here's the code I'm using for that:

#import packages
from dotenv import load_dotenv
import openai
import streamlit as st

@st.cache_data
def get_response(user_prompt):
    response = client.responses.create(
            model = "gpt-4o", #Selects the model that you want to use (use the cheap one)
            input = [ #List that keeps track of convo history as a list
            {"role" : "user", "content": user_prompt} #Prompt
            ],
            temperature = 0.7, #How creative the answer will get
            max_output_tokens = 100  #Limit response length 
        )   
    return response
#Load environment variables from the .env file
load_dotenv()

#Initialize the OpenAI Client
client = openai.OpenAI()
title = st.empty()
subtitle = st.empty()
title.markdown("<h1 style='text-align: center;'>Hello There!</h1>", unsafe_allow_html=True)
subtitle.markdown("<h2 style='text-align: center; '>How can we help you?</h2>", unsafe_allow_html=True)
page_bg_image = """
<style>
[data-testid="stAppViewContainer"] {
    background-image: url('https://snu.edu.in/site/assets/files/18543/gears-technological-elements-digital-blue-background-symbolizing-network-innovation-communication-3d-rendering.1600x0.webp');
    background-size: cover;
}
[data-testid="stHeader"]{
background-color: rgba(0,0,0,0);
}
[data-testid="stBottomBlockContainer"] {
    background: rgba(0,0,0,0);
}
</style>
"""
st.markdown(page_bg_image, unsafe_allow_html=True)
#Add a text input box for the user prompt
user_prompt = st.chat_input("Enter your prompt: ")
if user_prompt:
    title.empty()
    subtitle.empty()
    st.chat_message("user").write(user_prompt)
    with st.spinner("Generating response..."):
        response = get_response(user_prompt)
    #Print the response
    st.chat_message("assistant").write(response.output[0].content[0].text)

[screenshot: black box behind the chat input, covering the background image]


Solution

  • One way to get the expected outcome is to use st.text_input() instead of st.chat_input().

    st.chat_input is rendered inside Streamlit's fixed bottom container, which is why your earlier CSS overrides weren't enough.

    By switching to st.text_input, you get full control over placement and styling, so you can overlay it on your background image seamlessly.

    Here's the complete solution:

    import os
    from dotenv import load_dotenv
    import streamlit as st
    from huggingface_hub import InferenceClient
    
    # Load environment variables
    load_dotenv()
    
    # Initialize client
    client = InferenceClient(
        provider="groq",
        api_key=os.getenv("GROQ_API_KEY")
    )
    
    
    def get_response(input_prompt: str) -> str:
        """
            Send a user prompt to the model and return the response.
        """
    
        result = client.chat.completions.create(
            model="openai/gpt-oss-20b",
            messages=[
                {"role": "user",
                 "content": input_prompt}
            ],
            temperature=0.7,
            max_tokens=100
        )
    
        return result.choices[0].message.content
    
    
    # --- UI Styling ---
    title = st.empty()
    subtitle = st.empty()
    title.markdown("<h1 style='text-align: center;'>Hello There!</h1>", unsafe_allow_html=True)
    subtitle.markdown("<h2 style='text-align: center; '>How can we help you?</h2>", unsafe_allow_html=True)
    
    page_bg_image = """
    <style>
    [data-testid="stAppViewContainer"] {
        background-image: url('https://snu.edu.in/site/assets/files/18543/gears-technological-elements-digital-blue-background-symbolizing-network-innovation-communication-3d-rendering.1600x0.webp');
        background-size: cover;
        background-attachment: fixed;
    }
    
    /* Hide the default chat input */
    [data-testid="stChatInput"] {display: none !important;}
    
    /* Floating custom input box */
    [data-testid="stTextInput"] {
        position: fixed;
        left: 50%;
        transform: translateX(-50%);
        bottom: 24px;
        width: min(900px, 90%);
        z-index: 9999;
        padding: 0 !important;
    }
    [data-testid="stTextInput"] .stTextInput > div {
        background: rgba(255,255,255,0.08) !important;
        border-radius: 12px;
        padding: 8px 12px;
    }
    [data-testid="stTextInput"] input {
        background: transparent !important;
        color: black !important;
        border: none !important;
        outline: none !important;
    }
    </style>
    """
    st.markdown(page_bg_image, unsafe_allow_html=True)
    
    
    # Use a text_input as the submit box (press Enter to submit)
    user_prompt = st.text_input("Prompt", placeholder="Enter your prompt:", key="user_prompt", label_visibility="collapsed")
    
    if user_prompt:
    
        # Clear title/subtitle after first input
        title.empty()
        subtitle.empty()
        st.chat_message("user").write(user_prompt)
    
        with st.spinner("Generating response..."):
            response = get_response(user_prompt)
    
        st.chat_message("assistant").write(response)
    

    Output:

    [screenshot: transparent input box floating over the background image]

    I used a Hugging Face model to test this out; you can keep your OpenAI client as it is.
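    If you'd rather keep st.chat_input, another option is to make the outer bottom container itself transparent: stBottomBlockContainer is only an inner wrapper, so the dark background can come from the container around it. The stBottom test id below is an assumption on my part (Streamlit's internal test ids change between versions), so confirm the exact selector in devtools before relying on it:

    ```css
    /* Make the fixed bar behind st.chat_input transparent.
       NOTE: "stBottom" is an assumption -- Streamlit's internal
       test ids vary by version; inspect the element in devtools
       to confirm the selector before using it. */
    [data-testid="stBottom"] > div {
        background: transparent !important;
    }
    ```

    Injected with st.markdown(..., unsafe_allow_html=True), the same way the question already injects its CSS, this should let the background image show through without swapping out the widget.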