rust · file-transfer · actix-web

How to Efficiently Serve Large Files Using Actix Web in Rust


Here is my scenario: I have written an API endpoint using Actix Web, which is running inside a Linux VM. When the API is hit, it runs a job and creates a zip file. This zip file is around 3.5 GB in size.

I have a front-end that I have written in React (also running on the same VM), and when the above-mentioned background job is completed, it will show the user a Download button.

Now, I want to transfer this file from my backend, where the zip file was created, to the user's desktop. I tried some methods mentioned on various websites, and they did work, but there is an issue: the entire 3.5 GB file is sent to the user at once and loaded into the end user's browser memory, after which the download progress indicator begins; as soon as the download completes, the browser crashes.

I understand this is not the correct way of sending/handling large files. Sending the data in chunks, or using streams, might be the way to go, but I am not able to write either since I am fairly new to Rust. I also thought of storing the zip file in a storage bucket, but that adds cost, and I have some policy restrictions, so I dropped that idea.

Please help me understand how to serve these large files over the internet, and/or how to break the file into chunks.

I would really appreciate any help.

Here is what I tried:

use actix_files::NamedFile;
use actix_web::{get, HttpRequest, Result};
use actix_web::http::header::{ContentDisposition, DispositionType};
use mime::Mime;

const FILE_PATH: &str = "/path/to/my/zip/file/xyz.zip";

#[get("/api/v1/downloads/xyz.zip")]
async fn downloader(req: HttpRequest) -> Result<NamedFile> {
    let file = NamedFile::open(FILE_PATH)?;

    let content_disposition = ContentDisposition {
        disposition: DispositionType::Attachment,
        parameters: vec![],
    };

    let content_type: Mime = "application/zip".parse().unwrap();

    Ok(file
        .set_content_disposition(content_disposition)
        .set_content_type(content_type))
}

Solution

  • I went with the idea provided by @Ross Rogers and used a stream-based approach.

    I used the stream!() macro from the async_stream crate to create a stream. I then declared a buffer vector called chunk, whose length sets how big each chunk should be.

    Finally, I loop over the file, reading it into the buffer one chunk at a time, and yield each chunk to build the stream.

    Here is my code:

    #[get("/api/v1/download/file.zip")]
    pub async fn download_file_zip(req: HttpRequest, credential: BearerAuth) -> impl Responder {
    
       let file_path = "path/to/file.zip";
    
       debug!("File path was set to {}", file_path);
    
     if let Ok(mut file) = NamedFile::open(file_path.clone()) {
    
         debug!("File was opened Successfully");
    
          let my_data_stream = stream! {
    
            let mut chunk = vec![0u8; 10 * 1024 *1024]; // I decalare the chunk size here as 10 mb 
    
            loop {
    
                match file.read(&mut chunk) {
    
                    Ok(n) => {
                        if n == 0 {
                            break;
                        }
    
                        info!("Read {} bytes from file", n);
    
                        yield Result::<Bytes, std::io::Error>::Ok(Bytes::from(chunk[..n].to_vec())); // Yielding the chunk here
    
                    }
    
                    Err(e) => {
    
                        error!("Error reading file: {}", e);
                        yield Result::<Bytes, std::io::Error>::Err(e);
                        break;
                    }
                }
            }
        };
    
        debug!("Sending response...");
        HttpResponse::Ok()
            .content_type("application/octet-stream")
            .streaming(my_data_stream)  // Streaming my response here
    } else {
        HttpResponse::NotFound().finish()
      }
    }
    
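    For completeness, here is a minimal sketch (not from the original post) of how a handler like this could be registered and served. The bind address and port are placeholders, and the actix-web-httpauth crate is assumed as a dependency because of the BearerAuth extractor:

    use actix_web::{App, HttpServer};

    #[actix_web::main]
    async fn main() -> std::io::Result<()> {
        // Register the streaming download handler and start the server.
        HttpServer::new(|| App::new().service(download_file_zip))
            .bind(("0.0.0.0", 8080))?
            .run()
            .await
    }
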

    I am accepting this as the solution for the time being, until I find a better way to implement it.
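
    One likely refinement, sketched below under the assumption that the tokio-util crate (with its io feature) is available: the stream! loop above performs blocking std::io::Read calls on an async worker thread, whereas tokio::fs::File combined with tokio_util::io::ReaderStream achieves the same chunked transfer without blocking. The handler name, file path, and 64 KiB buffer size are illustrative choices, not part of the original solution:

    use actix_web::http::header::{ContentDisposition, DispositionParam, DispositionType};
    use actix_web::{get, HttpResponse, Responder};
    use tokio_util::io::ReaderStream;

    #[get("/api/v1/download/file.zip")]
    pub async fn download_file_zip_async() -> impl Responder {
        match tokio::fs::File::open("path/to/file.zip").await {
            Ok(file) => {
                // ReaderStream reads the file in fixed-size chunks on the
                // async runtime, so the 3.5 GB file is never held in memory
                // all at once.
                let stream = ReaderStream::with_capacity(file, 64 * 1024);
                HttpResponse::Ok()
                    .content_type("application/zip")
                    // Tell the browser to save the response as a download.
                    .insert_header(ContentDisposition {
                        disposition: DispositionType::Attachment,
                        parameters: vec![DispositionParam::Filename("file.zip".to_owned())],
                    })
                    .streaming(stream)
            }
            Err(_) => HttpResponse::NotFound().finish(),
        }
    }

    This swaps the hand-rolled stream! loop for ReaderStream; the response still goes out chunk by chunk, just without tying up a worker thread on blocking reads.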

    Thank you again! @Ross Rogers