I'm trying to enforce a disk size limit for a Docker container. Specifically, I have a container that downloads data, and I want that data to stay under a limit I can set beforehand.
So far, what I've built works on the surface level (it prevents the file from actually being saved to the machine). However, when I watch the container do its work, I can see the download run to 100% before it reports 'Download failed.' So it seems like it downloads to a temporary location first and only checks the file's size before moving it to the final destination (or something along those lines).
That doesn't fully solve the problem I was trying to fix, because the download itself still consumes a lot of resources. I'm not sure what exactly I'm missing here.
This is what creates the above behavior:
sudo zfs create new-pool/zfsvol1
sudo zfs set quota=1G new-pool/zfsvol1
docker run -e "TASK=download" -e "AZURE_SAS_TOKEN= ... " -v /newpool/zfsvol1:/data containerName azureFileToDownload
I got the same behavior when running the container interactively without any volumes and downloading straight into the container's filesystem. I also tried switching the storage driver reported by docker info from overlay to zfs, and that didn't help. I looked into Docker volume plugins, but they didn't seem like they would solve this either.
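For reference, the active storage driver can be checked directly with the standard docker CLI:

docker info --format '{{.Driver}}'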
This is all running inside an Ubuntu VM; I created the ZFS pool just to test this. I'm fairly sure this isn't the intended behavior, since it wouldn't be very useful. Does anyone have an idea why it's happening?
OK, so I actually figured out what was going on, and as @hmm suggested, the problem wasn't Docker. The data was being buffered in memory before it was ever written to disk, and that was the issue. It seems azcopy (Azure's copy tool) downloads into memory first and only then saves to disk, which isn't great, but there's nothing I can do about that in this case. The approach itself appears to work as intended.
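If the memory spike from that buffering is still a concern, one mitigation (a sketch, not part of the original setup) may be to cap the container's memory with docker's --memory / --memory-swap flags so the buffer can't grow unbounded; azcopy v10 also appears to honor an AZCOPY_BUFFER_GB environment variable to limit its internal buffer, though that's an assumption worth verifying against the azcopy docs for your version:

# Cap container RAM and swap so the download buffer is bounded
# AZCOPY_BUFFER_GB is assumed from azcopy v10 docs; verify before relying on it
docker run -e "TASK=download" -e "AZURE_SAS_TOKEN= ... " \
  -e "AZCOPY_BUFFER_GB=0.5" \
  --memory=1g --memory-swap=1g \
  -v /new-pool/zfsvol1:/data containerName azureFileToDownload

Note that a hard memory cap means the download process can be OOM-killed instead of failing gracefully, so the limit needs some headroom above whatever azcopy actually needs.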