Tags: python, timeout, dask, python-s3fs

What is the correct way to set timeouts in s3fs.S3FileSystem?


I've tried various ways to set the read timeout on an s3fs.S3FileSystem object, such as

s3 = s3fs.S3FileSystem(s3_additional_kwargs={"read_timeout": 500}, config_kwargs={"read_timeout": 500})

or s3.read_timeout = 500. But none of these seems to control the timeout as expected. Does anyone know the correct way to set these kinds of parameters?

Thanks


Solution

  • This:

    S3FileSystem.read_timeout = 500
    

    works if done before the creation of any instance, since it sets the class-level default timeout applied to new instances.

    If you want to set it per instance, you need config_kwargs (which is passed to botocore's Config). It seems you already tried that, so it's worth following up with aiobotocore to check whether its Config proxy, AioConfig, supports the argument. Both approaches are sketched below.

    Note that there are other timeouts, such as connect_timeout and lower-level HTTP/socket timeouts, which you might be hitting instead.
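
    A minimal sketch of both approaches (the 500- and 60-second values are illustrative, and whether aiobotocore's AioConfig honours these options should be verified against your installed s3fs/aiobotocore versions):

    import s3fs

    # Option 1: class-level default. Set this before any S3FileSystem
    # instance is created, since it controls the default read timeout
    # applied to new instances.
    s3fs.S3FileSystem.read_timeout = 500  # seconds

    fs_default = s3fs.S3FileSystem()

    # Option 2: per instance, via config_kwargs, which is forwarded to
    # botocore's Config (AioConfig in aiobotocore). read_timeout and
    # connect_timeout are standard botocore Config options.
    fs_custom = s3fs.S3FileSystem(
        config_kwargs={"read_timeout": 500, "connect_timeout": 60}
    )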