I am trying to ship logs produced by an application running under docker-compose to my S3 bucket using fluentd, but I get the following error:
fluentd | 2024-07-04 09:01:48 +0000 [error]: #0 /usr/lib/ruby/gems/2.7.0/gems/fluent-plugin-s3-1.3.0/lib/fluent/plugin/out_s3.rb:456:in `rescue in check_apikeys'
fluentd | 2024-07-04 09:01:48 +0000 [error]: #0 /usr/lib/ruby/gems/2.7.0/gems/fluent-plugin-s3-1.3.0/lib/fluent/plugin/out_s3.rb:451:in `check_apikeys'
fluentd | 2024-07-04 09:01:48 +0000 [error]: #0 /usr/lib/ruby/gems/2.7.0/gems/fluent-plugin-s3-1.3.0/lib/fluent/plugin/out_s3.rb:240:in `start'
fluentd | 2024-07-04 09:01:48 +0000 [error]: #0 /usr/lib/ruby/gems/2.7.0/gems/fluentd-1.14.6/lib/fluent/root_agent.rb:203:in `block in start'
fluentd | 2024-07-04 09:01:48 +0000 [error]: #0 /usr/lib/ruby/gems/2.7.0/gems/fluentd-1.14.6/lib/fluent/root_agent.rb:192:in `block (2 levels) in lifecycle'
fluentd | 2024-07-04 09:01:48 +0000 [error]: #0 /usr/lib/ruby/gems/2.7.0/gems/fluentd-1.14.6/lib/fluent/root_agent.rb:191:in `each'
fluentd | 2024-07-04 09:01:48 +0000 [error]: #0 /usr/lib/ruby/gems/2.7.0/gems/fluentd-1.14.6/lib/fluent/root_agent.rb:191:in `block in lifecycle'
fluentd | 2024-07-04 09:01:48 +0000 [error]: #0 /usr/lib/ruby/gems/2.7.0/gems/fluentd-1.14.6/lib/fluent/root_agent.rb:178:in `each'
fluentd | 2024-07-04 09:01:48 +0000 [error]: #0 /usr/lib/ruby/gems/2.7.0/gems/fluentd-1.14.6/lib/fluent/root_agent.rb:178:in `lifecycle'
fluentd | 2024-07-04 09:01:48 +0000 [error]: #0 /usr/lib/ruby/gems/2.7.0/gems/fluentd-1.14.6/lib/fluent/root_agent.rb:202:in `start'
fluentd | 2024-07-04 09:01:48 +0000 [error]: #0 /usr/lib/ruby/gems/2.7.0/gems/fluentd-1.14.6/lib/fluent/engine.rb:248:in `start'
fluentd | 2024-07-04 09:01:48 +0000 [error]: #0 /usr/lib/ruby/gems/2.7.0/gems/fluentd-1.14.6/lib/fluent/engine.rb:147:in `run'
fluentd | 2024-07-04 09:01:48 +0000 [error]: #0 /usr/lib/ruby/gems/2.7.0/gems/fluentd-1.14.6/lib/fluent/supervisor.rb:720:in `block in run_worker'
fluentd | 2024-07-04 09:01:48 +0000 [error]: #0 /usr/lib/ruby/gems/2.7.0/gems/fluentd-1.14.6/lib/fluent/supervisor.rb:971:in `main_process'
fluentd | 2024-07-04 09:01:48 +0000 [error]: #0 /usr/lib/ruby/gems/2.7.0/gems/fluentd-1.14.6/lib/fluent/supervisor.rb:711:in `run_worker'
fluentd | 2024-07-04 09:01:48 +0000 [error]: #0 /usr/lib/ruby/gems/2.7.0/gems/fluentd-1.14.6/lib/fluent/command/fluentd.rb:376:in `<top (required)>'
fluentd | 2024-07-04 09:01:48 +0000 [error]: #0 /usr/lib/ruby/2.7.0/rubygems/core_ext/kernel_require.rb:83:in `require'
fluentd | 2024-07-04 09:01:48 +0000 [error]: #0 /usr/lib/ruby/2.7.0/rubygems/core_ext/kernel_require.rb:83:in `require'
fluentd | 2024-07-04 09:01:48 +0000 [error]: #0 /usr/lib/ruby/gems/2.7.0/gems/fluentd-1.14.6/bin/fluentd:15:in `<top (required)>'
fluentd | 2024-07-04 09:01:48 +0000 [error]: #0 /usr/bin/fluentd:23:in `load'
fluentd | 2024-07-04 09:01:48 +0000 [error]: #0 /usr/bin/fluentd:23:in `<main>'
fluentd | 2024-07-04 09:01:48 +0000 [error]: #0 unexpected error error_class=RuntimeError error="can't call S3 API. Please check your credentials or s3_region configuration. error = #<Aws::S3::Errors::AccessDenied: Access Denied>"
fluentd | 2024-07-04 09:01:48 +0000 [error]: #0 suppressed same stacktrace
fluentd | 2024-07-04 09:01:48 +0000 [error]: Worker 0 finished unexpectedly with status 1
To write the configuration for the S3 output I followed the GitHub guide of the fluent-plugin-s3 plugin:
<match pattern>
  @type s3

  aws_key_id x
  aws_sec_key x
  s3_bucket x
  s3_region x

  path logs/${tag}/%Y/%m/%d/
  s3_object_key_format %{path}%{time_slice}_%{index}.%{file_extension}

  # if you want to use ${tag} or %Y/%m/%d/ like syntax in path / s3_object_key_format,
  # need to specify tag for ${tag} and time for %Y/%m/%d in <buffer> argument.
  <buffer tag,time>
    @type file
    path /var/log/fluent/s3
    timekey 3600 # 1 hour partition
    timekey_wait 10m
    timekey_use_utc true # use utc
  </buffer>

  <format>
    @type json
  </format>
</match>
I double-checked the spelling of my bucket name, of the region and of the keys; everything is correct. What am I missing?
Solved: the fluentd S3 plugin requires a set of permissions besides the simple s3:PutObject: it needs s3:PutObject, s3:GetObject, s3:ListBucket and s3:ListBucketVersions.
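For reference, here is a minimal sketch of an IAM policy granting those permissions; the AccessDenied above is raised at startup by the plugin's credential check (check_apikeys in the stack trace), which appears to probe the bucket itself rather than just write an object. The bucket name my-log-bucket is a placeholder for your own bucket: the bucket-level actions (ListBucket, ListBucketVersions) must be allowed on the bucket ARN, while the object-level actions (PutObject, GetObject) apply to the keys under it.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "FluentdBucketLevel",
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:ListBucketVersions"],
      "Resource": "arn:aws:s3:::my-log-bucket"
    },
    {
      "Sid": "FluentdObjectLevel",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::my-log-bucket/*"
    }
  ]
}

After attaching a policy like this to the IAM user or role whose keys are in aws_key_id / aws_sec_key, restart the fluentd container and the startup check should pass.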