Tags: openstack, openstack-swift

OpenStack Swift proxy-server malformed request on S3


I'm trying to set up an S3 test bench for developers using DevStack (with the stable/newton branch set in the DevStack configuration file, local.conf). While I'm able to browse containers and objects using the CLI (openstack container / object, swift), I can't access containers using s3curl. In the logs (the full log is available at the link below) I see two different URLs at the final stage of request processing:
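
For reference, the request I issue from the remote host looks roughly like this (a sketch: --id refers to an entry in ~/.s3curl shown further below, and 8080 is DevStack's usual Swift proxy port, so adjust to your setup):

./s3curl.pl --id admin -- -s http://de.vs.ta.ck:8080/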

------------- "openstack container list" command, issued locally

proxy-server: Using identity: {'service_roles': [], 'roles': [u'admin'],
'project_domain': (u'default', u'Default'), 'auth_version': 3,
'user': (u'eac0298a83e44b12b2c08aa98e9b1c9a', u'admin'),
'user_domain': (u'default', u'Default'),
'tenant': (u'2d7365b17c8147e9aead99f870125d31', u'admin')}
(txn: txda7984e9e1f04b7792920-005811ca49)
[ ... ]
proxy-server: de.vs.ta.ck de.vs.ta.ck 27/Oct/2016/09/35/05 GET
/v1/AUTH_2d7365b17c8147e9aead99f870125d31%3Fformat%3Djson HTTP/1.0
200 - osc-lib keystoneauth1/2.14.0 python-requests/2.11.1 CPython/2.7.12
a5ef5769d7ef... - 42 - txda7984e9e1f04b7792920-005811ca49 - 0.0881
- - 1477560905.352745056 1477560905.440839052 -

You can see the correct URL in the request above.

------------- S3 session using s3curl from re.mo.te.host

proxy-server: Using identity: {'service_roles': [], 'roles': [u'admin'],
'project_domain': (u'default', u'Default'), 'auth_version': 3,
'user': (u'eac0298a83e44b12b2c08aa98e9b1c9a', u'admin'),
'user_domain': (u'default', u'Default'),
'tenant': (u'2d7365b17c8147e9aead99f870125d31', u'admin')}
(txn: tx61f057911f3e475eb1962-005811c95a)
[ ... ]
proxy-server: re.mo.te.host re.mo.te.host 27/Oct/2016/09/31/07 GET / HTTP/1.0
200 - curl/7.43.0 - - 219 - tx61f057911f3e475eb1962-005811c95a - 0.2074
- - 1477560666.966339111 1477560667.173743010 -

The URL above is malformed and, of course, returns nothing. It seems something may be wrong with the proxy-server: given the same identity information, it produces different request URLs for different kinds of access (Swift client access vs. remote S3 access).

For S3 access I created EC2 credentials:

/opt# openstack credential create --type ec2 --project admin admin '{"access" : "admin", "secret" : "adm1n0"}'
+------------+------------------------------------------------------------------+
| Field      | Value                                                            |
+------------+------------------------------------------------------------------+
| blob       | {"access" : "admin", "secret" : "adm1n0"}                        |
| id         | 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918 |
| project_id | 2d7365b17c8147e9aead99f870125d31                                 |
| type       | ec2                                                              |
| user_id    | eac0298a83e44b12b2c08aa98e9b1c9a                                 |
+------------+------------------------------------------------------------------+
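
For completeness, a sketch of the matching ~/.s3curl entry that s3curl reads (Perl syntax; the id/key values mirror the access/secret pair from the blob above):

%awsSecretAccessKeys = (
    admin => {
        id  => 'admin',
        key => 'adm1n0',
    },
);

Depending on the s3curl version, the Swift host may also need to be added to the @endpoints list at the top of s3curl.pl so that the request signature is computed against the right endpoint.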

and, of course, containers and objects have already been created for admin/admin:

/opt# openstack object list c0
+----------+
| Name     |
+----------+
| list.txt |
+----------+

So, actually, the Keystone / Swift / swift3 integration is OK, and the problem is specific to S3 access to the object storage. Any ideas on what is wrong and how to proceed? The full proxy-server log, as well as proxy-server.conf, are available at the following link:

https://drive.google.com/drive/folders/0Bw0rWy6Euivqdi1lT3pnUElHUmc?usp=sharing

Thank you!


Solution

  • To enable logging of full URLs, set "force_swift_request_proxy_log = true" in proxy-server.conf (https://github.com/openstack/swift3/blob/1.11/etc/proxy-server.conf-sample#L110-L118).
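
    A minimal sketch of the relevant proxy-server.conf section, assuming swift3 is plugged into the proxy pipeline as in the sample config linked above:

    [filter:swift3]
    use = egg:swift3#swift3
    force_swift_request_proxy_log = true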

    In my case, the problem was incorrect container naming: I used names that were too short (e.g. c0). By default, swift3 enforces DNS-compliant bucket names (the stricter rules that apply outside US East), so the name "c0" is invalid simply because it is too short. From the S3 docs (http://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html#bucketnamingrules):

    The rules for DNS-compliant bucket names are:

    • Bucket names must be at least 3 and no more than 63 characters long.
    • ...

    Try either setting the dns_compliant_bucket_names option to False or using longer (at least 3 characters) container names; both options are sketched below.
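
    A sketch of the two alternatives (the option name comes from the swift3 sample config; the container name below is just an example):

    [filter:swift3]
    use = egg:swift3#swift3
    dns_compliant_bucket_names = False

    or, keeping the default and using a DNS-compliant name instead:

    /opt# openstack container create container01
    /opt# ./s3curl.pl --id admin -- -s http://de.vs.ta.ck:8080/container01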

    Thanks to the Swift team for help on this issue.