So I'm trying to create a Java SFTP server which acts as a front end for Amazon S3 buckets: you connect via SFTP and manage the objects in S3 buckets as if they were files on the SFTP server.
I've used Apache MINA SSHD (v1.2.0) as the SFTP server, which works fine using an SftpSubsystemFactory and the default FileSystemFactory (which serves the local filesystem). I've chosen Amazon-S3-FileSystem-NIO2 (v1.3.0) as the FileSystem, which uses the AWS SDK for Java and seems to be the best option out there:
import java.io.IOException;
import java.net.URI;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.util.HashMap;
import org.apache.sshd.common.file.FileSystemFactory;
import org.apache.sshd.common.session.Session;

public class S3FileSystemFactory implements FileSystemFactory {

    private final URI uri;

    public S3FileSystemFactory(URI uri) {
        this.uri = uri;
    }

    @Override
    public FileSystem createFileSystem(Session session) throws IOException {
        ClassLoader classLoader = Thread.currentThread().getContextClassLoader();
        return FileSystems.newFileSystem(uri, new HashMap<String, Object>(), classLoader);
    }
}
I'm just setting this as the FileSystemFactory for MINA:
SshServer sshd = SshServer.setUpDefaultServer();
sshd.setKeyPairProvider(buildHostKeyProviderFromFile(hostKeyType));
sshd.setPasswordAuthenticator(createPasswordAuthenticator());
sshd.setPublickeyAuthenticator(AcceptAllPublickeyAuthenticator.INSTANCE);
sshd.setSubsystemFactories(createSubsystemFactories());

// Use the S3-backed filesystem instead of the default local one.
URI uri = URI.create("s3:///s3.amazonaws.com/my_bucket");
FileSystemFactory s3FileSystemFactory = new S3FileSystemFactory(uri);
sshd.setFileSystemFactory(s3FileSystemFactory);
I can connect to this server with FileZilla or from the command line, but it automatically connects to the ImageTransfer bucket (not my_bucket). I can navigate to other buckets, and even into directories within them, but I can't display their contents; everything just looks like an empty directory.
This should be possible via the FileSystem I'm using, since I can list directory contents with it directly, like so:
Path p = s3FileSystem.getPath("/my_bucket");
StringBuilder contents = new StringBuilder();
try (DirectoryStream<Path> stream = Files.newDirectoryStream(p, "*")) {
    for (Path file : stream) {
        contents.append("\n \t - ").append(file.getFileName());
    }
} catch (IOException | DirectoryIteratorException x) {
    x.printStackTrace(); // don't swallow errors silently
}
I've been looking through the s3fs, MINA and AWS code (as the documentation is very limited) but can't pinpoint the source of this problem. Can anyone shed any light on what I'm doing wrong?
Logging
With logging switched on for all the libraries, there is only one issue I can see:
"HEAD
application/x-www-form-urlencoded; charset=utf-8
Fri, 20 May 2016 09:58:07 GMT
/MYURL/."
2016-05-20 10:58:07.240 DEBUG 13323 --- [system-thread-1] c.a.http.impl.client.SdkHttpClient : Stale connection check
2016-05-20 10:58:07.243 DEBUG 13323 --- [system-thread-1] c.a.http.impl.client.SdkHttpClient : Attempt 1 to execute request
2016-05-20 10:58:07.434 DEBUG 13323 --- [system-thread-1] c.a.http.impl.client.SdkHttpClient : Connection can be kept alive indefinitely
2016-05-20 10:58:07.435 DEBUG 13323 --- [system-thread-1] com.amazonaws.request : Received error response: com.amazonaws.services.s3.model.AmazonS3Exception: Not Found (Service: null; Status Code: 404; Error Code: 404 Not Found; Request ID: MYREQID), S3 Extended Request ID: MYEXTREQID
The problem is twofold.
Firstly, Apache MINA SSHD requires the permissions attribute to be present for any FileSystem it uses, which is strange as that is a POSIX-specific attribute, so naturally Amazon-S3-FileSystem-NIO2 doesn't provide it.
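For what it's worth, a rough sketch of how that attribute can be faked. This is my own workaround, not anything the library offers: the subclass name is made up, and it assumes S3FileSystemProvider's readAttributes(Path, String, LinkOption...) can be subclassed and will answer everything except the permissions entry (if it rejects the posix view outright, the map would have to be built from the basic attributes instead):

import java.io.IOException;
import java.nio.file.LinkOption;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.util.EnumSet;
import java.util.HashMap;
import java.util.Map;
import com.upplication.s3fs.S3FileSystemProvider;

// Hypothetical subclass: fakes the POSIX "permissions" attribute that MINA asks for.
public class PosixFakingS3FileSystemProvider extends S3FileSystemProvider {

    @Override
    public Map<String, Object> readAttributes(Path path, String attributes, LinkOption... options)
            throws IOException {
        // Let s3fs resolve whatever it can (size, lastModifiedTime, isDirectory, ...).
        Map<String, Object> attrs = new HashMap<>(super.readAttributes(path, attributes, options));
        // S3 objects have no POSIX permission model, so hard-code owner read/write.
        attrs.put("permissions",
                EnumSet.of(PosixFilePermission.OWNER_READ, PosixFilePermission.OWNER_WRITE));
        return attrs;
    }
}

Getting this subclass picked up instead of the stock provider is a separate wiring question; the simplest route is probably to build the FileSystem straight from an instance of it (FileSystemProvider.newFileSystem(URI, Map) is public) rather than going through FileSystems.newFileSystem.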
Secondly, Apache MINA SSHD calls (and requires) a method in S3FileSystemProvider that gets a FileChannel, and that method is unimplemented:

public FileChannel newFileChannel(Path path,
                                  Set<? extends OpenOption> options,
                                  FileAttribute<?>... attrs)
A hacky solution is just to hard-code POSIX read/write permissions into the S3 attributes that are returned (as in the sketch above) and to create an S3FileChannel that simply delegates to the existing S3SeekableByteChannel.
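Roughly what I mean, as a sketch rather than finished code: the missing provider method can hand back a wrapper around the SeekableByteChannel the library already creates. This override would live in the provider subclass sketched above (and needs imports for FileChannel, Set, OpenOption and FileAttribute there):

@Override
public FileChannel newFileChannel(Path path, Set<? extends OpenOption> options, FileAttribute<?>... attrs)
        throws IOException {
    // Reuse the library's own byte channel and expose it through the FileChannel API.
    return new S3FileChannel(newByteChannel(path, options, attrs));
}

The S3FileChannel itself is just delegation; the only mildly interesting parts are the positioned read and write, which I emulate by seeking the delegate (the transfer/map/lock operations are left unsupported here and could be fleshed out if they turn out to be needed):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.ReadableByteChannel;
import java.nio.channels.SeekableByteChannel;
import java.nio.channels.WritableByteChannel;

// Hypothetical wrapper: exposes a SeekableByteChannel through the FileChannel API.
public class S3FileChannel extends FileChannel {

    private final SeekableByteChannel delegate;

    public S3FileChannel(SeekableByteChannel delegate) {
        this.delegate = delegate;
    }

    @Override public int read(ByteBuffer dst) throws IOException { return delegate.read(dst); }
    @Override public int write(ByteBuffer src) throws IOException { return delegate.write(src); }

    @Override public long read(ByteBuffer[] dsts, int offset, int length) throws IOException {
        long total = 0;
        for (int i = offset; i < offset + length; i++) {
            int n = delegate.read(dsts[i]);
            if (n < 0) { return total == 0 ? -1 : total; }
            total += n;
        }
        return total;
    }

    @Override public long write(ByteBuffer[] srcs, int offset, int length) throws IOException {
        long total = 0;
        for (int i = offset; i < offset + length; i++) {
            total += delegate.write(srcs[i]);
        }
        return total;
    }

    // Positioned I/O: seek the delegate, do the operation, then restore the old position.
    @Override public int read(ByteBuffer dst, long position) throws IOException {
        long previous = delegate.position();
        delegate.position(position);
        try { return delegate.read(dst); } finally { delegate.position(previous); }
    }

    @Override public int write(ByteBuffer src, long position) throws IOException {
        long previous = delegate.position();
        delegate.position(position);
        try { return delegate.write(src); } finally { delegate.position(previous); }
    }

    @Override public long position() throws IOException { return delegate.position(); }
    @Override public FileChannel position(long newPosition) throws IOException { delegate.position(newPosition); return this; }
    @Override public long size() throws IOException { return delegate.size(); }
    @Override public FileChannel truncate(long size) throws IOException { delegate.truncate(size); return this; }
    @Override public void force(boolean metaData) { /* nothing to do in this sketch */ }

    // Left unsupported for now.
    @Override public long transferTo(long position, long count, WritableByteChannel target) { throw new UnsupportedOperationException(); }
    @Override public long transferFrom(ReadableByteChannel src, long position, long count) { throw new UnsupportedOperationException(); }
    @Override public MappedByteBuffer map(MapMode mode, long position, long size) { throw new UnsupportedOperationException(); }
    @Override public FileLock lock(long position, long size, boolean shared) { throw new UnsupportedOperationException(); }
    @Override public FileLock tryLock(long position, long size, boolean shared) { throw new UnsupportedOperationException(); }

    @Override protected void implCloseChannel() throws IOException { delegate.close(); }
}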
That's the best solution I can come up with for now.