I'm building a Docker image with a log directory that must be mounted as an S3 folder. I'm using s3fs for this; however, I cannot get it to run, and I constantly get this error:
s3fs: invalid option -- 'j'
This is an example of my Dockerfile:
FROM openjdk:11
RUN mkdir /home/app/
WORKDIR /home/app/
RUN mkdir logs
COPY ./target/MY_JAVA_APP.jar .
ENV AWS_ACCESS_KEY_ID=MY_KEY_ID
ENV AWS_SECRET_ACCESS_KEY=MY_SECRET_ACCESS_KEY
ENV AWS_REGION=MY_REGION
RUN apt-get update && apt-get install -y s3fs
ARG S3_MOUNT_DIRECTORY=/home/app/logs
ENV S3_MOUNT_DIRECTORY=$S3_MOUNT_DIRECTORY
ARG S3_BUCKET_NAME=MY_BUCKET
ENV S3_BUCKET_NAME=$S3_BUCKET_NAME
RUN echo $AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY > /root/.passwd-s3fs && \
chmod 600 /root/.passwd-s3fs
EXPOSE 8080
CMD ["java","-jar","MY_JAVA_APP.jar"]
ENTRYPOINT ["s3fs", "MY_BUCKET:/logs", "/home/app/logs", "-o", "dbglevel=info", "-f", "-o", "curldbg"]
I don't have a j anywhere, and I'm stuck. I even asked on GitHub without getting an answer.
P.S.: Here is my solution, thanks to Charles Duffy's hints.
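As it turns out, the root cause is the combination of an exec-form CMD with an exec-form ENTRYPOINT: Docker appends the CMD array to the ENTRYPOINT array, so the container effectively runs something like this:

s3fs MY_BUCKET:/logs /home/app/logs -o dbglevel=info -f -o curldbg java -jar MY_JAVA_APP.jar

s3fs then tries to parse -jar as the short options -j, -a and -r, and since -j is not a valid s3fs option, it fails with invalid option -- 'j'. The fix is to start both s3fs and the Java application from a single start.sh script, which is copied into the image and used as the entrypoint: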
#!/bin/sh
# REMEMBER: take care that this file uses Unix line endings (LF) if you edit it on Windows or macOS.
# The & is important: it runs s3fs in the background so the script can go on to start the Java app.
s3fs MY_BUCKET:/logs /home/app/logs &
java -jar MY_JAVA_APP.jar
COPY start.sh .
# The 777 permissions should be tightened later; I just needed to test whether it worked.
RUN chmod 777 /home/app/start.sh
ENTRYPOINT ["sh", "start.sh"]