Tags: docker, weblogic, nodemanager

Start Node Manager in WebLogic (Docker) using a script


I tried to dockerize a WebLogic server. Now I am facing an issue with starting the Node Manager after the server is started inside the Docker container. My Dockerfile is as follows.

FROM oracle/weblogic:12.1.3-generic

ENV JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.nodemanager.SecureListener=false" \
ADMIN_PORT="7001" \
ADMIN_HOST="localhost"

USER oracle
COPY dockerfiles/keyStore/keystore_ss.jks /u01/oracle/keystore/
COPY dockerfiles/patch/* /u01/oracle/patch/
COPY dockerfiles/local_domainScripts /u01/oracle/local_domainScripts/
COPY dockerfiles/scripts/* /u01/oracle/
COPY dockerfiles/applicationFiles/ /u01/oracle/applicationFiles/

USER root
RUN yum install -y procps
RUN chmod +x startWeblogic.sh

USER oracle

RUN /u01/oracle/wlst /u01/oracle/local_domainScripts/config.py

RUN nohup bash -c "/u01/oracle/user_projects/domains/local_domain/bin/startNodeManager.sh &" && sleep 4

CMD ["/u01/oracle/user_projects/domains/local_domain/startWebLogic.sh"]

This creates the WebLogic server instance. I want to start the Node Manager after this server has started.

Run command:

docker run -d --name wls_local_domain --network=host --hostname localhost -p 7001:7001 test-docker:0-SNAPSHOT

When ./startNodeManager.sh is executed inside the container, it starts the Node Manager. To start the Node Manager, the WebLogic server needs to be started first.

I want to do this using a bash script. I tried this one, but it didn't help: github link


Solution

  • You can't (usefully) RUN a background process. That Dockerfile command launches an intermediate container executing the RUN command, saves its filesystem, and exits; there is no process running any more by the time the next Dockerfile command executes.

    If this is a commercially maintained image, you might look into whether Oracle has instructions on how to use it. (From clicking around, none of the samples there start a Node Manager; is it necessary?)

    Best practice is generally to run only one server in a Docker container (and ideally in the foreground and as the container's main process). If that will work and there aren't shared filesystem dependencies, you can split all of this except the final CMD into one base Dockerfile, then have two additional Dockerfiles that just have a FROM line pointing at your mostly-built image and their respective CMDs (see the split sketch at the end of this answer).

    If that really won't work, then you'll have to fall back to running some init system in your container, typically supervisord (a sketch of that follows the Dockerfile split below).
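
    As a rough sketch of that split (the image tags wls-local-base, wls-local-admin and wls-local-nm, and the file names Dockerfile.base, Dockerfile.admin and Dockerfile.nodemanager, are made up for the example; the two CMD paths are the ones from your Dockerfile, and whether the Node Manager can run usefully in its own container depends on your domain setup):

        # Dockerfile.base -- everything from the current Dockerfile except the
        # "RUN nohup ... startNodeManager.sh" line and the final CMD
        FROM oracle/weblogic:12.1.3-generic
        # ... ENV, COPY and RUN steps from the original Dockerfile go here ...

        # Dockerfile.admin -- admin server image, built from the base image
        FROM wls-local-base
        CMD ["/u01/oracle/user_projects/domains/local_domain/startWebLogic.sh"]

        # Dockerfile.nodemanager -- Node Manager image, built from the same base
        FROM wls-local-base
        CMD ["/u01/oracle/user_projects/domains/local_domain/bin/startNodeManager.sh"]

    Built and run along the lines of:

        docker build -t wls-local-base  -f Dockerfile.base .
        docker build -t wls-local-admin -f Dockerfile.admin .
        docker build -t wls-local-nm    -f Dockerfile.nodemanager .
        docker run -d --name wls_admin -p 7001:7001 wls-local-admin
        docker run -d --name wls_nm wls-local-nm

    The two containers still need to reach each other (a shared Docker network and matching listen addresses), which this sketch doesn't cover.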
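
    If you do end up with the supervisord fallback, a minimal sketch might look like the following. The config location /etc/supervisord.conf, the COPY source path and the install method are assumptions; the two command lines are the scripts from your Dockerfile:

        ; supervisord.conf -- keep supervisord in the foreground and let it
        ; supervise both WebLogic processes
        [supervisord]
        nodaemon=true

        [program:nodemanager]
        command=/u01/oracle/user_projects/domains/local_domain/bin/startNodeManager.sh
        autorestart=true

        [program:adminserver]
        command=/u01/oracle/user_projects/domains/local_domain/startWebLogic.sh
        autorestart=true

    with the Dockerfile changed so supervisord is installed and becomes the container's main process:

        USER root
        # One possible way to install supervisor on this base image; adjust to
        # whatever install method actually works for you (yum/EPEL, pip, ...).
        RUN yum install -y python-setuptools && easy_install supervisor
        COPY dockerfiles/supervisord.conf /etc/supervisord.conf
        USER oracle
        CMD ["supervisord", "-c", "/etc/supervisord.conf"]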