I'm writing a dockerized Java 9 Spring application using Apache Storm 1.1.2, Apache Kafka 0.10, Zookeeper, and Docker Compose.
My topology worked end-to-end on a local cluster inside my dockerized service, but now that I'm moving it to a production cluster there is an issue.
My service that creates and submits topologies to the Storm cluster seems to be working fine, and the code looks roughly like this inside a PostConstruct:
KafkaSpoutConfig<String, String> spoutConf =
        KafkaSpoutConfig.builder("kafka:9092", "topic")
                .setProp(ConsumerConfig.GROUP_ID_CONFIG, "my-group-id")
                .setProp(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, MyDeserializer.class)
                .build();

TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("kafkaSpoutId", new KafkaSpout<>(spoutConf));
builder.setBolt("boltId", new MyBolt()).shuffleGrouping("kafkaSpoutId");

Config conf = new Config();
conf.setNumWorkers(2);
conf.put(Config.STORM_ZOOKEEPER_SERVERS, List.of("zookeeper"));
conf.put(Config.STORM_ZOOKEEPER_PORT, 2181);
conf.put(Config.NIMBUS_SEEDS, List.of("nimbus"));
conf.put(Config.NIMBUS_THRIFT_PORT, 6627);

System.setProperty("storm.jar", "/opt/app.jar");
StormSubmitter.submitTopology("topology-id", conf, builder.createTopology());
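One preflight check that can save debugging time here: StormSubmitter uploads whatever file the storm.jar system property points to, so if that path is wrong inside the container, the submission fails in a confusing way much later. A minimal sketch of such a check (the class and message wording are mine, not from the original service):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class StormJarCheck {

    /** Resolves the storm.jar system property, failing fast if it is unset or missing on disk. */
    static Path requireStormJar() {
        String jar = System.getProperty("storm.jar");
        if (jar == null) {
            throw new IllegalStateException("storm.jar system property is not set");
        }
        Path path = Paths.get(jar);
        if (!Files.isRegularFile(path)) {
            throw new IllegalStateException("storm.jar does not point at a file: " + path);
        }
        return path;
    }

    public static void main(String[] args) {
        System.setProperty("storm.jar", "/opt/app.jar"); // same value the PostConstruct sets
        try {
            System.out.println("will submit " + requireStormJar());
        } catch (IllegalStateException e) {
            System.out.println("refusing to submit: " + e.getMessage());
        }
    }
}
```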
And my Docker Compose file looks like this:
version: "2.1"
services:
  my-service:
    image: my-service
    mem_limit: 4G
    memswap_limit: 4G
    networks:
      - default
    environment:
      - SPRING_PROFILES_ACTIVE=local
  nimbus:
    image: storm:1.1.2
    container_name: nimbus
    command: >
      storm nimbus
      -c storm.zookeeper.servers="[\"zookeeper\"]"
      -c nimbus.seeds="[\"nimbus\"]"
    networks:
      - default
    ports:
      - 6627:6627
  supervisor:
    image: storm:1.1.2
    container_name: supervisor
    command: >
      storm supervisor
      -c storm.zookeeper.servers="[\"zookeeper\"]"
      -c nimbus.seeds="[\"nimbus\"]"
    networks:
      - default
    depends_on:
      - nimbus
    links:
      - nimbus
    restart: always
    ports:
      - 6700
      - 6701
      - 6702
      - 6703
  ui:
    image: storm:1.1.2
    command: storm ui -c nimbus.seeds="[\"nimbus\"]"
    networks:
      - default
    ports:
      - 8081:8080
networks:
  default:
    external:
      name: myNetwork
All of the containers are up. In the UI I can see the topology created in the PostConstruct, but no Kafka messages are being processed, and the bolt, which should be using local Kafka producers to publish the aggregates, is not publishing anything.
In the supervisor container, at /logs/worker-artifact/topology-id****/6700/worker.log, I can see two exceptions repeated.
The first (and, I think, the more important) is ClassNotFoundException: org.apache.storm.kafka.spout.KafkaSpout,
and the second is org.apache.storm.shade.org.jboss.netty.channel.ChannelException: Failed to bind to: 0.0.0.0/0.0.0.0:6700.
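For context on that first exception: a ClassNotFoundException in the worker log means the class isn't on the worker JVM's classpath at all, which suggests the jar that was uploaded to Nimbus doesn't bundle storm-kafka-client. The failure mode itself is easy to reproduce in isolation (a sketch; the class name is the one from the worker log):

```java
public class MissingClassDemo {

    /** Returns true if the current classloader can load the named class. */
    static boolean isLoadable(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // On any classpath that lacks storm-kafka-client this prints false --
        // the same situation the worker JVM is in when it throws.
        System.out.println(isLoadable("org.apache.storm.kafka.spout.KafkaSpout"));
    }
}
```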
UPDATE
Unfortunately, I can't post my whole pom, but here are my Storm dependencies:
<dependency>
  <groupId>org.apache.storm</groupId>
  <artifactId>storm-core</artifactId>
  <version>${storm.version}</version>
</dependency>
<dependency>
  <groupId>org.apache.storm</groupId>
  <artifactId>storm-kafka-client</artifactId>
  <version>${storm.version}</version>
</dependency>
<dependency>
  <groupId>javax.servlet</groupId>
  <artifactId>javax.servlet-api</artifactId>
</dependency>
And here is my spring-boot-maven-plugin configuration. I thought that making the jar copied into my container non-executable would do the trick. When I examine the jar in the container, it appears to contain the entries of the included dependency jars, but with a ton of gibberish characters as well.
<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
      <configuration>
        <executable>false</executable>
      </configuration>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-resources-plugin</artifactId>
    </plugin>
  </plugins>
</build>
Here is most of my Dockerfile:
FROM ${docker.repository}/openjdk:9.0.1
EXPOSE 80 1099
WORKDIR /opt
ENTRYPOINT ["java", \
"-Dinfo.serviceName=${project.artifactId}", \
"-Dinfo.serviceVersion=${project.version}"]
CMD ["-jar", "app.jar"]
LABEL VERSION="${project.version}" DESCRIPTION="${project.description}"
COPY ${project.build.finalName}-exec.jar /opt/app.jar
I figured it out. The issue was that I was producing a non-executable jar, but not a real fat jar. I added the following plugins to my pom:
<plugin>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
  </configuration>
</plugin>
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <source>1.8</source>
    <target>1.8</target>
  </configuration>
</plugin>
Next, I changed the system property to
System.setProperty("storm.jar", "/opt/storm.jar");
and added the following line to my Dockerfile
COPY ${project.build.finalName}-jar-with-dependencies.jar /opt/storm.jar
Finally, the build sequence: I first run mvn compile assembly:single, then copy the jar-with-dependencies from target/ to target/docker-ready/ (from the project dir: cp target/project-jar-with-dependencies.jar target/docker-ready/), and then run mvn verify docker:build.