I have a deployment in which one pod has started crash-looping and keeps failing over time.
```
NAME                                     READY   STATUS             RESTARTS        AGE
pod/picanagm-solution-5cb8887968-qk4pr   0/1     CrashLoopBackOff   140 (78s ago)   11h
pod/picanagm-solution-77f5fcfdc-kwd9w    1/1     Running            0               2d20h
```
Events of the failed pod:
```
Events:
  Type     Reason   Age                     From     Message
  ----     ------   ----                    ----     -------
  Warning  BackOff  2m42s (x3258 over 11h)  kubelet  Back-off restarting failed container
```
and its container logs:
```
Picked up JAVA_TOOL_OPTIONS: -Dlogging.config=/app/run/logback.xml -DcontentServer.factory-reset=folder -DcontentServer.factory-reset.folder-name=file:/app/bookmarks -DSameSite=none -Dconfiguration.sign-off.enabled=true -Ddata.extraction.templates.base.dir.path=${java.io.tmpdir} --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED
I> No access restrictor found, access to any MBean is allowed
Jolokia: Agent started with URL http://10.244.4.81:8778/jolokia/
Exception in thread "main" java.lang.UnsupportedClassVersionError: com/activeviam/MINDZ/starter/main/MINDZApplication has been compiled by a more recent version of the Java Runtime (class file version 65.0), this version of the Java Runtime only recognizes class file versions up to 61.0
    at java.base/java.lang.ClassLoader.defineClass1(Native Method)
    at java.base/java.lang.ClassLoader.defineClass(ClassLoader.java:1012)
```
One ReplicaSet is fine, the other never becomes ready:
```
replicaset.apps/picanagm-solution-5cb8887968   1   1   0   11h
```
Why? How can I debug this behaviour?
The answer to the question "Why did I get `CrashLoopBackOff` for one pod while the other works fine?" is:
The older pod working well does not mean there is no issue; it simply predates the problem. The new pod, created by the latest rollout, is the broken one. During a rolling update, the old ReplicaSet keeps its pod running until the replacement pod becomes Ready, and only then is the old pod terminated. So as long as the new pod sits in `CrashLoopBackOff`, the rollout stalls and the older pod stays up and keeps serving traffic.

The container log already shows the root cause: the `UnsupportedClassVersionError` says the application class was compiled for class file version 65.0 (Java 21), while the runtime in the new image only accepts class file versions up to 61.0 (Java 17). In other words, the new image bundles a JRE that is older than the JDK used to build the application.
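To confirm this, a minimal debugging sequence could look like the following. It assumes the Deployment is named `picanagm-solution`, which is inferred from the pod names and may differ in your cluster:

```
# Show the stalled rollout: the new ReplicaSet never reports Ready
kubectl rollout status deployment/picanagm-solution

# Inspect events and the last termination reason of the crashing pod
kubectl describe pod picanagm-solution-5cb8887968-qk4pr

# Read the logs of the previous, crashed container instance
kubectl logs picanagm-solution-5cb8887968-qk4pr --previous
```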
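To verify the Java version mismatch itself, you can run the new image with its entrypoint overridden; `<new-image>` below is a placeholder for whatever image the failing ReplicaSet references:

```
# Find the image used by the failing ReplicaSet
kubectl get replicaset picanagm-solution-5cb8887968 \
  -o jsonpath='{.spec.template.spec.containers[0].image}'

# Run that image with the entrypoint overridden to print its Java version;
# here it should report 17, which cannot load class file version 65 (Java 21)
kubectl run jdk-check --rm -it --restart=Never \
  --image=<new-image> --command -- java -version
```

The fix is then to make build and runtime match: either base the image on a Java 21 runtime, or compile the application for the existing runtime (e.g. with `javac --release 17`).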