I start Tomcat as the tomcat user and run jstack as the root user. Checking the GC log, I noticed that a Full GC occurs. After searching I found that sudo -u tomcat jstack -l <pid> solves the issue. But why does running jstack as the root user cause a Full GC?
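For reference, the two invocations I compared (a sketch; 2644 is the Tomcat pid shown in the output below):

# as root: fails to attach, and a Full GC shows up in the GC log
jstack -l 2644

# as the user that owns the JVM: attaches cleanly, no Full GC
sudo -u tomcat jstack -l 2644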
Environment:
[root@bogon ~]# cat /etc/redhat-release
CentOS release 6.8 (Final)
[root@bogon ~]# java -version
java version "1.8.0_65"
Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)
[root@bogon ~]# service tomcat version
vagrant: Display Tomcat Version
Using CATALINA_BASE: /apps/tomcat
Using CATALINA_HOME: /usr/local/tomcat
Using CATALINA_TMPDIR: /apps/tomcat/temp
Using JRE_HOME: /usr/java/jdk1.8.0_65/
Using CLASSPATH: /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar
Server version: Apache Tomcat/8.5.11
Server built: Jan 10 2017 21:02:52 UTC
Server number: 8.5.11.0
OS Name: Linux
OS Version: 2.6.32-642.15.1.el6.x86_64
Architecture: amd64
JVM Version: 1.8.0_65-b17
JVM Vendor: Oracle Corporation
[root@bogon tomcat]# jstack -l 2644
2644: Unable to open socket file: target process not responding or HotSpot VM not loaded
The -F option can be used when the target process is not responding
GC logs
CommandLine flags: -XX:+DisableExplicitGC -XX:GCLogFileSize=2097152 -XX:+HeapDumpOnOutOfMemoryError -XX:InitialHeapSize=1149239296 -XX:InitiatingHeapOccupancyPercent=45 -XX:+ManagementServer -XX:MaxGCPauseMillis=200 -XX:MaxHeapSize=1149239296 -XX:MaxMetaspaceFreeRatio=80 -XX:MetaspaceSize=100663296 -XX:MinMetaspaceFreeRatio=50 -XX:NumberOfGCLogFiles=10 -XX:+PrintClassHistogram -XX:+PrintGC -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseG1GC -XX:+UseGCLogFileRotation -XX:+UseStringDeduplication
2017-07-05T20:13:10.591+0800: 38.999: [Full GC (Heap Inspection Initiated GC) 47M->13M(1096M), 0.0710197 secs]
[Eden: 48.0M(54.0M)->0.0B(54.0M) Survivors: 0.0B->0.0B Heap: 47.5M(1096.0M)->13.2M(1096.0M)], [Metaspace: 15845K->15845K(1062912K)]
[Times: user=0.04 sys=0.02, real=0.07 secs]
num #instances #bytes class name
----------------------------------------------
1: 27256 5409512 [C
2: 2889 3734776 [B
3: 28 935336 [J
4: 26366 632784 java.lang.String
5: 4464 392832 java.lang.reflect.Method
6: 2964 338128 java.lang.Class
7: 9495 303840 java.util.HashMap$Node
8: 1566 203296 [I
9: 2576 167120 [Ljava.lang.Object;
10: 971 120680 [Ljava.util.HashMap$Node;
11: 3610 115520 java.util.concurrent.ConcurrentHashMap$Node
12: 3335 69056 [Ljava.lang.Class;
13: 924 67912 [Ljava.lang.String;
14: 1697 67880 java.lang.ref.Finalizer
15: 1608 64320 java.util.LinkedHashMap$Entry
16: 1232 59136 java.util.HashMap
17: 652 52160 java.lang.reflect.Constructor
18: 3243 51888 java.lang.Object
19: 1453 46496 java.util.Hashtable$Entry
20: 1027 41080 java.util.TreeMap$Entry
21: 711 39816 java.util.zip.ZipFile$ZipFileInputStream
22: 708 39648 java.util.zip.ZipFile$ZipFileInflaterInputStream
23: 308 37176 [Ljava.lang.reflect.Method;
24: 764 36672 org.apache.tomcat.util.modeler.AttributeInfo
25: 56 31968 [Ljava.util.concurrent.ConcurrentHashMap$Node;
26: 442 31824 java.lang.reflect.Field
27: 30 31200 [[C
In your case the GC is caused by the -XX:+PrintClassHistogram option. This flag forces a heap inspection, accompanied by a Full GC, in response to the SIGQUIT signal.
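You can reproduce the same effect without jstack at all (a sketch, assuming the flags from your CommandLine flags line): sending SIGQUIT directly makes the JVM print a thread dump and, because of -XX:+PrintClassHistogram, a class histogram, which appears in the GC log as a "Heap Inspection Initiated GC".

kill -3 2644        # or: kill -QUIT 2644
# the GC log then shows a line like:
# Full GC (Heap Inspection Initiated GC) ...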
As you can find from this answer, jstack uses SIGQUIT to initiate communication with the target JVM. The same answer explains why jstack works under the tomcat user but not under root.

In short, SIGQUIT (also known as kill -3) plays two roles: it initiates Dynamic Attach, and it dumps the JVM state. If Dynamic Attach fails, the JVM proceeds with the heap inspection instead, and that in turn causes a Full GC, because the -XX:+PrintClassHistogram option is on.
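A quick way to see the two roles side by side (a sketch; /tmp/.java_pid<pid> is the attach socket HotSpot typically creates on Linux, and 2644 is your Tomcat pid):

# Successful Dynamic Attach: run jstack as the JVM's own user; it talks to
# the attach listener over a UNIX socket and no heap inspection happens.
sudo -u tomcat jstack -l 2644
ls -l /tmp/.java_pid2644        # attach socket created by the target JVM

# Failed attach (as root) or a plain kill -3: the JVM falls back to its
# default SIGQUIT handling -- a thread dump plus, with
# -XX:+PrintClassHistogram, the "Heap Inspection Initiated GC" in the log.
jstack -l 2644                  # as root: "Unable to open socket file ..."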