High CPU on One RegionServer Caused by Metrics Updates

1. Symptom

The RegionServer on this machine was using nearly all of its CPU, driving the load average far too high.

2. Diagnosis

Combining `top -H -p` with `jstack`, we located the top 20 CPU-consuming threads and found that a large share of the time was being spent in metrics code, for example:

```
tid: 20438 cpu: %57.4
"B.defaultRpcServer.handler=49,queue=1,port=16020" daemon prio=10 tid=0x00007f28a7a12800 nid=0x4fd6 runnable [0x00007f2836a5d000]
java.lang.Thread.State: RUNNABLE
at java.lang.ThreadLocal$ThreadLocalMap.expungeStaleEntry(ThreadLocal.java:579)
at java.lang.ThreadLocal$ThreadLocalMap.remove(ThreadLocal.java:461)
at java.lang.ThreadLocal$ThreadLocalMap.access$200(ThreadLocal.java:261)
at java.lang.ThreadLocal.remove(ThreadLocal.java:202)
at java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryReleaseShared(ReentrantReadWriteLock.java:429)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.releaseShared(AbstractQueuedSynchronizer.java:1340)
at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.unlock(ReentrantReadWriteLock.java:883)
at com.yammer.metrics.stats.ExponentiallyDecayingSample.unlockForRegularUsage(ExponentiallyDecayingSample.java:196)
at com.yammer.metrics.stats.ExponentiallyDecayingSample.update(ExponentiallyDecayingSample.java:113)
at com.yammer.metrics.stats.ExponentiallyDecayingSample.update(ExponentiallyDecayingSample.java:81)
at org.apache.hadoop.metrics2.lib.MutableHistogram.add(MutableHistogram.java:81)
at org.apache.hadoop.metrics2.lib.MutableRangeHistogram.add(MutableRangeHistogram.java:59)
at org.apache.hadoop.hbase.regionserver.MetricsRegionServerSourceImpl.updatePut(MetricsRegionServerSourceImpl.java:98)
at org.apache.hadoop.hbase.regionserver.MetricsRegionServer.updatePut(MetricsRegionServer.java:65)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:791)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:717)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2146)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2178)
```
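The key detail when matching `top` output against a jstack dump is the thread-id format: jstack prints native thread ids in hex (`nid=0x4fd6`), while the PID column of `top -H` is decimal. A minimal sketch of the conversion, using tid 20438 from the trace above:

```shell
# top -H reports thread ids in decimal, jstack in hex (nid=0x...),
# so convert before searching the dump.
tid_dec=20438                       # decimal tid from top -H (example above)
tid_hex=$(printf '%x' "$tid_dec")
echo "nid=0x${tid_hex}"             # prints: nid=0x4fd6
# jstack <pid> | grep -A 20 "nid=0x${tid_hex}"   # then pull that thread's stack
```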

The following script automates this workflow and extracts the top N threads:

```bash
#!/bin/bash
############################################################################
# @desc: dump the stack traces of the top N CPU-consuming threads
# @author: leifu
############################################################################
if [ $# -lt 2 ]; then
    echo "usage: $0 <java pid> <topN>"
    exit 1
fi

jstack_cmd=""
if [[ $JAVA_HOME != "" ]]; then
    jstack_cmd="$JAVA_HOME/bin/jstack"
else
    r=$(which jstack 2>/dev/null)
    if [[ $r != "" ]]; then
        jstack_cmd=$r
    else
        echo "can not find jstack"
        exit 2
    fi
fi

pid=$1
topN=$2
now=$(date "+%Y%m%d%H%M%S")
jstackFile=jstack_${pid}_${now}.txt
topFile=top_${pid}_${now}.txt
resultFile=result_${pid}_${now}.txt

# dump the thread stacks to a local file
$jstack_cmd $pid > ${jstackFile}

# top -H -b: one batch pass in thread view; strip the headers, drop the main
# process line, sort by %CPU (field 9) descending, and keep the top N
top -H -b -n 1 -p $pid | sed '1,/^$/d' | sed '1d;/^$/d' | grep -v $pid | sort -nrk9 | head -${topN} > ${topFile}

# look up each hot thread in the jstack output (tid converted to hex)
cat ${topFile} | while read line
do
    echo "$line" | awk '{print "tid: "$1," cpu: %"$9}' >> ${resultFile}
    tid_0x=$(printf "%0x" $(echo "$line" | awk '{print $1}'))
    cat ${jstackFile} | grep $tid_0x -A20 | sed -n '1,/^$/p' >> ${resultFile}
done

# archive the raw jstack and top output
mkdir -p backup
mv ${jstackFile} backup/
mv ${topFile} backup/
```
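The least obvious step in the script is the `top` post-processing. Its core (sort descending on the %CPU column, keep the top N) behaves like this on fabricated sample rows laid out in `top -H -b` column order (all tids and values below are made up):

```shell
# Sort thread rows by %CPU (field 9) descending and keep the two busiest.
# The three input lines are fabricated samples in top -H -b column order:
#   PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
printf '%s\n' \
  '20470 hbase 20 0 10g 4g 1g R 57.4 12.0 100:00.0 java' \
  '20471 hbase 20 0 10g 4g 1g S 3.0 12.0 1:00.0 java' \
  '20472 hbase 20 0 10g 4g 1g R 99.0 12.0 200:00.0 java' \
  | sort -nrk9 | head -2
```

This prints the 99.0% thread (tid 20472) first, then the 57.4% one (tid 20470), which is exactly the ordering the script relies on before matching tids against the jstack dump.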

Usage:

```bash
sh jstack_top_cpu.sh ${pid} ${topN}
```

Output:

The backup directory contains the raw jstack and top outputs; result_${pid}_${timestamp}.txt contains the stacks of the top N threads.

3. Temporary Workaround

Restarting the RegionServer brought CPU usage back to normal.

4. Root Cause

The captured stacks show the CPU time going into MetricsRegionServer.updatePut, whose histogram sample (com.yammer.metrics' ExponentiallyDecayingSample) is guarded internally by a JDK ReentrantReadWriteLock. The problem occurs only rarely, so the lock itself may just be a symptom. This appears to be an HBase bug that has since been fixed: https://issues.apache.org/jira/browse/HBASE-15222
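One way to gauge this contention on a live server is to count how many handler threads are inside the sample update at the same instant; a persistently high count across several consecutive dumps points at the metrics lock. A sketch, using an inlined two-thread dump excerpt (fabricated, modeled on the trace above) in place of real `jstack` output:

```shell
# Count threads currently inside ExponentiallyDecayingSample.update.
# Against a live process you would pipe real output instead:
#   jstack <regionserver-pid> | grep -c 'ExponentiallyDecayingSample.update'
grep -c 'ExponentiallyDecayingSample.update' <<'EOF'
"B.defaultRpcServer.handler=49,queue=1,port=16020" daemon prio=10 nid=0x4fd6 runnable
   at com.yammer.metrics.stats.ExponentiallyDecayingSample.update(ExponentiallyDecayingSample.java:113)
"B.defaultRpcServer.handler=50,queue=2,port=16020" daemon prio=10 nid=0x4fd7 runnable
   at com.yammer.metrics.stats.ExponentiallyDecayingSample.update(ExponentiallyDecayingSample.java:113)
EOF
```

On this sample it prints 2; on a healthy server most dumps should show the count near zero.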