Table of Contents
- Background
- Environment
- Troubleshooting
- Summary
Background
The customer was running Sangfor's customized OS, a modified CentOS. After half a day of troubleshooting, the culprit turned out to be a corrupted file rather than a Docker problem.
Environment
Docker info:
$ docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 2
Server Version: 18.09.3
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: e6b3f5632f50dbc4e9cb6288d911bf4f5e95b18e
runc version: 6635b4f0c6af3810594d2770f662f34ddc15b40d
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 3.10.0
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 20
Total Memory: 125.3GiB
Name: eds-1f21a854
ID: VZLV:26PU:ZUN6:QQEJ:GW3I:YETT:HMEU:CK6J:SIPL:CHKV:6ASN:5NDF
Docker Root Dir: /data/kube/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
reg.wps.lan:5000
treg.yun.wps.cn
127.0.0.0/8
Registry Mirrors:
https://registry.docker-cn.com/
https://docker.mirrors.ustc.edu.cn/
Live Restore Enabled: false
Product License: Community Engine
System info:
$ uname -a
Linux eds-1f21a854 3.10.0 #1 SMP Mon Sep 28 12:00:30 CST 2020 x86_64 x86_64 x86_64 GNU/Linux
$ cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
Server info (these DMI values are presumably read from files under /sys/class/dmi/id/):
$ cat product_name
SUGON-60G16
$ cat sys_vendor
SANGFOR
$ cat product_version
1.2
Troubleshooting
The server "died" while installing Docker
2020-10-29 19:51:
Field engineer: The server went down while we were installing Docker for the customer's deployment.
Me: Is there anything in /var/log/message after it boots?
Field engineer: We can only get in by restoring a snapshot; the server won't come up, so we can't see any logs.
Me: It won't boot unless you restore the snapshot?
Field engineer: Correct.
At this point I assumed some kernel bug had been triggered, the kernel had panicked, and the server could no longer boot.
2020-10-30 9:07:
Me: When it wouldn't boot, did anyone get on the console to see why?
Field engineer: It's the customer's server, so we can't check it ourselves.
Me: Didn't the customer take a look?
The field engineer then sent me a Sunflower (向日葵) remote-desktop session. Once connected I found it was not a stock OS but a customized CentOS; I couldn't find /var/log/message, so I ran our Docker install script manually:
$ bash -x install-docker.sh
The output stopped at a certain step, so the machine had presumably "died" again. The last debug line the script printed came right before starting Docker, so starting Docker appeared to be the trigger. After a long wait the machine still couldn't be reached or pinged, so I asked the field engineer to have the on-site staff check the tty console through iDRAC, iLO, or the like if it was a physical server.
The on-site staff reported that the server had "booted normally", yet I still couldn't connect. They asked whether our operation had changed the routing; systemctl showed docker was running, but they still couldn't ping the gateway. It suddenly occurred to me that maybe the machine had never gone down at all...
I asked them to run uptime -s to check the last boot time, and indeed the machine had never rebooted...
The on-site team then traced the problem to iptables: starting Docker had flushed their rules. They later opened everything up. So the earlier "starting Docker killed the machine" was actually the iptables changes cutting off the network; the machine never rebooted. (See the sketch below for how this could have been confirmed and avoided.)
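Two things would have shortened this detour: checking the boot time first, and stopping dockerd from touching the firewall on a host that manages its own iptables rules. A minimal sketch, assuming the standard /etc/docker/daemon.json location; "iptables": false is a stock dockerd option, but whether it suits this environment is an assumption:
$ uptime -s                    # last boot time: if it hasn't changed, the host never rebooted
$ iptables-save | wc -l        # quick before/after count of firewall rules
$ cat /etc/docker/daemon.json  # tell dockerd not to manipulate iptables rules
{
  "iptables": false
}
$ systemctl restart docker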
Crash when starting containers
Moving on: the field engineer said the other machines had not hit the problem above during Docker installation, but failed at startup instead. So I ran the deployment manually, and it errored out. With -x debugging on, the script failed while loading the deployment image:
error during connect: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.39/images/load?quiet=0: read unix @->/var/run/docker.sock: read: EOF
Running it by hand:
$ docker load -i ./kube/images/deploy.tar
error during connect: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.39/images/load?quiet=0: read unix @->/var/run/docker.sock: read: connection reset by peer
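Before blaming permissions, it is worth checking whether the daemon answers on the socket at all. A diagnostic sketch, not from the original session; /_ping is the Docker API health endpoint, and v1.39 matches the API version in the error above:
$ curl --unix-socket /var/run/docker.sock http://localhost/v1.39/_ping  # should print OK
$ curl --unix-socket /var/run/docker.sock http://localhost/v1.39/info   # daemon info over the raw API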
journalctl showed no related docker daemon logs at all. Searching for this error, some people said the docker group for /var/run/docker.sock was missing; others fixed it with a plain chmod 777. Neither helped here. So I stopped the service and ran dockerd in the foreground with debugging to look for anything useful:
$ systemctl stop docker
$ pkill dockerd   # make sure no daemon process is left behind
$ dockerd -D      # run the daemon in the foreground with debug logging
In another terminal, run the image load again:
$ docker load -i ./kube/images/deploy.tar
ab6425526dab: Loading layer [==================================================>] 126.3MB/126.3MB
c7fe3ea715ef: Loading layer [==================================================>] 340.3MB/340.3MB
7f7eae7934f7: Loading layer [==================================================>] 3.584kB/3.584kB
e99a66706b50: Loading layer [==================================================>] 2.105MB/2.105MB
245b79de6bb7: Loading layer [==================================================>] 283.5MB/283.5MB
2a56a4432cef: Loading layer [==================================================>] 93.18kB/93.18kB
0c2ea71066ab: Loading layer [==================================================>] 276.5kB/276.5kB
bb3f6df0f87c: Loading layer [==================================================>] 82.94kB/82.94kB
6f34ead3cef7: Loading layer [==================================================>] 946.7kB/946.7kB
c21422dd15f6: Loading layer [==================================================>] 1.97MB/1.97MB
940288517f4c: Loading layer [==================================================>] 458.2kB/458.2kB
0c52f1af61f4: Loading layer [==================================================>] 5.12kB/5.12kB
049e7edd48bc: Loading layer [==================================================>] 1.57MB/1.57MB
73307245d702: Loading layer [==================================================>] 5.632kB/5.632kB
f109309d8ffc: Loading layer [==================================================>] 2.175MB/2.175MB
Loaded image: xxxxxxxxxxxx.cn/platform/deploy-amd64:ubuntu.16.04
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
xxxxxxxxxxxx.cn/platform/deploy-amd64 ubuntu.16.04 3ad94a76af5d 3 months ago 734MB
The foreground debug log looked perfectly normal:
...
DEBU[2020-10-30T14:48:39.955963986+08:00] Applied tar sha256:049e7edd48bc46e3dd5edf89c9caa8f0f7efbb41af403c5a54dd4f1008f604a7 to d58edd0d97bb672ef40e82e45c1603ca3ceaad847d9b9fc7c9b0588087019649, size: 1518278
DEBU[2020-10-30T14:48:39.960091040+08:00] Applying tar in /data/kube/docker/overlay2/b044bd592ae800ed071208c6b2f650c5cbdc7452702f56a23b9b4ffe4236ac18/diff storage-driver=overlay2
DEBU[2020-10-30T14:48:40.059510528+08:00] Applied tar sha256:73307245d7021f9627ca0b2cbfeab3aac0b65abfd476f6ec26bb92c75892d7e2 to b044bd592ae800ed071208c6b2f650c5cbdc7452702f56a23b9b4ffe4236ac18, size: 3284
DEBU[2020-10-30T14:48:40.063040538+08:00] Applying tar in /data/kube/docker/overlay2/03918b1d275aa284532b8b9c59ca158409416f904e13cc7084c598ed343e844f/diff storage-driver=overlay2
DEBU[2020-10-30T14:48:40.148209852+08:00] Applied tar sha256:f109309d8ffcb76589ad6389e80335d986b411c80122d990ab00a02a3a916e3e to 03918b1d275aa284532b8b9c59ca158409416f904e13cc7084c598ed343e844f, size: 2072803
^CINFO[2020-10-30T14:48:55.593523177+08:00] Processing signal 'interrupt'
DEBU[2020-10-30T14:48:55.593617229+08:00] daemon configured with a 15 seconds minimum shutdown timeout
DEBU[2020-10-30T14:48:55.593638628+08:00] start clean shutdown of all containers with a 15 seconds timeout...
DEBU[2020-10-30T14:48:55.594074457+08:00] Unix socket /run/docker/libnetwork/ebd15186e86385c48c4c5508d5f30eb83d5d74e56f09af5c82b6d6d9d63ec8b8.sock doesn't exist. cannot accept client connections
DEBU[2020-10-30T14:48:55.594106623+08:00] Cleaning up old mountid : start.
INFO[2020-10-30T14:48:55.594157536+08:00] stopping event stream following graceful shutdown error="<nil>" module=libcontainerd namespace=moby
DEBU[2020-10-30T14:48:55.594343122+08:00] Cleaning up old mountid : done.
DEBU[2020-10-30T14:48:55.594501828+08:00] Clean shutdown succeeded
INFO[2020-10-30T14:48:55.594520918+08:00] stopping healthcheck following graceful shutdown module=libcontainerd
INFO[2020-10-30T14:48:55.594531978+08:00] stopping event stream following graceful shutdown error="context canceled" module=libcontainerd namespace=plugins.moby
DEBU[2020-10-30T14:48:55.594603119+08:00] received signal signal=terminated
INFO[2020-10-30T14:48:55.594739890+08:00] pickfirstBalancer: HandleSubConnStateChange: 0xc4201a61b0, TRANSIENT_FAILURE module=grpc
INFO[2020-10-30T14:48:55.594751465+08:00] pickfirstBalancer: HandleSubConnStateChange: 0xc4201a61b0, CONNECTING module=grpc
The systemd unit had nothing special in it, which was baffling: why did the import work when dockerd ran in the foreground? Out of other ideas, I suspected a socket problem and tried forwarding the unix socket to TCP with socat, which still failed. (In hindsight I should have added a TCP listener on 127.0.0.1 to the daemon itself instead of going through the socket, since socat ultimately still goes through the socket; see the sketch after the socat output below.)
$ socat -d -d TCP-LISTEN:2375,fork,bind=127.0.0.1 UNIX:/var/run/docker.sock
2020/10/30 17:39:58 socat[5201] N listening on AF=2 127.0.0.1:2375
^[[C2020/10/30 17:42:06 socat[5201] N accepting connection from AF=2 127.0.0.1:35370 on AF=2 127.0.0.1:2375
2020/10/30 17:42:06 socat[5201] N forked off child process 11501
2020/10/30 17:42:06 socat[5201] N listening on AF=2 127.0.0.1:2375
2020/10/30 17:42:06 socat[11501] N opening connection to AF=1 "/var/run/docker.sock"
2020/10/30 17:42:06 socat[11501] N successfully connected from local address AF=1 "\0\0\0\0\0\0 \x0D\x@7\xE9\xEC\x7E\0\0\0\x01\0\0\0\0"
2020/10/30 17:42:06 socat[11501] N starting data transfer loop with FDs [6,6] and [5,5]
2020/10/30 17:42:12 socat[11501] E write(5, 0x55f098349920, 8192): Broken pipe
2020/10/30 17:42:12 socat[11501] N exit(1)
2020/10/30 17:42:12 socat[5201] N childdied(): handling signal 17
$ docker --log-level debug -H tcp://127.0.0.1:2375 load -i kube/images/deploy.tar
c7fe3ea715ef: Loading layer [==========================================> ] 286.9MB/340.3MB
unexpected EOF
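A cleaner version of this experiment would have been to make the daemon listen on TCP directly, taking the unix socket out of the path entirely. A sketch along the lines the hindsight note above suggests; -H is the standard dockerd listen flag and can be given more than once:
$ dockerd -D -H unix:///var/run/docker.sock -H tcp://127.0.0.1:2375
$ docker -H tcp://127.0.0.1:2375 load -i kube/images/deploy.tar   # from another terminal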
This dragged on for quite a while. I was busy and went off to handle another customer's issue; when I came back, on a whim I tried loading the other images, and they all worked...
$ docker load -i kube/images/pause_3.1.tar
e17133b79956: Loading layer [==================================================>] 744.4kB/744.4kB
Loaded image: mirrorgooglecontainers/pause-amd64:3.1
$ docker load -i kube/images/tiller_v2.16.1.tar
77cae8ab23bf: Loading layer [==================================================>] 5.815MB/5.815MB
679105aa33fb: Loading layer [==================================================>] 6.184MB/6.184MB
639eab5d05b1: Loading layer [==================================================>] 40.46MB/40.46MB
87e5687e03f2: Loading layer [==================================================>] 41.13MB/41.13MB
Loaded image: gcr.io/kubernetes-helm/tiller:v2.16.1
$ docker load -i kube/images/calico_v3.1.3.tar
cd7100a72410: Loading layer [==================================================>] 4.403MB/4.403MB
ddc4cb8dae60: Loading layer [==================================================>] 7.84MB/7.84MB
77087b8943a2: Loading layer [==================================================>] 249.3kB/249.3kB
c7227c83afaf: Loading layer [==================================================>] 4.801MB/4.801MB
2e0e333a66b6: Loading layer [==================================================>] 231.8MB/231.8MB
Loaded image: calico/node:v3.1.3
2580685bfb60: Loading layer [==================================================>] 50.84MB/50.84MB
Loaded image: calico/kube-controllers:v3.1.3
0314be9edf00: Loading layer [==================================================>] 1.36MB/1.36MB
15db169413e5: Loading layer [==================================================>] 28.05MB/28.05MB
4252efcc5013: Loading layer [==================================================>] 2.818MB/2.818MB
76cf2496cf36: Loading layer [==================================================>] 3.03MB/3.03MB
91d3d3a16862: Loading layer [==================================================>] 2.995MB/2.995MB
18a58488ba3b: Loading layer [==================================================>] 3.474MB/3.474MB
8d8197f49da2: Loading layer [==================================================>] 27.34MB/27.34MB
7520364e0845: Loading layer [==================================================>] 9.216kB/9.216kB
b9d064622bd6: Loading layer [==================================================>] 2.56kB/2.56kB
Loaded image: calico/cni:v3.1.3
Only this one image failed to load:
$ docker load -i kube/images/deploy.tar
error during connect: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.39/images/load?quiet=0: read unix @->/var/run/docker.sock: read: connection reset by peer
Then I compared the file's checksum with the copy on the machine that produced the package, and they didn't match....
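The check itself is one command on each machine. A sketch assuming sha256; any digest tool works as long as both sides use the same one:
$ sha256sum kube/images/deploy.tar   # run on the target host
$ sha256sum kube/images/deploy.tar   # run on the build machine, then compare the two digests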
Summary
One open question remains: why did the import succeed when dockerd ran in the foreground? It is also odd that, when loading the corrupted file, the docker daemon logged nothing at all and simply reset the connection; I haven't tested whether newer versions behave the same way.
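Reproducing the scenario should be straightforward: corrupt a copy of a known-good image tarball and load it. A sketch I have not run against this exact Docker version; truncate is just one easy way to damage the file:
$ cp kube/images/pause_3.1.tar /tmp/broken.tar
$ truncate -s -4K /tmp/broken.tar   # chop 4 KiB off the tail to corrupt it
$ docker load -i /tmp/broken.tar    # observe what the daemon logs, if anything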