1) Download and distribute the kubelet binary
[root@k8s-master01 ~]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
do
echo ">>> ${node_node_ip}"
scp kubernetes/server/bin/kubelet root@${node_node_ip}:/opt/k8s/bin/
ssh root@${node_node_ip} "chmod +x /opt/k8s/bin/*"
done
2) Create the kubelet bootstrap kubeconfig files
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_name in ${NODE_NODE_NAMES[@]}
do
echo ">>> ${node_node_name}"
# create a token
export BOOTSTRAP_TOKEN=$(kubeadm token create \
--description kubelet-bootstrap-token \
--groups system:bootstrappers:${node_node_name} \
--kubeconfig ~/.kube/config)
# set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/cert/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kubelet-bootstrap-${node_node_name}.kubeconfig
# set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=kubelet-bootstrap-${node_node_name}.kubeconfig
# set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=kubelet-bootstrap-${node_node_name}.kubeconfig
# set the default context
kubectl config use-context default --kubeconfig=kubelet-bootstrap-${node_node_name}.kubeconfig
done
Note: what gets written into the kubeconfig is a token; after the bootstrap finishes, kube-controller-manager creates the client and server certificates for the kubelet.
View the tokens kubeadm created for each node:
[root@k8s-master01 work]# kubeadm token list --kubeconfig ~/.kube/config
TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION               EXTRA GROUPS
0zqowl.aye8f834jtq9vm9t   23h   2019-06-19T16:50:43+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-node03
b46tq2.muab337gxwl0dsqn   23h   2019-06-19T16:50:43+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-node02
heh41x.foguhh1qa5crpzlq   23h   2019-06-19T16:50:42+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-node01
Notes:
-> the token is valid for 1 day; once expired it can no longer be used to bootstrap a kubelet, and it is cleaned up by kube-controller-manager's tokencleaner;
-> when kube-apiserver accepts a kubelet's bootstrap token, it sets the request's user to system:bootstrap:<token id> and its group to system:bootstrappers; a ClusterRoleBinding will be created for this group later;
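The user name produced by bootstrap token auth can be derived from the token itself: the part before the dot is the token id. A minimal sketch using the first sample token above (pure shell string handling, no cluster access):

```shell
#!/bin/sh
# A bootstrap token has the form <token-id>.<token-secret>.
# kube-apiserver maps it to the user system:bootstrap:<token-id>.
token="0zqowl.aye8f834jtq9vm9t"   # sample token from the listing above

token_id=${token%%.*}             # strip everything after the first dot
bootstrap_user="system:bootstrap:${token_id}"

echo "${bootstrap_user}"          # -> system:bootstrap:0zqowl
```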
View the secret associated with each token:
[root@k8s-master01 work]# kubectl get secrets -n kube-system|grep bootstrap-token
bootstrap-token-0zqowl   bootstrap.kubernetes.io/token   7   88s
bootstrap-token-b46tq2   bootstrap.kubernetes.io/token   7   88s
bootstrap-token-heh41x   bootstrap.kubernetes.io/token   7   89s
3) Distribute the bootstrap kubeconfig files to all nodes
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_name in ${NODE_NODE_NAMES[@]}
do
echo ">>> ${node_node_name}"
scp kubelet-bootstrap-${node_node_name}.kubeconfig root@${node_node_name}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
done
4) Create and distribute the kubelet configuration file
Since v1.10, some kubelet parameters must be set in a configuration file; kubelet --help warns:
DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag
Create the kubelet configuration file template:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# cat > kubelet-config.yaml.template <<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: "##node_node_ip##"
staticPodPath: ""
syncFrequency: 1m
fileCheckFrequency: 20s
httpCheckFrequency: 20s
staticPodURL: ""
port: 10250
readOnlyPort: 0
rotateCertificates: true
serverTLSBootstrap: true
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/cert/ca.pem"
authorization:
  mode: Webhook
registryPullQPS: 0
registryBurst: 20
eventRecordQPS: 0
eventBurst: 20
enableDebuggingHandlers: true
enableContentionProfiling: true
healthzPort: 10248
healthzBindAddress: "##node_node_ip##"
clusterDomain: "${CLUSTER_DNS_DOMAIN}"
clusterDNS:
  - "${CLUSTER_DNS_SVC_IP}"
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 1m
imageMinimumGCAge: 2m
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
volumeStatsAggPeriod: 1m
kubeletCgroups: ""
systemCgroups: ""
cgroupRoot: ""
cgroupsPerQOS: true
cgroupDriver: cgroupfs
runtimeRequestTimeout: 10m
hairpinMode: promiscuous-bridge
maxPods: 220
podCIDR: "${CLUSTER_CIDR}"
podPidsLimit: -1
resolvConf: /etc/resolv.conf
maxOpenFiles: 1000000
kubeAPIQPS: 1000
kubeAPIBurst: 2000
serializeImagePulls: false
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
evictionSoft: {}
enableControllerAttachDetach: true
failSwapOn: true
containerLogMaxSize: 20Mi
containerLogMaxFiles: 10
systemReserved: {}
kubeReserved: {}
systemReservedCgroup: ""
kubeReservedCgroup: ""
enforceNodeAllocatable: ["pods"]
EOF
Notes:
-> address: the address the kubelet's secure port (HTTPS, 10250) listens on; it must not be 127.0.0.1, otherwise kube-apiserver, Heapster, and other clients cannot call the kubelet API;
-> readOnlyPort=0: closes the read-only port (default 10255), equivalent to leaving it unspecified;
-> authentication.anonymous.enabled: set to false, disallowing anonymous access to port 10250;
-> authentication.x509.clientCAFile: specifies the CA certificate that signs client certificates, enabling HTTPS client-certificate authentication;
-> authentication.webhook.enabled=true: enables HTTPS bearer-token authentication;
-> requests (from kube-apiserver or other clients) that pass neither x509 certificate nor webhook authentication are rejected with "Unauthorized";
-> authorization.mode=Webhook: the kubelet uses the SubjectAccessReview API to ask kube-apiserver whether a given user/group has permission to operate on a resource (RBAC);
-> featureGates.RotateKubeletClientCertificate / featureGates.RotateKubeletServerCertificate: rotate certificates automatically; certificate lifetime is determined by kube-controller-manager's --experimental-cluster-signing-duration flag;
-> the kubelet must run as root;
Create and distribute a kubelet configuration file for each node:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
do
echo ">>> ${node_node_ip}"
sed -e "s/##node_node_ip##/${node_node_ip}/" kubelet-config.yaml.template > kubelet-config-${node_node_ip}.yaml.template
scp kubelet-config-${node_node_ip}.yaml.template root@${node_node_ip}:/etc/kubernetes/kubelet-config.yaml
done
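The sed templating used above can be exercised locally before anything is copied to the nodes. A small sketch with a stand-in one-line template (the demo-config* file names are made up for the illustration):

```shell
#!/bin/sh
# Render a miniature stand-in for kubelet-config.yaml.template and
# verify that the ##node_node_ip## placeholder is replaced per node.
printf 'address: "##node_node_ip##"\n' > demo-config.yaml.template

for node_node_ip in 172.16.60.244 172.16.60.245; do
  sed -e "s/##node_node_ip##/${node_node_ip}/" demo-config.yaml.template \
    > demo-config-${node_node_ip}.yaml
done

cat demo-config-172.16.60.244.yaml   # -> address: "172.16.60.244"
```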
5) Create and distribute the kubelet systemd unit file
Create the kubelet systemd unit file template:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# cat > kubelet.service.template <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=${K8S_DIR}/kubelet
ExecStart=/opt/k8s/bin/kubelet \\
  --allow-privileged=true \\
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
  --cert-dir=/etc/kubernetes/cert \\
  --cni-conf-dir=/etc/cni/net.d \\
  --container-runtime=docker \\
  --container-runtime-endpoint=unix:///var/run/dockershim.sock \\
  --root-dir=${K8S_DIR}/kubelet \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --config=/etc/kubernetes/kubelet-config.yaml \\
  --hostname-override=##node_node_name## \\
  --pod-infra-container-image=registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64:3.1 \\
  --image-pull-progress-deadline=15m \\
  --volume-plugin-dir=${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/ \\
  --logtostderr=true \\
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
EOF
Notes:
-> if --hostname-override is set here, kube-proxy must set the same option, otherwise the node may not be found;
-> --bootstrap-kubeconfig: points to the bootstrap kubeconfig file; the kubelet uses the username and token in it to send a TLS bootstrapping request to kube-apiserver;
-> after k8s approves the kubelet's CSR, the certificate and private key files are created in the --cert-dir directory and then referenced from the --kubeconfig file;
-> --pod-infra-container-image avoids Red Hat's pod-infrastructure:latest image, which cannot reap zombie processes in containers;
Create and distribute a kubelet systemd unit file for each node:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_name in ${NODE_NODE_NAMES[@]}
do
echo ">>> ${node_node_name}"
sed -e "s/##node_node_name##/${node_node_name}/" kubelet.service.template > kubelet-${node_node_name}.service
scp kubelet-${node_node_name}.service root@${node_node_name}:/etc/systemd/system/kubelet.service
done
6) Bootstrap token auth and granting permissions
-> At startup the kubelet checks whether the file given by --kubeconfig exists; if it does not, the kubelet uses the kubeconfig specified by --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver.
-> When kube-apiserver receives the CSR, it authenticates the token inside it; on success it sets the request's user to system:bootstrap:<token id> and its group to system:bootstrappers. This process is called bootstrap token auth.
-> By default this user and group have no permission to create CSRs, so the kubelet fails to start with errors such as:
# journalctl -u kubelet -a |grep -A 2 'certificatesigningrequests'
May 9 22:48:41 k8s-master01 kubelet[128468]: I0526 22:48:41.798230  128468 certificate_manager.go:366] Rotating certificates
May 9 22:48:41 k8s-master01 kubelet[128468]: E0526 22:48:41.801997  128468 certificate_manager.go:385] Failed while requesting a signed certificate from the master: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "system:bootstrap:82jfrm" cannot create resource "certificatesigningrequests" in API group "certificates.k8s.io" at the cluster scope
May 9 22:48:42 k8s-master01 kubelet[128468]: E0526 22:48:42.044828  128468 kubelet.go:2244] node "k8s-master01" not found
May 9 22:48:42 k8s-master01 kubelet[128468]: E0526 22:48:42.078658  128468 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Unauthorized
May 9 22:48:42 k8s-master01 kubelet[128468]: E0526 22:48:42.079873  128468 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Unauthorized
May 9 22:48:42 k8s-master01 kubelet[128468]: E0526 22:48:42.082683  128468 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Unauthorized
May 9 22:48:42 k8s-master01 kubelet[128468]: E0526 22:48:42.084473  128468 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unauthorized
May 9 22:48:42 k8s-master01 kubelet[128468]: E0526 22:48:42.088466  128468 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Unauthorized
The fix is to create a ClusterRoleBinding that binds the group system:bootstrappers to the ClusterRole system:node-bootstrapper:
# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
7) Start the kubelet service
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
do
echo ">>> ${node_node_ip}"
ssh root@${node_node_ip} "mkdir -p ${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/"
ssh root@${node_node_ip} "/usr/sbin/swapoff -a"
ssh root@${node_node_ip} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
done
Notes:
-> the working directory must be created before starting the service;
-> swap must be turned off, otherwise the kubelet fails to start (check the error log with "journalctl -u kubelet |tail");
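Whether swapoff -a actually took effect can be confirmed from /proc/swaps, which contains only the header line when no swap is active. A sketch with a hypothetical swap_active helper, run here against sample file contents rather than the real /proc/swaps:

```shell
#!/bin/sh
# swap_active FILE: succeeds (exit 0) if FILE, in /proc/swaps format,
# lists at least one active swap device (more than the header line).
swap_active() {
  [ "$(wc -l < "$1")" -gt 1 ]
}

# Sample /proc/swaps contents: header only -> swap is off.
printf 'Filename\tType\tSize\tUsed\tPriority\n' > swaps.sample

if swap_active swaps.sample; then
  echo "swap is still on - kubelet will refuse to start (failSwapOn: true)"
else
  echo "swap is off"
fi
```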
After startup, the kubelet uses --bootstrap-kubeconfig to send a CSR request to kube-apiserver.
Once this CSR is approved, kube-controller-manager creates a TLS client certificate and private key for the kubelet and writes the --kubeconfig file.
Note: kube-controller-manager only creates certificates and keys for TLS bootstrap when its --cluster-signing-cert-file and --cluster-signing-key-file flags are configured.
[root@k8s-master01 work]# kubectl get csr
NAME        AGE    REQUESTOR                 CONDITION
csr-4wk6q   108s   system:bootstrap:0zqowl   Pending
csr-mjtl5   110s   system:bootstrap:heh41x   Pending
csr-rfz27   109s   system:bootstrap:b46tq2   Pending
[root@k8s-master01 work]# kubectl get nodes
No resources found.
The three nodes' CSRs are all Pending at this point;
8) Auto-approve CSR requests
Create three ClusterRoleBindings, used to automatically approve client certificates, renew client certificates, and renew server certificates respectively:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# cat > csr-crb.yaml <<EOF
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# To let a node of the group "system:nodes" renew its own credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
# To let a node of the group "system:nodes" renew its own server credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-server-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-server-renewal-csr
  apiGroup: rbac.authorization.k8s.io
EOF
Notes:
-> auto-approve-csrs-for-group: auto-approves a node's first CSR; note that the first CSR is requested with group system:bootstrappers;
-> node-client-cert-renewal: auto-approves renewal of a node's expiring client certificates; the generated certificates carry group system:nodes;
-> node-server-cert-renewal: auto-approves renewal of a node's expiring server certificates; the generated certificates carry group system:nodes;
Apply it:
[root@k8s-master01 work]# kubectl apply -f csr-crb.yaml
Check the kubelet status
Wait patiently (1-10 minutes) until the three nodes' CSRs are automatically approved (in this test it took quite a while):
[root@k8s-master01 work]# kubectl get csr
NAME        AGE     REQUESTOR                 CONDITION
csr-4m4hc   37s     system:node:k8s-node01    Pending
csr-4wk6q   7m29s   system:bootstrap:0zqowl   Approved,Issued
csr-h8hq6   36s     system:node:k8s-node02    Pending
csr-mjtl5   7m31s   system:bootstrap:heh41x   Approved,Issued
csr-rfz27   7m30s   system:bootstrap:b46tq2   Approved,Issued
csr-t9p6n   36s     system:node:k8s-node03    Pending
Note:
The Pending CSRs are for the kubelet server certificates and must be approved manually, as covered below.
All nodes now show status "Ready":
[root@k8s-master01 work]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-node01   Ready    <none>   3m      v1.14.2
k8s-node02   Ready    <none>   3m      v1.14.2
k8s-node03   Ready    <none>   2m59s   v1.14.2
kube-controller-manager has generated a kubeconfig file and key pair for each node (run the following on a node):
[root@k8s-node01 ~]# ls -l /etc/kubernetes/kubelet.kubeconfig
-rw------- 1 root root 2310 Jun 18 17:09 /etc/kubernetes/kubelet.kubeconfig
[root@k8s-node01 ~]# ls -l /etc/kubernetes/cert/|grep kubelet
-rw------- 1 root root 1273 Jun 18 17:16 kubelet-client-2019-06-18-17-16-31.pem
lrwxrwxrwx 1 root root   59 Jun 18 17:16 kubelet-client-current.pem -> /etc/kubernetes/cert/kubelet-client-2019-06-18-17-16-31.pem
Note: the kubelet server certificate has not been generated automatically at this point;
9) Manually approve server cert CSRs
For security reasons, the CSR approving controllers do not auto-approve kubelet server certificate signing requests; they must be approved manually:
[root@k8s-master01 work]# kubectl get csr
NAME        AGE    REQUESTOR                 CONDITION
csr-4m4hc   6m4s   system:node:k8s-node01    Pending
csr-4wk6q   12m    system:bootstrap:0zqowl   Approved,Issued
csr-h8hq6   6m3s   system:node:k8s-node02    Pending
csr-mjtl5   12m    system:bootstrap:heh41x   Approved,Issued
csr-rfz27   12m    system:bootstrap:b46tq2   Approved,Issued
csr-t9p6n   6m3s   system:node:k8s-node03    Pending
Note the NAMEs of the CSRs listed as "Pending" above, then approve each of them manually:
[root@k8s-master01 work]# kubectl certificate approve csr-4m4hc
certificatesigningrequest.certificates.k8s.io/csr-4m4hc approved
[root@k8s-master01 work]# kubectl certificate approve csr-h8hq6
certificatesigningrequest.certificates.k8s.io/csr-h8hq6 approved
[root@k8s-master01 work]# kubectl certificate approve csr-t9p6n
certificatesigningrequest.certificates.k8s.io/csr-t9p6n approved
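With many nodes, approving the Pending server CSRs one by one gets tedious; the names can be extracted and fed to kubectl certificate approve in one pipeline. The extraction step is sketched below against a saved copy of the listing above; the commented-out last line shows how it would be combined with a live cluster:

```shell
#!/bin/sh
# Extract the NAMEs of Pending CSRs from `kubectl get csr` output,
# ready to be piped into `kubectl certificate approve` via xargs.
cat > csr-list.txt <<'EOF'
NAME        AGE    REQUESTOR                 CONDITION
csr-4m4hc   6m4s   system:node:k8s-node01    Pending
csr-4wk6q   12m    system:bootstrap:0zqowl   Approved,Issued
csr-h8hq6   6m3s   system:node:k8s-node02    Pending
EOF

# Print the first column of every row whose last field is "Pending".
awk '$NF == "Pending" {print $1}' csr-list.txt
# -> csr-4m4hc
# -> csr-h8hq6

# Against a live cluster (not run here):
# kubectl get csr | awk '$NF == "Pending" {print $1}' | xargs -r kubectl certificate approve
```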
Check the CSRs again; all of them are now approved:
[root@k8s-master01 work]# kubectl get csr
NAME        AGE     REQUESTOR                 CONDITION
csr-4m4hc   7m46s   system:node:k8s-node01    Approved,Issued
csr-4wk6q   14m     system:bootstrap:0zqowl   Approved,Issued
csr-h8hq6   7m45s   system:node:k8s-node02    Approved,Issued
csr-mjtl5   14m     system:bootstrap:heh41x   Approved,Issued
csr-rfz27   14m     system:bootstrap:b46tq2   Approved,Issued
csr-t9p6n   7m45s   system:node:k8s-node03    Approved,Issued
Back on the node, the kubelet server certificate has now been generated automatically:
[root@k8s-node01 ~]# ls -l /etc/kubernetes/cert/kubelet-*
-rw------- 1 root root 1273 Jun 18 17:16 /etc/kubernetes/cert/kubelet-client-2019-06-18-17-16-31.pem
lrwxrwxrwx 1 root root   59 Jun 18 17:16 /etc/kubernetes/cert/kubelet-client-current.pem -> /etc/kubernetes/cert/kubelet-client-2019-06-18-17-16-31.pem
-rw------- 1 root root 1317 Jun 18 17:23 /etc/kubernetes/cert/kubelet-server-2019-06-18-17-23-13.pem
lrwxrwxrwx 1 root root   59 Jun 18 17:23 /etc/kubernetes/cert/kubelet-server-current.pem -> /etc/kubernetes/cert/kubelet-server-2019-06-18-17-23-13.pem
10) APIs exposed by the kubelet
After startup the kubelet listens on several ports to receive requests from kube-apiserver and other clients.
Run the following on a node:
[root@k8s-node01 ~]# netstat -lnpt|grep kubelet
tcp        0      0 127.0.0.1:40831       0.0.0.0:*     LISTEN      24468/kubelet
tcp        0      0 172.16.60.244:10248   0.0.0.0:*     LISTEN      24468/kubelet
tcp        0      0 172.16.60.244:10250   0.0.0.0:*     LISTEN      24468/kubelet
Notes:
-> 10248: the healthz HTTP port, i.e. the health-check endpoint;
-> 10250: the kubelet's main service port, which the API server probes for liveness; it is an HTTPS service, and accessing it requires authentication and authorization (even for /healthz);
-> 10255: the read-only port, accessible without authentication or authorization. "readOnlyPort: 0" above means port 10255 is closed; "readOnlyPort: 10255" would open it;
-> since k8s v1.10 the --cadvisor-port flag (default port 4194) has been removed, and the cAdvisor UI & API are no longer served.
For example, when "kubectl exec -it nginx-ds-5aedg -- sh" is run, kube-apiserver sends the kubelet a request like:
POST /exec/default/nginx-ds-5aedg/my-nginx?command=sh&input=1&output=1&tty=1
Via the HTTPS port 10250 the kubelet exposes the following resources:
-> /pods, /runningpods
-> /metrics, /metrics/cadvisor, /metrics/probes
-> /spec
-> /stats, /stats/container
-> /logs
-> /run/, /exec/, /attach/, /portforward/, /containerlogs/
Because anonymous authentication is disabled and webhook authorization is enabled, every request to the HTTPS API on port 10250 must be authenticated and authorized.
The predefined ClusterRole system:kubelet-api-admin grants access to all kubelet APIs (the user of the `kubernetes` certificate used by kube-apiserver is granted this role):
[root@k8s-master01 work]# kubectl describe clusterrole system:kubelet-api-admin
Name:         system:kubelet-api-admin
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources      Non-Resource URLs  Resource Names  Verbs
  ---------      -----------------  --------------  -----
  nodes/log      []                 []              [*]
  nodes/metrics  []                 []              [*]
  nodes/proxy    []                 []              [*]
  nodes/spec     []                 []              [*]
  nodes/stats    []                 []              [*]
  nodes          []                 []              [get list watch proxy]
11) kubelet API authentication and authorization
The kubelet is configured with the following authentication parameters:
-> authentication.anonymous.enabled: set to false, disallowing anonymous access to port 10250;
-> authentication.x509.clientCAFile: specifies the CA certificate that signs client certificates, enabling HTTPS client-certificate authentication;
-> authentication.webhook.enabled=true: enables HTTPS bearer-token authentication;
And with the following authorization parameter:
-> authorization.mode=Webhook: enables RBAC authorization;
After receiving a request, the kubelet verifies the client certificate's signature against clientCAFile, or checks whether the bearer token is valid. If both fail, the request is rejected with "Unauthorized":
[root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem https://172.16.60.244:10250/metrics
Unauthorized
[root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer 123456" https://172.16.60.244:10250/metrics
Unauthorized
After authentication succeeds, the kubelet sends a SubjectAccessReview request to kube-apiserver to check whether the user/group behind the certificate or token has permission to operate on the resource (RBAC);
Certificate authentication and authorization:
# a certificate without sufficient permissions;
[root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kube-controller-manager.pem --key /etc/kubernetes/cert/kube-controller-manager-key.pem https://172.16.60.244:10250/metrics
Forbidden (user=system:kube-controller-manager, verb=get, resource=nodes, subresource=metrics)
# the admin certificate created when deploying the kubectl command-line tool, which has the highest privileges;
[root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.244:10250/metrics|head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0
Note: the --cacert, --cert, and --key argument values must be file paths, otherwise 401 Unauthorized is returned;
Bearer token authentication and authorization
Create a ServiceAccount and bind it to the ClusterRole system:kubelet-api-admin so it has permission to call the kubelet API:
[root@k8s-master01 work]# kubectl create sa kubelet-api-test
[root@k8s-master01 work]# kubectl create clusterrolebinding kubelet-api-test --clusterrole=system:kubelet-api-admin --serviceaccount=default:kubelet-api-test
[root@k8s-master01 work]# secret=$(kubectl get secrets | grep kubelet-api-test | awk '{print $1}')
[root@k8s-master01 work]# token=$(kubectl describe secret ${secret} | grep -E '^token' | awk '{print $2}')
[root@k8s-master01 work]# echo ${token}
eyjhbgcioijsuzi1niisimtpzci6iij9.eyjpc3mioijrdwjlcm5ldgvzl3nlcnzpy2vhy2nvdw50iiwia3vizxjuzxrlcy5pby9zzxj2awnlywnjb3vudc9uyw1lc3bhy2uioijkzwzhdwx0iiwia3vizxjuzxrlcy5pby9zzxj2awnlywnjb3vudc9zzwnyzxqubmftzsi6imt1ymvszxqtyxbplxrlc3qtdg9rzw4tanrymneilcjrdwjlcm5ldgvzlmlvl3nlcnzpy2vhy2nvdw50l3nlcnzpy2utywnjb3vudc5uyw1lijoia3vizwxldc1hcgktdgvzdcisimt1ymvybmv0zxmuaw8vc2vydmljzwfjy291bnqvc2vydmljzs1hy2nvdw50lnvpzci6imrjyjljzte0ltkxywmtmtflos05mgq0ltawnta1nmfjn2m4msisinn1yii6inn5c3rlbtpzzxj2awnlywnjb3vuddpkzwzhdwx0omt1ymvszxqtyxbplxrlc3qifq.i_uvqjoumldg4ldurfhxfdotm2addxgequqtcpolp_5g6ui-mjve5jhem_q8otmwfs5tqlcvkjhn2idfsrikk_mbe_yslqsneohdclzwhrvn6x84y62q49y-art12ylspfwwenw-2gawstmorbz7ayyau5-kgqmk95mmx57ic8uwvjylilw4jcnkmon5esomgaog30uvvsbiqvkkytwgtag5tah9wadujqttbjjdolgntpghxj-hmzo2givdgdrbs_unvhzgt2madlpp13qyv8zkibgpsbiwoak_olsfkq5-dirn04ncbh9kkyyh9jccmsuvepaj-lgtwj5zdufrhw
Now make the kubelet request again:
[root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer ${token}" https://172.16.60.244:10250/metrics|head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0
12) cAdvisor and metrics
cAdvisor is embedded in the kubelet binary; it collects resource usage (CPU, memory, disk, network) for the containers on its node.
Visiting https://172.16.60.244:10250/metrics and https://172.16.60.244:10250/metrics/cadvisor in a browser returns the kubelet and cAdvisor metrics respectively.
Note:
-> kubelet-config.yaml sets authentication.anonymous.enabled to false, so anonymous access to the HTTPS service on 10250 is not allowed;
-> refer to the later section on accessing the kube-apiserver secure port from a browser to create and import the required certificates; after that, the browser can reach both kube-apiserver and the kubelet's port 10250.
Access the kubelet's port 10250 with certificates:
[root@k8s-master01 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.244:10250/metrics
[root@k8s-master01 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.244:10250/metrics/cadvisor
13) Retrieve the kubelet configuration
Fetch each node's kubelet configuration from kube-apiserver:
If the jq command (a JSON processor) is missing, install it with yum:
[root@k8s-master01 ~]# yum install -y jq
Use the admin certificate (highest privileges) created when deploying the kubectl command-line tool:
[root@k8s-master01 ~]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 ~]# curl -sSL --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem ${KUBE_APISERVER}/api/v1/nodes/k8s-node01/proxy/configz | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'
{
  "syncFrequency": "1m0s",
  "fileCheckFrequency": "20s",
  "httpCheckFrequency": "20s",
  "address": "172.16.60.244",
  "port": 10250,
  "rotateCertificates": true,
  "serverTLSBootstrap": true,
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/cert/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "registryPullQPS": 0,
  "registryBurst": 20,
  "eventRecordQPS": 0,
  "eventBurst": 20,
  "enableDebuggingHandlers": true,
  "enableContentionProfiling": true,
  "healthzPort": 10248,
  "healthzBindAddress": "172.16.60.244",
  "oomScoreAdj": -999,
  "clusterDomain": "cluster.local",
  "clusterDNS": [
    "10.254.0.2"
  ],
  "streamingConnectionIdleTimeout": "4h0m0s",
  "nodeStatusUpdateFrequency": "10s",
  "nodeStatusReportFrequency": "1m0s",
  "nodeLeaseDurationSeconds": 40,
  "imageMinimumGCAge": "2m0s",
  "imageGCHighThresholdPercent": 85,
  "imageGCLowThresholdPercent": 80,
  "volumeStatsAggPeriod": "1m0s",
  "cgroupsPerQOS": true,
  "cgroupDriver": "cgroupfs",
  "cpuManagerPolicy": "none",
  "cpuManagerReconcilePeriod": "10s",
  "runtimeRequestTimeout": "10m0s",
  "hairpinMode": "promiscuous-bridge",
  "maxPods": 220,
  "podCIDR": "172.30.0.0/16",
  "podPidsLimit": -1,
  "resolvConf": "/etc/resolv.conf",
  "cpuCFSQuota": true,
  "cpuCFSQuotaPeriod": "100ms",
  "maxOpenFiles": 1000000,
  "contentType": "application/vnd.kubernetes.protobuf",
  "kubeAPIQPS": 1000,
  "kubeAPIBurst": 2000,
  "serializeImagePulls": false,
  "evictionHard": {
    "memory.available": "100Mi"
  },
  "evictionPressureTransitionPeriod": "5m0s",
  "enableControllerAttachDetach": true,
  "makeIPTablesUtilChains": true,
  "iptablesMasqueradeBit": 14,
  "iptablesDropBit": 15,
  "failSwapOn": true,
  "containerLogMaxSize": "20Mi",
  "containerLogMaxFiles": 10,
  "configMapAndSecretChangeDetectionStrategy": "Watch",
  "enforceNodeAllocatable": [
    "pods"
  ],
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1"
}
Or run the following directly (https://172.16.60.250:8443 is the value of the ${KUBE_APISERVER} variable):
[root@k8s-master01 ~]# curl -sSL --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.250:8443/api/v1/nodes/k8s-node01/proxy/configz | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'
[root@k8s-master01 ~]# curl -sSL --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.250:8443/api/v1/nodes/k8s-node02/proxy/configz | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'
[root@k8s-master01 ~]# curl -sSL --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.250:8443/api/v1/nodes/k8s-node03/proxy/configz | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'