Tip: Deploying a K8s Cluster with Ansible

Environment:

Host      IP address        Components
ansible   192.168.175.130   ansible
master    192.168.175.140   docker, kubectl, kubeadm, kubelet
node1     192.168.175.141   docker, kubectl, kubeadm, kubelet
node2     192.168.175.142   docker, kubectl, kubeadm, kubelet

Commands for checking and debugging:

$ ansible-playbook -v k8s-time-sync.yaml --syntax-check
$ ansible-playbook -v k8s-*.yaml -C
$ ansible-playbook -v k8s-yum-cfg.yaml -C --start-at-task="Clean origin dir" --step
$ ansible-playbook -v k8s-kernel-cfg.yaml --step

Inventory file:

/root/ansible/hosts

[k8s_cluster]
master ansible_host=192.168.175.140
node1  ansible_host=192.168.175.141
node2  ansible_host=192.168.175.142
[k8s_cluster:vars]
ansible_port=22
ansible_user=root
ansible_password=hello123  
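
Before running any playbook, you can confirm that Ansible can reach all three hosts with an ad-hoc ping. This is my own quick check rather than part of the original post; it assumes the inventory above is passed with -i and that sshpass is installed so password authentication works:

$ ANSIBLE_HOST_KEY_CHECKING=False ansible -i /root/ansible/hosts k8s_cluster -m ping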

Network check: k8s-check.yaml verifies that each k8s host is reachable over the network;

it also checks whether each host's operating system version meets the requirement (7.5 or later, per suitable_version below).

- name: step01_check
  hosts: k8s_cluster
  gather_facts: no
  tasks:
  - name: check network
    shell:
      cmd: "ping -c 3 -m 2 {{ ansible_host }}"
    delegate_to: localhost
  - name: get system version
    shell: cat /etc/system-release
    register: system_release
  - name: check system version
    vars:
      system_version: "{{ system_release.stdout | regex_search('([7-9].[0-9]+).*?') }}"
      suitable_version: 7.5
    debug:
      msg: "{{ 'The version of the operating system is ' + system_version + ', suitable!' if (system_version | float >= suitable_version) else 'The version of the operating system is unsuitable' }}"

Debug commands:

$ ansible-playbook --ssh-extra-args '-o StrictHostKeyChecking=no' -v -C k8s-check.yaml
$ ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -v -C k8s-check.yaml
$ ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -v k8s-check.yaml --start-at-task="get system version"
Connection setup: k8s-conn-cfg.yaml adds hostname-resolution entries for the k8s hosts to /etc/hosts on the ansible server, generates an SSH key pair, and configures passwordless login from ansible to each k8s host.

- name: step02_conn_cfg
  hosts: k8s_cluster
  gather_facts: no
  vars_prompt:
  - name: RSA
    prompt: Generate RSA or not(Yes/No)?
    default: "no"
    private: no
  - name: password
    prompt: input your login password?
    default: "hello123"
  tasks:
  - name: Add DNS of k8s to ansible
    delegate_to: localhost
    lineinfile:
      path: /etc/hosts
      line: "{{ ansible_host }}  {{ inventory_hostname }}"
      backup: yes
  - name: Generate RSA
    run_once: true
    delegate_to: localhost
    shell:
      cmd: ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
      creates: /root/.ssh/id_rsa
    when: RSA | bool
  - name: Configure password free login
    delegate_to: localhost
    shell: |
      /usr/bin/ssh-keyscan {{ ansible_host }} >> /root/.ssh/known_hosts 2> /dev/null
      /usr/bin/ssh-keyscan {{ inventory_hostname }} >> /root/.ssh/known_hosts 2> /dev/null
      /usr/bin/sshpass -p{{ password }} ssh-copy-id root@{{ ansible_host }}
      #/usr/bin/sshpass -p{{ password }} ssh-copy-id root@{{ inventory_hostname }}
  - name: Test ssh
    shell: hostname
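
Note that the "Configure password free login" task shells out to sshpass, which is not installed on CentOS 7 by default. Assuming the EPEL repository is reachable from the control node, something like the following installs it beforehand (a prerequisite I'm adding; it is not shown in the original post):

$ yum install -y epel-release && yum install -y sshpass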

Execution:

$ ansible-playbook k8s-conn-cfg.yaml
Generate RSA or not(Yes/No)? [no]: yes
input your login password? [hello123]:
PLAY [step02_conn_cfg] **********************************************************************************************************
TASK [Add DNS of k8s to ansible] ************************************************************************************************
ok: [master -> localhost]
ok: [node1 -> localhost]
ok: [node2 -> localhost]
TASK [Generate RSA] *************************************************************************************************************
changed: [master -> localhost]
TASK [Configure password free login] ********************************************************************************************
changed: [node1 -> localhost]
changed: [node2 -> localhost]
TASK [Test ssh] *****************************************************************************************************************
changed: [master]
changed: [node1]
changed: [node2]
PLAY RECAP **********************************************************************************************************************
master                     : ok=4    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
node1                      : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
node2                      : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Configure DNS resolution across the k8s cluster: k8s-hosts-cfg.yaml

Sets the hostname of each node.

Adds entries for all nodes to each host's /etc/hosts.

- name: step03_cfg_host
  hosts: k8s_cluster
  gather_facts: no
  tasks:
  - name: set hostname
    hostname:
      name: "{{ inventory_hostname }}"
      use: systemd
  - name: Add dns to each other
    lineinfile:
      path: /etc/hosts
      backup: yes
      line: "{{ item.value.ansible_host }}  {{ item.key }}"
    loop: "{{ hostvars | dict2items }}"
    loop_control:
      label: "{{ item.key }} {{ item.value.ansible_host }}"

Execution:

$ ansible-playbook k8s-hosts-cfg.yaml
PLAY [step03_cfg_host] **********************************************************************************************************
TASK [set hostname] *************************************************************************************************************
ok: [master]
ok: [node1]
ok: [node2]
TASK [Add dns to each other] ****************************************************************************************************
ok: [node2] => (item=node1 192.168.175.141)
ok: [master] => (item=node1 192.168.175.141)
ok: [node1] => (item=node1 192.168.175.141)
ok: [node2] => (item=node2 192.168.175.142)
ok: [master] => (item=node2 192.168.175.142)
ok: [node1] => (item=node2 192.168.175.142)
ok: [node2] => (item=master 192.168.175.140)
ok: [master] => (item=master 192.168.175.140)
ok: [node1] => (item=master 192.168.175.140)
PLAY RECAP **********************************************************************************************************************
master                     : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
node1                      : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
node2                      : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Configure the yum repositories: k8s-yum-cfg.yaml

- name: step04_yum_cfg
  hosts: k8s_cluster
  gather_facts: no
  tasks:
  - name: Create back-up directory
    file:
      path: /etc/yum.repos.d/org/
      state: directory
  - name: Back-up old Yum files
    shell:
      cmd: mv -f /etc/yum.repos.d/*.repo /etc/yum.repos.d/org/
      removes: /etc/yum.repos.d/org/
  - name: Add new Yum files
    copy:
      src: ./files_yum/
      dest: /etc/yum.repos.d/
  - name: Check yum.repos.d
    shell:
      cmd: ls /etc/yum.repos.d/*
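
The repo files themselves live in ./files_yum/ next to the playbook and are not reproduced in the original post. As an assumed example only, a Kubernetes repo file pointing at the Aliyun mirror (consistent with the Aliyun image registry used later) might look like this (files_yum/kubernetes.repo):

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0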

Clock synchronization: k8s-time-sync.yaml

- name: step05_time_sync
  hosts: k8s_cluster
  gather_facts: no
  tasks:
  - name: Start chronyd.service
    systemd:
      name: chronyd.service
      state: started
      enabled: yes
  - name: Modify time zone & clock
    shell: |
      cp -f /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
      clock -w
      hwclock -w
  - name: Check time now
    command: date
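
As an aside, the cp/clock/hwclock commands can be replaced by Ansible's built-in timezone module, which is idempotent. A minimal sketch of that alternative (my suggestion, not the original author's task):

- name: Set time zone to Asia/Shanghai (alternative to the shell task above)
  timezone:
    name: Asia/Shanghai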

Disable the iptables, firewalld and NetworkManager services: k8s-net-service.yaml

- name: step06_net_service
  hosts: k8s_cluster
  gather_facts: no
  tasks:
  - name: Stop some services for net
    systemd:
      name: "{{ item }}"
      state: stopped
      enabled: no
    loop:
    - firewalld
    - iptables
    - NetworkManager

Execution:

$ ansible-playbook -v k8s-net-service.yaml
… …
failed: [master] (item=iptables) => {
    "ansible_loop_var": "item",
    "changed": false,
    "item": "iptables"
}
MSG:
Could not find the requested service iptables: host
PLAY RECAP **********************************************************************************************************************
master                     : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
node1                      : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
node2                      : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
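
The run fails because no iptables service unit exists on these hosts (a stock CentOS 7 install only ships firewalld). The simplest fix is to drop iptables from the loop; alternatively, the task can be made to tolerate missing units, sketched below as my own workaround rather than the original author's fix:

- name: Stop some services for net
  systemd:
    name: "{{ item }}"
    state: stopped
    enabled: no
  loop:
  - firewalld
  - iptables
  - NetworkManager
  register: stop_result
  # treat "service not found" as acceptable, fail on anything else
  failed_when:
    - stop_result is failed
    - "'Could not find the requested service' not in (stop_result.msg | default(''))"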

Disable SELinux and swap: k8s-SE-swap-disable.yaml

- name: step07_SE_swap_disable
  hosts: k8s_cluster
  gather_facts: no
  tasks:
  - name: SElinux disabled
    lineinfile:
      path: /etc/selinux/config
      line: SELINUX=disabled
      regexp: ^SELINUX=
      state: present
      backup: yes
  - name: Swap disabled
    lineinfile:
      path: /etc/fstab
      line: '#\1'
      regexp: (^/dev/mapper/centos-swap.*$)
      backrefs: yes
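
Both edits above only take effect after a reboot. If you want the running systems updated in the same pass, a couple of extra tasks along these lines can be appended (my addition, not in the original playbook):

- name: Set SELinux to permissive for the running system
  shell: setenforce 0
  register: setenforce_out
  # setenforce exits non-zero if SELinux is already disabled; treat that as OK
  failed_when: setenforce_out.rc != 0 and 'disabled' not in setenforce_out.stderr
- name: Turn swap off for the running system
  shell: swapoff -a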

Tune kernel parameters: k8s-kernel-cfg.yaml

- name: step08_kernel_cfg
  hosts: k8s_cluster
  gather_facts: no
  tasks:
  - name: Create /etc/sysctl.d/kubernetes.conf
    copy:
      content: ""
      dest: /etc/sysctl.d/kubernetes.conf
      force: yes
  - name: Cfg bridge and ip_forward
    lineinfile:
      path: /etc/sysctl.d/kubernetes.conf
      line: "{{ item }}"
      state: present
    loop:
    - net.bridge.bridge-nf-call-ip6tables = 1
    - net.bridge.bridge-nf-call-iptables = 1
    - net.ipv4.ip_forward = 1
  - name: Load cfg
    shell:
      cmd: |
        sysctl -p
        modprobe br_netfilter
      removes: /etc/sysctl.d/kubernetes.conf
  - name: Check cfg
    shell:
      cmd: "[ $(lsmod | grep br_netfilter | wc -l) -ge 2 ] && exit 0 || exit 3"
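
Two caveats worth knowing: sysctl -p with no file argument only reloads /etc/sysctl.conf, so the values written to /etc/sysctl.d/kubernetes.conf are not necessarily applied by that exact command, and br_netfilter is not reloaded automatically after a reboot. A variant of the "Load cfg" task that addresses both points (my adjustment, offered as an assumption rather than the author's version):

- name: Load cfg
  shell:
    cmd: |
      modprobe br_netfilter
      echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
      sysctl -p /etc/sysctl.d/kubernetes.conf
    removes: /etc/sysctl.d/kubernetes.conf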

Execution:

$ ansible-playbook -v k8s-kernel-cfg.yaml --step
TASK [Check cfg] ****************************************************************************************************************
changed: [master] => {
    "changed": true,
    "cmd": "[ $(lsmod | grep br_netfilter | wc -l) -ge 2 ] && exit 0 || exit 3",
    "delta": "0:00:00.011574",
    "end": "2022-02-27 04:26:01.332896",
    "rc": 0,
    "start": "2022-02-27 04:26:01.321322"
}
changed: [node2] => {
    "delta": "0:00:00.016331",
    "end": "2022-02-27 04:26:01.351208",
    "start": "2022-02-27 04:26:01.334877"
}
changed: [node1] => {
    "delta": "0:00:00.016923",
    "end": "2022-02-27 04:26:01.355983",
    "start": "2022-02-27 04:26:01.339060"
}
PLAY RECAP **********************************************************************************************************************
master                     : ok=4    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
node1                      : ok=4    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
node2                      : ok=4    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Configure ipvs: k8s-ipvs-cfg.yaml

- name: step09_ipvs_cfg
  hosts: k8s_cluster
  gather_facts: no
  tasks:
  - name: Install ipset and ipvsadm
    yum:
      name: "{{ item }}"
      state: present
    loop:
    - ipset
    - ipvsadm
  - name: Load modules
    shell: |
      modprobe -- ip_vs
      modprobe -- ip_vs_rr
      modprobe -- ip_vs_wrr
      modprobe -- ip_vs_sh
      modprobe -- nf_conntrack_ipv4
  - name: Check cfg
    shell:
      cmd: "[ $(lsmod | grep -e ip_vs -e nf_conntrack_ipv4 | wc -l) -ge 2 ] && exit 0 || exit 3"

Install Docker: k8s-docker-install.yaml

- name: step10_docker_install
  hosts: k8s_cluster
  gather_facts: no
  tasks:
  - name: Install docker-ce
    yum:
      name: docker-ce-18.06.3.ce-3.el7
      state: present
  - name: Cfg docker
    copy:
      src: ./files_docker/daemon.json
      dest: /etc/docker/
  - name: Start docker
    systemd:
      name: docker.service
      state: started
      enabled: yes
  - name: Check docker version
    shell:
      cmd: docker --version
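
The ./files_docker/daemon.json that gets copied is not shown in the original post. As an assumed example only, a daemon.json for this kind of setup typically pins the cgroup driver (and often adds a registry mirror) so that Docker and the kubelet agree:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

If you use the systemd driver here, the kubelet must be configured to match (see the next step); if you leave Docker on its default cgroupfs driver, skip this setting or adjust accordingly.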

Install the k8s components (kubeadm, kubelet, kubectl): k8s-install-kubepkgs.yaml

- name: step11_k8s_install_kubepkgs
  hosts: k8s_cluster
  gather_facts: no
  tasks:
  - name: Install k8s components
    yum:
      name: "{{ item }}"
      state: present
    loop:
    - kubeadm-1.17.4-0
    - kubelet-1.17.4-0
    - kubectl-1.17.4-0
  - name: Cfg k8s
    copy:
      src: ./files_k8s/kubelet
      dest: /etc/sysconfig/
      force: no
      backup: yes
  - name: Start kubelet
    systemd:
      name: kubelet.service
      state: started
      enabled: yes
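
The ./files_k8s/kubelet file copied into /etc/sysconfig/ is likewise not reproduced in the original post. For a 1.17.4 install that uses the ipvs proxy mode configured earlier, it commonly contains something like the following (assumed content; keep the cgroup driver consistent with your Docker daemon.json):

KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"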

Pull the cluster images: k8s-apps-images.yaml

- name: step12_apps_images
  hosts: k8s_cluster
  gather_facts: no
  vars:
    apps:
    - kube-apiserver:v1.17.4
    - kube-controller-manager:v1.17.4
    - kube-scheduler:v1.17.4
    - kube-proxy:v1.17.4
    - pause:3.1
    - etcd:3.4.3-0
    - coredns:1.6.5
  vars_prompt:
  - name: cfg_python
    prompt: Do you need to install docker pkg for python(Yes/No)?
    default: "no"
    private: no
  tasks:
  - block:
    - name: Install python-pip
      yum:
        name: python-pip
        state: present
    - name: Install docker pkg for python
      shell:
        cmd: |
          pip install docker==4.4.4
          pip install websocket-client==0.32.0
        creates: /usr/lib/python2.7/site-packages/docker/
    when: cfg_python | bool
  - name: Pull images
    community.docker.docker_image:
      name: "registry.cn-hangzhou.aliyuncs.com/google_containers/{{ item }}"
      source: pull
    loop: "{{ apps }}"
  - name: Tag images
    community.docker.docker_image:
      name: "registry.cn-hangzhou.aliyuncs.com/google_containers/{{ item }}"
      repository: "k8s.gcr.io/{{ item }}"
      force_tag: yes
      source: local
    loop: "{{ apps }}"
  - name: Remove images for ali
    community.docker.docker_image:
      name: "registry.cn-hangzhou.aliyuncs.com/google_containers/{{ item }}"
      state: absent
    loop: "{{ apps }}"

Execution:

$ ansible-playbook k8s-apps-images.yaml
Do you need to install docker pkg for python(Yes/No)? [no]:
PLAY [step12_apps_images] *******************************************************************************************************
TASK [Install python-pip] *******************************************************************************************************
skipping: [node1]
skipping: [master]
skipping: [node2]
TASK [Install docker pkg for python] ********************************************************************************************
TASK [Pull images] **************************************************************************************************************
changed: [node1] => (item=kube-apiserver:v1.17.4)
changed: [node2] => (item=kube-apiserver:v1.17.4)
changed: [master] => (item=kube-apiserver:v1.17.4)
changed: [node1] => (item=kube-controller-manager:v1.17.4)
changed: [master] => (item=kube-controller-manager:v1.17.4)
changed: [node1] => (item=kube-scheduler:v1.17.4)
changed: [master] => (item=kube-scheduler:v1.17.4)
changed: [node1] => (item=kube-proxy:v1.17.4)
changed: [node2] => (item=kube-controller-manager:v1.17.4)
changed: [master] => (item=kube-proxy:v1.17.4)
changed: [node1] => (item=pause:3.1)
changed: [master] => (item=pause:3.1)
changed: [node2] => (item=kube-scheduler:v1.17.4)
changed: [node1] => (item=etcd:3.4.3-0)
changed: [master] => (item=etcd:3.4.3-0)
changed: [node2] => (item=kube-proxy:v1.17.4)
changed: [node1] => (item=coredns:1.6.5)
changed: [master] => (item=coredns:1.6.5)
changed: [node2] => (item=pause:3.1)
changed: [node2] => (item=etcd:3.4.3-0)
changed: [node2] => (item=coredns:1.6.5)
TASK [Tag images] ***************************************************************************************************************
ok: [node1] => (item=kube-apiserver:v1.17.4)
ok: [master] => (item=kube-apiserver:v1.17.4)
ok: [node2] => (item=kube-apiserver:v1.17.4)
ok: [node1] => (item=kube-controller-manager:v1.17.4)
ok: [master] => (item=kube-controller-manager:v1.17.4)
ok: [node2] => (item=kube-controller-manager:v1.17.4)
ok: [master] => (item=kube-scheduler:v1.17.4)
ok: [node1] => (item=kube-scheduler:v1.17.4)
ok: [node2] => (item=kube-scheduler:v1.17.4)
ok: [master] => (item=kube-proxy:v1.17.4)
ok: [node1] => (item=kube-proxy:v1.17.4)
ok: [node2] => (item=kube-proxy:v1.17.4)
ok: [master] => (item=pause:3.1)
ok: [node1] => (item=pause:3.1)
ok: [node2] => (item=pause:3.1)
ok: [master] => (item=etcd:3.4.3-0)
ok: [node1] => (item=etcd:3.4.3-0)
ok: [node2] => (item=etcd:3.4.3-0)
ok: [master] => (item=coredns:1.6.5)
ok: [node1] => (item=coredns:1.6.5)
ok: [node2] => (item=coredns:1.6.5)
TASK [Remove images for ali] ****************************************************************************************************
PLAY RECAP **********************************************************************************************************************
master                     : ok=3    changed=2    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
node1                      : ok=3    changed=2    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
node2                      : ok=3    changed=2    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0

Initialize the k8s cluster: k8s-cluster-init.yaml

- name: step13_cluster_init
  hosts: master
  gather_facts: no
  tasks:
  - block:
    - name: Kubeadm init
      shell:
        cmd:
          kubeadm init
          --apiserver-advertise-address={{ ansible_host }}
          --kubernetes-version=v1.17.4
          --service-cidr=10.96.0.0/12
          --pod-network-cidr=10.244.0.0/16
          --image-repository registry.aliyuncs.com/google_containers
    - name: Create /root/.kube
      file:
        path: /root/.kube/
        state: directory
        owner: root
        group: root
    - name: Copy /root/.kube/config
      copy:
        src: /etc/kubernetes/admin.conf
        dest: /root/.kube/config
        remote_src: yes
        backup: yes
    - name: Copy kube-flannel
      copy:
        src: ./files_k8s/kube-flannel.yml
        dest: /root/
    - name: Apply kube-flannel
      shell:
        cmd: kubectl apply -f /root/kube-flannel.yml
    - name: Get token
      shell:
        cmd: kubeadm token create --print-join-command
      register: join_token
    - name: debug join_token
      debug:
        var: join_token.stdout
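
The play above only prints the join command on the master; the original post stops there. A follow-up play in the same playbook could reuse the registered join_token to join the worker nodes, roughly like this (my sketch of how to continue, not part of the original):

- name: step14_node_join
  hosts: node1,node2
  gather_facts: no
  tasks:
  - name: Join the cluster using the command generated on master
    shell:
      cmd: "{{ hostvars['master'].join_token.stdout }}"
      # skip if this node has already joined
      creates: /etc/kubernetes/kubelet.conf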

That wraps up this walkthrough of deploying a K8s cluster with Ansible; hopefully it serves as a useful reference for similar setups.

Original post: https://www.cnblogs.com/MrReboot/p/15944007.html
