Keepalived + Nginx + Tomcat: A Highly Available Web Cluster (Example Configuration)

I. Installing Nginx

1. Download the Nginx source package and install the build dependencies

(1) Install the C compiler:

yum -y install gcc        # GCC C compiler

(2) Install the PCRE development headers:

yum -y install pcre-devel

(3) Install the zlib development headers:

yum -y install zlib-devel

(4) Build and install Nginx

Change into the directory where the nginx source was extracted, then configure, compile, and install:

[root@localhost nginx-1.12.2]# pwd
/usr/local/nginx/nginx-1.12.2
[root@localhost nginx-1.12.2]# ./configure && make && make install

(5) Start Nginx

After installation, locate the install directory:

[root@localhost nginx-1.12.2]# whereis nginx
nginx: /usr/local/nginx
[root@localhost nginx-1.12.2]#

Change into the sbin subdirectory and start Nginx:

[root@localhost sbin]# ls
nginx
[root@localhost sbin]# ./nginx &
[1] 5768
[root@localhost sbin]#

Check whether Nginx has started, e.g. by opening the server's address in a browser.

Or check via the process list:

[root@localhost sbin]# ps -aux|grep nginx
root  5769 0.0 0.0 20484 608 ?  Ss 14:03 0:00 nginx: master process ./nginx
nobody  5770 0.0 0.0 23012 1620 ?  S 14:03 0:00 nginx: worker process
root  5796 0.0 0.0 112668 972 pts/0 R+ 14:07 0:00 grep --color=auto nginx
[1]+  Done       ./nginx
[root@localhost sbin]#

At this point Nginx is installed and running.
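The process check above can be wrapped in a small helper for reuse later (the `count_procs` name is our own; this is just the same `ps`-based idiom in function form, runnable on any Linux box):

```shell
# count_procs NAME - print the number of running processes whose
# command name is NAME (0 when none), using ps -C as above.
count_procs() {
    ps -C "$1" --no-headers 2>/dev/null | wc -l
}

n=$(count_procs nginx)
echo "nginx processes: $n"
if [ "$n" -gt 0 ]; then
    echo "nginx is running"
else
    echo "nginx is not running"
fi
```

The same counting trick reappears in the Keepalived health-check script later in this article.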

(6) Set up an Nginx service script and start-on-boot

Create the Nginx service script. Note: the Nginx paths below must be adjusted to match your own installation.

[root@localhost init.d]# vim /etc/rc.d/init.d/nginx

#!/bin/sh
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig: - 85 15
# description: Nginx is an HTTP(S) server, HTTP(S) reverse \
# proxy and IMAP/POP3 proxy server
# processname: nginx
# config: /etc/nginx/nginx.conf
# config: /usr/local/nginx/conf/nginx.conf
# pidfile: /usr/local/nginx/logs/nginx.pid

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

nginx="/usr/local/nginx/sbin/nginx"
prog=$(basename $nginx)

NGINX_CONF_FILE="/usr/local/nginx/conf/nginx.conf"

[ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx

lockfile=/var/lock/subsys/nginx

make_dirs() {
    # make required directories
    user=`$nginx -V 2>&1 | grep "configure arguments:" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g' -`
    if [ -z "`grep $user /etc/passwd`" ]; then
        useradd -M -s /bin/nologin $user
    fi
    options=`$nginx -V 2>&1 | grep 'configure arguments:'`
    for opt in $options; do
        if [ `echo $opt | grep '.*-temp-path'` ]; then
            value=`echo $opt | cut -d "=" -f 2`
            if [ ! -d "$value" ]; then
                # echo "creating" $value
                mkdir -p $value && chown -R $user $value
            fi
        fi
    done
}

start() {
    [ -x $nginx ] || exit 5
    [ -f $NGINX_CONF_FILE ] || exit 6
    make_dirs
    echo -n $"Starting $prog: "
    daemon $nginx -c $NGINX_CONF_FILE
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    killproc $prog -QUIT
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    #configtest || return $?
    stop
    sleep 1
    start
}

reload() {
    #configtest || return $?
    echo -n $"Reloading $prog: "
    killproc $nginx -HUP
    RETVAL=$?
    echo
}

force_reload() {
    restart
}

configtest() {
    $nginx -t -c $NGINX_CONF_FILE
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart|configtest)
        $1
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
        exit 2
esac

Make the script executable and register it to start at boot:

[root@localhost init.d]# chmod 755 /etc/rc.d/init.d/nginx
[root@localhost init.d]# chkconfig --add nginx
[root@localhost init.d]# chkconfig nginx on

Start Nginx through the script:

[root@localhost init.d]# ./nginx start

Add Nginx to the system PATH:

[root@localhost init.d]# echo 'export PATH=$PATH:/usr/local/nginx/sbin' >> /etc/profile && source /etc/profile

Nginx can now be managed as a system service:

[root@localhost init.d]# service nginx start
Starting nginx (via systemctl):                            [  OK  ]

Tip: shortcut commands

service nginx (start|stop|restart)

II. Installing and Configuring Keepalived

1. Install the Keepalived dependencies

yum install -y popt-devel
yum install -y ipvsadm
yum install -y libnl*
yum install -y libnf*
yum install -y openssl-devel

2. Build and install Keepalived

[root@localhost keepalived-1.3.9]# ./configure
[root@localhost keepalived-1.3.9]# make && make install

3. Install Keepalived as a system service

Copy the default configuration files into the standard locations:

[root@localhost etc]# mkdir /etc/keepalived
[root@localhost etc]# cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@localhost etc]# cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/

Create a symlink so the keepalived binary is on the PATH:

[root@localhost sysconfig]# ln -s /usr/local/keepalived/sbin/keepalived /usr/sbin/

Enable Keepalived at boot:

[root@localhost sysconfig]# chkconfig keepalived on
Note: Forwarding request to 'systemctl enable keepalived.service'.
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service

Start the Keepalived service:

[root@localhost keepalived]# keepalived -D -f /etc/keepalived/keepalived.conf

Stop the Keepalived service:

[root@localhost keepalived]# killall keepalived

III. Cluster Planning and Setup

Environment:

CentOS 7.2
Keepalived 1.4.0 (December 29, 2017)
Nginx 1.12.2
Tomcat 8

Cluster plan:

Host                         IP               Role
Keepalived+Nginx [Master]    192.168.43.101   Nginx Server 01
Keepalived+Nginx [Backup]    192.168.43.102   Nginx Server 02
Tomcat01                     192.168.43.103   Tomcat Web Server 01
Tomcat02                     192.168.43.104   Tomcat Web Server 02
VIP                          192.168.43.150   Floating virtual IP
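For reference in the remaining sections, the planned addresses can be kept in shell variables (the variable names are our own convenience, not part of the original setup):

```shell
# The cluster plan as shell variables.
NGINX_MASTER=192.168.43.101   # Keepalived+Nginx master
NGINX_BACKUP=192.168.43.102   # Keepalived+Nginx backup
TOMCAT01=192.168.43.103       # Tomcat web server 01
TOMCAT02=192.168.43.104       # Tomcat web server 02
VIP=192.168.43.150            # floating virtual IP

echo "VIP $VIP floats between $NGINX_MASTER and $NGINX_BACKUP"
```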

1. Change Tomcat's default welcome page so we can tell which node served a request

On Tomcat Server 01 (192.168.43.103), edit ROOT/index.jsp to display the Tomcat IP address and the X-NGINX request header:

<div id="asf-box">
<h1>${pageContext.servletContext.serverInfo}(192.168.43.103)<%=request.getHeader("X-NGINX")%></h1>
</div>

On Tomcat Server 02 (192.168.43.104), edit ROOT/index.jsp the same way:

<div id="asf-box">
<h1>${pageContext.servletContext.serverInfo}(192.168.43.104)<%=request.getHeader("X-NGINX")%></h1>
</div>

2. Start the Tomcat services and load each node's page to confirm its IP. Nginx is not running yet, so the X-NGINX request header is still empty.

3. Configure the Nginx proxies

(1) Configure the Master node (192.168.43.101):

upstream tomcat {
    server 192.168.43.103:8080 weight=1;
    server 192.168.43.104:8080 weight=1;
}
server {
    location / {
        proxy_pass http://tomcat;
        proxy_set_header X-NGINX "NGINX-1";
    }
    # ... rest omitted
}
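With weight=1 on both servers, nginx's default weighted round-robin simply sends consecutive requests to the two Tomcats in turn. A quick sketch of that distribution (pure shell arithmetic, no nginx involved):

```shell
# Simulate round-robin over the two upstream servers (weight=1 each):
# consecutive requests alternate between them.
s0="192.168.43.103:8080"
s1="192.168.43.104:8080"
i=0
while [ $i -lt 4 ]; do
    if [ $((i % 2)) -eq 0 ]; then target=$s0; else target=$s1; fi
    echo "request $((i + 1)) -> $target"
    i=$((i + 1))
done
```

Unequal weights skew this ratio; e.g. weight=2 on one server would send it two out of every three requests.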

(2) Configure the Backup node (192.168.43.102):

upstream tomcat {
    server 192.168.43.103:8080 weight=1;
    server 192.168.43.104:8080 weight=1;
}
server {
    location / {
        proxy_pass http://tomcat;
        proxy_set_header X-NGINX "NGINX-2";
    }
    # ... rest omitted
}

(3) Start Nginx on the Master node:

[root@localhost init.d]# service nginx start
Starting nginx (via systemctl):                            [  OK  ]

Now requests to 192.168.43.101 alternate between the 103 and 104 Tomcat pages, showing that Nginx is load-balancing across the two Tomcat servers.

(4) Configure the Backup node (192.168.43.102) the same way and start its Nginx; requests to 192.168.43.102 are then load-balanced as well.

4. Configure Keepalived

(1) On both the Master and Backup nodes, add two files under /etc/keepalived: check_nginx.sh, which checks whether Nginx is still alive, and keepalived.conf.

check_nginx.sh:

#!/bin/bash
# Timestamp for log entries
d=`date --date today +%Y%m%d_%H:%M:%S`
# Count the nginx processes
n=`ps -C nginx --no-heading|wc -l`
# If the count is 0, try to start nginx and count again.
# If it is still 0, nginx cannot start, so stop keepalived
# and let the VIP fail over.
if [ "$n" -eq 0 ]; then
    /etc/rc.d/init.d/nginx start
    n2=`ps -C nginx --no-heading|wc -l`
    if [ "$n2" -eq 0 ]; then
        echo "$d nginx down, keepalived will stop" >> /var/log/check_ng.log
        systemctl stop keepalived
    fi
fi
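The decision logic of check_nginx.sh can be exercised without nginx or keepalived installed by stubbing the two process counts (the `decide` helper is our own dry-run wrapper, not part of the article's script):

```shell
# decide N N2 - what check_nginx.sh would do, given the nginx process
# count before (N) and after (N2) the restart attempt.
decide() {
    if [ "$1" -eq 0 ]; then
        if [ "$2" -eq 0 ]; then
            echo "stop keepalived"   # nginx cannot be revived: release the VIP
        else
            echo "recovered"         # the restart brought nginx back
        fi
    else
        echo "healthy"               # nginx was running all along
    fi
}

decide 2 2    # nginx running normally
decide 0 1    # nginx was down, restart succeeded
decide 0 0    # nginx unrecoverable: keepalived should stop
```

Note the fail-over path is "stop keepalived entirely", which makes the Backup node win the next VRRP election.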

After adding the script, make it executable:

[root@localhost keepalived]# chmod 755 /etc/keepalived/check_nginx.sh

(2) On the Master node, create /etc/keepalived/keepalived.conf as follows:

vrrp_script chk_nginx {
    script "/etc/keepalived/check_nginx.sh"   # script that checks the nginx process
    interval 2
    weight -20
}
global_defs {
    notification_email {
        # email alerts can be added here
    }
}
vrrp_instance VI_1 {
    state MASTER                # MASTER here; BACKUP on the backup node
    interface ens33             # NIC the instance binds to (check with `ip addr`)
    virtual_router_id 51        # must be identical on both nodes of the same instance
    mcast_src_ip 192.168.43.101
    priority 250                # MASTER priority must be higher than BACKUP's (e.g. 240)
    advert_int 1                # seconds between MASTER/BACKUP sync checks
    nopreempt                   # non-preemptive mode
    authentication {            # authentication between the nodes
        auth_type PASS
        auth_pass 123456
    }
    track_script {
        chk_nginx               # must match the vrrp_script name above
    }
    virtual_ipaddress {         # the VIP
        192.168.43.150          # multiple VIPs may be listed, one per line
    }
}

(3) On the Backup node, create /etc/keepalived/keepalived.conf as follows:

vrrp_script chk_nginx {
    script "/etc/keepalived/check_nginx.sh"   # script that checks the nginx process
    interval 2
    weight -20
}
global_defs {
    notification_email {
        # email alerts can be added here
    }
}
vrrp_instance VI_1 {
    state BACKUP                # BACKUP here; MASTER on the master node
    interface ens33             # NIC the instance binds to (check with `ip addr`)
    virtual_router_id 51        # must be identical on both nodes of the same instance
    mcast_src_ip 192.168.43.102
    priority 240                # lower than the MASTER's 250
    advert_int 1                # seconds between MASTER/BACKUP sync checks
    nopreempt                   # non-preemptive mode
    authentication {            # authentication between the nodes
        auth_type PASS
        auth_pass 123456
    }
    track_script {
        chk_nginx               # must match the vrrp_script name above
    }
    virtual_ipaddress {         # the VIP
        192.168.43.150          # multiple VIPs may be listed, one per line
    }
}

Tips on the configuration:

- state: the master is configured MASTER, the backup BACKUP
- interface: the NIC name; on this VMware 12.0 setup it is ens33
- mcast_src_ip: each node's own real IP address
- priority: the master's priority must be higher than the backup's (here 250 vs. 240)
- virtual_ipaddress: the virtual IP (192.168.43.150)
- authentication: auth_pass must match on both nodes; keepalived uses it to communicate
- virtual_router_id: must be identical on both nodes
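The interplay of priority and weight is worth spelling out: when the chk_nginx script fails on the master, keepalived adds the script's weight (-20) to the configured priority, which is exactly enough here to drop the master below the backup. A small arithmetic sketch (note that in this article the check script also stops keepalived outright, so failover happens either way):

```shell
# Effective VRRP priority when the tracked script fails:
# the script's weight (-20) is added to the configured priority.
master_prio=250
backup_prio=240
weight=-20

effective=$((master_prio + weight))
echo "master effective priority: $effective"

if [ "$effective" -lt "$backup_prio" ]; then
    echo "backup (priority $backup_prio) wins the election: VIP moves to 192.168.43.102"
fi
```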

5. Verifying High Availability (HA)

Step 1: Start the Keepalived and Nginx services on the Master node:

[root@localhost keepalived]# keepalived -D -f /etc/keepalived/keepalived.conf
[root@localhost keepalived]# service nginx start

Check the Nginx processes:

[root@localhost keepalived]# ps -aux|grep nginx
root  6390 0.0 0.0 20484 612 ?  Ss 19:13 0:00 nginx: master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
nobody  6392 0.0 0.0 23008 1628 ?  S 19:13 0:00 nginx: worker process
root  6978 0.0 0.0 112672 968 pts/0 S+ 20:08 0:00 grep --color=auto nginx

Check the Keepalived processes:

[root@localhost keepalived]# ps -aux|grep keepalived
root  6402 0.0 0.0 45920 1016 ?  Ss 19:13 0:00 keepalived -D -f /etc/keepalived/keepalived.conf
root  6403 0.0 0.0 48044 1468 ?  S 19:13 0:00 keepalived -D -f /etc/keepalived/keepalived.conf
root  6404 0.0 0.0 50128 1780 ?  S 19:13 0:00 keepalived -D -f /etc/keepalived/keepalived.conf
root  7004 0.0 0.0 112672 976 pts/0 S+ 20:10 0:00 grep --color=auto keepalived

Use `ip addr` to check the VIP binding; if 192.168.43.150 appears under ens33, the VIP is bound to the Master node:

[root@localhost keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:91:bf:59 brd ff:ff:ff:ff:ff:ff
inet 192.168.43.101/24 brd 192.168.43.255 scope global ens33
valid_lft forever preferred_lft forever
inet 192.168.43.150/32 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::9abb:4544:f6db:8255/64 scope link
valid_lft forever preferred_lft forever
inet6 fe80::b0b3:d0ca:7382:2779/64 scope link tentative dadfailed
valid_lft forever preferred_lft forever
inet6 fe80::314f:5fe7:4e4b:64ed/64 scope link tentative dadfailed
valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
link/ether 52:54:00:2b:74:aa brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
link/ether 52:54:00:2b:74:aa brd ff:ff:ff:ff:ff:ff

Step 2: Start the Nginx and Keepalived services on the Backup node and check them in the same way. If the virtual IP also appears on the Backup node, the Keepalived configuration is wrong; that condition is known as split-brain.

[root@localhost keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:14:df:79 brd ff:ff:ff:ff:ff:ff
inet 192.168.43.102/24 brd 192.168.43.255 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::314f:5fe7:4e4b:64ed/64 scope link
valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
link/ether 52:54:00:2b:74:aa brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
link/ether 52:54:00:2b:74:aa brd ff:ff:ff:ff:ff:ff
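A tiny sanity check for the split-brain condition just described: the VIP must be held by exactly one node at a time. The two per-node results are stubbed below; on a real node each would come from something like `ip addr show dev ens33 | grep -c 192.168.43.150`:

```shell
# Stubbed result of checking each node for the VIP (1 = VIP present).
vip_on_master=1
vip_on_backup=0

holders=$((vip_on_master + vip_on_backup))
if [ "$holders" -eq 1 ]; then
    echo "ok: VIP held by exactly one node"
elif [ "$holders" -eq 0 ]; then
    echo "problem: no node holds the VIP"
else
    echo "problem: split-brain, both nodes hold the VIP"
fi
```

Common split-brain causes are mismatched virtual_router_id or auth_pass values, or a firewall blocking the VRRP multicast advertisements.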

Step 3: Verify the service.

Browse to http://192.168.43.150 and force-refresh several times: the 103 and 104 pages alternate and show NGINX-1, which means the Master node is forwarding the web requests.

Step 4: Stop the Keepalived and Nginx services on the Master and watch the service fail over:

[root@localhost keepalived]# killall keepalived
[root@localhost keepalived]# service nginx stop

Force-refreshing 192.168.43.150 now still alternates between 103 and 104 but shows NGINX-2: the VIP has moved to 192.168.43.102, proving that service fails over to the backup node automatically.

Step 5: Restart the Keepalived and Nginx services on the Master.

Verifying again shows that the Master has taken the VIP back: the pages still alternate between 103 and 104 but now show NGINX-1. (Preemption happens here despite the nopreempt directive, because nopreempt only takes effect when both instances are configured with state BACKUP, as described in the next section.)

IV. Keepalived Preemptive vs. Non-preemptive Mode

Keepalived HA can run in preemptive or non-preemptive mode. In preemptive mode, when the MASTER recovers from a failure it takes the VIP back from the BACKUP node. In non-preemptive mode, the recovered MASTER does not reclaim the VIP from the BACKUP that was promoted to MASTER.

Non-preemptive configuration:

1> Add the nopreempt directive to the vrrp_instance block on both nodes, meaning neither node fights for the VIP.

2> Set state to BACKUP on both nodes. Both keepalived instances then start in the BACKUP state and, after exchanging multicast advertisements, elect a MASTER by priority. Because both are configured with nopreempt, a MASTER that recovers from a failure does not grab the VIP back, avoiding the service delay an extra VIP switch could cause.
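The difference between the two modes comes down to one rule about when a recovering node may claim the VIP; a minimal sketch (the `claim` helper is our own, not keepalived code):

```shell
# claim MODE MY_PRIO HOLDER_PRIO - may a recovering node take the VIP?
# HOLDER_PRIO is empty when no node currently holds it.
claim() {
    if [ -z "$3" ]; then
        echo yes    # VIP is free: take it regardless of mode
    elif [ "$1" = "preempt" ] && [ "$2" -gt "$3" ]; then
        echo yes    # preemptive mode: higher priority wins the VIP back
    else
        echo no     # non-preemptive mode: the current holder keeps the VIP
    fi
}

claim preempt   250 240    # recovered master grabs the VIP back
claim nopreempt 250 240    # recovered master leaves the VIP alone
claim nopreempt 250 ""     # VIP unheld: the recovered node takes it
```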

That concludes the walkthrough; hopefully it helps with your own setup.

Original article: https://juejin.im/post/5d6c686e6fb9a06b155dd687
