This article is part of the "Linux Operations: Enterprise Architecture in Practice" series.
Back when I was studying reverse proxying and load balancing with nginx, I found nginx's own upstream backend configuration awkward to work with, so I used Tengine, Taobao's fork of nginx. Here is a summary.
Official site: http://tengine.taobao.org/download.html

```shell
[root@along app]# wget http://tengine.taobao.org/download/tengine-2.2.3.tar.gz
[root@along app]# tar -xvf tengine-2.2.3.tar.gz
```
```shell
[root@along app]# groupadd nginx
[root@along app]# useradd -s /sbin/nologin -g nginx -M nginx
[root@along app]# yum -y install gc gcc gcc-c++ pcre-devel zlib-devel openssl-devel
```
```shell
[root@along app]# cd tengine-2.2.3/
[root@along tengine]# ./configure --user=nginx --group=nginx --prefix=/app/tengine \
    --with-http_stub_status_module --with-http_ssl_module --with-http_gzip_static_module
[root@along tengine]# make && make install
[root@along tengine]# chown -R nginx.nginx /app/tengine
[root@along tengine]# ll /app/tengine
total 8
drwxr-xr-x 2 nginx nginx 4096 Feb 20 14:55 conf
drwxr-xr-x 2 nginx nginx   40 Feb 20 14:50 html
drwxr-xr-x 2 nginx nginx 4096 Feb 20 14:50 include
drwxr-xr-x 2 nginx nginx    6 Feb 20 14:50 logs
drwxr-xr-x 2 nginx nginx    6 Feb 20 14:50 modules
drwxr-xr-x 2 nginx nginx   35 Feb 20 14:50 sbin
```
Note: to manage Tengine with systemctl, create a systemd unit file by hand:
```shell
[root@along nginx]# vim /usr/lib/systemd/system/nginx.service
```

```ini
[Unit]
Description=nginx - high performance web server
Documentation=http://nginx.org/en/docs/
After=network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
PIDFile=/app/tengine/logs/nginx.pid
ExecStartPre=/app/tengine/sbin/nginx -t -c /app/tengine/conf/nginx.conf
ExecStart=/app/tengine/sbin/nginx -c /app/tengine/conf/nginx.conf
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true

[Install]
WantedBy=multi-user.target
```
```shell
[root@along ~]# systemctl start nginx
[root@along ~]# ss -nutlp | grep 80
tcp  LISTEN  0  128  *:80  *:*  users:(("nginx",pid=4933,fd=6),("nginx",pid=4932,fd=6))
```
Verify by visiting the page in a browser.
Since Tengine's other features are configured much like nginx's, I won't demonstrate them here; the focus is the reverse proxy configuration, which I find more convenient.
Tengine's reverse proxy configuration format is quite similar to HAProxy's.
Prepare web services on the two backend servers beforehand (nginx, httpd, etc. all work).

```shell
[root@along tengine]# cd /app/tengine/conf/
[root@along conf]# vim nginx.conf
```

```nginx
http {
    ... ...
    # Backend proxy pool; the default algorithm is round robin
    upstream srv {
        server 192.168.10.101:80;
        server 192.168.10.106:80;
        check interval=3000 rise=2 fall=5 timeout=1000 type=http;
        check_http_send "HEAD / HTTP/1.0\r\n\r\n";
        check_http_expect_alive http_2xx http_3xx;
    }
    ... ...
    # Reverse proxy in the server's location block
    server {
        location / {
            proxy_pass http://srv;
        }
    }
    ... ...
}
```
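Tengine's health-check module also provides a status page showing the live check results of each backend. A minimal sketch, assuming a `/status` path (the location name is arbitrary, not from the original config):

```nginx
server {
    # Hypothetical path; any location works.
    location /status {
        check_status;      # Tengine upstream_check module status page
        access_log off;
    }
}
```

Visiting that path in a browser then shows which upstream servers are currently up or down.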
(1) Check the configuration for errors:

```shell
[root@along tengine]# ./sbin/nginx -t
nginx: the configuration file /app/tengine/conf/nginx.conf syntax is ok
nginx: configuration file /app/tengine/conf/nginx.conf test is successful
```
(2) Restart the service:

```shell
[root@along tengine]# systemctl restart nginx
```
(3) Verify in a browser.
Since the default algorithm is round robin, refreshing the page rotates requests between the two backend web servers.
Round robin is upstream's default scheduling method: each request is assigned to the next backend server in turn, and if a backend goes down it is removed automatically.

```nginx
upstream srv {
    server 192.168.10.101:80;
    server 192.168.10.106:80;
}
```
Weighted round robin is an enhanced round robin that lets you set the rotation ratio: `weight` is proportional to the probability of receiving a request. It is mainly used when the backend servers are heterogeneous.

```nginx
upstream srv {
    server 192.168.10.101:80 weight=1;
    server 192.168.10.106:80 weight=2;
}
```
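To build intuition for the 1:2 split above, here is a tiny shell simulation. This is not nginx code, and nginx's actual "smooth" weighted round robin interleaves the servers differently, but the long-run ratio is the same:

```shell
# Naive weighted rotation: .101 has weight 1, .106 has weight 2,
# so .106 appears twice in the pick list.
servers="101 106 106"
picks=""
for i in 1 2 3 4 5 6; do
  idx=$(( (i - 1) % 3 + 1 ))
  pick=$(echo "$servers" | cut -d" " -f"$idx")
  picks="$picks 192.168.10.$pick"
  echo "request $i -> 192.168.10.$pick"
done
```

Over six requests, 192.168.10.106 receives four and 192.168.10.101 receives two, matching the 2:1 weights.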
Each request is assigned by the hash of the visitor's IP (i.e., the server in front of nginx, or the client itself), so each visitor consistently reaches the same backend server, which solves session-consistency problems.
```nginx
upstream srv {
    ip_hash;
    server 192.168.10.101:80;
    server 192.168.10.106:80;
}
```
Note: ip_hash keys on the client address, so it is a poor fit when nginx itself sits behind another proxy layer; all requests would then appear to come from one IP and land on a single backend.
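A related caveat from the nginx documentation: with ip_hash, a server that must be taken out of service should be marked with the `down` parameter rather than deleted from the list, so the hash mapping of the remaining clients is preserved. A sketch using the same two backends:

```nginx
upstream srv {
    ip_hash;
    server 192.168.10.101:80;
    server 192.168.10.106:80 down;   # out of service; other clients keep their hash slots
}
```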
fair, as the name suggests, distributes requests fairly according to backend response time (rt): backends with a shorter rt are assigned requests first. To use this scheduling algorithm you must build in nginx's third-party upstream_fair module.

```nginx
upstream srv {
    fair;
    server 192.168.10.101:80;
    server 192.168.10.106:80;
}
```
Similar to ip_hash, but **requests are assigned by the hash of the requested URL**, so each URL is directed to the same backend server. This is mainly useful when the backends are caches.

```nginx
upstream srv {
    server 192.168.10.101:80;
    server 192.168.10.106:80;
    hash $request_uri;
    hash_method crc32;
}
```
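Tengine additionally ships a consistent-hash module, which remaps far fewer cached URLs than a plain hash when a backend is added or removed. A sketch assuming the same two backends:

```nginx
upstream srv {
    consistent_hash $request_uri;   # Tengine ngx_http_upstream_consistent_hash_module
    server 192.168.10.101:80;
    server 192.168.10.106:80;
}
```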
(1) Parameter description. Per the nginx documentation, the `server` directive inside an `upstream` block accepts, among others: `weight=number` (scheduling weight, default 1); `max_fails=number` (failed attempts within `fail_timeout` before the server is considered unavailable, default 1); `fail_timeout=time` (both the failure-counting window and the period the server then stays marked unavailable, default 10s); `backup` (receive requests only when the non-backup servers are unavailable); and `down` (mark the server permanently offline).
(2) Example:

```nginx
upstream backend {
    server backend1.example.com weight=5;
    server 127.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server unix:/tmp/backend3;
}
```