Use NginX for Load Balancing


I’d like to describe how to use NginX for load balancing across multiple backends, for example Apache.

The proposed network diagram is the following: an NginX load balancer in front of three Apache backend servers.

The task is pretty simple, and we are going to use the following NginX directives:

  • upstream – this directive is provided by the HttpUpstream module and allows distributing traffic between multiple servers.
  • proxy_pass – this directive is provided by the HttpProxy module. It allows delivering (proxying) traffic to a backend.

For example, we have 3 web heads that run 1 website:

Apache#1:
ip: 192.168.10.10

Apache#2:
ip: 192.168.10.20

Apache#3:
ip: 192.168.10.30

Let’s create the NginX website configuration file and enter the following code into it:

upstream http {
    server 192.168.10.10 weight=2 max_fails=2 fail_timeout=2s;
    server 192.168.10.20 weight=2 max_fails=2 fail_timeout=2s;
    server 192.168.10.30 weight=2 max_fails=2 fail_timeout=2s;
}
  •   weight – determines the weight of the server in the cluster. All servers are identical in this example and can serve an equal share of web clients. If one of your servers is more powerful than the others, you can set a higher value for this parameter, and NginX will deliver more traffic to it.
  •   max_fails – determines the number of failed connection attempts to the backend before it is considered unavailable.
  •   fail_timeout – the time window in which the failed attempts are counted, and also how long the server is then considered unavailable.

In this example a backend will be marked as “non-working” after 2 failed connections within 2 seconds. NginX will not deliver traffic to it for the next 2 seconds, after which it will try the backend again.
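Two related per-server parameters can also be useful here. A minimal sketch (the extra host 192.168.10.40 is hypothetical, added only for illustration):

```nginx
upstream http {
    server 192.168.10.10 weight=2 max_fails=2 fail_timeout=2s;
    server 192.168.10.20 weight=2 max_fails=2 fail_timeout=2s;
    # "down" permanently excludes a server from the rotation (e.g. for maintenance)
    server 192.168.10.30 down;
    # "backup" servers receive traffic only when all primary servers are unavailable
    server 192.168.10.40 backup;
}
```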

The following load balancing methods are available (the method is specified at the beginning of the upstream section):

  •   ip_hash – with this algorithm, all requests from a given client will be delivered to the same backend server, based on the client’s IP address. Not compatible with weight.
  •   least_conn – all new requests from clients will be delivered to the least loaded server.
  •   round-robin – default mode. Requests will be distributed between all servers one by one.
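For example, to switch the upstream above from the default round-robin to least_conn, place the method directive at the beginning of the upstream block (weights are omitted here, since ip_hash does not combine with them):

```nginx
upstream http {
    least_conn;   # or: ip_hash;
    server 192.168.10.10 max_fails=2 fail_timeout=2s;
    server 192.168.10.20 max_fails=2 fail_timeout=2s;
    server 192.168.10.30 max_fails=2 fail_timeout=2s;
}
```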

So we have defined the servers and websites. Next we need to tell NginX what to do with all this stuff. Let’s create the location with proxy definitions:

location / {
    proxy_read_timeout 1200;
    proxy_connect_timeout 1200;
    proxy_pass http://http/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

The complete config file looks like the following:

upstream http {
    server 192.168.10.10:80 weight=2 max_fails=2 fail_timeout=2s;
    server 192.168.10.20:80 weight=2 max_fails=2 fail_timeout=2s;
    server 192.168.10.30:80 weight=2 max_fails=2 fail_timeout=2s;
}

server {

    server_name mywebsite.com www.mywebsite.com;
    listen 80;

    location / {
        proxy_read_timeout 1200;
        proxy_connect_timeout 1200;
        proxy_pass http://http/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

The main benefit of NginX in terms of load balancing is SSL termination. In this case secure sessions are established with the load balancer, and responses are received from it; NginX then opens its own connection to the backend server. There are two approaches to the configuration:

1. Load balancing of https connections can be configured by analogy with the previous example. The SSL websites should be described in the configuration of the Apache backend servers. In this case an https connection is established with NginX, which then establishes a second secure connection with the Apache backend, receives the encrypted response, decrypts it, re-encrypts it, and delivers the final result to the client securely. This is quite time-consuming and, in terms of performance, very resource-intensive.

In this case the config files will look like the following:

upstream https {
    server 192.168.10.10:443 weight=2 max_fails=2 fail_timeout=2s;
    server 192.168.10.20:443 weight=2 max_fails=2 fail_timeout=2s;
    server 192.168.10.30:443 weight=2 max_fails=2 fail_timeout=2s;
}

server {
    server_name mywebsite.com www.mywebsite.com;
    listen 443;

    ssl on;
    ssl_certificate /etc/nginx/SSL/hostname.pem;
    ssl_certificate_key /etc/nginx/SSL/server.key;

    location / {
        proxy_read_timeout 1200;
        proxy_connect_timeout 1200;
        proxy_pass https://https/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

2. Optionally we can configure an https front-end in NginX and deliver plain http (not https) traffic to the backend servers. This approach can’t be considered a “universal pill” because it will not work for all frameworks and CMS systems.
Most CMS systems have internal methods to determine whether the current session is secure. Usually such checks are performed on login forms and checkout pages. If the session is not secure, the website redirects the client to https. A website can determine that the session is secure by the presence of the “HTTPS” variable, set to “on”, in the $_SERVER environment (request headers). Another approach is to check whether the connection was established on port 443.

It is not hard to guess that, without additional configuration steps on the backend servers, clients will receive a redirect-loop error.
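One common way to perform those additional steps on an Apache backend is to translate the HTTPS request header that NginX sets (proxy_set_header HTTPS on;) into the environment variable the CMS checks. A sketch, assuming mod_setenvif is enabled on the backend:

```apache
# If the proxied request carries the "HTTPS: on" header added by NginX,
# expose it to PHP/CMS code as the HTTPS environment variable.
SetEnvIfNoCase HTTPS "^on$" HTTPS=on
```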

If this method doesn’t work, please return to item 1.

In this case NginX configuration files will look like the following:

upstream https {
    server 192.168.10.10:80 weight=2 max_fails=2 fail_timeout=2s;
    server 192.168.10.20:80 weight=2 max_fails=2 fail_timeout=2s;
    server 192.168.10.30:80 weight=2 max_fails=2 fail_timeout=2s;
}

server {
    server_name mywebsite.com www.mywebsite.com;
    listen 443;

    ssl on;
    ssl_certificate /etc/nginx/SSL/hostname.pem;
    ssl_certificate_key /etc/nginx/SSL/server.key;

    location / {
        proxy_read_timeout 1200;
        proxy_connect_timeout 1200;
        proxy_pass http://https/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header HTTPS on;
    }
}

Optionally you can configure NginX to deliver the static content of the website. In this case you need to copy all website data to the NginX load balancer. Next you’ll need to define the location for static content in the nginx configuration files of the website. Paste the following right before the location /:

location ~* \.(ico|gif|jpeg|jpg|png|eot|ttf|swf|woff)$ {
    root /var/www/html;
    expires 30d;
    access_log off;
}
location ~* \.(css|js)$ {
    root /var/www/html;
    expires 1d;
    access_log off;
}

Make sure to replace /var/www/html with the actual path to your static files.

Optionally, instead of copying the files, you can mount the website folder from a backend to the NginX server over NFS.
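A minimal sketch of such a mount in /etc/fstab on the NginX server (the exported path /var/www/html is an assumption; the backend must export it via NFS):

```
# /etc/fstab on the NginX load balancer: mount the backend's web root read-only
192.168.10.10:/var/www/html  /var/www/html  nfs  ro,defaults  0  0
```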
