HAProxy is a very flexible load-balancing tool with a huge number of configuration options and a few quirks of its own. Describing everything I have drafted on this subject in a single article would make it huge and hard to digest, so here I will try to give an overview of the basic configuration principles.
At the time of writing, the stable version of HAProxy (1.5.3) had just been released. This version supports balancing of SSL connections. Back when I first had to work with HAProxy, delivering SSL traffic to the destination server was usually done by simply forwarding connections to port 443.
Unfortunately, the latest version of HAProxy is not yet available in the software repositories of CentOS and Ubuntu, so I am going to work with the previous version. I plan to write a separate guide on building HAProxy from a source package.
I’m going to work on CentOS.
HAProxy can be installed with the package manager:
yum install haproxy
The configuration file is located in
By default, all configuration options are described in this single file. It contains both general settings and the settings of back-ends.
The options in the “global” section usually don’t need to be changed.
A back-end is a server behind the load balancer; another name for it is a “web head”.
There are two approaches to configuring back-ends:
1. The simple method, “listen -> servers”, can be used when you have several web servers (say, three) with identical parameters (CPU/RAM) and all traffic is distributed evenly between them. There is no difference in the servers’ roles: each of them is able to handle any incoming request.
listen <listener_name>
    bind <ip_address>:80
    option <option1>
    option <option2>
    ....................
    option <optionN>
    server server1 192.168.1.10:80 <option1> <option2> ...
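As an illustration, a minimal sketch of such a section might look like the following (the section name, server names, and addresses are hypothetical):

```
listen web_cluster
    bind 192.168.1.2:80
    mode http
    balance roundrobin
    option forwardfor
    server web1 192.168.1.10:80 check
    server web2 192.168.1.11:80 check
    server web3 192.168.1.12:80 check
```

Here “check” enables periodic health checks, so a failed server is taken out of rotation automatically.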
2. The advanced method, “frontend -> backend -> servers”. In this case servers are grouped into backends (which are similar to the “listen” sections) depending on their role. Backends, in turn, are tied together by frontends: you can use ACLs in a frontend to decide which backend each request is delivered to.
frontend <instance_name>
    bind <ip_address>:<port>
    mode <layer mode>
    option <option1>
    option <option2>
    …
    option <optionN>
    acl <acl_name1> <acl_type> <acl_definition>
    use_backend <backend_name> if <acl_name1>
    default_backend <backend_name>

backend static
    balance <balance method>
    option <option1>
    option <option2>
    …
    option <optionN>
    server <server_name1> <node_ip_address>:<port> <option1> <option2> … <optionN>
    server <server_name2> <node_ip_address>:<port> <option1> <option2> … <optionN>

backend web
    server <server_name> <node_ip_address>:<port> <option1> <option2> … <optionN>

backend backoffice
    server <server_name> <node_ip_address>:<port> <option1> <option2> … <optionN>
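To make the template more concrete, here is a hypothetical example (all names, addresses, and the ACL pattern are made up) that sends requests for static files to one backend and everything else to another:

```
frontend http_in
    bind 192.168.1.2:80
    mode http
    option httplog
    acl url_static path_end .jpg .png .css .js
    use_backend static if url_static
    default_backend web

backend static
    mode http
    balance roundrobin
    server static1 192.168.1.20:80 check

backend web
    mode http
    balance roundrobin
    server web1 192.168.1.10:80 check
    server web2 192.168.1.11:80 check
```

The “path_end” ACL type matches the end of the request path, which is a convenient way to recognize static assets.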
bind – defines the IP address and port on which to listen for incoming connections.
There are only two load-balancing modes (mode <layer mode>):
- http – used to balance HTTP traffic (OSI layer 7). Allows manipulating headers and cookies.
- tcp – universal mode (OSI layers 3-4). Can be used for any traffic (https, mysql, smtp, etc.).
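For instance, delivering HTTPS traffic to port 443 without decrypting it (the approach mentioned at the beginning of the article) is done in tcp mode. A sketch with hypothetical names and addresses:

```
listen https_passthrough
    bind 192.168.1.2:443
    mode tcp
    balance source
    server web1 192.168.1.10:443 check
    server web2 192.168.1.11:443 check
```

“balance source” hashes the client IP so the same client keeps hitting the same server, which helps SSL sessions survive across requests.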
option – enables additional features in each section. You can check all available options here.
The description of all options for “server” can be found on the following page: