After consuming the Neutron LBaaS v1 API service through cd.OS, I decided to dig a bit into the LBaaS source code to find out what powers the service behind the scenes.
I find it really interesting that each cloud provider comes up with a different terminology and feature set when dealing with load balancers; that includes the OpenStack, CloudStack and AWS implementations. They all use different names to refer to load balancer resources, and their feature sets also differ significantly.
In the next series of blog posts I’ll try to find more information about the implementation of load balancers for OpenStack, CloudStack and AWS and document as much as I can.
Before looking at the OpenStack LBaaS implementation, it is important to understand some of HAProxy's terminology:
- Access Control List (ACL)
  - ACLs are used to test some condition and perform an action based on the result
  - They allow flexible network traffic forwarding based on a variety of factors
- Backend
  - A set of servers that receives forwarded requests; a backend definition specifies:
    - which load-balancing algorithm to use
    - a list of servers and ports
- Frontend
  - A frontend defines how requests should be forwarded to backends
  - Frontends are defined in the frontend section of the HAProxy configuration
  - Their definitions are composed of the following components:
    - a set of IP addresses and a port (e.g. 10.1.1.7:80, *:443, etc.)
    - use_backend rules, which define which backends to use depending on which ACL conditions are matched, and/or a default_backend rule that handles every other case
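To make those pieces concrete, here is a minimal HAProxy configuration sketch; the names (web_frontend, web_servers, static_servers) and addresses are made up for illustration:

```
frontend web_frontend
    bind *:80
    # ACL: matched when the request path begins with /static
    acl is_static path_beg /static
    # route matching traffic to a dedicated backend...
    use_backend static_servers if is_static
    # ...and everything else to the default one
    default_backend web_servers

backend web_servers
    balance roundrobin
    server web1 10.0.0.11:80 check
    server web2 10.0.0.12:80 check

backend static_servers
    balance roundrobin
    server static1 10.0.0.21:80 check
```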
OpenStack LBaaS terminology

- Pool
  - A logical set of devices, such as web servers, that you group together to receive and process traffic.
  - The load-balancing algorithm chooses which member of the pool handles new requests or connections that are received on a listener. Each listener has one default pool.
  - Looks like it maps to an HAProxy backend config
- VIP
  - A virtual IP (VIP) makes a load balancer accessible to clients
  - A VIP is allocated in the form of a port on the selected load-balanced subnet
  - Defines the session persistence
  - Looks like it maps to an HAProxy frontend config
- Member
  - The application that runs on the back-end server.
- Health monitor
  - Determines whether or not back-end members of the pool can process a request. A pool can have one health monitor associated with it.
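A rough Python sketch of how these resources relate; the class and field names are illustrative, not the actual Neutron models:

```python
# Illustrative model of the LBaaS resources; not the actual Neutron classes.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Member:
    address: str          # address of the back-end server
    protocol_port: int    # port the application listens on

@dataclass
class HealthMonitor:
    type: str             # e.g. "HTTP" or "TCP"
    delay: int            # seconds between health probes

@dataclass
class Pool:
    lb_method: str                                   # e.g. "ROUND_ROBIN"
    members: List[Member] = field(default_factory=list)
    health_monitor: Optional[HealthMonitor] = None   # at most one per pool

@dataclass
class Vip:
    address: str                          # the virtual IP clients connect to
    protocol_port: int
    session_persistence: Optional[str]    # e.g. "SOURCE_IP"
    pool: Optional[Pool] = None           # the VIP fronts one pool
```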
Types of algorithms
- Round robin: Each server is used in turn, according to its weight.
- Source IP: The source IP address is hashed and divided by the total weight of the running servers to designate which server will receive the request.
- Least connections: The server with the lowest number of connections receives the connection.
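The three algorithms can be sketched in Python. This is purely illustrative, not the driver's or HAProxy's actual implementation (HAProxy's source-IP hashing, for instance, also accounts for server weights):

```python
# Illustrative sketch of the three load-balancing algorithms.
import hashlib
import itertools

servers = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]

# Round robin: each server is used in turn.
_rr = itertools.cycle(servers)

def round_robin():
    return next(_rr)

# Source IP: hash the client address onto the server list so the same
# client is always sent to the same server (weights omitted here).
def source_ip(client_ip):
    digest = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
    return servers[digest % len(servers)]

# Least connections: pick the server with the fewest active connections.
connections = {s: 0 for s in servers}

def least_connections():
    server = min(connections, key=connections.get)
    connections[server] += 1
    return server
```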
Types of session persistence
- Source IP: All connections that originate from the same source IP address are handled by the same member of the pool.
- HTTP cookie: The load-balancing function creates a cookie on the first request from a client. Subsequent requests that contain the same cookie value are handled by the same member of the pool.
- App cookie: The load-balancing function relies on a cookie established by the back-end application. All requests with the same cookie value are handled by the same member of the pool.
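HTTP-cookie persistence can be sketched the same way; again an illustrative toy, not the real implementation:

```python
# Illustrative sketch of HTTP-cookie session persistence.
import itertools
import uuid

members = ["10.0.0.11", "10.0.0.12"]
_rr = itertools.cycle(members)
_sessions = {}  # cookie value -> pool member

def handle_request(cookie=None):
    """Reuse the member tied to a known cookie; otherwise pick a new
    member round-robin and mint a cookie for it on the first request."""
    if cookie in _sessions:
        return cookie, _sessions[cookie]
    cookie = uuid.uuid4().hex
    _sessions[cookie] = next(_rr)
    return cookie, _sessions[cookie]
```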
Looking into the code
The code for the OpenStack load balancer service can be found here. After cloning the project and snooping around, I found the directory /neutron-lbaas/neutron_lbaas/services/loadbalancer/drivers/haproxy, which seems to contain the code for the HAProxy driver implementation.
The HAProxy driver implementation has a template directory with the following files:
The template files seem to define the front-end and back-end configuration for the HAProxy load balancers. It looks like the LBaaS driver uses the templates to inject the data defined in the pool, VIP, member and monitor resources and to create the required HAProxy config files.
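As a hedged illustration of that mapping (the IDs, addresses and options below are invented, not actual driver output), a rendered config might pair a frontend generated from the VIP with a backend generated from the pool, its members and the health monitor:

```
# frontend rendered from the VIP resource (IDs are illustrative)
frontend 1a2b3c
    bind 10.0.0.10:80
    default_backend 4d5e6f

# backend rendered from the pool, its members and the health monitor
backend 4d5e6f
    balance roundrobin
    option httpchk GET /
    server member-1 10.0.0.11:80 check inter 3s
    server member-2 10.0.0.12:80 check inter 3s
```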
Besides the templates, the HAProxy driver has the following files:
- Builds the HAProxy configuration file for
- Seems to handle TLS load balancers
- Need to find more info about it
- HaproxyNSDriver implementation
The unit tests for the LBaaS driver can be found in the following directory:
In the unit tests there is some sample data for the load balancer implementation, including front-end and back-end config files.