Nathan Evans' Nemesis of the Moment

Cultural learnings of HA-Proxy, for make benefit…

Posted in Unix Environment by Nathan B. Evans on March 3, 2011

I’ve been setting up lots and lots of small details on our HA-Proxy cluster this week. This post is just a small digest of some of the things I have learnt.

The option nolinger is considered harmful.

I read somewhere that this option should be enabled because it frees up socket resources quicker and doesn’t leave them lying around when blatantly dead. I enabled it and thought nothing more of it. Having forgotten I had done so, I then started noticing strange behaviours. Most telling was that HA-Proxy’s webstats UI would truncate abruptly before completing. Fortunately, Willy Tarreau (the author/maintainer) was very quick to respond to my pestering e-mails, and after seeing my Wireshark trace he immediately had a few ideas of what could be causing it. Following his suggestion, I removed the “no linger” option from my configuration and the problem went away.

Therefore: “option nolinger considered harmful.” You’ve been warned!
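The truncation makes more sense once you know what “no linger” does at the socket level: it sets SO_LINGER with a zero timeout, so closing the socket emits a TCP RST rather than a graceful FIN, and data still in flight can be discarded. A small Python sketch of the underlying socket behaviour (not HA-Proxy itself, just an illustration):

```python
import socket
import struct

# A throwaway TCP server on loopback.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())
conn, _ = srv.accept()

# SO_LINGER with l_onoff=1, l_linger=0: close() now aborts the
# connection with an RST instead of a clean FIN handshake.
cli.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))
cli.send(b"an unfinished response...")
cli.close()

# The receiving side sees a connection reset, and buffered data can be
# thrown away - hence truncated stats pages.
try:
    while conn.recv(1024):
        pass
    result = "clean close (FIN)"
except ConnectionResetError:
    result = "aborted (RST)"
```

So the “quicker resource cleanup” comes at the cost of abortive closes, which is exactly what a Wireshark trace of the truncated webstats page shows.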

Webstats UI has “hidden” administrative functions

While reading the infamous “wall of text” that is the HA-Proxy documentation, I came across a neat option called “stats admin”. It enables a single piece of extra functionality (at least it does in v1.4.11) that will let you flag servers as online or offline. This is useful if you’re planning to take one or more servers out of a backend’s pool, perhaps for maintenance. I would wager that Willy intends to add more administrative features in the future, so adding this one to your config now could save you some time later.

Of course, it is unlikely that you will want such a sensitive function exposed to everyone who uses webstats. So it is fortunate that this option supports a condition expression. I set mine up like the following:

userlist UsersFor_HAProxyStatistics
  group admin users admin
  user admin insecure-password godwouldntbeupthislate
  user stats insecure-password letmein

listen HAProxy-Statistics *:81
  mode http
  stats enable
  stats uri /haproxy?stats
  stats refresh 60s
  stats show-node
  stats show-legends
  acl AuthOkay_ReadOnly http_auth(UsersFor_HAProxyStatistics)
  acl AuthOkay_Admin http_auth_group(UsersFor_HAProxyStatistics) admin
  stats http-request auth realm HAProxy-Statistics unless AuthOkay_ReadOnly
  stats admin if AuthOkay_Admin

Request/response rewriting is mutually exclusive with keep-alive connections

At least in current versions, HA-Proxy doesn’t seem to be able to perform rewriting on connections that have been kept alive. It is limited to analysing only the first request and response. Any further requests that occur on that connection will go unanalysed. So if you are doing request or response rewriting, it is imperative that you set a special option to ensure that a connection can only be used once.

In my case, I just added the following to my frontend definition.

option http-server-close

Identifying your frontend from your backend

I was creating some rules to ensure that a particular URL could only be accessed through my HTTPS frontend. I wanted to prevent unencrypted HTTP access to this URL because it was using HTTP Basic authentication which uses clear text passwords across the wire.

Fortunately, HA-Proxy supports a fairly neat way of doing this by the means of tagging your frontend with a unique identifier which can then be matched against by the backend.

First of all, I setup my frontends like the following:

frontend Public-HTTP
  id 80
  mode http
  bind *:80
  option http-server-close
  default_backend Web-Farm

frontend Public-HTTPS
  id 8443
  mode http
  # Note: Port 8443 because the true 443 is being terminated by Stunnel, which then forwards to this 8443.
  bind *:8443
  option http-server-close
  default_backend Web-Farm

Then in my backend I cleared a space for defining “reusable” ACLs and then added the protective rule for the URL in question:

backend Web-Farm
  mode http
  balance roundrobin
  option httpchk
  server Web0 172.16.61.181:80 check
  server Web1 172.16.61.182:80 check

  # Common/useful ACLs
  acl ViaFrontend_PublicHttp fe_id 80
  acl ViaFrontend_PublicHttps fe_id 8443

  # Application security for: /MyWebPage/
  acl PathIs_MyWebPage path_beg -i /mywebpage
  http-request deny if PathIs_MyWebPage !ViaFrontend_PublicHttps

The piece of magic that makes this all work is the fe_id ACL criterion. Note that the “fe” stands for “frontend”.

Note that the http-request deny rule combines two ACLs with a boolean AND. HA-Proxy defaults to AND’ing; if you want OR, just type “or” or “||”. Negation is done in the normal C way with an exclamation mark, as shown in the above example. I tend to avoid the “unless” keyword, as I prefer the explicitness of “if” plus negation. But that’s just my personal preference as a long-time coder 🙂
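To make the combination rules concrete, here are two equivalent ways of writing the same deny rule, using the ACL names from the example above (a syntax sketch, not extra config you need to add):

```
# Implicit AND between conditions:
http-request deny if PathIs_MyWebPage !ViaFrontend_PublicHttps

# The equivalent "unless" form requires applying De Morgan's law - one
# reason to prefer the explicit "if" with negation:
http-request deny unless !PathIs_MyWebPage or ViaFrontend_PublicHttps
```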

Now if a user tries to visit http://.../MyWebPage they will get a big fat ugly 403 Forbidden error.

HTTP Basic authentication is finally very basic to do!

I came across a stumbling block this week. I assumed that Microsoft IIS, one of the best web servers available, could do HTTP Basic authentication, i.e. clear text passwords over the wire validated against some sort of clear text password file or database. It turns out that while IIS does support HTTP Basic auth’, it doesn’t support any form of simple backend. You have to validate against either the web server’s local Windows user accounts or against Active Directory. Great. The web page in question was just a little hacky thing we knocked up to get a customer of ours out of a hole. We didn’t want to create maintenance headaches for ourselves by creating a local user account on each web server in the farm, nor did we fancy creating them an AD account. They don’t even belong to our company!

Fortunately (that word again), and despite how poorly documented it is, HA-Proxy *does* support this!

First of all you need to create a userlist that will contain your users/groups that you will authenticate against:

userlist UsersFor_AcmeCorp
  user joebloggs insecure-password letmein

Then in your backend, you need to create an ACL that uses the http_auth criterion. And lastly, create an http-request auth rule that will cause the appropriate 401 Unauthorized and WWW-Authenticate: Basic response to be generated if the authentication has failed.

backend HttpServers
  .. normal backend stuff goes here as usual ..
  acl AuthOkay_AcmeCorp http_auth(UsersFor_AcmeCorp)
  http-request auth realm AcmeCorp if !AuthOkay_AcmeCorp

Remove sensitive IIS / ASP.NET response headers

Security unconscious folk need not apply.

It’s a slight security risk to be leaking your precise IIS and ASP.NET version numbers. Whilst these can be turned off in IIS configuration, it is really a concern for your frontend load balancer, i.e. HA-Proxy. I believe this because the headers can be useful for debugging on the internal LAN/VPN inside your company; only when they are about to touch the WAN do they become dangerous. Therefore:

frontend Public-HTTP
  # Remove headers that expose security-sensitive information.
  rspidel ^Server:.*$
  rspidel ^X-Powered-By:.*$
  rspidel ^X-AspNet-Version:.*$
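For reference, these are the sorts of headers a stock IIS/ASP.NET response carries and which the rules above will strip (illustrative values only):

```
Server: Microsoft-IIS/7.5
X-Powered-By: ASP.NET
X-AspNet-Version: 4.0.30319
```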

HTTPS and separation of concerns

I don’t know about Apache, but IIS 7.5 can have some annoying (but arguably expected) behaviours when HA-Proxy is passing traffic where the client believes it has an end-to-end HTTPS connection with the web server. My setup involves Stunnel terminating the SSL connection; from that point on it is just standard HTTP traffic to the backend servers. This means the backend servers don’t actually need to be listening on HTTPS/443 at all. However, when GET requests come in to them using the https:// scheme they can get a bit confused (or argumentative, I’m undecided). IIS seems to like sending back a 302 redirect response with a Location header that uses the http:// scheme. So then of course the web browser will follow the redirect to either a URL that doesn’t exist, or one which does exist but is itself merely a redirect back to the https:// scheme! Infinite loop, anyone?

The way to solve this is request rewriting, through some clever use of regular expressions.

frontend Public-HTTPS
  id 8443
  mode http
  bind *:8443
  option http-server-close
  default_backend Web-Farm

  # Rewrite requests so that they are passed to the backend as http:/ schemed requests.
  # This may be required if the backend web servers don't like handling https schemed requests over non-https transport.
  # I didn't use this in the end - but it might come in handy in the future so I left it commented out.
  # reqirep ^(\w+\ )https:/(/.*)$ \1http:/\2

  # Rewrite responses containing a Location header with HTTP scheme using the relative path.
  # We could alternatively just rewrite the http:/ to be https:/ but then it could break off-site redirects.
  rspirep ^Location:\s*http://.*?\.acmecorp.co.tld(/.*)$ Location:\ \1
  rspirep ^Location:(.*\?\w+=)http(%3a%2f%2f.*?\.acmecorp.co.tld%2f.*)$ Location:\ \1https\2

The first rspirep in the above example is the most important. The second is something more specific to a particular web application we’re hosting that uses a ?Redirect=http://yada.yada style query string in certain places.

The rsprep / rspirep rule (the i means case-insensitive matching) is very powerful. The only downside is that you need to be fairly fluent with regular expressions. It requires only two parameters: the first is your regular expression and the second is your string replacement.

The string replacement in the second parameter supports expansion based upon indexed capture groups from the matched regular expression. This is useful for merging very specific pieces of the match back into the replacement string, as I am doing in the example above. These take the form \1, \2, etc., where the number is the capture group’s index; capture groups are denoted in the regular expression with parentheses, if you didn’t know.
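Since HA-Proxy gives no feedback when a rewrite regex silently fails to match, I find it helps to test the expression outside HA-Proxy first. Here is the first rspirep rule expressed as a Python re.sub, using the placeholder hostname from the config above. Note that Python’s regex flavour is close to, but not identical to, the one HA-Proxy uses, so treat this as a sanity check only:

```python
import re

# The first rspirep rule from the example, as a Python regex.
pattern = r"^Location:\s*http://.*?\.acmecorp\.co\.tld(/.*)$"

# A sample response header; \1 expands to the captured path.
header = "Location: http://www.acmecorp.co.tld/MyWebPage/login"
rewritten = re.sub(pattern, r"Location: \1", header, flags=re.IGNORECASE)
```

Here rewritten becomes a relative Location header, which is what makes the redirect safe regardless of scheme.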

Truly “live” updates on the Webstats UI

One of the first things I noticed in the hours after deploying HA-Proxy is that the webstats counters held for each frontend, listen and backend are not actually updated as frequently as they perhaps ought to be. Indeed, the counters for any given connection are not accumulated until that connection has ended. This is bad if your application(s) tend to hold open long-duration connections, as it reduces the usefulness of HA-Proxy’s reporting. I’m sure there are very good performance reasons that Willy did this, as is alluded to in the documentation. Fortunately there is a very simple workaround in the form of the contstats option.

Simply add the following to your proxy and benefit from higher accuracy webstats:

option contstats

Until next time…

Safely pairing HA-Proxy with virtual network interface providers like Keepalived or Heartbeat

Posted in Unix Environment by Nathan B. Evans on March 1, 2011

This is sort of a follow-up to the Deploying HA-Proxy + Keepalived with Mercurial for distributed config post.

During testing we were coming across an issue where the HA-Proxy instance running on the slave member of our cluster would fail to bind some of its frontend proxies:

Starting haproxy: [ALERT] : Starting proxy Public-HTTPS: cannot bind socket

After some head scratching I noticed that the problem was only arising on those proxies that explicitly defined the IP address of a virtual interface that was being managed by Keepalived (or maybe Heartbeat for you).

This is because both of these High-Availability clustering systems use a rather simplistic design whereby the “shared” virtual IP is only installed on the active node in the cluster; nodes in a dormant state (i.e. the slaves) do not actually have those virtual IPs assigned to them at all. It’s a sort of “IP address hot-swapping” design. I learnt this by executing a simple command, first from the master server:

$ ip a
<snipped stuff for brevity>
2: seth0:  mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:15:5d:28:7d:19 brd ff:ff:ff:ff:ff:ff
    inet 172.16.61.151/24 brd 172.16.61.255 scope global seth0
    inet 172.16.61.150/24 brd 172.16.61.255 scope global secondary seth0:0
    inet 172.16.61.159/24 brd 172.16.61.255 scope global secondary seth0:1
    inet6 fe80::215:5dff:fe28:7d19/64 scope link
       valid_lft forever preferred_lft forever
<snipped trailing stuff for brevity>

Then again, from the slave server:

$ ip a
<snipped stuff for brevity>
2: seth0:  mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:15:5d:2d:9c:11 brd ff:ff:ff:ff:ff:ff
    inet 172.16.61.152/24 brd 172.16.61.255 scope global seth0
    inet6 fe80::215:5dff:fe2d:9c11/64 scope link
       valid_lft forever preferred_lft forever
<snipped trailing stuff for brevity>

Unfortunately this behaviour can cause problems for programs like HA-Proxy which have been configured to expect specific IP addresses to exist on the server. I was considering working around it by writing some scripts that hook events within the HA cluster to stop and start HA-Proxy when needed. But this approach seemed clunky and unintuitive. So I dug a little deeper and came across a bit of a gem hidden away in the depths of the Linux networking stack. It is a simple boolean setting called “net.ipv4.ip_nonlocal_bind” and it allows a program like HA-Proxy to create listening sockets bound to IP addresses that are not actually assigned to the server. It was created specially for this situation.

So in the end the fix was as simple as adding/updating the /etc/sysctl.conf file to include the following key/value pair:

net.ipv4.ip_nonlocal_bind=1
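You don’t need a reboot for the setting to take effect; sysctl can apply it immediately (standard procedure, sketched below — run as root):

```
# Re-read /etc/sysctl.conf and apply its entries:
sysctl -p

# Or set just this one key at runtime:
sysctl -w net.ipv4.ip_nonlocal_bind=1

# Confirm the kernel now allows non-local binds (should print 1):
cat /proc/sys/net/ipv4/ip_nonlocal_bind
```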

My previous experience of setting up these low-level High-Availability clusters was with Windows Server’s Network Load Balancing (NLB) feature. This works quite differently from Keepalived and Heartbeat: it relies upon some low-level ARP hacking/trickery and some sort of distributed time-slicing algorithm. But it does ensure that each node in the cluster (whether master or slave) remains allocated the virtual IP address(es) at all times. I suppose there is always more than one way to crack an egg…

Deploying HA-Proxy + Keepalived with Mercurial for distributed config

Posted in Automation, Source Control, Unix Environment by Nathan B. Evans on February 27, 2011

Something I have learnt (and re-learnt) too many times to count is that one of the strange wonders of working for a startup company is that the most bizarre tasks can land on your lap seemingly with no warning.

We’ve recently been doing a big revamp of our data centre environment, including two shiny new Hyper-V hosts, a Sonicwall firewall and the decommissioning of lots of legacy hardware that doesn’t support virtualisation. As part of all this work we needed to put in place several capabilities for routing application requests on our SaaS platform:

  1. Expose HTTP/80 and HTTPS/443 endpoints on the public web and route incoming requests based upon URL to specific (and possibly many) private internal servers.
  2. Expose a separate and “special” TCP 443 endpoint (on public web) that isn’t really HTTPS at all but will be used for tunnelling of our TCP application protocol. We intend to use this when we acquire pilot programme customers that don’t want the “hassle” of modifying anything on their network firewalls/proxies. Yes, really. Even worse, it will inspect the source IP address and, from that, determine what customer it is and then route it to the appropriate private internal server and port number.
  3. Expose various other TCP ports on public web and map these (in as traditional “port map” style as possible) directly to one specific private internal server.
  4. Be easy to change the configuration and be scriptable, so we can tick off the “continuous deployment” check box.
  5. Configuration changes must never tamper with existing connections.
  6. Optional bonus, be source controllable.

My first suggestion was that we write some PowerShell scripts to access the Sonicwall firewall through SSH and control its firewall tables directly. This was the plan for several months, in fact, whilst everything was getting put in place inside the data centre. I knew full well it wouldn’t be easy. First, there were some political issues inside the company with regard to a developer (me) having access to a central firewall. Second, I knew that creating and testing the scripts would be difficult, and that the whole CLI on the Sonicwall would surely not be as good as a Cisco’s.

I knew I could achieve #1 and #3 easily on a Sonicwall, as with any router really. But #2 was a little bit of an unknown as, frankly, I doubted whether a Sonicwall could do it without jumping through a ton of usability hoops. #4 and #6 were the greatest unknowns. I know you can export a Sonicwall’s configuration from the web interface, but it comes down as a binary file, which made me doubt whether the CLI could export it properly as some form of text file. And of course if you can’t get the configuration as a text file then it’s not really going to be truly source controllable either, so that’s #6 out.

Fortunately an alternative (and better!) solution presented itself in the form of HA-Proxy. I’ve been hearing more and more positive things about it over the past couple of years, most notably from Stack Exchange. And having finally shed my long-time slight phobia of Linux, I decided to have a go at setting it up this weekend on a virtual machine.

The only downside is that as soon as you move some of your routing decisions away from your core firewall, you start to worry about server failure. So naturally we had to ensure that whatever we came up with involving HA-Proxy could be deployed as a clustered master-master or master-slave style solution. That would mean that if our VM host “A” had a failure then Mr Backup over there, “B”, could immediately take up the load.

It seems that Stack Exchange chose the Linux-HA Heartbeat system for their master-slave cluster behaviour. In the end we opted for Keepalived instead. It is more or less the same thing, except that it’s apparently more geared towards load balancers and proxies such as HA-Proxy, whereas Heartbeat is designed more for situations where you only ever want one active server (i.e. master-slave(s)). Keepalived just seems more flexible in the event that we decide to switch to a master-master style cluster in the future.

HA-Proxy Configuration

Here’s the basic /etc/haproxy/haproxy.conf that I came up with to meet requirements #1, #2 and #3.

#
# Global settings for HA-Proxy.
global
	daemon
	maxconn 8192

#
# Default settings for all sections, unless overridden.
defaults
	mode http

	# Known-good TCP timeouts.
	timeout connect 5000ms
	timeout client 20000ms
	timeout server 20000ms

	# Prevents zombie connections hanging around holding resources.
	option nolinger

#
# Host HA-Proxy's web stats on Port 81.
listen HAProxy-Statistics *:81
	mode http
	stats enable
	stats uri /haproxy?stats
	stats refresh 20s
	stats show-node
	stats show-legends
	stats auth admin:letmein

#
# Front-ends
#
#########
	#
	# Public HTTP/80 endpoint.
	frontend Public-HTTP
		mode http
		bind *:80
		default_backend Web-Farm

	#
	# Public HTTPS/443 endpoint.
	frontend Public-HTTPS
		mode tcp
		bind 172.16.61.150:443
		default_backend Web-Farm-SSL

	#
	# A "fake" HTTPS endpoint that is used for tunnelling some customers based on the source IP address.
	# Note: At no point is this a true TLS/SSL connection!
	# Note 2: This only works if the customer network allows TCP 443 outbound without passing through an internal proxy (... which most of ours do).
	frontend Public-AppTunnel
		mode tcp

		#
		# Bind to a different interface so as not to conflict with Public-HTTPS (above).
		bind 172.16.61.159:443

		#
		# Pilot Customer 2 (testing)
		acl IsFrom_PilotCustomer2 src 213.213.213.0/24
		use_backend App-PilotCustomer2 if IsFrom_PilotCustomer2

#
# Back-ends
#
# General
#
#########
	#
	# IIS 7.5 web servers.
	backend Web-Farm
		mode http
		balance roundrobin
		option httpchk
		server Web0 172.16.61.181:80 check
		server Web1 172.16.61.182:80 check

	#
	# IIS 7.5 web servers, that expose HTTPS/443.
	# Note: This is probably not the best way, but it works for now. Need to investigate using the stunnel solution.
	backend Web-Farm-SSL
		mode tcp
		balance roundrobin
		server Web0 172.16.61.181:443 check
		server Web1 172.16.61.182:443 check

#
# Back-ends
#
# Application Servers (TCP bespoke protocol)
#
#########
	#
	# Customer 1
	listen App-Customer1
		mode tcp
		bind *:35007
		server AppLive0 172.16.61.12:35007 check

	#
	# Pilot Customer 2 (testing)
	listen App-PilotCustomer2
		mode tcp
		bind *:35096
		server AppLive0 172.16.61.12:35096 check

I doubt the file will remain this small for long. It’ll probably be 15x bigger in a week or two 🙂

Keepalived Configuration

And here’s the /etc/keepalived/keepalived.conf file.

vrrp_instance VI_1 {
	state MASTER
	interface seth0
	virtual_router_id 51
	! this priority (below) should be higher on the master server, than on the slave.
	! a bit of a pain as it makes Mercurial'ising this config more difficult - anyone know a solution?
	priority 200
	advert_int 1
	authentication {
		auth_type PASS
		auth_pass some_secure_password_goes_here
	}
	virtual_ipaddress {
		172.16.61.150
		172.16.61.159
	}
}

It is rather straightforward as Keepalived configurations go. It is effectively no different to a Windows Server Network Load Balancing (NLB) deployment, with the right options to give the master-slave behaviour. Note that the only reason I’ve specified two virtual IP addresses is because I need to use TCP port 443 twice (for different purposes). These will be port mapped on the Sonicwall to different public IP addresses, of course.
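For completeness, the slave’s copy of this file would differ only in its state and priority; something like the following (the BACKUP state and priority of 100 are assumed values, mirroring the master config above):

```
vrrp_instance VI_1 {
	state BACKUP
	interface seth0
	virtual_router_id 51
	! must be lower than the master's priority of 200
	priority 100
	advert_int 1
	authentication {
		auth_type PASS
		auth_pass some_secure_password_goes_here
	}
	virtual_ipaddress {
		172.16.61.150
		172.16.61.159
	}
}
```

This per-node difference is exactly the pain point mentioned in the comment above when trying to keep one canonical copy of the file in Mercurial.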

Mercurial, auto-propagation script for haproxy.conf

#!/bin/sh
cd /etc/haproxy/

#
# Check whether remote repo contains new changesets.
# Otherwise we have no work to do and can abort.
if hg incoming; then
  #
  # Pull the new changesets into the local repository.
  echo "The HA-Proxy remote repo contains new changesets. Pulling changesets..."
  hg pull

  #
  # Update the working directory to the latest revision.
  echo "Updating HA-Proxy configuration to latest revision..."
  hg update -C

  #
  # Re-initialize the HA-Proxy by informing the running instance
  # to close its listen sockets and then load a new instance to
  # recapture those sockets. This ensures that no active
  # connections are dropped like a full restart would cause.
  echo "Reloading HA-Proxy with new configuration..."
  /etc/init.d/haproxy reload

else
  echo "The HA-Proxy local repo is already up to date."
fi

I turned the whole /etc/haproxy/ directory into a Mercurial repository. The script above was also included in this directory (to gain free version control!), called sync-haproxy-conf.sh. I cloned this repository onto our central Mercurial master server.

It is then just a case of setting up a basic “* * * * * /etc/haproxy/sync-haproxy-conf.sh” cronjob so that the script gets executed every minute (don’t worry, it’s not exactly going to generate much load).

This is very cool because we can use the slave HA-Proxy server as a sort of testing ground. We can modify the config on that server quite a lot and test against it (by connecting directly to its IP rather than the clustered/virtual IP provided by Keepalived). Then once we’ve got the config just right, we can commit it to the Mercurial repository and push the changeset(s) to the master server. Within 60 seconds the other server (or servers, in your case possibly!) will run the synchronisation script.

One very neat thing about the newer versions of HA-Proxy (I deployed version 1.4.11) is that they ship an /etc/init.d script that already includes everything you need for configuration rebinds/reloads. What actually happens is that HA-Proxy sends a special signal to the old process so that it stops listening on the front-end sockets. Then it attempts to start a new instance based upon the new configuration. If this fails, it sends another signal to the “old”, but now resurrected, process so that it can resume listening. Otherwise the old process will eventually exit once all its existing client connections have ended. This is brilliant because it meets, and rather elegantly exceeds, our expectations for requirement #5.
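Under the hood, the init script’s reload is roughly equivalent to the following invocation (a sketch; the config and pid file paths are the usual defaults and may differ on your distribution):

```
# -sf <pids>: start a new haproxy process, then ask the listed old
# processes to release their listening sockets and finish their
# existing connections gracefully.
haproxy -f /etc/haproxy/haproxy.conf -p /var/run/haproxy.pid \
        -sf $(cat /var/run/haproxy.pid)
```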

Given that our HA-Proxy instances will contain far more meticulous configuration detail than even our Sonicwall, I think this solution based upon Mercurial is simply brilliant. We have what is effectively a test and slave server all-in-one, and an hg revert or hg rollback is of course only a command away.

It’s still a work in progress, but so far I’m very pleased with HA-Proxy.