Nathan Evans' Nemesis of the Moment

Adding the Git changeset hash on every build

Posted in .NET Framework, Automation, Uncategorized by Nathan B. Evans on March 27, 2013

Version numbers in the traditional sense have been obsolete ever since DVCS arrived on the scene. They’re still useful for human consumption, but I prefer having the full Git (or Hg) changeset hash available on the assembly as well, so I can hit CTRL+SHIFT+G in GitExtensions (or the equivalent in TortoiseHg etc.) and paste in the changeset hash to go directly to it.

Setting this up is fairly simple. First, ensure your git.exe is in your PATH; it generally will be if you installed Git with the default/recommended settings. Second, import the NuGet package for MSBuild Community Tasks. Make sure you import the older version available here, as there is a bug in the newer versions where the GitVersion task is broken. Then create a VersionAssemblyInfo.targets file in your solution root with the following contents:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Import Project="$(SolutionDir)\.build\MSBuild.Community.Tasks.targets" />

  <Target Name="UpdateVersionAssemblyInfo" BeforeTargets="BeforeBuild">
    <PropertyGroup>
      <Major>1</Major>
      <Minor>0</Minor>
      <Build>0</Build>
      <Revision>0</Revision>
      <GitHash>unknown</GitHash>
    </PropertyGroup>

    <GitVersion LocalPath="$(SolutionDir)" Short="false">
      <Output TaskParameter="CommitHash" PropertyName="GitHash" />
    </GitVersion>

    <AssemblyInfo
      CodeLanguage="CS"
      OutputFile="$(SolutionDir)\VersionAssemblyInfo.cs"
      AssemblyInformationalVersion="$(Major).$(Minor).$(Build).$(Revision) (git $(GitHash))"
      AssemblyVersion="$(Major).$(Minor).$(Build).$(Revision)"
      AssemblyFileVersion="$(Major).$(Minor).$(Build).$(Revision)" />
  </Target>
</Project>

Create a VersionAssemblyInfo.cs in your solution root, and add it as a “Link” in each of your projects.
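
When the UpdateVersionAssemblyInfo target runs, the AssemblyInfo task will overwrite that file with the three attributes defined above. Expect something along these lines (the hash is illustrative, and the exact header comment the task emits may differ):

// <auto-generated>
//     Regenerated by the UpdateVersionAssemblyInfo target on every build; do not edit by hand.
// </auto-generated>
using System.Reflection;

[assembly: AssemblyInformationalVersion("1.0.0.0 (git 9f8e7d6c5b4a3928a1706f5e4d3c2b1a09876543)")]
[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]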

Then in each project file you just need to add a new import, typically underneath the “Microsoft.CSharp.targets” import.

<Import Project="$(SolutionDir)\VersionAssemblyInfo.targets" />

Now rebuild your project(s). If you check the disassembly with a tool like JustDecompile or Reflector and look at the assembly attributes, you should see that the (little known and rarely used) AssemblyInformationalVersionAttribute is there and contains both the human-readable major.minor.build.revision and the Git changeset hash.

You may now choose to replace the various areas of your application that display version information with this attribute instead.
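
For example, a helper along these lines (a minimal sketch; the VersionInfo name is just illustrative) will read the attribute back at runtime:

using System;
using System.Reflection;

public static class VersionInfo {
    /// <summary>
    /// Returns the informational version string, e.g. "1.0.0.0 (git 9f8e7d...)",
    /// falling back to the plain assembly version if the attribute is missing.
    /// </summary>
    public static string GetDisplayVersion(Assembly assembly) {
        var attribute = (AssemblyInformationalVersionAttribute)Attribute.GetCustomAttribute(
            assembly, typeof(AssemblyInformationalVersionAttribute));

        return attribute != null
            ? attribute.InformationalVersion
            : assembly.GetName().Version.ToString();
    }
}

// Usage, e.g. in an "About" box or a log header:
// Console.WriteLine(VersionInfo.GetDisplayVersion(typeof(Program).Assembly));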

Simulating the P of CAP on a Riak cluster

Posted in .NET Framework, Automation, Unix Environment by Nathan B. Evans on February 10, 2013

When developing and testing a distributed system, one of the essential concerns you will deal with is eventual consistency (EC). There are plenty of articles covering EC so I’m not going to dwell on that much here. What I am going to talk about is testing, particularly of the automated kind.

Testing an eventually consistent system is difficult because everything is transient and unpredictable. Unless you tune your consistency values, such as N and DW, you’re offered no guarantees about how far a commit has propagated around the cluster. And whilst consistency tuning may be acceptable for some tests, it most definitely won’t be acceptable for tests that cover concurrent write concerns, such as your sibling resolution protocols.

What is “partition tolerance”?

This is where a single node or a group of nodes in the cluster becomes segregated from the rest of the cluster. I liken it to a “net split” on an IRC network. The system continues operating but it has been split into two or more parts. A node that has become segregated from the rest of the cluster is not necessarily unreachable by its own clients, so all nodes can continue to accept writes on the dataset.

Generally speaking, a partition event is a transient situation. It may last a few seconds, a few minutes, hours, days or even weeks. But the expectation is that eventually the partition will be repaired and the cluster returned to full health.

Fig. 1: State changes of a cluster during a partition event

In the diagram (Fig. 1) there is a cluster comprised of three nodes, and this is the sequence of events:

  1. N2 becomes network partitioned from N1 and N3.
    N1 and N3 can continue replicating between one another.
  2. Client(s) connected to either N1 or N3 perform two writes (indicated by the green and orange).
  3. Client(s) connected to N2 perform three writes (purple, red, blue).
  4. When the network partition is resolved, the cluster begins to heal by merging the commits between the nodes.
  5. The green and orange commits (that were already replicated between N1 and N3) are propagated onto N2.
  6. The purple, red and blue commits on N2 are propagated onto N1 and N3.
  7. The cluster is now fully healed.

Simulating a partition

I have a Riak cluster of three nodes running on Windows Azure and I needed some way to deterministically simulate a network partition scenario. The solution that I came up with was quite simple. I basically wrote some iptables scripts that temporarily firewalled certain IP traffic on the LAN in order to prevent the selected node from communicating with any other nodes.

To erect the partition:

# Simulates a network partition scenario by blocking all TCP LAN traffic for a node.
# This will prevent the node from talking to other nodes in the cluster.
# The rules are not persisted, so a restart of the iptables service (or indeed the whole box) will reset things to normal.

# First add special exemption to allow loopback traffic on the LAN interface.
# Without this, riak-admin gets confused and thinks the local node is down when it isn't.
sudo iptables -I OUTPUT -p tcp -d $(hostname -i) -j ACCEPT
sudo iptables -I INPUT -p tcp -s $(hostname -i) -j ACCEPT

# Now block all other LAN traffic.
sudo iptables -I OUTPUT 2 -p tcp -d 10.0.0.0/8 -j REJECT
sudo iptables -I INPUT 2 -p tcp -s 10.0.0.0/8 -j REJECT

To tear down the partition:

# Restarts the iptables service, thereby resetting any temporary rules that were applied to it.
sudo service iptables restart

Disclaimer: These are just rough shell scripts designed for use on a test bed development environment.

I then came up with a fairly neat little class that wraps up the concern of a network partition:

internal class NetworkPartition : IDisposable {
    private readonly string _nodeName;
    private bool _closed;

    private NetworkPartition(string nodeName) {
        _nodeName = nodeName;
    }

    /// <summary>
    /// Creates a temporary network partition by segregating (at the IP firewall level) a particular node from all other nodes in the cluster.
    /// </summary>
    /// <param name="nodeName">The name of the node to be network partitioned. The name must exist as a PuTTY "Saved Session".</param>
    /// <returns>An object that can be disposed when the partition is to be removed.</returns>
    public static IDisposable Create(string nodeName) {
        var np = new NetworkPartition(nodeName);
        Plink.Execute("simulate-network-partition.sh", nodeName);
        return np;
    }

    private void Close() {
        if (_closed)
            return;

        Plink.Execute("restart-iptables.sh", _nodeName);

        _closed = true;
    }

    public void Dispose() {
        Close();
    }
}

Which allows me to write test cases in the following way:

RingStatus.AssertOkay("riak03");

using (NetworkPartition.Create("riak03")) {
    RingStatus.AssertDegraded("riak03");

    // TODO: Do other stuff here whilst riak03 is network partitioned from the rest of the cluster.
}

RingReady.Wait("riak03");
RingStatus.AssertOkay("riak03");

Yes, RingStatus and RingReady aren’t documented here. But they’re pretty simple.
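
For the curious, here is roughly the shape they could take. This is a hedged sketch, not the actual code: it assumes the Plink wrapper described below returns the remote command’s standard output, and that each node carries a small helper script (hypothetically named check-ringready.sh) that just runs riak-admin ringready, which prints a line starting with TRUE when all reachable nodes agree on the ring:

using System;
using System.Threading;

internal static class RingStatus {
    public static void AssertOkay(string nodeName) {
        if (!IsRingReady(nodeName))
            throw new InvalidOperationException("Expected the ring to be okay on " + nodeName);
    }

    public static void AssertDegraded(string nodeName) {
        if (IsRingReady(nodeName))
            throw new InvalidOperationException("Expected the ring to be degraded on " + nodeName);
    }

    internal static bool IsRingReady(string nodeName) {
        // check-ringready.sh is a hypothetical one-liner that runs: riak-admin ringready
        var output = Plink.Execute("check-ringready.sh", nodeName);
        return output.TrimStart().StartsWith("TRUE");
    }
}

internal static class RingReady {
    public static void Wait(string nodeName) {
        // Poll until the cluster reports a consistent ring again after the partition heals.
        while (!RingStatus.IsRingReady(nodeName))
            Thread.Sleep(TimeSpan.FromSeconds(2));
    }
}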

Obviously as part of this work I had to write a quick and dirty wrapper around the PuTTY plink.exe tool. This tool is basically a CLI version of PuTTY and it is very good for scripting automated tasks.
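
The wrapper itself isn’t reproduced here, but conceptually it is little more than Process.Start around plink.exe. A rough sketch, assuming plink.exe is on the PATH, each node name matches a PuTTY saved session with key-based authentication, and the local .sh files shown earlier are passed as the remote command via plink’s -m switch:

using System;
using System.Diagnostics;

internal static class Plink {
    /// <summary>
    /// Runs a local shell script on a remote node via plink.exe, using the node name as the
    /// PuTTY "Saved Session" to connect with. Returns the captured standard output.
    /// </summary>
    public static string Execute(string scriptFile, string nodeName) {
        var psi = new ProcessStartInfo {
            FileName = "plink.exe",
            // -batch disables interactive prompts, -load uses a saved session (host, user, key),
            // -m sends the contents of the local script file as the remote command.
            Arguments = string.Format("-batch -load \"{0}\" -m \"{1}\"", nodeName, scriptFile),
            UseShellExecute = false,
            RedirectStandardOutput = true,
            RedirectStandardError = true,
            CreateNoWindow = true
        };

        using (var process = Process.Start(psi)) {
            var output = process.StandardOutput.ReadToEnd();
            var error = process.StandardError.ReadToEnd();
            process.WaitForExit();

            if (process.ExitCode != 0)
                throw new InvalidOperationException(
                    string.Format("plink failed against '{0}': {1}", nodeName, error));

            return output;
        }
    }
}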

My solution could be improved to support creating partitions that consist of more than just one node, but for me the added value of this would be very small at this stage. Maybe I will add support for this later when I get into stress testing territory!

Source

You can view it over on GitHub; the namespace is tentatively called “ClusterTools”. Bear in mind it’s currently held inside another project, but if there’s enough interest I will make it standalone. There has been talk on the CorrugatedIron project (which is a Riak client library for .NET) about starting up a Contrib library, so maybe this could be one of its first contributions.

Automated builds and versioning, with Mercurial

Posted in .NET Framework, Automation, Software Design, Source Control by Nathan B. Evans on May 16, 2011

Yes, this blog post is about that seemingly perennial need to set up some sort of automatically incrementing version number system for your latest project.

As it happens, the “trigger” for me this time around was not a new project but rather our switch from TFS to Mercurial for source control. It took some time for Mercurial to “bed in”, but it definitely has now. So then you reach that point where you start asking, “Okay, so what else can we do with this VCS to improve the way we work?”.

Our first pass at automated versioning was to simply copy the status quo that had worked with TFS. This was rather crude at best. Basically we had an MSBuild task that would increment (based on our own strategy) the AssemblyFileVersionAttribute contained inside GlobalAssemblyInfo.cs. The usual hoo-har really, involving a simple regular expression etc. This was fine. However we did not really like it because it was, effectively, storing versioning information inside a file held in the repository. Separation of concerns and all that. It also caused a small amount of workflow overhead involving merging of named branches – with the occasional, albeit easily resolvable, conflict. Not a major issue, but not ideal either. Of course, not all projects use named branches. But we do, as they’re totally awesome for maintaining many concurrently supported backward releases.

Versioning strategies

The way I see it, there are only a small number of routes you can go down with project versioning:

  1. Some sort of yyyy.mm.dd timestamp strategy.
    This is great for “forward only” projects that don’t need to maintain supported backward releases, so for web applications, cloud apps and the like it is, I suspect, quite popular. For other projects it simply doesn’t make sense: if you want to release a minor bug fix for a release from over a year ago, it wouldn’t make any sense for the version to suddenly jump to the current date. How would you differentiate between major product releases?
  2. The typical major.minor.build.revision strategy.
    The Major and Minor would be “fixed” for each production branch in the repository. And then you’d increment the build and/or revision parts based on some additional strategy.
  3. A DVCS-only strategy where you use the global changeset hash.
    Unfortunately this is of limited use today on .NET projects because neither the AssemblyVersionAttribute nor the AssemblyFileVersionAttribute will accept an arbitrary string or a byte array. Of course there is nothing stopping you coming up with your own attribute (we called ours DvcsIdentificationAttribute) and including/updating that in your GlobalAssemblyInfo.cs (or equivalent) whenever you run a build (a minimal sketch of such an attribute follows this list). But it is of zero use to the .NET Framework itself.
  4. Some sort of hybrid between #1 and #2 (and possibly even #3!).
    This is what we do. We use a major.minor.yymm.revision strategy, to be precise.
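
As an aside, the home-grown attribute mentioned in option #3 needs nothing more than this (a sketch of the idea; the name is ours rather than a framework type, and our actual implementation may differ slightly):

using System;

[AttributeUsage(AttributeTargets.Assembly, AllowMultiple = false)]
public sealed class DvcsIdentificationAttribute : Attribute {
    public DvcsIdentificationAttribute(string changesetHash) {
        ChangesetHash = changesetHash;
    }

    /// <summary>The full changeset hash of the revision the assembly was built from.</summary>
    public string ChangesetHash { get; private set; }
}

// Emitted into GlobalAssemblyInfo.cs (or equivalent) by the build, e.g.:
// [assembly: DvcsIdentification("0123456789abcdef0123456789abcdef01234567")]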

We like our #4 hybrid strategy because it brings us the following useful characteristics:

  • It has “fixed” Major and Minor parts. Great for projects with multiple concurrent versions.
  • It contains a cursory Year and Month that can be read at a glance. When chasing down a problem in a customer environment, it is simple things like this that can speed up diagnosis times.
  • An incremental Revision part that ensures each build in the same month has a unique index.

So then, how did we implement this strategy on the Microsoft stack and with Mercurial?

Implementation

The key to implementing this strategy is, first and foremost, retrieving from the repository the “most recent” tag for the current branch. Originally I had big plans here to go and write a .NET library to walk the Mercurial revlog file structure. It would have been a cool project to learn some of the nitty-gritty details of how Mercurial works under the hood. Unfortunately, I soon discovered that Mercurial already has a template that does what I need. It’s called the “latesttag” template, and it’s really simple to use. For example:

$ hg parents --template {latesttag}
> v5.4.1105.5-build

There is also a related and potentially useful template called “latesttagdistance”. As it walks the revlog tree in search of the latest tag, it counts the number of changesets it passes. You could use this value as the increment for the Revision part of the strategy.
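
If you did go down that route, the arithmetic is trivial. For example (a sketch, assuming the latest tag has already been parsed into a System.Version and the distance obtained as an integer):

using System;

internal static class VersionBumper {
    /// <summary>
    /// Bumps the Revision part of the latest tag's version by the number of changesets
    /// committed since that tag, e.g. 5.4.1105.5 at a distance of 3 becomes 5.4.1105.8.
    /// </summary>
    public static Version FromLatestTag(Version latestTag, int latestTagDistance) {
        return new Version(
            latestTag.Major,
            latestTag.Minor,
            latestTag.Build,
            latestTag.Revision + latestTagDistance);
    }
}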

At this point most of the Mercurial fan base will go off and write a Bash or Python script to do the job. Unfortunately, in .NET land it’s not quite that simple, as we all know. I could have written a, erm, “posh” PowerShell script to do it, for sure. But then I would have to wire that into the MSBuild script – which I suspect would be a bit difficult and have all sorts of gotchas involved.

So I wrote a couple of MSBuild tasks to do it, with the interesting one aptly named MercurialAssemblyFileVersionUpdate:

public class MercurialAssemblyFileVersionUpdate : AssemblyFileVersionUpdate {

    private Version _latest;

    public override bool Execute() {
        var cmd = new MercurialCommand {
            Repository = Path.GetDirectoryName(BuildEngine.ProjectFileOfTaskNode),
            Arguments = "parents --template {latesttag}"
        };

        if (!cmd.Execute()) {
            Log.LogMessagesFromStream(cmd.StandardOutput, MessageImportance.High);
            Log.LogError("The Mercurial Execution task has encountered an error.");
            return false;
        }

        _latest = ParseOutput(cmd.StandardOutput.ReadToEnd());

        return base.Execute();
    }

    protected override Version GetAssemblyFileVersion() {
        return _latest;
    }

    private Version ParseOutput(string value) {
        return string.IsNullOrEmpty(value) || value.Equals("null", StringComparison.InvariantCultureIgnoreCase)
                   ? base.GetAssemblyFileVersion()
                   : new Version(ParseVersionNumber(value));
    }

    private string ParseVersionNumber(string value) {
        var ver_trim = new Regex(@"(\d+\.\d+\.\d+\.\d+)", RegexOptions.Singleline | RegexOptions.CultureInvariant);

        var m = ver_trim.Match(value);
        if (m.Success)
            return m.Groups[0].Value;

        throw new InvalidOperationException(
            string.Format("The latest tag in the repository ({0}) is not a parsable version number.", value));
    }
}

Click here to view the full source, including a couple of dependency classes that you’ll need for the full solution.
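
To give a flavour of those dependencies: the MercurialCommand used above is essentially a thin Process wrapper around hg.exe, along these lines (a sketch under that assumption, not the actual source):

using System.Diagnostics;
using System.IO;
using System.Text;

public class MercurialCommand {
    public string Repository { get; set; }
    public string Arguments { get; set; }
    public StreamReader StandardOutput { get; private set; }

    /// <summary>Runs hg.exe with the given arguments against the repository directory.</summary>
    public bool Execute() {
        var psi = new ProcessStartInfo {
            FileName = "hg",
            Arguments = Arguments,
            WorkingDirectory = Repository,
            UseShellExecute = false,
            RedirectStandardOutput = true,
            CreateNoWindow = true
        };

        using (var process = Process.Start(psi)) {
            // Buffer the output before waiting so a full pipe cannot block hg.exe, then
            // expose it as a StreamReader to match how the task above consumes it.
            var output = process.StandardOutput.ReadToEnd();
            process.WaitForExit();

            StandardOutput = new StreamReader(new MemoryStream(Encoding.UTF8.GetBytes(output)));
            return process.ExitCode == 0;
        }
    }
}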

With that done, it was just a case of updating our MSBuild script to use the new bits:

<Target Name="incr-version">
  <MercurialExec Arguments='revert --no-backup "$(GlobalAssemblyInfoCsFileName)"' />
  <Message Text="Updating '$(GlobalAssemblyInfoCsFileName)' with new version number..." />
  <MercurialAssemblyFileVersionUpdate FileName="$(GlobalAssemblyInfoCsFileName)">
    <Output TaskParameter="VersionNumber" PropertyName="VersionNumber" />
    <Output TaskParameter="MajorNumber" PropertyName="MajorNumber" />
    <Output TaskParameter="MinorNumber" PropertyName="MinorNumber" />
    <Output TaskParameter="BuildNumber" PropertyName="BuildNumber" />
    <Output TaskParameter="RevisionNumber" PropertyName="RevisionNumber" />
  </MercurialAssemblyFileVersionUpdate>
  <Message Text="Done update to '$(GlobalAssemblyInfoCsFileName)'." />
  <Message Text="Tagging current changeset in local repo." />
  <MercurialExec Arguments="tag --force v$(VersionNumber)-build" />
  <Message Text="Pushing commit to master server." />
  <MercurialExec Arguments='push' />
  <Message Text="All done." />
</Target>

Of course, don’t forget to include your tasks into the script, ala:

<UsingTask AssemblyFile="Build Tools\AcmeCorp.MsBuildTasks.dll"
           TaskName="AcmeCorp.MsBuildTasks.MercurialAssemblyFileVersionUpdate" />

<UsingTask AssemblyFile="Build Tools\AcmeCorp.MsBuildTasks.dll"
           TaskName="AcmeCorp.MsBuildTasks.MercurialExec" />

You’ll notice that the parsing implementation is quite forgiving. It is a regular expression that will extract anything that looks like a parsable System.Version string. This is cool because it means the tags themselves don’t have to be exactly pure as System.Version would need them. You can leave a “v” in front, or add a suffix like “-build”. Whatever your convention is, it just makes things a bit more convenient.

Remaining notes

The implementation above will generate global tags that are revision controlled in the repository. I did consider using “local tags”, as those wouldn’t need to be held in the repository at all and could just sit on the automated build server. However, the “latesttag” template unfortunately does not work with local tags; it only appears to work with global tags. It would also of course mean that developers wouldn’t benefit from the tags at all, which would be a shame.

Mercurial stores global tags inside a .hgtags file in the root working directory. The file is revision controlled but only for auditing purposes. It does not matter which version you have in your working directory at any given time.

You may still require a workflow to perform merges between your branches after running a build, in order to propagate the .hgtags file immediately (and not leave it to someone else!). If any merge conflicts arise you should take the changes from both sides as the .hgtags file can be considered “cumulative”.

Related reading

http://kiln.stackexchange.com/questions/2194/best-practice-generating-build-numbers
http://stackoverflow.com/questions/4726257/most-recent-tag-before-tip-in-mercurial
http://www.jaharmi.com/2010/03/24/generate_version_numbers_for_mac_os_x_package_installers_with_mercurial_and_semantic_vers
http://mercurial.selenic.com/wiki/NearestExtension

Deploying HA-Proxy + Keepalived with Mercurial for distributed config

Posted in Automation, Source Control, Unix Environment by Nathan B. Evans on February 27, 2011

Something I have learnt (and re-learnt) too many times to count: one of the strange wonders of working for a startup company is that the most bizarre tasks can land in your lap seemingly without warning.

We’ve recently been doing a big revamp of our data centre environment, including two shiny new Hyper-V hosts, a Sonicwall firewall and the decommissioning of lots of legacy hardware that doesn’t support virtualisation. As part of all this work we needed to put in place several capabilities for routing application requests on our SaaS platform:

  1. Expose HTTP/80 and HTTPS/443 endpoints on the public web and route incoming requests based upon URL to specific (and possibly many) private internal servers.
  2. Expose a separate and “special” TCP 443 endpoint (on the public web) that isn’t really HTTPS at all but will be used for tunnelling our TCP application protocol. We intend to use this when we acquire pilot programme customers that don’t want the “hassle” of modifying anything on their network firewalls/proxies. Yes, really. Even worse, it will inspect the source IP address and, from that, determine which customer it is and then route it to the appropriate private internal server and port number.
  3. Expose various other TCP ports on the public web and map these (in as traditional a “port map” style as possible) directly to one specific private internal server.
  4. Be easy to change the configuration and be scriptable, so we can tick off the “continuous deployment” check box.
  5. Configuration changes must never tamper with existing connections.
  6. Optional bonus, be source controllable.

My first suggestion was that we would write some PowerShell scripts to access the Sonicwall firewall through SSH and control its firewall tables directly. This was the plan for several months in fact, whilst everything was getting put in place inside the data centre. I knew full well it wouldn’t be easy. First, there were some political issues inside the company with regard to a developer (me) having access to a central firewall. Second, I knew that creating and testing the scripts would be difficult and that the CLI on the Sonicwall would surely not be as good as a Cisco’s.

I knew I could achieve #1 and #3 easily on a Sonicwall, like with any router really. But #2 was a bit of an unknown as, frankly, I doubted whether a Sonicwall could do it without jumping through a ton of usability hoops. #4 and #6 were the greatest unknowns. I know you can export a Sonicwall’s configuration from the web interface, but it comes down as a binary file, which made me doubt whether the CLI could export it properly as some form of text file. And of course if you can’t get the configuration as a text file then it’s not really going to be truly source controllable either, so that’s #6 out.

Fortunately an alternative (and better!) solution presented itself in the form of HA-Proxy. I’ve been hearing more and more positive things about it over the past couple of years, most notably from Stack Exchange. And having recently (and finally) shed my long-time slight phobia of Linux, I decided to have a go at setting it up this weekend on a virtual machine.

The only downside was that as soon as you move some of your routing decisions away from your core firewall, you start to get a bit worried about server failure. So naturally we had to ensure that whatever we came up with involving HA-Proxy could be deployed as a clustered master-master or master-slave style solution. That would mean that if our VM host “A” had a failure, then Mr Backup over there, “B”, could immediately take up the load.

It seems that Stack Exchange chose to use the Linux-HA Heartbeat system for providing their master-slave cluster behaviour. In the end we opted for Keepalived instead. It is more or less the same thing, except that it’s apparently more geared towards load balancers and proxies such as HA-Proxy, whereas Heartbeat is designed more for situations where you only ever want one active server (i.e. master-slave(s)). Keepalived just seems more flexible in the event that we decide to switch to a master-master style cluster in the future.

HA-Proxy Configuration

Here’s the basic /etc/haproxy/haproxy.conf that I came up with to meet requirements #1, #2 and #3.

#
# Global settings for HA-Proxy.
global
	daemon
	maxconn 8192

#
# Default settings for all sections, unless overridden.
defaults
	mode http

	# Known-good TCP timeouts.
	timeout connect 5000ms
	timeout client 20000ms
	timeout server 20000ms

	# Prevents zombie connections hanging around holding resources.
	option nolinger

#
# Host HA-Proxy's web stats on Port 81.
listen HAProxy-Statistics *:81
	mode http
	stats enable
	stats uri /haproxy?stats
	stats refresh 20s
	stats show-node
	stats show-legends
	stats auth admin:letmein

#
# Front-ends
#
#########
	#
	# Public HTTP/80 endpoint.
	frontend Public-HTTP
		mode http
		bind *:80
		default_backend Web-Farm

	#
	# Public HTTPS/443 endpoint.
	frontend Public-HTTPS
		mode tcp
		bind 172.16.61.150:443
		default_backend Web-Farm-SSL

	#
	# A "fake" HTTPS endpoint that is used for tunnelling some customers based on the source IP address.
	# Note: At no point is this a true TLS/SSL connection!
	# Note 2: This only works if the customer network allows TCP 443 outbound without passing through an internal proxy (... which most of ours do).
	frontend Public-AppTunnel
		mode tcp

		#
		# Bind to a different interface so as not to conflict with Public-HTTPS (above).
		bind 172.16.61.159:443

		#
		# Pilot Customer 2 (testing)
		acl IsFrom_PilotCustomer2 src 213.213.213.0/24
		use_backend App-PilotCustomer2 if IsFrom_PilotCustomer2

#
# Back-ends
#
# General
#
#########
	#
	# IIS 7.5 web servers.
	backend Web-Farm
		mode http
		balance roundrobin
		option httpchk
		server Web0 172.16.61.181:80 check
		server Web1 172.16.61.182:80 check

	#
	# IIS 7.5 web servers, that expose HTTPS/443.
	# Note: This is probably not the best way, but it works for now. Need to investigate using the stunnel solution.
	backend Web-Farm-SSL
		mode tcp
		balance roundrobin
		server Web0 172.16.61.181:443 check
		server Web1 172.16.61.182:443 check

#
# Back-ends
#
# Application Servers (TCP bespoke protocol)
#
#########
	#
	# Customer 1
	listen App-Customer1
		mode tcp
		bind *:35007
		server AppLive0 172.16.61.12:35007 check

	#
	# Pilot Customer 2 (testing)
	listen App-PilotCustomer2
		mode tcp
		bind *:35096
		server AppLive0 172.16.61.12:35096 check

I doubt the file will remain this small for long. It’ll probably be 15x bigger in a week or two 🙂

Keepalived Configuration

And here’s the /etc/keepalived/keepalived.conf file.

vrrp_instance VI_1 {
	state MASTER
	interface seth0
	virtual_router_id 51
	! this priority (below) should be higher on the master server, than on the slave.
	! a bit of a pain as it makes Mercurial'ising this config more difficult - anyone know a solution?
	priority 200
	advert_int 1
	authentication {
		auth_type PASS
		auth_pass some_secure_password_goes_here
	}
	virtual_ipaddress {
		172.16.61.150
		172.16.61.159
	}
}

It is rather straightforward as far as Keepalived configurations go. It is effectively no different to a Windows Server Network Load Balancing (NLB) deployment, with the right options to give the master-slave behaviour. Note that the only reason I’ve specified two virtual IP addresses is that I need to use TCP port 443 twice (for different purposes). These will be port mapped on the Sonicwall to different public IP addresses, of course.

Mercurial, auto-propagation script for haproxy.conf

#!/bin/sh
cd /etc/haproxy/

#
# Check whether remote repo contains new changesets.
# Otherwise we have no work to do and can abort.
if hg incoming; then
  #
  # Pull the new changesets into the local repo.
  echo "The HA-Proxy remote repo contains new changesets. Pulling changesets..."
  hg pull

  #
  # Update the working directory to the latest revision.
  echo "Updating HA-Proxy configuration to latest revision..."
  hg update -C

  #
  # Re-initialize the HA-Proxy by informing the running instance
  # to close its listen sockets and then load a new instance to
  # recapture those sockets. This ensures that no active
  # connections are dropped like a full restart would cause.
  echo "Reloading HA-Proxy with new configuration..."
  /etc/init.d/haproxy reload

else
  echo "The HA-Proxy local repo is already up to date."
fi

I turned the whole /etc/haproxy/ directory into a Mercurial repository. The script above, called sync-haproxy-conf.sh, was also included in this directory (to gain free version control!). I cloned this repository onto our central Mercurial master server.

It is then just a case of setting up a basic “* * * * * /etc/haproxy/sync-haproxy-conf.sh” cronjob so that the script above gets executed every minute (don’t worry it’s not exactly going to generate much load).

This is very cool because we can use the slave HA-Proxy server as a sort of testing ground. We can modify the config on that server quite a lot and test against it (by connecting directly to its IP rather than the clustered/virtual IP provided by Keepalived). Then once we’ve got the config just right, we can commit it to the Mercurial repository and push the changeset(s) to the master server. Within 60 seconds the other server (or servers, in your case possibly!) will run the synchronisation script.

One very neat thing about the newer versions of HA-Proxy (I deployed version 1.4.11) is that they ship with an /etc/init.d script that already includes everything you need for configuration reloads. This is great because what actually happens is that HA-Proxy sends a special signal to the old process so that it stops listening on the front-end sockets, and then attempts to start a new instance based upon the new configuration. If this fails, it sends another signal to the “old”, but now resurrected, process telling it that it can resume listening. Otherwise the old process will eventually exit once all its existing client connections have ended. This is brilliant because it meets, and rather elegantly exceeds, our expectations for requirement #5.

Given that our HA-Proxy servers will contain far more meticulous configuration detail than even our Sonicwall, I think this solution based upon Mercurial is simply brilliant. We have what is effectively a test server and slave all-in-one, and an hg revert or hg rollback command is of course only a click away.

It’s still a work in progress but so far I’m very pleased with the progress with HA-Proxy.

Lightweight shelving of your work-in-progress, with Mercurial

Posted in Automation, Source Control by Nathan B. Evans on February 22, 2011

Before we switched to Mercurial in January 2011, we were a TFS shop. I hated TFS with some level of passion after using it for 4 years. It was slow, cranky and, well… merging was frequently a nightmare or sometimes actually impossible.

But one neat thing that TFS did provide was the concept of “shelve-sets”. For those that don’t know: these are basically uncommitted (or unchecked-in) changes, effectively your work-in-progress.

This feature was great because it let me:

  • develop on multiple PCs, and be able to easily migrate unfinished code changes between those PCs.
  • park unfinished changes when an immediate requirement came in that perhaps evaporated before the changes could be finished – but which I suspected might spark up again a week or month later.
  • perform zero-risk code reviews without getting stuck in all the mud of the wider TFS system (this point clearly doesn’t apply to Mercurial nor any of the popular DVCS).

I’m not a big fan of the “Continuous Check-ins” movement that somebody seems to have started. It feels wrong to be committing stuff in unlogically grouped and unatomic units. And hell, the work-in-progress that you want to “shelve” may not even compile yet! Do I really want to be committing stuff that I might throw away later anyway? Maybe, if I was using Git which allows history rewriting far more easily. But Mercurial has a stricter policy around history and maintains a level of discipline to ensure an immutable history.

There is already an extension for Mercurial that performs a type of shelving. But this was not suitable for me. It maintains the storage folder locally on the PC and doesn’t provide any obviously supported means of copying that between PCs.

Mercurial provides (almost) everything you need out-of-the-box to be able to do a lightweight form of shelving, and you don’t even need to enable any extensions. What you need are the hg diff and hg import commands; couple these with a pair of aliases and you have a complete solution.

First, set up your Mercurial.ini (or .hgrc) with the following useful aliases:

[alias]
gitdiff = diff --git
exportwip = gitdiff
importwip = import --no-commit --force

The reason we use the extended diff format from Git is that, primarily, this supports binary files whereas a vanilla diff does not. And the reason for the --no-commit option on the importwip alias is to prevent the imported work-in-progress from being immediately committed to the repository. You definitely don’t want that!

Now from a shell prompt you can do:

$ hg exportwip > my-work-in-progress-for-xyz-customer-22-02-2011.diff

Then, at a later date, or immediately from another PC (once you’ve copied the diff file across or e-mailed it to yourself):

$ hg importwip my-work-in-progress-for-xyz-customer-22-02-2011.diff

Note: There is no need to pipe the file into the importwip alias because the underlying command already takes the file name as an argument.

Not-so powerful, uh, PowerShell…

If you’re a PowerShell user (like myself) there is a rather large annoyance in the way it performs binary file output when piping. If you use the straight “>” output piping operator in PowerShell then it will default to use Unicode encoding. This sounds fine on the surface, except that Mercurial’s diff engine does not support Unicode. It outputs in UTF8 and expects any input it is given to also be in UTF8. And when I say it does not support Unicode – I mean it really doesn’t. It will give a vague error of:

abort: no diffs found

Additionally, if you try to work around this by changing the encoding used by PowerShell, as follows:

PS $ hg exportwip | out-file -filepath "my-work-in-progress-for-xyz-customer-22-02-2011.diff" -encoding OEM -force

Then what you get is a file that looks suitable. And in fact it should be suitable. The only difference appears to be that the line endings have been normalised from \n (Mercurial’s raw output when producing diffs) to \r\n (Windows environment style). Unfortunately there is a bug, as of at least Mercurial 1.7.5, where it cannot handle diffs that use the \r\n style of line endings (despite having the eol extension enabled). What you will get is an ugly exception like the following:

** unknown exception encountered, please report by visiting
**  http://mercurial.selenic.com/wiki/BugTracker
** Python 2.6.4 (r264:75708, Oct 26 2009, 08:23:19) [MSC v.1500 32 bit (Intel)]
** Mercurial Distributed SCM (version 1.7.5)
** Extensions loaded: fixfrozenexts, graphlog, eol, fetch, transplant, rebase, purge, churn, mq
Traceback (most recent call last):
  File "hg", line 36, in
  File "mercurial\dispatch.pyo", line 16, in run
  File "mercurial\dispatch.pyo", line 36, in dispatch
  File "mercurial\dispatch.pyo", line 58, in _runcatch
  File "mercurial\dispatch.pyo", line 590, in _dispatch
  File "mercurial\dispatch.pyo", line 401, in runcommand
  File "mercurial\dispatch.pyo", line 641, in _runcommand
  File "mercurial\dispatch.pyo", line 595, in checkargs
  File "mercurial\dispatch.pyo", line 588, in
  File "mercurial\util.pyo", line 426, in check
  File "mercurial\dispatch.pyo", line 297, in __call__
  File "mercurial\util.pyo", line 426, in check
  File "mercurial\extensions.pyo", line 130, in wrap
  File "mercurial\util.pyo", line 426, in check
  File "hgext\mq.pyo", line 2988, in mqcommand
  File "mercurial\util.pyo", line 426, in check
  File "mercurial\extensions.pyo", line 130, in wrap
  File "mercurial\util.pyo", line 426, in check
  File "hgext\mq.pyo", line 2960, in mqimport
  File "mercurial\util.pyo", line 426, in check
  File "mercurial\commands.pyo", line 2371, in import_
  File "mercurial\commands.pyo", line 2328, in tryone
  File "mercurial\patch.pyo", line 1259, in patch
  File "mercurial\patch.pyo", line 1231, in internalpatch
  File "mercurial\patch.pyo", line 1110, in applydiff
  File "mercurial\patch.pyo", line 1129, in _applydiff
  File "mercurial\patch.pyo", line 1028, in iterhunks
KeyError: 'mycompany.snk\r'

The guys in the Mercurial chat room are aware of this issue as we were all scratching our collective heads on it for an hour or two the other day. Here’s hoping for a bug fix soon! Alternatively, the whole issue could be bypassed if the hg diff command had an option such as “--out <filename>” to write the diff to a file directly. That would remove the dependency upon the shell for file output piping.
