Nathan Evans' Nemesis of the Moment

Super fast way to extract width/height dimensions of PNG and JPEG images

Posted in .NET Framework, F# by Nathan B. Evans on April 17, 2015

Turns out that getting the width and height of an image file can be quite tricky if you don’t want to read the whole file into memory. The System.Drawing.Image type in .NET will read it all into memory; not good.

So I read the PNG and JPEG specifications and came up with this.

PNG was easy, as it has a static file header where the width and height are always stored at the same offsets. But JPEG is far, far trickier, since its header is built up of segments which are not in any particular order; there can be dozens of these header segments. There is always, however, a particular segment called a Start Of Frame (SOF), and that is the one containing the width and height.

I’ve tried to build this as robustly and defensively as possible, since I intend to use it on both the server side and on low-memory mobile devices. It is also good at detecting invalid or malformed files and failing fast on those conditions.

It supports big and little endian architectures. And it supports memory streams, file streams and network streams, i.e. both seekable and unseekable streams.

The JPEG implementation uses a little mutable state for performance and memory conservation reasons.

ImageSize for F#

/// Provides services to extract header or metadata information from image files of supported types.
module nbevans.Util.ImageSize

    open System
    open System.IO

    type DimensionsResult =
    | Success of Width : int * Height : int
    | Malformed of Information : string
    | NotSupported

    [<AutoOpen>]
    module private Utils =
        /// Converts a big endian (network order) region of the buffer to host order, reversing it in place on little endian architectures.
        let ntoh buffer index length =
            if BitConverter.IsLittleEndian then Array.Reverse(buffer, index, length)
            buffer

        let ntoh_int32 buffer index =
            BitConverter.ToInt32(ntoh buffer index 4, index)

        // Unsigned, because JPEG stores dimensions as unsigned 16-bit values; a signed read would corrupt anything above 32767.
        let ntoh_uint16 buffer index =
            BitConverter.ToUInt16(ntoh buffer index 2, index)

        /// Reads exactly 'count' bytes, looping to tolerate the short reads that network streams can legitimately return.
        let readExact (stream:Stream) (buffer:byte[]) offset count =
            let mutable total = 0
            let mutable read = -1
            while total < count && read <> 0 do
                read <- stream.Read(buffer, offset + total, count - total)
                total <- total + read
            total

        type Stream with
            /// Advances the stream by a specified offset from the current position, returning the number of bytes actually advanced.
            /// Unlike Seek(), this method is safe for network streams and other types of stream where seeking is not possible;
            /// for these such "unseekable" streams, data will be read instead and immediately discarded. It returns the bytes
            /// advanced rather than the new position, because querying Position on an unseekable stream throws NotSupportedException.
            member stream.Advance(offset) =
                if stream.CanSeek then
                    let origin = stream.Position
                    stream.Seek(int64 offset, SeekOrigin.Current) - origin
                else
                    let buffer = Array.zeroCreate offset
                    int64 (readExact stream buffer 0 offset)

    module Png =
        /// Gets the width & height dimensions of a PNG image.
        /// A PNG starts with a fixed 8-byte signature, immediately followed by the IHDR chunk:
        /// a 4-byte length and 4-byte chunk type, then the 4-byte width and height (big endian).
        let dimensions (sourceStream:Stream) =
            let signature = Array.zeroCreate 8
            if readExact sourceStream signature 0 8 <> 8 || signature <> [| 137uy; 80uy; 78uy; 71uy; 13uy; 10uy; 26uy; 10uy |] then
                NotSupported
            else
                let chunk = Array.zeroCreate 8
                // Skip the IHDR chunk's length and type fields, then read the width and height.
                if sourceStream.Advance(8) <> 8L || readExact sourceStream chunk 0 8 <> 8 then
                    Malformed "Expected chunk is not present."
                else
                    Success (ntoh_int32 chunk 0, ntoh_int32 chunk 4)

    module Jpeg =
        [<AutoOpen>]
        module private Markers =
            // All data markers are 2 bytes, where the first byte is a 0xFF prefix.
            let Prefix = 0xFFuy
            // The first data marker (i.e. first 2 bytes of the file) of every JPEG is this.
            let SOI_StartOfImage = 0xD8uy
            // JPEG has lots of different internal encoding types, which are indicated with a SOF data marker.
            // There are many like baseline, progressive, sequential, differential and various combinations of these too.
            // Fortunately the width/height is present in the same position of all of these SOF headers.
            let SOFn_StartOfFrame = [| 0xC0uy; 0xC1uy; 0xC2uy; 0xC3uy; 0xC5uy; 0xC6uy; 0xC7uy; 0xC9uy; 0xCAuy; 0xCBuy; 0xCDuy; 0xCEuy; 0xCFuy |]
            let SOS_StartOfScan = 0xDAuy

        /// Gets the width & height dimensions of a JPEG image.
        let dimensions (sourceStream:Stream) =
            let signature = Array.zeroCreate 2
            if readExact sourceStream signature 0 2 <> 2 || signature.[0] <> Prefix || signature.[1] <> SOI_StartOfImage then
                NotSupported
            else
                let mutable result = Option<DimensionsResult>.None
                let marker = Array.zeroCreate 4

                while result.IsNone do
                    result <-
                        if readExact sourceStream marker 0 4 <> 4 then
                            Some <| Malformed "Next data marker header cannot be read."
                        else
                            if marker.[0] = Prefix && SOFn_StartOfFrame |> Array.exists ((=) marker.[1]) then
                                // Reuse the marker array as a new buffer, skip over the first byte in the payload (which contains "sample precision"),
                                // and read the 4 bytes that contain two 16-bit values of the width and height, respectively.
                                let buffer = marker
                                sourceStream.Advance(1) |> ignore
                                if readExact sourceStream buffer 0 4 <> 4 then
                                    Some <| Malformed "SOF data marker payload cannot be read."
                                else
                                    let lines = int <| ntoh_uint16 buffer 0
                                    let samplesPerLine = int <| ntoh_uint16 buffer 2
                                    Some <| Success (samplesPerLine, lines)

                            else if marker.[0] = Prefix && marker.[1] = SOS_StartOfScan then
                                // If we've reached the SOS marker then we missed the SOF marker.
                                // That's pretty bizarre and suggests a corrupt JPEG, or at least an unsupported SOF marker.
                                Some <| Malformed "SOS data marker was encountered prematurely."

                            else if marker.[0] <> Prefix then
                                // All data marker identifiers are 2 bytes and the first byte must be 0xFF.
                                Some <| Malformed "Next data marker header is malformed."

                            else
                                // After the data marker identifier is a 2 byte length (inclusive) of the payload.
                                // We need this to let us skip over the markers/payloads that are not interesting.
                                let length = (int <| ntoh_uint16 marker 2) - 2
                                sourceStream.Advance(length) |> ignore
                                None

                defaultArg result (Malformed "End of data markers encountered prematurely.")
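
Example usage (the file path here is just an illustration; any readable Stream works, including a NetworkStream):

open System.IO
open nbevans.Util.ImageSize

use stream = File.OpenRead("photo.jpg")
match Jpeg.dimensions stream with
| Success (width, height) -> printfn "Dimensions: %ix%i" width height
| Malformed info -> printfn "Malformed JPEG: %s" info
| NotSupported -> printfn "Not a JPEG file."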

Super skinny XML document generation with F#

Posted in .NET Framework, F# by Nathan B. Evans on April 15, 2015

I needed to generate some simple XML documents, in memory, from some F# script. From my C# days I was already familiar with the System.Xml.Linq namespace, which I still quite like. But it wasn’t particularly clean to use from F#. So I wrote a really simple F# wrapper for some of its most commonly used features.

XmlToolkit for F#

module nbevans.Util.XmlToolkit
open System.Text
open System.Xml
open System.Xml.Linq
open System.IO

let XDeclaration version encoding standalone = XDeclaration(version, encoding, standalone)
let XLocalName localName namespaceName = XName.Get(localName, namespaceName)
let XName expandedName = XName.Get(expandedName)
let XDocument xdecl content = XDocument(xdecl, content |> Seq.map (fun v -> v :> obj) |> Seq.toArray)
let XComment (value:string) = XComment(value) :> obj
let XElementNS localName namespaceName content = XElement(XLocalName localName namespaceName, content |> Seq.map (fun v -> v :> obj) |> Seq.toArray) :> obj
let XElement expandedName content = XElement(XName expandedName, content |> Seq.map (fun v -> v :> obj) |> Seq.toArray) :> obj
let XAttributeNS localName namespaceName value = XAttribute(XLocalName localName namespaceName, value) :> obj
let XAttribute expandedName value = XAttribute(XName expandedName, value) :> obj

type XDocument with
    /// Saves the XML document to a MemoryStream using UTF-8 encoding, indentation and character checking.
    member doc.Save() =
        let ms = new MemoryStream()
        use xtw = XmlWriter.Create(ms, XmlWriterSettings(Encoding = Encoding.UTF8, Indent = true, CheckCharacters = true))
        doc.Save(xtw)
        // Flush the writer before rewinding, otherwise buffered output would not have reached the stream yet
        // and would be written over the start of the stream when the writer is disposed.
        xtw.Flush()
        ms.Position <- 0L
        ms

The principle of this module is that it shadows the key type names like System.Xml.Linq.XElement, and the others, with F# functions that provide the equivalent constructor behaviour but with a more functional signature. Then an XDocument type extension adds a useful Save() method (since the stock ones are so useless on their own). Here is an example usage straight from my app (hopefully you will get the idea):

let doc =
    XDocument (XDeclaration "1.0" "UTF-8" "yes") [
        XComment "This document was automatically generated by a configuration script."
        XElement "Metadata" [
            XElement "SystemMetadata" [
                XElement "ScannedBy" ["PCT"]
                XElement "GenerationDate" [DateTime.UtcNow.ToString("s")]
                XElement "IndexedBy" ["UNKNOWN"]
                XElement "IndexedOn" ["UNKNOWN"]
                XElement "FileName" [createPackageContentFileName cp.Id fileName]
                XElement "ScanInfo" [
                    XElement "NumberOfPagesScanned" [string formPdfPageCount]
                    XElement "IpAddress" ["UNKNOWN"]
                    XElement "MachineName" ["UNKNOWN"]
                    XElement "NumberOfBlankPages" ["0"]
                ]
            ]
            XElement "UserDefinedMetadata" [
                XElement "Address1" [defaultArg (gp.Fields.TryFind "property-address") "UNKNOWN"]
                XElement "Postcode" [defaultArg (gp.Fields.TryFind "property-postcode") "UNKNOWN"]
                XElement "Patchcode" ["1"]
                XElement "Reviewdate" [DateTime.UtcNow.AddYears(1).ToString("s")]
            ]
        ]
    ]

let ms = doc.Save()
// 'ms' at this point contains a System.IO.MemoryStream of the generated XML document.
// That was my use-case, but maybe you will want to adapt this code, or the XmlToolkit
// module itself, to use a different type of stream; perhaps a FileStream. I'm a firm
// believer in K.I.S.S. to avoid over-engineering.
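
If you do want the document on disk instead, a minimal sketch (the output path here is illustrative) is simply to copy the MemoryStream out to a FileStream:

use fs = new FileStream("output.xml", FileMode.Create, FileAccess.Write)
ms.CopyTo(fs)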

WebSocket servers on Windows Server

Posted in .NET Framework, Software Design, Windows Environment by Nathan B. Evans on December 20, 2011

This is a slight continuation of the previous WebSockets versus REST… fight! post.

Buoyed with enthusiasm for WebSockets, I set about implementing a simple test harness of a WebSockets server in C#.NET using System.Net.HttpListener. Unfortunately, things did not go well. It turns out that HttpListener (and indeed the underlying HTTP Server API, a.k.a. http.sys) cannot be used at all to develop a WebSockets server on current versions of Windows. http.sys is simply too strict in its policing of what it believes to be correct HTTP protocol.

In an IETF discussion thread, a Microsoft fellow called Stefan Schackow was quoted as saying the following:

The current technical issue for our stack is that the low-level Windows HTTP driver that handles incoming HTTP request (http.sys) does not recognize the current websockets format as having a valid entity body. As you have noted, the lack of a content length header means that http.sys does not make the nonce bytes in the entity available to any upstack callers. That’s part of the work we will need to do to build websockets support into http.sys. Basically we need to tweak http.sys to recognize what is really a non-HTTP request, as an HTTP request.

Implementation-wise this boils down to how strictly a server-side HTTP listener interprets incoming requests as HTTP. For example a server stack that instead treats port 80 as a TCP/IP socket as opposed to an HTTP endpoint can readily do whatever it wants with the websockets initiation request.

For our server-side HTTP stack we do plan to make the necessary changes to support websockets since we want IIS and ASP.NET to handle websockets workloads in the future. We have folks keeping an eye on the websockets spec as it progresses and we do plan to make whatever changes are necessary in the future.

This is a damn shame. As it stands right now, Server 2008/R2 boxes cannot host WebSockets. At least, not whilst sharing ports 80 and 443 with the IIS web server. Sure, you could always write your WebSocket server to bind to those ports with a raw TCP socket, rewrite a ton of boilerplate HTTP code that http.sys can already handle, and then put up with the fact that you can’t share the port with IIS on the same box. But that is something most people, me included, do not want to do.

Obviously it isn’t really anyone’s fault, because back in the development time frame of Windows 7 and Server 2008/R2 (between 2006 and 2009) they could not have foreseen the WebSockets standard and the impact it might have on the design of APIs for HTTP servers.

The good thing is that Windows 8 Developer Preview seems to have this covered. According to the MSDN documentation, the HTTP Server API’s HttpSendHttpResponse function supports a new special flag called HTTP_SEND_RESPONSE_FLAG_OPAQUE that seems to suggest it will put the HTTP session into a sort of “dumb mode” whereby you can pass-thru pretty much whatever you want and http.sys won’t interfere:

HTTP_SEND_RESPONSE_FLAG_OPAQUE

Specifies that the request/response is not HTTP compliant and all subsequent bytes should be treated as entity-body. Applications specify this flag when it is accepting a Web Socket upgrade request and informing HTTP.sys to treat the connection data as opaque data.

This flag is only allowed when the StatusCode member of pHttpResponse is 101, switching protocols. HttpSendHttpResponse returns ERROR_INVALID_PARAMETER for all other HTTP response types if this flag is used.

Windows Developer Preview and later: This flag is supported.

Aside from the new System.Net.WebSockets namespace in .NET 4.5, there are also clear indications of this behaviour being exposed in the HttpListener of .NET 4.5 through a new HttpListenerContext.AcceptWebSocketAsync() method. The preliminary documentation seems to suggest that this method will support 2008 R2 and Windows 7. But this is almost certainly a misprint because I have inspected these areas of the .NET 4.5 libraries using Reflector and it is very clear that this is not the case:

The HttpListenerContext.AcceptWebSocketAsync() method directly calls into System.Net.WebSockets.WebSocketHelper (a static class), which has a corresponding AcceptWebSocketAsync() method of its own. This method then calls a sanity-check method tellingly named EnsureHttpSysSupportsWebSockets(), which evaluates an expression containing the words “ComNetOS.IsWin8orLater”. I need say no more.
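
For what it’s worth, the call pattern itself is simple enough. Here is a minimal F# sketch against the .NET 4.5 API (the URL prefix is just an example, and of course this will only get past EnsureHttpSysSupportsWebSockets() on Windows 8 or later):

open System
open System.Net
open System.Threading

let listener = new HttpListener()
listener.Prefixes.Add("http://localhost:8080/ws/")
listener.Start()

let context = listener.GetContext()
if context.Request.IsWebSocketRequest then
    // This is the call that bails out on anything older than Windows 8.
    let wsContext = context.AcceptWebSocketAsync(subProtocol = null).Result
    let buffer = Array.zeroCreate 4096
    let result = wsContext.WebSocket.ReceiveAsync(ArraySegment<byte>(buffer), CancellationToken.None).Result
    printfn "Received %d bytes" result.Count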

It seems clear now that Microsoft has chosen not to back-port this minor HTTP Server API improvement to Server 2008 R2 / Windows 7. So now we must all hope that Windows Server 8 runs a beta program in tandem with the Windows 8 client, and that the two launch within a month of each other. Otherwise Metro app development possibilities are going to be severely limited whilst we all wait for the Windows Server product to host our WebSocket server applications! Even so, it’s a shame that Server 2008 R2 won’t ever be able to host WebSockets.

It will be interesting to see whether Willy Tarreau (of HAProxy fame) comes up with some enhancements in his project that might benefit those determined enough to still want to host (albeit raw TCP-based) WebSockets on Server 2008 R2.


Automated builds and versioning, with Mercurial

Posted in .NET Framework, Automation, Software Design, Source Control by Nathan B. Evans on May 16, 2011

Yes, this blog post is about that seemingly perennial need to set up some sort of automatically incrementing version number system for your latest project.

As it happens, the “trigger” for me this time around was not a new project, but rather our switch from TFS to Mercurial for source control. It took some time for Mercurial to “bed in”, but it definitely has now. So then you reach that point where you start asking, “Okay, what else can we do with this VCS to improve the way we work?”.

Our first pass at automated versioning was to simply copy the status quo that had worked with TFS. This was rather crude at best. Basically we had an MSBuild task that would increment (based on our own strategy) the AssemblyFileVersionAttribute contained inside the GlobalAssemblyInfo.cs. The usual hoo-hah really, involving a simple regular expression etc. – a sketch of the idea follows below. This was fine. However we did not really like it, because it was effectively storing versioning information inside a file held in the repository. Separation of concerns and all that. It also caused a small amount of workflow overhead involving merging of named branches – with the occasional, albeit easily resolvable, conflict. Not a major issue, but not ideal either. Of course, not all projects use named branches. But we do; they’re totally awesome for maintaining many concurrently supported backward releases.
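
To illustrate, here is a minimal F# sketch of that old regex-based bump (the file name and the increment-the-last-part rule are illustrative, not our actual MSBuild task):

open System.IO
open System.Text.RegularExpressions

// Increments the final component of [assembly: AssemblyFileVersion("a.b.c.d")] in place.
let bumpFileVersion (path:string) =
    let text = File.ReadAllText(path)
    let bumped =
        Regex.Replace(text, @"AssemblyFileVersion\(""(\d+)\.(\d+)\.(\d+)\.(\d+)""\)", fun (m:Match) ->
            let part (i:int) = int m.Groups.[i].Value
            sprintf "AssemblyFileVersion(\"%d.%d.%d.%d\")" (part 1) (part 2) (part 3) (part 4 + 1))
    File.WriteAllText(path, bumped)

bumpFileVersion "GlobalAssemblyInfo.cs"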

Versioning strategies

The way I see it, there is only a small number of routes you can go down with project versioning:

  1. Some sort of yyyy.mm.dd timestamp strategy.
    This is great for “forward only” projects that don’t need to maintain supported backward releases. So for web applications, cloud apps etc – this is, I suspect, quite popular. For other projects, it simply doesn’t make sense. Because if you want to release a minor bug fix for a release from over a year ago it wouldn’t make any sense for the version to suddenly jump to the current date. How would you differentiate between major product releases?
  2. The typical major.minor.build.revision strategy.
    The Major and Minor would be “fixed” for each production branch in the repository. And then you’d increment the build and/or revision parts based on some additional strategy.
  3. A DVCS-only strategy where you use the global changeset hash.
    Unfortunately this is of limited use today on .NET projects, because the AssemblyVersionAttribute and AssemblyFileVersionAttribute will accept neither an arbitrary string nor a byte array. Of course there is nothing stopping you coming up with your own attribute (we called ours DvcsIdentificationAttribute) and including/updating that in your GlobalAssemblyInfo.cs (or equivalent) whenever you run a build. But it is of zero use to the .NET framework itself.
  4. Some sort of hybrid between #1 and #2 (and possibly even #3!).
    This is what we do. We use a major.minor.yymm.revision strategy, to be precise.

We like our #4 hybrid strategy because it brings us the following useful characteristics:

  • It has “fixed” Major and Minor parts. Great for projects with multiple concurrent versions.
  • It contains a cursory Year and Month that can be read at a glance. When chasing down a problem in a customer environment, it is simple things like this that can speed up diagnosis times.
  • An incremental Revision part that ensures each build in the same month has a unique index (a sketch of this bumping rule follows).
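
As a minimal F# sketch of that bumping rule (the reset-to-zero when the month rolls over is an assumption of the sketch):

open System

/// Computes the next version under the major.minor.yymm.revision strategy:
/// Major/Minor stay fixed per branch, Build is a yyMM stamp, and Revision
/// increments within the month, resetting when the month rolls over.
let nextVersion (latest:Version) (now:DateTime) =
    let yymm = int (now.ToString("yyMM"))
    let revision = if latest.Build = yymm then latest.Revision + 1 else 0
    Version(latest.Major, latest.Minor, yymm, revision)

// e.g. nextVersion (Version "5.4.1105.5") (DateTime(2011, 5, 16)) gives 5.4.1105.6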

So then, how did we implement this strategy on the Microsoft stack and with Mercurial?

Implementation

The key to implementing this strategy is first and foremost retrieving from the repository the “most recent” tag for the current branch. Originally I had big plans here to write some .NET library to walk the Mercurial revlog file structure. It would have been a cool project to learn some of the nitty-gritty details of how Mercurial works under the hood. Unfortunately, I soon discovered that Mercurial has a template command available that already does what I need. It’s called the “latesttag” template. It’s really simple to use as well, for example:

$ hg parents --template {latesttag}
> v5.4.1105.5-build

There is also a related and potentially useful template called “latesttagdistance”. This counts, as it walks the revlog tree, the number of changesets it passes in search of the latest tag. You could use this value as the increment for the Revision part of the strategy.
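
For example, continuing the earlier session (the output shown is illustrative):

$ hg parents --template {latesttag}+{latesttagdistance}
> v5.4.1105.5-build+3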

At this point most of the Mercurial fan base will go off and write a Bash or Python script to do the job. Unfortunately, in .NET land it’s not quite that simple, as we all know. I could have written a, erm, “posh” PowerShell script to do it, for sure. But then I’d have to wire that into the MSBuild script – which I suspect would be a bit difficult and have all sorts of gotchas involved.

So I wrote a couple of MSBuild tasks to do it, with the interesting one aptly named MercurialAssemblyFileVersionUpdate:

using System;
using System.IO;
using System.Text.RegularExpressions;
using Microsoft.Build.Framework;

public class MercurialAssemblyFileVersionUpdate : AssemblyFileVersionUpdate {

    private Version _latest;

    public override bool Execute() {
        var cmd = new MercurialCommand {
            Repository = Path.GetDirectoryName(BuildEngine.ProjectFileOfTaskNode),
            Arguments = "parents --template {latesttag}"
        };

        if (!cmd.Execute()) {
            Log.LogMessagesFromStream(cmd.StandardOutput, MessageImportance.High);
            Log.LogError("The Mercurial Execution task has encountered an error.");
            return false;
        }

        _latest = ParseOutput(cmd.StandardOutput.ReadToEnd());

        return base.Execute();
    }

    protected override Version GetAssemblyFileVersion() {
        return _latest;
    }

    private Version ParseOutput(string value) {
        return string.IsNullOrEmpty(value) || value.Equals("null", StringComparison.InvariantCultureIgnoreCase)
                   ? base.GetAssemblyFileVersion()
                   : new Version(ParseVersionNumber(value));
    }

    private string ParseVersionNumber(string value) {
        var ver_trim = new Regex(@"(\d+\.\d+\.\d+\.\d+)", RegexOptions.Singleline | RegexOptions.CultureInvariant);

        var m = ver_trim.Match(value);
        if (m.Success)
            return m.Groups[0].Value;

        throw new InvalidOperationException(
            string.Format("The latest tag in the repository ({0}) is not a parsable version number.", value));
    }
}

Click here to view the full source, including a couple of dependency classes that you’ll need for the full solution.

With that done, it was just a case of updating our MSBuild script to use the new bits:

<Target Name="incr-version">
  <MercurialExec Arguments='revert --no-backup "$(GlobalAssemblyInfoCsFileName)"' />
  <Message Text="Updating '$(GlobalAssemblyInfoCsFileName)' with new version number..." />
  <MercurialAssemblyFileVersionUpdate FileName="$(GlobalAssemblyInfoCsFileName)">
    <Output TaskParameter="VersionNumber" PropertyName="VersionNumber" />
    <Output TaskParameter="MajorNumber" PropertyName="MajorNumber" />
    <Output TaskParameter="MinorNumber" PropertyName="MinorNumber" />
    <Output TaskParameter="BuildNumber" PropertyName="BuildNumber" />
    <Output TaskParameter="RevisionNumber" PropertyName="RevisionNumber" />
  </MercurialAssemblyFileVersionUpdate>
  <Message Text="Done update to '$(GlobalAssemblyInfoCsFileName)'." />
  <Message Text="Tagging current changeset in local repo." />
  <MercurialExec Arguments="tag --force v$(VersionNumber)-build" />
  <Message Text="Pushing commit to master server." />
  <MercurialExec Arguments='push' />
  <Message Text="All done." />
</Target>

Of course, don’t forget to import your tasks into the script, à la:

<UsingTask AssemblyFile="Build Tools\AcmeCorp.MsBuildTasks.dll"
           TaskName="AcmeCorp.MsBuildTasks.MercurialAssemblyFileVersionUpdate" />

<UsingTask AssemblyFile="Build Tools\AcmeCorp.MsBuildTasks.dll"
           TaskName="AcmeCorp.MsBuildTasks.MercurialExec" />

You’ll notice that the parsing implementation is quite forgiving. It is a regular expression that will extract anything that looks like a parsable System.Version string. This is cool because it means the tags themselves don’t have to be in the exact form that System.Version requires. You can leave a “v” in front, or add a suffix like “-build”. Whatever your convention is, it just makes things a bit more convenient.

Remaining notes

The implementation above will generate global tags that are revision-controlled in the repository. I did consider using “local tags”, as those wouldn’t need to be held in the repository at all and could just sit on the automated build server. However, the “latesttag” template unfortunately does not work with local tags; it only appears to work with global tags. It would also, of course, mean that developers wouldn’t benefit from the tags at all, which would be a shame.

Mercurial stores global tags inside a .hgtags file in the root working directory. The file is revision controlled, but only for auditing purposes. It does not matter which version of it you have in your working directory at any given time.
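
The file itself is plain text with one tag per line: a 40-character changeset hash followed by the tag name. The hashes below are purely illustrative:

0123456789abcdef0123456789abcdef01234567 v5.4.1105.4-build
89abcdef0123456789abcdef0123456789abcdef v5.4.1105.5-build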

You may still require a workflow to perform merges between your branches after running a build, in order to propagate the .hgtags file immediately (and not leave it to someone else!). If any merge conflicts arise, you should take the changes from both sides, as the .hgtags file can be considered “cumulative”.

Related reading

http://kiln.stackexchange.com/questions/2194/best-practice-generating-build-numbers
http://stackoverflow.com/questions/4726257/most-recent-tag-before-tip-in-mercurial
http://www.jaharmi.com/2010/03/24/generate_version_numbers_for_mac_os_x_package_installers_with_mercurial_and_semantic_vers
http://mercurial.selenic.com/wiki/NearestExtension


Memory leaks with an infinite-lifetime instance of MarshalByRefObject

Posted in .NET Framework, Software Design by Nathan B. Evans on April 17, 2011

Recently we discovered an issue with the way our product performs AppDomain sandboxing. We were leaking small amounts of memory, quite badly, from every sandbox and sandbox operation created over the lifetime of the process. After much investigation using CLR Profiler, it transpired that our subclasses of MarshalByRefObject which return “null” from their InitializeLifetimeService() override cause a permanent reference to be held open by a fairly low-level area of the .NET remoting stack, specifically System.Runtime.Remoting.ServerIdentity (which is marked internal). This was preventing any of our objects derived from MarshalByRefObject from being garbage collected, and thus the memory footprint would keep growing and growing. Fortunately we run our servers with rather huge page files, so it never got to a point where customers were affected, but obviously it was something we needed to fix fairly urgently.

After playing around a lot with all the lifetime services stuff, like using the ISponsor interface to do sponsorship with nasty hacky timeout values etc. (which would have had side-effects for our product, but which we were about to resign ourselves to!), we came across a much better alternative solution in the form of RemotingServices.Disconnect(). Hoorah. Is it just me, or does the whole remoting story in .NET need a damn good overhaul? Cross-AppDomain communication deserves something better.

With this discovery, I came up with a useful class that improves upon MarshalByRefObject by adding deterministic disposal of both itself and any nested objects (the nesting is more an implementation detail for us, but I’m sure it could be useful for others).

/// <summary>
/// Enables access to objects across application domain boundaries.
/// This type differs from <see cref="MarshalByRefObject"/> by ensuring that the
/// service lifetime is managed deterministically by the consumer.
/// </summary>
public abstract class CrossAppDomainObject : MarshalByRefObject, IDisposable {

    private bool _disposed; 

    /// <summary>
    /// Gets an enumeration of nested <see cref="MarshalByRefObject"/> objects.
    /// </summary>
    protected virtual IEnumerable<MarshalByRefObject> NestedMarshalByRefObjects {
        get { yield break; }
    }

    ~CrossAppDomainObject() {
        Dispose(false);
    }

    /// <summary>
    /// Disconnects the remoting channel(s) of this object and all nested objects.
    /// </summary>
    private void Disconnect() {
        RemotingServices.Disconnect(this);

        foreach (var tmp in NestedMarshalByRefObjects)
            RemotingServices.Disconnect(tmp);
    }

    public sealed override object InitializeLifetimeService() {
        //
        // Returning null designates an infinite non-expiring lease.
        // We must therefore ensure that RemotingServices.Disconnect() is called when
        // it's no longer needed otherwise there will be a memory leak.
        //
        return null;
    }

    public void Dispose() {
        GC.SuppressFinalize(this);
        Dispose(true);
    }

    protected virtual void Dispose(bool disposing) {
        if (_disposed)
            return;

        Disconnect();
        _disposed = true;
    }

}

It was then just a case of modifying a few of our classes to derive from this instead of MarshalByRefObject, and then updating a couple of other locations in our codebase to ensure that Dispose() was called during clean-up.
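
Usage then looks something like this (an F# sketch with a hypothetical SandboxService); the important part is the use binding, which guarantees that Dispose(), and therefore RemotingServices.Disconnect(), runs deterministically:

open System

/// A hypothetical sandboxed service, purely for illustration.
type SandboxService() =
    inherit CrossAppDomainObject()
    member this.Ping() = "pong"

let runSandboxed () =
    let sandbox = AppDomain.CreateDomain("sandbox")
    try
        // 'use' guarantees Dispose(), which disconnects the remoting channel so the
        // object (and its internal ServerIdentity) can be garbage collected.
        use service =
            sandbox.CreateInstanceAndUnwrap(
                typeof<SandboxService>.Assembly.FullName,
                typeof<SandboxService>.FullName) :?> SandboxService
        printfn "%s" (service.Ping())
    finally
        AppDomain.Unload(sandbox)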