Integration Testing with Docker Dependencies

Chris Dykstra
13 min read · Feb 24, 2021

I’ve found myself with about a week between jobs. I wanted to take this time to recreate, document, and publish some of the things I’ve used containers for to make local development easier and not dependent on anything outside the local machine. I am a big proponent of pursuing a scenario where local development is as easy as Clone, Build, Run and where deployment to tiers other than local is similarly easy. As it turns out, these two things are commonly one and the same.

This exercise is as much for me as it is for you. I want to have recreated the things I’ve done while working for someone else so that I have a copy that I own; the code I have produced while on the clock does not belong to me and, if I want to reuse something I’ve already done, I either need to recreate my own copy, commit theft, or rewrite the code for someone else to own once again. I want to have fully explored some more options to verify to myself that the ones I pick in the future are actually the ones I want to pick. Bonus: I can publish what I know and share it with you so that you can use it or build off of it.

My experience and examples here are with Visual Studio, Docker Desktop, and Windows 10. That doesn’t mean you can’t use these strategies with some other IDE and/or container tool. These all happen to fit together without too much work, and I primarily develop .NET code in a Windows environment.

Articles in this series:

Containers for Local Stacks: how to use Docker Compose to create a local development environment

Containerized Build Agent: how to set up a Docker container image that can be used as a CI environment in Azure DevOps

Emulating Cloud Services with Containers: how to use a container to run or emulate cloud services locally without needing to install additional programs or services and without needing to pay to use the real thing before you’re ready to deploy

Integration Testing with Docker Dependencies (this story): learn how to use Docker from within an automated testing framework to spin up integration test dependencies and dispose of them when tests have completed

I have experienced some hot debate over the names given to automated tests at different levels of isolation. The only agreed-upon term seems to be “Unit Test,” meaning a test where a single unit of code is isolated from other units, then test code exercises the unit and verifies that the correct things happened as a result. Even with that simple definition there is argument about what a “unit” is… not surprising, since I rarely encounter anyone defining their assumed definitions before they start arguing (I am not an exception 😉). Is exposing your test or your unit to the .NET framework no longer a unit test? Does it make any meaningful difference whether it is or isn’t? This is further complicated by my observation that it is rare for a developer to actually know the difference between (much less make separate use of) the practices of scripted programming, functional programming, and object-oriented programming, which all affect the definition of “unit” in their respective contexts. I digress; this article isn’t about that. It’s not even about what makes a test a unit test, an integration test, or whatever other name you want to give it.

The definitions I am going to use (more to have words to work with than anything; the continuum of isolation matters more than what each point on it is called) are as follows: Unit tests isolate a meaningful collection of code from other code, except for the runtime and test code/test framework. Integration tests allow one or more dependencies to enter into the test in some way or another. End to End tests actively seek to eliminate any isolation, exercising everything together. Commonly, End to End tests try to emulate the most important user interactions the same way a user would perform them, verifying those interactions worked the same way a user might look for feedback on the success or failure of an activity.

If you aren’t familiar with the concept of the “automated test pyramid,” the idea is that your project should ideally (under normal circumstances) have an inverse relationship between the number of tests and the level of isolation. By that I mean that, using the definitions above, you should have mostly unit tests, some integration tests, and a handful of end to end tests in most projects. As you let your isolation slip away your tests become more brittle, run longer, impose dependencies on their environment, and cost more to maintain. On the flip side, if your units become so isolated they are no longer meaningfully testable on their own, your tests have become valueless. Moving up the continuum also correlates with flakiness (the tests “just fail” sometimes), though in my experience not as tightly as the other factors.

In an effort to keep tests meaningful while also managing their stability, the first line of defense is good code design. I’m not about to try to roll that topic into this article. After that you’ll likely run into some scenarios where a unit of code can be unit tested but those tests don’t verify the functionality of that unit completely. For example, say you have a command class that executes a SQL stored procedure: you can unit test that the correct stored procedure name is used and that the parameters have the expected names and values, but your test does not give confidence that this stored procedure class is correct without actually connecting it to a SQL database of some kind; it only shows that your code interacts with the API of your SQL access library (likely ADO.net). I’ve found it pretty common to be able to get partial coverage from unit tests and to occasionally need to augment with integration tests. With good design your integration tests are likely to only need to expose one isolation vector in your artificial test environment, by virtue of only having a single one possible.
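To make that concrete, here’s a hypothetical sketch of such a test (every type and name below is invented for illustration; none of it comes from this article’s repo). It can prove what we asked the database to run, but not that the procedure itself is correct:

[TestMethod]
public void Execute_AllCases_UsesExpectedProcedureAndParameters()
{
    // FakeDbCommand is a hypothetical hand-written double that records what is
    // set on it; DeactivateUserCommand is the hypothetical unit under test.
    var command = new FakeDbCommand();
    var sut = new DeactivateUserCommand(() => command);

    sut.Execute(userId: 42);

    // We can verify we asked for the right procedure with the right parameters...
    Assert.AreEqual("dbo.DeactivateUser", command.CommandText);
    Assert.AreEqual(CommandType.StoredProcedure, command.CommandType);
    Assert.AreEqual(42, command.ParameterValues["@userId"]);
    // ...but not that dbo.DeactivateUser exists or does the right thing.
}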

Containers can be a great way to create a temporary instance of a dependency in any circumstance, not only for testing. When used in an integration test they give your tests a good deal of control over their dependencies and, by their nature, containers are disposable and duplicable; you can’t bind two containers to the same host port, but you can keep your tests isolated from each other so long as each one uses a different port, even if they all rely on the same service coming from the same image. This can help you fight back the flakiness and brittleness that comes with exposing your test suite to outside dependencies, by giving you the opportunity to avoid shared state corruption or the outside influence that comes with using a real environment (e.g. using the dev database to test your SQL interaction code). It also gives you the opportunity (and burden) of making the setup of your test environment part of the test, pushing the documentation your tests provide even further.
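As a minimal sketch of that one-port-per-test idea (the helper below is my own illustration, not part of this article’s repo), you can ask the OS for a currently free port before starting each container:

using System.Net;
using System.Net.Sockets;

// Ask the OS for a free TCP port (port 0 means "you pick") so parallel
// containers don't collide on the host.
static int GetFreeTcpPort()
{
    var listener = new TcpListener(IPAddress.Loopback, 0);
    listener.Start();
    var port = ((IPEndPoint)listener.LocalEndpoint).Port;
    listener.Stop();
    return port;
}

There’s a small race between releasing the port and the container binding it, but for test runs on a local machine it works well in practice.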

The Docker Engine has a REST API that you can use to programmatically control containers. You can find documentation of this API on the Docker documentation page (make sure to match the documentation version to your Docker version). The Docker CLI makes use of this API, so anything you can do at the command line you should be able to do with the REST API… though I have definitely found the CLI to be more self-explanatory, and better documented where it doesn’t explain itself.

While you could consume the REST API directly, there’s an option that I think is better. There are official and unofficial SDKs listed on the Docker Engine SDKs page. We’re going to go through an example of using the .NET SDK, Docker.DotNet, one of the ones that is unofficial according to the Docker page but is a .NET Foundation project. I found the documentation in the readme of that repo to be out of date and not particularly helpful while producing the code for this article. What I have found helpful in using it is knowing that it’s little more than a façade over some generated code from the engine API, so I can reference the engine API documentation to explain the Docker.DotNet API I’m seeing (which can, itself, sometimes lack the explanation I am looking for).
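Before wiring the SDK into tests, a quick way to convince yourself it can reach your Docker Engine is a tiny console program. This is my own sketch; ListContainersAsync is the SDK’s rough equivalent of docker ps:

using System;
using System.Threading.Tasks;
using Docker.DotNet;
using Docker.DotNet.Models;

public static class DockerConnectionCheck
{
    public static async Task Main()
    {
        // Connects to the local engine: a named pipe on Windows, a unix socket on Linux
        using var configuration = new DockerClientConfiguration();
        using var client = configuration.CreateClient();

        // Roughly `docker ps -a`
        var containers = await client.Containers.ListContainersAsync(new ContainersListParameters { All = true });
        foreach (var container in containers)
        {
            Console.WriteLine($"{container.ID[..12]} {container.Image} {container.State}");
        }
    }
}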

In a previous article in this series (Emulating Cloud Services with Containers) we talked about a service that had a dependency on Azure Storage, specifically Blob Storage. We used the Azurite Docker container image to create a local emulator for Blob Storage for local development, to avoid paying for the real thing and to allow us to put whatever we wanted into the container and discard it when we no longer want it. At the time of the previous article this project had no automated tests (gasp!) because it was only a PoC. Now we’re going to talk about adding some automated tests. We’ll add unit tests where possible; however, most of this PoC application is code that interacts with Blob Storage, meaning most of our test code is going to be integration test code (or less-than-useful unit test code that is likely to let defects slip past us). We most definitely do not want to pay for automated test execution in cloud storage costs, nor would we want to have to make sure the tests clean up after themselves.

Let’s start by looking at the one unit test class added to the project in commit 26e0d15, covering the single controller in the project:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using BlobStorageAdapter.Controllers;
using BlobStorageAdapter.Models;
using BlobStorageAdapter.TestDoubles.DataAccess;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace BlobStorageAdapter.Tests
{
    [TestClass]
    public class FileControllerTests
    {
        private readonly FileController _controller;
        private readonly FakeGetFilesCommand _getFilesCommand = new();
        private readonly FakeSaveFilesCommand _saveFilesCommand = new();

        public FileControllerTests()
        {
            _controller = new FileController(() => _getFilesCommand, () => _saveFilesCommand);
        }

        [TestMethod]
        public async Task AddFile_AllCases_SavesContentInTxt()
        {
            var content = Guid.NewGuid().ToString();

            await _controller.AddFile(content);

            Assert.AreEqual(1, _saveFilesCommand.Files.Count(f => f.content == content));
            var savedFile = _saveFilesCommand.Files.Single(f => f.content == content);
            StringAssert.EndsWith(savedFile.name, ".txt");
        }

        [TestMethod]
        public async Task AddFile_AllCases_ReturnsFileModel()
        {
            var content = Guid.NewGuid().ToString();

            var file = await _controller.AddFile(content);

            Assert.AreEqual(content, file.Content);
            Assert.AreEqual(1, _saveFilesCommand.Files.Count(f => f.name.Contains(file.Id.ToString())));
        }

        [TestMethod]
        public async Task GetFiles_FilesExist_ReturnsFileModelWithoutTxtExtensionName()
        {
            var files = Enumerable.Repeat<Func<File>>(() => new File(Guid.NewGuid().ToString(), Guid.NewGuid()), 5)
                .Select(f => f())
                .ToArray();
            foreach (var (content, id) in files) _getFilesCommand.Files.Add(($"{id}.txt", content));

            var retrievedFiles = _controller.GetFiles();

            var actualFiles = new List<File>();
            await foreach (var file in retrievedFiles)
            {
                actualFiles.Add(file);
            }
            Assert.IsTrue(files.OrderBy(f => f.Id).SequenceEqual(actualFiles.OrderBy(f => f.Id)));
        }
    }
}

You’ll notice that we’ve cut the controller off from any dependencies (except for the runtime and the framework) and replaced them with test doubles. In this case they are hand-written test doubles which can be found in a TestDoubles project adjacent to the test project. My personal preference (tempered with pragmatism) is to use so-called manual test doubles for faking anything internal to my application and a dynamic mocking framework (e.g. Moq) for anything external. I could explain why I do this… if we were here to talk about unit testing practices.
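For illustration, here’s roughly what the hand-written double for the save command might look like. This is a sketch based on how the tests use it; the real one lives in the repo’s TestDoubles project, and the ISaveFileCommand signature is inferred from the integration tests below:

using System.Collections.Generic;
using System.Threading.Tasks;
using BlobStorageAdapter.DataAccess;

namespace BlobStorageAdapter.TestDoubles.DataAccess
{
    // Records every save so tests can assert on what was written
    public class FakeSaveFilesCommand : ISaveFileCommand
    {
        public List<(string name, string content)> Files { get; } = new();

        public Task Save(string name, string content)
        {
            Files.Add((name, content));
            return Task.CompletedTask;
        }
    }
}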

Next up, let’s look at an example of one of the integration tests.

using System;
using System.IO;
using System.Threading.Tasks;
using BlobStorageAdapter.DataAccess;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using MemoryStream = System.IO.MemoryStream;

namespace BlobStorageAdapter.Tests.DataAccess
{
    [TestClass]
    [TestCategory("Integration")]
    public class SaveFileCommandIntegrationTests : BlobStorageIntegrationTest
    {
        private ISaveFileCommand _command;

        [TestInitialize]
        public void TestInitialize()
        {
            _command = new SaveFileCommand(ContainerClient);
        }

        [TestMethod]
        public async Task Save_FileNameUniqueInBlob_UploadsContent()
        {
            var (name, content) = (Guid.NewGuid().ToString(), Guid.NewGuid().ToString());

            await _command.Save(name, content);

            using var reader = new StreamReader((await ContainerClient.GetBlobClient(name).DownloadAsync()).Value.Content);
            var actualContent = await reader.ReadToEndAsync();
            Assert.AreEqual(content, actualContent);
        }

        [TestMethod]
        public async Task Save_DuplicateFileName_ThrowsException()
        {
            var name = Guid.NewGuid().ToString();
            await ContainerClient.GetBlobClient(name).UploadAsync(new MemoryStream());

            var exception = await Assert.ThrowsExceptionAsync<Azure.RequestFailedException>(() => _command.Save(name, string.Empty));

            StringAssert.Contains(exception.Message, "already exists");
        }
    }
}

You might notice that this class has the suffix “IntegrationTests” instead of “Tests,” has the TestCategory attribute with the value “Integration,” and derives from a base class, BlobStorageIntegrationTest, which is shared with the other two integration test classes to provide some shared plumbing for the shared dependency on an Azurite instance. On the surface it looks pretty similar to a unit test other than that. We still have the Arrange, Act, Assert pattern in our test bodies, we follow the same test method naming conventions, and we use the same testing tools. If you’re cunning you might spot a test verifying that an exception is thrown, and that the exception is not thrown by any of the code in this project but rather by the Blob Storage service. This is a prime example of something that might slip through if we were trying to unit test this, missing out on some of the meaningful interactions between our code and the service it integrates the rest of our system with.
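That category is more than decoration: the standard VSTest filter syntax lets you include or exclude the Docker-dependent tests in a given run. For example, with the dotnet CLI:

dotnet test --filter "TestCategory!=Integration"
dotnet test --filter "TestCategory=Integration"

The first skips the integration tests (handy for a quick inner loop without Docker running); the second runs only them.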

Now we’re getting to the meat of things, where our tests start to interact with Docker and control a container to fulfil the dependency requirement of the integration test classes. Let’s check out the shared base class, BlobStorageIntegrationTest.

using System;
using Azure.Storage.Blobs;
using DockerDaemon;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace BlobStorageAdapter.Tests.DataAccess
{
    [TestClass]
    public abstract class BlobStorageIntegrationTest
    {
        protected BlobServiceClient BlobStorageClient { get; private set; }
        protected BlobContainerClient ContainerClient { get; private set; }

        private const string DefaultConnectionString = "DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;";
        private readonly string _containerName = Guid.NewGuid().ToString();
        private static IContainerLifetime? _blobStorageContainer;

        [AssemblyInitialize]
        public static void StartContainer(TestContext _)
        {
            _blobStorageContainer = ContainerBuilder.BuildTransientContainer("mcr.microsoft.com/azure-storage/azurite")
                .ExposingPort(10000, 10000)
                .Start()
                .Result;
        }

        [TestInitialize]
        public void BlobStorageInitialize()
        {
            BlobStorageClient = new BlobServiceClient(DefaultConnectionString);
            var container = BlobStorageClient.CreateBlobContainer(_containerName);
            ContainerClient = BlobStorageClient.GetBlobContainerClient(container.Value.Name);
        }

        [AssemblyCleanup]
        public static void DisposeContainer()
        {
            _blobStorageContainer?.Dispose();
        }
    }
}

This class provides a BlobServiceClient and a BlobContainerClient to any inheriting classes. To do so it builds what’s been named here a “transient container” and then creates instances of those two types from the Azure.Storage.Blobs SDK nuget package. You might start to recognize some bits here of what we put together in the previous article, if you read it. Specifically the container name, the exposed port, and the default connection string used to connect to Azurite once it is started. The AssemblyInitialize and AssemblyCleanup attributes are there to give us a hook into the test lifetime provided by MSTest V2 in order to create a single container when the test assembly starts up (before any tests are run) and then dispose of it when the assembly spins down (after all tests are done). The TestInitialize attribute gives a second position in that test lifecycle guaranteed to be after the container has been started in our AssemblyInitialize method; it wouldn’t work so well to try to connect to and interact with an Azure Storage server that hadn’t been brought online yet.
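The builder and lifetime abstractions are two small interfaces that aren’t printed in this article. Reconstructed from how they’re used here (a sketch; the actual definitions are in the repo), they look roughly like this:

using System;
using System.Threading.Tasks;

namespace DockerDaemon
{
    // Fluent configuration for a throwaway container
    public interface IContainerBuilder
    {
        IContainerBuilder ExposingPort(int hostPort, int containerPort);
        IContainerBuilder WithEnvironmentVariable(string name, string value);
        Task<IContainerLifetime> Start();
    }

    // A handle to a started container; disposing it tears the container down
    public interface IContainerLifetime : IDisposable, IAsyncDisposable
    {
    }
}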

Next up we have the ContainerBuilder class, which does the “turning on” part of our interaction with Docker through the API given to us in Docker.DotNet.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Docker.DotNet;
using Docker.DotNet.Models;

namespace DockerDaemon
{
    public class ContainerBuilder : IContainerBuilder
    {
        private readonly string _containerName;
        private readonly IDictionary<int, int> _exposedPorts = new Dictionary<int, int>();
        private readonly IDictionary<string, string> _environmentVariables = new Dictionary<string, string>();

        public static IContainerBuilder BuildTransientContainer(string containerName) =>
            new ContainerBuilder(containerName);

        private ContainerBuilder(string containerName)
        {
            if (string.IsNullOrEmpty(containerName)) throw new InvalidOperationException();
            _containerName = containerName;
        }

        public IContainerBuilder ExposingPort(int hostPort, int containerPort)
        {
            _exposedPorts[hostPort] = containerPort;
            return this;
        }

        public IContainerBuilder WithEnvironmentVariable(string name, string value)
        {
            _environmentVariables[name] = value;
            return this;
        }

        public async Task<IContainerLifetime> Start()
        {
            using var configuration = new DockerClientConfiguration();
            using var client = configuration.CreateClient();

            // The equivalent of `docker pull <image>:latest`
            await client.Images.CreateImageAsync(
                new ImagesCreateParameters {FromImage = _containerName, Tag = "latest"}, null,
                new Progress<JSONMessage>());

            // Creates the container in a stopped state
            var container = await client.Containers.CreateContainerAsync
            (
                new CreateContainerParameters
                {
                    Image = _containerName,
                    Env = _environmentVariables.Select(ev => $"{ev.Key}={ev.Value}").ToList(),
                    ExposedPorts = _exposedPorts.Select(p => $"{p.Value}/tcp").ToDictionary(p => p, _ => new EmptyStruct()),
                    HostConfig = new HostConfig
                    {
                        // Bind each container port to the requested host port
                        PortBindings = _exposedPorts.ToDictionary
                        (
                            p => $"{p.Value}/tcp",
                            p => (IList<PortBinding>) new List<PortBinding>
                            {
                                new() {HostPort = p.Key.ToString()}
                            }
                        )
                    }
                }
            );

            await client.Containers.StartContainerAsync(container.ID, new ContainerStartParameters());
            return new TransientContainerLifetime(container.ID);
        }
    }
}

The first part of this class follows a builder pattern exposing a fluent interface. Nothing too special there. The Start method is where it gets interesting. Docker.DotNet takes care of configuring the connection to the Docker Daemon for us in DockerClientConfiguration and CreateClient, even accounting for the possibility of running on Linux or Windows. After that, CreateImageAsync gives us the same thing as docker pull, except it doesn’t assume we want the latest version when no tag is specified (for whatever reason, pulling all versions is the default when no tag is given 🤷‍♀️).

CreateContainerAsync will give us a container in a stopped state, with the properties of CreateContainerParameters giving us control over the command given to the engine. Image is the name of the image, Env lets us supply environment variables, ExposedPorts lets us specify which ports are open (the EmptyStruct value in the dictionary is a bit of an oddity even if it does align with the documentation from Docker… just go with it), and the HostConfig lets us control how the container interacts with the host machine, specifically binding localhost ports to container ports. Finally, StartContainerAsync takes that newly created, stopped container and puts it into a started state.

From there we hand things off to the container lifetime object we produce to handle disposing of the container at the correct time (and that time is at AssemblyCleanup). I’ve named it TransientContainerLifetime because this lifetime object is meant to control a transient container i.e. one that gets discarded as soon as it is no longer needed instead of one that might persist past when the tests that spun it up have been shut down.

using System;
using System.Threading.Tasks;
using Docker.DotNet;
using Docker.DotNet.Models;

namespace DockerDaemon
{
    public class TransientContainerLifetime : IContainerLifetime
    {
        private readonly string _containerId;

        internal TransientContainerLifetime(string containerId)
        {
            _containerId = containerId;
        }

        public async ValueTask DisposeAsync()
        {
            using var configuration = new DockerClientConfiguration();
            using var client = configuration.CreateClient();
            await client.Containers.RemoveContainerAsync
            (
                _containerId,
                new ContainerRemoveParameters {Force = true}
            );
            GC.SuppressFinalize(this);
        }

        public void Dispose() => DisposeAsync().AsTask().Wait();

        ~TransientContainerLifetime()
        {
            Dispose();
        }
    }
}

The most interesting part here is DisposeAsync. The finalizer and Dispose methods both lead to the async disposal method, giving a synchronous API and a failsafe in case the object isn’t disposed of properly. DisposeAsync creates a Docker client in the same way as the container builder, then calls RemoveContainerAsync, which stops the container if it is running (Force = true) and deletes it.

That’s all there is to it! If you clone this repo and run the tests (assuming you have a version of Docker Desktop installed that uses a Docker Engine v1.41 compatible API) this code will take care of pulling, starting, using, and stopping an Azurite container instance, with no environment setup needed other than having a Docker Engine running on the test runner machine. Even though the tests will pull the image for you, I’ve found that it feels better, as a human running these tests, to manually pull the image and watch the CLI output as the layers are downloaded and extracted; that’s just my preference though. You can watch things happen if you have your Docker UI open to the Containers tab while the tests run, if you’d like to see things in action.
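If you’d like to do that manual pull yourself before the first test run, it’s a single command, using the same image name the base class passes to BuildTransientContainer:

docker pull mcr.microsoft.com/azure-storage/azurite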
