Emulating Cloud Services with Containers

Chris Dykstra
9 min read · Feb 23, 2021

I’ve found myself with about a week between jobs. I wanted to take this time to recreate, document, and publish some of the things I’ve used containers for to make local development easier and not dependent on anything outside the local machine. I am a big proponent of making local development as easy as Clone, Build, Run, and making deployment to tiers other than local just as easy. As it turns out, these two goals are commonly one and the same.

This exercise is as much for me as it is for you. I want to have recreated the things I’ve done while working for someone else so that I have a copy that I own; the code I have produced while on the clock does not belong to me and, if I want to reuse something I’ve already done, I either need to recreate my own copy, commit theft, or rewrite the code for someone else to own once again. I also want to fully explore a few more options so I can be confident that the ones I pick in the future are actually the ones I want. Bonus: I can publish what I know and share it with you so that you can use it or build off of it.

My experience and examples here are with Visual Studio, Docker Desktop, and Windows 10. That doesn’t mean you can’t use these strategies with some other IDE and/or container tool; these happen to fit together without too much work, and I primarily develop .NET code in a Windows environment.

Articles in this series:

Containers for Local Stacks: how to use Docker Compose to create a local development environment

Containerized Build Agent: how to set up a Docker container image that can be used as a CI environment in Azure DevOps

Emulating Cloud Services with Containers (this story): how to use a container to run or emulate cloud services locally without needing to install additional programs or services and without needing to pay to use the real thing before you’re ready to deploy

Integration Testing with Docker Dependencies: learn how to use Docker from within an automated testing framework to spin up integration test dependencies and dispose of them when tests have completed

If you don’t want to pay for live cloud services while you’re doing initial development, or you don’t want your development cruft left behind when you’re ready to push version 1.0 out, you might find some use in running services (or service emulators) locally. You could install these directly onto your machine, but containers give you more transience (if you make a mistake you can simply discard the container and start a new one, and you only need a container runtime installed, not the emulator itself) and an easier start up (two commands to document, or make it part of your build using Docker Compose as outlined in Containers for Local Stacks; no installations needed to get set up).

I’m going to use a Proof of Concept project that has a dependency on Azure Blob Storage to show how the Azurite emulator of Azure Storage, running in a container, can be used to do local development without paying anything until you’re ready to put real blobs in the cloud. This is also a great excuse to cover the basics of two very important Docker CLI commands, pull and run. After we talk about the Docker commands I’ll give a quick run-through of how the code works, even if this series is trying to focus on containers.

To start, we need to find the container. I happened to already know that Azurite was the Azure Storage emulator I wanted (the other option, Azure Storage Emulator, has been superseded by Azurite). Originally I found this out through research on Google, which is how I would recommend you find such containers for other services. Microsoft’s own documentation pages can lead you here, at least for Azurite. From there I searched for “Azurite Docker”, which led me to the DockerHub page for it. This gives you some basic instructions and links to the more detailed documentation in the project’s GitHub readme.

Let’s get started by downloading the image locally so that our local Docker engine can run it. The command is given right on the DockerHub page: docker pull mcr.microsoft.com/azure-storage/azurite. What is this doing? We are using Docker’s built-in functionality to download the image named mcr.microsoft.com/azure-storage/azurite from a remote registry. This relies on some defaults. First, when an image name doesn’t include a registry host, Docker pulls from DockerHub; DockerHub is not the only repository option, you can connect to others and create or host your own (or pay for private ones), it is simply the default and the go-to for publicly available images. In this case the name starts with mcr.microsoft.com, so the image actually comes from the Microsoft Container Registry. The second default is that if no image tag is specified Docker uses the tag latest which is, by convention, the tag for the most recent stable version of a container image. More information on docker pull can be found on the Docker documentation page for it. Docker will download the image’s layers and report when the pull has completed.

When that has finished we want to actually run an instance of Azurite. Docker run is the command we want for that: docker run -p 10000:10000 -p 10001:10001 mcr.microsoft.com/azure-storage/azurite. The last argument you should recognize from the pull command; it is the name of the image we pulled. The two -p arguments publish ports from the container to localhost: we are mapping port 10000 of the container to port 10000 of the host machine, and likewise for 10001, which are the ports where the Blob and Queue services are hosted. This command can be found on the DockerHub page.

In this example we are purposefully making this container’s data disposable. If you want to persist the data you put into the container, you can mount a volume using the -v option, which has a similar format to the port option (host location:container location). For this container, adding -v c:/azurite:/data to the run command (for example, docker run -p 10000:10000 -p 10001:10001 -v c:/azurite:/data mcr.microsoft.com/azure-storage/azurite) saves the Azurite data to c:/azurite (or somewhere else, if you prefer). If you were to stop and discard one Azurite container with this volume mounted, then start a new one with the same volume mount, you should see the new container start up with the same data the old one left behind in your host machine’s file system. Without this volume mount the data lives only in the container’s writable layer: once you discard the container it is gone, and a new container instance of the same image will start with no data inside it.

We’re going to go one step further, because we only want the blob storage service and we don’t want our console to remain attached to the container: docker run -d -p 10000:10000 mcr.microsoft.com/azure-storage/azurite azurite-blob --blobHost 0.0.0.0 --oauth basic (Azurite also accepts --cert and --key arguments, which are optional). The -d option is short for “detached”, meaning the container will start and print out only an ID. Everything after the image name is passed as arguments to the container’s startup command. In this case we are telling Azurite to start only the blob host, listening on 0.0.0.0 (all interfaces, so the service is reachable through the published port rather than only on the container’s own loopback address), with an authentication option to allow tools to access the blob service. Docker will print the new container’s ID and return you to the prompt.

You can find more detailed documentation of docker run on the Docker documentation page for it. You can also find documentation of other commands from the left menu on that same page; eventually you’ll probably want to do more than pull images and run them according to their documentation.

If you’re using Docker Desktop you can view and manage images and running containers from the Docker UI (including seeing the console output that you’ve detached from or opening a terminal into the container). Personally I use a mix of the command line and the UI. Your preference might be more heavily skewed towards all UI or all CLI. Whatever floats your boat.

Now that we’ve got a local emulator for our blob storage, let’s use it with the POC application. This is a web API project that provides a REST API over the existing Blob Storage API. It’s been named “adapter” because it exposes one API by translating another: it separates consumers from the actual implementation, keeps the exposed API simpler for them, and puts the API our internal services depend on under our control (only this project is bound to the Blob Storage API).

Let’s start with a connection string. Azurite provides a default, well-known connection string. We’re going to use that; it’s fine for a connection string to be common knowledge when it points at a local emulator full of test data that shouldn’t be reachable from outside our machine. In appsettings.json:

"ConnectionStrings": {
"BlobStorage": "DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;"
}

Now we need access to the .NET types for blob storage. Install two NuGet packages: Azure.Storage.Blobs for the blob storage API and Microsoft.Extensions.Azure for dependency injection registration. In our Startup class we add the following to ConfigureServices:

services.AddAzureClients(c => c.AddBlobServiceClient(Configuration.GetConnectionString("BlobStorage")));

Now that we have a connection to the storage account we need a container to put our blobs into (text files, in this case). The static class below is used in Startup to provide a client for injection for the one container this application will use. It is also called in our entry point’s Main method, between building the application object and running it, so that the container is created before the application starts accepting requests; a sketch of that wiring follows the class. It checks whether the container exists, creates it if it doesn’t, then returns a client object specific to that container.

public static class BlobContainer
{
    // Blob container names must be lowercase (3-63 characters of letters, numbers, and hyphens),
    // so "SomeNeatFiles" would be rejected by the storage service.
    private const string ContainerName = "some-neat-files";

    // Returns a client for the application's one container, creating the container first if needed.
    public static async Task<BlobContainerClient> EnsureCreated(BlobServiceClient client)
    {
        if (!await ContainerExists(client))
        {
            await client.CreateBlobContainerAsync(ContainerName);
        }
        return client.GetBlobContainerClient(ContainerName);
    }

    private static async Task<bool> ContainerExists(BlobServiceClient client)
    {
        // Page through the containers whose names start with our name and look for an exact match.
        var containers = client.GetBlobContainersAsync(prefix: ContainerName);
        await foreach (var container in containers)
        {
            if (container.Name == ContainerName) return true;
        }
        return false;
    }
}
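
Here’s a rough sketch of how that wiring can look with a standard ASP.NET Core generic host. Your Program and registrations may differ; the idea is simply to resolve the BlobServiceClient that AddAzureClients registered and call EnsureCreated before the host starts serving requests.

using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public static class Program
{
    public static async Task Main(string[] args)
    {
        var host = Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(web => web.UseStartup<Startup>())
            .Build();

        // AddAzureClients registers BlobServiceClient as a singleton, so it can be
        // resolved here to make sure the container exists before any request arrives.
        var serviceClient = host.Services.GetRequiredService<BlobServiceClient>();
        await BlobContainer.EnsureCreated(serviceClient);

        await host.RunAsync();
    }
}

In ConfigureServices, a matching registration hands the container-specific client to anything that asks for a BlobContainerClient (blocking in the factory is a pragmatic choice, since dependency injection factories can’t be async):

services.AddSingleton(provider =>
    BlobContainer.EnsureCreated(provider.GetRequiredService<BlobServiceClient>())
        .GetAwaiter()
        .GetResult());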

Next we have two command classes that get used by our one controller to fulfil the GET and POST methods on the exposed /Files/Text resource. These two deal with putting blobs into the container created above or getting them back out. Let’s look at the GetFilesCommand class (a sketch of its upload-side counterpart follows the listing). This command has a container client injected into it (the one we get from BlobContainer for the application’s document container) and uses it to list all the blobs in the container, download each one, read the content out, convert it to text, then return a tuple that the controller will use to reconstruct the File record type used as the API model definition. This all makes use of IAsyncEnumerable and async/await to read from the storage container as a stream instead of as a series of blocking operations.

public class GetFilesCommand : IGetFilesCommand
{
    private readonly BlobContainerClient _blobClient;

    public GetFilesCommand(BlobContainerClient blobClient)
    {
        _blobClient = blobClient;
    }

    public async IAsyncEnumerable<(string name, string content)> GetFiles()
    {
        // Stream the blob listing rather than materializing it all at once.
        await foreach (var file in _blobClient.GetBlobsAsync())
        {
            var fileClient = _blobClient.GetBlobClient(file.Name);
            // Download the blob, copy its content into memory, then read it back out as text.
            using var fileContent = (await fileClient.DownloadAsync()).Value;
            await using var contentStream = new MemoryStream();
            await fileContent.Content.CopyToAsync(contentStream);
            contentStream.Position = 0;
            using var reader = new StreamReader(contentStream);
            yield return (file.Name, await reader.ReadToEndAsync());
        }
    }
}
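
I’m only walking through the read side here. The upload side mirrors it: the same BlobContainerClient is injected and UploadAsync does the write. The class and interface names below are placeholders rather than the project’s real ones, so treat this as a sketch.

using System.IO;
using System.Text;
using System.Threading.Tasks;
using Azure.Storage.Blobs;

public interface ISaveFileCommand
{
    Task SaveFile(string name, string content);
}

public class SaveFileCommand : ISaveFileCommand
{
    private readonly BlobContainerClient _blobClient;

    public SaveFileCommand(BlobContainerClient blobClient)
    {
        _blobClient = blobClient;
    }

    public async Task SaveFile(string name, string content)
    {
        // Write the text as a block blob; overwrite: true keeps repeated POSTs of
        // the same file name from failing while we're experimenting locally.
        await using var stream = new MemoryStream(Encoding.UTF8.GetBytes(content));
        await _blobClient.GetBlobClient(name).UploadAsync(stream, overwrite: true);
    }
}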

Now let’s put it to the test! We’re going to first put a couple of files into storage using our API, look at them using Azure Storage Explorer, then take them back out using our API.
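
For reference while testing, here is a rough sketch of the controller being exercised; I’m not reproducing the real one, so the route and method shapes are illustrative. IGetFilesCommand is the interface from the listing above, ISaveFileCommand is the placeholder from the upload sketch, and the API model (the File record mentioned earlier) is named TextFile here so the example doesn’t collide with ControllerBase.File.

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public record TextFile(string Name, string Content);

[ApiController]
[Route("Files/Text")]
public class FilesController : ControllerBase
{
    private readonly IGetFilesCommand _getFiles;
    private readonly ISaveFileCommand _saveFile;

    public FilesController(IGetFilesCommand getFiles, ISaveFileCommand saveFile)
    {
        _getFiles = getFiles;
        _saveFile = saveFile;
    }

    // GET /Files/Text streams every blob in the container back as name/content pairs.
    [HttpGet]
    public async IAsyncEnumerable<TextFile> Get()
    {
        await foreach (var (name, content) in _getFiles.GetFiles())
        {
            yield return new TextFile(name, content);
        }
    }

    // POST /Files/Text stores the posted text as a blob.
    [HttpPost]
    public async Task<IActionResult> Post(TextFile file)
    {
        await _saveFile.SaveFile(file.Name, file.Content);
        return Ok();
    }
}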

We can see our activity in the log output of our container.

