Containers for Local Stacks

Chris Dykstra
Feb 21, 2021 · 10 min read

I’ve found myself with about a week between jobs. I wanted to take this time to recreate, document, and publish some of the things I’ve used containers for to make local development easier and not dependent on anything outside the local machine. I am a big proponent of pursuing a scenario where local development is as easy as Clone, Build, Run, and where deployment to tiers other than local is similarly easy. As it turns out, these two things are commonly one and the same.

This exercise is as much for me as it is for you. I want to have recreated the things I’ve done while working for someone else so that I have a copy that I own; the code I have produced while on the clock does not belong to me and, if I want to reuse something I’ve already done, I either need to recreate my own copy, commit theft, or rewrite the code for someone else to own once again. I want to have fully explored some more options to verify to myself that the ones I pick in the future are actually the ones I want to pick. Bonus: I can publish what I know and share it with you so that you can use it or build off of it.

My experience and examples here are with Visual Studio, Docker Desktop, and Windows 10. That doesn’t mean you can’t use these strategies with some other IDE and/or container tool. These all happen to fit together without too much work, and I primarily develop .NET code in a Windows environment.

Articles in this series:

Containers for Local Stacks (this story): how to use Docker Compose to create a local development environment

Containerized Build Agent: how to set up a Docker container image that can be used as a CI environment in Azure DevOps

Emulating Cloud Services with Containers: how to use a container to run or emulate cloud services locally without needing to install additional programs or services and without needing to pay to use the real thing before you’re ready to deploy

Integration Testing with Docker Dependencies: learn how to use Docker from within an automated testing framework to spin up integration test dependencies and dispose of them when tests have completed

The first thing in this series that I will write about and produce Proof of Concept projects for is using containers for a local environment. Most commonly I have used this to spin up a local database or two that an ASP.NET application consumes, using one of the following strategies to get the schema into that disposable database, which will be reset fairly regularly. Having a disposable database promotes some good habits, like needing to be able to produce the entire schema on demand. It also allows me a lot more freedom than the practice of using a shared dev-tier database, which I have found to be very common. If I make a mistake I can tear it all down and start over. If I want to fill the database with trash data I am fully able to without affecting anyone else, and I can tear it all down and throw it away when I’m done.

This could, of course, be extended to services other than databases, even your own services if you’ve containerized them. I have heard so many times how some employer wants to start using containers and they are glad I have experience with Docker… only to find that they don’t actually want to expend the effort; they want their existing code to work exactly as it does now and to magically be in a container with all the advantages of both and no cost. Ironically, I haven’t found it very difficult to move existing services into containers with the tools provided in Visual Studio. Change scares people, particularly those who are used to being the ones in charge and making the decisions, so it gets put off indefinitely. The result is that I usually only have databases available as containers I can pull and run, which is usually fine for local development.

This POC project is a .NET 5 ASP.NET Web API with health checks implemented to determine whether the database dependency can be connected to (if not, report unhealthy) and, if it can be, whether the required database is present (if not, report degraded). Here is that POC on GitHub. It was created with the options for container support and OpenApi both enabled.
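
As a rough illustration of that health check logic (a sketch, not the repository’s exact code; the class name, database name, and connection string handling here are assumptions), a custom ASP.NET Core health check along these lines can distinguish the two states:

    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.Data.SqlClient;
    using Microsoft.Extensions.Diagnostics.HealthChecks;

    // Illustrative sketch of the check described above. The connection string
    // should target the server itself (e.g. the master database) so that it
    // can still open when the application's database does not exist yet.
    public class DatabaseHealthCheck : IHealthCheck
    {
        private readonly string _connectionString;

        public DatabaseHealthCheck(string connectionString) =>
            _connectionString = connectionString;

        public async Task<HealthCheckResult> CheckHealthAsync(
            HealthCheckContext context, CancellationToken cancellationToken = default)
        {
            try
            {
                using var connection = new SqlConnection(_connectionString);
                await connection.OpenAsync(cancellationToken); // can we reach the server?

                using var command = connection.CreateCommand();
                // "LocalStack" is a placeholder for whatever database the app requires.
                command.CommandText =
                    "SELECT COUNT(*) FROM sys.databases WHERE name = 'LocalStack'";
                var count = (int)await command.ExecuteScalarAsync(cancellationToken);

                return count > 0
                    ? HealthCheckResult.Healthy("Database is present")
                    : HealthCheckResult.Degraded("Server is up but the database is missing");
            }
            catch (SqlException)
            {
                return HealthCheckResult.Unhealthy("Cannot connect to the database server");
            }
        }
    }

Registering it is then a one-liner in ConfigureServices, something like services.AddHealthChecks().AddCheck("database", new DatabaseHealthCheck(connectionString)).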

The first step is to get a local database going. It’s not that hard to install and run one without a container, but doing so makes it harder to get to that “Clone, Build, Run” state I mentioned earlier. Using a Docker Compose project (dcproj) in your solution will let you use Docker Compose to set up a local environment with anything available in containers. The master branch has an application that depends on a database provided by the dcproj, which gets the health check from unhealthy to degraded. There are three branches that each implement a different strategy to create that database with a schema and implement the two data access command interfaces, taking the health check from degraded to healthy and implementing the behavior of the one controller present, which can be interacted with through the Swagger UI.

Before looking at the strategies for creating the database and schema, let’s check out the Docker Compose definition.

version: '3.4'

services:
  dockercomposelocalstack:
    image: ${DOCKER_REGISTRY-}dockercomposelocalstack
    depends_on:
      - SqlServer
    build:
      context: .
      dockerfile: DockerComposeLocalStack/Dockerfile
  SqlServer:
    image: mcr.microsoft.com/mssql/server
    environment:
      SA_PASSWORD: "Your_password123"
      ACCEPT_EULA: "Y"
    ports:
      - 1433:1433

To get started you can right-click an existing project in Visual Studio, select Add > Container Orchestrator Support, then select Docker Compose from the dropdown in the modal presented. It will fill in the first part all on its own (except for depends_on, which helps control the start order once you’ve added dependency containers) to put your application, in its container, into the Docker Compose network. Congratulations! You now have a local network inside Docker containing only your application. Visual Studio should also take care of the debugging configuration, making the dcproj the startup project so that, when it starts, it does the same thing your single project would have done (open the Swagger UI page, in this case).

Adding the SqlServer part will add a container to the network running the image mcr.microsoft.com/mssql/server (SQL Server 2019 at the time of writing). The environment section sets the container to auto-accept the EULA at startup and to use a predetermined password for SA. If this seems like a bad idea, remember that this database is meant to be short-lived, shouldn’t be filled with any data you care about, and is unlikely to be accessible outside your machine; it doesn’t matter if anyone knows your SA password. The ports section publishes the container’s port 1433 so that you can connect to the database through localhost,1433 and inspect what’s going on in there.

Why do you need Compose for this? If you were to start up an MSSQL container and, next to it, start your application, they wouldn’t be visible to each other. You could see them both from outside Docker on the host machine, but the two containers are kept separate from each other on purpose.

Here’s one big bonus of this strategy that I really like: inside the virtual network Docker creates for the Compose, you can refer to each container by the name given to it in the Compose yml file! If you look at the connection strings in the appsettings.json file, you’ll notice that my database server is “SqlServer”, the name I gave it in the yml file above! I think this is very cool, personally.
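
For illustration only (the key name “Database” and the database name “LocalStack” are assumptions, not necessarily what the repository uses), an appsettings.json entry pointing at the Compose service could look like this:

    {
      "ConnectionStrings": {
        "Database": "Server=SqlServer;Database=LocalStack;User Id=sa;Password=Your_password123;"
      }
    }

Note that the host is the Compose service name, not localhost; localhost,1433 only works from outside the Docker network.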

Now the health check will report degraded instead of unhealthy. The database server is present, so the SQL health check passes, but the specific database within the server that the custom health check is looking for does not exist. Nothing has been done to make that database exist!

From here I explored three options I have experienced in the past to get a simple schema into the database being spun up for me by Docker Compose. There are plenty of other options; each has its own strengths and weaknesses. Two advantages all three shared were that the schema was in source control (a surprisingly uncommon practice, or one only partially practiced) and could be versioned and deployed alongside the application itself (even less common than source controlling the database code, unfortunately). They also all shared one thing I will caution you against: the SQL Server container is running and “ready” before the database inside of it has fully started… though it only takes a short time for the database server to start up after the container starts, and implementing a strategy to wait for it, as sketched below, is not difficult. All three branches have two commits: one to get the health check to healthy (the database exists) and one to implement the interfaces to access that database.
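
A minimal sketch of such a wait strategy, run before attempting to apply the schema (the attempt count and delay are arbitrary assumptions):

    using System;
    using System.Threading;
    using Microsoft.Data.SqlClient;

    // Keep trying to open a connection until SQL Server inside the container
    // actually answers; the container reports as started before this point.
    static void WaitForSqlServer(string connectionString, int maxAttempts = 30)
    {
        for (var attempt = 1; attempt <= maxAttempts; attempt++)
        {
            try
            {
                using var connection = new SqlConnection(connectionString);
                connection.Open();
                return; // the server answered; it is ready for schema deployment
            }
            catch (SqlException) when (attempt < maxAttempts)
            {
                Thread.Sleep(TimeSpan.FromSeconds(2)); // not up yet; retry shortly
            }
        }
    }

If every attempt fails, the last SqlException propagates and the startup fails loudly, which is what you want for a broken local stack.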

The first option was to use a SQL Server Database Project. This can be seen in the SqlProj branch.

The most notable advantage of this approach is that the entire schema is given a build step and is evaluated as one; relationships are checked and build errors result if there are problems with the schema. Additional advantages include that the project contains a description of the desired current state (as opposed to a stack of changes to mentally consume to get a picture of what the database should look like) and that the database is managed by database code.

The biggest drawback is that it wasn’t possible, without some kind of workaround, to make applying the schema part of the Compose or application start. I had to click through the publish dialogs to apply the schema. This is not that big of a hindrance; it just isn’t built in like it is for the other options (PowerShell, post-build steps, or whatever else you think of can solve this), and it shouldn’t pose any problem for a CI/CD pipeline. One additional note: this option requires Windows. Not a big deal for me, maybe a hard stop for some scenarios. There are workarounds for this as well, but they are still workarounds.
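
As one example of such a workaround (not what the branch does), the Microsoft.SqlServer.DacFx NuGet package can publish the built dacpac from code at startup; the path, database name, and connection string below are illustrative assumptions:

    using Microsoft.SqlServer.Dac;

    // Sketch: deploy a compiled dacpac programmatically instead of clicking
    // through Visual Studio's publish dialogs. All names here are placeholders.
    var connectionString =
        "Server=SqlServer;Database=master;User Id=sa;Password=Your_password123;";
    using var package = DacPackage.Load("Database/bin/Debug/Database.dacpac");
    var dacServices = new DacServices(connectionString);
    dacServices.Deploy(package, "LocalStack", upgradeExisting: true);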

The next option I produced a POC for was DbUp. This can be seen in the DbUp branch. The command interface implementations are exactly the same as in the SqlProj branch (in fact, I cherry-picked the commit from that branch). It seems highly likely that some very interesting things could come from combining the SqlProj approach with a DbUp implementation 🤔

The advantage here over the SqlProj option is that applying the schema can be done at app start; there’s no reason anyone needs to know to do anything other than build and start the application. Database code is still source controlled in the database’s own language. You also get a bonus of automatically having changes tracked inside the database itself, something a DBA might be a fan of.
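
A minimal sketch of wiring DbUp into startup, assuming the SQL scripts are embedded resources in the application assembly (the connection string and error handling are illustrative, and the branch may differ in detail):

    using System;
    using System.Reflection;
    using DbUp;

    // Run this early in Program.Main, before the web host starts.
    var connectionString =
        "Server=SqlServer;Database=LocalStack;User Id=sa;Password=Your_password123;";

    // Create the target database if it does not exist yet.
    EnsureDatabase.For.SqlDatabase(connectionString);

    // Apply any embedded SQL scripts that have not been run against this
    // database before; DbUp records applied scripts in a journal table.
    var upgrader = DeployChanges.To
        .SqlDatabase(connectionString)
        .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
        .LogToConsole()
        .Build();

    var result = upgrader.PerformUpgrade();
    if (!result.Successful)
    {
        throw new InvalidOperationException("Schema upgrade failed", result.Error);
    }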

One drawback here is that evaluation of the SQL is left until app start (though you can still get inspection of individual files from Roslyn or some other engine for SQL syntax). Another problem I’ve run into with DbUp is that it is built for rolling forward. This has two implications: SQL scripts describe forward-moving changes to the schema rather than the schema itself (meaning that to get to the current state you need to parse the entire stack of changes, which can be taxing for a human but not a problem for a machine), and rollbacks become a little more cumbersome and definitely manual. One additional note: the way the change tracking works, your files need to have unique names. Repeating a name, or modifying a file after it has already been tracked as applied, will not apply the changes to the database! This usually results in individual files needing to contain a version identifier of some kind, like the date of the change, to ensure the changes keep rolling forward.

The final option I produced an example of is Entity Framework Model First. This was the most different from the other two. This example can be found in the EntityFrameworkModelFirst branch.

Using Entity Framework you can ask the ORM to create the schema for you! Sounds great, right? Well, you will still need to provide some scripts from time to time to make transitions in schema. Like DbUp, you can attach Entity Framework’s schema-maintenance tooling to your application’s startup code. Like the SqlProj example, your DbContext class and the model types are a description of the current state, though, as mentioned, you will still need to provide transition scripts like DbUp (“migration” is the keyword to find more on this via search engine).
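
A sketch of attaching that tooling to startup with EF Core in a .NET 5 app (the context name AppDbContext and the Startup class are assumptions for this example, not the branch’s actual names):

    using Microsoft.AspNetCore.Hosting;
    using Microsoft.EntityFrameworkCore;
    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Hosting;

    public class Program
    {
        public static void Main(string[] args)
        {
            var host = CreateHostBuilder(args).Build();

            // Apply any pending migrations before the app starts serving traffic.
            // Migrate() also creates the database if it does not exist yet.
            using (var scope = host.Services.CreateScope())
            {
                var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();
                db.Database.Migrate();
            }

            host.Run();
        }

        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());
    }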

Since this is adding an ORM to the mix, you have a potentially double-edged sword: complexity is added in some places and removed in others. Will it be more or less? That depends on how you use it, your team, your company, and other factors I probably can’t enumerate without being in the middle of them with you.

Entity Framework provides a lot of great features. They aren’t free (even if the library is); they cost memory, compute time, and sometimes performance, though I haven’t found those to be restrictive most of the time. You are also freed from needing to work with your database directly if you choose not to, which can be a blessing and a curse. You lose some control and are given a few additional constraints, but in return you don’t have to do the work of interacting with the database; it is handled for you, along with some additional features you would otherwise need to write yourself should you need them, e.g. change tracking.

One final note on Entity Framework: it lets you do a lot more in your application’s language, which can be both good and bad. Without an understanding of how the database actually works and how Entity Framework functions, it can be easy to make mistakes that cause poor performance, bugs, or both. Some of the loss of control also comes in the form of accepting that your database will take a shape at least partially dictated by the ORM rather than by you. Finally, having your database driven primarily by a language other than the database’s own can sometimes cause problems (sometimes technical, most often political).

Which should you choose? The short answer is: I don’t know. Which would I choose? Same answer: I don’t know. It depends on the scenario I am currently in and what I guess the future scenario might be. Fortunately, if you or I follow good code design, it should be pretty easy (relatively speaking) to move between these options. It won’t be free, of course, but your design should be such that changing options doesn’t mean rewriting your entire application because you’ve allowed it all to become tightly coupled into a little ball.
