You can run any application in Docker as long as it can be installed and executed unattended, and the base operating system supports the app. Windows Server Core runs in Docker which means you can run pretty much any server or console application in Docker.
Update! For a full walkthrough on Dockerizing Windows apps, check out my book Docker on Windows and my Pluralsight course Modernizing .NET Apps with Docker.
Check out these examples:
Lately I've been Dockerizing a variety of Windows apps - from legacy .NET 2.0 WebForms apps to Java, .NET Core, Go and Node.js. Packaging Windows apps as Docker images to run in containers is straightforward - here's the 5-step guide.
Docker images for Windows apps need to be based on `microsoft/nanoserver` or `microsoft/windowsservercore`, or on another image based on one of those.
Which you use will depend on the application platform, runtime, and installation requirements. For any of the following you need Windows Server Core:
For anything else, you should be able to use Nano Server. I've successfully used Nano Server as the base image for Go, Java and Node.js apps.
Nano Server is preferred because it is so drastically slimmed down. It's easier to distribute, has a smaller attack surface, starts more quickly, and runs more leanly.
That slimming down can cause problems, though - certain Windows APIs just aren't present in Nano Server, so while your app may build into a Docker image it may not run correctly. You'll only find that out by testing, but if you do hit problems you can just switch to Server Core.
Unless you know you need Server Core, you should start with Nano Server. Begin by running an interactive container with `docker run -it --rm microsoft/nanoserver powershell` and set up your app manually. If it all works, put the commands you ran into a Dockerfile. If something fails, try again with Server Core.
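Once the manual steps work, the Dockerfile can be as simple as this sketch (the download URL and app name are hypothetical - substitute the commands you actually ran):

```dockerfile
FROM microsoft/nanoserver
SHELL ["powershell", "-Command"]

# hypothetical app - repeat the steps you verified interactively
RUN Invoke-WebRequest -OutFile app.zip 'https://example.com/app.zip'; \
    Expand-Archive app.zip -DestinationPath C:\app; \
    Remove-Item app.zip

CMD ["C:\\app\\app.exe"]
```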
You don't have to use a base Windows image for your app. There are a growing number of images on Docker Hub which package app frameworks on top of Windows.
They are a good option if they get you started with the dependencies you need. These all come in Server Core and Nano Server variants:
A note of caution about derived images. When you have a Windows app running in a Docker container, you don't connect to it and run Windows Update to apply security patches. Instead, you build a new image with the latest patches and replace your running container. To support that, Microsoft release regular updates to the base images on Docker Hub, tagging them with a full version number (`10.0.14393.693` is the current version).
Base image updates usually happen monthly, so the latest Windows Server Core and Nano Server images have all the latest security patches applied. If you build your images from the Windows base image, you just need to rebuild to get the latest updates. If you use a derived image, you have a dependency on the image owner to update their image, before you can update yours.
If you use a derived image, make sure it has the same release cadence as the base images. Microsoft's images are usually updated at the same time as the Windows image, but official images may not be.
Alternatively, use the Dockerfile from a derived image to make your own 'golden' image. You'll have to manage the updates for that image, but you will control the timescales. (And you can send in a PR for the official image if you get there first).
You'll need to understand your application's requirements, so you can set up all the dependencies in the image. Both Nano Server and Windows Server Core have PowerShell set up, so you can install any software you need using PowerShell cmdlets.
Remember that the Dockerfile will be the ultimate source of truth for how to deploy and run your application. It's worth spending time on your Dockerfile so your Docker image is:
Windows features can be installed with `Add-WindowsFeature`. If you want to see what features are available for an image, start an interactive container with `docker run -it --rm microsoft/windowsservercore powershell` and run `Get-WindowsFeature`.
On Server Core you'll see that .NET 4.6 is already installed, so you don't need to add features to run .NET Framework applications.
.NET is backwards-compatible, so you can use the installed .NET 4.6 to run any .NET application, back to .NET 2.0. In theory .NET 1.x apps can run too. I haven't tried that.
If you're running an ASP.NET web app but you want to use the base Windows image and control all your dependencies, you can add the Web Server and ASP.NET features:
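A sketch of what that could look like (the feature names are the standard Windows Server ones - verify them with `Get-WindowsFeature` for your image):

```dockerfile
FROM microsoft/windowsservercore
SHELL ["powershell", "-Command"]

# install IIS and the ASP.NET integration for .NET 4.x
RUN Add-WindowsFeature Web-Server; \
    Add-WindowsFeature NET-Framework-45-ASPNET; \
    Add-WindowsFeature Web-Asp-Net45
```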
There's a standard pattern for installing dependencies from the Internet - here's a simple example for downloading Node.js into your Docker image:
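A sketch of that pattern - the version and checksum values here are placeholders, not verified; the build will fail until you substitute a real version and its published SHA-256:

```dockerfile
FROM microsoft/nanoserver
SHELL ["powershell", "-Command"]

# placeholder values - set a real Node version and its published SHA-256
ENV NODE_VERSION="6.9.5" NODE_SHA256="<published-sha256>"

# download, verify the checksum, and move the exe to a known location
RUN Invoke-WebRequest -OutFile node.exe "https://nodejs.org/dist/v$($env:NODE_VERSION)/win-x64/node.exe"; \
    if ((Get-FileHash node.exe -Algorithm SHA256).Hash -ne $env:NODE_SHA256) { exit 1 }; \
    New-Item -ItemType Directory C:\node | Out-Null; \
    Move-Item node.exe C:\node\node.exe
```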
The version of Node to download and the expected SHA-256 checksum are captured as environment variables with the `ENV` instruction. That makes it easy to upgrade Node in the future - just change the values in the Dockerfile and rebuild. It also makes it easy to see what version is present in a running container: you can just check the environment variable.
The download and hash check is done in a single `RUN` instruction, using `Invoke-WebRequest` to download the file and then `Get-FileHash` to verify the checksum. If the hashes don't match, the build fails.
After these instructions run, your image has the Node.js runtime in a known location - `C:\node\node.exe`. It's a known version of Node, verified from a trusted download source.
For dependencies that come packaged, you'll need to install them as part of the `RUN` instruction. Here's an example for Elasticsearch, which downloads and uncompresses a ZIP file:
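A sketch of that Dockerfile - again the version and checksum are placeholders to be replaced with real values:

```dockerfile
FROM microsoft/windowsservercore
SHELL ["powershell", "-Command"]

# placeholder values - set a real Elasticsearch version and its published SHA-256
ENV ES_VERSION="5.2.0" ES_SHA256="<published-sha256>"

# download, verify, expand to a known location, then delete the zip - all in one layer
RUN Invoke-WebRequest -OutFile elasticsearch.zip "https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-$($env:ES_VERSION).zip"; \
    if ((Get-FileHash elasticsearch.zip -Algorithm SHA256).Hash -ne $env:ES_SHA256) { exit 1 }; \
    Expand-Archive elasticsearch.zip -DestinationPath C:\; \
    Move-Item "C:\elasticsearch-$($env:ES_VERSION)" C:\elasticsearch; \
    Remove-Item elasticsearch.zip
```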
It's the same pattern as before: capturing the checksum, downloading the file and checking the hash. In this case, if the hash is good the file is uncompressed with `Expand-Archive`, moved to a known location, and the Zip file is deleted.
Don't be tempted to keep the Zip file in the image, 'in case you need it'. You won't need it - if there's a problem with the image you'll build a new one. And it's important to remove the package in the same `RUN` command, so the Zip file is downloaded, expanded and deleted in a single image layer.
It may take several iterations to build your image. While you're working on it, it's a good idea to store any downloads locally and add them to the image with `COPY`. That saves you downloading large files every time. When you have your app working, replace the `COPY` with the proper download-verify-delete pattern.
You can download and run MSIs using the same approach. Be aware that not all MSIs will be built to support unattended installation. A well-built MSI will support command-line switches for any options available in the UI, but that isn't always the case.
If you can install the app from an MSI, you'll also need to ensure that the install completed before you move on to the next Dockerfile instruction - some MSIs continue to run in the background. This example from Stefan Scherer's iisnode Dockerfile uses `Start-Process ... -Wait` to run the MSI:
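A sketch of that pattern - the installer URL is a placeholder, and this assumes the Dockerfile's `SHELL` is set to PowerShell:

```dockerfile
# download the MSI, run msiexec and wait for it to finish, then clean up
RUN Invoke-WebRequest -OutFile installer.msi 'https://example.com/installer.msi'; \
    Start-Process msiexec.exe -ArgumentList '/i', 'installer.msi', '/qn' -NoNewWindow -Wait; \
    Remove-Item installer.msi
```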
Packaging your own app will be a simplified version of step 2. If you already have a build process which generates an unattended-friendly MSI, you can copy it from the local machine into the image and install it with `msiexec`. This example is from the Modernize ASP.NET Apps - Ops Lab from Docker Labs on GitHub. The MSI supports app configuration with the `RELEASENAME` option, and it runs unattended with the `/qn` switch.
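Sketched from that description - the MSI filename and release name here are illustrative:

```dockerfile
# copy the build output into the image and install it quietly
COPY UpgradeSample.msi /
RUN msiexec /i C:\UpgradeSample.msi RELEASENAME=2017.04 /qn
```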
With MSIs and other packaged deployment options (like Web Deploy) you need to choose between using what you currently have, or changing your build output to something more Docker friendly.
Web Deploy needs an agent installed into the image which adds an unnecessary piece of software. MSIs don't need an agent, but they're opaque, so it's not clear what's happening when the app gets installed. The Dockerfile isn't an explicit deployment guide if some of the steps are hidden.
An xcopy deployment approach is better, where you package the application and its dependencies into a folder and copy that folder into the image. Your image will only run a single app, so there won't be any dependency clashes.
This example copies an ASP.NET Web app folder into the image, and configures it with IIS using PowerShell:
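A sketch of that approach - the app folder and site names are illustrative:

```dockerfile
FROM microsoft/windowsservercore
SHELL ["powershell", "-Command"]

# install IIS and ASP.NET
RUN Add-WindowsFeature Web-Server; \
    Add-WindowsFeature NET-Framework-45-ASPNET; \
    Add-WindowsFeature Web-Asp-Net45

# xcopy-deploy the pre-built web app and register it as an IIS website
COPY WebApp /WebApp
RUN Remove-Website -Name 'Default Web Site'; \
    New-Website -Name 'WebApp' -Port 80 -PhysicalPath 'C:\WebApp'
```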
If you're looking at changing an existing build process to produce your app package, you should think about building your app in Docker too. Consolidating the build in a multi-stage Dockerfile means you can build your app anywhere without needing to install .NET or Visual Studio.
See Dockerizing .NET Apps with Microsoft's Build Images on Docker Hub.
When you run a container from an image, Docker starts the process specified in the `ENTRYPOINT` instruction in the Dockerfile.
Modern app frameworks like .NET Core, Node and Go run as console apps - even for Web applications. That's easy to set up in the Dockerfile. This is how to run the open source Docker Registry - which is a Go application - inside a container:
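A sketch of that entrypoint - the config file path is illustrative:

```dockerfile
ENTRYPOINT ["registry", "serve", "config.yml"]
```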
registry is the name of the executable, and the other values are passed as options to the exe.
`ENTRYPOINT` and `CMD` work differently and can be used in conjunction. See how CMD and ENTRYPOINT interact to learn how to use them effectively.
Starting a single process is the ideal way to run apps in Docker. The engine monitors the process running in the container, so if it stops Docker can raise an error. If it's also a console app, then log entries written by the app are collected by Docker and can be viewed with `docker logs`.
For .NET web apps running in IIS, you need to take a different approach. The actual process serving your app is `w3wp.exe`, but that's managed by the IIS Windows Service, which runs in the background.
IIS will keep your web app running, but Docker needs a process to start and monitor. In Microsoft's IIS image they use a tool called `ServiceMonitor.exe` as the entrypoint. That tool continually checks that a Windows Service is running, so if IIS does fail, the monitor process raises the failure to Docker.
Alternatively, you could run a PowerShell startup script to monitor IIS and add extra functionality - like tailing the IIS log files so they get exposed to Docker.
HEALTHCHECK is one of the most useful instructions in the Dockerfile and you should include one in every app you Dockerize for production. Healthchecks are how you tell Docker if the app inside your container is healthy.
Docker monitors the process running in the container, but that's just a basic liveness check. The process could be running, but your app could be in a failed state - for a .NET Core app, the `dotnet` executable may be up but returning `503` to every request. Without a healthcheck, Docker has no way to know the app is failing.
A healthcheck is a script you define in the Dockerfile, which the Docker engine executes inside the container at regular intervals (30 seconds by default, but configurable at the image and container level).
This is a simple healthcheck for a web application, which makes a web request to the local host (remember the healthcheck executes inside the container) and checks for a `200` response status:
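A sketch of that healthcheck using PowerShell inside the container:

```dockerfile
# probe the app from inside the container: exit 0 = healthy, exit 1 = unhealthy
HEALTHCHECK CMD powershell -Command \
    try { \
        $response = Invoke-WebRequest http://localhost -UseBasicParsing; \
        if ($response.StatusCode -eq 200) { exit 0 } else { exit 1 } \
    } catch { exit 1 }
```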
Healthcheck commands need to return `0` if the app is healthy, and `1` if not. The check you make inside the healthcheck can be as complex as you like - having a diagnostics endpoint in your app and testing that is a thorough approach.
Make sure your `HEALTHCHECK` command is stable, and always returns `0` or `1`. If the command itself fails, your container may not start.
Any type of app can have a healthcheck. Michael Friis added this simple but very useful check to the Microsoft SQL Server Express image:
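Reconstructed from the description that follows - the check runs a trivial query with the `sqlcmd` client:

```dockerfile
# verify the database engine is up and can answer a query
HEALTHCHECK CMD ["sqlcmd", "-Q", "select 1"]
```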
The command verifies that the SQL Server database engine is running, and is able to respond to a simple query.
There are additional advantages in having a comprehensive healthcheck. The command runs when the container starts, so if your check exercises the main path in your app, it acts as a warm-up. When the first user request hits, the app is already running warm so there's no delay in sending the response.
Healthchecks are also very useful if you have expiry-based caching in your app. You can rely on the regular running of the healthcheck to keep your cache up-to-date, so you could cache items for 25 seconds, knowing the healthcheck will run every 30 seconds and refresh them.
Dockerizing Windows apps is straightforward. The Dockerfile syntax is clean and simple, and you only need to learn a handful of instructions to build production-grade Docker images based on Windows Server Core or Nano Server.
Following these steps will get you a functioning Windows app in a Docker image - then you can look to optimizing your Dockerfile.