In this lab, we will look at some basic Docker commands and a simple build-ship-run workflow. We’ll start by running some simple containers, then we’ll use a Dockerfile to build a custom app. Finally, we’ll look at how to use bind mounts to modify a running container as you might if you were actively developing using Docker.
Difficulty: Beginner (assumes no familiarity with Docker)
Time: Approximately 30 minutes
You will need all of the following to complete this lab:
Use the following command to clone the lab’s repo from GitHub (you can click the command or manually type it). This will make a copy of the lab’s repo in a new sub-directory called linux_tweet_app.
If you do not have a DockerID (a free login used to access Docker Hub), please visit Docker Hub and register for one. You will need this for later steps.
There are different ways to use containers. These include:
In this section you’ll try each of those options and see how Docker manages the workload.
In this step we’re going to start a new container and tell it to run the hostname command. The container will start, execute the hostname command, then exit.
Run the following command in your Linux console.
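A sketch of the command described above, assuming the standard alpine image from Docker Hub:

```shell
# Start a container from the alpine image and run hostname as its only process
docker container run alpine hostname
```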
The output below shows that the alpine:latest image could not be found locally. When this happens, Docker automatically pulls it from Docker Hub. After the image is pulled, the container’s hostname is displayed (888e89a3b36b in the example below).
Docker keeps a container running as long as the process it started inside the container is still running. In this case the hostname process exits as soon as the output is written, so the container stops. However, Docker doesn’t delete resources by default, so the container still exists in the Exited state.
List all containers.
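Listing every container, including stopped ones, needs the --all flag:

```shell
# --all includes containers that have exited; without it only running containers show
docker container ls --all
```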
Notice that your Alpine Linux container is in the Exited state.
Note: The container ID is the hostname that the container displayed. In the example above it’s 888e89a3b36b.
Containers which do one task and then exit can be very useful. You could build a Docker image that executes a script to configure something. Anyone can execute that task just by running the container - they don’t need the actual scripts or configuration information.
You can run a container based on a different version of Linux than is running on your Docker host.
In the next example, we are going to run an Ubuntu Linux container on top of an Alpine Linux Docker host (Play With Docker uses Alpine Linux for its nodes).
Run a Docker container and access its shell.
In this example, we’re giving Docker three parameters:
--interactive says you want an interactive session.
--tty allocates a pseudo-tty.
--rm tells Docker to go ahead and remove the container when it’s done executing.
The first two parameters allow you to interact with the Docker container.
We’re also telling the container to run bash as its main process (PID 1).
When the container starts you’ll drop into the bash shell with the default prompt root@<container id>:/#. Docker has attached to the shell in the container, relaying input and output between your local session and the shell session in the container.
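Putting the three flags together, the run command sketched here matches the description above (the ubuntu image tag is an assumption — the lab may pin a version):

```shell
# Start an interactive Ubuntu container running bash; it is removed automatically on exit
docker container run --interactive --tty --rm ubuntu bash
```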
Run the following commands in the container.
ls / will list the contents of the root directory in the container,
ps aux will show running processes in the container,
cat /etc/issue will show which Linux distro the container is running, in this case Ubuntu 18.04.3 LTS.
exit to leave the shell session. This will terminate the bash process, causing the container to exit.
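The commands above, as you would type them inside the container’s bash session:

```shell
ls /              # list the contents of the container's root directory
ps aux            # show the processes running in the container
cat /etc/issue    # show which Linux distro the container is running
exit              # terminate bash, which stops the container
```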
Note: As we used the --rm flag when we started the container, Docker removed the container when it stopped. This means if you run another docker container ls --all you won’t see the Ubuntu container.
For fun, let’s check the version of our host VM.
You should see:
Notice that our host VM is running Alpine Linux, yet we were able to run an Ubuntu container. As previously mentioned, the distribution of Linux inside the container does not need to match the distribution of Linux running on the Docker host.
However, Linux containers require the Docker host to be running a Linux kernel. For example, Linux containers cannot run directly on Windows Docker hosts. The same is true of Windows containers - they need to run on a Docker host with a Windows kernel.
Interactive containers are useful when you are putting together your own image. You can run a container and verify all the steps you need to deploy your app, and capture them in a Dockerfile.
You can commit a container to make an image from it - but you should avoid that wherever possible. It’s much better to use a repeatable Dockerfile to build your image. You’ll see that shortly.
Background containers are how you’ll run most applications. Here’s a simple example using MySQL.
Run a new MySQL container with the following command.
--detach will run the container in the background.
--name will name it mydb.
-e will use an environment variable to specify the root password (NOTE: This should never be done in production).
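A sketch of the run command the three flags above describe (the password value is a placeholder):

```shell
# Start MySQL in the background; never pass real passwords this way in production
docker container run \
  --detach \
  --name mydb \
  -e MYSQL_ROOT_PASSWORD=my-secret-pw \
  mysql
```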
As the MySQL image was not available locally, Docker automatically pulled it from Docker Hub.
As long as the MySQL process is running, Docker will keep the container running in the background.
List the running containers.
Notice your container is running.
You can check what’s happening in your containers by using a couple of built-in Docker commands: docker container logs and docker container top.
This shows the logs from the MySQL Docker container.
Let’s look at the processes running inside the container.
You should see the MySQL daemon (mysqld) is running in the container.
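The two inspection commands, run against the mydb container started earlier:

```shell
docker container logs mydb   # show the container's log output
docker container top mydb    # show the processes running inside it
```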
Although MySQL is running, it is isolated within the container because no network ports have been published to the host. Network traffic cannot reach containers from the host unless ports are explicitly published.
List the MySQL version using docker container exec.
docker container exec allows you to run a command inside a container. In this example, we’ll use docker container exec to run the command-line equivalent of mysql --user=root --password=$MYSQL_ROOT_PASSWORD --version inside our MySQL container.
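Assembled from the description above, the exec invocation could look like this; the sh -c wrapper is an assumption here, used so that $MYSQL_ROOT_PASSWORD is resolved by the container’s own environment rather than your host shell:

```shell
# Run mysql --version inside the mydb container
docker container exec mydb \
  sh -c 'mysql --user=root --password=$MYSQL_ROOT_PASSWORD --version'
```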
You will see the MySQL version number, as well as a handy warning.
You can also use docker container exec to connect to a new shell process inside an already-running container. Executing the command below will give you an interactive shell (sh) inside your MySQL container.
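For example:

```shell
# Attach an interactive sh session to the already-running MySQL container
docker container exec --interactive --tty mydb sh
```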
Notice that your shell prompt has changed. This is because your shell is now connected to the sh process running inside of your container.
Let’s check the version number by running the same command again, only this time from within the new shell session in the container.
Notice the output is the same as before.
exit to leave the interactive shell session.
In this step you’ll learn how to package your own apps as Docker images using a Dockerfile.
The Dockerfile syntax is straightforward. In this task, we’re going to create a simple NGINX website from a Dockerfile.
Let’s have a look at the Dockerfile we’ll be using, which builds a simple website that allows you to send a tweet.
Make sure you’re in the linux_tweet_app directory.
Display the contents of the Dockerfile.
Let’s see what each of the lines in the Dockerfile does.
COPY is used to copy two files into the image: index.html and a graphic that will be used on our webpage.
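Based on the description above, the Dockerfile is along these lines (the nginx base image and the graphic’s filename are assumptions — check the actual file in the repo):

```dockerfile
FROM nginx:latest
COPY index.html /usr/share/nginx/html
COPY linux.png /usr/share/nginx/html
```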
In order to make the following commands more copy/paste friendly, export an environment variable containing your DockerID (if you don’t have a DockerID you can get one for free via Docker Hub).
You will have to manually type this command as it requires your unique DockerID.
export DOCKERID=<your docker id>
Echo the value of the variable back to the terminal to ensure it was stored correctly.
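For example, with a hypothetical DockerID of moby:

```shell
export DOCKERID=moby   # hypothetical DockerID, for illustration only
echo $DOCKERID         # prints: moby
```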
Use the docker image build command to create a new Docker image using the instructions in the Dockerfile.
--tag allows us to give the image a custom name. In this case it’s made up of our DockerID, the application name, and a version. Having the Docker ID attached to the name will allow us to store it on Docker Hub in a later step.
The . at the end tells Docker to use the current directory as the build context.
Be sure to include the period (.) at the end of the command.
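The build command, assembled from the pieces above:

```shell
# Build the image from the Dockerfile in the current directory (the trailing .)
docker image build --tag $DOCKERID/linux_tweet_app:1.0 .
```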
The output below shows the Docker daemon executing each line in the Dockerfile
Use the docker container run command to start a new container from the image you created.
As this container will be running an NGINX web server, we’ll use the --publish flag to publish port 80 inside the container onto port 80 on the host. This will allow traffic coming in to the Docker host on port 80 to be directed to port 80 in the container. The format of the --publish flag is host_port:container_port.
Any external traffic coming into the server on port 80 will now be directed into the container on port 80.
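A sketch of that run command (the container name is illustrative):

```shell
# Run the web app in the background, publishing host port 80 to container port 80
docker container run --detach --publish 80:80 \
  --name linux_tweet_app $DOCKERID/linux_tweet_app:1.0
```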
In a later step you will see how to map a container to a different host port - this is necessary when two containers listen on the same port, since a given port can only be published once on the host.
Load the website in your browser to confirm it is running.
Once you’ve accessed your website, shut it down and remove it.
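Assuming the container was named linux_tweet_app, the one-step removal looks like this:

```shell
# Force-remove the running container in one step (no graceful stop)
docker container rm --force linux_tweet_app
```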
Note: We used the --force parameter to remove the running container without stopping it first. This will ungracefully shut down the container and permanently remove it from the Docker host.
In a production environment you may want to use docker container stop to gracefully stop the container and leave it on the host. You can then use docker container rm to permanently remove it.
When you’re actively working on an application it is inconvenient to have to stop the container, rebuild the image, and run a new version every time you make a change to your source code.
One way to streamline this process is to mount the source code directory on the local machine into the running container. This will allow any changes made to the files on the host to be immediately reflected in the container.
We do this using something called a bind mount.
When you use a bind mount, a file or directory on the host machine is mounted into a container running on the same host.
Let’s start the web app and mount the current directory into the container.
In this example we’ll use the --mount flag to mount the current directory on the host into /usr/share/nginx/html inside the container.
Be sure to run this command from within the linux_tweet_app directory on your Docker host.
Remember from the Dockerfile, /usr/share/nginx/html is where the HTML files are stored for the web app.
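A sketch of the bind-mount run command (container name illustrative; run it from ~/linux_tweet_app so $(pwd) is the web content directory):

```shell
docker container run --detach --publish 80:80 \
  --name linux_tweet_app \
  --mount type=bind,source="$(pwd)",target=/usr/share/nginx/html \
  $DOCKERID/linux_tweet_app:1.0
```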
The website should be running.
Bind mounts mean that any changes made to the local file system are immediately reflected in the running container.
Copy a new index.html into the container.
The Git repo that you pulled earlier contains several different versions of an index.html file. You can run an ls command from within the ~/linux_tweet_app directory to see a list of them. In this step we’ll replace the current index.html with one of the other versions.
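A sketch of the replacement step, assuming one of the alternate pages in the repo is named index-new.html (check the ls output for the real filenames):

```shell
# Overwrite index.html with another version; the bind mount makes the
# change visible in the running container immediately
cp index-new.html index.html
```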
Go to the running website and refresh the page. Notice that the site has changed.
If you are comfortable with vi you can use it to load the local index.html file and make additional changes. Those too would be reflected when you reload the webpage. If you are really adventurous, why not try using exec to access the running container and modify the files stored there.
Even though we’ve modified the index.html on the local filesystem and seen the change reflected in the running container, we’ve not actually changed the Docker image that the container was started from.
To show this, stop the current container and re-run the 1.0 image without a bind mount.
Stop and remove the currently running container.
Rerun the current version without a bind mount.
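Sketched with the same illustrative container name as before:

```shell
# Remove the container that was started with the bind mount...
docker container rm --force linux_tweet_app

# ...and start the 1.0 image again with no mount
docker container run --detach --publish 80:80 \
  --name linux_tweet_app $DOCKERID/linux_tweet_app:1.0
```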
Notice the website is back to the original version.
Stop and remove the current container
To persist the changes you made to the index.html file into the image, you need to build a new version of the image.
Build a new image and tag it as 2.0.
Remember that you previously modified the index.html file on the Docker host’s local filesystem. This means that running another docker image build command will build a new image with the updated index.html.
Be sure to include the period (.) at the end of the command.
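The second build, differing from the first only in the tag:

```shell
# Build the updated site as version 2.0; unchanged layers come from the cache
docker image build --tag $DOCKERID/linux_tweet_app:2.0 .
```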
Notice how fast that built! This is because Docker reused the cached layers that hadn’t changed and only rebuilt the layers affected by your change, rather than rebuilding the whole image.
Let’s look at the images on the system.
You now have both versions of the web app on your host.
Run a new container from the new version of the image.
Check the new version of the website (You may need to refresh your browser to get the new version to load).
The web page will have an orange background.
We can run both versions side by side. The only thing we need to be aware of is that we cannot have two containers using port 80 on the same host.
As we’re already using port 80 for the container running the 2.0 version of the image, we will start the new container and publish it on port 8080. Additionally, we need to give our container a unique name.
Run another new container, this time from the old version of the image.
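Sketched below; the name old_linux_tweet_app is illustrative, any unused name works:

```shell
# Publish the old version on host port 8080 with a distinct container name
docker container run --detach --publish 8080:80 \
  --name old_linux_tweet_app $DOCKERID/linux_tweet_app:1.0
```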
Notice that this command maps the new container to port 8080 on the host, because two containers cannot publish the same port on a single Docker host.
View the old version of the website.
List the images on your Docker host.
You will see that you now have two linux_tweet_app images - one tagged as 1.0 and the other as 2.0.
These images are only stored in your Docker host’s local repository. Your Docker host will be deleted after the workshop. In this step we’ll push the images to a public repository so you can run them from any Linux machine with Docker.
Distribution is built into the Docker platform. You can build images locally and push them to a public or private registry, making them available to other users. Anyone with access can pull that image and run a container from it. The behavior of the app in the container will be the same for everyone, because the image contains the fully-configured app - the only requirements to run it are Linux and Docker.
Docker Hub is the default public registry for Docker images.
Before you can push your images, you will need to log into Docker Hub.
You will need to supply your Docker ID credentials when prompted.
Push version 1.0 of your web app using docker image push.
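The login and push steps, assembled from the text above:

```shell
docker login                                     # enter your Docker ID credentials
docker image push $DOCKERID/linux_tweet_app:1.0  # upload version 1.0 to Docker Hub
```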
You’ll see the progress as the image is pushed up to Docker Hub.
Now push version 2.0.
Notice that several lines of the output say Layer already exists. This is because Docker reuses read-only layers that are identical to layers already uploaded in a previous push.
You can browse to https://hub.docker.com/r/<your docker id>/ and see your newly-pushed Docker images. These are public repositories, so anyone can pull the image - you don’t even need a Docker ID to pull public images. Docker Hub also supports private repositories.
Check out the introduction to multi-service application stack orchestration in the Application Containerization and Microservice Orchestration tutorial.
Docker runs containers on a separate network by default, called the Docker bridge network, which makes the DHCP server hand out addresses on that internal network rather than on your LAN, where you probably want them. This document details why DHCP with Docker Pi-hole is different from a normal Pi-hole and how to fix the problem.
Docker's bridge network mode is the default, and it is recommended as the more secure setting for containers: Docker is all about isolation, so it isolates processes by default, and the bridge network isolates networking by default too. You gain access to an isolated container's service ports by using port forwards in your container's runtime config; for example, -p 67:67 forwards the DHCP port. However, the DHCP protocol operates through a network 'broadcast', which cannot span multiple networks (Docker's bridge and your LAN network). In order to get DHCP on to your network there are a few approaches:
Here are details on setting up DHCP for Docker Pi-hole for various network modes available in docker.
Advantages: Simple, easy, and fast setup
Possibly the simplest way to get DHCP working with Docker Pi-hole is to use host networking, which puts the container on your LAN network just like a regular Raspberry Pi Pi-hole would be, allowing it to broadcast DHCP. It will have the same IP as your Docker host server in this mode, so you may still have to deal with port conflicts.
Use docker run --net=host if you don't use docker-compose.
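With docker-compose, host networking is a one-line setting. A minimal sketch, assuming the official pihole/pihole image (the rest of the compose file — environment variables, volumes — is elided):

```yaml
services:
  pihole:
    image: pihole/pihole
    network_mode: host
```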
Advantages: Works well with NAS devices or hard port conflicts
A Macvlan network is the most advanced option since it requires more network knowledge and setup. This mode is similar to host network mode but instead of borrowing the IP of your docker host computer it grabs a new IP address off your LAN network.
Having the container get its own IP not only solves the broadcast problem but also avoids port conflicts you might have on devices such as NAS boxes with web interfaces. Tony Lawrence first detailed macvlan setup for Pi-hole in the second part of his great blog series about running Pi-hole on Synology Docker; check it out here: Free your Synology ports with Macvlan.
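A sketch of a macvlan setup; the subnet, gateway, parent interface, and IP below are placeholders you must adapt to your own LAN:

```shell
# Create a macvlan network bridged onto the host's eth0 interface
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 macvlan_net

# Run Pi-hole with its own LAN IP on that network
docker run -d --net=macvlan_net --ip=192.168.1.250 pihole/pihole
```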
Advantages: Works well with container web reverse proxies like Nginx or Traefik
If you want to use Docker's bridged network mode then you need to run a DHCP relay. A relay points to your container's forwarded port 67 and spreads the broadcast signal from the isolated Docker bridge onto your LAN network. Relays are very simple software; you just have to configure one to point to your Docker host's IP address on port 67.
Although uncommon, if your router is advanced enough it may support a DHCP relay. Try googling your router manufacturer plus 'DHCP relay', or look in your router's configuration around the DHCP settings or advanced areas.
If your router doesn't support it, you can run a software/container based DHCP relay on your LAN instead. The author of dnsmasq made a very tiny simple one called DHCP-helper. DerFetzer kindly shared his great setup of a DHCP-helper container on the Pi-hole Discourse forums.
The out-of-the-box default bridge network has some limitations that a user-created bridge network won't have. These limitations make it painful to use, especially when connecting multiple containers together.
Avoid using the built-in default Docker bridge network; the simplest way to do this is to use a docker-compose setup, since it creates its own network automatically. If Compose isn't an option, the bridge network docs should help you create your own.
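A sketch of creating and using your own bridge network (names are illustrative):

```shell
# Create a user-defined bridge network; containers attached to it get
# DNS-based service discovery that the default bridge lacks
docker network create my_bridge

# Attach a container to it at run time
docker run -d --net my_bridge --name pihole pihole/pihole
```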
When you want to run your application in Docker on Synology, you are not allowed to use all of the available parameters of the docker run command. Check my other post about the basics of Docker on Synology, which contains an enumeration of all possible parameters.
Basically, you have two options for how to run your application in Docker.
In any case, you will need to map some network ports to your new container and/or make it possible to access some resources (e.g. shared folders) on your Synology from your new container.
We will use a very simple application to demonstrate the overall deployment process. I chose GitList, which is fully available on the official Docker Hub.
The overall process consists of four steps:
Search for the keyword 'GitList' on the Registry tab in the Docker application and download it. If you need more instructions, check this.
You need to know the command to run the downloaded application. You will usually find it on the official page of the downloaded image - in our case, GitList:
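A sketch of that run command with the parameters discussed below; the image name is a placeholder — take the real one from the image's Docker Hub page:

```shell
docker run --rm=true -p 8888:80 -v /path/repo:/repos <gitlist-image>
```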
Explanation of the parameters:
| Parameter | Meaning |
| --- | --- |
| --rm=true | Deletes the container immediately after the run ends (useful when GitList needs no persistent custom settings), but Synology does not support this parameter. |
| -p 8888:80 | GitList listens on port 80 inside the container, but your Synology uses that port for another purpose. We will run GitList on port 8888 (or whatever you want). |
| -v /path/repo:/repos | The first part (/path/repo) is a path on your Synology; /repos is the path GitList requires inside the container. |
Use the downloaded image for creating a new container with your application inside the Docker application on your Synology.
You may use either Launch with wizard (1) or Launch with Docker Run (2) options in the Launch menu on the Image tab:
There is no real difference between them, the second option tries to analyze your 'docker run ...' command and automatically fills in the wizard, which appears after that as well.
We will use an empty wizard, set up the Container Name (1), the Local Port (2) (your choice) and the Container Port (3) (must be 80):
On Step 2 we can create a shortcut on desktop (1) in DSM to the GitList homepage:
On the Summary page, open Advanced Settings (1) to map your folder with Git repositories to GitList:
Advanced Settings gives you the ability to Add Folder (1) placed on your Synology (2) and mount it into GitList (3):
After you start the new container (on the Container page), you can visit GitList in your browser through the IP assigned to your Synology, using port 8888:
The Docker application is quite deeply integrated into DSM, so you are able to configure access to your application in the Firewall settings:
NOTE: GitList itself doesn't contain any user management or access restrictions, so enabling access to GitList outside of your network is not a good idea.