
Synology Docker Linux


In this lab, we will look at some basic Docker commands and a simple build-ship-run workflow. We’ll start by running some simple containers, then we’ll use a Dockerfile to build a custom app. Finally, we’ll look at how to use bind mounts to modify a running container as you might if you were actively developing using Docker.

Difficulty: Beginner (assumes no familiarity with Docker)

Time: Approximately 30 minutes

Tasks:

Task 0: Prerequisites

You will need all of the following to complete this lab:

  • A clone of the lab’s GitHub repo.
  • A DockerID.

Clone the Lab’s GitHub Repo

Use the following command to clone the lab’s repo from GitHub. This will make a copy of the lab’s repo in a new sub-directory called linux_tweet_app.
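
A likely form of the command, assuming the lab repo lives under the dockersamples organization on GitHub:

    git clone https://github.com/dockersamples/linux_tweet_app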

Make sure you have a DockerID

If you do not have a DockerID (a free login used to access Docker Hub), please visit Docker Hub and register for one. You will need this for later steps.

Task 1: Run some simple Docker containers

There are different ways to use containers. These include:

  1. To run a single task: This could be a shell script or a custom app.
  2. Interactively: This connects you to the container similar to the way you SSH into a remote server.
  3. In the background: For long-running services like websites and databases.

In this section you’ll try each of those options and see how Docker manages the workload.

Run a single task in an Alpine Linux container

In this step we’re going to start a new container and tell it to run the hostname command. The container will start, execute the hostname command, then exit.

  1. Run the following command in your Linux console.
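
    The command is presumably of this form, which runs hostname in a throwaway Alpine container:

    docker container run alpine hostname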

    The output will show that the alpine:latest image could not be found locally. When this happens, Docker automatically pulls it from Docker Hub.

    After the image is pulled, the container’s hostname is displayed (888e89a3b36b in this example).

  2. Docker keeps a container running as long as the process it started inside the container is still running. In this case the hostname process exits as soon as the output is written. This means the container stops. However, Docker doesn’t delete resources by default, so the container still exists in the Exited state.

    List all containers.
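
    The listing command uses the --all flag (referenced again below) so that stopped containers appear too:

    docker container ls --all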

    Notice that your Alpine Linux container is in the Exited state.

    Note: The container ID is the hostname that the container displayed. In this example it’s 888e89a3b36b.

Containers which do one task and then exit can be very useful. You could build a Docker image that executes a script to configure something. Anyone can execute that task just by running the container - they don’t need the actual scripts or configuration information.

Run an interactive Ubuntu container

You can run a container based on a different version of Linux than is running on your Docker host.

In the next example, we are going to run an Ubuntu Linux container on top of an Alpine Linux Docker host (Play With Docker uses Alpine Linux for its nodes).

  1. Run a Docker container and access its shell.
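
    A likely form of the command:

    docker container run --interactive --tty --rm ubuntu bash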

    In this example, we’re giving Docker three parameters:

    • --interactive says you want an interactive session.
    • --tty allocates a pseudo-tty.
    • --rm tells Docker to go ahead and remove the container when it’s done executing.

    The first two parameters allow you to interact with the Docker container.

    We’re also telling the container to run bash as its main process (PID 1).

    When the container starts you’ll drop into the bash shell with the default prompt root@<container id>:/#. Docker has attached to the shell in the container, relaying input and output between your local session and the shell session in the container.

  2. Run the following commands in the container.
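
    The three commands referenced here are:

    ls /
    ps aux
    cat /etc/issue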

    ls / will list the contents of the root directory in the container, ps aux will show the processes running in the container, and cat /etc/issue will show which Linux distro the container is running - in this case Ubuntu 18.04.3 LTS.

  3. Type exit to leave the shell session. This will terminate the bash process, causing the container to exit.

    Note: As we used the --rm flag when we started the container, Docker removed the container when it stopped. This means if you run another docker container ls --all you won’t see the Ubuntu container.

  4. For fun, let’s check the version of our host VM.
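
    The same distro check used inside the container also works on the host:

    cat /etc/issue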

    You should see output showing that the host is running Alpine Linux.

Notice that our host VM is running Alpine Linux, yet we were able to run an Ubuntu container. As previously mentioned, the distribution of Linux inside the container does not need to match the distribution of Linux running on the Docker host.

However, Linux containers require the Docker host to be running a Linux kernel. For example, Linux containers cannot run directly on Windows Docker hosts. The same is true of Windows containers - they need to run on a Docker host with a Windows kernel.

Interactive containers are useful when you are putting together your own image. You can run a container and verify all the steps you need to deploy your app, and capture them in a Dockerfile.

You can commit a container to make an image from it - but you should avoid that wherever possible. It’s much better to use a repeatable Dockerfile to build your image. You’ll see that shortly.

Run a background MySQL container

Background containers are how you’ll run most applications. Here’s a simple example using MySQL.

  1. Run a new MySQL container with the following command.
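
    A sketch of the command (the password value is a placeholder you should change):

    docker container run --detach --name mydb -e MYSQL_ROOT_PASSWORD=my-secret-pw mysql:latest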

    • --detach will run the container in the background.
    • --name will name it mydb.
    • -e will use an environment variable to specify the root password (NOTE: This should never be done in production).

    As the MySQL image was not available locally, Docker automatically pulled it from Docker Hub.

    As long as the MySQL process is running, Docker will keep the container running in the background.

  2. List the running containers.
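
    Without the --all flag, the listing shows only running containers:

    docker container ls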

    Notice your container is running.

  3. You can check what’s happening in your containers by using a couple of built-in Docker commands: docker container logs and docker container top.
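
    For example, to see the logs:

    docker container logs mydb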

    This shows the logs from the MySQL Docker container.

    Let’s look at the processes running inside the container.
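
    The process listing for the mydb container:

    docker container top mydb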

    You should see the MySQL daemon (mysqld) is running in the container.

    Although MySQL is running, it is isolated within the container because no network ports have been published to the host. Network traffic cannot reach containers from the host unless ports are explicitly published.

  4. List the MySQL version using docker container exec.

    docker container exec allows you to run a command inside a container. In this example, we’ll use docker container exec to run mysql --user=root --password=$MYSQL_ROOT_PASSWORD --version inside our MySQL container.
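
    A sketch of the full command; the sh -c wrapper and single quotes make $MYSQL_ROOT_PASSWORD expand inside the container, where it is set:

    docker container exec mydb sh -c 'mysql --user=root --password=$MYSQL_ROOT_PASSWORD --version'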

    You will see the MySQL version number, as well as a handy warning.

  5. You can also use docker container exec to connect to a new shell process inside an already-running container. Executing the command below will give you an interactive shell (sh) inside your MySQL container.
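
    The command is presumably:

    docker container exec --interactive --tty mydb sh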

    Notice that your shell prompt has changed. This is because your shell is now connected to the sh process running inside of your container.

  6. Let’s check the version number by running the same command again, only this time from within the new shell session in the container.
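
    From inside the container’s shell the environment variable is available directly:

    mysql --user=root --password=$MYSQL_ROOT_PASSWORD --version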

    Notice the output is the same as before.

  7. Type exit to leave the interactive shell session.

Task 2: Package and run a custom app using Docker

In this step you’ll learn how to package your own apps as Docker images using a Dockerfile.

The Dockerfile syntax is straightforward. In this task, we’re going to create a simple NGINX website from a Dockerfile.

Build a simple website image

Let’s have a look at the Dockerfile we’ll be using, which builds a simple website that allows you to send a tweet.

  1. Make sure you’re in the linux_tweet_app directory.
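
    Assuming the repo was cloned into your home directory:

    cd ~/linux_tweet_app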

  2. Display the contents of the Dockerfile.
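
    To display it, run cat Dockerfile. The file itself is not reproduced here; a representative version (the graphic’s filename, linux.png, is an assumption) looks like this:

    FROM nginx:latest
    COPY index.html /usr/share/nginx/html
    COPY linux.png /usr/share/nginx/html
    EXPOSE 80 443
    CMD ["nginx", "-g", "daemon off;"]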

    Let’s see what each of these lines in the Dockerfile do.

    • FROM specifies the base image to use as the starting point for this new image you’re creating. For this example we’re starting from nginx:latest.
    • COPY copies files from the Docker host into the image, at a known location. In this example, COPY is used to copy two files into the image: index.html and a graphic that will be used on our webpage.
    • EXPOSE documents which ports the application uses.
    • CMD specifies what command to run when a container is started from the image. Notice that we can specify the command, as well as run-time arguments.
  3. In order to make the following commands more copy/paste friendly, export an environment variable containing your DockerID (if you don’t have a DockerID you can get one for free via Docker Hub).

    You will have to manually type this command as it requires your unique DockerID.

    export DOCKERID=<your docker id>

  4. Echo the value of the variable back to the terminal to ensure it was stored correctly.
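
    For example:

    echo $DOCKERID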

  5. Use the docker image build command to create a new Docker image using the instructions in the Dockerfile.

    • --tag allows us to give the image a custom name. In this case it comprises our DockerID, the application name, and a version. Having the Docker ID attached to the name will allow us to store it on Docker Hub in a later step.
    • . tells Docker to use the current directory as the build context.

    Be sure to include the period (.) at the end of the command.
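
    The build command is presumably:

    docker image build --tag $DOCKERID/linux_tweet_app:1.0 .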

    The build output shows the Docker daemon executing each line in the Dockerfile.

  6. Use the docker container run command to start a new container from the image you created.

    As this container will be running an NGINX web server, we’ll use the --publish flag to publish port 80 inside the container onto port 80 on the host. This will allow traffic coming in to the Docker host on port 80 to be directed to port 80 in the container. The format of the --publish flag is host_port:container_port.
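
    A likely form of the command (the container name linux_tweet_app is an assumption):

    docker container run --detach --publish 80:80 --name linux_tweet_app $DOCKERID/linux_tweet_app:1.0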

    Any external traffic coming into the server on port 80 will now be directed into the container on port 80.

    In a later step you will see how to map traffic to a different host port - this is necessary when two containers listen on the same container port, since a given host port can only be published to one container at a time.

  7. Load the website in your browser to confirm it is running (browse to your Docker host’s IP on port 80).

  8. Once you’ve accessed your website, shut it down and remove it.
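
    Assuming the container was named linux_tweet_app as above:

    docker container rm --force linux_tweet_app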

    Note: We used the --force parameter to remove the running container without stopping it first. This will ungracefully shut down the container and permanently remove it from the Docker host.

    In a production environment you may want to use docker container stop to gracefully stop the container and leave it on the host. You can then use docker container rm to permanently remove it.

Task 3: Modify a running website

When you’re actively working on an application it is inconvenient to have to stop the container, rebuild the image, and run a new version every time you make a change to your source code.

One way to streamline this process is to mount the source code directory on the local machine into the running container. This will allow any changes made to the files on the host to be immediately reflected in the container.

We do this using something called a bind mount.

When you use a bind mount, a file or directory on the host machine is mounted into a container running on the same host.

Start our web app with a bind mount

  1. Let’s start the web app and mount the current directory into the container.

    In this example we’ll use the --mount flag to mount the current directory on the host into /usr/share/nginx/html inside the container.

    Be sure to run this command from within the linux_tweet_app directory on your Docker host.
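
    A sketch of the command, reusing the names from earlier steps:

    docker container run --detach --publish 80:80 --name linux_tweet_app \
      --mount type=bind,source="$(pwd)",target=/usr/share/nginx/html \
      $DOCKERID/linux_tweet_app:1.0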

    Remember from the Dockerfile, /usr/share/nginx/html is where the HTML files are stored for the web app.

  2. The website should be running.

Modify the running website

Bind mounts mean that any changes made to the local file system are immediately reflected in the running container.

  1. Copy a new index.html into the container.

    The Git repo that you pulled earlier contains several different versions of an index.html file. You can manually run an ls command from within the ~/linux_tweet_app directory to see a list of them. In this step we’ll replace index.html with index-new.html.
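
    From within the ~/linux_tweet_app directory:

    cp index-new.html index.html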

  2. Go to the running website and refresh the page. Notice that the site has changed.

    If you are comfortable with vi you can use it to load the local index.html file and make additional changes. Those too would be reflected when you reload the webpage. If you are really adventurous, why not try using docker container exec to access the running container and modify the files stored there.

Even though we’ve modified index.html on the local filesystem and seen the change reflected in the running container, we’ve not actually changed the Docker image that the container was started from.

To show this, stop the current container and re-run the 1.0 image without a bind mount.

  1. Stop and remove the currently running container.
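
    As before:

    docker container rm --force linux_tweet_app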

  2. Rerun the current version without a bind mount.
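
    Note there is no --mount flag this time:

    docker container run --detach --publish 80:80 --name linux_tweet_app $DOCKERID/linux_tweet_app:1.0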

  3. Notice the website is back to the original version.

  4. Stop and remove the current container.

Update the image

To persist the changes you made to the index.html file into the image, you need to build a new version of the image.

  1. Build a new image and tag it as 2.0.

    Remember that you previously modified the index.html file on the Docker host’s local filesystem. This means that running another docker image build command will build a new image with the updated index.html.

    Be sure to include the period (.) at the end of the command.
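
    The build command presumably differs from the earlier one only in its tag:

    docker image build --tag $DOCKERID/linux_tweet_app:2.0 .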

    Notice how fast that built! This is because Docker reused the cached layers that didn’t change and only rebuilt the layer containing the updated index.html, rather than rebuilding the whole image.

  2. Let’s look at the images on the system.
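
    The listing command:

    docker image ls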

    You now have both versions of the web app on your host.

Test the new version

  1. Run a new container from the new version of the image.
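
    A likely form, again publishing host port 80:

    docker container run --detach --publish 80:80 --name linux_tweet_app $DOCKERID/linux_tweet_app:2.0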

  2. Check the new version of the website (You may need to refresh your browser to get the new version to load).

    The web page will have an orange background.

    We can run both versions side by side. The only thing we need to be aware of is that we cannot have two containers using port 80 on the same host.

    As we’re already using port 80 for the container running the 2.0 version of the image, we will start the new container and publish it on port 8080. Additionally, we need to give the container a unique name (old_linux_tweet_app).

  3. Run another new container, this time from the old version of the image.
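
    Presumably:

    docker container run --detach --publish 8080:80 --name old_linux_tweet_app $DOCKERID/linux_tweet_app:1.0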

    Notice that this command maps the new container to port 8080 on the host, because two containers cannot publish the same port on a single Docker host.

  4. View the old version of the website.

Push your images to Docker Hub

  1. List the images on your Docker host.

    You will see that you now have two linux_tweet_app images - one tagged as 1.0 and the other as 2.0.

    These images are only stored in your Docker host’s local repository. Your Docker host will be deleted after the workshop. In this step we’ll push the images to a public repository so you can run them from any Linux machine with Docker.

    Distribution is built into the Docker platform. You can build images locally and push them to a public or private registry, making them available to other users. Anyone with access can pull that image and run a container from it. The behavior of the app in the container will be the same for everyone, because the image contains the fully-configured app - the only requirements to run it are Linux and Docker.

    Docker Hub is the default public registry for Docker images.

  2. Before you can push your images, you will need to log into Docker Hub.

    You will need to supply your Docker ID credentials when prompted.
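
    The login command prompts for your username and password:

    docker login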

  3. Push version 1.0 of your web app using docker image push.
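
    docker image push $DOCKERID/linux_tweet_app:1.0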

    You’ll see the progress as the image is pushed up to Docker Hub.

  4. Now push version 2.0.
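
    docker image push $DOCKERID/linux_tweet_app:2.0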

    Notice that several lines of the output say Layer already exists. This is because Docker only uploads layers the registry doesn’t already have; the read-only layers shared with the 1.0 image were pushed earlier and are reused.

You can browse to https://hub.docker.com/r/<your docker id>/ and see your newly-pushed Docker images. These are public repositories, so anyone can pull the image - you don’t even need a Docker ID to pull public images. Docker Hub also supports private repositories.

Next Step


Check out the introduction to multi-service application stack orchestration in the Application Containerization and Microservice Orchestration tutorial.

Docker Pi-hole DHCP

Docker runs containers on a separate network by default (the Docker bridge network), which makes Pi-hole’s DHCP server want to serve addresses to that network and not your LAN network, where you probably want them. This section details why Docker Pi-hole DHCP is different from normal Pi-hole and how to fix the problem.


Technical details

Docker’s bridge network mode is the default and is recommended as a more secure setting for containers: Docker is all about isolation, so it isolates processes by default, and the bridge network isolates the networking by default too. You gain access to the isolated container’s service ports by using port forwards in your container’s runtime config; for example, -p 67:67 publishes the DHCP port. However, the DHCP protocol operates through network broadcasts, which cannot span multiple networks (Docker’s bridge and your LAN network). In order to get DHCP onto your network there are a few approaches:

Working network modes

Here are details on setting up DHCP for Docker Pi-hole with the various network modes available in Docker.

Docker Pi-hole with host networking mode

Advantages: Simple, easy, and fast setup

Possibly the simplest way to get DHCP working with Docker Pi-hole is to use host networking, which puts the container on your LAN network just as a regular Raspberry Pi Pi-hole would be, allowing it to broadcast DHCP. In this mode the container has the same IP as your Docker host server, so you may still have to deal with port conflicts.

  • Inside your docker-compose.yml, remove all ports and replace them with network_mode: host (see the sketch after this list)
  • Use docker run --net=host if you don’t use docker-compose
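
A minimal docker-compose sketch of the host-networking option, assuming the official pihole/pihole image (the timezone value is a placeholder):

    services:
      pihole:
        image: pihole/pihole:latest
        network_mode: host        # shares the host's network stack; no ports: section needed
        environment:
          TZ: 'Europe/London'     # placeholder timezone
        restart: unless-stopped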

Docker Pi-hole with a Macvlan network

Advantages: Works well with NAS devices or hard port conflicts

A Macvlan network is the most advanced option, since it requires more network knowledge and setup. This mode is similar to host network mode, but instead of borrowing the IP of your Docker host it grabs a new IP address off your LAN network.

Having the container get its own IP not only solves the broadcast problem but also avoids the port conflicts you might have on devices such as NAS units with web interfaces. Tony Lawrence first detailed a macvlan setup for Pi-hole in the second part of his great blog series about running Pi-hole on Synology Docker: Free your Synology ports with Macvlan.
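
A rough sketch of creating a Macvlan network and attaching Pi-hole to it (the subnet, gateway, parent NIC, and IP below are placeholders for your own LAN):

    docker network create -d macvlan \
      --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
      -o parent=eth0 pihole_macvlan
    docker run --detach --name pihole --net pihole_macvlan --ip 192.168.1.250 pihole/pihole:latest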

Docker Pi-hole with bridge networking

Advantages: Works well with container web reverse proxies like Nginx or Traefik

If you want to use Docker’s bridged network mode then you need to run a DHCP relay. A relay points at your container’s forwarded port 67 and spreads the broadcast signal from the isolated Docker bridge onto your LAN network. Relays are very simple software; you just have to configure one to point to port 67 on your Docker host’s IP.

Although uncommon, if your router is advanced enough it may support a DHCP relay. Try searching for your router manufacturer plus ‘DHCP relay’, or look in your router’s configuration around the DHCP settings or advanced areas.

If your router doesn’t support it, you can run a software or container-based DHCP relay on your LAN instead. The author of dnsmasq made a very small, simple one called dhcp-helper. DerFetzer kindly shared his great setup of a dhcp-helper container on the Pi-hole Discourse forums.
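
As a rough illustration (the -s flag is per the dhcp-helper man page, and the address is a placeholder for your Docker host’s LAN IP):

    dhcp-helper -s 192.168.1.10    # relay LAN DHCP broadcasts to the Docker host, which publishes port 67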

Warning about the default bridge network

The out-of-the-box default bridge network has some limitations that a user-created bridge network won’t have. These limitations make it painful to use, especially when connecting multiple containers together.

Avoid using the built-in default Docker bridge network. The simplest way to do this is to use a docker-compose setup, since it creates its own network automatically. If compose isn’t an option, the bridge network docs should help you create your own.
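
A minimal sketch (the network name is arbitrary; the ports shown are the usual Pi-hole ones and may need adjusting):

    docker network create pihole_net
    docker run --detach --name pihole --net pihole_net -p 53:53/tcp -p 53:53/udp -p 8080:80 pihole/pihole:latest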

When you want to run your application in Docker on Synology you are not allowed to use all of the available parameters of the docker run command. Check my other post about the basics of Docker on Synology, which enumerates all the supported parameters.

Basically, you have two options for running your application in Docker.

  1. Create your own Dockerfile including your application and build a new image.
  2. Use one of the existing images with a well-known application (e.g. Jenkins, GitLab, WordPress) available in the official repository.

In either case, you will need to map some network ports to your new container and/or make it possible to access some resources (e.g. shared folders) on your Synology from your new container.

We will use a very simple application to demonstrate the overall deployment process. I chose GitList, which is available on the official hub.

The overall process consists of four steps:

1. Download the GitList image to your Synology

Search for the keyword 'GitList' on the Registry tab in the Docker application and download it. If you need more instructions, check this.

2. Find the required docker run command

You need to know the command for running the downloaded application. You will usually find it on the official page of the downloaded image - in our case, GitList. The parameters are explained below, followed by a sketch of the full command.

Explanation of the parameters:

--rm=true deletes the container immediately after the run ends (useful when you don’t need any persistent custom settings in GitList; note that Synology does not support this flag).
-p 8888:80 GitList inside the Docker container listens on port 80, but your Synology uses that port for another purpose, so we will run GitList on port 8888 (or whatever you want).
-v /path/repo:/repos The first part (/path/repo) is a path on your Synology; /repos is the path GitList requires inside the container.
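
Putting the parameters together, the full command looks roughly like this (the image name is a placeholder; use the one you downloaded from the Registry tab):

    docker run --rm=true -p 8888:80 -v /path/repo:/repos <gitlist-image>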

3. Create a new container

Use the downloaded image to create a new container with your application inside the Docker application on your Synology.

You may use either Launch with wizard (1) or Launch with Docker Run (2) options in the Launch menu on the Image tab:

There is no real difference between them; the second option tries to analyze your ‘docker run ...’ command and automatically fills in the wizard, which then appears as well.

We will use an empty wizard and set up the Container Name (1), the Local Port (2) (your choice), and the Container Port (3) (must be 80):

On Step 2 we can create a shortcut on the desktop (1) in DSM to the GitList homepage:

On the Summary page, open Advanced Settings (1) to map your folder with Git repositories to GitList:

Advanced Settings gives you the ability to Add Folder (1) placed on your Synology (2) and mount it to GitList (3):

4. Run it

After you start the new container (on the Container page), you can visit GitList in your browser through the IP assigned to your Synology, using port 8888:


The Docker application is quite deeply integrated into DSM, so you are able to configure access to your application in the Firewall settings:

NOTE: GitList itself doesn't contain any user management or access restrictions, so enabling access to GitList outside of your network is not a good idea.

