12/28/2021

Install Nginx On Docker


Overview

I was recently diagnosing an issue at work where a service was configured with multiple differing ingress resources. The team’s reasoning for this was entirely reasonable and, above all, everything was working as expected.


However, once we tried to abandon Azure Dev Spaces and switch to Bridge to Kubernetes (“B2K”), it was quickly discovered that this setup wasn’t going to work straight out of the box: B2K doesn’t support multiple ingresses configured with the same domain name, and the Envoy proxy reports an error when it encounters them.

As a result, I decided the best course of action was to understand the routing that the team had enabled, and work out a more efficient way of handling the routing requirements using a single ingress resource.

To make this as simple as possible, I decided to get a sample service up and running locally so I could verify scenarios locally without having to deploy into a full cluster.

Docker Desktop

I’m using a Mac at the moment, but most (if not all) of the commands here will work on Windows too, especially if you use WSL2 rather than PowerShell.

The Docker Desktop version I have installed is 2.4.0.0 (stable) and is the latest stable version as of the time of writing.

I have the Kubernetes integration enabled already, but I had a version of Linkerd running there which I didn’t want to interfere with what I was doing. To get around this, I just used the Docker admin GUI to “reset the Kubernetes cluster”.

To install Docker Desktop, if you don’t have it installed already, go to https://docs.docker.com/desktop/ and follow the instructions for your OS.

Once installed, ensure that the Kubernetes integration is enabled.

Note: You don’t need to enable “Show system containers” for any of the following steps to work.

Now you should be able to verify that your cluster is up and running.

Note: I’ve aliased kubectl to k, simply out of laziness (read: efficiency).

This will show all pods in all namespaces:
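For example (using the k alias):

```bash
k get pods --all-namespaces
```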

Install Nginx

Now that we have a simple one-node cluster running under Docker Desktop, we need to install the Nginx ingress controller:
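The controller is typically installed by applying the manifest published in the ingress-nginx repo; the version in the URL below is an assumption, so check the project’s documentation for the current one:

```bash
k apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/cloud/deploy.yaml
```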

Tip: It’s not best practice to just blindly install Kubernetes resources by using yaml files taken straight from the internet. If you’re in any doubt, download the yaml file and save a copy of it locally. That way, you can inspect it and ensure that it’s always consistent when you apply it.

This will install an Nginx controller in the ingress-nginx namespace.

Routing

Now that you have installed the Nginx controller, you need to make sure that any deployments you make use a service of type NodePort rather than the default ClusterIP:
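For example, a minimal Service manifest of this type might look like the following (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sample
  namespace: sample
spec:
  type: NodePort        # rather than the default ClusterIP
  selector:
    app: sample
  ports:
    - port: 80          # service port
      targetPort: 80    # container port
```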

Domains

I’ve used a sample domain of chart-example.local in the Helm charts for this repo. In order for this to resolve locally you need to add an entry to your hosts file.

On a Mac, edit /private/etc/hosts. On Windows, edit C:\Windows\System32\drivers\etc\hosts. In either case, add the following line at the end:
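This maps the sample domain to your loopback address:

```
127.0.0.1 chart-example.local
```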

Now you can run a service on your local machine and make requests to it using the ingress routes you define in your deployment.

The rest of this article describes a really basic .NET Core application to prove that the routing works as expected. .NET Core is absolutely not required - this is just a simple example.

Sample Application

The concept for the sample application is a simple one.

  • There will be three different API endpoints in the app:
    • /foo/{guid} will return a new foo object in JSON
    • /bar/{guid} will return a new bar object in JSON
    • / will return a 200 OK response and will be used as a liveness and readiness check

The point we’re trying to prove is that API requests to /foo/{guid} resolve correctly to the /foo/* route, and requests to /bar/{guid} resolve correctly to the /bar/* route.

For example, a request to the /foo/* endpoint should return a new foo object in JSON, and a request to the /bar/* endpoint should return a new bar object.
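A quick sketch of what this looks like (the GUID and the response shapes are illustrative; the real definitions live in the sample repo):

```bash
curl http://chart-example.local/foo/3fa85f64-5717-4562-b3fc-2c963f66afa6
# e.g. {"id":"3fa85f64-5717-4562-b3fc-2c963f66afa6","type":"foo"}

curl http://chart-example.local/bar/3fa85f64-5717-4562-b3fc-2c963f66afa6
# e.g. {"id":"3fa85f64-5717-4562-b3fc-2c963f66afa6","type":"bar"}
```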

The sample code for this application can be found at https://github.com/michaelrosedev/sample_api.

Dockerfile

The Dockerfile for this sample application is extremely simple: it uses the .NET Core SDK to restore dependencies and build the application, then uses a second stage to copy the build artifacts into an Alpine image with the .NET Core runtime.

Note: This image does not follow best practices - it simply takes the shortest path to get a running service. For production scenarios, you don’t want to be building containers that run as root and expose low ports like 80.
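The actual Dockerfile is in the repo linked above; a minimal sketch of the two-stage approach looks something like this (the project and assembly name SampleApi is an assumption):

```dockerfile
# Stage 1: build with the full SDK image
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY . .
RUN dotnet restore
RUN dotnet publish -c Release -o /app/publish

# Stage 2: copy the artifacts into a slim Alpine runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-alpine
WORKDIR /app
COPY --from=build /app/publish .
EXPOSE 80
ENTRYPOINT ["dotnet", "SampleApi.dll"]
```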

Helm

The Helm chart in this repo was generated automatically with mkdir helm && cd helm && helm create sample. I then made the following changes:

  • Added a namespace value to the values.yaml file
  • Added usage of the namespace value in the various Kubernetes resource manifest files to make sure the application is deployed to a specific namespace
  • Changed the image.repository to mikrose/sample (my Docker Hub account) and the image.version to 1.0.1 (the latest version of my sample application)
  • Changed service.type to NodePort (because ClusterIP won’t work without a load balancer in front of the cluster)
  • Enabled the ingress resource (because that’s the whole point of this exercise)
  • Added the paths /foo and /bar to the ingress in values.yaml:
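With those changes, the relevant part of values.yaml looks roughly like this (the exact structure depends on the chart’s scaffolded ingress template):

```yaml
ingress:
  enabled: true
  hosts:
    - host: chart-example.local
      paths:
        - /foo
        - /bar
```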

Namespace

All the resources in the service that has the issue (see Overview) are in a dedicated namespace, and I want to reflect the same behaviour here.

The first thing I need to do, then, is add the desired namespace (sample) to my local cluster:
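```bash
k create namespace sample
```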

This will create a new namespace called sample in the cluster.

Now we can install the Helm chart. Make sure you’re in the ./helm directory, then run the following command:

  • helm install {name} ./{chart-dir} -n {namespace}, e.g. helm install sample ./sample -n sample

This will install the sample application into the sample namespace.

You can then verify that the pod is running:
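```bash
k get pods -n sample
```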

Tip: If you need to do any troubleshooting of 503 errors, first ensure you have changed your service to use a service.type of NodePort. Ask me how I know this…

Now you can make requests to your service and verify that your routes are working as expected:
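For example (uuidgen generates a random GUID on macOS and most Linux distros):

```bash
curl -i http://chart-example.local/foo/$(uuidgen)
curl -i http://chart-example.local/bar/$(uuidgen)
```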

And that’s it - we now have a working ingress route that we can hit from our local machine.

That means that it should be straightforward to configure and experiment with routing changes without having to resort to deploying into a full cluster - you can speed up your own local feedback loop and keep it all self-contained.

I will now be using this technique to wrap up an existing service and optimise the routing.

What is reverse proxy? What are its advantages?

What is a reverse proxy? A reverse proxy is a server that sits in front of one or more other servers and forwards client requests to the appropriate backend server. The response from that server is then relayed back to the client by the proxy.

Why would you use such a setup? There are several good reasons: it can be used for load balancing, caching, or protection from attacks.

I am not going into the details here. Instead, I'll show you how you can utilize the concept of reverse proxy to set up multiple services on the same server.

The idea is to run an Nginx server in a Docker container in reverse proxy mode, while the other web services each run in their own respective containers.

The Nginx container is configured so that it knows which web service is running in which container.

This is a good way to save the cost of hosting each service on a separate server: thanks to the reverse proxy, you can run multiple services on the same Linux server.

Setting up Nginx as reverse proxy to deploy multiple services on the same server using Docker

Let me show you how to go about configuring the above mentioned setup.

With these steps, you can run multiple web application containers under Nginx, each standalone container corresponding to its own domain or subdomain.

First, let's see what you need in order to follow this tutorial.

Prerequisites

The following knowledge will help you get started with this tutorial easily, although you can get by without it as well.

  • A Linux system/server. You can easily deploy a Linux server in minutes using Linode cloud service.
  • Familiarity with Linux commands and terminal.
  • Basic knowledge of Docker.
  • You should have Docker and Docker Compose installed on your Linux server. Please read our guide on installing Docker and Docker Compose on CentOS.
  • You should also own a domain (so that you can set up services on sub-domains).

I have used domain.com as an example domain name in the tutorial. Please make sure you change it according to your own domains or subdomains.

Other than the above, please also make sure of the following things:


Change your domain’s DNS records

In your domain name provider’s A/AAAA or CNAME record panel, make sure that both the domain and subdomains (including www) point to your server’s IP address.

This is an example for your reference:

Hostname          IP Address       TTL
domain.com        172.105.50.178   Default
*                 172.105.50.178   Default
sub0.domain.com   172.105.50.178   Default
sub1.domain.com   172.105.50.178   Default

Swap space

To make sure your container apps never run out of memory after you deploy them, you should have adequate swap space on your system.

You can adjust the swap according to the available RAM on your system, sizing it based on the set of app containers you plan to run on the single server and an estimate of their cumulative RAM usage.
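If you need to add swap, the usual approach on most Linux distributions looks like this (a sketch; the 2 GB size is just an example):

```bash
# Create and enable a 2 GB swap file
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Make it persist across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```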

Step 1: Set up Nginx reverse proxy container

Start by setting up your Nginx reverse proxy. Create a directory named reverse-proxy and switch to it:
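```bash
mkdir reverse-proxy && cd reverse-proxy
```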

Create a file named docker-compose.yml and open it in your favourite terminal-based text editor, like Vim or Nano.

For the Nginx reverse proxy, I'll be using the jwilder/nginx-proxy image. Add the following service definition to the docker-compose.yml file:
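A sketch of what this service definition might look like (the mount paths are the standard ones for jwilder/nginx-proxy; the container name is an assumption; the top-level volumes and networks declarations are completed in step 3):

```yaml
version: "3"

services:
  reverse-proxy:
    image: jwilder/nginx-proxy
    container_name: reverse-proxy
    restart: always
    ports:
      - "80:80"       # http
      - "443:443"     # https
    volumes:
      - html:/usr/share/nginx/html
      - dhparam:/etc/nginx/dhparam
      - vhost:/etc/nginx/vhost.d
      - certs:/etc/nginx/certs
      # Docker socket, mounted read-only
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - net
```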

Now let's go through the important parts of the compose file:

  • You have declared four volumes: html, dhparam, vhost and certs. They hold persistent data that you'll definitely want to keep even after the container goes down. The html and vhost volumes will be very important in the upcoming Let's Encrypt container deployment; they're designed to work together.
  • The Docker socket is mounted read-only inside the container. This is necessary for the reverse proxy container to generate Nginx's configuration files and to detect other containers carrying a specific environment variable.
  • Docker restart policy is set to always. Other options include on-failure and unless-stopped. In this case, always seemed more appropriate.
  • The ports 80 and 443 are bound to the host for http and https respectively.
  • Finally, it uses a different network, not the default bridge network.
Using a user-defined network is very important. It isolates all the containers that are to be proxied, enables the reverse proxy container to forward clients to their intended containers, and lets the containers communicate with each other (which is not possible with the default bridge network unless icc is set to true for the daemon).

Keep in mind that YAML is very finicky about indentation; use spaces, not tabs.


Step 2: Set up a container for automatic SSL certificate generation

For this, you can use the jrcs/letsencrypt-nginx-proxy-companion container image.

In the same docker-compose.yml file that you used before, add the following service definition:
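A sketch of the companion service (the container name letsencrypt-helper and the email address are placeholders):

```yaml
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: letsencrypt-helper
    restart: always
    volumes:
      # the exact same volumes as the reverse proxy
      - html:/usr/share/nginx/html
      - dhparam:/etc/nginx/dhparam
      - vhost:/etc/nginx/vhost.d
      - certs:/etc/nginx/certs
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      # name of the reverse proxy container defined above
      NGINX_PROXY_CONTAINER: reverse-proxy
      # placeholder; use your own email address
      DEFAULT_EMAIL: you@domain.com
    depends_on:
      - reverse-proxy
    networks:
      - net
```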

In this service definition:

  • You're using the exact same volumes as for the reverse proxy container. Sharing the html and vhost volumes is necessary for Let's Encrypt's ACME challenge to succeed. This container generates the certificates inside /etc/nginx/certs (in the container), which is why you share that volume with the reverse proxy. The dhparam volume will contain the dhparam file, and the socket is mounted so the container can detect other containers with a specific environment variable.
  • Here you have defined two environment variables. The NGINX_PROXY_CONTAINER variable points to the reverse proxy container. Set it to the name of the container. The DEFAULT_EMAIL is the email that'll be used while generating the certificates for each domain/subdomain.
  • The depends_on option is set so that this service waits for the reverse proxy to start first; only then will it start.
  • Finally, this container also shares the same network. This is necessary for the two containers to communicate.

Step 3: Finalize the docker compose file

Once the service definitions are done, complete the docker-compose file with the following lines:
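Roughly, these top-level declarations:

```yaml
volumes:
  certs:
  html:
  vhost:
  dhparam:

networks:
  net:
    external: true
```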

The network net is set to external because the proxied containers will also have to use it. If we let docker-compose create the network, its name will depend on the current directory and we'll end up with an oddly named network.

Other than that, other containers will have to set that network to be external anyway, otherwise those compose files will also have to reside in this same directory, none of which is ideal.

Therefore, create the network yourself:
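```bash
docker network create net
```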

The complete docker-compose.yml file is the combination of the snippets above: the two service definitions followed by the top-level volumes and networks declarations.

Finally, you can deploy these two containers (Nginx and Let's Encrypt) using the following command:
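```bash
docker-compose up -d
```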

Step 4: Verify that the Nginx reverse proxy is working


The container that'll serve the frontend will need to define two environment variables.

VIRTUAL_HOST: for generating the reverse proxy config

LETSENCRYPT_HOST: for generating the necessary certificates

Make sure that you have correct values for these two variables. You can run a dummy Nginx container behind the reverse proxy like this:
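A sketch (the subdomain is a placeholder, and nginx-dummy is just a container name for the stock nginx image):

```bash
docker run --rm --name nginx-dummy \
  -e VIRTUAL_HOST=sub.domain.com \
  -e LETSENCRYPT_HOST=sub.domain.com \
  --network net \
  -d nginx:latest
```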

Now if you go to the subdomain used in the previous command, you should see the default Nginx welcome page.

Once you have successfully tested it, you can stop the running docker container:
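```bash
# the --rm flag above means the container is removed as soon as it stops
docker stop nginx-dummy
```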

You may also stop the Nginx reverse proxy if you are not going to use it right away:
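```bash
# run from the reverse-proxy directory containing the docker-compose.yml
docker-compose down
```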

Step 5: Run other service containers with reverse proxy

The process of setting up other containers so that they can be proxied is VERY simple.

I'll show it with two instances of Nextcloud deployment in a moment. Let me first tell you what you are doing here.

Do not bind to any port

The proxied container doesn't need to publish its frontend port to the host; the reverse proxy container will automatically detect the port it exposes.

(OPTIONAL) Define VIRTUAL_PORT

If the reverse proxy container fails to detect the port, you can define another environment variable named VIRTUAL_PORT, set to the port serving the frontend (or whichever service you want proxied), like 80 or 7765.

Set Let's Encrypt email specific to a container


You can override the DEFAULT_EMAIL variable and set a specific email address for a particular container or web service's domain/subdomain certificate(s) by setting the LETSENCRYPT_EMAIL environment variable. This works on a per-container basis.

Now that you know all of this, let me show you the command that deploys a Nextcloud instance that'll be proxied by the Nginx proxy container and will have TLS (SSL/HTTPS) enabled.

This is NOT an ideal deployment. The following command is used for demonstration purposes only.
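A sketch (the container name and subdomain are placeholders):

```bash
docker run --name nextcloud-1 \
  -e VIRTUAL_HOST=sub0.domain.com \
  -e LETSENCRYPT_HOST=sub0.domain.com \
  --network net \
  -d nextcloud:latest
```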

In the example, you used the same network as the reverse proxy containers and defined the two environment variables with the appropriate subdomain (set yours accordingly). After a couple of minutes, you should see Nextcloud running on sub0.domain.com. Open it in a browser to verify.

You can deploy another Nextcloud instance just like this one, on a different subdomain, like the following:
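Along the same lines (again, the name and subdomain are placeholders):

```bash
docker run --name nextcloud-2 \
  -e VIRTUAL_HOST=sub1.domain.com \
  -e LETSENCRYPT_HOST=sub1.domain.com \
  --network net \
  -d nextcloud:latest
```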

Now you should see a different Nextcloud instance running on a different subdomain on the same server.

With this method, you can deploy different web apps on the same server served under different subdomains, which is pretty handy.

Follow along

Now that you have this set up, you can go ahead and use it in your actual deployments.


For more articles like these, subscribe to our newsletter, or consider becoming a member. For any queries, don't hesitate to comment down below.
