On Multiple Deploy Environments

Why deploying to multiple environments is a must for enterprise software systems, and how Docker-based deployments in Azure make it practical

Azure makes it easy to create multiple deployment environments. Each environment should be as close to the production environment as possible, and Azure can help with that too.

The pairing of developers and operations (DevOps) is key to success. Creating this pairing is largely a cultural concern: if folks won't get along, the pairing will be less than fruitful, and if the organization doesn't encourage cooperation, there is little hope.

Culture isn't the only battle. Finding the balance of responsibilities can be a challenge for many organizations just starting to apply DevOps principles. One straightforward way to look at it is this: operations is an end user of a developer's deliverable. While operations needs to do its part in setting up a viable environment, developers need to deliver a system that will work in that environment.

A great way to bridge that gap is testing. It sounds simple enough, but execution can be challenging: developer machines are rarely equivalent to the environments a system is deployed to, and the testing and production environments often differ from each other.

How can this be overcome?

Creating multiple deploy environments is key, and it means getting developers and operations in sync as early as possible. Deployable containers, such as those Docker provides, can help reduce the differences between environments to practically zero, and well-defined deployment environments further strengthen the trust between developers and operations. To see how to get started, read the section below on creating a static website using Docker.

What environments should be created?

There are three areas that need to be covered: Development, Testing, and Availability. This is best represented using five environments.

Development

When developers make changes locally, they should test locally. This is a typical process that should happen multiple times per day. With a development environment, however, changes can also be deployed to a server for development testing. This could happen nightly or, better yet, each time a pull request is submitted; a pull request should be tested before it is approved.
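In Azure Pipelines, for example, a pull request build can be expressed with a pr trigger in the pipeline YAML. A minimal sketch (branch names are hypothetical; Azure Repos configures this through branch policies instead of the pr section):

pr:
  branches:
    include:
      - main
      - develop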

QA

When a pull request is approved, it will likely not continue directly through to the QA environment. There should be a gate controlled by the QA (Quality Assurance) team. If they are ready, they can open the gate allowing deployment to the QA environment. This is where testers will dig into the site manually and run their own automated tests to ensure the system is ready for user acceptance testing.

UAT

UAT (User Acceptance Testing) is a testing phase that includes real users using the system in real-world scenarios. During this phase, the users will need an environment of their own. When the changes are approved, the system is deployed to the Staging environment.

UAT is often combined with either QA or Staging environments. In this article, we separate them. Learn more about the Staging environment next.

Staging

The Staging environment is where the last preparations are made for a move to production. Final checks, and double-checks, are made here. With certain deployment setups (sometimes called blue-green deployments), this environment actually becomes the new production environment: flipping the switch promotes Staging to Production, and the old production environment becomes the new staging environment.
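With Azure App Service, for example, this switch can be performed with deployment slots: swapping the staging slot into production is a single operation. A minimal sketch using the Azure CLI (resource group and app names are hypothetical):

> az webapp deployment slot swap --resource-group myapp-rg --name myapp --slot staging --target-slot production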

Production

When the system is in production, we are past the point of no return. The code is “in the wild” and real users are using it in real situations that have real impacts. In some deployment setups, this may be the old staging environment.

It is important that these are distinct environments, meaning each environment runs the correct version of the system and its operations and data are separated from the other environments. For example, we don't want to push a button in Staging and cause code in QA to execute and modify data in UAT. That is an extreme example, but Azure makes it easy to avoid.
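One simple way to keep environments isolated in Azure is to give each one its own resource group, so resources, data, and permissions never overlap. A sketch using the Azure CLI (names and location are hypothetical):

> az group create --name myapp-qa --location eastus
> az group create --name myapp-uat --location eastus
> az group create --name myapp-staging --location eastus
> az group create --name myapp-production --location eastus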

It is also important that each environment (Development, QA, UAT, Staging) is as similar as possible to the production environment. We want our systems tested thoroughly so that users in production receive the business value we invested in the system. “Similar” means machine resources are similar, system distribution is similar, and so on. While each environment may have slightly different code as development progresses, they are otherwise the same. Again, this is easier to accomplish with container technologies such as Docker.
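As a concrete illustration, each environment can run the exact same container image, with only configuration differing between environments. A minimal docker-compose sketch (the registry, image tag, and file paths are hypothetical):

services:
  web:
    # the identical image is deployed to QA, UAT, Staging, and Production
    image: myregistry.azurecr.io/myapp:1.4.2
    # only environment-specific settings change from one environment to the next
    env_file:
      - ./config/${DEPLOY_ENV}.env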

Azure makes it easier to set up and manage these environments, and to create guarded continuous integration and delivery pipelines that allow only safe code to reach production.
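A guarded pipeline of this shape can be sketched in Azure Pipelines YAML. The stage and environment names below are hypothetical, the UAT and Staging stages would follow the same pattern, and approvals are configured on the environments themselves in Azure DevOps:

trigger:
  - main

stages:
  - stage: Build
    jobs:
      - job: BuildImage
        pool:
          vmImage: ubuntu-latest
        steps:
          - script: docker build -t myapp:$(Build.BuildId) .
            displayName: Build the container image

  - stage: QA
    dependsOn: Build
    jobs:
      - deployment: DeployQA
        pool:
          vmImage: ubuntu-latest
        environment: myapp-qa
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploy myapp:$(Build.BuildId) to the QA environment"

  - stage: Production
    dependsOn: QA
    jobs:
      - deployment: DeployProduction
        pool:
          vmImage: ubuntu-latest
        environment: myapp-production
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploy myapp:$(Build.BuildId) to the Production environment"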

Two Simple Examples of Docker Support in Visual Studio 2019

How to leverage Docker support in Visual Studio 2019 to run and debug your ASP.NET Core applications.

Running applications in containers continues to be an important part of enterprise application development. This article shows how to take advantage of the built-in support for Docker in Visual Studio 2019.

First, create a new ASP.NET Core project in Visual Studio 2019. Docker support can be included when creating the project or added later. I’ve opted to add it later in this example.

Docker Support

I have previously shown how to run a static website using Docker and how to set up a Docker container for Nx Workspace applications. Docker can also be used to run an ASP.NET Core application, and Visual Studio 2019 makes it easy.

Adding Docker support using Visual Studio 2019 is most seamless when the default setup can be used. To add Docker support to an existing project, right-click the project, hover over “Add”, and choose “Docker Support…”

Adding Docker support to an existing project

After selecting “Docker Support…” a dialog will appear that allows choosing a target operating system, either Windows or Linux:

Choose Target OS

I have selected Windows for this example. A Dockerfile will be generated automatically and added to the selected project. For this example, the generated Dockerfile looks like this:

#Depending on the operating system of the host machines(s) that will build or run the containers, the image specified in the FROM statement may need to be changed.
#For more information, please see https://aka.ms/containercompat

FROM mcr.microsoft.com/dotnet/core/aspnet:2.2-nanoserver-1809 AS base
WORKDIR /app
EXPOSE 80

FROM mcr.microsoft.com/dotnet/core/sdk:2.2-nanoserver-1809 AS build
WORKDIR /src
COPY ["Hub/Hub.csproj", "Hub/"]
RUN dotnet restore "Hub/Hub.csproj"
COPY . .
WORKDIR "/src/Hub"
RUN dotnet build "Hub.csproj" -c Release -o /app

FROM build AS publish
RUN dotnet publish "Hub.csproj" -c Release -o /app

FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "Hub.dll"]

There is now an option to start and debug the app using Docker:

Start and debug using Docker
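Visual Studio drives this through the generated Dockerfile. Roughly the same thing can be done from a command line at the solution root with the Docker CLI; a minimal sketch, assuming the project folder is named Hub (image and container names are arbitrary, and Visual Studio adds its own debugging optimizations on top of this):

> docker build -f Hub/Dockerfile -t hub:dev .
> docker run --name hub-dev -d -p 8080:80 hub:dev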

Wait, there’s more.

Container Orchestrator Support

Adding container orchestrator support is just as simple as adding Docker support. This support allows running and debugging multiple containerized applications.

First, right-click the project, hover over “Add”, and choose “Container Orchestrator Support…”

Add container orchestrator support to an existing project

After selecting “Container Orchestrator Support…” a dialog will appear that allows choosing a container orchestrator: Kubernetes/Helm, Service Fabric, or Docker Compose:

Choose Container orchestrator

I have selected Docker Compose for this example. A new project named “docker-compose” will be added to the solution containing three files:

.dockerignore

The .dockerignore file is used by the docker CLI to exclude files and directories from the build context.

Generated .dockerignore file
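The exact content depends on the Visual Studio version, but the generated file typically excludes entries along these lines (a sketch, not the literal generated file):

**/.dockerignore
**/.git
**/.vs
**/bin
**/obj
**/docker-compose*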

docker-compose.yml

The docker-compose.yml file specifies a service containing the details of the application that was selected when adding container orchestrator support. In this example, it uses the details of the Hub application:

Generated docker-compose.yml file
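The generated file typically resembles the following sketch, assuming the project is named Hub (the exact content varies by Visual Studio version):

version: '3.4'

services:
  hub:
    image: ${DOCKER_REGISTRY-}hub
    build:
      context: .
      dockerfile: Hub/Dockerfile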

docker-compose.override.yml

The docker-compose.override file contains additional details regarding the services specified in the docker-compose.yml file. In this example, it contains the ASPNETCORE_ENVIRONMENT environment variable set to Development and specifies the port as 80. It also specifies a network the container will use for communication.

Generated docker-compose.override.yml file
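Again, a rough sketch of the generated content rather than the literal file:

version: '3.4'

services:
  hub:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    ports:
      - "80"
    networks:
      - default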

After adding container orchestrator support, a “Docker Compose” option will be added to allow running and debugging the application using Docker Compose.

Using docker-compose, it is also possible to specify an external IP address for the application. This IP address would be accessible from a browser and other utilities. To specify an IP address, add a couple of lines to the service defined in the docker-compose.override.yml file:

services:
   hub:
     ...
     networks:
       default:
         ipv4_address: 172.25.159.13

The networks section specifies which network to update; since “default” is the name of the network specified, it is the one modified. The “ipv4_address” value is assigned, which means this container will be accessible from a browser by navigating to 172.25.159.13.
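Note that for a fixed ipv4_address to take effect, the network itself generally needs an explicitly defined subnet that contains the address. A sketch of the corresponding top-level networks section in docker-compose.override.yml (the subnet value is illustrative):

networks:
  default:
    ipam:
      config:
        - subnet: 172.25.159.0/24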

Docker and container orchestrator support in Visual Studio 2019 are two options that provide some exciting opportunities, which I will show in more detail in a later article.

A Powerful Docker Container for an Nx Workspace Application

Discover how to easily create a Docker container for an Nx Workspace application with this step-by-step guide to building a site that can be deployed in seconds with Docker.

In a previous post, I briefly described the Nx Workspace and how to create Angular applications and libraries with Nrwl Extensions. I wanted the ability to run a production build of the app in Docker for Windows, so here is one way of accomplishing that. With the Nx Workspace already set up, only a few more files had to be added. This article assumes an Nx Workspace exists with an app named “client-demo” and follows an approach similar to creating a static website using Docker.

NGINX

nginx is used here instead of a nanoserver image because of its size (roughly 16 MB compared to more than 1 GB), and it requires an nginx.conf file. Place the file at the root of the Nx Workspace (the same level as the angular.json file):

// nginx.conf

worker_processes 1;

events {
  worker_connections 1024;
}

http {
  server {
    listen 80;
    server_name localhost;

    root /usr/share/nginx/html;
    index index.html index.htm;
    include /etc/nginx/mime.types;

    gzip on;
    gzip_min_length 1000;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/css application/javascript;

    location / {
      try_files $uri $uri/ /index.html;
    }
  }
}

Dockerfile

It is now time for the Dockerfile. This file acts as a definition for a Docker image. Because the scripts added later reference it by name, this example names the file Dockerfile.client-demo. Place it at the same level as the nginx.conf file:

// Dockerfile.client-demo

FROM nginx:alpine
COPY nginx.conf /etc/nginx/nginx.conf
WORKDIR /usr/share/nginx/html
COPY dist/apps/client-demo .

Docker Compose

The Dockerfile is created. To use Docker Compose, create a compose file at the same level as the Dockerfile. To match the scripts added later, it is named docker-compose.client-demo.yml:

// docker-compose.client-demo.yml

version: '3.1'

services:
  app:
    image: 'client-demo-app'
    build:
      context: .
      dockerfile: Dockerfile.client-demo
    ports:
      - 3000:80

Docker Ignore

When creating a Docker image, not every file is needed. In this case, only the dist/ folder is really needed. Using a .dockerignore file can help keep unneeded files and directories out of the build context. Place this file at the same level as the Dockerfile:

// .dockerignore

node_modules
.git
libs
tools
apps

Package.json

To leverage the files that have been created, scripts can be added to the package.json file. This file should already exist within the Nx Workspace. Simply add the following scripts:

// package.json

...
"scripts": {
...
"client-demo-build": "ng build client-demo --prod",
"client-demo-image": "docker image build -f Dockerfile.client-demo -t client-demo-app .",
"client-demo-run": "docker-compose -f docker-compose.client-demo.yml up",
"client-demo-stop": "docker-compose -f docker-compose.client-demo.yml down",
"client-demo": "yarn client-demo-build && yarn client-demo-image && yarn client-demo-run"
},
...

Each of these scripts can be run with npm run <script> or yarn <script>.

client-demo-build: This script runs ng build with the --prod flag to create a production build of the Angular app.

client-demo-image: This script builds the client-demo-app image given a specific Dockerfile named Dockerfile.client-demo.

client-demo-run: This script uses docker-compose to run the app with docker-compose up. The ‘-f’ flag specifies the file to use, docker-compose.client-demo.yml.

client-demo-stop: This script acts as the opposite of docker-compose up. As long as this script runs after the client-demo-run script, the app can be started and stopped any number of times.

client-demo: This script simply chains the execution of other scripts to create the prod build of the Angular app, create the Docker image, and serve the app. As it is written, yarn is required.

After creating the Nx Workspace, creating the Docker support files, adding the scripts to package.json, and running npm run client-demo or yarn client-demo, the app can be accessed from a browser at http://localhost:3000.

Default Nx Workspace application

Run npm run client-demo-stop or yarn client-demo-stop to stop the app.

How to Easily Create a Static Website With Docker

Discover how to easily create a static website with Docker that can be viewed from a browser

The goal of this article is to describe a process for serving static web files from a Docker container. It is surprisingly easy to create a static website with Docker.

The website structure is very simple and consists of only 3 files:

./site/
  style.css
  app.js
  index.html

At the project root there is a Dockerfile:

./
  Dockerfile

The website displays “Loading” text. When the JavaScript file is loaded, Hello World is displayed in big red letters:

Hello World “Loading” view

Here is the HTML:

<html>
  <head>
    <title>Sample Website</title>
    <script src="app.js"></script>
    <link href="style.css" rel="stylesheet" />
  </head>
  <body>Loading</body>
</html> 
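The contents of app.js and style.css are not shown here; a minimal, purely illustrative sketch of what they might contain to produce the behavior described above:

// app.js
window.addEventListener('DOMContentLoaded', function () {
  // replace the "Loading" text with the greeting
  document.body.textContent = 'Hello World';
});

/* style.css */
body {
  color: red;
  font-size: 3em;
}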

Here is the Dockerfile:

FROM nanoserver/iis
COPY ./site/ /inetpub/wwwroot/ 

The lines in the Dockerfile are key to getting the web server image created. The file is used to build a new Docker image, and that image is used to run a Docker container.

The first line specifies the base image. In this case, it is a Nano Server image configured with IIS. There are smaller web server images that are usually preferable.

The second line will copy the local project files from the ‘site’ folder to the wwwroot folder of the nanoserver image.

That is everything needed to get a web server started to serve the web page. To create the image, start with docker build:

> docker build -t webserver-image:v1 .

The docker build command is used to create an image. When it is executed from a command line within the directory of a Dockerfile, the file will be used to create the image. The -t option names and optionally tags the image; in this case, the name is “webserver-image” with the “v1” tag. Tags are generally used to version images. The last argument is the path used as the build context; here it is ., the current directory.

Running the command will build the image:

> docker build -t webserver-image:v1 .
Sending build context to Docker daemon 26.11kB
Step 1/2 : FROM nanoserver/iis
---> 7eac2eab1a5c
Step 2/2 : COPY ./site/ /inetpub/wwwroot/
---> fca4962e8674
Successfully built fca4962e8674
Successfully tagged webserver-image:v1

The build succeeded. This can be verified by running docker image ls:

> docker image ls
REPOSITORY      TAG IMAGE ID     CREATED       SIZE
webserver-image v1  ffd9f77d44b7 3 seconds ago 1.29GB

If the build doesn’t succeed, there may be a few things to double-check. This includes making sure the Dockerfile is available, nanoserver images can be pulled, and paths are accurate.

Now that an image is created, it can be used to create a container. This can be done with the docker run command:

> docker run --name web-dev -d -it -p 80:80 webserver-image:v1

After running the command, the container id will be displayed:

> docker run --name web-dev -d -it -p 80:80 webserver-image:v1
fde46cdc36fabba3aef8cb3b91856dbd554ff22d63748d486b8eed68a9a3b370

A Docker container was created successfully. This can be verified by executing docker container ls:

> docker container ls
CONTAINER ID   IMAGE                COMMAND                    CREATED          STATUS          PORTS                NAMES
fde46cdc36fa   webserver-image:v1   "c:\\windows\\system32…"   31 seconds ago   Up 25 seconds   0.0.0.0:80->80/tcp   web-dev

The container id is displayed (a shorter version of what was shown when executing docker run). The image that was used for the container is also displayed along with when it was created, the status, port, and the container name.

The following docker inspect command will display the IP address:

> docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" web-dev
172.19.112.171

This IP address can be entered in a browser to view the page:

Hello World “Loading” view

There is now a working container that serves the web page!

I learn by doing, and I’ve found that most of us in tech do. That is why I got Manning Publications’ Docker in Action: to learn Docker through step-by-step instructions and immediately actionable information that applies to enterprise-level projects.

Their “In Action” series takes the reader on an active journey by way of doing. After learning the details of using Docker to release enterprise-level software, I wanted to be sure I understood the concepts and practices behind the delivery. Manning Publications has another book called Docker in Practice, and their “In Practice” series dives deep into the concepts behind the technology. Together, Docker in Action and Docker in Practice create a well-rounded course in leveraging Docker effectively.