A Simple Time Management Alternative With Trello

Learn how to get things done with a powerful time management alternative

We have all felt the elusiveness of time. It is hard to find the time to get things done – especially when you have obligations tied to your financial well-being. People will tell you, “you have to make the time.” What they don’t tell you is what you have to give up to do that. It doesn’t have to be this way with one simple time management alternative.

The Time Management Alternative Structure

People have always made lists to help remember things – milk at the grocery store, cleaning the gutters, taking out the trash, how to make that shrimp linguine everyone loved last week. But with the growing amount of life-hackery necessary to manage the growing demands on our time, organization has become a key ingredient in getting things done.

With a little more structure, lists can do more than help keep your refrigerator stocked. Free tools like Trello can help: Trello super-charges lists and offers a way to keep your to-dos organized. By leveraging the free version of Trello, you can keep yourself organized and on track to accomplish everything you need or want.

Four features of Trello make this possible, and all of them are offered for free.

Cards

Cards in Trello are your to-do items. Each item is represented by a card. Cards allow elaborate descriptions so you can write exactly what needs to be done. They also allow checklists to break down your tasks even further. While upgrading your plan will allow special Power-Ups that give cards even more power, I’ll focus on the free version for this article.

Lists

Lists in Trello are simply collections of Cards. Each list can be named, archived, and rearranged on a Board.

Boards

Boards in Trello are collections of Lists. The free version of Trello allows you to set background colors and images, add as many lists per board as you need, and have multiple boards.

Teams

Teams in Trello are a way to organize Boards. The name may not fit how you use it; I find it helpful to think of a team as a category for Board collections. The free version of Trello allows ten boards for each team. I find it helpful to create separate teams for my main priorities. For example, I have a team for managing this site. I also have teams to help manage my life at home.

These organizational structures built into Trello provide a lot of potential for managing a growing to-do list. Leveraging these features effectively is important to make the most of them. Below is a handy template I use for accomplishing goals with the help of Trello – without needing calendars, reminders, or alarms.

Leveraging the Time Management Alternative

Your goals can be achieved with help from the power of Trello. The following sections describe how I have done that and how you can too. I will start with Lists and the kinds of Cards they would include, then move on to Teams and the kinds of Boards they would include.

Brainstorm List within the time management alternative

This list contains thoughts and ideas. Each card is an item in a brainstorm. The list is a space for creative experimentation. The cards that come out of this list are then further categorized as Undecided, Not Doing, or Backlog. This is one of the most important lists – it is where all future activity begins.

Undecided List

When items come out of a brainstorm that you are just not quite sure of, they go here to be considered later. The cards in this list are in a Trello-fueled limbo: they may be completed later, or you may later decide they will not be done at all. The idea has been captured; we’ll decide later what to do with it.

Not Doing List

Items in this list are most likely not going to be done in the future. Each card came from the Brainstorm or Undecided list and was deemed not worth pursuing further. This list preserves your ideas and offers you a chance to reconsider their worth or let them fuel better ideas.

Backlog List

This list contains items that we are expecting to do in the future. When you decide that an item from the brainstorm will be done, it goes here first.

Prioritized List

The cards in this list represent items you have decided to do before the others among your ideas and backlog items. When you complete your current tasks, these are next. The cards in this list can be prioritized too – for example, you could order the list from top to bottom by importance. When a new item moves to In Progress, it would be the card at the top of the list. Often I create a Proposed list placed before the Prioritized list and fill it with backlog items as I prepare to prioritize them.

In Progress List

The tasks you are currently working on belong on this list. Keeping this list short is important: if everything is in progress, nothing is. Multi-tasking is a lie. I recommend no more than three items at a time.

Complete List

Move your completed tasks to this list. You can track your progress towards your goals and celebrate each achievement along the way. Each card in this list can be reviewed or removed. I often add another list called Review to capture items that are completed but still await further analysis. This is an opportunity for continuous improvement, and I recommend taking advantage of it.

Priority Teams with the time management alternative

Each priority team should represent a significant area of your life that you want to manage with the power of Trello. This could be long-term relationship goals or getting rid of that collection of old dishware. Anything important enough to you deserves its own team.

Each team would have at least two boards. One board is for ideas and contains the idea lists: Brainstorm, Undecided, Not Doing, and Backlog. The other board contains the working lists: Prioritized, In Progress, and Complete.

Notice that throughout this article I haven’t set a date in Trello. While it is possible to add a date to cards, and each card has an Activity log with a timestamp, there is no need to specify a date unless absolutely necessary. Keeping away from deadlines is one advantage of using Trello. I recommend using your best judgment and specifying a date only if it makes sense.

Q. But couldn’t I just ignore my items?

A. Of course. You could ignore your board of items, forget your priorities, and choose not to organize your to-do items. None of those things encourage the completion of your goals.

Q. This is a lot to set up! Is there an alternative?

A. Absolutely. What I described here works well for me. With the free version of Trello, I encourage you to experiment and find what works for you.

Q. What if I reach the Trello maximum number of free teams and boards?

A. You can recycle any of the items in Trello. Teams, Boards, Lists, and Cards can be modified or archived and created anew. As your goals are accomplished, recycle your Trello teams.

Q. Why Trello?

A. It is a free way to get organized and get things done. Other tools exist, but many are less conducive to day-to-day activities, personal flexibility, and budget. After this year’s Black Friday and Cyber Monday shopping, why not try something that’s free?

Q. How do I get Trello?

A. It is quick and easy. Follow the simple instructions provided by Trello to get started!

How to Leverage the Strength of Branch Policies

Create branch policies to tie your branch, pull requests, and build into a powerful automated experience

Branch policies can act as a sort of glue to combine a branch, a build, and pull requests. Many options are available to you when configuring branch policies. First, make sure you require a pull request. Next, you’ll need to create a build in Azure DevOps to leverage when configuring a build policy.

Make sure you have at least one reviewer:

Require a minimum number of reviewers for pull requests

Pull Requests are required and at least one approval is needed to complete them.

A build policy can be added too. Let’s do that. Click the Add build policy button and fill in the form:

Add build policy

Specify the build pipeline, set the trigger to Automatic, and make the build required with an expiration. Give your build policy a name that describes its purpose; in this case, I called it Develop-Build-Policy.

Now, let’s look at other configuration options. One option is Limit merge types. I will choose Squash Merge to help keep my Git history clean. I’ll also add myself as an automatically included code reviewer.

Branch policy setup

With a build policy added, we have an automated build set up to run after a pull request is created. When a feature branch needs to merge to develop, a pull request is required. When a pull request is created, the automated build will run. The pull request cannot be completed (which would cause a merge to the develop branch) until it receives approval from at least one required approver and the automated build succeeds.

Pull Request Policy status

Assuming the in-progress build succeeds, I could approve this pull request which would allow it to complete. After completing the pull request, the code in my feature branch would merge to the develop branch.

How to Create Build Pipelines in Azure DevOps

Create automated builds that trigger on each check-in with Azure Pipelines

DevOps has been a huge advantage in creating enterprise software, but even small projects can benefit from it. This article will describe how to create build pipelines in Azure DevOps.

DevOps is the union of people, process, and technology to continually provide value to customers.

Microsoft, What is DevOps?

Microsoft’s Azure DevOps can make it super simple. It not only includes source code repositories and a superior bug tracker; it also offers powerful pipelines to build and deploy applications. Pipelines can be configured to trigger a build on each code repository check-in. Azure DevOps makes it a simple process to leverage continuous integration, even for small projects.

First, you will need to set up your branch in Azure Repos. Not sure how? Check out the previous post about how to require pull requests in Azure Repos.

Create a New Pipeline

We have Azure Repos set up with a master branch and a develop branch. The develop branch requires a pull request. The next thing we’ll do is create a build.

First, we need to create a new pipeline. Navigating to Pipelines in Azure DevOps, we can see there are no pipelines yet:

New pipeline

Click the New pipeline button to start the process of creating a new pipeline. First is the Connect tab where you need to tell Azure DevOps where your code is located:

Where is your code?

Select Azure Repos Git for this example since our code is in a Git Repository in Azure Repos.

After the connection is set up, the workflow will move to the Select step where you need to select a repository:

Select a repository

This one is easy for me – I only have one set up. Choose the desired repository. The repository should include the code that you wish to build with this pipeline.

After selecting a repository, you must configure your pipeline. There are a lot of templates available. Here are just a few:

Configure your pipeline

I created a React app and will use the npm tool to run npm scripts. After selecting the tool, the YAML file will display. I made changes to it to support running the necessary npm scripts in my React app:

# Build whenever a commit is pushed to the develop branch.
trigger:
- develop

# Use a Microsoft-hosted Ubuntu build agent.
pool:
  vmImage: 'ubuntu-latest'

steps:

# Install the app's npm dependencies.
- task: Npm@1
  inputs:
    command: 'install'
    workingDir: '$(Build.SourcesDirectory)/react-app/'

# Run the custom "build-test-ci" script defined in package.json.
- task: Npm@1
  inputs:
    command: 'custom'
    workingDir: '$(Build.SourcesDirectory)/react-app/'
    customCommand: 'run build-test-ci'

Now click the blue Save and run button in the upper-right. This will display a form:

Save and run a new build

This will add the azure-pipelines.yml file to the develop branch and run the build.

Azure DevOps Pipelines build in progress

Assuming the build succeeds, you have a build that you can leverage in various automation scenarios.
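As one possible extension (my sketch, not part of the original pipeline), a publish step could be appended to the YAML above so later automation can consume the build output. PublishBuildArtifacts@1 is a standard Azure Pipelines task, but the output path shown here assumes a default create-react-app layout and may differ in your project:

# Sketch: publish the compiled app as a pipeline artifact named 'react-app'.
# Assumption: the npm build script writes its output to react-app/build.
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.SourcesDirectory)/react-app/build'
    ArtifactName: 'react-app'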

A Simple Overview of Azure’s Global Infrastructure

View the basics of Azure’s Global Infrastructure including geographies, regions, and availability zones from Microsoft Learning

Azure’s global infrastructure includes geographies, regions, and availability zones – learn more from Microsoft Learning.

Emerging Secrets

A tale of overcoming pride.

It’s three weeks before launch and you have no confidence in the upcoming release. After five years of development, millions of dollars, and countless overtime hours, the system will fail on delivery.

And you’ve known it from the beginning.

It is a disheartening feeling paired with panic. As the system’s development team leader, it is your responsibility to keep things in line, to keep the team moving forward in the right direction, and to anticipate the possibility of failure.

Bah, you think, there’s only so much that I can do.

You keep your head up, hands down, and continue driving the system forward while cycling through the collection of rationalizations you’ve clung to since the problem’s discovery: it won’t be that bad, it’s not so severe, it’s just a bug we’ll resolve later.

Day after day the problem compounds.

Deep within the tangled web of daily activities, you lose sight of the impending problem. You are distracted by bugs in a non-critical component. The time spent on integration issues with the enterprise resource management software creates an illusion that the other integrations are working correctly.

Now judgment day is here. The systems are being deployed to production. You click the button to run the scripts that build the packages to send to the production environments. Finally, the puzzle pieces are put together to be leveraged by thousands of customers.

Aside from hiccups caused by the lack of training, the customers don’t run into any bugs. Deployment successful. You and your team rejoice.

The next week the systems you’ve slaved over are at a critical point: payday. Sales input data is validated and correct. The sales systems are communicating properly. You track the sales data moving from one system to another, checking the output analysis, a sea of green.

It is 2 A.M. the following Sunday morning when your phone rings. The customers didn’t get paid.

“Fix it!” A wholehearted response, valid to its core, forms the mantra driving the team.

You run the series of validations against the sales data. Green. You have your cohorts comb through system logs. No issues found. You then begin manual validation, double-checking each calculation.

The hairs on the back of your neck stand on end. Goosebumps spread across your body in waves. You fall into a cold sweat. The validations are flawed, causing a series of false positives. It’s the issue you never recorded, the bug you deferred and never fixed.

“How was this not found earlier?” The dreaded question is laid out, plain and clear. It must be addressed.

The truth is, you did know about it. To save face, you lied to yourself and, worst of all, the team. As the project progressed, you manufactured a version of reality more acceptable to your pride. Now the truth comes out as it always does.

The company you represent faces enormous challenges. Customers need to be paid manually. The system needs to be repaired. And you can no longer be trusted.

Sitting in front of your home desktop computer, customized to be quiet, fast, and to emit an appealing array of lights across your wall, you lose yourself in the thought that maybe you won’t find a new job in your field.

You unconsciously hold your index finger on the down-arrow key as the list of job openings scrolls past your eyes. You see none of them.


Your eyes snap open and you suck in a deep breath. You sit up from your bed, soaked in sweat. You check the clock: 3 A.M. You realize it was just a nightmare.

You know what you have to do.

When you arrive at work early that morning you set up a meeting with the project’s leadership. With system design underway, there is no time to lose. The issue is not something to put off until later. It must be recorded and addressed according to its assigned priority.

The team gathers in the meeting room, offering polite greetings as they enter. When everyone sits down, laptops closed, eager to hear what the meeting is about, you say, “There is a flaw in the third-party validation system we plan to use for validating sales data calculations. We may need to find an alternative.”

Require Pull Requests in Azure Repos

Pull requests offer another layer of defense against poor quality code. Enforcing them with Azure Repos is easy. Here’s how.

Git Repository

First, set up a Git repository in Azure DevOps. The user interface is generally straightforward but Microsoft describes how to create a new Git repo if one has not already been created for you.

Cloning the repository takes the usual steps – using the command line or GUI tools in Visual Studio or VS Code.

Now you are ready for your first commit.

A default branch called master is available with Azure Repos. You’ll want at least one more; you will push code to the new branch during development.

To create the branch, navigate to Branches in Azure DevOps and select the New branch button.

A form will appear where further information about the branch can be specified:

  1. You’ll be able to name it. I would call it dev or develop – something that describes the nature of the branch and its purpose without being too long.
  2. It needs a Based on branch – the contents of that branch will be copied to your new branch.
  3. You may have the opportunity to link a work item. This is optional but worth it for tracking purposes.

Pull Requests

Don’t push code without it being reviewed. Pull Requests are the next line of defense before code enters a branch.

This is simple to set up in Azure DevOps. From the branches list in Azure Repos, click the ellipses next to the desired branch (in this case, develop) and select Branch Policies.

You will be taken to a screen with a lot of options. There is one checkbox to require pull requests.

From now on code will be entering the branch through a pull request.

Before writing code, create a feature branch off develop. Perform commits and push your feature branch. When you are ready to add your code to develop, create a new pull request and choose your feature branch. Make sure you create a pull request for the develop branch.

To do that, make sure the develop branch is selected in Azure Repos. Your new feature branch should display with a note that it’s available to add to develop with a pull request.

Simply click the Create a pull request link to start the process. Any automated build can run if a Build Policy is set for the develop branch. We’ll discuss that in a separate article.

On Multiple Deploy Environments

Why deploying to multiple environments is a must for enterprise software systems, and how Docker deployment environments in Azure can help

Azure makes it easy to create multiple deployment environments. Each environment should be as close to the production environment as possible – something Azure can help with too.

The pairing of developers and operations (DevOps) is key to success. Doing what is necessary to create this pairing is largely a cultural concern – if folks won’t get along, the pairing will be less than fruitful. Also, if an organization doesn’t encourage cooperation, there’s no hope.

Culture isn’t the only battle. Finding the balance between responsibilities can be a challenge for many organizations just starting to apply DevOps principles. One straightforward way to look at it is: operations is an end-user of a developer’s deliverable. This means, while operations need to do their part in setting up a viable environment, developers need to be able to deliver a system that will work with that environment.

A great way to do this is by testing. It sounds simple enough, but execution can be challenging. Developer machines are rarely equivalent to the environments a system is deployed to, and environments often differ between testing and production.

How can this be overcome?

Creating multiple deploy environments is key. This means getting developers and operations in sync as early as possible. Using deployable containers such as those available with Docker can help reduce the differences between environments to practically zero. Creating deployment environments can further strengthen the trust between developers and operations. Read more about how you can get started by creating a static website using Docker.

What environments should be created?

There are three areas that need to be covered: Development, Testing, and Availability. This is best represented using five environments.

Development

When developers make their changes locally, they should be testing locally. This is a typical process that should happen multiple times per day. With a development environment, however, changes can also be deployed to a server for development testing – nightly or, better yet, each time a pull request is submitted. Testing a pull request is important before it can be approved.

QA

When a pull request is approved, it should not continue directly through to the QA environment. There should be a gate controlled by the QA (Quality Assurance) team. When they are ready, they can open the gate, allowing deployment to the QA environment. This is where testers dig into the site manually and run their own automated tests to ensure the system is ready for user acceptance testing.

UAT

UAT (User Acceptance Testing) is a testing phase that includes real users using the system in real-world scenarios. During this phase, the users will need an environment of their own. When the changes are approved, the system is deployed to the Staging environment.

UAT is often combined with either QA or Staging environments. In this article, we separate them. Learn more about the Staging environment next.

Staging

The Staging environment is where the last preparations are made for a move to production. Final checks – and double-checks – are made here. With certain deployment setups, this environment is actually the new production environment. Flipping the switch would make this environment the new production environment, and the old production environment would then be the new staging environment.

Production

When the system is in production we are far past the point of no return. The code is “in the wild” and real users are using it in real situations that have real impacts. In some deployment setups, this may be the old staging environment.

It is important that these are distinct environments, meaning each environment has the correct version of the system and that operations and data are separated from the other environments. For example, we don’t want pushing a button in Staging to cause code in QA to execute and modify data in UAT. This is an extreme example, and Azure makes it easy to avoid.

It is also important that each environment (Development, QA, UAT, Staging) is as similar as possible to the production environment. We want our systems to be tested thoroughly to be sure users in production receive as much of the business value as we invested in the system. “Similar” means machine resources are similar, system distribution is similar, etc. While each environment may have slightly different code as development progresses, they are otherwise the same. Again, this is easier to accomplish with container technologies such as Docker.

Azure makes it easy to set up and manage these environments. Create guarded continuous integration pipelines that allow only safe code to enter production, as sketched below.
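As a rough illustration of how this can look in Azure Pipelines (my sketch, with placeholder stage, environment, and step names), a multi-stage YAML pipeline can model the flow from one environment to the next, with approvals configured on each environment acting as the gates described above:

# Sketch of a multi-stage pipeline. UAT, Staging, and Production stages
# would follow the same pattern as the two shown here.
trigger:
- develop

stages:
- stage: Development
  jobs:
  - deployment: DeployDev
    environment: 'development'
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploy the container to the development environment"

- stage: QA
  dependsOn: Development   # runs only after the Development stage succeeds
  jobs:
  - deployment: DeployQA
    environment: 'qa'      # approvals on this environment act as the QA gate
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploy the container to the QA environment"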

Two Simple Examples of Docker Support in Visual Studio 2019

How to leverage Docker support in Visual Studio 2019 to run and debug your ASP.Net Core applications.

Running applications in containers continues to be an important part of enterprise application development. This article will show how to take advantage of the built-in support for Docker in Visual Studio 2019.

First, create a new ASP.Net Core project in Visual Studio 2019. Docker support can be included when creating the project or it can be added later. I’ve opted to add it later in this example.

Docker Support

I have previously shown how to run a static website using Docker and how to set up a Docker container for Nx Workspace applications. Docker can also be used to run an ASP.Net Core application and Visual Studio 2019 makes it easy.

Adding Docker support using Visual Studio 2019 is more seamless if the default setup can be used. To add Docker support to an existing project, right-click the project, hover/select “Add” and choose “Docker Support…”

Adding Docker support to an existing project

After selecting “Docker Support…” a dialog will appear to allow choosing a target operating system, Windows or Linux:

Choose Target OS

I have selected Windows for this example. A Dockerfile will be generated automatically and added to the selected project. For this example, the generated Dockerfile looks like this:

#Depending on the operating system of the host machine(s) that will build or run the containers, the image specified in the FROM statement may need to be changed.
#For more information, please see https://aka.ms/containercompat

FROM mcr.microsoft.com/dotnet/core/aspnet:2.2-nanoserver-1809 AS base
WORKDIR /app
EXPOSE 80

FROM mcr.microsoft.com/dotnet/core/sdk:2.2-nanoserver-1809 AS build
WORKDIR /src
COPY ["Hub/Hub.csproj", "Hub/"]
RUN dotnet restore "Hub/Hub.csproj"
COPY . .
WORKDIR "/src/Hub"
RUN dotnet build "Hub.csproj" -c Release -o /app

FROM build AS publish
RUN dotnet publish "Hub.csproj" -c Release -o /app

FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "Hub.dll"]

There is now an option to start and debug the app using Docker:

Start and debug using Docker

Wait, there’s more.

Container Orchestrator Support

Adding container orchestrator support is just as simple as adding Docker support. This support allows running and debugging multiple containerized applications.

First, right-click the project, hover/select “Add” and choose “Container Orchestrator Support…”

Add container orchestrator support to an existing project

After selecting “Container Orchestrator Support…” a dialog will appear to allow choosing a container orchestrator: Kubernetes/Helm, Service Fabric, or Docker Compose:

Choose Container orchestrator

I have selected Docker Compose for this example. A new project named “docker-compose” will be added to the solution containing three files:

.dockerignore

The .dockerignore file is used by the docker CLI to exclude files and directories from the build context.

Generated .dockerignore file
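The generated file typically contains entries along these lines (a representative sketch; the exact list varies by Visual Studio version):

**/.dockerignore
**/.env
**/.git
**/.vs
**/.vscode
**/bin
**/obj
**/docker-compose*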

docker-compose.yml

The docker-compose file will specify a service containing the details of the application used when adding container orchestrator support. In this example, it uses the details of the Hub application:

Generated docker-compose.yml file
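For reference, the generated file typically looks something like this for a project named Hub (a representative sketch; details vary by project and Visual Studio version):

version: '3.4'

services:
  hub:
    # Visual Studio substitutes your container registry here when one is set.
    image: ${DOCKER_REGISTRY-}hub
    build:
      context: .
      dockerfile: Hub/Dockerfile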

docker-compose.override.yml

The docker-compose.override file contains additional details regarding the services specified in the docker-compose.yml file. In this example, it sets the ASPNETCORE_ENVIRONMENT environment variable to Development and specifies port 80. It also specifies a network the container will use for communication.

Generated docker-compose.override.yml file
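Again for reference, here is a representative sketch of the generated override file when targeting Windows containers (your generated file may differ):

version: '3.4'

services:
  hub:
    environment:
      # Run ASP.Net Core in the Development environment inside the container.
      - ASPNETCORE_ENVIRONMENT=Development
    ports:
      - "80"

networks:
  default:
    # Windows containers attach to the pre-existing 'nat' network.
    external:
      name: nat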

After adding container orchestrator support, a “Docker Compose” option will be added to allow running and debugging the application using Docker Compose.

Using docker-compose, it is also possible to specify an external IP address for the application. This IP address would be accessible by a browser and other utilities. To specify an IP address, simply add a couple of lines to the service specified in the docker-compose.override.yml file:

services:
  hub:
    ...
    networks:
      default:
        ipv4_address: 172.25.159.13

The networks section specifies which network to update. Since “default” is the name of the network specified, it is the one modified. The “ipv4_address” value is assigned, which means this container will be accessible from a browser by navigating to 172.25.159.13.
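One caveat worth noting – an assumption on my part rather than something from the generated files: Docker generally honors a fixed ipv4_address only when the network defines a subnet containing that address. If the default network is not the pre-existing Windows nat network, it can be defined with an ipam block at the bottom of docker-compose.override.yml:

# Assumption: we define the default network ourselves so the fixed address
# above is valid. The subnet must contain the chosen ipv4_address.
networks:
  default:
    ipam:
      config:
        - subnet: 172.25.159.0/24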

Docker and container orchestrator support in Visual Studio 2019 are two options that provide some exciting opportunities that I will be showing in more detail in a later article.

Creating ASP.NET Core Startup Tasks

How to create ASP.NET Core startup tasks

Reference: https://andrewlock.net/reducing-latency-by-pre-building-singletons-in-asp-net-core/

The link above walks through a useful alternative to the other app-startup features in ASP.NET Core. Andrew Lock also provides descriptions of those other features.

It is useful when operations need to be performed only once, before the application runs via IWebHost.Run().

A Powerful Docker Container for an Nx Workspace Application

Discover how to easily create a Docker container for an Nx Workspace application with this step-by-step guide to creating a powerful site deployable in seconds with Docker

In a previous post, I briefly described the Nx Workspace and how to create Angular applications and libraries with Nrwl Extensions. I wanted the ability to run a prod build of the app in Docker for Windows, so here is just one way of accomplishing that. With the Nx Workspace already set up, I had to add just a few more files. This article assumes an Nx Workspace exists with an app named “client-demo”. It follows a similar approach to creating a static website using Docker and describes how to create a simple Docker container for an Nx Workspace application.

NGINX

Using nginx instead of a nanoserver image due to size (~16 MB compared to 1+ GB), an nginx.conf file is needed. Place the file at the root of the Nx Workspace (the same level as the angular.json file):

# nginx.conf

worker_processes 1;

events {
  worker_connections 1024;
}

http {
  server {
    listen 80;
    server_name localhost;

    root /usr/share/nginx/html;
    index index.html index.htm;
    include /etc/nginx/mime.types;

    # Compress common text assets before sending them to the browser.
    gzip on;
    gzip_min_length 1000;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/css application/javascript;

    # Route all paths to index.html so Angular's router can handle them.
    location / {
      try_files $uri $uri/ /index.html;
    }
  }
}

Dockerfile

It is now time for the Dockerfile. This file acts as a sort of definition file for a Docker image. Place it at the same level as the nginx.conf file and name it Dockerfile.client-demo to match the scripts added later:

# Dockerfile.client-demo

# Small Alpine-based NGINX image (~16 MB).
FROM nginx:alpine
COPY nginx.conf /etc/nginx/nginx.conf
WORKDIR /usr/share/nginx/html
# Copy the prod build output of the client-demo app into the web root.
COPY dist/apps/client-demo .

Docker Compose

The Dockerfile is created. To use Docker Compose, create a docker-compose.client-demo.yml file at the same level as the Dockerfile (again, the name matches the scripts added later):

# docker-compose.client-demo.yml

version: '3.1'

services:
  app:
    # Reuse the image built by the client-demo-image script below.
    image: 'client-demo-app'
    build: '.'
    ports:
      # Map host port 3000 to the container's port 80 (NGINX).
      - 3000:80

Docker Ignore

When creating a Docker image, not every file is needed. In this case, only the dist/ folder is really needed. Using a .dockerignore file can help keep files and directories out of the build context. Place this file at the same level as the Dockerfile:

# .dockerignore

node_modules
.git
libs
tools
apps

Package.json

To leverage the files that have been created, scripts can be added to the package.json file. This file should already exist within the Nx Workspace. Simply add the following scripts:

// package.json

...
"scripts": {
  ...
  "client-demo-build": "ng build client-demo --prod",
  "client-demo-image": "docker image build -f Dockerfile.client-demo -t client-demo-app .",
  "client-demo-run": "docker-compose -f docker-compose.client-demo.yml up",
  "client-demo-stop": "docker-compose -f docker-compose.client-demo.yml down",
  "client-demo": "yarn client-demo-build && yarn client-demo-image && yarn client-demo-run"
},
...

Each of these scripts can run with npm run <script> or yarn <script>.

client-demo-build: This script runs ng build with the --prod flag to create a prod build of the Angular app.

client-demo-image: This script builds the client-demo-app image given a specific Dockerfile named Dockerfile.client-demo.

client-demo-run: This script uses docker-compose to run the app with docker-compose up. A specific file is specified with the ‘-f’ flag named docker-compose.client-demo.yml.

client-demo-stop: This script acts as the opposite of docker-compose up. As long as this script runs after the client-demo-run script, the app can be started and stopped any number of times.

client-demo: This script simply chains the execution of other scripts to create the prod build of the Angular app, create the Docker image, and serve the app. As it is written, yarn is required.

After creating the Nx Workspace, creating the Docker support files, and adding the scripts to package.json, run npm run client-demo or yarn client-demo and access the app from a browser at http://localhost:3000.

Default Nx Workspace application

Run npm run client-demo-stop or yarn client-demo-stop to stop the app.