Latest Posts

How to Create Build Pipelines in Azure DevOps

DevOps has been a huge advantage in creating enterprise software, but even small projects can benefit from it. This article will describe how to create build pipelines in Azure DevOps.

DevOps is the union of people, process, and technology to continually provide value to customers.

Microsoft, What is DevOps?

Microsoft’s Azure DevOps can make it super simple. It not only includes source code repositories and a superior bug tracker; it also provides powerful pipelines to build and deploy applications. Pipelines can be configured to trigger a build on each code repository check-in, which makes continuous integration easy to leverage even for small projects.

First, you will need to set up your branch in Azure Repos. Not sure how? Check out the previous post about how to require pull requests in Azure Repos.

Create a New Pipeline

We have Azure Repos set up with a master branch and a develop branch. The develop branch requires a pull request. The next thing we’ll do is create a build.

First, we need to create a new pipeline. Navigating to Pipelines in Azure DevOps, we can see there are no pipelines yet:

New pipeline

Click the New pipeline button to start the process of creating a new pipeline. First is the Connect tab where you need to tell Azure DevOps where your code is located:

Where is your code?

Select Azure Repos Git for this example since our code is in a Git Repository in Azure Repos.

After the connection is set up, the workflow will move to the Select step where you need to select a repository:

Select a repository

This one is easy for me – I only have one set up. Choose the desired repository. The repository should include the code that you wish to build with this pipeline.

After selecting a repository, you must configure your pipeline. There are a lot of templates available. Here are just a few:

Configure your pipeline

I created a React app and will use the npm tool to run npm scripts. After selecting the tool, the YAML file will display. I made changes to it to support running the necessary npm scripts in my React app:

# Run this pipeline whenever commits are pushed to the develop branch
trigger:
- develop

pool:
  vmImage: 'ubuntu-latest'

steps:

# Install the dependencies listed in package.json
- task: Npm@1
  inputs:
    command: 'install'
    workingDir: '$(Build.SourcesDirectory)/react-app/'

# Run the custom "build-test-ci" npm script defined in package.json
- task: Npm@1
  inputs:
    command: 'custom'
    workingDir: '$(Build.SourcesDirectory)/react-app/'
    customCommand: 'run build-test-ci'

Now click the blue Save and run button in the upper right. This will display a form:

Save and run a new build

This will add the azure-pipelines.yml file to the develop branch and run the build.

Azure DevOps Pipelines build in progress

Assuming the build succeeds, you have a build that you can leverage in various automation scenarios.
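
For example, the build output can be published as a pipeline artifact that release pipelines and other automation can consume. A minimal sketch, assuming the React app’s production build lands in the default react-app/build output folder:

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.SourcesDirectory)/react-app/build'
    ArtifactName: 'drop'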

A Simple Overview of Azure’s Global Infrastructure

View the basics of Azure’s global infrastructure, including geographies, regions, and availability zones, from Microsoft Learning.

Learn more from Microsoft Learning.

Emerging Secrets

A tale of overcoming pride.

It’s three weeks before launch and you have no confidence in the upcoming release. After five years of development, millions of dollars, and countless overtime hours, the system will fail on delivery.

And you’ve known it from the beginning.

It is a disheartening feeling paired with panic. As the system’s development team leader, it is your responsibility to keep things in line, to keep the team moving forward in the right direction, and to anticipate the possibility of failure.

Bah, you think, there’s only so much that I can do.

You keep your head up, hands down, and continue driving the system forward while cycling through the collection of rationalizations you’ve clung to since the problem’s discovery: it won’t be that bad, it’s not so severe, it’s just a bug we’ll resolve later.

Day after day the problem compounds.

Deep within the tangled web of daily activities, you lose sight of the impending problem. You are distracted by bugs in a non-critical component. The time spent on integration issues with the enterprise resource management software creates an illusion that the other integrations are working correctly.

Now judgment day is here. The systems are being deployed to production. You click the button to run the scripts that build the packages to send to the production environments. Finally, the puzzle pieces are put together to be leveraged by thousands of customers.

Aside from a lack of training, the customers don’t run into any bugs. Deployment successful. You and your team rejoice.

The next week the systems you’ve slaved over are at a critical point: payday. Sales input data is validated and correct. The sales systems are communicating properly. You track the sales data moving from one system to another, checking the output analysis, a sea of green.

It is at 2 A.M. the following Sunday morning when your phone rings. The customers didn’t get paid.

“Fix it!” A wholehearted response, valid to its core, forms the mantra driving the team.

You run the series of validations against the sales data. Green. You have your cohorts comb through system logs. No issues found. You then begin manual validation, double-checking each calculation.

The hairs on the back of your neck stand on end. Goosebumps sweep across your body in waves. You fall into a cold sweat. The validations are flawed, causing a series of false positives. It’s the issue you never recorded, the bug you deferred and never fixed.

“How was this not found earlier?” The dreaded question is laid out, plain and clear. It must be addressed.

The truth is, you did know about it. To save face, you lied to yourself and, worst of all, the team. As the project progressed, you manufactured a version of reality more acceptable to your pride. Now the truth comes out as it always does.

The company you represent faces enormous challenges. Customers need to be paid manually. The system needs to be repaired. And you can no longer be trusted.

Sitting in front of your home desktop computer, customized to be quiet, fast, and to emit an appealing array of lights across your wall, you lose yourself in the thought that maybe you won’t find a new job in your field.

You unconsciously hold your index finger on the down-arrow key; the list of job openings scrolls past your eyes. You see none of them.


Your eyes snap open and you suck in a deep breath. You sit up from your bed, soaked in sweat. You check the clock: 3 A.M. You realize it was just a nightmare.

You know what you have to do.

When you arrive at work early that morning you set up a meeting with the project’s leadership. With system design underway, there is no time to lose. The issue is not something to put off until later. It must be recorded and addressed according to its assigned priority.

The team gathers in the meeting room, offering polite greetings as they enter. When everyone sits down, laptops closed, eager to hear what the meeting is about, you say, “There is a flaw in the third-party validation system we plan to use for validating sales data calculations. We may need to find an alternative.”

Require Pull Requests in Azure Repos

Pull requests offer another layer of defense against poor quality code. Enforcing them with Azure Repos is easy. Here’s how.

Git Repository

First, set up a Git repository in Azure DevOps. The user interface is generally straightforward, but Microsoft describes how to create a new Git repo if one has not already been created for you.

Cloning the repository takes the usual steps – using the command line or GUI tools in Visual Studio or VS Code.

Now you are ready for your first commit.

A default branch is available with Azure Repos. It is called master. You’ll want at least one more. You will push code to the new branch during development.

To create the branch, navigate to Branches in Azure DevOps and select the New branch button:

A form will appear where further information about the branch can be specified:

  1. You’ll be able to name it. I would call it dev or develop – something that describes the nature of the branch and its purpose without being too long.
  2. It needs a Based on branch – the contents of that branch will be copied to your new branch.
  3. You may have the opportunity to link a work item. This is optional but worth it for tracking purposes.

Pull Requests

Don’t push code without it being reviewed. Pull Requests are the next line of defense before code enters a branch.

This is simple to set up in Azure DevOps. From the branches list in Azure Repos, click the ellipses next to the desired branch (in this case, develop) and select Branch Policies.

You will be taken to a screen with a lot of options. There is one checkbox to require pull requests:

From now on, code will enter the branch through pull requests.

Before writing code, create a feature branch off develop. Perform commits and push your feature branch. When you are ready to add your code to develop, create a new pull request and choose your feature branch. Make sure you create a pull request for the develop branch.

To do that, make sure the develop branch is selected in Azure Repos. Your new feature branch should display with a note that it’s available to add to develop with a pull request:

Simply click the Create a pull request link to start the process. Any automated build can run if a Build Policy is set for the develop branch. We’ll discuss that in a separate article.
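
As a recap, the local side of this workflow looks something like the following sketch (branch and remote names are assumed):

git checkout develop
git pull
git checkout -b feature/my-feature
# ...make changes and commit...
git add .
git commit -m "Add my feature"
git push -u origin feature/my-feature

After the push, Azure Repos will offer to create a pull request targeting develop, as described above.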

On Multiple Deploy Environments

Why deploying to multiple environments is a must for enterprise software systems, and how Docker deployment environments in Azure can help.

Azure makes it easy to create multiple deployment environments. Each environment should be as close to the production environment as possible, which Azure can help with too.

The pairing of developers and operations (DevOps) is key to success. Doing what is necessary to create this pairing is largely a cultural concern – if folks won’t get along, the pairing will be less than fruitful. Also, if an organization doesn’t encourage cooperation, there’s no hope.

Culture isn’t the only battle. Finding the balance between responsibilities can be a challenge for many organizations just starting to apply DevOps principles. One straightforward way to look at it is: operations is an end-user of a developer’s deliverable. This means, while operations need to do their part in setting up a viable environment, developers need to be able to deliver a system that will work with that environment.

A great way to do this is by testing. It sounds simple enough. But when it comes to execution, it can be challenging. Developer machines are rarely equivalent to the environments a system is deployed to, and environments are often different between testing and production.

How can this be overcome?

Creating multiple deploy environments is key. This means getting developers and operations in sync as early as possible. Using deployable containers such as those available with Docker can help reduce the differences between environments to practically zero. Creating deployment environments can further strengthen the trust between developers and operations. Read more about how you can get started by creating a static website using Docker.

What environments should be created?

There are three areas that need to be covered: Development, Testing, and Availability. This is best represented using five environments.

Development

When developers make their changes locally, they should be testing locally. This is a typical process that should happen multiple times per day. However, with a development environment, changes can be deployed to a server for development testing. This could be nightly or, better yet, each time a pull request is submitted. Testing a pull request before it is approved is important.

QA

When a pull request is approved, it will likely not continue directly through to the QA environment. There should be a gate controlled by the QA (Quality Assurance) team. If they are ready, they can open the gate allowing deployment to the QA environment. This is where testers will dig into the site manually and run their own automated tests to ensure the system is ready for user acceptance testing.

UAT

UAT (User Acceptance Testing) is a testing phase that includes real users using the system in real-world scenarios. During this phase, the users will need an environment of their own. When the changes are approved, the system is deployed to the Staging environment.

UAT is often combined with either QA or Staging environments. In this article, we separate them. Learn more about the Staging environment next.

Staging

The Staging environment is where the last preparations are made for a move to production. Final checks – and double-checks – are made here. With certain deployment setups, this environment is actually the new production environment. Flipping the switch would make this environment the new production environment, and the old production environment would then be the new staging environment.

Production

When the system is in production we are far past the point of no return. The code is “in the wild” and real users are using it in real situations that have real impacts. In some deployment setups, this may be the old staging environment.

It is important that these are distinct environments, meaning each environment has the correct version of the system and its operations and data are separated from other environments. For example, we don’t want to push a button in Staging and cause code in QA to execute and modify data in UAT. This is a severe example, and Azure makes it easy to avoid.

It is also important that each environment (Development, QA, UAT, Staging) is as similar as possible to the production environment. We want our systems tested thoroughly so that users in production receive the full business value invested in the system. “Similar” means machine resources are similar, system distribution is similar, etc. While each environment may have slightly different code as development progresses, they are otherwise the same. Again, this is easier to accomplish with container technologies such as Docker.

Azure makes it easier to set up and manage these environments. Create guarded continuous integration pipelines that allow safe code to enter production.
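
As a sketch of what that can look like, a multi-stage Azure Pipelines definition can model these environments. This is a minimal outline, assuming environments named dev, qa, uat, staging, and production have been created in Azure DevOps (approvals and gates are configured on the environments themselves); the remaining stages follow the same shape as the two shown:

trigger:
- develop

stages:
- stage: Build
  jobs:
  - job: build
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - script: echo "build and publish artifacts here"

- stage: DeployDev
  dependsOn: Build
  jobs:
  - deployment: deploy_dev
    pool:
      vmImage: 'ubuntu-latest'
    # Approvals and checks configured on the dev environment guard this stage
    environment: dev
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploy to the Development environment"

- stage: DeployQA
  dependsOn: DeployDev
  jobs:
  - deployment: deploy_qa
    pool:
      vmImage: 'ubuntu-latest'
    # The QA team's gate is an approval check on the qa environment
    environment: qa
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploy to the QA environment"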

How to Manage an Ever-Changing User Interface

Discover a philosophy of user interface management that leads to adaptable front-ends able to keep pace with dynamic market requirements.

The user interface is the window into the needs of a business. These needs should be driven by customers, either internal or external. We’ll refer to these customers as the market. As market needs shift, the user interface will need to change. A responsibility of a professional front-end developer is to design the user interface implementation to support this change. How do we manage an ever-changing user interface?

Identifying what areas of the front-end are most impacted is an essential first step in managing shifts in the market. As we know from Robert C. Martin or Juval Lowy, each of these areas is an axis of change. Considering the volatility of an area can help when designing the front-end to more easily adapt to change.

We’ll see that the user interface is never done and certain areas of the user interface will likely change more frequently than others. We will also consider how we could exploit the axis of change to deliver a user interface designed with enough fluidity in the right areas to more easily flow with fluctuating market needs.

Volatility Rating

Everything about the user interface will change. This means color, size, position, text, user experience, design – everything – will change. There is no judgment here. The user interface is encouraged to change if it better suits the market. However, there are some areas that will change more frequently than others and have a greater impact on the system when they do. Considering these areas is essential when designing the user interface implementation.

Frequency

Starting with a simple example, the look and feel of the user interface may change. If, for instance, the look and feel will always change, the frequency is 100%.

Another area that may be added or altered is a data model. When a user interface contacts a service, there is a contract that defines the data that will be sent between the front-end and the service. This is the data model. When the market decides it needs an extra field in a form, that it needs a “button here that does x”, or that a column should be removed from a table, it means altering or adding a data model. This has its own change frequency.
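
A hypothetical sketch in TypeScript makes this concrete; the interface and field names below are invented for illustration:

// A data-model contract shared between the front-end and a service.
export interface SalesOrder {
  id: number;
  customerName: string;
  total: number;
  // A new market need ("orders need a due date") lands here, and every
  // consumer of this contract becomes part of the axis of change.
  dueDate?: string;
}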

Determining how frequently an area will change will help determine its volatility and how to approach its design and the design of future changes.

Impact

The look and feel of the user interface may always change, but frequency is only one part of the volatility rating. The impact of a change needs to be considered. Areas that affect the entire system will have the most impact when changed; the less of the system a change touches, the smaller its impact. An example of this can be found in a previous article titled The Monolith Component. While the article focuses on a malformed component, it describes the kinds of impact code can have. Considering the impact is an important part of deciding how to make a change.

Exploiting the Evolution

Some areas are innately difficult to alter, especially when they impact a website user interface as a whole – such as look and feel. There are common practices when dealing with something like this: use a CSS pre-processor to leverage common principles and practices such as OOCSS, BEM, and SMACSS. With the advent of Modular CSS and other principles and practices, managing the look and feel of a website is less painful.

There are libraries and frameworks that aim to make front-end development less painful. Yet, they can only go so far. It will depend on the use, the application of these helpful libraries and frameworks – let’s call this advantaged code. Leveraging advantaged code becomes dependent on the application of two concepts: continuous improvement, and designing for change. These concepts attempt to answer a fundamental question: How can I make it easier to manage new or existing code in an ever-changing user interface?

Continuous Improvement

As more is learned, more can be applied. The details of the code begin to be deeply understood. The areas of the code that change most begin to reveal themselves. And, of course, the impact on the system of each change has a greater chance of being measurable.

When learning these things about the user interface, and how it is impacted by changing market needs, the code can be continuously improved to anticipate those changes.

Design for Change

Designing a user interface for change is only valuable if the rate of change and its impact on the system are measured and deemed inevitable. This is to avoid unnecessary costs such as increased user interface complexity and reduced available budgets.

As the user interface evolves with market needs it should continuously improve in the areas where the rate of change and the impact on the system are high enough. What is high enough in terms of change rate and system impact is largely determined by project concerns – available time and budget, developer experience, accessible business knowledge, etc.

I am not saying all changes are valid – meaning, there are some cases when a change should not be made. A simple example of this is security. If a requested change will compromise the security of the application, it is a responsibility of a professional developer to say “no”, preferably with an amount of tact appropriate for your relationship with the market. And hopefully, there would be enough trust in the partnership that the market will thank you for looking out for them.

Excluding the requests that are detrimental to the system, by measuring the rate of change and the impact on the system, changes to the front-end can be more easily supported and maintained, and you may even welcome them.

Two Simple Examples of Docker Support in Visual Studio 2019

How to leverage Docker support in Visual Studio 2019 to run and debug your ASP.NET Core applications.

Running applications in containers continues to be an important part of enterprise application development. This article will show how to take advantage of the built-in support for Docker in Visual Studio 2019.

First, create a new ASP.NET Core project in Visual Studio 2019. Docker support can be included when creating the project or it can be added later. I’ve opted to add it later in this example.

Docker Support

I have previously shown how to run a static website using Docker and how to set up a Docker container for Nx Workspace applications. Docker can also be used to run an ASP.NET Core application, and Visual Studio 2019 makes it easy.

Adding Docker support using Visual Studio 2019 is more seamless if the default setup can be used. To add Docker support to an existing project, right-click the project, hover/select “Add” and choose “Docker Support…”

Adding Docker support to an existing project

After selecting “Docker Support…” a dialog will appear to allow choosing a target operating system between Windows or Linux:

Choose Target OS

I have selected Windows for this example. A Dockerfile will be generated automatically and added to the selected project. For this example, the generated Dockerfile looks like this:

#Depending on the operating system of the host machine(s) that will build or run the containers, the image specified in the FROM statement may need to be changed.
#For more information, please see https://aka.ms/containercompat

FROM mcr.microsoft.com/dotnet/core/aspnet:2.2-nanoserver-1809 AS base
WORKDIR /app
EXPOSE 80

FROM mcr.microsoft.com/dotnet/core/sdk:2.2-nanoserver-1809 AS build
WORKDIR /src
COPY ["Hub/Hub.csproj", "Hub/"]
RUN dotnet restore "Hub/Hub.csproj"
COPY . .
WORKDIR "/src/Hub"
RUN dotnet build "Hub.csproj" -c Release -o /app

FROM build AS publish
RUN dotnet publish "Hub.csproj" -c Release -o /app

FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "Hub.dll"]

There is now an option to start and debug the app using Docker:

Start and debug using Docker

Wait, there’s more.

Container Orchestrator Support

Adding container orchestrator support is just as simple as adding Docker support. This support allows running and debugging multiple containerized applications.

First, right-click the project, hover/select “Add” and choose “Container Orchestrator Support…”

Add container orchestrator support to an existing project

After selecting “Container Orchestrator Support…” a dialog will appear to allow choosing a container orchestrator between Kubernetes/Helm, Service Fabric, or Docker Compose:

Choose Container Orchestrator

I have selected Docker Compose for this example. A new project named “docker-compose” will be added to the solution containing three files:

.dockerignore

The .dockerignore file is used by the docker CLI to exclude files and directories from the build context.

Generated .dockerignore file

docker-compose.yml

The docker-compose file will specify a service containing the details of the application used when adding container orchestrator support. In this example, it uses the details of the Hub application:

Generated docker-compose.yml file
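
The generated file resembles the following sketch (the service and image names derive from the Hub project; exact contents vary by Visual Studio version):

version: '3.4'

services:
  hub:
    image: ${DOCKER_REGISTRY-}hub
    build:
      context: .
      dockerfile: Hub/Dockerfile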

docker-compose.override.yml

The docker-compose.override file contains additional details regarding the services specified in the docker-compose.yml file. In this example, it contains the ASPNETCORE_ENVIRONMENT environment variable set to Development and specifies the port as 80. It also specifies a network the container will use for communication.

Generated docker-compose.override.yml file
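
Again as a sketch, the generated override can contain something like the following (the nat network is the Windows-container default; details vary):

version: '3.4'

services:
  hub:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    ports:
      - "80"

networks:
  default:
    external:
      name: nat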

After adding container orchestrator support, a “Docker Compose” option will be added to allow running and debugging the application using Docker Compose.

Using docker-compose, it is also possible to specify an external IP address for the application. This IP address would be accessible by a browser and other utilities. To specify an IP address, simply add a couple of lines to the service specified in the docker-compose.override.yml file:

services:
  hub:
    ...
    networks:
      default:
        ipv4_address: 172.25.159.13

The networks section specifies which network to update. Since “default” is the name of the network specified, it is the one modified. The “ipv4_address” value is assigned, which means this container will be accessible from a browser by navigating to 172.25.159.13.
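
One assumption worth calling out: a static ipv4_address must fall within the target network’s subnet. The external nat network already defines one; if you instead declare your own (non-external) network in the compose file, the subnet can be set explicitly, as in this illustrative sketch:

networks:
  default:
    ipam:
      config:
        - subnet: 172.25.159.0/24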

Docker and container orchestrator support in Visual Studio 2019 are two options that provide some exciting opportunities that I will be showing in more detail in a later article.

Creating ASP .NET Core Startup Tasks

How to create ASP.NET Core startup tasks

Reference: https://andrewlock.net/reducing-latency-by-pre-building-singletons-in-asp-net-core/

The linked article walks through a useful alternative to other app-startup features in .NET Core, and it also describes those other features.

The pattern is useful when operations need to be performed exactly once before the application runs, i.e., before IWebHost.Run().
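
A condensed sketch of the pattern from the linked post follows; the IStartupTask and RunWithTasksAsync names follow that post, but treat the details here as illustrative rather than authoritative:

using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;

public interface IStartupTask
{
    Task ExecuteAsync(CancellationToken cancellationToken = default);
}

public static class StartupTaskExtensions
{
    // Register a task to run before the host starts.
    public static IServiceCollection AddStartupTask<T>(this IServiceCollection services)
        where T : class, IStartupTask
        => services.AddTransient<IStartupTask, T>();

    // Run every registered startup task, then start the host.
    public static async Task RunWithTasksAsync(this IWebHost webHost,
        CancellationToken cancellationToken = default)
    {
        foreach (var task in webHost.Services.GetServices<IStartupTask>())
        {
            await task.ExecuteAsync(cancellationToken);
        }

        await webHost.RunAsync(cancellationToken);
    }
}

With this in place, the usual CreateWebHostBuilder(args).Build().Run() in Program.Main becomes await CreateWebHostBuilder(args).Build().RunWithTasksAsync().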

A Powerful Docker Container for an Nx Workspace Application

Discover how to easily create a Docker container for an Nx Workspace application with this step-by-step guide to a site deployable in seconds with Docker.

In a previous post, I briefly described the Nx Workspace and how to create Angular applications and libraries with Nrwl Extensions. I wanted the ability to run a prod build of the app in Docker for Windows, so here is just one way of accomplishing that. With the Nx Workspace already set up, I had to add just a few more files. This article assumes an Nx Workspace exists with an app named “client-demo”. It follows a similar approach to creating a static website using Docker and describes how to create a simple Docker container for an Nx Workspace application.

NGINX

Using nginx instead of a nanoserver image keeps things small (~16 MB compared to 1+ GB), and it requires an nginx.conf file. Place the file at the root of the Nx Workspace (the same level as the angular.json file):

// nginx.conf

worker_processes 1;

events {
  worker_connections 1024;
}

http {
  server {
    listen 80;
    server_name localhost;

    root /usr/share/nginx/html;
    index index.html index.htm;
    include /etc/nginx/mime.types;

    gzip on;
    gzip_min_length 1000;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/css application/javascript;

    location / {
      try_files $uri $uri/ /index.html;
    }
  }
}

Dockerfile

It is now time for the Dockerfile. This file acts as a sort of definition file for a Docker image. Name it Dockerfile.client-demo to match the npm scripts added later, and place it at the same level as the nginx.conf file:

// Dockerfile.client-demo

FROM nginx:alpine
COPY nginx.conf /etc/nginx/nginx.conf
WORKDIR /usr/share/nginx/html
COPY dist/apps/client-demo .

Docker Compose

The Dockerfile is created. To use Docker Compose, create a docker-compose.client-demo.yml file (again matching the npm scripts below) at the same level as the Dockerfile:

// docker-compose.client-demo.yml

version: '3.1'

services:
  app:
    image: 'client-demo-app'
    build:
      context: .
      dockerfile: Dockerfile.client-demo
    ports:
      - 3000:80

Docker Ignore

When creating a Docker Image not every file is needed. In this case, only the dist/ folder is really needed. Using a .dockerignore file can help keep files and directories out of the build context. Place this file at the same level as the Dockerfile:

// .dockerignore

node_modules
.git
libs
tools
apps

Package.json

To leverage the files that have been created, scripts can be added to the package.json file. This file should already exist within the Nx Workspace. Simply add the following scripts:

// package.json

...
"scripts": {
...
"client-demo-build": "ng build client-demo --prod",
"client-demo-image": "docker image build -f Dockerfile.client-demo -t client-demo-app .",
"client-demo-run": "docker-compose -f docker-compose.client-demo.yml up",
"client-demo-stop": "docker-compose -f docker-compose.client-demo.yml down",
"client-demo": "yarn client-demo-build && yarn client-demo-image && yarn client-demo-run"
},
...

Each of these scripts can run with npm run <script> or yarn <script>.

client-demo-build: This script runs ng build with the --prod flag to create a prod build of the Angular app.

client-demo-image: This script builds the client-demo-app image given a specific Dockerfile named Dockerfile.client-demo.

client-demo-run: This script uses docker-compose to run the app with docker-compose up. The specific file, docker-compose.client-demo.yml, is specified with the '-f' flag.

client-demo-stop: This script acts as the opposite of docker-compose up. As long as this script runs after the client-demo-run script, the app can be started and stopped any number of times.

client-demo: This script simply chains the execution of other scripts to create the prod build of the Angular app, create the Docker image, and serve the app. As it is written, yarn is required.

After creating the Nx Workspace, creating the Docker support files, adding the scripts to package.json, and running npm run client-demo or yarn client-demo, access the app from a browser at http://localhost:3000.

Default Nx Workspace application

Run npm run client-demo-stop or yarn client-demo-stop to stop the app.

An Introduction to the Nx Workspace

Learn how to create and maintain flexible Angular applications with Nrwl Extensions

Angular development is great. It offers a great way to break problems into small, easily managed parts. With the Angular CLI, more power is at our fingertips. Narwhal Technologies Inc has created even more power by providing extensions to the Angular CLI. In this article, I will describe how to leverage the Nrwl Extensions to create and maintain flexible Angular apps.

To learn how to install Nrwl, visit their getting started guide: https://nrwl.io/nx/guide-getting-started

Make sure to install Angular and Nrwl globally using npm. Here is a list of versions I used for this article:

node: 8.15.0
npm: 5.0.0
"@angular/cli": "~7.1.0"
"@nrwl/schematics": "7.4.0"

Creating an Nx Workspace

The Nx Workspace is a collection of Angular applications and libraries. When creating the workspace there will be a number of options available during generation. To start, run the following command:

create-nx-workspace <workspace-name>

This will begin the process of generating the workspace with the provided name.

After a few initial packages are installed, a prompt will display to choose the stylesheet format:

Stylesheet format prompt

Use the arrow keys to choose between CSS, SCSS, SASS, LESS, and Stylus. After the desired format is highlighted, press Enter.

The next prompt to display is the NPM scope. This will allow applications to reference libraries using an npm scope. For example, given a library called ‘my-lib’ and an npm scope of ‘my’, an application can import the library with the following statement:

import { MyLibModule } from '@my/my-lib';

To learn more about npm scopes check out their documentation: https://docs.npmjs.com/about-scopes

After specifying an NPM scope, press Enter. A third prompt will appear to specify which package manager to use:

NPM scope and package manager prompts

Use the arrow keys to choose between npm and Yarn. After the desired package manager is highlighted, press Enter.

Now that the generation process has everything it needs, it will continue to create the folder structure, files, and configuration:

Completed Nx Workspace generation

Project Structure

There are two important folders available after the workspace generation.

/apps – Contains a collection of applications in the workspace.
/libs – Contains a collection of libraries in the workspace.

Adding Applications

Before adding an application with the CLI be sure to navigate into the workspace folder. In our example, the folder is ‘my-platform-workspace’. Then use the Angular CLI to generate the app:

PS C:\NoRepo\NxWorkspace> cd my-platform-workspace
PS C:\NoRepo\NxWorkspace\my-platform-workspace> ng g app <app name>

Tip

When using Visual Studio Code, open the Nx Workspace folder. The built-in terminal will then default to the necessary directory.

After adding the application, a number of prompts will display and the app generation will proceed:

Adding an application

Running the application can be done using the Angular CLI as usual:

PS C:\NoRepo\NxWorkspace\my-platform-workspace> ng serve my-first-app

When the app is done building, go to http://localhost:4200 from a browser and see the default view:

Default app built with Nrwl Nx

Adding a Library

Adding a library is as easy as adding an application with the following command:

ng g lib <library name>

Generally, a module should be created for libraries so they can be easily imported by applications. Once the library is created, components can be added to the library. Make sure to export any library components or other Angular objects (providers, pipes, etc.) that need to be used by applications.
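
As a sketch, a library module exporting a single component could look like the following (MyLibComponent and its file name are assumed):

import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { MyLibComponent } from './my-lib.component';

@NgModule({
  imports: [CommonModule],
  declarations: [MyLibComponent],
  // Export anything applications need to reference from this library.
  exports: [MyLibComponent]
})
export class MyLibModule {}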

The Dependency Graph

Looking at package.json, there are a number of scripts that have been added. One that is nice to have is to generate and view a dependency graph of all of the applications and libraries in the workspace. A dependency graph can be generated using the following command:

npm run dep-graph

For example, I’ve added my-lib and my-lib2 to the my-first-app. This is the resulting dependency graph:

Sample dependency graph

Here we can see that the my-first-app-e2e (end-to-end) test application is dependent on the my-first-app application. The application is dependent on the libraries my-lib and my-lib2. This is a very simple example. This gains more value as more applications share more libraries.

It is also possible to get the JSON version of the dependency graph which can be used in various creative ways to help automate your workflow. This is all thanks to Nrwl Extensions and the power of Nx Workspaces.