An Introduction to the Nx Workspace

Learn how to create and maintain flexible Angular applications with Nrwl Extensions

Angular development is great. It offers a powerful way to break problems into small, easily managed parts. With the Angular CLI, even more power is at our fingertips. Narwhal Technologies Inc. (Nrwl) has added still more by providing extensions to the Angular CLI. In this article, I will describe how to leverage the Nrwl Extensions to create and maintain flexible Angular apps.

To learn how to install Nrwl, visit their getting started guide: https://nrwl.io/nx/guide-getting-started

Make sure to install Angular and Nrwl globally using npm. Here are the versions I used for this article:

node: 8.15.0
npm: 5.0.0
"@angular/cli": "~7.1.0"
"@nrwl/schematics": "7.4.0"

Creating an Nx Workspace

The Nx Workspace is a collection of Angular applications and libraries. A number of options are available during workspace generation. To start, run the following command:

create-nx-workspace <workspace-name>

This will begin the process of generating the workspace with the provided name.

After a few initial packages are installed, a prompt will display to choose the stylesheet format:

Stylesheet format prompt

Use the arrow keys to choose between CSS, SCSS, SASS, LESS, and Stylus. After the desired format is highlighted, press Enter.

The next prompt to display is the NPM scope. This allows applications to reference libraries using an npm scope. For example, given a library called ‘my-lib’ and an npm scope of ‘my’, an application can import the library with the following statement:

import { MyLibModule } from '@my/my-lib';

To learn more about npm scopes, check out their documentation: https://docs.npmjs.com/about-scopes

After specifying an NPM scope, press Enter. A third prompt will appear to specify which package manager to use:

NPM scope and package manager prompts

Use the arrow keys to choose between npm and Yarn. After the desired package manager is highlighted, press Enter.

Now that the generation process has everything it needs, it will continue to create the folder structure, files, and configuration:

Completed Nx Workspace generation

Project Structure

There are two important folders available after the workspace generation.

  • /apps: Contains a collection of applications in the workspace.
  • /libs: Contains a collection of libraries in the workspace.

Adding Applications

Before adding an application with the CLI, be sure to navigate into the workspace folder. In our example, the folder is ‘my-platform-workspace’. Then use the Angular CLI to generate the app:

PS C:\NoRepo\NxWorkspace> cd my-platform-workspace
PS C:\NoRepo\NxWorkspace\my-platform-workspace> ng g app <app name>

Tip

When using Visual Studio Code, open the Nx Workspace folder. The built-in terminal will then default to the necessary directory.

After adding the application, a number of prompts will display and the app generation will proceed:

Adding an application

Running the application can be done using the Angular CLI as usual:

PS C:\NoRepo\NxWorkspace\my-platform-workspace> ng serve my-first-app

When the app is done building, go to http://localhost:4200 in a browser to see the default view:

Default app built with Nrwl Nx

Adding a Library

Adding a library is as easy as adding an application with the following command:

ng g lib <library name>

Generally, a module should be created for a library so it can be easily imported by applications. Once the library is created, components can be added to it. Make sure to export any library components or other Angular objects (providers, pipes, etc.) that need to be used by applications.
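
For example, a library module might declare and export a component like this (the file path and names here are hypothetical):

// libs/my-lib/src/lib/my-lib.module.ts (hypothetical)
import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { MyLibComponent } from './my-lib.component';

@NgModule({
  imports: [CommonModule],
  declarations: [MyLibComponent],
  exports: [MyLibComponent] // only exported members are usable by applications
})
export class MyLibModule {}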

The Dependency Graph

Looking at package.json, there are a number of scripts that have been added. One that is nice to have generates and displays a dependency graph of all of the applications and libraries in the workspace. The graph can be generated using the following command:

npm run dep-graph

For example, I’ve added my-lib and my-lib2 to the my-first-app. This is the resulting dependency graph:

Sample dependency graph

Here we can see that the my-first-app-e2e (end-to-end) test application depends on the my-first-app application, and the application depends on the libraries my-lib and my-lib2. This is a very simple example; the graph gains more value as more applications share more libraries.

It is also possible to get a JSON version of the dependency graph, which can be used in various creative ways to help automate your workflow. This is all thanks to Nrwl Extensions and the power of Nx Workspaces.
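
For example, assuming the installed version of the Nx tooling supports a --file option on this script, the graph data can be written to a JSON file like this:

npm run dep-graph -- --file=dependency-graph.json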

Handle Arguments in a PowerShell Script

Learn ways to handle arguments sent to a PowerShell script

So far, I haven’t used PowerShell often in my work. I do know it’s a powerful tool, one that has saved me a lot of headaches when I have used it. One thing I recently found quite helpful is the ability to leverage the arguments sent to a PowerShell script file.

Here is the sample script that will be referenced throughout this article:

# ./HandlingArguments.ps1
# Args: 0 - string1; 1 - string2;
$args | ForEach-Object {
  $arg = $_;
  Write-Host $arg.GetType()
  Write-Host $arg
  Write-Host $arg[0]
  Write-Host $arg[1]
}

I will begin by briefly describing this script. The first two lines are just comments providing the name of the script file and some generic information about the args.

Next is the use of the $args automatic variable. This variable contains an array of the arguments sent to the script file. ForEach-Object provides the ability to iterate over a collection of objects. Each item in the collection is represented inside the ForEach-Object body (if using a script block) as $_. Of course, I cannot forget about the symbol between $args and ForEach-Object, the | or “pipe” character. This character pipes the input objects (the $args array items) to ForEach-Object.

Variations of Argument Passing

There are various ways of passing data to this script. In the following example, we’ll simply pass the string “hello”:

PS> .\HandlingArguments.ps1 "hello"
System.String
hello
h
e

Here, the “hello” string is passed to the script and $args is an array with the string value “hello” as an item.

$_ is the string value “hello”. $_.GetType() gives us the System.String type. $_[0] and $_[1] access characters of the string “hello” (“h” and “e” respectively).

The important part to remember is that $args is an array of items passed to the script. When we iterate over that array we get the item value. In this case, that item value was “hello”.

Next, let’s send “hello” and “world” as two separate arguments to the script file:

PS> .\HandlingArguments.ps1 "hello" "world"
System.String
hello
h
e
System.String
world
w
o

Here, we see that there are now two sets of four outputs. The first mirrors the output when we just sent “hello”. The second is similar to the first but with the string “world” instead. Remember, $args is an array of items passed to the script. In this case, it is an array with two items: “hello” and “world”.

Next, let’s see what happens when we pass an array of strings to the script:

PS> .\HandlingArguments.ps1 ("hello", "world")
System.Object[]
hello world
hello
world

Here, we can see that we are back to just a single set of four outputs. $args is an array of items passed to the script. Since we passed a single array, it is an array consisting of a single item which is an array. $_ is the array containing “hello” and “world” as its items. $_.GetType() gives us the System.Object[] type. $_[0] and $_[1] access items within the array (“hello” and “world” respectively).

This article briefly described some of the ways to leverage the flexibility of PowerShell to produce different output, without changing the script file, simply by changing the arguments passed to it.

The Monolith Component

Learn some useful ways to identify and squash a monolith component

It is more common than ever to have user interfaces made up of components. With so many user interface libraries and frameworks like Angular, React, VueJS, Aurelia, and KnockoutJS, components are everywhere. Components are the building blocks of a user interface. It is imperative that these blocks are built and organized in a way that supports proper testing, reduces defects, and enables the extensibility that ever-changing user experiences demand. This article will describe a component that counters these goals by challenging both the quality and flexibility of a user interface: The Monolith Component.

The Monolith Component is a byproduct of feature-driven functional decomposition. It is the user interface representation of the god object. The component is feature-packed, contains an excessive amount of code, and often becomes the grandparent of smaller child components. To make the problem worse, the child components could be referencing the grandparent and act as nothing more than a visual proxy to grandparent capabilities.

It isn’t hard to understand why this sort of thing happens. First, an agile team will use sprints to deliver features. Without careful consideration and planning of system design, the design and functionality tend to focus on feature-level concerns at a sprint cadence. Second, it’s not always intuitive to design something that is counter to our nature. For example, generally speaking, we first learn how to cook from our parents, and our parents first learned how to cook from their parents, and so on. In a fast-paced environment, a quicker path may be to learn directly from your grandparents. This is a small example of the innate grandparent (monolith) design we exist within. Instead of child components owning their responsibilities, those responsibilities are implemented in their parents, and likewise up the hierarchy to the level of the monolith component.

Applying the grandparent theme to user interface development leads to buggy components that are difficult to maintain, test, and extend. Overall, a monolith component will have too many responsibilities.

Identifying A Monolith Component

There are a few indicators of a monolith component.

#1 Line Count: if you see a component with 1000+ lines of code, you may be looking at a monolith component.

#2 Dependencies: if you see dozens of dependencies or more, you may be looking at a monolith component.

#3 Bugs: if most bug fixes involve looking at the same, single component, you may be looking at a monolith component.

#4 New Features: if new features involve looking at the same, single component, you may be looking at a monolith component.

#5 Responsibility: if it is easier to describe what a component does not do within the context of the page, you may be looking at a monolith component.

Refactoring A Monolith Component

Updating a monolith component can be a daunting task, but it gets easier each time it is done, especially when those updates break the monolith up into a saner design. The more that is done, the better off the code, and the developer, will be. Exactly how to refactor one of these components depends on its implementation details. I will instead attempt to describe some general ideas that have helped me in the past, with some simple examples along the way.

Test Test Test

The first step is having confidence that refactoring does not introduce breaking changes. To this end, maintaining tests that prove business value is essential. The presentation and logic shouldn’t change during a refactor; that is to say, the intent should not change. There are multiple ways to accomplish the same thing, and refactoring is meant to update code so that it does the same thing in a cleaner way. Having tests that make sure the code satisfies business requirements, without depending on how it satisfies them, will help avoid introducing bugs while refactoring.
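
As a small, hypothetical sketch: a test like this pins down the business requirement (the formatted total) without depending on how formatTotal computes it, so it keeps passing while the code is refactored:

// Hypothetical Jasmine test: asserts the business value, not the implementation.
describe('formatTotal', function () {
  it('formats the order total as currency', function () {
    expect(formatTotal(19.99)).toBe('$19.99');
  });
});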

Identify Responsibilities

The second step is knowing what the component does. A monolith component will likely have multiple responsibilities. Being able to list these will help us understand the component and how to split it into multiple components. It can reveal the patterns and domains that shape the component.

Responsibilities that are determined to be unnecessary should be removed:

Example 1: A section of code is disabled by being wrapped in a comment. This code can’t execute, so remove it. Be careful when looking at comments in HTML, though; libraries and frameworks often give meaning or functionality to HTML comments.

Example 2: A block of HTML never displays because of a visibility condition that is never and can never be true. Assuming this isn’t a bug, the HTML block can be removed. The condition in code may not be necessary either.

We know the component’s responsibilities and we may have removed some that were not needed. Now we look at dependencies.

Understand Dependencies

The third step is knowing the internal and external dependencies of the component. The goal of this step is to answer: what does the component require so that it can perform its responsibilities?

Depending on what kind of dependency injection is used (the application does leverage DI, right?), dependencies may be specified in different locations.

Looking at the constructor is a good place to start. The parameters of the constructor define what a component needs in order for an instance to be created.

Next, look at private and public properties. These will likely include the values or objects set in the constructor. They also represent the possible states of a component. Maybe an enum is used to define how to present a particular object in the view. Maybe a boolean is set to determine whether a checkbox should be checked on load. Maybe an object is used to store a server response object. These things can still be considered dependencies in this context – the component needs them to provide its business value.

Look for static members. Static members require only a definition, not an instance. What do they offer and how does the component use them? Generally, these are easier to identify and extract to make more reusable.

Finally, look at how the dependencies are used within instance methods. If a non-primitive data type is specified in the constructor, what is the purpose of that dependency? Does it allow a consumer to get data from a server? Does it contain methods to filter a list of a particular data type? Is it the mechanism for triggering an event on a timer? Knowing how the dependencies are used can help when determining what business problem they are solving. This can help us group dependencies by business problems or domains.

Extract Common Ground

The fourth step is knowing the groups of common responsibilities and common dependencies that can be moved into other distinct components.

Find themes such as API calls, orchestration of child components, notification, and other domain logic.

Use a separate class to encapsulate API calls. This enables centralized control over forming requests and normalizing responses.
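
As a hypothetical sketch, the extracted class might look like this, with request creation and response normalization in one place:

// OrderApi (hypothetical): all server communication for orders lives here,
// instead of inside the monolith component.
function OrderApi(baseUrl) {
  var self = this;

  self.getOrder = function (id) {
    return fetch(baseUrl + '/orders/' + id).then(function (response) {
      return response.json(); // normalize responses in one place
    });
  };
}

// A component now only consumes the API and updates the view:
// new OrderApi('/api').getOrder(42).then(function (order) { ... });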

Use a dedicated component for handling the interaction between child components. Using a dedicated class to share data can help avoid coupling as well.

Use a separate class or component to handle other domain logic:

Example 1: The presentation of a timer is maintained. The timer is triggered, tracked, displayed, stopped and reset within the monolith component.

Example 2: The component allows the user to update the view with new data from the server without using the browser refresh button. After a user triggers a refresh, a server request is formed and sent, the response is received and parsed, the parsed response is sent to the areas that need it, and the view is updated appropriately.

Example 3: User input is maintained. A form contains multiple labels and input controls. The user input is validated. When the user input is invalid, a notification is displayed to the user. When a user completes the form, it can be submitted to the server so there are more server requests and responses to handle.

All of the logic and presentation of these examples can be contained in separate classes and components. They will take with them the necessary responsibilities and dependencies of the monolith component – only what they need.

Repeat As Necessary

The fifth step is knowing what is left after extracting the common ground. If we are lucky, this means the monolith component now does only one thing. If not, repeat the previous steps until the component does just one thing. This is assuming that all of the responsibilities of the component will be known and are necessary.

A monolith component is too big. It does too much. It knows too much. It has too many responsibilities. As developers, we are responsible for delegation. If a component does too much, it is our fault. There are many ways to prevent and refactor monolith components. There have been volumes of work describing refactoring methodologies. This article describes some ways that have worked for me when refactoring a monolith component.

How to Easily Create a Static Website With Docker

Discover how to easily create a static website with Docker that can be viewed from a browser

The goal of this article is to describe a process for serving static web files from a Docker container. It is surprisingly easy to create a static website with Docker.

The website structure is very simple and consists of only 3 files:

./site/
  style.css
  app.js
  index.html

At the project root there is a Dockerfile:

./
  Dockerfile

The website displays “Loading” text. When the JavaScript file is loaded, Hello World is displayed in big red letters:

Hello World “Loading” view

Here is the HTML:

<html>
  <head>
    <title>Sample Website</title>
    <script src="app.js"></script>
    <link href="style.css" rel="stylesheet" />
  </head>
  <body>Loading</body>
</html> 
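
The app.js and style.css files are not shown here; a minimal sketch of what they might contain (an assumption on my part) is:

// app.js - swap the "Loading" text for the greeting once the page has loaded
window.addEventListener('load', function () {
  document.body.textContent = 'Hello World';
});

/* style.css - big red letters */
body {
  color: red;
  font-size: 5em;
}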

Here is the Dockerfile:

FROM nanoserver/iis
COPY ./site/ /inetpub/wwwroot/ 

The two lines in the Dockerfile are key to getting the web server image created. This file allows us to create a new Docker image. The image is used to run a Docker container.

The first line specifies the base image. In this case, it is a Nano Server image configured with IIS. There are smaller web server images that are usually preferable.

The second line will copy the local project files from the ‘site’ folder to the wwwroot folder of the nanoserver image.

That is everything needed to get a web server started to serve the web page. To create the image, start with docker build:

> docker build -t webserver-image:v1 .

The docker build command is used to create an image. When it is executed from a command line within the directory of a Dockerfile, the file will be used to create the image. The -t option provides the ability to name and optionally tag the image. In this case, the name is “webserver-image” with the “v1” tag. Tags are generally used to version images. The last argument is the path used to build the image; in this case, it is ., the current directory.

Running the command will build the image:

> docker build -t webserver-image:v1 .
Sending build context to Docker daemon 26.11kB
Step 1/2 : FROM nanoserver/iis
---> 7eac2eab1a5c
Step 2/2 : COPY ./site/ /inetpub/wwwroot/
---> fca4962e8674
Successfully built fca4962e8674
Successfully tagged webserver-image:v1

The build succeeded. This can be verified by running docker image ls:

> docker image ls
REPOSITORY      TAG IMAGE ID     CREATED       SIZE
webserver-image v1  ffd9f77d44b7 3 seconds ago 1.29GB

If the build doesn’t succeed, there may be a few things to double-check. This includes making sure the Dockerfile is available, nanoserver images can be pulled, and paths are accurate.

Now that an image is created, it can be used to create a container. This can be done with the docker run command:

> docker run --name web-dev -d -it -p 80:80 webserver-image:v1

After running the command, the container id will be displayed:

> docker run --name web-dev -d -it -p 80:80 webserver-image:v1
fde46cdc36fabba3aef8cb3b91856dbd554ff22d63748d486b8eed68a9a3b370

A docker container was created successfully. This can be verified by executing docker container ls:

> docker container ls
CONTAINER ID IMAGE              COMMAND                  CREATED
STATUS        PORTS              NAMES
fde46cdc36fa webserver-image:v1 "c:\\windows\\system32…" 31 seconds ago
Up 25 seconds 0.0.0.0:80->80/tcp web-dev

The container id is displayed (a shorter version of what was shown when executing docker run). The image that was used for the container is also displayed along with when it was created, the status, port, and the container name.

The following docker inspect command will display the IP address:

> docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" web-dev
172.19.112.171

This IP address is what can be called in a browser to view the page:

Hello World “Loading” view

There is now a working container that serves the web page!

I learn by doing, and I have found that most of us in tech do. That is why I got Manning Publications’ Docker in Action to learn Docker using their step-by-step instructions and immediately actionable information to apply to enterprise-level projects.

Their “In Action” series takes the reader on an active journey by way of doing. After learning the details of using Docker to release enterprise-level software I wanted to be sure I understood the concepts and practices behind the delivery. Manning Publications has another book called Docker in Practice. Their “In Practice” series dives deep into the concepts presented by the technology. Together, Docker in Action and Docker in Practice create a well-rounded course in leveraging Docker effectively.

Tip: Vagrant Installation Prerequisite

Trouble installing Vagrant? This may help.

When using Windows 10, the system BIOS must have virtualization enabled, and the OS must not have Hyper-V enabled, before installing Vagrant. Virtualization may be unavailable in Windows for a number of reasons; in my case, it was because the virtualization feature was disabled in the BIOS. In short, the Windows machine needs to support virtualization, virtualization needs to be enabled in the BIOS, and the Hyper-V feature must be turned off in the Windows OS.
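
For example, one way to turn the Hyper-V feature off (an assumption on my part; run it from an elevated PowerShell prompt, and a reboot is required afterward) is:

PS> Disable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All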

Finally, Vagrant may be installed.

Microsoft Outlook Stuck on Loading Profile…

This may seem to be an unusual topic for Better Blogs, but I have fixed this issue a number of times. Normally, Outlook will load its data appropriately and start successfully. Sometimes, however, it gets stuck on the “Loading Profile…” step:

outlook_2013_hangs_loading_profile

Many different solutions can be found online. So why write a post about it? Because I have not found a fix for Outlook 2016 on Windows 10. The internet will take you to Add/Remove Programs, to opening Outlook in safe mode, or to the PC’s registry! The fix is more straightforward:

Step 1

Go to: C:\Users\<user name>\AppData\Local\Microsoft\Outlook

Step 2

Delete the Outlook Data Files (.nst).

Skype for Business may be using these files as well. Don’t worry if this causes issues; simply close all Skype instances (Task Manager helps) and continue deleting the files. The files will be created again the next time Outlook opens.
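
For example, with Outlook and Skype closed, the files can be removed from a PowerShell prompt (a sketch; $env:LOCALAPPDATA resolves to C:\Users\<user name>\AppData\Local):

PS> Remove-Item "$env:LOCALAPPDATA\Microsoft\Outlook\*.nst"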

I’ve used these steps for a couple of previous versions of Outlook as well. It tends to fix it for me so I thought I’d store the steps before I forget again. I hope this helps you as well!

If the above solution doesn’t work for you, perhaps the wonderful collection of solutions here: https://www.pallareviews.com/3466/outlook-hangs-on-loading-profile/ may help?

Working with Vanilla JS in Web Applications

Writing JavaScript for IE and other antiquated browsers means classes and other helpful features of ES6 are not available. A similar effect can be achieved, though, and it is actually quite easy to do!

Why use Vanilla JS instead of any number of the frameworks available, or even TypeScript? The answer is largely irrelevant if a choice has already been made. However, deciding to replace JavaScript with some alternative for all use cases is absolutely missing the mark. This article will describe the use of Vanilla JS, leaving the choice of what language to use up to you.

Class Definitions & Namespaces

ES6 classes are not yet fully supported in the browser. Many of the limitations mentioned in this article are most relevant when developing with ES5 – such as developing for Internet Explorer or other antiquated browsers. Even without the full support of classes, a similar effect can be achieved in JavaScript and it is actually quite easy to do!

We first want to make sure that the class definition is contained. This means it should not pollute the global namespace with methods and variables. This can be accomplished by using a closure – a specific one called an IIFE (immediately invoked function expression).

(function (global) {
  "use strict";

  global.API = new MyObject();

  function MyObject() {
    var self = this;
    // var privateVariable ...
    // function privateMethod() ...
    // self.publicMethod = function ...
  }
})((1,eval)('this'));

Notice that the global namespace is passed to the IIFE – since an IIFE is just a function, arguments can be passed to it like any other function! If you want to know more about how the global namespace is obtained, check out this enlightening StackOverflow post: (1,eval)(‘this’) vs eval(‘this’) in JavaScript?

"use strict"; //seriously, do it.

The class can be initialized and stored at global scope, such as inside a single app-specific namespace:

(function (global,app,http) {
  "use strict";

  global[app] = global[app] || {};
  global[app][http] = new Http();

  // global.App.http.publicMethods()
  function Http() {
    var self = this;
    // var privateVariables ...
    // self.publicMethods = function ...
    // function privateFunctions() ...
  }
})((1,eval)('this'),'App','http');

I find it easier to write client-side JavaScript as an API. Leveraging the design patterns this encourages offers many benefits to code quality and maintenance. In the code above, an Http instance is assigned to the http property in the global.App namespace. Certainly, this should contain our methods for making HTTP calls! Code organization is one of the best things about approaching the application’s client-side JavaScript in this way. Usually, the constructor function, not an instance, would be stored – which allows certain SOLID principles to be applied.

The Constructor Function

The Http function is a special kind – a constructor function. This means an instance can be created using the new operator with the constructor function call.

function MyObject() { }
var instance = new MyObject(); 

This should look familiar if you have ever created an instance in Object-Oriented code before.

Capturing this

The fact that this isn’t always the same is both the power and the curse of JavaScript. The first line of the Http constructor function captures this in a specific context to help overcome the curse, and leverage the power:

function Http() {
  var self = this;
  // ...
}

At the scope of the constructor function, this refers to the Http object. A private variable is declared and initialized to capture it and make it available to all public and private members of Http no matter what this happens to be during the invocation of those members. Capturing this only once and at the scope of the corresponding constructor function will reduce the possibility of this fulfilling its curse!
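
Here is a contrived sketch of the curse: inside an asynchronous callback, this is no longer the Http instance, but the captured self still is:

function Http() {
  var self = this;
  self.baseUrl = '/api';

  self.get = function (path) {
    setTimeout(function () {
      // `this` here is not the Http instance; `self` still is.
      console.log(self.baseUrl + path);
    }, 0);
  };
}

new Http().get('/items'); // logs "/api/items"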

private Members

The variables and functions created at the scope of the Http constructor function will be available to all public and private members within the Http object.

function Http() {
  var self = this,
      eventHandlers = {};
  
  function addEventHandler(event, handler) { }
  function removeEventHandler(event, handler) { }
}

In this case, self, eventHandlers, and the add/remove event handler functions are private members of Http. They are not accessible to external sources – only public and private members of Http can access the private members of Http.

public Members

The properties and methods exposed on the Http object that can be accessed from external code are considered public.

function Http() {
  var self = this;

  self.get = function (request) { /* ... */ };
  self.post = function (request, data) { /* ... */ };
}

Add public members to the self variable within the constructor function. This allows external code to perform the operations of an Http instance.
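
External code can then create an instance and call the public members (the request shapes here are hypothetical):

var http = new Http();
http.get({ url: '/api/items' });
http.post({ url: '/api/items' }, { name: 'new item' });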

static Members

Members can be static as well. By declaring a variable on the constructor function itself, it can be assigned a value, instance, or function that is public while not depending on an instance created by the constructor function:

function Http() { }
Http.setup = function () { /* ... */ };

The static Http member can be used without creating an Http instance:

// ... application code doesn't create an Http instance
Http.setup();
// ... application code doesn't create an Http instance

The member is public and available anywhere the Http constructor function is available.

Execution Contexts

Without going into the depths of execution contexts in JavaScript, there are a few things to note. This section will describe a couple of different execution contexts and integration points at which JavaScript code is executed.

Global Context

There is only one global context – also called the global scope or global namespace. Any variable defined outside a function exists within the global context:

var x = 9;
function XManager() {
  var self = this;
  
  self.getX = function () { return x; }
  self.setX = function (value) { x = value; }
}

The global-scoped x variable is defined outside of the XManager function and assigned the value of 9. When getX is called, it will return the global-scoped x (the value of 9).

Local Scope – Function Execution Context

The alternative to the Global Scope is Local Scope. The local scope is defined by the function execution context:

var x = 9;
function XManager() {
  var self = this,
      x = 10;

  self.getInstanceX = function () {
    return x; // returns 10
  }
}

In this case, a variable x is declared twice. The first time is within the global execution context. This variable is accessible within XManager. Within the XManager constructor function, the private variable x is declared and initialized to 10. The getInstanceX method will return the variable x that is first in its execution context stack:

Execution Context Stack (David Shariff)

The getInstanceX method is “Active Now”, XManager‘s private variable x is next, followed by the global-scoped variable x, and finally the global execution context.

All of this is to explain why getInstanceX returns 10 and not 9. Powerful stuff!

let & Block-Level Scope

I cannot discuss execution contexts without mentioning the keyword let. This keyword allows the declaration of block-level scope variables. Like ES6 classes, if antiquated browsers need to be supported, the let keyword will not be available.

function Start() {
  let x = 9; // variable assigned the value 9

  function XManager() {
    let x = 10; // different variable assigned the value 10

    function getX() {
      console.log(x); // logs 10
    }

    console.log(x); // logs 10
    getX();
  }

  console.log(x); // logs 9
  XManager();
}

Start();

A block-scoped variable is accessible within its context (Start) and contained sub-blocks (XManager). The main difference from var is that the scope of var is the entire enclosing function. This means that when using let, XManager and the contained sub-blocks (getX) have access to the new variable x assigned the value 10, while the variable x in the context of Start still has the value 9.
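
A small sketch of that difference between var and let:

function Demo() {
  if (true) {
    var a = 1; // scoped to the entire Demo function
    let b = 2; // scoped to this block only
  }
  console.log(a); // logs 1
  console.log(b); // ReferenceError: b is not defined
}
Demo();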

Event Handlers

Client-side JavaScript code is triggered by the user through DOM events as they interact with rendered HTML. When an event is triggered, its subscribers (event handlers) will be called to handle the event.

HTML – Event Subscription

<button id="submit" onclick="clickHandler()">Submit</button>

JAVASCRIPT – Event Subscription

var button = document.getElementById("submit");
button.addEventListener('click', clickHandler);

JAVASCRIPT – Event Handler

function clickHandler() {
  console.log("Click event handled!");
}

Event handling marks the integration point between user interaction with HTML and the application API in JavaScript.

Understanding how to create objects and the Execution Context is important when writing client-side JavaScript. Designing the JavaScript as an API will help to further manage the pros and cons of the language.

Creating An ASP.NET MVC Project in Visual Studio 2015

Creating projects in Visual Studio 2015 is a guided process. This makes it a lot easier to create the correct project. This article describes the process for creating an ASP.NET MVC project within Visual Studio 2015.

Prerequisites:

  • Visual Studio 2015 installed on a machine matching the recommended system specifications set by Microsoft

A New Project

The “New Project” dialog is used to create a new project in Visual Studio 2015. This can be opened in multiple ways:

  1. Start Page > Start list > “New Project…”
  2. File Menu > New > Project

OpenNewProjectDialog

Clicking “New Project…” from the Start list or navigating to “Project…” from the File > New menu will open the “New Project” Dialog. (Note: The dialog will look slightly different depending on the features and licenses installed).

FillingNewProjectDialog

To make sure the project is set up and configured properly by Visual Studio 2015, pay close attention to the following items:

  • .NET Framework version
    • This drop-down should contain all of the .NET Framework versions installed and supported by Visual Studio 2015
    • Select the version appropriate for your project. In this case, 4.6.2 is appropriate.
  • Project Type
    • Make sure to select the “ASP.NET Web Application (.NET Framework)” template to make sure you are not selecting a .NET Core version.
  • Name
    • This will name the ASP.NET MVC project. Name this according to the established naming conventions of the organization. Generally, it would use the following format: <Company Name>.<Project Name>.Web
  • Location
    • Select, Browse, or create a directory to contain the ASP.NET MVC project.
  • Solution Name
    • Name the solution file. If the “Create directory for solution” checkbox is checked, Visual Studio will create a directory for the solution, named by the “Solution name” field. This solution directory contains all of the created projects. This is the default operation and is generally considered best practice.

The next step is to click “OK”. The “New ASP.NET Web Application” dialog is displayed.

SelectATemplate

Within this dialog, make sure to choose the “MVC” template. An optional step is to include a Test project too. Then click “OK”.

Visual Studio will create the project based on the configuration settings. This may take a moment:

CreatingProjectProgress

When the creation process is complete, the project readme will open and the Solution Explorer will contain the solution and project:

ProjectCreateFinished

Enjoy your new project.

Did My Jasmine Expect Method Get Called?

Inspecting expectations in Jasmine

When unit testing with Jasmine, expect() calls are not mandatory. That is, calling expect() at least once is not enforced by Jasmine. I recently ran into a problem which caused me to ask myself “did that expect method get called?”. I couldn’t count on Jasmine for this – in fact, my tests pass whether I include the expect() call or comment it out! So I went digging…

I determined that I could simply create spies for my expect() calls. This is an easy way to leverage Jasmine to inspect your tests. Simply create your spy:

const expectSpy: jasmine.Spy =
   spyOn(window,'expect').and.callThrough();

I am using TypeScript for my unit tests. Since the expect() method is global and I am running my tests in a browser, I use the window object directly. There are ways to obtain the global object without this sort of hard-coding, but that is beside the point.

Moving on, the expect() calls must still work properly, so and.callThrough() is called. This is important. Without including and.callThrough(), your tests will fail because the spy replaces Jasmine’s expect() implementation entirely rather than passing calls through to it.

Here is a more complete example of a test with an expect spy – slightly modified from a sample Angular 2 application I have been working on:

it('should trigger on selection change', async(() => {
  const expectSpy: jasmine.Spy =
    spyOn(window,'expect').and.callThrough();

  const triggerSpy = spyOn(component, 'triggerThemeChanged');

  const select =
    fixture.debugElement.query(By.css('select.theme-selector'));

  dispatchEvent(select.nativeElement, 'change');

  fixture.whenStable().then(() => {
    expect(triggerSpy).toHaveBeenCalledTimes(1);
  }).then(() => {
    expect(expectSpy).toHaveBeenCalledTimes(2);
  });
}));

There are a few things about this test that are not the point of this article – what the heck is async() and the apparent improper use of dispatchEvent()? The important bits are the use of Promises as implied by the use of then() callbacks, the creation of the expect spy, and the inspection of the expect spy.

The test creates the expect spy and then uses expect() as usual within the test until it finally inspects the expect spy. Remember, the inspection of the expect spy counts as an expect() call! This is why expect(expectSpy).toHaveBeenCalledTimes(2) is called with 2 rather than 1.

I stopped at the call count. This test could be extended to further leverage the Jasmine API by looking at expectSpy.calls and its other handy methods to make sure the expect() calls were made properly. I’ll leave that as an exercise for the reader. Just make sure your testing, at a minimum, covers the scope of your problem.

If you have had similar issues or have explored this in more depth I would be very interested in hearing about your journey! Comments are welcomed and appreciated.

Meteor Hang-up: Extracting Package….

An all too common tale of a stalled package installation and the valiant efforts to resolve it

In the world of Node.js and NPM, things can change at an increasingly rapid pace. This causes pain when starting or upgrading projects that require NPM packages. While there are sites like Greenkeeper, I see them as symptoms of a flawed system. Yes, I will say that without offering alternative solutions because at the moment I am aware of exactly zero. Suggestions welcome!

It is a wonderful world of possibility.

Complaining about NPM is not the point of this article. I’ll stop wasting time:

Recently I came across a few excellent tutorials about using Meteor, Ionic 2, Angular and React. They eventually brought me to Telescope Nova. My first thought was: this looks promising.

After forking and cloning and other Gitisms, I was ready to start the application:

npm install
npm run start

Of course, I have a Microsoft development background, so when I saw a bunch of red because of ‘.sh’, I wondered why these two letters were such a problem. I ended up having to update my start script to exclude this bit of code. The script I excluded simply renames a sample_settings.json file to settings.json. I figured that was a safe thing to shortcut in this case by renaming the file myself.

My next step was to try it again!

> Nova@1.0.0 start C:\Demo\Telescope
> meteor --settings settings.json
 [[[[[ C:\Demo\Telescope ]]]]]
 => Started proxy.
 => Started MongoDB.
 => Extracting std:account-ui@1.2.17

To be honest, I let it try for a few hours and it just could not get that pesky package extracted. Certainly, something had gone wrong before that. After digging into the depths of Nova, Meteor, and NPM, I finally explicitly searched within Stack Overflow for Extracting std:accounts-ui.

The search came up with only 2 results which are both linked at the bottom of this article. Most importantly: following the suggestions solved my problem.

I fixed the issue by relocating the 7z executable (7z.exe) from C:\Users\[UserName]\AppData\Local\.meteor\packages\meteor-tool\1.4.2_3\mt-os.windows.x86_32\dev_bundle\bin to a place outside of any source code, build code, and tool locations. I relocated it instead of removing it because I didn’t want to mess up my machine any more than it already may have been. It turns out the missing 7z.exe was all it took to get my Meteor package installed properly!

It figures that the solution was to create a sort of FileNotFound scenario.

In an effort to spread the word, the following links lead me to this solution:

http://stackoverflow.com/questions/41155583/meteor-1-4-2-3-adding-package-extracts-forever-windows

http://stackoverflow.com/questions/41195227/meteor-package-extracting-forever

https://github.com/studiointeract/accounts-ui/issues/67

https://github.com/meteor/meteor/issues/7688

I hope this helps. It is a rather simple solution in the end. I am very interested in learning about your past issues with our current favorite packaging system and its various dependencies. Feel free to comment if you have hard-fought wisdom to share!

UPDATE: Just a quick update here: it turns out this approach can be helpful when updating packages or if you get stuck at ‘Extracting meteor-tool@1.4.2_5’ (after the recent patch). Note: extracting meteor tools can take a while (upwards of 30+ minutes), so expect to wait a bit to know whether it fails.