How to Easily Overlap HTML Elements Without Position CSS

Discover a simple way to overlap HTML elements by leveraging the power of the CSS grid layout.

Sometimes an approved user interface design requires us to overlap HTML elements. This article describes how to do this with just HTML and CSS using the CSS grid layout.

The primary aim of the examples in this article is to prove a point. Similar code may meet your requirements, but it may not fit them exactly.

HTML

Starting with the definition of the HTML, we need at least three elements: the grid container, the background element, and the foreground element.

<div class="grid-container">
  <div class="background">
    This is the background
  </div>
  <div class="foreground">
    This is the foreground
  </div>
</div>

The grid container wraps each of the elements that need to be overlapped. Meanwhile, the child elements have classes specifying where in the grid the browser should place them.

From there, we can control which elements overlap implicitly, via the order of the elements inside the grid container, or explicitly, with the z-index CSS property.
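For example, using the background and foreground classes defined later in this article, a minimal sketch of the explicit approach could look like this (grid items respond to z-index even without a position property):

.background {
  z-index: 2; /* hypothetical override: draw the background element on top */
}

.foreground {
  z-index: 1;
}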

CSS

With the HTML defined, the CSS still needs to be specified. The CSS is more involved and contains the details of the grid itself. Understanding these details will allow you to adapt this code to your requirements.

.grid-container {
  display: grid;
  grid-template-columns: 1fr 1fr 1fr 1fr;
  grid-template-rows: 1fr 1fr 1fr 1fr;
}

.grid-container .background {
  grid-column: 1 / 5;
  grid-row: 1 / 4;
}

.grid-container .foreground {
  grid-column: 2 / 4;
  grid-row: 3 / 5;
}

The Grid to Overlap HTML

The grid container defines a 4 x 4 grid. I am using fr units for simplicity in this article; your requirements may differ. Learn more about the fr (flex) value here: https://developer.mozilla.org/en-US/docs/Web/CSS/flex_value.

Firstly, a grid’s definition includes setting the display property to grid. Additionally, the grid-template-columns and grid-template-rows CSS properties define the rows, columns, and their sizes. Learn more about CSS grid layout at: https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Grid_Layout.

Furthermore, we still need to specify where elements are placed within the grid. Using the CSS above as an illustration, the background class positions its element behind the foreground element by setting the shorthand grid-column and grid-row properties to span more of the grid than the foreground does.

To clarify, the grid-column property specifies the horizontal start and end edges of the HTML element within the grid.

Similarly, the grid-row property specifies the vertical start and end edges of the HTML element within the grid.

Remember: grid line numbers start at 1, and the last line number is one more than the number of rows or columns in the grid, because we are specifying an edge, not a cell. For example, grid-column: 1 / 5 spans all four columns, since line 5 is the right edge of the fourth column.

In addition to the background class, we need to create the foreground class. The foreground class positions an element in front of the background element. Much like the background class, do this using the grid-column and grid-row properties.

Even though a requirement to overlap HTML elements can mean several things, adjusting the background class’s grid-column and grid-row properties can help position the background element to better match your requirements.
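For instance, a hypothetical variant that shrinks the background element while preserving the overlap could look like this:

.background {
  grid-column: 1 / 4; /* one column narrower than the original */
  grid-row: 2 / 4;    /* starts one row lower than the original */
}

The foreground element still overlaps the background in the cells they share.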

Overlap HTML

In summary, the CSS grid layout allows us to overlap HTML elements with just HTML and CSS. To that end, the grid-column and grid-row properties specify where each element is placed within the grid.

To demonstrate, the image below shows the grid lines from Chrome DevTools. You can see the 4 x 4 grid and the positions of the HTML elements.

The grid lines and the position of the HTML elements.

See a working example on JSFiddle: https://jsfiddle.net/asusultra/5cho1p2g/2/.

Note: You may notice that I use a few CSS properties for colors, height, and width. This is only for display purposes.

How to Manage an Ever-Changing User Interface

Discover a philosophy of user interface management that leads to adaptable front-ends capable of keeping up with dynamic market requirements and the ever-changing user interface.

The user interface is the window into the needs of a business. These needs should be driven by customers, whether internal or external; we’ll refer to these customers as the market. As market needs shift, the user interface will need to change. A responsibility of a professional front-end developer is to design the user interface implementation to support this change. How, then, do we manage an ever-changing user interface?

Identifying which areas of the front-end are most impacted is an essential first step in managing shifts in the market. As we know from Robert C. Martin and Juval Lowy, each of these areas is an axis of change. Considering the volatility of an area helps when designing the front-end to more easily adapt to change.

We’ll see that the user interface is never done and certain areas of the user interface will likely change more frequently than others. We will also consider how we could exploit the axis of change to deliver a user interface designed with enough fluidity in the right areas to more easily flow with fluctuating market needs.

Volatility Rating

Everything about the user interface will change. This means color, size, position, text, user experience, design – everything – will change. There is no judgment here. The user interface is encouraged to change if it better suits the market. However, there are some areas that will change more frequently than others and have a greater impact on the system when they do. Considering these areas is essential when designing the user interface implementation.

Frequency

Starting with a simple example, the look and feel of the user interface may change. If, for instance, the look and feel will always change, the frequency is 100%.

Another area that may be added or altered is a data model. When a user interface contacts a service, a contract defines the data that will be sent between the front-end and the service. This is the data model. When the market decides it needs an extra field on a form, a “button here that does x”, or a column removed from a table, the data model must be altered or extended. This has its own change frequency.
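As a hypothetical illustration in TypeScript, a contract like the one below changes every time the market asks for a new field:

// A hypothetical contract between the front-end and a service.
interface CustomerForm {
  firstName: string;
  lastName: string;
  email: string;
  // Added later when the market requested an extra field on the form;
  // optional, so existing service calls keep working.
  middleName?: string;
}

Each such change ripples into the form, its validation, and the service call – which is why the data model carries its own change frequency.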

How frequently an area changes helps determine its volatility, how to approach its design, and how to approach future changes.

Impact

The look and feel of the user interface may always change, but frequency is only one part of the volatility rating. The impact of a change also needs to be considered. Areas that touch the entire system have the greatest impact when changed; the more contained an area is, the smaller the impact of changing it. An example of this can be found in a previous article titled The Monolith Component. While the article focuses on a malformed component, it describes the kinds of impact code can have. Considering the impact is an important part of deciding how to make a change.

Exploiting the Evolution

Some areas are innately difficult to alter, especially when they impact a website’s user interface as a whole – such as look and feel. There are common practices for dealing with something like this: use a CSS pre-processor and apply principles and practices such as OOCSS, BEM, and SMACSS. With the advent of Modular CSS and related practices, managing the look and feel of a website is less painful.

There are libraries and frameworks that aim to make front-end development less painful. Yet they can only go so far; the benefit depends on how these helpful libraries and frameworks are applied – let’s call this advantaged code. Leveraging advantaged code depends on two concepts: continuous improvement and designing for change. These concepts attempt to answer a fundamental question: how can I make it easier to manage new or existing code in an ever-changing user interface?

Continuous Improvement

As more is learned, more can be applied. The details of the code become deeply understood, the areas of the code that change most begin to reveal themselves, and the impact of each change on the system has a greater chance of being measurable.

When learning these things about the user interface, and how it is impacted by changing market needs, the code can be continuously improved to anticipate those changes.

Design for Change

Designing a user interface for change is only valuable if the rate of change and its impact on the system are measured and deemed inevitable. This is to avoid unnecessary costs such as increased user interface complexity and reduced available budgets.

As the user interface evolves with market needs it should continuously improve in the areas where the rate of change and the impact on the system are high enough. What is high enough in terms of change rate and system impact is largely determined by project concerns – available time and budget, developer experience, accessible business knowledge, etc.

I am not saying all changes are valid – meaning there are some cases when a change should not be made. A simple example of this is security. If a requested change would compromise the security of the application, it is the responsibility of a professional developer to say “no” – preferably with an amount of tact appropriate for your relationship with the market. And hopefully there is enough trust in the partnership that the market will thank you for looking out for them.

Excluding requests that are detrimental to the system, measuring the rate of change and the impact on the system makes changes to the front-end easier to support and maintain – you may even welcome them.

Creating ASP .NET Core Startup Tasks

How to create ASP.NET Core startup tasks

Reference: https://andrewlock.net/reducing-latency-by-pre-building-singletons-in-asp-net-core/

The article above walks through a useful alternative to the other app-startup features in ASP.NET Core, and it also describes those other features.

The approach is useful when operations need to be performed only once, before the application begins running via IWebHost.Run().

A Powerful Docker Container for an Nx Workspace Application

Discover how to easily create a Docker container for an Nx Workspace application with this step-by-step guide to building a site deployable in seconds with Docker.

In a previous post, I briefly described the Nx Workspace and how to create Angular applications and libraries with Nrwl Extensions. I wanted the ability to run a prod build of the app in Docker for Windows, so here is just one way of accomplishing that. With the Nx Workspace already set up, I had to add only a few more files. This article assumes an Nx Workspace exists with an app named “client-demo”. It follows a similar approach to creating a static website using Docker. This article describes how to create a simple Docker container for an Nx Workspace application.

NGINX

I used nginx instead of a nanoserver image due to size (~16 MB compared to 1+ GB), which means an nginx.conf file is needed. Place the file at the root of the Nx Workspace (at the same level as the angular.json file):

# nginx.conf

worker_processes 1;

events {
  worker_connections 1024;
}

http {
  server {
    listen 80;
    server_name localhost;

    root /usr/share/nginx/html;
    index index.html index.htm;
    include /etc/nginx/mime.types;

    gzip on;
    gzip_min_length 1000;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/css application/javascript;

    location / {
      try_files $uri $uri/ /index.html;
    }
  }
}

Dockerfile

It is now time for the Dockerfile. This file acts as a definition file for a Docker image. Name it Dockerfile.client-demo to match the scripts added later in this article, and place it at the same level as the nginx.conf file:

# Dockerfile.client-demo

FROM nginx:alpine
COPY nginx.conf /etc/nginx/nginx.conf
WORKDIR /usr/share/nginx/html
COPY dist/apps/client-demo .

Docker Compose

The Dockerfile is created. To use Docker Compose, create a docker-compose.client-demo.yml file (again, named to match the scripts added later) at the same level as the Dockerfile:

# docker-compose.client-demo.yml

version: '3.1'

services:
  app:
    image: 'client-demo-app'
    build: '.'
    ports:
      - 3000:80

Docker Ignore

When creating a Docker image, not every file is needed. In this case, only the dist/ folder is really needed. A .dockerignore file keeps unneeded files and directories out of the build context. Place this file at the same level as the Dockerfile:

# .dockerignore

node_modules
.git
libs
tools
apps

Package.json

To leverage the files that have been created, add scripts to the package.json file. This file should already exist within the Nx Workspace. Simply add the following scripts:

// package.json

...
"scripts": {
  ...
  "client-demo-build": "ng build client-demo --prod",
  "client-demo-image": "docker image build -f Dockerfile.client-demo -t client-demo-app .",
  "client-demo-run": "docker-compose -f docker-compose.client-demo.yml up",
  "client-demo-stop": "docker-compose -f docker-compose.client-demo.yml down",
  "client-demo": "yarn client-demo-build && yarn client-demo-image && yarn client-demo-run"
},
...

Each of these scripts can run with npm run <script> or yarn <script>.

client-demo-build: This script runs ng build with the --prod flag to create a prod build of the Angular app.

client-demo-image: This script builds the client-demo-app image given a specific Dockerfile named Dockerfile.client-demo.

client-demo-run: This script uses docker-compose to run the app with docker-compose up. A specific file is specified with the -f flag: docker-compose.client-demo.yml.

client-demo-stop: This script acts as the opposite of docker-compose up. As long as this script runs after the client-demo-run script, the app can be started and stopped any number of times.

client-demo: This script simply chains the execution of other scripts to create the prod build of the Angular app, create the Docker image, and serve the app. As it is written, yarn is required.

After creating the Nx Workspace, adding the Docker support files, and adding the scripts to package.json, run npm run client-demo or yarn client-demo and access the app from a browser at http://localhost:3000.

Default Nx Workspace application

Run npm run client-demo-stop or yarn client-demo-stop to stop the app.

An Introduction to the Nx Workspace

Learn how to create and maintain flexible Angular applications with Nrwl Extensions

Angular development is great. It offers a great way to break problems into small, easily managed parts. With the Angular CLI, more power is at our fingertips. Narwhal Technologies Inc goes even further by providing extensions to the Angular CLI. In this article, I will describe how to leverage the Nrwl Extensions to create and maintain flexible Angular apps.

To learn how to install Nrwl, visit their getting started guide: https://nrwl.io/nx/guide-getting-started

Make sure to install Angular and Nrwl globally using npm. Here is a list of versions I used for this article:

node: 8.15.0
npm: 5.0.0
"@angular/cli": "~7.1.0"
"@nrwl/schematics": "7.4.0"

Creating an Nx Workspace

The Nx Workspace is a collection of Angular applications and libraries. When creating the workspace there will be a number of options available during generation. To start, run the following command:

create-nx-workspace <workspace-name>

This will begin the process of generating the workspace with the provided name.

After a few initial packages are installed, a prompt will display to choose the stylesheet format:

The stylesheet format prompt

Use the arrow keys to choose between CSS, SCSS, SASS, LESS, and Stylus. After the desired format is highlighted, press Enter.

The next prompt to display is the NPM scope. This allows applications to reference libraries using an npm scope. For example, given a library called ‘my-lib’ and an npm scope of ‘my’, an application can import the library with the following statement:

import { MyLibModule } from '@my/my-lib';

To learn more about npm scopes check out their documentation: https://docs.npmjs.com/about-scopes

After specifying an NPM scope, press Enter. A third prompt will appear to specify which package manager to use:

The NPM scope and package manager prompts

Use the arrow keys to choose between npm and Yarn. After the desired package manager is highlighted, press Enter.

Now that the generation process has everything it needs, it will continue to create the folder structure, files, and configuration:

The completed Nx Workspace generation

Project Structure

There are two important folders available after the workspace generation.

Folder  Description
/apps   Contains a collection of applications in the workspace.
/libs   Contains a collection of libraries in the workspace.

Adding Applications

Before adding an application with the CLI, be sure to navigate into the workspace folder. In our example, the folder is ‘my-platform-workspace’. Then use the Angular CLI to generate the app:

PS C:\NoRepo\NxWorkspace> cd my-platform-workspace
PS C:\NoRepo\NxWorkspace\my-platform-workspace> ng g app <app name>

Tip

When using Visual Studio Code, open the Nx Workspace folder. The built-in terminal will then default to the necessary directory.

After adding the application, a number of prompts will display and the app generation will proceed:

Adding an application

Running the application can be done using the Angular CLI as usual:

PS C:\NoRepo\NxWorkspace\my-platform-workspace> ng serve my-first-app

When the app is done building, go to http://localhost:4200 from a browser and see the default view:

The default app built with Nrwl Nx

Adding a Library

Adding a library is as easy as adding an application with the following command:

ng g lib <library name>

Generally, a module should be created for a library so it can be easily imported by applications, as shown in the sketch below. Once the library is created, components can be added to it. Make sure to export any library components or other Angular objects (providers, pipes, etc.) that need to be used by applications.
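As a minimal sketch, a library module might look like this (the MyLibModule and MyLibComponent names are hypothetical, matching a library generated as ‘my-lib’):

import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { MyLibComponent } from './my-lib.component';

@NgModule({
  imports: [CommonModule],
  declarations: [MyLibComponent],
  // Export anything applications need to import from '@my/my-lib'.
  exports: [MyLibComponent]
})
export class MyLibModule {}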

The Dependency Graph

Looking at package.json, there are a number of scripts that have been added. One that is nice to have is to generate and view a dependency graph of all of the applications and libraries in the workspace. A dependency graph can be generated using the following command:

npm run dep-graph

For example, I’ve added my-lib and my-lib2 to the my-first-app. This is the resulting dependency graph:

A sample dependency graph

Here we can see that the my-first-app-e2e (end-to-end) test application depends on the my-first-app application, and the application depends on the libraries my-lib and my-lib2. This is a very simple example; the graph gains more value as more applications share more libraries.

It is also possible to get the JSON version of the dependency graph which can be used in various creative ways to help automate your workflow. This is all thanks to Nrwl Extensions and the power of Nx Workspaces.

How to Easily Create a Static Website With Docker

Discover how to easily create a static website with Docker that can be viewed from a browser

The goal of this article is to describe a process for serving static web files from a Docker container. It is surprisingly easy to create a static website with Docker.

The website structure is very simple and consists of only 3 files:

./site/
  style.css
  app.js
  index.html

At the project root there is a Dockerfile:

./
  Dockerfile

The website displays “Loading” text. When the JavaScript file is loaded, Hello World is displayed in big red letters:

Hello World “Loading” view

Here is the HTML:

<html>
  <head>
    <title>Sample Website</title>
    <script src="app.js"></script>
    <link href="style.css" rel="stylesheet" />
  </head>
  <body>Loading</body>
</html> 
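The app.js and style.css files are not shown in the listing; a minimal hypothetical pair that produces the described behavior could be:

// app.js – hypothetical: replace the "Loading" text once the page has loaded
window.addEventListener('load', function () {
  document.body.textContent = 'Hello World';
});

/* style.css – hypothetical: big red letters */
body {
  color: red;
  font-size: 4em;
}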

Here is the Dockerfile:

FROM nanoserver/iis
COPY ./site/ /inetpub/wwwroot/ 

The two lines in the Dockerfile are key to getting the webserver image created. This file allows us to define a new Docker image, and the image is used to run a Docker container.

The first line specifies the base image. In this case, it is an image with a configured Nano Server with IIS. There are smaller webserver images that are usually preferable.

The second line will copy the local project files from the ‘site’ folder to the wwwroot folder of the nanoserver image.

That is everything needed to get a web server started to serve the web page. To create the image, start with docker build:

> docker build -t webserver-image:v1 .

The docker build command is used to create an image. When it is executed from a command line within the directory of a Dockerfile, that file will be used to create the image. The -t option allows you to name and optionally tag the image. In this case, the name is “webserver-image” with the “v1” tag. Tags are generally used to version images. The last argument is the path used to build the image – in this case, ., the current directory.

Running the command will build the image:

> docker build -t webserver-image:v1 .
Sending build context to Docker daemon 26.11kB
Step 1/2 : FROM nanoserver/iis
---> 7eac2eab1a5c
Step 2/2 : COPY ./site/ /inetpub/wwwroot/
---> fca4962e8674
Successfully built fca4962e8674
Successfully tagged webserver-image:v1

The build succeeded. This can be verified by running docker image ls:

> docker image ls
REPOSITORY        TAG  IMAGE ID      CREATED        SIZE
webserver-image   v1   fca4962e8674  3 seconds ago  1.29GB

If the build doesn’t succeed, there may be a few things to double-check. This includes making sure the Dockerfile is available, nanoserver images can be pulled, and paths are accurate.

Now that an image is created, it can be used to create a container. This can be done with the docker run command:

> docker run --name web-dev -d -it -p 80:80 webserver-image:v1

After running the command, the container id will be displayed:

> docker run --name web-dev -d -it -p 80:80 webserver-image:v1
fde46cdc36fabba3aef8cb3b91856dbd554ff22d63748d486b8eed68a9a3b370

A docker container was created successfully. This can be verified by executing docker container ls:

> docker container ls
CONTAINER ID  IMAGE               COMMAND                   CREATED         STATUS         PORTS                NAMES
fde46cdc36fa  webserver-image:v1  "c:\\windows\\system32…"  31 seconds ago  Up 25 seconds  0.0.0.0:80->80/tcp   web-dev

The container id is displayed (a shorter version of what was shown when executing docker run). The image that was used for the container is also displayed along with when it was created, the status, port, and the container name.

The following docker inspect command will display the IP address:

> docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" web-dev
172.19.112.171

This IP address can be entered in a browser to view the page:

Hello World “Loading” view

There is now a working container that serves the web page!

I learn by doing, and I have found that most of us in tech do. That is why I got Manning Publications’ Docker in Action: to learn Docker using their step-by-step instructions and immediately actionable information that applies to enterprise-level projects.

Their “In Action” series takes the reader on an active journey by way of doing. After learning the details of using Docker to release enterprise-level software I wanted to be sure I understood the concepts and practices behind the delivery. Manning Publications has another book called Docker in Practice. Their “In Practice” series dives deep into the concepts presented by the technology. Together, Docker in Action and Docker in Practice create a well-rounded course in leveraging Docker effectively.

Working with Vanilla JS in Web Applications

Writing JavaScript for IE and other antiquated browsers means classes and other helpful ES6 features are not available. A similar effect can be achieved, though, and it is actually quite easy to do!

Why use Vanilla JS instead of any number of the frameworks available, or even TypeScript? The answer is largely irrelevant if a choice has already been made. However, deciding to replace JavaScript with some alternative for all use cases is missing the mark. This article will describe the use of Vanilla JS, leaving the choice of language up to you.

Class Definitions & Namespaces

ES6 classes are not yet fully supported in the browser. Many of the limitations mentioned in this article are most relevant when developing with ES5 – such as developing for Internet Explorer or other antiquated browsers. Even without the full support of classes, a similar effect can be achieved in JavaScript and it is actually quite easy to do!

We first want to make sure that the class definition is contained. This means it should not pollute the global namespace with methods and variables. This can be accomplished by using a closure – specifically, an immediately invoked function expression (IIFE).

(function (global) {
  "use strict";

  global.API = new MyObject();

  function MyObject() {
    var self = this;
    // var privateVariable ...
    // function privateMethod() ...
    // self.publicMethod = function ...
  }
})((1,eval)('this'));

Notice that the global namespace is passed to the IIFE – since an IIFE is just a function, arguments can be passed to it like any other. If you want to know more about how the global namespace is obtained, check out this enlightening Stack Overflow post: (1,eval)(‘this’) vs eval(‘this’) in JavaScript?

"use strict"; //seriously, do it.

The class can be initialized and stored at global scope such as inside a single app-specific namespace:

(function (global,app,http) {
  "use strict";

  global[app] = global[app] || {};
  global[app][http] = new Http();

  // global.App.http.publicMethods()
  function Http() {
    var self = this;
    // var privateVariables ...
    // self.publicMethods = function ...
    // function privateFunctions() ...
  }
})((1,eval)('this'),'App','http');

I find it easier to write client-side JavaScript as an API. Leveraging the design patterns this encourages offers many benefits to code quality and maintenance. In the code above, an Http instance is assigned to the http property in the global.App namespace. Certainly, this should contain our methods for making HTTP calls! Code organization is one of the best things about approaching the application’s client-side JavaScript in this way. Usually, the constructor function, not an instance, would be stored – which allows certain SOLID principles to be applied.

The Constructor Function

The Http function is a special kind – a constructor function. This means an instance can be created using the new operator with the constructor function call.

function MyObject() { }
var instance = new MyObject(); 

This should look familiar if you have ever created an instance in Object-Oriented code before.

Capturing this

The fact that this isn’t always the same object is both the power and the curse of JavaScript. The first line of the Http constructor function captures this in a specific context to help overcome the curse and leverage the power:

function Http() {
  var self = this;
  // ...
}

At the scope of the constructor function, this refers to the new Http instance. A private variable is declared and initialized to capture it, making the instance available to all public and private members of Http no matter what this happens to be when those members are invoked. Capturing this only once, at the scope of the corresponding constructor function, reduces the chance of this fulfilling its curse!
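As a hypothetical illustration, self keeps the instance available inside a callback where this would otherwise point elsewhere:

function Http() {
  var self = this;
  self.baseUrl = '/api'; // hypothetical property for illustration

  self.get = function (path) {
    setTimeout(function () {
      // Inside this callback, `this` is no longer the Http instance,
      // but `self` still is.
      console.log('GET ' + self.baseUrl + path);
    }, 0);
  };
}

var http = new Http();
http.get('/users'); // logs "GET /api/users"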

private Members

The variables and functions created at the scope of the Http constructor function will be available to all public and private members within the Http object.

function Http() {
  var self = this,
      eventHandlers = {};
  
  function addEventHandler(event, handler) { }
  function removeEventHandler(event, handler) { }
}

In this case, self, eventHandlers, and the add/remove event handler functions are private members of Http. They are not accessible to external sources – only public and private members of Http can access the private members of Http.

public Members

The properties and methods exposed from the Http object – those that can be accessed by external code – are considered public.

function Http() {
  var self = this;

  self.get = function (request) { /* ... */ };
  self.post = function (request, data) { /* ... */ };
}

Add public members to the self variable within the constructor function. This allows external code to perform the operations of an Http instance, as sketched below.
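For example, external code could use an instance like this (the request argument shape is hypothetical):

var http = new Http();
http.get({ url: '/api/items' });
http.post({ url: '/api/items' }, { name: 'new item' });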

static Members

Members can be static as well. By declaring a variable on the constructor function itself, it can be assigned a value, instance, or function that is public while not depending on an instance to be created using the constructor function:

function Http() { }
Http.setup = function () { /* ... */ };

The static Http member can be used without creating an Http instance:

// ... application code doesn't create an Http instance
Http.setup();
// ... application code doesn't create an Http instance

The member is public and available anywhere the Http constructor function is available.

Execution Contexts

Without going into the depths of execution contexts in JavaScript, there are a few things to note. This section will describe a couple of different execution contexts and integration points at which JavaScript code is executed.

Global Context

There is only 1 global context – or global scope or global namespace. Any variable defined outside a function exists within the global context:

var x = 9;
function XManager() {
  var self = this;
  
  self.getX = function () { return x; }
  self.setX = function (value) { x = value; }
}

The global-scoped x variable is defined outside of the XManager function and assigned the value of 9. When getX is called, it will return the global-scoped x (the value of 9).

Local Scope – Function Execution Context

The alternative to the Global Scope is Local Scope. The local scope is defined by the function execution context:

var x = 9;
function XManager() {
  var self = this,
      x = 10;

  self.getInstanceX = function () {
    return x; // returns 10
  }
}

In this case, a variable x is declared twice. The first time is within the global execution context. This variable is accessible within XManager. Within the XManager constructor function, the private variable x is declared and initialized to 10. The getInstanceX method will return the variable x that is first in its execution context stack:

Execution Context Stack (David Shariff)

The getInstanceX method is “Active Now”, XManager’s private variable x is next, followed by the global-scoped variable x, and finally the global execution context.

All of this is to explain why getInstanceX returns 10 and not 9. Powerful stuff!

let & Block-Level Scope

I cannot discuss execution contexts without mentioning the keyword let. This keyword allows the declaration of block-level scope variables. As with ES6 classes, the let keyword is not available if antiquated browsers need to be supported.

function Start() {
  let x = 9; // variable assigned the value 9

  function XManager() {
    let x = 10; // a different variable, assigned the value 10

    function getX() {
      console.log(x); // logs 10
    }

    console.log(x); // logs 10
    getX();
  }

  console.log(x); // logs 9
  XManager();
}

Start();

A block-scoped variable is accessible within its context (Start) and contained sub-blocks (XManager). The main difference from var is that the scope of var is the entire enclosing function. This means when using let, XManager and the contained sub-blocks (getX) have access to the new variable x assigned the value of 10, while the variable x in the context of Start still has the value of 9.

Event Handlers

Client-side JavaScript code is triggered by the user through DOM events as they interact with rendered HTML. When an event is triggered, its subscribers (event handlers) will be called to handle the event.

HTML – Event Subscription

<button id="submit" onclick="clickHandler()">Submit</button>

JAVASCRIPT – Event Subscription

var button = document.getElementById("submit");
button.addEventListener('click', clickHandler);

JAVASCRIPT – Event Handler

function clickHandler() {
  console.log("Click event handled!");
}

Event handling marks the integration point between user interaction with HTML and the application API in JavaScript.

Understanding how to create objects and the Execution Context is important when writing client-side JavaScript. Designing the JavaScript as an API will help to further manage the pros and cons of the language.

Creating An ASP.NET MVC Project in Visual Studio 2015

Creating projects in Visual Studio 2015 is a guided process, which makes it a lot easier to create the correct project. This article describes the process for creating an ASP.NET MVC project within Visual Studio 2015.

Creating an ASP.NET MVC Project in Visual Studio 2015

The subsequent sections walk through that process step by step.

Prerequisites:

  • Visual Studio 2015 installed on a machine matching the recommended system specifications set by Microsoft

A New Project

The “New Project” dialog is used to create a new project in Visual Studio 2015. This can be opened in multiple ways:

  1. Start Page > Start list > “New Project…”
  2. File Menu > New > Project

Opening the New Project dialog

Clicking “New Project…” from the Start list or navigating to “Project…” from the File > New menu will open the “New Project” dialog. (Note: the dialog will look slightly different depending on the features and licenses installed.)

Filling in the New Project dialog

To make sure the project is set up and configured properly by Visual Studio 2015, pay close attention to the following items:

  • .NET Framework version
    • This drop-down should contain all of the .NET Framework versions installed and supported by Visual Studio 2015
    • Select the version appropriate for your project. In this case, 4.6.2 is appropriate.
  • Project Type
    • Select the “ASP.NET Web Application (.NET Framework)” template to ensure you are not selecting a .NET Core version.
  • Name
    • This will name the ASP.NET MVC project. Name it according to the established naming conventions of the organization. Generally, it would use the following format: <Company Name>.<Project Name>.Web
  • Location
    • Select, Browse, or create a directory to contain the ASP.NET MVC project.
  • Solution Name
    • Name the solution file. If the “Create directory for solution” checkbox is checked, Visual Studio will create a directory for the solution, named by the “Solution name” field. This solution directory contains all of the created projects. This is the default operation and is generally considered best practice.

The next step is to click “OK”. The “New ASP.NET Web Application” dialog is displayed.

Selecting a template

Within this dialog, make sure to choose the “MVC” template. An optional step is to include a Test project too. Then click “OK”.

Visual Studio will create the project based on the configuration settings. This may take a moment:

Project creation in progress

When the creation process is complete, the project readme will open and the Solution Explorer will contain the solution and project:

The finished project

Enjoy your new project.

Did My Jasmine Expect Method Get Called?

Inspecting expectations in Jasmine

When unit testing with Jasmine, expect() calls are not mandatory. That is, calling expect() at least once is not enforced by Jasmine. I recently ran into a problem which caused me to ask myself, “did that expect method get called?”. I couldn’t count on Jasmine for this – in fact, my tests pass whether I include the expect() call or comment it out! So I went digging…

I determined that I could simply create spies for my expect() calls. This is an easy way to leverage Jasmine to inspect your tests. Simply create your spy:

const expectSpy: jasmine.Spy =
   spyOn(window,'expect').and.callThrough();

I am using TypeScript for my unit tests. Since the expect() method is global and I am running my tests in a browser, I use the window object directly. There are ways to obtain the global object without this sort of hard-coding, but that is beside the point.

Moving on, the expect() calls must still work properly, so and.callThrough() is called. This is important: without and.callThrough(), your tests will fail because the spy replaces Jasmine’s actual expect() execution with a stub.

Here is a more complete example of a test with an expect spy – slightly modified from a sample Angular 2 application I have been working on:

it('should trigger on selection change', async(() => {
  const expectSpy: jasmine.Spy =
    spyOn(window,'expect').and.callThrough();

  const triggerSpy = spyOn(component, 'triggerThemeChanged');

  const select =
    fixture.debugElement.query(By.css('select.theme-selector'));

  dispatchEvent(select.nativeElement, 'change');

  fixture.whenStable().then(() => {
    expect(triggerSpy).toHaveBeenCalledTimes(1);
  }).then(() => {
    expect(expectSpy).toHaveBeenCalledTimes(2);
  });
}));

There are a few things about this test that are not the point of this article – what the heck is async() and the apparent improper use of dispatchEvent()? The important bits are the use of Promises as implied by the use of then() callbacks, the creation of the expect spy, and the inspection of the expect spy.

The test creates the expect spy and then uses expect() as usual within the test until it finally inspects the expect spy. Remember, the inspection of the expect spy counts as an expect() call! This is why expect(expectSpy).toHaveBeenCalledTimes(2) is called with 2 rather than 1.

I stopped at the call count. This test could be extended to further leverage the Jasmine API by looking at expectSpy.calls and its other handy methods to make sure the expect() calls were made properly. I’ll leave that as an exercise for the reader – a hypothetical starting point follows. Just make sure your testing, at a minimum, covers the scope of your problem.
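As a sketch, the spy’s calls API can reveal what each expect() call received:

// Hypothetical further inspection using the jasmine.Spy calls API:
const firstExpectArgs = expectSpy.calls.argsFor(0); // arguments of the first expect() call
const allExpectArgs = expectSpy.calls.allArgs();    // arguments of every expect() call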

If you have had similar issues or have explored this in more depth I would be very interested in hearing about your journey! Comments are welcomed and appreciated.

Meteor Hang-up: Extracting Package….

An all too common tale of a stalled package installation and the valiant efforts to resolve it

In the world of Node.js and NPM, things can change at an increasingly rapid pace. This causes pain when starting or upgrading projects that require NPM packages. While there are sites like Greenkeeper, I see them as symptoms of a flawed system. Yes, I will say that without offering alternative solutions because at the moment I am aware of exactly zero. Suggestions welcome!

It is a wonderful world of possibility.

Complaining about NPM is not the point of this article. I’ll stop wasting time:

Recently I came across a few excellent tutorials about using Meteor, Ionic 2, Angular and React. They eventually brought me to Telescope Nova. My first thought was: this looks promising.

After forking and cloning and other Gitisms, I was ready to start the application:

npm install
npm run start

Of course, I have a Microsoft development background, so when I saw a bunch of red because of ‘.sh’, I wondered why these two letters were such a problem. I ended up updating my start script to exclude this bit of code. The script I excluded simply renames a sample_settings.json file to settings.json. I figured that was a safe thing to shortcut in this case by renaming the file myself.

My next step was to try it again!

> Nova@1.0.0 start C:\Demo\Telescope
> meteor --settings settings.json
 [[[[[ C:\Demo\Telescope ]]]]]
 => Started proxy.
 => Started MongoDB.
 => Extracting std:accounts-ui@1.2.17

To be honest, I let it try for a few hours and it just could not get that pesky package extracted. Certainly, something had gone wrong before that. After digging into the depths of Nova, Meteor, and NPM, I finally explicitly searched within Stack Overflow for Extracting std:accounts-ui.

The search came up with only 2 results, which are both linked at the bottom of this article. Most importantly, following the suggestions solved my problem.

I fixed the issue by relocating the 7z executable (7z.exe) from C:\Users\[UserName]\AppData\Local\.meteor\packages\meteor-tool\1.4.2_3\mt-os.windows.x86_32\dev_bundle\bin to a place outside of any source code, build code, and tool locations. I relocated it instead of removing it because I didn’t want to mess up my machine any more than it already may have been. Turns out, the missing 7z.exe was all it took to get my Meteor package installed properly!

It figures that the solution was to create a sort of FileNotFound scenario.

In an effort to spread the word, these are the links that led me to this solution:

http://stackoverflow.com/questions/41155583/meteor-1-4-2-3-adding-package-extracts-forever-windows

http://stackoverflow.com/questions/41195227/meteor-package-extracting-forever

https://github.com/studiointeract/accounts-ui/issues/67

https://github.com/meteor/meteor/issues/7688

I hope this helps. It is a rather simple solution in the end. I am very interested in learning about your past issues with our current favorite packaging system and its various dependencies. Feel free to comment if you have hard-fought wisdom to share!

UPDATE: Just a quick update here. It turns out this approach can also be helpful when updating packages, or if you get stuck at ‘Extracting meteor-tool@1.4.2_5’ (after the recent patch). Note: extracting meteor tools can take a while (upwards of 30 minutes), so expect to wait a bit to find out whether it fails.