A Simple Time Management Alternative With Trello

Learn how to get things done with a powerful time management alternative

We have all felt the elusiveness of time. It is hard to find the time to get things done, especially when your obligations are tied to your financial well-being. People will tell you, “you have to make the time.” What they don’t tell you is what you have to give up to do that. It doesn’t have to be this way with one simple time management alternative.

The Time Management Alternative Structure

People have long made lists to help remember things – milk at the grocery store, cleaning the gutters, taking out the trash, how to make that shrimp linguine everyone loved last week. But as the demands on our time grow, organization has become a key ingredient in getting things done.

With a little more structure, lists can do more than help keep your refrigerator stocked. Free tools like Trello can help. Trello super-charges lists and offers a way to keep your to-dos organized. By leveraging the free version, you can keep yourself organized and on track to accomplish everything you need or want.

Four features of Trello make this possible, and all of them are offered for free.

Cards

Cards in Trello are your to-do items. Each item is represented by a card. Cards allow elaborate descriptions so you can write exactly what needs to be done. They also allow checklists to break down your tasks even further. While upgrading your plan will allow special Power-Ups that give cards even more power, I’ll focus on the free version for this article.

Lists

Lists in Trello are simply collections of Cards. Each list can be named, archived, and rearranged on a Board.

Boards

Boards in Trello are collections of Lists. The free version of Trello lets you set background colors and images, add multiple lists to each board, and keep multiple boards.

Teams

Teams in Trello are a way to organize Boards. The name may not fit how you use it; I find it helpful to think of a team as a category for a collection of Boards. The free version of Trello allows ten boards per team. I find it helpful to create separate teams for my main priorities. For example, I have a team for managing this site, and I have other teams to help manage my life at home.

These organizational structures built into Trello provide a lot of potential for managing a growing to-do list, and leveraging them effectively is important to make the most of them. Below, I describe a handy template I use for accomplishing goals with the help of Trello without needing calendars, reminders, or alarms.

Leveraging the Time Management Alternative

Your goals can be achieved with help from the power of Trello. The following sections will describe how I have done that and how you can too. I will start with Lists and what kind of Cards they would include. I’ll then move on to Teams and what kind of Boards they would include.

Brainstorm List within the time management alternative

This list contains thoughts and ideas. Each card is an item in a brainstorm, and the list is a space for creative experimentation. The cards that come out of this list are then further categorized as Undecided, Not Doing, or Backlog. This is one of the most important lists: it is where all future activity begins.

Undecided List

When a brainstorm produces items you are just not quite sure of, they go here to be considered later. The cards in this list are in a Trello-fueled limbo: they may eventually be done, or it may eventually be decided that they will not be. The idea has been captured, and we’ll decide later what to do with it.

Not Doing List

Items in this list are most likely not going to be done in the future. Each card came from the Brainstorm or Undecided list and was deemed not worth pursuing further. This list preserves your ideas and offers you a chance to reconsider their worth or to fuel better ideas.

Backlog List

This list contains items that we are expecting to do in the future. When you decide that an item from the brainstorm will be done, it goes here first.

Prioritized List

The cards in this list represent items you have decided to do before others in your lists of ideas and backlog items. When you complete your current tasks, these are next. The cards in this list can be prioritized too. For example, you could order the list from top to bottom by importance. When a new item moves to in progress, it would be the card at the top of the list. Often, I would create a Proposed list placed before the Prioritized list. I would fill this list with backlog items as I prepare to prioritize them.

In Progress List

You have tasks that you are currently working on. They should be on this list. Keeping this list short is important. If everything is in progress, nothing is. Multi-tasking is a lie. I would recommend no more than three items at a time.

Complete List

Move your completed tasks to this list. You can track your progress towards your goals and celebrate each achievement along the way. Each card in this list can be reviewed or removed. I often add another list called Review to capture items that are completed but still await further analysis. This is an opportunity for continuous improvement. I recommend taking advantage of that.

Priority Teams with the time management alternative

Each priority team should represent a significant area of your life that you want to manage with the power of Trello. This could be long-term relationship goals or how to get rid of that collection of old dishware. Anything important enough to you should be made a team.

Each team would have at least two boards. One board is for ideas and contains the first four lists: Brainstorm, Undecided, Not Doing, and Backlog. The other board contains the remaining lists: Prioritized, In Progress, and Complete.
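If you would rather script this setup than click through the UI, Trello also exposes a REST API. Below is a minimal sketch in JavaScript (run anywhere fetch is available, such as a modern browser console or Node 18+); the board IDs and the API key and token are placeholders you would generate from your own Trello account.

// Minimal sketch using Trello's REST API to create the template lists.
// YOUR_KEY, YOUR_TOKEN, IDEAS_BOARD_ID, and WORK_BOARD_ID are placeholders.
var auth = "key=YOUR_KEY&token=YOUR_TOKEN";

function createList(boardId, name) {
  // POST /1/lists creates a list on the given board.
  return fetch("https://api.trello.com/1/lists?name=" + encodeURIComponent(name) +
    "&idBoard=" + boardId + "&pos=bottom&" + auth, { method: "POST" });
}

["Brainstorm", "Undecided", "Not Doing", "Backlog"].forEach(function (name) {
  createList("IDEAS_BOARD_ID", name); // the ideas board
});

["Prioritized", "In Progress", "Complete"].forEach(function (name) {
  createList("WORK_BOARD_ID", name); // the work board
});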

Notice that throughout this article I haven’t set a single date in Trello. While it is possible to add a date to cards, and each card has an Activity log with a timestamp, there is no need to specify a date unless absolutely necessary. Keeping away from deadlines is one advantage of using Trello. I recommend using your best judgment and specifying a date only when it makes sense.

Q. But couldn’t I just ignore my items?

A. Of course. You could ignore your board of items, forget your priorities, and choose not to organize your to-do items. None of those things encourage the completion of your goals.

Q. This is a lot to set up! Is there an alternative?

A. Absolutely. What I described here works well for me. With the free version of Trello, I encourage you to experiment and find what works for you.

Q. What if I hit Trello’s maximum number of free teams and boards?

A. You can recycle any of the items in Trello. Teams, Boards, Lists, and Cards can be modified or archived and created anew. As your goals are accomplished, recycle your Trello teams.

Q. Why Trello?

A. It is a free way to get organized and get things done. Other tools are less conducive to day-to-day activities, personal flexibility, and a modest budget. After this year’s Black Friday and Cyber Monday shopping, why not try something that’s free?

Q. How do I get Trello?

A. It is quick and easy. Follow the simple instructions provided by Trello to get started!

Emerging Secrets

A tale of overcoming pride.

It’s three weeks before launch and you have no confidence in the upcoming release. After five years of development, millions of dollars, and countless overtime hours, the system will fail on delivery.

And you’ve known it from the beginning.

It is a disheartening feeling paired with panic. As the system’s development team leader, it is your responsibility to keep things in line, to keep the team moving forward in the right direction, and to anticipate the possibility of failure.

Bah, you think, there’s only so much that I can do.

You keep your head up, hands down, and continue driving the system forward while cycling through the rationalizations you’ve clung to since the problem’s discovery: it won’t be that bad, it’s not so severe, it’s just a bug we’ll resolve later.

Day after day the problem compounds.

Deep within the tangled web of daily activities, you lose sight of the impending problem. You are distracted by bugs in a non-critical component. The time spent on integration issues with the enterprise resource management software creates an illusion that the other integrations are working correctly.

Now judgment day is here. The systems are being deployed to production. You click the button to run the scripts that build the packages to send to the production environments. Finally, the puzzle pieces are put together to be leveraged by thousands of customers.

Aside from gaps in their training, the customers don’t run into any bugs. Deployment successful. You and your team rejoice.

The next week the systems you’ve slaved over are at a critical point: payday. Sales input data is validated and correct. The sales systems are communicating properly. You track the sales data moving from one system to another, checking the output analysis, a sea of green.

It is at 2 A.M. the following Sunday morning when your phone rings. The customers didn’t get paid.

“Fix it!” A wholehearted response, valid to its core, forms the mantra driving the team.

You run the series of validations against the sales data. Green. You have your cohorts comb through system logs. No issues found. You then begin manual validation, double-checking each calculation.

The hairs on the back of your neck stand on end. Goosebumps sweep across your body in waves. You fall into a cold sweat. The validations are flawed, causing a series of false positives. It’s the issue you never recorded, the bug you deferred and never fixed.

“How was this not found earlier?” The dreaded question is laid out, plain and clear. It must be addressed.

The truth is, you did know about it. To save face, you lied to yourself and, worst of all, the team. As the project progressed, you manufactured a version of reality more acceptable to your pride. Now the truth comes out as it always does.

The company you represent faces enormous challenges. Customers need to be paid manually. The system needs to be repaired. And you can no longer be trusted.

Sitting in front of your home desktop computer, customized to be quiet, fast, and to emit an appealing array of lights across your wall, you lose yourself in the thought that maybe you won’t find a new job in your field.

You unconsciously hold your index finger on the down-arrow key, and the list of job openings scrolls past your eyes. You see none of them.


Your eyes snap open and you suck in a deep breath. You sit up from your bed, soaked in sweat. You check the clock: 3 A.M. You realize it was just a nightmare.

You know what you have to do.

When you arrive at work early that morning you set up a meeting with the project’s leadership. With system design underway, there is no time to lose. The issue is not something to put off until later. It must be recorded and addressed according to its assigned priority.

The team gathers in the meeting room, offering polite greetings as they enter. When everyone sits down, laptops closed, eager to hear what the meeting is about, you say, “There is a flaw in the third-party validation system we plan to use for validating sales data calculations. We may need to find an alternative.”

On Multiple Deploy Environments

Why deploying to multiple environments is a must for enterprise software systems, and how Docker containers and Azure make those environments easier to automate

Azure makes it easy to create multiple deployment environments. Each environment should be as close to the production environment as possible – something Azure can help with too.

The pairing of developers and operations (DevOps) is key to success. Doing what is necessary to create this pairing is largely a cultural concern – if folks won’t get along, the pairing will be less than fruitful. Also, if an organization doesn’t encourage cooperation, there’s no hope.

Culture isn’t the only battle. Finding the balance between responsibilities can be a challenge for many organizations just starting to apply DevOps principles. One straightforward way to look at it is: operations is an end-user of a developer’s deliverable. This means, while operations need to do their part in setting up a viable environment, developers need to be able to deliver a system that will work with that environment.

A great way to do this is through testing. It sounds simple enough, but execution can be challenging. Developer machines are rarely equivalent to the environments a system is deployed to, and testing environments often differ from production.

How can this be overcome?

Creating multiple deploy environments is key. This means getting developers and operations in sync as early as possible. Using deployable containers such as what is available with Docker can help reduce the differences between environments to practically zero. Creating deployment environments can further strengthen the trust between developers and operations. Read more about how you can get started by creating a static website using Docker.

What environments should be created?

There are three areas that need to be covered: Development, Testing, and Availability. This is best represented using five environments.

Development

When developers make their changes locally, they should be testing locally. This is a typical process that should happen multiple times per day. With a development environment, however, changes can also be deployed to a server for development testing. This could happen nightly or, better yet, each time a pull request is submitted. Testing a pull request before it is approved is important.

QA

When a pull request is approved, it will likely not continue directly through to the QA environment. There should be a gate controlled by the QA (Quality Assurance) team. If they are ready, they can open the gate allowing deployment to the QA environment. This is where testers will dig into the site manually and run their own automated tests to ensure the system is ready for user acceptance testing.

UAT

UAT (User Acceptance Testing) is a testing phase that includes real users using the system in real-world scenarios. During this phase, the users will need an environment of their own. When the changes are approved, the system is deployed to the Staging environment.

UAT is often combined with either QA or Staging environments. In this article, we separate them. Learn more about the Staging environment next.

Staging

The Staging environment is where the last preparations are made for a move to production. Final checks – and double-checks – are made here. In certain deployment setups (often called blue-green deployments), flipping a switch makes this environment the new production environment, and the old production environment becomes the new staging environment.

Production

When the system is in production we are far past the point of no return. The code is “in the wild” and real users are using it in real situations that have real impacts. In some deployment setups, this may be the old staging environment.

It is important that these are distinct environments, meaning each environment has the correct version of the system and its operations and data are separated from the other environments. For example, we don’t want to push a button in Staging and cause code in QA to execute and modify data in UAT. This is a severe example, and Azure makes it easy to avoid.

It is also important that each environment (Development, QA, UAT, Staging) is as similar as possible to the production environment. We want our systems to be thoroughly tested so that users in production receive all of the business value we invested in the system. “Similar” means machine resources are similar, system distribution is similar, and so on. While each environment may have slightly different code as development progresses, they are otherwise the same. Again, this is easier to accomplish with container technologies such as Docker.

Azure makes it easier to set up and manage these environments. Create guarded continuous integration pipelines that allow only safe code to enter production.
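As a concrete illustration, here is a minimal azure-pipelines.yml sketch of such a guarded pipeline. The stage and environment names are assumptions, the deployment steps are stand-ins for pushing your Docker image, and approval gates are configured on each environment in the Azure DevOps portal rather than in the YAML itself. The remaining stages (UAT, Staging, Production) follow the same pattern.

trigger:
  branches:
    include: [ main ]

stages:
- stage: Dev
  jobs:
  - deployment: DeployDev
    environment: dev            # approvals/checks live on the environment
    pool:
      vmImage: ubuntu-latest
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploy the container image to the dev environment"

- stage: QA
  dependsOn: Dev                # QA only runs after Dev succeeds and is approved
  jobs:
  - deployment: DeployQA
    environment: qa
    pool:
      vmImage: ubuntu-latest
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploy the same image to the QA environment"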

How to Manage an Ever-Changing User Interface

Discover a philosophy of user interface management that leads to adaptable front-ends capable of keeping pace with dynamic market requirements and the ever-changing user interface

The user interface is the window into the needs of a business. These needs should be driven by customers, either internal or external; we’ll refer to these customers as the market. As market needs shift, the user interface will need to change. A responsibility of a professional front-end developer is to design the user interface implementation to support this change. How do we manage an ever-changing user interface?

Identifying what areas of the front-end are most impacted is an essential first step in managing shifts in the market. As we know from Robert C. Martin or Juval Lowy, each of these areas is an axis of change. Considering the volatility of an area can help when designing the front-end to more easily adapt to change.

We’ll see that the user interface is never done and certain areas of the user interface will likely change more frequently than others. We will also consider how we could exploit the axis of change to deliver a user interface designed with enough fluidity in the right areas to more easily flow with fluctuating market needs.

Volatility Rating

Everything about the user interface will change. This means color, size, position, text, user experience, design – everything – will change. There is no judgment here. The user interface is encouraged to change if it better suits the market. However, there are some areas that will change more frequently than others and have a greater impact on the system when they do. Considering these areas is essential when designing the user interface implementation.

Frequency

Starting with a simple example, the look and feel of the user interface may change. If, for instance, the look and feel will always change, the frequency is 100%.

Another area that may be added or altered is a data model. When a user interface contacts a service, there is a contract that defines the data that will be sent between the front-end and the service. This is the data model. When the market decides it needs an extra field in a form, a “button here that does x”, or a column removed from a table, it means altering or adding a data model. This has its own change frequency.
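As a concrete illustration, the data model can be as simple as the shape of the object the front-end expects from the service. This sketch is hypothetical; the point is that the requested extra field changes the contract on both sides of the wire.

// Hypothetical data model for a customer form.
function CustomerModel(data) {
  var self = this;
  self.id = data.id;
  self.name = data.name;
  self.email = data.email;
  // The market asked for "an extra field in the form" – adding it
  // alters the contract between the front-end and the service.
  self.phoneNumber = data.phoneNumber || "";
}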

Determining how frequently an area will change will help determine its volatility and how to approach its design and the design of future changes.

Impact

The look and feel of the user interface may always change, but frequency is only one part of the volatility rating. The impact of a change also needs to be considered. Areas that touch the entire system have the greatest impact when changed; the more contained an area is, the smaller the impact of changing it. An example of this can be found in a previous article titled The Monolith Component. While the article focuses on a malformed component, it describes the kinds of impact code can have. Considering the impact is an important part of deciding how to make a change.

Exploiting the Evolution

Some areas are innately difficult to alter, especially when they impact a website user interface as a whole – such as look and feel. There are common practices for dealing with something like this: use a CSS pre-processor and leverage principles and practices such as OOCSS, BEM, and SMACSS. With the advent of Modular CSS and similar approaches, managing the look and feel of a website is less painful.

There are libraries and frameworks that aim to make front-end development less painful. Yet, they can only go so far. Their value depends on how they are used and applied – let’s call this advantaged code. Leveraging advantaged code depends on applying two concepts: continuous improvement and designing for change. These concepts attempt to answer a fundamental question: How can I make it easier to manage new or existing code in an ever-changing user interface?

Continuous Improvement

As more is learned, more can be applied. The details of the code begin to be deeply understood. The areas of the code that change most begin to reveal themselves. And, of course, the impact on the system of each change has a greater chance of being measurable.

When learning these things about the user interface, and how it is impacted by changing market needs, the code can be continuously improved to anticipate those changes.

Design for Change

Designing a user interface for change is only valuable if the rate of change and its impact on the system are measured and deemed inevitable. This is to avoid unnecessary costs such as increased user interface complexity and reduced available budgets.

As the user interface evolves with market needs it should continuously improve in the areas where the rate of change and the impact on the system are high enough. What is high enough in terms of change rate and system impact is largely determined by project concerns – available time and budget, developer experience, accessible business knowledge, etc.

I am not saying all changes are valid – meaning, there are some cases when a change should not be made. A simple example of this is security. If a requested change would compromise the security of the application, it is the responsibility of a professional developer to say “no”, preferably with an amount of tact appropriate to your relationship with the market. And hopefully, there is enough trust in the partnership that the market will thank you for looking out for them.

Excluding requests that are detrimental to the system, measuring the rate of change and the impact on the system makes changes to the front-end easier to support and maintain – you may even welcome them.

The Monolith Component

Learn some useful ways to identify and squash a monolith component

It is more common than ever to have user interfaces made up of components. With so many user interface libraries and frameworks like Angular, React, VueJS, Aurelia, and KnockoutJS, components are everywhere. Components are the building blocks of a user interface. It is imperative that these blocks are built and organized in a way that supports proper testing, reduces defects, and enables the extension that ever-changing user experiences demand. This article will describe a component that counters these goals by challenging both the quality and flexibility of a user interface: The Monolith Component.

The Monolith Component is a byproduct of feature-driven functional decomposition. It is the user interface representation of the god object. The component is feature-packed, contains an excessive amount of code, and often becomes the grandparent of smaller child components. To make the problem worse, the child components could be referencing the grandparent and act as nothing more than a visual proxy to grandparent capabilities.

It isn’t hard to understand why this sort of thing happens. First, an agile team will use sprints to deliver features. Without careful consideration and planning of system design, the design and functionality tend to focus on feature-level concerns at a sprint cadence. Second, it’s not always intuitive to design something that is counter to our nature. For example, generally speaking, we first learn how to cook from our parents, and our parents first learned how to cook from their parents, and so on. In a fast-paced environment, a quicker path may be to learn directly from your grandparents. This is a small example of the innate grandparent (monolith) design we exist within. Instead of child components owning their responsibilities, those responsibilities are implemented in their parents, and likewise up the hierarchy to the level of the monolith component.

Applying the grandparent theme to user interface development leads to buggy components that are difficult to maintain, test, and extend. Overall, a monolith component will have too many responsibilities.
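To make that concrete, here is a hypothetical sketch of what a monolith component’s constructor often looks like – several unrelated concerns piled into one place:

// Hypothetical monolith component: API calls, validation, timers, and
// child-component orchestration all live in one constructor function.
function OrderPageComponent(httpClient, validator, notifier, clock, router) {
  var self = this;

  self.loadOrders = function () { /* forms requests, parses responses */ };
  self.validateForms = function () { /* rules for three different forms */ };
  self.startRefreshTimer = function () { /* triggers, tracks, resets a timer */ };
  self.syncChildGrids = function () { /* reaches into child components */ };
  // ...hundreds of lines later, it is easier to say what it does NOT do.
}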

Identifying A Monolith Component

There are a few indicators of a monolith component.

#1 Line Count: if you see a component with 1000+ lines of code, you may be looking at a monolith component.

#2 Dependencies: if you see dozens of dependencies or more, you may be looking at a monolith component.

#3 Bugs: if most bug fixes involve looking at the same, single component, you may be looking at a monolith component.

#4 New Features: if new features involve looking at the same, single component, you may be looking at a monolith component.

#5 Responsibility: if it is easier to describe what a component does not do within the context of the page, you may be looking at a monolith component.

Refactoring A Monolith Component

Updating a monolith component can be a daunting task, but it gets easier the more it is done – especially when those updates break the monolith into a more sane design. The more that is done, the better off the code, and the developer, will be. Exactly how to refactor one of these components depends on its implementation details, so I will instead describe some general ideas that have helped me in the past, with simple examples along the way.

Test Test Test

The first step is having confidence that refactoring does not introduce breaking changes. To this end, maintaining tests that prove business value is essential. The presentation and logic shouldn’t change during a refactor – that is to say, the intent should not change. There are multiple ways to accomplish the same thing, and refactoring is meant to update code so that it does the same thing in a cleaner way. Having tests that make sure the code satisfies business requirements, without depending on how it satisfies them, will help avoid introducing bugs while refactoring.
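For example, assuming a Jest-style test runner and a hypothetical filterActive function pulled from the component, a behavior-level test pins down what the code does without caring how:

// Verifies the business rule, not the implementation details.
test("only active items are shown to the user", function () {
  var items = [
    { name: "a", active: true },
    { name: "b", active: false }
  ];

  var visible = filterActive(items);

  expect(visible.length).toBe(1);
  expect(visible[0].name).toBe("a");
});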

Identify Responsibilities

The second step is knowing what the component does. A monolith component will likely have multiple responsibilities. Being able to list these will help us understand the component and how to split it into multiple components. It can reveal the patterns and domains that shape the component.

Responsibilities that are determined to be unnecessary should be removed:

Example 1: The section of code is disabled due to being wrapped in a comment. This code can’t execute. Remove it. Be careful when looking at comments in HTML. Often libraries and frameworks will give meaning or functionality to HTML comments.
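For example, KnockoutJS treats specially formatted comments such as <!-- ko if: isVisible --> ... <!-- /ko --> as containerless control-flow bindings, so deleting them would change behavior rather than just tidy the markup.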

Example 2: A block of HTML never displays because of a visibility condition that can never be true. Assuming this isn’t a bug, the HTML block can be removed. The condition in code may not be necessary either.

We know the component’s responsibilities and we may have removed some that were not needed. Now we look at dependencies.

Understand Dependencies

The third step is knowing the internal and external dependencies of the component. The goal of this step is to answer: what does the component require so that it can perform its responsibilities?

Depending on what kind of dependency injection is used (the application does leverage DI, right?), dependencies may be specified in different locations.

Looking at the constructor is a good place to start. The parameters of the constructor define what a component needs in order for an instance to be created.

Next, look at private and public properties. These will likely include the values or objects set in the constructor. They also represent the possible states of a component. Maybe an enum is used to define how to present a particular object in the view. Maybe a boolean is set to determine whether a checkbox should be checked on load. Maybe an object is used to store a server response object. These things can still be considered dependencies in this context – the component needs them to provide its business value.

Look for static members. Static members need only a definition, not an instance. What do they offer, and how does the component use them? Generally, these are easier to identify and extract into something more reusable.

Finally, look at how the dependencies are used within instance methods. If a non-primitive data type is specified in the constructor, what is the purpose of that dependency? Does it allow a consumer to get data from a server? Does it contain methods to filter a list of a particular data type? Is it the mechanism for triggering an event on a timer? Knowing how the dependencies are used can help when determining what business problem they are solving. This can help us group dependencies by business problems or domains.

Extract Common Ground

The fourth step is knowing the groups of common responsibilities and common dependencies that can be moved into other distinct components.

Find themes such as API calls, orchestration of child components, notification, and other domain logic.

Use a separate class to encapsulate API calls. This enables centralized control over forming requests and normalizing responses.
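A minimal sketch of such a class, in the same ES5 style used elsewhere on this site (the endpoint and response shape are hypothetical):

// Encapsulates API calls: one place to form requests and normalize responses.
function OrdersApi(baseUrl) {
  var self = this;

  self.getOrders = function (done) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", baseUrl + "/orders");
    xhr.onload = function () {
      // Normalize here so consumers never parse raw payloads themselves.
      done(JSON.parse(xhr.responseText).orders || []);
    };
    xhr.send();
  };
}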

Use a dedicated component for handling the interaction between child components. Using a dedicated class to share data can help avoid coupling as well.

Use a separate class or component to handle other domain logic:

Example 1: The presentation of a timer is maintained. The timer is triggered, tracked, displayed, stopped and reset within the monolith component.

Example 2: The component allows the user to update the view with new data from the server without using the browser refresh button. After a user triggers a refresh: a server request is formed, the request is sent, a response is received and parsed, it is then sent to areas that need the parsed response and the view is updated appropriately.

Example 3: User input is maintained. A form contains multiple labels and input controls. The user input is validated. When the user input is invalid, a notification is displayed to the user. When a user completes the form, it can be submitted to the server so there are more server requests and responses to handle.

All of the logic and presentation of these examples can be contained in separate classes and components. They will take with them the necessary responsibilities and dependencies of the monolith component – only what they need.
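For instance, the timer from Example 1 could move into its own small class, taking only the state and logic it needs – a hypothetical sketch:

// The timer logic leaves the monolith; the component only renders the value.
function Timer(onTick) {
  var self = this,
      intervalId = null,
      elapsedSeconds = 0;

  self.start = function () {
    intervalId = setInterval(function () {
      elapsedSeconds++;
      onTick(elapsedSeconds); // the component subscribes to display updates
    }, 1000);
  };

  self.stop = function () { clearInterval(intervalId); };

  self.reset = function () {
    self.stop();
    elapsedSeconds = 0;
    onTick(elapsedSeconds);
  };
}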

Repeat As Necessary

The fifth step is knowing what is left after extracting the common ground. If we are lucky, the monolith component now does only one thing. If not, repeat the previous steps until the component does just one thing. This assumes that all of the component’s remaining responsibilities are known and necessary.

A monolith component is too big. It does too much. It knows too much. It has too many responsibilities. As developers, we are responsible for delegation. If a component does too much, it is our fault. There are many ways to prevent and refactor monolith components. There have been volumes of work describing refactoring methodologies. This article describes some ways that have worked for me when refactoring a monolith component.

Working with Vanilla JS in Web Applications

Writing JavaScript for IE and other antiquated browsers means classes and other helpful features of ES6 are not available. A similar effect can be achieved, though, and it is actually quite easy to do!

Why use Vanilla JS instead of any number of the frameworks available, or even TypeScript? The answer is largely irrelevant if a choice has already been made. However, deciding to replace JavaScript with some alternative for all use cases absolutely misses the mark. This article will describe the use of Vanilla JS, leaving the choice of language up to you.

Class Definitions & Namespaces

ES6 classes are not yet fully supported in the browser. Many of the limitations mentioned in this article are most relevant when developing with ES5 – such as developing for Internet Explorer or other antiquated browsers. Even without the full support of classes, a similar effect can be achieved in JavaScript and it is actually quite easy to do!

We first want to make sure that the class definition is contained. This means it should not pollute the global namespace with methods and variables. This can be accomplished by using a closure – a specific one called an IIFE.

(function (global) {
  "use strict";

  global.API = new MyObject();

  function MyObject() {
    var self = this;                  // capture this for every member
    var privateVariable = "secret";   // private member

    function privateMethod() {        // private member
      return privateVariable;
    }

    self.publicMethod = function () { // public member
      return privateMethod();
    };
  }
})((1,eval)('this'));

Notice that the global namespace is passed to the IIFE – since IIFEs are just functions, arguments can be passed to them like any other function. If you want to know more about how the global namespace is obtained, check out this enlightening StackOverflow post: (1,eval)(‘this’) vs eval(‘this’) in JavaScript?

"use strict"; //seriously, do it.

The class can be initialized and stored at global scope such as inside a single app-specific namespace:

(function (global,app,http) {
  "use strict";

  global[app] = global[app] || {};
  global[app][http] = new Http();

  // global.App.http.publicMethods()
  function Http() {
    var self = this;
    // var privateVariables ...
    // self.publicMethods = function ...
    // function privateFunctions() ...
  }
})((1,eval)('this'),'App','http');

I find it easier to write client-side JavaScript as an API. Leveraging the design patterns this encourages offers many benefits to code quality and maintenance. In the code above, an Http instance is assigned to the http property in the global.App namespace. Certainly, this should contain our methods for making HTTP calls! Code organization is one of the best things about approaching the application’s client-side JavaScript in this way. Usually, the constructor function, not an instance, would be stored – which allows certain SOLID principles to be applied.
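For illustration, storing the constructor function instead of an instance might look like the sketch below; consumers then decide when and how to instantiate, which makes substitution and testing easier. The baseUrl parameter is a hypothetical example.

(function (global, app) {
  "use strict";

  global[app] = global[app] || {};
  global[app].Http = Http; // store the constructor function itself

  function Http(baseUrl) {
    var self = this;
    self.urlFor = function (path) { return baseUrl + path; };
  }
})((1,eval)('this'), 'App');

// Consumers create (or inject) instances as needed:
// var api = new App.Http('/api');
// api.urlFor('/users'); // '/api/users'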

The Constructor Function

The Http function is a special kind – a constructor function. This means an instance can be created using the new operator with the constructor function call.

function MyObject() { }
var instance = new MyObject(); 

This should look familiar if you have ever created an instance in Object-Oriented code before.

Capturing this

The fact that this isn’t always the same is both the power and the curse of JavaScript. The first line of the Http constructor function captures this in a specific context to help overcome the curse and leverage the power:

function Http() {
  var self = this; // capture the instance once, at constructor scope
  // ...
}

At the scope of the constructor function, this refers to the Http object. A private variable is declared and initialized to capture it and make it available to all public and private members of Http no matter what this happens to be during the invocation of those members. Capturing this only once and at the scope of the corresponding constructor function will reduce the possibility of this fulfilling its curse!
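To see the curse in action, consider a public member that hands a callback to setTimeout (a minimal sketch simulating an asynchronous response): inside the callback, this is no longer the Http instance, but the captured self still is.

function Http() {
  var self = this;
  self.lastStatus = null;

  self.get = function (url, done) {
    setTimeout(function () {
      // Here `this` is NOT the Http instance (it is the global object,
      // or undefined in strict mode), but `self` still refers to it.
      self.lastStatus = 200;
      done(url, self.lastStatus);
    }, 0);
  };
}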

private Members

The variables and functions created at the scope of the Http constructor function will be available to all public and private members within the Http object.

function Http() {
  var self = this,
      eventHandlers = {};
  
  function addEventHandler(event, handler) { }
  function removeEventHandler(event, handler) { }
}

In this case, self, eventHandlers, and the add/remove event handler functions are private members of Http. They are not accessible to external sources – only public and private members of Http can access the private members of Http.

public Members

The properties and methods exposed from the Http object – those that can be accessed by external code – are considered public.

function Http() {
  var self = this;

  self.get = function (request) { /* issue a GET request */ };
  self.post = function (request, data) { /* issue a POST request */ };
}

Add public members to the self variable within the constructor function. This allows external code to perform the operations of an Http instance.

static Members

Members can be static as well. By declaring a variable on the constructor function itself, it can be assigned a value, instance, or function that is public while not depending on an instance to be created using the constructor function:

function Http() { }
Http.setup = function () { /* one-time configuration */ };

The static Http member can be used without creating an Http instance:

// ... application code doesn't create an Http instance
Http.setup();
// ... application code doesn't create an Http instance

The member is public and available anywhere the Http constructor function is available.

Execution Contexts

Without going into the depths of execution contexts in JavaScript, there are a few things to note. This section will describe a couple of different execution contexts and integration points at which JavaScript code is executed.

Global Context

There is only one global context – also called the global scope or global namespace. Any variable defined outside a function exists within the global context:

var x = 9;
function XManager() {
  var self = this;
  
  self.getX = function () { return x; }
  self.setX = function (value) { x = value; }
}

The global-scoped x variable is defined outside of the XManager function and assigned the value of 9. When getX is called, it will return the global-scoped x (the value of 9).

Local Scope – Function Execution Context

The alternative to the Global Scope is Local Scope. The local scope is defined by the function execution context:

var x = 9;
function XManager() {
  var self = this,
      x = 10;

  self.getInstanceX = function () {
    return x; // returns 10
  }
}

In this case, a variable x is declared twice. The first time is within the global execution context. This variable is accessible within XManager. Within the XManager constructor function, the private variable x is declared and initialized to 10. The getInstanceX method will return the variable x that is first in its execution context stack:

[Figure: Execution Context Stack (David Shariff)]

The getInstanceX method is “Active Now”, XManager‘s private variable x is next, followed by the global-scoped variable x, and finally the global execution context.

All of this is to explain why getInstanceX returns 10 and not 9. Powerful stuff!

let & Block-Level Scope

I cannot discuss execution contexts without mentioning the keyword let. This keyword allows the declaration of block-level scope variables. Like ES6 classes, if antiquated browsers need to be supported, the let keyword will not be available.

function Start() {
  let x = 9; // variable assigned the value 9

  function XManager() {
    let x = 10; // a different variable assigned the value 10

    function getX() {
      console.log(x); // logs 10
    }

    console.log(x); // logs 10
    getX();
  }

  console.log(x); // logs 9
  XManager();
}

Start();

A block-scoped variable is accessible within its context (Start) and contained sub-blocks (XManager). The main difference from var is that the scope of var is the entire enclosing function. This means that, when using let, XManager and its contained sub-blocks (getX) have access to the new variable x assigned the value of 10, while the variable x in the context of Start still has the value of 9.

Event Handlers

Client-side JavaScript code is triggered by the user through DOM events as they interact with rendered HTML. When an event is triggered, its subscribers (event handlers) will be called to handle the event.

HTML – Event Subscription

<button id="submit" onclick="handleClick()">Submit</button>

JAVASCRIPT – Event Subscription

var button = document.getElementById("submit");
button.addEventListener('click', clickHandler);

JAVASCRIPT – Event Handler

function clickHandler() {
  console.log("Click event handled!");
}

Event handling marks the integration point between user interaction with HTML and the application API in JavaScript.

Understanding how to create objects and the Execution Context is important when writing client-side JavaScript. Designing the JavaScript as an API will help to further manage the pros and cons of the language.

SOLID Systems Using Boundary Interfaces

With my latest post, learn how to create SOLID systems by applying common software development principles to layered software architectures – SOLID Systems Using Boundary Interfaces

https://magenic.com/thinking/solid-systems-using-boundary-interfaces