Modules in JavaScript Circa 2015

Wednesday, October 7, 2015 by K. Scott Allen

Until 2015, the JavaScript language officially offered only two types of variable scope – global scope and function scope. Avoiding global scope has been a primary architectural goal of nearly every library and framework authored over the last ten years. Avoiding global scope in the browser has meant we’ve relied heavily on closures and syntactical oddities like the immediately invoked function expression (IIFE) to provide encapsulation.

(function() {

    // code goes here, inside an IIFE

}());

Avoiding global scope also meant nearly every library and framework for the browser would expose functionality through a single global variable. Examples include $ or jQuery for the jQuery library, or _ for underscore and lodash.

After NodeJS arrived on the scene in 2009, Node developers found themselves creating larger code bases and consuming a larger number of libraries. This community adopted and put forward what we now call the CommonJS module standard. Shortly afterward, another community standard, the Asynchronous Module Definition standard (AMD), appeared for browser programming.
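
For reference, the two community standards approach the same problem with different syntax. Here is a brief sketch of each style (the module name and members are hypothetical):

// CommonJS (Node-style): synchronous require and module.exports
var math = require("./math");

module.exports = {
    double: function (x) { return math.add(x, x); }
};

// AMD (browser-style): a define call with a dependency list
define(["./math"], function (math) {
    return {
        double: function (x) { return math.add(x, x); }
    };
});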

ECMAScript 2015 brings an official module standard to the JavaScript language. This module standard uses a different syntax than both CommonJS and AMD, but tools like webpack and polyfills like the ES6 module loader make all of the module standards mostly interoperable. At this point in time, some preprocessing or polyfills are required to make the new module syntax work. Even though the syntax of the language is complete and standardized, the browser APIs and behaviors for processing modules are still a work in progress.
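
For instance, a webpack 1 configuration that runs modules through the Babel loader might look like the following sketch (the file layout, loader, and package names are assumptions that depend on your installed versions):

// webpack.config.js - a hypothetical build setup
module.exports = {
    entry: "./src/main.js",
    output: { filename: "bundle.js" },
    module: {
        loaders: [
            // transpile ES2015 module syntax for current browsers
            { test: /\.js$/, exclude: /node_modules/, loader: "babel" }
        ]
    }
};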

First, Why Modules?

The purpose of a module system is to allow JavaScript code bases to scale up in size. Modules give us a tool to manage the complexity of a large code base by providing just a few important features.

First, modules in JavaScript are file based. When using ES2015, instead of thinking about a script file, you should think of a script module. A file is a module. By default, any code you write inside the file is local to the module. Variables are no longer in the global scope by default, and there is no need to write a function or an IIFE to control the scope of a variable. Modules give us an implicit scope to hide implementation details.
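
For example, top-level declarations in a module file are private to that file. A minimal sketch (the file name is hypothetical):

// counter.js - every top-level declaration here is module-scoped
var count = 0;           // does not become a global variable

function increment() {   // also invisible outside this file
    count += 1;
}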

Of course, some of the code you write in a module is code you might want to expose as an API and consume from a different module. ES2015 provides the export keyword for this purpose. Any object, function, class, value, or variable that you want to make available to the outside world is something you must explicitly export with the export keyword.

In a second module, you would use the import keyword to consume any exports of the first module.

The ability to spread code across multiple files and directories while still being able to access the functionality exposed by any other file without going through a global mediator makes modules an important addition to the JavaScript language. Perhaps no other feature of ES2015 will have as much of an impact on the architecture of applications, frameworks, and libraries as modules. Let’s look at the syntax.

Module Syntax

Imagine you want to use an object that represents a person, and in a file named humans.js you place the following code.

function work(name) {
    return `${name} is working`;
}

export let person = {
    name: "Scott",
    doWork() {
        return work(this.name);
    }
};

Since we are working with modules, the work function remains hidden in the module scope, and no code outside of the module will have access to the work function. The person variable is an export of the module. More specifically, we call person a named export. Code inside of other modules can import person and work with the referenced object.

import {person} from "./lib/humans"

describe("The humans module", function () {

    it("should have a person", function () {
        expect(person.doWork()).toBe("Scott is working");
    });

});

There are a few points to make about the previous code snippet.

First, notice the module name does not require a .js extension. The filename is humans.js, but the module name for the file is humans.

Secondly, the humans module is in a subfolder of the test code in this example, so the module specifier is a relative path to the module (./lib/humans).

Finally, curly braces enclose the imports list from the humans module. The imports are a list because you can import more than one named export from another module. For example, if the humans module also exported the work function, the test code could have access to both exports with the following code.

import {person, work} from "./lib/humans"

You also have the ability to alias an import to a different name.

import {person as scott, work} from "./lib/humans"

describe("The humans module", function () {

    it("should have a person", function () {
        // now using scott instead of person
        expect(scott.doWork()).toBe("Scott is working");
    });

    it("should have a work function", function () {
        expect(work).toBeDefined();
    });
    
});

In addition to exporting variables, objects, values, and functions, you can also export a class. Imagine the humans module with the following code.

function work(name) {
    return `${name} is working`;
}

export class Person {

    constructor(name) {
        this.name = name;
    }

    doWork() {
        return work(this.name);
    }

}

Now the test code would look like the following.

import {Person} from "./lib/humans"

describe("The humans module", function () {

    it("should have a person class", function () {
        var person = new Person("Scott");
        expect(person.doWork()).toBe("Scott is working");

    });

});

Modules can also export a list of symbols using curly braces, instead of using the export keyword on individual declarations. As an example, we could rewrite the humans module and place all the exports in one location at the bottom of the file.

function work(name) {
    return `${name} is working`;
}

class Person {

    constructor(name) {
        this.name = name;
    }

    doWork() {
        return work(this.name);
    }
}

export {Person, work as worker}

Notice how an export list can also alias the name of an export, so the work function exports with the name worker.
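
On the importing side, consumers use the exported names, so the aliased work function arrives as worker. A brief sketch:

import {Person, worker} from "./lib/humans"

// worker is the work function under its exported alias
expect(worker("Scott")).toBe("Scott is working");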

Default Exports

The 2015 module standard allows each module to have a single default export. A module can have a default export and still export other names, but having a default export does impact how another module will import the default. First, here is how the humans module would look with a default export of the Person class.

function work(name) {
    return `${name} is working`;
}

export default class Person {

    constructor(name) {
        this.name = name;
    }

    doWork() {
        return work(this.name);
    }
}

As the code demonstrates, the default export for a module uses the default keyword.

A module that needs to import the default export of another module doesn’t specify a binding list with curly braces for the default export. Instead, the module simply defines a name for the incoming default, and the name doesn’t need to match the name used inside the exporting module.

import Human from "./lib/humans"

describe("The humans module", function () {

    it("should have a default export as a class", function () {
        var person = new Human("Scott");
        expect(person.doWork()).toBe("Scott is working");
    });

});

Mass Exportation and Importation

An import statement can use an asterisk to capture all the named exports of a module into a namespace object. Let’s change the humans module once again to export the Person class both as a named export and as a default export, and also export the work function.

function work(name) {
    return `${name} is working`;
}

class Person {
    constructor(name) {
        this.name = name;
    }
    doWork() {
        return work(this.name);
    }
}

export {work, Person}
export default Person

The test code can have access to all the exports of the humans module using import *.

import * as humans from "./lib/humans"

describe("The humans module", function () {

    it("should have a person class", function () {
        var person = new humans.Person("Scott");
        expect(person.doWork()).toBe("Scott is working");
    });

    it("should have a default export", function () {
        expect(humans.default).toBeDefined();
    });

}); 

Notice how the test code now has two paths to reach the Person class. One path is via humans.Person; the other is via humans.default.

An export statement can also use an asterisk. The asterisk is useful in scenarios where you want one module to gather exports from many sub-modules and publish them all as a unit. In CommonJS this scenario typically uses a file named index.js, and many of the existing module loader polyfills support using an index.js file when importing a directory.

For example, let’s add a file named creatures.js to the lib folder.

export class Animal {

    constructor(name) {
        this.name = name;
    }    

}

An index.js file in the lib folder can now combine the humans and creatures modules into a single module.

export * from "./creatures"
export * from "./humans"

The test code can now import the lib directory and access features of both underlying modules.

import {Person, Animal} from "./lib/"

describe("The combined module", function () {

    it("should have a person class", function () {
        var person = new Person("Scott");
        expect(person.doWork()).toBe("Scott is working");
    });

    it("should have an Animal class", function () {
        expect(new Animal("Beaker").name).toBe("Beaker");
    });

});

Importing for the Side Effects

In some scenarios you only want to reference a module so the code inside can execute and produce side-effects in the environment. In this case the import statement doesn't need to name any imports.

import "./lib"

Summary

We're coming to the close on this long series of posts covering ES2015. In the last few posts we'll look at new APIs the standard brings to life.

Authorization Policies and Middleware in ASP.NET 5

Tuesday, October 6, 2015 by K. Scott Allen

Imagine you want to protect a folder full of static assets in the wwwroot directory of an ASP.NET 5 project. There are several different approaches you could take to solve the problem, but here is one flexible solution using authorization policies and middleware.

Services

First, in the Startup class for the application, we will add the required services.

public void ConfigureServices(IServiceCollection services)
{
    services.AddAuthentication();
    services.AddAuthorization(options =>
    {
        options.AddPolicy("Authenticated", policy => policy.RequireAuthenticatedUser());
    });
}

For the default authorization service we’ll make a named policy available, the Authenticated policy. A policy can contain any number of requirements allowing you to check claims and identities. In this code we will ultimately be using the built-in DenyAnonymousAuthorizationRequirement, because this is the type of requirement returned by the RequireAuthenticatedUser method. But again, you could make the requirement verify any number of characteristics about the user and the request.

The name Authenticated is important, because we will refer to this policy when authorizing users for access to a protected folder.

Middleware

Next, let’s write a piece of middleware named ProtectFolder and start with an options class to parameterize the middleware.

public class ProtectFolderOptions
{
    public PathString Path { get; set; }
    public string PolicyName { get; set; }
}

There is also the obligatory extension method to add the middleware to the pipeline.

public static class ProtectFolderExtensions
{
    public static IApplicationBuilder UseProtectFolder(
        this IApplicationBuilder builder, 
        ProtectFolderOptions options)
    {
        return builder.UseMiddleware<ProtectFolder>(options);
    }
}

Then comes the middleware class itself.

public class ProtectFolder
{
    private readonly RequestDelegate _next;
    private readonly PathString _path;
    private readonly string _policyName;
   
    public ProtectFolder(RequestDelegate next, ProtectFolderOptions options)
    {
        _next = next;
        _path = options.Path;
        _policyName = options.PolicyName;
    }

    public async Task Invoke(HttpContext httpContext, 
                             IAuthorizationService authorizationService)
    {
        if(httpContext.Request.Path.StartsWithSegments(_path))
        {
            var authorized = await authorizationService.AuthorizeAsync(
                                httpContext.User, null, _policyName);
            if (!authorized)
            {
                await httpContext.Authentication.ChallengeAsync();
                return;
            }
        }

        await _next(httpContext);
    }
}

The Invoke method on a middleware object is injectable, so we’ll ask for the current authorization service and use the service to authorize the user if the current request is heading towards a protected folder. If authorization fails we use the authentication manager to challenge the user, which typically redirects the browser to a login page, depending on the authentication options of the application.

Pipeline Configuration

Back in the application’s Startup class, we’ll configure the new middleware to protect the /secret directory with the “Authenticated” policy.

public void Configure(IApplicationBuilder app)
{
    app.UseCookieAuthentication(options =>
    {
        options.AutomaticAuthentication = true;
    });

    app.UseProtectFolder(new ProtectFolderOptions
    {
        Path = "/Secret",
        PolicyName = "Authenticated"
    });

    app.UseStaticFiles();

    // ... more middleware
}

Just make sure the protection middleware is in place before the middleware to serve static files.

JavaScript Promises and Error Handling

Thursday, October 1, 2015 by K. Scott Allen

Errors in asynchronous code typically require a messy number of if/else checks and a careful inspection of parameter values. Promises allow asynchronous code to apply structured error handling. When using promises, you can pass an error handler to the then method or use a catch method to process errors. Just like exceptions in regular code, an exception or rejection in asynchronous code will jump to the nearest error handler.

As an example, let’s use the following functions which log the execution path into a string variable.

var log = "";

function doWork() {
    log += "W";
    return Promise.resolve();
}

function doError() {
    log += "E";
    throw new Error("oops!");
}

function errorHandler(error) {
    log += "H";
}

We’ll use these functions with the following code.

doWork()
    .then(doWork)
    .then(doError)
    .then(doWork) // this will be skipped
    .then(doWork, errorHandler)
    .then(verify);

function verify() {
    expect(log).toBe("WWEH");
    done();
}

The expectation is that the log variable will contain “WWEH” when the code finishes executing, meaning the flow of calls will reach doWork, then doWork, then doError, then errorHandler. There are two observations to make about this result, one obvious, one subtle.

The first observation is that when the call to doError throws an exception, execution jumps to the next rejection handler (errorHandler) and skips over any potential success handlers. This behavior is obvious once you think of promises as a tool to transform asynchronous code into a procedural flow of method calls. In synchronous code, an exception will jump over statements and up the stack to find a catch handler, and the asynchronous code in this example is no different.

What might not be immediately obvious is that the verify function will execute as a success handler after the error. Just like normal execution can resume in procedural code after a catch statement, normal execution can resume with promises after a handled error. Technically, the verify function executes because the error handler returns a successfully resolved promise. Remember the then method always returns a new promise, and unless the error handler explicitly rejects a new promise, the new promise resolves successfully.
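
If you do want a rejection to propagate past a handler, the handler can rethrow the error (or return a rejected promise). A minimal sketch:

function rethrowingHandler(error) {
    // handle what you can, then keep the chain rejected
    throw error;
}

Promise.reject(new Error("oops!"))
    .then(undefined, rethrowingHandler)
    .catch(error => console.log("still rejected: " + error.message));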

A promise object also provides a catch method to handle errors. The last code sample could be written with a catch method as follows.

doWork()
    .then(doWork)
    .then(doError)
    .then(doWork) 
    .then(doWork)
    .catch(errorHandler)
    .then(verify);

The catch method takes only a rejection handler. There can be a difference in behavior between the following two code snippets:

.then(doWork, errorHandler)

… and …

.then(doWork)
.catch(errorHandler)

In the first code snippet, if the success handler throws an exception or rejects a promise, execution will not go into the error handler since the promise was already resolved at this level. With catch, you can always see an unhandled error from the previous success handler.
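
A minimal sketch of that difference, using throwaway handlers:

// the same-level error handler never sees an error thrown by the
// success handler next to it; only a later handler can catch it
Promise.resolve()
    .then(() => { throw new Error("boom"); },
          () => console.log("same-level handler: never runs"))
    .catch(error => console.log("caught later: " + error.message));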

Finally, imagine you have a rejected promise in your code, but there is no error handler attached. You can simulate this scenario with the following line of code.

Promise.reject("error!");

Some native environments and promise polyfills will warn you about unhandled promise rejections by displaying a message in the console of the developer tools. An unhandled promise rejection means your application could be missing out on a critical error!
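
In a Node.js environment you can observe these rejections yourself; a sketch (browsers expose a similar unhandledrejection event):

// Node.js: log any promise rejection that has no handler attached
process.on("unhandledRejection", function (reason) {
    console.error("Unhandled rejection:", reason);
});

Promise.reject("error!"); // triggers the listener above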

C# Fundamentals with Visual Studio 2015

Wednesday, September 30, 2015 by K. Scott Allen

I've created a new C# Fundamentals course with Visual Studio 2015. This course, like the previous course on Pluralsight, doesn't focus so much on syntax and language quirks. Instead,  I like to focus on what I consider the important fundamentals, like understanding the difference between reference types and value types, how types live inside assemblies, some basic design tips, and more.


Enjoy!

JavaScript Promise API

Tuesday, September 29, 2015 by K. Scott Allen

Yesterday's post looked at chaining promises. Now, let's take a closer look at the API available for promises.

The Promise class in JavaScript offers a few static convenience methods. For example, when you need to return a promise to a caller but you already have a value ready, the resolve method is handy.

let doAsyncWork = function () { 
    // note: no async work to perform 
    return Promise.resolve(10); 
}; 

doAsyncWork().then(result => { 
    expect(result).toBe(10); 
    done(); 
});

Likewise, the reject method will deliver a ready result to an error handler.

let doAsyncWork = function () {
    return Promise.reject("error!");
};

doAsyncWork().then(() => { }, message => {
    expect(message).toBe("error!");
    done();
});

The race method will combine multiple promises into a single promise. The single promise will resolve when the first of the multiple promises resolves. It’s a race to see who finishes first!

let slowExecutor = function (resolve, reject) {
    setTimeout(() => {
        resolve(9);
    }, 250);
};

let fastExecutor = function (resolve, reject) {
    setTimeout(() => {
        resolve(6);
    }, 100);
};

let p1 = new Promise(slowExecutor);
let p2 = new Promise(fastExecutor);

let p3 = Promise.race([p1, p2]);

p3.then(result => {
    expect(result).toBe(6);
    done();
});

In contrast to race, the all method will combine multiple promises into a single promise, and the single promise will resolve when all of the multiple promises resolve.

let slowExecutor = function (resolve, reject) {
    setTimeout(() => {
        resolve(9);
    }, 250);
};

let fastExecutor = function (resolve, reject) {
    setTimeout(() => {
        resolve(6);
    }, 100);
};

let p1 = new Promise(slowExecutor);
let p2 = new Promise(fastExecutor);

let p3 = Promise.all([p1, p2]);

p3.then(result => {
    expect(result[0]).toBe(9);
    expect(result[1]).toBe(6);
    done();
});

The results arrive in the same order as the promises appear in the array passed to all, regardless of which promise resolves first.

Of course, not all promises resolve successfully, so the next post in this series will look at error handling with promises.

Chaining Promises in JavaScript

Monday, September 28, 2015 by K. Scott Allen

In part 1 we looked at the basic use of native promises in JavaScript 2015. In this post we'll look at how to compose and chain promises.

Chaining promises can make asynchronous code read like a synchronous sequence of steps. For example, consider the following calculate method, which delivers a result using a promise.

let calculate = function (value) {
    return new Promise((resolve, reject) => {
        setTimeout(() => {
            resolve(value + 1);
        }, 0);
    });
}; 

Imagine you need to invoke the calculate method four times, and each time you invoke calculate you need to pass the result from the previous call into the next call (this would simulate HTTP APIs where you need to make multiple requests, all dependent on one another, to fetch all of the data required for a page). With promises, the series of method calls could look like the following.

calculate(1)
    .then(calculate)
    .then(calculate)
    .then(calculate)
    .then(verify);

function verify(result) {
    expect(result).toBe(5);
    done();
}

The above code also verifies (with Jasmine) that the final result is the value 5, because calculate will add 1 to the result on each invocation. The code works because each call to the then method of a promise will result in a new promise. You might think this happens because the calculate method returns a new promise, but a new promise will appear even if the function passed to the then method doesn’t explicitly produce a promise. As an example, let’s replace one call to calculate with an arrow function that simply returns result + 1.

calculate(1)
    .then(calculate)
    .then(result => result + 1)
    .then(calculate)
    .then(verify);

function verify(result) {
    expect(result).toBe(5);
    done();
}

The result of the above code is still the value 5, and the object returned from the then call with an arrow function is still a promise, because the then method ensures that the success handler’s return value is wrapped into a promise. If the success handler does not return a value, the new promise will deliver the value undefined to the next handler.
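
A quick sketch of that last point, reusing the calculate function from above:

calculate(1)
    .then(result => { /* forgot to return a value */ })
    .then(result => {
        expect(result).toBe(undefined);
        done();
    });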

To see how easy it is to create a new promise when you already have a value to resolve the promise, let’s look at the promise API in tomorrow's post.

Promises in ES2015 Part 1

Thursday, September 3, 2015 by K. Scott Allen

Asynchronous programming has been a hallmark of JavaScript programming since the beginning. Examples include waiting for button clicks, waiting for timers to expire, and waiting for network communications to complete. For most of JavaScript’s life, we’ve implemented these waiting activities using callback functions.

let calculate = function(callback) {

    setTimeout(() => {
        callback("This is the result"); 
    }, 0);

};

calculate(result => {
    expect(result).toBe("This is the result");
    done(); // done is required to complete 
            // this async test when using Jasmine
});

Over the years, as JavaScript applications grew in complexity, unofficial specifications started to emerge for a different approach to asynchronous programming using promises. A promise is an object that promises to deliver a result in the future. Promises are now an official part of the JavaScript language and offer advantages over the callback approach to asynchronous programming. Error handling is often easier using promises, and promises make it easier to compose multiple operations together. These advantages make code easier to read and write, as we’ll see in the upcoming posts.

Promise Objects

We need to look at promises from two different perspectives. The first is the perspective of code consuming a promise to wait for an asynchronous activity to complete. The second is the perspective of code responsible for producing a promise and managing an asynchronous activity behind the scenes.

A Promise Producer

In the previous example, the calculate function delivers a result asynchronously after a timer expires by invoking a callback function and passing along the result. When using promises, the calculate function could look like the following.

let calculate = function () {

    return new Promise((resolve, reject) => {
        setTimeout(() => {
            resolve(96);
        }, 0);
    });

};

Instead of taking a callback function as a parameter, the calculate method returns a new promise object. The Promise constructor takes a parameter known as the executor function. The previous code sample implements the executor function as an arrow function. The executor function itself takes two arguments. The first argument is the resolve function. Invoking the resolve function fulfills the promise and delivers a successful value to anyone who is waiting for a result from the promise. The second argument is the reject function. Invoking the reject function rejects the promise, which signals an error. The above code always resolves the promise successfully and passes along a result of 96.
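
For contrast, here is a hypothetical variation that uses the reject function when its input is invalid (the validation rule is an assumption for illustration):

let calculateOrFail = function (input) {
    return new Promise((resolve, reject) => {
        if (typeof input !== "number") {
            reject(new Error("input must be a number"));
            return;
        }
        setTimeout(() => resolve(input + 1), 0);
    });
};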

A Promise Consumer

Instead of passing a callback function to the calculate method, the consumer now invokes the calculate method and receives a promise object. A promise object provides an API that allows the consumer to execute code when the producer resolves or rejects the promise. The most important part of the promise API to a consumer is the then method. The then method allows the consumer to pass function arguments to execute when the promise resolves or rejects. The consumer code for calculate can now look like the following.

let success = function(result) {
    expect(result).toBe(96);
    done();
};

let error = function(reason) {
    // ... error handling code for a rejected promise
};

let promise = calculate();
promise.then(success, error);

The first argument to the then method is the success handler, while the second argument is the error handler.

On the surface, using promises might seem more involved than using simple callback functions. However, promises really start to shine when composing operations, and when handling errors. We’ll look at these topics in the coming posts.
