
What Every JavaScript Developer Should Know About ECMAScript 2015

Monday, November 23, 2015 by K. Scott Allen

Now available on Amazon, What Every JavaScript Developer Should Know About ECMAScript 2015 is the book I'd like to read about the new features in the JavaScript language. The book isn't a reference manual or an exhaustive list of everything in the ES2015 specification. Instead, I purposefully selected what I think are the important features we will use in everyday programming. I expect the reader will already have a good understanding of the pre-2015 JavaScript language.

The book will be free for the rest of this week.

Chapters include:

  • Working with variables and parameters
  • Classes
  • Functional features
  • Asynchronous JavaScript
  • Modules
  • APIs

Special thanks to Ruben Bartelink and Porter T. Baer for feedback on the early drafts.

An ASP.NET 5 Overview Video

Thursday, November 12, 2015 by K. Scott Allen

I recorded an ASP.NET 5 Overview during some unexpected free time and placed the video here on OdeToCode. I recorded it during the beta6 / beta7 timeframe, so the video will need some updates in the future, else you'll find two vast and trunkless legs of stone here.


The Evolution of JavaScript

Tuesday, November 10, 2015 by K. Scott Allen

I fired off an abstract titled The Evolution of JavaScript without giving a thought as to what it means for a language to evolve. After all, a language is not a living organism where Mother Nature applies the dispassionate sieve of natural selection against the replication errors in DNA. Wouldn’t a programming language rather fall under the viewpoint of intelligent design?

How can intelligent design explain the gap?


There is no formula to apply in moving a language forward. There are emotions, beliefs, and marketability. When the committee abandoned the 4th edition of the language, the language found diversity in pockets of community. jQuery showed us how programming the DOM can work when the API is consistent. Node showed us how to scale JavaScript and run outside 4-sided HTML elements. The cosmic impact of Steve Jobs exterminated natural predators, allowing natives like threads and sockets to emerge.

JavaScript is not a flat road in a planned community at the edge of the water. JavaScript is a Manhattan boulevard serving the enterprise and the pedestrian, angled to equal the constraints of terra firma. Always busy, bustling, dirty, and full of diversity, we can only hope the future doesn’t move too fast. Slow evolution has been good despite the frustrations. Don't listen to the slander, coriander, and mind the gap.

Maps and Sets in JavaScript

Wednesday, October 14, 2015 by K. Scott Allen

JavaScript has never offered a variety of data structures out of the box. If you needed any structure fancier than an array, you'd have to roll your own or borrow some code.

The 2015 specification brings a number of keyed collections to the language, including the Map, Set, WeakMap, and WeakSet.

Maps

Although you can treat any JavaScript object as a dictionary or key-value store, there are some drawbacks. Most notably, the only keys you can use to index into an object are strings. The new Map collection removes the restriction on key types and offers some additional conveniences. The core of the API revolves around the get and set methods.

let map = new Map();

// add a new key-value pair
map.set("key", 301);

// overwrite an existing value
map.set("key", 302);

expect(map.get("key")).toBe(302);

The delete and has methods are also handy.

let map = new Map();
map.set("key", 611);

expect(map.has("key")).toBe(true);

map.delete("key");
expect(map.has("key")).toBe(false);

A key can now be any type of object.

var someKey = { firstName: "Scott"};
var someValue = { lastName: "Allen"};

var map = new Map();
map.set(someKey, someValue);

expect(map.size).toBe(1);
expect(map.get(someKey).lastName).toBe("Allen");
expect(map.get({firstName:"Scott"})).toBeUndefined();

When using objects as keys, JavaScript will compare pointers behind the scenes, as you probably expect (and as the above code demonstrates).

You can also construct a Map using a two-dimensional array of key-value pairs.

let map = new Map([
    [1, "one"],
    [2, "two"],
    [3, "three"]
]);

Finally, maps are iterable in a number of ways. You can retrieve an iterator for the keys in a map (using the keys method), the values in a map (using the values method), and the key-value entries in a map (using the entries method, as you might have guessed). The map also has the magic [Symbol.iterator] method, so you can use a Map directly in a for...of loop.

let map = new Map([
    [1, "one"],
    [2, "two"],
    [3, "three"]
]);
    
let sum = 0;
let combined = "";
for(let pair of map) {
    sum += pair[0];
    combined += pair[1];
} 
    
expect(map.size).toBe(3);
expect(sum).toBe(6);
expect(combined).toBe("onetwothree");

map.clear();
expect(map.size).toBe(0);

Note how each entry comes out as an array with the key at index 0 and the value at index 1. This is a scenario where destructuring is handy.

for(let [key, value] of map) {
    sum += key;
    combined += value;
}
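
The keys, values, and entries methods mentioned earlier work the same way. Here is a quick sketch of all three, building a fresh map since the earlier sample cleared its entries.

let map = new Map([
    [1, "one"],
    [2, "two"],
    [3, "three"]
]);

let keyString = "";
for(let key of map.keys()) {
    keyString += key;
}
expect(keyString).toBe("123");

let valueString = "";
for(let value of map.values()) {
    valueString += value;
}
expect(valueString).toBe("onetwothree");

let entryString = "";
for(let [key, value] of map.entries()) {
    entryString += key + value;
}
expect(entryString).toBe("1one2two3three");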

Sets

Just like in mathematics, a Set maintains a collection of distinct objects. The API is slightly different from a Map. For example, you can construct a Set by passing any iterable object to the constructor.

let animals = ["bear", "snake", "elephant", "snake"];
let animalsSet = new Set(animals.values());

expect(animals.length).toBe(4);
expect(animalsSet.size).toBe(3);
expect(animalsSet.has("bear")).toBe(true);

animalsSet.delete("bear");
expect(animalsSet.size).toBe(2);
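
The constructor argument doesn't have to be an array iterator. Any iterable object will work, including the object returned by a generator function (another 2015 addition). A small sketch:

function* colors() {
    yield "red";
    yield "green";
    yield "red";   // the duplicate will collapse inside the set
}

let colorSet = new Set(colors());
expect(colorSet.size).toBe(2);
expect(colorSet.has("green")).toBe(true);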

Just like a Map, a Set compares objects by comparing references. Unlike collections in some other languages, there is no way to provide an alternative comparer strategy.

let scott = { name: "Scott" };

let people = new Set();
people.add(scott);
people.add(scott);
people.add( {name:"Scott"});

expect(people.size).toBe(2);
expect(people.has(scott)).toBe(true);
expect(people.has({name:"Scott"})).toBe(false);

Weak Collections

The WeakMap and WeakSet collections serve a similar purpose to their non-weak counterparts, but they hold only weak references to the objects inside, so an entry becomes eligible for garbage collection once no other references to the object remain. This behavior makes weak collections ideal for many extensibility scenarios. The special nature of weak collections means you cannot iterate over the collections, because an entry could disappear before the iteration completes. There are also some other small API differences. For example, the keys for a WeakMap and the values for a WeakSet can only be object types (no primitives like strings), and there is no size property to uncover the number of entries in these collections.
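
Here is a sketch of the WeakMap API to make the differences concrete. The object and property names are hypothetical, and the garbage collection itself is, by design, never observable from code.

let privateData = new WeakMap();
let person = { name: "Scott" };

privateData.set(person, { password: "secret" });
expect(privateData.has(person)).toBe(true);
expect(privateData.get(person).password).toBe("secret");

// keys must be object types, so the next line would throw a TypeError
// privateData.set("name", "Scott");

// once no other references to the person object remain, the entry
// no longer keeps the object alive
person = null;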

Coming up soon - more stimulating API additions in 2015 for objects, strings, and arrays.

DNX Framework Choices and ASP.NET 5

Tuesday, October 13, 2015 by K. Scott Allen

You can think of an ASP.NET 5 application as a DNX application, where DNX stands for the .NET Execution Environment.

And the projects we create with a project.json file? These are DNX projects. The DNX is not just a runtime environment, but also an SDK. Conceptually, I think of the new world like this:

[Diagram: DNX choices]

The DNX gives us the ability to cross compile a project across different renditions of a Common Language Runtime, and then execute on a specific variant of a CLR. We can compile against a full CLR, or a core CLR, or both. The compilation settings are controlled by the frameworks property in project.json.

"frameworks": {
  "dnx451": { },
  "dnxcore50": { }
}

The monikers to choose from currently include dnx451, dnx452, and dnx46 for a full CLR, or dnxcore50 for a core CLR. At execution time the code must execute against one of the targets specified in project.json. You can select the specific DNX environment using the project properties -> Debug tab in Visual Studio and run with or without the debugger.

[Screenshot: DNX selection in Visual Studio]

You can also use dnx from the command line to launch the application after selecting an environment with the .NET version manager (dnvm) tool. The following screenshot shows how to list the available runtimes using dnvm list, and then configure the 64-bit variant of the full CLR, beta 7 version, as the execution environment.

[Screenshot: using dnvm to select a DNX environment]

After the above command completes, the active dnx is the one from the folder for the 64-bit, beta 7 variant of the full CLR, and any application launched using dnx will run in the same environment.

Choosing

“Which environment is right for me?” is an obvious question when starting a DNX project. What follows is my opinion.

Choose Both

In this scenario, we’ll include both a core CLR and one version of a full CLR in the frameworks of project.json.

"frameworks": {
  "dnx46": { },
  "dnxcore50": { }
},

This choice is ideal for projects with reusable code. Specifically, projects you want to build as NuGet packages and consume from other DNX applications. In this scenario you won’t know which framework the consumer might need to use.

This choice also works if you haven’t made a decision on which CLR to use, but ultimately for applications (not libraries), I would expect to target a single framework.

Choose The Full CLR

In this scenario, we’ll specify only a full CLR in the frameworks section.

"frameworks": {
  "dnx46": { },
}

Targeting a full CLR for development and deployment gives you the best chance of compatibility with existing code. The full framework includes WCF, three or four types of XML serializers, GDI support, and the full range of reflection APIs. Of course, you’ll only be developing and running on Windows machines.

Choose the Core CLR

In this scenario, we’ll specify only the core CLR in the frameworks section.

"frameworks": {
  "dnxcore50": { }
}

With the core CLR you can develop and deploy on Windows, Linux, and OS X. Another primary advantage of the core CLR, even if you only run on Windows, is the ability to ship the framework bits with an application, meaning you don’t need a full .NET install on any server where you want to deploy. In fact, you can even have multiple applications on the same server using different versions of the core CLR (side-by-side versioning), and update one application without worrying about breaking the others. One downside to using a core CLR is that your existing code might require framework features that do not exist in the core. At least, features that don’t exist today.

Summary

I expect the core CLR will have a slow uptake but eventually be the primary target for all DNX applications. The advantages of going cross platform and providing an xcopy deployment of the framework itself will outweigh the reduced feature set. Most of the sub-frameworks in the full CLR are frameworks we don’t need.

Modules in JavaScript Circa 2015

Wednesday, October 7, 2015 by K. Scott Allen

Until 2015, the JavaScript language officially offered only two types of variable scope – global scope and function scope. Avoiding global scope has been a primary architectural goal of nearly every library and framework authored over the last ten years. Avoiding global scope in the browser has meant we’ve relied heavily on closures and syntactical oddities like the immediately invoked function expression (IIFE) to provide encapsulation.

(function() {

    // code goes here, inside an IIFE

}());

Avoiding global scope also meant nearly every library and framework for the browser would expose functionality through a single global variable. Examples include $ or jQuery for the jQuery library, or _ for underscore and lodash.
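
A sketch of that pre-2015 pattern, using a hypothetical library name, shows an IIFE exposing a public API through a single global variable.

var myLib = (function() {

    // private, visible only inside the closure
    function formatGreeting(name) {
        return "Hello, " + name;
    }

    // the public API, exposed through one global variable
    return {
        greet: function(name) {
            return formatGreeting(name);
        }
    };

}());

myLib.greet("Scott"); // "Hello, Scott"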

After NodeJS arrived on the scene in 2009, Node developers found themselves creating larger code bases and consuming a larger number of libraries. This community adopted and put forward what we now call the CommonJS module standard. Shortly afterward, another community standard, the Asynchronous Module Definition standard (AMD), appeared for browser programming.
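
As a rough sketch of the difference (with a hypothetical math module), CommonJS assigns exports to a module.exports object and imports with require, while AMD wraps the module in a call to define.

// CommonJS (Node) - math.js
module.exports = {
    add: function(x, y) {
        return x + y;
    }
};

// CommonJS consumer
var math = require("./math");
math.add(2, 3);

// AMD - math.js
define([], function() {
    return {
        add: function(x, y) {
            return x + y;
        }
    };
});

// AMD consumer
require(["math"], function(math) {
    math.add(2, 3);
});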

ECMAScript 2015 brings an official module standard to the JavaScript language. This module standard uses a different syntax than both CommonJS and AMD, but tools like WebPack and polyfills like the ES6 module loader make all of the module standards mostly interoperable. At this point in time, some preprocessing or polyfills are required to make the new module syntax work. Even though the syntax of the language is complete and standardized, the browser APIs and behaviors for processing modules are still a work in progress.

First, Why Modules?

The purpose of a module system is to allow JavaScript code bases to scale up in size. Modules give us a tool to manage the complexity of a large code base by providing just a few important features.

First, modules in JavaScript are file based. When using ES2015, instead of thinking about a script file, you should think of a script module. A file is a module. By default, any code you write inside the file is local to the module. Variables are no longer in the global scope by default, and there is no need to write a function or an IIFE to control the scope of a variable. Modules give us an implicit scope to hide implementation details.

Of course, some of the code you write in a module is code you might want to expose as an API and consume from a different module. ES2015 provides the export keyword for this purpose. Any object, function, class, value, or variable that you want to make available to the outside world is something you must explicitly export with the export keyword.

In a second module, you would use the import keyword to consume any exports of the first module.

The ability to spread code across multiple files and directories while still being able to access the functionality exposed by any other file without going through a global mediator makes modules an important addition to the JavaScript language. Perhaps no other feature of 2015 will have as much of an impact on the architecture of applications, frameworks, and libraries as modules. Let’s look at the syntax.

Module Syntax

Imagine you want to use an object that represents a person, and in a file named humans.js you place the following code.

function work(name) {
    return `${name} is working`;
}

export let person = {
    name: "Scott",
    doWork() {
        return work(this.name);
    }
};

Since we are working with modules, the work function remains hidden in the module scope, and no code outside of the module will have access to the work function. The person variable is an export of the module. More specifically, we call person a named export. Code inside of other modules can import person and work with the referenced object.

import {person} from "./lib/humans"

describe("The humans module", function () {

    it("should have a person", function () {
        expect(person.doWork()).toBe("Scott is working");
    });

});

There are a few points to make about the previous code snippet.

First, notice the module name does not require a .js extension. The filename is humans.js, but the module name for the file is humans.

Second, the humans module is in a subfolder of the test code in this example, so the module specifier is a relative path to the module (./lib/humans).

Finally, curly braces enclose the imports list from the humans module. The imports are a list because you can import more than one named export from another module. For example, if the humans module also exported the work function, the test code could have access to both exports with the following code.

import {person, work} from "./lib/humans"

You also have the ability to alias an import to a different name.

import {person as scott, work} from "./lib/humans"

describe("The humans module", function () {

    it("should have a person", function () {
        // now using scott instead of person
        expect(scott.doWork()).toBe("Scott is working");
    });

    it("should have a work function", function () {
        expect(work).toBeDefined();
    });
    
});

In addition to exporting variables, objects, values, and functions, you can also export a class. Imagine the humans module with the following code.

function work(name) {
    return `${name} is working`;
}

export class Person {

    constructor(name) {
        this.name = name;
    }

    doWork() {
        return work(this.name);
    }

}

Now the test code would look like the following.

import {Person} from "./lib/humans"

describe("The humans module", function () {

    it("should have a person class", function () {
        var person = new Person("Scott");
        expect(person.doWork()).toBe("Scott is working");

    });

});

Modules can also export a list of symbols using curly braces, instead of using the export keyword on individual declarations. As an example, we could rewrite the humans module and place all the exports in one location at the bottom of the file.

function work(name) {
    return `${name} is working`;
}

class Person {

    constructor(name) {
        this.name = name;
    }

    doWork() {
        return work(this.name);
    }
}

export {Person, work as worker}

Notice how an export list can also alias the name of an export, so the work function exports with the name worker.
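
A consumer of this version of the module would import the exported names, including the alias.

import {Person, worker} from "./lib/humans"

let person = new Person("Scott");
expect(person.doWork()).toBe("Scott is working");
expect(worker("Scott")).toBe("Scott is working");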

Default Exports

The 2015 module standard allows each module to have a single default export. A module can have a default export and still export other names, but having a default export does impact how another module will import the default. First, here is how the humans module would look with a default export of the Person class.

function work(name) {
    return `${name} is working`;
}

export default class Person {

    constructor(name) {
        this.name = name;
    }

    doWork() {
        return work(this.name);
    }
    
}

As the code demonstrates, the default export for a module uses the default keyword.

A module that needs to import the default export of another module doesn’t specify a binding list with curly braces for the default export. Instead, the module simply defines a name for the incoming default, and the name doesn’t need to match the name used inside the exporting module.

import Human from "./lib/humans"

describe("The humans module", function () {

    it("should have a default export as a class", function () {
        var person = new Human("Scott");
        expect(person.doWork()).toBe("Scott is working");
    });

});

Mass Exportation and Importation

An import statement can use an asterisk to capture all the named exports of a module into a namespace object. Let’s change the humans module once again to export the Person class both as a named export and as a default export, and also export the work function.

function work(name) {
    return `${name} is working`;
}

class Person {
    constructor(name) {
        this.name = name;
    }
    doWork() {
        return work(this.name);
    }
}

export {work, Person}
export default Person

The test code can have access to all the exports of the humans module using import *.

import * as humans from "./lib/humans"

describe("The humans module", function () {

    it("should have a person class", function () {
        var person = new humans.Person("Scott");
        expect(person.doWork()).toBe("Scott is working");
    });

    it("should have a default export", function () {
        expect(humans.default).toBeDefined();
    });

}); 

Notice how the test code now has two paths to reach the Person class. One path is via humans.Person, the other path is humans.default.

An export statement can also use an asterisk. The asterisk is useful in scenarios where you want one module to gather exports from many sub-modules and publish them all as a unit. In CommonJS this scenario typically uses a file named index.js, and many of the existing module loader polyfills support using an index.js file when importing a directory.

For example, let’s add a file named creatures.js to the lib folder.

export class Animal {

    constructor(name) {
        this.name = name;
    }    

}

An index.js file in the lib folder can now combine the humans and creatures modules into a single module.

export * from "./creatures"
export * from "./humans"

The test code can now import the lib directory and access features of both underlying modules.

import {Person, Animal} from "./lib/"

describe("The combined module", function () {

    it("should have a person class", function () {
        var person = new Person("Scott");
        expect(person.doWork()).toBe("Scott is working");
    });

    it("should have an Animal class", function () {
        expect(new Animal("Beaker").name).toBe("Beaker");
    });

});

Importing For the Side-Effects

In some scenarios you only want to reference a module so the code inside can execute and produce side-effects in the environment. In this case the import statement doesn't need to name any imports.

import "./lib"

Summary

We're coming to the close on this long series of posts covering ES2015. In the last few posts we'll look at new APIs the standard brings to life.

Authorization Policies and Middleware in ASP.NET 5

Tuesday, October 6, 2015 by K. Scott Allen

Imagine you want to protect a folder full of static assets in the wwwroot directory of an ASP.NET 5 project. There are several different approaches you could take to solve the problem, but here is one flexible solution using authorization policies and middleware.

Services

First, in the Startup class for the application, we will add the required services.

public void ConfigureServices(IServiceCollection services)
{
    services.AddAuthentication();
    services.AddAuthorization(options =>
    {
        options.AddPolicy("Authenticated", policy => policy.RequireAuthenticatedUser());
    });
}

For the default authorization service we’ll make a named policy available, the Authenticated policy. A policy can contain any number of requirements allowing you to check claims and identities. In this code we will ultimately be using the built-in DenyAnonymousAuthorizationRequirement, because this is the type of requirement returned by the RequireAuthenticatedUser method. But again, you could make the requirement verify any number of characteristics about the user and the request.

The name Authenticated is important, because we will refer to this policy when authorizing users for access to a protected folder.

Middleware

Next, let’s write a piece of middleware named ProtectFolder and start with an options class to parameterize the middleware.

public class ProtectFolderOptions
{
    public PathString Path { get; set; }
    public string PolicyName { get; set; }
}

There is also the obligatory extension method to add the middleware to the pipeline.

public static class ProtectFolderExtensions
{
    public static IApplicationBuilder UseProtectFolder(
        this IApplicationBuilder builder, 
        ProtectFolderOptions options)
    {
        return builder.UseMiddleware<ProtectFolder>(options);
    }
}

Then comes the middleware class itself.

public class ProtectFolder
{
    private readonly RequestDelegate _next;
    private readonly PathString _path;
    private readonly string _policyName;
   
    public ProtectFolder(RequestDelegate next, ProtectFolderOptions options)
    {
        _next = next;
        _path = options.Path;
        _policyName = options.PolicyName;
    }

    public async Task Invoke(HttpContext httpContext, 
                             IAuthorizationService authorizationService)
    {
        if(httpContext.Request.Path.StartsWithSegments(_path))
        {
            var authorized = await authorizationService.AuthorizeAsync(
                                httpContext.User, null, _policyName);
            if (!authorized)
            {
                await httpContext.Authentication.ChallengeAsync();
                return;
            }
        }

        await _next(httpContext);
    }
}

The Invoke method on a middleware object is injectable, so we’ll ask for the current authorization service and use the service to authorize the user if the current request is heading towards a protected folder. If authorization fails we use the authentication manager to challenge the user, which typically redirects the browser to a login page, depending on the authentication options of the application.

Pipeline Configuration

Back in the application’s Startup class, we’ll configure the new middleware to protect the /secret directory with the “Authenticated” policy.

public void Configure(IApplicationBuilder app)
{
    app.UseCookieAuthentication(options =>
    {
        options.AutomaticAuthentication = true;
    });

    app.UseProtectFolder(new ProtectFolderOptions
    {
        Path = "/Secret",
        PolicyName = "Authenticated"
    });

    app.UseStaticFiles();

    // ... more middleware
}

Just make sure the protection middleware is in place before the middleware to serve static files.