Promises in ES2015 Part 1

Thursday, September 3, 2015 by K. Scott Allen

Asynchronous programming has been a hallmark of JavaScript since the beginning. Examples include waiting for button clicks, waiting for timers to expire, and waiting for network communications to complete. For most of JavaScript’s life, we’ve implemented these waiting activities using callback functions.

let calculate = function(callback) {

    setTimeout(() => {
        callback("This is the result"); 
    }, 0);

};

calculate(result => {
    expect(result).toBe("This is the result");
    done(); // done is required to complete 
            // this async test when using Jasmine
});

Over the years, as JavaScript applications grew in complexity, unofficial specifications like Promises/A+ started to emerge for a different approach to asynchronous programming using promises. A promise is an object that promises to deliver a result in the future. Promises are now an official part of the JavaScript language and offer advantages over the callback approach to asynchronous programming. Error handling is often easier using promises, and promises make it easier to compose multiple operations together. These advantages make code easier to read and write, as we’ll see in the upcoming posts.

Promise Objects

We need to look at promises from two different perspectives. The first is the perspective of the code consuming a promise to wait for an asynchronous activity to complete. The second is the perspective of the code responsible for producing a promise and managing an asynchronous activity behind the scenes.

A Promise Producer

In the previous example, the calculate function delivers a result asynchronously after a timer expires by invoking a callback function and passing along the result. When using promises, the calculate function could look like the following.

let calculate = function () {

    return new Promise((resolve, reject) => {
        setTimeout(() => {
            resolve(96);
        }, 0);
    });

};

Instead of taking a callback function as a parameter, the calculate method returns a new promise object. The Promise constructor takes a parameter known as the executor function. The previous code sample implements the executor function as an arrow function. The executor function itself takes two arguments. The first argument is the resolve function. Invoking the resolve function fulfills the promise and delivers a successful value to anyone who is waiting for a result from the promise. The second argument is the reject function. Invoking the reject function rejects the promise, which signals an error. The above code always resolves the promise successfully and passes along a result of 96.
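
For illustration, here is a version of calculate that can also reject. The shouldFail parameter is hypothetical, added only to force the error path; calling calculate() with no argument still resolves with 96.

let calculate = function (shouldFail) {

    return new Promise((resolve, reject) => {
        setTimeout(() => {
            if (shouldFail) {
                // signal failure to anyone waiting on the promise
                reject(new Error("calculation failed"));
            }
            else {
                resolve(96);
            }
        }, 0);
    });

};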

A Promise Consumer

Instead of passing a callback function to the calculate method, the consumer now invokes the calculate method and receives a promise object. A promise object provides an API that allows the consumer to execute code when the producer resolves or rejects the promise. The most important part of the promise API for a consumer is the then method. The then method allows the consumer to pass in functions to execute when the promise resolves or rejects. The consumer code for calculate can now look like the following.

let success = function(result) {
    expect(result).toBe(96);
    done();
};

let error = function(reason) {
    // ... error handling code for a rejected promise
};

let promise = calculate();
promise.then(success, error);

The first argument to the then method is the success handler, while the second argument is the error handler.

On the surface, using promises might seem more involved than using simple callback functions. However, promises really start to shine when composing operations, and when handling errors. We’ll look at these topics in the coming posts.
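
As a small taste of composition, and sticking with the calculate function above: each then call returns a new promise, so then calls can chain, and each success handler receives the value produced by the step before it.

calculate()
    .then(result => result + 1) // transforms 96 into 97
    .then(result => {
        expect(result).toBe(97);
        done();
    });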

Building Applications with Aurelia

Wednesday, September 2, 2015 by K. Scott Allen

Earlier this summer I released a “Building Applications with Aurelia” course on Pluralsight.

I’ve enjoyed working with Aurelia since the early days and seeing how developers combine Aurelia with other frameworks and libraries. You can follow the latest news, too, by watching the Aurelia blog.

Delegating yield in JavaScript

Tuesday, September 1, 2015 by K. Scott Allen

In an earlier post we looked at generator functions in JavaScript.

Generator functions can call into other generator functions and yield values received from them. A generator can even unroll or flatten another generator’s result into its own iterator using yield*. As an example, consider the following generator function, which yields two strings.

let inner = function*() {
    yield "Hello";
    yield "there";
}

The next generator will call the inner generator using the yield* syntax.

let outer = function*() {
    yield* inner();
    yield "World";
}

The yield* syntax will flatten the result from inner so that the outer generator yields three strings.

let result = Array.from(outer());
expect(result).toEqual(["Hello", "there", "World"]);

If the outer generator used yield instead of yield*, the result of outer would contain inner’s iterator object itself, followed by the string “World”.
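
A quick sketch makes the difference concrete. The outerWithYield name below is just for illustration; the first value it yields is the iterator object, not the strings inside it.

let outerWithYield = function*() {
    yield inner();  // yields the iterator object, not its values
    yield "World";
}

let result = Array.from(outerWithYield());
expect(result.length).toBe(2);
expect(typeof result[0].next).toBe("function"); // an iterator object
expect(result[1]).toBe("World");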

ECMAScript 2015 Iterators Revisited

Tuesday, May 5, 2015 by K. Scott Allen

In an earlier post, we saw how to work with iterators at a low level and use an iterator’s next method to move from one item to the next item. What’s interesting about iterators in JavaScript is how the consumer of an iterator can influence the internal state of the iterator by passing a parameter to the next method.

As an example, let’s look at the following range method which you can use to generate a sequence of numbers from start to end.

let range = function*(start, end) {

    let current = start;

    while(current <= end) {
        yield current;
        current += 1;
    }
}

If we ask range to give us numbers from one to ten, the range method will behave as expected.

let result = [];
let iterator = range(1,10);
let next = iterator.next();

while(!next.done) {
    result.push(next.value);
    next = iterator.next();
}

expect(result).toEqual([1,2,3,4,5,6,7,8,9,10]);

Now let’s make a small change to the range method. When we yield the current value, we’ll place the yield statement on the right-hand side of an assignment expression.

let range = function*(start, end) {
    let current = start;

    while(current <= end) {
        let delta = yield current;
        current += delta || 1;
    }
}

The range generator now has the ability to take a parameter from the consumer of the iterator and use this parameter to calculate the next value. If the consumer does not pass a value, the code defaults the increment to 1; otherwise, the code uses the value passed in to compute the next value. If we iterate over range like we did before, we will see the same results.

let result = [];
let iterator = range(1,10);
let next = iterator.next();

while(!next.done) {
    result.push(next.value);
    next = iterator.next();
}

expect(result).toEqual([1,2,3,4,5,6,7,8,9,10]);

Passing a parameter to next is optional. However, with the following code we’ll pass the number 2 on each call to next, which effectively increments the current value in the iterator by two instead of one.

let result = [];
let iterator = range(1,10);
let next = iterator.next();

while(!next.done) {
    result.push(next.value);
    next = iterator.next(2);
}

expect(result).toEqual([1,3,5,7,9]);

We could also pass the current value to next and produce a more interesting sequence.

let result = [];
let iterator = range(1,10);
let next = iterator.next();

while(!next.done) {
    result.push(next.value);
    next = iterator.next(next.value);
}

expect(result).toEqual([1,2,4,8]);

If we were to write the range method the hard way instead of using yield, it might look like the following.

let range = function(start, end) {

    let firstCall = true;
    let current = start;

    return {

        next(delta = 1) {

            let result = { value: undefined, done: true};

            if(firstCall){
                firstCall = false;
            }
            else {
                current += delta;
            }

            if(current <= end) {
                result.value = current;
                result.done = false;
            }
            
            return result;
        }
    }
}

In this version of the code it is easy to see how the next method receives a parameter you can use in the iterator logic. When using yield, the parameter arrives as the return value of the yield expression. Also note that when implementing the range function using yield there is no ability to grab a parameter on the first call to the next method. The first call to next starts the iteration and returns a value, but the parameter received by the first yield in a generator method will be the value passed to the second invocation of next.
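
A short test demonstrates the behavior, and it passes against either implementation of range.

let iterator = range(1, 10);

expect(iterator.next(5).value).toBe(1); // the 5 is ignored; iteration starts at 1
expect(iterator.next(5).value).toBe(6); // this 5 becomes the first delta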

Start Your Transpilers

Friday, May 1, 2015 by K. Scott Allen

Early last year I began to take ECMAScript 2015 seriously and leaned towards using the new language features of JavaScript sooner rather than later. The tools needed to make the new language work in existing browsers already existed. Some people thought I was crazy.

This year the ES2015 / ES6 specification is at the final draft stage and is only waiting for a final blessing. I’m even more convinced that the language is ready to use. In fact, why stop with ES2015? ES7 already has some firm specs and the tools are even better.

Consider these points:

1. The technical committee responsible for ECMAScript specifications has committed to delivering a new version of the language every year.  

2. The best way to keep up with the challenges presented by contemporary applications is to tackle those challenges using the best features of a language. ES6 is considerably better than ES5 and dramatically changes how to build and organize abstractions. ES7 should include better support for metaprogramming and async work.

3. Next generation frameworks, like Aurelia, are already taking advantage of ES7 features. It is natural for the people who build frameworks to stay on the cutting edge and use the best features a language has to offer.

4. Browsers will perpetually be behind the curve, but tools and polyfills will make most new language features work. There will always be exceptions that require native support from the JavaScript VM, like collections using weak references, but almost everything in ES2015 can be polyfilled or transformed into working syntax.

I still see some resistance to using new tools in a JavaScript build process, even though most environments have been using tools to minify and concatenate JavaScript for years. It is time to rethink any resistance to source code transformations. For the above reasons, I think everyone should start working with a transpiler (ES* -> ES5) or a compiler (TypeScript -> ES5) without further ado.

Serialization Options With Azure DocumentDB

Tuesday, April 28, 2015 by K. Scott Allen

Behind the scenes, Azure’s DocumentDB uses Json.NET to serialize objects. Json.NET offers some flexibility in how serialization behaves through a number of serialization settings, but the DocumentDB SDK doesn’t expose these settings.

What if we want to change, say, the NullValueHandling behavior to ignore null values?

Michalis Zervos offers one solution: serialize objects using your own code, then save the result as a document with the stream-based APIs of DocumentDB. This approach gives you the ultimate control over when and how to serialize each object.

Another approach is to use the global default settings of Json.NET.

JsonConvert.DefaultSettings = () =>
{
    return new JsonSerializerSettings
    {
        NullValueHandling = NullValueHandling.Ignore
    };
};

Being global, these settings will apply to all the documents saved into Azure, as well as any other Json.NET serialization that might be happening in the same application, so make sure that is what you want. With the setting in place, serialization can turn this document:

{
  "Name": "Scott",
  "Medications": null,
  "Procedures": null,
  "id": "c15a4b48-bc7b-4440-a32b-88a9c345f705"
}

... into this one:

{
  "Name": "Scott",
  "id": "e388eb16-de6f-4de6-9b11-851c2a67ef9e"
}

DocumentDb Limits and Statistical Outliers

Monday, April 27, 2015 by K. Scott Allen

Azure’s DocumentDB has an appealing scalability model, but you must pay attention to the limits and quotas from the start. Of particular interest to me is the maximum request size for a document, which is currently 512 KB. When DocumentDB first appeared the limit was a paltry 16 KB, so 512 KB feels roomy, but how much real application data can that hold?

Let’s say you need to store a collection of addresses for a hospital patient.

public class Patient
{
    public string Id { get; set; }
    public IList<Address> Addresses { get; set; }
}

public class Address
{
    public string Description { get; set; }
    public string City { get; set; }
    public string Country { get; set; }
}

In theory the list of address objects is an unbounded collection and could exceed the maximum request size and generate runtime errors. But in practice, how many addresses could a single person be associated with? There is the home address, the business address, perhaps a temporary vacation address. You don’t want to complicate the design of the application to support unlimited addresses, so instead you might enforce a reasonable limit in the application logic and tell customers that having more than 5 addresses on file is not supported.

A Harder Problem

Here’s a slightly trickier scenario.

public class Patient
{
    public string Id { get; set; }
    public IList<Medication> Medications { get; set; }
}

public class Medication
{
    public string Code { get; set; }
    public DateTime Ordered { get; set; }
    public DateTime Administered { get; set; }
}

Each medication entry consists of an 8-character code and two DateTime properties, which gives us a fixed size for every medication a patient receives. But again, the potential problem is the total number of medications a patient might receive.

The first question, then: how many Medication objects can a 512 KB request support?

The answer, estimated with a calculator and verified with code, is just over 6,000.
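
The back-of-envelope math is easy to reproduce. The sketch below is a rough estimate of my own, assuming each entry serializes with the 8-character code and two ISO 8601 date strings; the exact count depends on the date format Json.NET produces.

// rough estimate of how many fixed-size entries fit in a 512 KB request
let medication = JSON.stringify({
    Code: "ABCD1234",
    Ordered: "2015-04-27T09:30:00",
    Administered: "2015-04-27T09:45:00"
});

let limit = 512 * 1024;               // 512 KB request limit
let perEntry = medication.length + 1; // + 1 for the comma between entries
console.log(Math.floor(limit / perEntry)); // a number in the neighborhood of 6,000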

The second question, then: is 6,000 a safe number?

To answer the second question, I found it useful to analyze some real data, which put the odds of busting the request size at roughly 1 in 100,000, just over 4 standard deviations. Generally a 4 sigma number is good enough to say “it won’t happen”, but what’s interesting when operating at scale is that with 1 million patients you’ll observe the 4 sigma event not once, but 10 times.
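
The arithmetic behind that last claim is worth spelling out.

// 4 sigma odds sound rare, until you multiply by the population size
let patients = 1000000;
let probability = 1 / 100000;        // odds of one patient busting the limit
console.log(patients * probability); // expect 10 oversized documents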

From the business perspective, the result is unacceptable, so back to the drawing board.

We used to say that you spend 80% of your time on 20% of the problem. At scale, there is the possibility of spending 80% of your time on 0.000007% of the problem.
