Using jspm with Visual Studio 2015 and ASP.NET 5

Wednesday, February 18, 2015 by K. Scott Allen

If you’ve been following along with the developments for ASP.NET vNext and Visual Studio 2015, you’ve probably seen that the JavaScript tooling of choice for File –> New projects is Grunt and Bower.

But, what if you wanted to use more sophisticated and productive tools, like Gulp and jspm?

In this post, we’ll get set up using jspm instead of Bower, and write some ES6 code with CTP5 of Visual Studio 2015.

jspm

In a nutshell: jspm combines package management with module loading infrastructure and transpilers to provide a magical experience. You can write code using today’s JavaScript, or tomorrow’s JavaScript (ES6), and use any type of module system you like (ES6, AMD, or CommonJS). jspm figures everything out. By integrating package management with a smart script loader, jspm means less work for us.

1. Get Started

In VS2015 we’ll start with a minimal set of features by using File –> New Project, selecting “ASP.NET Web Application” and choosing the “ASP.NET 5 Empty” template. With ASP.NET 5, the resulting project looks like the following.

ASP.NET 5 Empty Project

Notice the wwwroot folder, which is new for ASP.NET 5. The wwwroot folder is the default location for static assets like CSS and JS files, and is literally the root of the web site. We can create a new default.html file in wwwroot as the entry page for the application.

<html>
<head>
    <meta charset="utf-8" />
    <title>Working with jspm</title>
</head>
<body>
    <div id="output">Testing</div>
</body>
</html>

Pressing Ctrl+F5 should run the application as always, but we’ll need to add /default.html to the URL as the web server won’t find a default file without some extra configuration (see ‘Making default.html the default’ later in this post).

default.html running in the browser

2. Get jspm

Once you have both Node.js and a command line Git client installed, jspm is simple to set up.

npm install -g jspm
jspm init

You’ll want to run jspm init from the root of the project, which is one level above the wwwroot folder.

The init command will ask a series of questions to set up a package.json file (yes, the same package.json that npm uses, unlike Bower, which creates its own JSON file). During the questioning you can choose the ES6 transpiler to use. As you can see below, I prefer the 6to5 transpiler over Traceur these days, but note that 6to5 was just renamed to Babel last week.

Here’s how to answer:

Package.json file does not exist, create it? [yes]:
Would you like jspm to prefix the jspm package.json properties under jspm? [yes]:
Enter server baseURL (public folder path) [./]: ./wwwroot
Enter project code folder [wwwroot\]:
Enter jspm packages folder [wwwroot\jspm_packages]:
Enter config file path [wwwroot\config.js]:
Configuration file wwwroot\config.js doesn't exist, create it? [yes]:
Enter client baseURL (public folder URL) [/]:
Which ES6 transpiler would you like to use, Traceur or 6to5? [traceur]: 6to5
ok   Verified package.json at package.json
     Verified config file at wwwroot\config.js
     Looking up loader files...
       system.js
       system.src.js
       system.js.map
       es6-module-loader.js
       es6-module-loader.src.js
       es6-module-loader.js.map
       6to5.js
       6to5-runtime.js
       6to5-polyfill.js

     Using loader versions:
       es6-module-loader@0.13.1
       systemjs@0.13.2
       6to5@3.5.3
ok   Loader files downloaded successfully

The most important answer is the public folder path (./wwwroot).

We want jspm to work with a package.json file at the root of the project, but store downloaded packages and configuration in the wwwroot folder. This is one way to work with jspm, but certainly not the only way. If there is enough interest, we can look at building and bundling in a future post.
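For reference, the jspm settings land under a jspm property inside package.json, reflecting the answers above. The exact shape depends on the jspm version, but it should look roughly like this:

{
  "jspm": {
    "directories": {
      "baseURL": "wwwroot"
    }
  }
}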

3. Write Some Script

Next, we’ll update the default.html file to bring in System.js, the dynamic module loader installed by jspm, and the config file created by jspm, which tells the loader where to make requests for specific modules and libraries.

Inside the body tag, the markup looks like:

<div id="output">Testing</div>

<script src="jspm_packages/system.js"></script>
<script src="config.js"></script>
<script>System.import("app/main");</script>
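As an aside, System.import returns a promise for the module, so you can chain a catch handler if you want loading failures handled explicitly. A minimal sketch:

System.import("app/main").catch(function (error) {
    // surface any module loading failure with your own handling
    console.error(error);
});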

If we refresh the browser, we should see an error in the console saying app/main.js couldn’t be loaded. The error comes from the System.import call in the markup above, and the syntax should be somewhat familiar to anyone who has used AMD or CommonJS modules before. The file can’t be loaded because it doesn’t exist yet, so let’s create a main.js file in an app folder under wwwroot.

var element = document.getElementById("output");
element.innerText = "Hello, from main.js!";

Simple code, but a refresh of the browser should tell us everything is working.

systemjs running the app

Let’s make the code more interesting by adding a greeting.js file to the app folder in wwwroot.

export default element => {
    element.innerText = "Hello from the greeting module!";
};

Now we can change main.js to make use of the new greeting component (which is just an ES6 arrow function).

import greeter from "./greeting";

greeter(document.getElementById("output"));

The import / export syntax we are looking at is the new ES6 module syntax, which I haven’t covered yet in my series of ES6 posts, but we’ll get there. With the magic of System.js and friends, all of this code, the ES6 modules and arrow functions included, just works.

running with es6 modules
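As a point of comparison (the demo doesn’t need this), the same module could use a named export instead of a default export:

// greeting.js, using a named export instead of a default export
export const greet = element => {
    element.innerText = "Hello from the greeting module!";
};

// main.js then imports the name inside curly braces
import { greet } from "./greeting";

greet(document.getElementById("output"));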

4. Install Some Packages

The beauty of jspm is that we can now swing out to the command line to install new packages, like moment.js:

> jspm install moment
     Updating registry cache...
     Looking up github:moment/moment
     Downloading github:moment/moment@2.9.0
ok   Installed moment as github:moment/moment@^2.9.0 (2.9.0)
ok   Install tree has no forks.

ok   Install complete.


This is just as easy as installing a package with Bower, but with jspm the package is ready to use immediately. Let’s change greeting.js to use moment:

import moment from "moment";

export default element => {

    let pleasantry = "Hello!";
    let timeLeft = moment().startOf("hour").fromNow();

    element.innerText = `${pleasantry} The hour started ${timeLeft}`;
};

And now the application looks like the following.

running with jspm and moment.js
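Behind the scenes, jspm install also updated wwwroot/config.js so that the bare module name "moment" resolves to the downloaded package. The entry should look roughly like this (the exact version and paths depend on what jspm resolved):

System.config({
  map: {
    "moment": "github:moment/moment@2.9.0"
  }
});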

5. Make default.html the default

In ASP.NET 5 there is no web.config, but modifying the behavior for static files is still fairly easy (not nearly as easy as before, but still easy). Step one is to install the NuGet package for static file processing (Microsoft.AspNet.StaticFiles), then add the following code to Startup.cs.

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        app.UseFileServer(new FileServerOptions
        {
             EnableDefaultFiles = true,
             EnableDirectoryBrowsing = true,                   
        });            
    }
}

It’s really just EnableDefaultFiles that will make our page appear at the root of the web site, because the static file handler will go looking for a default.html file when a request arrives for the root.

6. Caveats

While everything shown here works well, the biggest hurdle to using ECMAScript 6 with Visual Studio 2015 is the editor, which doesn’t like ES6 syntax. You’ll want to keep another editor handy to avoid all the red squiggles under the import and export keywords, for example. We’ll have to hope ES6 syntax support is ready to go by RTM, because it is time to start using the new JavaScript.

An AngularJS Playbook

Tuesday, February 17, 2015 by K. Scott Allen

My latest Pluralsight release is “An AngularJS Playbook”.

This course is geared for developers who already know Angular. Topics include:

- How to manage API access tokens

- Strategies for building robust error and diagnostic services

- Data-binding instrumentation

- Working with UI Router and UI Bootstrap

- The latest fashion in form validation techniques

- Custom directives, custom directives, and custom directives

- Techniques to wrap and integrate 3rd party code with directives and services.

As always, thanks for watching!

AngularJS Playbook Video Course

Generators in ECMAScript 6

Monday, February 16, 2015 by K. Scott Allen

You’ll know you are looking at a generator, or more properly, a generator function, because a generator function contains an asterisk in the declaration.

function*() {

}

Once you have a generator function, you can use the yield keyword inside to return multiple values to the caller.

let numbers = function*() {

    yield 1;
    yield 2;
    yield 3;
};

But, the numbers don’t come back to the caller all at once. Internally, the runtime builds an iterator for you, so a caller can iterate through the results one by one. You can work with the iterator at a low level and call next to move through each value, or use a for-of loop.

let sum = 0;

for(let n of numbers()) {
    sum += n;
}

expect(sum).toBe(6);
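For comparison, driving the iterator by hand looks like this; each call to next returns an object with value and done properties:

let iterator = numbers();
let total = 0;
let result = iterator.next();      // { value: 1, done: false }

while (!result.done) {
    total += result.value;
    result = iterator.next();      // eventually { value: undefined, done: true }
}

expect(total).toBe(6);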

Going back to the Classroom class we used in the last ES6 post, we can now make a classroom an iterable simply by using yield.

class Classroom {

    constructor(...students) {
        this.students = students;
    }

    *[Symbol.iterator]() {
        for(let s of this.students) yield s;
    }
}

var scienceClass = new Classroom("Tim", "Sue", "Joy");

var students = [];
for(let student of scienceClass){
    students.push(student);
}

expect(students).toEqual(["Tim", "Sue", "Joy"]);

Notice how the iterator method needs an asterisk before the opening square bracket. The syntax is quirky, but an asterisk is always required when defining a generator function, because generator functions need to behave differently than normal functions right from the start. That’s the topic for the next post in this series.
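As a quick preview of that difference: calling a generator function does not execute the function body at all. It only hands back an iterator, and the body runs lazily as the caller asks for values.

let gen = numbers();   // nothing inside numbers has executed yet
gen.next();            // runs the body up to the first yield: { value: 1, done: false }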

Roslyn Code Gems - Performance Goals

Friday, February 13, 2015 by K. Scott Allen

Ever since the .NET compiler platform became open source, I’ve been poking around the Roslyn source code. It’s not often you get to look at the internals of a product with a large code base, and not surprisingly there are some gems inside. I have a collection of these gems, and each gem falls into one of three categories.

1. Ideas to borrow

2. Trivia

3. Bizarre and outstanding

Today’s gem falls into category two – trivia.

Have you ever wondered what makes for a “captive” audience? According to the PerformanceGoals class, this would be an operation taking anywhere from half a second to ten seconds.

static PerformanceGoals()
{
    // An interaction class defines how much time is expected to reach a time point, the response 
    // time point being the most commonly used. The interaction classes correspond to human perception,
    // so, for example, all interactions in the Fast class are perceived as fast and roughly feel like 
    // they have the same performance. By defining these interaction classes, we can describe 
    // performance using adjectives that have a precise, consistent meaning.
    //
    // Name             Target (ms)     Upper Bound (ms)        UX / Feedback
    // Instant          <=50            100                     No noticeable delay
    // Fast             50-100          200                     Minimally noticeable delay
    // Typical          100-300         500                     Slower, but still no feedback necessary
    // Responsive       300-500         1,000                   Slower yet, potentially show Wait cursor
    // Captive          >500            10,000                  Long, show Progress Dialog w/Cancel
    // Extended         >500            >10,000                 Long enough for the user to switch to something else

    // Used for throughput scenarios like parser bytes per second.
    const string Throughput_100 = "Throughput_100";

    Goals = new string[(int)FunctionId.Count];
    Goals[(int)FunctionId.CSharp_SyntaxTree_FullParse] = Throughput_100;
    Goals[(int)FunctionId.VisualBasic_SyntaxTree_FullParse] = Throughput_100;
}

Unlike the code in this post, the code featured in the next entry of this series is potentially useful: an optimized bit counter.

Conditional Access Operator in C# 6.0

Thursday, February 12, 2015 by K. Scott Allen

My series of posts on C# 6.0 has been on a long hiatus due to changes in the language feature set announced last year. Now with spring just around the corner in the northern hemisphere, Visual Studio 2015 reaching a 5th CTP status, and features hopefully stabilizing, it might be time to start up again.

As of C# version 5, there are many techniques you can use to avoid null reference exceptions (NREs). You can use brute-force if/else checks, the Null Object design pattern, or even extension methods, since extension method invocations will work on a null reference.

The conditional access operator, ?., adds one more option to the list for v6 of the language.

Imagine you are writing a method to log the execution of some code represented by an Action delegate. You aren’t sure if the caller will pass you a proper delegate, or if that delegate will have a method and a name.

The following code uses conditional access to retrieve the method name of an action, or “no name” if the code encounters a null value along the line.

public async Task LogOperation(Action operation)
{
    var name = operation?.Method?.Name ?? "no name";

    operation();
    await _logWriter.WriteAsync(name + " executed");
    
    // ... exception handling to come soon ...          
}

By dotting into the operation parameter using ?., you’ll be able to avoid NREs. The ?. operator will only dereference a reference if the reference is not null; otherwise, the expression yields null.

By combining ?. with the null coalescing operator ??, you can easily specify defaults and write code that must be read using uptalk.

Next up in the series: await and exceptions.

Debugging Map Reduce in MongoDB

Wednesday, February 11, 2015 by K. Scott Allen

There isn’t much insight into the execution of a map reduce script in MongoDB, but I’ve found three techniques to help. Of course the preferred alternative to map reduce is to use the declarative aggregation operators, but there are some problems that naturally lend themselves to copious amounts of imperative code. That’s the kind of debugging I needed to do recently.

Log File Debugging

In a Mongo script you can use print and printjson to send strings and objects to standard output. During a map reduce these functions don’t produce output on stdout, unfortunately, but the output will appear in the log file if the verbosity is set high enough. Starting mongod with a -vvvv flag works for me.
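For example, a map function might log its progress like the sketch below. The output lands in mongod’s log rather than on stdout, and the field names here are hypothetical:

var map = function () {
    print("map called for _id: " + this._id);
    printjson(this);                 // dump the whole document as JSON
    emit(this.name, this.total);     // hypothetical fields
};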

Log file output can be useful in some situations, but in general, digging through a log file created in high verbosity mode is difficult.

Attached To The Output Debugging

The best way I’ve found to debug map reduce scripts running inside Mongo is to attach logging data directly to the output of the map and reduce functions.

For map functions, this means emitting an object with a debugging array attached, like the following.

{
  "name": "Scott",
  "total" : 15,
  "events" : [
    "Step A worked",
    "Flag B is false",
    "More debugging here"
  ]
}

Inside the map function you can push debugging strings and objects into the events array. Of course the reduce function will have to preserve this debugging information, possibly by aggregating the arrays. However, if you are debugging the map function, I’d suggest simplifying the process by not reducing at all and letting emitted objects pass through to the output collection. One way to do this is to emit using a key of new ObjectId(), so each emitted object lands in its own bucket and the reduce function never runs.
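A map function written in this style might look like the following sketch (the amount and name fields are hypothetical):

var map = function () {
    var events = [];
    var total = this.amount || 0;

    if (!this.amount) {
        events.push("amount was missing, defaulted to 0");
    }
    events.push("Step A worked");

    // emit with a unique key so every object passes straight through
    // to the output collection without being reduced
    emit(new ObjectId(), {
        name: this.name,
        total: total,
        events: events
    });
};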

As an aside, my favorite tool for poking around in Mongo data is Robomongo (works on OS X and Windows). Robomongo is shell-oriented, so you can use all the Mongo commands you already know and love.

robomongo for mongodb

Robomongo’s one shortcoming is in trying to edit system-stored JavaScript. For that task I use MongoVue (Windows only, requires a license to unlock some features).

Browser Debugging

By far the best debugging experience is to move a map or reduce function, along with some data, into a browser. The browser has extremely capable debugging tools where you can step through code and inspect variables, but there are a few things you’ll need to do in preparation.

1. Define any globals that the map function needs. At a minimum, this would be an emit function, which might be as simple as the following.

var result = null;

window.emit = function(id, value) {
    result = value;
};

2. Have a plan to manage ObjectId types on the client. With the C# driver I use the following ActionResult-derived class to get raw documents to the browser, with custom JSON settings that transform ObjectId fields into legal JSON.

public class BsonResult : ActionResult
{
    private readonly BsonDocument _document;

    public BsonResult(BsonDocument document)
    {
        _document = document;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        var settings = new JsonWriterSettings();
        settings.OutputMode = JsonOutputMode.Strict;

        var response = context.HttpContext.Response;
        response.ContentType = "application/json";                      
        response.Write(_document.ToJson(settings));
    }        
}

Note that using JsonOutputMode.Strict will give you a string that a browser can parse using JSON.parse, but it will change fields of type ObjectId into full-fledged objects ({ “$oid”: “5fac…ffff” }). This behavior will create a problem if the map script ever tries to compare ObjectId fields by value (object1.id === object2.id will always be false). If the ObjectIds create a problem, the best plan, I think, is to walk through the document in the browser and change those fields into simple strings holding the value of the ID.
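Putting both steps together, a small harness in the browser console might look like the following sketch. The map function, the sample document, and the stringifyObjectIds helper are all hypothetical names for illustration, not part of any driver API:

// a simplified map function pulled out of the map reduce script
var map = function () {
    emit(this._id, { name: this.name });
};

// the emit stub from step 1, collecting every emitted pair
var results = [];
var emit = function (id, value) {
    results.push({ id: id, value: value });
};

// recursively replace { "$oid": "..." } wrappers with plain strings
// so the map function can compare IDs by value
function stringifyObjectIds(doc) {
    for (var key in doc) {
        var value = doc[key];
        if (value && typeof value === "object") {
            if (typeof value.$oid === "string") {
                doc[key] = value.$oid;
            } else {
                stringifyObjectIds(value);
            }
        }
    }
    return doc;
}

// a document parsed from the Strict mode JSON returned by BsonResult
var doc = stringifyObjectIds(JSON.parse(
    '{ "_id": { "$oid": "54f1c0ffee0000000000d0c5" }, "name": "Scott" }'));

// MongoDB binds the current document to 'this', so use call to step through map
map.call(doc);
console.log(results);   // [{ id: "54f1c0ffee0000000000d0c5", value: { name: "Scott" } }]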

Hope that helps!

Thoughts On End to End Testing of Browser Apps

Tuesday, February 10, 2015 by K. Scott Allen

In a previous post on using the PageObject pattern with Protractor, Martin asked: how much time is wasted writing tests, and who pays for the wasted time?

To answer that question, I want to think about the costs and benefits of end to end testing. I believe the cost-benefit curve for true end to end testing looks like the following.

cost benefit of e2e testing

There are two significant slopes in the graph. First is the “getting started” slope, which ramps up positively, but slowly. Yes, there are some technical challenges in e2e testing, like learning how to write and debug tests with a new tool or framework. But what typically slows the benefit growth is organizational. Unlike unit tests, which a developer might write on her own, e2e tests require coordination and planning across teams, both technical and non-technical. You need to plan the provisioning and automation of test environments, and have people create databases with data representative of production data, but scrubbed of protected personal data. Business involvement is crucial, too, as you need to make sure the testing effort is testing the right acceptance criteria for stakeholders.

All of the coordination and work required for e2e testing on a real application is a bit of a hurdle and is sure to build resistance to the effort in many projects. However, the sweet spot at the top of the graph, where the benefit reaches a maximum, is a nice place to be. The tests give everyone confidence that the application is working correctly, and allow teams to create features and deploy at a faster pace. There is a positive return on the investment made in e2e tests. Sure, the test suite might take one hour to run, but it can run any time of the day or night, and every run might save 40 hours of human drudgery.

There is also an ugly side to e2e testing, where the benefit starts to slope downward. Although the slope might not always be negative, I do believe the law of diminishing returns is always in play. e2e tests can be amazingly brittle and fail with the slightest change in the system or environment. The failures lead to frustration, and it is easy for a test suite to become the villain that everyone despises. I’ve seen this scenario play out when the test strategy is driven by mindless metrics, like a goal to reach 90% code coverage.

In short, every application needs testing before release, and automated e2e tests can speed the process. Making a good test suite that doesn't become a detriment to the project is difficult, unfortunately, due to the complex nature of both software and the human mind. I encourage everyone to write high value tests for the riskier pieces of the application so the tests can catch real errors and build confidence.
