A Different Perspective on ASP.NET Middleware

Monday, February 1, 2016 by K. Scott Allen

When I was a boy, my parents would select a weekend every late summer to freeze corn for our Sunday meals in the winter months. The process started early in the morning when dad would drive us to a local farm. We’d purchase a few hundred ears of un-shucked corn and load the corn into our own laundry baskets to bring them home.

Once home, we’d set up a processing pipeline for the corn. I was first in the pipeline. I’d pick an ear from the basket, shuck off the squeaky green husk, and hand the ear to mom. Mom would gently remove silk from the cob with a soft vegetable brush and not so gently eradicate damaged kernels with a palm-sized paring knife. My mom passed the corn to my dad, who would bundle up a dozen or so ears and drop them into boiling water. After a few minutes of blanching, my dad would transfer the hot corn into an ice bath and then pass the ears back up the pipeline to my mom.

Mom would use a simple device my father made to help cut the corn from the cob. The device was a small hardwood board, about ½ inch thick, with a nail through the middle. Pushing the cob onto the nail made for easier cutting with a boning knife, as the cob would stand up without the need to place any fingers in the cutting area. Mom could then sweep the cut kernels off the board and into a bowl where it was my responsibility to measure 2 cups of corn into freezer bags, and keep count of the total number of bags.

Allen Family Middleware For Corn Processing

The corn started with me un-shucked and returned to me as whole cut kernels in a puddle of sweet, milky juice. Well, not all the corn made the entire trip up and down the family food processing line. If I spotted an obviously bad ear, I always had the option of throwing the ear away instead of passing the ear to mom.

Substitute “HTTP request” for corn and “middleware component” for me, mom, and dad, and you’ll have an idea of how the middleware pipeline works in ASP.NET. Each component has a job to perform when the component receives a request. Each component can perform some work and optionally pass the request to the next component. Each HTTP transaction can visit each piece of middleware once on the way in, and once on the way out.

HTTP Processing Middleware

Now that I’ve made the analogy, it’s hard to look at any Startup.cs and not think about being a boy again in August.
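
To make the analogy concrete, here is a minimal sketch of a two-component pipeline in Startup.cs. This isn’t code from a real project, just the shape of the idea, assuming the Use and Run extension methods from the Microsoft.AspNet.Builder namespace:

public void Configure(IApplicationBuilder app)
{
    // first component: does some work on the way in, optionally passes
    // the request along, then sees the response again on the way out
    app.Use(async (context, next) =>
    {
        // shuck the ear: inspect or modify the incoming request here;
        // a bad ear can be thrown away by skipping the call to next()
        await next();
        // bag the kernels: inspect or modify the outgoing response here
    });

    // final component: produces the response at the end of the line
    app.Run(async context =>
    {
        await context.Response.WriteAsync("Hello from the end of the pipeline");
    });
}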

Working With Typed Headers in ASP.NET

Monday, November 30, 2015 by K. Scott Allen

Inside of a controller in ASP.NET 5, the Request and Response properties both give you access to a Headers property. Headers is a simple IHeaderDictionary and will ultimately hand out string values.

Bring in the Microsoft.AspNet.Http namespace and you'll find a GetTypedHeaders extension method available for both Request and Response. Instead of working with raw strings, this extension method allows you to work with classes like MediaTypeHeaderValue, proper numbers, and strong types in general. 

var requestHeaders = Request.GetTypedHeaders();
var responseHeaders = Response.GetTypedHeaders();

var mediaType = requestHeaders.Accept[0].MediaType;
long? length = responseHeaders.ContentLength;

// ...
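
Typed headers work for writing, too. Here is a small sketch, assuming the typed response headers expose settable properties like ContentType and ContentLength, and that MediaTypeHeaderValue lives in the Microsoft.Net.Http.Headers namespace:

var responseHeaders = Response.GetTypedHeaders();

// assign strongly typed values instead of formatting raw header strings
responseHeaders.ContentType = new MediaTypeHeaderValue("application/json");
responseHeaders.ContentLength = 42;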

What Every JavaScript Developer Should Know About ECMAScript 2015

Monday, November 23, 2015 by K. Scott Allen

Now available on Amazon, What Every JavaScript Developer Should Know About ECMAScript 2015 is the book I'd like to read about the new features in the JavaScript language. The book isn't a reference manual or an exhaustive list of everything in the ES2015 specification. Instead, I purposefully selected what I think are the important features we will use in everyday programming. I expect the reader will already have a good understanding of the pre-2015 JavaScript language.

The book will be free for the rest of this week.

Chapters include:

  • Working with variables and parameters
  • Classes
  • Functional features
  • Asynchronous JavaScript
  • Modules
  • APIs

Special thanks to Ruben Bartelink and Porter T. Baer for feedback on the early drafts.

An ASP.NET 5 Overview Video

Thursday, November 12, 2015 by K. Scott Allen

I recorded an ASP.NET 5 Overview during some unexpected free time and placed the video here on OdeToCode. I recorded during the beta6 / beta7 timeframe, so the video will need some updates in the future, else you'll find two vast and trunkless legs of stone here.


The Evolution of JavaScript

Tuesday, November 10, 2015 by K. Scott Allen

I fired off an abstract titled The Evolution of JavaScript without giving a thought as to what it means for a language to evolve. After all, a language is not a living organism where Mother Nature applies the dispassionate sieve of natural selection against the replication errors in DNA. Wouldn’t a programming language rather fall under the viewpoint of intelligent design?

How can intelligent design explain the gap?


There is no formula to apply in moving a language forward. There are emotions, beliefs, and marketability. When the committee abandoned the 4th edition of the language, the language found diversity in pockets of community. jQuery showed us how programming the DOM can work when the API is consistent. Node showed us how to scale JavaScript and run outside 4-sided HTML elements. The cosmic impact of Steve Jobs exterminated natural predators, allowing natives like threads and sockets to emerge.

JavaScript is not a flat road in a planned community at the edge of the water. JavaScript is a Manhattan boulevard serving the enterprise and the pedestrian, angled to equal the constraints of terra firma. Always busy, bustling, dirty, and full of diversity, we can only hope the future doesn’t move too fast. Slow evolution has been good despite the frustrations. Don't listen to the slander, coriander, and mind the gap.

Maps and Sets in JavaScript

Wednesday, October 14, 2015 by K. Scott Allen

JavaScript has never offered a variety of data structures out of the box. If you needed any structure fancier than an array, you'd have to roll your own or borrow some code.

The 2015 specification brings a number of keyed collections to the language, including the Map, Set, WeakMap, and WeakSet.

Maps

Although you can treat any JavaScript object as a dictionary or key-value store, there are some drawbacks. Most notably, the only keys you can use to index into an object are strings. The new Map collection removes the restriction on key types and offers some additional conveniences. The core of the API revolves around the get and set methods.

let map = new Map();

// add a new key-value pair
map.set("key", 301);

// overwrite an existing value
map.set("key", 302);

expect(map.get("key")).toBe(302);

The delete and has methods are also handy.

let map = new Map();
map.set("key", 611);

expect(map.has("key")).toBe(true);

map.delete("key");
expect(map.has("key")).toBe(false);

A key can now be any type of value, including an object.

var someKey = { firstName: "Scott"};
var someValue = { lastName: "Allen"};

var map = new Map();
map.set(someKey, someValue);

expect(map.size).toBe(1);
expect(map.get(someKey).lastName).toBe("Allen");
expect(map.get({firstName:"Scott"})).toBeUndefined();

When using objects as keys, JavaScript will compare pointers behind the scenes, as you probably expect (and as the above code demonstrates).

You can also construct a Map using a two-dimensional array of key-value pairs.

let map = new Map([
    [1, "one"],
    [2, "two"],
    [3, "three"]
]);

Finally, maps are iterable in a number of ways. You can retrieve an iterator for the keys in a map (using the keys method), the values in a map (using the values method), and the key-value entries in a map (using the entries method, as you might have guessed). The map also has the magic [Symbol.iterator] method, so you can use a Map directly in a for...of loop.

let map = new Map([
    [1, "one"],
    [2, "two"],
    [3, "three"]
]);
    
let sum = 0;
let combined = "";
for(let pair of map) {
    sum += pair[0];
    combined += pair[1];
} 
    
expect(map.size).toBe(3);
expect(sum).toBe(6);
expect(combined).toBe("onetwothree");

map.clear();
expect(map.size).toBe(0);

Note how each entry comes out as an array with the key at index 0 and the value at index 1. This is a scenario where destructuring is handy.

for(let [key, value] of map) {
    sum += key;
    combined += value;
}
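
The keys, values, and entries methods each hand back an iterator you can consume with the same for...of syntax. Here is a quick sketch, in the same expect style as the samples above:

let map = new Map([
    [1, "one"],
    [2, "two"],
    [3, "three"]
]);

let keys = "";
for(let key of map.keys()) {
    keys += key;
}
expect(keys).toBe("123");

let values = "";
for(let value of map.values()) {
    values += value;
}
expect(values).toBe("onetwothree");

// entries yields the same [key, value] arrays as iterating the map itself
let pairs = "";
for(let [key, value] of map.entries()) {
    pairs += key + value;
}
expect(pairs).toBe("1one2two3three");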

Set

Just like in mathematics, a Set maintains a collection of distinct objects. The API is slightly different from a Map's. For example, you can construct a set by passing in any iterable object.

let animals = ["bear", "snake", "elephant", "snake"];
let animalsSet = new Set(animals.values());

expect(animals.length).toBe(4);
expect(animalsSet.size).toBe(3);
expect(animalsSet.has("bear")).toBe(true);

animalsSet.delete("bear");
expect(animalsSet.size).toBe(2);

Just like a Map, a Set compares objects by reference. Unlike in some other languages, there is no ability to provide an alternative comparer strategy.

let scott = { name: "Scott" };

let people = new Set();
people.add(scott);
people.add(scott);
people.add( {name:"Scott"});

expect(people.size).toBe(2);
expect(people.has(scott)).toBe(true);
expect(people.has({name:"Scott"})).toBe(false);

Weak Collections

The WeakMap and WeakSet collections serve a similar purpose to their non-weak counterparts, but the objects held in these collections are held weakly, so the entries remain eligible for garbage collection at any given point in time, making these weak collections ideal for many extensibility scenarios. The special nature of weak collections means you cannot iterate over the collections, because an entry could disappear before the iteration completes. There are also some other small API differences. For example, the keys for a WeakMap and the values for a WeakSet can only be object types (no primitives like strings), and there is no size property to uncover the number of entries in these collections.
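
As a sketch of one extensibility scenario, imagine attaching extra data to objects your code doesn't own. A WeakMap holds the association without keeping the objects alive (the extraData and customer names here are invented for illustration):

let extraData = new WeakMap();
let customer = { name: "Scott" };   // an object owned by some other code

extraData.set(customer, { visits: 1 });

expect(extraData.has(customer)).toBe(true);
expect(extraData.get(customer).visits).toBe(1);

// once no other references to customer remain, both the key and the
// associated value become eligible for garbage collection
customer = null;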

Coming up soon - more stimulating API additions in 2015 for objects, strings, and arrays.

DNX Framework Choices and ASP.NET 5

Tuesday, October 13, 2015 by K. Scott Allen

You can think of an ASP.NET 5 application as a DNX application, where DNX stands for the .NET Execution Environment.

And the projects we create with a project.json file? These are DNX projects. The DNX is not just a runtime environment, but also an SDK.  Conceptually, I think of the new world like this:

DNX Choices

 

The DNX gives us the ability to cross compile a project across different renditions of a Common Language Runtime, and then execute on a specific variant of a CLR. We can compile against a full CLR, or a core CLR, or both. The compilation settings are controlled by the frameworks property in project.json.

"frameworks": {
  "dnx451": { },
  "dnxcore50": { }
}

The monikers to choose from currently include dnx451, dnx452, and dnx46 for a full CLR, or dnxcore50 for a core CLR. At execution time the code must execute against one of the targets specified in project.json. You can select the specific DNX environment using the project properties -> Debug tab in Visual Studio and run with or without the debugger.

DNX Selection In Visual Studio

You can also use dnx from the command line to launch the application after selecting an environment using the .NET Version Manager (dnvm) tool. The following screenshot shows how to list the available runtimes using dnvm list and then configure the 64-bit variant of the full CLR, beta 7 version, as an execution environment.

Using DNVM To Select A DNX Environment
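
From memory, the commands look roughly like the following (the exact version string depends on the runtimes installed on the machine):

dnvm list
dnvm use 1.0.0-beta7 -r clr -arch x64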

After the above command completes, the dnx we’ll be using will be the one from the folder for the 64-bit variant of the full CLR, beta 7 version, and any application launched using dnx will use the same.

Choosing

“Which environment is right for me?” is an obvious question when starting a DNX project. What follows is my opinion.

Choose Both

In this scenario, we’ll include both a core CLR and one version of a full CLR in the frameworks of project.json.

"frameworks": {
  "dnx46": { },
  "dnxcore50": { }
},

This choice is ideal for projects with reusable code. Specifically, projects you want to build as NuGet packages and consume from other DNX applications. In this scenario you won’t know which framework the consumer might need to use.

This choice also works if you haven’t made a decision on which CLR to use, but ultimately for applications (not libraries), I would expect to target a single framework.
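
When targeting both frameworks, code that needs a full-CLR-only API can be fenced off with the compilation symbols the DNX defines for each target. Here is a small sketch, assuming the symbols follow the uppercase framework monikers (DNX451 and DNXCORE50):

public static class RuntimeInfo
{
    public static string Describe()
    {
#if DNX451
        // compiled only when targeting the full CLR
        return "full CLR";
#elif DNXCORE50
        // compiled only when targeting the core CLR
        return "core CLR";
#else
        return "unknown runtime";
#endif
    }
}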

Choose The Full CLR

In this scenario, we’ll specify only a full CLR in the frameworks section.

"frameworks": {
  "dnx46": { },
}

Targeting a full CLR for development and deployment gives you the best chance of compatibility with existing code. The full framework includes WCF, three or four types of XML serializers, GDI support, and the full range of reflection APIs. Of course, you’ll only be developing and running on Windows machines.

Choose the Core CLR

In this scenario, we’ll specify only the core CLR in the frameworks section.

"frameworks": {
  "dnxcore50": { }
}

With the core CLR you can develop and deploy on Windows, Linux, and OS X. Another primary advantage of the core CLR, even if you only run on Windows, is the ability to ship the framework bits with an application, meaning you don’t need a full .NET install on any server where you want to deploy. In fact, you can even have multiple applications on the same server using different versions of the core CLR (side-by-side versioning), and update one application without worrying about breaking the others. One downside to using the core CLR is that your existing code might require framework features that do not exist in the core. At least, features that don’t exist today.

Summary

I expect the core CLR will have a slow uptake but eventually be the primary target for all DNX applications. The advantages of going cross-platform and providing an xcopy deployment of the framework itself will outweigh the reduced feature set. Most of the sub-frameworks in the full CLR are frameworks we don’t need.
