I’ve lived all my life near a town with the nickname “Hub City”. I know my town is not the only town in the 50 states with such a nickname, but we do have two major interstates, two mainline rail tracks, and one historic canal in the area. This is not Chicago, but we did have Ludacris fly through the regional airport last year.
The railroad tracks here have always piqued my interest. Trains too, but even more the mystery and history of the line itself. As a kid, I was told not to hang around railroad lines. But, being a kid, with a bike and a curiosity, I did anyway.
Where does it come from? Where does it go?
Those types of questions are easier to answer these days with all the satellite imagery and sites like OpenRailwayMap. I discovered, for example, the line closest to me now was built in the late 1800s when railroads were expanding. Back then, the more lines you built, the better chance you had of taking market share. When railroad companies consolidated in the 1970s, they abandoned most of this track. Still, there is a piece being used, albeit infrequently.
When the line is used on a cold winter night, the distant train whistle makes me hold my breath and listen. Two long, one short, one long. A B major 7th, I think. The 7th is there to tingle the hairs on your neck. It’s hard to believe how machinery and compressed air can provoke an emotional response. After all, there is the occasional horned owl in the area whose hollow cooing is always distant, lonely, and organic. Yet, the mechanical whistle is somehow more urgent, searching, and all-pervading. A proclamation.
I know where I’ve been. I know where I’m going.
It’s hard to believe how code and technology can provoke an emotional response. The shape of the code, the whitespace between. The spark that lights a fire when you uncover a new secret. Now that you’ve learned it, it won’t go away, but you had to earn it. Idioms and idiosyncrasies pour into the brain like milk into cereal. Changing something, and it’s good.
The whistle. How quickly things change. Or, perhaps the process was slower than I thought. Your idioms impossible, your idiosyncrasies an irritation. If only we could reverse the clock to reach the point before these neurons put together that particular chemical reaction, but there are high winds tonight. I’ve lost power. There was the whistle.
I know where I’ve been, but I don’t know where I’m going.
In episode 1405 I sit down with Carl and Richard at NDC London to talk about ASP.NET Core. I hope you find the conversation valuable.
In modern web programming, you can never have too many tokens. There are access tokens, refresh tokens, anti-XSRF tokens, and more. It’s the last type of token that I’ve gotten a lot of questions about recently. Specifically, does one need to protect against cross-site request forgeries when building an API-based app? And if so, how does one create a token in an ASP.NET Core application?
In any application where the browser can implicitly authenticate the user, you’ll need to protect against cross-site request forgeries. Implicit authentication happens when the browser sends authentication information automatically, which is the case when using cookies for authentication, but also for applications using Windows authentication.
Generally, APIs don’t use cookies for authentication. Instead, APIs typically use bearer tokens, and custom JavaScript code running in the browser must send the token along by explicitly adding the token to a request.
However, there are also APIs living inside the same server process as a web application and using the same cookie as the application for authentication. This is the type of scenario where you must use anti-forgery tokens to prevent XSRF.
There is no additional work required to validate an anti-forgery token in an API request, because the [ValidateAntiForgeryToken] attribute in ASP.NET Core will look for tokens in a posted form input, or in an HTTP header. But, there is some additional work required to give the client a token. This is where the IAntiforgery service comes in.
[Route("api/[controller]")]
public class XsrfTokenController : Controller
{
    private readonly IAntiforgery _antiforgery;

    public XsrfTokenController(IAntiforgery antiforgery)
    {
        _antiforgery = antiforgery;
    }

    [HttpGet]
    public IActionResult Get()
    {
        var tokens = _antiforgery.GetAndStoreTokens(HttpContext);

        return new ObjectResult(new
        {
            token = tokens.RequestToken,
            tokenName = tokens.HeaderName
        });
    }
}
In the above code, we can inject the IAntiforgery service for an application and provide an endpoint a client can call to fetch the token and token name it needs to use in a request. The GetAndStoreTokens method will not only return a data structure with token information, it will also issue the anti-forgery cookie the framework will use in one-half of the validation algorithm. We can use a new ObjectResult to serialize the token information back to the client.
Note: if you want to change the header name, you can change the AntiforgeryOptions during startup of the application [1].
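For example, a minimal sketch in ConfigureServices (the header name here is only an example):

public void ConfigureServices(IServiceCollection services)
{
    services.AddAntiforgery(options =>
    {
        // clients will send the token in a header with this name
        options.HeaderName = "X-XSRF-TOKEN";
    });

    services.AddMvc();
}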
With the endpoint in place, you’ll need to fetch and store the token from JavaScript on the client. Here is a bit of TypeScript code using Axios to fetch the token, then configure Axios to send the token with every HTTP request.
import axios, { AxiosResponse } from "axios";
import { IGolfer, IMatchSet } from "models";
import { errorHandler } from "./error";

const XSRF_TOKEN_KEY = "xsrfToken";
const XSRF_TOKEN_NAME_KEY = "xsrfTokenName";

function reportError(message: string, response: AxiosResponse) {
    const formattedMessage = `${message} : Status ${response.status} ${response.statusText}`;
    errorHandler.reportMessage(formattedMessage);
}

function setToken({ token, tokenName }: { token: string, tokenName: string }) {
    window.sessionStorage.setItem(XSRF_TOKEN_KEY, token);
    window.sessionStorage.setItem(XSRF_TOKEN_NAME_KEY, tokenName);
    axios.defaults.headers.common[tokenName] = token;
}

function initializeXsrfToken() {
    let token = window.sessionStorage.getItem(XSRF_TOKEN_KEY);
    let tokenName = window.sessionStorage.getItem(XSRF_TOKEN_NAME_KEY);

    if (!token || !tokenName) {
        axios.get("/api/xsrfToken")
             .then(r => setToken(r.data))
             .catch(r => reportError("Could not fetch XSRFTOKEN", r));
    } else {
        setToken({ token: token, tokenName: tokenName });
    }
}
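Assuming the module above exports initializeXsrfToken (the excerpt doesn’t show an export, and the "./xsrfToken" path is hypothetical), the application can wire up the token once during startup:

import { initializeXsrfToken } from "./xsrfToken";

// fetch a new token, or restore a cached one, before the first API call
initializeXsrfToken();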
In this post we … well, forget it. No one reads these anyway.
[1] Tip: Using the name TolkienToken can bring to life many literary references when discussing the application amongst team members.
The joke I’ve heard goes like this:
I went to an all night JavaScript hackathon and by morning we finally had the build process configured!
Like most jokes, there is an element of truth in it.
I’ve been working on an application that is mostly server rendered and requires minimal amounts of JavaScript. However, there are “pockets” in the application that require a more sophisticated user experience, and thus a heavy dose of JavaScript. These pockets all map to a specific application feature, like “the accounting dashboard” or “the user profile management page”.
These facts led me to the following requirements:
1. All third party code should build into a single .js file.
2. Each application feature should build into a distinct .js file.
Requirement #1 requires the “vendor bundle”. This bundle contains all the frameworks and libraries each application feature depends on. By building all this code into a single bundle, the client can effectively cache the bundle, and we only need to rebuild the bundle when a framework updates.
Requirement #2 requires multiple “feature bundles”. Feature bundles are smaller than the vendor bundle, so feature bundles can rebuild each time a file inside changes. In my project, an ASP.NET Core application using feature folders, the scripts for features are scattered inside the feature folders. I want to build feature bundles into an output folder and retain the same feature folder structure (example below).
I tinkered with various JavaScript bundlers and task runners until I settled on webpack. With webpack I found a solution that would support the above requirements and provide a decently fast development experience.
Here is a webpack configuration file for building the vendor bundle. In this case we will build a vendor bundle that includes React and ReactDOM, but webpack will examine any JS module name you add to the vendor array of the configuration file. webpack will place the named module and all of its dependencies into the output bundle named vendor.js. For example, Angular 2 applications would include “@angular/common” in the list. Since this is an ASP.NET Core application, I’m building the bundle into a subfolder of the wwwroot folder.
const webpack = require("webpack");
const path = require("path");

const assets = path.join(__dirname, "wwwroot", "assets");

module.exports = {
    resolve: {
        extensions: ["", ".js"]
    },
    entry: {
        vendor: [
            "react",
            "react-dom"
            // ... and so on ...
        ]
    },
    output: {
        path: assets,
        filename: "[name].js",
        library: "[name]_dll"
    },
    plugins: [
        new webpack.DllPlugin({
            path: path.join(assets, "[name]-manifest.json"),
            name: "[name]_dll"
        }),
        new webpack.optimize.UglifyJsPlugin({
            compress: { warnings: false }
        })
    ]
};
webpack offers a number of different plugins to deal with common code, like the CommonsChunk plugin. After some experimentation, I’ve come to prefer the DllPlugin for this job. For Windows developers, the DllPlugin name is confusing, but the idea is to share common code using “dynamically linked libraries”, so the name borrows from Windows.
DllPlugin will keep track of all the JS modules webpack includes in a bundle and will write these module names into a manifest file. In this configuration, the manifest name is vendor-manifest.json. When we build the individual feature bundles, we can use the manifest file to know which modules do not need to appear in those feature bundles.
Important note: make sure the output.library property and the DllPlugin name property match. It is this match that allows a library to dynamically “link” at runtime.
I typically place this vendor configuration into a file named webpack.vendor.config.js. A simple npm script entry of “webpack --config webpack.vendor.config.js” will build the bundle on an as-needed basis.
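The corresponding entry in package.json might look like the following sketch (the script name is an assumption):

{
  "scripts": {
    "build:vendor": "webpack --config webpack.vendor.config.js"
  }
}

Running npm run build:vendor then rebuilds vendor.js only when a framework dependency changes.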
Feature bundles are a bit trickier, because now we need webpack to find multiple entry modules scattered throughout the feature folders of an application. In the following configuration, we’ll dynamically build the entry property for webpack by searching for all .tsx files inside the feature folders (tsx being the extension for the TypeScript flavor of JSX).
const webpack = require("webpack");
const path = require("path");
const glob = require("glob");

const assets = path.join(__dirname, "wwwroot", "assets");

const entries = {};
const files = glob.sync("./Features/**/*.tsx");
files.forEach(file => {
    const name = file.match(/\.\/Features(.+\/[^\/]+)\.tsx$/)[1];
    entries[name] = file;
});

module.exports = {
    resolve: {
        extensions: ["", ".ts", ".tsx", ".js"],
        modulesDirectories: ["./client/script/", "./node_modules"]
    },
    entry: entries,
    output: {
        path: assets,
        filename: "[name].js"
    },
    module: {
        loaders: [
            { test: /\.tsx?$/, loader: "ts-loader" }
        ]
    },
    plugins: [
        new webpack.DllReferencePlugin({
            context: ".",
            manifest: require("./wwwroot/assets/vendor-manifest.json")
        })
    ]
};
A couple of notes on this particular configuration file.
First, you might have .tsx files inside a feature folder that are not entry points for an application feature, but are supporting modules for a particular feature. In this scenario, you might want to identify entry points using a naming convention (like dashboard.main.tsx). With the above config file, you can place supporting modules or common application code into the client/script directory. webpack’s resolve.modulesDirectories property controls this directory name, and once you specify a custom directory you’ll also need to explicitly include node_modules in the list if you still want webpack to search node_modules for a piece of code. Both webpack and the TypeScript compiler need to know about the custom location for modules, so you’ll also need to add a compilerOptions.paths setting in the tsconfig.json config file for TypeScript (this is a fantastic new feature in TypeScript 2.*).
{
  "compilerOptions": {
    "noImplicitAny": true,
    "noEmitOnError": true,
    "removeComments": false,
    "sourceMap": true,
    "module": "commonjs",
    "target": "es5",
    "jsx": "react",
    "baseUrl": ".",
    "moduleResolution": "node",
    "paths": {
      "*": [
        "*",
        "Client/script/*"
      ]
    }
  },
  "compileOnSave": false,
  "exclude": [
    "node_modules",
    "wwwroot"
  ]
}
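With the paths mapping in place, a bare module import can resolve to a file under Client/script. A quick sketch (the models module name is hypothetical):

// resolves to ./Client/script/models.ts via the "paths" mapping,
// after first checking for a match relative to baseUrl
import { IGolfer } from "models";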
Secondly, the output property of webpack’s configuration used to confuse me until I realized you can parameterize output.filename with [name] and [hash] parameters (hash being something you probably want to add to the configuration to help with cache busting). At first glance, it looks like output.filename will create only a single file from all of the entries. But, if you have multiple keys in the entry property, webpack will build multiple output files and even create sub-directories.
For example, given the following entry:
entry: {
    '/Home/Home': './Features/Home/Home.tsx',
    '/Admin/Users/ManageProfile': './Features/Admin/Users/ManageProfile.tsx'
}
webpack will create /home/home.js and /admin/users/manageprofile.js in the wwwroot/assets directory.
Finally, notice the use of the DllReferencePlugin in the webpack configuration file. Give this plugin the manifest file created during the vendor build and all of the framework code is excluded from the feature bundle. Now when building the page for a particular feature, include the vendor.js bundle first with a script tag, and the bundle specific to the given feature second.
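For example, a page for the Home feature from the earlier entry configuration might include the following (paths assume the wwwroot/assets output folder shown above):

<script src="/assets/vendor.js"></script>
<script src="/assets/home/home.js"></script>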
As easy as it may sound, arriving at this particular solution was not an easy journey. The first time I attempted such a feat was roughly a year ago, and I gave up and went in a different direction. Tools at that time were not flexible enough to work with the combination of everything I wanted, like custom module folders, fast builds, and multiple bundles. Even when part of the toolchain worked, editors could fall apart and show false positive errors.
It is good to see tools, editors, and frameworks evolve to the point where the solution is possible. Still, there are many frustrating moments in understanding how the different pieces work together and knowing the mental model required to work with each tool, since different minds build different pieces. Two things I’ve learned are that documentation is still lacking in this ecosystem, and GitHub issues can never replace StackOverflow as a good place to look for answers.
Here are a few small projects I put together last month.
I think feature folders are the best way to organize controllers and views in ASP.NET MVC. If you aren’t familiar with feature folders, see Steve Smith’s MSDN article: Feature Slices for ASP.NET Core MVC.
To use feature folders with the OdeToCode.AddFeatureFolders NuGet package, all you need to do is install the package and add one line of code to ConfigureServices.
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc()
            .AddFeatureFolders();

    // "Features" is the default feature folder root. To override, pass along
    // a new FeatureFolderOptions object with a different FeatureFolderName
}
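Based on the comment above, overriding the root folder name is a one-line change as well (a sketch; "Sections" is just an example name):

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc()
            .AddFeatureFolders(new FeatureFolderOptions { FeatureFolderName = "Sections" });
}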
The sample application in GitHub demonstrates how you can still use Layout views and view components with feature folders. I’ve also allowed for nested folders, which I’ve found useful in complex, hierarchical applications. Nesting allows the feature structure to follow the user experience when the UI offers several layers of drill-down.
With the OdeToCode.UseNodeModules package you can serve files directly from the node_modules folder of a web project. Install the middleware in the Configure method of Startup.
public void Configure(IApplicationBuilder app, IHostingEnvironment environment)
{
    // ...

    app.UseNodeModules(environment);

    // ...
}
I’ve mentioned using node_modules on this blog before, and the topic generated a number of questions. Let me explain when and why I find UseNodeModules useful.
First, understand that npm has traditionally been a tool to install code you want to execute in NodeJS. But, over the last couple of years, more and more front-end dependencies have moved to npm, and npm is doing a better job supporting dependencies for both NodeJS and the browser. Today, for example, you can install React, Bootstrap, Aurelia, jQuery, Angular 2, and many other front-end packages of both the JS and CSS flavor.
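For example, pulling a few of these front-end packages with nothing but npm (using --save to record the dependencies in package.json):

npm install react bootstrap jquery --save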
Secondly, many people want to know why I don’t use Bower. Bower played a role in accelerating front-end development and is a great tool. But, when I can fetch all the resources I need directly using npm, I don’t see the need to install yet another package manager.
Thirdly, many tools understand and integrate with the node_modules folder structure and can resolve dependencies using package.json files and Node’s CommonJS module standard. These include the TypeScript compiler and front-end build tools like webpack. In fact, TypeScript has adopted the “no tools required but npm” approach. I no longer need to use tsd or typings when I have npm and @types.
Given the above points, it is easy to stick with npm for all third-party JavaScript modules. It is also easy to install a library like Bootstrap and serve the minified CSS file directly from Bootstrap’s dist folder. Would I recommend every project take this approach? No! But, in certain conditions I’ve found it useful to serve files directly from node_modules. With the environment tag helper in ASP.NET Core you can easily switch between serving from node_modules (say, for debugging) and a CDN in production and QA.
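Here is a sketch of what that switch can look like in a Razor view using the environment tag helper (the CDN URL is a placeholder):

<environment names="Development">
    <link rel="stylesheet" href="~/node_modules/bootstrap/dist/css/bootstrap.min.css" />
</environment>
<environment names="Staging,Production">
    <link rel="stylesheet" href="https://cdn.example.com/bootstrap/bootstrap.min.css" />
</environment>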
Enjoy!
An enterprise developer moving to ASP.NET Core must feel a bit like a character in Asimov’s “The Gods Themselves”. In the book, humans contact aliens who live in an alternate universe with different physical laws. The landscape of ASP.NET Core is familiar. You can still find controllers, views, models, DbContext classes, script files, and CSS. But, the infrastructure and the laws are different.
For example, the hierarchy of XML configuration files in this new world is gone. The twin backbones of HTTP processing, HTTP Modules and HTTP Handlers, are also gone. In this post, we’ll talk about the replacement for modules and handlers, which is middleware.
Previous versions of ASP.NET gave us a customizable but rather inflexible HTTP processing pipeline. This pipeline allowed us to install HTTP modules and execute logic for cross-cutting concerns like logging, authentication, and session management. Each module had the ability to subscribe to preset events raised by ASP.NET. When implementing a logger, for example, you might subscribe to the BeginRequest and EndRequest events and calculate the amount of time spent in between. One of the tricks in implementing a module was knowing the order of events in the pipeline so you could subscribe to an event and inspect an HTTP message at the right time. Catch a too-early event, and you might not know the user’s identity. Catch a too-late event, and a handler might have already changed a record in the database.
Although the old model of HTTP processing served us well for over a decade, ASP.NET Core brings us a new pipeline based on middleware. The new pipeline is completely ours to configure and customize. During the startup of our application, we’ll use code to tell ASP.NET which pieces of middleware we want in the application, and the order in which the middleware should execute.
Once an HTTP request arrives at the ASP.NET server, the server will pass the request to the first piece of middleware in our application. Each piece of middleware has the option of creating a response, or calling into the next piece of middleware. One way to visualize the middleware is to think of a stack of components in your application. The stack builds a bi-directional pipeline. The first component will see every incoming request. If the first component passes a request to the next component in the stack, the first component will eventually see the response coming out of a component further up the stack.
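To make that shape concrete, here is a minimal inline sketch using the Use extension method on IApplicationBuilder:

app.Use(async (context, next) =>
{
    // sees every incoming request on the way in
    await next();
    // sees the outgoing response on the way back out
});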
A piece of middleware that comes late in the stack may never see a request if the previous piece of middleware does not pass the request along. This might happen, for example, because a piece of middleware you use for authorization checks finds out that the current user doesn’t have access to the application.
It’s important to know that some pieces of middleware will never create a response and exist only to implement cross-cutting concerns. For example, there is a middleware component to transform an authentication token into a user identity, and another middleware component to add CORS headers to an outgoing response. Microsoft and third parties provide us with hundreds of middleware components.
Other pieces of middleware will sometimes jump in to create or override an HTTP response at the appropriate time. For example, Microsoft provides a piece of middleware that will catch unhandled exceptions in the pipeline and create a “developer friendly” HTML response with a stack trace. A different piece of middleware will map the exception to a “user friendly” error page. You can configure different middleware pipelines for different environments, such as development versus production.
Another way to visualize the middleware pipeline is to think of a chain of responsibility.
Each piece of middleware has a specific focus. A piece of middleware to log every request would appear early in the chain to ensure the logging middleware sees every request. A later piece of middleware might even route a request outside of the middleware and into another framework or another set of components, like forwarding a request to the MVC framework for processing.
This article doesn’t provide extensive technical coverage of middleware. However, to give you a taste of the code, let’s see how to configure existing middleware and how to create a new middleware component.
Adding middleware to an application happens in the Configure method of the startup class for an application. The Configure method is injectable, meaning you can ask for any other services you need, but the one service you’ll always need is the IApplicationBuilder service. The application builder allows us to configure middleware. Most middleware will live in a NuGet package. Each NuGet package will include extension methods for IApplicationBuilder to add a middleware component using a simple method call. For example:
public void Configure(IApplicationBuilder app)
{
    app.UseDeveloperExceptionPage();
    app.UseFileServer();
    app.UseCookieAuthentication(AppCookieAuthentication.Options);
    app.UseMvc();
}
Notice the extension methods all start with the word Use. The above code would create a pipeline with 4 pieces of middleware. All the above middleware is provided by Microsoft. The first piece of middleware displays a “developer friendly” error page if there is an uncaught exception later in the pipeline. The second piece of middleware will serve up files from the file system when a request matches the file name and path. The third piece transforms an ASP.NET authentication cookie into a user identity. The final piece of middleware will send the request to the MVC framework where MVC will try to match the request to an MVC controller.
You can implement a middleware component as a class with a constructor and an Invoke method. ASP.NET will pass a reference to the next piece of middleware as a RequestDelegate constructor parameter. Each HTTP transaction will pass through the Invoke method of the middleware.
The following piece of middleware will write a greeting like “Hello!” into the response if the request path starts with /hello. Otherwise, the middleware will call into the next component to produce a response, but add an HTTP header with the current greeting text.
public class SayHelloMiddleware
{
    public SayHelloMiddleware(RequestDelegate next, SayHelloOptions options)
    {
        _options = options;
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        if (context.Request.Path.StartsWithSegments("/hello"))
        {
            await context.Response.WriteAsync(_options.GreetingText);
        }
        else
        {
            await _next(context);
            context.Response.Headers.Add("X-GREETING", _options.GreetingText);
        }
    }

    readonly RequestDelegate _next;
    readonly SayHelloOptions _options;
}
Although this middleware is trivial, the example should give you an idea of what middleware can do. First, Invoke will receive an HttpContext object with access to the request and the response. You can inspect incoming headers and create outgoing headers. You can read the request body or write into the response. The logic inside Invoke can decide whether to call the next piece of middleware or handle the response entirely in the current middleware. Note that Invoke is the method name ASP.NET will automatically look for by convention (no interface implementation required), and the method is injectable.
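To expose the middleware with the same Use* convention shown earlier, you can add a small extension method. A sketch, with a minimal options class assumed to match the middleware above:

public class SayHelloOptions
{
    public string GreetingText { get; set; } = "Hello!";
}

public static class SayHelloMiddlewareExtensions
{
    public static IApplicationBuilder UseSayHello(this IApplicationBuilder app, SayHelloOptions options)
    {
        // UseMiddleware forwards extra arguments to the middleware constructor
        return app.UseMiddleware<SayHelloMiddleware>(options);
    }
}

Now app.UseSayHello(new SayHelloOptions()) installs the component like any other middleware.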
I’ve personally found middleware to be liberating. The ability to explicitly configure every component of the HTTP processing pipeline makes it easy to know what is happening inside an application. The application is also as lean as possible because we can install only the features an application requires.
Middleware is one reason why ASP.NET Core can perform better and use less memory than its predecessors.
There are a few downsides to middleware.
First, I know many enterprises with custom HTTP modules. The services provided by these modules range from custom authentication and authorization logic, to session state management, to custom instrumentation. To use these services in ASP.NET Core the logic will need to move into middleware. I don’t see the port from modules to middleware as challenging, but the port is time consuming for such critical pieces of infrastructure. You’ll want to identify any custom modules (and handlers) an enterprise application relies on so the port happens early.
Secondly, I’ve seen developers struggle with the philosophy of middleware. Middleware components are highly asynchronous and often follow functional programming idioms. Also, there are no interfaces to guide middleware development as most of the contracts rely on convention instead of the compiler. All of these changes make some developers uncomfortable.
Thirdly, developers often struggle to find the right middleware to use. Microsoft distributes ASP.NET Core middleware in granular NuGet packages, meaning you have to know the middleware exists, then find and install the package, and then find the extension method that installs the middleware. As ASP.NET Core has moved from release candidate to the current 1.1 release, there has been churn in the package names themselves, which has made finding the right package frustrating.
Expect to see middleware play an increasingly important role in the future. Not only will Microsoft and others create more middleware, but also expect the sophistication of middleware to increase. Future middleware will not only continue to replace IIS features like URL re-writing, but also change our application architecture by enabling additional frameworks and the ability to compose new behavior into an application.
Don’t underestimate the effort of porting existing logic into middleware, or the impact middleware has on an application’s behavior.
ASP.NET Core and the Enterprise Part 3: Middleware (this one)
My latest course is now available on Pluralsight. From the description:
Reactive programming is more than an API. Reactive programming is a mindset. In this course, you'll see how to set up and install RxJS and work with your first Observable and Observer. You'll use RxJS to manage asynchronous data delivered from DOM events, network requests, and JavaScript promises. You'll learn how to handle errors and exceptions in asynchronous code, and learn about the RxJS operators you can use as composable building blocks in a data processing pipeline. By the end of the course, you'll have the fundamental knowledge you need to use RxJS in your own applications, and to use other frameworks that rely on RxJS.