Building Vendor and Feature Bundles with webpack

Thursday, December 1, 2016 by K. Scott Allen

The joke I’ve heard goes like this:

I went to an all night JavaScript hackathon and by morning we finally had the build process configured!

Like most jokes, there is an element of truth in it.

I’ve been working on an application that is mostly server rendered and requires minimal amounts of JavaScript. However, there are “pockets” in the application that require a more sophisticated user experience, and thus a heavy dose of JavaScript. These pockets all map to a specific application feature, like “the accounting dashboard” or “the user profile management page”.

These facts led me to the following requirements:

1. All third party code should build into a single .js file.

2. Each application feature should build into a distinct .js file.

Requirement #1 requires the “vendor bundle”. This bundle contains all the frameworks and libraries each application feature depends on. By building all this code into a single bundle, the client can effectively cache the bundle, and we only need to rebuild the bundle when a framework updates.

Requirement #2 requires multiple “feature bundles”. Feature bundles are smaller than the vendor bundle, so feature bundles can re-build each time a file inside changes. In my project, an ASP.NET Core application using feature folders, the scripts for features are scattered inside the feature folders. I want to build feature bundles into an output folder and retain the same feature folder structure (example below).

I tinkered with various JavaScript bundlers and task runners until I settled on webpack. With webpack, I found a solution that supports the above requirements and provides a decently fast development experience.

The Vendor Bundle

Here is a webpack configuration file for building the vendor bundle. In this case we will build a vendor bundle that includes React and ReactDOM, but webpack will examine any JS module name you add to the vendor array of the configuration file. webpack will place the named module and all of its dependencies into the output bundle named vendor.js. For example, Angular 2 applications would include “@angular/common” in the list. Since this is an ASP.NET Core application, I’m building the bundle into a subfolder of the wwwroot folder.

const webpack = require("webpack");
const path = require("path");
const assets = path.join(__dirname, "wwwroot", "assets");

module.exports = {
    resolve: {
        extensions: ["", ".js"]
    },
    entry: {
        vendor: [
            "react",
            "react-dom"
            ... and so on ...
        ]
    },
    output: {
        path: assets,
        filename: "[name].js",
        library: "[name]_dll"      
    },
    plugins: [
        new webpack.DllPlugin({
            path: path.join(assets, "[name]-manifest.json"),
            name: '[name]_dll'
        }),
        new webpack.optimize.UglifyJsPlugin({ compress: { warnings: false } })
    ]
};

webpack offers a number of different plugins to deal with common code, like the CommonsChunk plugin. After some experimentation, I’ve come to prefer the DllPlugin for this job. For Windows developers, the DllPlugin name is confusing, but the idea is to share common code using “dynamically linked libraries”, so the name borrows from Windows.

DllPlugin will keep track of all the JS modules webpack includes in a bundle and will write these module names into a manifest file. In this configuration, the manifest name is vendor-manifest.json. When we build the individual feature bundles, we can use the manifest file to know which modules do not need to appear in those feature bundles.
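The generated manifest is a JSON file mapping module request paths to the module ids webpack assigned inside vendor.js. An abbreviated, illustrative sketch (the exact paths and ids will depend on your dependencies and webpack version):

{
  "name": "vendor_dll",
  "content": {
    "./node_modules/react/react.js": 1,
    "./node_modules/react-dom/index.js": 2
  }
}

Notice the name property, vendor_dll, which is the expanded “[name]_dll” value from the configuration.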

Important note: make sure the output.library property and the DllPlugin name property match. It is this match that allows a library to dynamically “link” at runtime.

I typically place this vendor configuration into a file named webpack.vendor.config.js. A simple npm script entry of “webpack --config webpack.vendor.config.js” will build the bundle on an as-needed basis.
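As a sketch, the corresponding package.json entry might look like the following (the build:vendor script name is my own choice):

{
  "scripts": {
    "build:vendor": "webpack --config webpack.vendor.config.js"
  }
}

Then npm run build:vendor rebuilds vendor.js whenever a framework dependency changes.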

Feature Bundles

Feature bundles are a bit trickier, because now we need webpack to find multiple entry modules scattered throughout the feature folders of an application. In the following configuration, we’ll dynamically build the entry property for webpack by searching for all .tsx files inside the feature folders (tsx being the extension for the TypeScript flavor of JSX).

const webpack = require("webpack");
const path = require("path");
const assets = path.join(__dirname, "wwwroot", "assets");
const glob = require("glob");

const entries = {};
const files = glob.sync("./Features/**/*.tsx");
files.forEach(file => {
    const name = file.match(/^\.\/Features(.+\/[^\/]+)\.tsx$/)[1];
    entries[name] = file;
});

module.exports = {
    resolve: {
        extensions: ["", ".ts", ".tsx", ".js"],
        modulesDirectories: [
            "./client/script/",
            "./node_modules"
        ]
    },
    entry: entries,
    output: {
        path: assets,
        filename: "[name].js"    
    },
    module: {
        loaders: [
          { test: /\.tsx?$/, loader: 'ts-loader' }
        ]
    }, 
    plugins: [
         new webpack.DllReferencePlugin({            
             context: ".",
             manifest: require("./wwwroot/assets/vendor-manifest.json")
         })       
    ]
};

A couple notes on this particular configuration file.

First, you might have .tsx files inside a feature folder that are not entry points for an application feature but are supporting modules for a particular feature. In this scenario, you might want to identify entry points using a naming convention (like dashboard.main.tsx). With the above config file, you can place supporting modules or common application code into the client/script directory. webpack’s resolve.modulesDirectories property controls this directory name, and once you add a specific directory name you’ll also need to explicitly include node_modules in the list if you still want webpack to search node_modules for a piece of code.

Both webpack and the TypeScript compiler need to know about the custom location for modules, so you’ll also need to add a compilerOptions.paths setting in the tsconfig.json config file for TypeScript (this is a fantastic new feature in TypeScript 2.*).

{
  "compilerOptions": {
    "noImplicitAny": true,
    "noEmitOnError": true,
    "removeComments": false,
    "sourceMap": true,
    "module": "commonjs",
    "target": "es5",
    "jsx": "react",
    "baseUrl": ".",
    "moduleResolution": "node",
    "paths": {
      "*": [ "*", "Client/script/*" ] 
    }
  },  
  "compileOnSave": false,
  "exclude": [
    "node_modules",
    "wwwroot"
  ]
}

Secondly, the output property of webpack’s configuration used to confuse me until I realized you can parameterize output.filename with [name] and [hash] parameters (hash being something you’ll probably want to add to help with cache busting). It looks like output.filename would only create a single file for all of the entries, but if you have multiple keys in the entry property, webpack will build multiple output files and even create sub-directories.

For example, given the following entry:

entry: {
    '/Home/Home': './Features/Home/Home.tsx',
    '/Admin/Users/ManageProfile': './Features/Admin/Users/ManageProfile.tsx'
}

webpack will create /Home/Home.js and /Admin/Users/ManageProfile.js in the wwwroot/assets directory.

Finally, notice the use of the DllReferencePlugin in the webpack configuration file. Give this plugin the manifest file created during the vendor build and all of the framework code is excluded from the feature bundle. Now when building the page for a particular feature, include the vendor.js bundle first with a script tag, and the bundle specific to the given feature second.
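As a sketch, a Razor view for the profile management feature from the earlier example might reference the bundles like so (the paths assume the wwwroot/assets output location used above):

<script src="~/assets/vendor.js"></script>
<script src="~/assets/Admin/Users/ManageProfile.js"></script>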

Summary

As simple as it may sound, arriving at this particular solution was not an easy journey. The first time I attempted such a feat was roughly a year ago, and I gave up and went in a different direction. Tools at that time were not flexible enough to support the combination of everything I wanted: custom module folders, fast builds, and multiple bundles. Even when part of the toolchain worked, editors could fall apart and show false positive errors.

It is good to see tools, editors, and frameworks evolve to the point where the solution is possible. Still, there are many frustrating moments in understanding how the different pieces work together and knowing the mental model required to work with each tool, since different minds build different pieces. Two things I’ve learned are that documentation is still lacking in this ecosystem, and GitHub issues can never replace StackOverflow as a good place to look for answers.

AddFeatureFolders and UseNodeModules On NuGet For ASP.NET Core

Tuesday, November 29, 2016 by K. Scott Allen

Here are a few small projects I put together last month.

AddFeatureFolders

I think feature folders are the best way to organize controllers and views in ASP.NET MVC. If you aren’t familiar with feature folders, see Steve Smith’s MSDN article: Feature Slices for ASP.NET Core MVC.

To use feature folders with the OdeToCode.AddFeatureFolders NuGet package, all you need is to install the package and add one line of code to ConfigureServices.

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc()
            .AddFeatureFolders();

    // "Features" is the default feature folder root. To override, pass along 
    // a new FeatureFolderOptions object with a different FeatureFolderName
}

The sample application in GitHub demonstrates how you can still use Layout views and view components with feature folders. I’ve also allowed for nested folders, which I’ve found useful in complex, hierarchical applications. Nesting allows the feature structure to follow the user experience when the UI offers several layers of drill-down.


UseNodeModules

With the OdeToCode.UseNodeModules package you can serve files directly from the node_modules folder of a web project. Install the middleware in the Configure method of Startup.

public void Configure(IApplicationBuilder app, IHostingEnvironment environment)
{
    // ...

    app.UseNodeModules(environment);

    // ...
}

I’ve mentioned using node_modules on this blog before, and the topic generated a number of questions. Let me explain when and why I find UseNodeModules useful.

First, understand that npm has traditionally been a tool to install code you want to execute in NodeJS. But, over the last couple of years, more and more front-end dependencies have moved to npm, and npm is doing a better job supporting dependencies for both NodeJS and the browser. Today, for example, you can install React, Bootstrap, Aurelia, jQuery, Angular 2, and many other front-end packages of both the JS and CSS flavor.

Secondly, many people want to know why I don’t use Bower. Bower played a role in accelerating front-end development and is a great tool. But, when I can fetch all the resources I need directly using npm, I don’t see the need to install yet another package manager. 

Thirdly, many tools understand and integrate with the node_modules folder structure and can resolve dependencies using package.json files and Node’s CommonJS module standard. These are tools like TypeScript and front-end tools like webpack. In fact, TypeScript has adopted the “no tools required but npm” approach. I no longer need to use tsd or typings when I have npm and @types.

Given the above points, it is easy to stick with npm for all third-party JavaScript modules. It is also easy to install a library like Bootstrap and serve the minified CSS file directly from Bootstrap’s dist folder. Would I recommend every project take this approach? No! But, in certain conditions I’ve found it useful to serve files directly from node_modules. With the environment tag helper in ASP.NET Core you can easily switch between serving from node_modules (say, for debugging) and serving from a CDN in QA and production.
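Here is a sketch of what that switch can look like in a Razor layout, using Bootstrap as the example (the CDN URL is a placeholder, and I’m assuming UseNodeModules exposes the folder under a /node_modules request path):

<environment names="Development">
    <link rel="stylesheet" href="~/node_modules/bootstrap/dist/css/bootstrap.min.css" />
</environment>
<environment names="Staging,Production">
    <link rel="stylesheet" href="https://cdn.example.com/bootstrap/css/bootstrap.min.css" />
</environment>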

Enjoy!

ASP.NET Core and the Enterprise Part 3: Middleware

Tuesday, November 22, 2016 by K. Scott Allen

An enterprise developer moving to ASP.NET Core must feel a bit like a character in Asimov’s “The Gods Themselves”. In the book, humans contact aliens who live in an alternate universe with different physical laws. The landscape of ASP.NET Core is familiar. You can still find controllers, views, models, DbContext classes, script files, and CSS. But, the infrastructure and the laws are different.

For example, the hierarchy of XML configuration files in this new world is gone. The twin backbones of HTTP processing, HTTP Modules and HTTP Handlers, are also gone. In this post, we’ll talk about the replacement for modules and handlers, which is middleware.

Processing HTTP Requests

Previous versions of ASP.NET gave us a customizable but rather inflexible HTTP processing pipeline. This pipeline allowed us to install HTTP modules and execute logic for cross cutting concerns like logging, authentication, and session management. Each module had the ability to subscribe to preset events raised by ASP.NET. When implementing a logger, for example, you might subscribe to the BeginRequest and EndRequest events and calculate the amount of time spent in between. One of the tricks in implementing a module was knowing the order of events in the pipeline so you could subscribe to an event and inspect an HTTP message at the right time. Catch a too-early event, and you might not know the user’s identity. Catch a too-late event and a handler might have already changed a record in the database.

ASP.NET Classic Pipeline with Module and Handlers

Although the old model of HTTP processing served us well for over a decade, ASP.NET Core brings us a new pipeline based on middleware. The new pipeline is completely ours to configure and customize. During the startup of our application, we’ll use code to tell ASP.NET which pieces of middleware we want in the application, and the order in which the middleware should execute.

Once an HTTP request arrives at the ASP.NET server, the server will pass the request to the first piece of middleware in our application. Each piece of middleware has the option of creating a response, or calling into the next piece of middleware. One way to visualize the middleware is to think of a stack of components in your application. The stack builds a bi-directional pipeline. The first component will see every incoming request. If the first component passes a request to the next component in the stack, the first component will eventually see the response coming out of a component further up the stack.

ASP.NET Middleware as a stack

A piece of middleware that comes late in the stack may never see a request if the previous piece of middleware does not pass the request along. This might happen, for example, because a piece of middleware you use for authorization checks finds out that the current user doesn’t have access to the application.

It’s important to know that some pieces of middleware will never create a response and only exist to implement cross cutting concerns. For example, there is a middleware component to transform an authentication token into a user identity, and another middleware component to add CORS headers into an outgoing response. Microsoft and other third parties provide us with hundreds of middleware components.

Other pieces of middleware will sometimes jump in to create or override an HTTP response at the appropriate time. For example, Microsoft provides a piece of middleware that will catch unhandled exceptions in the pipeline and create a “developer friendly” HTML response with a stack trace. A different piece of middleware will map the exception to a “user friendly” error page. You can configure different middleware pipelines for different environments, such as development versus production.
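As a minimal sketch, the environment-specific setup happens in code during startup (assuming the standard diagnostics middleware packages from Microsoft; the /error path is illustrative):

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        // full stack trace page for developers
        app.UseDeveloperExceptionPage();
    }
    else
    {
        // re-execute the pipeline to render a friendly error page
        app.UseExceptionHandler("/error");
    }

    // ... the rest of the pipeline
}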

Another way to visualize the middleware pipeline is to think of a chain of responsibility.

ASP.NET Middleware Chain of Responsibility

Each piece of middleware has a specific focus. A piece of middleware to log every request would appear early in the chain to ensure the logging middleware sees every request. A later piece of middleware might even route a request outside of the middleware and into another framework or another set of components, like forwarding a request to the MVC framework for processing.

This article doesn’t provide extensive technical coverage of middleware. However, to give you a taste of the code, let’s see how to configure existing middleware and how to create a new middleware component.

Configuring Middleware

Adding middleware to an application happens in the Configure method of the startup class for an application. The Configure method is injectable, meaning you can ask for any other services you need, but the one service you’ll always need is the IApplicationBuilder service. The application builder allows us to configure middleware. Most middleware will live in a NuGet package. Each NuGet package will include extension methods for IApplicationBuilder to add a middleware component using a simple method call. For example:

public void Configure(IApplicationBuilder app)
{                       
    app.UseDeveloperExceptionPage();
    app.UseFileServer();
    app.UseCookieAuthentication(AppCookieAuthentication.Options);
    app.UseMvc();
}

Notice the extension methods all start with the word Use. The above code would create a pipeline with 4 pieces of middleware. All the above middleware is provided by Microsoft. The first piece of middleware displays a “developer friendly” error page if there is an uncaught exception later in the pipeline. The second piece of middleware will serve up files from the file system when a request matches the file name and path. The third piece transforms an ASP.NET authentication cookie into a user identity. The final piece of middleware will send the request to the MVC framework where MVC will try to match the request to an MVC controller.

Creating Middleware

You can implement a middleware component as a class with a constructor and an Invoke method. ASP.NET will pass a reference to the next piece of middleware as a RequestDelegate constructor parameter. Each HTTP transaction will pass through the Invoke method of the middleware.

The following piece of middleware will write a greeting like “Hello!” into the response if the request path starts with /hello. Otherwise, the middleware will call into the next component to produce a response, but add an HTTP header with the current greeting text.

public class SayHelloMiddleware
{
    public SayHelloMiddleware(RequestDelegate next, SayHelloOptions options)
    {
        _options = options;
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        if (context.Request.Path.StartsWithSegments("/hello"))
        {
            await context.Response.WriteAsync(_options.GreetingText);
        }
        else
        {
            await _next(context);
            context.Response.Headers.Add("X-GREETING", _options.GreetingText);
        }
        
    }

    readonly RequestDelegate _next;
    readonly SayHelloOptions _options;
}

Although this middleware is trivial, the example should give you an idea of what middleware can do. First, Invoke will receive an HttpContext object with access to the request and the response. You can inspect incoming headers and create outgoing headers. You can read the request body or write into the response. The logic inside Invoke can decide if you want to call the next piece of middleware or handle the response entirely in the current middleware. Note the name Invoke is the method ASP.NET will automatically look for (no interface implementation required), and is injectable.
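Most middleware also ships with a Use* extension method like the ones we saw in the configuration section. Here is a minimal sketch for this example (UseMiddleware is the framework helper; SayHelloOptions and the UseSayHello name are inventions of this sample, not a real package):

public static class SayHelloMiddlewareExtensions
{
    public static IApplicationBuilder UseSayHello(this IApplicationBuilder app,
                                                  SayHelloOptions options)
    {
        // UseMiddleware constructs SayHelloMiddleware, supplying the next
        // RequestDelegate plus the additional constructor arguments
        return app.UseMiddleware<SayHelloMiddleware>(options);
    }
}

With the extension method in place, app.UseSayHello(options) reads just like the built-in middleware registrations.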

Advantages

I’ve personally found middleware to be liberating. The ability to explicitly configure every component of the HTTP processing pipeline makes it easy to know what is happening inside an application. The application is also as lean as possible because we can install only the features an application requires.

Middleware is one reason why ASP.NET Core can perform better and use less memory than its predecessors.

Disadvantages

There are a few downsides to middleware.

First, I know many enterprises with custom HTTP modules. The services provided by these modules range from custom authentication and authorization logic, to session state management, to custom instrumentation. To use these services in ASP.NET Core the logic will need to move into middleware. I don’t see the port from modules to middleware as challenging, but the port is time consuming for such critical pieces of infrastructure. You’ll want to identify any custom modules (and handlers) an enterprise application relies on so the port happens early.

Secondly, I’ve seen developers struggle with the philosophy of middleware. Middleware components are highly asynchronous and often follow functional programming idioms. Also, there are no interfaces to guide middleware development, as most of the contracts rely on convention instead of the compiler. All of these changes make some developers uncomfortable.

Thirdly, developers using middleware often struggle to find the right middleware. Microsoft distributes ASP.NET Core middleware in granular NuGet packages, meaning you have to know the middleware exists, then find and install the package, and then find the extension method to add the middleware to the pipeline. As ASP.NET Core has moved from release candidate to the current 1.1 release, there has been churn in the package names themselves, which has led to frustration in finding the right package name.

Summary

Expect to see middleware play an increasingly important role in the future. Not only will Microsoft and others create more middleware, but also expect the sophistication of middleware to increase. Future middleware will not only continue to replace IIS features like URL re-writing, but will also change our application architecture by enabling additional frameworks and the ability to compose new behavior into an application.

Don’t underestimate the importance of porting existing logic into middleware, or the impact middleware has on an application’s behavior.

Also In This Series:

ASP.NET Core and the Enterprise Part 3: Middleware (this one)

ASP.NET Core and the Enterprise Part 2: Hosting

ASP.NET Core and the Enterprise Part 1: Frameworks

Getting Started with Reactive Programming Using RxJS

Wednesday, November 16, 2016 by K. Scott Allen

My latest course is now available on Pluralsight. From the description:

Reactive programming is more than an API. Reactive programming is a mindset. In this course, you'll see how to set up and install RxJS and work with your first Observable and Observer. You'll use RxJS to manage asynchronous data delivered from DOM events, network requests, and JavaScript promises. You'll learn how to handle errors and exceptions in asynchronous code, and learn about the RxJS operators you can use as composable building blocks in a data processing pipeline. By the end of the course, you'll have the fundamental knowledge you need to use RxJS in your own applications, and use other frameworks that rely on RxJS.

Getting Started with Reactive Programming Using RxJS

ASP.NET Core and the Enterprise Part 2: Hosting

Tuesday, October 25, 2016 by K. Scott Allen

Kestrel the bird, not the server

The hosting model for ASP.NET Core is dramatically different from previous versions of ASP.NET. This is also one area where I’ve seen a fair amount of misunderstanding.

ASP.NET Core is a set of libraries you can install into a project using the NuGet package manager. One of the packages you might install for HTTP message processing is a package named Microsoft.AspNetCore.Server.Kestrel. The word server is in the name because this new version of ASP.NET includes its own web servers, and the featured server has the name Kestrel. 

In the animal kingdom, a kestrel is a bird of prey in the falcon family, but in the world of ASP.NET, Kestrel is a cross-platform web server. Kestrel builds on top of libuv, a cross-platform library for asynchronous I/O. libuv gives Kestrel a consistent streaming API to use across Windows and Linux. You also have the option of plugging in a server based on the Windows HTTP Server API (WebListener), or writing your own IServer implementation. Unless you have a good reason not to, you’ll want to use Kestrel.

An Overview of How It Works

You can configure the server for your application in the entry point of the application. There is no Application_Start event in this new version of ASP.NET, nor are there any default XML configuration files. Instead, the start of the application is a static Main method, and configuration lives in code.

public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseIISIntegration()
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}

If you are looking at the above Program class with a static Main method and thinking the code looks like what you would see in a .NET console mode application, then you are thinking correctly. Compiling an ASP.NET project still produces a .dll file, but with .NET Core we launch the web server from the command line with the dotnet command line interface. The dotnet host will ultimately call into the Main method. In this way of working, .NET Core resembles environments like Java, Ruby, and Python.

dotnet run

If you are working on ASP.NET Core from Visual Studio, then you might never see the command line. Visual Studio continues to do a job it has always done, which is to hide some of the lower level details. With Visual Studio, you can set the application to run with Kestrel as a direct host, or to run the application in IIS Express (the default setting). In both cases, the dotnet host and Kestrel server are in play, even when using IIS Express. This brings us to the topic of running applications in production.  

ASP.NET Core Applications in Production

Once you realize that ASP.NET Core includes a cross-platform host and web server, you might think you have all the pieces you need to push to production. There is some truth to this line of thought. Once you’ve invoked the Run method on the WebHost object in the above code, you have a running web server that will listen to HTTP requests and can work on everything from a 32 core Linux server to a Raspberry Pi. However, Microsoft strongly suggests using a hardened reverse proxy in front of your Kestrel server in production. The proxy could be IIS on Windows, or Apache or NGINX.

ASP.NET Core In Production

Why the reverse proxy? In short, because technologies like IIS and Apache have been around for over 20 years and have seen all the evils the Internet can deliver to a network socket. Kestrel, on the other hand, is still a newborn babe. Reliable servers also require additional infrastructure, like careful process management to restart failed applications. Outside of ASP.NET, in the world of Java, Python, Ruby, and NodeJS web apps, you’ll see tools like Phusion Passenger and PM2 work in combination with the reverse proxy. These types of tools provide the watchdog monitoring, logging, balancing, and overall process management needed for a robust server.

With ASP.NET Core on Windows, you can use IIS to achieve the same goals. HTTP requests will still arrive at IIS first, and IIS can forward requests to the Kestrel application. You can have multiple applications deployed behind a single instance of IIS, and IIS will manage the applications and provide logging, request filtering, URL rewrites, and many other useful features. In a way, this isn’t much different from what we’ve done in the past with ASP.NET, but once you dig behind the architectural diagrams, you’ll see the details are very different.
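For the Linux scenarios, a minimal sketch of an NGINX reverse proxy configuration might look like the following, assuming Kestrel is listening on its default port of 5000 (the server name is a placeholder):

server {
    listen 80;
    server_name example.com;

    location / {
        # forward everything to the Kestrel server
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}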

What’s Different?

Deploying ASP.NET Core applications to IIS requires a web.config file. ASP.NET Core knows nothing about web.config files. The web.config file only exists to configure IIS in a reverse proxy role. A typical web.config file will look like the following.

<configuration>
  <system.webServer>
    <handlers>
      <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified" />
    </handlers>
    <aspNetCore processPath="dotnet" arguments=".\TheWebApp.dll" stdoutLogEnabled="false" 
                   stdoutLogFile=".\logs\stdout" forwardWindowsAuthToken="false" />
  </system.webServer>
</configuration>

The web.config file instructs IIS to send requests for all paths and verbs to a new HTTP handler named aspNetCore. This ASP.NET Core Module for IIS is a piece of software you’ll need to install on an IIS server to run ASP.NET Core applications. The ASP.NET Core Module is available with the .NET Core SDK install, or with a special Windows Server Hosting .NET Core installer.

The second bit of the web.config file configures the ASP.NET Core module with instructions on how to start your application, which is to use the same dotnet command we saw earlier. Now, instead of our ASP.NET application running inside of a w3wp.exe IIS worker process, the application will execute inside of a dotnet.exe process.

With a better idea of how ASP.NET Core runs in production, let’s talk about benefits and risks.

Benefits

One benefit to Kestrel is the ability to execute across different platforms. You can author an ASP.NET application on Windows using Visual Studio and IIS Express, but deploy the application on Linux with Apache in front. In all scenarios, the server is always Kestrel and your application code doesn’t need to change.

Another benefit to Kestrel is the incredible work Microsoft has put into making a blazing fast web server with managed code. Inside the readme file for the ASP.NET Benchmarks repository, you’ll currently find benchmarks showing ASP.NET Core serving five times the number of requests per second as ASP.NET 4.6 on the same hardware: 313,001 requests per second compared to 57,843.


Of course, benchmark code and benchmark results don’t always reflect how a specific business application will behave. It’s like seeing an F1 racing car manufactured by Toyota on the television and then thinking you’ll find a car that goes 220 mph at the local Toyota dealer. What you will find at the dealer are cars that indirectly benefit from the millions of dollars that Toyota puts into researching new technologies for their F1 cars. The benefits trickle down.

I decided to try some comparative benchmarks of my own. I created three applications with ASP.NET WebForms, ASP.NET MVC 5, and ASP.NET MVC Core. Each application delivers 3 KB of HTML to the client using the typical patterns you would find in business applications, for example, a master page in WebForms and a Layout page in MVC. For ASP.NET MVC Core, I ran tests against a naked Kestrel server as well as a Kestrel server proxied by IIS.

Benchmarking ASP.NET

In my tests, the naked Kestrel server delivered 5 times the throughput of WebForms, and over twice the throughput of MVC 5. Once I moved the Core application behind IIS however, throughput dropped below the level of MVC 5. I know these results might surprise many people who believe that ASP.NET Core is inherently faster and lighter than its predecessors. However, ASP.NET Core’s predecessors were deeply integrated into IIS and could execute inside the same worker process where sockets were open. The current recommended setup adds an additional hop.

Another surprise – switching the ASP.NET Core application to run on the full .NET framework instead of .NET Core resulted in very little change in the throughput numbers. As developers, we want to believe .NET Core is also inherently faster and lighter than the full .NET framework, but for runtime performance in this specific application, the difference appears to be negligible.  

I do think we can reasonably expect the performance of ASP.NET Core with IIS to improve in the future, so we’ll leave performance as a benefit.

Risks

The biggest risk I’ve seen in the new hosting model is the confusion that results in using IIS with ASP.NET Core. There are ASP.NET related settings in IIS that have no impact on a .NET Core application. These are settings like the pipeline mode (integrated or classic), and the setting to select a version of the .NET framework for a specific AppPool.  

IIS Settings

Developers and IT operations will need to understand the new deployment strategy and understand which IIS settings are significant and which settings to ignore. Both parties will also need to learn new troubleshooting techniques and diagnose a new set of common errors. 502.5 – “Failed to start process” is a terrifying new common error. One reason for this error is that the web.config file specifies incorrect arguments for the dotnet host.
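When the 502.5 error appears, one low-tech diagnostic I’d suggest is to run the same command the module would run, from the application folder, and watch the console output directly (using the processPath and arguments from the web.config above):

dotnet .\TheWebApp.dll

If the application can’t start, the exception details show up in the console instead of disappearing behind the proxy.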

Summary

Currently, I feel the hosting area is one of the bigger areas of risk in ASP.NET Core. However, I think we will be able to mitigate the risks over time through a combination of experience, education, experimentation, and improvements from Microsoft based on customer feedback. It is important for developers and IT operations to look at how to host an ASP.NET Core application early and not treat deployment as a known procedure that can wait until the end of the project.

Also In This Series

ASP.NET Core and the Enterprise Part 1: Frameworks

ASP.NET Core and the Enterprise Part 2: Hosting (current)

ASP.NET Core and the Enterprise Part 3: Middleware

ASP.NET Core and the Enterprise: Part 1 Frameworks

Tuesday, October 11, 2016 by K. Scott Allen

When larger companies with larger development teams ask me about ASP.NET Core, I generally frame the conversation in terms of risk and reward. Yes, there is a new architecture and yes, there are new features, but, when working on business applications with long lifecycles you want to look beyond the obvious topics for evangelism and measure the risks.

There are six areas to consider in evaluating ASP.NET Core and its impact on business and operations.

Six Areas To Evaluate ASP.NET Core In the Enterprise

First is understanding ASP.NET Core’s relationship with the new .NET Core framework. That’s the topic for this post. In the future we’ll also be evaluating the new hosting model for ASP.NET Core, the HTTP processing pipeline, security features, the new data access landscape, and finally ASP.NET Core itself.

ASP.NET Core and the .NET Frameworks

We now have two distinct flavors of the .NET framework to choose from. First there is the full .NET framework. The full .NET framework is the mature framework that has been around since the beginning, is pre-installed with Windows, and includes application-level frameworks like Windows Forms, Web Forms, WCF, and WPF. The most recent version at this time is 4.6.2.

We also have .NET Core, a new and modular version of .NET that runs on more than just the Windows operating system.

Where ASP.NET Core fits into the picture is that ASP.NET Core is an application framework that can run on either the full .NET framework or on .NET Core. Selecting a .NET framework flavor is one of your first decisions when making a move to ASP.NET Core. Do you want to run on the full framework, or .NET Core, or will you need to support both?

ASP.NET Core and Framework Flavors

When you choose to run on the full .NET framework you are running on the framework you already know. Yes, ASP.NET Core will be a bit different from the ASP.NET frameworks of the past, but your underlying framework is the same. In order to understand why you might choose .NET Core instead of the full framework, we need to dig into the risks and rewards of .NET Core.

.NET Core Rewards

.NET Core is a cross-platform framework, meaning .NET Core will run on Windows, on the Mac, and on various flavors of Linux. .NET Core also works in a Docker container for those who are using or thinking about using Docker software containers.

.NET Core Home Page

One must understand that ASP.NET Core does not require .NET Core as the underlying framework. But, if you do choose to use .NET Core as your underlying framework, you will be able to author and deploy ASP.NET applications and services on all of these various platforms. Linux is, of course, a big target for server-side applications. Most enterprises are heterogeneous and already have the IT expertise to run business applications on both Windows and Linux servers. There is also the opportunity to save money as Linux servers typically run a bit cheaper, particularly when using cloud based infrastructure. On Azure, for example, a single 4 core virtual machine running Linux is currently around $90 cheaper per month than its Windows counterpart.

What’s not immediately obvious when talking about a cross-platform .NET framework is how the necessary tooling also works across platforms. .NET Core has no hard dependency on Windows or Visual Studio. All of the low-level tools you need to do work will run from the command line. There is an entire new world of text editors and IDEs available that we can now use to develop .NET applications, including Visual Studio Code from Microsoft, Project Rider from JetBrains, as well as text editors like Sublime and Atom.

Editors and IDEs are plentiful

I’ve worked on more than one project over the years where there is a front-end specialist who uses a Mac. The front-end developer hasn’t been able to install Visual Studio and work with an ASP.NET project the same way a Windows developer would work with the project. This type of scenario is considerably easier with .NET Core because a developer on Apple hardware can write, run, and debug ASP.NET code just as well as a developer on Windows.

.NET Core is also an open source project. On GitHub you can find not just the code for the framework itself, but also the code for the unit tests, and the documentation. You can view bugs in the GitHub issues list and see the current status of a bug. I’ve been telling people that one of the underappreciated advantages of Microsoft’s move towards open source is not in having the source code to a framework or library. We’ve always been able to get source code, even if we had to use a de-compiler. What I’ve found valuable are the unit tests, because they often give me better insight into how a particular feature works compared to using the documentation or the source code itself. The unit tests can describe a feature from several different perspectives and also tell me what a piece of software is not designed to do.

.NET Core - the CoreFx repository

In the enterprise, there is often a worry that open source projects do not have the same level of support as closed source commercial products. However, Microsoft has announced a support lifecycle of 3 years for each major and minor release of .NET Core. The .NET Core 1.0.0 release was June 27, 2016, meaning the end of support for 1.0.0 is in June of 2019, or even later if there is a Long Term Support (LTS) release in the future.


Something you’ll notice when looking at the source code on GitHub is how modular the new .NET Core has become. This is no longer a monolithic framework where you have everything or you have nothing at all. With .NET Core the fundamental pieces are smaller and integrated with the NuGet package management ecosystem. You can pull in just the pieces of .NET Core that you need to build an application.

.NET Core also supports an application deployment model known as a self-contained deployment. A self-contained deployment means you can put your ASP.NET Core application into production with all of the .NET Core assemblies the application needs to function, and these framework assemblies are all local. There is no global assembly cache, and there is no framework installation required on the server. If you want a micro-services architecture with two dozen services deployed on one server, each ASP.NET Core service can deploy with its own version of the .NET Core framework and never needs to worry about conflicting with the versions of other services or requiring the other services to upgrade. Everyone lives in perfect isolation. The downside here is that you’ll need to manage patches and updates to the framework for each deployment.
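As a sketch of how this looks with the project.json tooling of the 1.0 era, a self-contained deployment starts by declaring the runtimes you intend to target (the runtime identifiers here are examples):

{
  "runtimes": {
    "win10-x64": {},
    "ubuntu.14.04-x64": {}
  }
}

Publishing with dotnet publish -r win10-x64 then produces an output folder that carries the .NET Core framework assemblies along with the application.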

Are there other downsides?

The Risks

Making a cross platform version of the .NET framework required some hard work and sacrifices. If the full .NET framework was a diet plan, you could have doughnuts for breakfast, a deep dish pizza for lunch, and a bacon cheeseburger with egg on top at dinner time. No human can really consume this many calories in a day, but the calories are there for you if you want to try and eat them all. .NET Core is more of a lite meal with all the essential vitamins included. But, depending on your appetite, you might not find enough here.

The .NET Diet Plans

Where am I going with this analogy?

When I say that there were sacrifices made in creating .NET Core, I mean some technologies and features are missing, and some frameworks of the past will probably never appear in .NET Core.

There are generally three reasons for a feature of .NET to not appear in .NET Core. One reason is that there wasn’t enough time to port a feature to .NET Core and the team is still working on expanding coverage. A second reason is that a feature might not make sense for a cross-platform library and only makes sense on Windows. And finally, there are some technologies in .NET that have been around for 15+ years and are not going to move forward.

Unfortunately, many of us working in the enterprise space still use frameworks introduced in the original .NET framework of 2001. So what are the big pieces that are missing and what do you need to know?

For starters, there are entire frameworks that have not made the move to .NET Core. These are frameworks like WinForms, WebForms, WPF, Workflow, and WCF. From an ASP.NET perspective we don’t really care about desktop technologies like WPF, but there is a tremendous amount of enterprise software written using WebForms, and there are a tremendous number of services using WCF. I also know there is still software around using the predecessor to WCF, which was ASMX web services. Applications that rely on these frameworks might never move to .NET Core as they will require a re-design and a re-write. When you think about WCF in particular, a rewrite might not only impact your business but also your partner’s business.

I do want to point out that when it comes to WCF there is no server-side WCF replacement. But, if you are writing a server-side application that consumes WCF services hosted elsewhere, there is an open source version of the WCF client libraries available. There is also some Visual Studio tooling (currently in beta) for adding a service reference to a .NET Core application.

Gone Missing in .NET Core

There are also features that are not in .NET Core, at least not currently. Some of these may appear in the future, but these are features like the once hugely popular DataSet class. There is also no support for working with XML schemas or XSLT in .NET Core. There are no distributed transactions (perhaps a good thing), no ability to communicate with LDAP or Active Directory, and no class for interacting with an SMTP server. Some of these features, like LDAP and SMTP, can be replaced with HTTP-based cloud services or third-party providers, but this will still require some work in a legacy code base.

Some features exist in .NET Core, but with small changes in the API. Examples include the reflection API (changed for performance) and the encryption API (for better cross-platform support).

When moving to .NET Core you also need your third-party dependencies to support .NET Core. Many popular libraries like StructureMap, AutoMapper, MediatR, and Json.NET all have working versions for .NET Core. MongoDB, as of this writing, just released a .NET Core driver. Oracle support for the Entity Framework, however, is still only at the planning stage for .NET Core. If you rely on third-party libraries, you’ll need those libraries to port to .NET Core, or find an alternative, or re-write your application to remove the dependency.

One tool you might find useful in analyzing your applications is the .NET Portability Analyzer. You can run the analyzer as a Visual Studio extension, or from the command line. With the analyzer you can select a target platform, like .NET Core, or ASP.NET Core, or a specific version of the full .NET framework, and the tool can tell you what problems you’ll face.
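As a sketch, the command line version can be pointed at a folder of assemblies like so (the exact flags may vary with the version of the tool):

ApiPort.exe analyze -f .\bin\Release -t ".NET Core"

The report lists the APIs in use that are missing from the chosen target.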

Summary

To summarize this part of the conversation, I think we are still a few years away from seeing .NET Core play a large role in the enterprise. We have a tremendous amount of legacy code that relies on WebForms and WCF services. There might not be enough of a return on investment to re-write or port these applications, at least not at this early stage in 2016.

However, it is clear that Microsoft’s future direction is in the Core space. Yes, the last update to the full .NET framework did include improvements for ASP.NET and WebForms, but clearly the future innovation and hard work will go into the new core frameworks like .NET Core, ASP.NET Core, Entity Framework Core, and whatever other cores come along in the future.

I do believe that now is a good time to start some planning and start prototyping. If you have requirements for new server-side applications, I’d push to try using .NET Core first, and fall back to the full framework if there is too much pain. Running ASP.NET Core on the full .NET framework will solve many problems with missing features and third party dependencies. Yes, running ASP.NET Core on the full framework does feel like a temporary solution to use in transition, but it is a step in the right direction, even if the solution feels messy. In the enterprise, we always have to deal with messy. Plus, the expertise an enterprise can gain in these early days will pay dividends in five years.

Also In This Series

ASP.NET Core and the Enterprise Part 1: Frameworks (current)

ASP.NET Core and the Enterprise Part 2: Hosting

ASP.NET Core and the Enterprise Part 3: Middleware

Updated Videos For ASP.NET Core

Wednesday, October 5, 2016 by K. Scott Allen

I’ve re-recorded my ASP.NET Core Fundamentals course for Pluralsight using the released bits of ASP.NET Core 1.0. I hope you enjoy the videos!

In the future I’d like to record videos showing my opinionated approach to using ASP.NET on larger projects. Let Pluralsight know if that’s the type of content you’d like to see!

ASP.NET Core Fundamentals
