
How the Next Delegate Works In ASP.NET Core Middleware

Monday, September 17, 2018 by K. Scott Allen

How does next know how to call the next piece of middleware in the HTTP processing pipeline? I’ve been asked this question more than once when helping to write middleware components for ASP.NET Core.

I thought it might be fun to answer the question by showing the code for an implementation of IApplicationBuilder. Keep in mind the code is meant to demonstrate how to build a middleware pipeline. There is no error handling, no optimizations, no pipeline branching features, and no service provider.

The Goal

We want an app builder with a Use method just like the real application builder, that is, a Use method that takes a Func<RequestDelegate, RequestDelegate>. This Func<> represents a middleware component.

When we invoke the function we have to pass in a next delegate that represents the next piece of middleware in the pipeline. What we get back when we invoke the function is a second function that we can use to process each individual HTTP request.
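For reference, the framework defines RequestDelegate as a function that takes an HttpContext and returns a Task. Since this exercise runs against a fake context instead of a real HttpContext, assume an equivalent delegate declared over the fake type:

public delegate Task RequestDelegate(TestHttpContext context);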

The code below looks just like the code in the Configure method of a web app, although the middleware doesn’t do any real work. Instead, the components write log statements into a fake HTTP context.

app.Use(next =>
{
    return async ctx =>
    {
        ctx.AddLogItem("Enter middleware 1");
        await next(ctx);
        ctx.AddLogItem("Exit middleware 1");
    };
});

app.Use(next =>
{
    return async ctx =>
    {
        ctx.AddLogItem("Enter middleware 2");
        await next(ctx);
        ctx.AddLogItem("Exit middleware 2");
    };
});

app.Use(next =>
{
    return async ctx =>
    {
        ctx.AddLogItem("Enter middleware 3");
        await next(ctx);
        ctx.AddLogItem("Exit middleware 3");
    };
});

If we were to look at the log created during execution of the test, we should see log entries in this order:

Enter middleware 1
Enter middleware 2
Enter middleware 3
Exit middleware 3
Exit middleware 2
Exit middleware 1

In a unit test with the above code, I expect to be able to use the app builder to build a pipeline for processing requests represented by an HttpContext.

var pipeline = app.Build();

var request = new TestHttpContext();
await pipeline(request);

var log = request.GetLogItem();

Assert.Equal(6, log.Count);
Assert.Equal("Enter middleware 1", log[0]);
Assert.Equal("Exit middleware 1", log[5]);

The Implementation

Each time there is a call to app.Use, we are going to need to keep track of the middleware component the code is adding to the pipeline. We’ll use the following class to hold the component. The class will also hold the next pointer, which we’ll have to compute later after all the calls to Use are finished and we know which component comes next. We’ll also store the Process delegate, which represents the HTTP message processing function returned by the component Func (which we can’t invoke until we know what comes next).

public class MiddlewareComponentNode
{
    // the processing function for the next component in the pipeline
    public RequestDelegate Next;
    // the processing function returned by invoking Component
    public RequestDelegate Process;
    // the middleware component registered by a call to Use
    public Func<RequestDelegate, RequestDelegate> Component;
}

In the application builder class, we only need to store a list of the components registered with each call to Use. Later, when building the pipeline, the ability to look forwards and backwards from a given component will prove useful, so we’ll add the components to a linked list.

public void Use(Func<RequestDelegate, RequestDelegate> component)
{
    var node = new MiddlewareComponentNode
    {
        Component = component
    };

    Components.AddLast(node);
}

LinkedList<MiddlewareComponentNode> Components = new LinkedList<MiddlewareComponentNode>();

The real magic happens in Build. We’ll start with the last component in the pipeline and loop until we reach the first component. For each component, we have to create the next delegate. next will either point to the processing function of the next middleware component or, for the last component in the pipeline, be a function we provide ourselves, one that has no logic or that perhaps sets the response status to 404. Once we have the next delegate, we can invoke the component function to create the processing function itself.

public RequestDelegate Build()
{
    // walk the list from back to front, so each component's
    // next delegate exists before the component needs it
    var node = Components.Last;
    while(node != null)
    {
        node.Value.Next = GetNextFunc(node);
        node.Value.Process = node.Value.Component(node.Value.Next);
        node = node.Previous;
    }
    // the first component's processing function is the entry
    // point for the entire pipeline
    return Components.First.Value.Process;
}

private RequestDelegate GetNextFunc(LinkedListNode<MiddlewareComponentNode> node)
{
    if(node.Next == null)
    {
        // no more middleware components left in the list 
        return ctx =>
        {
            // consider a 404 status since no other middleware processed the request
            ctx.Response.StatusCode = 404;
            return Task.CompletedTask;
        };
    }
    else
    {
        return node.Next.Value.Process;
    }
}

This has been a "Build Your Own AppBuilder" exercise. "Build your own ________" exercises like this are a fun challenge and a good way to understand how a specific piece of software works behind the scenes.

.NET Core Opinion #3 - Other Folders To Include in the Source Repository

Thursday, September 13, 2018 by K. Scott Allen

In addition to src and test folders, there are a few other top level folders I like to see in a repository for a .NET Core code base.

benchmarks – for performance sensitive projects. Benchmarks typically require a benchmarking framework and perhaps some custom applications. All of the benchmark related code can live inside this folder.

build – for build scripts and other build related files. Some build systems require build artifacts to live in the root of the repository, but supporting files can live here to avoid cluttering the root. More on build files in a future post.

docs – for markdown files, diagrams, and other documentation. There are a few possible audiences for this folder, depending on the project type. For OSS libraries, documentation could include contributor focused documentation, like build instructions and style guidelines. For business apps, the folder might target users with setup instructions.

samples – for code to demonstrate libraries and frameworks. My basic rule of thumb is that if the repo builds NuGet packages for other developers to consume, you’ll want a samples folder demonstrating some basic scenarios for using the package.

scripts – for scripts related to the project. These could be automation scripts for sample data, computer setup, cloud provisioning, or desired state configuration. More on scripts in a future post.

specs – for those projects building on published specs. Examples would be HL7 specifications for a health data parser, or the open language grammar for a parser.

tools – for utilities, possibly from a third party, that are required to build, run, or deploy the code.
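Combined with the src and test folders from the previous post in this series, a repository using all of these conventions looks like:

.
|-- benchmarks/
|-- build/
|-- docs/
|-- samples/
|-- scripts/
|-- specs/
|-- src/
|-- test/
|-- tools/
|-- <solution>.sln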

As an aside, many of the benefits of .NET Core being open source are not related to the source itself, because we’ve always been able to get to the source code with tools like Reflector. Many of the benefits are seeing other artifacts like unit tests, sample projects, and experiments. When I was learning .NET Core, I found these other folders invaluable.

Coming up: more on the build and scripts folders.

CancellationTokens and Aborted ASP.NET Core Requests

Wednesday, September 12, 2018 by K. Scott Allen

When a client closes a connection during a long running web operation, it could be beneficial for some systems to take notice and stop work on creating the response.

There are two techniques you can use to detect an aborted request in ASP.NET Core. The first approach is to look at the RequestAborted property of HttpContext.

if (HttpContext.RequestAborted.IsCancellationRequested)
{
    // can stop working now
}

RequestAborted is a CancellationToken. Another approach is to allow model binding to pass this CancellationToken as an action parameter.

[HttpGet]
public async Task<ActionResult> GetHardWork(CancellationToken cancellationToken)
{
    // ...

    if (cancellationToken.IsCancellationRequested)
    {
        // stop!
    }
    
    // ...
}

And yes, both approaches work with the exact same object.

if(cancellationToken == HttpContext.RequestAborted)
{
    // this is true!
}
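Rather than polling IsCancellationRequested, you can also flow the token into downstream async calls so cancellation propagates automatically. Here is a sketch, where the repository field and its GetResultsAsync method are hypothetical stand-ins for real data access code:

[HttpGet]
public async Task<ActionResult> GetHardWork(CancellationToken cancellationToken)
{
    // a hypothetical data access call - forwarding the token allows the
    // operation to throw OperationCanceledException if the client disconnects
    var results = await _repository.GetResultsAsync(cancellationToken);
    return Ok(results);
}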

But, Why Am I Not Seeing Any Disconnects?

If you are hosted in IIS or IIS Express, the ASP.NET Core Module (ANCM) doesn’t tell ASP.NET Core to abort a request when the client disconnects. The ANCM is being reworked for the 2.2 release to support a faster in-process hosting model. Hopefully, the 2.2 work will fix this issue, too.

.NET Core Opinion #2 - Managing a Repository Structure

Thursday, September 6, 2018 by K. Scott Allen

One of the challenges in maintaining an orderly repository structure is seeing the repository structure, at least if you are a Visual Studio user. Studio’s Solution Explorer window shows us the structure of a solution, but this presentation, which can include virtual folders, will disguise the true structure on disk. Trying to create folders and arrange files with the default Solution Explorer view is impossible.

Thus, I recommend using the command line when creating projects, folders, and files that live outside of projects. The dotnet CLI can manage Visual Studio sln files, so starting a new project might look like this:

[Screen capture: using the dotnet CLI to manage sln files]

There are three key commands:

  1. dotnet new sln # to create a sln file
  2. dotnet new [template_name] -o [output path] # to create a project
  3. dotnet sln [sln_name] add [csproj_name] # to add a project to a solution
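For example, setting up a hypothetical solution with one web project and one test project looks like this:

dotnet new sln -n HelloWeb
dotnet new web -o src/HelloWeb
dotnet sln HelloWeb.sln add src/HelloWeb/HelloWeb.csproj
dotnet new xunit -o test/HelloWeb.Tests
dotnet sln HelloWeb.sln add test/HelloWeb.Tests/HelloWeb.Tests.csproj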

At this point, you can open the solution in Visual Studio and the files are all in the correct place. Yes, there are three commands to remember, but I find the commands easier than fighting with Studio and remembering when to uncheck the 'create solution folder' option.

If You Must Use Visual Studio

Here are two tips for those who can’t leave VS behind.

First, use the Blank Solution template in the File -> New Project dialog to get started. The Blank Solution allows you to create an empty sln file in exactly the location you need. Once the solution file is in place, you can add projects to the solution, but take care when specifying the path to the new project.

Second, the Solution Explorer in Visual Studio 2017 does have a mode where the explorer will show exactly what is on disk. You can toggle into this "Folder View" by clicking the icon highlighted in the screenshot below. From this view you can create folders that are real folders on disk, but you do lose some of the useful right-click context menus.

[Screenshot: the Folder View toggle in Solution Explorer]

With the folder structure under control, I’ll move on to other artifacts I like to see in a repository next.

Using npm During ASP.NET Core Git Deployments in Azure App Services

Tuesday, September 4, 2018 by K. Scott Allen

If you use npm to manage client-side dependencies for ASP.NET Core applications, and you deploy to Azure App Services using Git, then you need a way to run an npm install during deployment in App Services.

There are a few options available for customizing the Azure build and deployment process. One approach is to modify the deployment script Azure generates for your ASP.NET Core project. I’ve written about this previously in Customizing Node.js Deployments for Azure App Services.

Of course, Node.js deployments in Azure automatically run an npm install, so the previous example showed how to add Webpack into the build, but you can follow the same process to customize the build for an ASP.NET Core project.
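If you take the custom script approach, remember that a .deployment file in the root of the repository is what tells Kudu to run your script instead of the generated one:

[config]
command = deploy.cmd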

You can also customize your build and publish operations by adding targets to the project’s .csproj file. For example:

<Target Condition="'$(DEPLOYMENT_TEMP)' != ''"
        Name="NpmInstall" 
        AfterTargets="Publish">
  <Exec Command="npm install --production"  
        WorkingDirectory="$(DEPLOYMENT_TEMP)">
  </Exec>
</Target>

The above Target will execute npm install after a dotnet publish operation in the DEPLOYMENT_TEMP folder. DEPLOYMENT_TEMP is one of the environment variables defined by Kudu in Azure.

Counting Array Entries in a Cosmos DB Document

Thursday, August 30, 2018 by K. Scott Allen

I’ve been trying to figure out the most efficient approach to counting matches in a Cosmos DB array, without redesigning collections and documents.

To explain the scenario, imagine a document based on the following class.

class Patient
{
    [JsonProperty(PropertyName = "id")]
    public string Id { get; set; }

    public int[] Attributes { get; set; }   
} 

I need a query to return patient documents with a count of how many values in the Attributes property match the values in an array I provide as a parameter.

That’s very abstract, so imagine the numbers in the array each represent a patient’s favorite food. 1 is pasta, 2 is beef, 3 is pork, 4 is chicken, 5 is eggplant, 6 is cauliflower, and so on. An Attributes value of [1, 4, 6] means a patient likes pasta, chicken, and cauliflower.
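Serialized into the database, the document for such a patient would look something like this:

{
    "id": "patient-42",
    "Attributes": [1, 4, 6]
}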

Now I need to issue a query to see what patients think of a meal that combines pasta, chicken, and eggplant (a [1, 4, 5]).

Cosmos provides a number of aggregation and array operators, including ARRAY_CONTAINS, but to make multiple queries with dynamic parameter values, I thought a user-defined function might be easier.

In Cosmos, we implement UDFs as JavaScript functions. Here’s a UDF that takes two arrays and counts the number of items in the arrays that intersect, or match, so intersectionCount([2,5], [1,2,3,5,7,10]) returns 2.

function intersectionCount(array1, array2) {
    var count = array1.reduce((accumulator, value) => {
        if (array2.indexOf(value) > -1) {
            return accumulator + 1;
        }
        return accumulator;
    }, 0);
    return count;
}
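The queries below assume the UDF has been registered with the collection. With the Microsoft.Azure.DocumentDB SDK current at the time of writing, registration might look like the following sketch, where the endpoint, key, database name, and collection name are placeholders:

var client = new DocumentClient(new Uri(endpoint), authKey);
var collectionUri = UriFactory.CreateDocumentCollectionUri("ClinicDb", "Patients");

// register the JavaScript function so queries can call udf.intersectionCount
await client.CreateUserDefinedFunctionAsync(collectionUri, new UserDefinedFunction
{
    Id = "intersectionCount",
    Body = File.ReadAllText("intersectionCount.js")
});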

One way to use the UDF is to query the collection and return the count of matches with each document.

SELECT p, udf.intersectionCount([4,5], p.Attributes) 
FROM Patients p

I can also use the UDF in a WHERE clause.

SELECT *
FROM Patients p
WHERE udf.intersectionCount([1,3], p.Attributes) > 1

The UDF makes the queries easy, but might not be the best approach for performance. You’ll need to evaluate the impact of this approach using your own data and application behavior.

.NET Core Opinion #1 - Structuring a Repository

Tuesday, August 28, 2018 by K. Scott Allen

There are numerous benefits to the open source nature of .NET Core. One specific benefit is the ability to look at how teams organize folders, projects, and files. You’ll see common conventions in the layout across the official .NET Core and ASP.NET Core repositories.

Here’s a typical layout:

.
|-- src/
|-- test/
|-- <solution>.sln 

Applying conventions across multiple repositories makes it easier for developers to move between repositories. The first three conventions I look for in projects I work on are:

  1. A src folder, where all projects will live in sub-folders.
  2. A test folder, where all unit test projects will live in sub-folders.
  3. A .sln file in the root of the repository (for Visual Studio environments).

Having a VS solution file in the root makes it easy for VS developers to clone a repo and open the file to get started. I'll also point out that these repository layout conventions existed in other ecosystems long before .NET Core came along.

In upcoming posts I’ll share some additional folders I like to see in every repository.