
await the async Letdown

Monday, March 4, 2019 by K. Scott Allen

Microsoft added async features to the C# language with more than the usual fanfare. We were told async and await would fundamentally change how .NET developers write software, and future development would be async by default.

After awaiting the future for 7 years, I still brace myself anytime I start working with an async codebase. It is not a question of if, but a question of when. When will async let me down and give me a headache?

The headaches usually fall into one of these categories.

Old Code Is Still Sync Code

Last fall I was working on a project where I needed to execute code during a DocumentProcessed event. Inside the event, I could only use async APIs to read from one stream and write into a second stream. The delegate for the event looked like:

public delegate void ProcessDocumentDelegate(MarkdownDocument document);

If you’ve worked with async in C#, you’ll know that the void return type kills asynchrony. The event handler cannot return a Task, so there is nothing the method can return to encapsulate the work in progress.

The bigger problem here is that even if the event handler could return a Task, the code raising the event is old code with no knowledge of Task objects. I could not allow control to leave the event handler until my async work completed. Thus, I was facing the dreaded "sync over async" obstacle.

There is no getting around or going over this obstacle without feeling dirty. You have to hope you are writing code in a .NET Core console application, where you can hold your nose and use .Result without fear of deadlocking. If the code is intended for a library with the possibility of execution in different environments, then the saying "abandon hope, all ye who enter here" comes to mind.
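For the record, a rough sketch of what the dirty workaround looks like, with ProcessDocumentAsync standing in as a hypothetical method for the real stream work:

void OnDocumentProcessed(MarkdownDocument document)
{
    // GetAwaiter().GetResult() blocks the current thread until the async
    // work completes. In a .NET Core console app there is no
    // SynchronizationContext, so this will not deadlock, but the same
    // line inside a library can hang under ASP.NET or a UI framework.
    ProcessDocumentAsync(document).GetAwaiter().GetResult();
}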

New Code Is Sometimes Still Sync Code

When working with an old code base, you can assume you’ll run into problems where async code needs to interact with sync code. But the situation can happen in new code, and with new frameworks, too. For example, consider the ConfigureServices method of ASP.NET Core.

public void ConfigureServices(IServiceCollection services)
{
    // ...
}

There are many reasons why you might need async method calls inside ConfigureServices. You might need to obtain an access token or fetch a key over HTTPS from a service like Key Vault. Fortunately, there are a few different solutions for this scenario, and all of them move the async calls out of ConfigureServices.

The easiest way out is to hope you are using a library designed like the KeyVault library, which moves async code into a callback, and invokes the callback later in an async context.

AuthenticationCallback callback = async (authority, resource, scope) =>
{
    // ...

    var authResult = await authContext.AcquireTokenAsync(resource, credential);
    return authResult.AccessToken;
};
 
var client = new KeyVaultClient(callback);

Another approach is to move the code into Program.cs, where we finally (as of C# 7.1) have an async Main method to start the process. You can also use one of the approaches described in Andrew Lock’s three-part series on Running async tasks on app startup in ASP.NET Core.
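As a sketch of the Program.cs approach (assuming the standard ASP.NET Core 2.x template, where CreateWebHostBuilder is generated for you, and InitializeAsync is a hypothetical method holding your async startup work):

public static async Task Main(string[] args)
{
    var host = CreateWebHostBuilder(args).Build();

    // Await async initialization here, before the server starts
    // handling requests (fetch secrets, warm caches, and so on).
    await InitializeAsync(host.Services);

    await host.RunAsync();
}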

The Task API is a Minefield

The Task class first appeared in .NET 4.0, nearly ten years ago, as part of the Task Parallel Library. Task was an abstraction with a focus on parallelism, not asynchrony. Because of this early focus, the Task API has morphed and changed over the years as Microsoft tries to push developers into the pit of async success. Unfortunately, the Task abstraction has left behind a trail of public APIs and code samples that funnel innocent developers into a pit of weeping and despair.

Example: which of the following can execute compute-bound work independently for several minutes? (Choose all that apply.)

new Task(work);

Task.Run(work);

Task.Factory.StartNew(work);

Task.Factory.StartNew(work).ConfigureAwait(false);

Task.Factory.StartNew(work, TaskCreationOptions.LongRunning);

Answer: all of the above, but some of them work better than others, depending on your context.
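A few footnotes to that answer are worth remembering. A task from new Task(work) is a cold task and executes nothing until someone calls Start, while the LongRunning option hints the default scheduler to use a dedicated thread instead of tying up the thread pool for minutes. A sketch, where Crunch is a hypothetical compute-bound method:

// Fine for shorter compute-bound work: queue it to the thread pool.
var pooled = Task.Run(() => Crunch());

// For work that runs for minutes, hint the scheduler to create a
// dedicated thread instead of starving the pool.
var dedicated = Task.Factory.StartNew(
    () => Crunch(),
    TaskCreationOptions.LongRunning);

// A cold task does nothing until Start is called - easy to forget.
var cold = new Task(() => Crunch());
cold.Start();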

Good Code Turns Bad

Another disappointing feature of async is how good code can turn bad when using the code in an async context. For example, what sort of problem can the following code create?

using (var streamWriter = new StreamWriter(context.Response.Body))
{
    await streamWriter.WriteAsync("Hello World");
}

Answer: in a system trying to keep threads as busy as possible, the above code blocks a thread when flushing the writer during Dispose. Thanks to David Fowler’s Async Guidance for pointing out this problem, and other subtleties.
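The same guidance suggests one way to mitigate the blocking flush; a minimal sketch: flush asynchronously before disposal, so the synchronous flush inside Dispose has nothing left to write.

using (var streamWriter = new StreamWriter(context.Response.Body))
{
    await streamWriter.WriteAsync("Hello World");

    // Drain the buffer asynchronously so the synchronous flush
    // inside Dispose becomes a no-op.
    await streamWriter.FlushAsync();
}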

Awaiting the End

After all these years with tasks and async, there are still too many traps to catch developers. There is no single pattern to follow for common edge cases like sync over async, yet so many places in C# demand synchronous code (constructors, Dispose, and iteration come to mind). Yes, new language features, like async iterators, might shorten this list, but I’m not convinced the pitfalls will disappear. I can only hope we don’t end up living with workarounds sprinkled all through our code, like we do with the ConfigureAwait disaster.

.NET Core Opinion 9 - Embrace Dependency Injection

Tuesday, February 26, 2019 by K. Scott Allen

Someone asked me why dependency injection is popular in .NET Core. They told me DI makes code harder to follow because you never know what classes and objects the app will use unless you run with a debugger.

The argument that DI makes software harder to understand has been around for a long time, because there is some truth to it. However, if you want to build flexible, testable, decoupled classes in C#, then using a container and constructor injection is still the simplest solution.

The alternative is to write code like the ASP.NET MVC AccountController (not the .NET Core controller, but the MVC 4 and 5 controllers). If you've worked with the framework over the years, you might remember a time when the project scaffolding gave us the following code.

public AccountController()
    : this(new UserManager<ApplicationUser>(
        new UserStore<ApplicationUser>(
            new ApplicationDbContext())))
{
}

public AccountController(UserManager<ApplicationUser> userManager)
{
    UserManager = userManager;
}

public UserManager<ApplicationUser> UserManager { get; private set; }

The two different constructors do provide some flexibility. In a unit test, you can pass in a test double as a UserManager, but when the application is live the default constructor combines a DbContext with a UserStore to provide a production implementation.

The problem is, the production implementation becomes hard-coded into the default constructor. What if you want to wrap the UserStore with a caching or logging component? What if you want to use a non-default connection string for the DbContext? Then you need to scour the entire code base to find all the dependencies hardcoded with new.

Later versions of the scaffolding tried to improve the situation by centralizing dependency registration. The following is code from today's Startup.Auth.cs. Notice how the method is similar to ConfigureServices in ASP.NET Core.

public void ConfigureAuth(IAppBuilder app)
{
    // Configure the db context, user manager and 
    // signin manager to use a single instance per request
    app.CreatePerOwinContext(ApplicationDbContext.Create);
    app.CreatePerOwinContext<ApplicationUserManager>(ApplicationUserManager.Create);
    app.CreatePerOwinContext<ApplicationSignInManager>(ApplicationSignInManager.Create);

    // ...

}

While the central registration code is an improvement, the non-Core ASP.NET framework does not offer DI as a native service. The application needs to manually resolve dependencies via an OwinContext reference. Now, the AccountController looks like:

public AccountController()
{
}

public AccountController(ApplicationUserManager userManager)
{
    UserManager = userManager;
}

public ApplicationUserManager UserManager
{
    get
    {
        return _userManager ?? HttpContext.GetOwinContext().GetUserManager<ApplicationUserManager>();
    }
    private set
    {
        _userManager = value;
    }
}

The problem is, every dependency requires a developer to write a property and follow the service locator anti-pattern. So, while the indirection of DI in ASP.NET Core does have some downsides, at least DI doesn’t add more code to a project. In fact, an AccountController in ASP.NET Core has a simpler setup.

public class AccountController
{
    public AccountController(ApplicationUserManager userManager)
    {
        UserManager = userManager;
    }

    // …

}
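The registration that makes this constructor injection work happens once, in ConfigureServices. A minimal sketch, with ApplicationUserManager standing in for whatever service the controller needs (the exact registration will depend on your identity setup):

public void ConfigureServices(IServiceCollection services)
{
    // One place to swap implementations, add decorators, or change
    // lifetimes - no need to scour the code base for hardcoded "new".
    services.AddScoped<ApplicationUserManager>();
    services.AddMvc();
}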

As always, software is about tradeoffs. If you want the flexibility of testable classes, go all in with dependency injection. The alternative is to still face uncertainties from indirection, but in a code base that is larger and harder to maintain.

Travelogue SDD London

Monday, February 25, 2019 by K. Scott Allen

My trip to the Software Design and Development conference came only a few days after returning home from NDC Minnesota, which means I should have written this post 6 months ago. Life took some unexpected turns, so better late than never.

History

My first SDD was over 10 years ago. The opportunity came about when the Pluralsight founders opted to step out of the conference circuit and encouraged me and others to step in. Back then the conference ran under a different name, but the organizer and the feel of the conference haven’t changed. Both remain favorites of mine.

On all my trips to London I’ve always arrived on an overnight flight from Washington D.C. For this trip I took a daytime flight, leaving Washington at 9 am and reaching a dark and rainy Heathrow at 9 pm. I was bored with overnight flights into Europe and hoped the shift would allow me a quicker adjustment to the time change (it did).

I’ve always found London to be comfortable and familiar. I grew up in an old house by American standards, in an old town and near the older east coast cities. In the black cab from Paddington station, at night and in the rain, the Georgian architecture of London made me feel like I was riding through Northwest D.C. The terraced housing passed by like row houses in Baltimore. The smell of old wood near the river, and the Sunday roast. These are childhood experiences. London is closer to home than Chicago or Seattle.

The Streets of London

The Sunday Roast

The Conference Experience

If I’m ever in London when a cyclone hits, I’ll want to be at the Barbican Centre, the usual home for SDD. The brutalist architecture of the surrounding estate places concrete beneath your feet, above your head, and around you on all four sides. Razed to the cellars by bombs during World War II, the area today lives up to the old Latin meaning of the word barbican, which implies a residence that is “well-fortified”.

View from the Barbican

Residences at the Barbican

The first day of SDD for me was a C# workshop. It’s been a long time since I’ve taught a pure language workshop, and the experience was wonderful. In recent years, my workshops revolved around Angular. In December of 2017, after running an Angular workshop at NDC, I made the decision to escape from the asylum. I didn’t want to work with Angular, and I was tired of teaching students how to shave yaks. I especially didn’t want to shave my own yaks only to hear them bleat WARN deprecated with breaking changes the following week. In this workshop, I felt rewarded by teaching topics with significance in software development, and showed how to use patterns with staying power.

My final session at SDD was one of the most enjoyable sessions I’ve presented in a few years. I was able to open an editor and vamp on a .NET Core web app for 90 minutes. This type of presentation doesn’t work as a keynote, or in front of a huge audience, but on this day I had the perfect room and time slot.

The Culinary Experience

Before I arrived in London, I knew I would eat well during the week. Brian Randall was also speaking at the conference. No matter where I am in the world, I can call Brian, tell him what city I’m in, and he’ll have restaurant recommendations. When I’m in the same city as Brian, the culinary adventures are fantastic. Highlights from the past include Vivek Singh’s Cinnamon Club (in London), Bobby Flay’s Mesa Grill (in New York City at the time), and a couple dozen other fine restaurants over the years.

The SDD organizer, Nick Payne, is also a foodie. Nick organizes a speaker’s dinner every year, and not only does the restaurant exceed expectations, but the company and conversation do, too. This year’s dinner was at Rök, a restaurant with a rural Nordic influence in the décor, as well as the food.

This year’s highlight, though, was the Duck and Waffle. There are only three buildings in all of London that can look down at The Gherkin, and the Duck and Waffle is at the top of one of those buildings. Brian and I had breakfast there one morning, and yes, the duck leg confit was tasty. The views, however, were breathtaking. There's a chill of insignificance that settles over me when I'm looking over the sprawl of 8 million people. Fortunately, breakfast day was a rare sunny morning in London. The warmth of the sun was a good counterbalance.

View from the Duck and Waffle

Duck leg, duck egg, and a waffle

The Phallic Gherkin

As much as I enjoy London, though, I had to cut this trip short. Wall Street beckoned.

Up next in this travel series: Pluralsight IPO day.

.NET Core Opinion 7 – Startup Responsibilities

Thursday, February 14, 2019 by K. Scott Allen

Over the years I’ve noticed that application startup code tends to attract smaller bits of code in the same way that a protostar accretes cosmic material until reaching the point where nuclear fusion begins. I’ve seen this happen in the main function of C programs, and (back when we never had enough HRESULTs to hold the HINSTANCEs of our HWINDOWs) in the WinMain function of C++ programs. I’ve also seen this happen inside of global.asa in classic ASP, and in global.asax.cs for ASP.NET. It’s as if we say to ourselves, "I only have two new lines of code to execute when the program starts up, so what could it hurt to jam these two lines in the middle of the 527 method calls we already have in the startup function?"

This post is a plea to avoid nuclear fusion in the Startup and Program files of ASP.NET Core.

Starting Up

There is a lengthy list of startup tasks for modern server applications. Warm up the cache, create the connection pool, configure the IoC container, instantiate the logging sinks, and all of this happens before you get to the actual business of application startup. In ASP.NET Core, I used to see most of this logic end up inside of Startup.cs. Some of this code is moving over to Program.cs as developers start to recognize Program.Main as a usable entry point for a web application.

The next few opinionated posts will discuss strategies for organizing startup and entry code, and look at approaches you can use for clean startup code.

To get started, let’s talk about the Startup class in ASP.NET Core. If you believe every class should have a single responsibility, then it is easy to think the Startup class should manage all startup tasks. But, as I’ve already pointed out, there is a lot of work to do when starting up today’s apps. The Startup class, despite its name, should not take responsibility for any of these tasks.

In Startup, there are three significant methods with specific, limited assignments:

  • The constructor, where you can inject dependencies to use in the other methods.

  • ConfigureServices, where you can set up the service provider for the app.

  • Configure, which should be limited to arranging middleware components in the correct order.

public class Startup
{
    public Startup(IConfiguration configuration)
    {
         // ...
    }

    public void ConfigureServices(IServiceCollection services)
    {
         // ...
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
         // ...
    }
}

Of course, you can also have environment specific methods, like ConfigureDevelopment and ConfigureProduction, but the rules remain the same. Initialization tasks like warming the cache don’t need to live in Startup, which already has enough to do. Where should these tasks go? That’s the topic for the next post.

10 Years of Workshop Material Added to the Creative Commons

Tuesday, February 12, 2019 by K. Scott Allen

I’ve kept most of my workshop and conference materials in a private GitHub repository for years. I recently made the repository public and added a CC-BY-4.0 license. The material includes slides and hands-on labs. Some of the workshops are old (you’ll find some WinJS material inside [shudder]), but many have aged well – C#, LINQ, and TDD are three workshops I could open and teach today. Other material, like the ASP.NET Core workshop, is recently updated. Actually, I think the ASP.NET Core material is the most practical and value-focused technology workshop I’ve ever put together.

Why Now?

Ten years ago, Pluralsight decided to stop instructor-led training and go 100% into video courses. As an author, I was happy to make video courses, but I also wanted to continue meeting students in face-to-face workshops. I still prefer workshops to conference sessions. I started making my own workshop material and ran classes under my own name and brand. Over the last 10 years I’ve been fortunate to work with remarkable teams from Mountain View in Silicon Valley, to Hyderabad, India, and many places in between. Four years ago on this day, actually, I was in Rotkreuz, Switzerland, where I snapped the following picture on the way to lunch – one of numerous terrific meals I’ve shared with students over the years.

Road to Where?

The memory of being driven through the snowy forests of Switzerland is enough to spike my wanderlust, which for several reasons I now need to temper. I still enjoy the workshops and conferences, and seeing good friends, but I don’t need the stress and repetition of traveling and performing more than a few times a year. If I see a place I’d like to visit, I’m in the privileged position of being able to go without needing work as an excuse. For that, I’m thankful that Pluralsight decided to go all-in with video training.

I don’t advertise my workshops or publicize the fact that I offer training for sale. I still receive regular requests for private training, and again I am lucky to choose where I want to go. Conferences still ask me for workshops, but conferences can also be political and finicky (thanks to Tibi and Nick P for being notable exceptions).

What I’m saying is that I’m not using my workshop material enough to justify keeping the material private. Besides, training on some of my favorite topics is a commodity these days. Everyone does ASP.NET Core training, for example. And, if there is one lesson I’ve learned from years of training in person and on video, it’s that the training materials are not the secret sauce that can make for a great workshop. The secret sauce is the teacher.

Maybe, someone else can find something useful to do with this stuff.

Using Environment Variables in Azure DevOps Pipelines

Friday, February 8, 2019 by K. Scott Allen

Imagine you have a unit test that depends on an environment variable.

[Fact]
public void CanGetMyVariable()
{
    var expected = "This is a test";
    var actual = Environment.GetEnvironmentVariable("MYVARIABLE");

    Assert.Equal(expected, actual);
}

Of course, the dependency might not be so explicit. You could have a test that calls into code, which calls some other code, and the other code needs an environment variable. Or maybe you have a script or tool that needs an environment variable. The question is: how do you set up environment variables in a DevOps pipeline?

The answer is easy - when a pipeline executes, Azure will place all pipeline variables into environment variables, so any tools, scripts, tasks, or processes you run as part of the build can access parameters through the environment.

In other words, I can make the above test pass by defining a variable in my pipeline YAML definition:

resources:
- repo: self

variables:
  MyVariable: 'This is a test'

pool:
  vmImage: vs2017-win2016

steps:
- task: DotNetCoreCLI@2
  displayName: Test
  inputs:
    command: test
    projects: '**/*[Tt]ests/*.csproj'
    arguments: '--configuration $(BuildConfiguration)'

Or in the DevOps pipeline GUI:

Setting DevOps Pipeline Variable

Also included are the built-in variables, like Build.BuildNumber and System.AccessToken. Just be aware that the variable names you use to reference these parameters can depend on the context. See Build Variables for more details.
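One detail worth knowing when reading these variables from code: when a variable lands in the environment, the name is upper-cased and dots become underscores. So, to read Build.BuildNumber from inside a test or tool:

// Build.BuildNumber arrives in the environment as BUILD_BUILDNUMBER.
var buildNumber = Environment.GetEnvironmentVariable("BUILD_BUILDNUMBER");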

.NET Core Opinion #8 – How to Use Azure DevOps Pipelines

Thursday, February 7, 2019 by K. Scott Allen

In a previous post I said to be wary of GUI build tools. In this episode of .NET Core Opinions, let me show you a "configuration as code" approach to building software using Azure DevOps.

Instead of the trivial one-project demo you’ll see in every 5-minute DevOps presentation, let’s build a system that consists of:

  • An ASP.NET Core project that will run as a web application

  • A Go console project that will run once a week as a web job

  • An Azure Functions project

Let’s also add some constraints to the scenario (and address some common questions I receive).

  • We need to deploy the web and Go applications into the same App Service.

  • We need to deploy the functions project into a second App Service that runs on a consumption plan.

Using YAML

The first step in using YAML for builds is to select the YAML option when creating a new pipeline instead of selecting from the built-in templates that give you a graphical build definition. I would post more screen shots of this process, but honestly, the UI will most likely iterate and change before I finish this post. Look for “YAML” in the pipeline options, then click a button with affirmative text.

Configuration As Code

I should mention that the graphical build definitions are still valuable, even though you should avoid using them to define your actual build pipelines. You can fiddle with a graphical build, and at any time click on the "View YAML" link at the pipeline or individual task level.

GUI to YAML

I found this toggle view useful for migrating to YAML pipelines, because I could look at a working build and see what YAML I needed to replicate the process. In other words, migrating an existing pipeline to YAML is easy.

Once you get a feel for how YAML pipelines work, the docs, particularly the YAML snippets in the tasks docs, give you everything you need. Also, there is an extension for VS Code that provides syntax highlighting and IntelliSense for Pipelines YAML.

The YAML you’ll create will describe all the repositories, containers, triggers, jobs, and steps needed for a build. You can check the file into your source code repository, then version and diff your builds!

A pipeline defined by YAML

Building .NET Core Projects

The essential building blocks for a pipeline are tasks. These are the same tasks you’d arrange in a list when defining a build using the GUI tools. In YAML, each task consists of the task name and version, then the task parameters. For example, to build all .NET Core projects across all folders in Release mode, run the DotNetCoreCLI task (currently version 2), which will run dotnet with a default command parameter of build.

- task: DotNetCoreCLI@2
  displayName: 'Build All .NET Core Projects'
  inputs:
    projects: '**/*.csproj'
    arguments: '-c Release'

Publishing .NET Core Projects

Ultimately, you want to run dotnet publish on ASP.NET Core projects. In YAML, the task looks like:

- task: DotNetCoreCLI@2
  displayName: 'Publish WebApp'
  inputs:
    command: publish
    arguments: '-c Release'
    zipAfterPublish: false

Notice the zipAfterPublish setting is false. In builds where a repo contains various projects intended for multiple destinations, I prefer to move files around in staging areas and then create zip files in explicit steps. We’ll see those steps later*.

Building Go Projects

I’m throwing in the Go steps because I have a Go project in the mix, but I also want to demonstrate how Azure Pipelines and Azure DevOps are platform agnostic. The platform wants to provide DevOps and continuous delivery for every platform and every language. Building a Go project was easy with the built-in Go task.

- task: Go@0
  displayName: 'Install Go Deps'
  inputs:
    arguments: '-d'
    command: get
    workingDirectory: '$(System.DefaultWorkingDirectory)\cmd\goapp'

- task: Go@0
  displayName: 'go build'
  inputs:
    command: build
    arguments: '-o cmd\goapp\app.exe cmd\goapp\main.go'

The first step is go get, which is like using dotnet restore in .NET Core. The second step is building a native executable from the entry point of the Go app in a file named main.go.

Building Azure Functions

If you want to use Azure Functions and the C# language, then I believe Functions 2.0 is the only way to go. The 1.0 runtime works, but 1.0 is not as mature when it comes to building, testing, and deploying code. Building a 2.0 project (dotnet build) places everything you need to deploy the functions into the output folder. There is no dotnet publish step needed.

Preparing for Publishing the Artifacts

Once all the projects are built, the assemblies and executables associated with each project are on the file system. This is the point where I like to start moving files around to simplify the steps where the pipeline creates release artifacts. Release artifacts are the deployable bits, and it makes sense to create multiple artifacts if a system needs to deploy to multiple resources. Based on the requirements I listed at the beginning of the post, we are going to need the build pipeline to produce two artifacts, like so:

Three projects go in, two artifacts come out

The first step is getting the files into the proper structure for artifact 1, which is the web app and the Go application combined. The Go application will execute on a schedule as an Azure Web Job. It is interesting how many people have asked me over the years how to deploy a web job with a web application. The key to the answer is to understand that Azure uses simple conventions to identify and execute web jobs that live inside an App Service. You don’t need to use the Azure portal to set up a web job, or find an obscure command on the CLI. You only need to copy the Web Job executable into the right folder underneath the web application.

- task: CopyFiles@2
  displayName: 'Copy Go App to WebJob Location'
  inputs:
    SourceFolder: cmd\goapp
    TargetFolder: WebApp\bin\Release\netcoreapp2.1\publish\App_Data\jobs\triggered\app

Placing the Go .exe file underneath App_Data\jobs\triggered\app, where app is whatever name you want for the job, is enough for Azure to find the web job. Inside this folder, a settings.job file can provide a cron expression to tell Azure when to run the job. In this case, 8 am every Monday:

{"schedule": "0 0 8 * * MON"}

Publishing the Artifacts

The final steps consist of zipping up the files and folders containing the project outputs, and publishing the two zip files as artifacts. Remember, one artifact contains the published web app output and the web job, while the second artifact consists of the build output from the Azure Functions project. The ArchiveFiles and PublishBuildArtifacts tasks in Azure do all the work.

- task: ArchiveFiles@2
  displayName: 'Archive WebApp'
  inputs:
    rootFolderOrFile: WebApp\bin\Release\netcoreapp2.1\publish
    includeRootFolder: false
    archiveFile: WebApp\bin\Release\netcoreapp2.1\WebApp.zip

- task: ArchiveFiles@2
  displayName: 'Archive Function App'
  inputs:
    rootFolderOrFile: FunctionApp\bin\Release\netcoreapp2.1
    includeRootFolder: false
    archiveFile: FunctionApp\bin\Release\netcoreapp2.1\FunctionApp.zip

- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact: WebApp'
  inputs:
    PathtoPublish: WebApp\bin\Release\netcoreapp2.1\WebApp.zip
    ArtifactName: WebApp

- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact: FunctionApp'
  inputs:
    PathtoPublish: FunctionApp\bin\Release\netcoreapp2.1\FunctionApp.zip
    ArtifactName: FunctionApp

The Release Pipeline

Currently, YAML is not available for building a release pipeline, but the roadmap says the feature is coming soon. However, since we arranged the artifacts to simplify the release pipeline, all you should need to do is feed the artifacts into Deploy Azure App Service tasks. Remember, function projects, even on a consumption plan, deploy just like a web application, but like web jobs, they use conventions around naming and directory structure to indicate the bits are for a function app. The build output of the function project will already have the right files and directories in place.

A release pipeline (currently GUI)

Summary

Having build pipelines defined in a textual format makes the pipeline easier to modify and version changes over time.

Unfortunately, this YAML approach only works in Azure. There is no support for running, testing, or troubleshooting a YAML build locally or in development. For systems with any amount of complexity, you will be in better shape if you automate the build using command line scripts, or a build system like Cake. Then you can run your builds both locally and in the cloud. Remember, your developer builds need to be every bit as consistent and well defined as your production builds if you want a productive, happy team.

* Note that I’ve simplified the YAML in the code samples by removing actual project names and “shortening” the directory structure for the project.