
Three Tips for Console Applications in .NET Core

Thursday, August 16, 2018 by K. Scott Allen

I worked on a .NET Core console application last week, and here are a few tips I want to pass along.

Arguments and Help Text Are Easy

The McMaster.Extensions.CommandLineUtils package takes care of parsing command line arguments. You can describe the expected parameters using C# attributes, or using a builder API. I prefer the attribute approach, shown here:

[Option(Description = "Path to the conversion folder", ShortName = "p")]
[DirectoryExists]
public string Path { get; protected set; }

[Argument(0, Description = "convert | publish | clean")]
[AllowedValues("convert", "publish", "clean")]
public string Action { get; set; }

The basic concepts are Options, Arguments, and Commands. The McMaster package, which was forked from an ASP.NET Core related repository, takes care of populating properties with values the user provides in the command line arguments, as well as displaying help text. You can read more about the behavior in the docs.

Running the app and asking for help provides some nicely formatted documentation.

CLI Help Text Made Easy

Use -- to Delimit Arguments

If you are using dotnet to execute an application, and the target application needs parameters, a -- will delimit the dotnet parameters from the application parameters. In other words, to pass a -p parameter to the application without dotnet thinking you are passing a project path, use the following:

dotnet run myproject -- -p ./folder

Dependency Injection for the Console

The ServiceProvider we’ve learned to use in ASP.NET Core is also available in console applications. Here’s the code to configure services and launch an application that can accept the configured services in a constructor.

static void Main(string[] args)
{
    var provider = ConfigureServices();

    var app = new CommandLineApplication<Application>();
    app.Conventions
        .UseDefaultConventions()
        .UseConstructorInjection(provider);

    app.Execute(args);
}

public static ServiceProvider ConfigureServices()
{
    var services = new ServiceCollection();

    services.AddLogging(c => c.AddConsole());
    services.AddSingleton<IFileSystem, FileSystem>();
    services.AddSingleton<IMarkdownToHtml, MarkdownToHtml>();

    return services.BuildServiceProvider();
}

In this code, the class Application needs an OnExecute method. I like to separate the Program class (with the Main entry-point method) from the Application class that has Options, Arguments, and OnExecute.
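Putting the pieces together, a minimal Application class might look like the following sketch. The IMarkdownToHtml dependency matches the service registered in ConfigureServices above; the body of OnExecute is a placeholder.

```csharp
using McMaster.Extensions.CommandLineUtils;

public class Application
{
    private readonly IMarkdownToHtml _converter;

    // Services registered in ConfigureServices arrive via constructor injection
    public Application(IMarkdownToHtml converter)
    {
        _converter = converter;
    }

    [Option(Description = "Path to the conversion folder", ShortName = "p")]
    [DirectoryExists]
    public string Path { get; protected set; }

    [Argument(0, Description = "convert | publish | clean")]
    [AllowedValues("convert", "publish", "clean")]
    public string Action { get; set; }

    // The McMaster package invokes OnExecute after parsing succeeds;
    // the return value becomes the process exit code.
    public int OnExecute()
    {
        // ... dispatch on Action here ...
        return 0;
    }
}
```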

Moving APIs to .NET Core

Monday, August 13, 2018 by K. Scott Allen

I’ve been porting a service from .NET to .NET Core. Part of the work is re-writing the Azure Service Bus code for .NET Core. The original Service Bus API lives in the NuGet package WindowsAzure.ServiceBus, but that package needs the full .NET framework.

The newer .NET Standard package is Microsoft.Azure.ServiceBus.

The idea of this post is to look at the changes in the APIs with a critical eye. There are a few things we can learn about the new world of .NET Core.

Statics Are Frowned Upon

The old way to construct a QueueClient was to use a static method on the QueueClient class itself.

var connectionString = "Endpoint://..";
var client = QueueClient.CreateFromConnectionString(connectionString);

The new style uses new with a constructor.

var connectionString = "Endpoint://..";
var client = new QueueClient(connectionString, "[QueueName]");

ASP.NET Core, with its service provider and dependency injection built-in, avoids APIs that use static types and static members. There is no more HttpContext.Current, for example.

Avoiding statics is good, but I’ll make an exception for using static methods instead of constructors in some situations.

When a type like QueueClient has several overloads for the constructor, each for a different purpose, the overloads become disjointed and confusing. Constructors are nameless methods, while static factory methods provide a name and a context for how an object comes to life. In other words, QueueClient.CreateFromConnectionString is easier to read and easier to find compared to examining parameters in the various overloads for the QueueClient constructor.
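As a generic illustration of the readability argument (MessageClient is a hypothetical type, not the real Service Bus API), compare a bare constructor with named factory methods:

```csharp
// Hypothetical type for illustration only
public class MessageClient
{
    private MessageClient() { }

    // The factory name documents what the string argument means...
    public static MessageClient CreateFromConnectionString(string connectionString)
        => new MessageClient();

    // ...and a second factory disambiguates a different creation path
    public static MessageClient CreateFromNamespace(string ns, string queueName)
        => new MessageClient();
}

public static class Demo
{
    public static void Main()
    {
        // With constructor overloads, only the parameter types hint at intent.
        // With factories, intent is in the call site itself:
        var a = MessageClient.CreateFromConnectionString("Endpoint=sb://...");
        var b = MessageClient.CreateFromNamespace("myns", "myqueue");
    }
}
```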

The New World is async Only

The old API offered both synchronous and asynchronous operations for sending and receiving messages. The new API is async only, which is perfectly acceptable in today's world where even the Main method can be async.

Binary Serialization is Still Tricky

The old queue client offered a BrokeredMessage type to encapsulate messages on the bus.

var message = new BrokeredMessage("Hello!");
client.Send(message);

Behind the scenes, BrokeredMessage would use a DataContractBinarySerializer to convert the payload into bytes. Originally there were no plans to offer any type of binary serialization in .NET Core. While binary serialization can offer benefits for type fidelity and performance, binary serialization also comes with compatibility headaches and attack vectors.

Although binary serializers did become available with .NET Core 2.0, you won’t find a BrokeredMessage in the new API. Instead, you must take serialization into your own hands and supply a Message object with an array of bytes. From "Messages, payloads, and serialization":

While this hidden serialization magic is convenient, applications should take explicit control of object serialization and turn their object graphs into streams before including them into a message, and do the reverse on the receiver side. This yields interoperable results.
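Taking explicit control might look like the following sketch, using JSON as the wire format. Order is a hypothetical payload type, and queueClient is assumed to be an already-constructed QueueClient from the new Microsoft.Azure.ServiceBus package.

```csharp
using System.Text;
using Microsoft.Azure.ServiceBus;
using Newtonsoft.Json;

// Sender: serialize the object graph yourself, then wrap the bytes
var order = new Order { Id = 42 };
var json = JsonConvert.SerializeObject(order);
var message = new Message(Encoding.UTF8.GetBytes(json));
await queueClient.SendAsync(message);

// Receiver: reverse the process on the Message.Body byte array
var received = JsonConvert.DeserializeObject<Order>(
    Encoding.UTF8.GetString(message.Body));
```

Because both sides agree on plain UTF-8 JSON, a receiver written in any language can consume the message, which is the interoperability the documentation is after.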

Interoperability is good, and the API change certainly pushes developers into the pit of success, which is also a general theme for .NET Core APIs.

Edge versus Chrome - A Quick createElement Benchmark

Wednesday, July 18, 2018 by K. Scott Allen

I’ve been mixing up my browser usage over the last year to give MS Edge a closer look. Some pages are noticeably slower in Edge. Looking in the developer tools, the slow pages have thousands of calls to createElement and createDocumentFragment, so I thought it would be interesting to do some microbenchmarks.


Performance of createElement and createDocumentFragment

With today's stable releases, createElement is twice as fast on Chrome, while createDocumentFragment is an order of magnitude faster. 

7 Tips for Troubleshooting ASP.NET Core Startup Errors

Monday, July 16, 2018 by K. Scott Allen

“An unexpected error occurred” is the least informative error message of all error messages. It is as if cosmic rays have transformed your predictable computing machinery into a white noise generator.

Startup errors with ASP.NET Core don’t provide much information either, at least not in a production environment. Here are 7 tips for understanding and fixing those errors.

1. There are two types of startup errors.

There are unhandled exceptions originating outside of the Startup class, and exceptions from inside of Startup. These two error types can produce different behavior and may require different troubleshooting techniques.

2. ASP.NET Core will handle exceptions from inside the Startup class.

If code in the ConfigureServices or Configure methods throws an exception, the framework will catch the exception and continue execution.

Although the process continues to run after the exception, every incoming request will generate a 500 response with the message “An error occurred while starting the application”.

Two additional pieces of information about this behavior:

- If you want the process to fail in this scenario, call CaptureStartupErrors on the web host builder and pass the value false.

- In a production environment, the “error occurred” message is all the information you’ll see in a web browser. The framework follows the practice of not giving away error details in a response because error details might give an attacker too much information. You can change the environment setting using the environment variable ASPNETCORE_ENVIRONMENT, but see the next two tips first. You don’t have to change the entire environment to see more error details.

3. Set detailedErrors in code to see a stack trace.

The following bit of code allows for detailed error messages, even in production, so use with caution.

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
           .CaptureStartupErrors(true) // the default
           .UseSetting("detailedErrors", "true")
           .UseStartup<Startup>();

4. Alternatively, set the ASPNETCORE_DETAILEDERRORS environment variable.

Set the value to true and you’ll also see a stack trace, even in production, so use with caution.
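For example, on Linux or macOS (on Windows, use set in cmd or $env: in PowerShell):

```shell
# Enable detailed ASP.NET Core startup errors for processes launched
# from this shell session
export ASPNETCORE_DETAILEDERRORS=true
```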

5. Unhandled exceptions outside of the Startup class will kill the process.

Perhaps you have code inside of Program.cs to run schema migrations or perform other initialization tasks which fail, or perhaps the application cannot bind to the desired ports. If you are running behind IIS, this is the scenario where you’ll see a generic 502.5 Process Failure error message.

An ASP.NET Core Startup Error Leads to a 502.5 Process Failure

These types of errors can be a bit more difficult to track down, but the following two tips should help.

6. For IIS, turn on standard output logging in web.config.

If you are already logging with other tools, you might be able to capture the output there, too, but if all else fails, the ASP.NET Core module will write exception information to stdout once the flag is enabled. By setting the flag to true, and creating the output directory, you’ll have a file with exception information and a stack trace inside to help track down the problem.

The following shows the web.config file created by dotnet publish and is typically the config file in use when hosting .NET Core in IIS. The attribute to change is the stdoutLogEnabled flag.

<system.webServer>
  <handlers>
    <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified" />
  </handlers>
  <aspNetCore processPath="dotnet" arguments=".\codes.dll" 
              stdoutLogEnabled="true" stdoutLogFile=".\logs\stdout" />
</system.webServer>

Important: Make sure to create the logging output directory.

Important: Make sure to turn logging off after troubleshooting is complete.

7. Use the dotnet CLI to run the application on your server.

If you have access to the server, it is sometimes easier to go directly to the server and use dotnet to witness the exception in real time. There’s no need to turn on logging or set and unset environment variables. With Azure, as an example, I can go to the Kudu website for an app service, open the debug console, and launch the application like so:

Diagnosing Startup Errors in ASP.NET Core By Running a Published Project

There’s a good chance I’ll be able to witness the exception leading to the 502.5 error and see the stack trace. Keep in mind that with many environments, you might be running in a different security context than the web server process, so there is a chance you won’t see the same behavior.

Summary

Debugging startup errors in ASP.NET Core is a simple case of finding the exception. In many cases, #7 is the simplest approach that doesn’t require code or environment changes.

Customizing Node.js Deployments for Azure App Services

Monday, July 2, 2018 by K. Scott Allen

In my Developing with Node.js on Azure course I show how to set up a Git repository in an Azure App Service.

When you push code into the repository, Azure will use heuristics to figure out the type of application you are pushing into the repository. The outcome of the heuristics will create a deployment script. If Azure decides you are pushing a Node.js application, for example, the deployment script has the following steps inside:

1. Sync files from the Git repo into the web site directory

2. Select the Node version*

3. Run npm install in the web site directory.

After these steps, most Node.js applications are ready to start.

Customization

A common set of questions I hear revolves around how to change the deployment script to add simple additional steps. Perhaps the project needs to run a transpiler or a tool like Webpack before the application can start.

You can write your own script from scratch or copy and modify the script from Azure. I'd suggest starting by looking at the script Azure generates first. Go to the Kudu website for the app service and select "Download deployment script" under the Tools menu.

Download Kudu Deployment script


In the script, near the bottom, is a :Deployment label and the three steps listed above. Here’s what I’ve added to one project’s deployment script to run webpack:

:: 4. run build step
pushd "%DEPLOYMENT_TARGET%"
call :ExecuteCmd !NPM_CMD! run build
IF !ERRORLEVEL! NEQ 0 goto error
popd

This customization doesn’t execute webpack directly; instead, it runs an “npm run build” command. Any commands needed to build the app are encapsulated in a script found in package.json:

"scripts": {
  "serve": "webpack-serve --open --config webpack.config.js",
  "build": "webpack --mode production --config webpack.config.js"
},

One advantage to this approach is the ability to skip installing webpack as a global dependency. Instead, npm will search the node_modules/.bin folder for tools like webpack, grunt, gulp, and tsc. Although you can install tools globally in an app service plan, I tend to avoid global tools when possible and go through package.json scripts instead.

You can now override Azure’s deployment script with your custom script by checking the script into your source code repository.
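For reference, Kudu looks for a .deployment file at the root of the repository to locate a custom script; a minimal version (assuming your script is named deploy.cmd) looks like this:

```
[config]
command = deploy.cmd
```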

* Note the deployment script only uses the 2nd step if the App Service is Windows based. Otherwise the underlying container sets the Node.js version.

New and Updated Azure Course for Node.js Developers

Monday, June 25, 2018 by K. Scott Allen

In addition to the .NET course, I've completely updated my Developing with Node.js on Azure course. In the course we'll build and deploy web applications, work with Azure SQL and Cosmos DB, store files in Azure storage, develop and deploy Azure Functions, and set up a continuous delivery pipeline from VSTS to Azure App Services.


Key Vault and Managed Service Identities

Wednesday, June 13, 2018 by K. Scott Allen

In previous posts we looked at decryption with Azure Key Vault and how to think about the roles of the people and services interacting with Key Vault. In this post I want to call attention to an Azure feature that you can use in combination with Key Vault – the Managed Service Identity (MSI).

MSI helps to solve the Key Vault bootstrapping problem whereby an application needs access to a configuration secret stored outside of Key Vault to access all the secrets inside of Key Vault.

First, here’s the Startup code from the earlier post about decryption. This code needs an application ID and an application secret to access Key Vault.

AuthenticationCallback callback = async (authority,resource,scope) =>
{
    var appId = Configuration["AppId"];
    var appSecret = Configuration["AppSecret"];

    var authContext = new AuthenticationContext(authority);
    var credential = new ClientCredential(appId, appSecret);
    var authResult = await authContext.AcquireTokenAsync(resource, credential);
    return authResult.AccessToken;
};

var client = new KeyVaultClient(callback);

If this code is running in an environment where Azure Managed Service Identity is available, we could enable MSI, install the Microsoft.Azure.Services.AppAuthentication NuGet package, and then replace the code from above with the following.

var tokenProvider = new AzureServiceTokenProvider();
var callback = new AuthenticationCallback(tokenProvider.KeyVaultTokenCallback);
return new KeyVaultClient(callback);

Not only is the code simpler, but we don’t need to explicitly create and manage an application secret that gives the application access to Key Vault. People who talk to me about Key Vault worry about this secret because this is the secret that grants access to all other secrets. With MSI, the master secret isn’t explicitly required by the application. However, a secret is still in play. MSI makes the secret safer and easier to use.

What is MSI?

MSI is a feature of Azure AD available to specific types of services in Azure, including VMs, App Services, and Functions. When you create one of these services you have the option to enable MSI in the portal or with the command line. For an App Service I can enable MSI using:

az webapp assign-identity --resource-group {group} --name {name}

This command will create a managed identity for my app service. If the app service goes away, the managed identity goes away. Azure also takes care of rolling the credentials.

Enable MSI in the Portal

Enabling MSI for a resource also enables a local token endpoint for the resource. In an app service, this endpoint looks like http://127.0.0.1:41068/MSI/token/. This endpoint is where the AzureServiceTokenProvider can go to pick up a token. All the provider needs is a secret. For App Services, both the endpoint and the secret are available as environment variables.
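A minimal sketch of reading those values (MSI_ENDPOINT and MSI_SECRET were the App Service variable names at the time of writing) shows roughly what the token provider consumes:

```csharp
using System;

public static class MsiCheck
{
    public static void Main()
    {
        // Populated by App Services when MSI is enabled; null elsewhere
        var endpoint = Environment.GetEnvironmentVariable("MSI_ENDPOINT");
        var secret = Environment.GetEnvironmentVariable("MSI_SECRET");

        Console.WriteLine($"MSI endpoint configured: {endpoint != null}");
        Console.WriteLine($"MSI secret configured: {secret != null}");
    }
}
```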

Environment variables as listed in Project Kudu

The Magic of the AzureServiceTokenProvider

I like to see how software fails, so I had to run the app using an AzureServiceTokenProvider on my local laptop - far away from MSI environment variables and token endpoints.

To my surprise, the application was able to read a secret in Key Vault.

It turns out that AzureServiceTokenProvider has more than one strategy for obtaining a bearer token. One approach is to use the local MSI endpoint provided by Azure when running in Azure, but another approach is to use the Azure CLI. If you are logged in with the Azure CLI, the token provider can use the equivalent of the following to obtain an access token.

az account get-access-token

A different token provider can obtain a token via Visual Studio.

Summary

More and more services in Azure can now use Azure AD authentication, including Service Bus and, as of May, Azure Storage. Using MSI in combination with the AzureServiceTokenProvider makes Azure AD authentication easier and safer.