
A Custom Renderer Extension for Markdig

Thursday, January 23, 2020 by K. Scott Allen

A couple of years ago I decided to stop using Windows Live Writer for authoring blog posts and build my own publishing tools using markdown and VSCode. Live Writer was a fantastic tool during its heyday, but some features started to feel cumbersome. Adding code to a blog post was one example.

This blog uses SyntaxHighlighter to render code blocks, which requires HTML in a specific format. With WLW, producing that HTML required toggling into HTML mode or using an extension that was no longer supported in the OSS version of WLW.

What I really wanted was to author a post in markdown and use simple code fences to place code into a post.

``` csharp
public void AnOdeToCode()
{

}
```

Simple!

All I'd need is a markdown processor that would allow me to add some custom rendering for code fences.

Markdig Extensions

Markdig is a fast, powerful, CommonMark compliant, extensible Markdown processor for .NET. Thanks to Rick Strahl for bringing the library to my attention. I use Markdig in my tools to transform a markdown file into HTML for posting here on the site.
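For a straightforward transformation with no custom rendering, a couple of lines will do. Here's a minimal sketch; the post.md file name is a placeholder, and the advanced extensions line is an optional choice of mine:

using System.IO;
using Markdig;

var pipeline = new MarkdownPipelineBuilder()
    .UseAdvancedExtensions() // opt into extras like tables and auto-links
    .Build();

var html = Markdown.ToHtml(File.ReadAllText("post.md"), pipeline);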

There are at least a couple of different techniques you can use to write an extension for Markdig. What I needed was an extension point to render SyntaxHighlighter flavored HTML for every code fence in a post. With Markdig, this means adding an HtmlObjectRenderer into the processing pipeline.

using Markdig.Renderers;
using Markdig.Renderers.Html;
using Markdig.Syntax;

public class PreCodeRenderer : HtmlObjectRenderer<CodeBlock>
{
    private readonly CodeBlockRenderer originalCodeBlockRenderer;

    public PreCodeRenderer(CodeBlockRenderer originalCodeBlockRenderer = null)
    {
        this.originalCodeBlockRenderer = originalCodeBlockRenderer ?? new CodeBlockRenderer();
    }

    public bool OutputAttributesOnPre { get; set; }

    protected override void Write(HtmlRenderer renderer, CodeBlock obj)
    {
        renderer.EnsureLine();

        var fencedCodeBlock = obj as FencedCodeBlock;
        if (fencedCodeBlock?.Info != null)
        {
            // Build the pre tag SyntaxHighlighter expects, using the
            // info string from the code fence as the brush name.
            renderer.Write($"<pre class=\"brush: {fencedCodeBlock.Info}; gutter: false; toolbar: false; \">");
            renderer.EnsureLine();
            renderer.WriteLeafRawLines(obj, true, true);
            renderer.WriteLine("</pre>");
        }
        else
        {
            // Indented code blocks have no info string, so fall back
            // to Markdig's default rendering.
            originalCodeBlockRenderer.Write(renderer, obj);
        }
    }
}

Note that the Info property of a FencedCodeBlock will contain the info string, which is commonly used to specify the language of the code (csharp, xml, javascript, plain, go). The renderer builds a pre tag that SyntaxHighlighter will know how to use. The last step, the easy step, is to add PreCodeRenderer into the rendering pipeline before telling Markdig to process your markdown.
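The wiring can look something like the following sketch, which drives an HtmlRenderer directly and swaps the default CodeBlockRenderer for the custom one (post.md is a placeholder name):

using System.IO;
using System.Linq;
using Markdig;
using Markdig.Renderers;
using Markdig.Renderers.Html;

var pipeline = new MarkdownPipelineBuilder().Build();

var writer = new StringWriter();
var renderer = new HtmlRenderer(writer);
pipeline.Setup(renderer);

// Replace the built-in code block renderer with PreCodeRenderer,
// handing over the original so indented code blocks still work.
var defaultRenderer = renderer.ObjectRenderers.OfType<CodeBlockRenderer>().FirstOrDefault();
renderer.ObjectRenderers.Remove(defaultRenderer);
renderer.ObjectRenderers.Add(new PreCodeRenderer(defaultRenderer));

var document = Markdown.Parse(File.ReadAllText("post.md"), pipeline);
renderer.Render(document);
writer.Flush();
var html = writer.ToString();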

The C# Interactive Window

Tuesday, January 21, 2020 by K. Scott Allen

The C# Interactive window in VS is not the best interactive C# experience (LINQPad still has that honor, I believe), but it is a quick and convenient option for trying out a few lines of code.

C# Interactive

Avoiding the Debugger with Better Logging

Sunday, January 19, 2020 by K. Scott Allen

There are actually two reasons why I tend to avoid using debuggers. The first reason is a genuine belief that debuggers encourage short-term thinking and quick fixes in software. The second reason is the terrible sights and sounds I witness when I launch a debugger like the VS debugger. It is the noise of combustion and heavy gears engaging. My window arrangement shatters and the work space transforms into a day-trading app with real-time graphs and event tickers. A modal dialog pops up and tells me a thread was caught inside the reverse flux capacitor and allowed to execute freely with side-effects. I don't know what any of this means or has to do with finding my off-by-one error, which only adds to my sense of fear and confusion.

One way I avoid the debugger is by adding better logging to my software. The best time to think about what you need to log is when the software is misbehaving, and ideally before the software misbehaves in front of strangers. Sometimes the logging becomes verbose.

logger.Verbose($"ABBREVIATIONS");;
for (var index = ABBRSTART; index <= ABBREND; index++)
{
    for (var number = 0; number < NUMABBR; number++)
    {
        var offset = (NUMABBR * (index - 1)) + (WORDSIZE * 2);
        logger.Verbose($"For [{index}][{number}] the offset is {offset}");

        var ppAbbreviation = machine.Memory.WordAt(Header.ABBREVIATIONS);
        logger.Verbose($"For [{index}][{number}] the ppointer is {ppAbbreviation:X}");

        var pAbbreviation = machine.Memory.WordAddressAt(ppAbbreviation + offset);
        logger.Verbose($"For [{index}][{number}] the pointer is {pAbbreviation:X}");

        var location = machine.Memory.SpanAt(pAbbreviation);
        var result = decoder.Decode(location).Text;

        logger.Verbose($"Abbreviation [{index}][{number}] : {result}");
    }
}

Verbosity works well if you categorize correctly. Again, the best proving ground for a logging strategy is when the software is misbehaving. You can learn what knobs you need to tweak and what categories work well. With Serilog, which I still prefer, you can set the category to match type names in your software, then configure the log messages you want to see using code or configuration files.

using Serilog;
using Serilog.Events;

public ILogger CreateLogger()
{
    // Warnings and above by default, with full verbosity for the
    // types currently under investigation.
    var logger = new LoggerConfiguration()
        .MinimumLevel.Warning()
        .MinimumLevel.Override(typeof(FrameCollection).FullName, LogEventLevel.Verbose)
        .MinimumLevel.Override(typeof(Machine).FullName, LogEventLevel.Verbose)
        .MinimumLevel.Override(typeof(DebugOutputStream).FullName, LogEventLevel.Verbose)
        .MinimumLevel.Override(typeof(Instruction).FullName, LogEventLevel.Verbose)
        .MinimumLevel.Override(typeof(ZStringDecoder).FullName, LogEventLevel.Verbose)
        .Enrich.FromLogContext()
        .WriteTo.File(@"trace.log",
                      outputTemplate: "{SourceContext:lj}\n{Message:lj}{NewLine}{Exception}")
        .CreateLogger();
    return logger;
}
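Serilog fills in the SourceContext property when you create a logger for a specific type, which is what the MinimumLevel.Override calls above match against. A small sketch, using Machine as a stand-in for one of the types listed:

using Serilog;

public class Machine
{
    private readonly ILogger logger;

    public Machine(ILogger logger)
    {
        // SourceContext becomes the full type name for every
        // message written through this logger.
        this.logger = logger.ForContext<Machine>();
    }

    public void Run()
    {
        logger.Verbose("Starting the machine");
    }
}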

When is Visual Studio Not an Editor?

To use logs during test runs you need to sink log events into XUnit's ITestOutputHelper. The logs are available from the VS test runner by clicking on an "Open additional output for this result" link.
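There is a ready-made Serilog sink package for xUnit, but the bridge is small enough to sketch by hand using only core Serilog types. TestOutputSink and MachineTests are illustrative names:

using Serilog;
using Serilog.Core;
using Serilog.Events;
using Xunit.Abstractions;

public class TestOutputSink : ILogEventSink
{
    private readonly ITestOutputHelper output;

    public TestOutputSink(ITestOutputHelper output)
    {
        this.output = output;
    }

    public void Emit(LogEvent logEvent)
    {
        // Render the message template with its properties and
        // forward the result to the xUnit test output.
        output.WriteLine(logEvent.RenderMessage());
    }
}

public class MachineTests
{
    private readonly ILogger logger;

    public MachineTests(ITestOutputHelper output)
    {
        logger = new LoggerConfiguration()
            .MinimumLevel.Verbose()
            .WriteTo.Sink(new TestOutputSink(output))
            .CreateLogger();
    }
}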

For one particular integration-style test I have, the logs can get lengthy, which leads to an amusing message from VS.

Open an editor

An editor like Notepad? Am I not already in a piece of software that can edit text? It's like having the GPS in a Tesla tell me I'll need to use a 1988 Oldsmobile to reach my destination.

Moving Complexity

Thursday, January 16, 2020 by K. Scott Allen

I always feel a sense of satisfaction when I move a piece of complexity from outside an object to inside an object. It doesn't need to be a large amount of code, I've learned. Every little step helps in the long run.

Simple Refactoring

If I could go back 20 years and give myself some programming tips, one of those tips would certainly be this: You don't move code into an abstraction to reuse the code, you move code into an abstraction to use the code.

Nine Performance Tips for Azure App Services

Tuesday, January 14, 2020 by K. Scott Allen

This post originally appeared on the Progress Telerik blog.

Introduction

We always want the best performance from the software we deploy to Azure App Services. Not only does better performance make our customers happy, but better performance can also save us money if we "do more with less" in Azure. In this article we'll look at settings and strategies for improving the performance of web applications and web APIs running in an Azure App Service. We'll start with some easy configuration changes you can make for an instant improvement.

Enable HTTP/2

Microsoft announced support for HTTP/2 in App Services early in 2018. However, when you create a new App Service today, Azure will start you with HTTP 1.1 configured as the default protocol. HTTP/2 brings major changes to our favorite web protocol, and many of the changes aim to improve performance and reduce latency on the web. For example, header compression and binary formatting in HTTP/2 will reduce payload sizes. An even better example is the use of request pipelining and multiplexing. These features allow for more concurrent requests using fewer network sockets and help prevent one slow request from blocking all subsequent requests, a frequent problem in HTTP 1.1 known as the "head-of-line blocking" problem.

To configure your App Service to use HTTP/2 with the portal, go to Platform Settings in the Configuration blade. Here you will find a dropdown to specify the HTTP version. With 2.0 selected, any clients that support HTTP/2 will upgrade their connection automatically.

HTTP Version Selection in App Services
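If you script your deployments with ARM templates instead of clicking through the portal, the same setting is a site configuration property. A fragment, with the apiVersion and parameter name as assumptions to adapt:

{
  "type": "Microsoft.Web/sites",
  "apiVersion": "2019-08-01",
  "name": "[parameters('siteName')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "siteConfig": {
      "http20Enabled": true
    }
  }
}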

HTTP/2 might not benefit every application, so you will want to run performance tests and stress tests to document your improvements. Here's a simple test where I used the network tools in Firefox against a page hosted in an App Service. The page references a handful of script and CSS resources, and also includes 16 images. Each image is over 200 KB in size. First, I used the developer tools to record what happens on an App Service using HTTP 1.1. Notice how the later requests start in a blocked state (the red section of the bars). This is the dreaded "head-of-line blocking" problem where limitations on the number of connections and concurrent requests throttle the throughput between the client and the server. The client doesn't receive the final bytes for the page until 800ms after the first request starts.

HTTP 1.1 Blocking

Next, I switched on HTTP/2 support in the App Service. I didn't need to make any other configuration changes on the client or the server. The last byte arrives in less than 500ms. We avoid blocking thanks to the improved network utilization of HTTP/2.

HTTP/2 Improvements

Turn Off the Application Request Routing Cookie

In front of every Azure App Service is a load balancer, even if you only run a single instance of your App Service Plan. The load balancer intercepts every request heading for your app service, so when you do move to multiple instances of an App Service Plan, the load balancer can start to balance the request load across available instances. By default, Azure will make sure clients continue reaching the same app service instance during a session, because Azure can't guarantee your application isn't storing session state in server memory. To provide this behavior, the load balancer will inject a cookie into the first response to a client. This cookie is what Azure calls the Application Request Routing Cookie.

If you have a stateless application and can allow the load balancer to distribute requests across instances without regard to previous requests, then turn off the routing cookie in the Configuration blade to improve performance and resiliency. You won't have requests waiting for a server restart, and when failures do happen, the load balancer can shift clients to a working instance quickly.

The routing configuration is another item you'll find in the Platform Settings of the App Service Configuration blade.

Turning off Instance Affinity
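In an ARM template, the same switch is a property directly on the site resource rather than inside siteConfig. Another fragment to adapt:

{
  "type": "Microsoft.Web/sites",
  "apiVersion": "2019-08-01",
  "name": "[parameters('siteName')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "clientAffinityEnabled": false
  }
}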

Keep the App Service Always On

If you've deployed applications into IIS in the past, you'll know that IIS will unload idle web sites after a period of inactivity. Azure App Services will also unload idle web sites. Although the unloading can free up resources for other applications that might be running on the same App Service Plan, this strategy hurts the performance of the app because the next incoming request will wait as the web application starts from nothing. Web application startup time can be notoriously slow, regardless of the technologies involved. The caches are empty, the connection pools are empty, and all requests are slower than normal while the site warms up.

To prevent the idle shutdown, you can set the Always On flag in the App Service Configuration blade.

The Always On App Service Flag
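For template-based deployments, the flag maps to another site configuration property (a fragment; keep in mind Always On is not available on the Free tier):

"properties": {
  "siteConfig": {
    "alwaysOn": true
  }
}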

Use a Local Cache

By default, the file system for your App Service is mounted from Azure Storage. The good news is your file system is durable, highly available, and accessible from multiple App Service instances. The sad news is your application makes a network call every time the app touches the file system. Some applications require the Azure Storage solution. These are the applications that write to the file system, perhaps when a user uploads a file, and they expect the file system changes to be durable, permanent, and immediately visible across all running instances of the application. Other applications might benefit from having a faster, local, read-only copy of the web site content. If this sounds like your application, or you want to run a test, then create a new App Setting for the app with a key of WEBSITE_LOCAL_CACHE_OPTION and a value of Always. You'll then have a d:\home folder pointing to a local cache on the machine and populated with a copy of your site content.

Using a Local Cache
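The app setting can also ride along in a template as a child config resource. A fragment, where the concat expression builds the required site/appsettings resource name:

{
  "type": "Microsoft.Web/sites/config",
  "apiVersion": "2019-08-01",
  "name": "[concat(parameters('siteName'), '/appsettings')]",
  "properties": {
    "WEBSITE_LOCAL_CACHE_OPTION": "Always"
  }
}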

Although I say the cache is read-only, the truth is you can write into the local cache folder. However, you'll lose any changes you make after an app restart. For more information on the tradeoffs involved and how the local cache works with log files, see the Azure App Service Local Cache overview.

Keep Your Customers Close, and Your Resources Even Closer

All the performance improvements we've looked at so far only require configuration changes. The next set of improvements require some additional infrastructure planning or restructuring, and in some cases changes to the application itself. The common theme in the next set of tips is to reduce the distance bits need to travel over the network. The speed of light is finite, so the further a bit has to travel, the longer it takes that bit to reach its destination.

Co-locate Your App Service and Your Database

In Azure you assign most resources you create to a specific region. For example, when I create an App Service, I can place the service close to me in the East US region, or, if I'm on an extended visit to Europe, I can select the North Europe region. If you create multiple resources that work together closely, you'll want to place the resources together in the same region. In the past I've seen performance suffer when someone at the company accidentally places an App Service in one region and an associated Azure SQL instance in a different region. Every database query from the App Service becomes a trip across the continent, or around the world.

How do you check your existing subscriptions to make sure your resources are properly co-located? Assuming you don't want to click through the portal to check manually, you can write a custom script or program, or use Azure Policy. Azure Policy has a built-in rule to check every resource to ensure the resource location matches the location of the resource's parent resource group. All you need to do with this rule in place is make sure your associated resources are all in the same resource group. The policy definition for this audit rule looks like the following.

{
  "if": {
    "field": "location",
    "notIn": [
      "[resourcegroup().location]",
      "global"
    ]
  },
  "then": {
    "effect": "audit"
  }
}

Keep Your App Service Close to Your Customer

If most of your customer traffic originates from a specific area of the world, it makes sense to place your resources in the Azure region closest to your customers. Of course, many of us have customers fairly distributed around the world. In this case, you might consider geo-replicating your resources across multiple Azure regions and stay close to everyone. For App Services, this means creating multiple App Service plans inside of multiple Azure data centers around the world. Then, you'll typically use a technology like Azure Traffic Manager to direct customer traffic to the closest App Service instance.

Note: since I wrote this article, Microsoft introduced Azure Front Door. Front Door offers some additional capabilities that are not available from Traffic Manager, like SSL offload, instant failover, and DDoS protection. If you need global load balancing, you should also look at what the Front Door Service offers.

Traffic Manager is a DNS-based load balancer. So, when a customer's web browser asks for the IP address associated with your application's domain, Traffic Manager can use rules you provide and other heuristics to select the IP address of a specific App Service. Traffic Manager can select the App Service with the lowest latency for a given customer request, or you can configure Traffic Manager to enforce geo-fencing, where the load balancer sends all customers in a specific province or country to the App Service you select. You can see the routing methods built into Traffic Manager in the Create Traffic Manager profile blade below.

Setting up a Traffic Manager Profile
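If you script the profile, the routing method is a single property on the resource. A fragment, where Performance is the latency-based method and Geographic enables the geo-fencing behavior; a real deployment also needs dnsConfig and monitorConfig sections:

{
  "type": "Microsoft.Network/trafficManagerProfiles",
  "apiVersion": "2018-04-01",
  "name": "[parameters('profileName')]",
  "location": "global",
  "properties": {
    "trafficRoutingMethod": "Performance"
  }
}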

There are tradeoffs and complications introduced by Traffic Manager. It is easy to replicate stateless web applications and web services across data centers around the world, but you'll need to spend some time planning a data access strategy. Keeping one database as the only source of truth is the easiest data access approach. But, if your App Service in Australia is reading data from a database in the U.K., you might be losing the performance benefits of geo-replicating the App Service. Another option is to replicate your data, too, but much depends on your business requirements for consistency. Data replication is typically asynchronous and delayed, and your business might not be able to live with the implications of eventual consistency.

Keep Your Content Close to the Customer

Azure's content delivery network allows you to take static content from Azure Storage, or from inside your App Service, and distribute the content to edge servers around the world. Again, the idea is to reduce the distance information needs to travel, and therefore reduce the latency of network requests. Static files like scripts, images, CSS files, and videos are all good candidates for caching on the CDN edge servers. A CDN can have other benefits, too. Since your App Service doesn't need to spend time or bandwidth serving files cached on a CDN, it has more resources available to produce your dynamic content.

When setting up a CDN profile in Azure, you can select a pricing plan with the features you need from a set of providers that includes Microsoft, Verizon, and Akamai.

Setting up a CDN Profile
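In a template, the provider choice becomes the SKU name on the profile resource. A fragment, with the apiVersion as an assumption; the Verizon and Akamai options follow the same Standard_Provider naming pattern:

{
  "type": "Microsoft.Cdn/profiles",
  "apiVersion": "2019-04-15",
  "name": "[parameters('profileName')]",
  "location": "global",
  "sku": {
    "name": "Standard_Microsoft"
  }
}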

Keep Your Apps Together

Today's architecture fashion is to decompose systems into a set of microservices. These microservices need to communicate with each other to process customer requests. Just like keeping your application and database close together can benefit performance, keeping your microservices close to each other can benefit performance, too.

With App Services, remember that multiple services can live on the same App Service Plan. Think of the plan like a virtual machine dedicated to the role of a web server. You can place as many applications on the web server as you like, and keeping services together can reduce network latency. However, keep in mind that having too many services on the same machine can stretch resources thin. It will take some experimentation and testing to figure out the best distribution of services, the ideal size of the App Service Plans, and the number of instances you need to handle all your customer requests.
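In a template, co-locating services is a matter of pointing multiple site resources at the same plan through serverFarmId. A fragment with illustrative names:

{
  "type": "Microsoft.Web/sites",
  "apiVersion": "2019-08-01",
  "name": "orders-service",
  "location": "[resourceGroup().location]",
  "properties": {
    "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', 'shared-plan')]"
  }
},
{
  "type": "Microsoft.Web/sites",
  "apiVersion": "2019-08-01",
  "name": "billing-service",
  "location": "[resourceGroup().location]",
  "properties": {
    "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', 'shared-plan')]"
  }
}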

Summary

We've looked at several strategies we can use to improve the performance of web applications and web APIs we've deployed to Azure App Services. Just remember that your first step before trying one of these strategies should be to measure your application performance and obtain a good baseline number. Not every strategy in this article will benefit every application. Starting with baseline performance numbers will allow you to compare strategies and see which ones are the most effective for your application.

Solving Access Denied in Crypto Machine Keys

Sunday, January 12, 2020 by K. Scott Allen

Previously on OdeToCode I posted about tracking down an AspNetCore build error. Once I realized the access denied message came from the ProgramData\Microsoft\Crypto\RSA\MachineKeys folder, I went searching for reasons why this might happen, and to find possible solutions. Here are some observations based on my research.

For Many Developers, Security Only Gets In The Way

During my NodeJS days I developed a strange vocalization tic. The tic manifested itself every time I found an answer on Stack Overflow suggesting sudo as the answer to a security problem. As in, "just use sudo npm install and you'll be done". The tic resurfaced during my research into access denied messages around MachineKeys. From GitHub issues to Stack Overflow and Microsoft support forums, nearly everyone suggests the developer give themselves "Full Control" of the folder and consider the job finished. There is no root cause analysis or thought given to the idea that although "Full Control" might work, it still might be the wrong answer.

Here is an example where the Full Control solution is encouraged, favorited, and linked to from other issues that encourage developers to use the same workaround.

Given that the MachineKeys folder is not in an obvious location for storing user specific data, and given the folder name itself implies the contents are for all the users on the machine, and given the folder can contain the private keys of cryptographic key pairs, I immediately dismissed any answer suggesting full control or taking ownership of the folder.

Not Many People Have This Problem

I also realized during my research that not many people run into this specific problem. The few I found who did some real analysis were pointing fingers at Windows updates. And indeed, this particular machine accepts insider builds of Windows. For those not brave enough to use insider builds, I can say that insider updates are both frequent and are similar to playing with commercial grade fireworks. It's fun when they work, but you are only one misfire away from burning down your house.

I began to suspect my access denied error was a problem with my Windows configuration, and not with the AspNetCore builds or .NET tools. I tried the build for the same commit on two other machines, and both were successful.

Two out of three is a consensus in distributed computing, so now I was certain I had a configuration problem, probably caused by Windows Update incorrectly applying inherited security attributes. I started looking for how to fix the problem.

The Most Unhelpful Support Document in the History of Microsoft Support Documents

In my research I came across the support document titled "Default permissions for the MachineKeys folders". A promising title. There is even some rousing prose in the summary section implying that MachineKeys is a special folder.

The MachineKeys folder stores certificate pair keys for both the computer and users. ... The default permissions on the folder may be misleading when you attempt to determine the minimum permissions that are necessary for proper installation and the accessing of certificates.

I was waiting for the document to rattle off the exact permissions setup, but after an auspicious opening the document spirals into darkness with language and terms from Windows Server 2003. It is true that MachineKeys requires special permissions, but I found it easier to compare settings on two different computers than to translate a 16-year-old support document.

My Solution

On my problem machine, the MachineKeys folder inherited permissions from the parent folder. These permissions gave Everyone read permissions on the folder and all files inside, and this is why al.exe would fail with an access denied error. Given that the folder can contain private keys from different users, these settings are also dangerous. The folder shouldn't inherit permissions; it needs special permissions instead.

The first step in fixing the mess through the GUI is to right-click the folder, go to Properties, go to the Security tab, click Advanced at the bottom of the dialog, click "Disable inheritance", and then "Remove all inherited permissions from this object".

disable inheritance

Now there is a clean slate to work with. Use the Add button to create a new permissions entry selecting Everyone as the principal. In the Basic Permissions group, select only Read and Write. Make sure "Applies to" has "This folder only" selected.

special permissions

Also add the Administrators group, and allow Full control for this folder only. You'll know the right settings are in place when the regular security tab shows "Special Permissions" for both the Everyone group and Administrators.
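If you prefer the command line, the same settings can be sketched with icacls from an elevated prompt. Verify the result against the GUI before trusting it; the rights below are meant to mirror the steps above:

cd /d %ProgramData%\Microsoft\Crypto\RSA

:: Stop inheriting and remove all inherited entries
icacls MachineKeys /inheritance:r

:: Everyone: read and write, this folder only
icacls MachineKeys /grant "Everyone:(R,W)"

:: Administrators: full control, this folder only
icacls MachineKeys /grant "Administrators:(F)"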

Correctness

These special permissions allow average users to write into the folder, and then read and delete a file they write. But, they cannot list the contents of the folder or touch files from other users (although admins will have the ability to delete files from other users).

Summary

Solving security problems requires care and patience. But, I feel better having a working AspNetCore build without compromising the security settings on my machine.

Tracking Down an AspNetCore Build Error

Thursday, January 9, 2020 by K. Scott Allen

Building AspNet Core from source is straightforward, unless, like me, you run into strange errors. My first build compiled over 600 projects successfully, but ended up failing and showing the following message.

Build FAILED.

ALINK : error AL1078: Error signing assembly -- Access is denied. [...\LocalizationSample.csproj]
ALINK : error AL1078: Error signing assembly -- Access is denied. [...\LocalizationWebsite.csproj]
ALINK : error AL1078: Error signing assembly -- Access is denied. [...\RazorPagesWebSite.csproj]
ALINK : error AL1078: Error signing assembly -- Access is denied. [...\BasicTestApp.csproj]
    0 Warning(s)
    4 Error(s)

I first noticed that all four failures were because of AL.exe, a.k.a. ALINK, the little-known assembly linker that ships with .NET. Compared to a linker in the world of C++, the .NET linker is an obscure tool that is rarely seen unless a build needs to work with satellite assemblies generated from .resx files, or sign an assembly with a public/private key pair.

The next bit I noticed was that the first two projects involved localization, so I was sure the build errors were some sort of problem with satellite assemblies or resource files. Unfortunately, the "Access is denied" error message is short on helpful details. I went looking for obvious problems and verified that projects existed in the correct directories, and that no files were missing.

Generating a Build Log

In need of more information, I decided to create a detailed log of my AspNetCore build. Building all of AspNetCore requires running a build script, one of build.cmd, build.ps1, or build.sh, depending on your OS and mood. All of these scripts accept parameters. For example, build -BuildNative will build the native C/C++ projects, which includes the AspNetCoreV2 IIS module. The default is to only -BuildManaged. You can also pass MSBuild parameters through the build script, so I used the following to create a binary log of detailed verbosity, and avoided writing to the console to save time:

build.cmd -bl -verbosity:detailed -noconsolelogger  

All AspNetCore build output goes into an artifacts folder, including the binary log. To view the binary log I use MSBuildStructuredLog, a tool I've written about previously.

A word of warning - AspNetCore build logs are not the kind of logs that you can pack into an overnight bag and sling over your shoulder. If you want to open and view the logs I recommend a machine with at least 16GB of RAM, but 32 is better. The machine needs to handle a process with 10+ GB committed or else be sucking mud.

commit

Once the build log loads, it is easy to find build errors.

build log aspnetcore

The command line for each of the 4 build errors looked like the following:

CommandLineArguments = ...\al.exe /culture:es-ES /delaysign- /keyfile: and so on...

I tried the command line myself, and sure enough, there is an access denied error.

access denied

The good news, I thought to myself, was that I could easily reproduce the build error without running the entire build. The bad news, I thought to myself, was that I still didn't know which resource was denying access. Time to bring in more tools.

Tracking Down Access Denied with SysInternals Process Monitor

The Sysinternals Suite has been invaluable for debugging over the years. Process Monitor in particular is the first tool that comes to mind when I need to track down file system activity.

If you've ever worked with procmon, you'll know the tool can capture an overwhelming amount of data. The first thing to do is add one or more filters to the capture. In the screenshot below I am only showing file system activity for processes named al.exe. With the filter in place I can execute the command line, and sure enough, procmon shows the access denied error.

Process Monitor

The access denied message occurs on a file inside the ProgramData\Microsoft\Crypto\RSA\MachineKeys folder. It appears to me, based on my limited knowledge of the Windows crypto API, that al.exe is trying to write an ephemeral private key into the machine keys folder, but my user account doesn't have permission to create the file.

Now I know why the build is failing, but there is no clear solution to fix the problem.

What to do?

Stay tuned for the next blog post where I will reveal the unexciting solution to this problem in a stupendously boring fashion. Sneak preview: the solution does not involve sudo or changing folder permissions to give my account full control...