Developing with Node.js on Microsoft Azure

Tuesday, June 20, 2017 by K. Scott Allen

Catching up on announcements …

About three months ago, Pluralsight released my Developing with Node.js on Microsoft Azure course. Recorded on macOS and taking the perspective of a Node developer, the course shows how to use Azure App Service to host a Node.js application, as well as how to manage, monitor, debug, and scale the application. I also show how to use Cosmos DB, the NoSQL database formerly known as DocumentDB, by taking advantage of its MongoDB APIs.

Other topics include Azure SQL, the CLI tools, and blob storage, with a few Cognitive Services thrown into the mix, too. An entire module is dedicated to serverless computing with Azure Functions (implemented in JavaScript, of course).

In the finale of the course, I show you how to set up a continuous delivery pipeline using Visual Studio Team Services to build, package, and deploy a Node application into Azure.

I hope you enjoy the course.


ASP.NET Configuration Options Will Understand Arrays

Monday, April 24, 2017 by K. Scott Allen

Continuing on topics from code reviews.

Last year I saw some C# code working very hard to process an application config file like the following:

  "Storage": {
    "Timeout":  "25", 
    "Blobs": [
        "Name": "Primary",
        "Url": ""

        "Name": "Secondary",
        "Url": ""


Fortunately, the Options framework in ASP.NET Core understands how to map this JSON into C#, including the Blobs array. All we need are some plain classes that follow the structure of the JSON.

public class AppConfig
{
    public Storage Storage { get; set; }
}

public class Storage
{
    public int TimeOut { get; set; }
    public BlobSettings[] Blobs { get; set; }
}

public class BlobSettings
{
    public string Name { get; set; }
    public string Url { get; set; }
}

Then, we set up the IConfiguration for the application.

var config = new ConfigurationBuilder()
                .AddJsonFile("appsettings.json")
                .Build();

And once you’ve brought in the Microsoft.Extensions.Options package, you can configure the IOptions service and make AppConfig available.

public void ConfigureServices(IServiceCollection services)
{
    services.AddOptions();
    services.Configure<AppConfig>(Configuration);
    // ...
}

With everything in place, you can inject IOptions<AppConfig> anywhere in the application, and the object will have the settings from the configuration file.
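For completeness, here is a minimal sketch of consuming the options in an MVC controller. HomeController is a hypothetical class name; everything else follows the classes defined above.

```csharp
public class HomeController : Controller
{
    private readonly AppConfig _config;

    // the container injects the configured IOptions<AppConfig>
    public HomeController(IOptions<AppConfig> options)
    {
        _config = options.Value;
    }

    public IActionResult Index()
    {
        // _config.Storage.TimeOut is 25, and _config.Storage.Blobs
        // holds the two entries from the JSON array
        var timeout = _config.Storage.TimeOut;
        return View(timeout);
    }
}
```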

ASP.NET Core Dependency Injection Understands Unbound Generics

Friday, April 21, 2017 by K. Scott Allen

Continuing with topics based on ASP.NET Core code reviews.

Here is a bit of code I came across in an application’s Startup class.

public void ConfigureServices(IServiceCollection services)
{
    services.AddScoped<IStore<User>, SqlStore<User>>();
    services.AddScoped<IStore<Invoice>, SqlStore<Invoice>>();
    services.AddScoped<IStore<Payment>, SqlStore<Payment>>();
    // ...
}

The actual code ran for many more lines, the general idea being that the application needs an IStore implementation for a number of distinct entity types in the system.

Because ASP.NET Core understands unbound generics, there is only one line of code required.

public void ConfigureServices(IServiceCollection services)
{
    services.AddScoped(typeof(IStore<>), typeof(SqlStore<>));
}

Unbound generics are rarely useful in day-to-day business programming, but if you are curious how the process works, I show how to use unbound generics at a low level in my C# Generics course.
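The mechanism underneath is reflection. As a minimal sketch, reusing the IStore and SqlStore types from above, this is roughly how a container closes an unbound generic at runtime:

```csharp
// typeof(SqlStore<>) is an unbound generic type; MakeGenericType closes
// it over a specific type argument, which is essentially what the
// container does when a component requests an IStore<User>
var unbound = typeof(SqlStore<>);
var closed = unbound.MakeGenericType(typeof(User));
var store = (IStore<User>)Activator.CreateInstance(closed);
```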

One downside to this approach is that you might see a runtime error (instead of a compile-time error) if a component requests an implementation of IStore<T> that isn't possible. For example, if the concrete implementation of IStore<T> constrains T with a class constraint, then the following happens:

Assert.Throws<ArgumentException>(() =>
{
    // SqlStore<T> has a "where T : class" constraint, so the container
    // cannot close the generic type over a value type like int
    var provider = services.BuildServiceProvider();
    provider.GetRequiredService<IStore<int>>();
});
However, this is the kind of problem that should surface quickly during testing, so it is avoidable.

ASP.NET Core Middleware Components are Singletons

Wednesday, April 19, 2017 by K. Scott Allen

This is the first post in a series of posts based on code reviews of systems where ASP.NET Core is involved.

I recently came across code like the following:

public class FaultyMiddleware
{
    public FaultyMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        // saving the context so we don't need to pass it around as a parameter
        this._context = context;

        DoSomeWork();

        await _next(context);
    }

    private void DoSomeWork()
    {
        // code that calls other private methods
    }

    // ...

    HttpContext _context;
    RequestDelegate _next;
}

The problem here is a misunderstanding of how middleware components work in the ASP.NET Core pipeline. ASP.NET Core uses a single instance of a middleware component to process multiple requests, so it is best to think of the component as a singleton. Saving state into an instance field is going to create problems when there are concurrent requests working through the pipeline.

If there is so much work to do inside a component that you need multiple private methods, a possible solution is to delegate the work to another class and instantiate the class once per request. Something like the following:

public class RequestProcessor
{
    private readonly HttpContext _context;

    public RequestProcessor(HttpContext context)
    {
        _context = context;
    }

    public void DoSomeWork()
    {
        // ...
    }
}

Now the middleware component has the single responsibility of following the implicit middleware contract so it fits into the ASP.NET Core processing pipeline. Meanwhile, the RequestProcessor, once given a more suitable name, is a class the system can use anytime there is work to do with an HttpContext.
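Putting the two pieces together, the middleware might look something like the following sketch. SaferMiddleware is a hypothetical name; the important detail is that the processor is created once per request, not once per component.

```csharp
public class SaferMiddleware
{
    private readonly RequestDelegate _next;

    public SaferMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        // a new RequestProcessor for every request, so there is no
        // mutable state shared across concurrent requests
        var processor = new RequestProcessor(context);
        processor.DoSomeWork();

        await _next(context);
    }
}
```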

Developing with .NET on Microsoft Azure

Tuesday, April 4, 2017 by K. Scott Allen

My latest Pluralsight course is live and covers Azure from a .NET developer's perspective. Some of what you'll learn includes:

- How to create an app service to host your web application and API backend

- How to monitor, manage, debug, and scale an app service

- How to configure and use an Azure SQL database

- How to configure and use a DocumentDB collection

- How to work with storage accounts and blob storage

- How to take advantage of serverless computing with Azure Functions

- How to set up a continuous delivery pipeline into Azure from Visual Studio Team Services

- And much more …



Thanks for watching!

The Joy of Azure CLI 2.0

Monday, April 3, 2017 by K. Scott Allen

The title here is based on a book I remember from my mom's kitchen: The Joy of Cooking. The cover of her copy was worn, and the inside was dog-eared and bookmarked with notes. I started reading my mom's copy when I started working in a restaurant for spending money. In the days before TV channels dedicated to cooking, I learned quite a bit about cooking from this book and from on-the-job training. The book is more than a collection of recipes; there is prose and personality inside.

I have a copy in my kitchen now.

Azure CLI 2

The new Azure CLI 2 is my favorite tool for Azure operations from the command line. The installation is simple, though it does have a dependency on Python. I look at the Python dependency as a good thing, since Python allows the CLI to work on macOS, Windows, and Linux. You do not need to know anything about Python to use the CLI, although Python is a fun language to learn and use. I've done one course with Python and hope to do more one day.

The operations you can perform with the CLI are easy to find, since the tool organizes them into hierarchical groups and sub-groups. After installation, just type "az" to see the top-level commands.

You can use the ubiquitous -h switch to discover additional sub-groups and commands. For example, -h shows the commands available for the "az appservice web" group.

For many scenarios, you can use the CLI instead of the Azure portal. Let's say you've just used a scaffolding tool to create an application with Node or .NET Core, and now you want to create a web site in Azure from the local code. First, place the code into a local git repository.

git init
git add .
git commit -a -m "first commit"

Now you use a combination of git and az commands to create an app service and push the application to Azure.

az group create --location "eastus" --name sample-app
az appservice plan create --name sample-app-plan --resource-group sample-app --sku FREE
az appservice web create --name sample-app --resource-group sample-app --plan sample-app-plan
az appservice web source-control config-local-git --name sample-app --resource-group sample-app

git remote add azure "https://[url-result-from-previous-operation]"
git push azure master 

We can then have the CLI launch a browser to view the new application.

az appservice web browse --name sample-app --resource-group sample-app

To shorten the above commands, use -n for the name switch, and -g for the resource group name.


Notes for Getting Started with Power BI Embedded

Thursday, February 16, 2017 by K. Scott Allen

I've been doing some work where I thought Power BI Embedded would make for a good solution. The visuals are appealing and modern, and for customization there is the ability to use D3.js behind the scenes. I was also encouraged to see support in Azure for hosting Power BI reports. There were a few hiccups along the way, so here are some notes for anyone trying Power BI Embedded soon.

Getting Started

The Get started with Microsoft Power BI Embedded document is a natural place to go first. It's a good document, but a few key points are left unsaid, or at least understated.

The first few steps of the document outline how to create a Power BI Embedded Workspace Collection. The screenshot at the end of the section shows the collection in the Azure portal with a workspace inside the collection. However, if you follow the same steps, you won't have a workspace in your collection; you'll have an empty collection. This behavior is normal, but combined with some of the other points below, it added to the confusion.


Not mentioned in the portal or the documentation is the fact that the workspace collection name you provide needs to be unique across all of Azure. Generally, the Azure portal's configuration blades let you know when a name must be unique (by showing the domain the name will prefix). Power BI Embedded works a bit differently, and when it comes time to invoke APIs with a collection name, it makes more sense to think of the name as unique. I'll caveat this paragraph by saying I am deducing the uniqueness of the collection name from behavior and the API documentation.

Creating a Workspace

After creating a collection, you'll need to create a workspace to host reporting artifacts. There is currently no UI in the portal or the Power BI Desktop tool to create a workspace in Azure, which feels odd. Everything else I've worked with in the Azure portal has at least a minimal UI for common configuration of a resource, and creating a workspace is a common task.

Currently, the only way to create a workspace is to use the HTTP APIs provided by Power BI. For automated software deployments, the API is a must-have, but for experimentation it would be nice to have a more approachable way to set up a workspace and get a feel for how everything works.

The APIs

There are two sets of APIs to know about: the Power BI REST operations and the Power BI resource provider APIs. You can think of the resource provider APIs as the usual Azure resource provider APIs that attach to any type of resource in Azure: virtual machines, app services, storage, and so on. You can use these APIs to create a new workspace collection instead of using the portal UI, and to perform common tasks like listing or regenerating the access keys. These APIs require an access token from Azure AD.

The Power BI REST operations allow you to work inside a workspace collection to create workspaces, import reports, and define data sources. Some orthogonality appears to be missing from this API: you can use an HTTP POST to create workspaces and reports, and an HTTP GET to retrieve resource definitions, but in many cases there is no HTTP DELETE operation to remove an item. These operations use a different base URL than the resource manager operations, and they do not require a token from Azure AD. All you need for authorization is one of the access keys defined by the workspace collection.

The mental model to have here is the same model you would have for Azure Storage or DocumentDB, to pick two examples. There are APIs to manage the resource, which require an AD token (like creating a storage account), and there are APIs to act as a client of the resource, which require only an access key (like uploading a blob into storage).
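To make the client-style calls concrete, here is a minimal sketch of creating a workspace with HttpClient. The endpoint URL and the "AppKey" authorization scheme are my recollection from the preview-era documentation; treat both as assumptions and verify against the current API reference before relying on them.

```csharp
var client = new HttpClient();

// the access key from the workspace collection stands in for an AD token
// (assumed scheme name: "AppKey")
client.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("AppKey", accessKey);

// assumed endpoint shape for creating a workspace inside a collection
var url = "https://api.powerbi.com/v1.0/collections/" +
          collectionName + "/workspaces";

// POST with an empty body creates a new workspace
var response = await client.PostAsync(url, new StringContent(""));
response.EnsureSuccessStatusCode();
```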

The Sample Program

To see how you can work with these APIs, Microsoft provides a sample console application on GitHub. After cloning the repo, I had to fix NuGet package references and assembly reference errors. Once I had the solution building, there were still six warnings from the C# compiler, which is unfortunate.

If you want to run the application just to create your first workspace, or you want to borrow some code from the application to put in your own, there is one issue that had me stumped for a bit until I stepped through the code with a debugger. Specifically, this line of code:

var tenantId = (await GetTenantIdsAsync(commonToken.AccessToken)).FirstOrDefault();

If you sign into Azure using an account associated with multiple Azure directories, this line of code grabs only the first tenant ID, which might not be the ID you need to access the Power BI workspace collection you've created. This happened to me when trying the simplest possible operation in the example program, listing all workspace collections, and it initially led me to the wrong assumption that every Power BI operation required an AAD access token.
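If your account spans multiple directories, a more defensive approach is to pick the tenant explicitly rather than take the first one. In this sketch, desiredTenantId is a hypothetical value you would supply for the directory that holds your workspace collection:

```csharp
var tenantIds = await GetTenantIdsAsync(commonToken.AccessToken);

// choose the directory that actually holds the workspace collection,
// falling back to the first ID only when the desired tenant is absent
var tenantId = tenantIds.Contains(desiredTenantId)
    ? desiredTenantId
    : tenantIds.FirstOrDefault();
```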

When combined with the other idiosyncrasies listed above, the sample app's behavior had me questioning whether Power BI Embedded was ever going to work.

But, as with many technologies, I just needed some persistence, some encouragement, a bit of luck, and some sleep to allow the whole mental model to sink in.
