Notes for Getting Started with Power BI Embedded

Thursday, February 16, 2017 by K. Scott Allen

I’ve been doing some work where I thought Power BI Embedded would make for a good solution. The visuals are appealing and modern, and for customization there is the ability to use D3.js behind the scenes. I was also encouraged to see support in Azure for hosting Power BI reports. There were a few hiccups along the way, so here are some notes for anyone trying to use Power BI Embedded soon.

Getting Started

The Get started with Microsoft Power BI Embedded document is a natural place to go first. A good document, but there are a few key points that are left unsaid, or at least understated.

The first few steps of the document outline how to create a Power BI Embedded Workspace Collection. The screenshot at the end of the section shows the collection in the Azure portal with a workspace included in the collection. However, if you follow the same steps you won’t have a workspace in your collection, you’ll have just an empty collection. This behavior is normal, but when combined with some of the other points I’ll make, it did add to the confusion.

[Screenshot: Power BI Embedded Workspace Collection in the Azure portal]

Not mentioned in the portal or the documentation is the fact that the workspace collection name you provide needs to be unique across all collection names in Azure. Generally, the configuration blades in the Azure portal will let you know when a name must be unique (by showing the domain the name will prefix). Power BI Embedded works a bit differently, but when it comes time to invoke APIs with nothing more than a collection name, it makes more sense to think of the name as unique. I’ll caveat this paragraph by saying I am deducing the uniqueness of a collection name from behavior and the API documentation.

Creating a Workspace

After creating a collection you’ll need to create a workspace to host reporting artifacts. There is currently no UI in the portal or in the Power BI Desktop tool to create a workspace in Azure, which feels odd. Everything I’ve worked with in the Azure portal has at least a minimal UI for common configuration of a resource, and creating a workspace is a common task.

Currently the only way to create a workspace is to use the HTTP APIs provided by Power BI. For automated software deployments, the API is a must-have, but for experimentation it would also be nice to have a more approachable way to set up a workspace and get a feel for how everything works.
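For illustration, here is a minimal sketch of creating a workspace with a plain HTTP POST, using one of the collection’s access keys for authorization (more on the two API sets below). The URL shape and the AppKey scheme are what I recall from the documentation and sample app, so treat them as assumptions to verify; the collection name and key are placeholders.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class WorkspaceCreator
{
    static async Task CreateWorkspaceAsync()
    {
        var collectionName = "my-workspace-collection";             // placeholder
        var accessKey = "<one of the collection's access keys>";    // placeholder

        using (var client = new HttpClient())
        {
            client.BaseAddress = new Uri("https://api.powerbi.com");

            // workspace collection keys use the "AppKey" scheme, not a bearer token
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("AppKey", accessKey);

            var response = await client.PostAsync(
                $"/v1.0/collections/{collectionName}/workspaces",
                new StringContent(string.Empty));

            response.EnsureSuccessStatusCode();
        }
    }
}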

The APIs

There are two sets of APIs to know about: the Power BI REST operations, and the Power BI Resource Provider APIs. You can think of the resource provider APIs as the usual Azure resource provider APIs that would be attached to any type of resource in Azure – virtual machines, app services, storage, etc. You can use these APIs to create a new workspace collection instead of using the portal UI. You can also achieve common tasks like listing or regenerating the access keys. These APIs require an access token from Azure AD.

The Power BI REST operations allow you to work inside a workspace collection to create workspaces, import reports, and define data sources. Some orthogonality appears to be missing from the API: you can use HTTP POST to create workspaces and reports and HTTP GET to retrieve resource definitions, but in many cases there is no HTTP DELETE operation to remove an item. These Power BI operations use a different base URL than the resource manager operations (https://api.powerbi.com), and they do not require a token from Azure AD. All you need for authorization is one of the access keys defined by the workspace collection.

The mental model to have here is the same model you would have for Azure Storage or DocumentDB, as two examples. There are APIs to manage the resource, which require an AD token (to create a storage account, for instance), and there are APIs to act as a client of the resource, which require only an access key (to upload a blob into storage, for instance).

The Sample Program

To see how you can work with these APIs, Microsoft provides a sample console mode application on GitHub. After I cloned the repo I had to fix NuGet package references and assembly reference errors. Once I had the solution building, there were still six warnings from the C# compiler, which is unfortunate.

If you want to run the application just to create your first workspace, or you want to borrow some code from the application to put in your own, there is one issue that had me stumped for a bit until I stepped through the code with a debugger. Specifically, this line of code:

var tenantId = (await GetTenantIdsAsync(commonToken.AccessToken)).FirstOrDefault();

If you sign into Azure using an account associated with multiple Azure directories, this line of code will only grab the first tenant ID, which might not be the ID you need to access the Power BI workspace collection you’ve created. This happened to me when trying the simplest possible operation in the example program, which is to get a list of all workspace collections, and it initially led me to the wrong assumption that every Power BI operation required an AAD access token.
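One hedged workaround is to prefer an explicitly configured tenant ID over whatever happens to come back first. The configuredTenantId below is a hypothetical value you would read from app settings; it is not part of the sample.

var tenantIds = await GetTenantIdsAsync(commonToken.AccessToken);

// configuredTenantId is hypothetical - read it from configuration or prompt for it
var tenantId = tenantIds.FirstOrDefault(id => id == configuredTenantId)
               ?? tenantIds.FirstOrDefault();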

When combined with the other idiosyncrasies listed above, the sample app behavior had me questioning whether Power BI Embedded was ever going to work.

But, like with many technologies, I just needed some persistence, some encouragement, a bit of luck, and some sleep to allow the whole mental model to sink in.

ASP.NET Core and the Enterprise Part 4: Data Access

Tuesday, February 14, 2017 by K. Scott Allen

When creating .NET Core and ASP.NET Core applications, programmers have many options for data storage and retrieval available. You’ll need to choose the option that fits your application’s needs and your team’s development style. In this article, I’ll give you a few thoughts and caveats on data access in the new world of .NET Core.

Data Options

Remember that an ASP.NET Core application can compile against the .NET Core framework or the full .NET framework. If you choose to use the full .NET framework, you’ll have all the same data access options that you had in the past. These options include low-level programming interfaces like ADO.NET and high-level ORMs like the Entity Framework.

If you want to target .NET Core, you have fewer options available today. However, because .NET Core is still new, we will see more options appear over time.

Bertrand Le Roy recently posted a comprehensive list of Microsoft and third-party .NET Core packages for data access. The list shows NoSQL support for Azure DocumentDB, RavenDB, MongoDB and Redis. For relational databases, you can connect to Microsoft SQL Server, PostgreSQL, MySQL and SQLite. You can choose NPoco, Dapper and the new Entity Framework Core as ORM frameworks for .NET Core.

Entity Framework Core

Because the Entity Framework is a popular data access tool for .NET development, we will take a closer look at the new version of EF Core.

On the surface, EF Core is like its predecessors, featuring an API with DbContext and DbSet classes. You can query a data source using LINQ operators like Where, OrderBy and Select.
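To make that concrete, here is a minimal sketch of the familiar surface area, assuming EF Core 1.x. The Flight entity and FlightContext are invented for illustration and don’t come from any real application.

using System.Linq;
using Microsoft.EntityFrameworkCore;

public class Flight
{
    public int Id { get; set; }
    public string Destination { get; set; }
    public decimal Price { get; set; }
}

public class FlightContext : DbContext
{
    public FlightContext(DbContextOptions<FlightContext> options) : base(options) { }

    public DbSet<Flight> Flights { get; set; }
}

public static class FlightQueries
{
    // the same LINQ operators you know from earlier versions of EF
    public static object CheapFlights(FlightContext db)
    {
        return db.Flights
                 .Where(f => f.Price < 300)
                 .OrderBy(f => f.Destination)
                 .Select(f => new { f.Destination, f.Price })
                 .ToList();
    }
}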

Under the covers, however, EF Core is significantly different from previous versions of EF. The EF team rewrote the framework and discarded much of the architecture that had been around since version 1 of the project. If you’ve used EF in the past, you might remember there was an ObjectContext hiding behind the DbContext class plus an unnecessarily complex entity data model. The new EF Core is considerably lighter, which brings us to the discussion of pros and cons.

What’s Missing?

In the EF Core rewrite, you won’t find an entity data model or an EDMX design tool. The controversial lazy loading feature is not supported for now but is listed on the roadmap. The ability to map stored procedures to entity operations is not in EF Core, but the framework still provides an API for sending raw SQL commands to the database. Currently, this feature only maps results from raw SQL into known entity types, a restriction I’ve personally found makes consuming views from SQL Server too limiting.
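As a hedged sketch of what that raw SQL support looks like, reusing the hypothetical FlightContext from the earlier example (FromSql comes with the relational provider packages):

// given a FlightContext named db, the results must map onto a known
// entity type (Flight here), not an arbitrary projection
var discounted = db.Flights
                   .FromSql("SELECT Id, Destination, Price FROM dbo.Flights WHERE Price < {0}", 300)
                   .ToList();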

With EF Core, you can take a “code first” approach to database development by generating database migrations from class definitions. However, the only tooling to support a “database first” approach to development is a command line scaffolding tool that can generate C# classes from database tables. There are no tools in Visual Studio to reverse engineer a database or update entity definitions to match changes in a database schema. Model visualization is another feature on the future roadmap.

Like EF 6, EF Core supports popular relational databases, including SQL Server, MySQL, SQLite and PostgreSQL, but Oracle is currently not supported in EF Core.

What’s Better?

EF Core is a cross-platform framework you can use on Linux, macOS and Windows. The new framework is considerably lighter than frameworks of the past and is also easier to extend and customize thanks to the application of the dependency inversion principle.

EF Core plans to extend the list of supported database providers beyond relational databases. Redis and Azure Table Storage providers are on the roadmap for the future.

One exciting new feature is the new in-memory database provider. The in-memory provider makes unit testing easier and is not intended as a provider you would ever use in production. In a unit test, you can configure EF Core to use the in-memory provider instead of writing mock objects or fake objects around a DbContext class, which can lead to considerably less coding and effort in testing.
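Here is a minimal sketch of that style of test, reusing the hypothetical Flight and FlightContext from earlier and assuming xUnit as the test framework.

using System.Linq;
using Microsoft.EntityFrameworkCore;
using Xunit;

public class FlightQueryTests
{
    [Fact]
    public void Finds_only_cheap_flights()
    {
        // UseInMemoryDatabase ships in the Microsoft.EntityFrameworkCore.InMemory
        // package; later releases of EF Core ask for a database name parameter here
        var options = new DbContextOptionsBuilder<FlightContext>()
                          .UseInMemoryDatabase()
                          .Options;

        using (var db = new FlightContext(options))
        {
            db.Flights.Add(new Flight { Destination = "BWI", Price = 150 });
            db.Flights.Add(new Flight { Destination = "LHR", Price = 950 });
            db.SaveChanges();

            var cheap = db.Flights.Where(f => f.Price < 300).ToList();

            Assert.Equal(1, cheap.Count);
        }
    }
}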

What to Do?

EF Core is not a drop-in replacement for EF 6, so don’t consider EF Core an upgrade. Existing applications using the DbContext API of EF will have an easier time migrating to EF Core, while applications relying on entity data models and ObjectContext APIs will need a rewrite. Fortunately, you can use previous versions of the Entity Framework with ASP.NET Core if you target the full .NET framework instead of .NET Core. Given some of the missing features of EF Core, you’ll need to evaluate the framework in the context of a specific application to make sure the new EF will be the right tool to use.

A Train in the Night

Sunday, February 12, 2017 by K. Scott Allen

I’ve lived all my life near a town with the nickname “Hub City”. I know my town is not the only town in the 50 states with such a nickname, but we do have two major interstates, two mainline rail tracks, and one historic canal in the area. This is not Chicago, but we did have Ludacris fly through the regional airport last year.

The railroad tracks here have always piqued my interest. Trains too, but even more the mystery and history of the line itself. As a kid, I was told not to hang around railroad lines. But, being a kid, with a bike and a curiosity, I did anyway.

Where does it come from? Where does it go?

Those types of questions are easier to answer these days with all the satellite imagery and sites like OpenRailwayMap. I discovered, for example, the line closest to me now was built in the late 1800s when railroads were expanding. Back then, the more lines you built, the better chance you had of taking market share. When railroad companies consolidated in the 1970s, they abandoned most of this track. Still, there is a piece being used, albeit infrequently.

When the line is used on a cold winter night, the distant train whistle makes me hold my breath and listen. Two long, one short, one long. A B major 7th, I think. The 7th is there to tingle the hairs on your neck. It’s hard to believe how machinery and compressed air can provoke an emotional response. After all, there is the occasional horned owl in the area whose hollow cooing is always distant, lonely, and organic. Yet, the mechanical whistle is somehow more urgent, searching, and all-pervading. A proclamation.

I know where I’ve been. I know where I’m going.

Code Whistles

It’s hard to believe how code and technology can provoke an emotional response. The shape of the code, the whitespace between. The spark that lights a fire when you uncover a new secret. Now that you’ve learned it won’t go away, but you had to earn it. Idioms and idiosyncrasies pour into the brain like milk into cereal. Changing something, and it’s good.

The whistle. How quickly things change. Or, perhaps the process was slower than I thought. Your idioms impossible, your idiosyncrasies an irritation. If only we could reverse the clock to reach the point before these neurons put together that particular chemical reaction, but there are high winds tonight. I’ve lost power. There was the whistle.

I know where I’ve been, but I don’t know where I’m going.

On .NET Rocks

Friday, February 10, 2017 by K. Scott Allen

In episode 1405 I sit down with Carl and Richard at NDC London to talk about ASP.NET Core. I hope you find the conversation valuable.

ASP.NET Core Opinionated Approach with Scott Allen

Anti-Forgery Tokens and ASP.NET Core APIs

Monday, February 6, 2017 by K. Scott Allen

In modern web programming, you can never have too many tokens. There are access tokens, refresh tokens, anti-XSRF tokens, and more. It’s the last type of token that I’ve gotten a lot of questions about recently. Specifically, does one need to protect against cross-site request forgeries when building an API-based app? And if so, how does one create a token in an ASP.NET Core application?

Do I Need an XSRF Token?

In any application where the browser can implicitly authenticate the user, you’ll need to protect against cross-site request forgeries. Implicit authentication happens when the browser sends authentication information automatically, which is the case when using cookies for authentication, but also for applications using Windows authentication.

Generally, APIs don’t use cookies for authentication. Instead, APIs typically use bearer tokens, and custom JavaScript code running in the browser must send the token along by explicitly adding the token to a request.

However, there are also APIs living inside the same server process as a web application and using the same cookie as the application for authentication. This is the type of scenario where you must use anti-forgery tokens to prevent XSRF attacks.

XSRF Tokens and ASP.NET Core APIs

There is no additional work required to validate an anti-forgery token in an API request, because the [ValidateAntiForgeryToken] attribute in ASP.NET Core will look for tokens in a posted form input, or in an HTTP header. But, there is some additional work required to give the client a token. This is where the IAntiforgery service comes in.

[Route("api/[controller]")]
public class XsrfTokenController : Controller
{
    private readonly IAntiforgery _antiforgery;

    public XsrfTokenController(IAntiforgery antiforgery)
    {
        _antiforgery = antiforgery;
    }

    [HttpGet]
    public IActionResult Get()
    {
        var tokens = _antiforgery.GetAndStoreTokens(HttpContext);

        return new ObjectResult(new {
            token = tokens.RequestToken,
            tokenName = tokens.HeaderName
        });
    }
}

In the above code, we can inject the IAntiforgery service for an application and provide an endpoint a client can call to fetch the token and token name it needs to use in a request. The GetAndStoreTokens method will not only return a data structure with token information, it will also issue the anti-forgery cookie the framework will use in one-half of the validation algorithm. We can use a new ObjectResult to serialize the token information back to the client.
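On the receiving side, a protected API action might look like the following hedged sketch. The GolfersController and Golfer types are placeholders for illustration, not code from the application above.

public class Golfer
{
    public string Name { get; set; }
}

[Route("api/[controller]")]
public class GolfersController : Controller
{
    [HttpPost]
    [ValidateAntiForgeryToken]
    public IActionResult Post([FromBody] Golfer golfer)
    {
        // ... save the golfer ...
        return Ok();
    }
}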

Note: if you want to change the header name, you can change the AntiforgeryOptions during startup of the application [1].
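For reference, a minimal sketch of that option in ConfigureServices (the X-XSRF-TOKEN name is only an example):

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    // rename the header the antiforgery service will look for
    services.AddAntiforgery(options => options.HeaderName = "X-XSRF-TOKEN");
}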

With the endpoint in place, you’ll need to fetch and store the token from JavaScript on the client. Here is a bit of TypeScript code using Axios to fetch the token and then configure Axios to send the token with every HTTP request.

import axios, { AxiosResponse } from "axios";
import { IGolfer, IMatchSet } from "models"
import { errorHandler } from "./error";

const XSRF_TOKEN_KEY = "xsrfToken";
const XSRF_TOKEN_NAME_KEY = "xsrfTokenName";

function reportError(message: string, response: AxiosResponse) {
    const formattedMessage = `${message} : Status ${response.status} ${response.statusText}`
    errorHandler.reportMessage(formattedMessage);
}

function setToken({token, tokenName}: { token: string, tokenName: string }) {
    window.sessionStorage.setItem(XSRF_TOKEN_KEY, token);
    window.sessionStorage.setItem(XSRF_TOKEN_NAME_KEY, tokenName);
    axios.defaults.headers.common[tokenName] = token;
}

function initializeXsrfToken() {
    let token = window.sessionStorage.getItem(XSRF_TOKEN_KEY);
    let tokenName = window.sessionStorage.getItem(XSRF_TOKEN_NAME_KEY);

    if (!token || !tokenName) {
        axios.get("/api/xsrfToken")
            .then(r => setToken(r.data))
            .catch(r => reportError("Could not fetch XSRFTOKEN", r));
    } else {
        setToken({ token: token, tokenName: tokenName });
    }
}

 

Summary

In this post we … well, forget it. No one reads these anyway.

[1] Tip: Using the name TolkienToken can bring to life many literary references when discussing the application amongst team members.

Building Vendor and Feature Bundles with webpack

Thursday, December 1, 2016 by K. Scott Allen

The joke I’ve heard goes like this:

I went to an all night JavaScript hackathon and by morning we finally had the build process configured!

Like most jokes there is an element of truth to the matter.

I’ve been working on an application that is mostly server rendered and requires minimal amounts of JavaScript. However, there are “pockets” in the application that require a more sophisticated user experience, and thus a heavy dose of JavaScript. These pockets all map to a specific application feature, like “the accounting dashboard” or “the user profile management page”.

These facts led me to the following requirements:

1. All third party code should build into a single .js file.

2. Each application feature should build into a distinct .js file.

Requirement #1 calls for a “vendor bundle”. This bundle contains all the frameworks and libraries each application feature depends on. With all this code in a single bundle, the client can cache the bundle effectively, and we only need to rebuild the bundle when a framework updates.

Requirement #2 calls for multiple “feature bundles”. Feature bundles are smaller than the vendor bundle, so a feature bundle can rebuild each time a file inside changes. In my project, an ASP.NET Core application using feature folders, the scripts for features are scattered inside the feature folders. I want to build feature bundles into an output folder and retain the same feature folder structure (example below).

I tinkered with various JavaScript bundlers and task runners until I settled on webpack. With webpack I found a solution that supports the above requirements and provides a decently fast development experience.

The Vendor Bundle

Here is a webpack configuration file for building the vendor bundle. In this case we will build a vendor bundle that includes React and ReactDOM, but webpack will examine any JS module name you add to the vendor array of the configuration file. webpack will place the named module and all of its dependencies into the output bundle named vendor.js. For example, Angular 2 applications would include “@angular/common” in the list. Since this is an ASP.NET Core application, I’m building the bundle into a subfolder of the wwwroot folder.

const webpack = require("webpack");
const path = require("path");
const assets = path.join(__dirname, "wwwroot", "assets");

module.exports = {
    resolve: {
        extensions: ["", ".js"]
    },
    entry: {
        vendor: [
            "react",
            "react-dom"
            ... and so on ...
        ]
    },
    output: {
        path: assets,
        filename: "[name].js",
        library: "[name]_dll"      
    },
    plugins: [
        new webpack.DllPlugin({
            path: path.join(assets, "[name]-manifest.json"),
            name: '[name]_dll'
        }),
        new webpack.optimize.UglifyJsPlugin({ compress: { warnings: false } })
    ]
};

webpack offers a number of different plugins to deal with common code, like the CommonsChunk plugin. After some experimentation, I’ve come to prefer the DllPlugin for this job. For Windows developers, the DllPlugin name is confusing, but the idea is to share common code using “dynamically linked libraries”, so the name borrows from Windows.

DllPlugin will keep track of all the JS modules webpack includes in a bundle and will write these module names into a manifest file. In this configuration, the manifest name is vendor-manifest.json. When we build the individual feature bundles, we can use the manifest file to know which modules do not need to appear in those feature bundles.

Important note: make sure the output.library property and the DllPlugin name property match. It is this match that allows a library to dynamically “link” at runtime.

I typically place this vendor configuration into a file named webpack.vendor.config.js. A simple npm script entry of “webpack --config webpack.vendor.config.js” will build the bundle on an as-needed basis.

Feature Bundles

Feature bundles are a bit trickier, because now we need webpack to find multiple entry modules scattered throughout the feature folders of an application. In the following configuration, we’ll dynamically build the entry property for webpack by searching for all .tsx files inside the feature folders (tsx being the extension for the TypeScript flavor of JSX).

const webpack = require("webpack");
const path = require("path");
const assets = path.join(__dirname, "wwwroot", "assets");
const glob = require("glob");

const entries = {};
const files = glob.sync("./Features/**/*.tsx");
files.forEach(file => {
    var name = file.match("./Features(.+/[^/]+)\.tsx$")[1];
    entries[name] = file;
});

module.exports = {
    resolve: {
        extensions: ["", ".ts", ".tsx", ".js"],
        modulesDirectories: [
            "./client/script/",
            "./node_modules"
        ]
    },
    entry: entries,
    output: {
        path: assets,
        filename: "[name].js"    
    },
    module: {
        loaders: [
          { test: /\.tsx?$/, loader: 'ts-loader' }
        ]
    }, 
    plugins: [
         new webpack.DllReferencePlugin({            
             context: ".",
             manifest: require("./wwwroot/assets/vendor-manifest.json")
         })       
    ]
};

A couple notes on this particular configuration file.

First, you might have .tsx files inside a feature folder that are not entry points for an application feature but are supporting modules for a particular feature. In this scenario, you might want to identify entry points using a naming convention (like dashboard.main.tsx). With the above config file, you can place supporting modules or common application code into the client/script directory. webpack’s resolve.modulesDirectories property controls this directory name, and once you specify a custom directory you’ll also need to explicitly include node_modules in the list if you still want webpack to search node_modules for a piece of code. Both webpack and the TypeScript compiler need to know about the custom location for modules, so you’ll also need to add a compilerOptions.paths setting in the tsconfig.json config file for TypeScript (this is a fantastic new feature in TypeScript 2.*).

{
  "compilerOptions": {
    "noImplicitAny": true,
    "noEmitOnError": true,
    "removeComments": false,
    "sourceMap": true,
    "module": "commonjs",
    "target": "es5",
    "jsx": "react",
    "baseUrl": ".",
    "moduleResolution": "node",
    "paths": {
      "*": [ "*", "Client/script/*" ] 
    }
  },  
  "compileOnSave": false,
  "exclude": [
    "node_modules",
    "wwwroot"
  ]
}

Secondly, the output property of webpack’s configuration used to confuse me until I realized you can parameterize output.filename with [name] and [hash] parameters (hash being something you probably want to add to the configuration to help with cache busting). At first glance, it looks like output.filename will only create a single file from all of the entries, but if you have multiple keys in the entry property, webpack will build multiple output files and even create sub-directories.

For example, given the following entry:

entry: {
    '/Home/Home': './Features/Home/Home.tsx',
    '/Admin/Users/ManageProfile': './Features/Admin/Users/ManageProfile.tsx'
}

webpack will create /home/home.js and /admin/users/manageprofile.js in the wwwroot/assets directory.

Finally, notice the use of the DllReferencePlugin in the webpack configuration file. Give this plugin the manifest file created during the vendor build and all of the framework code is excluded from the feature bundle. Now when building the page for a particular feature, include the vendor.js bundle first with a script tag, and the bundle specific to the given feature second.

Summary

As easy as it may sound, arriving at this particular solution was not an easy journey. The first time I attempted such a feat was roughly a year ago, and I gave up and went in a different direction. Tools at that time were not flexible enough to work with the combination of everything I wanted, like custom module folders, fast builds, and multiple bundles. Even when part of the toolchain worked, editors could fall apart and show false positive errors.

It is good to see tools, editors, and frameworks evolve to the point where the solution is possible. Still, there are many frustrating moments in understanding how the different pieces work together and knowing the mental model required to work with each tool, since different minds build different pieces. Two things I’ve learned are that documentation is still lacking in this ecosystem, and GitHub issues can never replace StackOverflow as a good place to look for answers.

AddFeatureFolders and UseNodeModules On Nuget For ASP.NET Core

Tuesday, November 29, 2016 by K. Scott Allen

Here are a few small projects I put together last month.

AddFeatureFolders

I think feature folders are the best way to organize controllers and views in ASP.NET MVC. If you aren’t familiar with feature folders, see Steve Smith’s MSDN article: Feature Slices for ASP.NET Core MVC.

To use feature folders with the OdeToCode.AddFeatureFolders NuGet package, all you need to do is install the package and add one line of code to ConfigureServices.

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc()
            .AddFeatureFolders();

    // "Features" is the default feature folder root. To override, pass along 
    // a new FeatureFolderOptions object with a different FeatureFolderName
}
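If you do want to override the root folder, a hedged sketch based on the comment above might look like this (verify the exact option names against the package source; “Modules” is just an example name):

services.AddMvc()
        .AddFeatureFolders(new FeatureFolderOptions
        {
            FeatureFolderName = "Modules"
        });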

The sample application in GitHub demonstrates how you can still use Layout views and view components with feature folders. I’ve also allowed for nested folders, which I’ve found useful in complex, hierarchical applications. Nesting allows the feature structure to follow the user experience when the UI offers several layers of drill-down.


UseNodeModules

With the OdeToCode.UseNodeModules package you can serve files directly from the node_modules folder of a web project. Install the middleware in the Configure method of Startup.

public void Configure(IApplicationBuilder app, IHostingEnvironment environment)
{
    // ...

    app.UseNodeModules(environment);

    // ...
}

I’ve mentioned using node_modules on this blog before, and the topic generated a number of questions. Let me explain when and why I find UseNodeModules useful.

First, understand that npm has traditionally been a tool to install code you want to execute in NodeJS. But, over the last couple of years, more and more front-end dependencies have moved to npm, and npm is doing a better job supporting dependencies for both NodeJS and the browser. Today, for example, you can install React, Bootstrap, Aurelia, jQuery, Angular 2, and many other front-end packages of both the JS and CSS flavor.

Secondly, many people want to know why I don’t use Bower. Bower played a role in accelerating front-end development and is a great tool. But, when I can fetch all the resources I need directly using npm, I don’t see the need to install yet another package manager. 

Thirdly, many tools understand and integrate with the node_modules folder structure and can resolve dependencies using package.json files and Node’s CommonJS module standard. These are tools like the TypeScript compiler and front-end bundlers like webpack. In fact, TypeScript has adopted the “no tools required but npm” approach. I no longer need to use tsd or typings when I have npm and @types.

Given the above points, it is easy to stick with npm for all third-party JavaScript modules. It is also easy to install a library like Bootstrap and serve the minified CSS file directly from Bootstrap’s dist folder. Would I recommend every project take this approach? No! But, in certain conditions I’ve found it useful to serve files directly from node_modules. With the environment tag helper in ASP.NET Core you can easily switch between serving from node_modules (say, for debugging) and serving from a CDN in production and QA.

Enjoy!
