
Streaming Content in ASP.NET Core

Tuesday, January 9, 2018 by K. Scott Allen

I've seen people working hard to stream content in ASP.NET Core. The content could be a video file or other large download. The hard work usually involves byte arrays and memory streams.

Some of this hard work was necessary in 1.0, but there are a number of features in 2.0 that make the extra work unnecessary.

With Static Files

The ASP.NET Core static files middleware will add ETag headers and can respond to partial range requests, making the transfer efficient. Here's a cURL request for bytes 0 through 10 of a file named sample.mp4 that lives in the wwwroot folder.

$ curl -sD - -o /dev/null -H "Range: bytes=0-10" localhost:51957/sample.mp4

HTTP/1.1 206 Partial Content
Content-Length: 11
Content-Type: video/mp4
Content-Range: bytes 0-10/10630899
Last-Modified: Wed, 14 May 2014 23:39:15 GMT
Accept-Ranges: bytes
ETag: "1cf6fcdb79b6573"

In the output you can see the 206 partial content response and Accept-Ranges header.
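Getting this behavior only requires the static files middleware to be in the pipeline. A minimal Startup sketch might look like the following (the conventional ASP.NET Core 2.0 Startup shape, not code from the original post):

```csharp
using Microsoft.AspNetCore.Builder;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Serves files from wwwroot, adds ETag and Last-Modified headers,
        // and honors Range requests with 206 Partial Content responses.
        app.UseStaticFiles();
    }
}
```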

With File Results

There are many scenarios where serving files with static files middleware is problematic. One scenario is when the files require authorization checks. It is possible to enforce authorization checks on static files, but another approach is to use controller actions. Imagine serving up the following HTML:

<h2>File Stream</h2>
<video autoplay controls src="/stream/samplevideo"></video>

The video source points to the following controller action.

[Authorize(Policy = "viewerPolicy")]
public IActionResult SampleVideoStream()
{
    var path = Path.Combine(pathToVideos, "sample.mp4");
    return File(System.IO.File.OpenRead(path), "video/mp4");
}

FileStreamResult and VirtualFileResult also support Range headers with a 206 response, although they won't send ETags without some additional code. In many cases, however, the behavior is good enough to move forward without creating a lot of additional work.
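If you do want an ETag on a file result, one approach in newer 2.x releases is the File overload that accepts a last-modified date and an entity tag. Treat this as a sketch rather than code from the post; pathToVideos and the tag format are assumptions:

```csharp
[Authorize(Policy = "viewerPolicy")]
public IActionResult SampleVideoStream()
{
    var path = Path.Combine(pathToVideos, "sample.mp4");
    var lastModified = System.IO.File.GetLastWriteTimeUtc(path);

    // The overload taking lastModified and entityTag emits Last-Modified
    // and ETag headers and answers conditional (If-*) requests.
    return File(System.IO.File.OpenRead(path), "video/mp4",
        lastModified: lastModified,
        entityTag: new EntityTagHeaderValue($"\"{lastModified.Ticks:x}\""));
}
```

EntityTagHeaderValue lives in the Microsoft.Net.Http.Headers namespace, and the tag value must be a quoted string.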

Updated Course for ASP.NET Core 2.0

Thursday, January 4, 2018 by K. Scott Allen

Catching up on some Pluralsight announcements...

My ASP.NET Core Fundamentals Course has been re-recorded for ASP.NET Core 2.0.

Some pieces have changed dramatically. For example, I give only a brief overview of the ASP.NET Identity framework and spend more time on integrating with an OpenID Connect provider. New topics include how to use the Razor pages feature in v2. Enjoy!

ASP.NET Core 2 Fundamentals

Think About Your API Client

Tuesday, January 2, 2018 by K. Scott Allen

It’s easy to miscommunicate intentions.

Teams starting to build HTTP-based APIs need to think in terms of resources and the operations they want a client to perform on those resources. The resource name should appear in the URL. The operation is defined by the HTTP method. These concepts are important to understand when building APIs for the outside world to consume. Sometimes the frameworks and abstractions we use make it easy to forget the ultimate goal of an HTTP API.

Here's an example.

I came across a bit of code that looks something like the following. The idea is to expose a resource with associated child resources. In my example, the parent is an invoice, the children are line items on the invoice.

public class InvoiceController : Controller
{
    public IEnumerable<LineItem> GetByInvoiceId(string invoiceId)
    {
        // ... return all items for the invoice
    }

    public LineItem GetByLineItemId(string lineItemId)
    {
        // ... return a specific line item
    }
}

From a C# developer's perspective, the class name and method names are reasonable. However, an HTTP client only interacts with the API using HTTP messages. If I am putting together code to send an HTTP GET request to /getall/87, I'm not going to feel comfortable knowing what resource I'm interacting with. The URL (remember the R stands for resource) does not identify the resource in any manner; it only describes an operation, and the operation belongs in the HTTP method, not the URL.

One of the keys to building an effective HTTP API is to map the resources in your domain to a set of URLs. There are various subtleties to take into account, but in general you'll build a more effective API if you think about the interface you are exposing to clients first, and then figure out how to implement the interface. In this scenario, I'd think about each invoice being a resource, and each line item being a nested resource inside a specific invoice. Thinking about the interface first, I'd try to expose an API like so:

GET /invoices/3  <- get details on the invoice with an ID of 3

GET /invoices/3/lineitems  <- get all the line items for an invoice with an id of 3

GET /invoices/3/lineitems/87  <- get the line item with an id of 87 (which is inside the invoice with an id of 3)

There is also nothing wrong with giving a nested resource a top-level URL. In other words, a single resource can have multiple locators. It all depends on how the client will need to use the API.

GET /lineitems/87  <- get the line item with an id of 87
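With ASP.NET Core attribute routing, giving one action multiple locators is just a matter of stacking route templates. This is a sketch, not code from the post; the controller name and route templates are assumed from the surrounding example:

```csharp
public class LineItemController : Controller
{
    // The same action answers at both a nested URL and a top-level URL.
    [HttpGet("invoices/{invoiceId}/lineitems/{lineItemId}")]
    [HttpGet("lineitems/{lineItemId}")]
    public LineItem GetLineItem(string invoiceId, string lineItemId)
    {
        // invoiceId will be null when the top-level route matched
        // ... return the specific line item
    }
}
```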

In the end, this interface is more descriptive than:

GET /getall/87

And the controller could look like the following (not all endpoints listed above are implemented):

[Route("invoices")]
public class InvoiceController : Controller
{
    // GET /invoices/3/lineitems
    [HttpGet("{invoiceId}/lineitems")]
    public IEnumerable<LineItem> GetByInvoiceId(string invoiceId)
    {
        // ... return all line items for the given invoice
    }

    // GET /invoices/3/lineitems/87
    [HttpGet("{invoiceId}/lineitems/{lineItemId}")]
    public LineItem GetByLineItemId(string invoiceId, string lineItemId)
    {
        // ... return a specific line item
    }
}

Just remember contract first. Implementation later.

Cosmos DB Request Units and the .NET SDK

Tuesday, October 31, 2017 by K. Scott Allen

Platform database services in Azure, like Azure SQL and Cosmos DB, both need to charge each of us according to our utilization.

Azure SQL measures utilization using a Database Throughput Unit (DTU). A DTU is a relative number we can use to compare pricing tiers. As a relative number, a single DTU remains intangible and mysterious.

Cosmos DB measures utilization using a Request Unit (RU). An RU is well defined, and there is even a calculator available to estimate RU provisioning.

Cosmos includes the resource charge for every operation as an HTTP response header. It is not immediately obvious how to access the request charge when using the .NET SDK, but as the following code shows, you can access a RequestCharge property as part of any IFeedResponse. The following code collects the charges over the course of iterating a paged query. 

var query = client.CreateDocumentQuery<T>(
        UriFactory.CreateDocumentCollectionUri(DatabaseId, CollectionId),
        new FeedOptions { MaxItemCount = -1 })
    .AsDocumentQuery();  // exposes HasMoreResults and ExecuteNextAsync

var results = new List<T>();
var totalRequestCharge = 0.0;

while (query.HasMoreResults)
{
    var response = await query.ExecuteNextAsync<T>();
    totalRequestCharge += response.RequestCharge;
    results.AddRange(response);
}

DefaultFiles and FileServer in ASP.NET Core

Wednesday, October 25, 2017 by K. Scott Allen

The DefaultFiles middleware of ASP.NET Core is often misunderstood.


The DefaultFiles middleware by itself will not serve any files. DefaultFiles only rewrites the request Path value to match a default file name when the request points to a directory where such a file exists.

You still need the StaticFiles middleware to serve the default file.


It’s also important to know that DefaultFiles, like StaticFiles, will work in the web root path by default (wwwroot). If you want both pieces of middleware to work with a different (or second) folder of files, you’ll need to give both pieces of middleware a file provider that points to the folder.

var clientPath = Path.Combine(env.ContentRootPath, "client");
var fileProvider = new PhysicalFileProvider(clientPath);

app.UseDefaultFiles(new DefaultFilesOptions
{
    DefaultFileNames = new[] { "foo.html" },
    FileProvider = fileProvider
});

app.UseStaticFiles(new StaticFileOptions
{
    FileProvider = fileProvider
});

In most cases, the middleware you want to use for these scenarios is the FileServer middleware, which combines DefaultFiles and StaticFiles.

var clientPath = Path.Combine(env.ContentRootPath, "client");
var fileProvider = new PhysicalFileProvider(clientPath);

var fileServerOptions = new FileServerOptions();
fileServerOptions.DefaultFilesOptions.DefaultFileNames = new[] { "foo.html" };
fileServerOptions.FileProvider = fileProvider;

app.UseFileServer(fileServerOptions);


The Getting Started Questions

Tuesday, August 29, 2017 by K. Scott Allen

I hear a common set of questions from beginning programmers who feel they are struggling to get started.

- Why is programming so hard?

- How long does it take to be a good programmer?

- Should I consider a different career?

It is difficult to give a general answer to these questions, because the answers depend on the characteristics of the questioner.

I was fortunate to be a beginner when I was 13 years old. My parents saw a Texas Instruments 99/4A computer on sale at a department store and, on a whim, made a purchase. I find it ironic that department stores don’t sell computers anymore, but you can use a computer to buy anything you can find at a department store (as well as the junk bonds issued by those stores).

Despite all the improvements in the hardware, software, languages, and tools, I think it is harder to get started today than it was in the early days of the personal computer. Here are three reasons:

1. Applications today are more ambitious.  ASCII interfaces are now user experiences, and even the simplest modern app is a distributed system that requires cryptography and a threat model.

2. Expectations are higher. Business users and consumers both expect more from software. Software has to be fast, easy, available, and secure. 

3. Information is shallow. There's plenty of content for beginners, but most of the material doesn't teach anything with long-term value. The nine-minute videos that are fashionable today demonstrate mechanical steps to reach a simplistic goal.

In other words, there is more to programming these days than just laying down code, and the breadth of knowledge you need keeps growing. In addition to networking, security, and design, add in the human elements of communication and teamwork. Physics was once in a similar position: there was a point in time when, to be a good physicist, you only needed to grasp algebra and geometry.

My suggestions to new developers are always:

1. Write more code. The more code you write, the more experience you’ll gain, and the more you’ll understand how software works. Write an application that you would use every day, or try to implement your own database or web server. The goal isn’t to build a production quality database. The goal is to gain some insight about how a database could work internally.

2. Read more code. When I started programming I read every bit of code I could find, which wasn't much at the time. I remember typing in code listings from magazines. These days you can go to GitHub and read the code for database engines, compilers, web frameworks, operating systems, and more.

3. Have patience.  This does take time.

As for the career question, I can't give a yes or no answer. I will say that if code is something you think about when you are not in front of a computer, then I recommend you stick with programming; you'll go far.

A Little Bit of History is Good

Wednesday, August 2, 2017 by K. Scott Allen

From the outside, most software development projects look like a product of intelligent design. I think this is the fault of marketing, which describes software like this:

“Start using the globally scalable FooBar service today! We publish and consume any data using advanced machine learning algorithms to geo-locate your files and optimize them into cloud-native bundles of progressive web assembly!”.

In reality, developing software is more like the process of natural selection. Here’s a different description of the FooBar service from a member of the development team:

“Our VP has a sister-in-law who is a CIO at Amaze Bank. She gave our company a contract to re-write the bank’s clearinghouse business, which relied on 1 FTP server and 25 fax machines. Once we had the basic system in place the sales department started selling the software to other banks around the world. We had to localize into 15 different languages and add certificate validation overnight. Then the CTO told us to port all the code to Python because the future of our business will be containerized micro machine learning services in the cloud”.

This example might be a little silly, but software is more than business requirements and white board drawings. There are personalities, superstitions, random business events, and tradeoffs made for survival.

In my efforts as a trainer and presenter I’ve been, at times, discouraged from covering the history of specific technical subjects. “No one wants to know what the language was like 5 years ago, they just want to do their job today”, would be the type of feedback I’d hear. I think this is an unfortunate attitude for both the teacher and the student. Yes, history can be tricky to cover. Too often the history lesson is an excuse to talk about oneself, or full of inside jokes. But, when done well, history can be part of a story that transforms a flat technical topic into a character with some depth.

As an example, take the April 2017 issue of MSDN Magazine with the article by Taylor Brown: “Containers – Bringing Docker To Windows Developers with Windows Server Containers”. Throughout the article, Brown wove in a behind the scenes view of the decisions and challenges faced by the team. By the end, I had a better appreciation of not only how Windows containers work, but why they work the way they do.

For most of the general population, making software is like making sausages. They don't want to see how the sausage is made. But, for those of us who are the sausage makers, a little bit of insight into the struggles of others can be both educational and inspirational.