Thoughts on Azure Functions and Serverless Computing

Monday, July 10, 2017 by K. Scott Allen

For the last 10 months I’ve been working with Azure Functions and mulling over the implications of computing without servers.

If Azure Functions were an entrée, I’d say the dish layers executable code over unspecified hardware in the cloud with a sprinkling of input and output bindings on top. However, Azure Functions are not an entrée, so it might be better to describe the capabilities without using culinary terms.

With Azure Functions, I can write a function definition in a variety of languages – C#, F#, JavaScript, Bash, PowerShell, and more. There is no distinction between compiled languages, interpreted languages, and languages typically associated with a terminal window. I can then declaratively describe an input to my function. An input might be a timer (I want the function to execute every 2 hours), an HTTP message (I want the function invoked when an HTTPS request arrives at functions-test.azurewebsites.net/api/foo), a new blob appearing in a storage container, or a new message appearing in a queue. There are other inputs, as well as output bindings to send results to various destinations. By using a hosting plan known as the consumption plan, I can tell Azure to execute my function with whatever resources are necessary to meet the demand on my function. It’s like saying “here is my code, don’t let me down”.

Sample Azure Function for HTTP Message Processing

The Good

Azure Functions are cheap. While most cloud services will send a monthly invoice based on how many resources you’ve provisioned, Azure Functions only bill you for the resources you actively consume. Cost is based on execution time and memory usage for each function invocation.

Azure Functions are simple for simple scenarios. If you need a single webhook to take some data from an HTTP POST and place the data into a database, then with Functions there is no need to create an entire project or provision an app service plan.

The amount of code you write in a function will probably be less than the code needed to implement the same behavior outside of Azure Functions. There’s less code because the declarative input and output bindings can remove boilerplate code. For example, when using Azure storage, there is no need to write code to connect to an account and find a container. The function runtime will wire up everything the function needs and pass more actionable objects as function parameters. The following is an example function.json file that defines the bindings for a function.

{
  "disabled": false,
  "bindings": [
    {
      "authLevel": "function",
      "name": "req",
      "type": "httpTrigger",
      "direction": "in"
    },
    {
      "name": "$return",
      "type": "http",
      "direction": "out"
    }
  ]
}
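
For comparison, here is a minimal run.csx (the C# script file that sits next to function.json) matching the bindings above. This is a sketch, not production code:

using System.Net;
using System.Net.Http;

public static HttpResponseMessage Run(HttpRequestMessage req)
{
    // req arrives through the httpTrigger input binding above;
    // the return value flows back out through the $return binding
    return req.CreateResponse(HttpStatusCode.OK, "Hello from Azure Functions");
}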

Azure Functions are scalable. Of course, many resources in Azure are scalable, but other resources require configuration and care to behave well under load. With the consumption plan, the Functions runtime allocates whatever resources are necessary to meet demand.

The Cons

One criticism of Azure Functions, and PaaS solutions in general, is vendor lock-in. The languages and the runtimes for Azure Functions are not specialized, meaning the C# and .NET or JavaScript and Node.js code you write will move to other environments. However, the execution environment for Azure Functions is specialized. The input and output bindings that remove the boilerplate code necessary for connecting to other resources are a feature only the Functions environment provides. Thus, it requires some work to move Azure Functions code to another environment.

One of the biggest drawbacks to Azure Functions is that deploying, authoring, testing, and executing a function has been difficult in any environment outside of Azure and the Azure portal, although the situation is improving (see The Future section below). There have been a few attempts at creating Visual Studio project templates and command line tools, none of which have progressed beyond a preview version. The experience of maintaining multiple functions in a larger project has been frustrating. One of the selling points of Azure Functions is that the technology is simple to use, but you can cross a threshold where Azure Functions become too simple to use.

The Future

There have been a few announcements this year that have Azure Functions moving in the right direction. First, there is now an installer for the Azure Functions runtime. With the installer, you can set up an execution environment on a Windows server outside of Azure.

Azure Functions Runtime Installer

Second, in addition to script files, Azure Functions now supports class library deployment. Class libraries are more familiar to most .NET developers than C# script files. Plus, class libraries are easier to author with IntelliSense in Visual Studio, as well as easier to build and unit test.
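
As a rough sketch, a precompiled version of an HTTP function might look like the following. This assumes the attribute-based model from the Microsoft.NET.Sdk.Functions package, and the class and function names here are hypothetical:

using System.Net;
using System.Net.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class HelloFunction
{
    [FunctionName("Hello")]
    public static HttpResponseMessage Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequestMessage req)
    {
        // the attributes replace the function.json bindings from the script model
        return req.CreateResponse(HttpStatusCode.OK, "Hello from a class library");
    }
}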

Summary

Azure Functions hit two sweet spots: the scripting model provides a simple approach to putting code in the cloud, while class libraries support projects with larger ambitions.

However, the ultimate goal for many people coming to Azure Functions is the cost effectiveness and scalability of serverless computing. The serverless model is so appealing that I can imagine a future where, instead of moving code into a serverless platform, more serverless platforms will appear and come to your code.

Adding Feedback from Solvingj:

Great summary! I always look forward to your point of view on new .NET technologies.

I wanted to add some recent updates you will surely want to be aware of and perhaps mention:

- Deployment slots (preview)
- Built-in Swagger support (getting there)
- The option to use more traditional annotation-based mapping of API routes

- Durable functions (beta)
This enables a stateful orchestrator pattern, which is always on, can call other functions, and so on. Whereas Azure Functions originally represented a paradigm shift away from monolithic ASP.NET applications with many methods to tiny stateless applications, durable functions represent a slight paradigm shift back the other way. I think the end result is Azure Functions supporting a much broader (yet still healthy) range of workloads that can be moved over from ASP.NET monoliths, while maintaining its “serverless” advantages.

- Continuous Integration
I know you’re aware of this feature since it’s been part of Azure App Service for ages. However, you did not mention it, and for us, this was perhaps the most attractive feature. The combination of the Azure Functions runtime + Git integration eliminates whole categories of tedious DevOps engineering concerns during rapid prototyping. No Docker, no AppVeyor, just Git and an Azure Function App. Of course, you can still add AppVeyor for automated testing when it makes sense, but it’s not required early on.

Other Experiences

- Precompiled Functions
I happen to agree with Chris regarding precompiled functions. In fact, once we refactored away from the .csx scripts and into libraries, what we were left with was a normal ASP.NET structure, with one project for the function.json files and all the advantages of the Azure Functions runtime (most mentioned here). Once we switched to using Functions with this pattern, the entire Cons section you described no longer applied to us.

- Bindings
One of the Cons we found that you did not mention was the binding and configuration strategy. We liked the idea of declarative configuration and the removal of boilerplate code for building external connections in theory. However, in practice, the output bindings were simply too inflexible for our applications. Our functions were more interactive, needing to make multiple connections to the outside world within a single function, and the bindings did not provide our code a handle to its connection managers. Thus, we ended up having to build our own anyway, rendering the built-in output bindings pointless. I submitted feature requests for this, however.

Developing with Node.js on Microsoft Azure

Tuesday, June 20, 2017 by K. Scott Allen

Catching up on announcements …

About 3 months ago, Pluralsight released my Developing with Node.js on Microsoft Azure course. Recorded on macOS and taking the perspective of a Node developer, this course shows how to use Azure App Service to host a Node.js application, as well as how to manage, monitor, debug, and scale the application. I also show how to use Cosmos DB, the NoSQL database formerly known as DocumentDB, by taking advantage of the MongoDB APIs.

Other topics include Azure SQL, the CLI tools, and blob storage, with some Cognitive Services thrown into the mix, too. An entire module is dedicated to serverless computing with Azure Functions (implemented in JavaScript, of course).

In the finale of the course, I show you how to set up a continuous delivery pipeline using Visual Studio Team Services to build, package, and deploy a Node app into Azure.

I hope you enjoy the course.

Developing with Node.js on Microsoft Azure

ASP.NET Configuration Options Will Understand Arrays

Monday, April 24, 2017 by K. Scott Allen

Continuing on topics from code reviews.

Last year I saw some C# code working very hard to process an application config file like the following:

{
  "Storage": {
    "Timeout": "25",
    "Blobs": [
      {
        "Name": "Primary",
        "Url": "foo.com"
      },
      {
        "Name": "Secondary",
        "Url": "bar.com"
      }
    ]
  }
}

Fortunately, the Options framework in ASP.NET Core understands how to map this JSON into C#, including the Blobs array. All we need are some plain classes that follow the structure of the JSON.

public class AppConfig
{
    public Storage Storage { get; set; }            
}

public class Storage
{
    public int TimeOut { get; set; }
    public BlobSettings[] Blobs { get; set; }
}

public class BlobSettings
{
    public string Name { get; set; }
    public string Url { get; set; }
}

Then, we set up our IConfiguration for the application.

var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json")
    .Build();

And once you’ve brought in the Microsoft.Extensions.Options package, you can configure the IOptions service and make AppConfig available.

public void ConfigureServices(IServiceCollection services)
{
    // ...

    services.AddOptions();
    services.Configure<AppConfig>(config);
}

With everything in place, you can inject IOptions<AppConfig> anywhere in the application, and the object will have the settings from the configuration file.
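
For example, a controller could take a dependency on the options like so (a minimal sketch; the controller and action are hypothetical):

using System.Linq;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Options;

public class StorageController : Controller
{
    private readonly AppConfig _config;

    public StorageController(IOptions<AppConfig> options)
    {
        _config = options.Value;
    }

    public IActionResult Index()
    {
        // the Blobs array was populated from appsettings.json
        var primary = _config.Storage.Blobs.First(b => b.Name == "Primary");
        return Content(primary.Url);
    }
}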

ASP.NET Core Dependency Injection Understands Unbound Generics

Friday, April 21, 2017 by K. Scott Allen

Continuing with topics based on ASP.NET Core code reviews.

Here is a bit of code I came across in an application’s Startup class.

public void ConfigureServices(IServiceCollection services)
{
    services.AddScoped<IStore<User>, SqlStore<User>>();
    services.AddScoped<IStore<Invoice>, SqlStore<Invoice>>();
    services.AddScoped<IStore<Payment>, SqlStore<Payment>>();
    // ...
}

The actual code ran for many more lines, with the general idea that the application needs an IStore implementation for a number of distinct types in the system.
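
For context, the abstractions behind these registrations might look something like the following (hypothetical definitions; note the class constraint on SqlStore<T>, which will matter in a moment):

using System.Collections.Generic;
using System.Linq;

public interface IStore<T>
{
    void Save(T item);
    IEnumerable<T> FindAll();
}

public class SqlStore<T> : IStore<T> where T : class
{
    public void Save(T item) { /* write to SQL */ }
    public IEnumerable<T> FindAll() { /* query SQL */ return Enumerable.Empty<T>(); }
}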

Because ASP.NET Core understands unbound generics, there is only one line of code required.

public void ConfigureServices(IServiceCollection services)
{
    services.AddScoped(typeof(IStore<>), typeof(SqlStore<>));
}

Unbound generics are not useful in day-to-day business programming, but if you are curious how the process works, I show how to use unbound generics at a low level in my C# Generics course.

One downside to this approach is the fact that you might experience a runtime error (instead of a compile error) if a component requests an implementation of IStore<T> that isn’t possible. For example, if a concrete implementation of IStore<T> uses a generic constraint of class, then the following would happen:

var provider = services.BuildServiceProvider();

Assert.Throws<ArgumentException>(() =>
{
    // int does not satisfy the class constraint on the concrete type
    provider.GetRequiredService<IStore<int>>();
});

However, this problem should be avoidable, because the failure surfaces the first time a component requests the unsupported service.

ASP.NET Core Middleware Components are Singletons

Wednesday, April 19, 2017 by K. Scott Allen

This is the first post in a series of posts based on code reviews of systems where ASP.NET Core is involved.

I recently came across code like the following:

public class FaultyMiddleware
{
    public FaultyMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        // saving the context so we don't need to pass around as a parameter
        this._context = context;

        DoSomeWork();

        await _next(context);            
    }

    private void DoSomeWork()
    {
        // code that calls other private methods
    }

    // ...

    HttpContext _context;
    RequestDelegate _next;
}

The problem here is a misunderstanding of how middleware components work in the ASP.NET Core pipeline. ASP.NET Core uses a single instance of a middleware component to process multiple requests, so it is best to think of the component as a singleton. Saving state into an instance field is going to create problems when there are concurrent requests working through the pipeline.

If there is so much work to do inside a component that you need multiple private methods, a possible solution is to delegate the work to another class and instantiate the class once per request. Something like the following:

public class RequestProcessor
{
    private readonly HttpContext _context;

    public RequestProcessor(HttpContext context)
    {
        _context = context;
    }

    public void DoSomeWork()
    {
        // ... 
    }
}
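
The middleware itself then shrinks to something like the following sketch (the name is hypothetical), creating a new RequestProcessor for every request so concurrent requests never share state:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public class SaferMiddleware
{
    private readonly RequestDelegate _next;

    public SaferMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        // one processor per request, so no shared mutable state
        var processor = new RequestProcessor(context);
        processor.DoSomeWork();

        await _next(context);
    }
}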

Now the middleware component has the single responsibility of following the implicit middleware contract so it fits into the ASP.NET Core processing pipeline. Meanwhile, the RequestProcessor, once given a more suitable name, is a class the system can use anytime there is work to do with an HttpContext.

Developing with .NET on Microsoft Azure

Tuesday, April 4, 2017 by K. Scott Allen

My latest Pluralsight course is live and covers Azure from a .NET developer’s perspective. Some of what you’ll learn includes:

- How to create an app service to host your web application and API backend

- How to monitor, manage, debug, and scale an app service

- How to configure and use an Azure SQL database

- How to configure and use a DocumentDB collection

- How to work with storage accounts and blob storage

- How to take advantage of serverless computing with Azure Functions

- How to setup a continuous delivery pipeline into Azure from Visual Studio Team Services

- And much more …

Developing with .NET on Microsoft Azure

Thanks for watching!

The Joy of Azure CLI 2.0

Monday, April 3, 2017 by K. Scott Allen

The title here is based on a book I remember in my mom’s kitchen: The Joy of Cooking. The cover of her book was worn, while the inside was dog-eared and bookmarked with notes. I started reading my mom’s copy when I started working in a restaurant for spending money. In the days before TV channels dedicated to cooking, I learned quite a bit about cooking from this book and on-the-job training. The book is more than a collection of recipes. There is prose and personality inside.

I have a copy in my kitchen now.

Azure CLI 2

The new Azure CLI 2 is my favorite tool for Azure operations from the command line. The installation is simple but does have a dependency on Python. I look at the Python dependency as a good thing, since Python allows the CLI to work on macOS, Windows, and Linux. You do not need to know anything about Python to use the CLI, although Python is a fun language to learn and use. I’ve done one course with Python and one day hope to do more.

The operations you can perform with the CLI are easy to find, since the tool organizes operations into hierarchical groups and sub-groups. After installation, just type “az” to see the top-level commands (many of which are not in the picture).

Azure CLI 2 at work

You can use the ubiquitous -h switch to find additional subgroups. For example, here are the commands available for the “az appservice web” group.

For many scenarios, you can use the CLI instead of using the Azure portal. Let’s say you’ve just used a scaffolding tool to create an application with Node or .NET Core, and now you want to create a web site in Azure with the local code. First, we’d place the code into a local git repository.

git init
git add .
git commit -a -m "first commit"

Now you use a combination of git and az commands to create an app service and push the application to Azure.

az group create --location "eastus" --name sample-app
az appservice plan create --name sample-app-plan --resource-group sample-app --sku FREE
az appservice web create --name sample-app --resource-group sample-app --plan sample-app-plan
az appservice web source-control config-local-git --name sample-app --resource-group sample-app

git remote add azure "https://[url-result-from-previous-operation]"
git push azure master

We can then have the CLI launch a browser to view the new application.

az appservice web browse --name sample-app --resource-group sample-app

To shorten the above commands, use -n for the name switch, and -g for the resource group name.
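
For example, the browse command above shortens to:

az appservice web browse -n sample-app -g sample-app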

Joyous.
