Azure WebJobs With Node.js

Monday, April 7, 2014 by K. Scott Allen

Azure WebJobs are background services you can run in the cloud. The experience is easy and smooth. Scott Hanselman has a thorough overview in “Introducing Windows Azure WebJobs”.

In a previous post we looked at using JavaScript to read messages from Azure Queue storage.  We can use the code from that previous post in an Azure WebJob by creating a run.js file. WebJobs will automatically execute a run.js file using Node.

var config = require("./config.json");
var queue = require("./queue")(config);

var checkQueue = function () {
    queue.getSingleMessage()
        .then(processMessage)
        .catch(processError)
        .finally(setNextCheck);
};

var processMessage = function (message) {   
    if (message) {        
        console.dir(message);

        // processing commands, then ...

        return queue.deleteMessage(message);
    }
};

var processError = function(reason) {
    console.log("Error:");
    console.log(reason);
};

var setNextCheck = function () {
    setTimeout(checkQueue, config.checkFrequency);
};

checkQueue();
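
The run.js above reads a checkFrequency value from config.json to control how often it polls the queue. That setting would sit alongside the storage settings; a minimal sketch of the entry (the 10 second interval, in milliseconds, is just an illustrative value):

{
    "checkFrequency": 10000
}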

All that’s needed to deploy the job is to zip up run.js with all its dependencies (including the node_modules directory) and upload the zip into an Azure website.

The above code expects to run continuously and poll a queue. You can configure each job to run continuously, on a schedule, or on demand in the Azure portal. Azure will store any output from the program in a log file that is one click away. 

Another Useful Link

“How to deploy Windows Azure WebJobs” by Amit Apple is a behind-the-scenes look at how to deploy a WebJob using Git or FTP.

Adapting The Azure Queue API For Node.js

Thursday, April 3, 2014 by K. Scott Allen

The Azure SDK for Node.js is feature rich and comprehensive, but there is always room to provide some additional abstraction and tailor an API to make it easier to use inside a specific application.

For example, imagine we need the ability to get a single message from an Azure queue, and also to delete a single message. Here is a module that can expose a custom API for use in a bigger application. As a bonus, the module adapts the API to use promises instead of callbacks.

var Q = require("q");
var azure = require("azure");

module.exports = function(config) {

    var retryOperations = new azure.ExponentialRetryPolicyFilter();
    var queueService = azure.createQueueService(config.storageName, config.storageKey)
                            .withFilter(retryOperations);
    var singleMessageDefaults = { numofmessages: 1, visibilitytimeout: 2 * 60 };

    var getSingleMessage = function() {
        var deferred = Q.defer();
        queueService.getMessages(config.queueName, singleMessageDefaults,
                                 getSingleMessageComplete(deferred));
        return deferred.promise;
    };

    var deleteMessage = function(message) {
        var deferred = Q.defer();        
        queueService.deleteMessage(config.queueName, message.messageid,
                                   message.popreceipt, deleteComplete(deferred));
        return deferred.promise;
    };

    var getSingleMessageComplete = function(deferred) {
        return function(error, messages) {
            if (error) {
                deferred.reject(error);
            } else {
                if (messages.length) {
                    deferred.resolve(messages[0]);
                } else {
                    deferred.resolve();
                }
            }
        };
    };

    var deleteComplete = function(deferred) {
        return function(error) {
            if (error) {
                deferred.reject(error);
            } else {
                deferred.resolve();
            }
        };
    };

    return {
        getSingleMessage: getSingleMessage,
        deleteMessage: deleteMessage
    };
};

The configuration can live in a simple .json file.

{
    "storageName": "name",
    "storageKey": "qY5Qk...==",    
    "tableName": "patients",
    "queueName": "exportrequests"
}

And now checking the queue is easy.

var config = require("./config.json");
var queue = require("./queue")(config);

queue.getSingleMessage()
     .then(processMessage)
     .catch(processError);

Next up, we’ll see how to use this code from Node.js inside a continuously running Azure WebJob.

Scarcity In Software Development

Wednesday, April 2, 2014 by K. Scott Allen

In software development we face many constraints, and we usually think of constraints as bad things that make our jobs miserable. If we had no constraints, we’d build beautiful software with impeccable error handling because there would be no errors.

In one of my first jobs I wrote firmware for lab devices. Each device had a 32KB ROM for program storage, and those 32KB of memory constrained the type of software I could create and the tools I could use. As an aside, the limited memory did not prevent me from destroying small thermal printers attached to each device, because 32KB is still big enough to hide machine code that writes to memory-mapped IO in an accidentally infinite loop.

Constraints don’t always have to be a negative, however, and I was reminded of this when reading Scarcity: Why Having Too Little Means So Much. Half the book is dedicated to the destructive impact of scarcity. 

Scarcity captures our minds automatically. And when it does, we do not make trade-offs using a careful cost benefit analysis.

But the other half of the book highlights the unintuitive benefits of scarcity. The authors pointed to one study where two groups of college undergrads were paid to proofread three essays. One group was given a single deadline three weeks out, and the other group was given three one week deadlines. The group with tighter, more frequent deadlines achieved higher productivity. 

They were late less often (although they had more deadlines to miss), they found more typos, and they earned more money.

I know some of you are thinking that the first group was waterfall and the second group was agile, and of course the second group performed better because agile iterations are always better and everyone is doing agile, even the people who define requirements during an 8-hour scrum at an off-site location where they won’t get interrupted by customers who might pepper them with mundane questions.

The book does a good job of pointing out how a scarcity of any resource (time, money, or food, for example) will make our minds focus on the immediate problem, and this focus can yield positive results. However, the focus can also lead to long-term problems.

A Scarcity Of Time

I've never seen a software project that was developed too fast. An application that will take one year of work will be even better if it only takes 6 months, and to make everyone happy the app would have to be done by yesterday. We are always working under time constraints. 

It’s not surprising, then, that most tools, methodologies, languages, frameworks, and snake oils in this industry try to solve, either directly or indirectly, the time constraint. It doesn’t matter if we talk about sprints, Rails, code generators, or Erlang; they all at one point or another pledge to make us more productive or otherwise allow us to deliver more features in less time inside a specific context.

We’ve also developed some safeguards against being too quick, like unit testing. And often these safeguards are in direct conflict with the tools and frameworks we use to boost productivity. A framework might allow us to crank out features quickly, but the code is untestable, so maintenance might be a problem in the future.

The days of me programming in a memory constrained environment are long gone, but there are always scarce resources in any software project. The Scarcity book meandered at times, but it is an interesting read for anyone in software because the psychological impacts of scarcity are valuable to understand in this industry.

Http Clients and Azure Blob Storage

Monday, March 31, 2014 by K. Scott Allen

Although Azure Blob storage has a formal API that you can use from C#, Node, and many other environments, behind it all is a simple HTTP API. For example, with .NET you can use HttpClient to PUT a new blob into storage.

private readonly HttpClient _client = new HttpClient();
private readonly string _url = "https://pathtoblob";

public async Task WriteBlobStringAsync(string data)
{
    var content = new StringContent(data); // e.g. "Hello from C#"
    content.Headers.Add("x-ms-blob-type", "BlockBlob");

    var response = await _client.PutAsync(_url, content);
    response.EnsureSuccessStatusCode();            
}

The same can be done with Node using request.

var request = require('request');

var options = {
    url: 'https://bitmask.blob.core.windows.net/test/readme.txt?…',
    body: "Hello from Node",
    method: 'PUT',
    headers: {
        'x-ms-blob-type': 'BlockBlob'
    }    
};

var processResponse = function(error, response) {
    if (error) {
        console.log('error: ' + error);
    } else {
        console.log('response: ' + response.statusMessage);
        console.log('etag: ' + response.headers.etag);
    }    
};

request(options, processResponse);

You could also manage an upload using Node’s https module, but https is a low level API and you’d have to manage many small details, like the Content-Length header.
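
For comparison, here is a rough sketch of the same PUT using the https module directly; the hostname and path below are placeholders for a URL carrying a SAS, and the query string is elided.

var https = require('https');

var body = 'Hello from Node';

var options = {
    hostname: 'bitmask.blob.core.windows.net',
    path: '/test/readme.txt?...',
    method: 'PUT',
    headers: {
        'x-ms-blob-type': 'BlockBlob',
        'Content-Length': Buffer.byteLength(body)
    }
};

var req = https.request(options, function (response) {
    console.log('status: ' + response.statusCode);
});

req.on('error', function (error) {
    console.log('error: ' + error);
});

req.write(body);
req.end();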

Uploads and Shared Access Signatures

Both these code examples expect to use URLs with Shared Access Signatures in the query string, so there is no need to know the storage account access keys or manage authorization headers. Shared access signatures allow you to grant access to various storage features in a granular fashion. There is a good overview of SAS on the Azure web site.

As an example, the following C# code will create a SAS for a “readme.txt” file in the “test” storage container. The SAS is good for ~4 hours and grants someone read and write privileges on the readme.txt blob. Note that the readme.txt blob does not have to exist before the code creates the SAS.

var storageAccount = CloudStorageAccount.Parse(
    CloudConfigurationManager.GetSetting("ConnectionNameInConfig"));
var blobClient = storageAccount.CreateCloudBlobClient();
var container = blobClient.GetContainerReference("test");

var blob = container.GetBlockBlobReference("readme.txt");
var sasConstraints = new SharedAccessBlobPolicy
{
    SharedAccessStartTime = DateTime.UtcNow.AddMinutes(-15),
    SharedAccessExpiryTime = DateTime.UtcNow.AddHours(4),
    Permissions = SharedAccessBlobPermissions.Read | SharedAccessBlobPermissions.Write
};
string sasBlobToken = blob.GetSharedAccessSignature(sasConstraints);
return blob.Uri + sasBlobToken;

The SAS combined with the blob URL will look like:

https://bitmask.blob.core.windows.net/test/readme.txt?sv=2013-08-15&sr=b&sig=---&st=2014-03-15T15%3A27%3A14Z&se=2014-03-15T19%3A42%3A14Z&sp=rw

The SAS can be handed to clients who can now directly access storage instead of streaming data through your application as an intermediary.  The example here is what Azure calls an Ad hoc SAS, because all of the details, like the permissions and expiry time are in the URL itself.

Some Useful IIS Rewrite Rules

Thursday, March 27, 2014 by K. Scott Allen

A few months ago Mads posted some IIS URL Rewrite rules in a post titled “URL rewrite and the www subdomain”.

Years ago, when I rewrote this site in ASP.NET MVC, I found URL rewriting to be invaluable. Some URLs in the new version of this site became obsolete. For example, an efficient Gravatar implementation from the Web Helpers library replaced an Identicon HTTP handler. I wanted to explicitly purge the handler from search engine results with an HTTP 410 response. 

<rule name="obsolete identicon" stopProcessing="true">
  <match url="/IdenticonHandler.ashx" />
  <action type="CustomResponse" statusCode="410" statusReason="Gone" 
    statusDescription="…" />
</rule>

Instead of having 4 different RSS feeds for different sections of the site, I collapsed all content into a single RSS feed. All previous RSS endpoints now redirect to FeedBurner.

<rule name="article rss feed" stopProcessing="true">
  <match url="articles/rss.aspx" />
  <action type="Redirect" url="http://feeds.feedburner.com/OdeToCode" redirectType="Permanent" />
</rule>

To make routing a bit easier, I wanted to avoid processing URLs like /articles or /blogs in ASP.NET and redirect those requests to /articles/list with rules like the following.

<rule name="avoid articles directory" stopProcessing="true">
  <match url="articles[/]?$" />
  <action type="Redirect" url="articles/list" redirectType="Permanent" />
</rule>

A tricky scenario was preserving some endpoints that used the classic ASP.NET page name of “default.aspx” in the URL. For example, the list of all blog posts used to exist at /blogs/all/default.aspx, but I wanted to redirect these requests and avoid the page name.

<rule name="default page" stopProcessing="true">
  <match url="(.*)default.aspx" />
  <conditions>
    <add input="{REQUEST_URI}" negate="true" pattern="-default.aspx$" />
  </conditions>
  <action type="Redirect" url="{r:1}" redirectType="Permanent" />
</rule>

The negation condition in the above rule avoids redirecting requests for blog posts with default.aspx in the title of the post, of which there are 1 or 2.

Finally, I use a web.config transformation to add an additional rule in production to enforce the canonical host name of odetocode.com.

<rule name="Canonical Host Name" stopProcessing="true" 
      xdt:Transform="InsertBefore(/configuration/system.webServer/rewrite/rules/rule[1])">
  <match url="(.*)" />
  <conditions>
    <add input="{HTTP_HOST}" negate="true" pattern="^odetocode\.com$" />
  </conditions>
  <action type="Redirect" url="http://odetocode.com/{R:1}" redirectType="Permanent" />
</rule>

One post I found useful when developing these rules was RuslanY’s “10 URL Rewriting Tips and Tricks”.

Tips For JavaScript Promises

Wednesday, March 26, 2014 by K. Scott Allen

There are a couple of scenarios where I occasionally see too much JavaScript code being written with promises.

For the examples, let’s assume we are working with a simple function returning a promise like the following doWork function. This code is using q, but everything here is also true for Angular’s $q service.

var doWork = function(){
    var deferred = Q.defer();

    setTimeout(function(){
        deferred.resolve("done");
    }, 1000);

    return deferred.promise;
};

Invoking doWork to get a result is simple.

var onSuccess = function(result) {
    console.log("Success: " + result);
};

var onError = function(reason) {
    console.log("Error: " + reason);
};

doWork().then(onSuccess, onError);

Wrapping A Promise Function

The first scenario that often involves too much code is the scenario where you want to wrap a promise to add some additional calculations after the initial promise resolves, but before returning a promise to a higher level component. Examples would include processing an HTTP response to add caching or some data manipulation.

With the doWork function, we might just want to add some additional text (“with added value”) to the return.

/* don't do this */
var workWrapper = function() {
    var defer = Q.defer();
    
    doWork().then(function(result) {
        defer.resolve(result + " with added value");
    }, function(reason){
        defer.reject(reason);
    });

    return defer.promise;
};

The above code goes to a lot of extra work to create a new deferred object and handle errors, but the same result could be achieved with less code.

/* do this instead */
var workWrapper = function() {
    return doWork().then(function(result) {
        return result + " with added value";
    });
};

An error will still propagate correctly, and the then function will capture the plain return value (a string) and wrap it in a promise. To the caller, it’s still easy to grab the final result.

workWrapper().then(onSuccess, onError);
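
To see that propagation at work, imagine a hypothetical doFailingWork that rejects instead of resolving. The wrapper adds no error handling of its own, yet the rejection still lands in onError:

/* a sketch with a hypothetical failing operation */
var doFailingWork = function() {
    var deferred = Q.defer();

    setTimeout(function() {
        deferred.reject("something went wrong");
    }, 1000);

    return deferred.promise;
};

var failingWrapper = function() {
    return doFailingWork().then(function(result) {
        return result + " with added value";
    });
};

// onSuccess is skipped; onError logs "something went wrong"
failingWrapper().then(onSuccess, onError);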

Do I Have A Promise Or Not?

A similar scenario exists when you have a function that may or may not return a promise. For example, if a request is made for some data and the data is found in a cache, a function can return the data immediately. If the data isn’t in the cache, the function might need to make an async call to fetch the data and return a promise for the future.

The below function simulates this scenario by returning a raw string value approximately half the time, and a promise the rest of the time.

var promiseOrValue = function() {
    if(Math.random() < 0.5) {
        return "done early";
    }
    else {
        return doWork();
    }
};

The easiest way to handle this scenario is not to test the return value to see if the value is a promise, but treat everything as a promise, which is easy to do with Q.when.

Q.when(promiseOrValue()).then(onSuccess, onError);

If you own the function, it’s even nicer if the function always returns a promise, even for data that is immediately available.

var alwaysAPromise = function() {
    if(Math.random() < 0.8) {
        return Q.when("done early");
    }
    else {
        return doWork();
    }
};

Then invoking the function is as easy as invoking the original doWork function.

alwaysAPromise().then(onSuccess, onError);

A Plunkr

If you want to experiment with promises, I put together a Plunkr using Jasmine to describe some of these behaviors with tests.
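
The specs follow Jasmine’s asynchronous style. A minimal sketch of that kind of test (not the exact code in the Plunkr) might look like this:

describe("workWrapper", function () {

    it("adds value to the result of doWork", function (done) {
        workWrapper().then(function (result) {
            expect(result).toBe("done with added value");
            done();
        });
    });

});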

Dynamic Routes with AngularJS

Monday, March 24, 2014 by K. Scott Allen

There is a simple rule in AngularJS that trips up many people because they simply aren’t aware of the rule. The rule is that every module has two phases, a configuration phase and a run phase. During the configuration phase you can only use service providers and constants, but during the run phase you only have access to services, and not the service providers.
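
As a quick sketch of the rule, using $log and its provider as a stand-in for any service/provider pair:

var app = angular.module("app", []);

// configuration phase: only providers and constants can be injected
app.config(function ($logProvider) {
    $logProvider.debugEnabled(false);
});

// run phase: only service instances can be injected
app.run(function ($log) {
    $log.info("application starting");
});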

One scenario where the rule will trip people up is the scenario where an application needs flexible, dynamic routes. Perhaps the routes are tailored to a user’s roles, like giving additional routes to a superuser, but regardless of the specifics you probably need some information from the server to generate the routes. The typical approach to server communication is to use the $http service, so a first attempt might be to write a config function that uses $http and $routeProvider to put together information on the available routes.

app.config(function ($http, $routeProvider) {

    var routes = $http.get("userInfo");
    // ... register routes with $routeProvider
                   
});

The above code will only generate an error.

Error: [$injector:unpr] Unknown provider: $http

Eventually you’ll figure out that a config function only has access to $httpProvider, not $http. Then you might try a run block, which does give you access to $http for server communication, but …

app.run(function ($http, $routeProvider) {

    var routes = $http.get("userInfo");
    // ... register routes with $routeProvider
    
});

… there is no access to providers during a run block.

[$injector:unpr] Unknown provider: $routeProviderProvider

There are a few different approaches to tackling this problem.

One approach would be to use a different wrapper for service communication, like jQuery’s $.get, perhaps combined with manual bootstrapping of the Angular application to ensure you have everything from the server you need to get started.
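
A rough sketch of that approach, assuming the shell page leaves out ng-app so Angular doesn’t bootstrap automatically, and assuming the userInfo endpoint returns an array of route definitions:

var app = angular.module("app", ["ngRoute"]);

// fetch the route data with jQuery before Angular starts
$.get("userInfo").done(function (routes) {

    // config blocks can still be registered here, because nothing
    // runs until angular.bootstrap creates the injector
    app.config(function ($routeProvider) {
        routes.forEach(function (route) {
            $routeProvider.when(route.path, route.properties);
        });
    });

    angular.element(document).ready(function () {
        angular.bootstrap(document, ["app"]);
    });
});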

A Solution With C# and Razor

Another approach would be to use server side rendering to embed the information you need into the shell page of the application. For example, let’s say you are using the following class definitions.

public class ClientRoute
{
    public string Path { get; set; }
    public ClientRouteProperties Properties { get; set; }
}

public class ClientRouteProperties
{
    public string TemplateUrl { get; set; }
    public string Controller { get; set; }
    public string Resolve { get; set; }
}

And also a ClientRouteBuilder that can generate client-side routes given the identity of a user.

public class ClientRouteBuilder
{
    public string BuildRoutesFor(IPrincipal user)
    {
        var routes = new List<ClientRoute>()
        {
            new ClientRoute { 
                Path = "/index",
                Properties = new ClientRouteProperties
                {
                    TemplateUrl = "index.html",
                    Controller = "IndexController"
                }
            }
            
            // ... more routes
        };

        if (user.IsInRole("admin"))
        {
            routes.Add(new ClientRoute
            {
                Path = "/admin",
                Properties = new ClientRouteProperties
                {
                    TemplateUrl = "admin.html",
                    Controller = "AdminController"
                }
            });
        }

        return JsonConvert.SerializeObject(routes,new JsonSerializerSettings()
        {
            ContractResolver = new CamelCasePropertyNamesContractResolver()
        });
    }
}

In a Razor view you can use the builder to emit a JavaScript data structure with all the required routes, and embed the JavaScript required to config the application in the view as well.

<body ng-app="app">
    <div ng-view>
        
    </div>
    
    <script src="~/Scripts/angular.js"></script>
    <script src="~/Scripts/angular-route.js"></script>
    <script>
        (function() {

            var app = angular.module("app", ["ngRoute"]);

            /*** embed the routes ***/
            var routes = @Html.Raw(new ClientRouteBuilder().BuildRoutesFor(User))

            /*** register the routes ***/
            app.config(function ($routeProvider) {
                routes.forEach(function(route) {
                    $routeProvider.when(route.path, route.properties);
                 });
                $routeProvider.otherwise({
                    redirectTo: routes[0].path
                });
            });
        }());    

    </script>
    @* Rest of the app scripts *@
</body>

And Remember

You can’t effectively enforce security on the client, so views and API calls still need authorization on the server to make sure a malicious user hasn’t manipulated the routes.
