When Microsoft released the source code to MS-DOS and Word, I had to take a look. One of the first functions I came across was ReplacePropsCa from the srchfmt.c file.
/* %%Function:ReplacePropsCa %%Owner:rosiep */
ReplacePropsCa(prpp, pca)
struct RPP *prpp;
struct CA *pca;
{
    struct CA caInval;

    if (prpp->cbgrpprlChp)
        {
        ExpandCaSprm(pca, &caInval, prpp->grpprlChp);
        ApplyGrpprlCa(prpp->grpprlChp, prpp->cbgrpprlChp, pca);
        if (!vfNoInval)
            {
            InvalCp(pca);
            InvalText(pca, fFalse /* fEdit */);
            }
        }

    if (prpp->cbgrpprlPap)
        {
        int fStc;
        struct CHP chp;
        struct PAP pap;

        if (fStc = (*prpp->grpprlPap == sprmPStc))
            {
            CachePara(pca->doc, pca->cpFirst);
            pap = vpapFetch;
            }

        ExpandCaSprm(pca, &caInval, prpp->grpprlPap);
        ApplyGrpprlCa(prpp->grpprlPap, prpp->cbgrpprlPap, pca);

        if (fStc)
            {
            GetMajorityChp(pca, &chp);
            EmitSprmCMajCa(pca, &chp);
            if (!FMatchAbs(pca->doc, &pap, &vpapFetch))
                InvalPageView(pca->doc);
            }

        if (!vfNoInval)
            {
            InvalCp(&caInval);
            InvalText (pca, fFalse /* fEdit */);
            DirtyOutline(pca->doc);
            }
        }
}
Thought #1: Every Function Has An Owner. Although I see the occasional project where each file has a comment indicating the owner, I don’t remember ever seeing ownership declared on individual functions. I think the concept of collective ownership is a healthier approach to building software, both for the software and the developers. Today’s tools also make it easier to jump around in code.
Thought #2: The Flow Control Is All Wrong. Oh, wait, the flow control seems ok, it’s just the funny indentation of curly braces setting off alarm bells. Joel Spolsky has a post from 2005 titled Making Wrong Code Look Wrong in which he says:
This is the real art: making robust code by literally inventing conventions that make errors stand out on the screen.
After many years of working in three different languages that use { and }, my eyes are accustomed to looking for a closing curly brace in the same column as the if. Not seeing the curly there means code might accidentally execute outside the conditional check. This function hides the closing curly and is full of evil.
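To make the complaint concrete, here is a contrived JavaScript sketch (the function names are made up, nothing here is from the Word source) showing how an aligned closing brace makes the boundary of a conditional obvious, and how the brace-indented style buries it.

// Conventional style: the closing brace sits in the same column as the if,
// so the end of the guarded block is easy to spot.
if (needsInvalidation) {
    invalidateText();
}
refreshView(); // clearly runs unconditionally

// Brace-indented style: the closing brace hides among the statements, and a
// quick scan can misread which lines the if actually guards.
if (needsInvalidation)
    {
    invalidateText();
    }
refreshView(); // still unconditional, but easier to misread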
Thought #3: The Notation Is Hilarious. Call it Hungarian Notation, or Anti-Hungarian Notation, or something not Hungarian at all but a custom DSL designed in C. In any case, the idea of checking to see if a prpp->grpprlPap is equal to a sprmPStc is just one brick in a wall of gibberish that reminds me of a Lewis Carroll poem.
’Twas brillig, and the slithy toves
Did gyre and gimble in the wabe:
All mimsy were the borogoves,
And the mome raths outgrabe.
Both the function and the poem include gibberish, but at least the Lewis Carroll poem rhymes.
The idea is to dynamically generate a tabbed navigation using Angular and UI Bootstrap.
I’ve done this before, but this time around I needed the ability to deep link into a tab. That is, if a user bookmarks /someapp/tab2, then the 2nd tab should be active with its content showing.
Instead of using ngRoute, which is a bit simplistic, I decided to use UI Router. UI Router is not without quirks and bugs, but it does give the opportunity to set up multiple, named “states” for an application, and it can manage nested states and routes through associated URLs. One of the first steps in working with UI Router is configuring the known states:
var app = angular.module("routedTabs", ["ui.router", "ui.bootstrap"]);

app.config(function($stateProvider, $urlRouterProvider) {

    $urlRouterProvider.otherwise("/main/tab1");

    $stateProvider
        .state("main", {
            abstract: true,
            url: "/main",
            templateUrl: "main.html"
        })
        .state("main.tab1", {
            url: "/tab1",
            templateUrl: "tab1.html"
        })
        .state("main.tab2", {
            url: "/tab2",
            templateUrl: "tab2.html"
        })
        .state("main.tab3", {
            url: "/tab3",
            templateUrl: "tab3.html"
        });
});
In the above code, “main” is a parent state with three children (tab1, tab2, and tab3). Each child has an associated URL (which will be appended to the parent URL, so main.tab2, for example, lives at /main/tab2) and a template. Each child template will plug into the parent template of main.html, which itself has to plug into the application shell.
In other words, the shell of the application uses the ui-view directive to position the parent template (main.html).
<body ng-app="routedTabs" class="container">
    <div ui-view></div>
</body>
This is not much different than using ngRoute and its ng-view directive, but UI Router also allows main.html to use another ui-view directive where one of the child templates will appear.
<div ng-controller="mainController">

    <tabset>
        <tab ng-repeat="t in tabs"
             heading="{{t.heading}}"
             select="go(t.route)"
             active="t.active">
        </tab>
    </tabset>

    <h2>View:</h2>
    <div ui-view></div>

</div>
This view requires a controller to provide the tab data.
app.controller("mainController", function($rootScope, $scope, $state) { $scope.tabs = [ { heading: "Tab 1", route:"main.tab1", active:false }, { heading: "Tab 2", route:"main.tab2", active:false }, { heading: "Tab 3", route:"main.tab3", active:false }, ]; $scope.go = function(route){ $state.go(route); }; $scope.active = function(route){ return $state.is(route); }; $scope.$on("$stateChangeSuccess", function() { $scope.tabs.forEach(function(tab) { tab.active = $scope.active(tab.route); }); }); });
The only reason to listen for UI Router’s $stateChangeSuccess event is to keep the right tab highlighted if the URL changes. It’s a bit of a hack, and it actually makes me wonder if using tabs from UI Bootstrap is worth the extra code, or if it would be easier to write something custom that integrates directly with UI Router.
If you want to try the code for yourself, here it is on Plunker.
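For what it’s worth, here is a rough, untested sketch of the “something custom” idea: a tiny directive (the otcTabs name and its template are hypothetical) that asks $state directly which tab is active, so there is no $stateChangeSuccess listener and no per-tab active flag to keep in sync.

app.directive("otcTabs", function ($state) {
    return {
        restrict: "EA",
        scope: { tabs: "=" },
        // Bootstrap's nav-tabs styling stands in for UI Bootstrap's tabset.
        template:
            '<ul class="nav nav-tabs">' +
              '<li ng-repeat="t in tabs" ng-class="{ active: isActive(t.route) }">' +
                '<a href="" ng-click="go(t.route)">{{t.heading}}</a>' +
              '</li>' +
            '</ul>',
        link: function (scope) {
            // ng-class re-evaluates isActive on every digest, so the highlight
            // follows the current state automatically.
            scope.go = function (route) { $state.go(route); };
            scope.isActive = function (route) { return $state.is(route); };
        }
    };
});

With something like this, the markup in main.html would shrink to a single element, roughly <div otc-tabs tabs="tabs"></div>, sitting above the ui-view.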
A few months ago I found myself in a situation where I had to throw some dynamically generated scripts into the browser for testing (and occasionally inspecting and debugging). I wrote a small custom directive to take care of most of the work.
<div ng-controller="mainController">
    <div otc-scripts scripts="scripts"></div>
</div>
In the above markup, it’s the mainController that will fetch the script text for execution, which I’ll simulate with the code below.
app.controller("mainController", function($scope, $timeout) { $scope.scripts = []; $timeout(function () { $scope.scripts = [ "alert('Hello');", "alert('World');" ]; }, 2000); });
The rest of the work relies on the otcScripts directive, which watches for a new script array to appear and then creates script tags and places the tags into the DOM.
app.directive("otcScripts", function() { var updateScripts = function (element) { return function (scripts) { element.empty(); angular.forEach(scripts, function (source, key) { var scriptTag = angular.element( document.createElement("script")); source = "//@ sourceURL=" + key + "\n" + source; scriptTag.text(source) element.append(scriptTag); }); }; }; return { restrict: "EA", scope: { scripts: "=" }, link: function(scope,element) { scope.$watch("scripts", updateScripts(element)); } }; });
I’m sure there are many different approaches to achieving the same result, but note the above code uses document.createElement directly, as this appears to be a foolproof approach that works consistently. The directive also prepends a @sourceURL comment to each script. If you give the @sourceURL a recognizable name, you’ll be able to find the code more easily in the Chrome debugger.
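As a small aside, the directive uses the collection key as the sourceURL name, so one hypothetical tweak to the earlier controller is to hand it an object keyed by file-like names instead of an array; the keys then show up as the script names in the debugger.

// Hypothetical variation on the earlier controller: an object instead of an array,
// so the keys (and therefore the @sourceURL names) are readable in the debugger.
$scope.scripts = {
    "greeting.js": "alert('Hello');",
    "farewell.js": "alert('World');"
};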
Azure WebJobs are background services you can run in the cloud. The experience is easy and smooth. Scott Hanselman has a thorough overview in “Introducing Windows Azure WebJobs”.
In a previous post we looked at using JavaScript to read messages from Azure Queue storage. We can use the code from that previous post in an Azure WebJob by creating a run.js file. WebJobs will automatically execute a run.js file using Node.
var config = require("./config.json");
var queue = require("./queue")(config);

var checkQueue = function () {
    queue.getSingleMessage()
         .then(processMessage)
         .catch(processError)
         .finally(setNextCheck);
};

var processMessage = function (message) {
    if (message) {
        console.dir(message);
        // processing commands, then ...
        return queue.deleteMessage(message);
    }
};

var processError = function(reason) {
    console.log("Error:");
    console.log(reason);
};

var setNextCheck = function () {
    setTimeout(checkQueue, config.checkFrequency);
};

checkQueue();
All that’s needed to deploy the job is to zip up run.js with all its dependencies (including the node_modules directory) and upload the zip into an Azure website.
The above code expects to run continuously and poll a queue. You can configure each job to run continuously, on a schedule, or on demand in the Azure portal. Azure will store any output from the program in a log file that is one click away.
“How to deploy Windows Azure WebJobs” by Amit Apple is a behind-the-scenes look at how to deploy a WebJob using Git or FTP.
The Azure SDK for Node.js is feature rich and comprehensive, but there is always room to provide some additional abstraction and tailor an API to make it easier to use inside a specific application.
For example, imagine we need the ability to get a single message from an Azure queue, and also to delete a single message. Here is a module that exposes a custom API for use in a bigger application. As a bonus, the module adapts the API to use promises instead of callbacks.
var Q = require("Q"); var azure = require("azure"); module.exports = function(config) { var retryOperations = new azure.ExponentialRetryPolicyFilter(); var queueService = azure.createQueueService(config.storageName, config.storageKey) .withFilter(retryOperations); var singleMessageDefaults = { numofmessages: 1, visibilitytimeout: 2 * 60 }; var getSingleMessage = function() { var deferred = Q.defer(); queueService.getMessages(config.queueName, singleMessageDefaults, getSingleMessageComplete(deferred)); return deferred.promise; }; var deleteMessage = function(message) { var deferred = Q.defer(); queueService.deleteMessage(config.queueName, message.messageid, message.popreceipt, deleteComplete(deferred)); return deferred.promise; }; var getSingleMessageComplete = function(deferred) { return function(error, messages) { if (error) { deferred.reject(error); } else { if (messages.length) { deferred.resolve(messages[0]); } else { deferred.resolve(); } } }; }; var deleteComplete = function(deferred) { return function(error) { if (error) { deferred.reject(error); } else { deferred.resolve(); } }; }; return { getSingleMessage: getSingleMessage, deleteMessage: deleteMessage }; };
The configuration can live in a simple .json file.
{ "storageName": "name", "storageKey": "qY5Qk...==", "tableName": "patients", "queueName": "exportrequests" }
And now checking the queue is easy.
var config = require("./config.json");
var queue = require("./queue")(config);

queue.getSingleMessage()
     .then(processMessage)
     .catch(processError);
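Here, processMessage and processError are whatever the application needs them to be; assuming they are declared before the call above, a minimal sketch might look like this:

// Minimal placeholder handlers for the snippet above. getSingleMessage resolves
// with undefined when the queue is empty, so check before using the message.
var processMessage = function (message) {
    if (message) {
        console.dir(message);
    }
};

var processError = function (reason) {
    console.log("Error:");
    console.log(reason);
};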
Next up, we’ll see how to use this code from Node.js inside a continuously running Azure WebJob.
In software development we face many constraints, and we usually think of constraints as bad things that make our jobs miserable. If we had no constraints, we’d build beautiful software with impeccable error handling because there would be no errors.
In one of my first jobs I wrote firmware for lab devices. Each device had a 32kb ROM for program storage, and those 32kb of memory constrained the type of software I could create, and the tools I could use. As an aside, the limited memory did not constrain me from destroying small thermal printers attached to each device because 32kb is still big enough to hide the machine code that writes to memory-mapped IO in an accidentally infinite loop.
Constraints don’t always have to be a negative, however, and I was reminded of this when reading Scarcity: Why Having Too Little Means So Much. Half the book is dedicated to the destructive impact of scarcity.
Scarcity captures our minds automatically. And when it does, we do not make trade-offs using a careful cost benefit analysis.
But the other half of the book highlights the unintuitive benefits of scarcity. The authors pointed to one study where two groups of college undergrads were paid to proofread three essays. One group was given a single deadline three weeks out, and the other group was given three one week deadlines. The group with tighter, more frequent deadlines achieved higher productivity.
They were late less often (although they had more deadlines to miss), they found more typos, and they earned more money.
I know some of you are thinking that the first group was waterfall and the second group was agile and of course the 2nd group performed better because agile iterations are always better and everyone is doing agile, even the people who define requirements during an 8 hour scrum at an off site location where they won’t get interrupted by customers who might pepper them with mundane questions.
The book does a good job of pointing out how a scarcity of any resource (time, money, or food, for example) will make our minds focus on the immediate problem, and this focus can yield positive results. However, the focus can also lead to long-term problems.
I've never seen a software project that was developed too fast. An application that will take one year of work will be even better if it only takes 6 months, and to make everyone happy the app would have to be done by yesterday. We are always working under time constraints.
It’s not surprising, then, that most tools, methodologies, languages, frameworks, and snake oils in this industry try to solve, either directly or indirectly, the time constraint. It doesn’t matter if we talk about sprints, Rails, code generators, or Erlang; at one point or another they all pledge to make us more productive or otherwise allow us to deliver more features in less time inside a specific context.
We’ve also developed some safeguards against being too quick, like unit testing. And often these safeguards are in direct conflict with the tools and frameworks we use to boost productivity. A framework might allow us to crank out features quickly, but if the code it produces is untestable, maintenance might be a problem in the future.
My days of programming in a memory-constrained environment are long gone, but there are always scarce resources on any software project. The Scarcity book meanders at times, but it is an interesting read for anyone in software, because the psychological impacts of scarcity are valuable to understand in this industry.
Although Azure Blob storage has a formal API that you can use from C#, Node, and many other environments, behind it all is a simple HTTP API. For example, with .NET you can use HttpClient to PUT a new blob into storage.
private readonly HttpClient _client = new HttpClient();
private readonly string _url = "https://pathtoblob";

public async Task WriteBlobStringAsync(string data)
{
    var content = new StringContent(data);
    content.Headers.Add("x-ms-blob-type", "BlockBlob");

    var response = await _client.PutAsync(_url, content);
    response.EnsureSuccessStatusCode();
}
The same can be done with Node using request.
var request = require('request');

var options = {
    url: 'https://bitmask.blob.core.windows.net/test/readme.txt?…',
    body: "Hello from Node",
    method: 'PUT',
    headers: {
        'x-ms-blob-type': 'BlockBlob'
    }
};

var processResponse = function(error, response) {
    if (error) {
        console.log('error: ' + error);
    } else {
        console.log('response: ' + response.statusMessage);
        console.log('etag: ' + response.headers.etag);
    }
};

request(options, processResponse);
You could also manage an upload using Node’s https module, but https is a low-level API and you’d have to manage many small details yourself, like the Content-Length header.
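For comparison, here is a sketch of what that lower-level approach might look like (the SAS URL is a placeholder and the Content-Type value is an assumption); every header, including Content-Length, becomes our responsibility.

var https = require("https");
var url = require("url");

// Placeholder URL: a real request needs the full SAS query string.
var blobUrl = url.parse("https://bitmask.blob.core.windows.net/test/readme.txt?...");
var body = "Hello from Node";

var options = {
    hostname: blobUrl.hostname,
    path: blobUrl.path, // includes the SAS query string
    method: "PUT",
    headers: {
        "x-ms-blob-type": "BlockBlob",
        "Content-Type": "text/plain",
        "Content-Length": Buffer.byteLength(body)
    }
};

var request = https.request(options, function (response) {
    console.log("status: " + response.statusCode);
    console.log("etag: " + response.headers.etag);
});

request.on("error", function (error) {
    console.log("error: " + error);
});

request.write(body);
request.end();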
The code examples above expect to use URLs with Shared Access Signatures in the query string, so there is no need to know the storage account access keys or manage authorization headers. Shared access signatures allow you to grant access to various storage features in a granular fashion. There is a good overview of SAS on the Azure web site.
As an example, the following C# code will create a SAS for a “readme.txt” file in the “test” storage container. The SAS is good for ~4 hours and grants someone read and write privileges on the readme.txt blob. Note that the readme.txt blob does not have to exist before the code creates the SAS.
var storageAccount = CloudStorageAccount.Parse(
    CloudConfigurationManager.GetSetting("ConnectionNameInConfig"));

var blobClient = storageAccount.CreateCloudBlobClient();
var container = blobClient.GetContainerReference("test");
var blob = container.GetBlockBlobReference("readme.txt");

var sasConstraints = new SharedAccessBlobPolicy
{
    SharedAccessStartTime = DateTime.UtcNow.AddMinutes(-15),
    SharedAccessExpiryTime = DateTime.UtcNow.AddHours(4),
    Permissions = SharedAccessBlobPermissions.Read |
                  SharedAccessBlobPermissions.Write
};

string sasBlobToken = blob.GetSharedAccessSignature(sasConstraints);

return blob.Uri + sasBlobToken;
The SAS combined with the blob URL will look like:
https://bitmask.blob.core.windows.net/test/readme.txt?sv=2013-08-15&sr=b&sig=---&st=2014-03-15T15%3A27%3A14Z&se=2014-03-15T19%3A42%3A14Z&sp=rw
The SAS can be handed to clients, who can then access storage directly instead of streaming data through your application as an intermediary. The example here is what Azure calls an ad hoc SAS, because all of the details, like the permissions (sp=rw) and the expiry time (se), are in the URL itself.