The Sites That Want You Versus The Sites That Have You

Wednesday, May 14, 2014 by K. Scott Allen

Let’s divide the world wide web into two categories.

The Sites That Have You

These are the web sites that you must use because you are a captive customer. The web site for your primary bank would be one example.

The Sites That Want You

These are the sites that play in industries like banking and travel, but the only thing they have to attract you with is the site itself. They don’t own flying machines or safety deposit boxes. A site like this in the travel industry exists only to provide the best possible experience for searching for tickets and reservations.

The Contrast

Not surprisingly, the sites that want you typically have more fully featured websites that are easier to use than the websites of those who have you.

As an example, let’s look at zoomed out views of flight search results. The left hand side of the picture below shows the search results on Kayak (just using them as an example). The right hand side shows the search results of a major U.S. air carrier (instead of naming names, let’s call them Unified Airlines).

The areas highlighted in green are flight options with details or general information to help select a flight.

The areas shaded in blue are search and filtering controls to help narrow in on the perfect flight.

The areas in red are advertisements, fee warnings, upsell opportunities, pitches for Unified Airlines credit cards, and wasted white space.

Comparing flight search results

We can call the total amount of green space even, though Kayak displays twice as many flight results as Unified Airlines does above the fold.

The blue space winner is clearly Kayak. Unified doesn’t provide nearly as many filtering and sorting controls as Kayak, and the majority of the controls they do provide are not only at the bottom of the page, but also aren’t as interactive and require the browser to render an entirely new search results page.

It looks like the priority for Kayak’s development team is to build a great website for finding flights. The priority given to the Unified development team is to sign up travelers for a credit card. 

The Payload

Let’s pick on another company, this one I’ll call Charriott Hotels. I frequently stay at Charriott properties and use their web site to book rooms. On slow WiFi connections, the desktop version of the site takes forever to load, and a quick peek at the network tab of the developer tools explains why (the image to the right is a zoomed out view).

Charriott’s home page sends out 57 network requests for more than 900KB of total payload. However, this is not unusually bad. Most travel sites, even the ones that want you, make north of 50 requests for around 1MB of payload by the time all the destination vacation pictures and analytics scripts are finished. Plus, Charriott minifies most of their scripts and CSS, gzips and cache-controls their static content, and bundles some (but not all) of their files together.

Still, even casual observation shows areas for improvement. The largest asset is a 267KB download of minified script that includes:

  • jQuery
  • YUI 2.6
  • jQuery UI (including all effects)
  • 5 or 6 additional jQuery plugins (some un-minified)

I’m not a fan of optimizing script downloads just to save a kilobyte here and there, but for the home page of a major hotel brand I’d try to avoid loading two large script frameworks with overlapping functionality. The entire file must be downloaded and parsed before the home page is usable, and I’m certain it is possible to make this happen with less than 1/3 the amount of script currently in the page.

The Conclusion

Unified and Charriott actually have good web sites for the large companies that they are. Time and time again I see large company web sites that are disasters, even from technology companies that understand design and computers. I don’t believe this is the fault of the development teams. I believe bad web sites are the product of politics, design-by-committee processes, and the inherent difficulty in managing a large IT staff. The teams can make it happen; they just need the opportunity and an environment to make it happen.

OdeToCode Videos

Tuesday, May 13, 2014 by K. Scott Allen

I’ve started a collection of videos here on the site, and I’m starting with short clips about the next version of JavaScript – ECMAScript 6. Currently the collection includes:

  • Template strings
  • Rest parameters
  • Default parameter values
  • The spread operator

Topics coming soon include classes and arrow functions (my favorite feature, for now).
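
As a taste of the features in the current clips, here is a small sketch combining template strings, rest parameters, default parameter values, and the spread operator. ES6 is still emerging as I write this, so you’ll need an engine or transpiler that supports these features:

```javascript
// Default parameter value for name, rest parameter collecting extras
function greet(name = "world", ...rest) {
    // Template string with ${} interpolation
    return `Hello, ${name}! Extra args: ${rest.length}`;
}

console.log(greet());                 // Hello, world! Extra args: 0
console.log(greet("Scott", 1, 2, 3)); // Hello, Scott! Extra args: 3

// The spread operator expands an array into individual arguments
var args = ["Allen", 4, 5];
console.log(greet(...args));          // Hello, Allen! Extra args: 2
```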

Tips For Working With Windows Azure Media Services

Thursday, May 8, 2014 by K. Scott Allen

I’ve been doing some work with Windows Azure Media Services and making progress, although it takes some time and experimentation to work through the vocabulary of the API, documentation, and code snippets.

1. Uploading and encoding video into media services can be completed programmatically using the CloudMediaContext class from the NuGet package WindowsAzure.MediaServices.

2. Uploading creates an asset in media services. Each asset can contain multiple files, but you only want one video or audio file in the uploaded asset. WAMS will create a container in blob storage for each asset, so it seems best to create a new storage account dedicated to each media service.

3. For encoding you need to select a media processor by name, and perhaps a preset configuration by name. You can list the processor names using some C# code; the values I currently see are:

        Windows Azure Media Encoder 3.7
        Windows Azure Media Packager 2.8
        Windows Azure Media Encryptor 2.8
        Windows Azure Media Encoder 2.3
        Storage Decryption 1.7

Preset names took some digging around, but I eventually found a complete list for the WAME at Media Services Encoder System Presets.

What follows is a class that wraps CloudMediaContext and can list assets including the files inside, upload an asset, encode an asset, and list the available media processors. It is experimental code that assumes it is working inside a console application, but that behavior is easy to refactor out. Some of the LINQ queries are strange, but they work around the wonkiness of OData.

public class MediaService
{
    public MediaService()
    {
        _context = new CloudMediaContext(
            Configuration.AccountName, Configuration.AccountKey);
    }

    public void Upload(string filePath)
    {
        var assetName = Path.GetFileNameWithoutExtension(filePath) + "_i";
        var asset = _context.Assets.Create(assetName, AssetCreationOptions.None);

        var assetFileName = Path.GetFileName(filePath);
        var assetFile = asset.AssetFiles.Create(assetFileName);
        assetFile.UploadProgressChanged += (sender, args) =>
            Console.WriteLine("Up {0}:{1}", assetName, args.Progress);
        assetFile.Upload(filePath);
    }

    public void Encode(string filePath)
    {
        var assetName = Path.GetFileNameWithoutExtension(filePath) + "_i";
        var asset = GetAsset(assetName);
        var job = _context.Jobs.Create("Encoding job " + assetName);
        var processor = GetMediaProcessor();
        var task = job.Tasks.AddNew("Encoding task " + assetName,
                        processor, Configuration.PresetName, TaskOptions.None);
        task.InputAssets.Add(asset);
        task.OutputAssets.AddNew(assetName + "_o", AssetCreationOptions.None);

        job.StateChanged += (sender, args) =>
            Console.WriteLine("Job: {0} {1}", job.Name, args.CurrentState);
        job.Submit();
        var progress = job.GetExecutionProgressTask(CancellationToken.None);
        progress.Wait();
    }

    public void ListMedia()
    {
        foreach (var asset in _context.Assets)
        {
            Console.WriteLine("{0}", asset.Name);
            foreach (var file in asset.AssetFiles)
            {
                Console.WriteLine("\t{0}", file.Name);
            }
        }
    }

    public void ListMediaProcessors()
    {
        Console.WriteLine("Available processors are:");
        foreach (var processor in _context.MediaProcessors)
        {
            Console.WriteLine("\t{0} {1}", processor.Name, processor.Version);
        }
    }

    IMediaProcessor GetMediaProcessor()
    {
        // OData can't order by a computed Version, so materialize the
        // candidates first, then take the highest version available.
        var processors = _context.MediaProcessors
                                 .Where(p => p.Name == Configuration.EncoderName)
                                 .ToList()
                                 .OrderByDescending(p => new Version(p.Version))
                                 .ToList();
        if (!processors.Any())
        {
            Console.WriteLine("Could not find processor {0}", Configuration.EncoderName);
        }
        return processors.First();
    }

    IAsset GetAsset(string name)
    {
        var assets = _context.Assets.Where(a => a.Name == name).ToList();
        if (!assets.Any())
        {
            Console.WriteLine("Could not find asset {0}", name);
        }
        return assets.First();
    }

    readonly CloudMediaContext _context;
}

The above class also assumes you have a Configuration class to read config information, which looks like the following.

    <add key="accountName" value="media services account name"/>
    <add key="accountKey" value="media services key"/>
    <add key="encoderName" value="Windows Azure Media Encoder"/>
    <add key="presetName" value="H264 Broadband SD 4x3"/>

Using $compile in Angular

Wednesday, May 7, 2014 by K. Scott Allen

Creating a custom directive in AngularJS is easy. Let’s start with the HTML for a simple example.

{{ message }}
<div otc-dynamic></div>

The above markup is using a directive named otcDynamic, which only provides a template.

app.directive("otcDynamic", function(){
    return {
        template: "<button ng-click='doSomething()'>{{label}}</button>"
    };
});

When combined with a controller, the presentation will allow the user to click a button to see a message appear on the screen.

app.controller("mainController", function($scope){

    $scope.label = "Please click";
    $scope.doSomething = function(){
        $scope.message = "Clicked!";
    };
});


Make It Dynamic

Next, imagine the otcDynamic directive can’t use a static template. The directive needs to look at some boolean flags, user data, or service information, and dynamically construct the template markup. In the following example, we’ll only simulate this scenario. We are still using a static string, but we’ll pretend we created the string dynamically and use element.html to place the markup into the DOM.

app.directive("otcDynamic", function(){
    return {
        link: function(scope, element){
            element.html("<button ng-click='doSomething()'>{{label}}</button>");
        }
    };
});

The above sample no longer functions correctly and will only render a button displaying the literal text {{label}} to a user.

Markup has to go through a compilation phase for Angular to find and activate directives like ng-click and {{label}}.


The $compile service performs this compilation step. Invoking $compile against markup produces a function you can use to bind the markup against a particular scope (what Angular calls a linking function). After linking, you’ll have DOM elements you can place into the browser.

app.directive("otcDynamic", function($compile){
    return {
        link: function(scope, element){
            var template = "<button ng-click='doSomething()'>{{label}}</button>";
            var linkFn = $compile(template);
            var content = linkFn(scope);
            element.append(content);
        }
    };
});

If you have to $compile in response to an element event, like a click event raised from non-Angular code, you’ll need to invoke $apply for the proper scope lifecycle.

app.directive("otcDynamic", function($compile) {
    var template = "<button ng-click='doSomething()'>{{label}}</button>";
    return {
        link: function(scope, element){
            element.on("click", function() {
                scope.$apply(function() {
                    var content = $compile(template)(scope);
                    element.append(content);
                });
            });
        }
    };
});

Dear Lenovo

Tuesday, May 6, 2014 by K. Scott Allen

Long time buyer, first time writer.

Over the years I’ve struggled each time I’ve decided it’s time to buy a new ThinkPad. I’ve struggled because it used to be difficult to choose from so many solid entries in the T, X, and W lines.

These days I’m looking at the lineup and struggling to find a laptop anyone is happy to own.

Take a look at the review stars on the T series. The combined score is 12/20.


Lenovo T Series

We’ll round up the stars in the X series and give these ultrabooks a combined score of 13/20.

Lenovo X Series

These scores are in the “meh” category, and not what I’d expect from Lenovo’s flagship and premium brand. Reading through the reviews you’ll find most people are happy with the performance, the battery life, the selection of ports, and the build quality.  But I’m sure you’ve also noticed the copious rants about the keyboards you are designing and shipping on today’s models.

Perhaps we expect more from ThinkPads because the ThinkPad name was synonymous with “great keyboard”. Perhaps that’s why shortcut key aficionados were drawn to the ThinkPad line in the first place. We don’t need mice or track pads when we can use Alt+F4 or Alt+Insert to make things happen.

Now you’ve removed the Insert key from the X1 and turned the function keys into a capacitive flat-strip LED light show.

And moving the Home and End keys to the left side of the keyboard? I have no words to describe my sadness. I’ll instead use Peter Bright’s words from his article “Stop trying to innovate keyboards. You’re just making them worse”.

“I think these kind of keyboard games betray a fundamental misunderstanding of how people use keyboards. Companies might think that they're being innovative by replacing physical keys with soft keys, and they might think that they're making the keyboard somehow "easier to use" by removing keys from the keyboard. But they're not.”

Maybe the world has changed, and the majority of productive professionals do web conferences all day and watch Netflix movies all night. Perhaps this is the product line you need to stay alive in a world where the majority are consumed by consumption and touch.

Yet, I hope moving forward you will delight customers with qualities and features that are unique to ThinkPads, and not continue with these innovations that transform your products into inferior imitations of other brands.

With great sincerity,


Using an Azure PublishSettings File From C#

Monday, April 28, 2014 by K. Scott Allen

One of the fantastic aspects of cloud computing in general is the capability to automate every step of a process. Hanselman and Brady Gaster have both written about the Windows Azure Management Libraries, see Penny Pinching in the Cloud and Brady’s Announcement for some details.

The management libraries are wrappers around the Azure HTTP API and are a boon for businesses that run products on Azure. Not only do the libraries allow for automation, but they also allow you (or rather me, at this very minute) to create custom applications for a business to manage Azure services. Although the Azure portal is full of features, it can be overwhelming to someone who doesn’t work with Azure on a daily basis. An even bigger issue is that there is no concept of roles in the portal. If you want to give someone the ability to shut down a VM from the portal, you also give them the ability to do anything in the portal, including the ability to accidentally delete all services the business holds dear.

A custom portal solves both of the above problems because you can build something tailored to the services and vocabulary the business uses, as well as perform role checks and restrict dangerous activities. The custom portal will need a management certificate to perform activities against the Azure API, and the easiest approach to obtain a management certificate is to download a publish settings file.

Once you have a publish settings file, you can write some code to parse the information inside and make the data available to higher layer management activities. There are a few libraries out there that can work with publish settings files, but I have some special requirements and want to work with them directly. The contents of a publish settings file look like the following (and note there can be multiple Subscription elements inside).

<PublishData>
  <PublishProfile>
    <Subscription
      Id="...subscription id..."
      Name="Happy Subscription Name"
      ServiceManagementUrl="..."
      ManagementCertificate="...base64 encoded certificate data..." />
  </PublishProfile>
</PublishData>


Let’s use the following code as an example goal for how my custom publish settings should work. I want to:

  1. Create an object representing the settings by just handing off some text.
  2. Loop through the subscriptions in the file.
  3. Ask each subscription to create credentials I can use to invoke the Azure HTTP API.
var fileContents = File.ReadAllText("odetocode.publishsettings");
var publishSettingsFile = new PublishSettingsFile(fileContents);

foreach (var subscription in publishSettingsFile.Subscriptions)
{
    Console.WriteLine("Showing compute services for: {0}", subscription.Name);
    var credentials = subscription.GetCredentials();
    using (var client = new ComputeManagementClient(credentials, subscription.ServiceUrl))
    {
        var services = client.HostedServices.List();
        foreach (var service in services)
        {
            Console.WriteLine("\t{0}", service.ServiceName);
        }
    }
}

It is the PublishSettingsFile class that parses the XML and creates PublishSetting objects. I’ve removed some error handling from the class so it doesn’t appear too intimidating.

public class PublishSettingsFile
{
    public PublishSettingsFile(string fileContents)
    {
        var document = XDocument.Parse(fileContents);

        _subscriptions = document.Descendants("Subscription")
                                 .Select(ToPublishSettings)
                                 .ToList();
    }

    private PublishSettings ToPublishSettings(XElement element)
    {
        var settings = new PublishSettings();
        settings.Id = Get(element, "Id");
        settings.Name = Get(element, "Name");
        settings.ServiceUrl = GetUri(element, "ServiceManagementUrl");
        settings.Certificate = GetCertificate(element, "ManagementCertificate");
        return settings;
    }

    private string Get(XElement element, string name)
    {
        return (string)element.Attribute(name);
    }

    private Uri GetUri(XElement element, string name)
    {
        return new Uri(Get(element, name));
    }

    private X509Certificate2 GetCertificate(XElement element, string name)
    {
        var encodedData = Get(element, name);
        var certificateAsBytes = Convert.FromBase64String(encodedData);
        return new X509Certificate2(certificateAsBytes);
    }

    public IEnumerable<PublishSettings> Subscriptions
    {
        get { return _subscriptions; }
    }

    private readonly IList<PublishSettings> _subscriptions;
}

The PublishSettings class itself is relatively simple. It mostly holds data, but can also create the credentials object needed to communicate with Azure.

public class PublishSettings
{
    public string Id { get; set; }
    public string Name { get; set; }
    public Uri ServiceUrl { get; set; }
    public X509Certificate2 Certificate { get; set; }

    public SubscriptionCloudCredentials GetCredentials()
    {
        return new CertificateCloudCredentials(Id, Certificate);
    }
}


In the future I’ll try to write more about the custom portal I’m building with ASP.NET MVC, WebAPI, and AngularJS. It has some interesting capabilities.

Canceling $http Requests in AngularJS

Thursday, April 24, 2014 by K. Scott Allen

One of the objects you can pass along in the config argument of an $http operation is a timeout promise. If the promise resolves, Angular will cancel the corresponding HTTP request.

Sounds easy, but in practice there are a few complications. Before we get to the complications, let’s look at some easy code. Imagine the following inside a controller where a user can click a Cancel button.

var canceller = $q.defer();

$http.get("/api/movies/slow/2", { timeout: canceller.promise })
     .then(function(response){
         $ =;
     });

$scope.cancel = function(){
    canceller.resolve("user cancelled");
};

The code passes the canceller promise as the timeout option in the config object. If the user clicks cancel before the request completes, we’ll see the cancellation in the Network tab of the developer tools.

Cancelled HTTP Request

The complications come in real life scenarios where we have to manage multiple requests, provide the ability to cancel an operation to other client components, and figure out if a given request is cancelled or not.

First, let’s look at a service that wraps $http to provide domain oriented operations. Typically, services that talk using $http return simple promises, but now we need to return objects that provide a promise for the outstanding request and a method that can cancel the request.

app.factory("movies", function($http, $q){

    var getById = function(id){
        var canceller = $q.defer();

        var cancel = function(reason){
            canceller.resolve(reason);
        };

        var promise =
            $http.get("/api/movies/slow/" + id, { timeout: canceller.promise })
                 .then(function(response){
                     return;
                 });

        return {
            promise: promise,
            cancel: cancel
        };
    };

    return {
        getById: getById
    };
});

A client of the service might need to track multiple requests if there is a UI like the following, which allows the user to send multiple requests.

<div ng-controller="mainController">

    <button ng-click="start()">
        Start Request
    </button>

    <ul>
        <li ng-repeat="request in requests">
            <button ng-click="cancel(request)">Cancel</button>
        </li>
    </ul>

    <ul>
        <li ng-repeat="m in movies">{{m.title}}</li>
    </ul>

</div>


The following code will manage the UI and allow the user to cancel any outstanding request.

app.controller("mainController", function($scope, movies) {

    $scope.movies = [];
    $scope.requests = [];
    $ = 1;

    $scope.start = function(){
        var request = movies.getById($;
        $scope.requests.push(request);
        request.promise.then(function(movie){
            $scope.movies.push(movie);
            clearRequest(request);
        }, function(reason){
            clearRequest(request);
        });
    };

    $scope.cancel = function(request){
        request.cancel("User cancelled");
    };

    var clearRequest = function(request){
        $scope.requests.splice($scope.requests.indexOf(request), 1);
    };
});

The logic gets messy and could use some additional encapsulation to keep request management from overwhelming the controller, but this is the essence of what you’d need to do to allow cancellation of $http operations.
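
One way to add that encapsulation is a small tracker object that owns the bookkeeping, so the controller only pushes results into the scope. This is just a sketch (the createRequestTracker name is my invention, written as a plain function so it stays framework-agnostic and works with any object of the { promise, cancel } shape the movies service returns):

```javascript
// Tracks in-flight request objects and removes them when they settle.
function createRequestTracker() {
    var requests = [];
    return {
        requests: requests,
        track: function (request) {
            requests.push(request);
            var clear = function () {
                var index = requests.indexOf(request);
                if (index >= 0) {
                    requests.splice(index, 1);
                }
            };
            // Remove the request whether it succeeds, fails, or is cancelled
            request.promise.then(clear, clear);
            return request;
        },
        cancelAll: function (reason) {
            // Copy first: cancellation will eventually mutate the array
            requests.slice().forEach(function (request) {
                request.cancel(reason);
            });
        }
    };
}
```

A controller could then call tracker.track(movies.getById(id)) and bind tracker.requests in the ng-repeat, keeping all the splice logic out of the controller.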
