A Few Thoughts on Better Unit Tests For AngularJS Controllers

Thursday, May 15, 2014 by K. Scott Allen

There are a few aspects of unit testing AngularJS controllers that have made me uncomfortable over time. In this post I’ll describe some of these issues and what I’ve been trying on my current project to put the code in an acceptable state, as well as some general tips I’ve found useful.

Duplicated Setup Code

One approach to testing a controller with Jasmine is to use the module and inject helpers in a beforeEach block to create the dependencies for a controller. Most controllers will need multiple beforeEach blocks to set up the environment for different testing scenarios (there is the happy path scenario, the failing network scenario, the bad input scenario, and so on). If you follow the pattern found in demo projects, you’ll start to see too much code duplication in these setup blocks.

I’m comfortable with some amount of duplication inside of tests, as are others. However, the constant use of inject to bring in dependencies that 80% of the tests need becomes a noisy tax.

What I’ve been doing recently is using a single inject call per spec file in an opening beforeEach block. This block manually hoists all the dependencies into the outer scope for the other tests to use, and a companion afterEach runs the $httpBackend verifications after each test, even when a given test doesn’t need them.

var $rootScope, $controller, $q, $httpBackend, appConfig, scope;
beforeEach(inject(function (_$rootScope_, _$controller_, _$q_, _$httpBackend_, _appConfig_) {
    $q = _$q_;
    $rootScope = _$rootScope_;
    $controller = _$controller_;
    $httpBackend = _$httpBackend_;
    appConfig = _appConfig_;
    scope = $rootScope.$new();
}));

afterEach(function () {
    $httpBackend.verifyNoOutstandingExpectation();
    $httpBackend.verifyNoOutstandingRequest();
});

Now each scenario and the tests inside have all the core objects they need to set up the proper environment for testing. There is even a fresh scope object waiting for every test, and the rest of the test code no longer needs inject.

describe("the reportListController", function () {

    beforeEach(function () {
        $httpBackend.when("GET", appConfig.reportUrl).respond([{}, {}, {}]);
        $controller("reportListController", {
            $scope: scope
        });
        $httpBackend.flush();
    });

    it("should retrieve the reports to list", function() {
        expect(scope.reports.length).toBe(3);
    });
});

I believe this approach has been beneficial. The tests are leaner, easier to read, and easier to maintain.

inject Knows Underscores

Notice the injected function in the opening beforeEach uses parameter names like _$rootScope_ and _$q_. The inject function knows how to strip the underscores to get to the real service names, and since the parameters use underscores the variables in the outer scope can use pretty names like $rootScope and $q.

Only Give The Controller What The Test Needs

Sometimes I’ve seen examples using $controller that pass every dependency in the second parameter.

$controller("reportListController", {
    $scope: scope,
    $http: $http,
    $q: $q
});

Chances are the above code only really needs to pass $scope, because the injector will fill in the rest of the services as appropriate.

$controller("reportListController", {
    $scope: scope
});

Again there is less test code, and the code is easier to maintain. Controller dependencies can change, but the test code doesn’t. Building your own mocks library, like angular-mocks, for custom services helps in this scenario, too. If you don’t need to “own” the dependency in a test, don’t bother to set up and pass the dependency to $controller.
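
When a test does need to own a custom service, one lightweight option is to register a fake with $provide inside a module block, which keeps the override local to the spec file. Here is a minimal sketch, assuming a hypothetical application module named "app" and a hypothetical reportService with a getAll method (the module block must run before the inject block):

beforeEach(module("app", function ($provide) {
    // "app" and reportService are hypothetical names for illustration;
    // the fake replaces the real service for every test in this spec file
    $provide.value("reportService", {
        getAll: function () { return []; }
    });
}));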

Testing Controllers and Services as a Unit

Perhaps controversial, but I’ve started to write tests that do not mock services. Instead, I test a controller and most of the services the controller requires as a single unit. First let me give some details on what I mean, and then explain why I think this works well.

Let’s assume we have a reportListController that uses $scope, as well as two custom services that themselves use $http behind the scenes to communicate with the web server. Instead of building long, complicated scenario setups with mocks and stubs, I usually focus on just a mock HTTP backend and a scope.

$httpBackend.when("GET", appConfig.reportUrl).respond(reports);

$controller("reportListController", {
    $scope: scope
});

$httpBackend.flush();

This is a scenario for deleting reports, and the tests are relatively simple.

it("should delete the report", function () {
    $httpBackend.when("DELETE", appConfig.reportUrl + "/1").respond(200);

    scope.delete(reports[0]);
    $httpBackend.flush();
    expect(scope.reports.length).toBe(1);
});


it("should show a message", function () {
    $httpBackend.when("DELETE", appConfig.reportUrl + "/1").respond(200);

    scope.delete(reports[0]);
    $httpBackend.flush();
    expect($rootScope.alerts.length).toBe(1);
});

These two tests exercise a number of logical pieces in the application. They test not just the controller, but also the model, how the model interacts with two different services, and how those services respond to HTTP traffic.

I’m sure a few people will think these tests are blasphemous and that the model and the services should be tested in isolation. However, I believe it is this type of right-versus-wrong thinking centered around “best practices” that severely limits the acceptance of unit testing in many circles. After years of using mock object frameworks in other languages, I’ve learned to avoid mocks whenever possible. Mock objects and mock methods generally:

  • make a test harder to read
  • make a test brittle
  • make it easier to produce false positives and false negatives

What I want to test in these scenarios is how the code inside the controller interacts with the services, because most of the logic inside is focused on orchestrating the underlying services to perform useful work. The code inside has to call service methods at the right time and handle promises appropriately. I want to be able to change the implementation details without reworking the tests. These tests work by providing an input (delete this report) and looking at the output (the number of reports), and only need some fake HTTP message processing to fill in the gaps.

If I had written two mock services for the controller and tested the services in isolation, I’d have more test code but less confidence that the system actually works.

Using Route Resolves Can Simplify Tests

Testing controllers that make service calls when instantiated can be a bit tricky, because everything has to be set up and in place before using the $controller service to instantiate the controller itself.

Using promise resolves in a route definition not only makes for an arguably better user experience, it also makes for easier controller testing, because the controller is given everything it needs to get started. Both ui.router and ngRoute support resolves in a route definition, but since this post is already long in the tooth, we’ll look at using resolves in more detail in a future post.
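
In the meantime, as a quick preview, a resolve in an ngRoute definition might look like the following sketch, where reportService is a hypothetical data service and the resolved value is injected into the controller before the controller is instantiated:

$routeProvider.when("/reports", {
    templateUrl: "reports.html",
    controller: "reportListController",
    resolve: {
        // the route will not activate until this promise resolves,
        // and the result is injectable into the controller as "reports"
        reports: function (reportService) {
            return reportService.getAll();
        }
    }
});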

The Sites That Want You Versus The Sites That Have You

Wednesday, May 14, 2014 by K. Scott Allen

Let’s divide the world wide web into two categories.

The Sites That Have You

These are the web sites that you must use because you are a captive customer. The web site for your primary bank would be one example.

The Sites That Want You

These are the sites that play in industries like banking and travel, but the only thing they have to attract you is the site itself. They don’t own flying machines or safety deposit boxes. A site like this in the travel industry exists only to provide the best possible experience for searching for tickets and reservations.

The Contrast

Not surprisingly, the sites that want you typically have more fully featured websites that are easier to use than the websites of those who have you.

As an example, let’s look at zoomed-out views of flight search results. The left-hand side of the picture below shows the search results on Kayak.com (just using them as an example). The right-hand side shows the search results of a major U.S. air carrier (instead of naming names, let’s call them Unified Airlines).

The areas highlighted in green are flight options with details or general information to help select a flight.

The areas shaded in blue are search and filtering controls to help narrow in on the perfect flight.

The areas in red are advertisements, fee warnings, upsell opportunities, pitches for Unified Airlines credit cards, and wasted white space.

Comparing flight search results

We can call the total amount of green space even, though Kayak displays twice as many flight results above the fold as Unified Airlines does.

The blue space winner is clearly Kayak. Unified doesn’t provide nearly as many filtering and sorting controls as Kayak, and the majority of the controls they do provide are not only at the bottom of the page, they also aren’t as interactive and require the browser to render an entirely new search results page.

It looks like the priority for Kayak’s development team is to build a great website for finding flights. The priority given to the Unified development team is to sign up travelers for a credit card. 

The Payload

Let’s pick on another company, one I’ll call Charriott Hotels. I frequently stay at Charriott properties and use their web site to book rooms. On slow WiFi connections, the desktop version of the site takes forever to load, and a quick peek at the network tab of the developer tools explains why (the image to the right is a zoomed-out view).

Charriott’s home page sends out 57 network requests for more than 900KB of total payload. However, this is not bad. Most travel sites, even the ones that want you, make north of 50 requests for around 1MB of payload by the time all the destination vacation pictures and analytics scripts are finished. Plus, Charriott minifies most of their scripts and CSS, gzips and cache-controls their static content, and bundles some (but not all) of their files together.

Still, even casual observation shows areas for improvement. The largest asset is a 267KB download of minified script that includes:

  • jQuery
  • YUI 2.6
  • jQuery UI (including all effects)
  • 5 or 6 additional jQuery plugins (some un-minified)

I’m not a fan of optimizing script downloads just to save a kilobyte here and there, but for the home page of a major hotel brand I’d try to avoid loading two large script frameworks with overlapping functionality. The entire file must be downloaded and parsed before the home page is usable, and I’m certain it is possible to make this happen with less than 1/3 of the script currently in the page.

The Conclusion

Unified and Charriott actually have good web sites for the large companies that they are. Time and time again I see large company web sites that are disasters, even from technology companies that understand design and computers. I don’t believe this is the fault of the development teams. I believe bad web sites are the product of politics, design-by-committee processes, and the inherent difficulty of managing a large IT staff. The teams can make it happen; they just need the opportunity and an environment to make it happen.

OdeToCode Videos

Tuesday, May 13, 2014 by K. Scott Allen

I’ve started a collection of videos here on the site, and I’m starting with short clips about the next version of JavaScript – ECMAScript 6. Currently the collection includes:

  • Template strings
  • Rest parameters
  • Default parameter values
  • The spread operator

Topics coming soon include classes and arrow functions (my favorite feature, for now).
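
For a quick taste, here is a small sketch that uses all four of the features listed above (assume any ES6-capable environment):

// template strings interpolate expressions with ${ }
var topic = "ES6";
console.log(`Hello, ${topic}!`);

// a default parameter value and a rest parameter
function add(first = 0, ...rest) {
    return rest.reduce(function (sum, n) { return sum + n; }, first);
}

// the spread operator expands an array into individual arguments
var numbers = [1, 2, 3];
console.log(add(...numbers)); // 6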

Tips For Working With Windows Azure Media Services

Thursday, May 8, 2014 by K. Scott Allen

I’ve been doing some work with Windows Azure Media Services and making progress, although it takes some time and experimentation to work through the vocabulary of the API, the documentation, and the code snippets.

1. Uploading and encoding video into media services can be completed programmatically using the CloudMediaContext class from the NuGet package WindowsAzure.MediaServices.

2. Uploading creates an asset in media services. Each asset can contain multiple files, but you only want one video or audio file in the uploaded asset. WAMS will create a container in blob storage for each asset, so it seems best to create a new storage account dedicated to each media service.

3. For encoding you need to select a media processor by name, and perhaps a preset configuration by name. You can list the processor names with some C# code (the ListMediaProcessors method below does this), and the values I currently see are:

        Windows Azure Media Encoder 3.7
        Windows Azure Media Packager 2.8
        Windows Azure Media Encryptor 2.8
        Windows Azure Media Encoder 2.3
        Storage Decryption 1.7

Preset names took some digging around, but I eventually found a complete list for the Windows Azure Media Encoder at Media Services Encoder System Presets.

What follows is a class that wraps CloudMediaContext and can list assets (including the files inside), upload an asset, encode an asset, and list the available media processors. It is experimental code that assumes it is working inside a console application, but that behavior is easy to refactor out. Some of the LINQ queries look strange, but they work around the wonkiness of OData.

public class MediaService
{
    public MediaService()
    {            
        _context = new CloudMediaContext(
                Configuration.AzureAccountName, 
                Configuration.AzureAccountKey
            );
    }

    public void Upload(string filePath)
    {
        var assetName = Path.GetFileNameWithoutExtension(filePath) + "_i";
        var asset = _context.Assets.Create(assetName, AssetCreationOptions.None);

        var assetFileName = Path.GetFileName(filePath);
        var assetFile = asset.AssetFiles.Create(assetFileName);
        assetFile.UploadProgressChanged += (sender, args) => 
            Console.WriteLine("Up {0}:{1}", assetName, args.Progress);
        assetFile.Upload(filePath);
    }

    public void Encode(string filePath)
    {
        var assetName = Path.GetFileNameWithoutExtension(filePath) + "_i";
        var asset = GetAsset(assetName);
        var job = _context.Jobs.Create("Encoding job " + assetName);
        var processor = GetMediaProcessor();
        var task = job.Tasks.AddNew("Encoding task " + assetName, 
                        processor, Configuration.PresetName, TaskOptions.None);
        task.InputAssets.Add(asset);
        task.OutputAssets.AddNew(assetName + "_o", AssetCreationOptions.None);

        job.StateChanged += (sender, args) => 
            Console.WriteLine("Job: {0} {1}", job.Name, args.CurrentState);
        job.Submit();
        
        var progress = job.GetExecutionProgressTask(CancellationToken.None);
        progress.Wait();
    }

    public void ListMedia()
    {
        foreach (var asset in _context.Assets)
        {
            Console.WriteLine("{0}", asset.Name);
            foreach (var file in asset.AssetFiles)
            {
                Console.WriteLine("\t{0}", file.Name);
            }
        }
    }

    public void ListMediaProcessors()
    {
        Console.WriteLine("Available processors are:");
        foreach (var processor in _context.MediaProcessors)
        {
            Console.WriteLine("\t{0} {1}", processor.Name, processor.Version);
        }
    }

    IMediaProcessor GetMediaProcessor()
    {
        // order by version descending so First() selects the newest processor
        var processors = _context.MediaProcessors
                                 .Where(p => p.Name == Configuration.EncoderName)
                                 .ToList()
                                 .OrderByDescending(p => new Version(p.Version));
                                 
        if (!processors.Any())
        {
            Console.WriteLine("Could not find processor {0}", Configuration.EncoderName);
            ListMediaProcessors();
            Environment.Exit(-1);
        }
        return processors.First();
    }        

    IAsset GetAsset(string name)
    {
        var assets = _context.Assets.Where(a => a.Name == name).ToList();
        if (!assets.Any())
        {
            Console.WriteLine("Could not find asset {0}", name);
            Environment.Exit(-1);
        }
        return assets.First();
    }

    readonly CloudMediaContext _context;
}

The above class also assumes you have a Configuration class that reads configuration information from appSettings entries like the following.

<appSettings>
    <add key="accountName" value="media services account name"/>
    <add key="accountKey" value="media services key"/>
    <add key="encoderName" value="Windows Azure Media Encoder"/>
    <add key="presetName" value="H264 Broadband SD 4x3"/>
</appSettings>
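
A minimal sketch of such a Configuration class, assuming the standard System.Configuration API; the property names match what the MediaService class above expects:

using System.Configuration;

public static class Configuration
{
    // each property reads the matching appSettings entry shown above
    public static string AzureAccountName
    {
        get { return ConfigurationManager.AppSettings["accountName"]; }
    }

    public static string AzureAccountKey
    {
        get { return ConfigurationManager.AppSettings["accountKey"]; }
    }

    public static string EncoderName
    {
        get { return ConfigurationManager.AppSettings["encoderName"]; }
    }

    public static string PresetName
    {
        get { return ConfigurationManager.AppSettings["presetName"]; }
    }
}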

Using $compile in Angular

Wednesday, May 7, 2014 by K. Scott Allen

Creating a custom directive in AngularJS is easy. Let’s start with the HTML for a simple example.

{{ message }}
<div otc-dynamic></div>

The above markup is using a directive named otcDynamic, which only provides a template.

app.directive("otcDynamic", function(){
   return {
       template:"<button ng-click='doSomething()'>{{label}}</div>"
   };
});

When combined with a controller, the presentation will allow the user to click a button to see a message appear on the screen.

app.controller("mainController", function($scope){

    $scope.label = "Please click";
    $scope.doSomething = function(){
      $scope.message = "Clicked!";
    };

});

Make It Dynamic

Next, imagine the otcDynamic directive can’t use a static template. The directive needs to look at some boolean flags, user data, or service information, and dynamically construct the template markup. In the following example, we’ll only simulate this scenario. We are still using a static string, but we’ll pretend we created the string dynamically and use element.html to place the markup into the DOM.

app.directive("otcDynamic", function(){
    return {
        link: function(scope, element){
            element.html("<button ng-click='doSomething()'>{{label}}</button>");
        }
    };
});

The above sample no longer functions correctly and will only render a button displaying the literal text {{label}} to a user.

Markup has to go through a compilation phase for Angular to find and activate directives like ng-click and bindings like {{label}}.

Compilation

The $compile service is the service to use for compilation. Invoking $compile against markup produces a function you can use to bind the markup against a particular scope (what Angular calls a linking function). After linking, you’ll have DOM elements you can place into the browser.

app.directive("otcDynamic", function($compile){
    return{
        link: function(scope, element){
            var template = "<button ng-click='doSomething()'>{{label}}</button>";
            var linkFn = $compile(template);
            var content = linkFn(scope);
            element.append(content);
        }
    }
});

If you have to $compile in response to an element event, like a click event, or from other non-Angular code, you’ll need to invoke $apply so the scope digest runs and the bindings update.

app.directive("otcDynamic", function($compile) {
    
    var template = "<button ng-click='doSomething()'>{{label}}</button>";
    
    return{
        link: function(scope, element){
            element.on("click", function() {
                scope.$apply(function() {
                    var content = $compile(template)(scope);
                    element.append(content);
               })
            });
        }
    }
});

Dear Lenovo

Tuesday, May 6, 2014 by K. Scott Allen

Long time buyer, first time writer.

Over the years I’ve struggled each time I’ve decided it’s time to buy a new ThinkPad. I’ve struggled because it used to be difficult to choose from so many solid entries in the T, X, and W lines.

These days I’m looking at lenovo.com and struggling to find a laptop computer anyone is happy to own.

Take a look at the stars on the T series. The combined score is 12/20.

 

Lenovo T Series

We’ll round up the stars in the X series and give these ultrabooks a combined score of 13/20.

Lenovo X Series

These scores are in the “meh” category, and not what I’d expect from Lenovo’s flagship and premium brand. Reading through the reviews, you’ll find most people are happy with the performance, the battery life, the selection of ports, and the build quality. But I’m sure you’ve also noticed the copious rants about the keyboards you are designing and shipping on today’s models.

Perhaps we expect more from ThinkPads because the ThinkPad name was once synonymous with “great keyboard”. Perhaps that’s why shortcut key aficionados were drawn to the ThinkPad line in the first place. We don’t need mice or track pads when we can use Alt+F4 or Alt+Insert to make things happen.

Now you’ve removed the Insert key from the X1 and turned the function keys into a capacitive flat-strip LED light show.

And moving the Home and End keys to the left side of the keyboard? I have no words to describe my sadness. I’ll instead use Peter Bright’s words from his article “Stop trying to innovate keyboards. You’re just making them worse”.

“I think these kind of keyboard games betray a fundamental misunderstanding of how people use keyboards. Companies might think that they're being innovative by replacing physical keys with soft keys, and they might think that they're making the keyboard somehow "easier to use" by removing keys from the keyboard. But they're not.”

Maybe the world has changed, and the majority of productive professionals do web conferences all day and watch Netflix movies all night. Perhaps this is the product line you need to stay alive in a world where the majority are consumed by consumption and touch.

Yet, I hope moving forward you will delight customers with qualities and features that are unique to ThinkPads, and not continue with these innovations that transform your products into inferior imitations of other brands.

With great sincerity,

--s

Using an Azure PublishSettings File From C#

Monday, April 28, 2014 by K. Scott Allen

One of the fantastic aspects of cloud computing in general is the capability to automate every step of a process. Hanselman and Brady Gaster have both written about the Windows Azure Management Libraries, see Penny Pinching in the Cloud and Brady’s Announcement for some details.

The management libraries are wrappers around the Azure HTTP API and are a boon for businesses that run products on Azure. Not only do the libraries allow for automation, but they also allow you (or rather me, at this very minute) to create custom applications for a business to manage its Azure services. Although the Azure portal is full of features, it can be overwhelming for someone who doesn’t work with Azure on a daily basis. An even bigger issue is that there is no concept of roles in the portal. If you want to give someone the ability to shut down a VM from the portal, you also give them the ability to do anything in the portal, including the ability to accidentally delete all the services the business holds dear.

A custom portal solves both of the above problems, because you can build something tailored to the services and vocabulary the business uses, as well as perform role checks and restrict dangerous activities. The custom portal will need a management certificate to perform activities against the Azure API, and the easiest way to obtain a management certificate is to download a publish settings file.

Once you have a publish settings file, you can write some code to parse the information inside and make the data available to higher level management activities. There are a few libraries out there that can work with publish settings files, but I have some special requirements and want to work with the files directly. The contents of a publish settings file look like the following (note there can be multiple Subscription elements inside).

<PublishData>
  <PublishProfile
    SchemaVersion="2.0"
    PublishMethod="AzureServiceManagementAPI">
    <Subscription
      ServiceManagementUrl="https://management.core.windows.net"
      Id="...guid..."
      Name="Happy Subscription Name"
      ManagementCertificate="...base64 encoded certificate data..." />
  </PublishProfile>
</PublishData>

Let’s use the following code as an example goal for how my publish settings code should work. I want to:

  1. Create an object representing the settings by just handing off some text.
  2. Loop through the subscriptions in the file.
  3. Ask each subscription to create credentials I can use to invoke the Azure HTTP API.

var fileContents = File.ReadAllText("odetocode.publishsettings");
var publishSettingsFile = new PublishSettingsFile(fileContents);

foreach (var subscription in publishSettingsFile.Subscriptions)
{
    Console.WriteLine("Showing compute services for: {0}", subscription.Name);
    
    var credentials = subscription.GetCredentials();
    using (var client = new ComputeManagementClient(credentials, subscription.ServiceUrl))
    {
        var services = client.HostedServices.List();
        foreach (var service in services)
        {
            Console.WriteLine("\t{0}", service.ServiceName);
        }
    }
}

It is the PublishSettingsFile class that parses the XML and creates PublishSettings objects. I’ve removed some error handling from the class so it doesn’t appear too intimidating.

public class PublishSettingsFile
{
    public PublishSettingsFile(string fileContents)
    {
        var document = XDocument.Parse(fileContents);

        _subscriptions = document.Descendants("Subscription")
                            .Select(ToPublishSettings).ToList();
    }

    private PublishSettings ToPublishSettings(XElement element)
    {
        var settings = new PublishSettings();
        settings.Id = Get(element, "Id");
        settings.Name = Get(element, "Name");
        settings.ServiceUrl = GetUri(element, "ServiceManagementUrl");
        settings.Certificate = GetCertificate(element, "ManagementCertificate");
        return settings;
    }        

    private string Get(XElement element, string name)
    {
        return (string) element.Attribute(name);
    }

    private Uri GetUri(XElement element, string name)
    {
        return new Uri(Get(element, name));
    }

    private X509Certificate2 GetCertificate(XElement element, string name)
    {
        var encodedData = Get(element, name);
        var certificateAsBytes = Convert.FromBase64String(encodedData);
        return new X509Certificate2(certificateAsBytes);
    }

    public IEnumerable<PublishSettings> Subscriptions
    {
        get
        {
            return _subscriptions;
        }
    }

    private readonly IList<PublishSettings> _subscriptions;
}

The PublishSettings class itself is relatively simple. It mostly holds data, but can also create the credentials object needed to communicate with Azure.

public class PublishSettings
{
    public string Id { get; set; }
    public string Name { get; set; }
    public Uri ServiceUrl  { get; set; }
    public X509Certificate2 Certificate { get; set; }

    public SubscriptionCloudCredentials GetCredentials()
    {
        return new CertificateCloudCredentials(Id, Certificate);
    }

}

In the future I’ll try to write more about the custom portal I’m building with ASP.NET MVC, WebAPI, and AngularJS. It has some interesting capabilities.
