Explanations of the compile and link functions in an AngularJS directive don’t often come with examples. Let’s look at a directive named simple which appears in the following markup. You can also play along in this plunk. Open the browser’s developer tools to see the logging output.
<div ng-repeat="i in [0,1,2]">
  <simple>
    <div>Inner content</div>
  </simple>
</div>
Notice the directive appears once inside an ng-repeat and will need to render three times. The directive also contains some inner content.
What we want to focus on is how and when the various directive functions execute, as well as the arguments to the compile and linking functions.
To see what happens we’ll use the following shell of a directive definition.
app.directive("simple", function(){
  return {
    restrict: "EA",
    transclude: true,
    template: "<div>{{label}}<div ng-transclude></div></div>",
    compile: function(element, attributes){
      return {
        pre: function(scope, element, attributes, controller, transcludeFn){ },
        post: function(scope, element, attributes, controller, transcludeFn){ }
      };
    },
    controller: function($scope){ }
  };
});
The first function to execute in the simple directive during view rendering will be the compile function. The compile function will receive the simple element as a jqLite reference, and the element contents will look like the content in the picture to the right.
Notice how Angular has already added the directive template, but has not performed any transclusion or set up the data binding.
At this point it is safe for the code inside the compile function to manipulate the element; however, it is not the place to wire up event handlers. The element passed to compile in this scenario is an element the framework will clone three times, because we are working inside an ngRepeat. It is the clones of this element the framework places into the DOM, and those clones are not available until the linking functions start to run. The idea behind the compilation step is to allow for one-time DOM manipulation before the cloning – a performance optimization.
The compile function in the sample above returns an object with the pre and post linking functions. However, many times we don't need to hook into the compilation phase, so we can have a link function instead of a compile function.
app.directive("simple", function(){
  return {
    //...
    link: function(scope, element, attributes){ },
    controller: function($scope, $element){ }
  };
});
A link function will behave like the post-link function described below.
Since the ngRepeat requires three copies of simple, we will now execute the following functions once for each instance. The order is controller, pre, then post.
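The order of operations can be sketched in plain JavaScript. The following is a simulation of the lifecycle, not Angular itself: compile runs once against the template, while controller, pre, and post run once per clone.

```javascript
// A plain JavaScript sketch (not Angular) modeling the directive
// lifecycle: compile runs once on the template, then controller,
// pre-link, and post-link run once per cloned instance.
var log = [];

var directive = {
  compile: function () {
    log.push("compile");
    return {
      pre: function (i) { log.push("pre " + i); },
      post: function (i) { log.push("post " + i); }
    };
  },
  controller: function (i) { log.push("controller " + i); }
};

// Simulate ng-repeat rendering three clones.
var linkFns = directive.compile();      // once, before cloning
[0, 1, 2].forEach(function (i) {
  directive.controller(i);              // per clone
  linkFns.pre(i);
  linkFns.post(i);
});

console.log(log.join(", "));
// compile, controller 0, pre 0, post 0, controller 1, pre 1, post 1, ...
```

Running the simulation makes the one-to-many relationship visible: one compile entry in the log, followed by three controller/pre/post groups.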
The first function to execute for each instance is the controller function. It is here that the code can initialize the scope object, as any good controller function will do.
Note the controller can also take an $element argument and receive a reference to the simple element clone that will appear in the DOM.
The element will look just like the element in the previous picture because the framework hasn't performed the transclusion or set up the data binding, but it is the element that will live in the DOM, unlike the element reference in compile.
However, we try to keep controllers from referencing elements directly. You generally want to limit direct element interaction to the post link function.
By the time we reach the pre-link function (the function attached to the pre property of the object returned from compile), we’ll have both a scope initialized by the controller function, and a reference to a real element that will appear in the DOM.
However, we still don’t have transcluded content and the template isn’t linked to the scope because the bindings aren’t setup.
The pre-link function is only useful in a couple of special scenarios, which is why you can return a function from compile instead of an object; the framework will treat the returned function as the post-link function.
Post-link is the last function to execute. Now the transclusion is complete, the template is linked to a scope, and the view will update with data-bound values after the next digest cycle.
In post-link it is safe to manipulate the DOM, attach event handlers, inspect child elements, and set up observations on attributes and watches on the scope.
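The watch idea at the heart of those scope bindings can be illustrated with a heavily simplified sketch. This is not Angular's implementation, only a toy model of $watch and $digest:

```javascript
// A highly simplified model of the $watch/$digest idea. A watcher
// pairs a value-producing function with a listener; each digest
// compares the current value to the last seen value and fires the
// listener on change.
function Scope() {
  this.$$watchers = [];
}
Scope.prototype.$watch = function (watchFn, listener) {
  this.$$watchers.push({ watchFn: watchFn, listener: listener, last: undefined });
};
Scope.prototype.$digest = function () {
  var self = this;
  this.$$watchers.forEach(function (w) {
    var value = w.watchFn(self);
    if (value !== w.last) {
      w.listener(value, w.last);
      w.last = value;
    }
  });
};

var scope = new Scope();
scope.label = "first";

var seen = [];
scope.$watch(
  function (s) { return s.label; },
  function (newValue) { seen.push(newValue); }
);

scope.$digest();        // seen: ["first"]
scope.label = "second";
scope.$digest();        // seen: ["first", "second"]
```

The real implementation loops until the watchers stabilize (and supports collection watching), but the compare-and-notify core is the same.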
Directives have many mysterious features when you first come across them, but with some time and experiments like these you can at least figure out the working pieces. As always, the harder part is knowing how to apply this knowledge to the real components you need to build. More on that in the future.
One common question I’ve gotten about AngularJS revolves around the debugging of binding expressions like {{ message }}. How does one track down expressions that aren’t working?
There are two approaches I can recommend. One approach is a tool, the other approach is a service decorator.
Batarang is a fairly well known Chrome extension. Once loaded, the extension can show you the contents of scope objects, as well as profiling and performance information.
For some problems Batarang requires too many mouse clicks; that's when I like to use a simple service decorator.
In Angular, the $interpolate service is responsible for working with binding expressions. The service returns a function you can invoke against a scope object to produce an interpolated string. The function is known as an interpolation function.
As an example, the following code will log “HELLO!” to the console.
app.run(function($interpolate){
  var fn = $interpolate("{{message | uppercase}}");
  var result = fn({message: "Hello!"});
  console.log(result);
});
The interpolation function is forgiving. For most scenarios the forgiveness is good, because we can write binding expressions in a template without worrying about null or undefined values.
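Stripped of Angular specifics, the idea can be sketched in a few lines of plain JavaScript. This toy version (not the real $interpolate, which supports filters and arbitrary expressions) shows the template-in, function-out shape and the forgiving behavior:

```javascript
// A toy version of the $interpolate idea: turn a template with
// {{ }} markers into a function you invoke against a scope-like
// object. Missing values become empty strings rather than errors.
function interpolate(template) {
  return function (context) {
    return template.replace(/\{\{\s*(\w+)\s*\}\}/g, function (match, name) {
      var value = context[name];
      return value === undefined || value === null ? "" : value;
    });
  };
}

var fn = interpolate("{{greeting}}, {{name}}!");
console.log(fn({ greeting: "Hello", name: "World" })); // "Hello, World!"
console.log(fn({ greeting: "Hello" }));                // "Hello, !" - forgiving
```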
As an example, the following code does not throw an exception or give any indication of an error even though message is misspelled as massage in the binding expression.
app.run(function($interpolate){
  var fn = $interpolate("{{massage.length}}");
  var result = fn({message: "Hello!"});
  console.log(result);
});
Instead of an error, the interpolation function yields an empty string. These are the types of cases we might want to know about. Unfortunately, the functions interpreting and executing the binding expressions are hidden and have many special case branches. However, we can decorate the $interpolate service and wrap the interpolation functions it produces.
app.config(function($provide){
  $provide.decorator("$interpolate", function($delegate){

    var interpolateWrap = function(){
      var interpolationFn = $delegate.apply(this, arguments);
      if(interpolationFn) {
        return interpolationFnWrap(interpolationFn, arguments);
      }
    };

    var interpolationFnWrap = function(interpolationFn, interpolationArgs){
      return function(){
        var result = interpolationFn.apply(this, arguments);
        var log = result ? console.log : console.warn;
        log.call(console, "interpolation of " + interpolationArgs[0].trim(),
                 ":", result.trim());
        return result;
      };
    };

    angular.extend(interpolateWrap, $delegate);
    return interpolateWrap;
  });
});
The wrapping function will log every execution that produces a string and warn about every interpolation that produces a falsy empty string (which is what happens if there is a typo or an undefined member on scope).
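The decoration pattern itself is easy to see in plain JavaScript. The following sketch (not the Angular decorator above, just the underlying technique) wraps a function-producing factory so every produced function reports empty results:

```javascript
// The decoration technique in plain JavaScript: wrap a factory so
// every function it produces records a warning when its result
// comes back empty.
var warnings = [];

function decorate(factory) {
  return function () {
    var producedFn = factory.apply(this, arguments);
    var factoryArgs = arguments;
    return function () {
      var result = producedFn.apply(this, arguments);
      if (result === "") {
        warnings.push("empty result for: " + factoryArgs[0]);
      }
      return result;
    };
  };
}

// A stand-in "interpolate"-like factory to decorate: produces a
// function that looks up a key, yielding "" when it is missing.
function makeLookup(key) {
  return function (obj) { return obj[key] === undefined ? "" : obj[key]; };
}

var decoratedLookup = decorate(makeLookup);
var fn = decoratedLookup("massage");          // note the typo
fn({ message: "Hello!" });                    // yields "", records a warning
console.log(warnings);                        // ["empty result for: massage"]
```

The Angular version is the same shape: $delegate plays the part of the original factory, and the decorator returns a wrapped replacement.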
Now let’s see what happens with the following markup in a template. Note that foobar is not a property on scope and represents a bad binding expression we want to catch.
<span>{{ message }}</span>
<span>{{1 + 2}}</span>
<span>{{getMessage() | uppercase}}</span>
<span>{{foobar}}</span>
<div ng-repeat="i in [0,1,2,3]">
  {{i * 2}}
</div>
The output in the developer tools looks like the following.
Now we can see all the interpolation activity on a page, and the activity producing empty strings will stand out. You probably don’t want to ship with an active decorator like the one in this post, but it can make some debugging chores quick and easy.
In a previous post about testing I mentioned that route resolves can make authoring unit tests for a controller easier. Resolves can also help the user experience.
A resolve is a property you can attach to a route in both ngRoute and the more robust ui.router. A resolve contains one or more promises that must resolve successfully before the route will change. This means you can wait for data to become available before showing a view, and simplify the initialization of the model inside a controller, because the initial data is given to the controller instead of the controller needing to go out and fetch the data.
As an example, let’s use the following simple service which uses $q to simulate the async work required to fetch some data.
app.factory("messageService", function($q){
  return {
    getMessage: function(){
      return $q.when("Hello World!");
    }
  };
});
And now the routing configuration that will use the service in a resolve.
$routeProvider
  .when("/news", {
    templateUrl: "newsView.html",
    controller: "newsController",
    resolve: {
      message: function(messageService){
        return messageService.getMessage();
      }
    }
  })
Resolve is a property on the routing configuration, and each property on resolve can be an injectable function (meaning it can ask for service dependencies). The function should return a promise.
When the promise completes successfully, the resolve property (message in this scenario) is available to inject into a controller function. In other words, all a controller needs to do to grab data gathered during resolve is to ask for the data using the same name as the resolve property (message).
app.controller("newsController", function ($scope, message) {
  $scope.message = message;
});
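Matching a resolved value to a controller parameter by name can be sketched in plain JavaScript. This is only a sketch of the idea, not Angular's actual injector (which also handles minification-safe annotations), but it shows how parameter names can drive the lookup:

```javascript
// A sketch of name-based injection: read the parameter names out of
// the function source, then look each one up in a map of resolved
// values before invoking the function.
function invokeWithResolves(fn, resolves) {
  var source = fn.toString();
  var params = source
    .slice(source.indexOf("(") + 1, source.indexOf(")"))
    .split(",")
    .map(function (p) { return p.trim(); })
    .filter(function (p) { return p.length > 0; });
  var args = params.map(function (name) { return resolves[name]; });
  return fn.apply(null, args);
}

var resolves = { message: "Hello World!" };

var result = invokeWithResolves(function (message) {
  return "controller received: " + message;
}, resolves);

console.log(result); // "controller received: Hello World!"
```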
You can work with multiple resolve properties. As an example, let's introduce a second service. Unlike the messageService, this service is a little bit slow.
app.factory("greetingService", function($q, $timeout){
  return {
    getGreeting: function(){
      var deferred = $q.defer();
      $timeout(function(){
        deferred.resolve("Allo!");
      }, 2000);
      return deferred.promise;
    }
  };
});
Now the resolve in the routing configuration has two promise producing functions.
.when("/news", {
  templateUrl: "newsView.html",
  controller: "newsController",
  resolve: {
    message: function(messageService){
      return messageService.getMessage();
    },
    greeting: function(greetingService){
      return greetingService.getGreeting();
    }
  }
})
And the associated controller can ask for both message and greeting.
app.controller("newsController", function ($scope, message, greeting) {
  $scope.message = message;
  $scope.greeting = greeting;
});
Although there are benefits to moving code out of a controller, there are also drawbacks to having code inside the route definitions. For controllers that require a complicated setup I like to use a small service dedicated to providing resolve features for a controller. The service relies heavily on promise composition and might look like the following.
app.factory("newsControllerInitialData", function(messageService, greetingService, $q) {
  return function() {
    var message = messageService.getMessage();
    var greeting = greetingService.getGreeting();
    return $q.all([message, greeting]).then(function(results){
      return {
        message: results[0],
        greeting: results[1]
      };
    });
  };
});
Not only is the code inside a service easier to test than the code inside a route definition, but the route definitions are also easier to read.
.when("/news", {
  templateUrl: "newsView.html",
  controller: "newsController",
  resolve: {
    initialData: function(newsControllerInitialData){
      return newsControllerInitialData();
    }
  }
})
And the controller is also easy.
app.controller("newsController", function ($scope, initialData) {
  $scope.message = initialData.message;
  $scope.greeting = initialData.greeting;
});
One of the keys to all of this working is $q.all, which is a beautiful way to compose promises and run requests in parallel.
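The same composition works with standard JavaScript promises, where Promise.all plays the role of $q.all. A sketch, with the two services stubbed out:

```javascript
// The composition pattern with standard promises: Promise.all runs
// the requests in parallel and resolves with an array of results
// once every promise has resolved.
function getMessage() {
  return Promise.resolve("Hello World!");
}
function getGreeting() {
  return Promise.resolve("Allo!");
}

function initialData() {
  return Promise.all([getMessage(), getGreeting()]).then(function (results) {
    return { message: results[0], greeting: results[1] };
  });
}

initialData().then(function (data) {
  console.log(data.message, data.greeting);
});
```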
There are a few aspects of unit testing AngularJS controllers that have made me uncomfortable over time. In this post I’ll describe some of these issues and what I’ve been trying on my current project to put the code in an acceptable state, as well as some general tips I’ve found useful.
One approach to testing a controller with Jasmine is to use the module and inject helpers in a beforeEach block to create the dependencies for a controller. Most controllers will need multiple beforeEach blocks to set up the environment for different testing scenarios (the happy day scenario, the failing network scenario, the bad input scenario, and so on). If you follow the pattern found in demo projects you'll start to see too much code duplication in these setup blocks.
I'm comfortable with some amount of duplication inside of tests, as are others; however, the constant use of inject to bring in dependencies that 80% of the tests need becomes a noisy tax.
What I've been doing recently is using a single inject call per spec file in an opening beforeEach block. This block manually hoists all dependencies into the scope for the other tests, and also runs $httpBackend verifications after each test, even if they aren't needed in every test.
var $rootScope, $controller, $q, $httpBackend, appConfig, scope;

beforeEach(inject(function (_$rootScope_, _$controller_, _$q_,
                            _$httpBackend_, _appConfig_) {
  $q = _$q_;
  $rootScope = _$rootScope_;
  $controller = _$controller_;
  $httpBackend = _$httpBackend_;
  appConfig = _appConfig_;
  scope = $rootScope.$new();
}));

afterEach(function () {
  $httpBackend.verifyNoOutstandingExpectation();
  $httpBackend.verifyNoOutstandingRequest();
});
Now each scenario and the tests inside have all the core objects they need to setup the proper environment for testing. There is even a fresh scope object waiting for every test, and the rest of the test code no longer needs inject.
describe("the reportListController", function () {

  beforeEach(function () {
    $httpBackend.when("GET", appConfig.reportUrl).respond([{}, {}, {}]);
    $controller("reportListController", {
      $scope: scope
    });
    $httpBackend.flush();
  });

  it("should retrieve the reports to list", function() {
    expect(scope.reports.length).toBe(3);
  });
});
I believe this approach has been beneficial. The tests are leaner, easier to read, and easier to maintain.
Notice the injected function in the opening beforeEach uses parameter names like _$rootScope_ and _$q_. The inject function knows how to strip the underscores to get to the real service names, and since the parameters use underscores the variables in the outer scope can use pretty names like $rootScope and $q.
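The stripping logic itself is trivial, something like the following sketch (not the actual angular-mocks implementation):

```javascript
// A sketch of the underscore-stripping convention: a name wrapped
// in underscores refers to the same service as the bare name.
function stripUnderscores(name) {
  if (name.charAt(0) === "_" && name.charAt(name.length - 1) === "_") {
    return name.substring(1, name.length - 1);
  }
  return name;
}

console.log(stripUnderscores("_$rootScope_")); // "$rootScope"
console.log(stripUnderscores("$q"));           // "$q"
```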
Sometimes I’ve seen examples using $controller that pass every dependency in the second parameter.
$controller("reportListController", {
  $scope: scope,
  $http: $http,
  $q: $q
});
Chances are the above code only really needs to pass $scope, because the injector will fill in the rest of the services as appropriate.
$controller("reportListController", {
  $scope: scope
});
Again there is less test code and the code is easier to maintain. Controller dependencies can change, but the test code doesn't. Building your own mocks library, like angular-mocks but for custom services, helps this scenario, too. If you don't need to "own" the dependency in a test, don't bother setting up and passing the dependency to $controller.
Perhaps controversial, but I’ve started to write tests that do not mock services. Instead, I test a controller and most of the services the controller requires as a single unit. First let me give some details on what I mean, and then explain why I think this works well.
Let’s assume we have a reportListController that uses $scope, as well as two custom services that themselves use $http behind the scenes to communicate with the web server. Instead of having long complicated scenario setups with mocks and stubs, I usually only focus on a mock HTTP backend and scope.
$httpBackend.when("GET", appConfig.reportUrl).respond(reports);
$controller("reportListController", {
  $scope: scope
});
$httpBackend.flush();
This is a scenario for deleting reports, and the tests are relatively simple.
it("should delete the report", function () {
  $httpBackend.when("DELETE", appConfig.reportUrl + "/1").respond(200);
  scope.delete(reports[0]);
  $httpBackend.flush();
  expect(scope.reports.length).toBe(1);
});

it("should show a message", function () {
  $httpBackend.when("DELETE", appConfig.reportUrl + "/1").respond(200);
  scope.delete(reports[0]);
  $httpBackend.flush();
  expect($rootScope.alerts.length).toBe(1);
});
These two tests are exercising a number of logical pieces in the application. They test not just the controller but also the model, how the model interacts with two different services, and how those services interact and respond to HTTP traffic.
I'm sure a few people will think these tests are blasphemous and the model and the services should be tested in isolation. However, I believe it is this type of right-versus-wrong thinking centered around "best practices" that severely limits the acceptance of unit testing in wider circles. After years of mock object frameworks in other languages I've learned to avoid mocks whenever possible. Mock objects and mock methods generally:
What I want to test in these scenarios is how the code inside the controller interacts with the services, because most of the logic inside is focused on orchestrating the underlying services to perform useful work. The code inside has to call service methods at the right time and handle promises appropriately. I want to be able to change the implementation details without reworking the tests. These tests work by providing an input (delete this report) and looking at the output (the number of reports), and only need some fake HTTP message processing to fill in the gaps.
If I had written two mock services for the controller and tested the services in isolation, I’d have more test code but less confidence that the system actually works.
Testing controllers that make service calls when instantiated can be a bit tricky, because everything has to be setup and in place before using the $controller service to instantiate the controller itself.
Using promise resolves in a route definition not only makes for an arguably better user experience, it also makes for easier controller testing, because the controller is given everything it needs to get started. Both ngRoute and ui.router support resolves in a route definition, but since this post is already long in the tooth, we'll look at using resolves in a future post.
Let's divide the world wide web into two categories.
These are the web sites that you must use because you are a captive customer. The web site for your primary bank would be one example.
These are the sites that play in industries like banking and travel, but all they have to attract you is the site. They don't own flying machines or safety deposit boxes. A site like this in the travel industry only exists to provide the best experience possible in searching for tickets and reservations.
Not surprisingly, the sites that want you typically have more fully featured websites that are easier to use than the websites of those who have you.
As an example, let’s look at zoomed out views of flight search results. The left hand side of the picture below is the search results on Kayak.com (just using them as an example). On the right hand side is the search results of a major U.S. air carrier (instead of naming names, let’s call them Unified Airlines).
The areas highlighted in green are flight options with details or general information to help select a flight.
The areas shaded in blue are search and filtering controls to help narrow in on the perfect flight.
The areas in red are advertisements, fee warnings, upsell opportunities, pitches for Unified Airlines credit cards, and wasted white space.
We can call the total amount of green space even, though Kayak displays twice as many flight results as Unified Airlines above the fold.
The blue space winner is clearly Kayak. Unified doesn’t provide nearly as many filtering and sorting controls as Kayak and the majority of the controls they do provide are not only at the bottom of the page, but they also aren’t as interactive and require the browser to render an entirely new search results page.
It looks like the priority for Kayak’s development team is to build a great website for finding flights. The priority given to the Unified development team is to sign up travelers for a credit card.
Let’s pick on another company, this one I’ll call Charriott Hotels. I frequently stay at Charriott properties and use their web site to book rooms. On slow WiFi connections, the desktop version of the site takes forever to load, and a quick peek at the network tab of the developer tools explains why (the image to the right is a zoomed out view).
Charriott's home page sends out 57 network requests for more than 900KB of total payload. However, this is not bad. Most travel sites, even the ones that want you, make north of 50 requests for around 1MB of payload by the time all the destination vacation pictures and analytics scripts have finished. Plus, Charriott minifies most of its scripts and CSS, gzips and cache-controls its static content, and bundles some (but not all) of its files together.
Still, even casual observation shows areas for improvement. The largest asset download is a 267KB download of minified script that includes:
I’m not a fan of optimizing script downloads just to save a kilobyte here and there, but for the home page of a major hotel brand I’d try to avoid loading two large script frameworks with overlapping functionality. The entire file must be downloaded and parsed before the home page is usable, and I’m certain it is possible to make this happen with less than 1/3 the amount of script currently in the page.
Unified and Charriott actually have good web sites for the large companies that they are. Time and time again I see large company web sites that are disasters, even from technology companies that understand design and computers. I don't believe this is the fault of the development teams. I believe bad web sites are the product of politics, design-by-committee processes, and the inherent difficulty of managing a large IT staff. The teams can make it happen, they just need the opportunity and an environment to make it happen.
I’ve started a collection of videos here on the site, and I’m starting with short clips about the next version of JavaScript – ECMAScript 6. Currently the collection includes:
Topics coming soon include classes, and arrow functions (my favorite feature, for now).
I’ve been doing some work with Windows Azure Media services and making progress, although it takes some time and experimentation to work through the vocabulary of the API, documentation, and code snippets.
1. Uploading and encoding video into media services can be completed programmatically using the CloudMediaContext class from the NuGet package WindowsAzure.MediaServices.
2. Uploading creates an asset in media services. Each asset can contain multiple files, but you only want one video or audio file in the uploaded asset. WAMS will create a container in blob storage for each asset, so it seems best to create a new storage account dedicated to each media service.
3. For encoding you need to select a media processor by name, and perhaps a preset configuration by name. You can list the processor names with some C# code; the values I currently see are:
Windows Azure Media Encoder 3.7
Windows Azure Media Packager 2.8
Windows Azure Media Encryptor 2.8
Windows Azure Media Encoder 2.3
Storage Decryption 1.7
Preset names took some digging around, but I eventually found a complete list for the WAME at Media Services Encoder System Presets.
What follows is a class that wraps CloudMediaContext and can list assets including the files inside, upload an asset, encode an asset, and list the available media processors. It is experimental code that assumes it is working inside a console application, but that behavior is easy to refactor out. Some of the LINQ queries are strange, but they work around the wonkiness of OData.
public class MediaService
{
    public MediaService()
    {
        _context = new CloudMediaContext(
            Configuration.AzureAccountName,
            Configuration.AzureAccountKey
        );
    }

    public void Upload(string filePath)
    {
        var assetName = Path.GetFileNameWithoutExtension(filePath) + "_i";
        var asset = _context.Assets.Create(assetName, AssetCreationOptions.None);
        var assetFileName = Path.GetFileName(filePath);
        var assetFile = asset.AssetFiles.Create(assetFileName);
        assetFile.UploadProgressChanged += (sender, args) =>
            Console.WriteLine("Up {0}:{1}", assetName, args.Progress);
        assetFile.Upload(filePath);
    }

    public void Encode(string filePath)
    {
        var assetName = Path.GetFileNameWithoutExtension(filePath) + "_i";
        var asset = GetAsset(assetName);
        var job = _context.Jobs.Create("Encoding job " + assetName);
        var processor = GetMediaProcessor();
        var task = job.Tasks.AddNew("Encoding task " + assetName, processor,
                                    Configuration.PresetName, TaskOptions.None);
        task.InputAssets.Add(asset);
        task.OutputAssets.AddNew(assetName + "_o", AssetCreationOptions.None);
        job.StateChanged += (sender, args) =>
            Console.WriteLine("Job: {0} {1}", job.Name, args.CurrentState);
        job.Submit();

        var progress = job.GetExecutionProgressTask(CancellationToken.None);
        progress.Wait();
    }

    public void ListMedia()
    {
        foreach (var asset in _context.Assets)
        {
            Console.WriteLine("{0}", asset.Name);
            foreach (var file in asset.AssetFiles)
            {
                Console.WriteLine("\t{0}", file.Name);
            }
        }
    }

    public void ListMediaProcessors()
    {
        Console.WriteLine("Available processors are:");
        foreach (var processor in _context.MediaProcessors)
        {
            Console.WriteLine("\t{0} {1}", processor.Name, processor.Version);
        }
    }

    IMediaProcessor GetMediaProcessor()
    {
        var processors = _context.MediaProcessors
                                 .Where(p => p.Name == Configuration.EncoderName)
                                 .ToList()
                                 .OrderBy(p => new Version(p.Version));

        if (!processors.Any())
        {
            Console.WriteLine("Could not find processor {0}", Configuration.EncoderName);
            ListMediaProcessors();
            Environment.Exit(-1);
        }

        return processors.First();
    }

    IAsset GetAsset(string name)
    {
        var assets = _context.Assets.Where(a => a.Name == name).ToList();
        if (!assets.Any())
        {
            Console.WriteLine("Could not find asset {0}", name);
            Environment.Exit(-1);
        }
        return assets.First();
    }

    readonly CloudMediaContext _context;
}
The above class also assumes you have a Configuration class to read config information, which looks like the following.
<appSettings>
  <add key="accountName" value="media services account name"/>
  <add key="accountKey" value="media services key"/>
  <add key="encoderName" value="Windows Azure Media Encoder"/>
  <add key="presetName" value="H264 Broadband SD 4x3"/>
</appSettings>