Creating a custom directive in AngularJS is easy. Let’s start with the HTML for a simple example.
{{ message }}
<div otc-dynamic></div>
The above markup is using a directive named otcDynamic, which only provides a template.
app.directive("otcDynamic", function(){ return { template:"<button ng-click='doSomething()'>{{label}}</div>" }; });
When combined with a controller, the page will allow the user to click the button and see a message appear on the screen.
app.controller("mainController", function($scope){ $scope.label = "Please click"; $scope.doSomething = function(){ $scope.message = "Clicked!"; }; });
Next, imagine the otcDynamic directive can’t use a static template. The directive needs to look at some boolean flags, user data, or service information, and dynamically construct the template markup. In the following example, we’ll only simulate this scenario. We are still using a static string, but we’ll pretend we created the string dynamically and use element.html to place the markup into the DOM.
app.directive("otcDynamic", function(){ return { link: function(scope, element){ element.html("<button ng-click='doSomething()'>{{label}}</button>"); } }; });
The above sample no longer functions correctly. It renders a button displaying the literal text {{label}}, and clicking the button does nothing.
Markup has to go through Angular’s compilation phase before directives like ng-click are activated and binding expressions like {{label}} are evaluated.
The $compile service performs this compilation. Invoking $compile against markup produces a function you can use to bind the markup against a particular scope (what Angular calls a linking function). After linking, you’ll have DOM elements you can place into the browser.
app.directive("otcDynamic", function($compile){ return{ link: function(scope, element){ var template = "<button ng-click='doSomething()'>{{label}}</button>"; var linkFn = $compile(template); var content = linkFn(scope); element.append(content); } } });
If you have to use $compile in response to a raw element event, like a click handler registered outside of Angular, you’ll also need to invoke scope.$apply so the work runs inside the proper scope lifecycle.
app.directive("otcDynamic", function($compile) { var template = "<button ng-click='doSomething()'>{{label}}</button>"; return{ link: function(scope, element){ element.on("click", function() { scope.$apply(function() { var content = $compile(template)(scope); element.append(content); }) }); } } });
Long time buyer, first time writer.
Over the years I’ve struggled each time I’ve decided it’s time to buy a new ThinkPad. I’ve struggled because it used to be difficult to choose from so many solid entries in the T, X, and W lines.
These days I’m looking at lenovo.com and struggling to find a laptop computer anyone is happy to own.
Take a look at the star ratings on the T series. The combined score is 12/20.
We’ll round up the stars in the X series and give these ultrabooks a combined score of 13/20.
These scores are in the “meh” category, and not what I’d expect from Lenovo’s flagship and premium brand. Reading through the reviews you’ll find most people are happy with the performance, the battery life, the selection of ports, and the build quality. But I’m sure you’ve also noticed the copious rants about the keyboards you are designing and shipping on today’s models.
Perhaps we expect more from ThinkPads because the ThinkPad name was synonymous with “great keyboard”. Perhaps that’s why shortcut key aficionados were drawn to the ThinkPad line in the first place. We don’t need mice or track pads when we can use Alt+F4 or Alt+Insert to make things happen.
Now you’ve removed the Insert key from the X1 and turned the function keys into a capacitive flat-strip LED light show.
And moving the Home and End keys to the left side of the keyboard? I have no words to describe my sadness. I’ll instead use Peter Bright’s words from his article “Stop trying to innovate keyboards. You’re just making them worse”.
“I think these kind of keyboard games betray a fundamental misunderstanding of how people use keyboards. Companies might think that they're being innovative by replacing physical keys with soft keys, and they might think that they're making the keyboard somehow "easier to use" by removing keys from the keyboard. But they're not.”
Maybe the world has changed, and the majority of productive professionals do web conferences all day and stream Netflix movies all night. Perhaps this is the product line you need to stay alive in a world where the majority are consumed by consumption and touch.
Yet, I hope moving forward you will delight customers with qualities and features that are unique to ThinkPads, and not continue with these innovations that transform your products into inferior imitations of other brands.
With great sincerity,
--s
One of the fantastic aspects of cloud computing in general is the capability to automate every step of a process. Scott Hanselman and Brady Gaster have both written about the Windows Azure Management Libraries; see Penny Pinching in the Cloud and Brady’s announcement post for some details.
The management libraries are wrappers around the Azure HTTP API and are a boon for businesses that run products on Azure. Not only do the libraries allow for automation, but they also allow you (or rather me, at this very minute) to create custom applications for a business to manage its Azure services. Although the Azure portal is full of features, it can be overwhelming to someone who doesn’t work with Azure on a daily basis. An even bigger issue is that there is no concept of roles in the portal. If you want to give someone the ability to shut down a VM from the portal, you also give them the ability to do anything in the portal, including the ability to accidentally delete every service the business holds dear.
A custom portal solves both of the above problems because you can build something tailored to the services and vocabulary the business uses, as well as perform role checks and restrict dangerous activities. The custom portal will need a management certificate to perform activities against the Azure API, and the easiest approach to obtaining a management certificate is to download a publish settings file.
Once you have a publish settings file, you can write some code to parse the information inside and make the data available to higher-level management activities. There are a few libraries out there that can work with publish settings files, but I have some special requirements and want to work with the files directly. The contents of a publish settings file look like the following (note there can be multiple Subscription elements inside).
<PublishData>
  <PublishProfile SchemaVersion="2.0" PublishMethod="AzureServiceManagementAPI">
    <Subscription ServiceManagementUrl="https://management.core.windows.net"
                  Id="...guid..."
                  Name="Happy Subscription Name"
                  ManagementCertificate="...base64 encoded certificate data..." />
  </PublishProfile>
</PublishData>
Let’s use the following code as an example goal for how my custom publish settings classes should work. I want to read a .publishsettings file, enumerate the subscriptions inside, and use each subscription’s credentials to list the compute services in that subscription.
var fileContents = File.ReadAllText("odetocode.publishsettings");
var publishSettingsFile = new PublishSettingsFile(fileContents);

foreach (var subscription in publishSettingsFile.Subscriptions)
{
    Console.WriteLine("Showing compute services for: {0}", subscription.Name);

    var credentials = subscription.GetCredentials();
    using (var client = new ComputeManagementClient(credentials, subscription.ServiceUrl))
    {
        var services = client.HostedServices.List();
        foreach (var service in services)
        {
            Console.WriteLine("\t{0}", service.ServiceName);
        }
    }
}
It is the PublishSettingsFile class that parses the XML and creates PublishSettings objects. I’ve removed some error handling from the class so it doesn’t appear too intimidating.
public class PublishSettingsFile
{
    public PublishSettingsFile(string fileContents)
    {
        var document = XDocument.Parse(fileContents);
        _subscriptions = document.Descendants("Subscription")
                                 .Select(ToPublishSettings).ToList();
    }

    private PublishSettings ToPublishSettings(XElement element)
    {
        var settings = new PublishSettings();
        settings.Id = Get(element, "Id");
        settings.Name = Get(element, "Name");
        settings.ServiceUrl = GetUri(element, "ServiceManagementUrl");
        settings.Certificate = GetCertificate(element, "ManagementCertificate");
        return settings;
    }

    private string Get(XElement element, string name)
    {
        return (string)element.Attribute(name);
    }

    private Uri GetUri(XElement element, string name)
    {
        return new Uri(Get(element, name));
    }

    private X509Certificate2 GetCertificate(XElement element, string name)
    {
        var encodedData = Get(element, name);
        var certificateAsBytes = Convert.FromBase64String(encodedData);
        return new X509Certificate2(certificateAsBytes);
    }

    public IEnumerable<PublishSettings> Subscriptions
    {
        get { return _subscriptions; }
    }

    private readonly IList<PublishSettings> _subscriptions;
}
The PublishSettings class itself is relatively simple. It mostly holds data, but can also create the credentials object needed to communicate with Azure.
public class PublishSettings
{
    public string Id { get; set; }
    public string Name { get; set; }
    public Uri ServiceUrl { get; set; }
    public X509Certificate2 Certificate { get; set; }

    public SubscriptionCloudCredentials GetCredentials()
    {
        return new CertificateCloudCredentials(Id, Certificate);
    }
}
In the future I’ll try to write more about the custom portal I’m building with ASP.NET MVC, WebAPI, and AngularJS. It has some interesting capabilities.
One of the objects you can pass along in the config argument of an $http operation is a timeout promise. If the promise resolves, Angular will cancel the corresponding HTTP request.
Sounds easy, but in practice there are a few complications. Before we get to the complications, let’s look at some easy code. Imagine the following inside of a controller where a user can click a Cancel button.
var canceller = $q.defer();

$http.get("/api/movies/slow/2", { timeout: canceller.promise })
     .then(function(response){
         $scope.movie = response.data;
     });

$scope.cancel = function(){
    canceller.resolve("user cancelled");
};
The code passes the canceller promise as the timeout option in the config object. If the user clicks cancel before the request completes, we’ll see the cancellation in the Network tab of the developer tools.
The complications come in real life scenarios where we have to manage multiple requests, expose the ability to cancel an operation to other client components, and figure out whether a given request has been cancelled.
First, let’s look at a service that wraps $http to provide domain-oriented operations. Typically, services that talk through $http return simple promises, but now we need to return an object that provides both a promise for the outstanding request and a method to cancel the request.
app.factory("movies", function($http, $q){ var getById = function(id){ var canceller = $q.defer(); var cancel = function(reason){ canceller.resolve(reason); }; var promise = $http.get("/api/movies/slow/" + id, { timeout: canceller.promise}) .then(function(response){ return response.data; }); return { promise: promise, cancel: cancel }; }; return { getById: getById }; });
A client of the service might need to track multiple outstanding requests, as in a UI like the following that allows a user to start several requests.
<div ng-controller="mainController"> <button ng-click="start()"> Start Request </button> <ul> <li ng-repeat="request in requests"> <button ng-click="cancel(request)">Cancel</button> </li> </ul> <ul> <li ng-repeat="m in movies">{{m.title}}</li> </ul> </div>
The following code will manage the UI and allow the user to cancel any outstanding request.
app.controller("mainController", function($scope, movies) { $scope.movies = []; $scope.requests = []; $scope.id = 1; $scope.start = function(){ var request = movies.getById($scope.id++); $scope.requests.push(request); request.promise.then(function(movie){ $scope.movies.push(movie); clearRequest(request); }, function(reason){ console.log(reason); }); }; $scope.cancel = function(request){ request.cancel("User cancelled"); clearRequest(request); }; var clearRequest = function(request){ $scope.requests.splice($scope.requests.indexOf(request), 1); }; });
The logic gets messy and could use some additional encapsulation to keep request management from overwhelming the controller, but this is the essence of what you’d need to do to allow cancellation of $http operations.
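One option, sketched below and assuming the movies service stays as-is, is to move the bookkeeping into a small requestTracker factory (a hypothetical name, not part of Angular), so a controller only starts, cancels, and consumes requests.

app.factory("requestTracker", function(){
    var requests = [];

    var forget = function(request){
        var index = requests.indexOf(request);
        if(index > -1){
            requests.splice(index, 1);
        }
    };

    return {
        // the collection a controller can expose for data binding
        requests: requests,

        // remember an outstanding request and drop it once it settles
        track: function(request){
            requests.push(request);
            var remove = function(){ forget(request); };
            request.promise.then(remove, remove);
            return request;
        },

        cancel: function(request, reason){
            request.cancel(reason);
        }
    };
});

A controller could then assign $scope.requests = requestTracker.requests and call requestTracker.track(movies.getById(id)) to start a request, leaving the controller with mostly presentation concerns.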
Question from the mailbox: "After many years as a server side developer and DBA, is it time to make a switch to JavaScript and focus on client side development?"
I think you need to work in an environment you enjoy. Some people do not enjoy client side development, and that's ok. There is still plenty of work to do on the server and in the services that run there.
The funny thing is, you don't have to leave the server to learn JavaScript. Take a look at technologies like NodeJS or MongoDB first. Having some JavaScript experience will be good, and I think there is no better way to learn a new language than to use the language inside a paradigm you already understand.
Once you know more about JavaScript the language you should take some time to explore the HTML development landscape and try working with some of the tools and frameworks, even if you have to use some of your spare time.
Maybe you’ll find you like the new world; only then would it be time for a switch, because in the end you gotta do what you enjoy...
In my recent work I’ve been using two approaches to handling errors and exceptions. The ultimate goal is to not let an error go unnoticed.
First up is a decorator for the $exceptionHandler service. We’ve looked at other decorators in a previous post. This specific decorator will send all errors to $rootScope for data binding before allowing the call to fall through to the default implementation (addError is a custom method on $rootScope, while $delegate represents the service being decorated). You could also try to send the errors back to the host and thereby collect errors from all clients.
app.config(function($provide){
    $provide.decorator("$exceptionHandler", function($delegate, $injector){
        return function(exception, cause){
            var $rootScope = $injector.get("$rootScope");
            $rootScope.addError({ message: "Exception", reason: exception });
            $delegate(exception, cause);
        };
    });
});
Notice the use of $injector in the above code. Fetching $rootScope through $injector is required because listing both $exceptionHandler and $rootScope as dependencies of the decorator would create a circular dependency error.
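The addError method itself isn’t part of Angular. A minimal sketch, assuming errors collect in an array on $rootScope for views to bind against, might look like this.

app.run(function($rootScope){
    $rootScope.errors = [];

    // error is expected to look like { message: ..., reason: ... }
    $rootScope.addError = function(error){
        $rootScope.errors.push(error);
    };
});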
I’m a fan of using catch at the end of a chain of promises. One reason is that catch is the only sure-fire way to process all possible errors. Let’s use the following code as an example.
someService
    .doWork()
    .then(workComplete, workError);
Even though an error handler (workError) is provided to the then method, the error handler doesn’t help if something goes wrong inside of workComplete itself . . .
var workComplete = function(result){
    return $q.reject("Feeling lazy");
};
. . . because we are already inside a success handler for the previous promise. I like the catch approach because it handles this scenario and also makes it easier to see that an error handler is in place.
someService
    .doWork()
    .then(workComplete)
    .catch(errors.catch("Could not complete work!"));
Since so many catch handlers started to look alike, I made an errors service to encapsulate some of the common logic.
app.factory("errors", function($rootScope){ return { catch: function(message){ return function(reason){ $rootScope.addError({message: message, reason: reason}) }; } }; });
And now asynchronous activities can present meaningful error messages to the user when an operation fails.
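With an errors collection like the one sketched earlier on $rootScope, any view can surface the messages through data binding, perhaps with markup along these lines.

<ul class="errors" ng-show="errors.length">
    <li ng-repeat="error in errors">{{error.message}}</li>
</ul>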
Today’s tip comes straight from the AngularJS documentation, but I’ve seen a few people miss the topic.
Inside an ngRepeat directive, the special properties $first, $last, and $middle are available. These properties hold boolean values ($first is true only for the first repeated element). Two more boolean properties, $even and $odd, are also available.
Another useful property is the $index property, which contains the offset of each repeated element and starts at 0 (like all good offsets).
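As a quick illustration, $odd combines well with ngClass to stripe alternating rows (striped here is just a CSS class you would define yourself).

<tr ng-repeat="item in items" ng-class="{ striped: $odd }">
    <td>{{item.title}}</td>
</tr>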
You can view the following markup live in this Plunker. The code is using $first and $last to avoid showing clickable up and down prompts when an item is in the first or last position. The markup is also using $index to grab the offset of a clicked item.
<table>
    <tr ng-repeat="item in items">
        <td>{{item.title}}</td>
        <td><span ng-show="!$first" ng-click="moveUp($index)">up</span></td>
        <td><span ng-show="!$last" ng-click="moveDown($index)">down</span></td>
    </tr>
</table>
Combined with the following controller, the markup lets you move items up and down by clicking the up and down prompts.
module.controller("mainController", function($scope){ $scope.items = [ { title: "Item 1" }, { title: "Item 2" }, { title: "Item 3" }, { title: "Item 4" }, { title: "Item 5" }, ]; var move = function (origin, destination) { var temp = $scope.items[destination]; $scope.items[destination] = $scope.items[origin]; $scope.items[origin] = temp; }; $scope.moveUp = function(index){ move(index, index - 1); }; $scope.moveDown = function(index){ move(index, index + 1); }; });