Dynamic Routes with AngularJS

Monday, March 24, 2014 by K. Scott Allen

There is a simple rule in AngularJS that trips up many people simply because they aren’t aware of it. The rule is that every module has two phases: a configuration phase and a run phase. During the configuration phase you can only use service providers and constants, while during the run phase you only have access to services, not service providers.
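
To make the rule concrete, here is a minimal sketch (the endpoint name is made up): $httpProvider is available during the configuration phase, while the $http service it manufactures is only available during the run phase.

var app = angular.module("app", []);

// Configuration phase: only providers and constants can be injected.
app.config(function ($httpProvider) {
    $httpProvider.defaults.headers.common["X-Requested-With"] = "XMLHttpRequest";
});

// Run phase: services are available, providers are not.
app.run(function ($http) {
    $http.get("startup").then(function (response) {
        // use response.data here
    });
});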

One scenario where the rule will trip people up is the scenario where an application needs flexible, dynamic routes. Perhaps the routes are tailored to a user’s roles, like giving additional routes to a superuser, but regardless of the specifics you probably need some information from the server to generate the routes. The typical approach to server communication is to use the $http service, so a first attempt might be to write a config function that uses $http and $routeProvider to put together information on the available routes.

app.config(function ($http, $routeProvider) {

    var routes = $http.get("userInfo");
    // ... register routes with $routeProvider
                   
});

The above code will only generate an error.

Error: [$injector:unpr] Unknown provider: $http

Eventually you’ll figure out that a config function only has access to $httpProvider, not $http. Then you might try a run block, which does give you access to $http for server communication, but …

app.run(function ($http, $routeProvider) {

    var routes = $http.get("userInfo");
    // ... register routes with $routeProvider
    
});

… there is no access to providers during a run block.

[$injector:unpr] Unknown provider: $routeProviderProvider

There are a few different approaches to tackling this problem.

One approach would be to use a different wrapper for server communication, like jQuery’s $.get, perhaps combined with manual bootstrapping of the Angular application to ensure you have everything you need from the server before the application starts.
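
As a rough sketch of that first idea (reusing the userInfo endpoint from the earlier snippet, and assuming the page does not use the ng-app attribute): fetch the route data with jQuery, register the routes in a config function, and then bootstrap Angular by hand.

$.get("userInfo").then(function (routes) {

    var app = angular.module("app", ["ngRoute"]);

    // The route data is already in hand when the config phase runs.
    app.config(function ($routeProvider) {
        routes.forEach(function (route) {
            $routeProvider.when(route.path, route.properties);
        });
    });

    angular.element(document).ready(function () {
        angular.bootstrap(document, ["app"]);
    });
});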

A Solution With C# and Razor

Another approach would be to use server side rendering to embed the information you need into the shell page of the application. For example, let’s say you are using the following class definitions.

public class ClientRoute
{
    public string Path { get; set; }
    public ClientRouteProperties Properties { get; set; }
}

public class ClientRouteProperties
{
    public string TemplateUrl { get; set; }
    public string Controller { get; set; }
    public string Resolve { get; set; }
}

And also a ClientRouteBuilder that can generate client side routes given the identity of a user.

public class ClientRouteBuilder
{
    public string BuildRoutesFor(IPrincipal user)
    {
        var routes = new List<ClientRoute>()
        {
            new ClientRoute { 
                Path = "/index",
                Properties = new ClientRouteProperties
                {
                    TemplateUrl = "index.html",
                    Controller = "IndexController"
                }
            }
            
            // ... more routes
        };

        if (user.IsInRole("admin"))
        {
            routes.Add(new ClientRoute
            {
                Path = "/admin",
                Properties = new ClientRouteProperties
                {
                    TemplateUrl = "admin.html",
                    Controller = "AdminController"
                }
            });
        }

        return JsonConvert.SerializeObject(routes, new JsonSerializerSettings()
        {
            ContractResolver = new CamelCasePropertyNamesContractResolver()
        });
    }
}

In a Razor view you can use the builder to emit a JavaScript data structure with all the required routes, and embed the JavaScript required to configure the application in the view as well.

<body ng-app="app">
    <div ng-view>
        
    </div>
    
    <script src="~/Scripts/angular.js"></script>
    <script src="~/Scripts/angular-route.js"></script>
    <script>
        (function() {

            var app = angular.module("app", ["ngRoute"]);

            /*** embed the routes ***/
            var routes = @Html.Raw(new ClientRouteBuilder().BuildRoutesFor(User));

            /*** register the routes ***/
            app.config(function ($routeProvider) {
                routes.forEach(function(route) {
                    $routeProvider.when(route.path, route.properties);
                });
                $routeProvider.otherwise({
                    redirectTo: routes[0].path
                });
            });
        }());    

    </script>
    @* Rest of the app scripts *@
</body>

And Remember

You can’t effectively enforce security on the client, so views and API calls still need authorization on the server to make sure a malicious user hasn’t manipulated the routes.

Durandal and Object.defineProperty

Thursday, March 20, 2014 by K. Scott Allen

DurandalJS continues to make dramatic improvements under the direction of lead architect Rob Eisenberg. The framework is easy to pick up since the API is small, well designed, fully featured, and uses technologies familiar to many JavaScript developers, like jQuery, Knockout, and RequireJS.

I’ve been working with Durandal 2.0.1, and my favorite feature by far is the observable plugin, which allows binding to plain JavaScript objects.

When creating the application, I only need to tell Durandal to use the observable plugin.

define(function(require) {
    var app = require("durandal/app");
    app.configurePlugins({        
        observable: true // <-
    });

    app.start().then(function() {
        app.setRoot("{viewModelName}", "entrance");
    });
});

And now all my view models can be plain, simple objects.

define(function(require) {
    var dataService = require("data/movieData");
    var viewModel = {

        movies: [],

        activate: function() {
            dataService.getAll().then(function(newMovies) {
                viewModel.movies = newMovies;
            });
        }
    };
    return viewModel;
});

How’s It Work?

At the core of the observable plugin is Object.defineProperty. This ES5 API requires a compatible browser, but fortunately even IE9 has support. The defineProperty method can build a property with get and set logic, as in the following example.

var person = {};

var propertyDefinition = function(value) {
    
    var get = function(){
        return value;
    };

    var set = function(newValue) {
        if(value != newValue) {
            value = newValue;
            console.log("Value changed to " + value);
        }
    };

    return {
        configurable: true,
        enumerable: true,
        get: get,
        set: set
    };    
};

Object.defineProperty(
    person, 
    "firstName", 
    propertyDefinition()
);

person.firstName = "Scott";
person.firstName = "Allen";
console.log(person.firstName);

With Durandal, you never have to worry about using defineProperty directly. Durandal’s observable plugin provides an API to convert all the properties of an object into Knockout compatible observables using defineProperty.

define(function(require){
    var observable = require("durandal/plugins/observable");

    var person = {
        firstName: "Scott",
        lastName: "Allen"
    };

    observable.convertObject(person);
    
    // can still use properties as properties instead of as functions
    person.firstName = "Ethan";
    console.log(person.firstName);
});

If we look at the object in the debugger, we’ll see the following.

[Screenshot: observable with defineProperty]

But even the above code is something you don’t need to write, because Durandal automatically converts view models into observable objects ready for two-way data binding before bindings are applied. There’s a lot to be said for frameworks that care about making things easy for the developer.

Rethinking Biggy

Wednesday, March 19, 2014 by K. Scott Allen

A few weeks ago Rob unveiled Biggy, a simple file based document store for .NET inspired by NeDB. Since then there have been a few additions, and Biggy now also works with relational databases and MongoDB.

Looking through the code and the enhancement list, I couldn’t help wondering if Biggy might benefit from a different design. I started to open an issue on GitHub based on a completely experimental branch I created, then decided it would be better off as a post.

How It Currently Works

Looking over the Biggy implementation, every different data store becomes coupled to an InMemoryList<T> class through inheritance. The coupling isn’t necessarily wrong, but it does complicate the implementation of each new data store. For example, for JSON storage the Add method has to remember to call into the base class in order to raise the proper events:

public void Add(T item) {
  var json = JsonConvert.SerializeObject(item);
  using (var writer = File.AppendText(this.DbPath)) {
    writer.WriteLine(json);
  }
  base.Add(item);
}

The inheritance relationship also complicates life for consumers, as InMemoryList supports both Clear and Purge methods in the API, but the JSON implementation only supports Clear. I’m not sure if this was intentional, but I did find it confusing.

I also thought an alternate approach that clearly defines the responsibilities of the in-memory data manager and the backing data store might be helpful…

Separating Lists From Stores

First we’ll start off with an abstraction that clearly defines the capabilities of a Biggy in-memory list.

public interface IBiggy<T> : IEnumerable<T>
{
    void Clear();
    int Count();
    T Update(T item);
    T Remove(T item);
    T Add(T item);
    IList<T> Add(IList<T> items);
    IQueryable<T> AsQueryable();

    event EventHandler<BiggyEventArgs<T>> ItemRemoved;
    event EventHandler<BiggyEventArgs<T>> ItemAdded;
    event EventHandler<BiggyEventArgs<T>> Changed;
    event EventHandler<BiggyEventArgs<T>> Loaded;
    event EventHandler<BiggyEventArgs<T>> Saved;
}

Methods like Add and Remove will return the affected item in case something changed, like if the underlying store populates a key or version field. There are two versions of Add, because batch inserts are a common scenario.

The implementation of an IBiggy can now focus on manipulating in-memory data and firing events. All data persistence is handled by stores.

public virtual T Add(T item)
{
    _store.Add(item);
    _items.Add(item);
    Fire(ItemAdded, item: item);
    return item;
}

A Biggy store has a simple, brute force API.

public interface IBiggyStore<T>
{
    IList<T> Load();
    void SaveAll(IList<T> items);
    void Clear();     
    T Add(T item);
    IEnumerable<T> Add(IEnumerable<T> items);
}

But I’m also thinking some data stores might support additional features that make updates and queries more efficient. These capabilities are segregated into separate interfaces.

public interface IUpdateableBiggyStore<T> : IBiggyStore<T>
{
    T Update(T item);
    T Remove(T item);
}

public interface IQueryableBiggyStore<T> : IBiggyStore<T>
{
    IQueryable<T> AsQueryable();
}

And when constructing a BiggyList<T>, you have to inject a specific data store. BiggyList<T> can query the available interfaces in a data store to understand the store’s capabilities, but once the store is injected, a list client never needs to know about the store.

public BiggyList(IBiggyStore<T> store)
{
    _store = store;
    _queryableStore = _store as IQueryableBiggyStore<T>;
    _updateableBiggyStore = _store as IUpdateableBiggyStore<T>;
    _items = _store.Load();
}

Now, the implementation of an actual data store doesn’t need to call into a base class or worry about raising events. The store only does what it is told. Circling back to the JSON backing store, an implementation might look like:

T IBiggyStore<T>.Add(T item)
{
    var json = JsonConvert.SerializeObject(item);
    using (var writer = File.AppendText(DbPath))
    {
        writer.WriteLine(json);
    }
    return item;
}

Another useful benefit to this approach is that each store can specify generic constraints independently of the Biggy list or other data stores. For example, a data store for SQL Server can specify that T has to implement an interface with an ID property, while one for Azure Table Storage can enforce partition and row keys.

Is It Useful?

All the static typing and interfaces might move Biggy away from Rob’s initial vision of a lightweight and easy to use library, but I think the ability to cleanly separate stores from lists is valuable, not just for extensibility but also for the simplicity of the design.

Working With FIPS 140 Crypto Standards

Tuesday, March 18, 2014 by K. Scott Allen

To explain the 140 series of the Federal Information Processing Standards (FIPS) in plain talk and without dozing off is challenging, but let me give it a try:

FIPS cryptographic standards specify design and implementation requirements for cryptographic modules. Software and hardware vendors can contract with accredited laboratories to validate their modules against these standards, which then allows others to use those modules in computing environments that must adhere to the U.S. government’s information processing standards.

Two questions come to mind.

First, what is a cryptographic module? A module might be a piece of software, hardware, or a combination of both. For example, RSAENH.DLL on various Windows platforms is FIPS compliant. Another example would be the Samsung crypto modules running on Android devices like the Galaxy S4.

Second question, from a software developer’s perspective: who must use FIPS compliant modules? The obvious answer is almost anyone working directly or indirectly for a department or agency of the United States federal government who is handling sensitive but unclassified data. However, FIPS has also made inroads into private sector healthcare and banking businesses, as both industries store and transmit sensitive data like credit card numbers and personal health information.

The FIPS Switch

Some businesses will enforce the use of FIPS compliant algorithms by flipping the “FIPS switch”. This is an operating system setting that ensures applications only use FIPS validated cryptography algorithms. You can flip this switch on Windows, as well as on OS X and other operating systems and devices.

On Windows, the FIPS switch impacts the entire system and all applications, from BitLocker to Internet Explorer and Remote Desktop. The FIPS switch will also impact the code you write: even if you don’t set out to use FIPS compliant cryptography, if your C# code executes on a Windows machine with the FIPS switch on, you’ll have to use FIPS compliant algorithms or the code will fail.

For example, with C#, the managed .NET implementations of AES encryption are not certified, so the following code fails with an exception.

using System.Security.Cryptography;

static class Program 
{
    static void Main()
    {
        var provider = new AesManaged();
    }
}

Unhandled Exception: System.InvalidOperationException: This implementation is not part of the Windows Platform FIPS validated cryptographic algorithms.

Instead of using AesManaged, you’ll need to use AesCryptoServiceProvider, which calls into native, validated modules. And just so you know, your application won’t be the only one to face these types of exceptions, as everything from Visual Studio to Internet Explorer, and even frameworks like ASP.NET and databases like MongoDB, have had (or still have) trouble with FIPS at one point or another. Be careful when you flip on FIPS that you don’t cripple your own development machine; you might need to reserve FIPS for a testing environment.

The FIPS Whip

One of the issues you might run into with the FIPS switch is how the system uses brute force to deny access to uncertified encryption modules. The OS has no awareness of why an application might choose a specific algorithm. In the case of AES the brute force approach might make sense, but with an algorithm like MD5 the situation is grey. MD5 might be used cryptographically to generate a message digest, but someone might also have chosen MD5 to generate a hash to use as the key value for plain text data in a distributed cache. MD5 is considered weak from a crypto perspective, however, so the following code will also fail with the exception we saw previously, even though the code is using a crypto service provider.

using System.Security.Cryptography;

static class Program 
{
    static void Main()
    {
        var provider = new MD5CryptoServiceProvider();
    }
}

To run on a computer with the FIPS switch on, then, you need to be careful about your choice of anything cryptographic.

The irony of the FIPS switch is that the system can only prevent what the system knows about. Microsoft programmed the .NET crypto classes to respond with an exception when used inappropriately, but there are other libraries and platforms that will run any kind of cryptographic algorithm on a machine with the FIPS switch on. For example, I can still run the following code in Node on a FIPS enabled Windows machine.

var crypto = require("crypto");
var message = "The Magic Words are Squeamish Ossifrage";
var hash = crypto.createHash("md5").update(message).digest("hex");

console.log(hash);  

So Is The FIPS Flag Useful?

I think the question is debatable.  The FIPS flag will keep the honest applications honest. But the FIPS flag doesn’t guarantee that an application encrypts the right data, or that an application encrypts data at the right time, or that an application developer doesn’t “work around” the FIPS flag by writing their own algorithm with XOR and clocking out. The FIPS flag also can’t stop a user from storing their passwords on a yellow sticky note affixed to the back of an LCD monitor.

The real question should be: “is the data on a FIPS enabled machine more secure than the data on a machine without the FIPS flag?”

I’d say the answer is unquestionably a “no”, and system administrators should know the FIPS flag is not a silver bullet for security.

Building Better Models For AngularJS

Monday, March 17, 2014 by K. Scott Allen

Retrieving JSON data and binding the data to a template is easy with Angular, so easy that quite a bit of Angular code appears like the following.

<button title="Make movie longer" class="btn" 
        ng-click="makeLonger(movie)">
      <span class="glyphicon glyphicon-plus"></span>
</button>

What’s of interest is the ng-click directive, which is using an expression to invoke behavior directly against $scope.

ng-click="makeLonger(movie)"

The approach works well for simple applications and demos, but in the face of more complexity it would be nice to have a proper model object that knows nothing about scopes or templates and contains straight-up logic related to a business concept, in which case ng-click might look like the following.

ng-click="movie.makeLonger()"

A subtle difference, but in my experience it is small changes like this that can make a code base easier and more enjoyable to work with, because responsibilities are well thought out and separated. Even if the model only encapsulates a couple lines of code, it is a couple lines of code that don’t have to appear in a controller, explode into a complex if statement, or be duplicated in multiple areas because all models are only data transfer objects brought to life by JSON deserialization.

Starting Over

Instead of allowing an HTTP API to define a model, we could start by defining a model with the following code.

(function() {

    var Movie = function() {
        this.length = 0;
        this.title = "";
        this.rating = 1;
    };

    Movie.minLength = 0;
    Movie.maxLength = 300;
    Movie.minRating = 1;
    Movie.maxRating = 5;

    Movie.prototype = {

        setRating: function(newRating) {
            if (newRating <= Movie.maxRating &&
                newRating >= Movie.minRating) {
                this.rating = newRating;
            } else {
                throw "Invalid rating value: " + newRating;
            }
        },

        makeLonger: function() {
            if (this.length < Movie.maxLength) {
                this.length += 1;
            }
        },

        makeShorter: function() {
            if (this.length > 0) {
                this.length -= 1;
            }
        }

    };

    var module = angular.module("movieModels");
    module.value("Movie", Movie);

}());

This approach allows a model to provide both state and behavior. The last few lines of code give the model definition a dependency on Angular, but it would be easy to factor out the registration of the constructor function and rely on something like a proper module API or global export. The code above demonstrates what should eventually happen: the constructor function is registered with Angular as a value service, which allows the constructor to be decorated and injected into other services at run time.
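
For example, a hypothetical way to split the two concerns would be to keep the constructor in a plain script with a global export, and move the Angular registration into its own file.

// movie.js - no Angular dependency, just a global export (hypothetical)
(function (exports) {

    var Movie = function () {
        this.length = 0;
        this.title = "";
        this.rating = 1;
    };

    // ... static members and prototype as before

    exports.Movie = Movie;

}(window));

// movieModels.js - the only file that knows about Angular
angular.module("movieModels", []).value("Movie", window.Movie);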

Next we’d need the ability to take an object deserialized from JSON and transform the data-only object into a proper model. The transformation is generic and could become the responsibility of another service.

(function() {
   
    var transformObject = function(jsonResult, constructor) {
        var model = new constructor();
        angular.extend(model, jsonResult);
        return model;
    };

    var transformResult = function(jsonResult, constructor) {
        if (angular.isArray(jsonResult)) {
            var models = [];
            angular.forEach(jsonResult, function(object) {
                models.push(transformObject(object, constructor));
            });
            return models;
        } else {
            return transformObject(jsonResult, constructor);
        }
    };

    var modelTransformer = function() {
        return {
            transform: transformResult
        };
    };

    var module = angular.module("dataServices");
    module.factory("modelTransformer", modelTransformer);

}());

Now any service can ask for the transformer and a constructor function to turn JSON into rich models.

(function() {

    var movieDataService = function ($http, modelTransformer, Movie) {

        var movies = [];

        var get = function () {

            return $http
                .get(movieUrl)
                .then(function (response) {
                    movies = modelTransformer.transform(response.data, Movie);
                    return movies;
                });
        };

        // ... more implementation

        return {
            get: get
            // ... more API
        };
    };

    var module = angular.module("dataServices", ["movieModels"]);
    module.factory("movieDataService", movieDataService);

}());
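
To close the loop, a controller only needs to ask the service for models and place them in scope; expressions like movie.makeLonger() then invoke behavior on the model itself. A sketch, with assumed module and controller names:

(function() {

    var movieController = function ($scope, movieDataService) {
        $scope.movies = [];
        movieDataService.get().then(function (movies) {
            $scope.movies = movies;
        });
    };

    var module = angular.module("app", ["dataServices"]);
    module.controller("movieController", movieController);

}());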

The end result is a set of richer models that make it easier to keep functionality out of $scope objects. This might seem like a lot of code, but other than the transformer, this is all code that would still be written, just scattered around in $scopes.

What's The Downside?

Here are just a few of the reasons you might not like this approach. 

1. It's not functional JavaScript, it's classes with prototypes. Not everyone likes classes and prototypes. An alternative (and more common) approach to slimming down $scope would be to group interesting functions into a movie service.  

2. Adding additional state to a model might result in additional and unexpected values arriving at the server, if the model is serialized and sent back in an HTTP call.

3. If the service caches the model, it might require some code using instanceof to keep track of which objects have been transformed and which have not; a sketch follows this list. It also makes it more difficult to decorate the service.
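
For that third point, here is a minimal sketch of the kind of guard a caching service might need, using the Movie and modelTransformer pieces from earlier:

var ensureModel = function (object) {
    // objects that are already rich models pass through untouched
    return (object instanceof Movie)
        ? object
        : modelTransformer.transform(object, Movie);
};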

Some Basic Azure Table Storage Abstractions

Thursday, February 27, 2014 by K. Scott Allen

When working with any persistence layer you want to keep the infrastructure code separate from the business and UI logic, and working with Windows Azure Table Storage is no different. The WindowsAzure.Storage package provides a smooth API for working with tables, but not smooth enough to allow it into all areas of an application.

What I’d be looking for is an API as simple to use as the following.

var storage = new WidgetStorage();
var widgets = storage.GetAllForFacility("TERRITORY2", "FACILITY3");
foreach (var widget in widgets)
{                
    Console.WriteLine(widget.Name);
}

The above code requires a little bit of work to abstract away connection details and query mechanics. First up is a base class for typed table storage access.

public class TableStorage<T> where T: ITableEntity, new()
{
    public TableStorage(string tableName, string connectionName = "StorageConnectionString")
    {
        var storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting(connectionName));
        var tableClient = storageAccount.CreateCloudTableClient();
        
        Table = tableClient.GetTableReference(tableName);
        Table.CreateIfNotExists();
    }

    public virtual string Insert(T entity)
    {
        var operation = TableOperation.Insert(entity);
        var result = Table.Execute(operation);
        return result.Etag;
    }

    // update, merge, delete, insert many ...
   
    protected CloudTable Table;
}

The base class can retrieve connection strings and abstract away TableOperation and BatchOperation work. It’s easy to extract an interface definition if you want to work with an abstract type. Meanwhile, derived classes can layer query operations into the mix.

public class WidgetStorage : TableStorage<Widget>
{
    public WidgetStorage()
        : base(tableName: "widgets")
    {

    }

    public IEnumerable<Widget> GetAll()
    {
        var query = new AllWidgets();
        return query.ExecuteOn(Table);
    }

    // ...

    public IEnumerable<Widget> GetAllForFacility(string territory, string facility)
    {
        var query = new AllWidgetsInFacility(territory, facility);
        return query.ExecuteOn(Table);
    }       
}

The actual query definitions I like to keep as separate classes.

public class AllWidgetsInFacility : StorageQuery<Widget>
{
    public AllWidgetsInFacility(string territory, string facility)
    {
        Query =
            Query.Where(InclusiveRangeFilter(
                key: "PartitionKey",
                from: territory + "-" + facility,
                to: territory + "-" + facility + "."));
    }
}

Separate query classes allow a base class to focus on query execution, including the management of continuation tokens, timeout and retry policies, as well as query helper methods using TableQuery. The base class also allows for easy testability via the virtual ExecuteOn method.

public class StorageQuery<T> where T:TableEntity, new()
{
    protected TableQuery<T> Query;
        
    public StorageQuery()
    {
        Query = new TableQuery<T>();
    }

    public virtual IEnumerable<T> ExecuteOn(CloudTable table)
    {
        // Keep fetching segments until the service stops returning
        // a continuation token.
        TableContinuationToken token = null;
        do
        {
            var segment = table.ExecuteQuerySegmented(Query, token);
            token = segment.ContinuationToken;
            foreach (var result in segment)
            {
                yield return result;
            }
        } while (token != null);
    }

    protected string InclusiveRangeFilter(string key, string from, string to)
    {
        var low = TableQuery.GenerateFilterCondition(key, QueryComparisons.GreaterThanOrEqual, from);
        var high = TableQuery.GenerateFilterCondition(key, QueryComparisons.LessThanOrEqual, to);
        return TableQuery.CombineFilters(low, TableOperators.And, high);
    }       
}

As an aside, one of the most useful posts on Azure Table storage is now almost 3 years old but contains many good nuggets of information. See: How to get (the) most out of Windows Azure Tables.

Easy Animations For AngularJS With Animate.css

Tuesday, February 25, 2014 by K. Scott Allen

Animations in AngularJS can be slightly tricky. First you need to learn about the classes that Angular adds to an element during an animated event, and then you have to write the correct CSS to perform an animation. There are also special cases to consider such as style rules that require !important and Angular’s rule of cancelling nested animations.

There is a detailed look at animations on yearofmoo, but the basic premise is that Angular will add and remove CSS classes to DOM elements that are entering, leaving, showing, hiding, and moving.
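
One prerequisite worth noting: as of Angular 1.2, animations live in the ngAnimate module, so the application must list it as a dependency (and include angular-animate.js on the page) before any of the classes below will appear.

// ngAnimate (from angular-animate.js) enables the animation classes
var app = angular.module("app", ["ngRoute", "ngAnimate"]);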

First, Angular adds a class to prepare the animation. For example, when a view is about to become active, Angular adds an ng-enter class. This class represents a preparation phase where a stylesheet can apply the transition rule to identify which properties to transition and how long the transition should last, as well as set the initial state of the element. An opacity of 0 is a good starting point for a fade animation.

div[ng-view].ng-enter {
    transition: all 0.5s linear;
    opacity: 0;
}

Next, Angular will apply a class to activate the animation, in this case .ng-enter-active.

div[ng-view].ng-enter-active {
    opacity: 1;
}

Angular will inspect the computed styles on an element to see how long the transition lasts, and automatically remove .ng-enter and .ng-enter-active when the animation completes. There is not much required for a simple animation like this.

With Animate.css

Animate.css is to transitions what Bootstrap is to layout, which means it comes with a number of pre-built and easy to use styles. Animate uses keyframe animations, which specify the start, end, and in-between points of what an element should look like. Although Animate is not tied to Angular, keyframes make Angular animations easier because there is no need to specify the “preparation” phase, and complicated animations roll up into a single keyframe name.

So, for example, the previous 7 lines of CSS for animating the entrance of a view become the following 4 lines of code, which not only fade in an element, but give it a natural bounce.

div[ng-view].ng-enter {
    -webkit-animation: fadeInRight 0.5s;
    animation: fadeInRight 0.5s;
}

The ng-hide and ng-show directives need a little more work to function correctly. These animations use “add” and “remove” classes, and adding !important is the key to overriding the default ng-hide style of display:none.

.ng-hide-remove {
    -webkit-animation: bounceIn 2.5s;
    animation: bounceIn 2.5s;
}

.ng-hide-add {
    -webkit-animation: flipOutX 2.5s;
    animation: flipOutX 2.5s;
    display: block !important;
}

Hope that helps!
