New C# Generics Course

Tuesday, October 1, 2013 by K. Scott Allen

My new C# Generics course on Pluralsight includes topics for everyone.

For beginners:

- Why generic types are useful.

- A demonstration of all the concrete collection types in System.Collections.Generic.

- How to build basic generic types (generic classes, generic interfaces, and generic delegates).

For intermediates:

- How to apply generic constraints (including what you can and can’t do with constraints).

- How to clean up generic code (remove those ugly type parameters from business logic).

- How to use built-in generic delegates like Func, Action, Predicate, Converter, and EventHandler.

On the advanced side:

- How to refactor covariant and contravariant interfaces out of an invariant generic IRepository interface that works against the Entity Framework.

- How to use reflection to discover generic parameters and generic type definitions, as well as build generic types and invoke generic methods in late bound fashion.

- How to build an IoC container with a fluent-ish API that supports nested dependencies and unbound generics.

Custom Serialization with JSON.NET, WebAPI, and BsonDocument

Monday, September 30, 2013 by K. Scott Allen

JSON.NET has a simple and easy extensibility model, which is fortunate because I recently ran into problems serializing a collection of BsonDocument.

The first problem with serializing a BsonDocument is that each document exposes public conversion properties like AsInt32, AsBoolean, and AsDateTime. Trying to serialize all public properties is guaranteed to throw an exception on at least one of these conversion properties.

Fortunately, the MongoDB C# driver includes a ToJson extension method which doesn’t try to serialize the conversion properties. But ToJson can also create problems, because it doesn’t produce conforming JSON by default. For example, an ObjectId is represented in the output as ObjectId("… id …"), which causes JSON.parse to fail in a browser.
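Here is a quick browser-side sketch of the failure (the ObjectId hex value is a made-up example):

```javascript
// Default ToJson output uses shell-style extended JSON, which is not
// valid JSON, so JSON.parse throws a SyntaxError:
var shellStyle = '{ "_id": ObjectId("522f1d3b12c9b510f0ab3cde") }';
var parseFailed = false;
try {
  JSON.parse(shellStyle);
} catch (e) {
  parseFailed = true;
}
console.log(parseFailed); // true

// Strict-mode output represents the id as a plain object and parses fine:
var strictStyle = '{ "_id": { "$oid": "522f1d3b12c9b510f0ab3cde" } }';
var doc = JSON.parse(strictStyle);
console.log(doc._id.$oid); // '522f1d3b12c9b510f0ab3cde'
```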

The solution is to provide a custom JSON.NET converter which uses the ToJson method with some custom settings. In the following code, IMongoDbCursor is a custom abstraction I have to wrap server cursors in Mongo, and is essentially an IEnumerable<BsonDocument>.

public class MongoCursorJsonConverter : JsonConverter
{
    public override void WriteJson(
        JsonWriter writer, object value, JsonSerializer serializer)
    {
        var cursor = (IMongoDbCursor) value;
        var settings = new JsonWriterSettings
            {
                OutputMode = JsonOutputMode.Strict
            };

        writer.WriteStartArray();

        foreach (BsonDocument document in cursor)
        {
            writer.WriteRawValue(document.ToJson(settings));            
        }

        writer.WriteEndArray();
    }

    public override object ReadJson(
        JsonReader reader, Type objectType, 
        object existingValue, JsonSerializer serializer)
    {
        throw new NotImplementedException();
    }

    public override bool CanConvert(Type objectType)
    {
        var types = new[] { typeof(MongoDbCursor) };
        return types.Any(t => t == objectType);
    }
}

The whole key is the foreach statement inside of WriteJson, which uses WriteRawValue to put strictly conforming JSON in place. Another option is to treat a BsonDocument as an IDictionary, which breaks down complex values into primitives.

writer.WriteStartArray();

foreach (BsonDocument document in cursor)
{   
    serializer.Serialize(writer, document.ToDictionary());
}

writer.WriteEndArray();

The last piece is to plug in the converter during Web API configuration.

var formatters = GlobalConfiguration.Configuration.Formatters;
var jsonFormatter = formatters.JsonFormatter;
var settings = jsonFormatter.SerializerSettings;
settings.Converters.Add(new MongoCursorJsonConverter());


Moving Data In An AngularJS Directive

Wednesday, September 11, 2013 by K. Scott Allen

This Plunker demonstrates a few different scenarios and capabilities when moving data between a model and a directive. As in other areas of AngularJS, there are quite a number of different approaches that work, which grants a lot of flexibility but also leads to some confusion.

The example uses a simple controller.

module.controller("TestController", function($scope){
    
    $scope.magicValue = 1; 
     
    $scope.increment = function(){
      $scope.magicValue += 1;
    };
    
});

The markup for the sample displays the controller’s magicValue, and provides a button to change the value. There are also 5 versions of a valueDisplay directive.

<div ng-controller="TestController">
  Controller: {{magicValue}}
  <button ng-click="increment()">Click</button>

  <ol>
    <li><value-display1 value="magicValue"></value-display1></li>

    <li><value-display2 value="magicValue"></value-display2></li>

    <li><value-display3 value="{{magicValue}}"></value-display3></li>

    <li><value-display4 value="magicValue"></value-display4></li>
       
    <li><value-display5 value="{{magicValue}}"></value-display5></li>
  </ol>
</div>

The first version of valueDisplay will put the value on the screen, but will never update the screen if the magic value changes. This approach is only reasonable for static data since the directive reads the value directly from the controller’s scope (using the name specified in the attribute), places the value in the element, then never looks back.

module.directive("valueDisplay1", function () {
    return {
        restrict: "E",
        link: function (scope, element, attrs) {
            element.text(scope[attrs.value]);
        }
    };
});

The second version of valueDisplay uses a $watch to update the screen as the model value changes. Notice the attribute value (value="magicValue") is still working like an expression to retrieve and watch data directly from the controller’s scope.

module.directive("valueDisplay2", function () {
    return {
        restrict: "E",
        link: function (scope, element, attrs) {
            scope.$watch(attrs.value, function(newValue) {
                element.text(newValue);
            });
        }
    };
});

The third version is a slight variation that will read and observe data in the value attribute itself instead of going to the controller. There are a few differences to note. First, the directive will always retrieve the value as a string. Secondly, the view has to use interpolation to place the proper value into the attribute (i.e. value="{{magicValue}}"). Finally, $observe is used instead of a scope watch since the directive wants to watch changes on the attribute, not the controller scope.

module.directive("valueDisplay3", function () {
    return {
        restrict: "E",
        link: function (scope, element, attrs) {
            attrs.$observe("value", function (newValue) {
                element.text(newValue);
            });
        }
    };
});

The fourth version moves to using an isolate scope to detach itself from the parent scope. Two-way data binding is set up between the directive's isolated scope and the parent model using "=", so the framework will take care of pushing and pulling data between the directive’s scope and the parent scope. The view still chooses the model property to use with the value attribute in the markup (value="magicValue"), and displays the value using a template (which will automatically watch for changes).

module.directive("valueDisplay4", function () {
    return {
        restrict: "E",
        scope: {
            value: "="
        },
        template: '{{value}}'
    };
});

The final version of valueDisplay uses attribute binding ("@"). This binding is one-way: the interpolated attribute value always arrives in the isolate scope as a string, and the display still updates as the model value changes, but changes made inside the directive’s scope do not propagate back to the model.

module.directive("valueDisplay5", function () {
    return {
        restrict: "E",
        scope: {
            value: "@"
        },
        template: '{{value}}'
    };
});

In the end, there are even more options available (we didn’t even look at executing expressions to move data), but that can be a topic for a future post. I currently tend to use option 4 for binding complex objects from the model to a directive, and option 5 for moving simple customization values like headings, labels, and titles.

Hosting A JavaScript Engine In .NET

Tuesday, September 10, 2013 by K. Scott Allen

ClearScript is simple to use and allows me to host either V8 or Chakra. The samples on the home page show just how easy the interop is from C# to JavaScript and vice versa.

I use a simple wrapper because the only API I need is one to Evaluate small script expressions.

public interface IJavaScriptMachine : IDisposable
{
    dynamic Evaluate(string expression);
}

The following implementation sets up a default environment for script execution by loading up some required scripts, like underscore.js. The scripts are embedded resources in the current assembly.

public class JavaScriptMachine : JScriptEngine, 
                                 IJavaScriptMachine
{      
    public JavaScriptMachine()
    {
        LoadDefaultScripts();                
    }
    
    void LoadDefaultScripts()
    {
        var assembly = Assembly.GetExecutingAssembly();
        foreach (var baseName in _scripts)
        {
            var fullName = _scriptNamePrefix + baseName;
            using (var stream = assembly.GetManifestResourceStream(fullName))
            using(var reader = new StreamReader(stream))
            {
                var contents = reader.ReadToEnd();
                Execute(contents);
            }
        }
    }

    const string _scriptNamePrefix = "Foo.Namespace.";
    readonly string[] _scripts = new[]
        {
            "underscore.js", "other.js"
        };       
}

For examples, check out the ClearScript documentation.

Metaprogramming Fun In JavaScript

Tuesday, September 3, 2013 by K. Scott Allen

The idea is to take a JavaScript statement like the following:

c.find({ x: 1, y: 3, name: "foo" }, { id: 0 }).limit(1);

... and turn the statement into a data structure that describes the methods being invoked and the arguments for each method call. There are multiple method names to capture (not just find and limit, but also findOne, orderBy, and more).

This sounds like a job for a mock object library, but let’s explore a few simple approaches that use fewer than 20 lines of code.

One approach that doesn’t work is to build a function for each possible method call while forgetting how closures work. The following code has a bug.

var CommandCapture = function () {

    var commands = ["find", "findOne", "limit", "orderBy"];

    for (var i = 0; i < commands.length; i++) {
        this[commands[i]] = function() {
            return {
                name: commands[i], // here's the problem
                args: arguments
            };
        };
    }
};

By the time the innermost function executes, the value of i is outside the bounds of the commands array, so the name isn’t properly captured.
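The same bug can be reproduced in miniature with a plain loop:

```javascript
// Every function created in the loop closes over the same i variable,
// not a snapshot of its value at creation time.
var fns = [];
for (var i = 0; i < 3; i++) {
  fns.push(function () { return i; });
}

// By the time any of the functions run, the loop is done and i is 3:
console.log(fns[0]()); // 3
console.log(fns[2]()); // 3
```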

To avoid the closure problem we can embed the command name into a string and use the Function constructor.

var CommandCapture = function() {
    
    var commands = ["find", "findOne", "limit", "orderBy"];
    
    for (var i = 0; i < commands.length; i++) {
        this[commands[i]] = new Function("return { name: '" +
                                            commands[i] +
                                         "', args:arguments };");
    }
};

But ... building functions out of strings is tiresome and error prone, so instead we can use an IIFE and capture the value of i properly.

var CommandCapture = function () {

    var commands = ["find", "findOne", "limit", "orderBy"];

    for (var i = 0; i < commands.length; i++) {
        this[commands[i]] = function(name) {

            return function() {
                return {
                    name: name,
                    args: arguments
                };
            };
            
        }(commands[i]);
    }
};

The above code will work for simple cases. Executing the following code:

var capture = new CommandCapture();
var result = capture.find({ x: 1, y: 3, name: "foo" }, { id: 0 });
console.log(result);

Yields this output:

{ 
    name: 'find',
    args: { '0': { x: 1, y: 3, name: 'foo' }, '1': { id: 0 } }
}

However, the above code doesn’t allow for method chaining (capture.find().limit()). The following code does, by keeping all the method calls and arguments in a field named $$captures.

var CommandCapture = function () {

    var self = this;
    self.$$captures = [];
    var commands = ["find", "findOne", "limit", "orderBy"];

    for (var i = 0; i < commands.length; i++) {
        this[commands[i]] = function (name) {

            return function () {
                self.$$captures.push({ name: name, args: arguments });
                return self;
            };

        }(commands[i]);
    }      
};
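A quick usage sketch of the chaining version (the constructor is repeated here so the snippet runs on its own):

```javascript
var CommandCapture = function () {

    var self = this;
    self.$$captures = [];
    var commands = ["find", "findOne", "limit", "orderBy"];

    for (var i = 0; i < commands.length; i++) {
        this[commands[i]] = function (name) {

            return function () {
                self.$$captures.push({ name: name, args: arguments });
                return self; // returning self is what makes chaining work
            };

        }(commands[i]);
    }
};

var capture = new CommandCapture();
capture.find({ x: 1, y: 3, name: "foo" }, { id: 0 }).limit(1);

console.log(capture.$$captures[0].name); // 'find'
console.log(capture.$$captures[1].name); // 'limit'
console.log(capture.$$captures[1].args[0]); // 1
```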

These are the types of problems that are fun to work on during a Saturday afternoon thunderstorm.

Dynamic Tabs with AngularJS and UI Bootstrap

Wednesday, August 14, 2013 by K. Scott Allen

I’ve been working on a data management tool where I want to give users “workspace” areas. Each workspace is encapsulated inside of a tab, and a user can add a new workspace by clicking an icon in the tab bar, as in the following picture.

[Screenshot: workspace tabs with a “+” icon in the tab bar]

This feature isn’t too difficult to put together using AngularJS plus UI Bootstrap. UI Bootstrap provides directives and templates to work with Bootstrap components like tabs, accordions, alerts, and dialogs.

The first bit of code here is the TabsParentController, which is responsible for managing multiple workspaces.

module.controller("TabsParentController", function ($scope) {

    var setAllInactive = function() {
        angular.forEach($scope.workspaces, function(workspace) {
            workspace.active = false;
        });
    };

    var addNewWorkspace = function() {
        var id = $scope.workspaces.length + 1;
        $scope.workspaces.push({
            id: id,
            name: "Workspace " + id,
            active: true
        });
    };

    $scope.workspaces =
    [
        { id: 1, name: "Workspace 1", active:true  },
        { id: 2, name: "Workspace 2", active:false }
    ];

    $scope.addWorkspace = function () {
        setAllInactive();
        addNewWorkspace();
    };       

});

Most of the tricky parts come in the HTML markup, which uses UI Bootstrap directives to create the tabs. You can see one tab created for each workspace, plus a static tab with the “+” sign icon.

<div ng-controller="TabsParentController">
    <tabset>
        <tab ng-repeat="workspace in workspaces"
             heading="{{workspace.name}}"
             active="workspace.active">
            <div ng-controller="TabsChildController"
                 ng-init="workspace=workspace">
                <div>
                    {{workspace.id}} : {{ workspace.name}}
                </div>
                <input type="text" ng-model="workspace.name"/>
            </div>     
        </tab>
        <tab select="addWorkspace()">
            <tab-heading>
                <i class="icon-plus-sign"></i>
            </tab-heading>
        </tab>
    </tabset>
</div>

The hardest part was figuring out how to tell the TabsChildController which workspace object to use. Although prototypal inheritance is a nice way to share information between a parent and child controller, in this case the child controller inherits all the workspaces from its parent, and doesn’t know which specific workspace to use.

To get around this problem I used an ngInit directive to create a workspace attribute in the child controller’s scope. The value of the workspace is the one from the outer repeater scope. This is confusing if you haven’t worked with Angular for a while, I think, so if you think of a better solution I’m all ears!

Attribute Routes and Hierarchical Routing

Monday, August 12, 2013 by K. Scott Allen

As announced earlier this year, attribute routing will be a part of the next ASP.NET release. You can read more about the feature on the asp.net wiki, but here is a quick example to get the idea:   

public class ServerController : ApiController
{         
    [GET("api/server/{server}")]
    public IEnumerable<string> GetDatabaseNames(string server)
    {
        // ...
    }

    [GET("api/server/{server}/{database}")]
    public IEnumerable<string> GetCollectionNames(string server, string database)
    {
        // ...        
    }     
}

Attribute routing has been around for some time (you can use it today), and is invaluable for certain types of applications. While today’s default routing approach in ASP.NET MVC and Web API is easy and conventional, the approach lacks the flexibility to make the hard scenarios easy. Scenarios like modeling hierarchical resources as in the above code, where the application wants to respond to api/server/localhost/ and api/server/localhost/accountingdb. Other scenarios include creating controller actions with custom parameter names, and actions that can respond to multiple URIs.

Overall, the addition of attribute routing to the framework is a win.

However . . .

One of the benefits of attribute routing listed on the asp.net wiki is:

[Talking about conventional routing] –> The information about what URI to use to call into a controller is kept in a completely different file from the controller itself. A developer has to look at both the controller and the global route table in configuration to understand how to call into the controller.

An attribute-based approach solves all these problems by allowing you to configure how an action gets called right on the action itself. For most cases, this should improve usability and make Web APIs simpler to build and maintain.

I disagree!

I personally like centralized route configuration. Having routes defined in one place makes it easier to order and optimize the routes, and also think about URIs before thinking about an implementation (which is arguably more important for an API than a regular web site).

As a comparison, consider Darrel Miller’s ApiRouter, which also allows for flexibility and hierarchy in the routing rules (below is an excerpt for routing rules to mimic GitHub’s model). 

Add("issues", ri => ri.To<IssuesController>());
Add("events", ri => ri.To<EventsController>());
Add("networks",
  rn => rn.Add("{userid}",
    ru => ru.Add("{repoid}",
      rr => rr.To<NetworksController>())));

Add("gists", rg => rg.To<GistsController>()
  .Add("public",
    rp => rp.To<GistsController>(new { gistfilter = "public" }))
  .Add("starred",
    rs => rs.To<GistsController>(new { gistfilter = "starred" }))
  .Add("{gistid}",
    rgi => rgi.To<GistController>("justid")
      .Add("comments",
        rc => rc.To<GistCommentsController>())));
In the end, I believe using an approach like ApiRouter will lead to a routing configuration that is easier to understand, optimize, maintain, and troubleshoot.

I believe it will also lead to a better API design, because attribute routing makes it easy to destroy the uniform interface for a resource and to avoid looking at the bigger picture of how the URIs work together.

Thoughts?
