Looking Back: My First C# Program

Monday, July 8, 2013 by K. Scott Allen

While digging through some directories of archived source code I found the first program I ever wrote in C#.

I’m not sure when I wrote this, but since there was a makefile in the directory I’m guessing this was still in the .NET 1.0 beta days of late 2000.

/******************************************************************************

CLIPBOARD.CS 

Based on the code and idea in Bill Wagner's VCDJ Fundamentals column.
This program takes piped input or a filename argument and copies all stream data 
to the clipboard. 

Examples:

dir | clipboard 

clipboard clipboard.cs

******************************************************************************/

using System;
using System.IO;
using System.WinForms;

class MainApp 
{  
    public static void Main( string[] args ) 
    {

        // The clipboard class uses COM interop. I figured this out because
        // calls to put data in the clipboard always failed and further 
        // investigation showed a failed hresult indicating no CoInitialize.
        // Here is the .NET equivalent:
        Application.OLERequired();
        
        TextReader textReader;
        if (args.Length == 0)
        {
            // take the piped input from stdin
            textReader = System.Console.In;
        }
        else
        {
            // open the text file specified on command line
            File file = new File(args[0]);
            textReader = file.OpenText();
        }
    
        string line;
        string allText = "";
        Boolean pipeFull = true;
        
        while(pipeFull)
        {
            try
            {
                // When the pipe is empty, ReadLine throws an exception
                // instead of the documented "return a null string" behavior.
                // When reading from a file a null string is returned.
                line = textReader.ReadLine();
                if( line == null )
                {
                    pipeFull = false;
                }
                else
                {
                    allText += line; 
                    allText += "\r\n";
                }
            }
            catch(System.IO.IOException ex)
            {
                if(ex.Message == "The pipe has been ended")
                {
                    pipeFull = false;
                }
                else
                {    
                    throw ex;
                }
            }
        } 

        Clipboard.SetDataObject(allText, true);
    }
}

The first thoughts that came to mind when seeing this code again were:

1) Wow, that’s a long function by today’s standards.

2) I could use this!

Before resharpering the program into shape, I did a quick search and discovered Windows now comes with such a program by default. It’s called clip. I guess I can leave the code in the archive.

A File Input Directive For AngularJS

Friday, July 5, 2013 by K. Scott Allen

Now that we have a FileReader service for AngularJS, we need something that will give us a file to read. The two ways for users to select files are to use <input type='file'> or to drag and drop a file into the browser.

We’ll build a directive for the file input this week, and look at drag and drop next week.

But first, why are we using a directive?

As discussed before, directives are where the DOM and your Angular code can all come together. Directives can manipulate the DOM, listen for events in the DOM, and move data between the model and the view. For simple directives, most of the work is in the link function of the directive. Inside of link you’ll have access to your associated DOM element and the current scope. The DOM element is wrapped in jQuery lite (unless you are using jQuery, then it will be wrapped with jQuery), so you can wire up events on the element, change its classes, set its text, its HTML, and so on.

In our fileInput directive, the most important DOM operation is to wire up the change event.

var fileInput = function ($parse) {
    return {
        restrict: "EA",
        template: "<input type='file' />",
        replace: true,          
        link: function (scope, element, attrs) {

            var modelGet = $parse(attrs.fileInput);
            var modelSet = modelGet.assign;
            var onChange = $parse(attrs.onChange);

            var updateModel = function () {
                scope.$apply(function () {
                    modelSet(scope, element[0].files[0]);
                    onChange(scope);
                });                    
            };
            
            element.bind('change', updateModel);
        }
    };
};

The rest of the code in the link function is moving a selected file into the model. This directive assumes there will be a single file. The directive can also fire off an onChange expression set in the HTML. Most of this work is easy because the $parse service in Angular can essentially turn HTML attribute values into executable code.
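To see what the getter/assign pair from $parse is doing for us, here is a toy version in plain JavaScript. This is an illustrative sketch limited to dotted property paths, not Angular’s actual implementation (the real $parse compiles full Angular expressions):

```javascript
// Toy stand-in for $parse: parse("user.name") returns a getter
// function with an .assign method for writing back to the scope.
function parse(path) {
    var keys = path.split(".");

    var getter = function (scope) {
        // Walk the path, returning undefined if any link is missing.
        return keys.reduce(function (obj, key) {
            return obj == null ? undefined : obj[key];
        }, scope);
    };

    getter.assign = function (scope, value) {
        var target = scope;
        for (var i = 0; i < keys.length - 1; i++) {
            target = target[keys[i]];
        }
        target[keys[keys.length - 1]] = value;
    };

    return getter;
}

var scope = { user: { name: "Scott" } };
var modelGet = parse("user.name");
var modelSet = modelGet.assign;

modelGet(scope);          // "Scott"
modelSet(scope, "Allen"); // scope.user.name is now "Allen"
```

The directive uses exactly this shape: modelGet to read the attribute expression, and modelGet.assign to push the selected file into the model.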

Using the directive in markup is easy:

<div file-input="file" on-change="readFile()"></div>

<img ng-src="{{imageSrc}}"/>

The associated controller just needs to implement readFile for everything to work.

var UploadController = function ($scope, fileReader) {
    
    $scope.readFile = function () {            
        fileReader.readAsDataUrl($scope.file, $scope)
                  .then(function(result) {
                        $scope.imageSrc = result;
                    });
    };
};

Building a FileReader Service For AngularJS: The Service

Wednesday, July 3, 2013 by K. Scott Allen

In the previous post we looked at promises in AngularJS from both the creator and the client perspective. Here is an Angular service that wraps the FileReader and transforms an evented API into a promise API. The onload event resolves a promise, and the onerror event rejects a promise. Notice the use of scope.$apply to propagate the promise results.

(function (module) {
    
    var fileReader = function ($q, $log) {

        var onLoad = function(reader, deferred, scope) {
            return function () {
                scope.$apply(function () {
                    deferred.resolve(reader.result);
                });
            };
        };

        var onError = function (reader, deferred, scope) {
            return function () {
                scope.$apply(function () {
                    deferred.reject(reader.error);
                });
            };
        };

        var onProgress = function(reader, scope) {
            return function (event) {
                scope.$broadcast("fileProgress",
                    {
                        total: event.total,
                        loaded: event.loaded
                    });
            };
        };

        var getReader = function(deferred, scope) {
            var reader = new FileReader();
            reader.onload = onLoad(reader, deferred, scope);
            reader.onerror = onError(reader, deferred, scope);
            reader.onprogress = onProgress(reader, scope);
            return reader;
        };

        var readAsDataURL = function (file, scope) {
            var deferred = $q.defer();
            
            var reader = getReader(deferred, scope);         
            reader.readAsDataURL(file);
            
            return deferred.promise;
        };

        return {
            readAsDataUrl: readAsDataURL  
        };
    };

    module.factory("fileReader",
                   ["$q", "$log", fileReader]);

}(angular.module("testApp")));

The service only exposes a single method (readAsDataUrl). The rest of the FileReader methods are omitted for brevity, but they follow the same pattern.
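The pattern for those remaining methods is mechanical enough to sketch generically. Here is an illustrative plain-JavaScript version using a native Promise instead of $q, with a FakeReader standing in for the browser’s FileReader so the sketch is self-contained (promisify and FakeReader are made-up names for the example):

```javascript
// Wrap any object with onload/onerror callbacks in a promise.
// The start callback kicks off the underlying evented operation.
function promisify(reader, start) {
    return new Promise(function (resolve, reject) {
        reader.onload = function () { resolve(reader.result); };
        reader.onerror = function () { reject(reader.error); };
        start();
    });
}

// A fake reader so the sketch runs outside the browser. It "reads"
// synchronously and fires onload, unlike the real async FileReader.
function FakeReader() { }
FakeReader.prototype.readAsText = function (file) {
    this.result = "contents of " + file;
    this.onload();
};

var reader = new FakeReader();
var promise = promisify(reader, function () {
    reader.readAsText("notes.txt");
});

promise.then(function (text) {
    // text === "contents of notes.txt"
});
```

Each FileReader method (readAsText, readAsArrayBuffer, and so on) would get a thin wrapper like readAsDataURL that builds a reader, starts the read, and returns the promise.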

One interesting event is the onprogress event of a FileReader. Since there is nothing to do with the promise during a progress event, the event is instead transformed and forwarded using scope.$broadcast. Other interested parties can use $scope.$on to register event handlers for such broadcasts. For example, here is a controller using the reader service.

var UploadController = function ($scope, fileReader) {
    
    $scope.getFile = function () {
        $scope.progress = 0;
        fileReader.readAsDataUrl($scope.file, $scope)
                      .then(function(result) {
                          $scope.imageSrc = result;
                      });
    };

    $scope.$on("fileProgress", function(e, progress) {
        $scope.progress = progress.loaded / progress.total;
    });

};

The progress value is fed into a progress element:

<progress value="{{progress}}"></progress>

But how does a controller know what file to read?

That’s where we can build some file input and drag and drop directives...

Building a FileReader Service For AngularJS: Promises, Promises

Tuesday, July 2, 2013 by K. Scott Allen

Let’s build an AngularJS service to wrap an HTML 5 FileReader object.

The first question a curious person might ask is: why create a service? Why not use a FileReader directly from the code in a model or controller?

Here are two reasons:

1) To build an adapter around FileReader that works with promises instead of callback functions (which is what this post will focus on).

2) To achieve greater flexibility. Services in AngularJS can be decorated or replaced at runtime.

To understand the advantages of #1, we’ll need to learn about promises in AngularJS. For more general information about promises, see “What’s so great about JavaScript Promises”.

Promises and $q

In Angular, $q is the well-known name of a promise provider, meaning you can use $q to create new promises. Here’s some code for a simple service that will perform async operations. The service requires $q as a dependency.

(function(module) {

    var slowService = function ($q) {

        var doWork = function () {
            var deferred = $q.defer();

            // asynch work will go here

            return deferred.promise;
        };

        return {
            doWork: doWork
        };
    };
    
    module.factory("slowService",
                   ["$q", slowService]);

}(angular.module("testApp")));

The doWork function uses $q.defer to create a deferred object that represents the outstanding work for the service to complete. The function returns the deferred object’s promise property, which will give the caller an API for figuring out when the work is complete. This is the basic pattern most async services will use, but in order for anything interesting to happen with the promise, the service will also need to resolve or reject the promise when the async work is complete. Here is a new version of the doWork function.

var doWork = function (value, scope) {
    var deferred = $q.defer();

    setTimeout(function() {
        scope.$apply(function() {
            if (value === "bad") {
                deferred.reject("Bad call");
            } else {
                deferred.resolve(value);
            }
        });
    }, 2000);

    return deferred.promise;
};

This version is using setTimeout to simulate work that takes 2000 milliseconds to complete. Normally you’d want to use the Angular $timeout service instead of setTimeout, but setTimeout is here to illustrate an important point.

AngularJS promises do not propagate the result of a completed promise until the next digest cycle.

This means there has to be a call to scope.$apply (which will kick off a digest cycle) for the promise holder to have their callback functions invoked. With the $timeout service we wouldn’t need to call $apply ourselves, since the $timeout service invokes our code inside $apply, but most things you’ll wrap with a service are not Angularized, so you’ll need to use $apply when resolving a promise.
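The rule is easier to see with a toy deferred whose callbacks only fire during an explicit flush, which stands in for the digest cycle. This is illustrative plain JavaScript, not how $q is actually implemented:

```javascript
// A deferred that records its value on resolve, but only invokes
// registered callbacks when flush() runs (our fake digest cycle).
function makeDeferred() {
    var callbacks = [];
    var value;
    var settled = false;

    return {
        resolve: function (v) {
            value = v;
            settled = true;
        },
        promise: {
            then: function (callback) {
                callbacks.push(callback);
            }
        },
        flush: function () {
            if (settled) {
                callbacks.forEach(function (cb) { cb(value); });
            }
        }
    };
}

var deferred = makeDeferred();
var seen = null;

deferred.promise.then(function (v) { seen = v; });
deferred.resolve(42);
// seen is still null here -- resolving alone propagates nothing
deferred.flush();
// only now does the callback run, and seen === 42
```

In real Angular, scope.$apply is what triggers the equivalent of flush, which is why resolving a $q promise outside of a digest cycle appears to do nothing.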

Promises From The Client Perspective

Promises can lead to readable code for the client, as the code in the bottom of the controller demonstrates.

var TestController = function($scope, $log, slowService) {

    var callComplete = function (result) {
        $scope.serviceResult = result;
    };

    var callFailed = function(reason) {
        $scope.serviceResult = "Failed: " + reason;
    };

    var logCall = function() {
        $log.log("Service call completed");
    };

    $scope.serviceResult = "";

    slowService
        .doWork("Hello!", $scope)
        .then(callComplete, callFailed)
        .always(logCall);
};

One interesting feature of AngularJS is how the data-binding infrastructure understands how to work with promises. If all you want to do is assign the resolved value to a variable for data-binding, then you can assign the variable a promise instead of using a callback and Angular will know how to pick up the resolved value and update the view.

$scope.serviceResult =
    slowService.doWork("Hello!", $scope)
               .always(logCall);

And of course chained promises can kick off new async operations and the runtime will work through the promises in a serial fashion.

var TestController = function($scope, $log, slowService) {

    var saveResult = function (result) {
        $scope.callResults.push(result);
    };
 
    $scope.callResults = [];

    slowService
        .doWork("Hello", $scope)
        .then(saveResult)
        .then(function() {
            return slowService.doWork("World", $scope);
        })
        .then(saveResult);
};

With the above code and this bit of markup:

<li ng-repeat="result in callResults">
    {{ result }}
</li>

... then “World” appears on the screen roughly 2 seconds after “Hello”.

It’s interesting to note in the last example that if the first call fails, the 2nd call never happens (because the 2nd call is started from an “on success” function). If you want to know that a call failed, you don’t have to add an error callback to every .then invocation. A rejected promise will tunnel its way through the rest of the chain, invoking error callbacks as it goes, meaning you can get away with a single error callback in the final .then.

For example, the following chained operations are doomed to failure since the service will reject the initial call with a parameter of “bad”.

slowService
    .doWork("bad", $scope)
    .then(saveResult)
    .then(function() {
        return slowService.doWork("World", $scope);
    })
    .then(saveResult, logFailure);

Even though the error handler logFailure doesn’t appear until the end of the chain, the initial failed service call will find it, and the 2nd service call is skipped.
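Native JavaScript promises follow the same propagation rule, so the behavior can be sketched without Angular at all (doWork here is a stand-in that returns a plain Promise):

```javascript
// A rejected promise skips success handlers until it finds a
// rejection handler, mirroring $q's error propagation.
function doWork(value) {
    return value === "bad"
        ? Promise.reject("Bad call")
        : Promise.resolve(value);
}

var log = [];

var finished = doWork("bad")
    .then(function (result) {        // skipped: the promise was rejected
        log.push(result);
        return doWork("World");      // this call never happens
    })
    .then(
        function (result) { log.push(result); },
        function (reason) { log.push("Failed: " + reason); }
    );

// When the chain settles, log contains only ["Failed: Bad call"]
```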

In addition to serial processing, you can kick off multiple promises and wait for them to complete using $q.all.

var promises = [];
var parameters = ["Hello", "World", "Final Call"];

angular.forEach(parameters, function(parameter) {
    var promise = slowService.doWork(parameter, $scope).then(saveResult);
    promises.push(promise);
});

$q.all(promises).then(function() {
    $scope.message = "All calls complete!";
});

Now that we know a little more about promises, we can move on to the business of building a service in the next post.

On The Coexistence of ASP.NET MVC and WebAPI

Monday, July 1, 2013 by K. Scott Allen

I’ve gotten more than a few questions over the last year on how to use the ASP.NET MVC framework and the Web API framework together. Do they work together? Should they work together? When should you use one or the other?

Here are some general rules of thumb I use.

1. If the bulk of the application is generating HTML on the server, then there is no need to use the Web API. Even if I have the occasional call from JavaScript to get JSON data for templates or an autocomplete widget, using regular MVC controllers will suffice.

[Figure: All ASP.NET MVC]

One thing to keep in mind is that the Web API is a separate framework with its own dependency resolver, action filters, routing rules, model binding, and model serialization settings. Bringing in a 2nd set of all these components just to satisfy a few JSON requests from script is an increase in complexity.

2. If an application is loaded with JavaScript and JSON flows back and forth to the server frequently, or if I have to support more clients than just the script in the pages from my site (some of whom might want XML or other formats), then it is a good time to create HTTP service endpoints and a more formal service API. These scenarios are just what Web API is made for, and both MVC controllers and Web API controllers will happily coexist in the same project.

[Figure: MVC With Web API]

Although the Web API does add some complexity, it is also easier to build a proper service with the Web API. I can use the verb based routing and content negotiation features to build services oriented around resources, and the OData support and automatic help page generation of the Web API framework can come in handy.

3. I’m not a big fan of services for services’ sake. In the previous two figures, the MVC UI and Web API pieces are drawn to suggest that they are only facades on top of a core application. Most of the interesting things are defined in the application, not in the UI and Web API layers. When someone suggests that the MVC controllers should talk over HTTP to Web API controllers in the same application, all I can think about is putting a façade over a façade, which seems silly.

 

[Figure: Tears for tiers]

There are some valid reasons to go with such an architecture (see #4), but be cautious when creating new tiers.

4. It is more than reasonable to integrate multiple applications or large pieces of functionality using services and the Web API. This is a scenario where having web service calls inside or behind the MVC controllers of an application is almost required.

[Figure: Big Enterprise]

The above type of scenario usually involves large applications and multiple teams. Using a service layer allows for more flexibility and scale compared to sharing binaries, or integrating at the database level (shudder with fear).

Parting Words

There is no quick and easy answer to the questions in this space. If you are looking for guidance, hopefully I’ve provided some rules of thumb you can use to start thinking about the answer for your specific scenario. While thinking, remember these lines from the Zen of Python:

Simple is better than complex

Complex is better than complicated

AngularJS Videos From NDC 2013

Friday, June 28, 2013 by K. Scott Allen

I can’t say enough good things about The Norwegian Developers Conference.

This year there were a couple talks on AngularJS.

First there was Tom Dale, Peter Cooper, and Rob Conery in an EmberJS versus AngularJS cage match. We also interviewed Tom and Rob on Herding Code after the match was over and Tom gave us some great insights on the future direction of Ember.

Tom Dale, Peter Cooper and Rob Conery; Cage Match - EmberJS vs. Angular from NDCOslo on Vimeo.

 

I also did a long and rambling live coding session with AngularJS. Hopefully someone can make sense of it all.

Scott Allen: The Abstractions of AngularJS from NDCOslo on Vimeo.

IE11 Preview and the New Developer Tools

Thursday, June 27, 2013 by K. Scott Allen

The Windows 8.1 preview includes a preview of Internet Explorer 11, which includes a new version of the F12 Developer Tools for inspecting, profiling, and debugging web sites.

The Good

The developer tools are now metrofied with flat buttons and an emphasis on content over window chrome. Although there are still a considerable number of icons and commands to press, it does seem easier to read and work with the information presented. All the important features are still here, though a few things seem to be missing (the ruler tool and link report, to name two); the features that remain behave the same and present the same information.

[Figure: Dev Tools DOM Explorer]

Update - yes, the DOM explorer tracks changes in real time now. Huge improvement!

The New

The emphasis is on performance with new “UI Responsiveness”, “Profiler”, and “Memory” sections.

The Memory tab is looking very useful for today’s apps and the heap snapshots are easier to use compared to the tools in other browsers. Likewise the code profiler is easy to work with and similar to the profiling tools for managed code in VS Ultimate.   


The “UI Responsiveness” tab is visually appealing and highly interactive but contains an enormous amount of information and will require some guidance and practice to use properly.

[Figure: UI Responsiveness]

The Missing

To get a full picture of what is happening on any given page, the IE dev tools will need to give us the ability to inspect local storage, session storage, IndexedDB, and Application Cache. I didn’t find any of these in the current release.

Most worrisome is how there doesn’t appear to be any extensibility story for the tools. Framework-oriented plugins for Knockout and Angular are popular in other browsers because they allow developers to work at a level above the code to see what is happening on a page, and the ability to use simple HTML and JavaScript to create these types of plugins is what makes the other tools so extensible. The IE dev tools will need a better extensibility story to keep web developers happy.
