True story: My first use of Python was a little over 10 years ago. I used Python to prevent rabbits from running into campfires and injuring themselves.
Of course the rabbits were virtual, as were the campfires. I was working with a team to create MMORPG middleware, and Python scripts were the brains for some of the lesser NPCs.
I’ve been working with Python again recently.
class Cart:
    def __init__(self):
        self.contents = dict()

    def process(self, order):
        if order.add:
            if order.item not in self.contents:
                self.contents[order.item] = 0
            self.contents[order.item] += 1
        if order.delete:
            if order.item in self.contents:
                self.contents[order.item] -= 1
                if self.contents[order.item] == 0:
                    del self.contents[order.item]

    def __repr__(self):
        return "Cart: {0!r}".format(self.__dict__)
It’s interesting how my opinion of Python has changed. Ten years ago the majority of my programming experience was with C, C++, and Java. My thoughts on Python were:
1. Using indentation to control block structure seemed weird, considering whitespace was insignificant everywhere else.
2. Tuples were useful.
3. Double underscores looked cool.
Today's thoughts:
1. Tuples are still useful, but the REPL perhaps more so.
2. How did I miss lambdas, generators, map, filter, and reduce?
3. Double underscores are ugly, but the absence of { and } is beautiful.
I like Python; it's a great language. I have a better appreciation for its features today than I did 10 years ago.
Continuing from previous posts on building a file input directive and a file reader service, this post contains my first try at a drag-n-drop directive that uses the file reader service to copy an image dropped from the desktop into an img element.
As always, I welcome suggestions!
Native HTML 5 drag-and-drop is easy to work with. The directive handles the dragover, dragleave, and drop events on the target element. Dragover and dragleave are mostly about manipulating classes on the element to style it as a droppable target, as well as using e.preventDefault(), which is required for the drop event to work.
module.directive("imageDrop", function ($parse, fileReader, resampler) {
    return {
        restrict: "EA",
        link: function (scope, element, attrs) {
            var expression = attrs.imageDrop;
            var accessor = $parse(expression);

            var onDragOver = function (e) {
                e.preventDefault();
                element.addClass("dragOver");
            };

            var onDragEnd = function (e) {
                e.preventDefault();
                element.removeClass("dragOver");
            };

            var placeImage = function (imageData) {
                accessor.assign(scope, imageData);
            };

            var resampleImage = function (imageData) {
                return resampler.resample(imageData, element.width(),
                                          element.height(), scope);
            };

            var loadFile = function (file) {
                fileReader
                    .readAsDataUrl(file, scope)
                    .then(resampleImage)
                    .then(placeImage);
            };

            element.bind("dragover", onDragOver)
                   .bind("dragleave", onDragEnd)
                   .bind("drop", function (e) {
                       onDragEnd(e);
                       loadFile(e.originalEvent.dataTransfer.files[0]);
                   });

            scope.$watch(expression, function () {
                element.attr("src", accessor(scope));
            });
        }
    };
});
Some of the code to ensure only images are being processed is omitted, but I do want to point out the image resizing code. It’s wrapped by the resampler service below, which in turn uses Resampler.js from the post “100% Client Side Image Resizing”.
var resampler = function ($q) {

    var resample = function (imageData, width, height, scope) {
        var deferred = $q.defer();
        Resample(imageData, width, height, function (result) {
            scope.$apply(function () {
                deferred.resolve(result);
            });
        });
        return deferred.promise;
    };

    return {
        resample: resample
    };
};

module.factory("resampler", resampler);
If you start almost any new ASP.NET project in the new preview of Visual Studio 2013, you’ll find a reference to the Owin package inside. OWIN (the Open Web Interface for .NET) is a specification designed to decouple web servers from the frameworks and applications they host. The OWIN goal is to provide a lightweight, modular, and portable platform for mixing and matching components, frameworks, and servers.
Katana is Microsoft’s implementation of OWIN components. The code is available on CodePlex.
In a future post we can talk about how OWIN compares to System.Web, but first let’s get a simple example up and running from scratch.
In VS2013 you can start a new console mode application, then run the following commands in the package manager console:
Install-Package Microsoft.Owin.Hosting -IncludePrerelease
Install-Package Microsoft.Owin.Host.HttpListener -IncludePrerelease
Install-Package Microsoft.Owin.Diagnostics -IncludePrerelease
Install-Package Owin.Extensions -IncludePrerelease
Inside the Main entry point for the console application, we can use the WebApp class to start an HTTP listener.
static void Main(string[] args)
{
    string uri = "http://localhost:8080/";
    using (WebApp.Start<Startup>(uri))
    {
        Console.WriteLine("Started");
        Console.ReadKey();
        Console.WriteLine("Stopping");
    }
}
The Startup class used as the generic type parameter to WebApp.Start is a class we’ll have to implement, too.
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.UseWelcomePage();
    }
}
IAppBuilder is an interface we can use to compose the application for Katana to host. In this setup we’ll invoke the UseWelcomePage extension method provided by Microsoft.Owin.Diagnostics. Running the console program and pointing a browser at http://localhost:8080 produces the Katana welcome page.
Other extension methods allow for more fine-grained control over the request and response processing. For example, UseHandlerAsync from Owin.Extensions allows for a more traditional “Hello, World” response to every request.
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.UseHandlerAsync((req, res) =>
        {
            res.ContentType = "text/plain";
            return res.WriteAsync("Hello, World!");
        });
    }
}
So far this doesn’t appear much different than self-hosting WebAPI in a console application, but in some future posts we’ll dig a little deeper into some examples showing the loftier goals of OWIN and Katana.
While digging through some directories of archived source code I found the first program I ever wrote in C#.
I’m not sure when I wrote this, but since there was a makefile in the directory I’m guessing this was still in the .NET 1.0 beta days of late 2000.
/******************************************************************************
   CLIPBOARD.CS

   Based on the code and idea in Bill Wagner's VCDJ Fundamentals column.
   This program takes piped input or a filename argument and copies all
   stream data to the clipboard.

   Examples:
      dir | clipboard
      clipboard clipboard.cs
******************************************************************************/

using System;
using System.IO;
using System.WinForms;

class MainApp
{
    public static void Main(string[] args)
    {
        // The clipboard class uses COM interop. I figured this out because
        // calls to put data in the clipboard always failed and further
        // investigation showed a failed hresult indicating no CoInitialize.
        // Here is the .NET equivalent:
        Application.OLERequired();

        TextReader textReader;

        if (args.Length == 0)
        {
            // take the piped input from stdin
            textReader = System.Console.In;
        }
        else
        {
            // open the text file specified on command line
            File file = new File(args[0]);
            textReader = file.OpenText();
        }

        string line;
        string allText = "";
        Boolean pipeFull = true;

        while (pipeFull)
        {
            try
            {
                // When the pipe is empty, ReadLine throws an exception
                // instead of the documented "return a null string" behavior.
                // When reading from a file a null string is returned.
                line = textReader.ReadLine();
                if (line == null)
                {
                    pipeFull = false;
                }
                else
                {
                    allText += line;
                    allText += "\r\n";
                }
            }
            catch (System.IO.IOException ex)
            {
                if (ex.Message == "The pipe has been ended")
                {
                    pipeFull = false;
                }
                else
                {
                    throw ex;
                }
            }
        }

        Clipboard.SetDataObject(allText, true);
    }
}
The first thoughts that came to mind when seeing this code again were:
1) Wow, that’s a long function by today’s standards.
2) I could use this!
Before ReSharpering the program into shape, I did a quick search and discovered Windows now comes with such a program by default. It’s called clip. I guess I can leave the code in the archive.
Now that we have a FileReader service for AngularJS, we need something that will give us a file to read. The two ways for users to select files are to use <input type=’file’>, or to drag and drop a file into the browser.
We’ll build a directive for the file input this week, and look at drag and drop next week.
But first, why are we using a directive?
As discussed before, directives are where the DOM and your Angular code can all come together. Directives can manipulate the DOM, listen for events in the DOM, and move data between the model and the view. For simple directives, most of the work is in the link function of the directive. Inside of link you’ll have access to your associated DOM element and the current scope. The DOM element is wrapped in jQuery lite (unless you are using jQuery, then it will be wrapped with jQuery), so you can wire up events on the element, change its classes, set its text, its HTML, and so on.
In our fileInput directive, the most important DOM operation is to wire up the change event.
var fileInput = function ($parse) {
    return {
        restrict: "EA",
        template: "<input type='file' />",
        replace: true,
        link: function (scope, element, attrs) {
            var modelGet = $parse(attrs.fileInput);
            var modelSet = modelGet.assign;
            var onChange = $parse(attrs.onChange);

            var updateModel = function () {
                scope.$apply(function () {
                    modelSet(scope, element[0].files[0]);
                    onChange(scope);
                });
            };

            element.bind("change", updateModel);
        }
    };
};
The rest of the code in the link function is moving a selected file into the model. This directive assumes there will be a single file. The directive can also fire off an onChange expression set in the HTML. Most of this work is easy because the $parse service in Angular can essentially turn HTML attribute values into executable code.
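To see why $parse makes this easy, here is a simplified sketch of the idea (my own illustration, not Angular's actual implementation): parsing an expression yields a getter function that also carries an assign method for writing back into the scope. Real $parse handles full Angular expressions; this toy version only handles a single property name.

```javascript
// Simplified sketch of the $parse idea (not Angular's real implementation).
// parse(expression) returns a getter; getter.assign writes back to the scope.
function parse(expression) {
    var getter = function (scope) {
        return scope[expression];
    };
    getter.assign = function (scope, value) {
        scope[expression] = value;
    };
    return getter;
}

var accessor = parse("file");
var scope = {};
accessor.assign(scope, "picture.png");   // write into the model
var selected = accessor(scope);          // read back out: "picture.png"
```

This getter/assign pair is exactly the shape the fileInput directive relies on when it calls modelGet.assign to push the selected file into the model.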
Using the directive in markup is easy:
<div file-input="file" on-change="readFile()"></div>
<img ng-src="{{imageSrc}}"/>
The associated controller just needs to implement readFile for the code to all work.
var UploadController = function ($scope, fileReader) {
    $scope.readFile = function () {
        fileReader.readAsDataUrl($scope.file, $scope)
                  .then(function (result) {
                      $scope.imageSrc = result;
                  });
    };
};
In the previous post we looked at promises in AngularJS from both the creator and client perspectives. Here is an Angular service that wraps the FileReader and transforms an evented API into a promise API. The onload event resolves a promise, and the onerror event rejects a promise. Notice the use of scope.$apply to propagate the promise results.
(function (module) {

    var fileReader = function ($q, $log) {

        var onLoad = function (reader, deferred, scope) {
            return function () {
                scope.$apply(function () {
                    deferred.resolve(reader.result);
                });
            };
        };

        var onError = function (reader, deferred, scope) {
            return function () {
                scope.$apply(function () {
                    deferred.reject(reader.result);
                });
            };
        };

        var onProgress = function (reader, scope) {
            return function (event) {
                scope.$broadcast("fileProgress", {
                    total: event.total,
                    loaded: event.loaded
                });
            };
        };

        var getReader = function (deferred, scope) {
            var reader = new FileReader();
            reader.onload = onLoad(reader, deferred, scope);
            reader.onerror = onError(reader, deferred, scope);
            reader.onprogress = onProgress(reader, scope);
            return reader;
        };

        var readAsDataURL = function (file, scope) {
            var deferred = $q.defer();
            var reader = getReader(deferred, scope);
            reader.readAsDataURL(file);
            return deferred.promise;
        };

        return {
            readAsDataUrl: readAsDataURL
        };
    };

    module.factory("fileReader", ["$q", "$log", fileReader]);

}(angular.module("testApp")));
The service only exposes a single method (readAsDataUrl). The rest of the FileReader methods are omitted for brevity, but they follow the same pattern.
One interesting event is the onprogress event of a FileReader. Since there is nothing to do with the promise during a progress event, the event is instead transformed and forwarded using scope.$broadcast. Other interested parties can use $scope.$on to register event handlers for such broadcasts. For example, here is a controller using the reader service.
var UploadController = function ($scope, fileReader) {

    $scope.getFile = function () {
        $scope.progress = 0;
        fileReader.readAsDataUrl($scope.file, $scope)
                  .then(function (result) {
                      $scope.imageSrc = result;
                  });
    };

    $scope.$on("fileProgress", function (e, progress) {
        $scope.progress = progress.loaded / progress.total;
    });
};
The progress value is fed into a progress element:
<progress value="{{progress}}"></progress>
But how does a controller know what file to read?
That’s where we can build some file input and drag and drop directives...
Let’s build an AngularJS service to wrap an HTML 5 FileReader object.
The first question a curious person might ask is: why create a service? Why not use a FileReader directly from the code in a model or controller?
Here are two reasons:
1) To build an adapter around FileReader that works with promises instead of callback functions (which is what this post will focus on).
2) To achieve greater flexibility. Services in AngularJS can be decorated or replaced at runtime.
To understand the advantages of #1, we’ll need to learn about promises in AngularJS. For more general information about promises, see “What’s so great about JavaScript Promises”.
In Angular, $q is the well-known name of a promise provider, meaning you can use $q to create new promises. Here’s some code for a simple service that will perform async operations. The service requires $q as a dependency.
(function (module) {

    var slowService = function ($q) {

        var doWork = function () {
            var deferred = $q.defer();
            // async work will go here
            return deferred.promise;
        };

        return {
            doWork: doWork
        };
    };

    module.factory("slowService", ["$q", slowService]);

}(angular.module("testApp")));
The doWork function uses $q.defer to create a deferred object that represents the outstanding work for the service to complete. The function returns the deferred object’s promise property, which will give the caller an API for figuring out when the work is complete. This is the basic pattern most async services will use, but in order for anything interesting to happen with the promise, the service will also need to resolve or reject the promise when the async work is complete. Here is a new version of the doWork function.
var doWork = function (value, scope) {
    var deferred = $q.defer();
    setTimeout(function () {
        scope.$apply(function () {
            if (value === "bad") {
                deferred.reject("Bad call");
            } else {
                deferred.resolve(value);
            }
        });
    }, 2000);
    return deferred.promise;
};
This version is using setTimeout to simulate work that takes 2000 milliseconds to complete. Normally you’d want to use the Angular $timeout service instead of setTimeout, but setTimeout is here to illustrate an important point.
AngularJS promises do not propagate the result of a completed promise until the next digest cycle.
This means there has to be a call to scope.$apply (which will kick off a digest cycle) for the promise holder to have their callback functions invoked. With the $timeout service we wouldn’t need to call $scope.$apply ourselves, since the $timeout service will call into our code using $apply, but most things you’ll wrap with a service are not Angularized, so you’ll need to use $apply when resolving a promise.
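For comparison, the same deferred pattern can be sketched with native JavaScript promises (my analogue of $q.defer, not Angular code). Native promises propagate results on their own microtask queue, which is why no $apply or digest cycle appears here; that step is an Angular-specific requirement.

```javascript
// Sketch of the deferred pattern with native promises (analogue of $q.defer).
function defer() {
    var deferred = {};
    deferred.promise = new Promise(function (resolve, reject) {
        deferred.resolve = resolve;
        deferred.reject = reject;
    });
    return deferred;
}

// Same shape as the Angular doWork, minus the scope/$apply plumbing.
function doWork(value) {
    var deferred = defer();
    setTimeout(function () {
        if (value === "bad") {
            deferred.reject("Bad call");
        } else {
            deferred.resolve(value);
        }
    }, 10);
    return deferred.promise;
}
```

The caller still receives only the promise, never the resolve and reject functions, so the service stays in control of when and how the work completes.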
Promises can lead to readable code for the client, as the code at the bottom of the controller demonstrates.
var TestController = function ($scope, $log, slowService) {

    var callComplete = function (result) {
        $scope.serviceResult = result;
    };

    var callFailed = function (reason) {
        $scope.serviceResult = "Failed: " + reason;
    };

    var logCall = function () {
        $log.log("Service call completed");
    };

    $scope.serviceResult = "";

    slowService
        .doWork("Hello!", $scope)
        .then(callComplete, callFailed)
        .always(logCall);
};
One interesting feature of AngularJS is that the data-binding infrastructure understands how to work with promises. If all you want to do is assign the resolved value to a variable for data-binding, then you can assign the variable a promise instead of using a callback, and Angular will know how to pick up the resolved value and update the view.
$scope.serviceResult = slowService.doWork("Hello!", $scope)
                                  .always(logCall);
And of course chained promises can kick off new async operations and the runtime will work through the promises in a serial fashion.
var TestController = function ($scope, $log, slowService) {

    var saveResult = function (result) {
        $scope.callResults.push(result);
    };

    $scope.callResults = [];

    slowService
        .doWork("Hello", $scope)
        .then(saveResult)
        .then(function () {
            return slowService.doWork("World", $scope);
        })
        .then(saveResult);
};
With the above code and this bit of markup:
<li ng-repeat="result in callResults">
    {{ result }}
</li>
... then “World” appears on the screen roughly 2 seconds after “Hello”.
It’s interesting to note in the last example that if the first call fails, the 2nd call never happens (because the 2nd call is started from an “on success” function). If you want to know that a call failed, you don’t have to add an error callback to every .then invocation. A rejected promise will tunnel its way through the rest of the chain, invoking error callbacks as it goes, meaning you could get away with a single error callback in the final .then.
For example, the following chained operations are doomed to failure since the service will reject the initial call with a parameter of “bad”.
slowService
    .doWork("bad", $scope)
    .then(saveResult)
    .then(function () {
        return slowService.doWork("World", $scope);
    })
    .then(saveResult, logFailure);
Even though the error handler logFailure doesn’t appear until the end of the chain, the initial failed service call will find it, and the 2nd service call is skipped.
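Native JavaScript promises exhibit the same tunneling behavior, so it can be demonstrated without any Angular plumbing (a plain-JS sketch of mine, not $q itself): a rejection skips every success callback in the chain until it reaches the first rejection handler.

```javascript
// A rejection "tunnels" past success callbacks to the first error handler.
var steps = [];

var chain = Promise.reject("Bad call")
    .then(function () { steps.push("first");  })   // skipped
    .then(function () { steps.push("second"); })   // skipped
    .then(
        function () { steps.push("third"); },      // skipped
        function (reason) { steps.push("caught: " + reason); }
    );
```

When the chain settles, steps contains only the single "caught" entry; none of the success callbacks ever ran.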
In addition to serial processing, you can kick off multiple promises and wait for them to complete using $q.all.
var promises = [];
var parameters = ["Hello", "World", "Final Call"];

angular.forEach(parameters, function (parameter) {
    var promise = slowService.doWork(parameter, $scope).then(saveResult);
    promises.push(promise);
});

$q.all(promises).then(function () {
    $scope.message = "All calls complete!";
});
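The native analogue of $q.all is Promise.all, and the shape of the code is nearly identical (a sketch under that assumption, with Promise.resolve standing in for the slow service): the final callback runs only after every promise in the array has resolved.

```javascript
// Native analogue of the $q.all pattern above.
var results = [];
var parameters = ["Hello", "World", "Final Call"];

var promises = parameters.map(function (parameter) {
    // Promise.resolve stands in for a real async service call.
    return Promise.resolve(parameter).then(function (result) {
        results.push(result);
    });
});

var allDone = Promise.all(promises).then(function () {
    return "All calls complete!";
});
```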
Now that we know a little more about promises, we can move on to the business of building a service in the next post.