
MVC 4 Video Outtakes

Monday, July 30, 2012 by K. Scott Allen

I recently finished an ASP.NET MVC 4 course for Pluralsight.

Making a set of tech videos is not an easy job for me. Every video needs careful editing to remove moments of despair, frustration, and delusion. Not to mention the workplace hazards.

I put together 90 seconds of outtakes from the MVC 4 videos to share with you. I hope you can laugh at the painful parts (direct link).

Swinging on a Canvas

Thursday, July 26, 2012 by K. Scott Allen

Building on top of requestAnimationFrame from earlier this week, I put together a simple example of using basic verlet integration to simulate a swinging rope in an HTML5 canvas. You can pull the source from GitHub, or try the sample on jsFiddle.

It’s amazing what you can do just by adding numbers together in a certain fashion. To make things interesting, there is also an option to let the rope leave a trail of pixels behind as it swings, which can make for interesting patterns (as shown in the image in this post).
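The "adding numbers together in a certain fashion" is the heart of verlet integration: each point stores its current and previous positions, and velocity is implicit in the difference between them. Here is a minimal sketch of a single integration step (a hypothetical verletStep helper, not the code from the GitHub sample), assuming a point mass under a constant acceleration:

```javascript
// One verlet integration step: the next position comes from the
// current and previous positions plus acceleration * dt^2.
// Velocity is never stored explicitly.
function verletStep(point, ax, ay, dt) {
    var nextX = 2 * point.x - point.prevX + ax * dt * dt;
    var nextY = 2 * point.y - point.prevY + ay * dt * dt;
    return { x: nextX, y: nextY, prevX: point.x, prevY: point.y };
}

// A point at rest accelerates downward under gravity:
var p = { x: 0, y: 0, prevX: 0, prevY: 0 };
var next = verletStep(p, 0, 9.8, 0.1); // y becomes 9.8 * 0.1 * 0.1 = 0.098
```

Constraints (like the fixed distance between rope segments) are then enforced in a separate pass after each step, which is what keeps the rope from stretching apart.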

Evolution of the Entity Framework: Lesson 5

Wednesday, July 25, 2012 by K. Scott Allen

Imagine you are in control of a large company that specializes in building tools for developers. Your staff is full of developers with tool-building expertise. They write lexers during lunch breaks and WYSIWYG designers on weekends.

Next, imagine someone asks you to select the perfect serialization format for your tools. Of course you’d pick XML, right? XML is easy to parse when you need to load data into your tool, and easy to create when your tool saves some output. Building tools with XML is like having your cake and eating it, too.


But wait, did anyone ask the customer if XML makes their job easier?

An Ocean of Angle Brackets

XML appears in everything from configuration files to form builders these days. In some scenarios XML is a good choice, but quite often developers have to read, write, and modify the XML created by tools, understand what a tool is producing, or diff two versions of the tool’s output to track down a bug. None of these scenarios are pleasant. Tool vendors like to use XML because XML is convenient for the tools, but contrast this justification against the design philosophy for Ruby put forth by Matz:

I hope to see Ruby help every programmer in the world to be productive, and to enjoy programming, and to be happy. That is the primary purpose of Ruby language.

When is the last time you felt productive and happy with XML?

Of course, the idea is to work only inside the tool and never look at the XML output. It’s easy to stay in a tool when a tool is a word processor, but developer tools are different.

Here are a few examples.

The first two versions of the Entity Framework used a visual designer to generate an .edmx file with all of the model metadata required for the Entity Framework to function. The .edmx file was an XML file. It was possible to do some work in the designer only to discover a build error during the next compilation, like the following:

Problem in Mapping Fragments starting at lines 6, 69: Non-Primary-Key column(s) CalendarID are being mapped in both fragments to different conceptual side properties - data inconsistency is possible because the corresponding conceptual side properties can be independently modified.

After you get past the initial shock of the foreign vocabulary, you’ll realize the error message is pointing you to line numbers inside the XML contents of the .edmx file. Woe to those who open the .edmx file with an XML editor and wade through the ocean of angle brackets. The entity data model inside the file requires more XML than you might expect if you’ve previously done any ORM mapping with XML files.

It was also possible, particularly in V1, to run into a scenario the entity data model designer didn’t support. Implementing table-per-hierarchy inheritance, for example, required wading through several hundred lines of XML just to find the important bits that make the inheritance approach work.

Which leads us to the 5th and final lesson derived from the evolution of the Entity Framework:

Lesson 5: Have Empathy For Your Customer

When you sit down to design a software framework or tool, something you should keep in mind is the user experience. How will they interact with your software? How will they use your software to perform a job? What sort of jobs will the user need to do? Will they be happy, and productive, and enjoy programming?

There were a number of difficult aspects to the Entity Framework that all appear to derive from design decisions that benefited the Entity Framework instead of the customer. The non-standard connection strings, the lack of implicit lazy loading, the difficult API for working with object graphs, the visual designer that couldn’t scale up to model complex domains, and of course the mountains of XML, to name just a few.

In a future post we’ll take a more positive attitude and see how the more recent versions of the Entity Framework have addressed the 5 lessons we’ve looked at over this series of posts.

Evolution Of The Entity Framework: Lesson 4

Tuesday, July 24, 2012 by K. Scott Allen

The first version of the Entity Framework could create an entity data model by reverse engineering a database and then generating code from the resulting conceptual data model. One of the enhancements in the second release of the framework was to make the code generation extensible through T4 templates. You could download a template provided by Microsoft, customize an existing template, or create a template from scratch to tweak the code generation strategy.


T4 templates provided a nice extensibility point for teams who wanted to change the generated code. Unfortunately, one of the templates provided by Microsoft was the “POCO Entity Generator”. The POCO Entity Generator came about because many people disliked the default code generation strategy. The default strategy forced all the entities to derive from an Entity Framework base class (EntityObject), and included a large number of partial methods and serialization attributes. Many developers asked for the Entity Framework to work with POCOs (plain old C# objects) instead – but why?

Objects of Desire

The term POCO came from POJO (plain old Java object), a term coined around the year 2000 to describe simple Java objects that were not encumbered by framework requirements. The Manning book “POJOs in Action” is an interesting read about why POJOs came into existence. To summarize: programmers were frustrated trying to solve business problems because the application frameworks required boilerplate infrastructure code in every class. The boilerplate code was noisy, the components were difficult to test, the software was slow to build and deploy, and the overall architecture guided developers toward procedural code and transaction scripts.

POJOs were about making object-oriented development easier, because a programmer didn’t have to think about business logic, persistence, and transactions all at once. POJOs let the developers drive the software design using business requirements instead of framework requirements. In short, POJOs let the developers control the code.

My Generation

The Entity Framework POCO template technically generates plain old C# objects, because the objects don’t derive from a framework base class, and don’t include partial methods and serialization attributes. However, the true spirit of code ownership through POCOs is lost in code generation. The true spirit of POJO and POCO programming is owning the code from the start and building a core model of the business problem to solve. Like a gardener caring for seedlings, the developer wants to grow the classes with a hands-on approach. With EF 4, the code generator still owns the POCO classes, and the POCO template does not address one of the early criticisms expressed in the Entity Framework Vote Of No Confidence:

The Entity Framework encourages the Anemic Domain Model anti-pattern by discouraging the inclusion of business logic in the entity classes. While it is possible for the business logic to be written in partial classes, this adds some awkwardness to the code as the entity data and the entity business rules and logic live in separate knowledge and user experience contexts.

Using code generation to create a set of POCO objects is backwards, but I’ve met many teams and developers who feel good about using generated POCOs because the word “POCO” covers them in the amorphous blanket of software best practices. They don’t use test-driven development or hexagonal architectures, don’t break away from procedural transaction scripts, and don’t try to use any of the advantages true POCO development could provide.

The POCO template, by virtue of its name, tricked many developers into cargo cult programming.


This leads us to the lesson for this entry:

Lesson 4: Understand Your Customer’s Problem

The developers who wanted to work with POCOs didn’t technically want POCOs – they wanted to own the code and build applications starting with an inner core of domain logic. Code generation couldn’t solve this problem, and the POCO template was a misleading solution.

In a future post, we’ll learn one last lesson from the evolution of the Entity Framework.

Using requestAnimationFrame in JavaScript

Monday, July 23, 2012 by K. Scott Allen

There are a few different techniques you can use to animate objects in a web browser. The easiest are declarative animations with CSS3 transitions. With CSS you can tell the browser to apply property changes over time instead of instantaneously, and even add some easing to make the animation appear natural. See David Rousset’s Introduction to CSS3 Animations and David Catuhe’s Transitions Puzzle for some clever examples.

Other types of animation require custom algorithms. Verlet integration, as you’ll see in a future post, can produce some enjoyable effects, but also requires custom logic in script code and a timer tick to manually update the screen at regular intervals. Script code traditionally implemented the periodic screen updates using setTimeout or setInterval, but the future is requestAnimationFrame.

RAF in 17 Syllables

One way to think about requestAnimationFrame (RAF for short), is to contemplate the following poem, which I wrote while waiting for a dish of Yakisoba topped with red peppers and sesame seeds to arrive.

game loop request
electron salvation
render springtime on my screen

To express RAF in more straightforward prose: RAF lets you set up a loop by repeatedly telling the browser you want to draw a frame on the screen. Since the browser knows the best time to update the screen, it can optimize calls into your drawing code and synchronize with all the other painting and drawing. The optimization can lead to faster performance, improved CPU utilization, and extended battery life for portable devices. RAF is already supported in Chrome, Firefox, and IE 10. If a browser doesn’t support RAF natively, you can always fall back to setTimeout.
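The fallback itself only takes a few lines. Here is a minimal sketch, assuming a browser environment where window may or may not expose requestAnimationFrame (the vendor-prefixed variants browsers shipped at the time are omitted for brevity):

```javascript
// Use the native requestAnimationFrame when available; otherwise
// approximate a 60 frames-per-second tick with setTimeout.
var raf = (typeof window !== "undefined" && window.requestAnimationFrame)
    ? window.requestAnimationFrame.bind(window)
    : function (callback) {
          return setTimeout(function () {
              callback(new Date().getTime());
          }, 1000 / 60);
      };
```

The bind call matters in browsers: requestAnimationFrame throws if invoked without window as its receiver.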


To use RAF you need to invoke the RAF method and pass a callback. The callback is the function with your code to paint the illusion of motion, and you’ll keep calling RAF from within the paint method to set up an endless loop. Here is a simple example with a crude calculation for frames per second. The code will output the FPS measurement to a div.

<div id="frameRate"></div>

$(document).ready(function () {

    var framesPerSecond = 0;
    var output = $("#frameRate");
    var lastRun = new Date().getTime();

    var loop = function () {
        framesPerSecond = 1 / ((new Date().getTime() - lastRun) / 1000);
        lastRun = new Date().getTime();
        output.text(Math.round(framesPerSecond) + " fps");
        window.requestAnimationFrame(loop);
    };

    window.requestAnimationFrame(loop);
});
Typically you would move objects or refresh and draw in a canvas element during the loop. We’ll look at simulating a swinging rope with verlet integration in a future post.