
Evolution Of The Entity Framework: Lesson 4

Tuesday, July 24, 2012 by K. Scott Allen

The first version of the Entity Framework could create an entity data model by reverse engineering a database, and then generating code from the resulting conceptual data model. One of the enhancements in the second release of the framework was to make the code generation extensible through T4 templates. You could download a template provided by Microsoft, customize an existing template, or create a template from scratch to tweak the code generation strategy.

EF 4 T4 Templates

T4 templates provided a nice extensibility point for teams who wanted to change the generated code. Unfortunately, one of the templates provided by Microsoft was the “POCO Entity Generator”. The POCO Entity Generator came about because many people disliked the default code generation strategy. The default strategy forced all the entities to derive from an Entity Framework base class (EntityObject), and included a large number of partial methods and serialization attributes. Many developers asked for the Entity Framework to work with POCOs (plain old C# objects) instead – but why?

Objects of Desire

The word POCO came from the word POJO (plain old Java object), a term coined around the year 2000 to describe simple Java objects that were not encumbered by framework requirements. The Manning Book “POJOs In Action” is an interesting read about why POJOs came into existence. To summarize - programmers were frustrated trying to solve business problems because the application frameworks required boilerplate infrastructure code in every class. The boilerplate code was noisy, the components were difficult to test, the software was slow to build and deploy, and the overall architecture guided developers towards implementing procedural code and transaction scripts.

POJOs were about making object-oriented development easier, because a programmer didn’t have to think about business logic, persistence, and transactions all at once. POJOs let the developers drive the software design using business requirements instead of framework requirements. In short, POJOs let the developers control the code.

My Generation

The Entity Framework POCO template technically generates plain old C# objects, because the objects don’t derive from a framework base class, and don’t include partial methods and serialization attributes. However, the true spirit of code ownership through POCOs is lost in code generation. The true spirit of POJO and POCO programming is in owning the code from the start and building a core model of the business problem to solve. Like a gardener who cares for seedlings, the developer wants to grow the classes with a hands-on approach. With EF 4, the code generator still owns the POCO classes, and the POCO template does not address one of the early criticisms expressed in the Entity Framework Vote Of No Confidence:

The Entity Framework encourages the Anemic Domain Model anti-pattern by discouraging the inclusion of business logic in the entity classes. While it is possible for the business logic to be written in partial classes, this adds some awkwardness to the code as the entity data and the entity business rules and logic live in separate knowledge and user experience contexts.

Using code generation to create a set of POCO objects is backwards, but I’ve met many teams and developers who feel good about using generated POCOs because the word “POCO” covers them in the amorphous blanket of software best practices. They don’t use test-driven development or hexagonal architectures, don’t break away from procedural transaction scripts, and don’t try to gain any of the advantages true POCO development could provide.

The POCO template, by virtue of its name, tricked many developers into cargo cult programming.


This leads us to the lesson for this entry:

Lesson 4: Understand Your Customer’s Problem

The developers who wanted to work with POCOs didn’t technically want POCOs – they wanted to own the code and build applications starting with an inner core of domain logic. Code generation couldn’t solve this problem, and the POCO template was a misleading solution.

In a future post, we’ll learn one last lesson from the evolution of the Entity Framework.

Using requestAnimationFrame in JavaScript

Monday, July 23, 2012 by K. Scott Allen

There are a few different techniques you can use to animate objects in a web browser. The easiest animations are declarative animations with CSS 3 transitions. With CSS you can tell the browser to apply property changes over time instead of instantaneously, and even add some easing to make the animation appear natural. See David Rousset’s Introduction to CSS3 Animations and David Catuhe’s Transitions Puzzle for some clever examples.

Other types of animation require custom algorithms. Verlet integration, as you’ll see in a future post, can produce some enjoyable effects, but also requires custom logic in script code and a timer tick to manually update the screen at regular intervals. Script code traditionally implemented the periodic screen updates using setTimeout or setInterval, but the future is requestAnimationFrame.

RAF in 17 Syllables

One way to think about requestAnimationFrame (RAF for short), is to contemplate the following poem, which I wrote while waiting for a dish of Yakisoba topped with red peppers and sesame seeds to arrive.

game loop request
electron salvation
render springtime on my screen

To express RAF in more straightforward prose is to say that RAF allows you to set up a loop by repeatedly telling the browser you want to draw a frame on the screen. Since the browser knows the best time to update the screen, it can optimize calls into your drawing code and synchronize them with all the other painting and drawing. The optimization can lead to faster performance, improved CPU utilization, and extended battery life for portable devices. RAF is already supported in Chrome, Firefox, and IE 10. If a browser doesn’t support RAF natively, you can always fall back to setTimeout.
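The setTimeout fallback can be sketched as a small shim. This is a minimal sketch rather than a production polyfill; the 1000 / 60 interval (my choice, not from the post) approximates 60 frames per second:

```javascript
// A sketch of a RAF-or-setTimeout shim. In any environment without a
// native requestAnimationFrame, fall back to a setTimeout tick that
// approximates 60 frames per second and passes a timestamp, as RAF does.
var raf = (typeof requestAnimationFrame === "function")
    ? requestAnimationFrame
    : function (callback) {
        return setTimeout(function () {
            callback(new Date().getTime());
        }, 1000 / 60);
    };
```

Calling raf(loop) then behaves the same from the caller’s point of view whichever branch was taken.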

Example

To use RAF you need to invoke the requestAnimationFrame method and pass a callback. The callback is the function with your code to paint the illusion of motion, and you’ll keep calling RAF from within the callback to set up an endless loop. Here is a simple example with a crude calculation for frames per second. The code will output the FPS measurement to a div.

<div id="frameRate"></div>

<script>
    $(document).ready(function () {

        var framesPerSecond = 0;
        var output = $("#frameRate");
        var lastRun = new Date().getTime();

        var loop = function () {
            requestAnimationFrame(loop);
            framesPerSecond = 1 / ((new Date().getTime() - lastRun) / 1000);
            lastRun = new Date().getTime();
            output.text(Math.round(framesPerSecond));
        };

        loop();
    });
</script>


Typically you would move objects or refresh and draw in a canvas element during the loop. We’ll look at simulating a swinging rope with verlet integration in a future post.
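As a sketch of what such a loop body might do (the velocity number is invented for illustration), the per-frame update can be kept as a pure function of elapsed time, which makes the motion independent of frame rate:

```javascript
// Advance a position by velocity * elapsed seconds. Scaling by elapsed
// time (rather than moving a fixed amount per frame) keeps the apparent
// speed consistent even when the frame rate varies.
function step(position, pixelsPerSecond, elapsedSeconds) {
    return position + pixelsPerSecond * elapsedSeconds;
}

// Inside the RAF callback you would compute the elapsed time since the
// last frame and apply the step before drawing, e.g.:
//   x = step(x, 120, elapsed);
```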

Evolution of the Entity Framework: Lesson 3

Wednesday, June 27, 2012 by K. Scott Allen

To combat the negativity surrounding the first release of the Entity Framework, Microsoft launched a messaging campaign and repeatedly told us that the EF is not an ORM.

Often, people categorize Entity Framework as an Object/Relational Mapper and try to compare it to other O/R Mappers. I dare say that’s an apples-to-oranges kind of comparison. While Entity Framework does have ORM capabilities, that is only a fraction of its functionality, and more importantly, those ORM capabilities are achieved through a fundamentally different approach compared to typical O/R Mappers … Entity Framework is much bigger than a mere O/R Mapper. It is a conceptual-level development platform, and its ORM capabilities are an interface for applications to interact with conceptual models.

As part of the effort, Danny Simmons wrote a post titled “Why Use The Entity Framework”.

The big difference between the EF and nHibernate is around the Entity Data Model (EDM) and the long-term vision for the data platform we are building around it. The EF was specifically structured to separate the process of mapping queries/shaping results from building objects and tracking changes. This makes it easier to create a conceptual model which is how you want to think about your data and then reuse that conceptual model for a number of other services besides just building objects.

Instead of propelling the Entity Framework forward, the messaging spread more confusion. The first version of the Entity Framework could only do one thing: reverse engineer a database and generate code to access the database. Yet, we were supposed to think of the Entity Framework as more than an ORM and see capabilities that didn’t exist. The Entity Framework walked like a duck, and talked like a duck, but we were told to think of the framework as a golden goose.


Marketing can change the perception of a product, but developers can always detect missing features. No one was sure when the missing features would arrive. Given the recent history at the time, we didn’t expect a quick turnaround for the next release. Promising features in the far-off future is like asking developers to take on a debt: “Use our framework today, and we’ll pay you back in two years”. This is a tough promise to accept given the fast-changing technology landscape and how quickly Microsoft abandons frameworks.

Lesson 3: Don’t Ask Your Customer For A Long Term Loan


The second release of the Entity Framework bumped the version number to 4.0 and arrived with Visual Studio 2010. Although many new features were added, some were superficial and in hindsight, offered another lesson.

To be continued…

Trouble, Trouble, A Quintuple of Double

Tuesday, June 26, 2012 by K. Scott Allen
Func<double, double, double, double, double> distance =
    (x1, y1, x2, y2) =>
        Math.Sqrt(Math.Pow(x2 - x1, 2) + Math.Pow(y2 - y1, 2));

A lady once asked me if this code was perfectible
if the Quintuple Of Double was somehow susceptible
to replacing with code still likeminded yet loveable
for the Quintuple Of Double offered feelings of trouble.

delegate double TwoPointOperation(double x1, double y1,
                                  double x2, double y2);

TwoPointOperation distance =
    (x1, y1, x2, y2) =>
        Math.Sqrt(Math.Pow(x2 - x1, 2) + Math.Pow(y2 - y1, 2));

I told her the code was still quite correctable
a delegate type would make her worries reversible
and the one thing worse than a Quintuple Of Double
is the Sextuplet Of Object, then you know you’re in trouble.

Parallel Work in Async MVC Actions

Wednesday, June 20, 2012 by K. Scott Allen

One of the samples in the ASP.NET MVC 4 release notes is an example of using an async controller action.

public async Task<ActionResult> Index(string city)
{
    var newsService = new NewsService();
    var sportsService = new SportsService();

    return View("Common",
        new PortalViewModel
        {
            NewsHeadlines = await newsService.GetHeadlinesAsync(),
            SportsScores = await sportsService.GetScoresAsync()
        });
}

At first glance, it might seem like getting the headlines and getting the sports scores are two operations that happen in parallel, but the way the code is structured this can’t happen. It’s easier to see if you add some async methods to the controller and watch the different threads at work in the debugger.

public async Task<ActionResult> Index(string city)
{
    return View("Common",
        new PortalViewModel
        {
            NewsHeadlines = await GetHeadlinesAsync(),
            SportsScores = await GetScoresAsync()
        });
}

async Task<IEnumerable<Score>> GetScoresAsync()
{
    await Task.Delay(3000);
    // return some scores ...
    return new List<Score>();
}

async Task<IEnumerable<Headline>> GetHeadlinesAsync()
{
    await Task.Delay(3000);
    // return some news
    return new List<Headline>();
}

In the methods I’ve introduced a delay using Task.Delay (which returns a Task you can await, and thereby free the calling thread, unlike Thread.Sleep, which blocks). The total time to render the view will be at least 6,000 milliseconds, because the Index action awaits the result of GetHeadlinesAsync. Awaiting suspends execution before GetScoresAsync has a chance to start.

If you want the headlines and scores to work in parallel, you can kick off both async calls before awaiting the first result.
public async Task<ActionResult> Index(string city)
{
    var newsService = new NewsService();
    var sportsService = new SportsService();

    var newsTask = newsService.GetHeadlinesAsync();
    var sportsTask = sportsService.GetScoresAsync();

    return View("Common",
        new PortalViewModel
        {
            NewsHeadlines = await newsTask,
            SportsScores = await sportsTask
        });
}
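The same sequencing behavior shows up anywhere awaiting is used, and it is easy to make observable outside of ASP.NET. Here is a sketch with JavaScript promises (the labels and delay lengths are invented for illustration, not from the post): awaiting each operation in turn serializes the work, while starting both operations before the first await lets them overlap.

```javascript
// Record start/end events so the two schedules can be compared.
const events = [];

function work(label, ms) {
    events.push("start " + label);
    return new Promise(function (resolve) {
        setTimeout(function () {
            events.push("end " + label);
            resolve(label);
        }, ms);
    });
}

// Like the first action: the second call cannot start until the
// first await completes, so the delays add up.
async function sequential() {
    await work("news", 30);
    await work("scores", 20);
}

// Like the corrected action: both calls start immediately, then we
// await the results, so the delays overlap.
async function parallel() {
    const newsTask = work("news", 30);
    const scoresTask = work("scores", 20);
    await newsTask;
    await scoresTask;
}
```

In the sequential version "scores" does not even start until "news" has finished; in the parallel version both start immediately and the shorter one finishes first.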

Evolution of the Entity Framework: Lesson 2

Tuesday, June 19, 2012 by K. Scott Allen

The first version of the Entity Framework appeared in the second half of 2008 with an API derived from WinFS and a heavy theoretical focus on entity-relationship modeling. While other object relational mapping frameworks viewed mapping as a (sometimes) necessary evil, entity mapping was a centerpiece of the Entity Framework.

The Entity Framework vision was to create a conceptual data model representing entities and their relationships. The conceptual model would be the ideal model for application developers. You didn’t have to think about database tables or normalization when building the conceptual model. Instead, you were supposed to focus on the important business objects and concepts. Once the model was in place, the Entity Framework could use the model to generate code for programming against the conceptual model inside an application, as well as generate database structures to persist data represented in the model into a relational database. The theory and math behind the mapping were spelled out in an impressive piece of academic work titled “Compiling Mappings to Bridge Applications and Databases”.

But the Entity Framework vision went beyond just traditional application development and relational databases. It was thought that the conceptual entity data model could be the canonical data model for an entire enterprise, and drive not only line of business applications but also reporting, synchronization services, web services, and data analysis.


Unfortunately, the Entity Framework, like its predecessor, set out to solve a broad set of problems in enterprise IT and missed the opportunity to solve a specific, common problem in a way that would make developers happy. By the time the framework officially shipped with a Visual Studio service pack, the developer community had already highlighted a number of shortcomings for accessing a relational database from the Entity Framework, which included (but was not limited to):

  • Unimplemented LINQ operators
  • No capability for model-first design
  • No support for complex types
  • No support for enum types or unsigned integers
  • No implicit loading of relationships
  • Limited stored procedure support
  • Unnecessary complexity in generated SQL code
  • Unnecessary complexity and dependencies in generated C# code
  • Performance

Since developers love to benchmark software, it was the last bullet, performance, that generated many blog posts comparing the Entity Framework to other object relational mapping frameworks like nHibernate and LINQ to SQL. LINQ to SQL, ironically, was never intended to see the light of day, but it shipped earlier than the Entity Framework and was gaining in popularity because it was simple to understand and solved the ORM problem in a straightforward fashion. Because LINQ to SQL had fewer architectural layers, it outperformed the Entity Framework in almost every scenario.


Developers looked to the Entity Framework to solve one specific problem, but the framework lagged other frameworks in almost every area. When you look around at successful products, you’ll typically find they solve at least one problem extremely well. Dropbox, for example, has a minimalistic feature set compared to other file synchronization applications. But, Dropbox is hugely successful because Dropbox does file synchronization extremely well. In fact, the success of Dropbox was the topic for a question on Quora.

Well, let's take a step back and think about the sync problem and what the ideal solution for it would do:

· There would be a folder.

· You'd put your stuff in it.

· It would sync.

They built that.

Why didn't anyone else build that? I have no idea.

"But," you may ask, "so much more you could do! What about task management, calendaring, customized dashboards, virtual white boarding. More than just folders and files!"

No, shut up. People don't use that crap. They just want a folder. A folder that syncs.

Lesson 2: Solve At Least One Customer Problem Well

Early frustrations around the Entity Framework primarily arose because the framework didn’t solve a specific problem well. In turn, this led to negative reviews.

As the pressure mounted on the Entity Framework, another learning opportunity arose which we’ll look at in a future post.

Geolocation, Geocoding, and jQuery Promises

Monday, June 18, 2012 by K. Scott Allen

If you want to use the customer’s hardware to find their exact address, one approach is to combine the HTML 5 Geolocation APIs with a Geocoding web service (like Google).

For Google, you can still get in without an API key (for a limited number of calls) using the Google Maps JavaScript library (just reference http://maps.google.com/maps/api/js in a script tag).

With the library in place, the code is straightforward (particularly the following code, which doesn’t have any error handling, but is a good skeleton of the calls you’ll need to make).

(function () {

    var getPosition = function (options) {
        navigator.geolocation.getCurrentPosition(
            lookupCountry,
            null,
            options);
    };

    var lookupCountry = function (position) {
        console.log(position);
        var latlng = new google.maps.LatLng(
                            position.coords.latitude,
                            position.coords.longitude);
        
        var geoCoder = new google.maps.Geocoder();
        geoCoder.geocode({ location: latlng }, displayResults);
    };

    var displayResults = function (results, status) {
        // here you can look through results ...
        $("<div>").text(results[0].formatted_address).appendTo("body");
    };

    $(function () {
        getPosition();
    });

} ());


Making Promises

Adding some jQuery deferred objects makes the code a little longer, but also a little more robust, as the individual pieces of work are no longer responsible for knowing what to do next and we can invert control of the execution flow. In other words, if you return promises from getPosition and lookupCountry:

var getPosition = function (options) {
    var deferred = $.Deferred();

    navigator.geolocation.getCurrentPosition(
        deferred.resolve,
        deferred.reject,
        options);

    return deferred.promise();
};

var lookupCountry = function (position) {
    var deferred = $.Deferred();

    var latlng = new google.maps.LatLng(
                        position.coords.latitude,
                        position.coords.longitude);
    var geoCoder = new google.maps.Geocoder();
    geoCoder.geocode({ location: latlng }, deferred.resolve);

    return deferred.promise();
};

Then the control logic reads pretty well:

$(function () {
    $.when(getPosition())
     .pipe(lookupCountry)
     .then(displayResults);
});

Note that pipe is different from then: pipe gives back a new promise resolved with its callback’s result, which is how the position from getPosition flows into lookupCountry, while then (in the jQuery of the day) returned the original promise unchanged.
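The pipe-style behavior is easy to demonstrate in isolation. Here is a sketch with standard JavaScript promises rather than jQuery deferreds (the coordinates and strings are invented for illustration): each stage receives the previous stage’s result, and its return value resolves a new promise for the next stage.

```javascript
// pipe-style chaining: the value produced by one stage flows into
// the next, and each link in the chain is a NEW promise.
function getPosition() {
    return Promise.resolve({ lat: 38.9, lng: -77.04 });
}

function lookupCountry(position) {
    return Promise.resolve("US near " + position.lat + "," + position.lng);
}

const result = getPosition().then(lookupCountry); // a new promise
```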

Try it out on jsFiddle.