Spark is a view engine for the ASP.NET MVC and Castle Monorail frameworks. I’ve been wanting to try this creation by Louis DeJardin for some time, but ScottW pushed me over the edge with “If you are using ASP.Net MVC, you owe yourself to spend some time with Spark”. The Spark documentation makes it easy to get started, and the source code and examples are even more valuable.
There are quite a few tutorials for Spark floating around, but I wanted to call out what appears to be a well hidden secret in the samples: the client rendering of views. In short, client rendering produces JavaScript you can invoke on the client to render the same HTML you see when rendering a server-side partial view. This means you can happily fetch JSON from the server and use it to produce HTML without duplicating the server-side template logic on the client.
As an example, let’s say you have the following partial view to render the employees inside a department:
<div id="employees">
  <table>
    <tr>
      <td>ID</td>
      <td>Name</td>
    </tr>
    <viewdata department="Models.Department"/>
    <tr each="var employee in department.Employees">
      <td align="right">${employee.ID}</td>
      <td>${employee.Name}</td>
    </tr>
  </table>
</div>
Notice how in Spark you can weave C# into the markup without breaking the flow of the HTML.
Next, let’s say you wanted the ability to refresh just this partial section of your view by asynchronously fetching data from the server. A first step would be to create a controller action that returns a Spark JavascriptViewResult.
public ActionResult ShowEmployees()
{
    return new JavascriptViewResult { ViewName = "_ShowEmployees" };
}
This action tells Spark to generate JavaScript code from the _ShowEmployees partial view (the one we saw above). The JavaScript will know how to create the same HTML as the server-side view. Since this action produces JavaScript, you’ll want to add a <script> tag in your main view that references the action endpoint. (This is reminiscent of how ASP.NET AJAX produces WCF proxies in JavaScript that know how to invoke service endpoints on the server, except client rendering isn’t about services – it’s about sharing data-binding template logic between the client and server.)
<content:head>
...
<script type="text/javascript" src="~/Department/ShowEmployees"></script>
</content:head>
What you’ll receive in your view is a JavaScript object with a RenderView method, and the RenderView method knows how to take your view model (as JSON data) and create the same HTML as the server-side partial view. Combining this generated JavaScript object with jQuery’s AJAX capabilities is straightforward.
$.getJSON("/Department/Refresh/" + $("#id").val(),
    function(data) {
        var content = Spark.Department._ShowEmployees.RenderView({ department: data });
        $("#employees").html(content);
    });
The above code gets a JSONified version of your view model from the server and passes it into RenderView. RenderView returns the HTML we can use to update the UI. Clever!
For more on Spark, check out Lou’s blog.
Phil and Scott (and the other Scott) announced the open source Nerddinner.com project and their free ASP.NET MVC eBook today. Actually, the free eBook is a single chapter of 185 pages, which is at least 50 pages longer than any chapter in Tolstoy’s War and Peace (and over half the size of my entire workflow book). Amazing.
In any case, I was looking through the code this evening and a thought struck me. You can divide the nascent world of ASP.NET MVC developers into two camps: the people who use strings, and the people who hate strings.
The people who use strings don’t love to use strings – they just use them to get work done. But the people who hate strings really hate strings. They’d rather be caught using a goto label than ever type a string literal, and they exterminate string literals from as many places as possible.
The views in Nerddinner.com are full of strings:
<p>
    <label for="Title">Dinner Title:</label>
    <%= Html.TextBox("Title", Model.Dinner.Title) %>
    <%= Html.ValidationMessage("Title", "*") %>
</p>
The “Title” string is significant – it has to match up with the name of a controller parameter or the name of a model property when data moves in and out of the controller. A typo in the view, or in the model, can create bugs. Compare the “stringy” views to the views in another open source MVC application – CodeCampServer:
<%= Html.Input(a => a.AttendeeID) %>
<%= Html.Input(a => a.ConferenceID) %>
This is another example of using LINQ expressions to implement “reflection without strings”. A typo here yields a compiler error. The technique is quite powerful and implementations are popping up everywhere, including inside the MVCContrib project.
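To see why a typo becomes a compiler error, here is a rough sketch of how such a helper can be built (the NameOf class and Property method here are illustrative, not the MvcContrib implementation):

```csharp
using System;
using System.Linq.Expressions;

public static class NameOf
{
    // Extracts the property name from a lambda like x => x.AttendeeID.
    // Value-typed properties arrive wrapped in a Convert (boxing) node,
    // so unwrap that case before looking for the member access.
    public static string Property<T>(Expression<Func<T, object>> expression)
    {
        Expression body = expression.Body;
        if (body.NodeType == ExpressionType.Convert)
        {
            body = ((UnaryExpression)body).Operand;
        }

        var member = body as MemberExpression;
        if (member == null)
        {
            throw new ArgumentException("Expected a property access expression.");
        }
        return member.Member.Name;
    }
}
```

Calling NameOf.Property&lt;Attendee&gt;(a => a.AttendeeID) yields the string "AttendeeID", and renaming the property updates every call site through the compiler instead of leaving a stale literal behind.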
Errors can be caught with either approach, but you can catch errors earlier and perform safer refactorings if you take Nancy Reagan’s advice and Just Say No (to magic strings).
I’m curious – which approach do YOU prefer?
Recent talk centered on software quality got me thinking of “never events”. The “never events” in health care are defined by the National Quality Forum to identify serious problems in the quality of a health care facility. A “never event” has to be serious, preventable, and measurable.
Here are some examples of these events from the list of 28 defined by the NQF:
These are not events that never occur, but events that should never occur. Humans will make mistakes, but a high quality hospital will have fewer occurrences of “never events” than a hospital with low standards.
I wonder if we could ever find consensus on a set of “never events” for software development. Perhaps we could start with the “Top 25 Most Dangerous Programming Mistakes”, except not all of these are preventable, or at least easily preventable. Plus, the list is focused on dangerous programming mistakes. I’ve always felt that configuration management practices are a leading indicator of software quality, and a couple “never events” come to mind immediately:
What “never events” can you think of? Remember: measureable, preventable, and serious.
A new Code Contracts preview is now available on DevLabs. Code Contracts will be part of the base class library in .NET 4.0 (included in mscorlib), and facilitate a Design by Contract programming approach. You can describe pre-conditions, post-conditions, and object invariants.
Let’s borrow a couple ideas from Matt’s DbC post (he was using Spec#, which was a precursor) to see what Code Contracts will look like.
public void Run()
{
    TargetResult result = LaunchMissle(new Target());
}

public TargetResult LaunchMissle(Target target)
{
    Contract.Requires(target != null);
    Contract.Ensures(Contract.Result<TargetResult>() != null);
    return new TargetResult();
}
In the LaunchMissle method we are using static methods on the System.Diagnostics.Contracts.Contract class to define a pre-condition (the target reference cannot be null) and a post-condition (the return value cannot be null). These contracts can verify the runtime behavior of your software, but you can also allow a static analysis tool to verify the contracts during builds. The following screenshot is from the new Code Contracts tab in the project options:
Static analysis will find two problems when it analyzes the following code:
public void Run()
{
    TargetResult result = LaunchMissle(BuildTarget());
}

Target BuildTarget()
{
    return new Target();
}

public TargetResult LaunchMissle(Target target)
{
    Contract.Requires(target != null);
    Contract.Ensures(Contract.Result<TargetResult>() != null);
    return null;
}
The “return null” line in LaunchMissle is an obvious violation of the method’s contract, and the static analyzer will tell us so. However, it will also tell us that the pre-condition (target != null) is unproven. This is because the BuildTarget method doesn’t include a contract that guarantees its behavior. We could fix that with the following code:
Target BuildTarget()
{
    Contract.Ensures(Contract.Result<Target>() != null);
    return new Target();
}
This example demonstrates how enforcing a contract in one location can have a ripple effect on your code – something that becomes really painful if you’ve ever dealt with checked exceptions or generic constraints. Nevertheless, I’m still pretty optimistic about DbC in .NET. Built-in DbC constructs would have been in my top 3 list of “things to have in C#” 5 years ago. TDD has re-ordered my list dramatically, but I still feel DbC will be a good addition and useful in some specific scenarios. 5 years ago I would have liberally applied contracts everywhere. Currently I’m thinking they’ll be best put to use on system boundaries.
One of the interesting features of Code Contracts is that it includes an MSIL rewriter (ccrewrite.exe) that post-processes an assembly to change the intermediate language instructions emitted by the compiler. I hope this elevates MSIL rewriting from a black art to a more mainstream technology that we can benefit from in the future. Rewriting could enable a number of cool scenarios in .NET, like AOP. I can’t help thinking that the Entity Framework team might have delivered POCOs in V1 if AOP was held in higher regard.
Another great feature of Code Contracts is that you can turn static analysis on and off on a per-project basis. I believe this will be important to anyone practicing TDD or BDD. You can see the issues in the comments of Matt’s post that I linked to earlier. Some people believe contracts eliminate the need for certain tests, and some people don’t.
I’ve done some thinking on this issue and I don’t want a contract to force a compiler error in my test. For example, imagine a unit test that passes null as the parameter to LaunchMissle. Static analysis can flag this as a problem because it violates the LaunchMissle contract. Should I delete the test? I vote no. Fortunately, it looks like I’ll be able to turn off static analysis when building my test project. TDD and BDD are design processes, and I’ll write the test before I ever write the contract; the eventual writing of the contract shouldn’t invalidate my unit tests. Contracts and tests serve orthogonal purposes. As Colin Jack commented in Matt’s post:
What I'm saying is that yes the compiler will warn me and that I love, however I'd also want the option of being able to write a spec that ensures a particular contract is in place (if I do X I get Y). If I can't do that then refactoring of the code (not the specs) becomes less safe.
There are many more great features in the preview. Download the bits to check them out, or RTFM. Will Code Contracts be something you use in .NET 4.0?
At the end of last year I finished a project that required a fair amount of object-to-object mapping. Unfortunately, Jimmy Bogard didn’t release AutoMapper until this year, so I had to write a pile of object-to-object mapping goo on my own.
AutoMapper is a convention based mapper. Let’s say you have an object implementing an interface like this…
public interface IPatientDetailView
{
    string Name { get; set; }
    IEnumerable<Procedure> Procedures { get; set; }
    // ...
}
…but all of the data you need to pump into the object is in a different type:
public class PatientDetailData
{
    public string Name { get; set; }
    public IEnumerable<Procedure> Procedures { get; set; }
    // ...
}
With AutoMapper you just need a one-time configuration:
Mapper.CreateMap<PatientDetailData, IPatientDetailView>();
Then moving data over is ridiculously easy:
Mapper.Map(patientData, patientView);
AutoMapper has a number of neat tricks up its sleeve. The flattening feature, for instance, can move data from a hierarchical object graph to a “flattened” destination. But what if your property names don’t match? Then you can take advantage of a fluent API to describe how AutoMapper should move the data. For example, moving data from the following type …
public class ProcedureData
{
    public string PatientName { get; set; }
    public IEnumerable<Procedure> Procedures { get; set; }
    // ...
}
… will require a bit of configuration for AutoMapper to know where to put the patient name:
Mapper.CreateMap<ProcedureData, IPatientDetailView>()
    .ForMember(destination => destination.Name,
               options => options.MapFrom(source => source.PatientName));
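For context, the flattening feature mentioned earlier works the same convention-based way: a destination property name can match a path through the source object graph. A minimal sketch (the Customer, Order, and CustomerSummary types here are hypothetical, not from the project above):

```csharp
// Hypothetical source graph: the order total lives one level down.
public class Customer
{
    public string Name { get; set; }
    public Order LastOrder { get; set; }
}

public class Order
{
    public decimal Total { get; set; }
}

// Flattened destination: "LastOrderTotal" matches Customer.LastOrder.Total
// by AutoMapper's naming convention, with no ForMember configuration needed.
public class CustomerSummary
{
    public string Name { get; set; }
    public decimal LastOrderTotal { get; set; }
}

// Mapper.CreateMap<Customer, CustomerSummary>();
// var summary = Mapper.Map<Customer, CustomerSummary>(customer);
```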
For other AutoMapper features it’s best to peruse the tests. There are extensibility hooks, like custom formatters and resolvers, and support for different profiles. Check it out.
Why would you write a custom LINQ operator? Here are three different reasons: to add an operator LINQ doesn’t include, to make queries more readable, and to improve performance.
An example for reason #1 is Bart De Smet’s ForEach operator. While you are on Bart’s blog, you can read about the pros and cons of a ForEach in his comments.
An example for reason #2 would be a custom join operator. Let’s say we are joining an object collection of employees to an object collection of departments.
var employeeAndDepartments =
    employees.Join(departments,
        employee => employee.DepartmentID,
        department => department.ID,
        (employee, department) => new
        {
            Employee = employee,
            Department = department
        });
The Join operator with extension methods is a little unwieldy. You need three lambda expressions: one to specify the employee key, one to specify the department key, and one to specify the result. To make the query itself a bit more readable you could define a custom Join operator that knows how to join employees and departments.
public static IEnumerable<EmployeeDepartmentDTO> Join(
    this IEnumerable<Employee> employees,
    IEnumerable<Department> departments)
{
    return employees.Join(departments,
        employee => employee.DepartmentID,
        department => department.ID,
        (employee, department) => new EmployeeDepartmentDTO
        {
            Employee = employee,
            Department = department
        });
}
Not pretty, but it drastically cleans up code in other places:
var employeeAndDepartments = employees.Join(departments);
Reason #3 is performance. Generally speaking, you’ll write an operator for performance when you know something that LINQ doesn’t know. A good example is in Aaron Erickson’s i4o (Indexed LINQ) library. i4o features an IndexableCollection type that can drastically increase the performance of LINQ to Objects queries (think table scan versus index seek). Imagine having a huge number of objects in memory and you commonly query to find just one.
var subject = subjects.Where(subject => subject.ID == 42)
.Single();
With i4o you can create an index on the ID property.
var subjects = /* ... */
    .ToIndexableCollection()
    .CreateIndexFor(subject => subject.ID);
/* ... */
var subject = subjects.Where(subject => subject.ID == 42)
.Single();
If you are using the i4o namespace, you’ll get a special Where operator that takes advantage of the indexes built into the IndexableCollection.
// extend the Where operator when we are working with indexable collections!
public static IEnumerable<T> Where<T>(
    this IndexableCollection<T> sourceCollection,
    Expression<Func<T, bool>> expr)
{
    // ... source from IndexableCollectionExtension.cs
}
What custom operators have you made?
Not every optimization is a performance optimization. Imagine trying to get this XML …
string xml = @"<people>
    <Person>
        <property value=""John"" name=""firstName""/>
        <property value=""Dow"" name=""lastName""/>
        <property value=""john@blah.com"" name=""email""/>
    </Person>
    <Person>
        <property value=""Jack"" name=""firstName""/>
        <property value=""Dow"" name=""lastName""/>
        <property value=""jack@blah.com"" name=""email""/>
    </Person>
</people>";
… into objects of this class:

class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Email { get; set; }
}
A brute force solution would look like the following:
var xmlDoc = XDocument.Parse(xml);
var records =
    from record in xmlDoc.Descendants("Person")
    select new Person
    {
        FirstName = (from p in record.Elements("property")
                     where p.Attribute("name").Value == "firstName"
                     select p.Attribute("value").Value).FirstOrDefault(),
        LastName = (from p in record.Elements("property")
                    where p.Attribute("name").Value == "lastName"
                    select p.Attribute("value").Value).FirstOrDefault(),
        Email = (from p in record.Elements("property")
                 where p.Attribute("name").Value == "email"
                 select p.Attribute("value").Value).FirstOrDefault()
    };
It works – but it’s ugly. It would be better if the code looked like this:
var records =
    from record in xmlDoc.Descendants("Person")
    select new Person
    {
        FirstName = record.Property("firstName"),
        LastName = record.Property("lastName"),
        Email = record.Property("email")
    };
Which just requires a bit of extension method magic.
public static string Property(this XElement element, string name)
{
    return (from p in element.Elements("property")
            where p.Attribute("name").Value == name
            select p.Attribute("value").Value).FirstOrDefault();
}
Is the code faster? Probably not – but until it’s certain that we have a performance problem, it’s better to optimize for readability.