What Are The “Never Events” for Software Quality?

Wednesday, February 25, 2009 by scott
6 comments

Recent talk centered on software quality got me thinking about “never events”. The “never events” in health care are defined by the National Quality Forum to identify serious problems in the quality of a health care facility. A “never event” has to be:

  • Measurable
  • Mostly preventable
  • Serious in its implications (like death or disability)

Here are some examples of these events from the list of 28 defined by the NQF:

  • Surgery performed on the wrong body part
  • Unintended retention of a foreign object in a patient after surgery
  • An infant discharged to the wrong person

These are not events that never occur, but events that should never occur. Humans will make mistakes, but a high quality hospital will have fewer occurrences of “never events” than a hospital with low standards.

I wonder if we could ever find consensus on a set of “never events” for software development. Perhaps we could start with the “Top 25 Most Dangerous Programming Errors”, except not all of these are preventable, or at least not easily preventable. Plus, the list is focused on dangerous programming mistakes. I’ve always felt that configuration management practices are a leading indicator of software quality, and a couple of “never events” come to mind immediately:

  • A team should never find itself incapable of retrieving the exact source files used to produce a build of the software (a small sketch follows this list).
  • A team should never find a build deployed with manual changes made outside of source control. 
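For the first item, one cheap safeguard is to stamp every build with the source control revision that produced it. Here is a minimal sketch, assuming a Subversion-style revision number and a build script that replaces the placeholder before compiling (the $REVISION$ convention is my invention, not a standard):

// AssemblyInfo.cs
// The build script swaps $REVISION$ for the real revision number,
// so any deployed binary can be traced back to its exact sources.
[assembly: System.Reflection.AssemblyInformationalVersion("1.2.0 (r$REVISION$)")]

With the revision embedded in every binary, a team can always pull the matching sources straight out of the repository.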

What “never events” can you think of? Remember: measurable, preventable, and serious.

Thoughts on the Code Contracts Preview for .NET 4.0

Tuesday, February 24, 2009 by scott
7 comments

A new Code Contracts preview is now available on DevLabs. Code Contracts will be part of the base class library in .NET 4.0 (included in mscorlib) and will facilitate a Design by Contract programming approach. You can describe pre-conditions, post-conditions, and object invariants.

The Code

Let’s borrow a couple of ideas from Matt’s DbC post (he was using Spec#, a precursor to Code Contracts) to see what Code Contracts will look like.

public void Run()
{
    TargetResult result = LaunchMissle(new Target());
}

public TargetResult LaunchMissle(Target target)
{
    Contract.Requires(target != null);
    Contract.Ensures(Contract.Result<TargetResult>() != null);

    return new TargetResult();
}

In the LaunchMissle method we are using static methods on the System.Diagnostics.Contracts.Contract class to define a pre-condition (the target reference cannot be null) and a post-condition (the return value cannot be null). These contracts can verify more than the runtime behavior of your software; you can also allow a static analysis tool to verify them during builds.
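Object invariants, the third construct mentioned earlier, are declared in a method marked with [ContractInvariantMethod]. Here is a minimal sketch (this Target implementation and its Distance property are my assumptions for illustration):

public class Target
{
    public int Distance { get; private set; }

    [ContractInvariantMethod]
    private void ObjectInvariant()
    {
        // Verified at the exit of every public method on the class.
        Contract.Invariant(Distance >= 0);
    }
}

The following screen shot is from the new Code Contracts tab in the project options: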

 

[Screen shot: the Code Contracts project options tab]

Static analysis will find two problems when it analyzes the following code:

public void Run()
{
    TargetResult result = LaunchMissle(BuildTarget());
}

Target BuildTarget()
{
    return new Target();
}

public TargetResult LaunchMissle(Target target)
{
    Contract.Requires(target != null);
    Contract.Ensures(Contract.Result<TargetResult>() != null);

    return null;
}

The “return null” line in LaunchMissle is an obvious violation of the method’s contract, and the static analyzer will tell us so. However, it will also tell us that the pre-condition (target != null) is unproven. This is because the BuildTarget method doesn’t include a contract that guarantees its behavior. We could fix that with the following code:

Target BuildTarget()
{
    Contract.Ensures(Contract.Result<Target>() != null);
    return new Target();
}

This example demonstrates how enforcing a contract in one location can have a ripple effect on your code, something that becomes really painful if you’ve ever dealt with checked exceptions or generic constraints. Nevertheless, I’m still pretty optimistic about DbC in .NET. Built-in DbC constructs would have been in my top 3 list of “things to have in C#” 5 years ago. TDD has re-ordered my list dramatically, but I feel DbC will still be a good addition and useful in some specific scenarios. 5 years ago I would have liberally applied contracts everywhere. Currently I’m thinking they’ll be best put to use on system boundaries.

MSIL Rewriting, TDD

One of the interesting features of Code Contracts is that it includes an MSIL rewriter (ccrewrite.exe) that post-processes an assembly to change the intermediate language instructions emitted by the compiler. I hope this elevates MSIL rewriting from a black art to a more mainstream technology that we can benefit from in the future. Rewriting could enable a number of cool scenarios in .NET, like AOP. I can’t help thinking that the Entity Framework team might have delivered POCOs in V1 if AOP were held in higher regard.

Another great feature of Code Contracts is that you can turn static analysis on and off on a per project basis. I believe this will be important to anyone practicing TDD and BDD. You can see the issues in the comments of Matt’s post that I linked to earlier. Some people believe contracts eliminate the need for certain tests, and some people don’t.

I’ve done some thinking on this issue, and I don’t want a contract to force a build error in my tests. For example, imagine a unit test that passes null as the parameter to LaunchMissle. Static analysis can flag this as a problem because it violates the LaunchMissle contract. Should I delete the test? I vote no. Fortunately, it looks like I’ll be able to turn off static analysis when building my test project. TDD and BDD are design processes. I’ll write the test before I ever write the contract, and the eventual writing of the contract shouldn’t invalidate my unit tests.
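To make that concrete, here is the kind of test I have in mind. This is a hypothetical NUnit-style test; it assumes runtime contract checking is enabled, and it catches the base Exception type because the rewriter’s ContractException isn’t public:

[Test]
public void LaunchMissle_Requires_A_Target()
{
    // Static analysis flags this call because it violates the
    // Contract.Requires(target != null) pre-condition, but the
    // test still documents the design decision made first.
    Exception caught = null;
    try
    {
        LaunchMissle(null);
    }
    catch (Exception ex)
    {
        caught = ex;
    }
    Assert.IsNotNull(caught);
}

Contracts and tests serve orthogonal purposes. As Colin Jack commented in Matt’s post: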

What I'm saying is that yes the compiler will warn me and that I love, however I'd also want the option of being able to write a spec that ensures a particular contract is in place (if I do X I get Y). If I can't do that then refactoring of the code (not the specs) becomes less safe.


There are many more great features in the preview. Download the bits to check them out, or RTFM. Will Code Contracts be something you use in .NET 4.0?

Mapping Objects with AutoMapper

Thursday, February 19, 2009 by scott
4 comments

At the end of last year I finished a project that required a fair amount of object-to-object mapping. Unfortunately, Jimmy Bogard didn’t release AutoMapper until this year, so I had to write a pile of object-to-object mapping goo on my own.

AutoMapper is a convention-based mapper. Let’s say you have an object implementing an interface like this…

public interface IPatientDetailView
{
    string Name { get; set; }
    IEnumerable<Procedure> Procedures { get; set; }
    // ...
}

…but all of the data you need to pump into the object is in a different type:

public class PatientDetailData
{
    public string Name { get; set; }
    public IEnumerable<Procedure> Procedures { get; set; }
    // ...
}

With AutoMapper you just need a one-time configuration:

Mapper.CreateMap<PatientDetailData, IPatientDetailView>();

Then moving data over is ridiculously easy:

Mapper.Map(patientData, patientView);

AutoMapper has a number of neat tricks up its sleeve. The flattening feature, for instance, can move data from a hierarchical object graph to a “flattened” destination.
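A minimal sketch of the flattening convention (these Order types are my assumptions, not types from the project; AutoMapper matches the destination’s CustomerName to source.Customer.Name by walking the source graph along PascalCase name boundaries):

public class Customer
{
    public string Name { get; set; }
}

public class Order
{
    public Customer Customer { get; set; }
}

public class OrderView
{
    // Populated from Order.Customer.Name by convention.
    public string CustomerName { get; set; }
}

Mapper.CreateMap<Order, OrderView>();
var view = Mapper.Map<Order, OrderView>(order);

But what if your property names don’t match? Then you can take advantage of a fluent API to describe how AutoMapper should move the data. For example, moving data from the following type …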

public class ProcedureData
{
    public string PatientName { get; set; }
    public IEnumerable<Procedure> Procedures { get; set; }
    // ...
}

… will require a bit of configuration for AutoMapper to know where to put the patient name:

Mapper.CreateMap<ProcedureData, IPatientDetailView>()
        .ForMember(destination => destination.Name,
                   options => options.MapFrom(
                        source => source.PatientName));

For other AutoMapper features it’s best to peruse the tests. There are extensibility hooks, like custom formatters and resolvers, and support for different profiles.
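A custom resolver, for instance, lets you compute a destination value instead of copying one. A hedged sketch, based on the early ValueResolver API (the PatientNameResolver class and its trimming logic are my assumptions):

public class PatientNameResolver : ValueResolver<ProcedureData, string>
{
    protected override string ResolveCore(ProcedureData source)
    {
        // Any custom logic can run before the value lands on the
        // destination member.
        return source.PatientName.Trim();
    }
}

Mapper.CreateMap<ProcedureData, IPatientDetailView>()
      .ForMember(destination => destination.Name,
                 options => options.ResolveUsing<PatientNameResolver>());

Check it out.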

Why Would I Create A Custom LINQ Operator?

Friday, February 13, 2009 by scott
6 comments

Here are three different reasons:

  1. For an operation that doesn’t exist.
  2. For readability.
  3. For performance.

An example for reason #1 is Bart De Smet’s ForEach operator, sketched below. While you are on Bart’s blog, you can read about the pros and cons of a ForEach operator in the comments.
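A ForEach along those lines takes only a few lines of code. A minimal sketch (not Bart’s exact implementation):

public static class EnumerableExtensions
{
    // Invoke an action for every item in the sequence.
    public static void ForEach<T>(this IEnumerable<T> source,
                                  Action<T> action)
    {
        foreach (var item in source)
        {
            action(item);
        }
    }
}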

An example for reason #2 would be a custom join operator. Let’s say we are joining an object collection of employees to an object collection of departments.

var employeeAndDepartments =
    employees.Join(departments,
                   employee => employee.DepartmentID,
                   department => department.ID,
                   (employee, department) =>
                       new
                       {
                           Employee = employee,
                           Department = department
                       });

The Join operator with extension methods is a little unwieldy. You need three lambda expressions: one to specify the employee key, one to specify the department key, and one to specify the result. To make the query itself a bit more readable you could define a custom Join operator that knows how to join employees and departments.

public static IEnumerable<EmployeeDepartmentDTO> Join(
             this IEnumerable<Employee> employees,
                  IEnumerable<Department> departments)
{
    return employees.Join(departments,
                          employee => employee.DepartmentID,
                          department => department.ID,
                          (employee, department) =>
                              new EmployeeDepartmentDTO
                               {
                                   Employee = employee,
                                   Department = department
                               });
}                                             

Not pretty, but it drastically cleans up code in other places:

var employeeAndDepartments = employees.Join(departments);

Reason #3 is performance. Generally speaking, you’ll write an operator for performance when you know something that LINQ doesn’t know. A good example is in Aaron Erickson’s i4o (Indexed LINQ) library. i4o features an IndexableCollection type that can drastically increase the performance of LINQ to Objects queries (think table scan versus index seek). Imagine having a huge number of objects in memory and a query you commonly run to find just one.

var subject = subjects.Where(subject => subject.ID == 42)
                      .Single();

With i4o you can create an index on the ID property.

var subjects = /* ... */
                .ToIndexableCollection()
                .CreateIndexFor(subject => subject.ID);
/* ... */
var subject = subjects.Where(subject => subject.ID == 42)
                      .Single();

If you are using the i4o namespace, you’ll get a special i4o Where operator that takes advantage of the indexes built into the IndexableCollection.

//extend the where when we are working with indexable collections!
public static IEnumerable<T> Where<T>
(
  this IndexableCollection<T> sourceCollection,
  Expression<Func<T, bool>> expr
)
{
    // ... source from IndexableCollectionExtension.cs
}

What custom operators have you made?

More LINQ Optimizations

Thursday, February 12, 2009 by scott
3 comments

Not every optimization is a performance optimization. Imagine trying to get this XML:

string xml = 
    @"<people>
        <Person>
          <property value=""John"" name=""firstName""/>
          <property value=""Dow"" name=""lastName""/>
          <property value=""john@blah.com"" name=""email""/>
        </Person>
        <Person>
          <property value=""Jack"" name=""firstName""/>
          <property value=""Dow"" name=""lastName""/>
          <property value=""jack@blah.com"" name=""email""/>
        </Person>
      </people>";

Into objects of this type:

class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Email { get; set; }
}

A brute force solution would look like the following:

var xmlDoc = XDocument.Parse(xml);
var records =
    from record in xmlDoc.Descendants("Person")
    select new Person
    {
        FirstName = (from p in record.Elements("property")
                     where p.Attribute("name").Value == "firstName"
                     select p.Attribute("value").Value).FirstOrDefault(),
        LastName = (from p in record.Elements("property")
                    where p.Attribute("name").Value == "lastName"
                    select p.Attribute("value").Value).FirstOrDefault(),
        Email = (from p in record.Elements("property")
                 where p.Attribute("name").Value == "email"
                 select p.Attribute("value").Value).FirstOrDefault()
    };

It works, but it’s ugly. It would be better if the code looked like this:

var records =
    from record in xmlDoc.Descendants("Person")
    select new Person
    {
        FirstName = record.Property("firstName"),
        LastName = record.Property("lastName"),
        Email = record.Property("email")
    };

Which just requires a bit of extension method magic.

public static string Property(
    this XElement element, string name)
{
    return
        (from p in element.Elements("property")
         where p.Attribute("name").Value == name
         select p.Attribute("value").Value).FirstOrDefault();
}

Is the code faster? Probably not. But until it’s certain that we have a performance problem, it’s better to optimize for readability.

Due Diligence and Code Comments

Wednesday, February 11, 2009 by scott
15 comments

This is the silly tale of a strange due diligence process I experienced. It happened several years ago, but I couldn’t talk about it at the time.

I’ve been on both sides of the technical due diligence table, and I’ve always felt that being reviewed is easy – you simply tell the truth when asked about the software and how it’s built. Anything else can get you into trouble. It is the reviewer who has the difficult job. Someone wants to buy or invest in a company and they are depending on you to help determine the fair value. You have to look at the development methodology, the design, the architecture, the configuration management practices, the licenses, and basically ferret out any problems and risks to ensure the company is a good investment.

A few years ago I was on the receiving end of the review. The first thing the reviewer wanted to do was open up a source code file. I picked one and opened it on the screen. The conversation went like this:

Him: I don’t see any comments in the code.

Me: Nope!

Him: Ok, let’s take a look at this file. Hmm, I don’t see any comments in here, either!

Me: Umm .. no .. no comments.

Him: Ok, let’s take a look at this file here. Well, well, well. It doesn’t look like you guys write any comments at all!

Me: Honestly - we try to avoid commenting code.

Him (horrified): What? How on earth does anyone know what the code is doing?

Me (stunned): Well, we figure out what the code is doing .. by  .. um .. reading .. the  .. code.

There are people who believe that all code should be accompanied by comments (an undergrad professor comes to mind), but by the turn of the century I had hoped that most people understood comments should be used only as a “here be dragons” sign. If you are depending on comments to make code consumable, you’ve already lost the battle. Sure, there are exceptions for exceptional code, like code written to work around platform bugs, code optimized for speed, and the comments that feed javadoc and IntelliSense for a public API. But for the most part, code that requires comments to make itself maintainable is bad code. This isn’t a recent trend. The Elements of Programming Style said it in 1978:

Don’t comment bad code – rewrite it.

Eventually the business deal fell through, but it wasn’t due to the lack of comments. The company walked away from the money the investors were willing to offer. A due diligence process, like a job interview, works in both directions. In this case the competency of the due diligence team didn’t instill the confidence needed to sell them a piece of ownership.


Thoughts on AJAX Preview 4 and JSINQ

Saturday, February 7, 2009 by scott
3 comments

ASP.NET AJAX 4.0 is adding client-side templates. You can bind data in the browser using declarative markup, imperative JavaScript code, or a mix of both approaches. The purely declarative approach looks like the following:

<body xmlns:sys="javascript:Sys"
      xmlns:dataview="javascript:Sys.UI.DataView"
      sys:activate="*">
    <ul id="departmentManagerList" 
            class="sys-template"
            sys:attach="dataview"
            dataview:serviceuri="DepartmentService.svc"
            dataview:query="GetAllDepartments">
        <li>{{ Manager.Name }}</li>
    </ul>

That code is everything you need to invoke an AJAX-enabled WCF service (specifically, the GetAllDepartments method of DepartmentService.svc) and bind the results to a list. Some of the binding markup looks foreign inside the HTML, and when I read it a little voice in my head says: “JavaScript should be unobtrusive”. I reply and say “that is not JavaScript”, and the voice says “but it’s still ugly”. I have to agree, but there is an alternative we’ll look at in just a moment.

Anyone who has used XAML binding extensions for Silverlight or WPF will notice a number of similarities in the AJAX 4 bits. For example, both use curly braces inside the item template to denote the path to the source property for binding (in the above code we’re drilling into a department object and pulling out a manager’s name property). The AJAX libraries will take care of replicating the data inside of <li> tags. Also, there are similar binding modes (one way, two way, and one time), and the concept of an IValueConverter for each binding (ConvertFrom and ConvertBack functions can inject custom logic at bind time).

The AJAX 4 libraries support hierarchical data binding (for master-detail scenarios) and can update a binding when the underlying data changes. However, this is one place where AJAX and XAML land differ. In XAML land we use the INotify* interfaces to raise change events, whereas AJAX provides a Sys.Observable class that can “mixin” functions on a target object to catch property change notifications; the catch is that you must use Sys.Observable.setValue for the notifications to fire. Infinities Loop has the scoop on this behavior.

Imperative Code and JSINQ

Instead of purely declarative code, one can instantiate a DataView and write some JavaScript code to grab and process data. Since there don’t appear to be any pre- or post-request events we can hook on the DataView (which would be nice), this also gives us a chance to manipulate data from a web service before binding. The HTML will look cleaner:

<body>
    …    
    <ul id="departmentManagerList" class="sys-template">
        <li>{{ Manager.Name }}</li>
    </ul>

And we need to write a bit of code during load:

var view = $create(Sys.UI.DataView, {}, {}, {},
                  $get("departmentManagerList"));

var service = new DepartmentService();
service.GetAllDepartments(function(result) {
    view.set_data(filterAndSortDepartments(result));
});

What is filterAndSortDepartments? It’s a function that uses JSINQ – LINQ to Objects for JavaScript. JSINQ isn’t part of ASP.NET AJAX 4.0; it’s a CodePlex hosted project.

function filterAndSortDepartments(departments) {
    return new jsinq.Enumerable(departments)
        .where(function(department) {
            return department.Employees.length > 1;
        })
        .orderBy(function(department) {
            return department.Name;
        })
        .toArray();
}

JSINQ has complete coverage of the standard LINQ query operators, including join, group, groupJoin, any, all, and aggregate. The above code was written in the “extension method” syntax style, and it rather makes me wish JavaScript could take a cue from the C# 3.0 lambda syntax and allow function definitions without keywords. JSINQ also offers a comprehension query syntax:

function filterAndSortDepartments(departments) {
    var query = new jsinq.Query('\
                        from department in $0 \
                        where department.Employees.length > 1 \
                        orderby department.Name \
                        select department \
                    ');
    query.setValue(0, new jsinq.Enumerable(departments));
    return query.execute().toArray();
}

In this case the comprehension syntax is easier on the eyes, but like most declarative code it’s impossible to debug if something goes wrong. There are lots of possibilities when using the combination of these two libraries. Give JSINQ a try in the JSINQ playground.
