OdeToCode IC Logo

Victory In Software Development

Sunday, September 27, 2009 by scott

I wrote the following prose in preparation for my fireside keynote at Concept Camp 2009. Concept Camp arrived at a time when I was asking questions about the software I’ve created over the years. I was wondering if the software I’ve created is truly successful, and how you measure success in software development.

ConceptCamp Campfire  

I’ve written commercial software for many companies over the years. Some of the best software I’ve created was for companies that are now out of business. Was the software a failure? Some of the worst software I’ve created, as a developer working alone and just out of school, is still running inside laboratory instruments around the world. Is that software a success?

Athletes declare victory and defeat using scoreboards and finish lines, but success – real success – isn’t easy to measure in software development, or in other lines of work. As an example, let me tell you the story of two army generals, and you can try to decide which one achieved victory.

A Tale of Two Generals

Concept Camp 2009 took place just outside the town of Williamsport, Maryland, which is near my hometown of Hagerstown, MD. Since I grew up in this area I had no choice but to learn about the local history, and there is no shortage of history in the area. For instance, Williamsport, holding a prime location along the Potomac River, was once considered by George Washington as the site for our nation’s capital.

Not far from Williamsport is the town of Sharpsburg. Its 700 residents live next to the Antietam Battlefield – one of the most famous battlefields on American soil. 147 years before Concept Camp, this area was full of soldiers fighting the American Civil War. The general leading the Confederate army from the south was General Robert E. Lee. The General leading the Union army from the north was General George B. McClellan. Both generals were graduates of West Point, but the two applied vastly different strategies in battle.

Autumn sky over Antietam battlefield

The Setup

General Lee assumed control of the Confederate army in the spring of 1862. This was during a time when many felt the war would be over quickly. The Union had already invaded the south, and its army was sitting outside the Confederate capital of Richmond, VA. Lee quickly won a series of battles, however, and pushed the Union army away. Those battles ended the hope for a quick end to the war.

The situation was beginning to worry President Abraham Lincoln. European powers like Britain and France were watching the rebel army effectively defend its territory, and they were thinking of supporting the south’s claim for sovereignty. Meanwhile, mid-term congressional elections were coming up in November, and if the northern voters perceived the war as going badly, their voting power could end Lincoln’s plan to reunite the country.

Against this backdrop, General Lee saw an opportunity for great military and political gains. In early September, he led his army north across the Potomac River. It was the Confederate army’s first steps on Union soil.

The Battle & Aftermath

General McClellan chased Lee’s army until they met outside of Sharpsburg by the Antietam Creek. The ensuing battle on September 17, 1862 was horrific. It is still the bloodiest day in American history with 23,000 casualties. You can read about the terrible fighting around Miller’s cornfield, Bloody Lane, and Burnside’s bridge in books like “Antietam - A Soldier’s Battle” by John Priest (my history teacher in school).

McClellan’s army left 30% of Lee’s army dead or wounded. Because of the casualties, Lee had to withdraw his army and return south to Virginia. President Abraham Lincoln took this opportunity to issue the preliminary Emancipation Proclamation on September 22nd. The Proclamation meant the war was no longer just a war between states, but a war to end slavery. The higher cause boosted the morale of the Union soldiers and bolstered support for the war from voters. Overseas, the Proclamation forced European powers to reconsider support for the rebel states, as they didn’t want the world to perceive their policies as supporting a slave nation.

Doesn’t this sound like a decisive victory for McClellan and the Union?

Cornfield Ave Antietam

The Analysis

Of course, I didn’t give you all the information you needed to know about the battle. I didn’t mention that McClellan’s army outnumbered Lee’s army almost 2:1 (90,000 men to 50,000 men). I didn’t mention that McClellan kept half his men in reserve – they never fired a shot. I also didn’t mention that McClellan split up his attacks into three separate assaults, giving Lee time to shift and reorganize his defenses. And I didn’t mention how vulnerable Lee’s position was with his back to the Potomac River.

Finally, I didn’t mention that President Lincoln, unhappy with McClellan’s performance, removed the general from his command not long after the battle.

Most experts believe McClellan had the opportunity and resources to eliminate Lee’s army and put an end to the war. Instead, the civil war continued for three more years.

Victory

It’s difficult to draw analogies between war and software development. One is a bloody endeavor that kills while leaving physical and mental scars. The other is a profession that involves keyboards and cubicles under fluorescent lighting. Nevertheless, I think there is something we can learn about victory from the Battle of Antietam. Achieving victory doesn't mean we've achieved our full potential.

Also, we already know that victory is hard to define. Shipping a product isn’t a reason to declare victory, particularly if the product falls apart after a month. We can’t declare victory based on the number of lines of code we’ve written, or the number of features we’ve shipped. Having too much code or too many features can lead to defeat. We can’t declare victory based on the number of unit tests we have, or code coverage statistics. We can’t declare victory by pushing cards across a Kanban board or moving a line down a burn down chart.

kanban

All of the things I’ve mentioned are indicators of success, but even if we look at them collectively we’d still be missing the judgment of the people who use our software. Does it have the right features? Is it fast enough? Is it friendly? Does it work without unnecessary surprises? Did it ship on time? Did it cost too much?

Customers, clients, consumers, and users all have a considerable say in our success and failure, and we can’t ask their opinion at only a single point in time, just as we can’t declare the winner of a 100 meter dash by observing the runners at the 50 meter mark. Most applications need to evolve over the course of years, and we can only achieve victory if we continue to grow the application.

Returning to a military theme, there is the type of victory known as a pyrrhic victory. A pyrrhic victory is what happens when you are decidedly victorious in a single battle, but you pay such a heavy price for the win that you are left in a position to lose the war. Not everyone realizes that software development is a marathon, and shortcuts often lead to pyrrhic victories. The application ships, but it’s an un-maintainable collection of shortsighted hacks that will fail to evolve with the user’s needs in the future.

User Eccentric

Since users have a considerable vote in our quest for success, having them invested in our software project is a great idea. The good news is that software development is making concerted efforts to move the user closer to the center of our development activities.

We started by changing our methodologies. We realized that a waterfall process, where the user tells us in detail what they want, and then we hide in offices for 24 months while building it, results in defeat for everyone involved. We started to adopt agile methods that embrace change and require everyday participation from the users.

We also started moving users closer to our code. Domain-driven design encourages the use of a ubiquitous language that the developer and the business expert share, and the language influences the names of our abstractions. We strive to write acceptance tests in natural language a customer can understand. Everyone wants a domain specific language that a business expert can comprehend.

We’ve moved our users closer to what we do, but software development is still not centered around the user. Developers sit in the center of software development as the sun sits in the center of our solar system. We use our heavy mass of technology to distort light, space, and time to our needs. We tell the user what can and can’t be done. We’ve trained them to come to us and ask us what’s possible, instead of envisioning their own possibilities.

Like General McClellan in the Battle of Antietam, we aren’t reaching our full potential.

Risk and Reward

General McClellan didn’t fight to win the Battle of Antietam – he fought not to lose. He was cautious. He was pessimistic. He didn’t pursue Lee’s weary army as they hobbled back into Virginia, though he had fresh regiments of soldiers available. McClellan saw only risks, and not the opportunities.

Software developers are obsessed with mitigating risk. All the indicators of success we talked about earlier – deadlines, code coverage, unit tests, burn down charts – are all in place to mitigate risk. We also mitigate risk with compilers, coding standards, source control systems, patterns, practices, daily stand up meetings, continuous integration builds, and more.

risk

Avoiding disaster in software development requires an obsessive-compulsive approach to risk management. We focus on risk instead of potential.

Our focus means we don’t fight to win in software development. We fight not to lose.

Once a customer asked me how I could automatically populate one of the screens in her application with information held inside a database of manuscripts. I immediately saw the risk in matching data to the right fields based on the context of the screen. I’d need a way to search and index the correct pieces of information. I’d need some fuzzy logic to interpret the meaning of the data. There would be some advanced algorithms needed to transform and pick apart the appropriate information.

I wasn’t thinking of the software’s potential. I wasn’t thinking about how this feature could transform the lives of the people who used the software. I saw only the risks. I surrendered to fear.

Courage Conquers Fear

Kent Beck listed courage as one of the primary values to embrace in his Extreme Programming book. We’ve made significant advances in embracing many of the other values listed in the book. We practice test-driven development while reciting the Single Responsibility Principle in homage to the value of simplicity. We hold daily stand up meetings and apply continuous integration to promote the value of feedback.

But courage?

It’s difficult to find a pattern or a process that helps us practice courage on a daily basis.

skydive

How many times have you seen a software team talk a customer out of a feature because it might require something different, or unknown?

  • Trying a technology outside of company “rules” (like using an open source library)
  • Using a “non-standard” technology (like a non-relational database)
  • Trying an unfamiliar platform (web development shops writing a desktop application)
  • Doing something perceived as “hard” (fuzzy matching with genetic algorithms)

Fear of the unknown drives too many decisions in software development.

A Pattern

The next time your user asks for something and your first impulse is to start assessing the risks involved – STOP.

Think about the possibilities first. Think about the value to the user. Could this request make the software great? If not, what would make it even better? What can you do in this area to empower the user? How can the software truly transform their job, or their business? What could you change about your technology to make this work? What could you change about your team to make this work? What could you change about your company to make this work? How many days would you really need to experiment with something new and different? Only after you understand the true potential of what is possible can you put on your engineering hat and assess the risks.

We all work with limited resources and deadlines, but we often use these constraints as excuses instead of parameters in a formula for victory.

Only courage allows us to find the true potential of software. Only courage allows us to recognize our weaknesses and try something new. Only courage allows us to explore unfamiliar landscapes in the world of software development. Only courage will allow us to align our goals with the goals of the user, make our software great, and give us a shot at undisputed victory.

--

I want to say thanks to @SaraJChipps for having the courage to organize this novel event, and inviting me to speak. And thanks to everyone who had the courage to come and share their stories and laugh in the rain. Concept Camp 2009 was a success, and I can’t wait for the next one! I just hope the weather is a little better …

Stockholm

Wednesday, September 9, 2009 by scott

During the last week of August I was in Stockholm, Sweden to deliver two of the twelve workshops at Øredev's Progressive .NET Days. The workshops were split across 3 tracks and covered prevailing software development topics like NHibernate, DDD, DSLs, Git, and dynamic languages. Based on the feedback I’ve heard, the workshops were considered a success by everyone who participated. But what left a lasting impression on me was the unbounded hospitality of the event organizers, attendees, and Stockholm itself.

Meeting Michael Tiberg of Oredev

 

On the first evening that I ever spent in Stockholm I was greeted by Michael Tiberg, the CEO of Oredev. We strolled across modern roads into the medieval, cobblestoned streets of Old Town Stockholm – Gamla stan. Michael was my personal tour guide and pointed out the sights. I saw Stockholm City Hall - a stalwart building made of red brick and home to the Nobel Prize banquet. I also saw the rounded backside of the Swedish Parliament building, and the facades of Stockholm Palace.

Michael didn’t know I was a fan of jazz music when we entered Stampen that evening. In the 17th century, this building was a French Reform church, but since 1968 it’s been a jazz pub. The vocalist of the five-piece band playing that evening would belt out jazz classics in English, then banter with the growing crowd in Swedish. The last song we heard before leaving was Billie Holiday’s God Bless The Child – a personal favorite.

Jazz at the Stampen

Within a day I decided that Stockholm is a quiet city, but quiet isn’t the same as sleepy, or empty. People are everywhere in downtown Stockholm. Commuters wheel past in the dedicated bike lanes and swarm into metro openings. Taxis and tourist buses jockey politely for curb space. But something was audibly missing - the insecure blustering I’m so accustomed to hearing in the metro areas of the Northeast United States – no horns blowing, no people shouting, no music blaring. In Stockholm, even the sidewalk hawkers stand quietly on the corner wearing sandwich board signs to advertise lunch specials.

The quiet in the city gives you the chance to hear people laugh. They laugh as they talk on cell phones, and they laugh in crowds under the open awnings of cafes. The Swedes I met paint themselves as reserved, but present themselves as self-assured and happy.

City square 

I took home many fond memories from Stockholm, including a trip to an Ice Bar (thanks Magnus), and some great technical discussions with other speakers and attendees. The last evening I spent chatting and dining with guys from Nansen – a local and fast growing web development firm. We talked about software development, health care, politics (I met at least two members of the Swedish Pirate Party during the week), religion, the size of American cars, and XBOX games. I hope to return one day and experience more of Stockholm, and I’m already looking forward to the main Øredev conference this fall in Malmö.

Review: The 36-Hour Day

Monday, September 7, 2009 by scott

The 36-Hour Day is a book to buy if someone you love is growing a little older, and perhaps growing a little confused, too. The book is about memory loss and dementias like Alzheimer’s disease. It will give you information and strategies you need to care for your loved one, but more importantly, it gives you a perspective and understanding of what your loved one experiences as their illness transforms their behavior and memory.

I have an easy time finding quality technical content on the Internet, but I struggled finding good resources on this topic. The articles on dementia live in a wasteland of superficial prose, smeared by pointless advertisements. The 36-Hour Day : A Family Guide to Caring for Persons With Alzheimer Disease, Related Dementing Illnesses, and Memory Loss in Later Life is packed with advice to deal with the emotional, legal, medical, nutritional, and behavioral aspects of a heartbreaking situation. Kudos to the authors Nancy Mace and Peter Rabins.

The book will help you care for the person you love.

The book will help you care for yourself.

I highly recommend it.

Resource Files and ASP.NET MVC Projects

Thursday, July 16, 2009 by K. Scott Allen

If you try some of the traditional ASP.NET approaches to localization and internationalization in an MVC application you’re likely to run into a couple of interesting* obstacles.

Resx Files In App_GlobalResources

Using resource files in App_GlobalResources from your controller code will break your unit tests.

When you drop a .resx file in the special App_GlobalResources folder, the IDE uses the GlobalResourceProxyGenerator to generate a strongly typed internal class to wrap the resources inside. The internal class gives any code in the MVC project access to the resources:

var greeting = Resources.Strings.Greeting;

You can also use the resources from a view:

<%= Html.Encode(Resources.Strings.Greeting) %>

The problem is that global resources are not actually embedded into the project’s .dll. Instead it is the ASP.NET runtime that creates an App_GlobalResources assembly with the resources inside. This assembly is referenced by all the view assemblies ASP.NET creates, and is explicitly loaded by the strongly typed wrapper generated by the GlobalResourceProxyGenerator. Since the App_GlobalResources assembly doesn’t exist without an ASP.NET compilation phase, it’s not available when unit tests are running. Controller code under test that tries to access the resources will bomb with an exception.

Note that you’ll also have some Intellisense problems when using the view syntax shown above. I'm guessing this is because the IDE is confused by seeing the resource wrapper in two places (the project assembly, and the App_GlobalResources assembly ASP.NET creates in the Temporary ASP.NET Files folder).

There is a way to make resx files in App_GlobalResources work, but the folder isn’t truly necessary in an MVC project (or a web application project, for that matter). I think it’s just as easy to add resx files in a different location, even a separate class library, to avoid any confusion on how App_GlobalResources will behave.

In short: avoid App_GlobalResources and App_LocalResources (which has its own set of problems) in MVC.

Resx Files Outside Of Special Resource Directories

resx properties in MVC

If you add a resx file to any other folder in an MVC project or class library, the resx is automatically set to be embedded into the project’s output assembly - this is good. The IDE also assigns the resx a custom tool of ResXFileCodeGenerator to generate a strongly typed wrapper - this is good. The generated class is internal by default – this is bad. The assembly created for a view (by ASP.NET) won’t be able to use the internal class because it is in a different assembly – the project assembly compiled in Visual Studio.

Solution

The easy fix is to make sure the custom tool is set to PublicResXFileCodeGenerator instead of ResXFileCodeGenerator. You can do this in the property window for the file, or in the resource editor, which gives you a drop down for “Access Modifier” (the options are Internal, Public, and No Code Generation – choose Public).

You can also set the “Custom Tool Namespace” for the generated wrapper in the properties window. My suggestion is to use a convention like “Resources” for global resources, and “Resources.Controller.View” for resources dedicated to a specific view.

This approach means you can use the resources in unit-testable controller code, and in views, too. The syntax remains the same as above. The ResourceManager used in the wrapper classes can automatically resolve the proper resource to use depending on the current UI culture setting of the thread.

Setting The UI Culture

The easiest approach to having the correct UI culture in effect during web request processing is to use the globalization section of web.config.

<globalization uiCulture="auto" culture="auto"/>

The above will set both the current UI culture and current culture settings for the request. See Dennis Dietrich’s post for a good explanation of the two settings: YACVCP (Yet another CurrentCulture vs. CurrentUICulture post).

If you need to set the culture up according to a user’s preference, or a URL parameter, then the best bet is to write a custom HTTP module or action filter.
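As a sketch of the action filter route, something like the following could apply a culture taken from the route data or query string. The “culture” parameter name and the filter itself are my own invention for illustration, not part of the framework:

```
using System.Globalization;
using System.Threading;
using System.Web.Mvc;

// Hypothetical filter: look for a "culture" value in the route data or
// query string, and apply it to the current thread before the action runs.
public class SetCultureAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        var cultureName = filterContext.RouteData.Values["culture"] as string
            ?? filterContext.HttpContext.Request.QueryString["culture"];

        if (!string.IsNullOrEmpty(cultureName))
        {
            // GetCultureInfo throws on an unknown name - a real filter
            // would validate or fall back to a default culture.
            var culture = CultureInfo.GetCultureInfo(cultureName);
            Thread.CurrentThread.CurrentCulture = culture;
            Thread.CurrentThread.CurrentUICulture = culture;
        }
    }
}
```

You could then decorate individual actions or controllers with [SetCulture], and the ResourceManager behind the strongly typed wrappers would pick up the right resources automatically.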

* Interesting only if you consider localization and resource files interesting, in which case you might need to take some medication.

Geeky Places to Visit In &amp; Near Maryland

Thursday, July 9, 2009 by scott

Maryland has a number of geek attractions inside and around its borders. Here’s a sampling:

National Air & Space Museum

There are two locations to visit. The museum on the National Mall is in the heart of D.C. and features over 20 exhibition galleries that include an Apollo 11 command module, lunar rocks, IMAX theater, and planetarium. For truly colossal exhibits you’ll want to head ~22 miles out of town to the Steven F. Udvar-Hazy Center. Located just off the approach to Dulles airport’s runway 1R, the museum includes the space shuttle Enterprise, and an SR-71 Blackbird. You shouldn’t need any more reasons to go.

National Cryptologic Museumcryptologic museum

Nothing says “hard core geek” more than spending time in a museum dedicated to cryptology. The museum is located within electronic eavesdropping range* of the NSA headquarters on a drab stretch of Route 32. But inside you’ll find many compelling exhibits and intriguing devices, like a Jefferson cipher wheel, an Enigma machine, and a Cray Y-MP supercomputer that could hold 32 GB of memory … in 1993.

You don’t need a password to get in.

Goddard Space Flight Center

It’s a little bit of NASA just outside the D.C. beltway in Greenbelt, MD. Exhibits include a rocket garden, a spherical movie screen, and a sycamore tree grown from a seed that went to the moon and back with Apollo 14 in 1971. Model rocket launching on the first Sunday of every month!

Maryland Science Center

The MSC sits diagonally across the water from the National Aquarium in Baltimore’s inner harbor. It was voted as one of the 10 best science centers for families by Parents’ magazine and includes interactive exhibits that range from the cells inside us, to the stars around us. The aquarium on the other side of the harbor is top notch, too, with great food and other sights in between.

Western Maryland Scenic Railroad

What’s so geeky about a scenic railroad ride in the mountains? In short: steam power, bridges, tunnels, and turntables. As a kid I found myself absolutely astounded by the fact that steam (which I thought of then as really hot water) could propel such massive weight. There are also murder mystery trains on the weekends, which is just as geeky as D&D, but without the 20-sided die rolling.

U.S. Army Ordnance Museum

The museum is located on the Aberdeen Proving Ground and contains a formidable number of engineering marvels with German, Soviet, Japanese, American, and Italian tanks and artillery pieces. An exhibit both extraordinary and sobering. The museum is relocating to southern Virginia in 2011.

* This would be them eavesdropping on you, as I wouldn’t recommend the reverse approach.

Event Aggregation with jQuery

Tuesday, July 7, 2009 by scott

As the “write less, do more” library, jQuery garners lots of love for its terseness. The terseness, combined with a rich ecosystem of plug-ins, means I can display my OdeToCode twitter feed on a web page using only 10 lines of code (complete with animation and custom loading message)*.

$(function() {
    $("#getTweets").click(function() {
        $("#tweets").getTwitter({
            userName: $("#screenname").val(),
            numTweets: 10,
            loaderText: "Fetching tweets...",
            slideIn: true,
            showHeading: true,
            headingText: "Latest Tweets",
            showProfileLink: true
        });
        return false;
    });
});

When writing with jQuery there is a tendency to use nested functions and collapse as many responsibilities as possible into a single piece of code. For many web pages this approach is entirely suitable. We aren’t building a space shuttle - it’s just a web page. The code above is responsible for locating the DOM element for events, hooking up a click event, fetching tweets, and locating the DOM element to display the tweets.

Composite UIs

In more complex pages, and particularly in composite pages that are made up from independent pieces, the above approach tends to become brittle, and encapsulation breaks down as independent pieces try to peek into each other’s private business. It’s easy to fall into a black hole of JavaScript code that swallows all who come near. A step away from the black hole would be to extract some of the common, reusable functionality into different pieces.

For example, you can separate the piece that knows about DOM elements …

$(function() {
    $("#getTweets").click(function() {
        getTweets($("#tweets"), $("#screenname").val());
        return false;
    });
});

… from the piece that knows about Twitter, and include the pieces independently …

function getTweets(element, screenname) {
    $(element).getTwitter({
        userName: screenname,
        numTweets: 10,
        loaderText: "Loading tweets...",
        slideIn: true,
        showHeading: true,
        headingText: "Latest Tweets",
        showProfileLink: true
    });
}

Now we have a bit of separation. At this point some of us would be inclined to raise the battle cry of the object oriented programmer, and run off to a workstation to design namespaces, prototypes, constructor functions, properties – blah blah blah**. But we wouldn’t be creating a greater separation between the pieces of code. All we’d really be doing is creating bigger abstractions that are still tied together as closely as they were when they were simple function objects.

Enter The Aggregator

One of the classes I dig in Prism is the EventAggregator. Eventing is pretty much a required approach to managing a composite UI if you want to stay sane. The EventAggregator makes this easy in WPF and Silverlight, and includes some thread marshalling tricks behind the scenes as a bonus.

Fortunately, you can do something similar in most major JavaScript frameworks, including jQuery. There are jQuery plugins dedicated to event aggregating, but a simple approach would use bind and trigger with custom events, letting the document serve as the aggregator.
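In spirit, an aggregator is nothing more than a map from event names to lists of handler functions. Here is a minimal sketch of the idea in plain JavaScript – the createAggregator function and the names inside it are mine, for illustration only; in the page itself we’ll lean on jQuery’s bind and trigger instead:

```javascript
// A minimal event aggregator: subscribers register by event name,
// publishers fire the name with arguments, and neither side needs
// a direct reference to the other.
function createAggregator() {
    var handlers = {};
    return {
        bind: function(name, fn) {
            (handlers[name] = handlers[name] || []).push(fn);
        },
        trigger: function(name) {
            var args = Array.prototype.slice.call(arguments, 1);
            (handlers[name] || []).forEach(function(fn) {
                fn.apply(null, args);
            });
        }
    };
}

// Usage: the publisher of "fetchTweets" knows nothing about its subscribers.
var aggregator = createAggregator();
var log = [];
aggregator.bind("fetchTweets", function(screenname) {
    log.push("fetching tweets for " + screenname);
});
aggregator.trigger("fetchTweets", "OdeToCode");
```

jQuery gives us this machinery for free when we bind and trigger custom events on the document, which is why the examples below don’t need a dedicated aggregator object.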

Now we can achieve a greater decoupling between the “I need tweets” action …

$(function() {
    $("#getTweets").click(function() {
        $(document).trigger("fetchTweets", [$("#tweets"), $("#screenname").val()]);
        return false;
    });
});

… and the piece (or pieces) that will respond to such an action…

$(document).bind("fetchTweets", function(e, element, screenname) {
    $(element).getTwitter({
        userName: screenname,
        numTweets: 10,
        loaderText: "Loading tweets...",
        slideIn: true,
        showHeading: true,
        headingText: "Latest Tweets",
        showProfileLink: true
    });
});

It’s easy to layer in additional behavior to the “Get Tweets” button click. We could call a web service to save the user’s preferences. We could cache information. We could add some debugging info …

$(document).bind("fetchTweets", function(e, element, screenname) {
    console.log(screenname);
});

All these things could be done by including separate scripts, and without putting any knowledge of these actions inside the click event handler where the aggregated event begins. Of course, to complete the circle, the code should raise an event when the tweets are retrieved, and let someone else deal with the results.

* Most excellent Twitter plug-in is available from @code_za’s blog.

** I’m not saying that bending prototypes to act like classes is bad, it’s just not the solution to every problem.

Pretty Code #1 – Building SelectListItems

Friday, July 3, 2009 by scott

In ASP.NET MVC, you can use a collection of SelectListItems to help build an HTML <select> element (a drop down list).

Tonight, you’ll be the judge in this first contest of charm, grace, and readability.

Contestant #1 hails from the System.Web.Mvc namespace. It likes pina coladas and string literals, but is turned off by tattoos that look like programming symbols. Let me introduce the SelectList class:

var products = GetProducts();
var selectItems = new SelectList(products, "ID", "Name");

Contestant #2 lives in the System.Linq namespace. It likes whips and method chains. Functional programmers call it “map”, but in .NET we call it "Select":

var selectItems = from product in GetProducts()
                  select new SelectListItem 
                  {
                      Text = product.Name,
                      Value = product.ID.ToString()
                  };

… or (from the backside) …

var selectList = GetProducts().Select(product =>
                    new SelectListItem
                    {
                        Value = product.ID.ToString(),
                        Text = product.Name 
                    });

Contestant #3 lives in the MvcContrib project. It’s turned on by pointy things and practices yoga for extensibility. Introducing the ToSelectList method:

var selectItems = GetProducts().ToSelectList(product => product.ID,
                                             product => product.Name);

Personally, I like #2. While the name of #3 makes its purpose obvious, it sometimes takes a moment to be 100% sure which property becomes Text, and which property becomes Value. In #2 the Text and Value assignments are obvious, even though the code is a little longer. Setting the Selected properties with either approach is trivial.
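For completeness, here is how a Selected assignment might look with contestant #2. The selectedId variable is hypothetical – something you’d pull from the model or route data:

```
var selectedId = 42; // hypothetical: the ID the user chose previously

var selectItems = from product in GetProducts()
                  select new SelectListItem
                  {
                      Text = product.Name,
                      Value = product.ID.ToString(),
                      Selected = (product.ID == selectedId)
                  };
```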

What do you think?