It’s possible to do a lot of work with ASP.NET and not know anything about IIS, particularly if you work with a large team where IT specialists keep the riff-raff away from production web applications. Ever since Visual Studio started shipping its own web server (1), many people haven’t relied on IIS for day-to-day development work (although many of us still do).
For those of you who are just learning how to deploy in IIS, or those of you who need a refresher, I put together a short and free Pluralsight screencast on IIS: Web Sites, Applications, and Virtual Directories in IIS.
This is one video in a collection of screencasts from Pluralsight.

(1) Some people call the web server “Cassini”. Other people call it the “WebDev” server. Still others call it “the web thingy that sits in my system tray”, even though Windows doesn’t have a system tray, but whatever. If you worked with the first release of Visual Studio, you’ll know we’ve come a long way from running as an administrator with Front Page Extensions installed and the IDE trying to force all of our code to live underneath inetpub\wwwroot <shudder />.
The July issue of MSDN Magazine is available online with my article “Guiding Principles For Your ASP.NET MVC Applications”. Another MVC article in this issue is Justin Etheredge’s “Building Testable ASP.NET MVC Applications”. Justin’s article is a good one as he shows you how to design for testability, and includes specific examples with xUnit.net, moq, and Ninject.
Something we both touched on was the topic of code-behind files.
The conversation between two developers (let’s call them Pushy and Principled) goes like this:
Pushy: Is it OK to use code-behind files with aspx views?
Principled: No.
Pushy: But, I have something that’s really, really specific to this one particular view. That’s OK, right?
Principled: No.
Pushy: Oh, come on! You aren’t being pragmatic here. I don’t want to add a Page_Load, I just need a teeny tiny little instance method. I’ll add a code-behind file and stick it inside. It’s tiny! That’s OK, right?
Principled: No.
Pushy: Really now, what do you expect me to do? Build one of those forsakenly awful HTML helper methods with more overloads than the California power grid in August? Don’t you think it’s better to put the code close to the view that uses it?
Principled: No.
This is one scenario where I’d side with Principled. Sure, the code-behind could be simple. Sure, if you are careful it might even be unit-testable. But the mere fact that code-behind is possible is a byproduct of building on an existing framework. Someone writing an MVC view engine from scratch wouldn’t provide a feature that lets you put anything resembling intelligence near a view.
All of the problems I’ve seen described where code-behind is a solution could easily be solved with an HTML helper, or by using a more robust presentation model that is passed to the view. Some people worry about a proliferation of HTML helpers, but if you absolutely need a helper scoped to a specific view, you can always put the helper in a different namespace that only that particular view will use.
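The namespace trick above can be sketched in a few lines. This is a hypothetical example (the `HtmlHelperStub` class stands in for `System.Web.Mvc.HtmlHelper` so the sketch compiles without ASP.NET MVC, and all names are made up): because extension methods are only visible when their namespace is imported, a helper placed in a view-specific namespace stays out of every other view’s way.

```csharp
using System;

// Stand-in for System.Web.Mvc.HtmlHelper so this sketch compiles
// without ASP.NET MVC; the namespace scoping is the point here.
public class HtmlHelperStub { }

// Hypothetical: import MyApp.Views.Reports from only one view, and
// MetricBadge stays invisible to every other view in the project.
namespace MyApp.Views.Reports
{
    public static class ReportViewHelpers
    {
        public static string MetricBadge(this HtmlHelperStub html,
                                         string label, int value)
        {
            // A real helper would HTML-encode; omitted for brevity.
            return "<span class=\"badge\">" + label + ": " + value + "</span>";
        }
    }
}
```

Any view that doesn’t import `MyApp.Views.Reports` never sees `MetricBadge` in IntelliSense, so the helper is effectively scoped to the one view that needs it.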
I know your team is disciplined. I know you wouldn’t do anything wrong. I know you want to be pragmatic and do the simplest thing that works. But, think of the first code-behind file in a project as the first broken window. It’s another degree of freedom where entropy can wiggle into your software and undermine maintainability. It’s making a view smarter, which your principles should suggest is wrong.
Thoughts?
Steve Wellens had a recent blog post arguing for the use of a goto in C# (see: Why goto Still Exists in C#). Steve had a series of methods he wants to execute, but he wants to stop if any given method returns false. At the end of the post, Steve decided that the following code with goto was better than setting boolean variables:
// DoProcess using goto
void DoProcess3()
{
    LOG("DoProcess Started...");

    if (Step1() == false) goto EXIT;
    if (Step2() == false) goto EXIT;
    if (Step3() == false) goto EXIT;
    if (Step4() == false) goto EXIT;
    if (Step5() == false) goto EXIT;

EXIT:
    LOG("DoProcess Finished");
}
In the comments, a remark from a different Steve stood out. Steve suggested using an array of Func&lt;bool&gt;. The comment didn’t generate any further discussion, but it’s worth calling out.
I don’t think the problem with the above code is the goto per se. I think the problem is how the code conflates “what to do” with “how to do it”. In this scenario both are a little bit tricky. Let’s assume we might need to add, remove, or change the order of the method calls. But the method calls are so intertwined with conditional checks and goto statements that the process is obscured. Using an array of Func&lt;bool&gt; is a simple approach, yet it still manages to create a data structure that isolates “what to do”.
void Process()
{
    Func<bool>[] steps = { Step1, Step2, Step3, Step4, Step5 };
    ExecuteStepsUntilFirstFailure(steps);
}
You could argue that all this code does is push the problem of “how to do it” further down the stack. That’s true, but we’ve still managed to separate “what” from “how”, and that’s a big win for maintaining this code. The simplest thing that could possibly work for “how” would be:
void ExecuteStepsUntilFirstFailure(IEnumerable<Func<bool>> steps)
{
    steps.All(step => step() == true);
}
The All operator is documented as stopping as soon as a result can be determined, so the above code is equivalent to the following:
void ExecuteStepsUntilFirstFailure(IEnumerable<Func<bool>> steps)
{
    foreach (var step in steps)
    {
        if (step() == false)
        {
            break;
        }
    }
}
With this approach it’s easy to change the order of the steps, or to add and delete steps, without worrying about missing a goto or conditional check. The next step up in complexity (excuse the pun) would be to create a Step class and encapsulate the Func with other metadata and state. I’m sure you could also imagine the execution phase relying on an IStepExecutor interface as the base for executing steps under a transaction, or with step-level logging, or even in parallel – and all this without changing how the steps are arranged. Take this to an extreme, and you’ll have a technology like Windows Workflow Foundation. :)
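That “next step up” could be sketched as follows. This is a minimal, hypothetical version (the Step class, the IStepExecutor interface, and the executor name are all assumptions, not code from the post): the steps remain a plain data structure, while the executor owns all of the “how”.

```csharp
using System;
using System.Collections.Generic;

// A Step pairs the Func<bool> with metadata about the step.
public class Step
{
    public string Name { get; set; }
    public Func<bool> Execute { get; set; }
}

// The "how" lives behind an interface, so a transactional, logging,
// or parallel executor can be swapped in without rearranging steps.
public interface IStepExecutor
{
    // Returns true only if every step succeeded.
    bool Run(IEnumerable<Step> steps);
}

// The simplest executor: run in order, stop at the first failure.
public class StopOnFirstFailureExecutor : IStepExecutor
{
    public bool Run(IEnumerable<Step> steps)
    {
        foreach (var step in steps)
        {
            if (!step.Execute())
            {
                return false;
            }
        }
        return true;
    }
}
```

The calling code still only declares the list of steps and hands it to whichever executor is appropriate.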
The ability of functional and declarative programming to separate the “what” and the “how” is powerful, but you don’t need a new language, and you can start simple. In this scenario it’s another tool you can use to save your city from the goto-zilla monster.
What do you think?
Øredev is putting together an exciting lineup of topics and speakers for Progressive .NET Days. The event is August 27-28 in Stockholm, Sweden.
Progressive software development understands that tomorrow's better ideas for software development are likely here with us today and seeks them out now, building bridges that span paradigms through practice and experience.
I’m excited to be a part of the event, and wish I could also sit in on every other session. I’ve also had a dream to ride in a hot air balloon, which I understand is quite popular during the summer months in Stockholm….
“Program to an interface, not an implementation” is a well-known mantra from the GoF book. Take this guidance to an extreme, though, and you generate POO instead of OOP. How do you know if you’ve crossed the line?
I think it’s useful to take a step back and think about the word “interface” in a general sense. There are interfaces everywhere in software. There are interfaces between layers, between tiers, between applications, between objects, and between callers and their callees. Just about anything and everything in software, no matter how trivial, has an interface.
The real question with interfaces is how many constraints you want in place for any given interface. Consider the following JavaScript code.
function validate(creditService) {
    creditService.checkCreditForCustomer(this.id);
}
The only constraint on the creditService parameter is that the object needs a checkCreditForCustomer function that takes an ID parameter. The validation function doesn’t care how the creditService was built, who built it, where it came from, or what other capabilities might be in place. This code demonstrates the flexible, dynamic, and relatively unconstrained qualities of duck typing. If the parameter checks the credit of a customer like a credit service should, then it must be a credit service.
Static languages generally have to crank up the constraints on an interface, although many have an escape hatch. C# 4.0, for example, introduces a dynamic type.
public bool Validate(dynamic creditService)
{
    return creditService.CheckCreditForCustomer(ID);
}
Again - all we need is an object with a CheckCreditForCustomer method that takes an int parameter. Because the object is typed as dynamic, the compiler won’t guarantee what the object can actually do – there is no type checking. At runtime, we may find out the object doesn’t actually support the method we are looking for, and an exception appears. This duck typing behavior is what keeps fans of static typing awake at night. They think the dynamic programmers are insane for throwing around objects in a willy-nilly manner. Meanwhile, the dynamic crowd thinks the fans of static typing are insane for spending all of their time obsessing over types instead of creating software.
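A short, self-contained sketch makes the trade-off concrete (the class and method names here are hypothetical): the compiler happily accepts any argument for a dynamic parameter, and a missing member only surfaces as a RuntimeBinderException when the call executes.

```csharp
using System;

// An object that genuinely quacks like a credit service.
public class GoodCreditService
{
    public bool CheckCreditForCustomer(int id) { return id > 0; }
}

public static class DynamicValidation
{
    // Compiles against *any* argument; CheckCreditForCustomer is
    // only resolved at runtime, when the method actually runs.
    public static bool Validate(dynamic creditService, int id)
    {
        return creditService.CheckCreditForCustomer(id);
    }
}
```

Passing a `GoodCreditService` works; passing, say, `new object()` also compiles, but throws `RuntimeBinderException` at the call site – the exact scenario that keeps fans of static typing awake.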
Regardless of where you fall in the static to dynamic spectrum, you can view a type definition as a constraint. In C# and Java, the interface keyword can constrain the type of an object without placing any constraints on the implementation.
interface ICreditService
{
    bool CheckCreditForCustomer(int id);
    bool CheckCreditForCompany(int id);
}
Now we can use this constraint to enforce type safety.
public bool Validate(ICreditService creditService)
{
    return creditService.CheckCreditForCustomer(ID);
}
An interface (in the interface keyword sense) allows fans of static typing to sleep at night while still leaving some flexibility behind. The object that arrives as an ICreditService on any given call might be one of 10 different credit service implementations. The 10 implementations may be from the same class inheritance hierarchy, or they may not. One might be a mock object or test double used only during testing (which I should point out is not, not, not the point of using interfaces), or it may not. The Validate method doesn’t care about the concrete implementation behind the interface.
We still have some flexibility, but we also have additional constraints when compared to duck typing. The credit service has to implement two methods now, even if we just want to build an object for the Validate method, which only uses the CheckCreditForCustomer method. These two methods may or may not be a good thing. Iterative design with tests and a dose of the interface segregation principle will take care of the matter.
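Applying interface segregation here might look like the following sketch (the narrower interface names are my own invention, not from the post): split ICreditService so that Validate only demands the one capability it uses.

```csharp
using System;

// Narrower interfaces: a caller only depends on what it needs.
public interface ICheckCustomerCredit
{
    bool CheckCreditForCustomer(int id);
}

public interface ICheckCompanyCredit
{
    bool CheckCreditForCompany(int id);
}

// A lightweight implementation (or a test double) only has to
// provide the single method Validate actually calls.
public class CustomerOnlyCreditService : ICheckCustomerCredit
{
    public bool CheckCreditForCustomer(int id) { return id > 0; }
}

public static class Validator
{
    public static bool Validate(ICheckCustomerCredit creditService, int id)
    {
        return creditService.CheckCreditForCustomer(id);
    }
}
```

A full-featured credit service can still implement both interfaces; Validate simply stops caring about the company-check method it never used.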
Even more constraints come into play if we use a class definition instead of an interface.
public class CreditService
{
    public virtual bool CheckCreditForCustomer(int id)
    {
        // ...
    }

    // ...
}
Now we’ve not only constrained the type, but we’ve constrained the implementation. Whoever provides our credit service functionality must be a CreditService object, or use CreditService as a base class. Building software is all about composing pieces of functionality together, and using a concrete class as the interface specification places hard restrictions on how the composition will work now, and in the future.
Sometimes, these hard restrictions make sense, or at least aren’t important. For example, classes that have no behavior (like DTOs) don’t need an interface abstraction. I’ve also never found it useful to specify entities using an interface, as they have pure business logic inside (logic dealing only with other business objects or abstractions).
public interface ICustomer
{
    int ID { get; set; }
    string Name { get; set; }
    void UpdateAddress(/* ... */);
    // ...
}
In short, you don’t need interfaces everywhere, you need to anticipate where your software needs to be flexible, which isn’t always easy. Using interface definitions between two horizontal or vertical layers of an application is almost always a yes, but programming to an interface between two business objects inside the same context is a definite maybe.
I like to use interface definitions when I want to turn a detail into a concept. For example, I’d feel more comfortable with a business object using an ISendMessage object than an SmtpServer object. The concept is closer to what the object needs to do (send a message), and it’s easier to change the business object’s behavior by giving the object a different ISendMessage implementation. As a special extra double bonus, the object using ISendMessage is much easier to test. List&lt;T&gt; is a detail. IList&lt;T&gt; is a concept.
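The detail-into-concept idea can be sketched like this (the OrderProcessor and the recording test double are hypothetical names I’ve chosen for illustration): the business object depends only on the concept of sending a message, and swapping the implementation changes its behavior without touching its code.

```csharp
using System;
using System.Collections.Generic;

// The concept: something that can send a message. A production
// implementation would wrap SMTP; the business object doesn't care.
public interface ISendMessage
{
    void Send(string to, string subject, string body);
}

// The "special extra double bonus": a test double is trivial.
public class RecordingMessageSender : ISendMessage
{
    public readonly List<string> Sent = new List<string>();

    public void Send(string to, string subject, string body)
    {
        Sent.Add(to + ": " + subject);
    }
}

public class OrderProcessor
{
    private readonly ISendMessage _sender;

    public OrderProcessor(ISendMessage sender) { _sender = sender; }

    public void Confirm(string customerEmail)
    {
        // Behavior changes by handing in a different ISendMessage.
        _sender.Send(customerEmail, "Order confirmed", "Thanks!");
    }
}
```

In a test, the recording double lets you assert that a confirmation went out, with no mail server anywhere in sight.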
If you doubt the power of interface programming, then just look at COM. Really. In COM you could only program to an object’s interface, and this allowed objects from different runtimes (Visual Basic versus C), different threading models (objects with a thread affinity versus multi-threaded objects), and different processes (local versus remote) to all work together, plus a host of other features. Interface definitions are the ultimate abstraction (for a statically typed environment!).
Let’s say you wanted to select the parts for a Lenovo X60 laptop from the following XML.
<Root>
  <Manufacturer Name="Lenovo">
    <Model Name="X60">
      <Parts>
        <!-- ... -->
      </Parts>
    </Model>
    <Model Name="X200">
      <!-- ... -->
    </Model>
  </Manufacturer>
  <Manufacturer Name="..." />
  <Manufacturer Name="..." />
  <Manufacturer Name="..." />
  <Manufacturer Name="..." />
</Root>
If you know LINQ to XML, you might load up an XDocument and start the party with a brute force approach:
var parts = xml.Root
               .Elements("Manufacturer")
               .Where(e => e.Attribute("Name").Value == "Lenovo")
               .Elements("Model")
               .Where(e => e.Attribute("Name").Value == "X60")
               .Single()
               .Element("Parts");
But, the code is ugly and makes you long for the days when XPath ruled the planet. Fortunately, you can combine XPath with LINQ to XML. The System.Xml.XPath namespace includes some XPath specific extension methods, like XPathSelectElement:
string xpath = "Manufacturer[@Name='Lenovo']/Model[@Name='X60']/Parts";
var parts = xml.Root.XPathSelectElement(xpath);
Now the query is a bit more readable (at least to some), but let’s see what we can do with extension methods.
static class ComputerManufacturerXmlExtensions
{
    public static XElement Manufacturer(this XElement element, string name)
    {
        return element.Elements("Manufacturer")
                      .Where(e => e.Attribute("Name").Value == name)
                      .Single();
    }

    public static XElement Model(this XElement element, string name)
    {
        return element.Elements("Model")
                      .Where(e => e.Attribute("Name").Value == name)
                      .Single();
    }

    public static XElement Parts(this XElement element)
    {
        return element.Element("Parts");
    }
}
Now, the query is short and succinct:
var parts = xml.Root.Manufacturer("Lenovo").Model("X60").Parts();
Combine an XSD file with T4 code generation and you’ll have all the extension methods you’ll ever need for pretty XML queries...
I now have a number of lean software development books queued up. It started when I saw this single bullet point in a presentation:
I’m enjoying the thinking behind lean, and I believe the techniques and vocabulary of lean make software development more tangible to the folks we work with who don’t write code – and that’s important.
Overproduction in software development happens when you produce a feature that customers rarely use. This is one of lean’s seven deadly wastes. The perfect technique to manage this waste is to never create a feature without first establishing a clear value for the feature, but perfection isn’t easy. In commercial software development you’ll inevitably ship some useless bits as you discover the market and the functionality your future customers will value.
Even when you do ship successful bits, the outside world can reprioritize your software. The U.S. healthcare industry, for example, is ultra-sensitive to laws and regulations. A new piece of legislation can change last year’s “must have” feature into this year’s “meh”.
The concept of overproduction stuck out to me because I’ve wrestled with it for many years on several different products. Software vendors are reluctant to remove features, no matter how rarely used the features may be. Sales people in particular object to cutting anything they think might possibly have the slightest potential to attract a single future sale.
The basic problem is thinking of a software feature as an investment, something to protect moving forward. As Mark Lindell will tell you, code is not an investment but a liability. In lean thinking, features are inventory, and anyone who has come within spitting distance of a business school knows inventory eats into margins.
Removing a working feature is never an easy decision, but the sooner a vendor sees obsolete features as a cost and waste, the sooner the vendor can jettison the unused inventory that adds no value to the customer or the company.