
nCover

Tuesday, June 14, 2005 by scott

I didn’t realize until recently that there are two different projects going by the name of nCover – the one on ncover.org and the one on SourceForge.

Both projects are code coverage analyzers. Code coverage tools measure the fraction of code exercised during execution. If an assembly has 100 lines of code, and the execution flow covers 30 of those 100 lines, you have 30% code coverage. I think code coverage tools are a natural sidekick to unit testing tools, because you can see and verify what code is being tested. For pros (and cons), see: “How to Misuse Code Coverage” (PDF) by Brian Marick.

The two nCover projects take different approaches to producing coverage numbers. The nCover project on SourceForge requires a pre-build step to modify .cs files and produce an instrumented version of the code base (the tool makes a backup of the original files, and has the ability to ‘de-instrument’ a file). The tool uses regular expressions to find branching logic and inject the coverage points.
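To picture the source-instrumentation approach, here is a rough sketch of what an instrumented method might look like. The Coverage class and RecordHit call are made up for illustration; they are not the actual helpers the SourceForge tool injects.

using System;

// Hypothetical coverage recorder, standing in for whatever the tool injects.
public class Coverage
{
    public static void RecordHit(string file, int line)
    {
        Console.WriteLine("hit: {0}:{1}", file, line);
    }
}

public class Calculator
{
    // An instrumented method: a recording call injected at the method entry
    // and inside each branch, so the report can tell which paths actually ran.
    public int Add(int x, int y)
    {
        Coverage.RecordHit("Calculator.cs", 18);
        if (x > 0)
        {
            Coverage.RecordHit("Calculator.cs", 21);
            return x + y;
        }
        Coverage.RecordHit("Calculator.cs", 24);
        return y;
    }
}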

The nCover.org tool uses the profiling API to compute coverage. There is no modification to source code and no pre-build step. The coverage report is an XML file you can XSLT into just what you want to see. If you like XSLT, that is. I’ve tried hard to avoid XSLT for a long time now, for no good reason.
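If you do end up wrestling with XSLT, the transform itself is only a couple of lines of .NET 1.1 code. The file names below are made up; point them at the coverage report the tool wrote and at whatever stylesheet produces the view you want.

using System.Xml.Xsl;

class TransformCoverage
{
    static void Main()
    {
        // Apply a stylesheet to the XML coverage report to get a readable page.
        XslTransform xslt = new XslTransform();
        xslt.Load("coverage.xsl");
        xslt.Transform("coverage.xml", "coverage.html");
    }
}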

I’m currently leaning towards the nCover from ncover.org. The profiling API feels less intrusive and has fewer moving parts compared to regular expression matching and munging of source code files. Unfortunately, I have one assembly the tool refuses to measure, and I can’t figure out why.

What do the cool people use?

Linking Is Back

Monday, June 13, 2005 by scott

In case you didn’t know there was an OdeToCode link blog, or you were wondering if the link blog died, well, it’s back.

OdeToCode Links For June 12

Monday, June 13, 2005 by otcnews

Yes - the link blog has been quiet lately, but we're back with a vengeance, and a different style...

Jeremy Miller, in TDD Design Starter Kit - State vs. Interaction Testing, shows us how to write a C# class with no regard for unit testing, and then how to go back and design the same class for testability. Along the same lines, Roy Osherove has slides and sample code available for his latest presentation on “Testing Legacy Code”. The sample code demonstrates a number of useful refactoring strategies to make legacy code testable.

Rick Strahl has been kicking out some killer ASP.NET 2.0 posts lately, including “ASP.NET projects aren’t ‘real’ projects in VS.NET 2005”, “Script Callbacks in ASP.NET 2.0 – interesting, but lacking”, and “Adding default Namespaces and Control libraries in ASP.NET 2.0 with web.config”.

There is a new episode of MSDN TV available: Report Authoring Tips & Tricks with Brian Welcker. Of course, the huge news this week for Reporting Services was that SSRS will be available in ALL SQL Server 2005 editions, including Express, and that the Report Builder (an ad-hoc reporting tool for end users) will be available in the Standard and Workgroup editions (it was previously going to be available only in the pricier Enterprise edition). Wow!

More String Comparisons

Tuesday, June 7, 2005 by scott

Just out of college, I used to spend half my time writing Windows UI code in C++, and half my time writing firmware for 8-bit Intel and Hitachi CPUs using C and assembly language. The firmware drove some infrared LEDs and took readings from an ADC to determine the percentage of protein in a wheat sample. “How exciting”, I can hear you say.

The C compiler for the 8-bit Hitachi chip was terrible, but fortunately the compiler generated files of assembly language instructions instead of binaries. The assembly files would then pass through an assembler to produce the final firmware bits. When I hit a spot that was slow, I could take the assembly files and begin to hand-tweak the code. The C compiler generated hideous instruction sequences, particularly when it came to using the floating point libraries, so sometimes it was easy to get 5x to 10x performance increases with hand optimizations – not something you’ll do in a day with today’s mainstream compilers.

Ah, well. Different place, different time, different life.

The reason I bring up this background is that I still like to look at assembly code now and then. Particularly when I see a post like Geoff’s – it just makes me want to see what is happening after the JIT optimizations kick in.

I asked Geoff for the code, compiled it, ran it, and attached WinDbg. I wanted to see what the JIT was producing for the following C# and VB.NET methods. Note: both methods implement ITester.StringTest.

public void StringTest()
{
    string s1 = "foo";
    string s2 = "bar";
    bool bRet = s1.Equals(s2);
}

Public Sub StringTest() Implements ClassLibrary3.ITester.StringTest
    Dim s1 As String = "foo"
    Dim s2 As String = "bar"
    Dim bRet As Boolean = s1.Equals(s2)
End Sub

To view a disassembly with WinDbg:

0:005> .load E:\WIN2003\Microsoft.NET\Framework\v1.1.4322\sos.dll
0:005> !name2ee ClassLibrary2.dll ClassLibrary2.Class1.StringTest
--------------------------------------
MethodDesc: 927308
Name: [DEFAULT] [hasThis] Void ClassLibrary2.Class1.StringTest()
0:005> !name2ee ClassLibrary1.dll ClassLibrary1.Class1.StringTest
--------------------------------------
MethodDesc: 927280
Name: [DEFAULT] [hasThis] Void ClassLibrary1.Class1.StringTest()

name2ee can get us the MethodDesc addresses given a module and qualified method name. Given a MethodDesc address we can ask for a disassembly. Here is the VB.NET method. I’ve modified the output slightly to make it easier on the eyes.

0:005> !u 927280
Normal JIT generated code
[DEFAULT] [hasThis] Void ClassLibrary1.Class1.StringTest()
Begin 00ce14c0, size 14
1    mov    ecx,[0206617c] ("foo")
2    mov    edx,[02066180] ("bar")
3    cmp    [ecx],ecx
4    call    mscorwks!COMString::EqualsString (791ef42d)
5    ret

The VB.NET version is short and sweet. The code moves string references into the ecx and edx registers. The cmp instruction is a compare instruction, but what this instruction really does is check to make sure the ecx register does not contain a null pointer by dereferencing the value held in the register (the brackets indicate an indirect addressing mode). If you look at the IL listing Geoff shows in his post, the method call s1.Equals produces a callvirt IL instruction, and callvirt guarantees the “this” / “Me” reference will not be null. If s1 was null, the CPU would trap this instruction and ultimately force a NullReferenceException to bubble out of the CLR. Once the check passes, the Equals method gets invoked.
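To see that guarantee from the outside, here is a small console sketch (the class layout is mine, not Geoff’s code) showing the callvirt null check failing before Equals ever runs:

using System;

class CallvirtDemo
{
    static void Main()
    {
        string s1 = null;
        string s2 = "bar";

        try
        {
            // callvirt checks the 'this' reference before dispatching,
            // so a null s1 fails here rather than somewhere inside Equals.
            bool bRet = s1.Equals(s2);
            Console.WriteLine(bRet);
        }
        catch (NullReferenceException)
        {
            Console.WriteLine("callvirt rejected the null reference");
        }
    }
}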

For some reason, the C# compiler and JIT did not work together to produce code quite as good as the VB.NET version. Mind you, we are talking about nanoseconds here, so, as Geoff pointed out, it is nothing to get too excited about. Just makes one curious, is all.

0:005> !u 927308
Normal JIT generated code
[DEFAULT] [hasThis] Void ClassLibrary2.Class1.StringTest()
Begin 00ce1500, size 1d
1    push    esi
2    mov    esi,[0206617c] ("foo")
3    mov    edx,[02066180] ("bar")
4    mov    ecx,esi
5    cmp    [ecx],ecx
6    call    mscorwks!COMString::EqualsString (791ef42d)
7    and    eax,0xff
8    pop    esi
9    ret

For some reason, the instructions for the C# version bounce the first string reference from the esi register to the ecx register, which forces a push and then a pop to preserve the value in esi. (The value in esi was a reference to the Class1 instance; you can find this out by putting a breakpoint on the first instruction (bp 00ce1500) and dumping stack objects (!DumpStackObjects).) The C# version also seems obsessed with the return value of the Equals method, even though we never make use of the value! Return values generally appear in the eax register, and I’m assuming the and operation on line 7 is masking off the high bits to make sure we have a clean 32-bit bool. Weird.

There you have it. It’s VB.NET by 9 bytes and a few clock ticks.

“How exciting”, I can hear you say again.

TechEd Day 1

Tuesday, June 7, 2005 by scott

No, I’m not at TechEd either, but I did keep the keynote webcast open in the background. If you are interested, I’m sure a Google news search will turn up piles of mind-numbing analysis by industry experts who do nothing all day but write white papers filled with pie charts, so I’m not rehashing any content here.

Something did stick out, though. Rather, something was missing from this keynote, entitled “The New World Of Work”.*

I’m pretty sure there was not one mention of a web application during the keynote. There was talk of smart clients, web services, mobile devices, tablet PCs, Office add-ins, security, and future versions of Windows - but not one mention of a “web app” that I heard.

Will 2006 be the beginning of the end for ASP.NET?

I think so.

*What's with the title? It sounds rather ominous. Is this a technical conference or an Illuminati convention?

Code-behind for Me

Friday, June 3, 2005 by scott

In ASP.NET 2.0, I still prefer using code-behind over the single-file code model for web forms – but it is a tough choice.

In ASP.NET 1.x the choice was easy. Using a code-behind file was the only way to have intellisense when writing code, was the only way to catch syntax errors at build time, and was the only way to ‘pre-compile’ your code into an assembly before deployment.

All of these reasons go away in 2.0. In the single-file model, your C# / VB.NET code exists inside a <script runat="server"> tag, within the same file as the ASPX / HTML markup for the web form. You’ll have intellisense, and the refactoring menu works, too. You can catch syntax errors by requesting a build, and still pre-compile the entire web application. In short, you won’t lose any productivity using the single-file model.
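For reference, a bare-bones single-file page might look something like this (the markup and the control names are mine, not from any particular sample):

<%@ Page Language="C#" %>

<script runat="server">
    // All of the page's code lives here, in the same file as the markup.
    void Page_Load(object sender, EventArgs e)
    {
        TimeLabel.Text = DateTime.Now.ToString();
    }
</script>

<html>
<body>
    <form id="form1" runat="server">
        <asp:Label ID="TimeLabel" runat="server" />
    </form>
</body>
</html>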

There are other advantages to the single-file model. There is only one file to check into source control. Only one file to diff when looking for changes between builds. Only one file to deploy, and only one file to copy when you want to sell company secrets to the highest bidder.

I like the simplicity of the single-file model. There are certainly fewer moving pieces. I just don’t like code intermingled with so much declarative markup – neither in ASP.NET nor in some of the Avalon samples I look at. I like to see a full class definition and not headless, floating method definitions. The mixture just isn't pretty.

I like to look at pretty code.

Perhaps this dislike could pass in time, but for now I’m putting code in a separate file.

Code-Behind or Single File?

Thursday, June 2, 2005 by scott

The latest OdeToCode article: The Code Models Of ASP.NET 2.0.

The obvious question is which model should you be using for your ASP.NET projects? The answer will largely depend on the type of person you are. Working from a single file containing both the code and ASPX markup will appeal to many people, while others will insist on a strict separation and favor the code-behind model. The single-file model has an advantage in configuration management and deployment, since there is only a single file to version and deploy. Intellisense and refactoring tools appear to work equally well with both models, so there will be no clear winner in the productivity category. One additional factor in deciding on the model to use is your pre-compilation strategy, which will be the topic of our next article...

I have some more thoughts on the subject to blog later this week, as there are some subtle but surprising differences, particularly in VB.NET versus C# land...

Teaser, teaser, teaser. It's all about the teasers.