One paragraph in the article struck home with me:
Much of the problem has to do with the fact that .Net is just too hard, Dobson said. "Most IT pro people—I'm talking about the DBAs—did not embrace .Net" when it first came out in 2001, he said.
True, true. Just the other day I was working with a DateTime value in C#, but what I really needed was a string with a plain ol’ U.S. formatted date. After some intense concentration, I typed the following:
string date = someDate.ToShortDateString();
I’m sure many of you have felt that pain. It’s particularly hard with IntelliSense always popping up bizarre windows. I couldn’t take the foolishness anymore, so I wrote a T-SQL UDF instead, and used the following:
SET @date = CONVERT(char,@someDate,131)
I was pretty happy with this version, but during testing I discovered I was getting strange output. It turns out the number 131 tells SQL Server to use the Kuwaiti algorithm and the Islamic lunar calendar to produce a date.
No problem - I tried 126, 108, 105, and then 6. No luck. Eventually I put in a 101 - and everything worked!! Tee-hee!
I spent the rest of the afternoon etching the T-SQL date format styles into a piece of corkboard using my trusty soldering iron. I'll hang this above my desk to make life even easier.
I won't tell you what happened when I implemented String.Split in T-SQL. I'll leave the wonderful experience as an exercise for the reader.
I sat down to my desktop this morning and my wrists reminded me I’ve been spending a great deal of time with the mouse and keyboard these last three weeks.
After logging in, I decided it was time for a change.
I stared intently at the taskbar on the bottom of my screen. I began to imagine applications launching, menus opening, commands executing. I could feel the energy start to form around me. I concentrated on synchronizing the patterns in my brain - pouring them into the silicon mind before me. I began to hum the note three steps below middle C. The aura around me was terrific to behold.
Then it happened.
A window appeared!!!
The window said: “A newer version of MSN Messenger is available”.
Coincidence? I don’t think so.
I believe I just need some more practice….
I’m crushed with some heavy duty projects.
The hours fly by like a compressed binary stream, and I’ve had no time to coalesce ideas into a well-formed post.
I do, however, have a collection of random thoughts I can share. You probably can’t tell the difference between this post and all the rest….
On Monday I met the new marketing director. After shaking my hand he looked at me and said: “I never lie to potential clients – I always check with engineering first to know what we have.”
I’m going to take up an office collection to ship him this book.
I have one directory with 30,000 C# files (don’t ask). The Visual Studio “Find In Files” command just can’t cope.
Fortunately, the MSN Search toolbar can cope. The .cs extension is already included as a default type to index; all I needed to do was add the right directory as a custom folder to index.
The toolbar does not index .vb files by default.
What’s up with car names these days? I saw a “Prius” in a parking lot last week and for some reason my first thought was of a prostate exam.
Yesterday I saw an “Armada”. I’m not sure why I’d buy a vehicle that evokes images of ships being set ablaze and battered by cannonballs, but I do wonder if the driver wears a naval cap while cruising.
Now, back to the grindstone.
I started some new work on a .NET 2.0 project recently and decided to dive headfirst into the unit testing features in 2005. The auto-generated tests seem to have a fundamental flaw and I’d recommend avoiding them (at least in beta 2).
It’s easy to put the cursor on an existing class, right-click and select “Create Tests…”. Visual Studio will generate a boatload of code at this point, including a unit test class with a test method for each method and property on the target class.
The stubbed-out test methods are set up to work through an accessor class. If you create an Account class and use the “Create Tests…” option, Visual Studio creates an AccountAccessor class.
The accessor classes will proxy calls to the real object using reflective techniques, i.e. with an Invoke call. This approach allows you to test protected and private members of a class as if they were public members. The merits of testing non-public members of a class are debatable, but the layer of indirection an accessor adds takes all the fun out of refactoring.
One of the many benefits of the refactor feature is that you can change the name of a method or class and feel confident that the IDE can manage the aftermath. There is no need to do a global search and replace by hand – the IDE takes care of cleaning up.
Let’s say you want to rename your Account class to CAccount because you miss programming in MFC (in which case I suggest you look for professional help). The refactoring will not change the code in your unit tests, because the unit tests are using the accessor class instead of the real class. The instantiation of the accessor would look something like the following (except it’s actually a lot uglier):
_accountAccessor accessor = new _accountAccessor(target);
Of course, the accessor class needs to create a real instance of the Account class to test, but it creates the instance in a late-bound manner. The Account class type is only referenced as a string, like the following (except it’s actually a lot uglier).
protected static PrivateType m_privateType =
new PrivateType("AssemblyName", "Namespace.Account");
The refactor feature can't make any changes to your unit tests because it doesn't know where you were using the Account class. The next time you run your tests you'll have a pile of exceptions about missing methods and classes.
You can regenerate just the accessor classes, but you'll still have a mess to clean up by hand.
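The root of the problem is easy to demonstrate in any language with late binding. Here’s a sketch in JavaScript (the Account class and invoke helper are my stand-ins, not code Visual Studio generates): once a member is referenced only by a string, a rename refactoring has nothing to follow, and the failure shows up at run time instead.

```javascript
// A stand-in for the class under test.
class Account {
  constructor() { this.balance = 0; }
  deposit(amount) { this.balance += amount; }
}

// Late-bound call: the method is only a string, the same way the
// generated accessor references "Namespace.Account" as a string.
function invoke(target, methodName, ...args) {
  const method = target[methodName];
  if (typeof method !== "function") {
    throw new Error("missing method: " + methodName);
  }
  return method.apply(target, args);
}

const account = new Account();
invoke(account, "deposit", 50); // works today...

// ...but rename deposit() to credit() and no refactoring tool can
// see inside the string, so the late-bound call blows up at run time:
let failed = false;
try {
  invoke(account, "credit", 50); // "credit" doesn't exist yet
} catch (e) {
  failed = true;
}
```

The compiler catches a rename in statically bound code; a string-based Invoke pushes the error out to the test run, which is exactly the mess described above.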
I’m not sure what the generated tests will look like in the RTM versions of Visual Studio 2005 / Team System (or even what versions will support unit testing), but I’m staying away from the auto-generated tests in beta 2.
Something is fundamentally wrong if a testing feature makes refactoring more difficult ... or am I missing something?
Continuing on the AJAX theme, I have a couple ideas about what AJAX should not be:
AJAX should not be hard to debug.
The moment I see a developer machine with a packet sniffer in one window and a script debugger in a second window, I know it’s time to fire up Microsoft Project and make some adjustments. A good AJAX implementation will have tracing, logging, and diagnostics built-in.
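What “built-in tracing” might look like, as a minimal sketch (the tracedRequest name and the injected-transport idea are mine, not any shipping toolkit’s API): every request and response lands in a trace the developer can inspect without a packet sniffer.

```javascript
// An in-memory trace every AJAX call writes into.
const trace = [];

// Wrap any request with tracing. The transport is injected so the
// same wrapper works with XMLHttpRequest in a browser or a stub here.
function tracedRequest(url, transport) {
  const started = Date.now();
  trace.push({ event: "request", url: url });
  const response = transport(url); // e.g. a synchronous XHR, or a stub
  trace.push({ event: "response", url: url, ms: Date.now() - started });
  return response;
}

// Usage with a stubbed transport:
const body = tracedRequest("/api/account", function () { return "ok"; });
```

With diagnostics at this layer, “what did the client actually send?” becomes a question you answer by reading the trace, not by sniffing packets.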
AJAX should not be hard to test.
Automation == good.
Room full of interned monkeys clicking randomly == bad.
AJAX should not be SmartNavigation 2.0
SmartNavigation (obsolete in 2.0) uses a clever trick to make a web application feel like a Windows Forms application. By POSTing from a hidden IFRAME, the end user doesn’t experience the typical flash and loss of scroll position during postback. Unfortunately, SmartNav only works for trivial web applications. People sink a good deal of time into trying to get SmartNav to work before giving up.
If you think about it, AJAX and SmartNavigation have similar goals (and by manipulating the DOM with async results, an eerily similar implementation) - I just hope whatever AJAX becomes turns out better.
AJAX should not confuse the user. Those little animated icons in a web browser are a great way to tell the user: “Thank you for choosing the Internet. Please stay on hold and a server will respond to your request shortly. This request may be monitored for quality purposes.” (cue soft jazz by Alpert, Herb). Every AJAX design should plan to give visual feedback when processing is underway. AJAX toolkits should make this easy.
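The feedback plumbing doesn’t have to be complicated. Here’s one way it might work (the createBusyTracker name and callbacks are my invention): count in-flight requests and toggle an indicator on the first request out and the last response in.

```javascript
// Track overlapping async requests and toggle a busy indicator.
// showSpinner/hideSpinner would touch the DOM in a real page;
// they're injected here so the logic stands alone.
function createBusyTracker(showSpinner, hideSpinner) {
  let pending = 0;
  return {
    begin() {
      pending += 1;
      if (pending === 1) showSpinner(); // first request: show feedback
    },
    end() {
      pending -= 1;
      if (pending === 0) hideSpinner(); // last response: hide it
    }
  };
}

// Usage with stubbed indicators:
let visible = false;
const tracker = createBusyTracker(() => { visible = true; },
                                  () => { visible = false; });
tracker.begin(); // user sees the spinner
tracker.begin(); // second overlapping request, spinner stays up
tracker.end();
tracker.end();   // all responses in, spinner goes away
```

A toolkit that calls begin/end around every async postback gives the user feedback for free, which is the point: the page developer shouldn’t have to remember to do this on every request.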
Also, AJAX should never annoy the user. There is real potential for AJAX to become the next <blink> tag.
Wally asks: “What is AJAX?”
I hope AJAX will be invisible. I hope developers don’t have to think about AJAX any more than they think about the inner details of the HTTP protocol. I have no inside information on the ASP.NET team’s Atlas project yet, but I’m hoping to use it like so:
<asp:Image runat="server" ID="MagicImage"
           OnDrag="MagicImage_OnDrag"
           AsyncPostBack="true" />
The above snippet demonstrates two features:
1. High frequency events like ondrag, which we previously could only handle with client side script, will be available on the server.
2. An AsyncPostBack property to indicate that a control initiates a “lightweight” postback.
On the server, life goes on as usual. We respond to events using .NET code, execute queries, data bind controls, set control properties - yadda, yadda. During the MagicImage_OnDrag event, we might set the control’s ImageUrl property to a new value. A new Page property, IsAsyncPostBack, will let us skip code that we know we don’t need to execute during a lightweight postback.
The server responds by grabbing rendered HTML for the controls marked with AsyncPostBack=”true” and piping bits back down the wire. Client side script parses instructions in the results and updates a subset of form fields and controls using the DOM.
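If the payload maps control IDs to rendered HTML (my guess at a wire format, not anything known about Atlas internals), the client-side half might look like this sketch. The document is injected so the logic runs anywhere; in a real page you’d pass the browser’s document.

```javascript
// Apply a map of { controlId: renderedHtml } to the page,
// updating only the controls the server re-rendered.
function applyAsyncResult(payload, doc) {
  const updates = JSON.parse(payload);
  for (const id in updates) {
    const element = doc.getElementById(id);
    if (element) {
      element.innerHTML = updates[id]; // swap just this control's markup
    }
  }
}

// Usage with a stubbed document:
const magicImage = { innerHTML: '<img src="old.gif">' };
const fakeDoc = {
  getElementById: id => (id === "MagicImage" ? magicImage : null)
};
applyAsyncResult('{"MagicImage": "<img src=\\"new.gif\\">"}', fakeDoc);
```

The rest of the page is untouched: no flash, no lost scroll position, only the AsyncPostBack controls change.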
I was looking forward to softball this weekend. I had a feeling the coach would move me from 3rd base to the shortstop position because our regular guy was out this week, and I was right.
In the 3rd inning I fielded a well-hit double play ball. I turned to throw to second when *BAM*. It felt like someone hit the back of my leg with a steel pipe. My teammates said I just collapsed.
I remember lying in the dirt thinking “so, this is what it feels like to rupture an Achilles tendon”.
Fortunately, the X-Rays came back negative this morning.
The doctor told me to rest with my feet elevated and a computer in my lap. Ok, he didn’t mention the computer part, but that’s how I had planned to spend my Sunday in any case, though I hadn’t planned on ice packs and Ibuprofen.
To make a long story short, I finished “Themes In ASP.NET 2.0” today. I hope you enjoy the article.