Office 2010

Friday, May 14, 2004 by scott

When Beth came to the office door I knew the news wouldn’t be good. High energy and optimism combined to give Beth a cheerful personality, but the look on her face didn’t bode well for the project schedule.

“Well, I’m back from legal,” Beth said. “We have our work cut out for us.”

“What did they say about the latest build?” I asked.

“Big problems with the File commands,” she said. “We have to cut the Import command completely.”

“What?” I asked, dismayed.

“Yep, last year’s ‘SCO versus Corel’ ruling used wording in DMCA III to prevent an application from opening file types registered to another application. It gets worse, though: we’ll need to cut ‘Save As Web Page’ too.”

“What?” I said, incredulously.

Beth studied a long list of notes on her tablet. “Yep, submarine patents again - a small company in Dog Lick, Kentucky has all lossless and lossy picture formats locked up. Until we license some more algorithms we can only render images using 4-bit color bitmaps, but what’s really going to hurt the schedule is the holdup on ‘Send As Attachment’.”

“Well,” I said, “Steve and Bill are meeting with President Sheen next week to see if someone can amend CAN SPAM 2009 for us.”

“I hope Charlie can push some senators around,” Beth replied. “The list of approved email clients has been really thin. The ‘Share Over WiFi’ feature has no chance though, and healthcare industry representatives are saying no copies will be deployed unless we allow them a hook into the ‘Open’ command.”

“They can’t do that!” I exclaimed.

“Well, according to the latest HIPAA bill of 2008, healthcare workers need to read, print, and sign an audit form before opening any document that could potentially contain information about a current, past, or possibly future patient, unless the person is standing in the same room with three forms of identification and a notarized release form. Even veterinarians have to be careful now. We have no choice with the current legislation.”

“So what can we have on the File menu?” I asked. “Exit?”

“As long as we RTM before ‘The New Improved PATRIOT Act,’” said Beth. “If not, we will need to add the new stealth activity upload.”

“Not a chance of releasing with these setbacks,” I muttered.

“I didn’t even have time to get into the Edit menu with the lawyers,” Beth continued. “I’m sure last month’s Supreme Court ruling on the Revised Database and Collections of Information Misappropriation Act is going to kill the ‘Paste’ command.”

“This is stupid!” I blurted. “Applications don’t copy data, people do!”

“Welcome to software design in 2010,” Beth said, then smiled, picked up her tablet, and strode from the room. I sighed heavily, and whirled my chair around. I opened my bottom desk drawer, and pulled out my bar review notes. “No innovation without litigation,” I thought to myself. Those good old days are gone.

Double Check Locking In The News Again

Thursday, May 13, 2004 by scott

Once upon a time, Chris Brumme posted about shortcomings in the memory model of the ECMA specification for the CLR. Not necessarily shortcomings from a runtime performance point of view, but shortcomings from a programmer productivity point of view. In the post he discussed why double check locking requires some attention to detail. Specifically, the following code snippet may not be as thread safe as it first appears.

if (a == null)
{
    lock (syncRoot)
    {
        if (a == null)
            a = new A();
    }
}

There are interesting comments in response to the post, and eventually Jon Skeet devoted a page to singleton construction. Jon avoids the double check locking issue altogether by using a static field initializer in a nested type. The approach Jon promotes works very well except in cases where you do not know the singleton type to construct at compile time. For example, the type of singleton to construct may be an object derived from an abstract base class in a provider / pluggable architecture and the application reads the type to construct from a config file.
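The nested-type approach looks roughly like this (my sketch of the idea, not Jon's exact listing):

```csharp
public sealed class Singleton
{
    private Singleton() { }

    public static Singleton Instance
    {
        get { return Nested.instance; }
    }

    private class Nested
    {
        // An explicit static constructor tells the compiler not to mark
        // the type beforefieldinit, so initialization is as lazy as possible.
        static Nested() { }

        // The CLR guarantees this field initializer runs exactly once,
        // the first time Nested is referenced - no locking required.
        internal static readonly Singleton instance = new Singleton();
    }
}
```

The runtime does the synchronization for you, which is exactly why the pattern falls apart when the concrete type isn't known until runtime.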

If you can’t use a static field initializer, but still want safe, lazy instantiation, then it seems to me that Brad Abrams’ post about using the static MemoryBarrier method of the System.Threading.Thread class is the direction to go, for a couple of reasons.

To me, the volatile keyword carries specific overtones. I still think of programming with memory-mapped IO when I see the volatile keyword. Volatile variables are completely unsafe for caching. Imagine having a byte in memory hooked up to a thermometer lying on your desk. Not even a single CPU machine knows when the memory location may update with a new temperature value – you have to read it from main memory every time. Volatile has an unfortunate connotation for a singleton reference, which after construction isn’t going to change.

Secondly, the use of Thread.MemoryBarrier explicitly calls out what needs to happen for the code to be thread safe. For people who stumble across the code in the future, they will not need to think of the side effects of a volatile variable when Thread.MemoryBarrier is in place.
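Put together, the pattern Brad describes looks something like this (a sketch; the field and type names are mine):

```csharp
public sealed class A
{
    private static object syncRoot = new object();
    private static A instance;

    private A() { }

    public static A Instance
    {
        get
        {
            if (instance == null)
            {
                lock (syncRoot)
                {
                    if (instance == null)
                    {
                        A temp = new A();
                        // Ensure the writes that constructed the object are
                        // visible to other processors before the reference
                        // is published. Without this barrier, another thread
                        // could see a non-null instance whose fields are not
                        // yet initialized.
                        System.Threading.Thread.MemoryBarrier();
                        instance = temp;
                    }
                }
            }
            return instance;
        }
    }
}
```

The barrier documents the exact spot where ordering matters, instead of smearing volatile semantics across every read of the field.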

Not only do we have maintainable code showing programmer intent, there is a performance bonus too. That being said, if this code were not part of a singleton, and other methods were involved, I’d prefer volatile.

Speech-Enabled ASP.NET Commerce Starter Kit Application

Sunday, May 9, 2004 by otcnews

Looking for examples of how to speech enable an existing web application? CommerceVoice leverages the existing business- and data-layers of the IBuySpy sample it is based on, demonstrating programming and design techniques for using the Microsoft Speech Application SDK, and for developing voice-only applications in general.

Server Software For The Desktop

Saturday, May 8, 2004 by scott

For the past four years I’ve always run a server version of Windows on my development machine. I do this for a few reasons. On those rare occasions when I find myself in a server room around a production machine, I feel comfortable knowing where all the buttons and settings are. It’s hard to feel comfortable when you have only a vague memory of where you saw a particular configuration dialog, and pointy-haired people stand behind you spouting, “Is it online yet?”

The other reason is that I want to feel like I am getting my money’s worth from my MSDN subscription. Some people keep their subscription discs in numerical order. I like to keep mine ordered by license fees. That way, if there is something expensive I have not installed yet, I can throw it on a virtual PC and tinker around.

Since switching to a DVD subscription two years ago, it’s been much harder to compute the license fee value per disc. It requires a calculator. Then again, the CD subscription was driving me insane years ago. I’m convinced Microsoft implemented the CD numbering and coloring scheme using a stochastic process. First, you received about 2400 CDs each year. If you organized the CDs numerically, it was impossible to find any specific product inside without an up-to-date annotated index, and you never knew when any particular CD was obsolete. The numbering sequence often left large gaps, but invariably a CD would show up with a number in between two other CDs and all the discs had to be manually bubble-sorted throughout the CD binder. I’m certain the process has driven some percentage of developers to drink. One company I worked at budgeted 40 intern hours a month to organizing MSDN subscription binders for developers.

But getting back to my previous topic, which is running server software on what is essentially a desktop machine: Windows 2003 is a different beast and requires some tweaking to offer a pleasant desktop experience. Kevin Moore offers a tip on getting rid of the Shutdown Event Tracking. MSFN has some other tips to enable themes, video acceleration, audio acceleration, and more. By the time you get to the end of the guide, you’ll be able to watch those MSDN webcasts in a nicely themed Windows Media Player at full frame rate.

The only drawback to running 2003 as a desktop OS is you’ll find some software refuses to install, saying it requires Windows XP. Also, some utility software, like good anti-virus software, has tiered pricing for server class machines. On the other hand, it is the only OS where you can install some of the new, expensive stuff.

Oh, and at least some of those obsolete MSDN CDs have found a good home.

What Goes On At the ASP.NET Website?

Thursday, May 6, 2004 by scott

First, I think ASP.NET is a great web site, and featuring articles from all over the community helps build a diverse and informative resource for developers.


There are problems which in my opinion devalue the site. I used to think there was some forethought and human intelligence behind the scenes which would take the steps necessary for the site to appear with the polished veneer you’d expect from a site with Microsoft’s name attached, but the ‘man behind the curtain’ appears to be 100% silicon.

Take today’s new article description. This is obviously meant to reach someone responsible for the ASP.NET daily article content and not meant to appear on the front page. I’m sorry to say, John, even if you used the site’s official contact email, you won’t get a response, at least in my experience.

(UPDATE: The article has changed as of 3:20 PM EST. The description used to begin with "Dear Editor, Thank you for accepting my article,,,," Thank you ASP.NET!)

In the past there have been articles that have nothing to do with web development, which doesn’t bother me too much, but when this happens I start to wonder what sort of standards the site maintains. What I do find troubling is how at least two articles were duplicated in the space of 10 days this year, which indicates to me nobody is paying attention to what is going on. The front page content is just a FIFO queue in a database. My guess is someone could post a link to Michael Jackson’s legal documents and the article would show up on the front page of ASP.NET.

I can appreciate that filtering content to feature a daily article (indeed, even having a daily article) can be a tough job. Hopefully, someone can step up and address the issue. Given the site’s domain name, and the site’s owner, there are certain expectations to meet. Don’t devalue the site and the work of the authors who contribute to this resource.

New Longhorn Bits

Thursday, May 6, 2004 by scott

Robert McLaws has compiled a list of “things to do” while downloading the latest Longhorn bits, and I may make it through most of them, as I still have 5 hours left (not counting the SDK transfer of 380 MB). The transfer rate has been falling steadily from ~70 KB/sec to ~20 KB/sec as the evening progresses. I don’t think I will be seeing the installation and setup screen until after a night of sleep. Update: it appears I have downloaded the DDK, not the SDK, as the SDK has yet to appear on subscriber downloads.

Chris Sells points out there will not be any Visual Studio bits to put on this build (M7.2 Longhorn). I might have to grab vi and see if the muscle memory in my fingers can still play :w and yy like the days of old.

Wesner Moise tells us that MSDN has already updated the online Longhorn SDK to reflect the latest build.

Finally, Scoble addressed all the hardware requirement speculators who believe Longhorn will require a creation from the Los Alamos labs to run. I know I’ve been running the PDC bits on what some would consider ridiculously modest hardware – a 1Ghz P3 with 1GB of RAM. Quit laughing! It runs pretty well!

ASP.NET Validation : False Sense of Security?

Monday, May 3, 2004 by scott

A subtle and dangerous bug appears regularly in newsgroup postings, and some have even cited the problem in sample code from articles and books.

Take the following ASPX snippet:

<asp:textbox id="txtPassword" runat="server"/>
<asp:button id="btnSubmit" runat="server" Text="Submit"/>
<asp:requiredfieldvalidator id="valReqPassword" 
            runat="server" ErrorMessage="Password required" 
            ControlToValidate="txtPassword"/>

And the following code-behind logic:

private void btnSubmit_Click(object sender, 
                             System.EventArgs e)
{
   Response.Write("Password set!");
}

If you enter a blank password and click submit on this form (in a DHTML capable browser), the validation control prevents the post back and displays an error message next to the TextBox control. Testing complete, validation works, continue to the next form.

I’m sure many of you have spotted the problem, but judging from newsgroup postings this isn’t so easy for newcomers to catch. One can expose the bug by setting the EnableClientScript property of the validation control to false. Now if the user enters a blank password and clicks submit the validation error message still appears, but in addition all of the code inside the click event handler executes. Unless there is a database constraint in place, chances are the user just set their password to an empty string.

Even with client side scripting enabled, we know it would be easy to give the software un-validated input with the System.Net.WebRequest class. Client side validation works so well in the browser, however, that it is easy to miss this vulnerability.

The crux of the misunderstanding is how the client side validation behavior is entirely different from server side behavior. On the client side, if validation fails, the flow of execution effectively stops. On the server side, you have to check Page.IsValid and alter the flow yourself.
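For the form above, the server-side fix is a one-line check at the top of the handler (a sketch reusing the earlier names):

```csharp
private void btnSubmit_Click(object sender,
                             System.EventArgs e)
{
    // The validators have already run by the time a click event fires,
    // but on the server they only record their verdict - they do not
    // stop execution. Client script can be disabled or bypassed
    // entirely, so the handler must check the verdict itself.
    if (!Page.IsValid)
        return;

    Response.Write("Password set!");
}
```

Every postback handler that trusts validated input needs this guard; the client-side behavior is a convenience, not a security boundary.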

Darren Neimke posted today about the difficulty in achieving elegance when writing functionality spanning client and server sides. I agree, at times it still seems much harder than it should be to get things right (with pretty code), particularly since the two sides behave so differently. Perhaps failed validation should trigger an exception on the server side - but I realize this would break many existing applications. Still, something to think about ... and watch for Page.IsValid in code reviews.

OdeToCode by K. Scott Allen