New article on OdeToCode: Precompilation in ASP.NET 2.0. Here is an excerpt:
Although pre-compilation will give our site a performance boost, the difference in speed will only be noticeable during the first request to each folder. Perhaps a more important benefit is the new deployment option made available by pre-compilation - the option to deploy a site without copying any of the original source code to the server. This includes the code and markup in aspx, ascx, and master files.
Other notes:
It seems the precompile.axd 'magic page' touted as a feature in the early days has slipped quietly into the bucket of dropped features.
Although pre-compilation allows you to deploy a web application without any ASPX files present, this still poses a problem if a request comes in for a directory and you want the request to find a default document for the directory. The 'default document' issue has been a thorn in the side of URL rewriters for quite some time.
If you try to pre-compile to a target directory that already contains a pre-compiled application, aspnet_compiler will fail with “error ASPRUNTIME: Object reference not set to an instance of an object”. Ugh – unhelpful. Use the -f switch in this case to force an overwrite.
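For example, to force a recompile over an existing target (the virtual path and directories here are placeholders for your own):

aspnet_compiler -v /MyApp -p C:\Source\MyApp -f C:\Deploy\MyApp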
I spent my recreation time this weekend on the softball field and doing the Service Broker challenge. I left the softball field for the second time in as many weeks on the losing side, and with scraped-up legs. It’s time to buy long pants – I’m becoming old and fragile, I am.
The Service Broker experience involved a lot less pain, although Geoff is now taunting me. May your blog be filled with an overabundance of spam, Geoff!
The amazing part about Service Broker is not how I dropped an XML document into a local queue, nor how the document was whisked to a remote machine over an encrypted TCP connection authenticated by digital certificate, nor that the remote machine validated the document against an XML schema before activating a stored procedure and sending back a response that I pulled out of a local queue, but that all this was done using only Transact SQL.
What was that thud? Another DBA keeling over?
My one hang-up in the challenge was that I did not understand the different protocol layers in Service Broker, but Rushi got me on track:
The adjacent layer protocol that connects two SQL Server instances and the dialog endpoint layer protocol that connects two services are distinct protocols that stack one on top of the other. Think IP and TCP. Hence the certificates used at the adjacent layer for authenticating instances to each other are different from the certificates used at the dialog layer for authenticating services to each other. Two services that wish to engage in a conversation may not be connected directly to each other, but via multiple hops using routes. Each pair of adjacent hops (i.e. SQL Server instances) uses its own authentication/encryption, which is hidden from the dialog endpoints.
SQL 2005 – a crazy new world.
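To make the two certificate layers concrete, here is a minimal T-SQL sketch (the endpoint, certificate, service, and user names are all placeholders of my own invention):

-- Adjacent (transport) layer: this certificate authenticates one
-- SQL Server instance to a neighboring instance.
CREATE ENDPOINT BrokerEndpoint
    STATE = STARTED
    AS TCP (LISTENER_PORT = 4022)
    FOR SERVICE_BROKER (
        AUTHENTICATION = CERTIFICATE InstanceCert,
        ENCRYPTION = REQUIRED
    );

-- Dialog layer: a different certificate (owned by RemoteServiceUser)
-- authenticates this service to the remote service end-to-end, no
-- matter how many instance-to-instance hops sit in between.
CREATE REMOTE SERVICE BINDING HelloWorldBinding
    TO SERVICE 'RemoteHelloService'
    WITH USER = RemoteServiceUser;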
The fine people at Packt Publishing are letting me give away their books in a contest. They have books by yours truly, on DotNetNuke, BPEL, and a variety of other topics outside of .NET.
My first thought was to have contestants answer 3 .NET trivia questions, but this runs the risk of having some clever know-it-all from Microsoft point out a subtle flaw in a question and invalidate the entire contest. Instead, I decided to play it safe and ask three questions about content on this blog (don’t worry – I’ve provided hints to make it point-and-click easy).
Here are the basic rules:
Send answers for the questions listed below to contest@OdeToCode.com. Three winners will be selected at random on July 5th from entries with correct answers. Entries must be in before midnight (UTC) on July 4th. Winners will be announced here on July 6th.
I'll have Packt ship each winner one book of their choice from the Packt stable! Whoohoo!
1. What is the #3 reason I wanted to see Microsoft buy Disney?
2. What do I think is #1 in the 'best of' list from the .NET 1.x years?
3. What do I consider stage #4 in the '5 stages of mocking'?
Hints:
The 5 Stages Of Mocking
If Microsoft Would Buy Disney
The Best Of The .NET 1.x Years
Spread the word!
Update: Your email will not be given out to anyone - ever! I don't like spam any more than you do...
Fritz Onion asks an interesting question in his “Value of asynch tasks” post: do async tasks add any benefit when parallelizing asynchronous web service invocations? I’ve been experimenting with the feature too, and wanted to offer an answer.
Let’s say we need to call a HelloWorld web service that returns a string, but the service takes 5 seconds to complete – and we need to call it twice. In the simplest case we are looking at a 10 second response time. We might try to improve the response time by kicking off simultaneous calls to the service like so:
protected void Page_Load(object sender, EventArgs e)
{
    // Start both web service calls before blocking on either one.
    IAsyncResult ar1 = helloService.BeginHelloWorld(null, null);
    IAsyncResult ar2 = helloService.BeginHelloWorld(null, null);

    // Each EndHelloWorld call blocks the request thread until
    // the corresponding service call completes.
    TextBox1.Text = helloService.EndHelloWorld(ar1);
    TextBox2.Text = helloService.EndHelloWorld(ar2);
}
The response time for the page will be just over 5 seconds – a great improvement – but at what cost? We are tying up 3 threads to process 1 user request. This approach might work well for apps with low utilization, but I’d want to do some careful stress testing before this page sees a heavy load.
ASP.NET 2.0 introduces async pages. With async pages you register one or more tasks with the RegisterAsyncTask method for the runtime to execute asynchronously. RegisterAsyncTask requires a PageAsyncTask object initialized with begin, end, and timeout event handlers for the async task. You can also pass along a state object, and indicate if the task should execute in parallel. With anonymous delegates, you could put together code like the following:
protected void Page_Load(object sender, EventArgs e)
{
    PageAsyncTask task1;
    PageAsyncTask task2;

    bool executeInParallel = true;

    task1 = new PageAsyncTask(
        delegate(Object source, EventArgs ea, AsyncCallback callback, Object state)
            { return helloService.BeginHelloWorld(SLEEPTIME, callback, state); },
        delegate(IAsyncResult ar)
            { TextBox1.Text = helloService.EndHelloWorld(ar); },
        // don't need a timeout handler or state object
        null, null, executeInParallel);

    task2 = new PageAsyncTask(
        delegate(Object source, EventArgs ea, AsyncCallback callback, Object state)
            { return helloService.BeginHelloWorld(SLEEPTIME, callback, state); },
        delegate(IAsyncResult ar)
            { TextBox2.Text = helloService.EndHelloWorld(ar); },
        null, null, executeInParallel);

    RegisterAsyncTask(task1);
    RegisterAsyncTask(task2);
}
A couple of notes about the above code. First, it does the same job as the first code sample but is longer and more complex. However, the amount of code doesn’t tell the whole story – we’ll slowly uncover more. The runtime will ensure the registered tasks either complete or time out before the page goes to render. We can run the tasks in parallel by passing a true value for the last parameter to the ctor for PageAsyncTask (the default is false). Here is one advantage to the async page model – we can move from parallel to serialized processing by toggling one Boolean variable.
Another advantage to async pages is a feature we are not taking advantage of – the timeout delegate (the third parameter in the PageAsyncTask ctor). If a task takes too long to complete, the runtime can move ahead and finish processing the page, notifying us about the timeout via that third delegate if we supply one. This is another feature that would be difficult to construct in the first code example.
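As a sketch of what that looks like (assuming the @ Page directive sets Async="true" and an AsyncTimeout value – the timeout message below is my own invention), task1 could pass a timeout delegate like so:

// Same task as above, but with a timeout handler as the third
// argument. If the call outlasts the page's AsyncTimeout, the
// runtime invokes this handler instead of the end handler and
// carries on with rendering the page.
task1 = new PageAsyncTask(
    delegate(Object source, EventArgs ea, AsyncCallback callback, Object state)
        { return helloService.BeginHelloWorld(SLEEPTIME, callback, state); },
    delegate(IAsyncResult ar)
        { TextBox1.Text = helloService.EndHelloWorld(ar); },
    delegate(IAsyncResult ar)
        { TextBox1.Text = "The web service call timed out."; },
    null, executeInParallel);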
I was also curious to see if async pages did a better job managing all the threads involved. The threads spend most of their time waiting 5 seconds for the web service method to return. I whipped up a quick stress test using the testing tools in VS 2005 Beta 2 (which rock, by the way – very easy to use, and I’ve used a fair number of web stress test tools). Here are some average results after simulating a 5 user load for 5 minutes.
|                              | Avg Requests per Second | Avg Request Time (s) |
| First code sample            | 2.9                     | 5.0                  |
| Async page (parallel=true)   | 2.9                     | 5.0                  |
| Async page (parallel=false)  | 1.5                     | 11.1                 |
This is roughly what we would expect. Both parallel approaches are pretty even, and by toggling the executeInParallel flag we double the response time for our async page. Let’s try again with a 20 user simulated load.
|                              | Avg Requests per Second | Avg Request Time (s) |
| First code sample            | 2.3                     | 23.4                 |
| Async page (parallel=true)   | 3.5                     | 15.5                 |
| Async page (parallel=false)  | 3.5                     | 14.4                 |
Under load, the async page sustains a higher throughput, primarily because, I believe, the incoming request thread is free to go back to the worker pool once the async tasks are registered and the begin handlers fire. This is unlike the first code sample, where the primary request thread hangs around waiting for the async calls to complete.
I’ll follow up with some thread tracing analysis soon.
I think async pages are a great addition to 2.0 when used in the right situations. You can easily control the parallelization of tasks, easily handle timeouts, and thread management is superior to the simple approach in the first code snippet.
Did you know in 2.0 both C# and VB use the partial keyword to indicate a class definition will be spread across multiple declarations?
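For anyone who hasn’t seen it in action, here is a minimal sketch (the Customer class is my own contrived example) – the compiler stitches the two declarations into a single type:

public partial class Customer
{
    public string Name;
}

// A second declaration, perhaps in another file (think generated
// designer code), adds more members to the same Customer class.
public partial class Customer
{
    public override string ToString()
    {
        return Name;
    }
}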
Don’t they always use different keywords?
How did this happen?
To: Team
From: Team Leader
Date: T – 3 days
Subject: Partial

Dear Team,

Yesterday, that ‘other’ language team decided to use partial as a keyword in their language. We need the same feature, but you know we try to avoid sharing keywords with that ‘other’ language. We have about 3 days to think something up. Email me your suggestions.

To: Team
From: Team Leader
Date: T – 2 days
Subject: Re: Partial

Dear Team,

I’ve seen no suggestions yet. You people better put on your brainstorming caps before we have the marketing team come in.

To: Team
From: Team Leader
Date: T – 1 day
Subject: Re: Partial

Dear Team,

So far we’ve come up with: “Incomplete”, “Fractional”, “Limited”, and “Unfinished”. Of these four only “Fractional” has a fighting chance. The others look insulting in code. I think you people are just using the Word thesaurus or something. Get with it!

To: Team
From: Team Leader
Date: T
Subject: Re: Partial

Dear Team,

I’m sorry to say we are going to be using partial as a keyword in our language, too. I feel we missed an opportunity to differentiate ourselves, and I hope we never miss another opportunity like this one again. Tim: Your idea about “HalfAss Class” was good, but it didn’t get by legal. Thanks for the effort.
There is a kerfuffle in Blogsville over censorship in the MSN Spaces available to residents of China.
Let’s make the assumption that without censorship, there would be no MSN Spaces available in China. The assumption seems safe to make, in which case Microsoft is taking the right approach.
I’d like to think that if I were Chinese, and I wrote a blog entry containing the word ‘freedom’, and the site rejected my post for containing ‘forbidden speech’, my thought process would be the following:
1. The government sucks …
2. … but I’m sure this is easy to work around …
3. … so let me get all my friends blogging, and we can exchange more ideas, and they can get their friends blogging, too….
Having censorship smack you across the fingers with an obvious error message is far more provocative than reading a censored version of the newspaper and never knowing what lines quietly disappeared. Provocation precipitates change.
Having blogs with censorship is better than having no blogs at all. Blogs facilitate the exchange of ideas. Ideas precipitate change.
One can only hope the Chinese people will one day be able to work outside the confines of censorship, and focus on the important things in life that so consume western culture. You know, like live coverage of the Michael Jackson trial, 24-hour-a-day sports TV, and keeping the really dirty words off computer screens.
I didn’t realize until recently that there are two different projects going by the name of NCover – the one on ncover.org and the one on SourceForge.
Both projects are code coverage analyzers. Code coverage tools measure the fraction of code exercised during execution. If an assembly has 100 lines of code, and the execution flow covers 30 of those 100 lines, you have 30% code coverage. I think code coverage tools are a natural sidekick to unit testing tools, because you can see and verify what code is being tested. For pros (and cons), see: “How to Misuse Code Coverage” (PDF) by Brian Marick.
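A contrived sketch of the idea (my own example, not from Marick’s paper): a test suite that only ever calls Divide(10, 2) exercises the happy path, so a coverage tool will flag the guard clause below as unexercised.

public static int Divide(int a, int b)
{
    // A test that only passes non-zero divisors never executes this
    // branch, so these lines show up as uncovered in the report.
    if (b == 0)
    {
        throw new ArgumentException("b must be non-zero");
    }

    return a / b;
}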
The two NCover projects take different approaches to producing coverage numbers. The NCover project on SourceForge requires a pre-build step to modify .cs files and produce an instrumented version of the code base (the tool makes a backup of the original files, and has the ability to ‘de-instrument’ a file). The tool uses regular expressions to find branching logic and inject the coverage points.
The ncover.org tool uses the profiling API to compute coverage. There is no modification to source code and no pre-build step. The coverage report is an XML file you can XSLT into just what you want to see. If you like XSLT, that is. I’ve tried hard to avoid XSLT for a long time now, for no good reason.
I’m currently leaning towards the NCover from ncover.org. The profiling API feels less intrusive and has fewer moving parts compared to the regular expression matching and munging of source code files. Unfortunately, I have one assembly the tool refuses to measure, and I can’t figure out why.
What do the cool people use?