One More On ASP.NET 2.0 Compilation

Thursday, June 30, 2005 by scott
6 comments

I’m trying to move past the subject of ASP.NET 2.0 compilation to something new, really I am, but between some insightful questions I’ve seen, and work on my own code, I’m starting to have … issues.

Switching Pre-compilation Models Can Expose Broken Code

Let’s say you have a web form and need to dynamically load a user control. The code inside a web form might look like:

protected void Page_Load(object sender, EventArgs e)
{
    Products products = LoadControl("Products.ascx") as Products;
    products.CategoryName = "Blunt Instruments";
    Controls.Add(products);
}

We can build the code from the IDE with no errors. We can do a simple precompilation from the command line with no errors. There are errors, however, if we add -fixednames during precompilation.

C:\Documents and Settings\bitmask>aspnet_compiler -p "c:\dev\WebSites\WebSite9" -f -v / -fixednames c:\temp\website1
Utility to precompile an ASP.NET application
Copyright (C) Microsoft Corporation. All rights reserved.

c:\dev\WebSites\WebSite9\Default2.aspx.cs(16):
error CS0246: The type or namespace name 'Products' could not be found (are you missing a using directive or an assembly reference?)

Ouch.

When building from the IDE, the runtime was batch compiling all the aspx and ascx files inside the root directory into a single assembly. With -fixednames, each aspx and ascx compiles into a separate assembly. The precompiler doesn’t know where the Products type lives now - there is no reference.

What we should have done from the start was use an @ Reference in the ASPX like so:

<%@ Reference VirtualPath="~/Products.ascx" %>

The point isn’t how to make things work, the point is we wouldn’t have these idiosyncrasies if the compilation model was simple instead of complicated.

We Don’t Need No Stinkin’ Projects

The first hint of trouble should have come when typing in the code to interact with the user control. There is no IntelliSense. Why? Because there is no real "project" for a web project. Pick one of the following perspectives on this fact:

One way to view the 'project-less' project is to picture each webform as an island – an autonomous service whose type is unknown outside its own declaration. Each web form can end up in a distinct assembly. No webform knows anything about any other webform. This perspective is pleasant.

A second, less flattering view of the 'project-less' project is that a web application is no longer a cohesive unit, but a random collection of whatever known files the IDE can find inside your folders. Point the IDE at a folder, throw anything in, and it will compile when the time comes. It feels sloppy.

If you do need to share secrets between webforms, you need the aforementioned @ Reference directive. For example, when someone needs to pass values between web forms, they often find the following article on MSDN: Passing Server Control Values Between Pages. I’ve never liked this approach – it has the aroma of an anti-pattern. The approach will also not compile in ASP.NET 2.0 unless you @ Reference the first web form, by virtual path, from the second (transfer target) web form.
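To make the MSDN approach compile, the transfer target needs to know the source page’s type. A minimal sketch, with hypothetical page names (Source.aspx exposes a public SelectedValue property and calls Server.Transfer("Target.aspx")):

```csharp
// Target.aspx (markup) needs, in addition to its @ Page directive:
//   <%@ Reference VirtualPath="~/Source.aspx" %>

// Target.aspx.cs — after the transfer, Context.Handler still holds
// the source page instance, so we can cast and read from it.
protected void Page_Load(object sender, EventArgs e)
{
    Source source = Context.Handler as Source;
    if (source != null)
    {
        string value = source.SelectedValue;  // read state from the source page
    }
}
```

Without the @ Reference directive, the Source type is invisible to Target.aspx and the cast will not compile under the 2.0 model.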

There is at least one other migration scenario where this can sting. Let’s say you have a stand-alone .cs or .vb file in a 1.x web project. From within that class you use a second class that is the code-behind class for a webform. I don’t advocate this approach, but I can understand some scenarios where it would be useful.

The 2.0 migration wizard will move the stand-alone source file into the /App_Code directory, and the code will compile into the App_Code assembly. Types inside the App_Code assembly have no knowledge of any code-behind classes - you cannot get to them. There is no easy way to get this scenario to work in Beta 2 without injecting an abstract base class for your webform code-behind class into /App_Code.

To solve this, the ASP.NET team decided to automatically generate stub classes for code-behind classes into the App_Code directory in post-Beta 2 builds.
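The manual Beta 2 workaround looks roughly like this sketch (all names hypothetical): put an abstract base class in /App_Code, derive the code-behind from it, and let App_Code types talk to the page only through the base:

```csharp
// App_Code/ProductPageBase.cs — the only page type App_Code code can see
public abstract class ProductPageBase : System.Web.UI.Page
{
    public abstract string CategoryName { get; set; }
}

// Default.aspx.cs — the real code-behind derives from the abstract base
public partial class _Default : ProductPageBase
{
    private string _categoryName;

    public override string CategoryName
    {
        get { return _categoryName; }
        set { _categoryName = value; }
    }
}

// App_Code/Helper.cs — programs against the base type, never _Default
public static class Helper
{
    public static void ApplyDefaults(ProductPageBase page)
    {
        page.CategoryName = "Blunt Instruments";
    }
}
```

The post-Beta 2 builds generate this kind of stub automatically, which is exactly the Rube Goldberg flavor described above.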

I feel like I’m studying a Rube Goldberg machine.

My Worst Job

Wednesday, June 29, 2005 by scott
3 comments

I’m throwing my story into the worst job meme.

When I was 17-ish, my friend Andy started driving a maroon 1969 Camaro SS – a true muscle car. I’ve never been a big gear head, but the sound and fury of 400 horsepower was enthralling. We used to cruise the streets downtown. Andy had a theory that the smell of burning rubber would attract females, but we soon realized the only interested females wore badges and wrote traffic tickets.

Live and learn, I always say.

We also came to terms with another, more serious problem. Andy’s parents refused to pay for traffic tickets, gas, and new tires. It’s not easy to have a summer of high-speed adventure without gas and tires, so Andy went in search of a job.

Within a week, Andy had found a job with a company building a new miniature golf course, and talked me into applying, too. They hired us. The lure of extra spending money was part of the reason I agreed, but I also figured miniature golf courses were fun to be at, so they must be fun to build. Maybe I’d get to drive a backhoe!

Live and learn, I always say.

I told my dad that I was going to work on building a miniature golf course. I still remember the strange smile that came across his face. I realized years later the meaning behind the smile. He realized his never-worked-a-day-in-his-life-I-hope-he-goes-to-college son was about to receive a lesson in life, and the lesson was not going to require a trip to the police station, a trip to the hospital, or a trip to his bank account, as so many teenage lessons do.

Andy and I arrived for our first day of work in our brand new work boots, and I surveyed all the cool equipment around the grounds. Mechanical digging things. Mechanical roller things. I started to wonder if I would need a special license to drive a backhoe, or if they could just teach me on the job. I have to admit I was a bit disappointed when the boss gave me an un-mechanized shovel and told me where to dig.

I’m pretty sure the Earth experienced a major orbital variation at this point, and for the next 8 hours we hovered a mere 5 meters above the surface of the sun. The heat was unbearable. I only kept digging in hopes I would uncover some alien artifact, and secret government agents would whisk me away in a non-descript, but air-conditioned van.

After what seemed like 20 years of digging we got a new assignment: pull back a tarp that was covered with rain water. The tarp was massive, and it took 9 people to pull the tarp free. When the tarp did come free, 8 people let go and remained standing. One person stumbled backwards and fell into a large pit of mud.

Rising from the mud, my first thought was to go clean myself off. I spotted a garden hose and headed for it. After two steps I tripped on a piece of rebar that was sticking out of the ground, ripped a hole in my new work boots, and fell face first into a second muddy pit.

Standing up again, I thought of continuing to the garden hose, but I suddenly appreciated something that pigs and hippos have known for a long time. Let me illustrate with an excerpt from The Hippopotamus Song:

Mud! Mud! Glorious mud!
Nothing quite like it for cooling the blood.
So, follow me, follow, down to the hollow,
And there let us wallow in glorious mud.

I decided I’d keep the soothing mud. Unfortunately, mud left under a hot sun forms a sort of plaster-like substance. In fact, some previous civilizations have built living structures, temples, and freeway overpasses using only dried mud as a building material. It took three days with soap, hot water, and a steel brush to remove the mud from my skin.

Live and learn, I always say.

I think I’ll end the story at this point by letting you know the day only got worse. I permanently retired from golf course construction work after 8 hours, and I’ve still never driven a backhoe.

When Deployment Gets Ugly

Wednesday, June 29, 2005 by scott
3 comments

Rick Strahl presents a GUI utility to drive the aspnet_compiler command line tool, and voices valid criticisms of ASP.NET 2.0 deployment options with “ASP.NET 2.0 Application Deployment in the real world”.

Precompile || !Precompile

Is pre-compilation for deployment worth the trouble? The performance advantages of pre-compilation are almost insignificant. Pre-compilation is not NGEN. There remains a hefty amount of JIT compiling at startup, not to mention warming up the cache, establishing connections … the to-do list for the runtime at startup goes on and on.

There are, however, at least two good reasons to pre-compile. First, pre-compilation will find any syntactical errors that might be lurking inside the application. Even if you don’t deploy a pre-compiled version of an application, pre-compilation should be a part of every build process to ensure there are no errors.

A second reason, good for shared hosting environments, is that pre-compilation will lock down the application. You can deploy an ASP.NET application without deploying any source code whatsoever (not even ASPX files). No one can change your application, or even place a new ASPX file inside your directory in a hack attempt (the new page will throw an exception if executed). Note: you can pre-compile ‘for update’ if you still need dynamic compilation of aspx, ascx files, like for applications that use skins.
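An updatable precompile might look like the following sketch (paths and virtual directory name are hypothetical); the -u switch keeps the aspx/ascx markup editable on the server, while dropping it strips all source, including the aspx files themselves:

```shell
rem Precompile /MyApp for deployment, keeping markup updatable (-u)
aspnet_compiler -v /MyApp -p c:\dev\MyApp -u c:\deploy\MyApp
```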

What’s In Your Bin Directory?

A drawback to pre-compilation that Rick points out is the lack of control you have over the /bin directory. Every folder with a web form inside will compile into a different .dll. In addition, the compiler generates part of the assembly’s name at random. The first time you precompile, you might see App_Web_vifortkl.dll appear. The second time, the same directory will produce an App_Web_snarkfob.dll assembly. Multiple XCOPY deployments with no clean up will result in a /bin directory littered with obsolete dlls.

The solution to this problem is to pass -fixednames as a parameter to aspnet_compiler. The -fixednames parameter will force the compiler to generate the same filename on each pre-compilation run for an application, but there is a catch. Each web form and user control will produce an individual assembly! If you have 5 web forms, you’ll find at least 5 assemblies in the bin. Actually, there will be 10 files total, because alongside each .dll file is a .compiled file filled with XML to map source files to assemblies.

You might also use the -d switch to place debugging information for each assembly into a pdb file. PDBs are essential if you want to see line numbers in a production exception’s stack trace. Now there are three files per form. I pre-compiled one of the smallest applications I have with -d -fixednames, an application with only 12 web forms and some user controls, and found 21 assemblies in /bin. 63 total files!

Sixty-three!
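The run that produced this pile looked something like the following sketch (paths and virtual directory name are hypothetical):

```shell
rem -f overwrites the target, -d emits .pdb files, -fixednames keeps
rem assembly names stable (one assembly per web form / user control)
aspnet_compiler -v /MyApp -p c:\dev\MyApp -f -d -fixednames c:\deploy\MyApp
```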

Each assembly adds a little overhead to the working set of an application. I’m interested to see the impact on large applications where the number of assemblies reaches into the hundreds.

Each assembly also adds some overhead in getting files to a production server. If you are FTP-ing all these files to a shared host with the application online, remember each write to the /bin directory will put your application into a state of flux until all the files are complete.

Control

The underlying problem isn’t performance worries or deployment overhead, but loss of control over the build outputs of an ASP.NET application. At first glance it would seem easy to map each directory and file name into a namespace and type name, then put all forms and controls into a single assembly, but then many problems seem easy at first glance.

I’m hoping the uneasy feeling I get when looking in a /bin directory goes away.

Special Directories in ASP.NET 2.0

Tuesday, June 28, 2005 by scott
2 comments

ASP.NET 2.0 introduces a number of special directories for application resources. These directories live as subfolders in the application root, have special names, and offer various shortcuts and conveniences to web developers. One such folder is the App_Code folder. You can drop a .cs file into the App_Code folder, even while an application is running, and the runtime will automatically compile all the code inside the folder into an assembly.

The App_Code folder is one of those features experienced developers will shun in favor of class libraries. Other folders have definite advantages. For example, the App_Browsers folder will allow you to update browser definitions (browsercaps) for an application. In a shared hosting environment today, you’d have to clutter up web.config with new browsercaps. There are also special directories for skin files (App_Themes), resource files (App_GlobalResources, App_LocalResources, App_Resources), and web references (App_WebReferences). As always, the trusty Bin directory will also be around.

Then there is App_Data. You can plop SQL Server data (.mdf) and log files (.ldf) into the directory, and have the engine attach dynamically by using AttachDBFileName in the connection string. App_Data will be a useful feature for people in shared hosting environments, where XCOPY and FTP deployment options are the only options available.
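A connection string for this scenario might look like the following sketch (the name and .mdf file are hypothetical); the |DataDirectory| token resolves to the App_Data folder at runtime:

```xml
<connectionStrings>
  <add name="SiteData"
       connectionString="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\SiteData.mdf;Integrated Security=True;User Instance=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```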

I think App_Data is a double-edged sword. Under the right conditions, you’ll be able to overwrite a production database with an FTP client. Ouch.

Tracing Threads In Async Pages

Friday, June 24, 2005 by scott
0 comments

Let’s look at the impact of the Async=”true” attribute in an @ Page directive for a web form using RegisterAsyncTask. Refer to the 2nd chunk of source code from last week’s “Async pages” post.

The new tracepoint feature in VS 2005 makes analyzing thread behavior in this scenario quite easy. Tracepoints are set like breakpoints, but instead of halting execution you can ask a tracepoint to log a message to the debug output window. It’s like having calls to Trace.WriteLine in your code.

I set tracepoints at the beginning of Page_Load, Page_Unload, and in each of the begin and end delegates for the async tasks. The tracepoints log the Thread ID and their location. When the async tasks are set to execute in parallel, and the Async=”true” attribute is set in the @ Page directive, we see the following trace output:

Thread: 0xA54 Page_Load
Thread: 0xA54 Task1 Start Delegate
Thread: 0xA54 Task2 Start Delegate
Thread: 0x9B4 Task1 End Delegate
Thread: 0xA68 Task2 End Delegate
Thread: 0xA68 in Unload, Elapsed time = 5.0s

The original thread (0xA54) is the thread selected to start processing the request, but once she has kicked off the async tasks she goes back to the thread pool to service other requests. Thread 0xA68 finishes the last async web service call and goes on to finish the page processing in Page_Unload.

Thread 0xA54 is from the worker thread pool. Threads 0x9B4 and 0xA68 are from the completion port thread pool. This is the behavior Fritz Onion refers to in a comment on my previous post. We can verify this behavior using SOS from the VS.NET debugger’s immediate window (output modified to fit on the screen):

.load sos
extension C:\WINDOWS\Microsoft.NET\Framework\v2.0.50215\sos.dll loaded
 
!threads
ThreadCount: 10
UnstartedThread: 0
BackgroundThread: 10
PendingThread: 0
DeadThread: 0
Hosted Runtime: no
     
OSID APT 
 9bc Ukn (Threadpool Worker)
 9c4 MTA (Finalizer)
 9c8 MTA (Threadpool Completion Port)
 9cc Ukn
 9b4 MTA (Threadpool Completion Port)
 a28 MTA (Threadpool Worker)
 a3c MTA
 a54 MTA (Threadpool Worker)
 a68 MTA (Threadpool Completion Port)
 a80 MTA (Threadpool Completion Port)

Now we will set Async=”false” in the @ Page directive, and restart.

Thread: 0xC54 Page_Load
Thread: 0xC54 Task1 Start Delegate
Thread: 0xC54 Task2 Start Delegate
Thread: 0xC6C Task1 End Delegate
Thread: 0xC80 Task2 End Delegate
Thread: 0xC54 in Unload, Elapsed time = 5.0s

Threads 0xC6C and 0xC80 are from the completion port threadpool, but notice thread 0xC54 has to wait around and finish the page processing. This reduces scalability and would be similar to the processing that happens in the first code sample of the previous post.

Moral Of The Story

Always remember to set Async=”true” in the @ Page directive if you use RegisterAsyncTask.
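A minimal sketch of the pattern, with a hypothetical page class and URL (this is the built-in Page.RegisterAsyncTask / PageAsyncTask API, and it assumes Async="true" is set in the @ Page directive):

```csharp
using System;
using System.Net;
using System.Web.UI;

public partial class AsyncDemo : Page
{
    private WebRequest _request;

    protected void Page_Load(object sender, EventArgs e)
    {
        // Register the task; with Async="true", the worker thread returns
        // to the pool while the I/O is pending.
        RegisterAsyncTask(new PageAsyncTask(BeginTask, EndTask, TimeoutTask, null));
    }

    private IAsyncResult BeginTask(object sender, EventArgs e,
                                   AsyncCallback cb, object state)
    {
        _request = WebRequest.Create("http://example.com/data");
        return _request.BeginGetResponse(cb, state);
    }

    private void EndTask(IAsyncResult ar)
    {
        // Typically runs on a completion port thread, as the trace shows.
        using (WebResponse response = _request.EndGetResponse(ar))
        {
            // consume the response
        }
    }

    private void TimeoutTask(IAsyncResult ar)
    {
        // Called if the task exceeds the page's AsyncTimeout.
    }
}
```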

Precompilation in ASP.NET 2.0

Wednesday, June 22, 2005 by scott
49 comments

New article on OdeToCode: Precompilation in ASP.NET 2.0. Here is an excerpt:

Although pre-compilation will give our site a performance boost, the difference in speed will only be noticeable during the first request to each folder. Perhaps a more important benefit is the new deployment option made available by pre-compilation - the option to deploy a site without copying any of the original source code to the server. This includes the code and markup in aspx, ascx, and master files.

Other notes:

It seems the precompile.axd 'magic page' touted as a feature in the early days has slipped quietly into the bucket of dropped features.

Although pre-compilation allows you to deploy a web application without any ASPX files present, this still poses a problem if a request comes in for a directory and you want the request to find a default document for the directory. The 'default document' issue has been a thorn in the side of url-rewriters for quite some time.

If you try to pre-compile to a target directory that already contains a pre-compiled application, aspnet_compiler will fail with “error ASPRUNTIME: Object reference not set to an instance of an object”. Ugh - unhelpful. Use the -f switch in this case to force an overwrite.

Service Broker

Tuesday, June 21, 2005 by scott
0 comments

I spent my recreation time this weekend on the softball field and doing the Service Broker challenge. I left the softball field for the second time in as many weeks on the losing side, and with scraped up legs. It’s time to buy long pants – I’m becoming old and fragile, I am.

The Service Broker experience involved a lot less pain, although Geoff is now taunting me. May your blog be filled with an overabundance of spam, Geoff!

The amazing part about Service Broker is not how I dropped an XML document into a local queue, nor how the document was whisked to a remote machine over an encrypted TCP connection authenticated by digital certificate, nor that the remote machine validated the document against an XML schema before activating a stored procedure and sending back a response that I pulled out of a local queue, but that all this was done using only Transact SQL.

What was that thud? Another DBA keeling over?

My one hang up in the challenge was that I did not understand the different protocol layers in Service Broker, but Rushi got me on track:


The adjacent layer protocol that connects two SQL Server instances and the dialog endpoint layer protocol that connects two services are distinct protocols that stack one on top of the other. Think IP and TCP. Hence the certificates used at the adjacent layer for authenticating instances to each other are different from the certificates used at the dialog layer for authentication services to each other.

Two services that wish to engage in a conversation may not be connected directly to each other, but via multiple hops using routes. Each pair of adjacent hops (i.e. SQL Server instances) use their own authentication/encryption which is hidden from the dialog endpoints.

SQL 2005 – a crazy new world.