The Dreaded “Aspnet_wp.exe Could Not Be Started” Error

Friday, August 27, 2004 by scott
0 comments

This morning started off on the wrong foot. We officially designated one of our dev machines as “pooched”.

Inside the application log was the dreaded ‘Aspnet_wp.exe Could Not Be Started’ message. I cringe at the sight of this message because it can result from any number of security- or permission-related misconfigurations. One way to start troubleshooting the problem is by enabling auditing and getting some tools from sysinternals.com. See my post Security Whodunit and read Anil’s comment.

Going through all the KB articles (linked below) and troubleshooting steps did not solve the problem. After four hours, the computer was one runtime error away from being placed into a charcoal picnic grill outside the building and set on fire. That’s when we realized a second machine was complaining about certificate errors and not being able to connect to SQL Server.

Both of these machines had something in common. Some of our clients have moved to using VPN solutions over SSL, and both machines were set up to use this. The advantages of this type of VPN are the ability to tunnel over port 443 from just about anywhere, and (in theory) no need for client software deployment. In fact, everywhere I look the VPN over SSL vendors insist there is no need to deploy software on the client and make messy configuration changes. This white paper even claims the technology gives you “clientless access”. Isn't that an oxymoron?

You typically start a VPN over SSL session by browsing to a secure website outside the destination’s firewall and entering some credentials. The browser then asks if you want to install some signed ActiveX software onto your machine. I’m guessing since this is “clientless access” it doesn’t count as a software deployment. Nevertheless, after going into Add / Remove programs and instructing the software to uninstall from the clientless machine, ASP.NET worked perfectly again – even though the uninstall barfs on an access violation just before it completes.

The VPN software we were using was from Juniper Networks. According to the signature, Neoteris wrote the ActiveX control. Neoteris, by the way, was bought by NetScreen, which in turn was bought by Juniper. You know when you are running an installation program and see signs of three different companies it’s going to be trouble. If this is a low-cost deployment, then I want the Cisco IPSEC client back. Oh, I forgot, it's not really a deployment - it's clientless.

There are many other reasons you might see this error. The account for the ASP.NET worker process could be disabled, missing, or locked out, or the wrong password could be configured in machine.config. You might be trying to run ASP.NET on a domain controller. Alternatively, the ASP.NET process account might not have the permissions to access files that it needs, and woe to anyone who names their machine “SYSTEM”. These scenarios and fixes are described in the following KB articles.

FIX: ASP.NET Does Not Work with the Default ASPNET Account on a Domain Controller

PRB: "Aspnet_wp.exe Could Not Be Started" Error Message When You View an ASP.NET Page

You receive a "W3wp.exe could not be started" error message in the application event log when you view an ASP.NET page

FIX: Cannot Browse to ASP.NET Pages If Computer Name Contains Certain Words

A Friend In Need

Wednesday, August 25, 2004 by scott
7 comments
Multiple choice question:

A friend in need is a ____.

A) Friend with a non-booting laptop.

B) Friend with a non-booting desktop.

C) A friend indeed.

D) All of the above.

I’m sitting here with not one, but two pieces of hardware from two different people I know. Everyone in this field is familiar with playing tech support occasionally; it’s my turn now, and the requests have come in bunches.

At least I’m getting a few good meals for my efforts. I’ve already had a down payment made in the form of a sausage, shrimp, and red pepper jambalaya. It’s a shame I can’t help this person out much; it appears the hard drive has destroyed more files than an Enron paper shredder.

The problem I’m really having is with a Toshiba Satellite notebook. I did manage to get this machine running. The screen is large and crisp and clear. It’s fast. It’s sleek. It makes my aging Thinkpad look so bad.

I keep finding myself on the Dell homepage clicking “Customize It”. I almost get to the checkout when I think: “No, what I really want is a Tablet PC”, and I’m off browsing for a Tablet. Then I wonder if I will really make good use of the Tablet. How often will I use the Pen? What if I need to do some extended development work on the machine? It’s just so cool, but is it worth it?

Now I can find no middle ground. It either has to be the super-portable, battery friendly Tablet PC, or the desktop replacement monster laptop that will give my legs third degree burns if I use it on the couch.

Pen versus Pentium 4

Sexy versus Sledgehammer

Space versus Time

This is the last time I fix a computer for someone who has nicer hardware than I do.

A DBA's Dream - SQL 2005 DDL Triggers

Monday, August 23, 2004 by scott
0 comments
I’ve never put a trigger into production. I’m not saying this is good or bad; it just hasn’t happened … yet.

With all these new features in SQL 2005 it’s easy to overlook the new capability to use DDL triggers. I think they will become a DBA’s friend long before CREATE ASSEMBLY and stored procedures in managed code ever will.

DDL triggers fire when data definition language events happen. For instance, you can block DROP TABLE and ALTER TABLE statements in a database with the following trigger.

CREATE TRIGGER AuditTableDDL
ON DATABASE 
FOR DROP_TABLE, ALTER_TABLE 
AS 
   PRINT 'No DROP or ALTER for you!' 
   PRINT CONVERT (nvarchar (1000), EVENTDATA())
   ROLLBACK;

In SQL 2000, the only way to prevent an accidental table drop was to create a view referencing the table with the WITH SCHEMABINDING clause. DDL triggers are more explicit about this, and EVENTDATA() gives you auditing capability for free.
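For comparison, the SQL 2000 trick can be sketched as follows, assuming a hypothetical table dbo.Foo with an Id column (the view name is my own invention). While the schema-bound view exists, DROP TABLE dbo.Foo fails, because a schema-bound object references the table:

```sql
-- Schema-bound views require two-part object names and an explicit
-- column list (no SELECT *).
CREATE VIEW dbo.LockFoo
WITH SCHEMABINDING
AS
   SELECT Id FROM dbo.Foo;
```

The downside, of course, is that the intent is hidden: nothing about a view named for some other purpose says “this also protects the table”, which is why the DDL trigger approach reads better.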

The EVENTDATA() function is what really makes DDL triggers interesting. If you need to audit DDL activity, the XML return value will contain all of the pertinent information for you. For example, if I have table Foo, and try DROP TABLE Foo with the previous trigger in place, I’ll get the following response (with some formatting applied):

No DROP or ALTER for you!
 
<EVENT_INSTANCE>
  <EventType>DROP_TABLE</EventType>
  <PostTime>2004-08-22T22:54:37.377</PostTime>
  <SPID>55</SPID>
  <ServerName>SQL2005B2</ServerName>
  <LoginName>SQL2005B2\bitmask</LoginName>
  <UserName>SQL2005B2\bitmask</UserName>
  <DatabaseName>AdventureWorks</DatabaseName>
  <SchemaName>dbo</SchemaName>
  <ObjectName>Foo</ObjectName>
  <ObjectType>TABLE</ObjectType>
  <TSQLCommand>
    <SetOptions ANSI_NULLS="ON" ANSI_NULL_DEFAULT="ON" 
                ANSI_PADDING="ON" QUOTED_IDENTIFIER="ON"
                ENCRYPTED="FALSE"
    />
    <CommandText>DROP TABLE Foo</CommandText>
  </TSQLCommand>
</EVENT_INSTANCE>
 
Msg 3609, Level 16, State 2, Line 1
Transaction ended in trigger. Batch has been aborted.

SQL 2005 also has predefined event groups to make writing DDL triggers easier. In the following trigger, DDL_TABLE_EVENTS will catch CREATE TABLE, DROP TABLE, and ALTER TABLE:

CREATE TRIGGER AuditTableDDL
ON DATABASE 
FOR DDL_TABLE_EVENTS
AS   
   PRINT CONVERT (nvarchar (1000), EVENTDATA())
   ROLLBACK;

Likewise, DDL_INDEX_EVENTS will fire on CREATE, ALTER, or DROP INDEX. These groups roll up into larger groups: DDL_TABLE_VIEW_EVENTS will fire on all table, view, index, or statistics DDL.
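As a sketch of the event group syntax (the trigger name is my own), the following blocks any index DDL in the current database using the same pattern as the table trigger above:

```sql
CREATE TRIGGER AuditIndexDDL
ON DATABASE
FOR DDL_INDEX_EVENTS     -- covers CREATE, ALTER, and DROP INDEX
AS
   PRINT 'No index changes, please.'
   PRINT CONVERT (nvarchar (1000), EVENTDATA())
   ROLLBACK;
```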

The above triggers operate at database scope and only react to events in the current database. You can also create triggers at server scope to fire on CREATE LOGIN, ALTER LOGIN, or DROP LOGIN, for instance.
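A server-scoped version might look like the following sketch (note ON ALL SERVER in place of ON DATABASE; the trigger name is mine):

```sql
CREATE TRIGGER AuditLoginDDL
ON ALL SERVER
FOR CREATE_LOGIN, ALTER_LOGIN, DROP_LOGIN
AS
   PRINT CONVERT (nvarchar (1000), EVENTDATA())
   ROLLBACK;
```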

And of course you could write the trigger in C# or VB.NET, but let’s not get ahead of ourselves just yet…

The Misunderstood Mutex

Friday, August 20, 2004 by scott
13 comments

Someone should delete this article languishing in purgatory at codeproject.com. The article is full of misinformation, but look at the view count.

The article attempts to restrict a Windows application to a single running instance. It tries to do this using the Process class from the System.Diagnostics namespace. The code invokes Process.GetProcessesByName(processName) to see if there are any existing processes running with the same name, and exits if another is found. If you do some searching, you’ll find other code snippets using the same technique.

There are at least three problems with this approach:

1) It doesn’t account for race conditions. Two instances of the application could launch at nearly the same time, see each other, and both shut down.

2) It doesn’t work in terminal services, at least not if I want the application to run an instance in each login session.

3) It doesn’t account for the possibility that someone else might have a process with the same name.

These problems might be considered ‘edge cases’, except there is an easy, foolproof way to check for a running instance of an application: a named mutex, which allows multiple processes to use the same mutex object for interprocess synchronization. The author asserts a mutex is not safe on a multiprocessor machine. If this were true, it would be the end of civilization as we know it.

The name of the mutex should be a unique identifier, to reduce the chances of another application using the same name. One could choose the full name of the executing assembly, or a GUID. If the application can acquire the named mutex with WaitOne, then it is the first instance running. If the application calls WaitOne with a timeout value and WaitOne returns false, another instance of the application is running and this one needs to exit.

When using this approach in .NET there is one ‘gotcha’. The following code has a small problem:

[STAThread]
static void Main() 
{
   Mutex mutex = new Mutex(false, appGuid);
   if(!mutex.WaitOne(0, false))
   {
      MessageBox.Show("Instance already running");
      return;
   }
   Application.Run(new Form1());
}
private static string appGuid = "c0a76b5a-12ab-45c5-b9d9-d693faa6e7b9";

The problem is easy to reproduce if you run the following code in a release build:

[STAThread]
static void Main() 
{
   Mutex mutex = new Mutex(false, appGuid);
   if(!mutex.WaitOne(0, false))
   {
      MessageBox.Show("Instance already running");
      return;
   }
   GC.Collect();                
   Application.Run(new Form1());
}
private static string appGuid = "c0a76b5a-12ab-45c5-b9d9-d693faa6e7b9";

Since the mutex goes unused when the Form starts running, the compiler and garbage collector are free to conspire together to collect the mutex out of existence. After the first garbage collector run, one might be able to launch multiple instances of the application again. The following code will keep the mutex alive. (The call to GC.Collect is still here just for testing).

[STAThread]
static void Main() 
{
   Mutex mutex = new Mutex(false, appGuid);
   if(!mutex.WaitOne(0, false))
   {
      MessageBox.Show("Instance already running");
      return;
   }
         
   GC.Collect();                
   Application.Run(new Form1());
   GC.KeepAlive(mutex);
}
private static string appGuid = "c0a76b5a-12ab-45c5-b9d9-d693faa6e7b9";

There is still an imperfection in the code. Mutex derives from WaitHandle, and WaitHandle implements IDisposable. Here is one more example that keeps the mutex alive and properly disposes the mutex when finished.

[STAThread]
static void Main() 
{
   using(Mutex mutex = new Mutex(false, appGuid))
   {
      if(!mutex.WaitOne(0, false))
      {
         MessageBox.Show("Instance already running");
         return;
      }
   
      GC.Collect();                
      Application.Run(new Form1());
   }
}

With the above code I can run the application from the console, and also log into the machine with terminal services and run the application in a different session. Terminal services provides a unique namespace for each client session (so does fast user switching on Windows XP). When I create a named mutex, the mutex lives inside the namespace for the session I am running in. Like .NET namespaces, terminal services uses namespaces to prevent naming collisions.

If I want to have only one instance of the application running across all sessions on the machine, I can put the named mutex into the global namespace with the prefix “Global\”.

[STAThread]
static void Main() 
{
   using(Mutex mutex = new Mutex(false, @"Global\" + appGuid))
   {
      if(!mutex.WaitOne(0, false))
      {
         MessageBox.Show("Instance already running");
         return;
      }
   
      GC.Collect();                
      Application.Run(new Form1());
   }
}

After all this you may find you need to adjust permissions on the mutex in order to access the mutex from another process running with different credentials than the first. This requires some PInvoke work and deserves a post unto itself.

Talking Plug-ins and CFOs (Small Company Life)

Friday, August 20, 2004 by scott
1 comment
A lazy developer is a good developer. One developer I worked with had a passion for automating everything. By the time you asked Dan to do something twice, he would have a SQL script ready, or an ASP page, or a VB program that screen scraped data from web servers in 15 different time zones and sent a report to the printer with instructions for automatic collation and stapling.

This was all particularly impressive considering Dan had moved into the pointy haired legions of ‘management’.

Back in the day before Outlook had a desktop alert, Dan decided to write his own alert as an Outlook plug-in. Every time a new message arrived, the plug-in used the MS text to speech engine to read the subject line aloud.

When Dan's machine said "Error in production", Dan knew something important was up. When Dan's machine said "Halloween Dress Up Day”, Dan knew he could continue working on something important and leave the email alone.

This all worked pretty well until the childish developers in the company realized the power of speech. Dan would be having a staff meeting with 4 other people in the office when his computer would blurt out:

“Beer. Beer. Beer. Beer. Beer. Beer. Beer. Beer. Beer. Beer. Beer.“

Anyway, when we were an up-and-coming company we had to plan for our destiny. It was very important, the VCs would tell us, to be prepared for the massive incoming rush of future customers. The absolute worst scenario for an Internet company is not to have the infrastructure required to meet customer demand. In its television commercials, IBM would ridicule any company caught unprepared.

Like many companies of the late 90s, we promptly leased enough networking gear to wire the planet over twice.  

In 2001 cash became an issue. Someone decided that if we could just get out of all the leases and replace high end Cisco and Sun hardware with stuff from WalMart, then we would buy ourselves enough time for the massive incoming rush of customers to arrive.

Dan’s job was to crunch all the numbers and present a plan to the CFO. Dan knew the first answer is never acceptable in these scenarios so he came up with a complete financial model in the form of a spreadsheet. If they didn’t like the first answer then they could just change a few numbers around and the entire spreadsheet recalculated.

About 6 months later Dan told me what happened when he turned the spreadsheet over to the CFO. The CFO was excited and called Dan into his office.

CFO: Look, I can change a number here, and the number down here changes!

Dan: Yes, I thought I’d put this in a spreadsheet to try a few different things and see what works out best.

CFO: Yes, but look! I can change a number here, and the number down here changes!

Dan: Well, yeah, it’s a spreadsheet…..

CFO: But this is fantastic! I can change a number here, and the number down here changes!

I heard this story as I was packing the books at my desk into cardboard boxes. At this point nothing surprised me. In fact, being as how we were only days away from closing the door for good, maybe it was all starting to make sense.

Note: this story is not my fondest memory of the CFO…..

The Blog Delivery Extension

Tuesday, August 17, 2004 by scott
0 comments

In order for SQL Server Reporting Services to deliver a report to the destination of your choosing, you only need to create an assembly with class types implementing 3 simple interfaces: IExtension, IDeliveryExtension, and ISubscriptionBaseUIUserControl.

Then the fun really begins. I have some tips to share for anyone else who tries. Refer to the last post for the source code.

Building The Delivery Extension

My delivery extension needs to plug into both the user interface of the Report Manager (for the user to set delivery parameters), and into the ReportServerService (where all the heavy lifting and rendering takes place). After every build I had to get the assembly into the bin directory of both the Report Manager (the web application), and the ReportServerService. All this takes is a little batch file (like the following) and a command prompt with admin privileges.

net stop reportserver
 
xcopy /y (BuildPath) (SSRSHome)\ReportServer\bin
xcopy /y (BuildPath) (SSRSHome)\ReportManager\bin
 
net start reportserver

There is no need to shut down the Report Manager web application. ASP.NET shadow copies the extension assembly and leaves the original unlocked in the bin directory. The runtime recognizes when the assembly has changed and loads the new version immediately.

Configuring The Delivery Extension

The next step was to have SSRS welcome my new delivery extension with loving arms. There were four configuration files to modify in total. The RSWebApplication.config and RSReportServer.config files are easy to figure out. Just provide SSRS with the type name and the assembly name at the appropriate location in the XML:

<Extensions>
  <Delivery>
    <Extension Name="Report Server Blog" Type="OdeToCode.BlogDeliveryExtension.BlogDeliveryProvider, BlogDeliveryExtension">
      <MaxRetries>3</MaxRetries>
      <SecondsBeforeRetry>900</SecondsBeforeRetry>                         
    </Extension>
  </Delivery>
</Extensions>

The rssrvpolicy.config and rsmgrpolicy.config policy files are a different story. These files manage the security policies of SSRS. After nearly blacking out from reading the documentation on code groups for the 15th time and not having any success, I found Bryan Keller’s post with a hint on where to place extension code groups (right after the CodeGen membership group).

<CodeGroup 
  class="UnionCodeGroup"
  version="1"
  PermissionSetName="FullTrust"
  Name="Report Server Blog"
  Description="Code group for OdeToCode Blog Extension">
  <IMembershipCondition 
    class="UrlMembershipCondition"
    version="1"
    Url="C:\Program Files\Microsoft SQL Server\MSSQL\Reporting Services\ReportManager\Bin\BlogDeliveryExtension.dll"
  />
</CodeGroup>                                              

Typos or incorrect element placement can lead to a wide variety of interesting exception messages and log file entries. Any exception with “security”, “permission”, or “cannot load type” in the description is a possible user malfunction in editing the code groups.

Debug The Extension

To debug the UI behavior, I’d attach to the ASP.NET worker process. To debug the actual delivery, I’d attach to the ReportServerService, or use System.Diagnostics.Debugger.Launch() to bring up a debugger as soon as execution hits that line of code.

To execute the delivery I first needed to set up a subscription to a report. I set up a subscription to run a report every Monday morning at 8 am. I’ve got the debugger ready and just need to wait a few days now for the breakpoint to hit.

Just kidding.

There is a SQL Agent job for each schedule in SSRS. By executing the Agent job, you can trigger the subscription to fire off a delivery. Finding the right job can be problematic if you have several report subscriptions set up, but scanning the results of the following query might help.

SELECT
  RS.ScheduleID,
  C.Name,
  U.UserName,
  S.Description
FROM
  ReportSchedule RS
  INNER JOIN Subscriptions S
    ON RS.SubscriptionID = S.SubscriptionID
  INNER JOIN Users U
    ON S.OwnerID = U.UserID
  INNER JOIN [Catalog] C
    ON RS.ReportID = C.ItemID
ScheduleID                            Name              UserName           Description
B20C9057-EE51-41E2-B3B5-7450AC73FFCB  Customers Report  REPORTING\bitmask  Post report to http://ibm600xp/dottext/scott/services/simpleblogservice.asmx

That concludes the tips for now. Happy 100th post to me.

Deliver Reports To A Blog

Monday, August 16, 2004 by scott
2 comments

SQL Server Reporting Services ships with two delivery extensions: one to deliver reports through email, and one to deliver reports to a shared network drive. A third extension to deliver reports to a printer exists in the SSRS samples directory.

One day I was setting up an email subscription when a thought occurred to me: delivering reports to a blog instead of to a company email alias would be ideal. Instead of sitting in a slew of inboxes, a new delivery extension could post these reports with the blog’s web service API, and intranet users could easily comment on and link to the report. Anyone needing the report subscribes to the blog with an aggregator and knows when a new report is ready.

I learned quite a bit and have quite a number of tips to share about the experience, but I’ll have to save those for future posts. Implementing, deploying, testing, and debugging a reporting service extension involves a little more work than I initially suspected. It looks easy in theory (just implement these 3 simple interfaces!).

In the meantime, you can look at the source if you dare. It has no warranty, no guarantees, contains liberal amounts of TODO comments, and comes with no installation instructions (yet).

BlogDeliveryExtension.zip (C#)

On the plus side, it does work on the simple reports I've tested so far. I have not tried reports with images; I suspect those are going to pose a problem. I just finished adding DPAPI calls to keep the blog user's password encrypted in the ReportServerDB, and the next step is to look at the fancier reports.

Feedback and criticism welcomed.

If you find reporting to be a boring subject, then I'm sure you won't download the code, and won't offer any feedback, because you've already stopped reading this.

by K. Scott Allen