After reading Jerry Pournelle’s column (Chaos Manor) in last month’s Dr. Dobb’s Journal, I decided to give Xandros a try. A hard-core Linux distribution this is not. Any Unix software which creates a “My Documents” directory is sure to make a true Linux fan sputter obscenities. I’m not a hard-core Linux fan, and I think Xandros is nice and clean. What really irritates me about RedHat and the others is all of the garbage they install by default. Do I really need 4 different web browsers on my desktop?
I now have mono up and running again. It took some work. Xandros is supposed to be for the relative computer newbie. As such, the free version doesn’t come with a cvs client or any other development tools (except a C++ compiler). However, I think starting clean and adding only what I needed is what let everything work on the first try, unlike previous attempts at building mono from source, which only resulted in googling for obscure error messages and coming up empty.
I had to install the following to get mono running on Xandros:
pkgconfig-0.8.0
glib-2.0.6
gc6.alpha5
autoconf-2.59
automake-1.9
bison-1.875
icu (note: change CFLAGS from -O2 to -O3 in icudefs.mk to work around a GCC optimization bug, argh).
libtool-1.5.10
mono-1.0.2
mcs-1.0.2
Each piece of software is installed using the typical ./configure, make, make install process. I had to add /usr/local/lib to ld.so.conf (then run ldconfig to refresh the dynamic linker cache), and wow, it finally all works!
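As a quick sanity check that the whole toolchain works end to end, a trivial C# program (a hypothetical hello.cs) should compile with mcs and run with mono:

// hello.cs -- nothing fancy, just proof that the compiler and runtime work.
// Compile:  mcs hello.cs
// Run:      mono hello.exe
using System;

class Hello
{
    static void Main()
    {
        Console.WriteLine("mono 1.0.2 is alive on Xandros");
    }
}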
Fast forward to 2004. Everyone is obsessed with building “secure” software. Some say we can never be secure with Microsoft technologies.
If we look at what has happened over the last five years, perhaps we can see what might happen in the next five.
1) For scalability, expectations have leveled. In 1999 everyone believed they were building the next eBay and Amazon. Today, nobody is over-hyping the ability to go from 0 to huge overnight. I don’t foresee the same leveling off of expectations for secure software. If anything, the expectation is just starting to grow.
2) Security, interoperability, and flexibility have pushed scalability off the radar screen. On Aaron Skonnard's list, scalability is not even in the top 5 qualities to look for in enterprise software. Most shops with limited resources have to carefully select the qualities to focus on - we can’t have it all. Looking ahead, security and interoperability are here to stay for the next five years.
3) Scalability is easier today because of the platform and runtime. We moved from loosely-typed, interpreted pages to strongly-typed, compiled pages. In some ways, the underlying platform lost features that were abused (think of ADO and server-side cursors). Security in the next five years has to get easier too. It is currently hard to run as a non-admin. It is currently hard to run assemblies without full trust. The platform and runtime can get better in these areas, but they won’t make security completely painless.
4) Scalability is easier because of the tools. It’s easy to set performance benchmarks and measure scalability with software. It’s easy to catch possible bottlenecks with syntactic analysis of an application: a cursor declaration in a SQL query, a static constructor in a class. Many security problems require the tougher task of semantic analysis. Where is the application saving a plaintext password? Is this string concatenation putting user input into a SQL string? These will be tough problems to uncover, and security tools will need to embed additional expertise and heuristics to find them.
5) Scalability education has been constant over the last five years. Follow some guidelines and work with the framework instead of against it, and you’ll have a decently scalable system. Don’t write your own connection pooling and thread pooling libraries – chances are the ones in place will work for 95% of the applications being developed. Security education, on the other hand, is still in its infancy. You’ll find a good example on any given day in the newsgroups. Someone will post a solution which opens and disposes a database connection in an efficient manner, but somewhere in between concatenates a textbox value into a SQL string (a sketch of the problem and the fix follows this list). This education takes some time.
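To make that last point concrete, here is a rough sketch of the two habits side by side (the table, column, and control names are invented for illustration):

// Assumes the usual using directives (System.Data, System.Data.SqlClient),
// a TextBox named txtName, and a connectionString -- all hypothetical names.

// The risky habit: the textbox value is concatenated straight into the SQL text.
string query = "SELECT * FROM Users WHERE Name = '" + txtName.Text + "'";

// The safer habit: the value travels as a parameter, never as executable SQL.
using (SqlConnection connection = new SqlConnection(connectionString))
{
    SqlCommand command = new SqlCommand(
        "SELECT * FROM Users WHERE Name = @name", connection);
    command.Parameters.Add("@name", SqlDbType.NVarChar, 50).Value = txtName.Text;

    connection.Open();
    using (SqlDataReader reader = command.ExecuteReader())
    {
        // read the results here
    }
}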
In five more years I think we will be better off, but not completely there. We moved from being obsessed with scalability to expecting it out of the box in five years. However, after five more years we will still be obsessed with writing secure software, because security will still be difficult. As they say on Wall Street, past performance is not indicative of future returns.
Any problem in computer science can be solved with another layer of indirection. – Butler Lampson
Since the dawn of the HttpModule class in .NET 1.0, we’ve seen an explosion of URL rewriting in ASP.NET applications. Before HttpModule, URL rewriting required an ISAPI filter written in C++, but HttpModule brought the technique to the masses.
URL rewriting allows an application to target a different internal resource than the one specified in the incoming URI. While the browser may say /vroot/foo/bar.aspx in the address bar, your application may process the request with the /vroot/bat.aspx page. The foo directory and the bar.aspx page may not even exist. Most of the directories and pages you see in the address bar when viewing a .Text blog do not physically exist.
From an implementation perspective, URL rewriting seems simple:
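A minimal sketch of the idea, assuming the /vroot application from above and a made-up module name, is to catch BeginRequest in an HttpModule and call RewritePath (real implementations usually read their URL-to-page map from configuration):

using System;
using System.Web;

// A bare-bones rewriting module. The class name and the hard-coded mapping
// are hypothetical -- just enough to show where RewritePath fits.
public class SimpleRewriteModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.BeginRequest += new EventHandler(this.OnBeginRequest);
    }

    private void OnBeginRequest(object sender, EventArgs e)
    {
        HttpApplication application = (HttpApplication)sender;
        string path = application.Request.Path.ToLower();

        // Requests for /vroot/foo/bar.aspx are really handled by /vroot/bat.aspx,
        // even though foo and bar.aspx never exist on disk.
        if (path == "/vroot/foo/bar.aspx")
        {
            application.Context.RewritePath("/vroot/bat.aspx");
        }
    }

    public void Dispose()
    {
    }
}

Register the module in the <httpModules> section of web.config and every incoming request flows through it before ASP.NET picks a handler.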
If you want a comprehensive article on how and why to implement URL rewriting, see Scott Mitchell’s article “URL Rewriting in ASP.NET”, although I believe there is a simpler solution to the post-back problem (see the end of this post). I’m still looking for an ASP.NET topic on which Scott Mitchell has not written an article. If anyone finds one, let me know. Scott’s a great writer.
Conceptually, URL rewriting is a layer of indirection at a critical stage in the processing pipeline. It allows us to separate the view of our web application from its physical implementation. We can make one ASP.NET web site appear as 10 distinct sites. We can make one ASPX web form appear as 10,000 forms. We can make perception appear completely different from reality.
URL rewriting is a virtualization technique.
URL rewriting does have some drawbacks. Some applications take a little longer to figure out when they use URL rewriting. When the browser says an error occurred on /vroot/foo/bar.aspx, and there is no file with that name in solution explorer, it’s a bit disconcerting. As with most programs driven by meta-data, you might spend more time in a configuration file or database table learning what the application is doing than in the code itself. This is not always intuitive, and for maintenance programmers it can lead to the behavior seen in Figure 1.
Figure 1: Maintenance programmer who has been debugging and chained to a desk for months, finally snaps. It's the last time that machine will do any URL rewriting.
Actually, I’m not sure what the behavior in figure 1 is. It might be a Rob Zombie tribute band in Kiss makeup, or some neo-Luddite ritual, but either way it is too scary to look at and analyze.
One important detail about URL rewriting is often left out: if your form will POST back to the server, then during the destination page’s Page_Load event you should call RewritePath again to point back to the original URL. This does not redirect the request; it simply lets ASP.NET set the action attribute of the page’s form tag to the original URL, which is where you want the post-back to go.
Context.RewritePath(Path.GetFileName(Request.RawUrl));
Without this line, the illusion shatters, and people rant. I learned this trick from the Community Starter Kit, which is a great example of the virtualization you can achieve with URL rewriting.
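For context, here is a rough sketch of where that line might live in the destination page’s code-behind (BatPage is a made-up class name, and Page_Load is wired up by the usual AutoEventWireup convention):

using System;
using System.IO;
using System.Web.UI;

// Code-behind for bat.aspx, the page that really serves the rewritten request.
public class BatPage : Page
{
    private void Page_Load(object sender, EventArgs e)
    {
        // Point the path back at the file name the browser originally requested
        // (bar.aspx), so the rendered form's action attribute resolves to
        // /vroot/foo/bar.aspx on post-back instead of exposing the internal page.
        Context.RewritePath(Path.GetFileName(Request.RawUrl));
    }
}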
John Woods stopped in yesterday to clarify some of the finer points of the nasal demon effect. John has another contribution to the world of knowledge:
"SCSI is *NOT* magic. There are *fundamental technical reasons* why you have to sacrifice a young goat to your SCSI chain every now and then."