Some say DOM scripting will end in fire,
Some say in Silverlight.
Still others say with much desire,
That either will suffice.
(apologies to Robert Frost).
The arguments I hear go like this: if a developer can build on a cross-platform CLR with a diverse selection of languages and still interop with the browser, then why would the developer stoop into the primordial ooze of HTML and script against a document object model that has more eccentricities than the city of San Francisco?
In other words, if I can write the following in C#:
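Something along these lines - a sketch only, assuming the HtmlPage / HtmlElement browser-bridge types, since the exact interop API has shifted between builds:

```csharp
using System.Windows.Browser;

public class ImageSwapper
{
    public void Swap()
    {
        // Everything funnels through the weakly typed HtmlElement,
        // with hard-coded strings for both the element id and the
        // attribute name.
        HtmlElement image = HtmlPage.Document.GetElementById("logoImage");
        image.SetAttribute("src", "https://odetocode.com/odetocode.png");
    }
}
```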
... then why bother doing dynamic HTML using JavaScript? The tools built for C# code are far, far superior to anything written for JavaScript. Even when running a bare Visual Studio with no plugins - I have better refactoring tools, a better debugging experience, and (even in 2008) – better Intellisense when using C#. There are class diagrams, code snippets, and code browsers. Not to mention the support of a base class library that includes many features you won't find all in one place with JavaScript – real meat and potatoes stuff like string formatting and collection classes.
Certainly paints a bleak picture for JavaScript, doesn't it?
Well, one reason Silverlight doesn't replace JavaScript is that Silverlight doesn't run everywhere. If you want to reach the widest possible audience, including cell phones with a script interpreter - you'll still be giving JavaScript some love.
Let's take the best Silverlight scenario, however:
You are a hard core C# / VB.NET developer. You don't like JavaScript and you never want to work inside a paired set of <script> tags. You are writing an application that you will deliver to users who have the Silverlight 1.1 plugin installed, and all the boilerplate JavaScript code needed to bootstrap the plugin is encapsulated in a control (like the ASP.NET Futures Xaml control).
Do you ever need to touch JavaScript?
My answer is ... maybe (I'm hedging my bets because I don't know where Silverlight will be one year from now in terms of features).
There are some actions you can only perform in JavaScript (throwing up an alert box is one example that comes to mind). Reviewing the earlier code - is the code isolated from cross-browser quirks? No. Is the API pretty? No - at least not when (in my eyes) compared to JavaScript, and not even if we take away all the hard-coded strings.
The real question is: are we gaining anything by writing this code in C# instead of the browser's native JavaScript? I have to wonder. Funneling everything through a single type like HtmlElement feels awkward.
There are two things that could happen to Silverlight that would make me say - "certainly yes". #1 would be a strongly typed, discoverable DOM API - imagine being able to write:
document.getElementByID<HtmlImage>("id").src = "https://odetocode.com/odetocode.png";
I doubt this is actually in the game plan for Silverlight, as in a way it doesn't serve to further the main purpose of Silverlight, and it's already been done by many other libraries (that are mostly written in JavaScript!).
#2 - unit testing support - would be a great feature (and by unit testing I mean tools that are easy to run and integrate with a build engine, like NUnit and MbUnit, and not tools that require a browser and some amount of integration pain, like Selenium or JsUnit). The former kind of tool is especially important because Silverlight does not yet offer an automation API, so the runtime testability story is still weak. For unit testing, I know Jamie has already written a "Test With Silverlight" option for TD.NET, but it's still only ad hoc testing as far as I can see. If Microsoft makes it easy to host the runtime outside the browser, and provides some built-in fakes for the DOM objects, we could be off to the races.
The current Silverlight 1.1 alpha will eat exceptions. I’m not sure if the final version will behave similarly, but if you are working with the alpha don’t let this behavior surprise you.
For example, consider the following event handlers that listen to a shape’s mouse events:
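Something like the following - a sketch assuming a XAML Rectangle named "shape" with both handlers attached:

```csharp
using System;
using System.Windows.Input;
using System.Windows.Shapes;

public partial class Page
{
    void Shape_MouseLeftButtonDown(object sender, MouseEventArgs e)
    {
        // The 1.1 alpha swallows this exception - no dialog,
        // no crash, no trace of the error anywhere.
        throw new InvalidOperationException("You'll never see this");
    }

    void Shape_MouseLeftButtonUp(object sender, MouseEventArgs e)
    {
        // This handler still runs normally afterwards.
        Rectangle shape = (Rectangle)sender;
        shape.Width *= 1.1;
        shape.Height *= 1.1;
    }
}
```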
The left mouse button goes down and … nothing happens. There is no indication of an error.
When the left mouse button goes up … the shape will grow in size. Silverlight is still running with the attitude of a Broadway director. Despite the setback - the show must go on.
If you have some mysterious behavior and are not catching exceptions inside event handlers, a good start might be to go into the Visual Studio Exceptions dialog (Debug -> Exceptions) and configure the debugger to break as soon as code throws an exception (the default setting is to only break on a user-unhandled exception). This might help locate the problem.
There is one area where an unhandled exception will stop the show. Consider the following user control:
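A sketch, using the 1.1 alpha's pattern of loading a control's XAML from the constructor via InitializeFromXaml - the only important detail is the exception escaping the constructor:

```csharp
using System;
using System.Windows.Controls;

public class FaultyControl : Control
{
    public FaultyControl()
    {
        InitializeFromXaml("<TextBlock Text=\"Hello\"/>");

        // Any exception escaping the constructor surfaces to the host
        // as a XAML "parser error" - not as the exception thrown here.
        throw new InvalidOperationException("boom");
    }
}
```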
If we place this faulty user control into the Page.xaml file that the plugin loads - Silverlight will tell us there is a parser error. This tends to make one look inside the .xaml file for malformed XML, when in fact the problem is that Silverlight can't instantiate the object requested in XAML because of an exception thrown from inside the default constructor.
Ajaxian linked to a reference implementation of ECMAScript 4 today. ECMAScript 4 (a.k.a. JavaScript) is still a work in progress. When the work is finished, the new standard will be the first major update to the language since 1999.
The language overview whitepaper is 40 pages of ambition – iterators, pragmas, packages, namespaces, serialization, generics, annotations, non-nullable variables - and the list goes on.
Here is some code I was toying with:
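Roughly the following, reconstructed to match the session below - a non-dynamic class with typed fields, in ES4's proposed syntax (this is not valid ECMAScript 3):

```javascript
// Point.es4 - ES4 proposal syntax, not today's JavaScript.
// Classes are non-dynamic by default, so you cannot bolt new
// properties onto an instance, and the int annotations on the
// fields are enforced at runtime.
class Point {
    var x : int;
    var y : int;
}
```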
Here is the code running in the reference implementation:
PS> .\es4
>> intrinsic::load('Point.es4')
>> var p = new Point();
>> p.x = 10;
10
>> p.y = 15;
15
Note that the following lines will create errors:
>> p.foo = "error: cannot add property to a non-dynamic object";
>> p.x = "error: incompatible types";
Wow! This is not the small, dynamic language that I've grown fond of this year. JavaScript is everywhere now – and I wonder how long it will take the various implementations to work out all the kinks in this standard.
ECMAScript is going from 0 to C++++ in a single release.
Phil and Dino Esposito have been talking about the RESTful aspects of the upcoming ASP.NET MVC framework.
You can read about REST in a PhD thesis, but I think Tonic captures the essence of a RESTful architectural style from the perspective of a web application developer:
One could treat each customer of the company as a resource. Locating a customer is as easy as formulating the proper URL, while the standard HTTP verbs (GET, POST, PUT, DELETE) identify what sort of operation we wish to perform. Witness a URL like:
http://Astoria/northwind/Customers[ALFKI]
This is what we'll see in ADO.NET Data Services (Astoria).
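Under that scheme the verb carries the intent - hypothetical requests, following the early Astoria URL convention above:

```text
GET    /northwind/Customers[ALFKI]   retrieve the ALFKI customer
POST   /northwind/Customers          create a new customer
PUT    /northwind/Customers[ALFKI]   update the ALFKI customer
DELETE /northwind/Customers[ALFKI]   delete the ALFKI customer
```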
Proponents of REST (the RESTafarians) have made a lot of noise over the last few years in the SOA and WS-* space. Why? In short because SOAP and the WS-* standards are now large bodies of work that spawn large toolsets – not everyone wants or needs the complexity.
When building a web site, we don't debate the merits of RPC versus REST, but we do debate the merits of friendly, hackable, and predictable URLs (like https://odetocode.com/articles/aspnetmvc/) versus scary, opaque, and unanticipated URLs (like https://odetocode.com/articles.aspx?c=5F9C-2FZ). We say the friendly URL is RESTful, while the other URL is still just scary.
REST has a place in web applications then, and not just in data services. However, I don't think we want to start our web applications with a resource modeling session.
Besides – I think friendly URLs are a byproduct of the new framework, and not, as Dino hints, its raison d'être.
In traditional web forms, ASP.NET maps incoming requests to the file system. Employee.aspx, for example, is a form we could use to list employees, update employees, delete employees, and create new employees. Employee.aspx is both the view (aspx) and the controller (code-behind) - and the two are inextricably bound.
The catch is that the "controller" does not have control in this scenario. The controller is tightly coupled to a single view - and can only act in response to the view's lifecycle events. The "view as addressable resource" model just doesn't work for MVC, no matter how you spin the story.
The MVC framework is, I believe, about putting control in the hands of a web developer. Control over the routing of HTTP requests to components. Control over selecting the view. Control over state management, and control over the outgoing HTML. Breaking the "one URL for every view" paradigm is necessary for MVC, and coincidently gives a developer total control over the addressable resources in an application.
In other words - RESTful URLs are possible because the framework uses a true MVC model, and not vice-versa.
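To make the "control over routing" point concrete, route registration in the early preview bits looks roughly like this. Treat it as a sketch under the assumption of the first CTP's Route type - the names and shapes of this API are still changing between previews:

```csharp
using System.Web.Mvc;
using System.Web.Routing;

public static class RouteConfig
{
    public static void RegisterRoutes()
    {
        // The developer, not the file system, decides what the
        // addressable URLs look like: /articles/show/5 and friends.
        RouteTable.Routes.Add(new Route
        {
            Url = "[controller]/[action]/[id]",
            Defaults = new { action = "Index", id = (string)null },
            RouteHandler = typeof(MvcRouteHandler)
        });
    }
}
```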
I can't believe how easy it was...
Last week the bottom third of the display on my X60 tablet became distorted and wavy. Applying a gentle pressure to the bottom left corner would fix the display. Something was loose, obviously, and I can't just sit around all day squeezing my tablet.
At 9:30 A.M. Monday, I opened up a support case on the ThinkPad's EasyServ site. I was mentally prepared to give up my tablet for 8 weeks, and have my request lost in the pneumatic tube system that dispatches repair orders to an IBM facility where spider monkeys are trained to jab violently at broken hardware with Phillips screwdrivers until a banana appears.
By 11:00 A.M. on Thursday, my tablet was back and working perfectly! The standard warranty covered both the repair, and the overnight shipping between Maryland and Memphis, Tennessee.
In a world where customer service consistently disappoints - three cheers for Lenovo and IBM for exceeding expectations. They've earned themselves a repeat customer.
Matt Kohut displays an Atwood-esque enthusiasm for hardware at Lenovo Blogs Inside The Box. Take a look inside the Beijing Desktop Testing Lab, see pictures of a Ruggedized PC, read the usability studies behind Battery Indicator Light Behavior, and drink in the honest tone of "Junk" in the Preload.
Due to popular demand, here are some answers to the questions.
Well, not answers exactly ... just some pointers to get you headed in the right direction...
1. Are there advantages to building workflows using only XAML? Are there disadvantages?
See Keith Elder's blog on "Leveraging Workflow Foundation", particularly the section on storing workflow definitions in the database.
Also, see Matt Milner's Templates for Windows Workflow XAML activation projects
2. What are the pros and cons of using an ExternalDataExchange service versus going directly to the WorkflowQueuingService?
Sam Gentile: Windows Workflow 102
3. When are attached dependency properties useful in WF programming?
See: Dependency Property Notes
4. What behavior does the default scheduling service provide?
Hosting Windows Workflow
5. How can my code participate in a database transaction with a workflow instance?
Advanced Workflow: Workflows and Transactions
6. Why would I use a tracking service?
Follow the links in Tomas Restrepo's WF Tracking Services Explained.
7. Describe a scenario where the WF runtime will cancel an executing activity.
I'd be looking to hear, for example, what happens inside a Listen activity. Also, see Matt W's Implementing the M of N Pattern in WF
8. Describe a scenario where I'd need to spawn a new ActivityExecutionContext.
Matt Milner: ActivityExecutionContext in WF.
9. Tell me why I'd use a compensation handler.
Guy Burstein explains the essence: Compensating and Fault Handling.
10. Tell me about the following activities: Replicator, Parallel, and Policy.
Advanced Workflow: Replicator Tips and Tricks
Kirk Allen Evans: Understanding ParallelActivity in WF
Sahil: Composite Activities in WF.