
Yet Another Bundling Approach for MVC 4

Wednesday, March 21, 2012 by K. Scott Allen

ASP.NET MVC 4 allows you to bundle multiple JavaScript or CSS files into a single "bundled" download, and optionally minify the bundle to reduce the download size. John Petersen has a good introduction to the feature.

I've been experimenting with an approach that lets me use the following code during application startup.

BundleTable.Bundles.Add(new RGraphBundle());

The RGraphBundle class lists all the JavaScript files needed for a certain feature, and sets the virtual path to reach the bundle.

public class RGraphBundle : JsBundle
{
    public RGraphBundle() : base("~/Rgraph")
    {
        AddFiles(
            "~/Scripts/Rgraph/RGraph.common.core.js",
            "~/Scripts/Rgraph/RGraph.common.context.js",
            "~/Scripts/Rgraph/RGraph.common.zoom.js",
            "~/Scripts/Rgraph/RGraph.common.effects.js",
            "~/Scripts/Rgraph/RGraph.line.js"
        );
    }
}

Everything else is taken care of by base classes.

public class CustomBundle : Bundle
{
    public CustomBundle(string virtualPath)
        : base(virtualPath)
    {
    }

    public void AddFiles(params string[] files)
    {
        foreach (var file in files)
        {
            AddFile(file);
        }
    }

    public void SetTransform<T>() where T : IBundleTransform
    {
        if (HttpContext.Current.IsDebuggingEnabled)
        {
            Transform = new NoTransform();
        }
        else
        {
            Transform = Activator.CreateInstance<T>();
        }
    }
}

public class JsBundle : CustomBundle
{
    public JsBundle(string virtualPath) : base(virtualPath)
    {                        
        SetTransform<JsMinify>();
    }        
}

public class CssBundle : CustomBundle
{
    public CssBundle(string virtualPath) : base(virtualPath)
    {            
        SetTransform<CssMinify>();
    }
}

Checking the IsDebuggingEnabled flag lets me turn minification on and off by toggling the debug setting in web.config, just like the ScriptManager would do in that other ASP.NET web framework.
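To reference the bundle from a view, you can resolve the bundle's virtual path with ResolveBundleUrl, which appends a version hash for cache busting. This is a sketch against the MVC 4 beta bits, so the API may change before release:

    <script src="@System.Web.Optimization.BundleTable.Bundles.ResolveBundleUrl("~/Rgraph")"></script>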

Avoiding NotSupportedException with IQueryable

Tuesday, March 20, 2012 by K. Scott Allen

Most remote LINQ providers can handle simple projections. For example, given a Movie class with lots of properties, and a MovieSummary class with a subset of those Movie properties, you can write a LINQ query like the following:

var summaries = db.Movies.Select(m => new MovieSummary {
    Title = m.Title,
    Length = m.Length
});

But it all falls apart if you try to offload some of the work to a MovieSummary constructor.

var db = new MovieDataStore();
var summaries = db.Movies.Select(m => new MovieSummary(m));

If you give the above query to the Entity Framework, for example, it will throw a NotSupportedException.

Unhandled Exception: System.NotSupportedException: Only parameterless constructors and initializers are supported in LINQ to Entities.

A LINQ provider will not know what code is inside the MovieSummary constructor, because the constructor code isn't captured in the expression tree generated by the query. The Entity Framework tries to translate everything in the LINQ query into T-SQL, but since it can't tell exactly what is happening inside the constructor call it has to stop and throw an exception.
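You can see the difference by inspecting the expression trees the compiler builds for each query (a small standalone sketch; the Movie and MovieSummary classes here are minimal stand-ins for the ones described above):

    using System;
    using System.Linq.Expressions;

    class Movie
    {
        public string Title { get; set; }
        public int Length { get; set; }
    }

    class MovieSummary
    {
        public MovieSummary() { }
        public MovieSummary(Movie m) { Title = m.Title; Length = m.Length; }
        public string Title { get; set; }
        public int Length { get; set; }
    }

    class Demo
    {
        static void Main()
        {
            // A member initializer is captured as a MemberInitExpression,
            // so a provider can inspect each property binding and translate it.
            Expression<Func<Movie, MovieSummary>> init =
                m => new MovieSummary { Title = m.Title, Length = m.Length };
            Console.WriteLine(init.Body.NodeType); // MemberInit

            // A constructor call is captured as a NewExpression. The
            // constructor body is opaque, so there is nothing to translate.
            Expression<Func<Movie, MovieSummary>> ctor =
                m => new MovieSummary(m);
            Console.WriteLine(ctor.Body.NodeType); // New
        }
    }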

One solution is to move the entire projection out of the expression tree by switching from IQueryable to IEnumerable (using the AsEnumerable LINQ operator).

var summaries = db.Movies.AsEnumerable()
                  .Select(m => new MovieSummary(m));

With this query, however, a LINQ provider won't know you only need two properties from every movie. In the case of EF, it will now bring back every column from the Movie table. For better performance and readability, you can hide the projection in an extension method instead, and make sure the extension method extends IQueryable to keep the projection inside an expression tree.

public static IQueryable<MovieSummary> ToMovieSummary(
    this IQueryable<Movie> source)
{
    return source.Select(m => new MovieSummary
    {
        Title = m.Title,
        Length = m.Length
    });
}

// and in the query ...

var summaries = db.Movies.ToMovieSummary();

With EF, the above code will only select two columns from the database to create the movie summaries.

A Simple MapReduce with MongoDB and C#

Monday, March 19, 2012 by K. Scott Allen

If you work with relational databases and someone says "data aggregation", you immediately think of a GROUP BY clause and the standard aggregation operators, like COUNT, MIN, and MAX.

MapReduce with MongoDB is also a form of data aggregation where you can take a large amount of information and aggregate (reduce) the information to some smaller amount of information. Before reducing, you have the ability to translate (map) the information into a structure designed for the custom reduction process. For more details, see Karl Seguin's fabulous work titled The Little MongoDB Book.

As an example of how to use MapReduce from C#, let's use Movie objects with Title, Category, and Minutes (length) properties.

void AddMovies(MongoCollection<Movie> collection)
{
    var movies = new List<Movie>
    {
        new Movie { Title="The Perfect Developer", 
                    Category="SciFi", Minutes=118 },
        new Movie { Title="Lost In Frankfurt am Main", 
                    Category="Horror", Minutes=122 }, 
        new Movie { Title="The Infinite Standup", 
                    Category="Horror", Minutes=341 } 
    };
    collection.InsertBatch(movies);
}

Let's say we want to find the total number of movies in each category, along with the total length and average length per category. With MongoDB we can do this with a MapReduce operation, and MapReduce requires JavaScript.

The Map

When you tell Mongo to MapReduce, the function you provide as the map function will receive each Movie as the this parameter. The purpose of the map is to exercise whatever logic you need in JavaScript and then call emit zero or more times to produce a reducible value.

For now we'll leave the JavaScript embedded in the C# code as a string, but we'll look at something nicer next week.

string map = @"
    function() {
        var movie = this;
        emit(movie.Category, { count: 1, totalMinutes: movie.Minutes });
    }";

For each movie we'll emit a key and a value. The key is the first parameter to the emit function and represents how we want to group the values (in this case we are grouping by category). The second parameter to emit is the value, which in this case is a little object containing the count of movies (always 1) and the length of each individual movie.

The Reduce

Mongo will group the items you emit and pass them as an array to the reduce function you provide. It's inside the reduce function where you want to do the aggregation calculations and reduce all the objects to a single object. We are using simple logic here, but you can make extremely complex map and reduce functions using all the power of JavaScript.

string reduce = @"        
    function(key, values) {
        var result = {count: 0, totalMinutes: 0 };

        values.forEach(function(value){               
            result.count += value.count;
            result.totalMinutes += value.totalMinutes;
        });

        return result;
    }";

The reduce function returns a single result. It's important for the return value to have the same shape as the emitted values. It's also possible for MongoDB to call the reduce function multiple times for a given key and ask you to process a partial set of values, so if you need to perform some final calculation, you can also give MapReduce a finalize function.

The Finalize

The finalize function is optional, but if you need to calculate something based on a fully reduced set of data, you'll want to use a finalize function. Mongo will call the finalize function after all the reduce calls for a set are complete. This would be the place to calculate the average length of all movies in a category.

string finalize = @"
    function(key, value){
        value.average = value.totalMinutes / value.count;
        return value;
    }";

Putting It Together

With the JavaScript in place, all that is left is to tell MongoDB to execute a MapReduce.

var collection = db.GetCollection("movies");
var options = new MapReduceOptionsBuilder();
options.SetFinalize(finalize);
options.SetOutput(MapReduceOutput.Inline);
var results = collection.MapReduce(map, reduce, options);

foreach (var result in results.GetResults())
{
    Console.WriteLine(result.ToJson());
}

Which would produce:

{ "_id" : "Horror",
  "value" : { "count" : 2.0, "totalMinutes" : 463.0, "average" : 231.5 }
}
{ "_id" : "SciFi",
  "value" : { "count" : 1.0, "totalMinutes" : 118.0, "average" : 118.0 }
}

Note that you can use GetResultsAs<T> to map the results into .NET objects of type T. You can also have MapReduce store (or merge) the computed results into a collection instead of returning inline results as we have done in the example. Creating a collection from a MapReduce operation is the ideal strategy to use when you need the results frequently. The collection will serve as a cache.
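For example, GetResultsAs<T> can hydrate the inline documents into typed objects. This is a sketch with class and property names of my own choosing; the [BsonElement] attributes map the lowercase field names the JavaScript emits, and [BsonId] picks up the "_id" key.

    using MongoDB.Bson.Serialization.Attributes;

    public class CategoryResult
    {
        [BsonId]
        public string Category { get; set; }

        [BsonElement("value")]
        public CategoryValue Value { get; set; }
    }

    public class CategoryValue
    {
        [BsonElement("count")]
        public double Count { get; set; }

        [BsonElement("totalMinutes")]
        public double TotalMinutes { get; set; }

        [BsonElement("average")]
        public double Average { get; set; }
    }

    // ... then, instead of printing raw BSON:
    foreach (var result in results.GetResultsAs<CategoryResult>())
    {
        Console.WriteLine("{0}: {1} movies, {2} minutes on average",
                          result.Category, result.Value.Count, result.Value.Average);
    }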

Debugging JavaScript with Chrome

Thursday, March 15, 2012 by K. Scott Allen

The Chrome Developer Tools are a bit quirky, but for script debugging I currently like them the best. Here is a quick brain dump on some areas of interest (these are all in the stable build 17.0.963.79).

Chrome Developer Tools

The Toolbar Under #1

The leftmost button allows you to dock and undock the tools window from the browser window.

The second button opens the Console, which is a helpful JavaScript REPL. You can execute code in the current debugging context, meaning you can manipulate the DOM, try different CSS selectors with jQuery, or call into libraries loaded on a page to figure out how an API works. Tip: to enter code that spans multiple lines, use Shift+Enter to end a line. The console also displays errors, warnings, and log messages, and provides autocompletion (the right arrow key seems to be the safest way to accept a completion, as the Enter key sometimes doesn't work; I said it was quirky).

The third button is the "click an element on the page to inspect it" button.

The fourth button toggles the break on exceptions behavior. The debugger can break on all exceptions, break on unhandled exceptions only, or ignore exceptions. Break on unhandled exceptions is a fast way to find broken code.

The fifth button is the pretty print button. If you are trying to step through minified source code to find a bug in a production library, the pretty print feature will at least give you the proper line breaks and white space to read the code. Unfortunately, local variables will still be minified and look like variables from a Fortran program circa 1972 (i, j, k).

The sixth button is the live edit button, where you can click into the source code and change it on the fly. Like the console window, this is a useful feature to have when you are still trying to figure out how things work. After you change the code, the changes are live in the browser immediately.

#2 Area: Breakpoints

Like any debugger you can break on specific lines of code. You can also break into code starting an AJAX request (and break only on specific URLs). There are also event listener breakpoints. An event listener breakpoint is useful when you are trying to find who is responding to a click event, for example. Events include timer events, like setTimer, clearTimer, and the timer tick event handler.

The tools also provide DOM breakpoints, which I use when I'm trying to find who is responsible for changing something in the DOM. With an element selected in the Elements tab, you can right-click and set up a breakpoint if an attribute on the element changes, if someone adds or removes a descendant element, or if someone removes the element from the DOM.

Profiles (#3)

Clicking on the profiles button will bring you to a tab with two primary features – the ability to start CPU profiling, and the ability to take a heap snapshot (once you are on the tab, look for the buttons in the #1 toolbar area).

CPU profiling helps you find the functions where a page is spending most of its time.


The heap snapshots are thorough, but it can take some work to go from seeing there is an array holding 10MB of data to figuring out the who, what, where, when, and why.

The Timeline tab also surfaces some interesting visualizations, particularly if you are troubleshooting a slow page load.

Settings (#4)

The gear to the right of the big #4 is where you can change the settings for the tools. There aren't many settings, but you'll find the Disable Cache option useful.

Endless Appalachia

Wednesday, March 14, 2012 by K. Scott Allen

I've spent most of my life living in a valley of the Appalachian mountains. This isn't a hotbed for technology, by any means. Most people associate the culture of Appalachia with clan feuds, banjos, fiddles, moonshine, hillbillies, poverty, and a dirty black combustive rock known as coal.

When I'm not working at home I have to drive underneath a bridge carrying a section of the 2,100 mile Appalachian trail and continue east, away from the mountains, for an hour or more. I'll eventually reach an office or an airport around Baltimore or Washington D.C. Some people there think us western folk are backwards.

They might think Appalachia is backwards, but I think of Appalachia as comfortable. Black walnut trees and white-tailed deer. Lazy creeks and limestone bridges.

It's an area full of history, and yes, it is resistant to change. While other mountain ranges tried to outdo each other and grew to threatening heights, these old mountains became gentle and folded. They instill a sense of permanence into everything cradled within their range. I remember feeling this sense of permanence even as a carefree 8 year old boy wandering the forests around the towpath of the C&O canal. Something says to you "we've been here long before you came, and we'll be here long after you've gone". It is reassuring, not ominous. You are a guest, not an intruder.

When all hell is breaking loose in the technology and politics of the more civilized world, and when every day brings new changes and challenges, I have a place to retreat and recoup. I can walk into the endless forests of Appalachia and be an 8 year old boy.

Carefree again ... at least for a little while.

Abstractions For MongoDB

Tuesday, March 13, 2012 by K. Scott Allen

I've been working with a base class like the following to dig data out of MongoDB with the 10gen driver (install-package mongocsharpdriver).

public class MongoDatastore : IDisposable
{
    protected MongoDatastore()
    {
        _db = new Lazy<MongoDatabase>(Connect);
    }

    protected MongoDatabase DB
    {
        get { return _db.Value; }
    }

    protected MongoDatabase Connect()
    {
        var server = MongoServer.Create("mongodb://lookup your server");
        var database = server.GetDatabase("lookup your database");
        return database;
    }

    public void Dispose()
    {
        if (_db.IsValueCreated && _db.Value != null)
        {
            _db.Value.Server.Disconnect();
        }
    }

    private readonly Lazy<MongoDatabase> _db;        
}

From there I can layer on a class with properties for each collection (or even a projection of a collection).

public class MongoSession : MongoDatastore
{
    public MongoCollection<Product> Products
    {
        get { return DB.GetCollection<Product>("products"); }
    }
    ...
}

And from there I only need to install-package FluentMongo and the LINQ queries are ready to go.

var product = session.Products.AsQueryable()
                     .First(m => m.ID == _id);

6 Ways To Avoid Mass Assignment in ASP.NET MVC

Monday, March 12, 2012 by K. Scott Allen

One of the scenarios I always demonstrate during an ASP.NET MVC class is how to create a mass assignment vulnerability and then execute an over-posting attack. It was a mass assignment vulnerability that led to a severe problem on GitHub last week.

Let's say you have the following model.

public class User
{
    public string FirstName { get; set; }
    public bool IsAdmin { get; set; }
}

When you want to let a regular user change their first name, you give them the following form.

@using (Html.BeginForm()) {
    @Html.EditorFor(model => model.FirstName)
    <input type="submit" value="Save" />
}

There is no input in the form to let a user set the IsAdmin flag, but this won't stop someone from crafting an HTTP request with IsAdmin in the query string or request body. Maybe they saw the "IsAdmin" name somewhere in a request displaying account details, or maybe they just got lucky and guessed the name.

composing the attack

If you use the MVC model binder with the above request and the previous model, then the model binder will happily move the IsAdmin value into the IsAdmin property of the model. Assuming you save the model values into a database, then any user can become an administrator by sending the right request. It's not enough to leave an IsAdmin input out of the edit form.

Fortunately, there are at least 6 different approaches you can use to remove the vulnerability. Some approaches are architectural, others just involve adding some metadata or using the right API.

Weakly Typed Approaches

The [Bind] attribute will let you specify the exact properties a model binder should include in binding (a whitelist).

[HttpPost]
public ViewResult Edit([Bind(Include = "FirstName")] User user)
{
    // ...
}

Alternatively, you could use a blacklist approach by setting the Exclude parameter on the attribute.

[HttpPost]
public ViewResult Edit([Bind(Exclude = "IsAdmin")] User user)
{
    // ...
}

If you prefer explicit binding with the UpdateModel and TryUpdateModel API, then these methods also support whitelist and blacklist parameters.

[HttpPost]
public ViewResult Edit()
{
    var user = new User();
    TryUpdateModel(user, includeProperties: new[] { "FirstName" });
    // ...
}

Strongly Typed Approaches

TryUpdateModel will take a generic type parameter. You can use the generic type parameter and an interface definition to restrict the model binder to a subset of properties.

[HttpPost]
public ViewResult Edit()
{
    var user = new User();
    TryUpdateModel<IUserInputModel>(user);

    return View("detail", user);
}

This assumes your interface definition looks like the following.

public interface IUserInputModel
{
    string FirstName { get; set; }
}

Of course, the model will also have to implement the interface.

public class User : IUserInputModel
{
    public string FirstName { get; set; }
    public bool IsAdmin { get; set; }
}

There is also a [ReadOnly] attribute the model binder will respect. ReadOnly metadata might be what you want to use if you never want to bind the IsAdmin property. (Note: I remember ReadOnly not working in MVC 1 or MVC 2, but it is working in 3 and the 4 beta).

public class User 
{
    public string FirstName { get; set; }

    [ReadOnly(true)]
    public bool IsAdmin { get; set; }
}

An Architectural Approach

One of many architectural approaches to solve the problem is to always put user input into a model designed for user input only.

public class UserInputViewModel
{
    public string FirstName { get; set; }
}

In this approach you'll never bind against business objects or entities, and you'll only have properties available for the input you expect. Once the model is validated you can move values from the input model to the object you use in the next layer of software.
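The POST action then binds only against the input model and copies values across explicitly. A sketch, where the user lookup and save calls are placeholders for your own data access:

    [HttpPost]
    public ActionResult Edit(UserInputViewModel input)
    {
        if (!ModelState.IsValid)
        {
            return View(input);
        }

        var user = _repository.GetCurrentUser(); // placeholder lookup
        user.FirstName = input.FirstName;        // only the expected input moves across
        _repository.Save(user);                  // IsAdmin is never bound or touched

        return RedirectToAction("Detail");
    }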

Whatever approach you use, remember to treat any data in an HTTP request as malicious until proven otherwise.