Sometimes people will approach me and ask “what is it like to be a software craftsman?”. I’ll usually answer with: “Pffft, don’t ask me, I just cranked out 500 lines of script that are harder to read than Finnegans Wake”. At times, though, I like to imagine what it might be like to be a software craftsperson.
For example, a few Saturdays ago I was excited about a new product and found myself chipping away at some features that would require parsing XML files. XML like the following.
<?xml version="1.0" encoding="utf-8" ?>
<SomeDocument xmlns="urn:my.org-v159"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <Container>
    <Section id="10399">
      <!-- other stuff that might have a Section -->
    </Section>
  </Container>
</SomeDocument>
Except, the real XML isn’t like the XML listed here.
The XML listed here is like an orc in an animated Disney movie. It has a funny pink nose and buck teeth making it look more vulnerable than dangerous.
The real XML is like an orc in a movie derived from a J.R.R. Tolkien novel. It is ugly, angry, and nearly incomprehensible when communicating. Its creator is a Standards Committee committed to enslaving humans, elves, software developers, and dwarves. It is also just one orc in an army of repugnant orcs that cover a small continent.
Using this simple example though, let’s pretend the first goal is to retrieve the value of the id attribute in the Section element.
public class TheDocument
{
    public TheDocument(XDocument document)
    {
        Id = int.Parse(document
                        .Element(_myOrg + "SomeDocument")
                        .Element(_myOrg + "Container")
                        .Element(_myOrg + "Section")
                        .Attribute("id").Value);
    }

    public int Id { get; protected set; }

    readonly XNamespace _myOrg = "urn:my.org-v159";
}
At this point a real craftsperson might realize that the id value is one piece of data in 376 total pieces of data that must be retrieved from the orc army. Since this one data point required 5 lines of code, the total processing would require 1,880 lines of code, which sounds excessive. Moreover, 1880 was a leap year, and leap years always create bugs in software, so the number itself is a bad omen and a sign that more work needs to be done. At least this is how I imagine a craftsperson to think.
I also imagine a real craftsman knows syntax and APIs pretty well, and in areas they don’t know well they will dig into documentation and figure out better ways of doing things. In this case using a different query method and the implicit conversion operator of an XAttribute cuts the LoC per data point to 4.
Id = (int)document
        .Descendants(_myOrg + "Section")
        .First()
        .Attribute("id");
However, 4 * 376 is 1,504, and 1504 was also a leap year. Coincidence? I don’t believe a craftsperson believes in coincidence, but they might believe in numerology, and regardless of their superstitions they certainly believe in error conditions. A craftsperson will understand the probability of receiving a proper XML document is low, because the Standards Committee, in an attempt to provide the ability to describe every possible variation of orc and sub-orc, built an XML schema so complex and full of malice that even the most sophisticated code generation tools will puke electronic bits on the floor when fed the first 5,000 lines of the associated schema file.
A craftsperson will know then, that it is a good idea to look at the types of error messages the software might produce if, for example, the Section element doesn’t exist.
Unhandled Exception: System.InvalidOperationException: Sequence contains no elements
These are the types of error messages that make debugging software like debugging a 2-month-old baby. You know the baby is unhappy because no living being makes these types of noises when content, but the baby can’t tell you exactly what is wrong; it can only communicate with primitive shrieks that keep you awake at night.
A real craftsperson, I imagine, knows the business as well as the domain and the technology. And a real craftsperson will realize these types of blaring baby error messages will occur commonly and will never be solved without the assistance of a developer armed with a stack trace and source code. It is then in the best interest of the business to spend additional time to craft a better solution.
But how to solve the problem?
A real craftsperson, I imagine, might take a step back and start to think about alternative approaches. A real craftsperson has more than a few years of experience under their belt, and will remember stories from a past age when orcs first became a tangible nuisance. It was then when the elders traveled to the Standard Mountain and forged a mighty blade. Behold! Its name is XPath the Orc Slayer.
document.XPathEvaluate("number(mo:SomeDocument/mo:Container/mo:Section/@id)")
XPathEvaluate is a strange beast. While the documentation for other XML related APIs drones on for paragraphs about the minutiae of an XML Infoset, the documentation for XPathEvaluate consists of a single terse sentence.
Evaluates an XPath expression.
Regular developers like me look at documentation like this and scoff. What sort of laziness is this? I already know this method evaluates an XPath expression because that’s the name of the method for Lórien’s sake! But I’ve always had a suspicion that software craftspeople maintain a cabal and communicate through a series of secret signs. Documentation like this must be a secret sign leading to a powerful tool. Because powerful tools can’t just fall from the sky like rain, they have to be hidden in plain sight so that only powerful people can find them, and only in times of darkness and dire need. Thus, a real software craftsperson will instantly recognize XPathEvaluate as a useful but too-generic tool that needs a little bit of gift wrapping to provide real value in an application.
First, a namespace resolver for the XPath expressions. A namespace resolver is basically a lookup table for the XPath engine to discover real namespace values.
class MyOrgXmlNamespaceResolver : IXmlNamespaceResolver
{
    public MyOrgXmlNamespaceResolver()
    {
        _namespaceMap = new Dictionary<string, string>()
        {
            { "mo", "urn:my.org-v159" },
            { "xsi", "http://www.w3.org/2001/XMLSchema-instance" }
        };
    }

    public IDictionary<string, string> GetNamespacesInScope(XmlNamespaceScope scope)
    {
        return _namespaceMap;
    }

    public string LookupNamespace(string prefix)
    {
        return _namespaceMap[prefix];
    }

    public string LookupPrefix(string namespaceName)
    {
        // return the prefix (the key), not the namespace value
        return _namespaceMap.First(kvp => kvp.Value == namespaceName).Key;
    }

    private IDictionary<string, string> _namespaceMap;
}
Also, a specialized exception class.
public class XmlParsingException : Exception
{
    public XmlParsingException(string xpath)
        : base(String.Format(NoLocate, xpath))
    {
    }

    public XmlParsingException(string xpath, Exception innerException)
        : base(String.Format(Problem, xpath), innerException)
    {
    }

    const string NoLocate = "Could not locate {0}";
    const string Problem = "Problem locating {0}, see inner exception for details";
}
And finally, some syntactic sugar to dress up XPathEvaluate like a tailor would.
public static class XmlHelpers
{
    public static T XPathToValue<T>(this XNode node, string xpath)
    {
        try
        {
            var queryResult = node.XPathEvaluate(xpath, _namespaceResolver) as IEnumerable;
            if (queryResult != null)
            {
                var firstResult = queryResult.OfType<XObject>().FirstOrDefault();
                if (firstResult != null)
                {
                    string value = "";
                    if (firstResult is XAttribute)
                    {
                        value = ((XAttribute)firstResult).Value;
                    }
                    else if (firstResult is XElement)
                    {
                        value = ((XElement)firstResult).Value;
                    }
                    var converter = TypeDescriptor.GetConverter(typeof(T));
                    return (T)converter.ConvertFrom(value);
                }
            }
        }
        catch (Exception ex)
        {
            throw new XmlParsingException(xpath, ex);
        }
        throw new XmlParsingException(xpath);
    }

    static readonly IXmlNamespaceResolver _namespaceResolver = new MyOrgXmlNamespaceResolver();
}
A real software craftsperson, I think, would start to worry because the amount of code here (as well as the cyclomatic complexity) is a bit much. After all, it took only 4 lines of brute force code to retrieve a single integer value from the orc army. But a software craftsperson, I imagine, is always looking for the tangible value in a piece of code, and there are two ways to see if this code provides any value. The first test of value is to consume the code.
public class TheDocument
{
    public TheDocument(XDocument document)
    {
        Id = document.XPathToValue<int>("mo:SomeDocument/mo:Container/mo:Section/@id");
    }

    public int Id { get; protected set; }
}
The consumption test passes. A developer can now focus on slaying orcs instead of XML APIs. The next test is to run the code, particularly with bad data, like a missing Section element.
Could not locate mo:SomeDocument/mo:Container/mo:Section/@id
Unlike the previous blaring baby error message (sequence contains no elements), this error is grown up. Also, it will allow a developer to switch from a mindset of “is this a bug in my code?” to “this is probably a bug in your XML!”, and upon further inspection 99% of the time there will be a bug in the XML, and the developer can point out the error with an email like the following.
Version 3 revision 2402 of the specification (the only one we officially support) clearly states that the id attribute of the Section element must exist inside a Container of SomeDocument, except for the circumstances documented on pages 507-512, 693, 701, and the entirety of Appendix E. This is obvious to most people, but since this is the third time you’ve sent an XML file with this same error YOU ARE CLEARLY A MORON.
I’m told that some developers write emails like these because they are filled with hubris, but I don’t believe everything I hear.
However, I do believe that a software craftsperson will think the above approach has some merit, because the code creates an easier API than XPathEvaluate and reduces the number of blaring baby error messages. The orcs must be trembling, but I think a software craftsperson will continue to look at the big picture and realize the code still has two problems.
With regard to the second problem, the code will sometimes need to parse integers and strings from both attributes and elements. The code also needs to parse individual elements, and sometimes a collection of elements. To make things even trickier, some information is required, and some information is optional. All of this might point a real software craftsperson to using an extension method as a simple gateway to an object that can encapsulate and manage more complexity.
public static class XmlHelpers
{
    public static XPathEvaluator XPath(this XNode node, string xpath)
    {
        return new XPathEvaluator(node, xpath);
    }
}
The XPathEvaluator is responsible for parsing different types of values, and throwing exceptions when required information isn’t present.
public class XPathEvaluator
{
    public XPathEvaluator(XNode node, string xpath)
    {
        _node = node;
        _xpath = xpath;
        _required = true;
        _rawQueryResult = ExecuteExpression();
    }

    public XPathEvaluator Optional()
    {
        _required = false;
        return this;
    }

    public XElement Element()
    {
        var result = _rawQueryResult.OfType<XElement>().FirstOrDefault();
        Validate(result);
        return result;
    }

    public IList<XElement> Elements()
    {
        var result = _rawQueryResult.OfType<XElement>().ToList();
        Validate(result);
        return result;
    }

    public T Value<T>()
    {
        T result = default(T);
        try
        {
            var firstEntry = _rawQueryResult.FirstOrDefault();
            if (firstEntry != null)
            {
                var rawResult = GetRawResult(firstEntry);
                Validate(rawResult);
                return ConvertResult<T>(rawResult);
            }
        }
        catch (Exception ex)
        {
            throw new XmlParsingException(_xpath, ex);
        }
        if (_required)
        {
            throw new XmlParsingException(_xpath);
        }
        return result;
    }

    private static T ConvertResult<T>(string rawResult)
    {
        var converter = TypeDescriptor.GetConverter(typeof(T));
        return (T)converter.ConvertFrom(rawResult);
    }

    private string GetRawResult(XObject firstEntry)
    {
        string rawResult = null;
        if (firstEntry != null)
        {
            if (firstEntry is XAttribute)
            {
                rawResult = ((XAttribute)firstEntry).Value;
            }
            else if (firstEntry is XElement)
            {
                rawResult = ((XElement)firstEntry).Value;
            }
        }
        return rawResult;
    }

    void Validate(XElement element)
    {
        if (_required && element == null)
        {
            throw new XmlParsingException(_xpath);
        }
    }

    void Validate(IList<XElement> elements)
    {
        if (_required && (elements == null || !elements.Any()))
        {
            throw new XmlParsingException(_xpath);
        }
    }

    void Validate(string value)
    {
        if (_required && value == null)
        {
            throw new XmlParsingException(_xpath);
        }
    }

    IList<XObject> ExecuteExpression()
    {
        try
        {
            var result = (IEnumerable)_node.XPathEvaluate(_xpath, _namespaceResolver);
            return result.OfType<XObject>().ToList();
        }
        catch (Exception ex)
        {
            throw new XmlParsingException(_xpath, ex);
        }
    }

    readonly XNode _node;
    readonly string _xpath;
    bool _required;
    IEnumerable<XObject> _rawQueryResult;
    static IXmlNamespaceResolver _namespaceResolver = new MyOrgXmlNamespaceResolver();
}
This is quite a bit of code, but there are many orcs on the field of battle, and now developers can fight them with relatively simple code.
public class TheDocument
{
    public TheDocument(XDocument document)
    {
        Id = document.XPath("mo:SomeDocument/mo:Container/mo:Section/@id").Value<int>();
        Name = document.XPath("mo:SomeDocument/mo:Container/mo:Section/@name").Value<string>();
        Documentation = document.XPath("mo:SomeDocument/mo:Documentation").Optional().Value<string>();
        Container = document.XPath("mo:SomeDocument/mo:Container").Element();
        Extras = document.XPath("mo:SomeDocument/mo:Extra").Elements();
        Comments = document.XPath("mo:SomeDocument/mo:Comment").Optional().Elements();
    }

    public int Id { get; protected set; }
    public string Name { get; protected set; }
    public string Documentation { get; protected set; }
    public XElement Container { get; protected set; }
    public IList<XElement> Extras { get; protected set; }
    public IList<XElement> Comments { get; set; }
}
This is the type of thought process I imagine a software craftsperson might have, but I don’t know for sure.
It could all be rubbish.
And I might be an orc.
UPDATE: The next public class will be the week of September 8th in Oslo, Norway.
UPDATE 2: AngularJS: Get Started is now available to Pluralsight subscribers. This course is a small but focused subset of the full class.
At the end of last year I put together and taught a 2-day workshop on AngularJS fundamentals at NDC London, which, due to popular demand, I’m offering as part of a larger class for ProgramUtvikling. Feel free to contact me if you would like an on-site workshop, although my bandwidth for custom training is scarce.
From animations to testing and everything in between, this course covers the features of AngularJS with a focus on practical scenarios and real applications. We will see how to build custom services, directives, filters, and controllers while maintaining a separation of concerns with clean JavaScript code. Hands-on labs will reinforce concepts.
The outline of topics:
Some of the material is based on blog posts here on OTC.
In previous posts (listed below), we saw how the UserManager class in ASP.NET Identity provides the domain logic for an identity and membership system. Your software makes calls into a UserManager object to register and log in users, and the UserManager will then call into a UserStore object to persist and retrieve data.
Microsoft’s UserStore class uses the Entity Framework for persistence. If you don’t like or can’t use Microsoft’s UserStore class, then implementing the storage interfaces for ASP.NET Identity is easy.
The goal of a custom storage class is to provide the basic CRUD operations required by the features your application needs. Since the storage features are factored into granular interfaces, your storage class can pick and choose the interfaces it needs to implement.
Here are the current interfaces you can choose from.
The one required interface in the identity system is IUserStore. In the 2.0 alpha release, a TKey generic parameter allows you to specify the type of the identifier / primary key for a user, which was assumed to be a string in v1.0. This interface has 5 simple CRUD requirements:
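From memory, those five members are CreateAsync, UpdateAsync, DeleteAsync, FindByIdAsync, and FindByNameAsync. A rough sketch of the shape follows; the exact signatures are reconstructed from memory, so check them against the Identity assembly you are compiling against.

    // Approximate shape of IUserStore<TUser> (a sketch from memory, not the
    // authoritative definition in Microsoft.AspNet.Identity.Core).
    public interface IUserStore<TUser> : IDisposable where TUser : class, IUser
    {
        Task CreateAsync(TUser user);                  // add a new user
        Task UpdateAsync(TUser user);                  // save changes to an existing user
        Task DeleteAsync(TUser user);                  // remove a user
        Task<TUser> FindByIdAsync(string userId);      // a TKey in the 2.0 alpha
        Task<TUser> FindByNameAsync(string userName);
    }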
These methods each require < 10 lines of code, and the count of 10 includes checks for valid parameters and premature disposal. Each method only needs to forward data to a data framework or existing API. For example, a simple implementation of CreateAsync with the Entity Framework might look like the following (where _users is a DbSet<TUser>).
public Task CreateAsync(User user)
{
    user.Id = Guid.NewGuid();
    _users.Add(user);
    return _db.SaveChangesAsync();
}
And with MongoDB, users would probably be stored in a MongoCollection:
public Task CreateAsync(User user)
{
    user.Id = ObjectId.GenerateNewId();
    _db.Users.Insert(user);
    return Task.FromResult(0);
}
All other interfaces derive from IUserStore and add additional functionality. The following interfaces also take TUser and TKey generic type arguments. The generic arguments are omitted from the headings for aesthetic reasons.
Implement IUserPasswordStore in your custom user store if you want to store users with local hashed passwords. The interface will force you to implement methods to set, retrieve, and check for the existence of a user’s password hash.
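A minimal sketch of those three members, assuming (as in the earlier examples) a User entity that exposes a PasswordHash property:

    // Sketch of the IUserPasswordStore members in a custom store,
    // assuming the User entity carries a PasswordHash property.
    public Task SetPasswordHashAsync(User user, string passwordHash)
    {
        user.PasswordHash = passwordHash;
        return Task.FromResult(0);
    }

    public Task<string> GetPasswordHashAsync(User user)
    {
        return Task.FromResult(user.PasswordHash);
    }

    public Task<bool> HasPasswordAsync(User user)
    {
        return Task.FromResult(user.PasswordHash != null);
    }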
Implement IUserLoginStore if you want to store 3rd party user logins, like logins from Twitter, Facebook, Google, and Microsoft. Again, the interface only requires some simple CRUD operations.
Implement IUserClaimStore to manage System.Security.Claim information for users with three simple methods.
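A sketch of what an implementation might look like, assuming the User entity keeps its claims in a Claims collection of simple UserClaim records (both names are mine, purely for illustration):

    // UserClaim is a hypothetical entity: just a Type and a Value owned by a User.
    public Task<IList<Claim>> GetClaimsAsync(User user)
    {
        IList<Claim> claims = user.Claims
            .Select(c => new Claim(c.Type, c.Value))
            .ToList();
        return Task.FromResult(claims);
    }

    public Task AddClaimAsync(User user, Claim claim)
    {
        user.Claims.Add(new UserClaim { Type = claim.Type, Value = claim.Value });
        return Task.FromResult(0);
    }

    public Task RemoveClaimAsync(User user, Claim claim)
    {
        var matches = user.Claims
            .Where(c => c.Type == claim.Type && c.Value == claim.Value)
            .ToList();
        foreach (var match in matches)
        {
            user.Claims.Remove(match);
        }
        return Task.FromResult(0);
    }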
Implement IUserRoleStore if you want to associate roles with each user. There are a total of 4 methods required.
The IRoleStore interface, like IUserStore, is a storage API with CRUD operations for role management. You’ll want to implement this interface and pass it to the ASP.NET Identity RoleManager.
IUserStore goes with UserManager; IRoleStore goes with RoleManager.
The IRoleStore interface requires 4 operations.
The security stamp is best explained in a stackoverflow.com answer from team member Hao Kung.
… this is basically meant to represent the current snapshot of your user's credentials. So if nothing changes, the stamp will stay the same. But if the user's password is changed, or a login is removed (unlink your google/fb account), the stamp will change. This is needed for things like automatically signing out users/rejecting old cookies when this occurs, which is a feature that's coming …
The IUserSecurityStampStore interface requires only two methods.
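They are just a getter and a setter for the stamp. A sketch, assuming the User entity stores the stamp in a SecurityStamp string property:

    // Sketch of the IUserSecurityStampStore members, assuming the User
    // entity has a SecurityStamp property.
    public Task SetSecurityStampAsync(User user, string stamp)
    {
        user.SecurityStamp = stamp;
        return Task.FromResult(0);
    }

    public Task<string> GetSecurityStampAsync(User user)
    {
        return Task.FromResult(user.SecurityStamp);
    }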
New in Identity 2.0 are the abilities to confirm users via a token and allow for users to reset their password. To offer both features you’ll want these 2 interfaces. When combined they require the following operations.
When you create a new instance of a UserManager you pass in an object implementing at least IUserStore and 0 or more of the other IUser*Store interfaces. If you ask the user manager to do something that isn’t supported (like by calling FindByEmailAsync when your custom user store doesn’t support IUserEmailStore), the manager object throws an exception.
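For example, the following sketch wires a hypothetical MongoUserStore (a made-up name for a store implementing IUserStore and IUserPasswordStore, but not IUserEmailStore) into a UserManager. Creating a user works, while the email lookup blows up at runtime because the store can’t support it.

    public async Task RegisterUser()
    {
        // MongoUserStore is hypothetical: it implements IUserStore<User> and
        // IUserPasswordStore<User>, but not IUserEmailStore<User>.
        var userManager = new UserManager<User>(new MongoUserStore());

        // Works - creating a user only needs the user store and password store.
        await userManager.CreateAsync(new User { UserName = "sallen" }, "P@ssw0rd!");

        // Throws at runtime - the store doesn't implement IUserEmailStore,
        // so the manager cannot satisfy the call.
        var user = await userManager.FindByEmailAsync("sallen@example.com");
    }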
There are a few OSS projects out there already providing IUser*Store implementations with various storage mechanisms. You can NuGet these implementations into a project, or peruse the source to get an idea of how to implement a custom user store.
Tugberk Ugurlu has an implementation for RavenDB: AspNet.Identity.RavenDB
Daniel Wertheim has an implementation for CouchDB / Cloudant
InspectorIT has an implementation for MongoDB: MongoDB.AspNet.Identity
Stuart Leeks just published a store using Azure Table Storage: AspNet.Identity.TableStorage
ILMServices has an implementation for RavenDB, too: RavenDB.AspNet.Identity
Antônio Milesi Bastos has built a user store using NHibernate: NHibernate.AspNet.Identity
Bombsquad AB provides a user store for Elastic Search: Elastic Identity
aminjam built a user store on top of Redis: Redis.AspNet.Identity
cbfrank has provided T4 Templates to generate EF code for a “database first” user store: AspNet.Identity.EntityFramework
Most of us never see the $injector service in Angular until there is a problem with dependencies, at which point we’ll see an error message like the following.
Uncaught Error: [$injector:unpr] Unknown provider: someServiceProvider <- someService
It is the Angular $injector that knows how to invoke functions by analyzing their dependencies and fetching the proper service instances to pass as parameters. Angular creates a single $injector when it bootstraps an application and uses that single $injector to invoke controller functions, service functions, filter functions, and any other function that might need dependencies as parameters.
The above error would occur if a function asks for a service named “someService”, but “someService” is not a service the injector knows about (because, perhaps, the service wasn’t registered correctly).
You can use the $injector service, like any service, just by asking for the $injector by name.
var someFunction = function($rootScope, $http) {
  return "called!";
};

app.run(function($rootScope, $injector) {
  $rootScope.annotations = $injector.annotate(someFunction);
  $rootScope.message = $injector.invoke(someFunction);
});
The above code demonstrates two capabilities of the $injector.
1. You can use the annotate API to discover the dependency annotations for an injectable function (the above code would list “$rootScope” and “$http” as annotations).
2. You can use the invoke API to execute an injectable function and have the $injector pass in the proper services.
In some cases you might find it useful to create your own $injector instead of using the injector created by Angular during application startup. As an example, creating your own injector is useful in unit tests where you do not want singleton service instances. You can create your own injector using the angular.injector method.
var injector = angular.injector(["ng"]);

var someFunction = function($http) {
  // ...
};

injector.invoke(someFunction);
You must pass a list of the modules the injector will work with (just the core “ng” module in the above code). You have to explicitly list the ng module if you are going to use any services from the core of Angular. Unlike the angular.module method, which assumes you have a dependency on the ng module and will silently add “ng” to your list of dependencies, the injector function makes no assumptions about dependent modules.
Identity and membership systems are difficult to implement because there is such a wide variety of business needs and technology requirements for these systems. Out of the box solutions for identity management will never make everyone happy and are guaranteed to make somebody angry.
Here are a few thoughts, pointers, and links revolving around customization options with the new ASP.NET Identity features.
But first, previous posts in this series:
The sweet spot for ASP.NET Identity is a greenfield application with minimal architectural constraints. If you are comfortable taking a dependency on the Entity Framework and aren’t trying to apply domain driven design to a complex problem space, the Identity framework is a quick approach to storing usernames, passwords, and logins.
By deriving from IdentityUser you can store custom profile properties per user. By deriving from IdentityDbContext you can add relationships between members and other data in a system.
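For example, a minimal sketch (the property and entity names here are made up, just to show the idea):

    public class ApplicationUser : IdentityUser
    {
        // custom profile data stored with each user
        public string DisplayName { get; set; }
        public DateTime? MemberSince { get; set; }
    }

    public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
    {
        public ApplicationDbContext() : base("DefaultConnection")
        {
        }

        // application entities living in the same database as the identity tables
        public DbSet<Order> Orders { get; set; }
    }

    public class Order
    {
        public int Id { get; set; }
        public string PlacedByUserId { get; set; }
    }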
Migrating existing applications to use the Identity framework is a bit trickier. You’ll need to provide some custom mapping for EF to work with an existing schema, and also manage hashed passwords in a way that the Identity password hasher understands (the hasher is extensible through an IPasswordHasher typed property on the UserManager class). Recently, a few articles appeared to help with this scenario:
The Identity framework doesn’t yet include all of the features of the previous membership frameworks, so you should perform a gap analysis before heading in this direction.
Based on feedback from the first release of the Identity framework, there is already a prerelease of Identity 2.0. Most notable in this release is the addition of an IUser<TKey> type, where TKey is the type of the primary key / identifier for a user. IUser<TKey> is useful for anyone who doesn’t like the default string type for a primary key. More information is available in the blog post Announcing preview of Microsoft.AspNet.Identity 2.0.0-alpha1, where you can also see the following improvements.
There are a number of good reasons to use the Identity framework’s UserManager class while implementing your own user store persistence logic.
For starters, implementing your own persistence logic in a scenario where you just need users with passwords, but not roles, claims, or 3rd party logins, is relatively straightforward. The UserManager can work with a class only implementing IUserStore and IUserPasswordStore, which require a total of 8 CRUD type methods. Following this path has a few benefits.
The next post in this series will look at building a custom user store both with a relational and non-relational database.
Brock Allen’s analysis: The good, the bad and the ugly of ASP.NET Identity
Khalid Abuhakmeh’s analysis: ASP.NET MVC 5 Authentication Breakdown and ASP.NET MVC 5 Authentication Breakdown : Part Deux
A database project template from Konstantin Tarkus to work with Identity in a DB first approach: ASP.NET Identity Database
A customization article by John Atten: Code-First Migration and Extending Identity Accounts in ASP.NET MVC 5 and Visual Studio 2013
Brock Allen’s Membership Reboot
Thinktecture’s IdentityServer 2
The directives and controllers that AngularJS automatically associates with <form> and <input> elements offer quite a bit of functionality that is not apparent until you dig into the source code and experiment.
For example, in this plunkr I’ve created a simple HTML form with two inputs.
<div ng-controller="registerController">
  <form name="registerForm" ng-submit="register()">
    <input name="username" type="text" placeholder="Username"
           ng-model="username" required ng-minlength="3" />
    <input name="password" type="password" placeholder="Password"
           ng-model="password" required />
    <input type="submit" ng-disabled="registerForm.$invalid" />
    <div>{{message}}</div>
    <pre>{{ registerForm | alljson }}</pre>
  </form>
</div>
Notice the form has a name. Giving a form a name means Angular will automatically associate the form with registerController’s $scope object. The associated property will have the same name as the form (registerForm) and contain information about input values, clean and dirty flags, as well as valid and invalid flags.
The plunkr will dump out the data available about the form using an alljson filter, defined below. I’m using a custom filter because the built-in Angular json filter purposefully ignores object properties that start with $ (like $invalid).
app.filter("alljson", function() {
  return function(o) {
    return JSON.stringify(o, null, 4);
  };
});
The output will look like the following (some information omitted for brevity).
{
  "$name": "registerForm",
  "$dirty": true,
  "$pristine": false,
  "$valid": true,
  "$invalid": false,
  "username": {
    "$viewValue": "sallen",
    "$modelValue": "sallen",
    "$pristine": false,
    "$dirty": true,
    "$valid": true,
    "$invalid": false,
    "$name": "username",
    "$error": {
      "required": false,
      "minlength": false
    }
  },
  "password": {
    "$viewValue": "123",
    "$modelValue": "123",
    "$pristine": false,
    "$dirty": true,
    "$valid": true,
    "$invalid": false,
    "$name": "password",
    "$error": {
      "required": false
    }
  }
}
In addition to managing these properties, Angular manipulates CSS classes in the DOM and provides extensibility points with model formatters and parsers. We’ll look at these features in a future post.
In a previous post (Core Identity), we saw how the .Core identity assembly provides interfaces for describing the data access needs of a membership and identity system. Core also provides a UserManager class with the domain logic for identity management.
The .EntityFramework identity assembly provides concrete implementations for the core interfaces.
Here are 5 things to know about how it all works together.
If you use File –> New Project to create an MVC 5 application with the “Individual User Accounts” security option, the new project template will spit out all the code needed for users to register, login, and logoff, with all information stored into a SQL Server database.
The new identity bits do not support some of the features included with membership providers in the years past, features like counting invalid login attempts and lockouts, but the extensibility is in place and the current implementation has some clean separations, so perhaps they’ll be in by default in the future.
Remember the UserManager is the domain logic, and the UserManager needs (at a minimum) an IUserPasswordStore to persist users and passwords. Thus, the way the default AccountController constructs a UserManager is by passing in a new UserStore, which implements IUserPasswordStore in addition to the other core identity interfaces for persisting claims, roles, and 3rd party logins.
new UserManager<ApplicationUser>(
    new UserStore<ApplicationUser>(
        new ApplicationDbContext()))
It turns out that UserStore also has a dependency, a dependency on an EF DbContext class, and not just any context but one that derives from IdentityDbContext. IdentityDbContext provides all of the EF code-first mapping and DbSet properties needed to manage the identity tables in SQL Server. The default “new project” code provides an ApplicationDbContext that derives from IdentityDbContext with the idea that you’ll add your own DbSet properties for the entities, tables, and overall data that your application needs and keep everything in the same database.
In short, an identity specific DbContext plugs into the concrete user store, which then plugs into the user manager.
Once all three are together you have an identity system that supports third party logins as well as local accounts.
The WebAPI and Single Page Application project templates also support user registration and password logins, but in these templates the AccountController is an API controller that issues authentication tokens instead of authentication cookies. Because the identity management work happens inside both the AccountController and inside Katana middleware, a UserManager factory is responsible for creating the user manager that both the middleware and API controller share.
public static Func<UserManager<IdentityUser>> UserManagerFactory { get; set; }
This static property is in the Startup.Auth.cs file that holds the Katana Startup configuration class. The actual factory function is initialized in this class, also.
UserManagerFactory = () => new UserManager<IdentityUser>(new UserStore<IdentityUser>());
This code uses the default constructor for the UserStore class, which will create a new instance of an IdentityDbContext object since an object isn’t supplied. If you want to use your own IdentityDbContext derived class, like the MVC 5 project does, you can modify the above initialization code and pass in your own context.
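A sketch of that change, where MyIdentityDbContext is a made-up name for a context class of your own deriving from IdentityDbContext<IdentityUser>:

    // MyIdentityDbContext : IdentityDbContext<IdentityUser> is your own context class.
    UserManagerFactory = () =>
        new UserManager<IdentityUser>(
            new UserStore<IdentityUser>(new MyIdentityDbContext()));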
Side note: the SPA application template produces more than 2700 lines of code to get started. Not a large amount, but there are structural design issues (like the static UserManagerFactory) that require a healthy amount of rework for real applications. My personal advice is to use the template to get some ideas, but throw the code away and start from scratch for production applications.
By default, IdentityDbContext uses a connection string named “DefaultConnection”, and all the new project templates will include a DefaultConnection connection string in the project’s web.config file. The connection string points to a SQL Local DB database in the AppData folder.
To change the SQL Server database being used, you can change the value of the connection string in web.config. You can also pass a different connection string name into the DB context constructor.
A new MVC 5 project (not the SPA or WebAPI templates) provides an IdentityModels.cs file with the following two classes.
public class ApplicationUser : IdentityUser
{
}

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
{
    public ApplicationDbContext()
        : base("DefaultConnection")
    {
    }
}
Remember ApplicationDbContext is the context used to initialize a UserStore for a UserManager. The context will already include Users and Roles properties it inherits from IdentityDbContext, but you can add additional properties to store movies, books, accounts, employees, or whatever an application needs to solve a problem.
The ApplicationUser class includes Id, Username, PasswordHash, and other properties it inherits from IdentityUser. You can add additional properties here to store additional profile information about a user.
For simple applications this all works well, but for more complex applications you again probably want to use the starting project template code only for inspiration and start your own implementation from scratch. The names, structure, and code organization all have a prototype code feel and aren’t production ready.
In a real application you’ll have to decide if you want to mingle your data context with IdentityDbContext. One issue to be aware of is that the UserStore class does not play well when using the unit of work design pattern. Specifically, the UserStore invokes SaveChanges in nearly every method call by default, which makes it easy to prematurely commit a unit of work. To change this behavior, change the AutoSaveChanges flag on the UserStore.
var store = new UserStore<ApplicationUser>(new ApplicationDbContext());
store.AutoSaveChanges = false;
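With the flag turned off, the surrounding unit of work decides when to commit. A sketch of the idea (the important part is that SaveChangesAsync runs once, at the end):

    public async Task RegisterUserInUnitOfWork()
    {
        var context = new ApplicationDbContext();
        var store = new UserStore<ApplicationUser>(context) { AutoSaveChanges = false };
        var userManager = new UserManager<ApplicationUser>(store);

        await userManager.CreateAsync(
            new ApplicationUser { UserName = "sallen" }, "P@ssw0rd!");

        // ... other changes tracked by the same context ...

        // the unit of work commits once, here
        await context.SaveChangesAsync();
    }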
The new Identity system is easy for greenfield projects with no existing users, but quite a bit trickier if you have an existing schema or don’t use the Entity Framework. Fortunately, the separation of domain logic in the UserManager and persistence logic behind IUserStore and friends is a fairly clean separation, so it is relatively easy to implement a custom persistence layer. This is one of the topics we’ll cover in a future post.