In Part I, we defined an IUnitOfWork interface to avoid coding directly against the System.Data.Linq.DataContext class. IUnitOfWork lets us work with IDataSource<T> objects instead.
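Part I's listing isn't repeated here, but as a refresher, here is a minimal sketch of roughly what that interface looks like (the GetDataSource<T> and SubmitChanges member names are my shorthand for its shape, not an exact copy of the Part I code):

```csharp
using System;

// A minimal sketch of the Part I abstraction (member names are assumed):
// the unit of work hands out IDataSource<T> objects and commits pending
// changes, mirroring the DataContext members the upper layers need.
public interface IUnitOfWork : IDisposable
{
    IDataSource<T> GetDataSource<T>() where T : class;
    void SubmitChanges();
}
```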
IDataSource<T> is another abstraction. When the software is running in earnest, we need to back an IDataSource<T> with a table in a SQL Server database. However, during unit testing we might prefer to have an IDataSource<T> backed by an in-memory data structure. Essentially, we want the ability to switch between a System.Data.Linq.Table<T> implementation (SQL Server) and a System.Collections.Generic.List<T> implementation (in-memory) - without changing the code and LINQ expressions in the upper layers of software.
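To make that goal concrete, here is a hedged example of the kind of upper-layer code we want to leave untouched. The Employee class, the EmployeeService, and the query are invented for illustration; the point is that this code only sees IUnitOfWork and IDataSource<Employee>, so either backing store will do.

```csharp
using System.Linq;

// Invented entity for illustration only.
public class Employee
{
    public string Name { get; set; }
    public string Department { get; set; }
}

// Upper-layer code: the same LINQ expression runs against SQL Server or an
// in-memory list, because it only depends on the abstractions.
public class EmployeeService
{
    private readonly IUnitOfWork _unitOfWork;

    public EmployeeService(IUnitOfWork unitOfWork)
    {
        _unitOfWork = unitOfWork;
    }

    public string[] GetNamesInDepartment(string department)
    {
        return _unitOfWork.GetDataSource<Employee>()
                          .Where(e => e.Department == department)
                          .OrderBy(e => e.Name)
                          .Select(e => e.Name)
                          .ToArray();
    }
}
```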
Fortunately, a handful of interfaces define every Table<T> object. If we implement the same interfaces, we can walk and talk just like a real Table<T>.
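The interface that matters most for querying is IQueryable<T> (which drags IEnumerable<T> along with it). So IDataSource<T> can be little more than IQueryable<T> plus the write methods we need from Table<T> - roughly like this (again, an assumption about the exact shape, not a copy of the Part I code):

```csharp
using System.Linq;

// IQueryable<T> provides everything LINQ needs (Expression, ElementType,
// Provider, enumeration); the *OnSubmit methods mirror the Table<T>
// members used for inserts and deletes.
public interface IDataSource<T> : IQueryable<T> where T : class
{
    void InsertOnSubmit(T entity);
    void DeleteOnSubmit(T entity);
}
```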
Implementing a persistent data source, one that talks to SQL Server, is as simple as forwarding the calls to an underlying Table<T> implementation.
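A sketch of that forwarding approach might look like the following (the class name and constructor are my own invention; the important part is that the IQueryable members delegate to the real Table<T>, so the LINQ to SQL provider still translates queries into SQL):

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Data.Linq;
using System.Linq;
using System.Linq.Expressions;

public class DatabaseDataSource<T> : IDataSource<T> where T : class
{
    private readonly Table<T> _table;

    public DatabaseDataSource(DataContext context)
    {
        _table = context.GetTable<T>();
    }

    // Writes forward straight to the table.
    public void InsertOnSubmit(T entity) { _table.InsertOnSubmit(entity); }
    public void DeleteOnSubmit(T entity) { _table.DeleteOnSubmit(entity); }

    // IQueryable members forward to the table so the LINQ to SQL provider
    // still sees (and translates) the full expression tree.
    public Type ElementType { get { return ((IQueryable)_table).ElementType; } }
    public Expression Expression { get { return ((IQueryable)_table).Expression; } }
    public IQueryProvider Provider { get { return ((IQueryable)_table).Provider; } }

    public IEnumerator<T> GetEnumerator() { return ((IEnumerable<T>)_table).GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}
```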
Implementing an in-memory data source is a little bit trickier, but appears possible thanks to extension methods like AsQueryable on the System.Linq.Queryable class.
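Here is a naive in-memory counterpart, assuming a List<T> is good enough to stand in for the table. AsQueryable supplies the Expression, ElementType, and Provider that IQueryable<T> demands, and LINQ to Objects does the rest:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

public class InMemoryDataSource<T> : IDataSource<T> where T : class
{
    private readonly List<T> _items = new List<T>();

    // "SubmitChanges" semantics are faked: changes are visible immediately.
    public void InsertOnSubmit(T entity) { _items.Add(entity); }
    public void DeleteOnSubmit(T entity) { _items.Remove(entity); }

    // AsQueryable wraps the list in an EnumerableQuery, whose provider
    // executes LINQ expressions against the in-memory items.
    public Type ElementType { get { return _items.AsQueryable().ElementType; } }
    public Expression Expression { get { return _items.AsQueryable().Expression; } }
    public IQueryProvider Provider { get { return _items.AsQueryable().Provider; } }

    public IEnumerator<T> GetEnumerator() { return _items.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}
```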
Of course, this naïve implementation could never support the full application, but it should be suitable for the majority of isolated unit tests.
In the next post, we'll tie everything together and see how all of these abstractions fit together.
In the meantime, read Rick Strahl's "Dynamic Expression in LINQ to SQL". Scenarios like the ones Rick presents are what make me worry that this approach will fall apart when it hits real requirements.