
Thoughts on 'Are We Done Yet?'

Wednesday, September 4, 2019 by K. Scott Allen

A classic question in software is "are we done yet?". This is not just a question for software developers. Anyone who is working in IT needs to ask the question when writing code, testing a feature, upgrading a network, or designing a user experience.

There is no perfect answer to the question. Every individual, every team, every business scenario will have a different answer, and you’ll only know in hindsight whether the answer was good.

However, we need to make sure we ask the question and think about the answer.

When Is Fast Too Fast?

Most of us work in a business that wants to move fast. We feel a lot of pressure to release software as quickly as possible and move on to the next task. We descend on tasks like peregrine falcons - moving quickly and finishing jobs at breakneck speeds.

Falcon

There’s a danger in moving too fast. We take shortcuts, we rely on short term thinking, and we accumulate technical debt. Moving too fast today will force us to go slower tomorrow. We’ll spend more time tracking down bugs that are hard to find, and it takes longer to make changes in brittle code.

We need to avoid future slowdowns by moving at a sustainable pace. A sustainable pace requires us to change our definition of "done".

Two Types of Requirements

Most teams tend to focus on functional requirements. A functional requirement is how we think about the behavior of a system. For example – "must import an HL7 XML file", or "must compute the customer discount". A job can’t be done until the functional requirements are satisfied.

A business might also include some non-functional requirements. Non-functional requirements describe a specific characteristic or quality of a system. For example – "must comply with U.S. section 508 accessibility laws", or "must maintain a median response time less than 3 seconds".

Good development teams will go beyond just the requirements handed over by the business. Here are three specific non-functional requirements that we often sacrifice so we can move faster. However, without these requirements we are going to move slower in the future. The pace is unsustainable.

Maintainability

We need to build maintainable software. The code we write today might be around for many years. I have a 20-year-old Web Forms application still in production. We need to make sure code is clean and, in the right places, extensible.

Deployability

Most clouds make it easy for one developer to publish software onto a platform. Unfortunately, it then becomes easy for the entire team to treat that one developer as The Person Who Does Every Install, and never bother automating the install so that anyone can deploy the software at any time. The irony is that DevOps platforms and the cloud have never been easier to automate. Good teams will ruthlessly automate every step.

Observability

Nothing is more time consuming than bugs that leave behind no clues. Instead of fixing problems, teams can spend days hypothesizing about what could be going wrong. Today's micro-service architectures with distributed messages make the scenarios even more difficult. There are, however, good solutions. Teams have to take the time to make software observable, and use telemetry frameworks that know how to correlate cross-process activities.

Are We Done Yet?

Most of us should say our software isn’t good enough for the first release into production until it is maintainable, deployable, and observable. To achieve these goals we want to:

  • Schedule more regular design and code reviews.

  • Automate deployments to the point where someone who is not on the team can deploy the app with a minimum amount of instruction.

  • Include error scenarios in test plans. Users should see friendly error pages with a transaction code they can capture. Errors in any page, controller, form, or API endpoint need to generate diagnostic information in a queryable data source.
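The last bullet can be sketched as ASP.NET Core middleware. This is a minimal, hypothetical example (the middleware name, message format, and logging details are my own assumptions, not from the post): every unhandled error gets a transaction code the user can report, and the same code appears in the logs.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;

public class TransactionCodeMiddleware
{
    private readonly RequestDelegate _next;
    private readonly ILogger<TransactionCodeMiddleware> _logger;

    public TransactionCodeMiddleware(RequestDelegate next,
        ILogger<TransactionCodeMiddleware> logger)
    {
        _next = next;
        _logger = logger;
    }

    public async Task Invoke(HttpContext context)
    {
        try
        {
            await _next(context);
        }
        catch (Exception ex)
        {
            // One code ties the friendly error page to the log entry,
            // so a support call can be matched to a stack trace.
            var transactionCode = Guid.NewGuid().ToString("N");
            _logger.LogError(ex, "Unhandled error, transaction {Code}", transactionCode);
            context.Response.StatusCode = 500;
            await context.Response.WriteAsync(
                $"Something went wrong. Reference code: {transactionCode}");
        }
    }
}
```

The logged transaction code becomes the key for querying diagnostic data later.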

Question and Answers from Pluralsight LIVE! 2019

Wednesday, August 28, 2019 by K. Scott Allen

Here are some questions, with answers, from my interview session at Pluralsight LIVE!.

How did you get started in .NET programming?

I was working for a Microsoft shop when Microsoft first released .NET. At the time I was writing COM components in C++. I read about .NET in MSDN Magazine (now sadly defunct), and I wanted to be the first person at my company to understand .NET and be able to lead the charge in a new direction.

Pluralsight LIVE!

You've been teaching C# and .NET topics for a long time. What keeps you in the .NET ecosystem?

The .NET ecosystem has always been a good ecosystem, although there were some dark days when Silverlight died, and Sinofsky announced the Win8 development stack, which made no mention of .NET. I'm happy the ecosystem is back and healthier than ever. Teams inside Microsoft embrace .NET Core, and we are seeing a .NET renaissance.

I’ve stayed here because the ecosystem is familiar, yes, but also exciting again. .NET is also not as fragmented and disjointed as other ecosystems, and I still think .NET has the best tools and can be the most productive.

C# 8 is almost here. What's a feature you're excited about?

I hope development teams embrace non-nullable reference types. I’ve been using non-nullable references in a new project, and I’m seeing why developers rave about languages, like F#, where you don’t have to spend as much time reasoning about nulls. You might have to work a bit harder at first, but there’s a certain amount of complexity that disappears from your thinking, and from the code itself.

What was one of the biggest technical hurdles you've ever had? How did you overcome it?

The recurring hurdle in my career has been data. If only we could write software without worrying about data! I’ve always worked on projects where even getting to the data is a challenge. Real data is always dirty and messy, so one tough job is cleaning up data for the software to work.

Another recurring hurdle has been mapping data. I've spent most of my time in the mortgage industry and the healthcare industry, and in both places there is always difficulty in moving data from one system to another. You need to change identifiers and lookup codes. You also need to change the shape of the data, sometimes matching two pieces together, sometimes splitting one piece apart.

Another hurdle is the volume of data, especially these days. How do you move large datasets from one source to another? I feel there is an iron triangle in the extract, load, and transform operations. You can be flexible. You can be fast. You can be cheap. But, you can only pick two of those three qualities.

Salt Lake City at Night

I don't think I've ever overcome the data hurdle, so to speak. Software is about tradeoffs. For the complex problems in real-world software you'll never find perfect solutions. What you hope to find is a solution that removes some of the problems you want to avoid, and gives you a set of problems you are willing to deal with.

A good example is picking a database technology. Do you need flexibility in how to evolve the schema? Pick a document database, but understand the problems you'll face with reporting and aggregations. Does the business require strict consistency? Pick a relational database, but understand the challenges you'll face in scalability and object-relational mapping. Want the best of both worlds? Send your writes to a relational database and query from a document database, but understand the new set of problems stemming from the additional complexity of maintaining two data stores. Software is all about tradeoffs!

What does YOUR learning plan look like? When you discover a new technology and want to learn it what's your approach?

I need to be hands on. I’ll give you a recent example where I was gearing up to learn Apache Spark. I read quite a bit about Spark at a high level. I watched some videos. I browsed some blog posts. I understood the types of problems Spark is trying to solve and how Spark relates to other technologies like Hadoop. But, I learned more about Spark in 30 minutes after I installed the software locally and worked through some examples in the Spark shell, which is like a REPL for Spark.

By the way, REPLs are a fantastic tool for learning and exploring a technology or language.

Alex Honnold - Free Solo!

What's a good technique for someone struggling to learn a technology to help get over a hump?

Have patience and be persistent! Something I’ve realized over the years is how much I continue to learn about technologies I think I already know as time passes. I believe the learning happens because my perspective changes over time and I can see the technology in a different light.

An example would be delegates in C#. I found delegates to be confusing and difficult to learn when I started with C#, and based on what I hear from my students, this is true for nearly everyone. Over the years I learned about other languages, like Python and Ruby and JavaScript, where functions are first class. I began to see past the complicated C# syntax for delegates and understand some of the simple functional concepts that form the basis of delegates. You could say my experience gave me a brighter light to shine on the subject of delegates so I could see them clearly for what they are, and what they are capable of.
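To make the "first-class functions" insight concrete, here is a small sketch (the names Transform and Apply are my own illustrative choices): a delegate type is just a named function signature, and a delegate value is a function you can store and pass around like any other value.

```csharp
using System;

public class Program
{
    // A delegate type is just a named function signature.
    public delegate int Transform(int value);

    // A higher-order method: it accepts a function as a parameter.
    public static int Apply(int value, Transform f) => f(value);

    public static void Main()
    {
        // Lambdas, method groups, and Func<,> are all the same idea:
        // a function treated as a first-class value.
        Transform doubler = x => x * 2;
        Func<int, int> square = x => x * x;

        Console.WriteLine(doubler(21));        // 42
        Console.WriteLine(square(6));          // 36
        Console.WriteLine(Apply(10, doubler)); // 20
    }
}
```

Seen this way, the C# syntax is a thin wrapper over the same concept Python, Ruby, and JavaScript express directly.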

If you feel like you are stuck on a topic, try approaching the topic from a different direction. Try to solve a different problem with the technology, or try a different technology with the same problem. The range of experiences you can gather will help you gain the insights you need, but experience takes time. Learning is a marathon, not a sprint.

What is one of the coolest tech changes you've seen in the last year?

Without a doubt the best change is the move towards ethical computing and ethical software. I feel fortunate and privileged to have grown up in the golden age of computing, where every day, and every month, and every year computers grew more powerful and made positive impacts in my life. For example, if I think back to the first time I came here to Salt Lake City, it was in the days before the iPhone. I had to rent a car, and look at a map to plan my drive from the airport. I spent lots of time and money just managing the logistics of the trip.

Yesterday, I flew into the airport with no preparation and used my phone at baggage claim to book a $15 ride to the hotel. Technology boosted my productivity all through the trip, and removed a lot of stress.

Pluralsight Banners

I worry that in the future I might have grandkids and they won't see computers as powerful and fun. I worry they might see computers as tools of oppression, or depression. I believe it was Scott Galloway who said something like social media is to mental health what smoking was to the physical health of our parents' generation. In other words, when I was growing up, I remember seeing the magazine ads with the Marlboro man and marketing copy about all the pleasures of smoking. Smoking was cool and doctor-approved! Most people today understand the physical tolls of smoking on the respiratory and cardiovascular systems. Smoke is a poison. I genuinely worry that future generations will look back at social media's impact on mental health in the same light, and maybe even the Internet in general. The unlimited content, the unlimited conflict, and the AI algorithms designed to feed us dopamine boosts to keep us hooked – these are all pushing into dangerous territory.

You've spent a lot of time with Angular. What is something you really like about it?

I still think Angular 1.5 was the best version of Angular. Yes, Angular in the v1 days had quirks, and a strange vocabulary, and was not the easiest framework to learn. But, Angular was an amazingly productive framework once you got over the initial hurdles.

What do you think the next year or two looks like for Angular as a technology?

Speaking of mental health, I walked away from front-end development about 20 months ago. I feel much more relaxed these days.

What is one thing an Angular developer can study or learn that will help them be more effective?

If you are in front end development, or the JavaScript ecosystem in general, the one thing I would say is to make sure you learn the timeless skills. Yes, to do the best work you’ll need to know your framework, your language, and the operating environment. You'll need to know it all cold. But, there are diminishing returns to this learning path because the technologies here are ephemeral and what you'll need in 5 years will be different.

Timeless skills would be skills like designing a user experience, or writing clean code as a craft, or leading a software team. Those skills are still valuable in 5, 10, 20 years.

Azure is red hot and something you're well into. What's the coolest new feature they've come out with in the last year?

Azure DevOps amazes me. I have always been someone who promotes and works toward good configuration management practices. I once wrote a custom build engine so the company could have a repeatable build and release process. We could check in source code to start a build, and have the build package the software for deployment. This was 18 years ago. That system was a lot of work, but a lot of fun to build and gave us many advantages.

Eventually Microsoft’s Team Foundation Server replaced the system, but TFS was also a lot of work to install, configure, and maintain. DevOps makes it so easy to set up a source code repository, define a build, define a release, and track work items. I don’t have to spend time on infrastructure problems or worry about upgrades.

Author Game Night at Quarters

What does the next year look like for Azure? What big changes do you anticipate?

I think we will see more serverless platforms, or, more of the existing platforms move to a serverless model. The word serverless is thrown around too much as a marketing term, though, so let me be more specific. I think we will continue to move to a model in the cloud where we pay for what we use instead of paying for what we provision. That’s what serverless means to me.

I want to tell Azure here is my code, and here is my data. Now, if I have 1,000 customer requests tomorrow, bill me $1 for the day. And, if there are 10,000 requests the day after, bill me $10 for the day. Then, if there are one million requests the next day, bill me $1,000. I don’t want to define what type of service plan I need and then set up and configure auto-scale rules. I don’t want to pay to reserve the virtual hardware for 2 million requests and then only have 1 customer show up. All this AI power in the cloud should give me what I need just in time.

What changes do you see for developers in the coming year or two. What big shifts can we expect to see?

I’ve never been particularly good at predicting software futures, but I do think the pressures of ethical computing will continue to grow. As software developers we are accustomed to attending family reunions and trying to avoid those relatives who want us to fix their PC, or hook up their printer, or help them download pictures from their flip phone.

I’m worried we are going to start avoiding those relatives who ask us harder questions. Like why bad software allowed their identity to be stolen, let a car evade an emissions test, crashed an airplane, or discriminated against individuals. I think we need to start using tools like the Ethical OS toolkit, which evaluates risk zones to help you think about the long-term ethical impacts of software.

If you had to pick 3 truths for being a good developer what would they be? What are three things you wished you knew when you started out?

I mentioned this before, but the first truth is that a career in software development is a marathon, not a sprint. You want to make sure you are continually improving at a steady pace. It will take some time, but you can be as great as you will yourself to be.

The second truth I wish I knew when I started is that empathy is crucial for software development. If you are a developer who builds a UI, you need to have empathy for the users of your user interface. If you are a developer building a web service, you have to have empathy for the users of your API. When you are writing any line of code, you need to have empathy for the other developers who come along to maintain or change your code. You also need empathy for the ops team. How can you make your software easier to deploy? How can you make your software easier to monitor and troubleshoot when the software is in production?

I can’t overemphasize empathy, and I believe empathy is a skill you can improve. Get to know the people who use your code, who use your software, who operate your software. Get out of your comfort zone and try to experience other people’s viewpoints and opinions.

And finally, hone your ability to do root cause analysis. This will help you not only in software development, but also in your personal relationships, your business relationships, your health, your leadership abilities, and in so many places in life. Learn how to ask the 5 whys. The 5 whys are a good process you can use to start practicing root cause analysis, and you can practice on everyday problems you face in life.

What cool new projects are you working on? What can we expect to see from you in the near future?

I've been working with Apache Spark recently, and I find it absolutely fascinating. The ability to apply functional programming techniques to distributed computing and large datasets is exciting. For .NET developers, think of it as LINQ for big data. Maybe I'll be able to put together a course for .NET developers who need to use Spark.

Think Twice Before Returning null

Wednesday, August 7, 2019 by K. Scott Allen

Let’s say you are working in the kitchen and preparing a five-course meal. A five-course meal is a considerable amount of work, but fortunately, you have an assistant who can help you with the mise en place. You’ve asked your assistant to cube some Russet potatoes, julienne some baby carrots, and dice some red onions. After 15 minutes of working, you check on your assistant’s progress and ...

Kitchen disaster

What a disaster! You have diced onions to cook with, but you’ve also got a mess to clean up.

This is how I feel when I use a method with the following inside:

return null

I’m seeing thousands of return null statements in recent code reviews. Yes, I know you’ll find my fingerprints on some of that code, but as my mom used to say: "live and learn".

Returning null Creates More Work

An ideal function, like an assistant cook, will encapsulate work and produce something useful. A function that returns a null reference achieves neither goal. Returning null is like throwing a time bomb into the software. Other code must guard against null with if and else statements. These extra statements add more complexity to the software. The alternative is to allow null to crash the program with exceptions. Both scenarios are bad.
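Here is a small, self-contained sketch of the tax a null return imposes on callers (the Customer type and FindCustomer lookup are hypothetical examples, not from the post):

```csharp
using System;
using System.Collections.Generic;

public class Customer { public string Name = ""; }

public class Program
{
    static readonly Dictionary<int, Customer> Db = new Dictionary<int, Customer>
    {
        [1] = new Customer { Name = "Ada" }
    };

    // A lookup that returns null when the id is unknown...
    public static Customer FindCustomer(int id) =>
        Db.TryGetValue(id, out var customer) ? customer : null;

    public static void Main()
    {
        // ...forces every call site into guard clauses like these.
        var customer = FindCustomer(2);
        if (customer != null)
        {
            Console.WriteLine(customer.Name);
        }
        else
        {
            // Every caller grows an else branch, or risks a
            // NullReferenceException at some distant point later.
            Console.WriteLine("No customer found");
        }
    }
}
```

Multiply that if/else pair by every caller of every null-returning method, and the complexity adds up quickly.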

What Are the Alternatives?

Before throwing in a return null statement, or using LINQ operators like FirstOrDefault, consider some of these other options.

Throw an Exception

In some scenarios the right thing to do is to throw an exception. Throwing is the right approach when null represents an unrecoverable failure.

For example, imagine calling a helper method that gives you a database connection string. Imagine the method doesn’t find the connection string, so the method gives up and returns null. Your software doesn’t function without a database, so other developers write code that assumes a connection string is always in place. The code will try to establish connections using null connection strings. Eventually, some component is going to fail with a null reference exception or null argument exception and give us a stack trace that doesn’t point to the real problem.

Instead of returning null, the method should throw an exception with an explicit message like "Could not find a connection string named 'PatientDb'". Not only does the software fail fast, but the error message is easier to diagnose compared to the common null reference exception.
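A sketch of the fail-fast approach (the Config class and its dictionary-backed lookup are hypothetical stand-ins for real configuration code):

```csharp
using System;
using System.Collections.Generic;

public static class Config
{
    // Imagine this dictionary is loaded from a config file at startup.
    static readonly Dictionary<string, string> ConnectionStrings =
        new Dictionary<string, string>();

    // Fail fast with a message that names the missing setting,
    // instead of handing back null and failing somewhere far away.
    public static string GetConnectionString(string name)
    {
        if (ConnectionStrings.TryGetValue(name, out var value))
        {
            return value;
        }
        throw new InvalidOperationException(
            $"Could not find a connection string named '{name}'");
    }
}
```

The exception surfaces at the real source of the problem, with the missing name in the message, instead of as a null reference exception deep inside the data access layer.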

Throwing is not always the right option, however. Consider the case where a user is searching for a specific patient using an identifier. A method to execute the search has to consider the possibility that a user is searching with a bad or obsolete identifier, and not every search will find a patient. In this scenario it might be better to return null than throw an exception. But, there are other options, too.

Return a Default Value

Finding a sensible and safe default value works in some scenarios. For example, imagine a helper method to fetch the claims for a given user. What should the method do when no claims are found? Instead of returning null and forcing every caller to guard against a null collection of claims, consider returning an empty collection of claims.

private static readonly IReadOnlyDictionary<string, string> EmptyClaims = new Dictionary<string, string>();

public IReadOnlyDictionary<string, string> FindClaims(int userId)
{
    /*
     * Find claims from somewhere, or ...
     */

    // when no claims are found ...
    return EmptyClaims;
}

If callers only use ContainsKey or TryGetValue to make authorization decisions, the empty claims collection works well. There are no claims in the empty collection, so the software will never authorize the user. Even better, the code doesn’t need to guard against a null reference.
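A sketch of a caller that benefits from the empty collection (the ClaimsService wrapper and IsInRole helper are hypothetical, and the FindClaims stub stands in for the method above):

```csharp
using System.Collections.Generic;

public class ClaimsService
{
    private static readonly IReadOnlyDictionary<string, string> EmptyClaims =
        new Dictionary<string, string>();

    // Stand-in for the FindClaims method above: no claims found,
    // so the empty dictionary comes back instead of null.
    public IReadOnlyDictionary<string, string> FindClaims(int userId) => EmptyClaims;

    // No null guard needed: the empty dictionary simply fails the check.
    public bool IsInRole(int userId, string role)
    {
        var claims = FindClaims(userId);
        return claims.TryGetValue("role", out var value) && value == role;
    }
}
```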

Think of all the complexity you've removed from the rest of the software! Future generations who work on the code will praise you and build ornate shrines in your honor.

Return a Null Object

The null object pattern has been around for a long time because the pattern is successful at reducing complexity and making software easier to read and write. Imagine a method that needs to return the current user in the system, but the method can’t identify the user. Perhaps the method can’t identify the user because the user is anonymous, and not being able to find the user is an expected condition. We shouldn’t handle an expected condition using exceptions.

Another option is to use null to represent an unknown user, but returning null is lazy. In case you missed the opening paragraphs, returning null is like telling your callers "I’m not only giving up, but I’m going to stick you with a dangerous value, so handle with care, and good luck!". Future generations who work on the code will curse you and burn you in effigy.

The null object pattern solves the problem by returning a real object reference, and the real object contains some safe defaults. We don’t want an unidentified user to gain access to admin functionality, or somehow slip through permission checks, so we create a version of the user that will fail all permission checks and not display a name, but at the same time no code needs to guard against null and introduce if else checks everywhere.

There are a number of different approaches you can use to design null objects, but remember the goal is to build an object with safe defaults. Here’s an example. Let’s start with a User type.

public class User
{
    public User(string name, IReadOnlyDictionary<string, string> claims)
    {
        Name = name;
        Claims = claims;
    }

    public string Name { get; protected set; }
    public IReadOnlyDictionary<string, string> Claims { get; protected set; }
    public bool IsAdmin => Claims.ContainsKey("admin"); // "admin" is an illustrative claim key
}

Now we can create an instance of User that contains safe defaults, like an empty claims dictionary, so the IsAdmin helper (which looks into Claims) will return false. One way to define the class is to use inheritance, like so:

public class NullUser : User
{
    public NullUser() : base(string.Empty, EmptyClaims)
    {

    }

    private static readonly IReadOnlyDictionary<string, string> EmptyClaims = new Dictionary<string, string>();
}

Now we can write a method that always returns a safe value.

private static readonly NullUser nullUser = new NullUser();

User GetCurrentUser()
{
    // if the user is not found:
    return nullUser;
}

If we start thinking about alternatives to null today, we'll be better set to handle the future.

The Future Is Undoubtedly Going to Use Non-nullable Reference Types

There is a new feature coming with version 8 of the C# compiler that will help us avoid nulls and all the problems they bring. The feature is non-nullable reference types. I expect most teams to aggressively adopt C# 8 after the release later this year, at least for new code. You can find many examples of the new feature in Microsoft docs and blogs, but the idea is to tell the C# compiler if a null value is allowed, or not, in each variable. The C# compiler can aggressively check code to make sure non-nullable reference types do not receive a null value. Using these types should give us simpler code, and force us to think before adding a return null.
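A small sketch of the feature (the Greeter class is a made-up example; the code requires the C# 8 compiler with the nullable annotation context enabled):

```csharp
#nullable enable
using System;

public class Greeter
{
    // With nullable annotations on, plain string means "never null",
    // so callers need no guard and the compiler enforces the promise.
    public string Greet(string name) => $"Hello, {name}";

    // string? opts back in to null, and the compiler tracks the flow.
    public string GreetIfKnown(string? name)
    {
        if (name is null)
        {
            return "Hello, stranger";
        }
        // The compiler knows name is non-null on this branch, so
        // passing it to Greet compiles without warnings.
        return Greet(name);
    }
}
```

The null handling is pushed to the boundary where null is actually possible, and the rest of the code is free of guards.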

Summary

Ten years ago, Tony Hoare apologized for inventing null, calling the null reference a billion-dollar mistake. Let's see if we can avoid using his mistake!

Remember the Principle of Least Privilege When Upgrading Those Older ASP.NET Applications

Wednesday, July 31, 2019 by K. Scott Allen

When I started developing with ASP.NET many years ago, we used two tools on a regular basis. We used Visual Studio to write code for applications, and Internet Information Services (IIS) to host the applications.

IIS runs as a privileged service in Windows, meaning IIS has access to resources, APIs, and data structures on a machine that a normal user couldn’t read or write. The extra privilege afforded to IIS works well when you want to run an application in production and execute as fast as possible, but is a problem when you want to develop, test, or debug an application. Debugging, for example, requires an intimate relationship between two processes where the debugger controls the execution and the memory space of the debuggee. Attaching a debugger to a privileged process like IIS isn’t something ordinary users should be able to do.

Debugging a privileged process requires … privileges, which is why Visual Studio requires us to run as an Administrator if an ASP.NET project uses IIS for hosting. Running a development tool as the machine administrator violates the principle of least privilege, which is a principle we should all value in these dark days of phishing attacks, data breaches, and Snapchat.

Microsoft released IIS Express in 2010 to help us develop, test, and debug applications without elevating into the administrator role. The express version of IIS has most of the key features that IIS includes for hosting and running web applications. But, instead of running as a privileged service, IIS Express runs as a normal process using the same identity as the developer.

A few older projects I've worked on lately still require IIS, and therefore require running Visual Studio as an Administrator. This is certainly not a situation you want for new applications, and moving to IIS Express should be part of a migration plan when working with older applications. In Visual Studio you can right-click an ASP.NET project and go to the web properties to configure the project to use IIS Express.

VS Web Properties

Using IIS Express for developing with ASP.NET will help us obey the principle of least privilege but might present some other difficulties. IIS Express works best with monolithic applications. We’ll need some extra setup for systems composed of multiple services, processes, and applications. Since IIS Express doesn’t run as a service, Visual Studio has to launch your applications on demand. If you want to have multiple applications and services running concurrently without launching Visual Studio and opening a project for each component, then the following tips might help.

  1. You can still configure IIS with websites pointing to all the components in a system. You’ll only be able to debug the component running in Express with Visual Studio, but at least the other services are alive and responding.

  2. You can use the command line to run Express and launch as many applications and services as you need (see https://docs.microsoft.com/en-us/iis/extensions/using-iis-express/running-iis-express-from-the-command-line).

  3. For more complicated scenarios, look into using containers and container orchestration.

So while there might be some extra work involved, not running as an administrator gives you a better security profile, and helps to obey the principle of least privilege.

Advanced Azure App Service Articles and Videos

Friday, July 26, 2019 by K. Scott Allen

Over the last few months I've put some work into explaining Azure App Services.

At Petri.com

Demystifying Azure App Services is a look behind the abstraction of Azure App Services.

Demystifying Azure App Services Plans gets into the details of what makes an App Service Plan.

Demystifying Azure App Services – Diagnostics and Telemetry looks at some of the gems in App Service's smart and opinionated diagnostics.

At NDC

Here's a recording of my App Service talk at NDC Oslo.

I hope you enjoy the material!

Talk Ideas for 2020

Wednesday, June 26, 2019 by K. Scott Allen

After a 12-month break from developer conferences, I'm looking forward to working with a small handful of conferences next year. I've been kicking around some ideas for topics I’d like to talk about, and I'm sharing those ideas because thoughts and feedback are always appreciated!

An Architect's Guide to Building Cloud Native Solutions in Azure

One of the challenges you’ll face when building applications in the cloud is in choosing the best technologies for your solution. In this session we’ll take a broad look at the technologies, services, and infrastructure available in Azure while drilling into the vital details you need to make decisions. What’s the best host for my container-based solution? Should I place my data into a data lake, a data warehouse, or a simple blob container? What’s the essential difference between a message queue and an event hub? We’ll answer these questions while also covering topics like security, identity, governance and compliance.

Into the Sky

Apache Spark for .NET Developers

Big data meets .NET with Apache Spark, the analytics engine for large-scale data processing. If you are a .NET developer and you need to create an ETL process for high volumes of data, or process large streams of data in real time, or train machine learning models against a big data set, or explore big data sets with exploratory queries, then Apache Spark is a technology you should know about. Once we’ve seen how to set up an Apache Spark cluster and worked with the shell, we’ll dive into the .NET bindings for Spark and see how to query and analyze data using C#, F#, and SQL. We’ll also look at Azure’s Databricks platform to work with Spark in a managed environment.

Around the Corner

What .NET Core Developers Should Know about MSBuild

If you’ve ever wondered what makes MSBuild work, or if you’ve ever needed to tweak the XML in a project file to allow your software to compile, then this session is for you. We’ll start by learning about the fundamental concepts in MSBuild, concepts like tasks, properties, and conditionals. We’ll then move into the details of the build targets that drive most of today’s builds. This talk is based on years of experience as the person everyone calls on to debug builds, so there will be no shortage of tips and tricks for managing and debugging your builds.

A Seat at the Table

Five Lessons Learned as the CTO

Leadership skills come naturally for some people, but the rest of us need to figure out leadership as we go along. If you are thinking of making the jump from being a technical contributor, or have your sights set on the chief technical position, this session will give you some insights and lessons learned from experiences on the job.

Tapestry

Questions from the NDC Oslo Panel Discussion on the Future of .NET

Monday, June 24, 2019 by K. Scott Allen

At NDC Oslo I was part of a panel discussion with Julie Lerman, David Fowler, Damian Edwards, and Bryan Hogan. Here are my stream-of-consciousness answers to some of the questions presented to the panelists. Not all the questions were directed at me, but I jotted down some thoughts nonetheless.

NDC Oslo Show Floor

What are you most excited about with .NET Core 3?

This might sound odd, but I’m excited about WPF and Windows Forms. It’s not that I want to go out and write a new application using WinForms. But, I have a WinForms application that is still in production and is old enough to graduate from high school. I feel good knowing that Microsoft recognizes the importance of these older frameworks and the importance of writing desktop applications. For some scenarios, desktop applications are easier to build than web applications, and for the first time in more than 5 years I feel like .NET supports the desktop.

I’m also excited about gRPC services in ASP.NET Core. I think the enterprise struggles with the alternatives to WCF and SOAP. REST and hypermedia work well for some services and applications, but there are many scenarios where you need to bang out a distributed service and don’t need the overhead of perfectly decoupled components. In other words, HTTP and JSON are great, but the old days of “Add Service Reference” aren’t as bad as many people make them out to be. Like it or not, SOAP and WS-* protocols still perform significant amounts of work in today’s world. I believe gRPC can be a better SOAP.

NDC Oslo Panel Discussion on the Future of .NET

What do you think of performance improvements in .NET Core 3?

My favorite new API is Math.FusedMultiplyAdd. The operation sounds like something you might need in the software for a thermonuclear simulator. It’s fast, and it’s accurate! Be careful!
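To make the accuracy claim concrete, here’s a small sketch (the inputs are my own contrived values) showing how the fused operation avoids the intermediate rounding step that a naive multiply-add suffers:

```csharp
using System;

class FmaDemo
{
    static void Main()
    {
        // A naive multiply-add rounds twice: once after x * y, once after + z.
        // Math.FusedMultiplyAdd computes (x * y) + z with a single rounding step.
        double x = 1e16;
        double y = 1.0000000000000002; // 1 + 2^-52, the smallest double above 1
        double z = -1e16;

        double naive = x * y + z;                     // product rounds to 1e16 + 2 first
        double fused = Math.FusedMultiplyAdd(x, y, z); // keeps the full product before adding z

        Console.WriteLine(naive); // 2
        Console.WriteLine(fused); // approximately 2.2204460492503131
    }
}
```

The naive version loses the fractional part of the product to the nearest-representable rounding near 1e16; the fused version preserves it, which is exactly the behavior hardware FMA instructions deliver quickly.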

Where Do You See the Entity Framework Going?

I think EF needs to expand its reach far beyond relational databases. I know EF Core 3 is going to support Cosmos DB, but it’s taking a long time to get there, and I already have good abstractions I can use in the Cosmos DB SDK, so I don’t need EF there.

It’s funny, ten years ago I considered relational databases to be the only way to store data. Then I started using MongoDB in real applications and I started to see all the alternatives. Today, I think of relational databases as specialized high-end storage for specific use cases. The bulk of my data, in terms of both volume and processing, lives outside of a SQL database. The data is in blob storage, and in data lakes, and is streaming through event grids, and getting crunched in Apache Spark clusters. That’s where I need help with data.

I think the best thing EF could do moving forward is to split into two frameworks. One part of the framework could map data from CSV files and JSON payloads into CLR objects as quickly as possible. But, I don’t want just an object mapper in the AutoMapper sense, I also want something similar to an F# type provider for C#. I want to point to data in an Azure data lake or an event grid stream and then express my computations using strong types.

The other part of the framework could focus on sending commands and queries to a remote data source for processing. Currently, EF translates LINQ expressions into SQL, but I think we need to give up on the dream of LINQ to everything. It takes too long for EF to support new databases, like Cosmos DB. It takes EF too long to support common features of a single database, like views and stored procedures in SQL Server. What we need is an EF that allows developers to use native data processing languages in a straightforward fashion. Give me an elegant way to embed SQL statements in my code. SQL is everywhere! SQL is a language supported by SQL Server, by Cosmos DB, by Apache Spark, and in the future even blob storage in Azure. I need to take advantage of as many SQL features as I can without being limited by what can be expressed in C#, and then I need to execute the SQL and map results into objects.

In short, I want better language interop, and I’d give up on abstracting heterogeneous data sources behind C# expressions.
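To sketch the shape I’m after (the Query extension and the Customer type here are hypothetical, and libraries like Dapper already approximate the idea for relational databases): hand native SQL to the data source, get strongly typed objects back, with no LINQ translation layer in between.

```csharp
using System;
using System.Collections.Generic;
using System.Data;

// Hypothetical result type for illustration.
public record Customer(int Id, string Name);

public static class SqlExtensions
{
    // A minimal "embed SQL, map to objects" helper over the standard
    // ADO.NET interfaces. The SQL string passes through untranslated,
    // so every native feature of the data source is available.
    public static IEnumerable<T> Query<T>(
        this IDbConnection connection, string sql, Func<IDataRecord, T> map)
    {
        using var command = connection.CreateCommand();
        command.CommandText = sql;
        using var reader = command.ExecuteReader();
        while (reader.Read())
        {
            yield return map(reader);
        }
    }
}

// Usage, given any open IDbConnection (e.g. a SqlConnection):
// var customers = connection.Query(
//     "select Id, Name from Customers where Region = 'EU'",
//     r => new Customer(r.GetInt32(0), r.GetString(1)));
```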

NDC Oslo Panel Discussion on the Future of .NET

Do you worry that changes in .NET could leave people behind?

I do worry. I’m worried about people getting left behind, and I’m worried about people getting confused and giving up. A couple of years ago I made a deliberate move away from front-end development because I felt the pace of change was unhealthy. It felt like everything was changing, but not getting better. The improvements were small and not worth the cost of keeping up to date. I always use webpack 4 as a specific example. There were many breaking changes between version 3 and version 4. For example, loaders were removed and replaced by rules. From what I could tell, rules don’t offer any features beyond loaders, but everyone moving from webpack 3 to webpack 4 starts with a broken build because of the change, and has to research how to update their webpack configuration.

I think .NET and the C# language are both changing, and I do believe they are moving forward, they are improving, and they are not just changing for the sake of change. But, I do worry that Microsoft doesn’t keep backward compatibility as a priority anymore. I know that backward compatibility can be an albatross, and there have been a lot of bad decisions made in the name of backward compatibility, but I also know I’m not pushing my team to update their ASP.NET Core 1.1 applications because of all the breaking changes in moving to 2.2. We’ll need to move eventually, but there’s nothing forcing us to move today except for the end of support coming in a few weeks. We’ll have to set aside a couple of days to make the move and figure out the new package names, the new extension method namespaces, the new name for the web host builder, and how to configure the new authentication and authorization services and middleware. Then we have to run some tests and make sure we have no regressions in performance, features, or security. Upgrading is not just setting a version property in a project file; sometimes you need to learn about the philosophy changes in the framework.

Do you find companies are still reluctant to adopt Entity Framework?

Yes, I do. Just last month I was working with a development team that made it clear from the start that this team uses a DBA, and the DBA is going to be solely responsible for writing all SQL queries for the database.

It is a reasonable decision to avoid the Entity Framework, and there are many reasons you can use to justify this decision. I only ask teams to avoid EF for the right reasons. Don’t avoid EF because you think EF is insecure. I’d bet EF does a better job avoiding SQL injection attacks than hand-rolled code. Don’t avoid EF because you think EF will be slow; there are many scenarios where EF is fast enough.

If you could rewrite .NET entirely, what would you change?

If .NET includes the C# language, too, the first thing I’d rip out and redo is async and await. We need async programming models, but the current solution is fragile and frustrating. I don’t like the Async suffix convention in my codebase. I don’t want to see ConfigureAwait in my library code. I don’t want to worry about running sync code over async code, or async code over sync code. We need better interoperability, because real code bases and libraries are rarely async from top to bottom.

What I’m saying is – I want to call Task.Result and not feel dirty or fearful.
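A minimal sketch of the fragility I mean: the exact same .Result call that merely blocks a thread in a console app can deadlock under a UI or classic ASP.NET SynchronizationContext.

```csharp
using System;
using System.Threading.Tasks;

class SyncOverAsync
{
    static async Task<int> ComputeAsync()
    {
        await Task.Delay(10);
        return 42;
    }

    static void Main()
    {
        // In a console app there is no SynchronizationContext, so .Result
        // simply blocks until the task finishes. In WinForms or classic
        // ASP.NET, the continuation after the await needs the captured
        // context, but that context is blocked waiting on .Result - a
        // deadlock. Whether this line is safe depends on where it runs,
        // which is why calling it today feels dirty and fearful.
        int value = ComputeAsync().Result;
        Console.WriteLine(value); // 42
    }
}
```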

Do you think future versions of .NET should offer language features like Ballerina?

For anyone who hasn’t heard of Ballerina, go visit ballerina.io. I worry about baking Ballerina type features into the language because I’m afraid we’ll end up with something like async and await where the solution covers 80% of the easy scenarios, but the last 20% is painful or near impossible.

Ballerina sample

Like it or not, C# is a general purpose programming language, so you are going to have scenarios that require boilerplate code. I do think we can make some changes to the language that can help eliminate boilerplate code. Every time I look at a project that generates Swagger docs or OpenAPI docs, there are these [Produces] attributes all over the code. The attributes are noisy and I always wonder why the tools can't figure out the real return type.

Or, better yet, why can’t the language express what I’m going to return? C# is an object-oriented language, so I need to write methods with a return type like IActionResult to cover all possible return types. But IActionResult loses all the fidelity you can have with the concrete types the method actually uses. What if C# had the F# concept of a sum type, or discriminated union? Then I wouldn’t have to lose type fidelity by specifying a return type with some base class or interface. I think changes like this would make C#, and then .NET, appeal to an even broader audience.
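To sketch what I mean, here’s the closest C# gets today: a sealed record hierarchy standing in for a sum type, plus a switch expression over the cases. The LookupResult, Found, and NotFound types are hypothetical, purely for illustration; a built-in discriminated union would let the method signature itself list the possible cases instead.

```csharp
using System;

// Hand-rolled "sum type": the closed set of cases lives in a record
// hierarchy rather than in the language's type system.
public abstract record LookupResult;
public sealed record Found(string Name) : LookupResult;
public sealed record NotFound(int Id) : LookupResult;

public static class Demo
{
    public static LookupResult FindCustomer(int id) =>
        id == 1 ? (LookupResult)new Found("Scott") : new NotFound(id);

    // Callers keep full type fidelity: each case carries its own data,
    // and the switch must account for every shape of result.
    public static string Describe(int id) => FindCustomer(id) switch
    {
        Found f => $"Hello, {f.Name}",
        NotFound n => $"No customer with id {n.Id}",
        _ => throw new InvalidOperationException("unreachable")
    };

    public static void Main()
    {
        Console.WriteLine(Describe(1)); // Hello, Scott
        Console.WriteLine(Describe(7)); // No customer with id 7
    }
}
```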

I also think .NET and the tools can continue to improve for the new world of microservices. Look at what’s happened with HttpClient over the years. The original version was impossible to use correctly, because you had to choose between socket exhaustion on one hand and a DNS cache that never refreshed on the other. So, we need smarter bits of infrastructure, and we need better scaffolding tools. The scaffolding tools with MVC 5 are light years ahead of the tools in ASP.NET Core in terms of features and performance.
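For the record, the smarter infrastructure now exists. A sketch of the modern pattern (the two-minute lifetime is just an illustrative value): one shared client with a pooled connection lifetime, which is the same idea IHttpClientFactory wires up for you in ASP.NET Core.

```csharp
using System;
using System.Net.Http;

class SmartClient
{
    // One shared HttpClient avoids socket exhaustion, and the pooled
    // connection lifetime on SocketsHttpHandler forces connections to be
    // recycled periodically, so DNS changes are eventually observed -
    // resolving the original dilemma.
    static readonly HttpClient Shared = new HttpClient(new SocketsHttpHandler
    {
        PooledConnectionLifetime = TimeSpan.FromMinutes(2)
    });

    static void Main()
    {
        // Reuse the shared instance everywhere instead of new-ing up
        // an HttpClient per request.
        Console.WriteLine(Shared.Timeout); // the default timeout, 100 seconds
    }
}
```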

Thanks for reading, and thanks to everyone who came to see the panel live!

View from Holmenkollbakken