
.NET Core Opinion #6 - Be Wary of GUI Build Tools

Monday, October 29, 2018 by K. Scott Allen

It’s not that GUI build tools are bad, per se, but you have to watch out for tools that use too much magic, and tools that don’t let you version control your build with the same zealousness that you use for the source code of the system you are building.

Let’s start with the 2nd type of tool.

Why Text Files Rule

Many open source .NET Core projects use AppVeyor to build, test, and deploy applications and NuGet packages. When defining an AppVeyor build, you have a choice of using a GUI, or a configuration file.

With the GUI, you can point and click your way to a deployment. This is a perfectly acceptable approach for some projects, but lacks some of the rigor you might need for larger projects or teams.

AppVeyor GUI Build

You can also define your build by checking a YAML configuration file into the root of your repository. Here’s an excerpt:

AppVeyor YAML Build

Think about the advantages of the source-controlled YAML approach:

  • You can version the build with the rest of your software

  • You can use standard diff tools to see what has changed

  • You can see who changed the build

  • You can copy and share your build with other projects.

Also note that in the screenshot above, the YAML file calls out to a build script – build.ps1. I believe you should encapsulate as many build steps as possible into a package you can run on the build server and on a development machine. Doing so allows you to make changes, test changes, and troubleshoot build steps quickly. You can use MSBuild, PowerShell, Cake, or any technology that makes builds easier. Integration points, like publishing to NuGet, will stay as configurable steps in the build platform.
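To make the idea concrete, here is a minimal sketch of what such a script could look like. The solution and project names below are placeholders, not the actual build.ps1 from the excerpt:

# build.ps1 - a hypothetical build script that runs the same way
# on the build server and on a developer machine
$ErrorActionPreference = "Stop"

dotnet restore .\src\MyApp.sln
dotnet build .\src\MyApp.sln --configuration Release --no-restore
dotnet test .\test\MyApp.Tests\MyApp.Tests.csproj --configuration Release
dotnet pack .\src\MyApp\MyApp.csproj --configuration Release --output .\artifacts

The CI definition then shrinks to a single step that invokes the script, which keeps the build portable across build platforms and laptops alike.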

Azure Pipelines also offers GUI editors for authoring build and release pipelines.

Azure Pipelines GUI Editor

Fortunately, a YAML option arrived recently and is getting better with every sprint.

Configure Azure Pipelines with YAML

Magic Tools

Magic tools are tools like Visual Studio. For over 20 years, Visual Studio has excelled at making complex tasks simple. However, any time you need to go beyond what the tool simplifies, that simplicity and the complexity it hides can get in the way.

For example, what does the “Build” command in Visual Studio do, exactly? Call MSBuild? Which MSBuild? I have 10 copies of msbuild.exe on a relatively fresh machine. What parameters are passed? All the details are hidden and there’s nothing Visual Studio gives me as a starting point if I want to create a build pipeline on some other platform.
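One way to take the magic out is to pin down the exact invocation yourself. A rough sketch, assuming vswhere.exe from the Visual Studio installer is available (the solution path and properties are illustrative):

# locate the MSBuild that ships with the latest Visual Studio install
$msbuild = & "${env:ProgramFiles(x86)}\Microsoft Visual Studio\Installer\vswhere.exe" `
  -latest -requires Microsoft.Component.MSBuild -find "MSBuild\**\Bin\MSBuild.exe" |
  Select-Object -First 1

# invoke it with explicit, repeatable parameters
& $msbuild .\src\MyApp.sln /restore /p:Configuration=Release /verbosity:minimal

Once the invocation lives in a script, the same command works in any build pipeline, not just inside the IDE.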

Another example of magic is the Docker support in Visual Studio. It is nice that I can right-click an ASP.NET Core project and say “Add Docker Support”. Seconds later I’ll be able to build an image, and not only run my project in a container, but debug through code executing in a container.

But, try to build or run the same image outside of Visual Studio and you’ll discover just how much context is hidden by the tooling. You have to dig around in the build output to discover some of the parameters, and then you'll realize VS is mapping user secrets and setting up a different environment for the container. You might also notice the quiet installation of Microsoft.VisualStudio.Azure.Containers.Tools.Targets into your project, but you won't find any documentation or source code for this NuGet package.
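As a rough approximation of what happens behind the right-click (the image name, port, and secrets path below are assumptions, and the real tooling passes considerably more context than this):

# build the image from the Dockerfile the tooling generated
docker build -t myapp:dev -f .\MyApp\Dockerfile .

# run it with the environment and user-secrets mapping VS would otherwise set up
docker run --rm -p 8080:80 `
  -e ASPNETCORE_ENVIRONMENT=Development `
  -v "$env:APPDATA\Microsoft\UserSecrets:/root/.microsoft/usersecrets:ro" `
  myapp:dev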

I think it is tooling like this that gives VS Code good traction. VS Code relies on configuration files and scripts that can make complex tasks simpler without making customizations and understanding inaccessible. Want to make a simple change to the Visual Studio approach? Don’t tell me you are going to edit the IL in Microsoft’s assembly! VS Code is to Visual Studio what YAML files are to GUI editors.

Summary

To wrap up this post in a single sentence: build and release definitions need source control, too.

Major Updates for my Building Secure Services in Azure Course

Tuesday, October 23, 2018 by K. Scott Allen

I’ve completely reworked my secure services course from scratch. There are a lot of demos across a wide range of technologies here, including:

Docker Containers

  • Building ASP.NET Core projects using Visual Studio and Docker containers.

  • Deploying container images using Docker Hub and Azure App Services for Linux

  • Setting up continuous deployment for containers

Automation and Azure Resource Manager

  • Using ARM templates to deploy and provision resources in Azure (infrastructure as code)

  • Setting up Azure Key Vault

  • Storing secrets in Key Vault for use in ARM templates

Microservices and Container Orchestration

  • Using the new IHttpClientFactory and resiliency patterns for HTTP networking in ASP.NET Core

  • Container orchestration using Docker compose

  • Creating and using an Azure Container Registry (ACR)

  • Deploying multiple images using ACR and Compose

Cloud Identity

  • Creating your own test instance of Azure Active Directory

  • Authentication with OpenID Connect (OIDC) and Azure Active Directory

  • Securing APIs using Azure Active Directory and JWT tokens

  • Invoking secure APIs

  • Setting up an Azure B2C instance and defining your own policies

  • Securing an application using Azure B2C.

Note: this updated course is an hour shorter than the original course. Pluralsight authors generally want to make courses longer, not shorter, but I learned how to tell a better story this second time around. Also, the Docker story and tooling are markedly improved from last year, which saves time.

Building Secure Services in Azure

I hope you enjoy the course!

.NET Core Opinion #5 - Deployment Scripts and Templates

Wednesday, October 17, 2018 by K. Scott Allen

Previously, we looked at some folders to include in your source code repository. One folder I didn’t mention at the time is a deployment folder.

Not every project needs a deployment folder, but if you are building an application, a service, or a component that requires a deployment, then this folder is useful, even if a deployment is as simple as copying files to a well-known location.

What goes into the folder?

Setup Instructions

At one extreme, the folder might contain markdown instructions about how to set up a development environment, or a list of prerequisites to develop and run the software. There’s nothing automated about markdown files, but the developer starting this week doesn’t need to figure out the setup using trial and error.

Configuration as Code

At the other extreme, you can automate anything these days. Does a project need specific software on Windows for development? Write a script to call Chocolatey. Does the project use resources in Azure for development? The Azure CLI is easy to use, and Azure Resource Manager templates can declaratively take on some of the load.

Generating an ARM template from the Azure portal puts you one step closer to automating the setup of an entire resource group.
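A sketch of what that automation can look like, with hypothetical package names, resource group, and template path:

# dev-setup.ps1 - install prerequisites, then provision the dev environment
choco install -y git dotnetcore-sdk azure-cli

az login
az group create --name rg-myapp-dev --location eastus
az group deployment create --resource-group rg-myapp-dev `
  --template-file .\deployment\azuredeploy.json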

Ruthlessly automating software from development to production requires time and dedication, but the benefits are enormous. Not wasting time on setup and debugging misconfigurations is one advantage. Being able to duplicate a given environment with no additional work comes in handy, too.

Tackling Costs in Azure

Monday, October 15, 2018 by K. Scott Allen

"Cloud Computing Governance" sounds like a talk I’d want to attend after lunch when I need an afternoon nap, but ever since the CFO walked into my office waving Azure invoices in the air, the topic is on my mind.

It seems when you turn several teams of software developers loose in the cloud, you typically set the high-level priorities like so:

  1. Make it secure
  2. Make it fast
  3. Make it more secure

Missing from the list is the priority to "make the monthly cost as cheap as possible", but cost is easy to overlook when the focus is on security, quality, and scalability. After the CFO left, I reviewed what was happening across a dozen Azure subscriptions and I started to make some notes:

Cloud Costs Adhoc Impromptu Unstructured Analysis

Yes, there are 104 un-pooled Azure SQL instances, and 38 app services running on 30 app service plans.
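Even a few Azure CLI queries can produce that kind of tally. A sketch against a single subscription (the resource group and server names are placeholders):

# count SQL databases and app service plans to spot consolidation candidates
az sql db list --resource-group rg-data --server my-sql-server --query "length(@)"
az appservice plan list --query "length(@)"
az webapp list --query "length(@)"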

Cutting Costs

There are countless people in the world who want to sell tools and consulting services to help a company reduce costs in the cloud. To me, outside consultants start with only 1 of the 3 areas of expertise needed to optimize cost. The three areas are:

  1. Expertise in Azure features, pricing, and licensing
  2. In depth knowledge of the system under development
  3. An understanding of where the business is headed in 1 to 3 years, including an understanding of the contractual relationships with SaaS customers.

In Venn diagram form:

The Ideal Person to Optimize Cost as a Function of Cloud

Let’s dig into the details of where these three areas of knowledge come into play.

Cutting Tools

Let’s say your application needs data from dozens of large customers. How will the data move into Azure? An outside consultant can’t just say “Event Hubs” or “Data Factory” without knowing some details. Is the data size measured in GB or TB? How often does the data move? Where does the data live at the customer? What needs to happen with the data in the cloud? Will any of these answers change in a year?

Without a good understanding of the Azure offerings, a tech person often answers with the technology they already know. A SQL oriented developer, for example, will use Data Factory to pump data into an Azure SQL database. But, this isn’t the most cost effective answer if the data requires heavy duty processing after delivery, because Azure SQL instances are priced for line of business transactions that need atomicity, reliability, redundancy, high availability, and automatic backups, not hardcore compute and I/O.

But let’s say the answer is SQL Server. Now what?

Cutting Boards

Now a consultant needs to dig deeper to find out the best approach to SQL Server in the cloud. There are three broad approaches:

  1. SQL Server on a virtual machine
  2. SQL Server as a managed instance
  3. Azure SQL Database

Option #1 is best for lift and shift solutions, but there is no need to take on the responsibility for clustering, upgrades, and backups if you can start with PaaS instead of IaaS. Option #2 is also designed for moving on-prem applications to the cloud, because a managed instance has better compatibility with an on-prem SQL Server, but without some of the IaaS and management hassles. For greenfield development, option #3 is the best option for most scenarios.

Once you’ve decided on option 3, there are two more levels of cost and performance options to consider. It’s not so much that Azure SQL is complicated, but Microsoft provides flexibility to cover different business scenarios. For any given Azure SQL instance, you can:

  1. Run the database as a single database
  2. Add the database to a pool for resource sharing

Option #1 is the best option when you manage a single database, or you have a database with unique performance characteristics. A pool is usually better from a cost to performance ratio when you have 3 or more databases.

After you’ve decided to pool, the next decision is how you’ll specify the performance characteristics of the pool. Will you use DTUs? Or will you use vCPUs? DTUs are frustratingly vague, but we do know that 20 DTUs are twice as powerful as 10 DTUs. vCPUs are at least a bit familiar, because we have equated CPUs with performance capability for decades.
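Moving databases into a pool is only a couple of CLI calls. A sketch with placeholder names (the sizing flags depend on whether you choose the DTU or vCPU purchasing model, so they are omitted here):

# create an elastic pool on an existing logical server
az sql elastic-pool create --resource-group rg-data --server my-sql-server --name shared-pool

# create a database inside the pool ('az sql db update' can move an existing database)
az sql db create --resource-group rg-data --server my-sql-server --name customer-db --elastic-pool shared-pool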

Cutting Edge

One significant difference between the DTU model and the vCPU model is that only the vCPU model allows for reserved instances and the “hybrid benefit”. Both of these options can lead to huge cost savings, but both require some business knowledge.

The “hybrid benefit” is the ability to bring your own SQL Server license. The benefit is ideal for moving SQL databases from on-prem to the cloud, because you can make use of a license you already own. Or, perhaps your organization already has a number of free licenses from the Microsoft partner program, or discounted licenses from enterprise agreements.

Reserved instances will save you 21 to 33 percent if you commit to a certain level of provisioning for 1 to 3 years. If your customers sign one-year contracts to use your service, a one-year reserved instance is a quick cost savings with little risk.

If everything I’ve said so far makes it sound like you could benefit from using a spreadsheet to run hypothetical tests, then yes, setting up a spreadsheet does help.

Cloud Cost Optimization Engine with Automatic Dependency Management
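Even a back-of-the-envelope script captures the idea. The prices here are invented placeholders; substitute figures from the Azure pricing calculator for your own tier and region:

# hypothetical comparison of pay-as-you-go vs. a one-year reservation
$payAsYouGoPerMonth = 1000                      # placeholder monthly cost
$reservedDiscount   = 0.25                      # reservations save roughly 21-33%
$reservedPerMonth   = $payAsYouGoPerMonth * (1 - $reservedDiscount)
$annualSavings      = ($payAsYouGoPerMonth - $reservedPerMonth) * 12
"Reserving for one year saves about `$$annualSavings per instance"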

Cutting Ends

Once you have a plan, you have to enforce the plan and reevaluate the plan as time moves forward. But, logging into the portal and eyeballing resources only works for a small number of resources. If things go as planned, I’ll be blogging about an automated approach over the next few months.

An Updated Cloud Patterns and Architecture Course

Tuesday, October 2, 2018 by K. Scott Allen

I’ve updated my Cloud Patterns and Architecture course on Pluralsight.

The overall goal of this course is to show the technologies and techniques you can use to build scalable, resilient, and highly available applications in the cloud, specifically with Azure.

Sample System Design with Azure Platform Services

In addition to walking through sample architectures, demonstrating design patterns, and adding bits of theory on topics like the CAP theorem, here are some of the lower level demos in the course:

  • Setting up Azure Traffic Manager and using Traffic Manager profiles to route traffic to a geo-distributed web application.

  • Setting up Azure Service Bus to send and receive queued messages.

  • Creating an Azure Redis Cache and using the cache with an SDK, as well as configuring the cache to operate behind the ASP.NET Core IDistributedCache interface.

  • Provisioning a Content Delivery Network (Azure CDN) and pushing static web site content into the CDN.

  • Importing a web API into Azure API Management using OpenAPI and the ASP.NET Core Swashbuckle package, then configuring an API to apply a throttling policy.

  • Creating, tweaking, running, and analyzing load tests using Azure DevOps and Visual Studio load testing tools.

And more! I hope you enjoy the course!

Thoughts on Azure DevOps

Monday, October 1, 2018 by K. Scott Allen

I’ve been using Azure DevOps since the early days when the service carried the name Visual Studio Online. I’ve used the service for both professional projects and personal projects, and I’ve enjoyed the service so much I’ve demonstrated and recommended the service on consulting gigs, at workshops, at user groups, and in Pluralsight courses.

ADO has sometimes been a hard sell. The previous monikers for the service tied the product to Windows, Visual Studio, and Microsoft, so getting a Node developer to sit down and see a continuous delivery pipeline for a pure JS project wasn’t always easy. People already in the Microsoft ecosystem would also resist given the baggage of its on-premises ancestor, Team Foundation Server. And by baggage, I mean heavy enterprise baggage overstuffed with XML. I’ve gotten a lot of work done with TFS over the years, but TFS is also the only Microsoft product I’ve upgraded by hiring an external expert. I did not want to learn the installation incantations for the unholy amalgamation of TFS, SQL Server, SSRS, and SharePoint. TFS is also the only source code provider I’ve seen crash a commercial-grade network router while fetching source code.

But, the past is gone, and no other service exemplifies the evolution of Microsoft quite as well as the evolution of TFS to Azure DevOps. We’ve gone from a centralized behemoth with an enterprise focus to a modern-looking, sleek cloud platform that aggressively supports small open source projects as well as larger organizations.

Here are the constituent services that form Azure DevOps.

Azure Pipelines

Pipelines provide a platform for building, testing, packaging, and deploying applications. It’s a feature-rich build system that is extensible and easy to use. I’d consider this the crown jewel of Azure DevOps. All the heavy lifting happens on build machines that the service transparently provisions in the cloud. Here are three more fun facts:

  • Pipelines are not tied to source control in Azure. You can pull source from public and private repositories, including GitHub.

  • Build minutes for OSS projects are free and unlimited.

  • You can build anything for anyone since the supported build environments include Linux, Windows and macOS.

My biggest complaint about Pipelines in the past has been the inability to define builds using source controlled text files instead of the web UI. Fortunately, YAML files have come to the rescue, and the ability to codify and version build and release definitions should soon be generally available.

Azure Pipelines at Work

Azure Boards

Boards are where a team can track issues, bugs, work items, and epics. There are Kanban boards, of course, and custom workflows. The service is well featured, particularly since it is free for 5 users and about $9,000 USD a year for 100 users (note that developers with MSDN subscriptions will have free access). There are other products that have many more bells and whistles, but they’ll also start license negotiations at $20,000 for 100 users.

Azure Repos

Git source control with an unlimited number of private or public repositories.

Azure Test Plans

Automated tests will typically execute in a Pipeline. The Test Plans service is more of a place for tests that don't execute in a pipeline, so this service covers manual tests and exploratory tests, as well as load tests (which are automated, but fall here for some reason).

The load testing features are the only features I’m qualified to speak about since I’ve been using the testing tools in VS Enterprise for years. Unfortunately, the tools themselves remain pretty much unchanged over these years and feel dated. The test recorder requires Internet Explorer and a plugin. The “Browser Mix” feature will allow you to make HTTP requests using an IE9 UA string, but there is no mention of any browser released after IE9, and even having a browser mix feature in 2018 is questionable.

Behind the scenes, the load testing artifacts are relatively simple XML files, so it is possible to avoid the tools in some workflows.

On the plus side, the load testing framework can intelligently provision hardware in the cloud to generate load. There is no need to set up test agents and monitor the agents to see if they are overworked. See my Cloud Patterns course for more.

Azure Load Test Results

Azure Artifacts

Your own ultimate package repository for Maven, npm, and NuGet packages. Publish packages here for internal use at your organization.

Extensions

The app store for DevOps contains some high-quality extensions. There is also an extensive HTTP API throughout the DevOps service to integrate with other products and your own custom utilities. Between the API and the custom extensions, there is always a way to make something work in DevOps; all you need is the will.

How This Relates to GitHub

My opinion: GitHub is community focused, Azure DevOps is focused on organizations. But, there is some crossover. If you have an OSS project, you’ll want to host on GitHub and build in Pipelines.

Summary

Look for yourself at the aggressive and transparent evolution of Azure DevOps over the years. My only worry today is Azure DevOps using the word "DevOps" in the name. DevOps requires a way of thinking and a culture. I hope organizations don't adopt DevOps tools in the same way they adopted Agile tools and then proclaimed themselves Agile.

.NET Core Opinion #4 - Increase Productivity with Dev Scripts

Friday, September 21, 2018 by K. Scott Allen

In a previous post I mentioned that a scripts directory can be a welcome addition to any source code repository. What goes into scripts? Anything you can automate to make a developer’s life easier!

Examples for Inspiration

Here’s a script I’ve used to simplify adding an EF migration. All I need to do from the command line is addmigration [migration_name].

pushd src\beaverleague.data
dotnet ef migrations add %1
dotnet ef database update
popd

I also have a recreatedb script I can use to start fresh after pulling changes.

pushd src\beaverleague.web 
dotnet run dropdb migratedb seeddb stop 
popd

More on how the parameters above work in a future post.

The EF repo itself uses a tools folder instead of a scripts folder, but the idea is the same. Inside you’ll find scripts to clean up test environments by dropping and shrinking databases, like this one that uses a combination of sqlcmd and sqllocaldb command line tools, as well as a script to query for all the non-system databases in a DB instance.

@echo off
:: Generate DropAll.sql, a script with a DROP DATABASE statement
:: for every non-system database on the localdb instance.
sqlcmd -S "(localdb)\mssqllocaldb" -i "DropAllDatabases.sql" -o "DropAll.sql"

:: Run the generated script to drop the databases, then clean up.
sqlcmd -S "(localdb)\mssqllocaldb" -i "DropAll.sql"

del "DropAll.sql"

:: Stop and delete the localdb instance itself.
sqllocaldb stop mssqllocaldb
sqllocaldb delete mssqllocaldb

ShrinkLocalDBModel.cmd

For more examples and ideas, check out the TypeScript repo with scripts for everything from running tests to automating GitHub issues with OctoKit. There’s the vscode repo with scripts to set up an environment. The repo to build the official .NET Docker images includes PowerShell scripts to execute docker pull with retries.
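A retry wrapper like that takes only a few lines of PowerShell. A sketch, with an illustrative image name and retry count:

# pull-image.ps1 - retry 'docker pull' a few times before giving up
$image = "mcr.microsoft.com/dotnet/core/sdk"    # placeholder image name
foreach ($attempt in 1..3) {
    docker pull $image
    if ($LASTEXITCODE -eq 0) { break }
    Write-Host "Pull failed (attempt $attempt of 3), retrying..."
    Start-Sleep -Seconds 5
}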

These are all examples where 5 or 6 lines of script code can not only save time for the entire team in the long run, but also codify a common operation.

dotnet

I specifically want to call out the special capabilities of the dotnet CLI tool. We’ve always had the ability to build, publish, and package from the command line, but the new global tools feature gives us an npm-ishly easy path to installing new tools and using them from anywhere.
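Installing and managing a global tool is a one-line operation (dotnet-serve is just one example of a community tool):

# install a global tool, see what's installed, and update it later
dotnet tool install --global dotnet-serve
dotnet tool list --global
dotnet tool update --global dotnet-serve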

Here are some of the tools I use.

Nate McMaster maintains a more complete list of global tools.

Summary

Take advantage of the command line renaissance in .NET Core to speed up a repeatable development process.