
New and Updated Azure Course for .NET Developers

Monday, June 11, 2018 by K. Scott Allen

I completely re-worked my Developing with .NET on Microsoft Azure course earlier this year, and the new videos are now available.

Here are some of the changes from the previous version of the course:

- I show how to use the Azure CLI to automate Azure from the command line. The CLI works across platforms, and the commands are easy to discover.

- I show how to set up a local Git repository in an Azure App Service and demonstrate how to deploy ASP.NET Core apps from the repo.

- The Azure Functions module uses the new 2.0 runtime to develop a function locally.

- The function built in that module uses blob storage, Cognitive Services, and Azure Cosmos DB.

- Numerous other changes to catch up with new features in Azure and VSTS

Enjoy!



Here are some other topics you'll see covered in the course:

- Develop and deploy an ASP.NET Core application to Azure App Services

- Manage configuration settings for an App Service

- Monitor and scale an App Service

- Work with input and output bindings in Azure Functions

- Create a git repository with a remote in VSTS or Azure App Services

- Set up a build and release pipeline using VSTS for continuous deployment

- Connect to Azure storage using the Portal, C# code, and Azure Storage Explorer

- Save and retrieve files from blob storage

- Configure alerts

- Monitor performance metrics using Application Insights

- Choose an API for CosmosDB storage

- Create and read documents in CosmosDB

- Create and read records in Azure SQL using Entity Framework Core

Separating Concerns with Key Vault

Tuesday, April 3, 2018 by K. Scott Allen

In an earlier post we looked at decrypting an asymmetric key using Key Vault. After decryption, we could use the key to decrypt other secrets from Key Vault, like encrypted connection strings.

This raises a question – do we still need to encrypt our secrets before storing them in Key Vault? And if we do, what value does Key Vault provide?

The short answers are maybe, and a lot.

Encrypting Secrets

It’s not a requirement to encrypt secrets before storing them in Key Vault, but for those of us who work in highly regulated industries, it is difficult to justify to an auditor why we are not encrypting every sensitive piece of information.

Keep in mind that Key Vault already encrypts our secrets at rest and only uses secure communication protocols, so secrets are safe on the network, too. The only benefit of encrypting a secret ourselves before handing it to Key Vault is to keep the plain text from appearing in the portal or in the output of a script.

For applications running outside of “encrypt all secrets no matter the cost” mandates, the built-in safety mechanisms of Key Vault are good enough if you follow the right practices.

Separation of Concerns

Key Vault allows us to separate the roles of key managers, key consumers, and developers. This separation is especially important in a production environment.

[Diagram: Key Vault separation of concerns]

The security team is the group that creates key vaults for production and creates the keys and secrets inside a vault. When a system needs access to a given database, the security team can create a login for the application, then add the connection string as a secret in the vault. The team can then give developers a URL that leads to the secret.

Developers never need to read or see the secret. Developers only need to place the secret URL in a location where the running application can retrieve the URL as a parameter. The location could be a configuration file, or developers could place the URL into an ARM template to update application settings during a deployment.

The security team grants the application read access to secrets in the Key Vault. At run time, an application can read a connection string from the vault and connect to a database.
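
To make the run time step concrete, here is a minimal sketch using the Microsoft.Azure.KeyVault client discussed in the decryption post further down this page. The configuration key name is a made-up example; the only real input is the secret URL the security team provided.

// Sketch only: "ConnectionStringSecretUrl" is a hypothetical setting that
// holds the full secret URL handed to the developers by the security team.
var secretUrl = Configuration["ConnectionStringSecretUrl"];

// keyVaultClient is a Microsoft.Azure.KeyVault KeyVaultClient that has been
// granted read access to secrets (see the decryption post for authentication).
var secret = await keyVaultClient.GetSecretAsync(secretUrl);
var connectionString = secret.Value;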

Finally, an auditor can review access logs and make sure the security team is rotating keys and secrets on a regular basis.

Rolling Secrets

Best security practices require periodic changes to passwords and access keys. Azure services that rely on access keys enable this scenario by providing two access keys – a primary and a secondary. Azure Key Vault also helps with this scenario by versioning all secrets in the vault and allowing access to multiple versions (this month's key and last month's key, for example). We can also roll connection strings.
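
As an illustration, every secret identifier in Key Vault includes a version segment, so two versions of a connection string secret are simply two distinct URLs. The vault name and version values below are made up:

https://myvault.vault.azure.net/secrets/dbconnection/4387e9f3d6e14c459867679a90fd0f79
https://myvault.vault.azure.net/secrets/dbconnection/8a1b2c3d4e5f46789abcdef012345678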

Take the following scenario as an example.

1. The security team creates a login for an application in Azure SQL. They place a connection string for this login into Key Vault. We'll call this connection string secret C1.

2. DevOps deploys the application with the URL to the C1 secret.

3. After 30 days, the security team creates a new login for the application. They place the new connection string in Key Vault. This is C2.

4. At some point in the next 30 days, DevOps deploys the application again and updates the URL to point to C2.

5. After those 30 days, the security team removes the login associated with C1.

6. GOTO 1

Summary

Key Vault is an important piece of infrastructure for applications managing sensitive data. Keeping all the secrets and keys for a system in Azure Key Vault not only helps you protect those secrets, but also gives you a place to inventory and audit your secrets.

Decryption with Azure Key Vault

Thursday, March 8, 2018 by K. Scott Allen

The next few posts are tips for developers using Azure Key Vault.

The documentation and examples for Key Vault can be frustratingly superficial. The goal of the next few posts is to clear up some confusion I’ve seen. In this first post we’ll talk about encryption and decryption with Key Vault.

But first, we’ll set up some context.

Old Habits Are Hard to Break

Over the years we’ve learned to treat passwords and other secrets with care. We keep secrets out of our source code and encrypt any passwords in configuration files. These best practices add a layer of security that helps to avoid accidents. Given how entrenched these practices are, the following line of code might not raise any eyebrows.

var appId = Configuration["AppId"];
var appSecret = Configuration["AppSecret"];

var encryptedSecret = keyVault.GetSecret("dbcredentials", appId, appSecret);

var decryptionKey = Configuration["DecryptKey"];
var connectionString = CryptoUtils.Decrypt(encryptedSecret, decryptionKey);

Here are three facts we can deduce from the above code.

1. The application’s configuration sources hold a secret key to access the vault.

2. The application needs to decrypt the connection strings it fetches from the vault.

3. The application’s configuration sources hold the decryption key for the connection string.

Let’s work backwards through the list of items to see what we can improve.

Encryption and Decryption with Key Vault

Most presentations about Key Vault will tell you that you can store keys and secrets in the vault. Keys and secrets are two distinct categories in Key Vault. A secret can be a connection string, a password, an access token, or nearly anything you can stringify. A key, however, can only be a specific type of key. Key Vault’s current implementation supports 2048-bit RSA keys. You can have soft keys, which Azure encrypts at rest, or create keys in a hardware security module (HSM). Soft keys and HSMs are the two pricing tiers for Key Vault.

You can use an RSA key in Key Vault to encrypt and decrypt data. There is a special advantage to using Key Vault for decryption which we’ll talk about in just a bit. However, someone new to the cryptosystem world needs to know that RSA keys, which are asymmetric keys and computationally expensive compared to symmetric keys, will only encrypt small amounts of data. So, while you won’t use an RSA key to decrypt a database connection string, you could use an RSA key to decrypt a symmetric key the system uses for crypto operations on a database connection string.
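
As a rough sketch of that idea (assuming the KeyVaultCrypto wrapper and IKeyVaultCrypto interface developed below, plus standard System.Security.Cryptography types), a system might generate an AES key locally, encrypt the large payload with it, and send only the small AES key material through the RSA key in the vault:

// Sketch only, not production code: envelope encryption with a local AES key
// protected by the RSA key in Key Vault. Assumes System, System.Text,
// System.Security.Cryptography, and System.Threading.Tasks are in scope.
public async Task<(string encryptedData, string encryptedKey)> ProtectAsync(
    IKeyVaultCrypto vaultCrypto, string connectionString)
{
    using (var aes = Aes.Create())
    {
        // encrypt the (potentially large) payload with the symmetric key
        byte[] cipherBytes;
        using (var encryptor = aes.CreateEncryptor())
        {
            var plainBytes = Encoding.Unicode.GetBytes(connectionString);
            cipherBytes = encryptor.TransformFinalBlock(plainBytes, 0, plainBytes.Length);
        }

        // only the small AES key and IV go through the 2048-bit RSA key
        var keyMaterial = Convert.ToBase64String(aes.Key) + "." + Convert.ToBase64String(aes.IV);
        var encryptedKey = await vaultCrypto.EncryptAsync(keyMaterial);

        return (Convert.ToBase64String(cipherBytes), encryptedKey);
    }
}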

Working with the REST API Wrapper

The .NET wrapper for Azure Key Vault is in the Microsoft.Azure.KeyVault package. If you want to use the client from a system running outside of Azure, you’ll need to authenticate using the Microsoft.IdentityModel.Clients.ActiveDirectory package. I’ll show how to authenticate using a custom application ID and secret in this post, but if you are running a system inside of Azure you should use a system’s Managed Service Identity instead. We’ll look at MSI in a future post.

The Key Vault client has a few quirks and exposes operations at a low level. To make the client easier to work with we will create a wrapper.

public class KeyVaultCrypto : IKeyVaultCrypto
{
    private readonly KeyVaultClient client;
    private readonly string keyId;

    public KeyVaultCrypto(KeyVaultClient client, string keyId)
    {
        this.client = client;
        this.keyId = keyId;
    }

    public async Task<string> DecryptAsync(string encryptedText)
    {   
        var encryptedBytes = Convert.FromBase64String(encryptedText);
        var decryptionResult = await client.DecryptAsync(keyId, 
                                 JsonWebKeyEncryptionAlgorithm.RSAOAEP, encryptedBytes);
        var decryptedText = Encoding.Unicode.GetString(decryptionResult.Result);
        return decryptedText;
    }

    public async Task<string> EncryptAsync(string value)
    {
        var bundle = await client.GetKeyAsync(keyId);
        var key = bundle.Key;

        using (var rsa = new RSACryptoServiceProvider())
        {          
            var parameters = new RSAParameters()
            {
                Modulus = key.N,
                Exponent = key.E
            };
            rsa.ImportParameters(parameters);
            var byteData = Encoding.Unicode.GetBytes(value);
            var encryptedText = rsa.Encrypt(byteData, fOAEP: true);
            var encodedText = Convert.ToBase64String(encryptedText);
            return encodedText;
        }
    }
}
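
The post never shows the IKeyVaultCrypto interface the wrapper implements; a minimal definition consistent with the class above would be:

public interface IKeyVaultCrypto
{
    Task<string> EncryptAsync(string value);
    Task<string> DecryptAsync(string encryptedText);
}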

Here are a few points about the code that may not be obvious.

First, notice the EncryptAsync method fetches an RSA key from Key Vault and executes an encryption algorithm locally. Key Vault can encrypt data we post to the vault via an HTTPS message, but local encryption is faster, and there is no problem giving a system access to the public part of the RSA key.

Secondly, speaking of public keys, only the public key is available to the system. The API call to GetKeyAsync doesn’t return private key data. This is why the DecryptAsync wrapper method does use the Key Vault API for decryption. In other words, private keys never leave the vault, which is one reason to use Key Vault for decryption instead of bringing private keys into the process.

Setup and Authentication

The steps for creating a vault, creating a key, and granting access to the key for an application are all steps you can find elsewhere. Once those steps are complete, we need to initialize a KeyVaultClient to give to our wrapper. In ASP.NET Core, the setup might look like the following inside of ConfigureServices.

services.AddSingleton<IKeyVaultCrypto>(sp =>
{
    KeyVaultClient.AuthenticationCallback callback = async (authority, resource, scope) =>
    {
        var appId = Configuration["AppId"];
        var appSecret = Configuration["AppSecret"];
        var authContext = new AuthenticationContext(authority);
        
        var credential = new ClientCredential(appId, appSecret);
        var authResult = await authContext.AcquireTokenAsync(resource, credential);
        return authResult.AccessToken;
    };
    
    var client = new KeyVaultClient(callback);
    return new KeyVaultCrypto(client, Configuration["KeyId"]);
});

In the above code we use an application ID and secret to generate an access token for Key Vault. In other words, the application needs one secret stored outside of Key Vault to gain access to secrets stored inside of Key Vault. In a future post we will assume the application is running inside of Azure and remove the need to know a bootstrapping secret. Otherwise, systems requiring encryption of the bootstrap secret should use a DPAPI library, or for ASP.NET Core, the Data Protection APIs.
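
As a hedged illustration of that last option, the ASP.NET Core Data Protection APIs can protect the bootstrap secret without the application managing its own keys. The purpose string and variable names here are made up:

// Sketch: after calling services.AddDataProtection() in ConfigureServices,
// resolve an IDataProtectionProvider ("provider") from the container.
var protector = provider.CreateProtector("KeyVaultBootstrapSecret");

var protectedValue = protector.Protect(appSecret);        // safe to persist
var originalValue = protector.Unprotect(protectedValue);  // recover at run time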

Summary

Now that we know how to decrypt secrets with private keys in Key Vault, the application no longer needs to store a decryption key for the connection string. 

var encryptedSecret = keyVault.GetSecret("dbcredentials", appId, appSecret);
var connectionString = keyVault.Decrypt(encryptedSecret, decryptionKeyId);

We'll continue discussing this scenario in future posts.

New Course Covers Building Applications on Salesforce

Tuesday, March 6, 2018 by K. Scott Allen

My latest Pluralsight course is “Building Your First Salesforce Application”.

Learn how to work with Salesforce.com by creating custom applications. Start by signing up for a developer account on Salesforce.com, and finish with a full application including reports and dashboards that work on both desktops and mobile devices.

Salesforce is new territory for me, and when the course was announced the questions poured in. Are you moving to the Salesforce platform?

The short answer is no.

The longer answer is that I hear Salesforce in conversations more often. Companies and developers need to either build software on the Salesforce platform or use Salesforce APIs. There was a time when I thought of Salesforce as a customer relationship solution, but in these conversations I also started to hear Salesforce described as a database, as a cloud provider, and as an identity provider. I wanted to find out for myself what features and capabilities Salesforce could offer. I spent some time with a developer account on Salesforce, and when Pluralsight said they needed a beginner course on the topic, I decided to make this course.

I hope you enjoy watching and learning!

Building Your First Salesforce Application

Model Binding in GET Requests

Tuesday, February 27, 2018 by K. Scott Allen

I've seen hand-written parameter validation code inside controller actions for HTTP GET requests. That code is often unnecessary, because the model binder in ASP.NET Core will populate the ModelState data structure with validation information about the input parameters on any type of request.

In other words, model binding isn't just for POST requests with form values or JSON.

Take the following class, for example.

public class SearchParameters
{
    [Required]
    [Range(minimum:1, maximum:int.MaxValue)]
    public int? Page { get; set; }

    [Required]
    [Range(minimum:10, maximum:100)]
    public int? PageSize { get; set; }

    [Required]
    public string Term { get; set; }
}

We'll use the class in the following controller.

[Route("api/[controller]")]
public class SearchController : Controller
{
    [HttpGet]
    public IActionResult Get(SearchParameters parameters)
    {
        if (!ModelState.IsValid)
        {
            return BadRequest(ModelState);
        }

        var model = // build the model ...
        return new OkObjectResult(model);
    }
}

Let's say a client sends a GET request to /api/search?pageSize=5000. We don't need to write validation code for the inputs inside the action; all we need to do is check model state. For this request, the action above returns a 400 (Bad Request) response with the following body:

{ 
  "Page":["The Page field is required."],
  "PageSize":["The field PageSize must be between 10 and 100."],
  "Term":["The Term field is required."]
}

For the Required validation to activate for Page and PageSize, we need to make these int properties nullable. Otherwise, the runtime assigns a default value of 0 and the Range validation fails, which might be confusing to clients.

Default Values

Give your input model a default constructor to provide default values and you won't need nullable properties or the Required attributes. Of course, this approach only works if you can provide sensible default values for the inputs. 

public class SearchParameters
{
    public SearchParameters()
    {
        Page = 1;
        PageSize = 10;
        Term = String.Empty;
    }
   
    [Range(minimum:1, maximum:int.MaxValue)]
    public int? Page { get; set; }

    [Range(minimum:10, maximum:100)]
    public int? PageSize { get; set; }

    public string Term { get; set; }
}

Byte Arrays and ASP.NET Core Web APIs

Thursday, February 22, 2018 by K. Scott Allen

I’ve decided to write down some of the steps I just went through in showing someone how to create and debug an ASP.NET Core controller. The controller is for an API that needs to accept a few pieces of data, including one piece of data as a byte array. The question asked specifically was how to format data for the incoming byte array.

Instead of only showing the final solution, which you can find if you read various pieces of documentation, I want to show the evolution of the code and a thought process to use when trying to figure out the solution. While the details of this post are specific to sending byte arrays to an API, I think the general process is one to follow when trying to figure out what works for an API, and what doesn’t work.

To start, collect all the information you want to receive into a single class. The class will represent the input model for the API endpoint.

public class CreateDocumentModel
{
    public byte[] Document { get; set; }
    public string Name { get; set; }
    public DateTime CreationDate { get; set; }
}

Before we use the model as an input to an API, we’ll use the model as an output. Getting output from an API is usually easy. Sending input to an API can be a little bit trickier, because we need to know how to format the data appropriately and fight through some generic error messages. With that in mind, we’ll create a simple controller action to respond to a GET request and send back some mock data.

[HttpGet]
public IActionResult Get()
{
    var model = new CreateDocumentModel()
    {
        Document = new byte[] { 0x03, 0x10, 0xFF, 0xFF },
        Name = "Test",
        CreationDate = new DateTime(2017, 12, 27)
    };

    return new ObjectResult(model);
}

Now we can use any tool to see what our data looks like in a response. The following image is from Postman.

[Image: a simple GET request returning the byte array, viewed in Postman]
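
For reference, the body of that response should look roughly like the following, since the four bytes in the mock document base64 encode to "AxD//w==":

{
  "document": "AxD//w==",
  "name": "Test",
  "creationDate": "2017-12-27T00:00:00"
}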

What we see in the response is a string of characters for the “byte array” named document. This is one of those situations where a few years of experience can help. To someone new, the characters look random. To someone who has worked with this type of data before, the trailing “=” on the string is a clue that the byte array was base64 encoded into the response. I’d like to say this part is easy, but there is no substitute for experience. A beginner also has to notice how C# properties in PascalCase map to JSON properties in camelCase, which is another non-obvious hurdle to formatting the input correctly.

Once you’ve figured out that base64 encoding is in play, it’s time to try sending the data back into the API. Before we add any logic, we’ll create a simple echo endpoint we can experiment with.

[HttpPost]
public IActionResult CreateDocument([FromBody] CreateDocumentModel model)
{
    return new ObjectResult(model);
}

With the endpoint in place, we can use Postman to send data to the API and inspect the response. We’ll make sure to set a Content-Type header to application/json, and then fire off a POST request by copying data from the previous response.

[Image: posting the byte array back to the API in Postman]

Voilà!

The model the API returns looks just like the model we sent to the API. Being able to roundtrip the model is a good sign, but we are only halfway through the journey. We now have a piece of code we can experiment with interactively to understand how the code will behave in different circumstances. We want a deeper understanding of how the code behaves because our clients might not always send the model we expect, and we want to know what can go wrong before the wrong things happen.

Here are some questions to ask.

Q: Is a base64 encoded string the only format we can use for the byte array?

A: No. The ASP.NET Core model binder for byte[] also understands how to process a JSON array.

{
    "document": [1, 2, 3, 254],
    "name": "Test input",
    "creationDate": "2017-12-27T00:00:00"
}

Q: What happens if the document property is missing in the POST request?

A: The Document property on the input model will be null.

Q: What happens if the base64 encoding is corrupt, or when using an array, a value is outside the range of a byte?

A: The model input parameter itself will be null.

I’m sure you can think of other interesting questions.

Summary

There are two points I’m making in this post:

1. When trying to figure out how to get some code to work, take small steps that are easy to verify.

2. Once the code is working, it is often worthwhile to spend a little more time to understand why the code works and how the code will behave when the inputs aren’t what you expect.

Managing Azure AD Group Claims in ASP.NET Core

Wednesday, February 21, 2018 by K. Scott Allen

In a previous post we looked at using Azure AD groups for authorization. I mentioned in that post how you need to be careful when pulling group membership claims from Azure AD. In this post we’ll look at the default processing of claims in ASP.NET Core and see how to avoid the overhead of carrying around too many group claims.

The first issue I want to address in this post is the change in claims processing with ASP.NET Core 2.

Missing Claims in the ASP.NET Core 2 OIDC Handler

Dominick Baier has a blog post about missing claims in ASP.NET Core. This is a good post to read if you are using the OIDC services and middleware. The post covers a couple different issues, but I want to call out the “missing claims” issue specifically.

The OIDC options for ASP.NET Core include a property named ClaimActions. Each object in this property’s collection can manipulate claims from the OIDC provider. By manipulate, I mean that all the claim actions installed by default will remove specific claims. For example, there is an action to delete the ipaddr claim, if present. Dom’s post includes the full list.
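
As an illustration (this is not the framework's internal code), an application can register its own delete action for a claim it never wants persisted:

options.ClaimActions.DeleteClaim("ipaddr");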

I think ASP.NET Core is removing claims to reduce cookie bloat. In my experiments, the dozen or so claims dropped by the default settings will reduce the size of the authentication cookies by 1,500 bytes, or just over 30%. Many of the claims, like IP address, don’t have any ongoing value to most applications, so there is no need to store the value in a cookie and pass the value around in every request.

If you want the deleted claims to stick around, there is a hard way and a straightforward way to achieve the goal.

The Top Answer on Stack Overflow Isn’t Always the Best

I’ve seen at least two software projects with the same 20 to 25 lines of code inside. The code originates from a Stack Overflow answer to solve the missing claims issue and explicitly parses all the claims from the OIDC provider.

If you want all the claims, you don’t need 25 lines of code. You just need a single line of code.

services.AddAuthentication()
        .AddOpenIdConnect(options =>
         {
              // this one:
              options.ClaimActions.Clear();
         });

However, make sure you really want all the claims saved in the auth cookie. In the case of AD group membership, the application might only need to know about 1 or 2 groups while the user might be a member of 10 groups. Let’s look at approaches to removing the unused group claims.

Removing Group Claims with a Claims Action

My first thought was to use the collection of ClaimActions on the OIDC options to remove group claims. The collection holds ClaimAction objects, where ClaimAction is an abstract base class in the ASP.NET OAuth libraries. None of the built-in concrete types do exactly what I’m looking for, so here is a new ClaimAction derived class to remove unused groups.

public class FilterGroupClaims : ClaimAction
{
    private string[] _ids;

    public FilterGroupClaims(params string[] groupIdsToKeep) : base("groups", null)
    {
        _ids = groupIdsToKeep;
    }

    public override void Run(JObject userData, ClaimsIdentity identity, string issuer)
    {
        var unused = identity.FindAll(GroupsToRemove).ToList();
        unused.ForEach(c => identity.TryRemoveClaim(c));
    }

    private bool GroupsToRemove(Claim claim)
    {
        return claim.Type == "groups" && !_ids.Contains(claim.Value);
    }
}

Now we just need to add a new instance of this class to the ClaimActions collection and pass in the list of groups we want to keep.

options.ClaimActions.Add(new FilterGroupClaims(
    "c5038c6f-c5ac-44d5-93f5-04ec697d62dc",
    "7553192e-1223-0109-0310-e87fd3402cb7"
));

ClaimAction feels like an odd abstraction, however. It makes no sense for the base class constructor to need both a claim type and claim value type when these parameters go unused in the derived class logic. A ClaimAction is also specific to the OIDC handler in Core. Let’s try this again with a more generic claims transformation in .NET Core.

Removing Group Claims with Claims Transformation

Services implementing IClaimsTransformation in ASP.NET Core are useful in a number of different scenarios. You can add new claims to a principal, map existing claims, or delete claims. For removing group claims, we first need an implementation of IClaimsTransformation.

public class FilterGroupClaimsTransformation : IClaimsTransformation
{
    private string[] _groupObjectIds;

    public FilterGroupClaimsTransformation(params string[] groupObjectIds)
    {
        // note: since the container resolves this service, we could
        // inject a data access class to fetch IDs from a database, 
        // or IConfiguration, IOptions, etc. 

        _groupObjectIds = groupObjectIds;
    }

    public Task<ClaimsPrincipal> TransformAsync(ClaimsPrincipal principal)
    {
        var identity = principal.Identity as ClaimsIdentity;
        if (identity != null)
        {
            var unused = identity.FindAll(GroupsToRemove).ToList();
            unused.ForEach(c => identity.TryRemoveClaim(c));
        }
        return Task.FromResult(principal);
    }

    private bool GroupsToRemove(Claim claim)
    {
        return claim.Type == "groups" &&
               !_groupObjectIds.Contains(claim.Value);
    }
}

Register the transformer during ConfigureServices in Startup, and the unnecessary group claims disappear.
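
One way to register it in ConfigureServices (reusing the example group IDs from earlier) might be:

services.AddSingleton<IClaimsTransformation>(sp =>
    new FilterGroupClaimsTransformation(
        "c5038c6f-c5ac-44d5-93f5-04ec697d62dc",
        "7553192e-1223-0109-0310-e87fd3402cb7"));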


Summary

Group claims are not difficult to use with Azure Active Directory, but you do need to take care in directories where users are members of many groups. Instead of fetching the group claims from Azure AD during authentication like we've done in the previous post, one could change the claims transformer to fetch a user’s groups using the Graph API and add only the claims for groups the application needs.