Over the years I’ve noticed that application startup code tends to attract smaller bits of code in the same way that a protostar accretes cosmic material until reaching the point where nuclear fusion begins. I’ve seen this happen in the main function of C programs, and (back when we never had enough HRESULTs to hold the HINSTANCEs of our HWINDOWs), in the WinMain function of C++ programs. I’ve also seen this happen inside of global.asa in classic ASP, and in global.asax.cs for ASP.NET. It’s as if we say to ourselves, "I only have two new lines of code to execute when the program starts up, so what could it hurt to jam these two lines in the middle of the 527 method calls we already have in the startup function?"
This post is a plea to avoid nuclear fusion in the Startup and Program files of ASP.NET Core.
There is a lengthy list of startup tasks for modern server applications. Warm up the cache, create the connection pool, configure the IoC container, instantiate the logging sinks, and all of this happens before you get to the actual business of application startup. In ASP.NET Core, I used to see most of this logic end up inside of Startup.cs. Some of this code is moving over to Program.cs as developers start to recognize Program.Main as a usable entry point for a web application.
The next few opinionated posts will discuss strategies for organizing startup and entry code, and look at approaches you can use for clean startup code.
To get started, let’s talk about the Startup class in ASP.NET Core. If you believe every class should have a single responsibility, then it is easy to think the Startup class should manage all startup tasks. But, as I’ve already pointed out, there is a lot of work to do when starting up today’s apps. The Startup class, despite its name, should not take responsibility for any of these tasks.
In Startup, there are three significant methods with specific, limited assignments:
The constructor, where you can inject dependencies to use in the other methods.
ConfigureServices, where you can set up the service provider for the app.
Configure, which should be limited to arranging middleware components in the correct order.
public class Startup
{
    public Startup(IConfiguration configuration)
    {
        // ...
    }

    public void ConfigureServices(IServiceCollection services)
    {
        // ...
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        // ...
    }
}
Of course, you can also have environment specific methods, like ConfigureDevelopment and ConfigureProduction, but the rules remain the same.
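For example, a minimal sketch of the environment-specific convention might look like the following (the middleware calls here are placeholders I picked for illustration, not code from the post); the hosting layer prefers ConfigureDevelopment over Configure when the environment name is Development:

public class Startup
{
    // Used only when the environment name is Development; the job is
    // still limited to arranging middleware.
    public void ConfigureDevelopment(IApplicationBuilder app, IHostingEnvironment env)
    {
        app.UseDeveloperExceptionPage();
        app.UseStaticFiles();
        app.UseMvc();
    }

    // Used for every other environment (or add a ConfigureProduction
    // method to be explicit).
    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        app.UseExceptionHandler("/Error");
        app.UseStaticFiles();
        app.UseMvc();
    }
}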
Initialization tasks like warming the cache don’t need to live in Startup, which already has enough to do. Where should these tasks go? That’s the topic for the next post.
I’ve kept most of my workshop and conference materials in a private GitHub repository for years. I recently made the repository public and added a CC-BY-4.0 license. The material includes slides and hands-on labs. Some of the workshops are old (you’ll find some WinJS material inside [shudder]), but many of the workshops have aged well – C#, LINQ, and TDD are three workshops I could open and teach today. Other material, like the ASP.NET Core workshop, has been updated recently. Actually, I think the ASP.NET Core material is the most practical and value-focused technology workshop I’ve ever put together.
Ten years ago, Pluralsight decided to stop instructor-led training and go 100% into video courses. As an author, I was happy to make video courses, but I also wanted to continue meeting students in face-to-face workshops. I still prefer workshops to conference sessions. I started making my own workshop material and ran classes under my own name and brand. Over the last 10 years I’ve been fortunate to work with remarkable teams from Mountain View in Silicon Valley, to Hyderabad, India, and many places in between. Four years ago on this day, actually, I was in Rotkreuz, Switzerland, where I snapped the following picture on the way to lunch – one of numerous terrific meals I’ve shared with students over the years.
The memory of being driven through the snowy forests of Switzerland is enough to spike my wanderlust, which for several reasons I now need to temper. I still enjoy the workshops and conferences, and seeing good friends, but I don’t need the stress and repetition of traveling and performing more than a few times a year. If I see a place I’d like to visit, I’m in the privileged position of being able to go without needing work as an excuse. For that, I’m thankful that Pluralsight decided to go all-in with video training.
I don’t advertise my workshops or publicize the fact that I offer training for sale. I still receive regular requests for private training, and again I am lucky to choose where I want to go. Conferences still ask me for workshops, but conferences can also be political and finicky (thanks to Tibi and Nick P for being notable exceptions).
What I’m saying is that I’m not using my workshop material enough to justify keeping the material private. Besides, training on some of my favorite topics is a commodity these days. Everyone does ASP.NET Core training, for example. And, if there is one lesson I’ve learned from years of training in person and on video, it’s that the training materials are not the secret sauce that can make for a great workshop. The secret sauce is the teacher.
Maybe someone else can find something useful to do with this stuff.
Imagine you have a unit test that depends on an environment variable.
[Fact]
public void CanGetMyVariable()
{
    var expected = "This is a test";
    var actual = Environment.GetEnvironmentVariable("MYVARIABLE");
    Assert.Equal(expected, actual);
}
Of course, the dependency might not be so explicit. You could have a test that calls into code that calls some other code, and the other code needs an environment variable. Or, maybe you have a script or tool that needs an environment variable. The question is - how do you set up environment variables in a DevOps pipeline?
The answer is easy - when a pipeline executes, Azure will place all pipeline variables into environment variables, so any tools, scripts, tasks, or processes you run as part of the build can access parameters through the environment.
In other words, I can make the above test pass by defining a variable in my pipeline YAML definition:
resources:
- repo: self

variables:
  MyVariable: 'This is a test'

pool:
  vmImage: vs2017-win2016

steps:
- task: DotNetCoreCLI@2
  displayName: Test
  inputs:
    command: test
    projects: '**/*[Tt]ests/*.csproj'
    arguments: '--configuration $(BuildConfiguration)'
Or, define the variable in the DevOps pipeline GUI:
Also included are the built-in variables, like Build.BuildNumber and System.AccessToken. Just be aware that the variable names you use to reference these parameters can depend on the context. See Build Variables for more details.
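For example (a small sketch, not from the original post), inside a pipeline a .NET tool or test can read Build.BuildNumber through the environment, where the variable name is upper-cased and dots become underscores:

// Assumes the code runs inside an Azure DevOps pipeline; when running
// locally the variable will not exist and GetEnvironmentVariable returns null.
var buildNumber = Environment.GetEnvironmentVariable("BUILD_BUILDNUMBER");
Console.WriteLine($"Build number: {buildNumber ?? "(not running in a pipeline)"}");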
In a previous post I said to be wary of GUI build tools. In this episode of .NET Core Opinions, let me show you a "configuration as code" approach to building software using Azure DevOps.
Instead of the trivial one-project demo you’ll see everywhere in the 5-minute demos for DevOps, let’s build a system that consists of:
An ASP.NET Core project that will run as a web application
A Go console project that will run once a week as a web job
An Azure Functions project
Let’s also add some constraints to the scenario (and address some common questions I receive).
We need to deploy the web and Go applications into the same App Service.
We need to deploy the functions project into a second App Service that runs on a consumption plan.
The first step in using YAML for builds is to select the YAML option when creating a new pipeline instead of selecting from the built-in templates that give you a graphical build definition. I would post more screenshots of this process, but honestly, the UI will most likely iterate and change before I finish this post. Look for “YAML” in the pipeline options, then click a button with affirmative text.
I should mention that the graphical build definitions are still valuable, even though you should avoid using them to define your actual build pipelines. You can fiddle with a graphical build, and at any time click on the "View YAML" link at the pipeline or individual task level.
I found this toggle view useful for migrating to YAML pipelines, because I could look at a working build and see what YAML I needed to replicate the process. In other words, migrating an existing pipeline to YAML is easy.
Once you get a feel for how YAML pipelines work, the docs, particularly the YAML snippets in the tasks docs, give you everything you need. Also, there is an extension for VS Code that provides syntax highlighting and IntelliSense for Pipelines YAML.
The YAML you’ll create will describe all the repositories, containers, triggers, jobs, and steps needed for a build. You can check the file into your source code repository, then version and diff your builds!
The essential building blocks for a pipeline are tasks. These are the same tasks you’d arrange in a list when defining a build using the GUI tools. In YAML, each task consists of the task name and version, then the task parameters. For example, to build all .NET Core projects across all folders in Release mode, run the DotNetCoreCLI task (currently version 2), which will run dotnet with a default command parameter of build.
- task: DotNetCoreCLI@2
  displayName: 'Build All .NET Core Projects'
  inputs:
    projects: '**/*.csproj'
    arguments: '-c Release'
Ultimately, you want to run dotnet publish on ASP.NET Core projects. In YAML, the task looks like:
- task: DotNetCoreCLI@2
  displayName: 'Publish WebApp'
  inputs:
    command: publish
    arguments: '-c Release'
    zipAfterPublish: false
Notice the zipAfterPublish setting is false. In builds where a repo contains various projects intended for multiple destinations, I prefer to move files around in staging areas and then create zip files in explicit steps. We’ll see those steps later*.
I’m throwing in the Go steps because I have a Go project in the mix, but I also want to demonstrate how Azure Pipelines and Azure DevOps are platform agnostic. The platform wants to provide DevOps and continuous delivery for every platform and every language. Building a Go project was easy with the built-in Go task.
- task: Go@0
  displayName: 'Install Go Deps'
  inputs:
    arguments: '-d'
    command: get
    workingDirectory: '$(System.DefaultWorkingDirectory)\cmd\goapp'

- task: Go@0
  displayName: 'go build'
  inputs:
    command: build
    arguments: '-o cmd\goapp\app.exe cmd\goapp\main.go'
The first step is go get, which is like using dotnet restore in .NET Core. The second step is building a native executable from the entry point of the Go app in a file named main.go.
If you want to use Azure Functions and the C# language, then I believe Functions 2.0 is the only way to go. The 1.0 runtime works, but 1.0 is not as mature when it comes to building, testing, and deploying code. Building a 2.0 project (dotnet build) places everything you need to deploy the functions into the output folder. There is no dotnet publish step needed.
Once all the projects are built, the assemblies and executables associated with each project are on the file system. This is the point where I like to start moving files around to simplify the steps where the pipeline creates release artifacts. Release artifacts are the deployable bits, and it makes sense to create multiple artifacts if a system needs to deploy to multiple resources. Based on the requirements I listed at the beginning of the post, we are going to need the build pipeline to produce two artifacts, like so:
The first step is getting the files into the proper structure for artifact 1, which is the web app and the Go application combined. The Go application will execute on a schedule as an Azure Web Job. It is interesting how many people have asked me over the years how to deploy a web job with a web application. The key to the answer is to understand that Azure uses simple conventions to identify and execute web jobs that live inside an App Service. You don’t need to use the Azure portal to set up a web job, or find an obscure command on the CLI. You only need to copy the Web Job executable into the right folder underneath the web application.
- task: CopyFiles@2
  displayName: 'Copy Go App to WebJob Location'
  inputs:
    SourceFolder: cmd\goapp
    TargetFolder: WebApp\bin\Release\netcoreapp2.1\publish\App_Data\jobs\triggered\app
Placing the Go .exe file underneath App_Data\jobs\triggered\app, where app is whatever name you want for the job, is enough for Azure to find the web job. Inside this folder, a settings.job file can provide a cron expression to tell Azure when to run the job. In this case, 8 am every Monday:
{"schedule": "0 0 8 * * MON"}
The final steps consist of zipping up files and folders containing the project outputs, and publishing the two zip files as artifacts. Remember one artifact contains the published web app output and the web job, while the second artifact consists of the build output from the Azure Functions project. The ArchiveFiles and PublishBuildArtifacts tasks in Azure do all the work.
- task: ArchiveFiles@2
  displayName: 'Archive WebApp'
  inputs:
    rootFolderOrFile: WebApp\bin\Release\netcoreapp2.1\publish
    includeRootFolder: false
    archiveFile: WebApp\bin\Release\netcoreapp2.1\WebApp.zip

- task: ArchiveFiles@2
  displayName: 'Archive Function App'
  inputs:
    rootFolderOrFile: FunctionApp\bin\Release\netcoreapp2.1
    includeRootFolder: false
    archiveFile: FunctionApp\bin\Release\netcoreapp2.1\FunctionApp.zip

- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact: WebApp'
  inputs:
    PathtoPublish: WebApp\bin\Release\netcoreapp2.1\WebApp.zip
    ArtifactName: WebApp

- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact: FunctionApp'
  inputs:
    PathtoPublish: FunctionApp\bin\Release\netcoreapp2.1\FunctionApp.zip
    ArtifactName: FunctionApp
Currently, YAML is not available for building a release pipeline, but the roadmap says the feature is coming soon. However, since we arranged the artifacts to simplify the release pipeline, all you should need to do is feed the artifacts into Deploy Azure App Service tasks. Remember that function projects, even on a consumption plan, deploy just like a web application, but, like web jobs, they use some conventions around naming and directory structure to indicate the bits are for a function app. The build output of the function project will already have the right files and directories in place.
Having build pipelines defined in a textual format makes the pipeline easier to modify and easier to version over time.
Unfortunately, this YAML approach only works in Azure. There is no support for running, testing, or troubleshooting a YAML build locally or in development. For systems with any amount of complexity, you will be in better shape if you automate the build using command line scripts, or a build system like Cake. Then you can run your builds both locally and in the cloud. Remember, your developer builds need to be every bit as consistent and well defined as your production builds if you want a productive, happy team.
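To make that idea concrete, here is a rough sketch of what a Cake script might look like for this kind of system (the solution and test project paths are placeholders I made up, not paths from the post); the same script runs from a developer’s command line and from a single pipeline step:

// build.cake - a minimal sketch, not the actual build for the system in this post
var target = Argument("target", "Default");
var configuration = Argument("configuration", "Release");

Task("Build")
    .Does(() =>
    {
        // Placeholder solution path
        DotNetCoreBuild("./MySystem.sln", new DotNetCoreBuildSettings
        {
            Configuration = configuration
        });
    });

Task("Test")
    .IsDependentOn("Build")
    .Does(() =>
    {
        // Placeholder test project path
        DotNetCoreTest("./tests/MySystem.Tests/MySystem.Tests.csproj", new DotNetCoreTestSettings
        {
            Configuration = configuration,
            NoBuild = true
        });
    });

Task("Default")
    .IsDependentOn("Test");

RunTarget(target);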
* Note that I’ve simplified the YAML in the code samples by removing actual project names and “shortening” the directory structure for the project.
From Wikipedia: The law of the instrument is a cognitive bias that involves an over-reliance on a familiar tool.
The software industry is good at adopting standard data formats, then hyping and ultimately abusing those formats by pushing them into every conceivable nook and cranny of the trade.
Take XML, for example. What started as a simple, usable, textual data format became the lingua franca for enterprise web services. Along the way, we also used XML to build configuration files, replace make files, and implement more than a dozen significant UI frameworks.
Ten years ago, I was working with Microsoft’s triumvirate crown jewels of XML technology – WCF, WPF, and WF (Windows Workflow). Workflow is a good example of how XML transformed itself into a bloviating monster of complexity.
Example A: How to represent the integer value 5,000 in WF markup:
<ns0:CodePrimitiveExpression>
  <ns0:CodePrimitiveExpression.Value>
    <ns1:Int32 xmlns:ns1="clr-namespace:System;Assembly=mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
      5000
    </ns1:Int32>
  </ns0:CodePrimitiveExpression.Value>
</ns0:CodePrimitiveExpression>
Use XML, they said. It’s human readable, they said.
A few weeks ago, I became interested in cloud governance. Not an exciting topic, but when a company turns developers loose in the cloud, someone must enforce some basic rules about what services to use, and ensure the settings are in place to make the services as secure as possible. For Azure, I looked at using Azure Policy. It was obvious that I’d need to write some custom rules for Policy. To see how rule authoring works, I looked at a built-in rule that checks if data encryption is on for an Azure SQL instance:
{ "if": { "allOf": [ { "field": "type", "equals": "Microsoft.Sql/servers/databases" }, { "field": "name", "notEquals": "master" } ] }, "then": { "effect": "[parameters('effect')]", "details": { "type": "Microsoft.Sql/servers/databases/transparentDataEncryption", "name": "current", "existenceCondition": { "allOf": [ { "field": "Microsoft.Sql/transparentDataEncryption.status", "equals": "enabled" } ] } } } }
So, in the last 10 years, programming has evolved from writing syntax trees in XML to writing syntax trees in JSON. Actually, software has evolved from doing everything in XML to doing everything in JSON - web services, configuration files, build systems, database storage, and more. JSON’s ubiquity is surprising given that JSON doesn’t have any of the same flexibility and extensibility features as XML.
Or, as @kellabyte recently said:
How is it developers killed the use of XML for JSON, a format that has no ability to add comments?
— Kelly Sommers (@kellabyte) January 28, 2019
What developers decided this was a good idea? A format which half the lines in the editor have 1 character in them.
Yes, I know. I lived through the AJAX days when everyone said JSON was the future. Reason being that browsers and mobile devices didn't come with XML parsers by default.
Web browsers and mobile devices today carry AI for facial recognition, support headsets for augmented reality, stream 4K video, and execute GPU accelerated 3D animations.
But parse XML documents? Nobody wants to walk in that minefield.
JSON is the easy choice.
I can understand the argument for easy. I imagine trying to design Azure Policy and thinking about implementation details for custom rules.
Can we force everyone to use PowerShell? No!
JavaScript? How to sandbox the execution?
Wait, I got it – describe the code in JSON! Everyone knows JSON. All the tools support JSON. The answer must be JSON.
I’ve been down a similar road before. Years ago, I needed to create a custom ETL tool to move data between relational databases as fast as possible. At the time, Microsoft’s offering from SQL Server was SSIS [1] (SQL Server Integration Services). I spent a couple of days with SSIS and decided it was not an appropriate choice for our specific scenario. The XML was complex, hard to version, hard to test, hard to debug, slow to execute, yielded unhelpful error messages, and made a team wholly dependent on UI tools that ship with SQL Server. Not to mention, SSIS wouldn’t easily scale to meet the thousands of packages and package variations we needed to support. I had been down that road before with Microsoft’s previous ETL tool (DTS – Data Transformation Services) and vowed never again.
Once I decided to build my own ETL tool, I needed to decide on the language for the tool. My first attempt, which survived in the wild for a brief time, relied on Ruby and a fluent API. The second attempt tried to simplify things. I needed SQL, but a way to surround the SQL with metadata.
Why not use XML? Everyone knows XML, and all the tools support XML. XML is easy!
The result used "packages" that looked something like the following [2]:
<Package Type="Arracher.Core.Packages.BulkLoadPackage"> <Name>Load Movies</Name> <Source>LiveDb</Source> <Destination>RapidDb</Destination> <DestinationTable>tbl_movies</DestinationTable> <Query> <![CDATA[ DECLARE @StartDate datetime SET @StartDate = '@{StartDate}' SELECT CONVERT(varchar(1),NULL) As title, release_date as releasedon FROM dbo.movies WHERE release_date > @StartDate ]]> </Query> </Package>
Is it the best tool in the world? No.
But, packages are easy to version, easy to diff, easy to edit, author, and test, and best of all – a DBA can open the file, copy out the SQL, tweak the SQL, and paste the SQL back in without any special editing tools and a minimal amount of fuss. The SQL can be as complicated as SQL can be.
The tool works, I believe, because the XML doesn’t get in the way. That’s the problem you can run into when you embed one language in another – one of the languages will get in the way.
Azure Policy tries to embed Boolean rules inside of JSON, but to me, the JSON only gets in the way. It’s like embedding SQL code inside of C# or Java code – some language combinations are hard on the eyes, which makes the result hard to write, and impossible to read. You can’t just skim the code to get an idea of what might happen.
With policy, the more complex the expression, the more unreadable the code. The solution is error prone, hard to test, and therefore not scalable.
Here’s @kellabyte again, this time on embedding in JSON:
You’re right JSON fans. Of course this is a beautiful way to write a query language in JSON over HTTP. What was I thinking? pic.twitter.com/AP9eTZmGDT
— Kelly Sommers (@kellabyte) January 29, 2019
This is one of the reasons why LINQ is so compelling in C#. There’s no embedding – I have C# code inside of C# code, but some of the C# code might just translate itself into a different language and execute in a different process on some distant machine.
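For example, a query like the following sketch (Movie, context, and cutoff are hypothetical names I invented, not code from the post) is ordinary C#, yet a LINQ provider can turn the expression tree into SQL and run it on a remote database:

// Hypothetical example: "context.Movies" is an IQueryable<Movie> exposed
// by some LINQ provider (an ORM, for instance). The provider translates
// this expression into SQL; no SQL text is embedded in the C# code.
var recentFavorites = context.Movies
    .Where(m => m.ReleaseDate > cutoff && m.Rating >= 4)
    .OrderByDescending(m => m.ReleaseDate)
    .Select(m => new { m.Title, m.ReleaseDate })
    .ToList();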
Despite what you might think of ORMs, or the hidden performance costs of LINQ, the feature still amazes me every time I see a query. I had the same sense of excitement the first time I ran across Gremlin-Python, too.
Truthfully, I wrote this post to organize my thoughts around Azure Policy.
Do I want to take a pass on what comes in the box with Azure? I think so.
Can I rationalize a custom solution? I think so.
I can invent my own tool for governance audits and make the rules easier to author, change, and test, as well as be more flexible.
And just for fun, I’ll write the tool in Go.
I’ll rationalize that decision in another post.
[1] The current offering for ETL (keep in mind ETL is a legacy term for a legacy form of processing) from Microsoft is Azure Data Factory (ADF). We author ADF packages in JSON. ADF v2 supports calling into SSIS XML packages. This is like candy built from data formatting standards - a hard JSON shell surrounds a creamy-sweet XML interior.
[2] I named the tool Arracher – French for “rip out”, or extract.
I’ve always admired languages that make composition as easy as inheritance. Groovy, with its @Delegate, is one example from years ago. These days I’ve been working a bit with Go. In Go, the composition features are so good you can achieve everything you need using compositional thinking.
We’ll use waterfowl as a silly example. Here’s a duck in Go.
type Duck struct {
    ID   int64
    Name string
}
I can also define a method on the Duck struct.
func (d *Duck) Eat() {
    fmt.Printf("Duck %s eats!\n", d.Name)
}
Later I might need a Mallard. A Mallard is very much like a Duck, but with a Mallard I need to track the color, too. Go doesn’t have inheritance, but embedding is elegant.
type Mallard struct {
    Duck
    Color string
}
Given a Mallard, I can reach into the ID and Name fields, or refer directly to the embedded Duck.
duck := new(Duck)
duck.ID = 1
duck.Name = "Pickles"

mallard := new(Mallard)
mallard.Color = "Green"

// copy info:
mallard.Name = duck.Name
mallard.ID = duck.ID

// or even better:
mallard.Duck = *duck
And yes, I can define a method for Mallards.
func (m *Mallard) Sleep() {
    fmt.Printf("Mallard %s sleeps!\n", m.Name)
}
A mallard can now both eat and sleep.
mallard.Eat()
mallard.Sleep()

// Duck Pickles eats!
// Mallard Pickles sleeps!
The end result looks like inheritance, if you wear object-oriented glasses, but the mindset is entirely composition.
Some software is easier to understand if you remove the software from its usual environment and try some experiments. ASP.NET security components, for example. What is the impact of having multiple authentication schemes? Why does a ClaimsPrincipal have multiple identities? What does it mean to SignOutAsync on an HttpContext?
You’ll never use the following code in a real application. But, you might use this code to tinker and experiment.
First, we’ll set up two cookie authentication schemes during ConfigureServices – cookie1 and cookie2.
services.AddAuthentication(options =>
{
    options.DefaultScheme = "cookie1";
})
.AddCookie("cookie1", "cookie1", options =>
{
    options.Cookie.Name = "cookie1";
    options.LoginPath = "/loginc1";
})
.AddCookie("cookie2", "cookie2", options =>
{
    options.Cookie.Name = "cookie2";
    options.LoginPath = "/loginc2";
});
Next, we’ll add some middleware that allows for identity sign-in and sign-out without getting bogged down in password validations.
app.Use(next =>
{
    return async ctx =>
    {
        switch (ctx.Request.Path)
        {
            case "/loginc1":
                var identity1 = new ClaimsIdentity("cookie1");
                identity1.AddClaim(new Claim("name", "Alice-c1"));
                await ctx.SignInAsync("cookie1", new ClaimsPrincipal(identity1));
                break;

            case "/loginc2":
                var identity2 = new ClaimsIdentity("cookie2");
                identity2.AddClaim(new Claim("name", "Alice-c2"));
                await ctx.SignInAsync("cookie2", new ClaimsPrincipal(identity2));
                break;

            case "/logoutc1":
                await ctx.SignOutAsync("cookie1");
                break;

            case "/logoutc2":
                await ctx.SignOutAsync("cookie2");
                break;

            default:
                await next(ctx);
                break;
        }
    };
});

app.UseAuthentication();
Now it’s time for the experiments. What happens when trying to reach pages or controllers with the following attributes?
[Authorize]
[Authorize(AuthenticationSchemes ="cookie1")]
[Authorize(AuthenticationSchemes ="cookie2")]
[Authorize(AuthenticationSchemes ="cookie1, cookie2")]
When visiting those resources, it’s educational to dump out what we know about the user given the authorize conditions, and how the output changes if we change the default auth scheme.
<h2>User</h2>

@foreach (var identity in User.Identities)
{
    <div>Authentication Type: @identity.AuthenticationType</div>
    <table class="table">
        @foreach (var claim in identity.Claims)
        {
            <tr>
                <td>@claim.Type</td>
                <td>@claim.Value</td>
            </tr>
        }
    </table>
}
I’ve also found it useful, even in real applications, to have a page that dumps out information about the available authentication schemes. Quite often the setup is obscured by helpful extension methods we use inside of ConfigureServices. A page model like the following will grab the information.
public class AuthDumpModel : PageModel
{
    private readonly AuthenticationService authenticationService;

    public AuthDumpModel(IAuthenticationService authenticationService)
    {
        this.authenticationService = (AuthenticationService)authenticationService;
    }

    public IEnumerable<AuthenticationScheme> Schemes { get; set; }
    public AuthenticationScheme DefaultAuthenticate { get; set; }
    public AuthenticationScheme DefaultChallenge { get; set; }
    public AuthenticationScheme DefaultForbid { get; set; }
    public AuthenticationScheme DefaultSignIn { get; set; }
    public AuthenticationScheme DefaultSignOut { get; set; }

    public async Task OnGet()
    {
        Schemes = await authenticationService.Schemes.GetAllSchemesAsync();
        DefaultAuthenticate = await authenticationService.Schemes.GetDefaultAuthenticateSchemeAsync();
        DefaultChallenge = await authenticationService.Schemes.GetDefaultChallengeSchemeAsync();
        DefaultForbid = await authenticationService.Schemes.GetDefaultForbidSchemeAsync();
        DefaultSignIn = await authenticationService.Schemes.GetDefaultSignInSchemeAsync();
        DefaultSignOut = await authenticationService.Schemes.GetDefaultSignOutSchemeAsync();
    }
}
And now we can see what’s installed, and where the defaults lead.
<h2>Auth Schemes</h2>

<table class="table">
    <tr>
        <th>DisplayName</th>
        <th>Name</th>
        <th>Type</th>
    </tr>
    @foreach (var scheme in Model.Schemes)
    {
        <tr>
            <td>@scheme.DisplayName</td>
            <td>@scheme.Name</td>
            <td>@scheme.HandlerType</td>
        </tr>
    }
</table>

<div>DefaultAuthenticate: @Model.DefaultAuthenticate.Name</div>
<div>DefaultForbid: @Model.DefaultForbid.Name</div>
<div>DefaultSignIn: @Model.DefaultSignIn.Name</div>
<div>DefaultSignOut: @Model.DefaultSignOut.Name</div>