OdeToCode Videos

Tuesday, May 13, 2014 by K. Scott Allen
5 comments

I’ve started a collection of videos here on the site, beginning with short clips about the next version of JavaScript – ECMAScript 6. Currently the collection includes:

  • Template strings
  • Rest parameters
  • Default parameter values
  • The spread operator

Topics coming soon include classes and arrow functions (my favorite feature, for now).
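For quick reference, the four features above can be sketched in a few lines of code. These snippets (and the names and values in them) are my own illustrations, not taken from the videos:

```javascript
// Template strings: backticks allow ${} interpolation.
var name = "ES6";
var greeting = `Hello, ${name}!`;

// Default parameter values: b falls back to 2 when omitted.
function multiply(a, b = 2) {
  return a * b;
}

// Rest parameters: gather the remaining arguments into a real array.
function sum(...numbers) {
  return numbers.reduce(function (total, n) { return total + n; }, 0);
}

// The spread operator: expand an array into individual arguments.
var values = [1, 2, 3];
var total = sum(...values);
```

All four features run in any browser or runtime with ES6 support.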

Tips For Working With Windows Azure Media Services

Thursday, May 8, 2014 by K. Scott Allen
1 comment

I’ve been doing some work with Windows Azure Media Services and making progress, although it takes some time and experimentation to work through the vocabulary of the API, documentation, and code snippets.

1. Uploading and encoding video into media services can be completed programmatically using the CloudMediaContext class from the NuGet package WindowsAzure.MediaServices.

2. Uploading creates an asset in media services. Each asset can contain multiple files, but you only want one video or audio file in the uploaded asset. WAMS will create a container in blob storage for each asset, so it seems best to create a new storage account dedicated to each media service.

3. For encoding, you need to select a media processor by name, and perhaps a preset configuration by name. You can list the processor names with a bit of C# code; the values I currently see are:

        Windows Azure Media Encoder 3.7
        Windows Azure Media Packager 2.8
        Windows Azure Media Encryptor 2.8
        Windows Azure Media Encoder 2.3
        Storage Decryption 1.7

Preset names took some digging around, but I eventually found a complete list for the Windows Azure Media Encoder at Media Services Encoder System Presets.

What follows is a class that wraps CloudMediaContext and can list assets including the files inside, upload an asset, encode an asset, and list the available media processors. It is experimental code that assumes it is working inside a console application, but that behavior is easy to refactor out. Some of the LINQ queries are strange, but they work around the wonkiness of OData.

public class MediaService
{
    public MediaService()
    {            
        _context = new CloudMediaContext(
                Configuration.AzureAccountName, 
                Configuration.AzureAccountKey
            );
    }

    public void Upload(string filePath)
    {
        var assetName = Path.GetFileNameWithoutExtension(filePath) + "_i";
        var asset = _context.Assets.Create(assetName, AssetCreationOptions.None);

        var assetFileName = Path.GetFileName(filePath);
        var assetFile = asset.AssetFiles.Create(assetFileName);
        assetFile.UploadProgressChanged += (sender, args) => 
            Console.WriteLine("Up {0}:{1}", assetName, args.Progress);
        assetFile.Upload(filePath);
    }

    public void Encode(string filePath)
    {
        var assetName = Path.GetFileNameWithoutExtension(filePath) + "_i";
        var asset = GetAsset(assetName);
        var job = _context.Jobs.Create("Encoding job " + assetName);
        var processor = GetMediaProcessor();
        var task = job.Tasks.AddNew("Encoding task " + assetName,
                        processor, Configuration.PresetName, TaskOptions.None);
        task.InputAssets.Add(asset);
        task.OutputAssets.AddNew(assetName + "_o", AssetCreationOptions.None);

        job.StateChanged += (sender, args) => 
            Console.WriteLine("Job: {0} {1}", job.Name, args.CurrentState);
        job.Submit();
        
        var progress = job.GetExecutionProgressTask(CancellationToken.None);
        progress.Wait();
    }

    public void ListMedia()
    {
        foreach (var asset in _context.Assets)
        {
            Console.WriteLine("{0}", asset.Name);
            foreach (var file in asset.AssetFiles)
            {
                Console.WriteLine("\t{0}", file.Name);
            }
        }
    }

    public void ListMediaProcessors()
    {
        Console.WriteLine("Available processors are:");
        foreach (var processor in _context.MediaProcessors)
        {
            Console.WriteLine("\t{0} {1}", processor.Name, processor.Version);
        }
        }
    }

    IMediaProcessor GetMediaProcessor()
    {
        var processors = _context.MediaProcessors
                                 .Where(p => p.Name == Configuration.EncoderName)
                                 .ToList() // OData can't order by Version, so sort on the client
                                 .OrderByDescending(p => new Version(p.Version));
                                 
        if (!processors.Any())
        {
            Console.WriteLine("Could not find processor {0}", Configuration.EncoderName);
            ListMediaProcessors();
            Environment.Exit(-1);
        }
        return processors.First();
    }        

    IAsset GetAsset(string name)
    {
        var assets = _context.Assets.Where(a => a.Name == name).ToList();
        if (!assets.Any())
        {
            Console.WriteLine("Could not find asset {0}", name);
            Environment.Exit(-1);
        }
        return assets.First();
    }

    readonly CloudMediaContext _context;
}

The above class also assumes you have a Configuration class that reads values like the following out of appSettings.

<appSettings>
    <add key="accountName" value="media services account name"/>
    <add key="accountKey" value="media services key"/>
    <add key="encoderName" value="Windows Azure Media Encoder"/>
    <add key="presetName" value="H264 Broadband SD 4x3"/>
</appSettings>

Using $compile in Angular

Wednesday, May 7, 2014 by K. Scott Allen
0 comments

Creating a custom directive in AngularJS is easy. Let’s start with the HTML for a simple example.

{{ message }}
<div otc-dynamic></div>

The above markup is using a directive named otcDynamic, which only provides a template.

app.directive("otcDynamic", function(){
   return {
       template:"<button ng-click='doSomething()'>{{label}}</button>"
   };
});

When combined with a controller, the presentation will allow the user to click a button to see a message appear on the screen.

app.controller("mainController", function($scope){

    $scope.label = "Please click";
    $scope.doSomething = function(){
      $scope.message = "Clicked!";
    };

});

Make It Dynamic

Next, imagine the otcDynamic directive can’t use a static template. The directive needs to look at some boolean flags, user data, or service information, and dynamically construct the template markup. In the following example, we’ll only simulate this scenario. We are still using a static string, but we’ll pretend we created the string dynamically and use element.html to place the markup into the DOM.

app.directive("otcDynamic", function(){
    return {
        link: function(scope, element){
            element.html("<button ng-click='doSomething()'>{{label}}</button>");
        }
    };
});

The above sample no longer functions correctly and will only render a button displaying the literal text {{label}} to a user.

Markup has to go through a compilation phase for Angular to find and activate directives like ng-click and {{label}}.

Compilation

Compilation is the job of the $compile service. Invoking $compile against markup produces a function you can use to bind the markup against a particular scope (what Angular calls a linking function). After linking, you’ll have DOM elements you can place into the browser.

app.directive("otcDynamic", function($compile){
    return{
        link: function(scope, element){
            var template = "<button ng-click='doSomething()'>{{label}}</button>";
            var linkFn = $compile(template);
            var content = linkFn(scope);
            element.append(content);
        }
    }
});

If you have to $compile in response to a DOM event, like a click, or from other non-Angular code, you’ll need to invoke $apply so the scope lifecycle runs properly.

app.directive("otcDynamic", function($compile) {
    
    var template = "<button ng-click='doSomething()'>{{label}}</button>";
    
    return{
        link: function(scope, element){
            element.on("click", function() {
                scope.$apply(function() {
                    var content = $compile(template)(scope);
                    element.append(content);
                });
            });
        }
    }
});

Dear Lenovo

Tuesday, May 6, 2014 by K. Scott Allen
18 comments

Long time buyer, first time writer.

Over the years I’ve struggled each time I’ve decided it’s time to buy a new ThinkPad. I’ve struggled because it used to be difficult to choose from so many solid entries in the T, X, and W lines.

These days I’m looking at lenovo.com and struggling to find a laptop computer anyone is happy to own.

Take a look at the star ratings on the T series. The combined score is 12 / 20.

Lenovo T Series

We’ll round up the stars in the X series and give these ultrabooks a combined score of 13/20.

Lenovo X Series

These scores are in the “meh” category, and not what I’d expect from Lenovo’s flagship and premium brand. Reading through the reviews, you’ll find most people are happy with the performance, the battery life, the selection of ports, and the build quality. But I’m sure you’ve also noticed the copious rants about the keyboards you are designing and shipping on today’s models.

Perhaps we expect more from ThinkPads because the ThinkPad name was synonymous with “great keyboard”. Perhaps that’s why shortcut key aficionados were drawn to the ThinkPad line in the first place. We don’t need mice or track pads when we can use Alt+F4 or Alt+Insert to make things happen.

Now you’ve removed the Insert key from the X1 and turned the function keys into a capacitive flat-strip LED light show.

And moving the Home and End keys to the left side of the keyboard? I have no words to describe my sadness. I’ll instead use Peter Bright’s words from his article “Stop trying to innovate keyboards. You’re just making them worse”.

“I think these kind of keyboard games betray a fundamental misunderstanding of how people use keyboards. Companies might think that they're being innovative by replacing physical keys with soft keys, and they might think that they're making the keyboard somehow "easier to use" by removing keys from the keyboard. But they're not.”

Maybe the world has changed, and the majority of productive professionals do web conferences all day and watch Netflix movies all night. Perhaps this is the product line you need to stay alive in a world where the majority are consumed by consumption and touch.

Yet, I hope moving forward you will delight customers with qualities and features that are unique to ThinkPads, and not continue with these innovations that transform your products into inferior imitations of other brands.

With great sincerity,

--s

Using an Azure PublishSettings File From C#

Monday, April 28, 2014 by K. Scott Allen
3 comments

One of the fantastic aspects of cloud computing in general is the capability to automate every step of a process. Hanselman and Brady Gaster have both written about the Windows Azure Management Libraries, see Penny Pinching in the Cloud and Brady’s Announcement for some details.

The management libraries are wrappers around the Azure HTTP API and are a boon for businesses that run products on Azure. Not only do the libraries allow for automation, but they also allow you (or rather me, at this very minute) to create custom applications for a business to manage Azure services. Although the Azure portal is full of features, it can be overwhelming to someone who doesn’t work with Azure on a daily basis. An even bigger issue is that there is no concept of roles in the portal. If you want to give someone the ability to shut down a VM from the portal, you also give them the ability to do anything in the portal, including the ability to accidentally delete all the services the business holds dear.

A custom portal solves both of the above problems because you can build something tailored to the services and vocabulary the business uses, as well as perform role checks and restrict dangerous activities. The custom portal will need a management certificate to perform activities against the Azure API, and the easiest approach to obtain a management certificate is to download a publish settings file.

Once you have a publish settings file, you can write some code to parse the information inside and make the data available to higher layer management activities. There are a few libraries out there that can work with publish settings files, but I have some special requirements and want to work with them directly. The contents of a publish settings file look like the following (and note there can be multiple Subscription elements inside).

<PublishData>
  <PublishProfile
    SchemaVersion="2.0"
    PublishMethod="AzureServiceManagementAPI">
    <Subscription
      ServiceManagementUrl="https://management.core.windows.net"
      Id="...guid..."
      Name="Happy Subscription Name"
      ManagementCertificate="...base64 encoded certificate data..." />
  </PublishProfile>
</PublishData>

Let’s use the following code as an example goal for how my custom publish settings should work. I want to:

  1. Create an object representing the settings by just handing off some text.
  2. Loop through the subscriptions in the file.
  3. Ask each subscription to create credentials I can use to invoke the Azure HTTP API.

var fileContents = File.ReadAllText("odetocode.publishsettings");
var publishSettingsFile = new PublishSettingsFile(fileContents);

foreach (var subscription in publishSettingsFile.Subscriptions)
{
    Console.WriteLine("Showing compute services for: {0}", subscription.Name);
    
    var credentials = subscription.GetCredentials();
    using (var client = new ComputeManagementClient(credentials, subscription.ServiceUrl))
    {
        var services = client.HostedServices.List();
        foreach (var service in services)
        {
            Console.WriteLine("\t{0}", service.ServiceName);
        }
    }
}

It is the PublishSettingsFile class that parses the XML and creates PublishSettings objects. I’ve removed some error handling from the class so it doesn’t appear too intimidating.

public class PublishSettingsFile
{
    public PublishSettingsFile(string fileContents)
    {
        var document = XDocument.Parse(fileContents);

        _subscriptions = document.Descendants("Subscription")
                            .Select(ToPublishSettings).ToList();
    }

    private PublishSettings ToPublishSettings(XElement element)
    {
        var settings = new PublishSettings();
        settings.Id = Get(element, "Id");
        settings.Name = Get(element, "Name");
        settings.ServiceUrl = GetUri(element, "ServiceManagementUrl");
        settings.Certificate = GetCertificate(element, "ManagementCertificate");
        return settings;
    }        

    private string Get(XElement element, string name)
    {
        return (string) element.Attribute(name);
    }

    private Uri GetUri(XElement element, string name)
    {
        return new Uri(Get(element, name));
    }

    private X509Certificate2 GetCertificate(XElement element, string name)
    {
        var encodedData = Get(element, name);
        var certificateAsBytes = Convert.FromBase64String(encodedData);
        return new X509Certificate2(certificateAsBytes);
    }

    public IEnumerable<PublishSettings> Subscriptions
    {
        get
        {
            return _subscriptions;
        }
    }

    private readonly IList<PublishSettings> _subscriptions;
}

The PublishSettings class itself is relatively simple. It mostly holds data, but can also create the credentials object needed to communicate with Azure.

public class PublishSettings
{
    public string Id { get; set; }
    public string Name { get; set; }
    public Uri ServiceUrl  { get; set; }
    public X509Certificate2 Certificate { get; set; }

    public SubscriptionCloudCredentials GetCredentials()
    {
        return new CertificateCloudCredentials(Id, Certificate);
    }

}

In the future I’ll try to write more about the custom portal I’m building with ASP.NET MVC, WebAPI, and AngularJS. It has some interesting capabilities.

Canceling $http Requests in AngularJS

Thursday, April 24, 2014 by K. Scott Allen
2 comments

One of the objects you can pass along in the config argument of an $http operation is a timeout promise. If the promise resolves, Angular will cancel the corresponding HTTP request.

Sounds easy, but in practice there are a few complications. Before we get to the complications, let’s look at some easy code. Imagine the following inside of a controller where a user can click a Cancel button.

var canceller = $q.defer();

$http.get("/api/movies/slow/2", { timeout: canceller.promise })
     .then(function(response){
        $scope.movie = response.data;
    });

$scope.cancel = function(){
    canceller.resolve("user cancelled");  
};

The code passes the canceller promise as the timeout option in the config object. If the user clicks cancel before the request completes, we’ll see the cancellation in the Network tab of the developer tools.

Cancelled HTTP Request

The complications come in real-life scenarios where we have to manage multiple requests, expose the ability to cancel an operation to other client components, and figure out whether a given request is cancelled.

First, let’s look at a service that wraps $http to provide domain-oriented operations. Typically, services that talk using $http return simple promises, but now we need to return objects that provide both a promise for the outstanding request and a method that can cancel the request.

app.factory("movies", function($http, $q){

    var getById = function(id){
        var canceller = $q.defer();

        var cancel = function(reason){
            canceller.resolve(reason);
        };

        var promise =
            $http.get("/api/movies/slow/" + id, { timeout: canceller.promise})
                .then(function(response){
                   return response.data;
                });

        return {
            promise: promise,
            cancel: cancel
        };
    };

    return {
        getById: getById
    };

});

A client of the service might need to track multiple requests if there is a UI like the following that allows a user to start several requests at once.

<div ng-controller="mainController">

    <button ng-click="start()">
        Start Request
    </button>

    <ul>
        <li ng-repeat="request in requests">
            <button ng-click="cancel(request)">Cancel</button>
        </li>
    </ul>

    <ul>
        <li ng-repeat="m in movies">{{m.title}}</li>
    </ul>

</div>

The following code will manage the UI and allow the user to cancel any outstanding request.

app.controller("mainController", function($scope, movies) {

    $scope.movies = [];
    $scope.requests = [];
    $scope.id = 1;

    $scope.start = function(){

        var request = movies.getById($scope.id++);
        $scope.requests.push(request);
        request.promise.then(function(movie){
            $scope.movies.push(movie);
            clearRequest(request);
        }, function(reason){
            console.log(reason);
        });
    };

    $scope.cancel = function(request){
        request.cancel("User cancelled");
        clearRequest(request);
    };

    var clearRequest = function(request){
        $scope.requests.splice($scope.requests.indexOf(request), 1);
    };
});

The logic gets messy and could use some additional encapsulation to keep request management from overwhelming the controller, but this is the essence of what you’d need to do to allow cancellation of $http operations.
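One possible shape for that encapsulation is a small, framework-free tracker. The helper below is my own invention (createRequestTracker and its method names are not part of the code above); it works with the { promise, cancel } objects the movies service returns:

```javascript
// A minimal, framework-free request tracker (hypothetical helper): holds
// outstanding request objects shaped like { promise, cancel }.
function createRequestTracker() {
  var requests = [];

  return {
    // Track a request and remove it automatically once it settles.
    add: function (request) {
      requests.push(request);
      var clear = function () {
        var index = requests.indexOf(request);
        if (index !== -1) { requests.splice(index, 1); }
      };
      request.promise.then(clear, clear);
      return request;
    },
    // Cancel one request and stop tracking it.
    cancel: function (request, reason) {
      request.cancel(reason);
      var index = requests.indexOf(request);
      if (index !== -1) { requests.splice(index, 1); }
    },
    // Cancel everything still outstanding (e.g. on view teardown).
    cancelAll: function (reason) {
      requests.slice().forEach(function (request) { request.cancel(reason); });
      requests.length = 0;
    },
    count: function () { return requests.length; }
  };
}
```

A controller could then call tracker.add(movies.getById(id)) and expose tracker.cancel to the view, keeping the splice bookkeeping out of its scope methods.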

Is It Time To Switch To JavaScript?

Wednesday, April 23, 2014 by K. Scott Allen
8 comments

Question from the mailbox: "After many years as a server side developer and DBA, is it time to make a switch to JavaScript and focus on client side development?"

I think you need to work in an environment you enjoy. Some people do not enjoy client side development, and that's ok. There is still plenty of work to do on the server and in the services that run there.

The funny thing is, you don't have to leave the server to learn JavaScript. Take a look at technologies like NodeJS or MongoDB first. Having some JavaScript experience will be good, and I think there is no better way to learn a new language than to use the language inside a paradigm you already understand.

Once you know more about JavaScript the language, you should take some time to explore the HTML development landscape and try working with some of the tools and frameworks, even if you have to use some of your spare time.

Maybe you’ll find you like the new world, and only then would it be time for a switch, because in the end you gotta do what you enjoy...
