aspnet, security

I’ve been working with ASP.NET Core in a web farm environment. Things worked great when deployed to an Azure Web App, but in a different farm setting (Pivotal Cloud Foundry) I started getting an error I hadn’t seen before: System.AggregateException: Unhandled remote failure. ---> System.Exception: Unable to unprotect the message.State.

This happened in the context of the OpenID Connect middleware, specifically when a value encrypted by one instance of the ASP.NET Core application had to be decrypted by a different instance of the application.

The problem is that the keys used by data protection weren’t synchronized across all instances of the application. This is a lot like the classic ASP.NET issue where you have to ensure all nodes in the farm have the machine key synchronized so ViewState and other values can be shared across application instances.

Instead of a machine key, ASP.NET Core uses Microsoft.AspNetCore.DataProtection to handle the encryption keys used to protect state values that get posted between the app and the client. There is plenty of documentation on how this works, but not much in the way of a concise explanation of what it takes to get things working in a farm. Hopefully this will help.

How DataProtection Gets Added

Normally you don’t manually add the data protection bits to the application pipeline. It’s done for you when you call services.AddMvc() during the ConfigureServices() part of application startup. That services.AddMvc() line actually fans out into adding a lot of default services, some of which are the defaults for data protection.

What to Synchronize

Instead of just a machine key, in ASP.NET Core you have three things that must line up for a farm scenario:

  • The application discriminator, which by default is based on the installed location of the app.
  • The master encryption key used to protect the key material at rest (used on Windows, not on non-Windows environments).
  • The encrypted set of session keys that actually protects the data exchanged with the client.

Why This Doesn’t “Just Work” in All Farms

  • The application discriminator, being based on the installed location of the app, is great if all machines in the farm are identical. If, instead, you’re using some containerization techniques, a virtual filesystem, or otherwise don’t have the app installed in the same location everywhere, you need to manually set this.
  • The master encryption key, while not used on non-Windows environments, does otherwise need to be synchronized. If you choose to use a certificate, the current EncryptedXml mechanism used internally allows you to pass in a certificate for use in encryption but in decryption it requires the certificate to be in the machine certificate store. That requirement is less than stellar since it means you can’t store the certificate in something like Azure Key Vault.
  • The encrypted set of session keys is easy to persist in a file share… if the farm is allowed to store things in a common share and all the network ports are open to allow that. If you want to store in a different repository like a database or Redis, there’s nothing out of the box that helps you.

Why This Works in Azure Web Apps

There is some documentation outlining how this works in Azure. In a nutshell:

  • All applications are installed to the same location, so the application discriminator lines up.
  • Keys aren’t encrypted at rest, so there is no master encryption key.
  • The session keys are put in a special folder location that is “magically” synchronized across all instances of the Azure Web App.

Setting Your Own Options

To set your own options, call services.AddDataProtection() after you call services.AddMvc() in your ConfigureServices() method in Startup. It will look something like this:

public virtual IServiceProvider ConfigureServices(IServiceCollection services)
{
  services.AddMvc();
  services
    .AddDataProtection(opt => opt.ApplicationDiscriminator = "your-app-id")
    .ProtectKeysWithYourCustomKey()
    .PersistKeysToYourCustomLocation();

  // The signature here returns IServiceProvider (handy if you wire up a
  // third-party container); return the built provider, or declare the
  // method as void and skip the return.
  return services.BuildServiceProvider();
}
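
For a concrete farm configuration, here’s a minimal sketch of my own (not from the original setup) using two extensions that ship with Microsoft.AspNetCore.DataProtection: ProtectKeysWithCertificate to encrypt the key ring at rest and PersistKeysToFileSystem to store it on a share every node can reach. The share path, certificate file, and application discriminator are placeholder values; also note the caveat above that certificate-based decryption currently expects the certificate to be in the machine certificate store on each node.

using System.IO;
using System.Security.Cryptography.X509Certificates;
using Microsoft.AspNetCore.DataProtection;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
  public void ConfigureServices(IServiceCollection services)
  {
    services.AddMvc();

    // Hypothetical share and certificate - swap in your own infrastructure.
    var keyShare = new DirectoryInfo(@"\\fileserver\dataprotection-keys");
    var certificate = new X509Certificate2(@"C:\certs\dataprotection.pfx", "password");

    services
      .AddDataProtection(opt => opt.ApplicationDiscriminator = "your-app-id")
      // Encrypt the session keys at rest with a certificate shared by the farm.
      .ProtectKeysWithCertificate(certificate)
      // Persist the encrypted key ring to a folder every instance can read.
      .PersistKeysToFileSystem(keyShare);
  }
}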

Example Extensions

To help get you on your way, I’ve published a couple of extensions on GitHub. They include:

  • XML encryption/decryption using a certificate that isn’t required to be in the machine certificate store. This lets you keep the master certificate in a repository like Azure Key Vault rather than installing it on every node.
  • Encrypted XML storage in Redis. This allows you to share the session keys in a Redis database rather than a file share.

I wanted to be able to not only tidy my JSON objects, but also sort them by property name. I wanted to do this so I could unify my project.json and config.json files while working in .NET Core. Figuring out where people were adding keys, finding redundant entries, and so on… having a predictable order makes it all that much easier.

Up front, I’ll tell you this is a total hack. I got it to work as a user package (code in your User folder) but haven’t taken it as far as putting it into a repo or adding it to Package Control. That’s probably the next step. I just wanted to get this out there.

I’ll also say these are instructions for a Windows environment. The places you’ll have to adjust for Linux should be obvious, but I don’t have guidance or instructions to help you. Sorry.

First, install the External Command package. This is a great general-purpose package for setting up external commands and pushing Sublime Text buffers through. Select some text and have that text passed to an external shell command on stdin. (No selection? It runs the whole file.)

Next, create a folder called SortJson in your User package folder. This is where we’ll put the contents of the user module.

If you don’t have Node installed… why not? Really, though, if you don’t, go get it and install it. We need it because we use the Node json-stable-stringify package to do the work.

Drop to a command prompt in the SortJson folder and install the json-stable-stringify module.

npm install json-stable-stringify

You should get a node_modules folder under that SortJson folder and inside you’ll have json-stable-stringify (and maybe dependencies, but that’s fine).

Now we need a little script to take the contents of stdin and pass it through json-stable-stringify.

Create a script called sort-json.js in the SortJson folder. In that script, put this:

var stringify = require('json-stable-stringify');
var opts = {
    "space": 2
};

var stdin = process.stdin,
    stdout = process.stdout,
    inputChunks = [];

stdin.resume();
stdin.setEncoding('utf8');

stdin.on('data', function (chunk) {
    inputChunks.push(chunk);
});

stdin.on('end', function () {
    var inputJSON = inputChunks.join(""),
        parsedData = JSON.parse(inputJSON),
        outputJSON = stringify(parsedData, opts);
    stdout.write(outputJSON);
    stdout.write('\n');
});

Unfortunately, the External Command package doesn’t let you set a working directory, so you can’t just fire up Node and run the sort-json.js directly. We have to create a little batch file that helps our script find the json-stable-stringify module at runtime.

Create a batch script called sort-json.cmd in the SortJson folder. In that script, put this:

@SETLOCAL
@SET NODE_PATH=%~dp0node_modules
@node "%~dp0sort-json.js" %*

That temporarily adds the SortJson\node_modules folder to the NODE_PATH environment variable (the variable Node consults for additional module locations) before running the sort-json.js script.

The last thing you need is to tie into the Sublime Text command palette so you can run the command to sort JSON.

Create a file called sort-json.sublime-commands in the SortJson folder. In that file, put this:

[
    {
        "caption": "JSON: Sort Object",
        "command": "filter_through_command",
        "args": { "cmdline": "\"%APPDATA%\\Sublime Text 3\\Packages\\User\\SortJson\\sort-json.cmd\"" }
    }
]

You’ll have to restart Sublime, but when you do you’ll see a command in the palette “JSON: Sort Object”. Load up a file with a JSON object and run that command. You should get a sorted JSON object.
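
For example, given a made-up file like this (not part of the original write-up):

{
  "version": "1.0.0",
  "dependencies": {
    "Serilog": "2.0.0",
    "Autofac": "4.0.0"
  },
  "description": "Example project"
}

running JSON: Sort Object should rewrite it with the keys sorted at every level, indented two spaces per the space: 2 option in sort-json.js:

{
  "dependencies": {
    "Autofac": "4.0.0",
    "Serilog": "2.0.0"
  },
  "description": "Example project",
  "version": "1.0.0"
}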

I try to pair this with the JsFormat package (for JSBeautify integration) as well as SublimeLinter-json (for linting/error checking), both of which are in Package Control. If you want to tweak the formatting that comes out of the sort, the opts variable at the top of sort-json.js is the options object passed to json-stable-stringify.

personal, movies, humor

I was watching Cutthroat Island this weekend with my daughter, who loves pirate movies, when I started thinking about these giant treasure chests full of gold you see in such films.

The stereotypical treasure chest

While I get that it’s a movie, it was fun to think about how practical carting around that treasure chest of doubloons might be.

Assumptions:

  • Doubloons are 22k gold, but to keep the math easy we’ll treat them as pure gold.
  • Gold weighs 19.3g per cubic centimeter.
  • Coins dumped loose into a chest pack at roughly 35% efficiency.
  • A doubloon weighs 6.867g.

Now, let’s say the treasure chest is like 90cm x 60cm x 60cm on the inside. A little large-ish, but not unheard of in a pirate movie.

  • The chest has 324,000cc interior capacity.
  • Multiplied by the 35% packing efficiency, you’d have 113,400cc of gold.
  • 113,400cc x 19.3g per cc = 2,188,620g = 2188.62kg (4825.1lb).

There is no way pirates are carrying around 5000lb gold chests.

Let’s figure a couple of guys - one on each end of the chest - need to cart the chest through the jungle or something. They’re strong, but they’re not Hafþór Júlíus Björnsson. You’re looking at something like 115kg (253.5lb) or so before the chest gets unwieldy.

Working backwards, 115kg is 5958.55cc of gold. With the packing ratio, that’s a chest with 17024.43cc total capacity. To make the math easy, let’s say it’s a cube-shaped chest. That’d yield a chest with internal dimensions of roughly 25.73cm (10.13 inches) on a side.
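
Spelling out that back-calculation, using the same density and packing numbers as above:

\[
\frac{115{,}000\ \text{g}}{19.3\ \text{g/cc}} \approx 5{,}958.55\ \text{cc},\qquad
\frac{5{,}958.55\ \text{cc}}{0.35} \approx 17{,}024.43\ \text{cc},\qquad
\sqrt[3]{17{,}024.43\ \text{cc}} \approx 25.73\ \text{cm}.
\]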

That’s a pretty tiny treasure chest.

At least, tiny in comparison to what you usually see in a pirate movie.

Now, I could be generous with my packing efficiency. Maybe it’s far less than 35%, or it could be that the chest isn’t packed to the top with gold, or both.

If you had that 90cm x 60cm x 60cm chest and limited yourself to the 115kg weight, that’d put the overall packing efficiency of the doubloons at closer to 2% (5958.55cc of gold in a 324,000cc chest); or, keeping the 35% packing efficiency, it’d mean the chest is only about 5% full (17,024.43cc of its 324,000cc capacity).

Just for fun, we can also calculate the value of such treasure. The price of gold today (as I write this) is $39,175.66 USD per kg.

  • 115kg of gold = $4,505,200.90 USD
  • 2188.62kg of gold = $85,740,632.99 USD

If the chest were full of doubloons (which, again, are actually 22k gold, not 24k), each doubloon weighs 6.867g, so you’d have…

  • 115kg of doubloons = 16,746 doubloons
  • 2188.62kg of doubloons = 318,715 doubloons

Doubloons seem to be based on weight rather than physical size (or, at least, I didn’t see any average size listed anywhere in my two minutes of searching), so I’m not sure how big a chest with that number of doubloons might need to be. I can’t imagine it’s too far off from my original calculation.

Anyway, it was kind of fun to think about. It makes for a better movie to have the giant chest of treasure, so it’s all good.

dotnet, aspnet, azure

I got the opportunity to hit the Microsoft Build conference this year. Last time I was able to make it was 2012, so it was good to be able to get back in and see what’s new in person.

I’m going to review this from my perspective as a web / web service / REST API sort of developer. As a different person or developer, you may have picked up something different you thought was super cool that I totally missed or tuned out. So.

Usually there are some “key themes” that get pushed at Build. Back in 2012, it was all Windows 8 applications. This year it was:

  • Internet of Things
  • Office and Cortana Integration
  • Cross-platform Applications
  • Microservices and Bots

Keeping in mind my status as a web developer, the microservice/bot stuff was the most interesting thing to me. I don’t work with hardware, so IoT is neat but not valuable. I don’t really need to integrate with Office, and Cortana isn’t web-based. I can maybe see doing some cross-platform stuff for mobile apps that talk to my REST APIs, but this was largely outside the scope of what I do, too.

I was reasonably disappointed by the keynotes. They usually have some big reveal in the keynotes. Day one left me wanting. Something about 22 new machine learning APIs being released that I won’t use. Day two the big reveal for me was free Xamarin for everyone. Again, cross-platform dev isn’t my thing, but still that’s pretty cool.

The sessions were impossible to get into. In 2012 they hosted the conference in Redmond on the Microsoft campus. I got into every session I was interested in. Since then they’ve hosted it in San Francisco at the Moscone Center. I can’t speak for other years, but this year you just couldn’t get into the sessions. If you weren’t lined up half an hour early to get in, forget it - there weren’t enough seats. In total I only got to see three different sessions. In three days, I saw three sessions. I don’t feel like I should have to catch up on the conference I paid to attend by watching videos of the sessions I wanted to see.

The schedule wasn’t public very early. I normally like to check out the web site and figure out which sessions I want to see when I get there. They didn’t release the speaker schedule (to my knowledge) until the day before the conference. I may well have canceled my reservation had I known the list of topics ahead of time. Maybe that’s why they didn’t release it.

There were great code labs. Something they didn’t have as much of in previous years was interactive labs where you could learn new tech. They did a really good job of this, with several physical labs with hardware all set up so you could try stuff out. This was super valuable and the majority of my conference takeaways came from these labs. In particular, I finally got a good feel for Docker by doing this lab.

There was no hardware giveaway this year, which makes me wonder why the price was so high. I get that there are big parties and so on, but I would rather the price go down or there be some hardware than just keep the price cranked up. That said, I did come out with a Raspberry Pi 2 Azure IoT starter kit, so I can at least experiment with some of the IoT things they announced. Who knows? Maybe I’ll turn into an IoT aficionado.

There was a pitifully small amount of information about .NET Core. .NET Core and ASP.NET Core are on the top of my mind lately. Most of my current projects, including Autofac, are working through the challenges of RC1 and getting to RC2. There were something like three sessions total on .NET Core, most of which was just intro information. Any target dates on RC2? What’s the status on dotnet CLI? Honestly, I was hoping the big keynote announcement would be the .NET Core RC2 release. No such luck.

Access to the actual product teams was awesome. This almost (but not quite) makes up for the sessions being full. The ability to talk directly to various product team members for things like Visual Studio Online, ASP.NET Core, NuGet, Visual Studio, and Azure offerings was fantastic. It can be so hard sometimes to get questions answered or get the straight scoop on what’s going on with a project - cutting through the red tape and just talking to people is the perfect answer to that.

There was a big to-do around HoloLens - everything from conceptual demos to a full demo of walking on Mars. The lines for this were ridiculous. I didn’t get a chance to try it myself; a couple of colleagues tried it and said it wasn’t as mind-blowing as it was promoted to be.

Nutshell:

  • Logistics: Not good. If you sell out in five minutes and don’t have enough seats for sessions, that’s not cool.
  • Topics: Not good. I get there’s a focus on a certain subset of topics, but I can usually find something cool I’m excited about. Not this time.
  • Educational Value: OK. I didn’t get much from sessions but the labs and the on-hand staff were great.
  • Networking Value: Good. I don’t normally “network” with people in the whole “sales” context, but being able to meet up with people from different vendors and product teams and speak face to face was a valuable thing.

net

I’ve been working a bit with Serilog and ASP.NET Core lately. In both cases, there are constructs that use CallContext to store data across an asynchronous flow. For Serilog, it’s the LogContext class; for ASP.NET Core it’s the HttpContextAccessor.

Running tests, I’ve noticed some inconsistent behavior depending on how I set up the test fakes. For example, when testing some middleware that modifies the Serilog LogContext, I might set it up like this:

var mw = new SomeMiddleware(ctx => Task.FromResult(0));

Note the next RequestDelegate I set up is just a Task.FromResult call because I don’t really care what’s going on in there - the point is to see if the LogContext is changed after the middleware executes.

Unfortunately, what I’ve found is that the static Task methods, like Task.FromResult and Task.Delay, don’t behave consistently with respect to using CallContext to store data across async calls.

To illustrate the point, I’ve put together a small set of unit tests here:

using System.Runtime.Remoting.Messaging;
using System.Threading.Tasks;
using Xunit;

public class CallContextTest
{
  [Fact]
  public void SimpleCallWithoutAsync()
  {
    var value = new object();
    SetCallContextData(value);
    Assert.Same(value, GetCallContextData());
  }

  [Fact]
  public async void AsyncMethodCallsTaskMethod()
  {
    var value = new object();
    await NoOpTaskMethod(value);
    Assert.Same(value, GetCallContextData());
  }

  [Fact]
  public async void AsyncMethodCallsAsyncFromResultMethod()
  {
    var value = new object();
    await NoOpAsyncMethodFromResult(value);

    // THIS FAILS - the call context data
    // will come back as null.
    Assert.Same(value, GetCallContextData());
  }

  private static object GetCallContextData()
  {
    return CallContext.LogicalGetData("testdata");
  }

  private static void SetCallContextData(object value)
  {
    CallContext.LogicalSetData("testdata", value);
  }

  /*
   * Note the difference between these two methods:
   * One _awaits_ the Task.FromResult, one returns it directly.
   * This could also be Task.Delay.
   */

  private async Task NoOpAsyncMethodFromResult(object value)
  {
    // Using this one will cause the CallContext
    // data to be lost.
    SetCallContextData(value);
    await Task.FromResult(0);
  }

  private Task NoOpTaskMethod(object value)
  {
    SetCallContextData(value);
    return Task.FromResult(0);
  }
}

As you can see, changing from return Task.FromResult(0) in a plain (non-async) method to await Task.FromResult(0) in an async method suddenly breaks things. No amount of configuration I could find fixes it.

StackOverflow has related questions and there are forum posts on similar topics, but this is the first time this has really bitten me.

I gather this is why AsyncLocal<T> exists, which means maybe I should look into that a bit deeper.
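
As a starting point, here’s a minimal sketch of my own (the AsyncLocalDemo class and names below aren’t from anything above) showing how AsyncLocal<T> behaves in the same kind of scenario, assuming .NET 4.6+ or .NET Core where AsyncLocal<T> is available. Like CallContext.LogicalSetData, a value set inside an async method doesn’t flow back out to the caller, but a value set before awaiting does flow down into awaited calls.

using System;
using System.Threading;
using System.Threading.Tasks;

public static class AsyncLocalDemo
{
  private static readonly AsyncLocal<object> _data = new AsyncLocal<object>();

  public static async Task Run()
  {
    _data.Value = "set in caller";

    // The value flows "down" into awaited async calls...
    await ReadValue(); // prints "set in caller"

    // ...but, as with CallContext.LogicalSetData, a value set inside
    // an async method does not flow back "up" to its caller.
    await WriteValue();
    Console.WriteLine(_data.Value); // still "set in caller"
  }

  private static async Task ReadValue()
  {
    await Task.FromResult(0);
    Console.WriteLine(_data.Value);
  }

  private static async Task WriteValue()
  {
    _data.Value = "set in callee";
    await Task.FromResult(0);
  }
}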