aspnet, autofac

As we all saw, ASP.NET Core and .NET Core went RTM this past Monday. Congratulations to those teams - it’s been a long time coming and it’s some pretty amazing stuff.

Every time an RC (or, now, RTM) comes out, questions start flooding in on Autofac, sometimes literally within minutes of the go-live, asking when Autofac will be coming out with an update. While we have an issue you can track if you want to watch the progress, I figured I’d give a status update on where we are and where we’re going with respect to RTM. I’ll also explain why we are where we are.

Current Status

We have an RC build of core Autofac out on NuGet that is compatible with .NET Core RTM. That includes a version of Autofac.Extensions.DependencyInjection, the Autofac implementation against Microsoft.Extensions.DependencyInjection. We’ll be calling this version 4.0.0. We are working hard to get a “stable” version released, but we’ve hit a few snags at the last minute, which I’ll go into shortly.
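If you want to try those RC bits, wiring Autofac in through Autofac.Extensions.DependencyInjection looks roughly like the sketch below. This is a minimal illustration, not a complete Startup class; the comment marks where your own registrations would go.

using System;
using Autofac;
using Autofac.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public IServiceProvider ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();

        var builder = new ContainerBuilder();

        // Pull the framework registrations into the Autofac container.
        builder.Populate(services);

        // Your own registrations go here.

        var container = builder.Build();

        // Hand an Autofac-backed provider to ASP.NET Core.
        return new AutofacServiceProvider(container);
    }
}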

About half of the non-portable projects have been updated to be compatible with Autofac 4.0.0. For the most part this was just an update to the NuGet packages, but with Autofac 4.0.0 we also stopped using the old code access security model (remember [AllowPartiallyTrustedCallers]?), and some of these projects needed to be updated accordingly.

We are working hard to get the other half of the integration projects updated. Portable projects are being converted to use the new project.json structure and target netstandard framework monikers. Non-portable projects are sticking with .csproj but are being verified for compatibility with Autofac 4.0.0, getting updated as needed.

Why It’s Taking So Long

Oh, where do I begin.

Let me preface this by saying it’s going to sound like a rant. And in some ways it is. I do love what the .NET Core and ASP.NET Core teams have out there now, but it’s been a bumpy ride to get here and many of the bumps are what caused the delay.

First, let’s set the scene: There are really only two of us actively working on Autofac and the various officially supported integration libraries - it’s me and Alex Meyer-Gleaves. There are 23 integration projects we support alongside core Autofac. There’s a repository of examples as well as documentation. And, of course, there are questions that come in on StackOverflow, issues that come in that need responses, and requests on the discussion forum. We support this on the side since we both have our own full-time jobs and families.

I’m not complaining, truly. I raise all that because it’s not immediately evident. When you think about what makes Autofac tick (or AutoMapper, or Xunit, or any of your other favorite OSS projects that aren’t specifically backed/owned by a company like Microsoft or a consultant making money from support), it’s a very small number of people with quite a lot of work to get done in pretty much no time. Core Autofac is important, but it’s the tip of a very large iceberg.

We are sooooo lucky to have community help where we get it. We have some amazing folks who chime in on Autofac questions on StackOverflow. We’ve gotten some pretty awesome pull requests to add some new features lately. Where we get help, it’s super. But, admittedly, IoC containers and how they work internally are tricky beasts. There aren’t a lot of simple up-for-grabs sort of fixes that we have in the core product. It definitely reduces the number of things that we can get help with from folks who want to drop in and get something done quickly. (The integration projects are much easier to help with than core Autofac.)

Now, keep that in the back of your mind. We’ll tie into that shortly.

You know how the tooling for .NET Core changed like 1,000 times? You know how there was pretty much no documentation for most of that? And there were all sorts of weird things, like the only available examples coming from the .NET teams themselves, built with internal tools that folks outside didn’t have great access to. Every new beta or RC release was a nightmare. Mention that and you get comments like, “That’s life in the big city,” which is surely one way to look at it but is definitely dismissive of the pain involved.

Every release, we’d need to reverse-engineer the way the .NET teams had changed their builds, figure out how the tools were working, figure out how to address the breaking changes, and so on. Sometimes (rarely, but it happened) someone would have their project ported over first and we could look at how they did it. We definitely weren’t the only folks to feel that, I know.

NuGet lagging behind was painful because just updating core Autofac didn’t necessarily mean we could update the integration libraries. Especially with the target framework moniker shake-up, you’d find that without the tooling in place to support the whole chain, you could upgrade one library but not be able to take the upgrade in a downstream dependency because the tooling would consider it incompatible.

Anyway, with just the two of us (plus community help where we could get it) and the tooling/library change challenges, there was a lot of wheel-spinning. There were weeks where all we did was try to figure out the right magic combination of things in project.json to get things compiling. Did it work? I dunno, we can’t test because we don’t have a unit test framework compatible with this version of .NET Core. Can’t take it in a downstream integration library to test things, either, due to tooling challenges.

Lots of time spent just keeping up.

Finally, we’ve been bitten by the “conforming container” introduced for ASP.NET Core. Microsoft.Extensions.DependencyInjection is an abstraction around DI that was introduced to support ASP.NET Core. It’s a “conforming container” because it means anything that backs the IServiceProvider interface they use needs to support certain features and react in the same way. In some cases that’s fine. For the most part, simple stuff like GetService<T>() is pretty easy to implement regardless of the backing container.

The stuff you can’t do in a conforming container is use the container-specific features. For example, Autofac lets you pass parameters during a Resolve<T>() call. You can’t do that without actually referencing the Autofac lifetime scope - the IServiceProvider interface serves as a “lowest common denominator” for containers.
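To illustrate the difference, here’s a small sketch; the IReportGenerator service and the “tenant” parameter are made-up names, not anything from a real project.

using System;
using Autofac;

public interface IReportGenerator { }

public class Example
{
    public void ResolveThings(IServiceProvider provider, ILifetimeScope scope)
    {
        // Conforming container: lowest common denominator, no parameters.
        var plain = (IReportGenerator)provider.GetService(typeof(IReportGenerator));

        // Autofac-specific: pass a parameter at resolve time. This requires a
        // reference to the actual Autofac lifetime scope.
        var withParameter = scope.Resolve<IReportGenerator>(
            new NamedParameter("tenant", "some-tenant-id"));
    }
}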

All along the way, we’ve been testing the junk out of Autofac to make sure it works correctly with Microsoft.Extensions.DependencyInjection. It’s been just fine so far. However, at the last minute (20 days ago now) we got word that not only did we need to implement the service provider interface as specified, but we also needed to return IEnumerable<T> collections in the order the components were registered.

We don’t currently do that. Given IEnumerable<T> has no specification around ordering, and all previous framework features that required ordering (controller action filters, etc.) used an Order property or something similar, it’s never been an issue. Interfaces using IEnumerable<T> generally don’t assume order (or, at least, shouldn’t). This is a new requirement for the conforming container and it’s amazingly non-trivial to implement.
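To make the new requirement concrete, here’s a rough sketch of the behavior the conforming container now expects; the handler types are hypothetical, invented just for illustration.

using System.Collections.Generic;
using Microsoft.Extensions.DependencyInjection;

public interface IMessageHandler { }
public class FirstHandler : IMessageHandler { }
public class SecondHandler : IMessageHandler { }
public class ThirdHandler : IMessageHandler { }

public class OrderingExample
{
    public IEnumerable<IMessageHandler> ResolveHandlers()
    {
        var services = new ServiceCollection();
        services.AddTransient<IMessageHandler, FirstHandler>();
        services.AddTransient<IMessageHandler, SecondHandler>();
        services.AddTransient<IMessageHandler, ThirdHandler>();

        var provider = services.BuildServiceProvider();

        // The conforming container expects these to come back in registration
        // order: FirstHandler, SecondHandler, ThirdHandler.
        return provider.GetServices<IMessageHandler>();
    }
}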

It’s hard to implement because Autofac tracks registrations in a more complex way than just adding them to a list. If you add a standard registration, it does get added to a list. But if you add .PreserveExistingDefaults() because you want to register something and keep the existing default service in place if one’s already registered - that goes in at the end of the list instead of at the head. We also support very dynamic “registration sources” - a way to add registrations to the container on the fly without making explicit registrations. That’s how we handle things like Lazy<T> automatically working.

(That’s sort of a nutshell version. It gets more complex as you think about child/nested lifetime scopes.)
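Here’s a small sketch of the registration behavior described above; the logger types are hypothetical and only meant to show where things conceptually land in the list.

using System;
using Autofac;

public interface ILogger { }
public class SqlLogger : ILogger { }
public class ConsoleLogger : ILogger { }

public class RegistrationExample
{
    public IContainer BuildContainer()
    {
        var builder = new ContainerBuilder();

        // Standard registration: conceptually goes in at the head of the list,
        // so the latest registration wins as the default.
        builder.RegisterType<SqlLogger>().As<ILogger>();

        // PreserveExistingDefaults: keeps any existing default in place, so
        // this conceptually goes at the end of the list instead of the head.
        builder.RegisterType<ConsoleLogger>().As<ILogger>().PreserveExistingDefaults();

        var container = builder.Build();

        // Lazy<ILogger> resolves without any explicit registration because a
        // dynamic registration source generates it on the fly - so where does
        // that registration sit in "registration order"?
        var lazyLogger = container.Resolve<Lazy<ILogger>>();

        return container;
    }
}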

Point being, this isn’t as simple as just returning the list of stuff that got registered. We have to update Autofac to start keeping track of registration order yet still allow the existing functionality to behave correctly. And what do you do with dynamic registration sources? Where do those show up in the list?

The answers are not so straightforward.

We are currently working hard on solving that ordering problem. Actually, right now, Alex is working hard on that while I try and get the rest of the 23 integration projects converted, update the documentation, answer StackOverflow/issue/forum questions, and so on. Thank goodness for that guy because I couldn’t do this by myself.

If you would like to follow along on the path to a stable release, check out these issues:

While it may not be obvious, adding lots of duplicate issues asking for status or “me too” comments on issues in the repo doesn’t help. In some cases it’s frustrating (it’s a “no pressure, but hurry up” vote) and may slow things down as we spend what little time we have responding to the dupes and the questions rather than actually getting things done. I love the enthusiasm and interest, but please help us out by not adding duplicates. GitHub recently added “reactions” to issues (that little smiley face at the top-right of an issue comment) - jump in with a thumbs-up or smile or something; or subscribe to an issue if you’re interested in following along (there’s a subscribe button along the right up near the top of the issue, under the tags).

Thanks (So Far)

Finally, I have some thanks to hand out. Like I said, we couldn’t get this done without support from the community. I know I’m probably leaving someone out, and if so, I’m sorry - please know I didn’t intentionally do it.

  • The ASP.NET Core team - These guys took the time to talk directly to Alex and me about how things were progressing and answered several questions.
  • Oren Novotny - When the .NET target framework moniker problem was getting us down, he helped clear things up.
  • Cyril Durand and Viktor Nemes - These guys are rockstars on StackOverflow when it comes to Autofac questions.
  • Caio Proiete, Taylor Southwick, Kieren Johnstone, Geert Van Laethem, Cosmin Lazar, Shea Strickland, Roger Kratz - Pull requests of any size are awesome. These folks submitted to the core Autofac project within the last year. This is where I’m sure I missed someone because it was a manually pulled list and didn’t include the integration libraries. If you helped us out, know I’m thanking you right now.

personal

As of yesterday, June 27, 2016, I’ve worked for 15 years at Fiserv.

My 15 year certificate

Given I got my first “official” job when I was 14 and I turn 40 this year, that’s over half of my professional working life that I’ve been here.

I started in the marketing department back when the company was Corillian. I got hired to help work on the corporate web site, www.corillian.com (which now redirects to a Fiserv page on internet banking).

I think it was a year or two into that when some restructuring came along and the web site transferred to the IT department. I transferred with it and became the only IT developer doing internal tools and working on automating things.

I remember working on rolling out the original SharePoint 2003 along with Windows SharePoint Services in an overall Office 2003 effort. We had some pretty fancy “web parts” in VBScript to do custom document indexing and reporting. I vaguely recall updating several of those parts to be .NET 1.1 assemblies.

It was in 2004 when a need arose for a developer to work on some proof-of-concept and demo web sites that our sales folks could take around on calls. I happened to be free, so I worked with our product folks on those things. As sometimes happens, those POC and demo sites became the template for what we wanted the next version of the product to be like. And since I’d already worked on them… why not come over to product development and do the work “for real this time?”

I worked on the very first iteration of the Corillian Consumer Banking product. That was in .NET 1.1 though 2.0 was right around the corner. I remember having to back-port features like ASP.NET master pages into 1.1. (I still like our implementation better.) This was back when Hanselman was still at Corillian and we worked together on several features, particularly where the UI had to interact with/consume services.

In early 2007 CheckFree acquired Corillian. After the dust on that settled, I was still working on Consumer Banking - basically, same job, new company. There were definitely some process hiccups as we went from a fairly agile Scrum-ish methodology that Corillian had into CheckFree’s version of Rational Unified Process, but we made do.

In late 2007, Fiserv acquired CheckFree.

Yeah, that was some crazy times.

Fiserv, for the most part, adopted CheckFree’s development processes, at least as far as our group could tell. RUP gave way after a while to something more iterative but still not super agile. It was only pretty recently (last five-ish years?) that we’ve finally made our way back to Scrum.

The majority of my time has been in web service and UI development. I did get my Microsoft Certified DBA and Microsoft Certified .NET Solutions Developer certifications so I’m not uncomfortable working at all layers, but I do like to spend my time a little higher than the data tier when possible.

In my most recent times, I’ve been working on REST API stuff using ASP.NET Core. Always something new to learn, always interesting.

Also interesting: with the various acquisitions, reorganizations, and re-prioritizations we’ve seen over the years, even though I’ve (effectively) worked for the same company, I’ve gotten a lot of great experience with different people, processes, tools, and development environments. In some cases, it’s been like working different jobs… despite it being the same job. Definitely some great experience.

Plus, I’m afforded (a small amount of) time to help out the open source community with Autofac and other projects.

That’s actually why I’ve stayed so long. I can only speak for myself, but even with me sort of doing “the same thing” for so long… it’s not the same thing. I’m always learning something new, there’s always something changing, there’s always a new problem to solve. I work with some great people who are constantly trying to improve our products and processes.

And a bit of seniority never hurt anyone.

humor, personal

In the last couple of weeks I’ve had the opportunity to get together with folks for lunch or dinner and I’m finding it’s hard to agree on a “nice place to eat.”

Here’s the thing.

I’m not a really picky eater. At least, I don’t think so. I like simple food that tastes good. The thing is, I live in the Portland, OR metro area, so when someone talks about “a nice place to eat” it usually has something to do with an independent restaurateur who has “a fresh take on old ideas.” This generally amounts to “I don’t want to eat there” for me.

Don’t get me wrong, I’ve tried several of these places. I have yet to enjoy them. It’s not like I didn’t give it a fair shake.

With that in mind, I decided to post a list of “restaurant red flags” - things that warn me against eating at a place. No single item here instantly disqualifies a place, but a combination of them will probably result in a “no.”

If your restaurant has/does/says any of these things, I’m out:

  • The description of your restaurant contains some version of the word “gastronomy” that is not immediately prefixed by “molecular.”
  • All the pictures of the meat dishes appear to be barely cooked to rare.
  • I have to look up what two or more of the words mean on any given menu item.
  • Your menu is in English but you don’t use the common English words for things so you can sound fancier (e.g., you use “ali-oli” instead of “aioli”).
  • You serve a dish based on a creature I would normally otherwise consider “vermin” rather than “game.”
  • You’ve been in any top ten restaurant list where the food is described as “new and exciting.”
  • The intent of the food is to have lots of small dishes purchased and get passed around. (I hate tapas. Joey doesn’t share food.)
  • You think it’s a great idea to have a lot of community tables and no individual tables.
  • There’s a lot of fermented stuff on the menu that isn’t alcohol.
  • More than one item on the menu can be described as a “delicacy.”
  • A significant number of the meat dishes are made with the less-common cuts of meat (cheek, tongue, tail, etc.).

I may add to that list in the future, but basically, yeah. Red flags.

aspnet, security

I’ve been working with ASP.NET Core in a web farm environment. Things worked great when deployed to an Azure Web App but in a different farm setting (Pivotal Cloud Foundry) I started getting an error I hadn’t seen before: System.AggregateException: Unhandled remote failure. ---> System.Exception: Unable to unprotect the message.State.

This happened in the context of the OpenID Connect middleware, specifically when a value encrypted by one instance of the ASP.NET Core application had to be decrypted by a different instance of the application.

The problem is that the values used in DataProtection weren’t synchronized across all instances of the application. This is a lot like the classic ASP.NET issue where you have to ensure all nodes in the farm have the machine key synchronized so ViewState and other things can be shared across application instances.

Instead of a machine key, ASP.NET Core uses Microsoft.AspNetCore.DataProtection for handling the encryption keys used to protect state values that get posted between the app and the client. There is plenty of documentation on how this works but not much in the way of a concise explanation of what it takes to get things working in a farm. Hopefully this will help.

How DataProtection Gets Added

Normally you don’t manually add the data protection bits to the application pipeline. It’s done for you when you call services.AddMvc() during the ConfigureServices() part of application startup. That services.AddMvc() line actually fans out into adding a lot of default services, some of which are the defaults for data protection.
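Conceptually, it amounts to something like the sketch below. This is just an illustration of the effect described above, not the literal framework code.

public void ConfigureServices(IServiceCollection services)
{
    // AddMvc() fans out into a pile of default service registrations, and
    // (per the description above) those include the data protection
    // defaults - roughly as if you had also called:
    //
    //     services.AddDataProtection();
    //
    services.AddMvc();
}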

What to Synchronize

Instead of just a machine key, in ASP.NET Core there are three things that have to line up for a farm scenario:

  • The application discriminator, which by default is based on where the app is installed.
  • The master encryption key used to protect the session keys at rest.
  • The encrypted set of session keys themselves, which have to be stored somewhere every node can read.

Why This Doesn’t “Just Work” in All Farms

  • The application discriminator, being based on the installed location of the app, is great if all machines in the farm are identical. If, instead, you’re using some containerization techniques, a virtual filesystem, or otherwise don’t have the app installed in the same location everywhere, you need to manually set this.
  • The master encryption key, while not used on non-Windows environments, does otherwise need to be synchronized. If you choose to use a certificate, the current EncryptedXml mechanism used internally allows you to pass in a certificate for use in encryption but in decryption it requires the certificate to be in the machine certificate store. That requirement is less than stellar since it means you can’t store the certificate in something like Azure Key Vault.
  • The encrypted set of session keys is easy to persist in a file share… if the farm is allowed to store things in a common share and all the network ports are open to allow that. If you want to store in a different repository like a database or Redis, there’s nothing out of the box that helps you.

Why This Works in Azure Web Apps

There is some documentation outlining how this works in Azure. In a nutshell:

  • All applications are installed to the same location, so the application discriminator lines up.
  • Keys aren’t encrypted at rest, so there is no master encryption key.
  • The session keys are put in a special folder location that is “magically” synchronized across all instances of the Azure Web App.

Setting Your Own Options

To set your own options, call services.AddDataProtection() after you call services.AddMvc() in your ConfigureServices() method in Startup. It will look something like this:

public void ConfigureServices(IServiceCollection services)
{
  services.AddMvc();
  services
    .AddDataProtection(opt => opt.ApplicationDiscriminator = "your-app-id")
    .ProtectKeysWithYourCustomKey()        // placeholder - your key encryption mechanism
    .PersistKeysToYourCustomLocation();    // placeholder - your key storage mechanism
}
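For reference, a farm configuration using only the out-of-box extensions might look something like the sketch below - persisting keys to a shared folder and protecting them with a certificate. The share path, app id, certificate file, and password are placeholders, and the certificate-store caveat mentioned earlier still applies to decryption with this approach.

using System.IO;
using System.Security.Cryptography.X509Certificates;
using Microsoft.AspNetCore.DataProtection;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();

        // Placeholder values - swap in your own share, app id, and certificate.
        var keyShare = new DirectoryInfo(@"\\your-file-share\dataprotection-keys");
        var certificate = new X509Certificate2(@"C:\certs\dataprotection.pfx", "your-password");

        services
            .AddDataProtection(opt => opt.ApplicationDiscriminator = "your-app-id")
            .ProtectKeysWithCertificate(certificate)
            .PersistKeysToFileSystem(keyShare);
    }
}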

Example Extensions

To help get you on your way, I’ve published a couple of extensions on GitHub. They include:

  • XML encryption/decryption using a certificate that isn’t required to be in a machine certificate store. This allows you to store the master certificate in a repository like Azure Key Vault, bypassing the requirement that the certificate be in the machine certificate store during decryption.
  • Encrypted XML storage in Redis. This allows you to share the session keys in a Redis database rather than a file share.

I wanted to be able to not only tidy my JSON objects, but also sort by property. I wanted to do this so I could unify my project.json and config.json files while working in .NET Core. Figuring out where people were adding keys, finding redundant things added to files, and so on… having a predictable order makes it all that much easier.

Up front, I’ll tell you this is a total hack. I got it to work as a user package (code in your User folder) but haven’t taken it as far as putting it into a repo or adding it to Package Control. That’s probably the next step. I just wanted to get this out there.

I’ll also say this is instructions for a Windows environment. The places you’ll have to adjust for Linux should be obvious, but I don’t have guidance or instructions to help you. Sorry.

First, install the External Command package. This is a great general-purpose package for setting up external commands and pushing Sublime Text buffers through. Select some text and have that text passed to an external shell command on stdin. (No selection? It runs the whole file.)

Next, create a folder called SortJson in your User package folder. This is where we’ll put the contents of the user module.

If you don’t have Node installed… why not? Really, though, if you don’t, go get it and install it. We need it because we use the Node json-stable-stringify package to do the work.

Drop to a command prompt in the SortJson folder and install the json-stable-stringify module.

npm install json-stable-stringify

You should get a node_modules folder under that SortJson folder and inside you’ll have json-stable-stringify (and maybe dependencies, but that’s fine).

Now we need a little script to take the contents of stdin and pass it through json-stable-stringify.

Create a script called sort-json.js in the SortJson folder. In that script, put this:

var stringify = require('json-stable-stringify');
var opts = {
    "space": 2
};

var stdin = process.stdin,
    stdout = process.stdout,
    inputChunks = [];

stdin.resume();
stdin.setEncoding('utf8');

stdin.on('data', function (chunk) {
    inputChunks.push(chunk);
});

stdin.on('end', function () {
    var inputJSON = inputChunks.join(""),
        parsedData = JSON.parse(inputJSON),
        outputJSON = stringify(parsedData, opts);
    stdout.write(outputJSON);
    stdout.write('\n');
});

Unfortunately, the External Command package doesn’t let you set a working directory, so you can’t just fire up Node and run the sort-json.js directly. We have to create a little batch file that helps our script find the json-stable-stringify module at runtime.

Create a batch script called sort-json.cmd in the SortJson folder. In that script, put this:

@SETLOCAL
@SET NODE_PATH=%~dp0node_modules
@node "%~dp0sort-json.js" %*

That temporarily adds the SortJson\node_modules folder to the NODE_PATH environment variable before running the sort-json.js script, so Node can find json-stable-stringify regardless of the working directory.

The last thing you need is a tie to the Sublime Text command palette so you can run the command to sort JSON.

Create a file called sort-json.sublime-commands in the SortJson folder. In that file, put this:

[
    {
        "caption": "JSON: Sort Object",
        "command": "filter_through_command",
        "args": { "cmdline": "\"%APPDATA%\\Sublime Text 3\\Packages\\User\\SortJson\\sort-json.cmd\"" }
    }
]

You’ll have to restart Sublime, but when you do you’ll see a command in the palette “JSON: Sort Object”. Load up a file with a JSON object and run that command. You should get a sorted JSON object.

I try to pair this with the JsFormat package (for JSBeautify integration) as well as SublimeLinter-json (for linting/error checking), both of which are in Package Control. If you want to tweak the formatting that comes out of the sort, the opts variable at the top of sort-json.js holds the options passed to json-stable-stringify.