javascript

I was working on my annual PTO schedule and thought it would be nice to collaborate on it with my wife, but also make it easy to visually indicate which days I was taking off.

Google Sheets is great for that sort of thing, so I started out with the calendar template. Then I wanted to highlight (with a background color) which days I was taking off.

That works well, but then I also wanted to see how many days I was planning to make sure I didn’t use too many vacation days.

How do you count cells in Google Sheets by background color?

One way is to use the “Power Tools” add-on in Sheets. You have to use the “Pro” version if you go that route, so consider that. I think the pro version is still free right now.

I did try that and had some trouble getting it to work. Maybe I was just doing it wrong. The count was always zero.

Instead, I wrote a script to do this. It’s based on this StackOverflow question, but I wanted my function to be parameterized, which the code in the question wasn’t.

First, go to “Tools > Script Editor…” in your sheet and paste in this script:

/**
 * Counts the number of items with a given background.
 *
 * @param {String} color The hex background color to count.
 * @param {String} inputRange The range of cells to check for the background color.
 * @return {Number} The number of cells with a matching background.
 */
function countBackground(color, inputRange) {
  var inputRangeCells = SpreadsheetApp.getActiveSheet().getRange(inputRange);
  var rowColors = inputRangeCells.getBackgrounds();
  var count = 0;

  for(var r = 0; r < rowColors.length; r++) {
    var cellColors = rowColors[r];
    for(var c = 0; c < cellColors.length; c++) {
      if(cellColors[c] == color) {
        count++;
      }
    }
  }

  return count;
}

Once that’s in, you can save and exit the script editor.

Back in your sheet, use the new function by entering it like a formula, like this:

=countBackground("#00ff00", "B12:X17")

It takes two parameters:

  • The first parameter is the color of the background highlight. It’s a hexadecimal color since that’s how Sheets stores it. The example I showed above is the bright green background color.
  • The second parameter is the cell range you want the function to look at. This is in the current sheet. In the example, I’m looking at the range from B12 through X17.

Gotcha: Sheets caches custom function results. Google Sheets caches the output of custom function execution, so after you enter the function (like the example above) and it calculates the number of items with the specified background, it won’t automatically run again. If you change the background of one of the cells, the function doesn’t re-run and the count doesn’t update. This is a Google Sheets thing, trying to optimize performance. What it means for you is that if you change cell backgrounds, you need to change the function temporarily to get it to update.

For example, say you have a cell that has this:

=countBackground("#00ff00", "B12:X17")

You update some background colors and want your count to update. Change the function to, say, look at a different range temporarily:

=countBackground("#00ff00", "B12:X18")

Then change it back:

=countBackground("#00ff00", "B12:X17")

By changing it, you force Google Sheets to re-run the function. I haven’t found any button or control to force custom functions to re-run, so this is the way I’ve been tricking it.

net, aspnet

It’s good to develop and deploy your .NET web apps using SSL/TLS to ensure you’re doing things correctly and securely.

If you’re using full IIS for development and testing, it’s easy enough to create a self-signed certificate right from the console. But you have to be an administrator to use IIS in development, and it’s not cool to dev as an admin, so that’s not usually an option. At least, it’s not for me.

IIS Express comes with a self-signed SSL certificate you can use for development and Visual Studio even prompts you to trust that certificate when you first fire up a project using it. (Which is much nicer than the hoops you used to have to jump through to trust it.)

That still doesn’t fix things if you’re using a different host, though, like Kestrel for .NET Core projects; or if you’re trying to share the development SSL certificate across your team rather than using the per-machine self-signed cert.

There are instructions on MSDN for creating a temporary self-signed certificate.

The instructions work well enough, but here’s something my team and I ran into: after a period of time, the certificate you created will no longer be trusted. We weren’t able to reproduce it on demand; just periodically (anywhere from one day to two weeks later) the certificate you placed in the “Trusted Root Certification Authorities” store as part of the instructions disappears.

It literally disappears: your self-signed CA cert will get removed from the list of trusted third-party CAs without a trace.

You can try capturing changes to the CA list with Procmon or set up security auditing on the registry keys that track the CA list and you won’t get anything. I tried for months. I worked through it with Microsoft Premier Support and they couldn’t find anything, either.

It’s easy enough to put it back, but it will eventually get removed again.

What is Going On?

The reason for this is the Automatic Third-Party CA Updates process that runs as part of Windows. This process periodically goes to Windows Update to get an updated list of trusted third-party certificate authorities, and any certificates in your trusted root store that aren’t on that list get deleted.

Obviously your self-signed dev cert won’t be in the list, so, poof. Gone. It was unclear to me as well as the MS support folks why we couldn’t catch this process modifying the certificate store via audits or anything else.

There are basically two options to fix this (assuming you don’t want to ignore the issue and just put the self-signed CA cert back every time it gets removed):

Option 1: Stop Using Self-Signed Certificates

Instead of using a self-signed development cert, try something from an actual, trusted third-party CA. You can get a free certificate from Let’s Encrypt, for example. Note that Let’s Encrypt certificates currently only last 90 days, but you’ll get 90 uninterrupted days where your certificate won’t magically lose trust.

Alternatively, if you have an internal CA that’s already trusted, use that. Explaining how to set up an internal CA is a bit beyond the scope of this post and it’s not a single-step five-minute process, but if you don’t want a third-party certificate, this is also an option.

Option 2: Turn Off Automatic Third-Party CA Updates

If you’re on an Active Directory domain you can do this through group policy; in the local machine policy you’ll find it under Computer Configuration / Administrative Templates / System / Internet Communication Management / Internet Communication Settings - set “Turn off Automatic Root Certificate Update” to “Enabled.”

I’m not recommending you do this. You accept some risk if you stop automatically updating your third-party CA list. However, if you’re really stuck and looking for a fix, this is how you turn it off.
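
If you’d rather script that setting than click through the policy editor, here’s a minimal sketch in C#. The registry path and value name are what I believe backs the “Turn off Automatic Root Certificate Update” policy - double-check it against gpedit.msc before relying on it, and run it elevated.

using Microsoft.Win32;

// Hedged sketch: writes what I believe is the registry value behind the
// "Turn off Automatic Root Certificate Update" group policy setting.
class DisableRootAutoUpdate
{
  static void Main()
  {
    using (var key = Registry.LocalMachine.CreateSubKey(
      @"SOFTWARE\Policies\Microsoft\SystemCertificates\AuthRoot"))
    {
      // 1 = stop the Automatic Third-Party CA Updates process from
      // refreshing (and pruning) the trusted root list.
      key.SetValue("DisableRootAutoUpdate", 1, RegistryValueKind.DWord);
    }
  }
}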

Turning off automatic third-party CA updates

personal

Well, I made it to 40.

That being a somewhat significant life milestone, I figured I’d stop to reflect a bit.

To celebrate the occasion, I rented out a local theater (a smaller 18-person place) and we had a private screening of Star Trek Beyond along with a great dinner while watching. It was a great time with family and friends.

I think 40 is not a bad place to be. I feel like I’m now old enough to “know better” but not so old I can’t still take risks. As it’s been approaching I haven’t looked at it in fear or with any real sense of “mortality” as it were, just… “Hey, here comes a sort of marker in the road of life. I wonder what it’ll mean?”

I feel like I’ve lost at least a little bit of that rough edge I had when I was younger, and that’s good. Looking back at the blog history here, the tone of the posts has changed to be slightly less aggressive, though I can’t honestly say a bit of that isn’t still inside me. I still don’t suffer fools gladly and I still get irritated with people who don’t respect me or my time. I’m not a patient guy and I have about ten minutes’ worth of attention span for poorly run meetings.

I’m working on it.

I’ve been in professional software development for about 20 years now. The majority of that work has been web-related, and it’s sort of weird to think the field has been around that long. I remember doing a project in college writing a web CGI library in Standard ML and that was pretty new stuff.

As of this year, 15 of my career years have been spent at Fiserv. Given my first job was when I was 14, that’s actually most of my working life. I was originally hired at Corillian, which was subsequently acquired by CheckFree, which, in turn, was acquired by Fiserv. With all the changes of management, process, tools, and so on, it feels like having worked at different companies over the years even though the overall job hasn’t changed. That’s actually one of the reasons I haven’t really felt the need to go elsewhere - I’ve had the opportunity to see product development from a long-term standpoint, experience different group dynamics, try different development processes… and all without the instability of being an independent contractor. It’s been good.

I originally went to college wanting to be a computer animator / 3D modeler. Due to a series of events involving getting some bad counseling and misleading information, which I am still very bitter about, I ended up in computer science. Turns out art houses believe you can teach an artist computer science but computer scientists will never be good at art. Even if you have a portfolio and a demo reel. So that was the end of that.

That’s actually why I started in web stuff - I was in love with the UI aspect of things. Over time I’ve found the art in solving computer science problems and have lost my interest in pushing pixels (now with CSS).

I still have a passion for art, I still do crafts and things at home. I really like sewing, which is weird because when I was a kid I would feel dizzy in fabric stores so I hated it. (My mom used to sew when I was a kid.) I actually called them “dizzy places.” I’m curious if maybe I had a chemical sensitivity to the sizing (or whatever) that ships on the fabric. Maybe I was just super bored. In any case, I really like it now. I like drawing and coloring. I like building stuff. I’m probably not the best artist, but I have a good time with it. I do wish I had more time for it.

I waited until later in life to start a family. I’ve only been married for coming up on 10 years now, though I’ve been with my wife for 16 years. I only have one kid and she’s five so she came along slightly later, too. That’s probably a good thing since she’s quite the handful and I’m not sure I’d have had the patience required to handle her in earlier times. I still kinda don’t. I definitely don’t have the energy I would have had 15 years ago or whatever.

I only have one grandparent left. My wife has none. My daughter really won’t know great-grandparents. I’m not sure how I feel about that. I was young when I met my great-grandparents and I honestly don’t remember much about them. I’m guessing it’ll be the same for her.

I love my parents and have a good relationship with them. They’re around for my daughter and love her to pieces. That makes me happy.

I have two sisters, both of whom I love, but only one of whom still talks to the family. The one that still talks to us has a great life and family of her own that means we don’t cross paths often. I’m glad she’s happy and has her own thing going, but I realize that our lives are so different now that if she weren’t actually related to me we probably wouldn’t really keep in touch. A lot of the commonality we shared as kids has disappeared over time.

Friends have come and gone over the years. I don’t have a lot of friends, but I’m glad to say the ones I have are great. I’m still friends with a few people I knew from school, but my school years weren’t the best for me so I don’t really keep in touch with many of them. Some folks I swear I’d be best friends with for life have drifted away. Some folks I never would have guessed have turned into the best friends I could have. I guess that’s how it goes as people change.

I haven’t made my first billion, or my first million, but I’m comfortable and don’t feel unsuccessful. I wish we had a bigger house, but there is also a lot of space we don’t use so maybe it’s just that I want a different layout. I feel comfortable and don’t live paycheck to paycheck so I can’t say I’m not fortunate. (Don’t get me wrong, though, I’m not saying I’m not interested in money. I don’t work for free.)

Anyway, here’s to at least another 40 years. The first 40 has been great, I’m curious what the next batch has in store.

aspnet, autofac

As we all saw, ASP.NET Core and .NET Core went RTM this past Monday. Congratulations to those teams - it’s been a long time coming and it’s some pretty amazing stuff.

Every time an RC (or, now, RTM) comes out, questions start flooding in on Autofac, sometimes literally within minutes of the go-live, asking when Autofac will be coming out with an update. While we have an issue you can track if you want to watch the progress, I figured I’d give a status update on where we are and where we’re going with respect to RTM. I’ll also explain why we are where we are.

Current Status

We have an RC build of core Autofac out on NuGet that is compatible with .NET Core RTM. That includes a version of Autofac.Extensions.DependencyInjection, the Autofac implementation against Microsoft.Extensions.DependencyInjection. We’ll be calling this version 4.0.0. We are working hard to get a “stable” version released, but we’ve hit a few snags at the last minute, which I’ll go into shortly.

About half of the non-portable projects have been updated to be compatible with Autofac 4.0.0. For the most part this was just an update to the NuGet packages, but with Autofac 4.0.0 we also stopped using the old code access security model (remember [AllowPartiallyTrustedCallers]?) and some of these projects needed to be updated accordingly.

We are working hard to get the other half of the integration projects updated. Portable projects are being converted to use the new project.json structure and target netstandard framework monikers. Non-portable projects are sticking with .csproj but are being verified for compatibility with Autofac 4.0.0, getting updated as needed.

Why It’s Taking So Long

Oh, where do I begin.

Let me preface this by saying it’s going to sound like a rant. And in some ways it is. I do love what the .NET Core and ASP.NET Core teams have out there now, but it’s been a bumpy ride to get here and many of the bumps are what caused the delay.

First, let’s set the scene: There are really only two of us actively working on Autofac and the various officially supported integration libraries - it’s me and Alex Meyer-Gleaves. There are 23 integration projects we support alongside core Autofac. There’s a repository of examples as well as documentation. And, of course, there are questions that come in on StackOverflow, issues that come in that need responses, and requests on the discussion forum. We support this on the side since we both have our own full-time jobs and families.

I’m not complaining, truly. I raise all that because it’s not immediately evident. When you think about what makes Autofac tick (or AutoMapper, or Xunit, or any of your other favorite OSS projects that aren’t specifically backed/owned by a company like Microsoft or a consultant making money from support), it’s a very small number of people with quite a lot of work to get done in pretty much no time. Core Autofac is important, but it’s the tip of a very large iceberg.

We are sooooo lucky to have community help where we get it. We have some amazing folks who chime in on Autofac questions on StackOverflow. We’ve gotten some pretty awesome pull requests to add some new features lately. Where we get help, it’s super. But, admittedly, IoC containers and how they work internally are tricky beasts. There aren’t a lot of simple up-for-grabs sort of fixes that we have in the core product. It definitely reduces the number of things that we can get help with from folks who want to drop in and get something done quickly. (The integration projects are much easier to help with than core Autofac.)

Now, keep that in the back of your mind. We’ll tie into that shortly.

You know how the tooling for .NET Core changed like 1,000 times? You know how there was pretty much no documentation for most of that? And there were all sorts of weird things like the only examples available being from the .NET teams and they were using internal tools that folks didn’t have great access to. Every new beta or RC release was a nightmare. Mention that and you get comments like, “That’s life in the big city,” which is surely one way to look at it but is definitely dismissive of the pain involved.

Every release, we’d need to reverse-engineer the way the .NET teams had changed their builds, figure out how the tools were working, figure out how to address the breaking changes, and so on. Sometimes (rarely, but it happened) someone would have their project ported over first and we could look at how they did it. We definitely weren’t the only folks to feel that, I know.

NuGet lagging behind was painful because just updating core Autofac didn’t necessarily mean we could update the integration libraries. Especially with the target framework moniker shake-up, you’d find that without the tooling in place to support the whole chain, you could upgrade one library but not be able to take the upgrade in a downstream dependency because the tooling would consider it incompatible.

Anyway, with just the two of us (and community help where possible) and the tooling/library change challenges, there was a lot of wheel-spinning. There were weeks where all we did was try to figure out the right magic combination of things in project.json to get things compiling. Did it work? I dunno, we can’t test because we don’t have a unit test framework compatible with this version of .NET Core. Can’t take it in a downstream integration library to test things, either, due to tooling challenges.

Lots of time spent just keeping up.

Finally, we’ve been bitten by the “conforming container” introduced for ASP.NET Core. Microsoft.Extensions.DependencyInjection is an abstraction around DI that was introduced to support ASP.NET Core. It’s a “conforming container” because it means anything that backs the IServiceProvider interface they use needs to support certain features and react in the same way. In some cases that’s fine. For the most part, simple stuff like GetService<T>() is pretty easy to implement regardless of the backing container.

The stuff you can’t do in a conforming container is use the container-specific features. For example, Autofac lets you pass parameters during a Resolve<T>() call. You can’t do that without actually referencing the Autofac lifetime scope - the IServiceProvider interface serves as a “lowest common denominator” for containers.
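
To make that concrete, here’s a hedged sketch of the difference. Clock and Greeter are made-up components just for illustration; AutofacServiceProvider comes from the Autofac.Extensions.DependencyInjection package mentioned above.

using System;
using Autofac;
using Autofac.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;

// Hypothetical components used only for this example.
public class Clock { }
public class Greeter
{
  public Greeter(string name) { this.Name = name; }
  public string Name { get; private set; }
}

public static class ConformingContainerSketch
{
  public static void Main()
  {
    var builder = new ContainerBuilder();
    builder.RegisterType<Clock>();
    builder.RegisterType<Greeter>();
    var container = builder.Build();

    // The "lowest common denominator": anything you do through IServiceProvider
    // has to be something every conforming container can support.
    IServiceProvider provider = new AutofacServiceProvider(container);
    var clock = provider.GetRequiredService<Clock>();

    // Autofac-specific: pass a parameter at resolve time. There's no way to
    // express this through IServiceProvider - you need the Autofac container
    // (or lifetime scope) itself.
    var greeter = container.Resolve<Greeter>(new NamedParameter("name", "from-parameter"));
    Console.WriteLine(greeter.Name);
  }
}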

All along the way, we’ve been testing the junk out of Autofac to make sure it works correctly with Microsoft.Extensions.DependencyInjection. It’s been just fine so far. However, at the last minute (20 days ago now) we got word that not only did we need to implement the service provider interface as specified, but we also need to return IEnumerable<T> collections in the order that the components were registered.

We don’t currently do that. Given IEnumerable<T> has no specification around ordering, and all previous framework features requiring ordering (controller action filters, etc.) used an Order property or something like that, it’s never been an issue. Interfaces using IEnumerable<T> generally don’t assume order (or, at least, they shouldn’t). This is a new requirement for the conforming container and it’s amazingly non-trivial to implement.
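
Here’s a hedged sketch of the behavior the conforming container now expects, shown against the default provider (IThing and its implementations are made-up names):

using System.Collections.Generic;
using Microsoft.Extensions.DependencyInjection;

// Hypothetical service and implementations, just for illustration.
public interface IThing { }
public class FirstThing : IThing { }
public class SecondThing : IThing { }

public static class OrderingExpectation
{
  public static void Main()
  {
    var services = new ServiceCollection();
    services.AddTransient<IThing, FirstThing>();
    services.AddTransient<IThing, SecondThing>();

    // The conforming container expects the IEnumerable<IThing> to come back
    // in registration order: FirstThing, then SecondThing. The default
    // provider behaves this way; the new requirement is that any container
    // backing IServiceProvider (Autofac included) does the same.
    var provider = services.BuildServiceProvider();
    IEnumerable<IThing> things = provider.GetServices<IThing>();
    // things enumerates as [FirstThing, SecondThing], in registration order.
  }
}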

It’s hard to implement because Autofac tracks registrations in a more complex way than just adding them to a list. If you add a standard registration, it does get added to a list. But if you add .PreserveExistingDefaults() because you want to register something and keep the existing default service in place if one’s already registered - that goes in at the end of the list instead of at the head. We also support very dynamic “registration sources” - a way to add registrations to the container on the fly without making explicit registrations. That’s how we handle things like Lazy<T> automatically working.

(That’s sort of a nutshell version. It gets more complex as you think about child/nested lifetime scopes.)

Point being, this isn’t as simple as just returning the list of stuff that got registered. We have to update Autofac to start keeping track of registration order yet still allow the existing functionality to behave correctly. And what do you do with dynamic registration sources? Where do those show up in the list?

The answers are not so straightforward.

We are currently working hard on solving that ordering problem. Actually, right now, Alex is working hard on that while I try and get the rest of the 23 integration projects converted, update the documentation, answer StackOverflow/issue/forum questions, and so on. Thank goodness for that guy because I couldn’t do this by myself.

If you would like to follow along on the path to a stable release, check out these issues:

While it may not be obvious, adding lots of duplicate issues asking for status or “me too” comments on issues in the repo doesn’t help. In some cases it’s frustrating (it’s a “no pressure, but hurry up” vote) and may slow things down as we spend what little time we have responding to the dupes and the questions rather than actually getting things done. I love the enthusiasm and interest, but please help us out by not adding duplicates. GitHub recently added “reactions” to issues (that little smiley face at the top-right of an issue comment) - jump in with a thumbs-up or smile or something; or subscribe to an issue if you’re interested in following along (there’s a subscribe button along the right up near the top of the issue, under the tags).

Thanks (So Far)

Finally, I have some thanks to hand out. Like I said, we couldn’t get this done without support from the community. I know I’m probably leaving someone out, and if so, I’m sorry - please know I didn’t intentionally do it.

  • The ASP.NET Core team - These guys took the time to talk directly to Alex and me about how things were progressing and answered several questions.
  • Oren Novotny - When the .NET target framework moniker problem was getting us down, he helped clear things up.
  • Cyril Durand and Viktor Nemes - These guys are rockstars on StackOverflow when it comes to Autofac questions.
  • Caio Proiete, Taylor Southwick, Kieren Johnstone, Geert Van Laethem, Cosmin Lazar, Shea Strickland, Roger Kratz - Pull requests of any size are awesome. These folks submitted to the core Autofac project within the last year. This is where I’m sure I missed someone because it was a manually pulled list and didn’t include the integration libraries. If you helped us out, know I’m thanking you right now.

personal

As of yesterday, June 27, 2016, I’ve worked for 15 years at Fiserv.

My 15 year certificate

Given I got my first “official” job when I was 14 and I turn 40 this year, that’s over half of my professional working life that I’ve been here.

I started in the marketing department back when the company was Corillian. I got hired to help work on the corporate web site, www.corillian.com (which now redirects to a Fiserv page on internet banking).

I think it was a year or two into that when some restructuring came along and the web site transferred to the IT department. I transferred with it and became the only IT developer doing internal tools and working on automating things.

I remember working on rolling out the original SharePoint 2003 along with Windows SharePoint Services in an overall Office 2003 effort. We had some pretty fancy “web parts” in VBScript to do custom document indexing and reporting. I vaguely recall updating several of those parts to be .NET 1.1 assemblies.

It was in 2004 when a need arose for a developer to work on some proof-of-concept and demo web sites that our sales folks could take around on calls. I happened to be free, so I worked with our product folks on those things. As sometimes happens, those POC and demo sites became the template for what we wanted the next version of the product to be like. And since I’d already worked on them… why not come over to product development and do the work “for real this time?”

I worked on the very first iteration of the Corillian Consumer Banking product. That was in .NET 1.1 though 2.0 was right around the corner. I remember having to back-port features like ASP.NET master pages into 1.1. (I still like our implementation better.) This was back when Hanselman was still at Corillian and we worked together on several features, particularly where the UI had to interact with/consume services.

In early 2007 CheckFree acquired Corillian. After the dust on that settled, I was still working on Consumer Banking - basically, same job, new company. There were definitely some process hiccups as we went from a fairly agile Scrum-ish methodology that Corillian had into CheckFree’s version of Rational Unified Process, but we made do.

In late 2007, Fiserv acquired CheckFree.

Yeah, that was some crazy times.

Fiserv, for the most part, adopted CheckFree’s development processes, at least as far as our group found. RUP gave way after a while to something more iterative but still not super agile. It was only pretty recently (last five-ish years?) that we’ve finally made our way back to Scrum.

The majority of my time has been in web service and UI development. I did get my Microsoft Certified DBA and Microsoft Certified .NET Solutions Developer certifications so I’m not uncomfortable working at all layers, but I do like to spend my time a little higher than the data tier when possible.

In my most recent times, I’ve been working on REST API stuff using ASP.NET Core. Always something new to learn, always interesting.

Also interesting: with the various acquisitions, reorganizations, and re-prioritizations we’ve seen over the years, even though I’ve (effectively) worked for the same company, I’ve gotten a lot of great experience with different people, processes, tools, and development environments. In some cases, it’s been like working different jobs… despite it being the same job. Definitely some great experience.

Plus, I’m afforded (a small amount of) time to help out the open source community with Autofac and other projects.

That’s actually why I’ve stayed so long. I can only speak for myself, but even with me sort of doing “the same thing” for so long… it’s not the same thing. I’m always learning something new, there’s always something changing, there’s always a new problem to solve. I work with some great people who are constantly trying to improve our products and processes.

And a bit of seniority never hurt anyone.

humor, personal

In the last couple of weeks I’ve had the opportunity to get together with folks for lunch or dinner and I’m finding it’s hard to agree on a “nice place to eat.”

Here’s the thing.

I’m not a really picky eater. At least, I don’t think so. I like simple food that tastes good. The thing is, I live in the Portland, OR metro area, so when someone talks about “a nice place to eat” it usually has something to do with an independent restaurateur who has “a fresh take on old ideas.” This generally amounts to “I don’t want to eat there” for me.

Don’t get me wrong, I’ve tried several of these places. I have yet to enjoy them. It’s not like I didn’t give it a fair shake.

With that in mind, I decided to post a list of “restaurant red flags” - things that warn me against eating at a place. No single item here instantly disqualifies a place, but a combination of them will probably result in a “no.”

If your restaurant has/does/says any of these things, I’m out:

  • The description of your restaurant contains some version of the word “gastronomy” that is not immediately prefixed by “molecular.”
  • All the pictures of the meat dishes appear to be barely cooked to rare.
  • I have to look up what two or more of the words are on any item.
  • Your menu is in English but you don’t use the common English words for things so you can sound fancier (e.g., you use “ali-oli” instead of “aioli”).
  • You serve a dish based on a creature I would normally otherwise consider “vermin” rather than “game.”
  • You’ve been in any top ten restaurant list where the food is described as “new and exciting.”
  • The intent of the food is to have lots of small dishes purchased and passed around. (I hate tapas. Joey doesn’t share food.)
  • You think it’s a great idea to have a lot of community tables and no individual tables.
  • There’s a lot of fermented stuff on the menu that isn’t alcohol.
  • More than one item on the menu can be described as a “delicacy.”
  • A significant number of the meat dishes are made with the less-common cuts of meat (cheek, tongue, tail, etc.).

I may add to that list in the future, but basically, yeah. Red flags.

aspnet, security

I’ve been working with ASP.NET Core in a web farm environment. Things worked great when deployed to an Azure Web App but in a different farm setting (Pivotal Cloud Foundry) I started getting an error I hadn’t seen before: System.AggregateException: Unhandled remote failure. ---> System.Exception: Unable to unprotect the message.State.

This happened in the context of the OpenID Connect middleware, specifically when a value encrypted by one instance of the ASP.NET Core application was decrypted by a different instance of the application.

The problem is that the data protection keys weren’t synchronized across all instances of the application. This is a lot like the classic ASP.NET issue where you have to ensure all nodes in the farm have the machine key synchronized so ViewState and other things can be shared across application instances.

Instead of the machine key, ASP.NET Core uses Microsoft.AspNetCore.DataProtection for handling the encryption keys used to protect state values that get posted between the app and the client. There is plenty of documentation on how this works, but not much in the way of a concise explanation of what it takes to get things working in a farm. Hopefully this will help.

How DataProtection Gets Added

Normally you don’t manually add the data protection bits to the application pipeline. It’s done for you when you call services.AddMvc() during the ConfigureServices() part of application startup. That services.AddMvc() line actually fans out into adding a lot of default services, some of which are the defaults for data protection.

What to Synchronize

Instead of just the machine key, in ASP.NET Core you have three things that must line up for a farm scenario:

  • The application discriminator - a value that distinguishes one application from another, based by default on the installed location of the app.
  • The master encryption key used to protect the session keys at rest.
  • The encrypted set of session keys that actually protect the data passed between the app and the client.

Why This Doesn’t “Just Work” in All Farms

  • The application discriminator, being based on the installed location of the app, is great if all machines in the farm are identical. If, instead, you’re using some containerization techniques, a virtual filesystem, or otherwise don’t have the app installed in the same location everywhere, you need to manually set this.
  • The master encryption key, while not used on non-Windows environments, does otherwise need to be synchronized. If you choose to use a certificate, the current EncryptedXml mechanism used internally allows you to pass in a certificate for use in encryption but in decryption it requires the certificate to be in the machine certificate store. That requirement is less than stellar since it means you can’t store the certificate in something like Azure Key Vault.
  • The encrypted set of session keys is easy to persist in a file share… if the farm is allowed to store things in a common share and all the network ports are open to allow that. If you want to store in a different repository like a database or Redis, there’s nothing out of the box that helps you.

Why This Works in Azure Web Apps

There is some documentation outlining how this works in Azure. In a nutshell:

  • All applications are installed to the same location, so the application discriminator lines up.
  • Keys aren’t encrypted at rest, so there is no master encryption key.
  • The session keys are put in a special folder location that is “magically” synchronized across all instances of the Azure Web App.

Setting Your Own Options

To set your own options, call services.AddDataProtection() after you call services.AddMvc() in your ConfigureServices() method in Startup. It will look something like this:

public virtual void ConfigureServices(IServiceCollection services)
{
  services.AddMvc();
  services
    .AddDataProtection(opt => opt.ApplicationDiscriminator = "your-app-id")
    .ProtectKeysWithYourCustomKey()
    .PersistKeysToYourCustomLocation();
}
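
To make that a bit more concrete, here’s a hedged variant using the built-in extensions. The share path, certificate file, and password are placeholders, and the certificate option assumes the ProtectKeysWithCertificate extension is available in your version of the data protection packages.

using System.IO;
using System.Security.Cryptography.X509Certificates;
using Microsoft.AspNetCore.DataProtection;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
  public virtual void ConfigureServices(IServiceCollection services)
  {
    services.AddMvc();
    services
      .AddDataProtection(opt => opt.ApplicationDiscriminator = "your-app-id")
      // Master key protection: per the caveat above, decryption currently
      // expects the certificate to also be in the machine certificate store.
      .ProtectKeysWithCertificate(new X509Certificate2(@"C:\keys\dataprotection.pfx", "placeholder-password"))
      // Session key persistence: a share every node in the farm can reach.
      .PersistKeysToFileSystem(new DirectoryInfo(@"\\server\share\dataprotection-keys"));
  }
}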

Example Extensions

To help get you on your way, I’ve published a couple of extensions on GitHub. They include:

  • XML encryption/decryption using a certificate that isn’t required to be in a machine certificate store. This allows you to store the master certificate in a repository like Azure Key Vault. This bypasses that requirement that the certificate be in the machine certificate store during decryption.
  • Encrypted XML storage in Redis. This allows you to share the session keys in a Redis database rather than a file share.

json, sublime

I wanted to be able to not only tidy my JSON objects, but also sort by property. I wanted to do this so I could unify my project.json and config.json files while working in .NET Core. Figuring out where people were adding keys, finding redundant things added to files, and so on… having a predictable order makes it all that much easier.

Up front, I’ll tell you this is a total hack. I got it to work as a user package (code in your User folder) but haven’t taken it as far as putting it into a repo or adding it to Package Control. That’s probably the next step. I just wanted to get this out there.

I’ll also say this is instructions for a Windows environment. The places you’ll have to adjust for Linux should be obvious, but I don’t have guidance or instructions to help you. Sorry.

First, install the External Command package. This is a great general-purpose package for setting up external commands and pushing Sublime Text buffers through. Select some text and have that text passed to an external shell command on stdin. (No selection? It runs the whole file.)

Next, create a folder called SortJson in your User package folder. This is where we’ll put the contents of the user module.

If you don’t have Node installed… why not? Really, though, if you don’t, go get it and install it. We need it because we use the Node json-stable-stringify package to do the work.

Drop to a command prompt in the SortJson folder and install the json-stable-stringify module.

npm install json-stable-stringify

You should get a node_modules folder under that SortJson folder and inside you’ll have json-stable-stringify (and maybe dependencies, but that’s fine).

Now we need a little script to take the contents of stdin and pass it through json-stable-stringify.

Create a script called sort-json.js in the SortJson folder. In that script, put this:

var stringify = require('json-stable-stringify');
var opts = {
    "space": 2
};

var stdin = process.stdin,
    stdout = process.stdout,
    inputChunks = [];

stdin.resume();
stdin.setEncoding('utf8');

stdin.on('data', function (chunk) {
    inputChunks.push(chunk);
});

stdin.on('end', function () {
    var inputJSON = inputChunks.join(""),
        parsedData = JSON.parse(inputJSON),
        outputJSON = stringify(parsedData, opts);
    stdout.write(outputJSON);
    stdout.write('\n');
});

Unfortunately, the External Command package doesn’t let you set a working directory, so you can’t just fire up Node and run the sort-json.js directly. We have to create a little batch file that helps our script find the json-stable-stringify module at runtime.

Create a batch script called sort-json.cmd in the SortJson folder. In that script, put this:

@SETLOCAL
@SET NODE_PATH=%~dp0node_modules
@node "%~dp0sort-json.js" %*

That temporarily sets the NODE_PATH environment variable to the SortJson\node_modules folder before running the sort-json.js script, so Node can find the json-stable-stringify module.

The last thing you need is a tie to the Sublime Text command palette so you can run the command to sort JSON.

Create a file called sort-json.sublime-commands in the SortJson folder. In that file, put this:

[
    {
        "caption": "JSON: Sort Object",
        "command": "filter_through_command",
        "args": { "cmdline": "\"%APPDATA%\\Sublime Text 3\\Packages\\User\\SortJson\\sort-json.cmd\"" }
    }
]

You’ll have to restart Sublime, but when you do you’ll see a command in the palette “JSON: Sort Object”. Load up a file with a JSON object and run that command. You should get a sorted JSON object.

I try to pair this with the JsFormat package (for JSBeautify integration) as well as SublimeLinter-json (for linting/error checking), both of which are in Package Control. If you want to tweak the formatting that comes out of the sort, the opts variable at the top of sort-json.js holds the options passed to json-stable-stringify.

personal, movies, humor

I was watching Cutthroat Island this weekend with my daughter, who loves pirate movies, when I started thinking about these giant treasure chests full of gold you see in such films.

The stereotypical treasure chest

While I get that it’s a movie, it was fun to think about how practical carting around that treasure chest of doubloons might be.

Assumptions:

  • Doubloons are basically pure gold. (They were actually 22-karat gold, but this keeps the math simple.)
  • Gold has a density of about 19.3g per cubic centimeter.
  • A doubloon weighs 6.867g.
  • Coins don’t pack perfectly into a box, so figure a packing efficiency of around 35%.

Now, let’s say the treasure chest is like 90cm x 60cm x 60cm on the inside. A little large-ish, but not unheard of in a pirate movie.

  • The chest has 324,000cc interior capacity.
  • Multiplied by the 35% packing efficiency, you’d have 113,400cc of gold.
  • 113,400cc x 19.3g per cc = 2,188,620g = 2188.62kg (4825.1lb).

There is no way pirates are carrying around 5000lb gold chests.

Let’s figure a couple guys - one on each end of the chest - need to cart the chest through the jungle or something. They’re strong, but they’re not Hafþór Júlíus Björnsson. You’re looking at something like 115kg (253.5lb) or so before the thing gets unwieldy.

Working backwards, 115kg is 5958.55cc of gold. With the packing ratio, that’s a chest with 17024.43cc total capacity. To make the math easy, let’s say it’s a cube-shaped chest. That’d yield a chest with internal dimensions of roughly 25.73cm (10.13 inches) on a side.
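
If you want to check the math, here’s a quick sketch using C# as a calculator; the constants mirror the assumptions above.

using System;

class TreasureChestMath
{
  static void Main()
  {
    const double goldDensity = 19.3;       // g per cc
    const double packingEfficiency = 0.35;

    // The movie-sized chest: 90cm x 60cm x 60cm interior.
    double bigChestCc = 90.0 * 60.0 * 60.0;                            // 324,000 cc
    double bigChestGoldKg = bigChestCc * packingEfficiency * goldDensity / 1000.0;
    Console.WriteLine("Full movie chest: {0:N2} kg", bigChestGoldKg);  // ~2,188.62 kg

    // Working backwards from a carryable 115 kg:
    double goldCc = 115.0 * 1000.0 / goldDensity;                      // ~5,958.55 cc of gold
    double chestCc = goldCc / packingEfficiency;                       // ~17,024.43 cc of chest
    double sideCm = Math.Pow(chestCc, 1.0 / 3.0);                      // ~25.73 cm per side
    Console.WriteLine("Carryable cube chest side: {0:N2} cm", sideCm);
  }
}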

That’s a pretty tiny treasure chest.

At least, tiny in comparison to what you usually see on a pirate movie.

Now, I could be generous with my packing efficiency. Maybe it’s far less than 35%, or it could be that the chest isn’t packed to the top with gold, or both.

If you had that 90cm x 60cm x 60cm chest and limited yourself to the 115kg weight, that’d put the effective packing efficiency of the doubloons at closer to 2%; or, keeping the 35% packing, it’d mean the chest is only about 5% full.

Just for fun, we can also calculate the value of such treasure. The price of gold today (as I write this) is $39,175.66 USD per kg.

  • 115kg of gold = $4,505,200.90 USD
  • 2188.62kg of gold = $85,740,632.99 USD

If the chest was full of doubloons (which, again, are actually 22k gold, not 24k), we know that doubloons weigh 6.867g so you’d have…

  • 115kg of doubloons = 16,746 doubloons
  • 2188.62kg of doubloons = 318,715 doubloons

Doubloons seem to be specified by weight rather than physical size (or, at least, I didn’t see an average size listed anywhere in my two minutes of searching), so I’m not sure how big a chest with that number of doubloons would need to be. I can’t imagine it’s too far off from my original calculation.

Anyway, it was kind of fun to think about. It makes for a better movie to have the giant chest of treasure, so it’s all good.

net, aspnet, azure

I got the opportunity to hit the Microsoft Build conference this year. Last time I was able to make it was 2012, so it was good to be able to get back in and see what’s new in person.

I’m going to review this from my perspective as a web / web service / REST API sort of developer. As a different person or developer, you may have picked up something different you thought was super cool that I totally missed or tuned out. So.

Usually there are some “key themes” that get pushed at Build. Back in 2012, it was all Windows 8 applications. This year it was:

  • Internet of Things
  • Office and Cortana Integration
  • Cross-platform Applications
  • Microservices and Bots

Keeping in mind my status as a web developer, the microservice/bot stuff was the most interesting thing to me. I don’t work with hardware, so IoT is neat but not valuable. I don’t really need to integrate with Office, and Cortana isn’t web-based. I can maybe see doing some cross-platform stuff for mobile apps that talk to my REST APIs, but this was largely outside the scope of what I do, too.

I was reasonably disappointed by the keynotes. They usually have some big reveal in the keynotes. Day one left me wanting. Something about 22 new machine learning APIs being released that I won’t use. Day two the big reveal for me was free Xamarin for everyone. Again, cross-platform dev isn’t my thing, but still that’s pretty cool.

The sessions were impossible to get into. In 2012 they hosted the conference in Redmond on the Microsoft campus. I got into every session I was interested in. Since then they’ve hosted it in San Francisco at the Moscone Center. I can’t speak for other years, but this year you just couldn’t get into the sessions. If you weren’t lined up half an hour early to get in, forget it - there weren’t enough seats. In total I only got to see three different sessions. In three days, I saw three sessions. I don’t feel like I should have to catch up on the conference I paid to attend by watching videos of the sessions I wanted to see.

The schedule wasn’t public very early. I normally like to check out the web site and figure out which sessions I want to see when I get there. They didn’t release the speaker schedule (to my knowledge) until the day before the conference. I may well have canceled my reservation had I known the list of topics ahead of time. Maybe that’s why they didn’t release it.

There were great code labs. Something they didn’t have as much of in previous years was interactive labs where you could learn new tech. They did a really good job of this, with several physical labs with hardware all set up so you could try stuff out. This was super valuable and the majority of my conference takeaways came from these labs. In particular, I finally got a good feel for Docker by doing one of these labs.

There was no hardware giveaway this year which makes me wonder why the price was so high. I get that there are big parties and so on, but I would rather the price go down or there be some hardware than just keep the price cranked up. That said, I did come out with a Raspberry Pi 2 Azure IoT starter kit, so I can at least experiment with some of the IoT things they announced. Who knows? Maybe I’ll turn into an IoT aficionado.

There was a pitifully small amount of information about .NET Core. .NET Core and ASP.NET Core are on the top of my mind lately. Most of my current projects, including Autofac, are working through the challenges of RC1 and getting to RC2. There were something like three sessions total on .NET Core, most of which was just intro information. Any target dates on RC2? What’s the status on dotnet CLI? Honestly, I was hoping the big keynote announcement would be the .NET Core RC2 release. No such luck.

Access to the actual product teams was awesome. This almost (but not quite) makes up for the sessions being full. The ability to talk directly to various product team members for things like Visual Studio Online, ASP.NET Core, NuGet, Visual Studio, and Azure offerings was fantastic. It can be so hard sometimes to get questions answered or get the straight scoop on what’s going on with a project - cutting through the red tape and just talking to people is the perfect answer to that.

There was a big to-do around HoloLens - everything from conceptual demos to a full demo of walking on Mars. The lines for it were ridiculous. I didn’t get a chance to try it myself; a couple of colleagues tried it and said it wasn’t as mind-blowing as it was promoted to be.

Nutshell:

  • Logistics: Not good. If you sell out in five minutes and don’t have enough seats for sessions, that’s not cool.
  • Topics: Not good. I get there’s a focus on a certain subset of topics, but I can usually find something cool I’m excited about. Not this time.
  • Educational Value: OK. I didn’t get much from sessions but the labs and the on-hand staff were great.
  • Networking Value: Good. I don’t normally “network” with people in the whole “sales” context, but being able to meet up with people from different vendors and product teams and speak face to face was a valuable thing.

net

I’ve been working a bit with Serilog and ASP.NET Core lately. In both cases, there are constructs that use CallContext to store data across an asynchronous flow. For Serilog, it’s the LogContext class; for ASP.NET Core it’s the HttpContextAccessor.

Running tests, I’ve noticed some inconsistent behavior depending on how I set up the test fakes. For example, when testing some middleware that modifies the Serilog LogContext, I might set it up like this:

var mw = new SomeMiddleware(ctx => Task.FromResult(0));

Note the next RequestDelegate I set up is just a Task.FromResult call because I don’t really care what’s going on in there - the point is to see if the LogContext is changed after the middleware executes.

Unfortunately, what I’ve found is that the static Task methods, like Task.FromResult and Task.Delay, don’t behave consistently with respect to using CallContext to store data across async calls - the behavior depends on whether the returned Task is awaited inside an async method or just returned directly.

To illustrate the point, I’ve put together a small set of unit tests here:

using System.Runtime.Remoting.Messaging;
using System.Threading.Tasks;
using Xunit;

public class CallContextTest
{
  [Fact]
  public void SimpleCallWithoutAsync()
  {
    var value = new object();
    SetCallContextData(value);
    Assert.Same(value, GetCallContextData());
  }

  [Fact]
  public async void AsyncMethodCallsTaskMethod()
  {
    var value = new object();
    await NoOpTaskMethod(value);
    Assert.Same(value, GetCallContextData());
  }

  [Fact]
  public async void AsyncMethodCallsAsyncFromResultMethod()
  {
    var value = new object();
    await NoOpAsyncMethodFromResult(value);

    // THIS FAILS - the call context data
    // will come back as null.
    Assert.Same(value, GetCallContextData());
  }

  private static object GetCallContextData()
  {
    return CallContext.LogicalGetData("testdata");
  }

  private static void SetCallContextData(object value)
  {
    CallContext.LogicalSetData("testdata", value);
  }

  /*
   * Note the difference between these two methods:
   * One _awaits_ the Task.FromResult, one returns it directly.
   * This could also be Task.Delay.
   */

  private async Task NoOpAsyncMethodFromResult(object value)
  {
    // Using this one will cause the CallContext
    // data to be lost.
    SetCallContextData(value);
    await Task.FromResult(0);
  }

  private Task NoOpTaskMethod(object value)
  {
    SetCallContextData(value);
    return Task.FromResult(0);
  }
}

As you can see, changing from return Task.FromResult(0) in a non-async method to await Task.FromResult(0) in an async method suddenly breaks things. No amount of configuration I could find fixes it.

StackOverflow has related questions and there are forum posts on similar topics, but this is the first time this has really bitten me.

I gather this is why AsyncLocal<T> exists, which means maybe I should look into that a bit deeper.
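
For reference, here’s a minimal sketch of AsyncLocal<T> (the names are made up). A value set before calling into an async method is visible inside it, even after an await; but, just like the logical call context, a value set inside an async method doesn’t flow back up to the caller, so it isn’t a magic fix for the failing test above.

using System;
using System.Threading;
using System.Threading.Tasks;

public static class AsyncLocalSketch
{
  // AsyncLocal<T> is the newer alternative to CallContext.LogicalGetData/LogicalSetData.
  private static readonly AsyncLocal<object> _testData = new AsyncLocal<object>();

  public static void Main()
  {
    var value = new object();

    // A value set here flows *down* into awaited calls...
    _testData.Value = value;
    ReadInsideAsync(value).GetAwaiter().GetResult();

    // ...but a value set *inside* an async method would not flow back up here.
  }

  private static async Task ReadInsideAsync(object expected)
  {
    await Task.FromResult(0);

    // Prints True - the caller's value is still visible after the await.
    Console.WriteLine(ReferenceEquals(expected, _testData.Value));
  }
}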

personal, culture

There has been a lot of push lately for people to learn to code. From Hour of Code to the President of the United States pushing for more coders, the movement towards everyone coding is on.

What gets lost in the hype, drowned out by the fervor of people everywhere jamming keys on keyboards, is that simply being able to code is not software development.

OK, sure, technically speaking when you write code that executes a task you have just developed a piece of software. Also, technically speaking, when you fumble out Chopsticks on the keyboard while walking through Costco you just played the piano. That doesn’t make you a pianist any more than taking an hour to learn to code makes you a software developer.

Here’s where my unpopular opinion comes out. Here’s where I call out the elephant in the room and the politically-correct majority gasp at how I can be so unencouraging to these folks learning to code.

Software development is an art, not a science.

Not everyone can be a software developer in the same way not everyone can be a pianist, a painter, or a sculptor. Anyone can learn to play the piano well; anyone can learn to paint or sculpt reasonably. That doesn’t mean just anyone can make a living doing these things.

It’s been said that if you spend 10,000 hours practicing something you can become great at it. 10,000 hours is basically five years of a full-time job. So, ostensibly, if you spent five years coding full-time, you’d be a developer.

However, we’ve all heard that argument about experience: Have you had 20 years of experience? Or one year of experience 20 times? Does spending 10,000 hours coding make you a developer? Or does it just mean you spent a lot of time coding?

Regardless of your time in any field you have probably run across both of these people - the ones who really have 20 years’ experience and the ones who have been working for 20 years and you wonder how they’ve advanced so far in their careers.

I say that to be a good developer - or a good artist - you need three things: skills, aptitude, and passion.

Skills are the rote abilities you learn when you start with that Hour of Code or first take a class on coding. Pretty much anyone can learn a certain level of skill in nearly any field. It’s a learned ability that takes brainpower and dedication.

Aptitude is a fuzzier quality meaning your natural ability to do something. This is where the “art” part of development starts coming in. You may have learned the skills to code, but do you have any sort of natural ability to perform those skills?

Passion is your enthusiasm - in this case, the strong desire to execute the skills you have and continue to improve on them. This is also part of the “art” of development. You might be really good at jamming out code, but if you don’t like doing it you probably won’t come up with the best solutions to the problems with which you’re faced.

Without all three, you may be able to code but you won’t really be a developer.

A personal anecdote to help make this a bit more concrete: When I went to college, I told my advisors that I really wanted to be a 3D graphics animator/modeler. My dream job was (and still kind of is) working for Industrial Light and Magic on special effects. As a college kid, I didn’t know any better, so when the advisors said I should get a Computer Science degree, I did. Only later did I find out that wouldn’t get me into ILM or Pixar. Why? In their opinion (at the time, in my rejection letters), “you can teach computer science to an artist but you can’t teach art to a computer scientist.”

The first interesting thing I find there is that, at least at the time, the thought there was that art “isn’t teachable.” For the most part, I agree - without the skills, aptitude, and passion for art, you’re not going to be a really great artist.

The more interesting thing I find is the lack of recognition that solving computer science problems, in itself, is an art.

If you’ve dived into code, you’re sure to have seen this, though maybe you didn’t realize it.

  • Have you ever seen a really tough problem solved in an amazingly elegant way that you’d never have thought of yourself? What about the converse - a really tough problem solved in such a brute force manner that you can’t imagine why that’s good?
  • Have you ever picked up someone else’s code and found that it’s entirely unreadable? If you hand someone else your code, can they make heads or tails of it? What about code that was so clearly written you didn’t even need any comments to understand how it worked?
  • Have you ever seen code that’s so deep and unnecessarily complicated that if anything went wrong with it you could never fix it? What about code that’s so clear you could easily fix anything with it if a problem was discovered?

We’ve all seen this stuff. We’ve all written this stuff. I know I have… and still do. Sometimes we even laugh about it.

The important part is that those three factors - skill, aptitude, and passion - work together to improve us as developers.

I don’t laugh at a beginner’s code because their skills aren’t there yet. However, their aptitude and passion may help to motivate them to raise their skill level, which will make them overall better at what they do.

The art of software development isn’t about the quantity of code churned out, it’s about quality. It’s about constant improvement. It’s about change. These are the unquantifiable things that separate the coders from the developers.

Every artist constantly improves. I’m constantly improving, and I hope you are, too. It’s the artistic aspect of software development that drives us to do so, to solve the problems we’re faced with. Don’t just be a software developer, be a software artist. And be the best artist you can be.

net, aspnet

Here’s the situation:

  • I have a .NET Core / ASP.NET Core (DNX) web app. (Currently it’s an RC1 app.)
  • When I start it in Visual Studio, I get IIS Express listening for requests and handing off to DNX.
  • When I start the app from a command line, I want the same experience as VS - IIS Express listening and handing off to DNX.

Now, I know I can just dnx web and get Kestrel to work from a simple self-host perspective. I really want IIS Express here. Searching around, I’m not the only one who does, though everyone’s reasons are different.

Since the change to the IIS hosting model you can’t really do the thing that the ASP.NET Music Store was doing where you copy the AspNet.Loader.dll to your bin folder and have magic happen when you start IIS Express.

When Visual Studio starts up your application, it actually creates an all-new applicationhost.config file with some special entries that allow things to work. I’m going to tell you how to update your per-user IIS Express applicationhost.config file so things can work outside VS just like they do inside.

There are three pieces to this:

  1. Update your applicationhost.config (one time) to add the httpPlatformHandler module so IIS Express can “proxy” to DNX.
  2. Use appcmd.exe to set up your applications in IIS Express.
  3. Set environment variables and start IIS Express using the application names you configured with appcmd.exe.

Let’s walk through each step.

applicationhost.config Updates

Before you can host DNX apps in IIS Express, you need to update your default IIS Express applicationhost.config to know about the httpPlatformHandler module that DNX uses to start up its child process.

You only have to do this one time. Once you have it in place, you’re good to go and can just configure your apps as needed.

To update the applicationhost.config file I used the XML transform mechanism you see in web.config transforms - those web.Debug.config and web.Release.config deals. However, I didn’t want to go through MSBuild for it so I did it in PowerShell.

First, save this file as applicationhost.dnx.xml - this is the set of transforms for applicationhost.config that the PowerShell script will use.

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
    <configSections>
        <sectionGroup name="system.webServer"
                      xdt:Locator="Match(name)">
            <section name="httpPlatform"
                     overrideModeDefault="Allow"
                     xdt:Locator="Match(name)"
                     xdt:Transform="InsertIfMissing" />
        </sectionGroup>
    </configSections>
    <location path=""
              xdt:Locator="Match(path)">
        <system.webServer>
            <modules>
                <add name="httpPlatformHandler"
                     xdt:Locator="Match(name)"
                     xdt:Transform="InsertIfMissing" />
            </modules>
        </system.webServer>
    </location>
    <system.webServer>
        <globalModules>
            <add name="httpPlatformHandler"
                 image="C:\Program Files (x86)\Microsoft Web Tools\HttpPlatformHandler\HttpPlatformHandler.dll"
                 xdt:Locator="Match(name)"
                 xdt:Transform="InsertIfMissing" />
        </globalModules>
    </system.webServer>
</configuration>

I have it structured so you can run it over and over without corrupting the configuration - so if you forget and accidentally run the transform twice, don’t worry, it’s cool.

Here’s the PowerShell script you’ll use to run the transform. Save this as Merge.ps1 in the same folder as applicationhost.dnx.xml:

function script:Merge-XmlConfigurationTransform
{
    [CmdletBinding()]
    Param(
        [Parameter(Mandatory=$True)]
        [ValidateNotNullOrEmpty()]
        [String]
        $SourceFile,

        [Parameter(Mandatory=$True)]
        [ValidateNotNullOrEmpty()]
        [String]
        $TransformFile,

        [Parameter(Mandatory=$True)]
        [ValidateNotNullOrEmpty()]
        [String]
        $OutputFile
    )

    Add-Type -Path "${env:ProgramFiles(x86)}\MSBuild\Microsoft\VisualStudio\v14.0\Web\Microsoft.Web.XmlTransform.dll"

    $transformableDocument = New-Object 'Microsoft.Web.XmlTransform.XmlTransformableDocument'
    $xmlTransformation = New-Object 'Microsoft.Web.XmlTransform.XmlTransformation' -ArgumentList "$TransformFile"

    try
    {
        $transformableDocument.PreserveWhitespace = $false
        $transformableDocument.Load($SourceFile) | Out-Null
        $xmlTransformation.Apply($transformableDocument) | Out-Null
        $transformableDocument.Save($OutputFile) | Out-Null
    }
    finally
    {
        $transformableDocument.Dispose();
        $xmlTransformation.Dispose();
    }
}

$script:ApplicationHostConfig = Join-Path -Path ([System.Environment]::GetFolderPath([System.Environment+SpecialFolder]::MyDocuments)) -ChildPath "IISExpress\config\applicationhost.config"
Merge-XmlConfigurationTransform -SourceFile $script:ApplicationHostConfig -TransformFile (Join-Path -Path $PSScriptRoot -ChildPath applicationhost.dnx.xml) -OutputFile "$($script:ApplicationHostConfig).tmp"
Move-Item -Path "$($script:ApplicationHostConfig).tmp" -Destination $script:ApplicationHostConfig -Force

Run that script and transform your applicationhost.config.
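If it helps, here’s how I kick it off - nothing fancy, and the Select-String check at the end is just my own sanity test, not a required step:

# From the folder where Merge.ps1 and applicationhost.dnx.xml live:
.\Merge.ps1

# Optional: verify the handler made it into the merged config.
$config = Join-Path ([Environment]::GetFolderPath('MyDocuments')) "IISExpress\config\applicationhost.config"
Select-String -Path $config -Pattern "httpPlatformHandler"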

Note that the HttpPlatformHandler isn’t actually a DNX-specific thing. It’s an IIS 8+ module that can be used for any sort of proxying/process management situation. However, it doesn’t come set up by default on IIS Express so this adds it in.

Now you’re set for the next step.

Configure Apps with IIS Express

I know you can run IIS Express with a bunch of command line parameters, and if you want to do that, go for it. However, it’s a lot easier to just set it up as an app within IIS Express so you can launch it by name.

Set up applications pointing to the wwwroot folder.

A simple command to set up an application looks like this:

"C:\Program Files (x86)\IIS Express\appcmd.exe" add app /site.name:"MyApplication" /path:/ /physicalPath:C:\some\folder\src\MyApplication\wwwroot

Whether you use the command line parameters to launch every time or set up your app like this, make sure the path points to the wwwroot folder.
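If you want to double-check what got registered (or remove an app you fat-fingered), appcmd can list and delete, too. These are standard appcmd verbs; the "MyApplication/" identifier is my assumption of how a root app shows up, so check the list output for the exact name:

"C:\Program Files (x86)\IIS Express\appcmd.exe" list site
"C:\Program Files (x86)\IIS Express\appcmd.exe" list app
"C:\Program Files (x86)\IIS Express\appcmd.exe" delete app "MyApplication/"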

Set Environment Variables and Start IIS Express

If you look at your web.config file in wwwroot you’ll see something like this:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <system.webServer>
        <handlers>
            <add name="httpPlatformHandler"
                 path="*"
                 verb="*"
                 modules="httpPlatformHandler"
                 resourceType="Unspecified" />
        </handlers>
        <httpPlatform processPath="%DNX_PATH%"
                      arguments="%DNX_ARGS%"
                      stdoutLogEnabled="false"
                      startupTimeLimit="3600" />
    </system.webServer>
</configuration>

The important bits there are the two variables DNX_PATH and DNX_ARGS.

  • DNX_PATH points to the dnx.exe executable for the runtime you want for your app.
  • DNX_ARGS are the arguments to dnx.exe, as if you were running it on a command line.

A very simple PowerShell script that will launch an IIS Express application looks like this:

$env:DNX_PATH = "$($env:USERPROFILE)\.dnx\runtimes\dnx-clr-win-x86.1.0.0-rc1-update1\bin\dnx.exe"
$env:DNX_ARGS = "-p `"C:\some\folder\src\MyApplication`" web"
Start-Process "${env:ProgramFiles(x86)}\IIS Express\iisexpress.exe" -ArgumentList "/site:MyApplication"

Obviously you’ll want to set the runtime version and paths accordingly, but this is basically the equivalent of running dnx web and having IIS Express use the site settings you configured above as the listening endpoint.
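If you’re doing this for more than one app, you could wrap those three lines in a little helper function. This is just a sketch of the same thing - the function name, parameters, and default runtime are mine, so adjust to taste:

function Start-DnxIisExpress
{
    [CmdletBinding()]
    Param(
        [Parameter(Mandatory=$True)]
        [String]
        $SiteName,

        [Parameter(Mandatory=$True)]
        [String]
        $ProjectFolder,

        [String]
        $Runtime = "dnx-clr-win-x86.1.0.0-rc1-update1"
    )

    # Same environment variables the httpPlatform element in web.config expects.
    $env:DNX_PATH = "$($env:USERPROFILE)\.dnx\runtimes\$Runtime\bin\dnx.exe"
    $env:DNX_ARGS = "-p `"$ProjectFolder`" web"
    Start-Process "${env:ProgramFiles(x86)}\IIS Express\iisexpress.exe" -ArgumentList "/site:$SiteName"
}

# Example:
# Start-DnxIisExpress -SiteName MyApplication -ProjectFolder C:\some\folder\src\MyApplication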

windows, azure, security comments edit

I’ve been experimenting with Azure Active Directory Domain Services (currently in preview) and it’s pretty neat. If you have a lot of VMs you’re working with, it helps quite a bit in credential management.

However, it hasn’t all been “fall-down easy.” There are a couple of gotchas I’ve hit that folks may be interested in.

##Active Directory Becomes DNS Control for the Domain

When you join an Azure VM to your domain, you have to set the network for that VM to use the Azure Active Directory as the DNS server. This results in any DNS entries for the domain - for machines on that network - only being resolved by Active Directory.

This is clearer with an example: Let’s say you own the domain mycoolapp.com and you enable Azure AD Domain Services for mycoolapp.com. You also have…

  • A VM named webserver.
  • A cloud service responding to mycoolapp.cloudapp.net that’s associated with the VM.

You join webserver to the domain. The full domain name for that machine is now webserver.mycoolapp.com. You want to expose that machine to the outside (outside the domain, outside of Azure) to serve up your new web application. It needs to respond to www.mycoolapp.com.

You can add a public DNS entry mapping www.mycoolapp.com to the mycoolapp.cloudapp.net public IP address. You can now get to www.mycoolapp.com correctly from outside your Azure domain. However, you can’t get to it from inside the domain. Why not?

You can’t because Active Directory is serving DNS inside the domain and there’s no VM named www. It doesn’t proxy external DNS records for the domain, so you’re stuck.

There is not currently a way to manage the DNS for your domain within Azure Active Directory.

Workaround: Rename the VM to match the desired external DNS entry. Which is to say, call the VM www instead of webserver. That way you can reach the same machine using the same DNS name both inside and outside the domain.
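One way to see the split for yourself is to compare what resolves inside the domain versus against a public resolver. Resolve-DnsName ships with recent Windows versions; the host name here is just the example one from above:

# From a domain-joined VM - this resolves against the Azure AD DS DNS:
Resolve-DnsName www.mycoolapp.com

# Ask a public resolver directly to see what everyone outside the domain gets:
Resolve-DnsName www.mycoolapp.com -Server 8.8.8.8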

##Unable to Set User Primary Email Address

When you enable Azure AD Domain Services you get the ability to start authenticating against joined VMs using your domain credentials. However, if you try managing users with the standard Active Directory MMC snap-ins, you’ll find some things don’t work.

A key challenge is that you can’t set the primary email address field for a user. It’s totally disabled in the snap-in.

This is really painful if you are trying to manage a cloud-only domain. Domain Services sort of assumes that you’re synchronizing an on-premise AD with the cloud AD and that the workaround would be to change the user’s email address in the on-premise AD. However, if you’re trying to go cloud-only, you’re stuck. There’s no workaround for this.

##Domain Services Only Connects to a Single ASM Virtual Network

When you set up Domain Services, you have to associate it with a single virtual network (the vnet your VMs are on), and it must be an Azure Service Manager style network. If you created a vnet with Azure Resource Manager, you’re kinda stuck. If you have ARM VMs you want to join (which must be on ARM vnets), you’re kinda stuck. If you have more than one virtual network on which you want Domain Services, you’re kinda stuck.

Workaround: Join the “primary vnet” (the one associated with Domain Services) to other vnets using VPN gateways.

There is not a clear “step-by-step” guide for how to do this. You need to sort of piece together the information in these articles:

##Active Directory Network Ports Need to be Opened

Just attaching the Active Directory Domain Services to your vnet and setting it as the DNS server may not be enough. Especially when you get to connecting things through VPN, you need to make sure the right ports are open through the network security group or you won’t be able to join the domain (or you may be able to join but you won’t be able to authenticate).

Here’s the list of ports required by all of Domain Services. Which is not to say you need all of them open, just that you’ll want that for reference.

I found that enabling these ports outbound for the network seemed to cover joining and authenticating against the domain. YMMV. There is no specific guidance (that I’ve found) to explain exactly what’s required.

  • LDAP: Any/389
  • LDAP SSL: TCP/636
  • DNS: Any/53
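If the network security group in question is an ARM-style one, the Azure PowerShell cmdlets can express those outbound rules along these lines. This is only a sketch - the rule names, priorities, NSG name, resource group, and location are all made up, and for a classic (ASM) NSG you’d use the equivalent classic cmdlets or the portal:

$ldapRule = New-AzureRmNetworkSecurityRuleConfig -Name "Allow-LDAP-Out" -Protocol "*" `
    -SourcePortRange "*" -DestinationPortRange "389" -SourceAddressPrefix "*" `
    -DestinationAddressPrefix "*" -Access Allow -Priority 100 -Direction Outbound
$ldapsRule = New-AzureRmNetworkSecurityRuleConfig -Name "Allow-LDAPS-Out" -Protocol Tcp `
    -SourcePortRange "*" -DestinationPortRange "636" -SourceAddressPrefix "*" `
    -DestinationAddressPrefix "*" -Access Allow -Priority 110 -Direction Outbound
$dnsRule = New-AzureRmNetworkSecurityRuleConfig -Name "Allow-DNS-Out" -Protocol "*" `
    -SourcePortRange "*" -DestinationPortRange "53" -SourceAddressPrefix "*" `
    -DestinationAddressPrefix "*" -Access Allow -Priority 120 -Direction Outbound

# Create (or recreate) the NSG with those rules attached.
New-AzureRmNetworkSecurityGroup -Name "aadds-outbound-nsg" -ResourceGroupName "my-resource-group" `
    -Location "West US" -SecurityRules $ldapRule, $ldapsRule, $dnsRule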

personal, gaming, toys, xbox comments edit

This year for Christmas, Jenn and I decided to get a larger “joint gift” for each other since neither of us really needed anything. That gift ended up being an Xbox One (the Halo 5 bundle), the LEGO Dimensions starter pack, and a few expansion packs.

LEGO Dimensions Starter Pack

Never having played one of these collectible toy games before, I wasn’t entirely sure what to expect beyond similar gameplay to other LEGO video games. We like the other LEGO games so it seemed like an easy win.

LEGO Dimensions is super fun. If you like the other LEGO games, you’ll like this one.

The story is, basically, that a master bad guy is gathering up all the other bad guys from the other LEGO worlds (which come from the licensed LEGO properties like Portal, DC Comics, Lord of the Rings, and so on). Your job is to stop him from taking over these “dimensions” (each licensed property is a “dimension”) by visiting the various dimensions and saving people or gathering special artifacts.

With the starter pack you get Batman, Gandalf, and Wildstyle characters with which you can play the game. These characters will allow you to beat the main story.

So why get expansion packs?

  • There are additional dimensions you can visit that you can’t get to without characters from that dimension. For example, while the main game lets you play through a Doctor Who level, you can’t visit the other Doctor Who levels unless you buy the associated expansion pack.
  • As with the other LEGO games, you can’t unlock certain hidden areas or collectibles unless you have special skills. For example, only certain characters have the ability to destroy metal LEGO bricks. With previous LEGO games you could unlock these characters by beating levels; with LEGO Dimensions you unlock characters by buying the expansion packs.

Picking the right packs to get the best bang for your buck is hard. IGN has a good page outlining the various character abilities, which pack contains each, and some recommendations on which ones will get you the most if you’re starting fresh.

The packs Jenn and I have (after getting some for Christmas and grabbing a couple of extras) are:

  • Portal level pack
  • Back to the Future level pack
  • Emmet fun pack
  • Zane fun pack
  • Gollum fun pack
  • Eris fun pack
  • Wizard of Oz Wicked Witch fun pack
  • Doctor Who level pack
  • Unikitty fun pack

Admittedly, this is a heck of an investment in a game. We’re suckers. We know.

This particular combination of packs unlocks just about everything. There are still things we can’t get to - levels we can’t enter, a few hidden things we can’t reach - but this is a good 90%. Most of what remains locked requires an ability that only one specific character has. For example, Aquaman (for whatever reason) has one or two unique abilities we’ve run across the need for. Unikitty is also a character with unique abilities (which is why we ended up getting that pack). I’d encourage you to keep consulting the character ability matrix as you purchase packs to determine which ones will best help you.

I have to say… There’s a huge satisfaction in flying the TARDIS around or getting the Twelfth Doctor driving around in the DeLorean. It may make that $15 or whatever worth it.

If you’re a LEGO fan anyway, the packs actually include minifigs and models that are detachable - you can play with them with other standard LEGO sets once you get tired of the video game. It’s a nice dual-purpose that other collectible games don’t provide.

Finally, it’s something fun Jenn and I can play together to do something more interactive than just watch TV. I don’t mind investing in that.

In any case, if you’re looking at one of the collectible toy games, I’d recommend LEGO Dimensions. We’re having a blast with it.

personal comments edit

It’s been a busy year, and in particular a pretty crazy last-three-months, so I’m rounding out my 2015 by finally using up my paid time off at work and effectively taking December off.

What that means is I probably won’t be seen on StackOverflow or handling Autofac issues or working on the Autofac ASP.NET 5 conversion.

I love coding, but I also have a couple of challenges if I do that on my time off:

  • I stress out. I’m not sure how other people work, but when I see questions and issues come in I feel like there’s a need I’m not addressing or a problem I need to solve, somehow, immediately right now. Even if that just serves as a backlog of things to prioritize, it’s one more thing on the list of things I’m not checking off. I want to help people and I want to provide great stuff with Autofac and the other projects I work on, but there is a non-zero amount of stress involved with that. It can pretty quickly turn from “good, motivating stress” to “bad, overwhelming stress.” It’s something I work on from a personal perspective, but taking a break from that helps me regain some focus.
  • I lose time. There are so many things I want to do that I don’t have time for. I like sewing and making physical things - something I don’t really get a chance to do in the software world. If I sit down and start coding stuff, pretty soon the day is gone and I may have made some interesting progress on a code-related project, but I just lost a day I could have addressed some of the other things I want to do. Since I code for a living (and am lucky enough to be able to get Autofac time in as part of work), I try to avoid doing much coding on my time off unless it’s helping me contribute to my other hobbies. (For example, I just got an embroidery machine - I may code to create custom embroidery patterns.)

I don’t really take vacation time during the year so I often end up in a “use it or lose it” situation come December, which works out well because there are a ton of holidays to work around anyway. Why not de-stress, unwind, and take the whole month off?

I may even get some time to outline some of the blog entries I’ve been meaning to post. I’ve been working on some cool stuff from Azure to Roslyn code analyzers, not to mention the challenges we’ve run into with Autofac/ASP.NET 5. I’ve just been slammed enough that I haven’t been able to get those out. We’ll see. I should at least start keeping a list.

halloween, costumes comments edit

It was raining again this year and that definitely took down the number of visitors. Again this year we also didn’t put out our “Halloween projector” that puts a festive image on our garage. In general, it was pretty slow all around. I took Phoenix out this year while Jenn answered the door so I got to see what was out there firsthand. Really hardly anyone out there this year.

##Trick-or-Treaters

2015: 85 trick-or-treaters.

Average Trick-or-Treaters by Time Block

The table’s also starting to get pretty wide; might have to switch it so time block goes across the top and year goes down.

Cumulative data:

| Year | 6:00p - 6:30p | 6:30p - 7:00p | 7:00p - 7:30p | 7:30p - 8:00p | 8:00p - 8:30p | Total |
|------|---------------|---------------|---------------|---------------|---------------|-------|
| 2006 | 52 | 59 | 35 | 16 | 0 | 162 |
| 2007 | 5 | 45 | 39 | 25 | 21 | 139 |
| 2008 | 14 | 71 | 82 | 45 | 25 | 237 |
| 2009 | 17 | 51 | 72 | 82 | 21 | 243 |
| 2010 | 19 | 77 | 76 | 48 | 39 | 259 |
| 2011 | 31 | 80 | 53 | 25 | 0 | 189 |
| 2013 | 28 | 72 | 113 | 80 | 5 | 298 |
| 2014 | 19 | 54 | 51 | 42 | 10 | 176 |
| 2015 | 13 | 14 | 30 | 28 | 0 | 85 |

##Costumes

My costume this year was Robin Hood. Jenn was Merida from Brave so we were both archers. Phoenix had two costumes - for trick-or-treating at Jenn’s work she was a bride with a little white dress and veil; for going out in the neighborhood she was a ninja.

The finished Robin Hood costume

Costume with the cloak closed

I posted some in-progress pictures of my costume on social media, but as part of the statistical breakdown of Halloween this year I thought it’d be interesting to dive into exactly what went into the making beyond the time and effort - the actual dollars put in.

On my costume, I made the shirt, the doublet, the pants, and the cape. I bought the boots, the tights, and the bow.

###Accessories and Props

Let’s start with the pieces I bought:

Total: $99.10

###The Shirt

The shirt is made of a gauzy fabric that was pretty hard to work with. The pattern was also not super helpful because you’d see “a step” in the pattern consisting of several actions.

Confusing shirt pattern

I did learn how to use an “even foot” (sometimes called a “walking foot”) on our sewing machine, which was a new thing for me.

Even foot on the sewing machine

  • Shirt, doublet, and pants pattern - $10.17
  • Gauze fabric - $6.59
  • Thread - $3.29
  • Buttons - $5.99
  • Interfacing - $0.62

Total: $26.66

###The Pants

I don’t have any in-progress shots of the pants being made, but they were pretty simple pants. I will say I thought I should make the largest size due to my height… but then the waist turned out pretty big so I had to do some adjustments to make them fit. Even after adjusting they were pretty big. I should probably have done more but ran out of time.

  • Shirt, doublet, and pants pattern - (included in shirt cost)
  • Black gabardine fabric - $23.73
  • Thread - $4.00
  • Buttons - $1.90
  • Eyelets - $2.39
  • Ribbon - $1.49
  • Interfacing - (I had some already for this)

Total: $33.51

###The Doublet

The doublet was interesting to make. It had a lot of pieces, but they came together really well and I learned a lot while doing it. Did you know those little “flaps” on the bottom are called “peplum?” I didn’t.

I hadn’t really done much with adding trim, so this was a learning experience. For example, this trim had a sort of “direction” or “grain” to it - if you sewed with the “grain,” it went on really smoothly. If you went against the “grain,” the trim would get all caught up on the sewing machine foot. I also found that sewing trim to the edge of a seam is really hard on thick fabric so I ended up adding a little margin between the seam and the trim.

Putting trim on the body of the doublet

These are the peplums that go around the bottom of the doublet. You can see the trim pinned to the one on the right.

Sewing peplums

Once all the peplums were done, I pinned and machine basted them in place. Getting them evenly spaced was a challenge, but it turned out well.

Pinning peplums

After the machine basting, I ran the whole thing through the serger which gave them a strong seam and trimmed off the excess. This was the first project I’d done with a serger and it’s definitely a time saver. It also makes finished seams look really professional.

Serging peplums

To cover the seam where the peplums are attached, the lining in the doublet gets hand sewn over the top. There was a lot of hand sewing in this project, which was the largest time sink.

Slip-stitching the doublet lining

Here’s the finished doublet.

The finished doublet

  • Shirt, doublet, and pants pattern - (included in shirt cost)
  • Quilted fabric (exterior) - $15.73
  • Brown broadcloth fabric (lining) - $5.99
  • Thread - $8.29
  • Eyelets - $8.28
  • Trim - $23.95
  • Leather lacing - $2.50
  • Interfacing - $6.11

Total: $70.85

###The Cape

The cape was the least complex of the things to make but took the most time due to the size. Just laying out and cutting the pattern pieces took a couple of evenings.

As you can see, I had to lay them out in our hallway.

Cutting the exterior cape pieces

I learned something while cutting the outside of the cape: the pattern was a little confusing in that the diagrams showing how the pieces should be laid out were inconsistent with the notation they described. This resulted in my cutting one of the pattern pieces backwards, and in Jenn being kind enough to go back to the fabric store all the way across town to get the last bit of fabric from the bolt. I was very lucky there was enough to re-cut the piece the correct way.

I used binder clips on the edges in an attempt to stop the two fabric layers from slipping around. It was mildly successful.

Cutting the cape lining

I found with the serger I had to keep close track of the tension settings to make sure the seams were sewn correctly. Depending on the thread and weight of the fabric being sewn, I had to tweak some things.

To help me remember settings, I took photos with my phone of the thread, the fabric being sewn, and the dials on the serger so I’d know exactly what was set.

Here are the settings for sewing together two layers of cape lining.

Cape lining serger settings

And the settings for attaching the lining to the cape exterior.

Serger settings for attaching lining to cape exterior

I took a shot of my whole work area while working on the cape. It gets pretty messy, especially toward the end of a project. I know where everything is, though.

You can also see I’ve got things set up so I can watch TV while I work. I got through a couple of different TV seasons on Netflix during this project.

My messy work area

One of the big learning things for me with this cape was that with a thicker fabric it’s hard to get the seams to lay flat. I ironed the junk out of that thing and all the edge seams were still rounded and puffy. I had to edgestitch the seams to make sure they laid flat.

Edge stitching the cape hem

  • Green suedecloth (exterior) - $55.82
  • Gold satin (lining) - $40.46
  • Dark green taffeta (hood lining) - $5.99
  • Interfacing - (I had some already for this)
  • Thread - $11.51
  • Silver “conchos” (the metal insignias on the neck) - $13.98
  • Scotchgard (for waterproofing) - $5.99

Total: $133.75

###Total

  • Accessories and props - $99.10
  • Shirt - $26.66
  • Pants - $33.51
  • Doublet - $70.85
  • Cape - $133.75

Total: $363.87

That’s probably reasonably accurate. I know I had some coupons where I saved some money on fabric (you definitely need to be watching for coupons!) so the costs on those may be lower, but I also know I had to buy some incidentals like more sewing machine needles after I broke a couple, so it probably roughly balances out.

I get a lot of folks asking why I don’t just rent a costume. Obviously from a money and time perspective it’d be a lot cheaper to do that.

The thing is… I really like making the costume. I’m a software engineer, so generally when I “make something,” it’s entirely intangible - it’s electronic bits stored somewhere that make other bits do things. I don’t come out with something I can hold in my hands and say I did this. When I make a shirt or a costume or whatever, there’s something to be said for having a physical object and being able to point to it and be proud of the craftsmanship that went into it. It’s a satisfying feeling.

home comments edit

At home, especially after a long day, I’ve noticed my phone may be low on power even though I’d like to continue using it. All of our chargers are in rooms other than the living room where we spend most of our time and I didn’t want to move one into the living room because I didn’t want cords all over the place or a bajillion different things to plug in.

I finally figured out the answer.

##Materials

Charger, cables, and adhesive

##Assembly

Use one of the Command strips to affix the USB charger under the lip on the back of your end table. I went with Command strips because they’re reasonably strong, yet they can be removed easily without ruining the finish on your table.

Plug one or more of the one-foot cables into the charger.

If you get the Sabrent USB cables I mentioned earlier, they have a little bit of Velcro on them you can use to your benefit. Put a little Velcro under a nearby area of the table and push the Velcro tie on the USB cable to the end. You can then attach the end of the cable in easy reach from your couch using the Velcro.

Charger attached to the back of the table

##Usage

The nice thing about this is it’s entirely unobtrusive. Charge your phone on the end table while you’re sitting and watching TV, but when you’re done you can drop the cable back behind the table (it’s only a foot long so it won’t drag on the ground or look messy); or if you have the Velcro you can affix the cable under the lip of the table in easy reach for the next usage.

Charging a phone

net, aspnet, build, autofac comments edit

We recently released Autofac 4.0.0-beta8-157 to NuGet to coincide with the DNX beta 8 release. As part of that update, we re-added the classic PCL target .NETPortable,Version=v4.5,Profile=Profile259 (which is portable-net45+dnxcore50+win+wpa81+wp80+MonoAndroid10+Xamarin.iOS10+MonoTouch10) because older VS versions and some project types were having trouble finding a compatible version of Autofac 4.0.0 - they didn’t resolve the dotnet target framework as a match.

If you’re not up on the dotnet target framework moniker, Oren Novotny has some great articles that help a lot:

I’m now working on a beta 8 compatible version of Autofac.Configuration. For beta 7 we’d targeted dnx451, dotnet, and net45. I figured we could just update to start using Autofac 4.0.0-beta8-157, rebuild, and call it good.

Instead, I started getting a lot of build errors when targeting the dotnet framework moniker.

Building Autofac.Configuration for .NETPlatform,Version=v5.0
  Using Project dependency Autofac.Configuration 4.0.0-beta8-1
    Source: E:\dev\opensource\Autofac\Autofac.Configuration\src\Autofac.Configuration\project.json

  Using Package dependency Autofac 4.0.0-beta8-157
    Source: C:\Users\tillig\.dnx\packages\Autofac\4.0.0-beta8-157
    File: lib\dotnet\Autofac.dll

  Using Package dependency Microsoft.Framework.Configuration 1.0.0-beta8
    Source: C:\Users\tillig\.dnx\packages\Microsoft.Framework.Configuration\1.0.0-beta8
    File: lib\dotnet\Microsoft.Framework.Configuration.dll

  Using Package dependency Microsoft.Framework.Configuration.Abstractions 1.0.0-beta8
    Source: C:\Users\tillig\.dnx\packages\Microsoft.Framework.Configuration.Abstractions\1.0.0-beta8
    File: lib\dotnet\Microsoft.Framework.Configuration.Abstractions.dll

  Using Package dependency System.Collections 4.0.11-beta-23409
    Source: C:\Users\tillig\.dnx\packages\System.Collections\4.0.11-beta-23409
    File: ref\dotnet\System.Collections.dll

  (...and some more package dependencies that got resolved, then...)

  Unable to resolve dependency fx/System.Collections

  Unable to resolve dependency fx/System.ComponentModel

  Unable to resolve dependency fx/System.Core

  (...and a lot more fx/* items unresolvable.)

This was, at best, confusing. I mean, in the same target framework, I see these two things together:

  Using Package dependency System.Collections 4.0.11-beta-23409
    Source: C:\Users\tillig\.dnx\packages\System.Collections\4.0.11-beta-23409
    File: ref\dotnet\System.Collections.dll

  Unable to resolve dependency fx/System.Collections

So it found System.Collections, but it didn’t find System.Collections. Whaaaaaaa?!

After a lot of searching (with little success) I found David Fowler’s indispensable article on troubleshooting dependency issues in ASP.NET 5. This led me to the dnu list --details command, where I saw this:

[Target framework .NETPlatform,Version=v5.0 (dotnet)]

Framework references:
  fx/System.Collections  - Unresolved
    by Package: Autofac 4.0.0-beta8-157

  fx/System.ComponentModel  - Unresolved
    by Package: Autofac 4.0.0-beta8-157

  (...and a bunch more of these...)


Package references:
* Autofac 4.0.0-beta8-157
    by Project: Autofac.Configuration 4.0.0-beta8-1

* Microsoft.Framework.Configuration 1.0.0-beta8
    by Project: Autofac.Configuration 4.0.0-beta8-1

  Microsoft.Framework.Configuration.Abstractions 1.0.0-beta8
    by Package: Microsoft.Framework.Configuration 1.0.0-beta8

  System.Collections 4.0.11-beta-23409
    by Package: Autofac 4.0.0-beta8-157...
    by Project: Autofac.Configuration 4.0.0-beta8-1

  (...and so on.)

Hold up - Autofac 4.0.0-beta8-157 needs both the framework assembly and the dependency package for System.Collections?

Looking in the generated .nuspec file for the updated core Autofac, I see:

<?xml version="1.0"?>
<package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">
  <metadata>
    <!-- ... -->
    <dependencies>
      <group targetFramework="DNX4.5.1" />
      <group targetFramework=".NETPlatform5.0">
        <dependency id="System.Collections" version="4.0.11-beta-23409" />
        <dependency id="System.Collections.Concurrent" version="4.0.11-beta-23409" />
        <!-- ... -->
      </group>
      <group targetFramework=".NETFramework4.5" />
      <group targetFramework=".NETCore4.5" />
      <group targetFramework=".NETPortable4.5-Profile259" />
    </dependencies>
    <frameworkAssemblies>
      <!-- ... -->
      <frameworkAssembly assemblyName="System.Collections" targetFramework=".NETPortable4.5-Profile259" />
      <frameworkAssembly assemblyName="System.ComponentModel" targetFramework=".NETPortable4.5-Profile259" />
      <frameworkAssembly assemblyName="System.Core" targetFramework=".NETPortable4.5-Profile259" />
      <frameworkAssembly assemblyName="System.Diagnostics.Contracts" targetFramework=".NETPortable4.5-Profile259" />
      <frameworkAssembly assemblyName="System.Diagnostics.Debug" targetFramework=".NETPortable4.5-Profile259" />
      <frameworkAssembly assemblyName="System.Diagnostics.Tools" targetFramework=".NETPortable4.5-Profile259" />
      <!-- ... -->
    </frameworkAssemblies>
  </metadata>
</package>

The list of failed fx/* dependencies is exactly the same as the list of frameworkAssembly references that target .NETPortable4.5-Profile259 in the .nuspec.
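You can cross-check that from the local package cache if you’re curious. This is a throwaway snippet - it assumes the package extracted to the path shown in the build output above and that the .nuspec sits in the package root, which is where NuGet normally drops it:

# Pull the frameworkAssembly entries for the old PCL profile out of the cached .nuspec.
[xml]$nuspec = Get-Content "$($env:USERPROFILE)\.dnx\packages\Autofac\4.0.0-beta8-157\Autofac.nuspec"
$nuspec.package.metadata.frameworkAssemblies.frameworkAssembly |
    Where-Object { $_.targetFramework -eq ".NETPortable4.5-Profile259" } |
    Select-Object assemblyName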

By removing the dotnet target framework moniker from Autofac.Configuration and compiling for specific targets, everything resolves correctly.

What I originally thought was that dotnet indicated, basically, “I support what my dependencies support,” which I took to mean, “we’ll figure out the lowest common denominator of all the dependencies and that’s the set of stuff this supports.”

What dotnet appears to actually mean is, “I support the superset of everything my dependencies support.”

The reason I take that away is that the Microsoft.Framework.Configuration 1.0.0-beta8 package targets net45, dnx451, dnxcore, and dotnet - but it doesn’t specifically support .NETPortable,Version=v4.5,Profile=Profile259. I figured Autofac.Configuration, targeting dotnet, would resolve to support the common frameworks that both core Autofac and Microsoft.Framework.Configuration support… which would mean none of the <frameworkAssembly /> references targeting .NETPortable4.5-Profile259 would need to be resolved to build Autofac.Configuration.

Since they do, apparently, need to be resolved, I have to believe dotnet implies superset rather than subset.

This appears to mostly just be a gotcha if you have a dependency that targets one of the older PCL framework profiles. If everything down the stack just targets dotnet it seems to magically work.

If you’d like to try this and see it in action, check out the Autofac.Configuration repo at 14c10b5bf6 and run the build.ps1 build script.
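For reference, the repro is roughly this (assuming the repository lives in the usual spot under the Autofac organization on GitHub):

git clone https://github.com/autofac/Autofac.Configuration.git
cd Autofac.Configuration
git checkout 14c10b5bf6
.\build.ps1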

autofac, net comments edit

As part of DNX RC1, the Microsoft.Framework.* packages are getting renamed to Microsoft.Extensions.*.

The Autofac.Framework.DependencyInjection package was named to follow along with the pattern established by those libraries: Microsoft.Framework.DependencyInjection -> Autofac.Framework.DependencyInjection.

With the RC1 rename of the Microsoft packages, we’ll be updating the name of the Autofac package to maintain consistency: Autofac.Extensions.DependencyInjection. This will happen for Autofac as part of beta 8.

We’ll be doing the rename as part of the beta 8 drop since beta 8 appears to have been pushed out by a week and we’d like to get a jump on things. For beta 8 we’ll still refer to the old Microsoft dependency names to maintain compatibility but you’ll have a new Autofac dependency. Then when RC1 hits, you won’t have to change the Autofac dependency because it’ll already be in line.

You can track the rename on Autofac issue #685.