personal

Well, I made it to 40.

That being a somewhat significant life milestone, I figured I’d stop to reflect a bit.

To celebrate the occasion, I rented out a local theater (a smaller 18-person place) and we had a private screening of Star Trek Beyond along with a nice dinner while we watched. It was a great time with family and friends.

I think 40 is not a bad place to be. I feel like I’m now old enough to “know better” but not so old that I can’t still take risks. As it’s been approaching, I haven’t looked at it with fear or any real sense of “mortality,” as it were, just… “Hey, here comes a sort of marker in the road of life. I wonder what it’ll mean?”

I feel like I’ve lost at least a little bit of that rough edge I had when I was younger, and that’s good. Looking back at the blog history here, the tone of posts has changed to be slightly less aggressive, though I can’t honestly say a bit of that isn’t still inside me. I still don’t suffer fools gladly and I still get irritated with people who don’t respect me or my time. I’m not a patient guy, and I have about ten minutes’ worth of attention span for poorly run meetings.

I’m working on it.

I’ve been in professional software development for about 20 years now. The majority of that work has been web-related; it’s sort of weird to think that field has been around that long. I remember doing a project in college writing a web CGI library in Standard ML, and that was pretty new stuff.

As of this year, 15 of my career years have been spent at Fiserv. Given my first job was when I was 14, that’s actually most of my working life. I was originally hired at Corillian, which was subsequently acquired by CheckFree, which, in turn, was acquired by Fiserv. With all the changes of management, process, tools, and so on, it feels like having worked at different companies over the years even though the overall job hasn’t changed. That’s actually one of the reasons I haven’t really felt the need to go elsewhere - I’ve had the opportunity to see product development from a long-term standpoint, experience different group dynamics, try different development processes… and all without the instability of being an independent contractor. It’s been good.

I originally went to college wanting to be a computer animator / 3D modeler. Due to a series of events involving getting some bad counseling and misleading information, which I am still very bitter about, I ended up in computer science. Turns out art houses believe you can teach an artist computer science but computer scientists will never be good at art. Even if you have a portfolio and a demo reel. So that was the end of that.

That’s actually why I started in web stuff - I was in love with the UI aspect of things. Over time I’ve found the art in solving computer science problems and have lost my interest in pushing pixels (now with CSS).

I still have a passion for art, I still do crafts and things at home. I really like sewing, which is weird because when I was a kid I would feel dizzy in fabric stores so I hated it. (My mom used to sew when I was a kid.) I actually called them “dizzy places.” I’m curious if maybe I had a chemical sensitivity to the sizing (or whatever) that ships on the fabric. Maybe I was just super bored. In any case, I really like it now. I like drawing and coloring. I like building stuff. I’m probably not the best artist, but I have a good time with it. I do wish I had more time for it.

I waited until later in life to start a family. I’ve only been married for coming up on 10 years now, though I’ve been with my wife for 16 years. I only have one kid and she’s five so she came along slightly later, too. That’s probably a good thing since she’s quite the handful and I’m not sure I’d have had the patience required to handle her in earlier times. I still kinda don’t. I definitely don’t have the energy I would have had 15 years ago or whatever.

I only have one grandparent left. My wife has none. My daughter really won’t know great-grandparents. I’m not sure how I feel about that. I was young when I met my great-grandparents and I honestly don’t remember much about them. I’m guessing it’ll be the same for her.

I love my parents and have a good relationship with them. They’re around for my daughter and love her to pieces. That makes me happy.

I have two sisters, both of whom I love, but only one of whom still talks to the family. The one that still talks to us has a great life and family of her own, which means we don’t cross paths often. I’m glad she’s happy and has her own thing going, but I realize that our lives are so different now that if she weren’t actually related to me we probably wouldn’t really keep in touch. A lot of the commonality we shared as kids has disappeared over time.

Friends have come and gone over the years. I don’t have a lot of friends, but I’m glad to say the ones I have are great. I’m still friends with a few people I knew from school, but my school years weren’t the best for me so I don’t really keep in touch with many of them. Some folks I swear I’d be best friends with for life have drifted away. Some folks I never would have guessed have turned into the best friends I could have. I guess that’s how it goes as people change.

I haven’t made my first billion, or my first million, but I’m comfortable and don’t feel unsuccessful. I wish we had a bigger house, but there’s also a lot of space we don’t use, so maybe it’s just that I want a different layout. We don’t live paycheck to paycheck, so I can’t say I’m not fortunate. (Don’t get me wrong, though, I’m not saying I’m not interested in money. I don’t work for free.)

Anyway, here’s to at least another 40 years. The first 40 have been great; I’m curious what the next batch has in store.

aspnet, autofac

As we all saw, ASP.NET Core and .NET Core went RTM this past Monday. Congratulations to those teams - it’s been a long time coming and it’s some pretty amazing stuff.

Every time an RC (or, now, RTM) comes out, questions start flooding in on Autofac, sometimes literally within minutes of the go-live, asking when Autofac will be coming out with an update. While we have an issue you can track if you want to watch the progress, I figured I’d give a status update on where we are and where we’re going with respect to RTM. I’ll also explain why we are where we are.

Current Status

We have an RC build of core Autofac out on NuGet that is compatible with .NET Core RTM. That includes a version of Autofac.Extensions.DependencyInjection, the Autofac implementation against Microsoft.Extensions.DependencyInjection. We’ll be calling this version 4.0.0. We are working hard to get a “stable” version released, but we’ve hit a few snags at the last minute, which I’ll go into shortly.

About half of the non-portable projects have been updated to be compatible with Autofac 4.0.0. For the most part this was just an update to the NuGet packages, but with Autofac 4.0.0 we also changed to stop using the old code access security model (remember [AllowPartiallyTrustedCallers]?) and some of these projects needed to be updated accordingly.

We are working hard to get the other half of the integration projects updated. Portable projects are being converted to use the new project.json structure and target netstandard framework monikers. Non-portable projects are sticking with .csproj but are being verified for compatibility with Autofac 4.0.0, getting updated as needed.

Why It’s Taking So Long

Oh, where do I begin.

Let me preface this by saying it’s going to sound like a rant. And in some ways it is. I do love what the .NET Core and ASP.NET Core teams have out there now, but it’s been a bumpy ride to get here and many of the bumps are what caused the delay.

First, let’s set the scene: There are really only two of us actively working on Autofac and the various officially supported integration libraries - it’s me and Alex Meyer-Gleaves. There are 23 integration projects we support alongside core Autofac. There’s a repository of examples as well as documentation. And, of course, there are questions that come in on StackOverflow, issues that come in that need responses, and requests on the discussion forum. We support this on the side since we both have our own full-time jobs and families.

I’m not complaining, truly. I raise all that because it’s not immediately evident. When you think about what makes Autofac tick (or AutoMapper, or Xunit, or any of your other favorite OSS projects that aren’t specifically backed/owned by a company like Microsoft or a consultant making money from support), it’s a very small number of people with quite a lot of work to get done in pretty much no time. Core Autofac is important, but it’s the tip of a very large iceberg.

We are sooooo lucky to have community help where we get it. We have some amazing folks who chime in on Autofac questions on StackOverflow. We’ve gotten some pretty awesome pull requests to add some new features lately. Where we get help, it’s super. But, admittedly, IoC containers and how they work internally are tricky beasts. There aren’t a lot of simple up-for-grabs sort of fixes that we have in the core product. It definitely reduces the number of things that we can get help with from folks who want to drop in and get something done quickly. (The integration projects are much easier to help with than core Autofac.)

Now, keep that in the back of your mind. We’ll tie into that shortly.

You know how the tooling for .NET Core changed like 1,000 times? You know how there was pretty much no documentation for most of that? And there were all sorts of weird things, like the only examples available coming from the .NET teams themselves, using internal tools that folks didn’t have great access to. Every new beta or RC release was a nightmare. Mention that and you get comments like, “That’s life in the big city,” which is surely one way to look at it but is definitely dismissive of the pain involved.

Every release, we’d need to reverse-engineer the way the .NET teams had changed their builds, figure out how the tools were working, figure out how to address the breaking changes, and so on. Sometimes (rarely, but it happened) someone would have their project ported over first and we could look at how they did it. We definitely weren’t the only folks to feel that, I know.

NuGet lagging behind was painful because just updating core Autofac didn’t necessarily mean we could update the integration libraries. Especially with the target framework moniker shake-up, you’d find that without the tooling in place to support the whole chain, you could upgrade one library but not be able to take the upgrade in a downstream dependency because the tooling would consider it incompatible.

Anyway, with just the two of us (and community help where possible) and the tooling/library change challenges, there was a lot of wheel-spinning. There were weeks where all we did was try to figure out the right magic combination of things in project.json to get things compiling. Did it work? I dunno, we can’t test because we don’t have a unit test framework compatible with this version of .NET Core. Can’t take it in a downstream integration library to test things, either, due to tooling challenges.

Lots of time spent just keeping up.

Finally, we’ve been bitten by the “conforming container” introduced for ASP.NET Core. Microsoft.Extensions.DependencyInjection is an abstraction around DI that was introduced to support ASP.NET Core. It’s a “conforming container” because it means anything that backs the IServiceProvider interface they use needs to support certain features and react in the same way. In some cases that’s fine. For the most part, simple stuff like GetService<T>() is pretty easy to implement regardless of the backing container.

The stuff you can’t do in a conforming container is use the container-specific features. For example, Autofac lets you pass parameters during a Resolve<T>() call. You can’t do that without actually referencing the Autofac lifetime scope - the IServiceProvider interface serves as a “lowest common denominator” for containers.
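
To make that concrete, here’s a minimal sketch of the difference; the IReportGenerator service and its culture parameter are made up for illustration:

using System;
using Autofac;

public interface IReportGenerator
{
  string Culture { get; }
}

public class ReportGenerator : IReportGenerator
{
  public ReportGenerator(string culture)
  {
    this.Culture = culture;
  }

  public string Culture { get; private set; }
}

public class Program
{
  public static void Main()
  {
    var builder = new ContainerBuilder();
    builder.RegisterType<ReportGenerator>().As<IReportGenerator>();
    var container = builder.Build();

    // Container-specific feature: pass a constructor parameter at resolve time.
    var report = container.Resolve<IReportGenerator>(new NamedParameter("culture", "en-US"));
    Console.WriteLine(report.Culture);

    // Through the conforming container you only get the lowest common
    // denominator - IServiceProvider.GetService(Type) has no way to
    // accept parameters.
  }
}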

All along the way, we’ve been testing the junk out of Autofac to make sure it works correctly with Microsoft.Extensions.DependencyInjection. It’s been just fine so far. However, at the last minute (20 days ago now) we got word that not only did we need to implement the service provider interface as specified, but we also needed to return IEnumerable<T> collections in the order the components were registered.

We don’t currently do that. Given IEnumerable<T> has no specification around ordering, and all previous framework features requiring ordering (controller action filters, etc.) used an Order property or something like that, it’s never been an issue. Interfaces using IEnumerable<T> generally don’t assume order (or, at least, shouldn’t). This is a new requirement for the conforming container, and it’s amazingly non-trivial to implement.

It’s hard to implement because Autofac tracks registrations in a more complex way than just adding them to a list. If you add a standard registration, it does get added to a list. But if you add .PreserveExistingDefaults() because you want to register something and keep the existing default service in place if one’s already registered - that goes in at the end of the list instead of at the head. We also support very dynamic “registration sources” - a way to add registrations to the container on the fly without making explicit registrations. That’s how we handle things like Lazy<T> automatically working.

(That’s sort of a nutshell version. It gets more complex as you think about child/nested lifetime scopes.)
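
Here’s a minimal sketch of that wrinkle; the ILogger implementations are hypothetical:

using System;
using System.Collections.Generic;
using Autofac;

public interface ILogger { }
public class FileLogger : ILogger { }
public class ConsoleLogger : ILogger { }

public class Program
{
  public static void Main()
  {
    var builder = new ContainerBuilder();
    builder.RegisterType<FileLogger>().As<ILogger>();

    // Registered second, but PreserveExistingDefaults keeps FileLogger
    // as the default service for ILogger.
    builder.RegisterType<ConsoleLogger>().As<ILogger>().PreserveExistingDefaults();

    var container = builder.Build();

    // Prints "FileLogger" even though ConsoleLogger was registered later...
    Console.WriteLine(container.Resolve<ILogger>().GetType().Name);

    // ...so what order should this enumerable come back in to satisfy the
    // conforming container's registration-order requirement?
    var all = container.Resolve<IEnumerable<ILogger>>();
  }
}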

Point being, this isn’t as simple as just returning the list of stuff that got registered. We have to update Autofac to start keeping track of registration order yet still allow the existing functionality to behave correctly. And what do you do with dynamic registration sources? Where do those show up in the list?

The answers are not so straightforward.

We are currently working hard on solving that ordering problem. Actually, right now, Alex is working hard on that while I try and get the rest of the 23 integration projects converted, update the documentation, answer StackOverflow/issue/forum questions, and so on. Thank goodness for that guy because I couldn’t do this by myself.

If you would like to follow along on the path to a stable release, check out these issues:

While it may not be obvious, adding lots of duplicate issues asking for status or “me too” comments on issues in the repo doesn’t help. In some cases it’s frustrating (it’s a “no pressure, but hurry up” vote) and may slow things down as we spend what little time we have responding to the dupes and the questions rather than actually getting things done. I love the enthusiasm and interest, but please help us out by not adding duplicates. GitHub recently added “reactions” to issues (that little smiley face at the top-right of an issue comment) - jump in with a thumbs-up or smile or something; or subscribe to an issue if you’re interested in following along (there’s a subscribe button along the right up near the top of the issue, under the tags).

Thanks (So Far)

Finally, I have some thanks to hand out. Like I said, we couldn’t get this done without support from the community. I know I’m probably leaving someone out, and if so, I’m sorry - please know I didn’t intentionally do it.

  • The ASP.NET Core team - These guys took the time to talk directly to Alex and me about how things were progressing and answered several questions.
  • Oren Novotny - When the .NET target framework moniker problem was getting us down, he helped clear things up.
  • Cyril Durand and Viktor Nemes - These guys are rockstars on StackOverflow when it comes to Autofac questions.
  • Caio Proiete, Taylor Southwick, Kieren Johnstone, Geert Van Laethem, Cosmin Lazar, Shea Strickland, Roger Kratz - Pull requests of any size are awesome. These folks submitted to the core Autofac project within the last year. This is where I’m sure I missed someone because it was a manually pulled list and didn’t include the integration libraries. If you helped us out, know I’m thanking you right now.

personal

As of yesterday, June 27, 2016, I’ve worked for 15 years at Fiserv.

My 15-year certificate

Given I got my first “official” job when I was 14 and I turn 40 this year, that’s over half of my professional working life that I’ve been here.

I started in the marketing department back when the company was Corillian. I got hired to help work on the corporate web site, www.corillian.com (which now redirects to a Fiserv page on internet banking).

I think it was a year or two into that when some restructuring came along and the web site transferred to the IT department. I transferred with it and became the only IT developer doing internal tools and working on automating things.

I remember working on rolling out the original SharePoint 2003 along with Windows SharePoint Services in an overall Office 2003 effort. We had some pretty fancy “web parts” in VBScript to do custom document indexing and reporting. I vaguely recall updating several of those parts to be .NET 1.1 assemblies.

It was in 2004 when a need arose for a developer to work on some proof-of-concept and demo web sites that our sales folks could take around on calls. I happened to be free, so I worked with our product folks on those things. As sometimes happens, those POC and demo sites became the template for what we wanted the next version of the product to be like. And since I’d already worked on them… why not come over to product development and do the work “for real this time?”

I worked on the very first iteration of the Corillian Consumer Banking product. That was in .NET 1.1, though 2.0 was right around the corner. I remember having to back-port features like ASP.NET master pages into 1.1. (I still like our implementation better.) This was back when Hanselman was still at Corillian and we worked together on several features, particularly where the UI had to interact with/consume services.

In early 2007 CheckFree acquired Corillian. After the dust on that settled, I was still working on Consumer Banking - basically, same job, new company. There were definitely some process hiccups as we went from a fairly agile Scrum-ish methodology that Corillian had into CheckFree’s version of Rational Unified Process, but we made do.

In late 2007, Fiserv acquired CheckFree.

Yeah, that was some crazy times.

Fiserv, for the most part, adopted CheckFree’s development processes, at least as far as our group could tell. RUP gave way after a while to something more iterative but still not super agile. It was only pretty recently (the last five-ish years?) that we’ve finally made our way back to Scrum.

The majority of my time has been in web service and UI development. I did get my Microsoft Certified DBA and Microsoft Certified .NET Solutions Developer certifications so I’m not uncomfortable working at all layers, but I do like to spend my time a little higher than the data tier when possible.

More recently, I’ve been working on REST API stuff using ASP.NET Core. Always something new to learn, always interesting.

Also interesting: with the various acquisitions, reorganizations, and re-prioritizations we’ve seen over the years, I’ve gotten a lot of great experience with different people, processes, tools, and development environments, all while (effectively) working for the same company. In some cases, it’s been like working different jobs… despite it being the same job.

Plus, I’m afforded (a small amount of) time to help out the open source community with Autofac and other projects.

That’s actually why I’ve stayed so long. I can only speak for myself, but even with me sort of doing “the same thing” for so long… it’s not the same thing. I’m always learning something new, there’s always something changing, there’s always a new problem to solve. I work with some great people who are constantly trying to improve our products and processes.

And a bit of seniority never hurt anyone.

humor, personal

In the last couple of weeks I’ve had the opportunity to get together with folks for lunch or dinner and I’m finding it’s hard to agree on a “nice place to eat.”

Here’s the thing.

I’m not a really picky eater. At least, I don’t think so. I like simple food that tastes good. The thing is, I live in the Portland, OR metro area, so when someone talks about “a nice place to eat” it usually has something to do with an independent restaurateur who has “a fresh take on old ideas.” This generally amounts to “I don’t want to eat there” for me.

Don’t get me wrong, I’ve tried several of these places. I have yet to enjoy them. It’s not like I didn’t give it a fair shake.

With that in mind, I decided to post a list of “restaurant red flags” - things that warn me against eating at a place. No single item here instantly disqualifies a place, but a combination of them will probably result in a “no.”

If your restaurant has/does/says any of these things, I’m out:

  • The description of your restaurant contains some version of the word “gastronomy” that is not immediately prefixed by “molecular.”
  • All the pictures of the meat dishes appear to be barely cooked to rare.
  • I have to look up two or more of the words in any single menu item.
  • Your menu is in English but you don’t use the common English words for things so you can sound fancier (e.g., you use “ali-oli” instead of “aioli”).
  • You serve a dish based on a creature I would normally otherwise consider “vermin” rather than “game.”
  • You’ve been in any top ten restaurant list where the food is described as “new and exciting.”
  • The intent is for lots of small dishes to be purchased and passed around. (I hate tapas. Joey doesn’t share food.)
  • You think it’s a great idea to have a lot of community tables and no individual tables.
  • There’s a lot of fermented stuff on the menu that isn’t alcohol.
  • More than one item on the menu can be described as a “delicacy.”
  • A significant number of the meat dishes are made with the less-common cuts of meat (cheek, tongue, tail, etc.).

I may add to that list in the future, but basically, yeah. Red flags.

aspnet, security

I’ve been working with ASP.NET Core in a web farm environment. Things worked great when deployed to an Azure Web App but in a different farm setting (Pivotal Cloud Foundry) I started getting an error I hadn’t seen before: System.AggregateException: Unhandled remote failure. ---> System.Exception: Unable to unprotect the message.State.

This happened in the context of the OpenID Connect middleware, specifically when a value encrypted by one instance of the ASP.NET Core application had to be decrypted by a different instance of the application.

The problem is that the values used in DataProtection weren’t synchronized across all instances of the application. This is a lot like the ASP.NET classic issue where you have to ensure all nodes in the farm have the machine key synchronized so ViewState and other things can be shared across application instances.

Instead of a machine key, ASP.NET Core uses Microsoft.AspNetCore.DataProtection to handle the encryption keys used to protect state values that get posted between the app and the client. There is plenty of documentation on how this works, but not much in the way of a concise explanation of what it takes to get things working in a farm. Hopefully this will help.

How DataProtection Gets Added

Normally you don’t manually add the data protection bits to the application pipeline. It’s done for you when you call services.AddMvc() during the ConfigureServices() part of application startup. That services.AddMvc() line actually fans out into adding a lot of default services, some of which are the defaults for data protection.

What to Synchronize

Instead of just the machine key, in ASP.NET Core you have three things that must line up for a farm scenario:

  • The application discriminator, which differentiates applications and is based on the installed location of the app.
  • The master encryption key, which protects the session keys at rest.
  • The encrypted set of session keys, which must be available to every node in the farm.

Why This Doesn’t “Just Work” in All Farms

  • The application discriminator, being based on the installed location of the app, is great if all machines in the farm are identical. If, instead, you’re using some containerization techniques, a virtual filesystem, or otherwise don’t have the app installed in the same location everywhere, you need to manually set this.
  • The master encryption key, while not used on non-Windows environments, does otherwise need to be synchronized. If you choose to use a certificate, the current EncryptedXml mechanism used internally allows you to pass in a certificate for use in encryption but in decryption it requires the certificate to be in the machine certificate store. That requirement is less than stellar since it means you can’t store the certificate in something like Azure Key Vault.
  • The encrypted set of session keys is easy to persist in a file share… if the farm is allowed to store things in a common share and all the network ports are open to allow that. If you want to store in a different repository like a database or Redis, there’s nothing out of the box that helps you.

Why This Works in Azure Web Apps

There is some documentation outlining how this works in Azure. In a nutshell:

  • All applications are installed to the same location, so the application discriminator lines up.
  • Keys aren’t encrypted at rest, so there is no master encryption key.
  • The session keys are put in a special folder location that is “magically” synchronized across all instances of the Azure Web App.

Setting Your Own Options

To set your own options, call services.AddDataProtection() after you call services.AddMvc() in your ConfigureServices() method in Startup. It will look something like this:

public virtual IServiceProvider ConfigureServices(IServiceCollection services)
{
  services.AddMvc();

  // ProtectKeysWithYourCustomKey and PersistKeysToYourCustomLocation are
  // placeholders for whichever key protection and storage extensions you choose.
  services
    .AddDataProtection(opt => opt.ApplicationDiscriminator = "your-app-id")
    .ProtectKeysWithYourCustomKey()
    .PersistKeysToYourCustomLocation();

  // The signature returns IServiceProvider, so build and return the
  // provider from the populated service collection.
  return services.BuildServiceProvider();
}
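
As a more concrete (if minimal) sketch, here’s the same pattern using the out-of-the-box file system key repository pointed at a UNC share that every node in the farm can reach. The share path and application discriminator here are hypothetical; you’ll also need using System.IO; and Microsoft.AspNetCore.DataProtection in scope:

public virtual IServiceProvider ConfigureServices(IServiceCollection services)
{
  services.AddMvc();

  // PersistKeysToFileSystem stores the key ring XML in the given folder;
  // all farm nodes need read/write access to the share. Note that without
  // a ProtectKeysWith* call, keys in a custom location are not encrypted
  // at rest.
  services
    .AddDataProtection(opt => opt.ApplicationDiscriminator = "your-app-id")
    .PersistKeysToFileSystem(new DirectoryInfo(@"\\server\share\keys"));

  return services.BuildServiceProvider();
}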

Example Extensions

To help get you on your way, I’ve published a couple of extensions on GitHub. They include:

  • XML encryption/decryption using a certificate that isn’t required to be in a machine certificate store. This allows you to store the master certificate in a repository like Azure Key Vault, bypassing the requirement that the certificate be in the machine certificate store during decryption.
  • Encrypted XML storage in Redis. This allows you to share the session keys in a Redis database rather than a file share.