personal, halloween, costumes

I love to sew. There’s something satisfying about making a tangible thing; I’m sure it’s the same feeling people who do woodworking or metalworking get. Especially since I deal in the intangible all day, making something physical is cool.

I sew my Halloween costume every year, and I try to do something bigger, better, and more complex each time (at least from a sewing standpoint). I don’t sew as much during the rest of the year, but Halloween is my holiday.

I have documented before what it takes to make my costume, but I’ve never really walked folks step-by-step through the process. This year I took it a little further: instead of focusing on cost and major steps, I figured I’d show the full start-to-finish of what it takes. (I posted these same photos in a Facebook album, so if you want higher-resolution or uncropped images, check there.)

The first step is the pattern.

I used Butterick pattern B6340, which they rate as an “advanced” pattern. Normally when you buy a costume pattern, it’s “beginner” or “intermediate,” depending on whether it’s lined or especially detailed. Things that are closer to historical recreation or that have a lot of detail get rated “advanced,” which boils down to “more intricate steps.”

The back of the pattern looks like this and tells you how much fabric you need.

Butterick Pattern B6340

I ended up having to buy something like eight yards of fabric for the costume, which amounts to just buying the whole bolt. I also had lining and some accent fabrics.

When you get the fabric home, you have to wash it to pre-shrink it and get the sizing out that ships on the fabric. Then you have to iron it. Here’s what it looks like to throw eight yards of fabric on the ironing board.

Ironing the main fabric

The pattern itself comes in large sheets of tissue paper. You have to iron the pattern, too (on a very low setting) to get the creases out. That helps in cutting the pattern and makes sure when you cut your fabric you’ll get things the right size.

The pattern tissue sheets

After cutting out the pattern, you’ll have a stack of pattern pieces. I have mine separated between the coat (top pile) and the pants (bottom pile).

The cut pattern pieces

Once you’ve got the pattern cut out, it’s time to pin the pattern to the fabric and cut along the edges. This takes a long time and is the most tedious part of the whole process because not only do you cut out the pieces but you also have to transfer the little markings and lines from the pattern to your fabric. Those markings tell you where to fold, sew, cut, and line pieces up.

The top of this picture shows a stack of pieces I’ve cut; the bottom is some items pinned.

Pinning and cutting

Here’s what it looks like when all the pieces are cut out of all the fabric. It’s a big stack of puzzle pieces.

All the pieces cut out

After cutting the pieces out I put together the coat.

On the coat, there are a lot of pieces that give the body shape and rigidity. You sew a fairly rigid fabric called “interfacing” to those pieces. Here’s the interfacing all sewn to the coat pieces.

Interfacing in the coat pieces

After that, it’s time to get the front sections sewn. Here’s the right front part of the jacket.

Right front of the jacket

Once both front pieces and the back are sewn on, you have a sort of “vest” that you can wear and start to size.

The coat in 'vest' form

The coat has pockets on the front, so here’s what making those pockets looks like. The tan one on the top left is a “finished” pocket that will get sewn to the coat; the green one on the bottom is the second pocket, just not turned inside out yet.

The pockets on the jacket are lined, so once they’re sewn down you’ll see green lining inside, not just tan fabric.

Jacket pockets

After making the bottoms of the pockets, sewing those on, and then putting flaps and buttons over the top, here’s what the finished pockets look like attached to the jacket.

Pockets attached to the jacket

With the body of the coat done, it’s time for the sleeves. These are basically fabric tubes, but with a bit of extra fabric at the top. That extra fabric helps give your shoulders room to move and makes sure the arms aren’t literally pointing straight out from the jacket. (Look at your jacket - the arms hang down fairly evenly, they don’t stick straight out.)

In my case, the arms are also somewhat tailored so they have a slight bend at the elbow.

Arms for the jacket

Here’s what it looks like to attach an arm to the body of the jacket.

The top picture shows how you have to pin the arm to the jacket arm hole. There is some gathering to do because the distance around the arm hole on the jacket is actually less than the distance around the top of the sleeve. A lot of sewing is fudging things like this to make pieces fit, for one reason or another. Sometimes it’s because you were a quarter inch off in cutting something. Sometimes it’s because something stretched while you were sewing.

In this case, arm holes are always like this. It’s so your shoulder fits in well and so the fabric moves without being tight.

The middle picture shows what it looks like after the sewing is done and the pins are out.

The bottom picture shows the finished arm from the outside of the coat.

Sewing in an arm

Once the arms are in, it starts looking like a jacket.

(I skipped the steps where I attached the collar. You didn’t notice that, right? They were kinda boring anyway.)

Arms attached to the jacket

Once the outside of the jacket is done, it’s lining time.

Making a lining is a lot like making a slightly smaller version of the jacket, just with lighter fabric and inside-out.

Here I’ve put together the front parts of the lining. The left shows pinning the green lining fabric to some tan facing (so the lining doesn’t go right up to the edge where you button the jacket).

Front lining

Sew the front and back lining together, and you have a little vest - just like making the jacket.

Front and back lining connected

Add some lining sleeves and the inside part of the collar and you’ve got an inside-out version of the jacket.

The complete lining

Now it’s time to attach the lining to the jacket.

I put the shoulder pads in place and sewed the jacket half of the collar to the lining half of the collar.

Starting to put the lining in

This pattern uses a technique called “bagging the lining” to put the lining in. This lets you put the lining in with the sewing machine doing most of the work so it goes quicker.

Well, it goes quicker if you’ve done it before. This was the first time I’d tried it. I can see how it could be a time and effort saver, but figuring it out the first time was sort of painful.

Basically, you sew the outside of the lining to the edges of the jacket and leave a tiny hole in one of the seams in the arm lining. Once you’ve got the jacket and lining attached, flip the coat right-side-out through the arm lining hole and sew up that small hole by hand.

My usual method of lining involves a lot of hand sewing so this was a nice change.

The lining attached to the jacket

After flipping it right-side-out I found a couple of weird things I had to fix in the corner of the jacket where the lining meets the jacket front.

Jacket lining gathering at the corner

Luckily I was able to do a minimal amount of work to fix it.

The jacket lining gathering fixed

With the lining in, the jacket is almost complete!

Lining fully in the jacket

Last step: the belt!

The belt for the jacket

Here’s the completed jacket after all that work.

The completed jacket

Now for the pants.

The first step in the pants is to put a couple of small front pockets in.

From left to right, top to bottom: The pocket lining sewn along the top of the waist; the lining flipped down and pinned; the pocket lining sewn with pins removed; the finished pocket as seen from the front.

The front pocket on the pants

There are side pockets, too. Here are a couple of facing pieces attached to the pocket lining. The facing makes it so putting your hands in your pockets doesn’t expose green lining.

Side pocket lining

Once you’ve attached the side pocket fronts to the pants front, it looks like this (one pocket per leg).

Side pocket fronts attached

The front sides have a waistband, so you have to attach that after the pockets for a nice finished edge.

Side waistband attached

Skipping ahead a bit, here are the back halves of the legs sewn together. These will get sewn to the front halves to be a full pair of pants.

Back leg halves sewn together

The center front also has a waistband. Here’s what that looks like when it’s not attached.

That white fabric you see in there is the interfacing I mentioned earlier.

Front waistband

Here’s the front waistband attached to the front of the pants. Remember those little front pockets I made earlier? You can see them here. They close right at the waistband, so I put some tools in them so you could see them.

The top picture is what you’d see from the front. The bottom picture is the inside of the pants front.

Pants front waistband attached

These particular pants have patches on the insides of the legs. I used microsuede for mine, so it looks like leather but is cheaper and easier to work with.

Leg patches in place

Once the front is sewn to the back, you can actually try the pants on.

In the picture, they’re pinned together since I haven’t put buttons in yet. You’ll also see the legs look a little baggy. These pants have buttons down the sides of the legs to tighten them up. (I skipped showing those steps - again, hard to see, a little tedious.)

First fitting of the pants

Time to put button holes in.

This is a glimpse at some of the pain that shows up unexpectedly while working.

Sewing machines nowadays are great about putting button holes of all shapes and sizes into your clothes, but they don’t tell you when you start the button hole whether or not you have enough thread in the machine to finish the button hole.

You can’t “put in more thread and pick up where you left off.” It’s pretty “all or nothing.” So… when your thread runs out mid-button-hole, you get to pull out the seam ripper and very, very gently rip out all those tiny stitches and try again.

Failed partial button hole

Here are all the button holes put into the pants. I had to make 26 in all: eight on each leg, four up each side, and two in the middle.

Button holes complete

Once you have button holes, you gotta put on the buttons.

Here are the leg buttons done…

Leg buttons in place

…and the side buttons getting put in…

Side buttons going in

That’s the last step of the pants. After a little fitting and adding some belt loops, the pants looked pretty decent.

The finished pants

Pants plus coat equals costume!

With the pants and the coat finished, I just had to add my accessories: pith helmet, riding half chaps to cover my boots, and binoculars.

The complete costume!

In all, I think it turned out pretty well.

javascript

I was working on my annual PTO schedule and thought it would be nice to collaborate on it with my wife, but also make it easy to visually indicate which days I was taking off.

Google Sheets is great for that sort of thing, so I started out with the calendar template. Then I wanted to highlight (with a background color) which days I was taking off.

That works well, but then I also wanted to see how many days I was planning to make sure I didn’t use too many vacation days.

How do you count cells in Google Sheets by background color?

One way is to use the “Power Tools” add-on in Sheets. You have to use the “Pro” version if you go that route, so consider that. I think the pro version is still free right now.

I did try that and had some trouble getting it to work. Maybe I was just doing it wrong. The count was always zero.

Instead, I wrote a script to do it. It was based on this StackOverflow question, but I wanted my function to be parameterized, which the code in the question wasn’t.

First, go to “Tools > Script Editor…” in your sheet and paste in this script:

/**
 * Counts the number of items with a given background.
 * @param {String} color The hex background color to count.
 * @param {String} inputRange The range of cells to check for the background color.
 * @return {Number} The number of cells with a matching background.
 */
function countBackground(color, inputRange) {
  var inputRangeCells = SpreadsheetApp.getActiveSheet().getRange(inputRange);
  var rowColors = inputRangeCells.getBackgrounds();
  var count = 0;

  for (var r = 0; r < rowColors.length; r++) {
    var cellColors = rowColors[r];
    for (var c = 0; c < cellColors.length; c++) {
      if (cellColors[c] == color) {
        count++;
      }
    }
  }

  return count;
}

Once that’s in, you can save and exit the script editor.
Once that’s in, you can save and exit the script editor.
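Incidentally, the heart of that function is just a nested loop over a two-dimensional array: getBackgrounds() returns one array per spreadsheet row, each holding the hex color string for every cell in that row. Here’s the counting logic isolated so you can see what it’s working with (the sample array is made up for illustration; it’s not anything Sheets produced):

```javascript
// Counts how many entries in a 2D array of hex color strings match a
// target color. This mirrors the loop inside countBackground;
// getBackgrounds() hands back exactly this kind of row-major array.
function countColorMatches(color, rowColors) {
  var count = 0;
  for (var r = 0; r < rowColors.length; r++) {
    for (var c = 0; c < rowColors[r].length; c++) {
      if (rowColors[r][c] === color) {
        count++;
      }
    }
  }
  return count;
}

// A made-up 2x3 "range" with two bright green cells.
var sample = [
  ["#ffffff", "#00ff00", "#ffffff"],
  ["#00ff00", "#ffffff", "#ffffff"]
];

countColorMatches("#00ff00", sample); // 2
```

The only Sheets-specific part of the real function is fetching that array from the range; the counting itself is plain JavaScript.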

Back in your sheet, use the new function by entering it like a formula, like this:

=countBackground("#00ff00", "B12:X17")

It takes two parameters:

  • The first parameter is the background highlight color. It’s a hexadecimal color, since that’s how Sheets stores it. The example above is the bright green background color.
  • The second parameter is the cell range you want the function to look at. This is in the current sheet. In the example, I’m looking at the range from B12 through X17.

Gotcha: Sheets caches function results. Google Sheets caches the output of custom function executions. That means the function runs once when you enter it, calculates the number of items with the specified background, and then won’t automatically run again. If you change the background of one of the cells, the function doesn’t re-run and the count doesn’t update. This is a Google Sheets thing, trying to optimize performance. What it means for you is that if you change cell backgrounds, you need to change the function temporarily to get it to update.

For example, say you have a cell that has this:

=countBackground("#00ff00", "B12:X17")

You update some background colors and want your count to update. Change the function to, say, look at a different range temporarily:

=countBackground("#00ff00", "B12:X18")

Then change it back:

=countBackground("#00ff00", "B12:X17")

By changing it, you force Google Sheets to re-run it. I haven’t found any button or control to force the methods to update or re-run so this is the way I’ve been tricking it.
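To see why the trick works, here’s a rough simulation of the caching behavior in plain JavaScript. (The cache model is my own simplification; Google doesn’t document the real mechanism.) The cached result is keyed on the function’s arguments, so changing any argument forces a fresh run:

```javascript
// Simulated version of how Sheets caches a custom function's result,
// keyed on the arguments it was called with.
function memoize(fn) {
  var cache = {};
  return function () {
    var key = JSON.stringify(Array.prototype.slice.call(arguments));
    if (!(key in cache)) {
      cache[key] = fn.apply(null, arguments);
    }
    return cache[key];
  };
}

var runs = 0;
var cachedCount = memoize(function (color, range) {
  runs++;   // stands in for the expensive background scan
  return 6; // pretend six cells matched
});

cachedCount("#00ff00", "B12:X17"); // first call: actually runs
cachedCount("#00ff00", "B12:X17"); // same args: served from cache
cachedCount("#00ff00", "B12:X18"); // changed range: runs again
// runs is now 2
```

It’s only an analogy, of course; in real Sheets, editing the formula back to the original range also triggers a fresh evaluation, which is why the change-it-and-change-it-back dance gives you an up-to-date count.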

net, aspnet

It’s good to develop and deploy your .NET web apps using SSL/TLS to ensure you’re doing things correctly and securely.

If you’re using full IIS for development and testing, it’s easy enough to create a self-signed certificate right from the console. But you have to be an administrator to use IIS in development, and it’s not cool to dev as an admin, so that’s not usually an option. At least, it’s not for me.

IIS Express comes with a self-signed SSL certificate you can use for development and Visual Studio even prompts you to trust that certificate when you first fire up a project using it. (Which is much nicer than the hoops you used to have to jump through to trust it.)

That still doesn’t fix things if you’re using a different host, though, like Kestrel for .NET Core projects; or if you’re trying to share the development SSL certificate across your team rather than using the per-machine self-signed cert.

There are instructions on MSDN for creating a temporary self-signed certificate.

The instructions work well enough, but something my team and I ran into: After a period of time, the certificate you created will no longer be trusted. We weren’t able to reproduce it on demand, just… periodically (between one day and two weeks) the certificate you place in the “Trusted Root Certification Authorities” store as part of the instructions just disappears.

It literally disappears. Your self-signed CA cert will get removed from the list of trusted third party CAs without a trace.

You can try capturing changes to the CA list with Procmon or set up security auditing on the registry keys that track the CA list and you won’t get anything. I tried for months. I worked through it with Microsoft Premier Support and they couldn’t find anything, either.

It’s easy enough to put it back, but it will eventually get removed again.

What is Going On?

The reason for this is the Automatic Third-Party CA Updates process that runs as part of Windows. This process periodically goes to Windows Update to get an updated list of trusted third-party certificate authorities, and any certificates in your store that aren’t in that list get deleted.

Obviously your self-signed dev cert won’t be in the list, so, poof. Gone. It was unclear to me as well as the MS support folks why we couldn’t catch this process modifying the certificate store via audits or anything else.

There are basically two options to fix this (assuming you don’t want to ignore the issue and just put the self-signed CA cert back every time it gets removed):

Option 1: Stop Using Self-Signed Certificates

Instead of using a self-signed development cert, try something from an actual, trusted third-party CA. You can get a free certificate from LetsEncrypt, for example. Note LetsEncrypt certificates currently only last 90 days but you’ll get 90 uninterrupted days where your certificate won’t magically lose trust.

Alternatively, if you have an internal CA that’s already trusted, use that. Explaining how to set up an internal CA is a bit beyond the scope of this post and it’s not a single-step five-minute process, but if you don’t want a third-party certificate, this is also an option.

Option 2: Turn Off Automatic Third-Party CA Updates

If you’re on an Active Directory domain you can do this through group policy, but in the local machine policy you can see this under Computer Configuration / Administrative Templates / System / Internet Communication Management / Internet Communication Settings - You’d set “Turn off Automatic Root Certificate Update” to “Enabled.”

I’m not recommending you do this. You accept some risk if you stop automatically updating your third-party CA list. However, if you’re really stuck and looking for a fix, this is how you turn it off.

Turning off automatic third-party CA updates

personal

Well, I made it to 40.

That being a somewhat significant life milestone, I figured I’d stop to reflect a bit.

To celebrate the occasion, I rented out a local theater (a smaller 18-person place) and we had a private screening of Star Trek Beyond along with a great dinner while watching. It was a great time with family and friends.

I think 40 is not a bad place to be. I feel like I’m now old enough to “know better” but not so old I can’t still take risks. As it’s been approaching I haven’t looked at it in fear or with any real sense of “mortality” as it were, just… “Hey, here comes a sort of marker in the road of life. I wonder what it’ll mean?”

I feel like I’ve lost at least a little bit of that rough edge I had when I was younger, and that’s good. Looking back at the blog history here, the tone of posts has changed to be slightly less aggressive, though I can’t honestly say a bit of that isn’t still inside me. I still don’t suffer fools gladly and I still get irritated with people who don’t respect me or my time. I’m not a patient guy, and I have about ten minutes’ worth of attention span for poorly run meetings.

I’m working on it.

I’ve been in professional software development for about 20 years now. The majority of that work has been web-related, which is sort of weird for me to think that field has been around for that long. I remember doing a project in college writing a web CGI library in Standard ML and that was pretty new stuff.

As of this year, 15 of my career years have been spent at Fiserv. Given my first job was when I was 14, that’s actually most of my working life. I was originally hired at Corillian, which was subsequently acquired by CheckFree, which, in turn, was acquired by Fiserv. With all the changes of management, process, tools, and so on, it feels like having worked at different companies over the years even though the overall job hasn’t changed. That’s actually one of the reasons I haven’t really felt the need to go elsewhere - I’ve had the opportunity to see product development from a long-term standpoint, experience different group dynamics, try different development processes… and all without the instability of being an independent contractor. It’s been good.

I originally went to college wanting to be a computer animator / 3D modeler. Due to a series of events involving getting some bad counseling and misleading information, which I am still very bitter about, I ended up in computer science. Turns out art houses believe you can teach an artist computer science but computer scientists will never be good at art. Even if you have a portfolio and a demo reel. So that was the end of that.

That’s actually why I started in web stuff - I was in love with the UI aspect of things. Over time I’ve found the art in solving computer science problems and have lost my interest in pushing pixels (now with CSS).

I still have a passion for art, I still do crafts and things at home. I really like sewing, which is weird because when I was a kid I would feel dizzy in fabric stores so I hated it. (My mom used to sew when I was a kid.) I actually called them “dizzy places.” I’m curious if maybe I had a chemical sensitivity to the sizing (or whatever) that ships on the fabric. Maybe I was just super bored. In any case, I really like it now. I like drawing and coloring. I like building stuff. I’m probably not the best artist, but I have a good time with it. I do wish I had more time for it.

I waited until later in life to start a family. I’ve only been married for coming up on 10 years now, though I’ve been with my wife for 16 years. I only have one kid and she’s five so she came along slightly later, too. That’s probably a good thing since she’s quite the handful and I’m not sure I’d have had the patience required to handle her in earlier times. I still kinda don’t. I definitely don’t have the energy I would have had 15 years ago or whatever.

I only have one grandparent left. My wife has none. My daughter really won’t know great-grandparents. I’m not sure how I feel about that. I was young when I met my great-grandparents and I honestly don’t remember much about them. I’m guessing it’ll be the same for her.

I love my parents and have a good relationship with them. They’re around for my daughter and love her to pieces. That makes me happy.

I have two sisters, both of whom I love, but only one of whom still talks to the family. The one that still talks to us has a great life and family of her own that means we don’t cross paths often. I’m glad she’s happy and has her own thing going, but I realize that our lives are so different now that if she weren’t actually related to me we probably wouldn’t really keep in touch. A lot of the commonality we shared as kids has disappeared over time.

Friends have come and gone over the years. I don’t have a lot of friends, but I’m glad to say the ones I have are great. I’m still friends with a few people I knew from school, but my school years weren’t the best for me so I don’t really keep in touch with many of them. Some folks I swear I’d be best friends with for life have drifted away. Some folks I never would have guessed have turned into the best friends I could have. I guess that’s how it goes as people change.

I haven’t made my first billion, or my first million, but I’m comfortable and don’t feel unsuccessful. I wish we had a bigger house, but there is also a lot of space we don’t use so maybe it’s just that I want a different layout. I feel comfortable and don’t live paycheck to paycheck so I can’t say I’m not fortunate. (Don’t get me wrong, though, I’m not saying I’m not interested in money. I don’t work for free.)

Anyway, here’s to at least another 40 years. The first 40 has been great, I’m curious what the next batch has in store.

aspnet, autofac

As we all saw, ASP.NET Core and .NET Core went RTM this past Monday. Congratulations to those teams - it’s been a long time coming and it’s some pretty amazing stuff.

Every time an RC (or, now, RTM) comes out, questions start flooding in on Autofac, sometimes literally within minutes of the go-live, asking when Autofac will be coming out with an update. While we have an issue you can track if you want to watch the progress, I figured I’d give a status update on where we are and where we’re going with respect to RTM. I’ll also explain why we are where we are.

Current Status

We have an RC build of core Autofac out on NuGet that is compatible with .NET Core RTM. That includes a version of Autofac.Extensions.DependencyInjection, the Autofac implementation against Microsoft.Extensions.DependencyInjection. We’ll be calling this version 4.0.0. We are working hard to get a “stable” version released, but we’ve hit a few snags at the last minute, which I’ll go into shortly.

About half of the non-portable projects have been updated to be compatible with Autofac 4.0.0. For the most part this was just an update to the NuGet packages, but with Autofac 4.0.0 we also stopped using the old code access security model (remember [AllowPartiallyTrustedCallers]?), and some of these projects needed to be updated accordingly.

We are working hard to get the other half of the integration projects updated. Portable projects are being converted to use the new project.json structure and target netstandard framework monikers. Non-portable projects are sticking with .csproj but are being verified for compatibility with Autofac 4.0.0, getting updated as needed.

Why It’s Taking So Long

Oh, where do I begin.

Let me preface this by saying it’s going to sound like a rant. And in some ways it is. I do love what the .NET Core and ASP.NET Core teams have out there now, but it’s been a bumpy ride to get here and many of the bumps are what caused the delay.

First, let’s set the scene: There are really only two of us actively working on Autofac and the various officially supported integration libraries - it’s me and Alex Meyer-Gleaves. There are 23 integration projects we support alongside core Autofac. There’s a repository of examples as well as documentation. And, of course, there are questions that come in on StackOverflow, issues that come in that need responses, and requests on the discussion forum. We support this on the side since we both have our own full-time jobs and families.

I’m not complaining, truly. I raise all that because it’s not immediately evident. When you think about what makes Autofac tick (or AutoMapper, or Xunit, or any of your other favorite OSS projects that aren’t specifically backed/owned by a company like Microsoft or a consultant making money from support), it’s a very small number of people with quite a lot of work to get done in pretty much no time. Core Autofac is important, but it’s the tip of a very large iceberg.

We are sooooo lucky to have community help where we get it. We have some amazing folks who chime in on Autofac questions on StackOverflow. We’ve gotten some pretty awesome pull requests to add some new features lately. Where we get help, it’s super. But, admittedly, IoC containers and how they work internally are tricky beasts. There aren’t a lot of simple up-for-grabs sort of fixes that we have in the core product. It definitely reduces the number of things that we can get help with from folks who want to drop in and get something done quickly. (The integration projects are much easier to help with than core Autofac.)

Now, keep that in the back of your mind. We’ll tie into that shortly.

You know how the tooling for .NET Core changed like 1,000 times? You know how there was pretty much no documentation for most of that? And there were all sorts of weird things like the only examples available being from the .NET teams and they were using internal tools that folks didn’t have great access to. Every new beta or RC release was a nightmare. Mention that and you get comments like, “That’s life in the big city,” which is surely one way to look at it but is definitely dismissive of the pain involved.

Every release, we’d need to reverse-engineer the way the .NET teams had changed their builds, figure out how the tools were working, figure out how to address the breaking changes, and so on. Sometimes (rarely, but it happened) someone would have their project ported over first and we could look at how they did it. We definitely weren’t the only folks to feel that, I know.

NuGet lagging behind was painful because just updating core Autofac didn’t necessarily mean we could update the integration libraries. Especially with the target framework moniker shake-up, you’d find that without the tooling in place to support the whole chain, you could upgrade one library but not be able to take the upgrade in a downstream dependency because the tooling would consider it incompatible.

Anyway, with just the two of us (and the community as possible) and the tooling/library change challenges there was a lot of wheel-spinning. There were weeks where all we did was try to figure out the right magic combination of things in project.json to get things compiling. Did it work? I dunno, we can’t test because we don’t have a unit test framework compatible with this version of .NET Core. Can’t take it in a downstream integration library to test things, either, due to tooling challenges.

Lots of time spent just keeping up.

Finally, we’ve been bitten by the “conforming container” introduced for ASP.NET Core. Microsoft.Extensions.DependencyInjection is an abstraction around DI that was introduced to support ASP.NET Core. It’s a “conforming container” because it means anything that backs the IServiceProvider interface they use needs to support certain features and react in the same way. In some cases that’s fine. For the most part, simple stuff like GetService<T>() is pretty easy to implement regardless of the backing container.

The stuff you can’t do in a conforming container is use the container-specific features. For example, Autofac lets you pass parameters during a Resolve<T>() call. You can’t do that without actually referencing the Autofac lifetime scope - the IServiceProvider interface serves as a “lowest common denominator” for containers.

All along the way, we’ve been testing the junk out of Autofac to make sure it works correctly with Microsoft.Extensions.DependencyInjection. It’s been just fine so far. However, at the last minute (20 days ago now) we got word that not only did we need to implement the service provider interface as specified, but we also need to return IEnumerable<T> collections in the order that the components were registered.

We don’t currently do that. Given that IEnumerable<T> has no specification around ordering, and all previous framework features requiring ordering (controller action filters, etc.) used an Order property or something like that, it’s never been an issue. Interfaces using IEnumerable<T> generally don’t assume order (or, at least, shouldn’t). This is a new requirement for the conforming container, and it’s amazingly non-trivial to implement.

It’s hard to implement because Autofac tracks registrations in a more complex way than just adding them to a list. If you add a standard registration, it does get added to a list. But if you add .PreserveExistingDefaults() because you want to register something and keep the existing default service in place if one’s already registered - that goes in at the end of the list instead of at the head. We also support very dynamic “registration sources” - a way to add registrations to the container on the fly without making explicit registrations. That’s how we handle things like Lazy<T> automatically working.

(That’s sort of a nutshell version. It gets more complex as you think about child/nested lifetime scopes.)

Point being, this isn’t as simple as just returning the list of stuff that got registered. We have to update Autofac to start keeping track of registration order yet still allow the existing functionality to behave correctly. And what do you do with dynamic registration sources? Where do those show up in the list?

The answers are not so straightforward.
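To make the problem concrete, here’s a toy sketch in JavaScript. (The names and data structures are mine; Autofac is C# and its real bookkeeping is far more involved.) In this model the default service is the head of the list, normal registrations go in at the head, and PreserveExistingDefaults-style registrations go in at the tail so the existing default stays put:

```javascript
// Toy registration list illustrating why "return services in
// registration order" is awkward when defaults are tracked by
// list position. This is a sketch, not Autofac's actual design.
function Registry() {
  this.registrations = [];
}

Registry.prototype.register = function (service, impl, options) {
  var reg = { service: service, impl: impl };
  if (options && options.preserveExistingDefaults) {
    this.registrations.push(reg);    // tail: don't displace the default
  } else {
    this.registrations.unshift(reg); // head: newest registration wins
  }
};

// The default is simply the first match in the list.
Registry.prototype.resolve = function (service) {
  return this.resolveAll(service)[0];
};

// Returns matches in LIST order, which is not registration order.
Registry.prototype.resolveAll = function (service) {
  return this.registrations
    .filter(function (r) { return r.service === service; })
    .map(function (r) { return r.impl; });
};

var registry = new Registry();
registry.register("logger", "ConsoleLogger");
registry.register("logger", "NullLogger", { preserveExistingDefaults: true });
registry.register("logger", "FileLogger");

registry.resolve("logger");    // "FileLogger" -- the newest default wins
registry.resolveAll("logger"); // ["FileLogger", "ConsoleLogger", "NullLogger"],
                               // but registration order was Console, Null, File
```

Keeping resolve’s behavior while also being able to answer “what order were these registered in?” means tracking a second ordering, and dynamic registration sources never appear in a list like this at all. That’s roughly why the fix isn’t trivial.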

We are currently working hard on solving that ordering problem. Actually, right now, Alex is working hard on that while I try and get the rest of the 23 integration projects converted, update the documentation, answer StackOverflow/issue/forum questions, and so on. Thank goodness for that guy because I couldn’t do this by myself.

If you would like to follow along on the path to a stable release, check out these issues:

While it may not be obvious, adding lots of duplicate issues asking for status or “me too” comments on issues in the repo doesn’t help. In some cases it’s frustrating (it’s a “no pressure, but hurry up” vote) and may slow things down as we spend what little time we have responding to the dupes and the questions rather than actually getting things done. I love the enthusiasm and interest, but please help us out by not adding duplicates. GitHub recently added “reactions” to issues (that little smiley face at the top-right of an issue comment) - jump in with a thumbs-up or smile or something; or subscribe to an issue if you’re interested in following along (there’s a subscribe button along the right up near the top of the issue, under the tags).

Thanks (So Far)

Finally, I have some thanks to hand out. Like I said, we couldn’t get this done without support from the community. I know I’m probably leaving someone out, and if so, I’m sorry - please know I didn’t intentionally do it.

  • The ASP.NET Core team - These guys took the time to talk directly to Alex and me about how things were progressing and answered several questions.
  • Oren Novotny - When the .NET target framework moniker problem was getting us down, he helped clear things up.
  • Cyril Durand and Viktor Nemes - These guys are rockstars on StackOverflow when it comes to Autofac questions.
  • Caio Proiete, Taylor Southwick, Kieren Johnstone, Geert Van Laethem, Cosmin Lazar, Shea Strickland, Roger Kratz - Pull requests of any size are awesome. These folks submitted to the core Autofac project within the last year. This is where I’m sure I missed someone because it was a manually pulled list and didn’t include the integration libraries. If you helped us out, know I’m thanking you right now.