net, aspnet, azure

I got the opportunity to hit the Microsoft Build conference this year. Last time I was able to make it was 2012, so it was good to be able to get back in and see what’s new in person.

I’m going to review this from my perspective as a web / web service / REST API sort of developer. If you’re a different sort of developer, you may have picked up on something super cool that I totally missed or tuned out.

Usually there are some “key themes” that get pushed at Build. Back in 2012, it was all Windows 8 applications. This year it was:

  • Internet of Things
  • Office and Cortana Integration
  • Cross-platform Applications
  • Microservices and Bots

Keeping in mind my status as a web developer, the microservice/bot stuff was the most interesting thing to me. I don’t work with hardware, so IoT is neat but not valuable. I don’t really need to integrate with Office, and Cortana isn’t web-based. I can maybe see doing some cross-platform stuff for mobile apps that talk to my REST APIs, but this was largely outside the scope of what I do, too.

I was pretty disappointed by the keynotes. There’s usually some big reveal in them, but day one left me wanting - something about 22 new machine learning APIs being released that I won’t use. Day two, the big reveal for me was free Xamarin for everyone. Again, cross-platform dev isn’t my thing, but that’s still pretty cool.

The sessions were impossible to get into. In 2012 they hosted the conference in Redmond on the Microsoft campus. I got into every session I was interested in. Since then they’ve hosted it in San Francisco at the Moscone Center. I can’t speak for other years, but this year you just couldn’t get into the sessions. If you weren’t lined up half an hour early to get in, forget it - there weren’t enough seats. In total I only got to see three different sessions. In three days, I saw three sessions. I don’t feel like I should have to catch up on the conference I paid to attend by watching videos of the sessions I wanted to see.

The schedule wasn’t public very early. I normally like to check out the web site and figure out which sessions I want to see when I get there. They didn’t release the speaker schedule (to my knowledge) until the day before the conference. I may well have canceled my reservation had I known the list of topics ahead of time. Maybe that’s why they didn’t release it.

There were great code labs. Something they didn’t have as much of in previous years was interactive labs where you could learn new tech. They did a really good job of this, with several physical labs with hardware all set up so you could try stuff out. This was super valuable, and the majority of my conference takeaways came from these labs. In particular, I finally got a good feel for Docker by doing one of these labs.

There was no hardware giveaway this year which makes me wonder why the price was so high. I get that there are big parties and so on, but I would rather the price go down or there be some hardware than just keep the price cranked up. That said, I did come out with a Raspberry Pi 2 Azure IoT starter kit, so I can at least experiment with some of the IoT things they announced. Who knows? Maybe I’ll turn into an IoT aficionado.

There was a pitifully small amount of information about .NET Core. .NET Core and ASP.NET Core are on the top of my mind lately. Most of my current projects, including Autofac, are working through the challenges of RC1 and getting to RC2. There were something like three sessions total on .NET Core, most of which was just intro information. Any target dates on RC2? What’s the status on dotnet CLI? Honestly, I was hoping the big keynote announcement would be the .NET Core RC2 release. No such luck.

Access to the actual product teams was awesome. This almost (but not quite) makes up for the sessions being full. The ability to talk directly to various product team members for things like Visual Studio Online, ASP.NET Core, NuGet, Visual Studio, and Azure offerings was fantastic. It can be so hard sometimes to get questions answered or get the straight scoop on what’s going on with a project - cutting through the red tape and just talking to people is the perfect answer to that.

There was a big to-do around HoloLens. There seemed to be a lot around HoloLens - from conceptual demos to a full demo of walking on Mars. The lines for this were ridiculous. I didn’t get a chance to try it myself; a couple of colleagues tried it and said it wasn’t as mind-blowing as it was promoted to be.


So how would I rate it overall?

  • Logistics: Not good. If you sell out in five minutes and don’t have enough seats for sessions, that’s not cool.
  • Topics: Not good. I get there’s a focus on a certain subset of topics, but I can usually find something cool I’m excited about. Not this time.
  • Educational Value: OK. I didn’t get much from sessions but the labs and the on-hand staff were great.
  • Networking Value: Good. I don’t normally “network” with people in the whole “sales” context, but being able to meet up with people from different vendors and product teams and speak face to face was a valuable thing.

net

I’ve been working a bit with Serilog and ASP.NET Core lately. In both cases, there are constructs that use CallContext to store data across an asynchronous flow. For Serilog, it’s the LogContext class; for ASP.NET Core it’s the HttpContextAccessor.

Running tests, I’ve noticed some inconsistent behavior depending on how I set up the test fakes. For example, when testing some middleware that modifies the Serilog LogContext, I might set it up like this:

var mw = new SomeMiddleware(ctx => Task.FromResult(0));

Note the next RequestDelegate I set up is just a Task.FromResult call because I don’t really care what’s going on in there - the point is to see if the LogContext is changed after the middleware executes.

Unfortunately, what I’ve found is that the static Task methods, like Task.FromResult and Task.Delay, don’t behave consistently with respect to using CallContext to store data across async calls.

To illustrate the point, I’ve put together a small set of unit tests here:

using System.Runtime.Remoting.Messaging;
using System.Threading.Tasks;
using Xunit;

public class CallContextTest
{
    [Fact]
    public void SimpleCallWithoutAsync()
    {
        var value = new object();
        SetCallContextData(value);
        Assert.Same(value, GetCallContextData());
    }

    [Fact]
    public async Task AsyncMethodCallsTaskMethod()
    {
        var value = new object();
        await NoOpTaskMethod(value);
        Assert.Same(value, GetCallContextData());
    }

    [Fact]
    public async Task AsyncMethodCallsAsyncFromResultMethod()
    {
        var value = new object();
        await NoOpAsyncMethodFromResult(value);

        // THIS FAILS - the call context data
        // will come back as null.
        Assert.Same(value, GetCallContextData());
    }

    private static object GetCallContextData()
    {
        return CallContext.LogicalGetData("testdata");
    }

    private static void SetCallContextData(object value)
    {
        CallContext.LogicalSetData("testdata", value);
    }

    /*
     * Note the difference between these two methods:
     * One _awaits_ the Task.FromResult, one returns it directly.
     * This could also be Task.Delay.
     */

    private async Task NoOpAsyncMethodFromResult(object value)
    {
        SetCallContextData(value);

        // Using this one will cause the CallContext
        // data to be lost.
        await Task.FromResult(0);
    }

    private Task NoOpTaskMethod(object value)
    {
        SetCallContextData(value);
        return Task.FromResult(0);
    }
}
As you can see, changing from return Task.FromResult(0) in a non async/await method to await Task.FromResult(0) in async/await suddenly breaks things. No amount of configuration I could find fixes it.

StackOverflow has related questions and there are forum posts on similar topics, but this is the first time this has really bitten me.

I gather this is why AsyncLocal<T> exists, which means maybe I should look into that a bit deeper.
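As a quick sanity check, here’s a minimal sketch of my own (not from the tests above - the class and method names are made up) showing that a value stored in AsyncLocal&lt;T&gt; by a caller does flow across an awaited Task.FromResult:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class AsyncLocalDemo
{
    // AsyncLocal<T> (added in .NET 4.6) flows with the async
    // execution context, like logical call context data.
    private static readonly AsyncLocal<string> Data = new AsyncLocal<string>();

    public static void Main()
    {
        RunAsync().GetAwaiter().GetResult();
    }

    private static async Task RunAsync()
    {
        // Set in the caller's async context...
        Data.Value = "hello";
        await ReadAfterAwait();
    }

    private static async Task ReadAfterAwait()
    {
        // ...and even after awaiting Task.FromResult, the value
        // is still visible in this child method.
        await Task.FromResult(0);
        Console.WriteLine(Data.Value ?? "(null)"); // prints "hello"
    }
}
```

One caveat: the flow is one-way. A value set *inside* a child async method won’t propagate back up to the caller, so AsyncLocal&lt;T&gt; wouldn’t make the failing test above pass as written - but it is the supported primitive for ambient data in async code.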

personal, culture

There has been a lot of push lately for people to learn to code. From Hour of Code to the President of the United States pushing for more coders, the movement towards everyone coding is on.

What gets lost in the hype, drowned out by the fervor of people everywhere jamming keys on keyboards, is that simply being able to code is not software development.

OK, sure, technically speaking when you write code that executes a task you have just developed a piece of software. Also, technically speaking, when you fumble out Chopsticks on the keyboard while walking through Costco you just played the piano. That doesn’t make you a pianist any more than taking an hour to learn to code makes you a software developer.

Here’s where my unpopular opinion comes out. Here’s where I call out the elephant in the room and the politically-correct majority gasp at how I can be so unencouraging to these folks learning to code.

Software development is an art, not a science.

Not everyone can be a software developer in the same way not everyone can be a pianist, a painter, or a sculptor. Anyone can learn to play the piano well; anyone can learn to paint or sculpt reasonably. That doesn’t mean just anyone can make a living doing these things.

It’s been said that if you spend 10,000 hours practicing a task you can become great at anything. 10,000 hours is basically five years of a full-time job. So, ostensibly, if you spent five years full-time coding, you’d be a developer.

However, we’ve all heard that argument about experience: Have you had 20 years of experience? Or one year of experience 20 times? Does spending 10,000 hours coding make you a developer? Or does it just mean you spent a lot of time coding?

Regardless of your time in any field you have probably run across both of these people - the ones who really have 20 years’ experience and the ones who have been working for 20 years and you wonder how they’ve advanced so far in their careers.

I say that to be a good developer - or a good artist - you need three things: skills, aptitude, and passion.

Skills are the rote abilities you learn when you start with that Hour of Code or first take a class on coding. Pretty much anyone can learn a certain level of skill in nearly any field. It’s learned ability that takes brainpower and dedication.

Aptitude is a fuzzier quality meaning your natural ability to do something. This is where the “art” part of development starts coming in. You may have learned the skills to code, but do you have any sort of natural ability to perform those skills?

Passion is your enthusiasm - in this case, the strong desire to execute the skills you have and continue to improve on them. This is also part of the “art” of development. You might be really good at jamming out code, but if you don’t like doing it you probably won’t come up with the best solutions to the problems with which you’re faced.

Without all three, you may be able to code but you won’t really be a developer.

A personal anecdote to help make this a bit more concrete: When I went to college, I told my advisors that I really wanted to be a 3D graphics animator/modeler. My dream job was (and still kind of is) working for Industrial Light and Magic on special effects. As a college kid, I didn’t know any better, so when the advisors said I should get a Computer Science degree, I did. Only later did I find out that wouldn’t get me into ILM or Pixar. Why? In their opinion (at the time, in my rejection letters), “you can teach computer science to an artist but you can’t teach art to a computer scientist.”

The first interesting thing I find there is that, at least at the time, the thought there was that art “isn’t teachable.” For the most part, I agree - without the skills, aptitude, and passion for art, you’re not going to be a really great artist.

The more interesting thing I find is the lack of recognition that solving computer science problems, in itself, is an art.

If you’ve dived into code, you’re sure to have seen this, though maybe you didn’t realize it.

  • Have you ever seen a really tough problem solved in an amazingly elegant way that you’d never have thought of yourself? What about the converse - a really tough problem solved in such a brute force manner that you can’t imagine why that’s good?
  • Have you ever picked up someone else’s code and found that it’s entirely unreadable? If you hand someone else your code, can they make heads or tails of it? What about code that was so clearly written you didn’t even need any comments to understand how it worked?
  • Have you ever seen code that’s so deep and unnecessarily complicated that if anything went wrong with it you could never fix it? What about code that’s so clear you could easily fix anything with it if a problem was discovered?

We’ve all seen this stuff. We’ve all written this stuff. I know I have… and still do. Sometimes we even laugh about it.

The important part is that those three factors - skill, aptitude, and passion - work together to improve us as developers.

I don’t laugh at a beginner’s code because their skills aren’t there yet. However, their aptitude and passion may help to motivate them to raise their skill level, which will make them overall better at what they do.

The art of software development isn’t about the quantity of code churned out, it’s about quality. It’s about constant improvement. It’s about change. These are the unquantifiable things that separate the coders from the developers.

Every artist constantly improves. I’m constantly improving, and I hope you are, too. It’s the artistic aspect of software development that drives us to do so, to solve the problems we’re faced with. Don’t just be a software developer, be a software artist. And be the best artist you can be.

net, aspnet

Here’s the situation:

  • I have a .NET Core / ASP.NET Core (DNX) web app. (Currently it’s an RC1 app.)
  • When I start it in Visual Studio, I get IIS Express listening for requests and handing off to DNX.
  • When I start the app from a command line, I want the same experience as VS - IIS Express listening and handing off to DNX.

Now, I know I can just dnx web and get Kestrel to work from a simple self-host perspective. I really want IIS Express here. Searching around, I’m not the only one who does, though everyone’s reasons are different.

Since the change to the IIS hosting model you can’t really do the thing that the ASP.NET Music Store was doing where you copy the AspNet.Loader.dll to your bin folder and have magic happen when you start IIS Express.

When Visual Studio starts up your application, it actually creates an all-new applicationhost.config file with some special entries that allow things to work. I’m going to tell you how to update your per-user IIS Express applicationhost.config file so things can work outside VS just like they do inside.

There are three pieces to this:

  1. Update your applicationhost.config (one time) to add the httpPlatformHandler module so IIS Express can “proxy” to DNX.
  2. Use appcmd.exe to point applications to IIS Express.
  3. Set environment variables and start IIS Express using the application names you configured with appcmd.exe.

Let’s walk through each step.

applicationhost.config Updates

Before you can host DNX apps in IIS Express, you need to update your default IIS Express applicationhost.config to know about the httpPlatformHandler module that DNX uses to start up its child process.

You only have to do this one time. Once you have it in place, you’re good to go and can just configure your apps as needed.

To update the applicationhost.config file I used the XML transform mechanism you see in web.config transforms - those web.Debug.config and web.Release.config deals. However, I didn’t want to go through MSBuild for it so I did it in PowerShell.

First, save this file as applicationhost.dnx.xml - this is the set of transforms for applicationhost.config that the PowerShell script will use.

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
    <configSections>
        <sectionGroup name="system.webServer" xdt:Locator="Match(name)">
            <section name="httpPlatform" overrideModeDefault="Allow" xdt:Transform="InsertIfMissing" xdt:Locator="Match(name)" />
        </sectionGroup>
    </configSections>
    <location path="" xdt:Locator="Match(path)">
        <system.webServer>
            <modules>
                <add name="httpPlatformHandler" xdt:Transform="InsertIfMissing" xdt:Locator="Match(name)" />
            </modules>
        </system.webServer>
    </location>
    <system.webServer>
        <globalModules>
            <add name="httpPlatformHandler" image="C:\Program Files (x86)\Microsoft Web Tools\HttpPlatformHandler\HttpPlatformHandler.dll" xdt:Transform="InsertIfMissing" xdt:Locator="Match(name)" />
        </globalModules>
    </system.webServer>
</configuration>
I have it structured so you can run it over and over without corrupting the configuration - so if you forget and accidentally run the transform twice, don’t worry, it’s cool.

Here’s the PowerShell script you’ll use to run the transform. Save this as Merge.ps1 in the same folder as applicationhost.dnx.xml:

function script:Merge-XmlConfigurationTransform
{
    param(
        [string]$SourceFile,
        [string]$TransformFile,
        [string]$OutputFile
    )

    Add-Type -Path "${env:ProgramFiles(x86)}\MSBuild\Microsoft\VisualStudio\v14.0\Web\Microsoft.Web.XmlTransform.dll"

    $transformableDocument = New-Object 'Microsoft.Web.XmlTransform.XmlTransformableDocument'
    $xmlTransformation = New-Object 'Microsoft.Web.XmlTransform.XmlTransformation' -ArgumentList "$TransformFile"

    try
    {
        $transformableDocument.PreserveWhitespace = $false
        $transformableDocument.Load($SourceFile) | Out-Null
        $xmlTransformation.Apply($transformableDocument) | Out-Null
        $transformableDocument.Save($OutputFile) | Out-Null
    }
    finally
    {
        $transformableDocument.Dispose()
        $xmlTransformation.Dispose()
    }
}

$script:ApplicationHostConfig = Join-Path -Path ([System.Environment]::GetFolderPath([System.Environment+SpecialFolder]::MyDocuments)) -ChildPath "IISExpress\config\applicationhost.config"
Merge-XmlConfigurationTransform -SourceFile $script:ApplicationHostConfig -TransformFile (Join-Path -Path $PSScriptRoot -ChildPath applicationhost.dnx.xml) -OutputFile "$($script:ApplicationHostConfig).tmp"
Move-Item -Path "$($script:ApplicationHostConfig).tmp" -Destination $script:ApplicationHostConfig -Force

Run that script and transform your applicationhost.config.

Note that the HttpPlatformHandler isn’t actually a DNX-specific thing. It’s an IIS 8+ module that can be used for any sort of proxying/process management situation. However, it doesn’t come set up by default on IIS Express so this adds it in.

Now you’re set for the next step.

Configure Apps with IIS Express

I know you can run IIS Express with a bunch of command line parameters, and if you want to do that, go for it. However, it’s a lot easier if you set it up as an app within IIS Express so you can more easily launch it.

Set up applications pointing to the wwwroot folder.

A simple command to set up an application looks like this:

"C:\Program Files (x86)\IIS Express\appcmd.exe" add app /site.name:"MyApplication" /path:/ /physicalPath:C:\some\folder\src\MyApplication\wwwroot

Whether you use the command line parameters to launch every time or set up your app like this, make sure the path points to the wwwroot folder.

Set Environment Variables and Start IIS Express

If you look at your web.config file in wwwroot you’ll see something like this:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <add name="httpPlatformHandler" path="*" verb="*" modules="httpPlatformHandler" resourceType="Unspecified" />
    </handlers>
    <httpPlatform processPath="%DNX_PATH%" arguments="%DNX_ARGS%" stdoutLogEnabled="false" startupTimeLimit="3600" />
  </system.webServer>
</configuration>

The important bits there are the two variables DNX_PATH and DNX_ARGS.

  • DNX_PATH points to the dnx.exe executable for the runtime you want for your app.
  • DNX_ARGS are the arguments to dnx.exe, as if you were running it on a command line.

A very simple PowerShell script that will launch an IIS Express application looks like this:

$env:DNX_PATH = "$($env:USERPROFILE)\.dnx\runtimes\dnx-clr-win-x86.1.0.0-rc1-update1\bin\dnx.exe"
$env:DNX_ARGS = "-p `"C:\some\folder\src\MyApplication`" web"
Start-Process "${env:ProgramFiles(x86)}\IIS Express\iisexpress.exe" -ArgumentList "/site:MyApplication"

Obviously you’ll want to set the runtime version and paths accordingly, but this is basically the equivalent of running dnx web and having IIS Express use the site settings you configured above as the listening endpoint.

windows, azure, security

I’ve been experimenting with Azure Active Directory Domain Services (currently in preview) and it’s pretty neat. If you have a lot of VMs you’re working with, it helps quite a bit in credential management.

However, it hasn’t all been “fall-down easy.” There are a couple of gotchas I’ve hit that folks may be interested in.

## Active Directory Becomes DNS Control for the Domain

When you join an Azure VM to your domain, you have to set the network for that VM to use the Azure Active Directory as the DNS server. This results in any DNS entries for the domain - for machines on that network - only being resolved by Active Directory.

This is clearer with an example: Let’s say you own a domain and you enable Azure AD Domain Services for it. You also have…

  • A VM named webserver.
  • A cloud service with a public address that’s associated with the VM.

You join webserver to the domain, so the machine now has a full domain name within your domain. You want to expose that machine to the outside (outside the domain, outside of Azure) to serve up your new web application, and it needs to respond at a www address on your domain.

You can add a public DNS entry mapping that www name to the public IP address. You can now get to the site correctly from outside your Azure domain. However, you can’t get to it from inside the domain. Why not?

You can’t because Active Directory is serving DNS inside the domain and there’s no VM named www. It doesn’t proxy external DNS records for the domain, so you’re stuck.

There is not currently a way to manage the DNS for your domain within Azure Active Directory.

Workaround: Rename the VM to match the desired external DNS entry. Which is to say, call the VM www instead of webserver. That way you can reach the same machine using the same DNS name both inside and outside the domain.

## Unable to Set User Primary Email Address

When you enable Azure AD Domain Services you get the ability to start authenticating against joined VMs using your domain credentials. However, if you try managing users with the standard Active Directory MMC snap-ins, you’ll find some things don’t work.

A key challenge is that you can’t set the primary email address field for a user. It’s totally disabled in the snap-in.

This is really painful if you are trying to manage a cloud-only domain. Domain Services sort of assumes that you’re synchronizing an on-premise AD with the cloud AD and that the workaround would be to change the user’s email address in the on-premise AD. However, if you’re trying to go cloud-only, you’re stuck. There’s no workaround for this.

## Domain Services Only Connects to a Single ASM Virtual Network

When you set up Domain Services, you have to associate it with a single virtual network (the vnet your VMs are on), and it must be an Azure Service Manager style network. If you created a vnet with Azure Resource Manager, you’re kinda stuck. If you have ARM VMs you want to join (which must be on ARM vnets), you’re kinda stuck. If you have more than one virtual network on which you want Domain Services, you’re kinda stuck.

Workaround: Join the “primary vnet” (the one associated with Domain Services) to other vnets using VPN gateways.

There is not a clear “step-by-step” guide for how to do this. You need to sort of piece together the information from the various vnet-to-vnet VPN gateway articles in the Azure documentation.

## Active Directory Network Ports Need to be Opened

Just attaching the Active Directory Domain Services to your vnet and setting it as the DNS server may not be enough. Especially when you get to connecting things through VPN, you need to make sure the right ports are open through the network security group or you won’t be able to join the domain (or you may be able to join but you won’t be able to authenticate).

Here’s the list of ports required by all of Domain Services. Which is not to say you need all of them open, just that you’ll want that for reference.

I found that enabling these ports outbound for the network seemed to cover joining and authenticating against the domain. YMMV. There is no specific guidance (that I’ve found) to explain exactly what’s required.

  • LDAP: Any/389
  • LDAP SSL: TCP/636
  • DNS: Any/53
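As a sketch of what opening one of those outbound ports looked like with the classic (ASM) Azure PowerShell module - the network security group name, rule name, and priority here are all made up, so adjust them to your own setup:

```powershell
# Sketch only: assumes the classic Azure (ASM) PowerShell module and an
# existing network security group named "MyNetworkGroup" (hypothetical).
Get-AzureNetworkSecurityGroup -Name "MyNetworkGroup" |
    Set-AzureNetworkSecurityRule `
        -Name "Allow-LDAP-Outbound" `
        -Type Outbound `
        -Priority 100 `
        -Action Allow `
        -SourceAddressPrefix "VIRTUAL_NETWORK" `
        -SourcePortRange "*" `
        -DestinationAddressPrefix "*" `
        -DestinationPortRange "389" `
        -Protocol "*"
```

Repeat with the other ports (636 for LDAP SSL, 53 for DNS) and unique rule names/priorities.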

personal, gaming, toys, xbox

This year for Christmas, Jenn and I decided to get a larger “joint gift” for each other since neither of us really needed anything. That gift ended up being an Xbox One (the Halo 5 bundle), the LEGO Dimensions starter pack, and a few expansion packs.

LEGO Dimensions Starter Pack

Never having played one of these collectible toy games before, I wasn’t entirely sure what to expect beyond similar gameplay to other LEGO video games. We like the other LEGO games so it seemed like an easy win.

LEGO Dimensions is super fun. If you like the other LEGO games, you’ll like this one.

The story is, basically, that a master bad guy is gathering up all the other bad guys from the other LEGO worlds (which come from the licensed LEGO properties like Portal, DC Comics, Lord of the Rings, and so on). Your job is to stop him from taking over these “dimensions” (each licensed property is a “dimension”) by visiting the various dimensions and saving people or gathering special artifacts.

With the starter pack you get Batman, Gandalf, and Wildstyle characters with which you can play the game. These characters will allow you to beat the main story.

So why get expansion packs?

  • There are additional dimensions you can visit that you can’t get to without characters from that dimension. For example, while the main game lets you play through a Doctor Who level, you can’t visit the other Doctor Who levels unless you buy the associated expansion pack.
  • As with the other LEGO games, you can’t unlock certain hidden areas or collectibles unless you have special skills. For example, only certain characters have the ability to destroy metal LEGO bricks. With previous LEGO games you could unlock these characters by beating levels; with LEGO Dimensions you unlock characters by buying the expansion packs.

Picking the right packs to get the best bang for your buck is hard. IGN has a good page outlining the various character abilities, which pack contains each, and some recommendations on which ones will get you the most if you’re starting fresh.

The packs Jenn and I have (after getting some for Christmas and grabbing a couple of extras) are:

  • Portal level pack
  • Back to the Future level pack
  • Emmet fun pack
  • Zane fun pack
  • Gollum fun pack
  • Eris fun pack
  • Wizard of Oz Wicked Witch fun pack
  • Doctor Who level pack
  • Unikitty fun pack

Admittedly, this is a heck of an investment in a game. We’re suckers. We know.

This particular combination of packs unlocks just about everything. There are still things we can’t get to - levels we can’t enter, a few hidden things we can’t reach - but this is a good 90%. Most of the stuff we can’t get to is because there are characters where only that one character has such-and-such ability. For example, Aquaman (for whatever reason) seems to have one or two abilities unique to him for which we’ve run across the need. Unikitty is also a character with unique abilities (which we ended up getting). I’d encourage you as you purchase packs to keep consulting the character ability matrix to determine which packs will best help you.

I have to say… There’s a huge satisfaction in flying the TARDIS around or getting the Twelfth Doctor driving around in the DeLorean. It may make that $15 or whatever worth it.

If you’re a LEGO fan anyway, the packs actually include minifigs and models that are detachable - you can play with them with other standard LEGO sets once you get tired of the video game. It’s a nice dual-purpose that other collectible games don’t provide.

Finally, it’s something fun Jenn and I can play together to do something more interactive than just watch TV. I don’t mind investing in that.

In any case, if you’re looking at one of the collectible toy games, I’d recommend LEGO Dimensions. We’re having a blast with it.

personal

It’s been a busy year, and in particular a pretty crazy last-three-months, so I’m rounding out my 2015 by finally using up my paid time off at work and effectively taking December off.

What that means is I probably won’t be seen on StackOverflow or handling Autofac issues or working on the Autofac ASP.NET 5 conversion.

I love coding, but I also have a couple of challenges if I do that on my time off:

  • I stress out. I’m not sure how other people work, but when I see questions and issues come in I feel like there’s a need I’m not addressing or a problem I need to solve, somehow, immediately right now. Even if that just serves as a backlog of things to prioritize, it’s one more thing on the list of things I’m not checking off. I want to help people and I want to provide great stuff with Autofac and the other projects I work on, but there is a non-zero amount of stress involved with that. It can pretty quickly turn from “good, motivating stress” to “bad, overwhelming stress.” It’s something I work on from a personal perspective, but taking a break from that helps me regain some focus.
  • I lose time. There are so many things I want to do that I don’t have time for. I like sewing and making physical things - something I don’t really get a chance to do in the software world. If I sit down and start coding stuff, pretty soon the day is gone and I may have made some interesting progress on a code-related project, but I just lost a day I could have addressed some of the other things I want to do. Since I code for a living (and am lucky enough to be able to get Autofac time in as part of work), I try to avoid doing much coding on my time off unless it’s helping me contribute to my other hobbies. (For example, I just got an embroidery machine - I may code to create custom embroidery patterns.)

I don’t really take vacation time during the year so I often end up in a “use it or lose it” situation come December, which works out well because there are a ton of holidays to work around anyway. Why not de-stress, unwind, and take the whole month off?

I may even get some time to outline some of the blog entries I’ve been meaning to post. I’ve been working on some cool stuff from Azure to Roslyn code analyzers, not to mention the challenges we’ve run into with Autofac/ASP.NET 5. I’ve just been slammed enough that I haven’t been able to get those out. We’ll see. I should at least start keeping a list.

halloween, costumes

It was raining again this year and that definitely took down the number of visitors. Again this year we also didn’t put out our “Halloween projector” that puts a festive image on our garage. In general, it was pretty slow all around. I took Phoenix out this year while Jenn answered the door so I got to see what was out there firsthand. Really hardly anyone out there this year.


Total trick-or-treaters for 2015: 85.

Average Trick-or-Treaters by Time Block

The table’s also starting to get pretty wide; might have to switch it so time block goes across the top and year goes down.

Cumulative data:

| Year | 6:00p - 6:30p | 6:30p - 7:00p | 7:00p - 7:30p | 7:30p - 8:00p | 8:00p - 8:30p | Total |
|------|---------------|---------------|---------------|---------------|---------------|-------|
| 2006 | 52 | 59 | 35 | 16 | 0 | 162 |
| 2007 | 5 | 45 | 39 | 25 | 21 | 139 |
| 2008 | 14 | 71 | 82 | 45 | 25 | 237 |
| 2009 | 17 | 51 | 72 | 82 | 21 | 243 |
| 2010 | 19 | 77 | 76 | 48 | 39 | 259 |
| 2011 | 31 | 80 | 53 | 25 | 0 | 189 |
| 2013 | 28 | 72 | 113 | 80 | 5 | 298 |
| 2014 | 19 | 54 | 51 | 42 | 10 | 176 |
| 2015 | 13 | 14 | 30 | 28 | 0 | 85 |


My costume this year was Robin Hood. Jenn was Merida from Brave so we were both archers. Phoenix had two costumes - for trick-or-treating at Jenn’s work she was a bride with a little white dress and veil; for going out in the neighborhood she was a ninja.

The finished Robin Hood costume

Costume with the cloak closed

I posted some in-progress pictures of my costume on social media, but as part of the statistical breakdown of Halloween this year I thought it’d be interesting to dive into more of exactly what went into the making outside of the time and effort - actual dollars put in.

On my costume, I made the shirt, the doublet, the pants, and the cape. I bought the boots, the tights, and the bow.

###Accessories and Props

Let’s start with the pieces I bought:

Total: $99.10

###The Shirt

The shirt is made of a gauzy fabric that was pretty hard to work with. The pattern wasn’t super helpful, either - a single “step” in the instructions would often consist of several distinct actions.

Confusing shirt pattern

I did learn how to use an “even foot” (sometimes called a “walking foot”) on our sewing machine, which was a new thing for me.

Even foot on the sewing machine

  • Shirt, doublet, and pants pattern - $10.17
  • Gauze fabric - $6.59
  • Thread - $3.29
  • Buttons - $5.99
  • Interfacing - $0.62

Total: $26.66

###The Pants

I don’t have any in-progress shots of the pants being made, but they were pretty simple pants. I will say I thought I should make the largest size due to my height… but then the waist turned out pretty big, so I had to do some adjustments to make them fit. Even after adjusting they were still pretty big. I should probably have adjusted further but ran out of time.

  • Shirt, doublet, and pants pattern - (included in shirt cost)
  • Black gabardine fabric - $23.73
  • Thread - $4.00
  • Buttons - $1.90
  • Eyelets - $2.39
  • Ribbon - $1.49
  • Interfacing - (I had some already for this)

Total: $33.51

###The Doublet

The doublet was interesting to make. It had a lot of pieces, but they came together really well and I learned a lot while doing it. Did you know those little “flaps” on the bottom are called “peplum?” I didn’t.

I hadn’t really done much with adding trim, so this was a learning experience. For example, this trim had a sort of “direction” or “grain” to it - if you sewed with the “grain,” it went on really smoothly. If you went against the “grain,” the trim would get all caught up on the sewing machine foot. I also found that sewing trim to the edge of a seam is really hard on thick fabric so I ended up adding a little margin between the seam and the trim.

Putting trim on the body of the doublet

These are the peplums that go around the bottom of the doublet. You can see the trim pinned to the one on the right.

Sewing peplums

Once all the peplums were done, I pinned and machine basted them in place. Getting them evenly spaced was a challenge, but it turned out well.

Pinning peplums

After the machine basting, I ran the whole thing through the serger which gave them a strong seam and trimmed off the excess. This was the first project I’d done with a serger and it’s definitely a time saver. It also makes finished seams look really professional.

Serging peplums

To cover the seam where the peplums are attached, the lining in the doublet gets hand sewn over the top. There was a lot of hand sewing in this project, which was the largest time sink.

Slip-stitching the doublet lining

Here’s the finished doublet.

The finished doublet

  • Shirt, doublet, and pants pattern - (included in shirt cost)
  • Quilted fabric (exterior) - $15.73
  • Brown broadcloth fabric (lining) - $5.99
  • Thread - $8.29
  • Eyelets - $8.28
  • Trim - $23.95
  • Leather lacing - $2.50
  • Interfacing - $6.11

Total: $70.85

###The Cape

The cape was the least complex of the things to make but took the most time due to the size. Just laying out and cutting the pattern pieces took a couple of evenings.

As you can see, I had to lay them out in our hallway.

Cutting the exterior cape pieces

I learned something while cutting the outside of the cape: the pattern was a little confusing in that the diagrams of how the pieces should be laid out were inconsistent with the notation they described. As a result I cut one of the pattern pieces backwards, and Jenn was kind enough to go back to the fabric store all the way across town and get the last bit of fabric from the bolt. I was very lucky there was enough to re-cut the piece the correct way.

I used binder clips on the edges in an attempt to stop the two fabric layers from slipping around. It was mildly successful.

Cutting the cape lining

I found with the serger I had to keep close track of the tension settings to make sure the seams were sewn correctly. Depending on the thread and weight of the fabric being sewn, I had to tweak some things.

To help me remember settings, I took photos with my phone of the thread, the fabric being sewn, and the dials on the serger so I’d know exactly what was set.

Here are the settings for sewing together two layers of cape lining.

Cape lining serger settings

And the settings for attaching the lining to the cape exterior.

Serger settings for attaching lining to cape exterior

I took a shot of my whole work area while working on the cape. It gets pretty messy, especially toward the end of a project. I know where everything is, though.

You can also see I’ve got things set up so I can watch TV while I work. I got through a couple of different TV seasons on Netflix during this project.

My messy work area

One of the big learning things for me with this cape was that with a thicker fabric it’s hard to get the seams to lay flat. I ironed the junk out of that thing and the edge seams still came out rounded and puffy. I had to edgestitch the seams to make sure they laid flat.

Edge stitching the cape hem

  • Green suedecloth (exterior) - $55.82
  • Gold satin (lining) - $40.46
  • Dark green taffeta (hood lining) - $5.99
  • Interfacing - (I had some already for this)
  • Thread - $11.51
  • Silver “conchos” (the metal insignias on the neck) - $13.98
  • Scotchgard (for waterproofing) - $5.99

Total: $133.75

###Total

  • Accessories and props - $99.10
  • Shirt - $26.66
  • Pants - $33.51
  • Doublet - $70.85
  • Cape - $133.75

Total: $363.87

That’s probably reasonably accurate. I know I had some coupons where I saved some money on fabric (you definitely need to be watching for coupons!) so the costs on those may be lower, but I also know I had to buy some incidentals like more sewing machine needles after I broke a couple, so it probably roughly balances out.

I get a lot of folks asking why I don’t just rent a costume. Obviously from a money and time perspective it’d be a lot cheaper to do that.

The thing is… I really like making the costume. I’m a software engineer, so generally when I “make something,” it’s entirely intangible - it’s electronic bits stored somewhere that make other bits do things. I don’t come out with something I can hold in my hands and say I did this. When I make a shirt or a costume or whatever, there’s something to be said for having a physical object and being able to point to it and be proud of the craftsmanship that went into it. It’s a satisfying feeling.

home comments edit

At home, especially after a long day, I’ve noticed my phone may be low on power even though I’d like to continue using it. All of our chargers are in rooms other than the living room where we spend most of our time and I didn’t want to move one into the living room because I didn’t want cords all over the place or a bajillion different things to plug in.

I finally figured out the answer.


Charger, cables, and adhesive

##Assembly

Use one of the Command strips to affix the USB charger under the lip on the back of your end table. I went with Command strips because they’re reasonably strong but generally won’t ruin the finish on your table, since they can be easily removed.

Plug one or more of the one-foot cables into the charger.

If you get the Sabrent USB cables I mentioned earlier, they have a little Velcro tie on them you can use to your benefit. Put a bit of adhesive Velcro under a nearby spot on the table and slide the cable’s Velcro tie down to the plug end. You can then attach the end of the cable within easy reach of your couch using the Velcro.

Charger attached to the back of the table

##Usage

The nice thing about this is it’s entirely unobtrusive. Charge your phone on the end table while you’re sitting and watching TV; when you’re done, you can drop the cable back behind the table (it’s only a foot long, so it won’t drag on the ground or look messy), or if you have the Velcro you can affix the cable under the lip of the table within easy reach for the next use.

Charging a phone

net, aspnet, build, autofac comments edit

We recently released Autofac 4.0.0-beta8-157 to NuGet to coincide with the DNX beta 8 release. As part of that update, we re-added the classic PCL target .NETPortable,Version=v4.5,Profile=Profile259 (which is portable-net45+dnxcore50+win+wpa81+wp80+MonoAndroid10+Xamarin.iOS10+MonoTouch10) because older VS versions and some project types were having trouble finding a compatible version of Autofac 4.0.0 - they didn’t rectify the dotnet target framework as a match.

If you’re not up on the dotnet target framework moniker, Oren Novotny has some great articles that help a lot.

I’m now working on a beta 8 compatible version of Autofac.Configuration. For beta 7 we’d targeted dnx451, dotnet, and net45. I figured we could just update to start using Autofac 4.0.0-beta8-157, rebuild, and call it good.
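For reference, the frameworks section of our beta 7 Autofac.Configuration project.json looked something like the following. This is a sketch from memory - the real dotnet dependency list is much longer, and the versions here are illustrative:

```json
{
  "frameworks": {
    "net45": { },
    "dnx451": { },
    "dotnet": {
      "dependencies": {
        "System.Collections": "4.0.11-beta-23409"
      }
    }
  }
}
```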

Instead, I started getting a lot of build errors when targeting the dotnet framework moniker.

Building Autofac.Configuration for .NETPlatform,Version=v5.0
  Using Project dependency Autofac.Configuration 4.0.0-beta8-1
    Source: E:\dev\opensource\Autofac\Autofac.Configuration\src\Autofac.Configuration\project.json

  Using Package dependency Autofac 4.0.0-beta8-157
    Source: C:\Users\tillig\.dnx\packages\Autofac\4.0.0-beta8-157
    File: lib\dotnet\Autofac.dll

  Using Package dependency Microsoft.Framework.Configuration 1.0.0-beta8
    Source: C:\Users\tillig\.dnx\packages\Microsoft.Framework.Configuration\1.0.0-beta8
    File: lib\dotnet\Microsoft.Framework.Configuration.dll

  Using Package dependency Microsoft.Framework.Configuration.Abstractions 1.0.0-beta8
    Source: C:\Users\tillig\.dnx\packages\Microsoft.Framework.Configuration.Abstractions\1.0.0-beta8
    File: lib\dotnet\Microsoft.Framework.Configuration.Abstractions.dll

  Using Package dependency System.Collections 4.0.11-beta-23409
    Source: C:\Users\tillig\.dnx\packages\System.Collections\4.0.11-beta-23409
    File: ref\dotnet\System.Collections.dll

  (...and some more package dependencies that got resolved, then...)

  Unable to resolve dependency fx/System.Collections

  Unable to resolve dependency fx/System.ComponentModel

  Unable to resolve dependency fx/System.Core

  (...and a lot more fx/* items unresolvable.)

This was, at best, confusing. I mean, in the same target framework, I see these two things together:

  Using Package dependency System.Collections 4.0.11-beta-23409
    Source: C:\Users\tillig\.dnx\packages\System.Collections\4.0.11-beta-23409
    File: ref\dotnet\System.Collections.dll

  Unable to resolve dependency fx/System.Collections

So it found System.Collections, but it didn’t find System.Collections. Whaaaaaaa?!

After a lot of searching (with little success) I found David Fowler’s indispensable article on troubleshooting dependency issues in ASP.NET 5. This led me to the dnu list --details command, where I saw this:

[Target framework .NETPlatform,Version=v5.0 (dotnet)]

Framework references:
  fx/System.Collections  - Unresolved
    by Package: Autofac 4.0.0-beta8-157

  fx/System.ComponentModel  - Unresolved
    by Package: Autofac 4.0.0-beta8-157

  (...and a bunch more of these...)

Package references:
* Autofac 4.0.0-beta8-157
    by Project: Autofac.Configuration 4.0.0-beta8-1

* Microsoft.Framework.Configuration 1.0.0-beta8
    by Project: Autofac.Configuration 4.0.0-beta8-1

  Microsoft.Framework.Configuration.Abstractions 1.0.0-beta8
    by Package: Microsoft.Framework.Configuration 1.0.0-beta8

  System.Collections 4.0.11-beta-23409
    by Package: Autofac 4.0.0-beta8-157...
    by Project: Autofac.Configuration 4.0.0-beta8-1

  (...and so on.)

Hold up - Autofac 4.0.0-beta8-157 needs both the framework assembly and the dependency package for System.Collections?

Looking in the generated .nuspec file for the updated core Autofac, I see:

<?xml version="1.0"?>
<package xmlns="">
    <!-- ... -->
      <group targetFramework="DNX4.5.1" />
      <group targetFramework=".NETPlatform5.0">
        <dependency id="System.Collections" version="4.0.11-beta-23409" />
        <dependency id="System.Collections.Concurrent" version="4.0.11-beta-23409" />
        <!-- ... -->
      <group targetFramework=".NETFramework4.5" />
      <group targetFramework=".NETCore4.5" />
      <group targetFramework=".NETPortable4.5-Profile259" />
      <!-- ... -->
      <frameworkAssembly assemblyName="System.Collections" targetFramework=".NETPortable4.5-Profile259" />
      <frameworkAssembly assemblyName="System.ComponentModel" targetFramework=".NETPortable4.5-Profile259" />
      <frameworkAssembly assemblyName="System.Core" targetFramework=".NETPortable4.5-Profile259" />
      <frameworkAssembly assemblyName="System.Diagnostics.Contracts" targetFramework=".NETPortable4.5-Profile259" />
      <frameworkAssembly assemblyName="System.Diagnostics.Debug" targetFramework=".NETPortable4.5-Profile259" />
      <frameworkAssembly assemblyName="System.Diagnostics.Tools" targetFramework=".NETPortable4.5-Profile259" />
      <!-- ... -->

The list of failed fx/* dependencies is exactly the same as the list of frameworkAssembly references that target .NETPortable4.5-Profile259 in the .nuspec.

By removing the dotnet target framework moniker from Autofac.Configuration and compiling for specific targets, everything resolves correctly.
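In project.json terms, that meant replacing the dotnet entry in the frameworks section with the specific targets. A rough sketch - the exact target list and dependencies for your project will vary:

```json
{
  "frameworks": {
    "net45": { },
    "dnx451": { },
    "dnxcore50": {
      "dependencies": {
        "System.Collections": "4.0.11-beta-23409"
      }
    }
  }
}
```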

What I originally thought was that dotnet indicated, basically, “I support what my dependencies support,” which I took to mean, “we’ll figure out the lowest common denominator of all the dependencies and that’s the set of stuff this supports.”

What dotnet appears to actually mean is, “I support the superset of everything my dependencies support.”

The reason I take that away is that the Microsoft.Framework.Configuration 1.0.0-beta8 package targets net45, dnx451, dnxcore, and dotnet - but it doesn’t specifically support .NETPortable,Version=v4.5,Profile=Profile259. I figured Autofac.Configuration, targeting dotnet, would rectify to support the common frameworks that both core Autofac and Microsoft.Framework.Configuration support… which would mean none of the <frameworkAssembly /> references targeting .NETPortable4.5-Profile259 would need to be resolved to build Autofac.Configuration.

Since they do, apparently, need to be resolved, I have to believe dotnet implies superset rather than subset.

This appears to mostly just be a gotcha if you have a dependency that targets one of the older PCL framework profiles. If everything down the stack just targets dotnet it seems to magically work.

If you’d like to try this and see it in action, check out the Autofac.Configuration repo at 14c10b5bf6 and run the build.ps1 build script.

autofac, net comments edit

As part of DNX RC1, the Microsoft.Framework.* packages are getting renamed to Microsoft.Extensions.*.

The Autofac.Framework.DependencyInjection package was named to follow along with the pattern established by those libraries: Microsoft.Framework.DependencyInjection -> Autofac.Framework.DependencyInjection.

With the RC1 rename of the Microsoft packages, we’ll be updating the name of the Autofac package to maintain consistency: Autofac.Extensions.DependencyInjection. This will happen for Autofac as part of beta 8.

We’ll be doing the rename as part of the beta 8 drop since RC1 appears to have been pushed out by a week and we’d like to get a jump on things. For beta 8 we’ll still refer to the old Microsoft dependency names to maintain compatibility, but you’ll have a new Autofac dependency. Then when RC1 hits, you won’t have to change the Autofac dependency because it’ll already be in line.
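Concretely, the change in a consuming project’s project.json should just be the Autofac dependency rename. A hypothetical beta 8 dependencies section (versions illustrative):

```json
{
  "dependencies": {
    "Autofac": "4.0.0-beta8-*",
    "Autofac.Extensions.DependencyInjection": "4.0.0-beta8-*",
    "Microsoft.Framework.DependencyInjection": "1.0.0-beta8"
  }
}
```

When RC1 arrives, only the Microsoft.Framework.* entries should need to change over to Microsoft.Extensions.*.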

You can track the rename on Autofac issue #685.

powershell, vs, net comments edit

I love PowerShell, but I do a lot with the Visual Studio developer prompt and that’s still good old cmd.

Luckily, you can make your PowerShell prompt also a Visual Studio prompt.

First, go get the PowerShell Community Extensions. If you’re a PowerShell user you should probably have these already. You can either get them from the CodePlex site directly or install via Chocolatey.

Now, in your PowerShell profile (stored at %UserProfile%\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1) add the following:

Import-Module Pscx

# Add Visual Studio 2015 settings to PowerShell
Import-VisualStudioVars 140 x86

# If you'd rather use VS 2013, comment out the
# above line and use this one instead:
# Import-VisualStudioVars 2013 x86

Next time you open a PowerShell prompt, it’ll automatically load up the Visual Studio variables to also make it a Visual Studio prompt.

Take this to the next level by getting a “PowerShell Prompt Here” context menu extension and you’re set!

net, build, tfs comments edit

I’m working with some builds in Visual Studio Online which, right now, is basically Team Foundation Server 2015. As part of that, I wanted to do some custom work in my build script but only if the build was running in TFS.

The build runs using the Visual Studio Build build step. I couldn’t find any documentation for what additional/special environment variables were available, so I ran a small build script like this…

<Exec Command="set" />

…and got the environment variables. For reference, here’s what you get when you run the Visual Studio Build task on an agent with Visual Studio 2015:

AGENT_NAME=Hosted Agent
BUILD_DEFINITIONNAME=Continuous Integration
BUILD_QUEUEDBY=[DefaultCollection]\Project Collection Service Accounts
CommonProgramFiles=C:\Program Files (x86)\Common Files
CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files
CommonProgramW6432=C:\Program Files\Common Files
GTK_BASEPATH=C:\Program Files (x86)\GtkSharp\2.12\
Path=C:\LR\MMS\Services\Mms\TaskAgentProvisioner\Tools\agent\worker\Modules\Microsoft.TeamFoundation.DistributedTask.Task.Internal\NativeBinaries\amd64;C:\ProgramData\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\;C:\Program Files\Microsoft SQL Server\100\Tools\Binn\;C:\Program Files\Microsoft SQL Server\100\DTS\Binn\;C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\;C:\Program Files\Microsoft SQL Server\110\Tools\Binn\;C:\Program Files (x86)\Microsoft SDKs\TypeScript\1.0\;C:\Program Files\Microsoft SQL Server\120\Tools\Binn\;C:\Users\VssAdministrator\.dnx\bin;C:\Program Files\Microsoft DNX\Dnvm\;C:\Program Files (x86)\Windows Kits\10\Windows Performance Toolkit\;C:\Program Files (x86)\Microsoft Emulator Manager\1.0\;C:\Program Files (x86)\GtkSharp\2.12\bin;C:\Program Files (x86)\Microsoft SDKs\TypeScript\1.4\;C:\Program Files (x86)\Git\bin;C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\ManagementStudio\;C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\;C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\PrivateAssemblies\;C:\Program Files (x86)\Microsoft SQL Server\110\DTS\Binn\;C:\Program Files (x86)\Microsoft SQL Server\120\Tools\Binn\ManagementStudio\;C:\Program Files (x86)\Microsoft SQL Server\120\Tools\Binn\;C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\PrivateAssemblies\;C:\Program Files (x86)\Microsoft SQL Server\120\DTS\Binn\;C:\Program Files\Microsoft\Web Platform Installer\;C:\NPM\Modules;C:\Program Files\nodejs\;C:\NPM\Modules;C:\cordova;C:\java\ant\apache-ant-1.9.4\bin;
PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 45 Stepping 7, GenuineIntel
ProgramFiles=C:\Program Files (x86)
ProgramFiles(x86)=C:\Program Files (x86)
ProgramW6432=C:\Program Files
PSModulePath=C:\Users\buildguest\Documents\WindowsPowerShell\Modules;C:\Program Files\WindowsPowerShell\Modules;C:\Windows\system32\WindowsPowerShell\v1.0\Modules\;C:\Program Files\SharePoint Online Management Shell\;C:\Program Files (x86)\Microsoft SQL Server\110\Tools\PowerShell\Modules\;C:\Program Files (x86)\Microsoft SQL Server\120\Tools\PowerShell\Modules\;C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ServiceManagement;C:\LR\MMS\Services\Mms\TaskAgentProvisioner\Tools\agent\worker\Modules
VS100COMNTOOLS=C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\Tools\
VS110COMNTOOLS=C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\Tools\
VS120COMNTOOLS=C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\Tools\
VS140COMNTOOLS=C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\Tools\
VSSDK110Install=C:\Program Files (x86)\Microsoft Visual Studio 11.0\VSSDK\
VSSDK120Install=C:\Program Files (x86)\Microsoft Visual Studio 12.0\VSSDK\
VSSDK140Install=C:\Program Files (x86)\Microsoft Visual Studio 14.0\VSSDK\
WIX=C:\Program Files (x86)\WiX Toolset v3.7\
XNAGSShared=C:\Program Files (x86)\Common Files\Microsoft Shared\XNA\

Note that BUILDCONFIGURATION and BUILDPLATFORM are parameters to the Visual Studio Build task. You’ll see that text right in the TFS build system dashboard.

The BUILD_BUILDNUMBER value is the one I’m most interested in - that’s what I can key off to set the assembly version. TF_BUILD seems to be what you use to determine if the build is running in TFS.
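Putting that together, here’s a minimal sketch of an MSBuild target that only runs under TFS. The target name and message are made up, and I’m only assuming TF_BUILD is set to a non-empty value on the build agent:

```xml
<Target Name="ShowTfsBuildInfo"
        BeforeTargets="Build"
        Condition="'$(TF_BUILD)' != ''">
  <!-- Only runs when the TF_BUILD environment variable is present,
       i.e., when the build is running in TFS rather than locally. -->
  <Message Importance="high"
           Text="TFS build detected; build number: $(BUILD_BUILDNUMBER)" />
</Target>
```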

autofac, net comments edit

In the continuing journey toward the Autofac 4.0.0 release, some integration/extras packages have been released:

  • Autofac.Wcf 4.0.0: Doesn’t require Autofac 4.0 but is tested to be compatible with it. .NET 4.0 framework support is removed; minimum is now .NET 4.5. No interface changes were made - the major semver increment is for the .NET compatibility break. AllowPartiallyTrustedCallers and other pre-4.5 security markings are also removed.
  • Autofac.Mef 4.0.0: Doesn’t require Autofac 4.0 but is tested to be compatible with it. .NET 4.0 framework support is removed; minimum is now .NET 4.5. No interface changes were made - the major semver increment is for the .NET compatibility break. AllowPartiallyTrustedCallers and other pre-4.5 security markings are also removed.
  • Autofac.Extras.AttributeMetadata 4.0.0: This is a rename from Autofac.Extras.Attributed to Autofac.Extras.AttributeMetadata to express what it actually attributes. Doesn’t require Autofac 4.0 but is tested to be compatible with it. Requires Autofac.Mef 4.0 due to the security attribute changes and minimum .NET 4.5 requirement.
  • Autofac.Multitenant.Wcf 4.0.0-beta7-223: The WCF component compatible with Autofac.Multitenant 4.0 beta. This will stay in beta until Autofac.Multitenant can also be fully released. This is a rename from Autofac.Extras.Multitenant.Wcf to match the rename of Autofac.Extras.Multitenant to Autofac.Multitenant. Requires Autofac.Wcf 4.0 due to the security attribute changes and minimum .NET 4.5 requirement.

There’s a lot going on to keep up with DNX compatibility (where possible) and to check and test existing integration packages to ensure they still work with Autofac 4. For the most part, it seems just updating things to use .NET 4.5 instead of .NET 4.0 allows us to retain forward-compatibility, though in some cases the code access security changes require additional work.

We’re working hard on it. Watch our Twitter feed for announcements!

autofac, net comments edit

I just pushed Autofac.Multitenant 4.0.0-beta8-216 to NuGet.

This update includes some breaking changes from the previous Autofac.Extras.Multitenant package:

  • Multitenant support has been promoted to first-class. The package is now Autofac.Multitenant and not Autofac.Extras.Multitenant. (Note the “Extras” part is gone. We always talked about how, at some point, an “Extras” package might become a core feature, and we figured it was time to finally do that.)
  • This package requires Autofac 4.0.0-beta8-* and up because…
  • The multitenant support has been updated to support the same set of portable platforms that core Autofac supports (dnx451, dotnet, net45, netcore45).
  • This builds on top of DNX beta 8.

If you’re using DNX and want to try out multitenancy, give this a shot. If you find any issues, please let us know at the Autofac.Multitenant repo!

personal, costumes comments edit

I normally don’t sew too much outside of Halloween, when it becomes more an excuse to set aside time to make cool stuff than anything else. However, I saw this “Exploding TARDIS” fabric at Jo-Ann the other day and had to make something out of it.

Since my daughter Phoenix is a Whovian like myself, I figured I’d make her a little dress. I went with Butterick “See & Sew” pattern B5443 because it was a whopping $3 and it was a fairly simple thing. I also got some shiny blue lining to go in it.

The exploding TARDIS fabric, lining, and pattern

The most time consuming part was, as usual, pinning and cutting all the pieces.

Cutting the main pieces

The back closes with a zipper, which is usually a painful experience but actually went smoothly this time. Here I’m pinning it in…

Pinning the zipper

…and here it’s finished.

The finished zipper

I did a bit of a modification and let the blue lining hang slightly below the body of the dress for a peek of that shiny blue. I also finished the waist with a ribbon that has gold sun, moon, and stars printed on it.

The 'exploding TARDIS' dress

And here is the proud four-year-old Whovian in her new exploding TARDIS dress.

Phoenix in her new dress

All told it took about a day and a half from start to finish. I really started around 10 or 11 on Saturday, ran it through until around 10 Saturday night, and finished up in a couple of hours on Sunday morning. Not too bad.

This was the first thing I’ve made since I got Jenn a Brother 1034D serger for Christmas, and let me tell you - the serger makes all the difference. The seams come out very professional looking and the garment has a much more “store bought” quality to it.

I bought enough of the fabric to make myself a shirt using Vogue pattern 8800. I’ve used that pattern before and it comes out well, if a bit snug, so this next go-round with it I’ll make one size larger for some breathing room.

synology, security comments edit

In November of last year I set up a PPTP VPN on my Synology NAS so I could do more secure browsing on public wifi. Since then, I’ve updated my setup to use OpenVPN and made the connection a lot easier.

I based the steps to get my connection working on this forum post, but I didn’t do quite as much of the extra work with the certificates.

Assuming you’ve got the VPN package installed and ready to go on your Synology NAS (which I walk through in the previous article), the next steps to get OpenVPN going are:

  • Open the VPN Server application in the Diskstation Manager.
  • Enable the OpenVPN protocol by checking the box. Leave everything else as default.
  • At the bottom of the OpenVPN panel, click “Export Configuration.” This will give you the profile you’ll need for your devices to connect to the VPN.
  • In the Control Panel, go to the “Security” tab. On the “certificate” panel, click “Export Certificate.” Save that somewhere and call it ca.crt. This is a little different than what I was expecting - I had hoped the certificates that come in the OpenVPN zip file (when you export that configuration) would just work, but it turns out I needed to get this particular certificate. YMMV on this.
  • Just like with the PPTP VPN, make sure the firewall has a rule to allow port 1194 (the OpenVPN port) through. You also need to create a port forwarding rule for port 1194 with your router. You can see how to do this in my other article.

You should have OpenVPN up and running. That part, at least for me, was the easiest part. The harder part was getting my Android phone connected to it and trying to automate that.

First things first, let’s get connected.

Install the OpenVPN Connect app for Android. There are several OpenVPN apps out there; I use this one and the rest of my article will assume you do, too. The app is free, so there’s no risk if you don’t like it.

Open the zip file of exported OpenVPN configuration you got from the Synology and pull out the openvpn.ovpn file. Pop that open in a text editor and make sure that…

  • The remote line at the top points to your public DNS entry for your Synology, like or whatever you set up.
  • The ca line has ca.crt in it.

Here’s what it should generally look like. I’ve left the comments in that are there by default.

dev tun

remote 1194

# The "float" tells OpenVPN to accept authenticated packets from any address,
# not only the address which was specified in the --remote option.
# This is useful when you are connecting to a peer which holds a dynamic address
# such as a dial-in user or DHCP client.
# (Please refer to the manual of OpenVPN for more information.)


# If redirect-gateway is enabled, the client will redirect it's
# default network gateway through the VPN.
# It means the VPN connection will firstly connect to the VPN Server
# and then to the internet.
# (Please refer to the manual of OpenVPN for more information.)


# dhcp-option DNS: To set primary domain name server address.
# Repeat this option to set secondary DNS server addresses.

#dhcp-option DNS DNS_IP_ADDRESS


# If you want to connect by Server's IPv6 address, you should use
# "proto udp6" in UDP mode or "proto tcp6-client" in TCP mode
proto udp

script-security 2

ca ca.crt


reneg-sec 0


Put the ca.crt certificate you exported and the openvpn.ovpn file on your Android device. Make sure it’s somewhere you can find later.

Open the OpenVPN Connect app and select “Import Profile.” Select the openvpn.ovpn file you pushed over. Magic should happen and you will see your VPN show up in the app.

Now’s a good time to test the connection to your VPN. Enter your username and password into the OpenVPN Connect app, check the Save button to save your credentials, and click the “Connect” button. It should find your VPN and connect. When you connect you may see a little “warning” icon saying network communication could be monitored by a third-party - that’s Android seeing your Synology’s certificate. You should also see OpenVPN Connect telling you you’re connected.

OpenVPN Connect showing the connection is active

It’s important to save your credentials in OpenVPN Connect or the automation of connecting to the VPN later will fail.

If you’re not able to connect, it could be a number of different things. Troubleshooting this is the biggest pain of the whole thing. Feel good if things worked the first time; I struggled figuring out all the certificates and such. Things to check:

  • Did you enter your username/password correctly using an account defined on the Synology?
  • Does the account you used have permissions to the VPN? (By default it should, but you may be trying to use a limited access account, so check that.)
  • Did the router port forwarding get set up?
  • Did the firewall rule get set up?
  • Is your dynamic DNS entry working?
  • Is the ca.crt in the same folder on your Android device as the openvpn.ovpn file?
  • If that ca.crt isn’t working, did you try the one that came in the zip file with the OpenVPN configuration you exported? (The one in that zip didn’t work for me, but it might work for you.)
  • Consider trying the instructions in this forum post to embed the certificate info right in the openvpn.ovpn file.
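If you do go the embedded-certificate route, the general idea is to replace the ca ca.crt line in openvpn.ovpn with the certificate contents inline. Roughly like this - the certificate body is a placeholder for your own ca.crt contents:

```
# Instead of "ca ca.crt", paste the certificate inline:
<ca>
-----BEGIN CERTIFICATE-----
(contents of ca.crt go here)
-----END CERTIFICATE-----
</ca>
```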

From here on out, I assume you can connect to your VPN.

Now we want to make it so you connect automatically to the VPN when you’re on a wifi network that isn’t your own. I even VPN in when I’m on a “secure” network like at a hotel where you need a password because, well, there are a lot of people on there with you and do you trust them all? I didn’t think so.

Install the Tasker app for Android. This one will cost you $3 but it’s $3 well spent. Tasker helps you automate things on your Android phone and you don’t even need root access.

I found the instructions for setting up Tasker with OpenVPN Connect over on the OpenVPN forums via a reddit thread. I’ll put them here for completeness, but total credit to the folks who originally figured this out.

The way Tasker works is this: You create “tasks” to run on your phone, like “show an alert” or “send an email to Mom.” You then set up “contexts” so Tasker knows when to run your tasks. A “context” is like “when I’m at this location” or “when I receive an SMS text message” - it’s a condition that Tasker can recognize to raise an event and say, “run a task now!” Finally, you can tie multiple “contexts” together with “tasks” in a profile - “when I’m at this location AND I receive an SMS text message THEN send an email to Mom.”

We’re going to set up a task to connect to the VPN when you’re on a network not your own and then disconnect from the VPN when you leave the network.

You need to know the name of your OpenVPN Connect profile - the text that shows at the top of OpenVPN Connect when you’re logging in. For this example, let’s say it’s [openvpn].

  1. Create a new task in Tasker. (You want to create the task first because it’s easier than doing it in the middle of creating a profile.)
    1. Call the task Connect To Home VPN.
    2. Use System -> Send Intent as the action.
    3. Fill in the Send Intent fields like this (it is case-sensitive, so be exact; also, these are all just one line, so if you see line wraps, ignore that):
      • Action: android.intent.action.VIEW
      • Category: None
      • Mime Type:
      • Data:
      • Extra: net.openvpn.openvpn.AUTOSTART_PROFILE_NAME: [openvpn]
      • Extra:
      • Extra:
      • Package: net.openvpn.openvpn
      • Class: net.openvpn.openvpn.OpenVPNClient
      • Target: Activity
  2. Create a second new task in Tasker.
    1. Call the task Disconnect From Home VPN.
    2. Use System -> Send Intent as the action.
    3. Fill in the Send Intent fields like this (it is case-sensitive, so be exact; also, these are all just one line, so if you see line wraps, ignore that):
      • Action: android.intent.action.VIEW
      • Category: None
      • Mime Type:
      • Data:
      • Extra:
      • Extra:
      • Extra:
      • Package: net.openvpn.openvpn
      • Class: net.openvpn.openvpn.OpenVPNDisconnect
      • Target: Activity
  3. Create a new profile in Tasker and add a context.
    1. Use State -> Net -> Wifi Connected as the context.
    2. In the SSID field put the SSID of your home/trusted network. If you have more than one, separate with slashes like network1/network2.
    3. Check the Invert box. You want the context to run when you’re not connected to these networks.
  4. When asked for a task to associate with the profile, select Connect To Home VPN.
  5. On the home screen of Tasker you should see the name of the profile you created and, just under that, a “context” showing something like Not Wifi Connected network1/network2.
  6. Long-press on the context and it’ll pop up a menu allowing you to add another context.
    1. Use State -> Net -> Wifi Connected as the context.
    2. Leave all the other fields blank and do not check the Invert box.
  7. On the home screen of Tasker you should now see the profile has two contexts - one for Not Wifi Connected network1/network2 and one for Wifi Connected *,*,*. This profile will match when you’re on a wifi network that isn’t in your “whitelist” of trusted networks. Next to the contexts you should see a little green arrow pointing to Connect To Home VPN - this means when you’re on a wifi network not in your “whitelist” the VPN connection will run.
  8. Long-press on the Connect To Home VPN task next to those contexts. You’ll be allowed to add an “Exit Task.” Do that.
  9. Select the Disconnect From Home VPN task you created as the exit task. Now when you disconnect from the untrusted wifi network, you’ll also disconnect from the VPN.

You can test the Tasker tasks out by going to the “Tasks” page in Tasker and running each individually. Running the Connect To Home VPN task should quickly run OpenVPN Connect, log you in, and be done. Disconnect From Home VPN should log you out.
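If you want to sanity-check the intent fields outside of Tasker, the same intents can be fired from a computer with adb. This is a hedged dry-run sketch, not part of the original setup - it just builds and prints the commands that mirror the task fields above (the profile name openvpn is the example placeholder; substitute your own):

```shell
PROFILE="openvpn"  # the example profile name from above - substitute yours

# Mirrors the "Connect To Home VPN" task fields (action, extra, package/class).
CONNECT="adb shell am start -a android.intent.action.VIEW -e net.openvpn.openvpn.AUTOSTART_PROFILE_NAME $PROFILE -n net.openvpn.openvpn/.OpenVPNClient"

# Mirrors the "Disconnect From Home VPN" task fields (no extra needed).
DISCONNECT="adb shell am start -a android.intent.action.VIEW -n net.openvpn.openvpn/.OpenVPNDisconnect"

# Dry run: just print the commands. Run them for real against a
# USB-connected device by pasting the printed lines into a terminal.
echo "$CONNECT"
echo "$DISCONNECT"
```

If the adb versions work but the Tasker tasks don’t, the problem is a typo in the Tasker fields rather than in OpenVPN Connect.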

If you’re unable to get the Connect To Home VPN task working, things to check:

  • Did you save your credentials in the OpenVPN Connect app?
  • Do you have a typo in any of the task fields?
  • Did you copy your OpenVPN Connect profile name correctly?

You should now have an Android device that automatically connects to your Synology-hosted OpenVPN whenever you’re on someone else’s network.

The cool thing about OpenVPN that I didn’t see with PPTP is that I don’t have to set up a proxy with it. I got some comments on my previous article where some folks were lucky enough to not need to set up a proxy. I somehow needed it with PPTP but don’t need it anymore with OpenVPN. Nice.

NOTE: I can’t offer you individual support on this. Much as I’d like to be able to help everyone, I just don’t have time. I ask questions and follow forum threads like everyone else. If you run into trouble, Synology has a great forum where you can ask questions so I’d suggest checking that out. The above worked for me. I really hope it works for you. But it’s not fall-down easy and sometimes weird differences in network setup can make or break you.

autofac, net, testing comments edit

Autofac DNX support is under way and as part of that we’re supporting both DNX and DNX Core platforms. As of DNX beta 6, you can sign DNX assemblies using your own strong name key.

To use your own key, you need to add it to the compilationOptions section of your project.json file:

  "compilationOptions": {
    "keyFile": "myApp.snk"

Make sure not to specify keyFile and strongName at the same time - you can only have one or the other.

The challenge we ran into was with testing: We wanted to run our tests under both DNX and DNX Core to verify the adjustments we made to handle everything in a cross-platform fashion. Basically, we wanted this:

dnvm use 1.0.0-beta6 -r CLR
dnx test/Autofac.Test test
dnvm use 1.0.0-beta6 -r CoreCLR
dnx test/Autofac.Test test

Unfortunately, that yields an error:

System.IO.FileLoadException : Could not load file or assembly 'Autofac, Version=, Culture=neutral, PublicKeyToken=null' or one of its dependencies. General Exception (Exception from HRESULT: 0x80131500)
---- Microsoft.Framework.Runtime.Roslyn.RoslynCompilationException : warning DNX1001: Strong name generation is not supported on CoreCLR. Skipping strongname generation.
error CS7027: Error signing output with public key from file '../../Build/SharedKey.snk' -- Assembly signing not supported.
Stack Trace:
     at Autofac.Test.ContainerBuilderTests.SimpleReg()
  ----- Inner Stack Trace -----
     at Microsoft.Framework.Runtime.Roslyn.RoslynProjectReference.Load(IAssemblyLoadContext loadContext)
     at Microsoft.Framework.Runtime.Loader.ProjectAssemblyLoader.Load(AssemblyName assemblyName, IAssemblyLoadContext loadContext)
     at Microsoft.Framework.Runtime.Loader.ProjectAssemblyLoader.Load(AssemblyName assemblyName)
     at Microsoft.Framework.Runtime.Loader.AssemblyLoaderCache.GetOrAdd(AssemblyName name, Func`2 factory)
     at Microsoft.Framework.Runtime.Loader.LoadContext.Load(AssemblyName assemblyName)
     at System.Runtime.Loader.AssemblyLoadContext.LoadFromAssemblyName(AssemblyName assemblyName)
     at System.Runtime.Loader.AssemblyLoadContext.Resolve(IntPtr gchManagedAssemblyLoadContext, AssemblyName assemblyName)

I ended up filing an issue about it to get some help figuring it out.

Under the covers, DNX rebuilds the assembly under test rather than using the already-built artifacts. This was entirely unclear to me since you don’t actually see any rebuild process happen. If you turn DNX tracing on (set DNX_TRACE=1) then you actually will see that Roslyn is recompiling.
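As a quick sketch (bash syntax; on Windows cmd it’s `set DNX_TRACE=1`), flipping tracing on before running the tests makes that hidden recompile show up in the output:

```shell
# Enable DNX tracing so the otherwise-invisible Roslyn recompile gets logged.
# (cmd.exe equivalent: set DNX_TRACE=1)
export DNX_TRACE=1
echo "DNX_TRACE is $DNX_TRACE"

# With tracing on, this run prints the in-memory compilation steps:
# dnx test/Autofac.Test test
```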

If you want to test the same build output under different runtimes, you need to publish your tests as though they are applications. Which is to say, you need to use the dnu publish command on your unit test projects, like this:

dnu publish test\Your.Test --configuration Release --no-source --out C:\temp\Your.Test

When you run dnu publish you’ll get all of the build output copied to the specified output directory and you’ll get some small scripts corresponding to the commands in the project.json. For a unit test project, this means you’ll see test.cmd in the output folder. To execute the unit tests, you run test.cmd rather than dnx test\Your.Test test on your tests.

The Autofac tests now run (basically) like this:

dnvm use 1.0.0-beta6 -r CLR
dnu publish test\Autofac.Test --configuration Release --no-source --out .\artifacts\tests
.\artifacts\tests\test.cmd
dnvm use 1.0.0-beta6 -r CoreCLR
.\artifacts\tests\test.cmd

Publishing the unit tests bypasses the Roslyn recompile, letting you sign the assembly with your own key but testing under Core CLR.

I published an example project on GitHub showing this in action. In there you’ll see two build scripts - one that breaks because it doesn’t use dnu publish and one that works because it publishes the tests before executing.

autofac comments edit

Today we pushed the following packages with support for DNX beta 6:

This marks the first release of the Autofac.Configuration package for DNX and includes a lot of changes.

Previous Autofac.Configuration packages relied on web.config or app.config integration to support configuration. With DNX, the new configuration mechanism is through Microsoft.Framework.Configuration and external configuration that isn’t part of web.config or app.config.

While this makes for a cleaner configuration story with a lot of great flexibility, it means if you want to switch to the new Autofac.Configuration, you have some migration to perform.

There is a lot of documentation with examples on the Autofac doc site showing how new configuration works.

A nice benefit is you can now use JSON to configure Autofac, which can make things a bit easier to read. A simple configuration file might look like this:

    "defaultAssembly": "Autofac.Example.Calculator",
    "components": [
            "type": "Autofac.Example.Calculator.Addition.Add, Autofac.Example.Calculator.Addition",
            "services": [
                    "type": "Autofac.Example.Calculator.Api.IOperation"
            "injectProperties": true
            "type": "Autofac.Example.Calculator.Division.Divide, Autofac.Example.Calculator.Division",
            "services": [
                    "type": "Autofac.Example.Calculator.Api.IOperation"
            "parameters": {
                "places": 4

If you want, you can still use XML, but it’s not the same as the old XML - you have to make it compatible with Microsoft.Framework.Configuration. Here’s the above JSON config converted to XML:

<?xml version="1.0" encoding="utf-8" ?>
<autofac defaultAssembly="Autofac.Example.Calculator">
    <components name="0">
        <type>Autofac.Example.Calculator.Addition.Add, Autofac.Example.Calculator.Addition</type>
        <services name="0" type="Autofac.Example.Calculator.Api.IOperation" />
    <components name="1">
        <type>Autofac.Example.Calculator.Division.Divide, Autofac.Example.Calculator.Division</type>
        <services name="0" type="Autofac.Example.Calculator.Api.IOperation" />

When you want to register configuration, you do that by building up your configuration model first and then registering that with Autofac:

// Add the configuration to the ConfigurationBuilder.
var config = new ConfigurationBuilder();
config.AddJsonFile("autofac.json");

// Register the ConfigurationModule with Autofac.
var module = new ConfigurationModule(config.Build());
var builder = new ContainerBuilder();
builder.RegisterModule(module);

Again, check out the documentation for some additional detail including some of the differences and new things we’re supporting using this model.

Finally, big thanks to the Microsoft.Framework.Configuration team for working to get collection/array support into the configuration model.

javascript, home comments edit

I have, like, 1,000 of those little keyring cards for loyalty/rewards. You do, too. There are a ton of apps for your phone that manage them, and that’s cool.

Loyalty card phone apps never work for me.

For some reason, I seem to go to all the stores where they’ve not updated the scanners to be able to read barcodes off a phone screen. I’ve tried different phones and different apps, all to no avail.

You know what always works? The card in my wallet. Which means I’m stuck carrying around these 1,000 stupid cards.

There are sites, some of them connected to the phone apps, that will let you buy a combined physical card. But I’m cheap and need to update just frequently enough that it’s not worth paying the $5 each time. I used to use a free site called “JustOneClubCard” to create a combined loyalty card, but that site has gone offline - I think it was purchased by one of the phone app manufacturers. (Seriously.)


Enter: LoyaltyCard

I wrote my own app: LoyaltyCard. You can go there right now and make your own combined loyalty card.

You can use the app to enter up to eight barcodes and then download the combined card as a PDF to print out. Make as many as you like.

And if you want to save your card? Just bookmark the page with the codes filled in. Done. Come back and edit anytime you like.

Go make a loyalty card.

Behind the Scenes

I made the app not only for this but as a way to play with some JavaScript libraries. The whole app runs in the client with the exception of one tiny server-side piece that loads the high-resolution barcodes for the PDF.

You can check out the source over on GitHub.