personal, costumes

I normally don't sew too much outside of Halloween, when it becomes more an excuse to set aside time to make cool stuff than anything else. However, I saw this "Exploding TARDIS" fabric at Jo-Ann the other day and had to make something out of it.

Since my daughter Phoenix is a Whovian like myself, I figured I'd make her a little dress. I went with Butterick "See & Sew" pattern B5443 because it was a whopping $3 and it was a fairly simple thing. I also got some shiny blue lining to go in it.

The exploding TARDIS fabric, lining, and pattern

The most time consuming part was, as usual, pinning and cutting all the pieces.

Cutting the main pieces

The back closes with a zipper, which is usually a painful experience but actually went smoothly this time. Here I'm pinning it in...

Pinning the zipper

...and here it's finished.

The finished zipper

I made one small modification: I let the blue lining hang slightly below the body of the dress for a peek of that shiny blue. I also finished the waist with a ribbon that has gold sun, moon, and stars printed on it.

The 'exploding TARDIS' dress

And here is the proud four-year-old Whovian in her new exploding TARDIS dress.

Phoenix in her new dress

All told it took about a day and a half from start to finish. I started around 10 or 11 Saturday morning, worked until around 10 that night, and finished up in a couple of hours on Sunday morning. Not too bad.

This was the first thing I've made since I got Jenn a Brother 1034D serger for Christmas, and let me tell you - the serger makes all the difference. The seams come out very professional looking and the garment has a much more "store bought" quality to it.

I bought enough of the fabric to make myself a shirt using Vogue pattern 8800. I've used that pattern before and it comes out well, if a bit snug, so this next go-round with it I'll make one size larger for some breathing room.

synology, security

In November of last year I set up a PPTP VPN on my Synology NAS so I could do more secure browsing on public wifi. Since then, I've updated my setup to use OpenVPN and made the connection a lot easier.

I based the steps to get my connection working on this forum post, though I didn't do all the extra work with the certificates.

Assuming you've got the VPN package installed and ready to go on your Synology NAS (which I walk through in the previous article), the next steps to get OpenVPN going are:

  • Open the VPN Server application in the Diskstation Manager.
  • Enable the OpenVPN protocol by checking the box. Leave everything else as default.
  • At the bottom of the OpenVPN panel, click "Export Configuration." This will give you the profile you'll need for your devices to connect to the VPN.
  • In the Control Panel, go to the "Security" tab. On the "certificate" panel, click "Export Certificate." Save that somewhere and call it ca.crt. This is a little different from what I was expecting - I had hoped the certificates that come in the OpenVPN zip file (when you export that configuration) would just work, but it turns out I needed this particular certificate. YMMV on this.
  • Just like with the PPTP VPN, make sure the firewall has a rule to allow port 1194 (the OpenVPN port) through. You also need to create a port forwarding rule for port 1194 with your router. You can see how to do this in my other article.

You should have OpenVPN up and running. That part, at least for me, was the easiest part. The harder part was getting my Android phone connected to it and trying to automate that.

First things first, let's get connected.

Install the OpenVPN Connect app for Android. There are several OpenVPN apps out there; I use this one and the rest of my article will assume you do, too. The app is free, so there's no risk if you don't like it.

Open the zip file of exported OpenVPN configuration you got from the Synology and pull out the openvpn.ovpn file. Pop that open in a text editor and make sure that...

  • The remote line at the top points to your public DNS entry for your Synology, like yourdiskstation.synology.me or whatever you set up.
  • The ca line has ca.crt in it.

Here's what it should generally look like. I've left in the comments that are there by default.

dev tun
tls-client

remote yourdiskstation.synology.me 1194

# The "float" tells OpenVPN to accept authenticated packets from any address,
# not only the address which was specified in the --remote option.
# This is useful when you are connecting to a peer which holds a dynamic address
# such as a dial-in user or DHCP client.
# (Please refer to the manual of OpenVPN for more information.)

#float

# If redirect-gateway is enabled, the client will redirect it's
# default network gateway through the VPN.
# It means the VPN connection will firstly connect to the VPN Server
# and then to the internet.
# (Please refer to the manual of OpenVPN for more information.)

redirect-gateway

# dhcp-option DNS: To set primary domain name server address.
# Repeat this option to set secondary DNS server addresses.

#dhcp-option DNS DNS_IP_ADDRESS

pull

# If you want to connect by Server's IPv6 address, you should use
# "proto udp6" in UDP mode or "proto tcp6-client" in TCP mode
proto udp

script-security 2

ca ca.crt

comp-lzo

reneg-sec 0

auth-user-pass

Put the ca.crt certificate you exported and the openvpn.ovpn file on your Android device. Make sure they're somewhere you can find later.

Open the OpenVPN Connect app and select "Import Profile." Select the openvpn.ovpn file you pushed over. Magic should happen and you will see your VPN show up in the app.

Now's a good time to test the connection to your VPN. Enter your username and password into the OpenVPN Connect app, check the Save box to save your credentials, and click the "Connect" button. It should find your VPN and connect. When you connect you may see a little "warning" icon saying network communication could be monitored by a third party - that's Android seeing your Synology's certificate. You should also see OpenVPN Connect telling you you're connected.

OpenVPN Connect showing the connection is active

It's important to save your credentials in OpenVPN Connect or the automation of connecting to the VPN later will fail.

If you're not able to connect, it could be a number of different things. Troubleshooting this is the biggest pain of the whole thing. Feel good if things worked the first time; I struggled figuring out all the certificates and such. Things to check:

  • Did you enter your username/password correctly using an account defined on the Synology?
  • Does the account you used have permissions to the VPN? (By default it should, but you may be trying to use a limited access account, so check that.)
  • Did the router port forwarding get set up?
  • Did the firewall rule get set up?
  • Is your dynamic DNS entry working?
  • Is the ca.crt in the same folder on your Android device as the openvpn.ovpn file?
  • If that ca.crt isn't working, did you try the one that came in the zip file with the OpenVPN configuration you exported? (The one in that zip didn't work for me, but it might work for you.)
  • Consider trying the instructions in this forum post to embed the certificate info right in the openvpn.ovpn file.
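
On that last point: the standard OpenVPN way to embed the CA certificate is an inline <ca> block in openvpn.ovpn, replacing the ca ca.crt line. Here's a sketch, with the body pasted from your exported ca.crt (I didn't end up needing this myself, so consider it untested):

<ca>
-----BEGIN CERTIFICATE-----
(contents of ca.crt)
-----END CERTIFICATE-----
</ca>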

From here on out, I assume you can connect to your VPN.

Now we want to make it so you connect automatically to the VPN when you're on a wifi network that isn't your own. I even VPN in when I'm on a "secure" network like at a hotel where you need a password because, well, there are a lot of people on there with you and do you trust them all? I didn't think so.

Install the Tasker app for Android. This one will cost you $3 but it's $3 well spent. Tasker helps you automate things on your Android phone and you don't even need root access.

I found the instructions for setting up Tasker with OpenVPN Connect over on the OpenVPN forums via a reddit thread. I'll put them here for completeness, but total credit to the folks who originally figured this out.

The way Tasker works is this: You create "tasks" to run on your phone, like "show an alert" or "send an email to Mom." You then set up "contexts" so Tasker knows when to run your tasks. A "context" is like "when I'm at this location" or "when I receive an SMS text message" - it's a condition that Tasker can recognize to raise an event and say, "run a task now!" Finally, you can tie multiple "contexts" together with "tasks" in a profile - "when I'm at this location AND I receive an SMS text message THEN send an email to Mom."

We're going to set up a task to connect to the VPN when you're on a network not your own and then disconnect from the VPN when you leave the network.

You need to know the name of your OpenVPN Connect profile - the text that shows at the top of OpenVPN Connect when you're logging in. For this example, let's say it's yourdiskstation.synology.me [openvpn]

  1. Create a new task in Tasker. (You want to create the task first because it's easier than doing it in the middle of creating a profile.)
    1. Call the task Connect To Home VPN.
    2. Use System -> Send Intent as the action.
    3. Fill in the Send Intent fields like this (it is case-sensitive, so be exact; also, these are all just one line, so if you see line wraps, ignore that):
      • Action: android.intent.action.VIEW
      • Category: None
      • Mime Type:
      • Data:
      • Extra: net.openvpn.openvpn.AUTOSTART_PROFILE_NAME: yourdiskstation.synology.me [openvpn]
      • Extra:
      • Extra:
      • Package: net.openvpn.openvpn
      • Class: net.openvpn.openvpn.OpenVPNClient
      • Target: Activity
  2. Create a second new task in Tasker.
    1. Call the task Disconnect From Home VPN.
    2. Use System -> Send Intent as the action.
    3. Fill in the Send Intent fields like this (it is case-sensitive, so be exact; also, these are all just one line, so if you see line wraps, ignore that):
      • Action: android.intent.action.VIEW
      • Category: None
      • Mime Type:
      • Data:
      • Extra:
      • Extra:
      • Extra:
      • Package: net.openvpn.openvpn
      • Class: net.openvpn.openvpn.OpenVPNDisconnect
      • Target: Activity
  3. Create a new profile in Tasker and add a context.
    1. Use State -> Net -> Wifi Connected as the context.
    2. In the SSID field put the SSID of your home/trusted network. If you have more than one, separate with slashes like network1/network2.
    3. Check the Invert box. You want the context to run when you're not connected to these networks.
  4. When asked for a task to associate with the profile, select Connect To Home VPN.
  5. On the home screen of Tasker you should see the name of the profile you created and, just under that, a "context" showing something like Not Wifi Connected network1/network2.
  6. Long-press on the context and it'll pop up a menu allowing you to add another context.
    1. Use State -> Net -> Wifi Connected as the context.
    2. Leave all the other fields blank and do not check the Invert box.
  7. On the home screen of Tasker you should now see the profile has two contexts - one for Not Wifi Connected network1/network2 and one for Wifi Connected *,*,*. This profile will match when you're on a wifi network that isn't in your "whitelist" of trusted networks. Next to the contexts you should see a little green arrow pointing to Connect To Home VPN - this means when you're on a wifi network not in your "whitelist" the VPN connection will run.
  8. Long-press on the Connect To Home VPN task next to those contexts. You'll be allowed to add an "Exit Task." Do that.
  9. Select the Disconnect From Home VPN task you created as the exit task. Now when you disconnect from the untrusted wifi network, you'll also disconnect from the VPN.

You can test the Tasker tasks out by going to the "Tasks" page in Tasker and running each individually. Running the Connect To Home VPN task should quickly run OpenVPN Connect, log you in, and be done. Disconnect From Home VPN should log you out.

If you're unable to get the Connect To Home VPN task working, things to check:

  • Did you save your credentials in the OpenVPN Connect app?
  • Do you have a typo in any of the task fields?
  • Did you copy your OpenVPN Connect profile name correctly?

You should now have an Android device that automatically connects to your Synology-hosted OpenVPN whenever you're on someone else's network.

The cool thing about OpenVPN that I didn't see with PPTP is that I don't have to set up a proxy with it. I got some comments on my previous article where some folks were lucky enough to not need to set up a proxy. I somehow needed it with PPTP but don't need it anymore with OpenVPN. Nice.

NOTE: I can't offer you individual support on this. Much as I'd like to be able to help everyone, I just don't have time. I ask questions and follow forum threads like everyone else. If you run into trouble, Synology has a great forum where you can ask questions so I'd suggest checking that out. The above worked for me. I really hope it works for you. But it's not fall-down easy and sometimes weird differences in network setup can make or break you.

autofac, net, testing

Autofac DNX support is under way and as part of that we're supporting both DNX and DNX Core platforms. As of DNX beta 6, you can sign DNX assemblies using your own strong name key.

To use your own key, you need to add it to the compilationOptions section of your project.json file:

{
  "compilationOptions": {
    "keyFile": "myApp.snk"
  }
}

Make sure not to specify keyFile and strongName at the same time - you can only have one or the other.
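
For reference, the alternative looks something like this - a sketch only; I believe strongName is a simple boolean flag, but treat that as an assumption on my part:

{
  "compilationOptions": {
    "strongName": true
  }
}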

The challenge we ran into was with testing: We wanted to run our tests under both DNX and DNX Core to verify the adjustments we made to handle everything in a cross-platform fashion. Basically, we wanted this:

dnvm use 1.0.0-beta6 -r CLR
dnx test/Autofac.Test test
dnvm use 1.0.0-beta6 -r CoreCLR
dnx test/Autofac.Test test

Unfortunately, that yields an error:

System.IO.FileLoadException : Could not load file or assembly 'Autofac, Version=4.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. General Exception (Exception from HRESULT: 0x80131500)
---- Microsoft.Framework.Runtime.Roslyn.RoslynCompilationException : warning DNX1001: Strong name generation is not supported on CoreCLR. Skipping strongname generation.
error CS7027: Error signing output with public key from file '../../Build/SharedKey.snk' -- Assembly signing not supported.
Stack Trace:
     at Autofac.Test.ContainerBuilderTests.SimpleReg()
  ----- Inner Stack Trace -----
     at Microsoft.Framework.Runtime.Roslyn.RoslynProjectReference.Load(IAssemblyLoadContext loadContext)
     at Microsoft.Framework.Runtime.Loader.ProjectAssemblyLoader.Load(AssemblyName assemblyName, IAssemblyLoadContext loadContext)
     at Microsoft.Framework.Runtime.Loader.ProjectAssemblyLoader.Load(AssemblyName assemblyName)
     at dnx.host.LoaderContainer.Load(AssemblyName assemblyName)
     at dnx.host.DefaultLoadContext.LoadAssembly(AssemblyName assemblyName)
     at Microsoft.Framework.Runtime.Loader.AssemblyLoaderCache.GetOrAdd(AssemblyName name, Func`2 factory)
     at Microsoft.Framework.Runtime.Loader.LoadContext.Load(AssemblyName assemblyName)
     at System.Runtime.Loader.AssemblyLoadContext.LoadFromAssemblyName(AssemblyName assemblyName)
     at System.Runtime.Loader.AssemblyLoadContext.Resolve(IntPtr gchManagedAssemblyLoadContext, AssemblyName assemblyName)

I ended up filing an issue about it to get some help figuring it out.

Under the covers, DNX rebuilds the assembly under test rather than using the already-built artifacts. This was entirely unclear to me since you don't actually see any rebuild happen. If you turn on DNX tracing (set DNX_TRACE=1), you'll see that Roslyn is recompiling.
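
For example, on Windows:

set DNX_TRACE=1
dnx test\Autofac.Test test

With tracing on, the console output shows the Roslyn compilation happening before the tests run.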

If you want to test the same build output under different runtimes, you need to publish your tests as though they are applications. Which is to say, you need to use the dnu publish command on your unit test projects, like this:

dnu publish test\Your.Test --configuration Release --no-source --out C:\temp\Your.Test

When you run dnu publish you'll get all of the build output copied to the specified output directory and you'll get some small scripts corresponding to the commands in the project.json. For a unit test project, this means you'll see test.cmd in the output folder. To execute the unit tests, you run test.cmd rather than dnx test\Your.Test test on your tests.

The Autofac tests now run (basically) like this:

dnvm use 1.0.0-beta6 -r CLR
dnu publish test\Autofac.Test --configuration Release --no-source --out .\artifacts\tests
.\artifacts\tests\test.cmd
dnvm use 1.0.0-beta6 -r CoreCLR
.\artifacts\tests\test.cmd

Publishing the unit tests bypasses the Roslyn recompile, letting you sign the assembly with your own key while still testing under CoreCLR.

I published an example project on GitHub showing this in action. In there you'll see two build scripts - one that breaks because it doesn't use dnu publish and one that works because it publishes the tests before executing.

autofac

Today we pushed a new round of Autofac packages with support for DNX beta 6.

This marks the first release of the Autofac.Configuration package for DNX and includes a lot of changes.

Previous Autofac.Configuration packages relied on web.config or app.config integration to support configuration. With DNX, the new configuration mechanism is through Microsoft.Framework.Configuration and external configuration that isn't part of web.config or app.config.

While this makes for a cleaner configuration story with a lot of great flexibility, it means if you want to switch to the new Autofac.Configuration, you have some migration to perform.

There is a lot of documentation with examples on the Autofac doc site showing how new configuration works.

A nice benefit is you can now use JSON to configure Autofac, which can make things a bit easier to read. A simple configuration file might look like this:

{
    "defaultAssembly": "Autofac.Example.Calculator",
    "components": [
        {
            "type": "Autofac.Example.Calculator.Addition.Add, Autofac.Example.Calculator.Addition",
            "services": [
                {
                    "type": "Autofac.Example.Calculator.Api.IOperation"
                }
            ],
            "injectProperties": true
        },
        {
            "type": "Autofac.Example.Calculator.Division.Divide, Autofac.Example.Calculator.Division",
            "services": [
                {
                    "type": "Autofac.Example.Calculator.Api.IOperation"
                }
            ],
            "parameters": {
                "places": 4
            }
        }
    ]
}

If you want, you can still use XML, but it's not the same as the old XML - you have to make it compatible with Microsoft.Framework.Configuration. Here's the above JSON config converted to XML:

<?xml version="1.0" encoding="utf-8" ?>
<autofac defaultAssembly="Autofac.Example.Calculator">
    <components name="0">
        <type>Autofac.Example.Calculator.Addition.Add, Autofac.Example.Calculator.Addition</type>
        <services name="0" type="Autofac.Example.Calculator.Api.IOperation" />
        <injectProperties>true</injectProperties>
    </components>
    <components name="1">
        <type>Autofac.Example.Calculator.Division.Divide, Autofac.Example.Calculator.Division</type>
        <services name="0" type="Autofac.Example.Calculator.Api.IOperation" />
        <parameters>
            <places>4</places>
        </parameters>
    </components>
</autofac>

When you want to register configuration, you do that by building up your configuration model first and then registering that with Autofac:

// Add the configuration to the ConfigurationBuilder.
var config = new ConfigurationBuilder();
config.AddJsonFile("autofac.json");

// Register the ConfigurationModule with Autofac.
var module = new ConfigurationModule(config.Build());
var builder = new ContainerBuilder();
builder.RegisterModule(module);
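
From there it's standard Autofac. Here's a minimal sketch of consuming the configured container, assuming the calculator assemblies from the example config are referenced so the types can load:

// Build the container with the configured registrations.
using (var container = builder.Build())
{
  // Both Add and Divide were registered as IOperation in the config,
  // so resolving IEnumerable<IOperation> yields both components.
  var operations = container.Resolve<IEnumerable<IOperation>>();
}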

Again, check out the documentation for some additional detail including some of the differences and new things we're supporting using this model.

Finally, big thanks to the Microsoft.Framework.Configuration team for working to get collection/array support into the configuration model.

javascript, home

I have, like, 1,000 of those little keyring cards for loyalty/rewards. You do, too. There are a ton of apps for your phone that manage them, and that's cool.

Loyalty card phone apps never work for me.

For some reason, I seem to go to all the stores where they've not updated the scanners to be able to read barcodes off a phone screen. I've tried different phones and different apps, all to no avail.

You know what always works? The card in my wallet. Which means I'm stuck carrying around these 1,000 stupid cards.

There are sites, some of them connected to the phone apps, that will let you buy a combined physical card. But I'm cheap and need to update just frequently enough that it's not worth paying the $5 each time. I used to use a free site called "JustOneClubCard" to create a combined loyalty card but that site has gone offline. I think it was purchased by one of the phone app manufacturers. (Seriously.)

So...

Enter: LoyaltyCard

I wrote my own app: LoyaltyCard. You can go there right now and make your own combined loyalty card.

You can use the app to enter up to eight barcodes and then download the combined card as a PDF to print out. Make as many as you like.

And if you want to save your card? Just bookmark the page with the codes filled in. Done. Come back and edit anytime you like.

Go make a loyalty card.

Behind the Scenes

I made the app not only for this but as a way to play with some JavaScript libraries. The whole app runs in the client with the exception of one tiny server-side piece that loads the high-resolution barcodes for the PDF.

You can check out the source over on GitHub.

vs

I installed Visual Studio 2015 today. I had the RC installed and updated to the RTM.

One of the minor-yet-annoying things I found about the RTM version showed up when I pinned it to my taskbar next to VS2013:

Confusing icons on the taskbar

Sigh.

Luckily it's an easy fix.

Windows 7 / Server 2008

First, unpin VS2015 from your taskbar. You'll put it back after you've fixed the icon.

Open up your Start menu and right-click on the "Visual Studio 2015" shortcut in there. On the context menu, choose "Properties." Click the "Change Icon" button.

Click the 'Change Icon' button

VS2015 actually comes with a few icons. They're not all awesome, but they're at least different than the VS2013 icon. I chose the one with the little arrow because it's, you know, upgraded from VS2013.

Pick a better icon

Click OK enough times to close all the property dialogs. You'll see the icon in the Start menu has changed. Now right-click that and pin it to the taskbar. Problem solved.

At least you can tell which is which now

Windows 8 / Server 2012

If you haven't pinned VS2015 to your taskbar yet, do that now so you can get a shortcut.

Open up the taskbar icons folder. This is at C:\Users\yourusername\AppData\Roaming\Microsoft\Internet Explorer\Quick Launch\User Pinned\TaskBar.

Copy the "Visual Studio 2015" shortcut out of that folder and onto your desktop.

Unpin VS2015 from your taskbar. The shortcut in that TaskBar folder will disappear.

Right-click on the "Visual Studio 2015" shortcut you copied to your desktop. On the context menu, choose "Properties." Click the "Change Icon" button.

Click the 'Change Icon' button

VS2015 actually comes with a few icons. They're not all awesome, but they're at least different than the VS2013 icon. I chose the one with the little arrow because it's, you know, upgraded from VS2013.

Pick a better icon

Click OK enough times to close all the property dialogs. You'll see the icon on your desktop has changed.

Right-click on the icon on your desktop and pin that one to your taskbar. A new shortcut with the correct icon will be added to that TaskBar folder and will appear on the taskbar. You can now delete the one from your desktop.

At least you can tell which is which now

gaming, xbox

I tried playing a couple of Xbox 360 Kinect games with my four-year-old daughter, Phoenix. We had less than stellar results.

The first game was "Sesame Street TV." Basically it's interactive Sesame Street. We picked it up from the library to try it and I'm glad it was free.

Problem 1: She's very small compared to me. If the Kinect sees me, it somehow stops seeing her. And vice versa - if it sees her, it stops detecting me. There seemed to be a sort of very small "magic area" in the room where it'd find both of us.

Problem 2: The interaction for that game isn't constant. It's more like: they sing a song, then you have a small bit of interaction, then they tell a story, then there's a small bit of interaction. She'll watch or she'll interact, but she loses interest in interacting once you switch to watching.

Problem 3: Slight misrepresentation of the game on the box. The concept behind the game is that it's like going into the TV and being on Sesame Street. There is a picture on the box to illustrate the concept. Phoenix wants that to be the reality. It is really hard to explain that the box just shows an idea of what it's like, that you don't really transfer yourself into the television.

After a bit of Sesame Street, we tried "Kinect Adventures." I did this thinking that the constant interaction would keep her engaged.

We still ran into the problem of the small "magic area" where it recognized us both, but this time it was compounded by a couple of new problems.

Problem 4: Many of the games aren't obvious to four-year-olds. In particular, the game where you have to walk from side to side and jump to control the raft - that was entirely unintuitive to Phoenix. She was far more concerned with whether or not the avatar on the raft actually looked like her, which then led to a half-hour diversion where we had to set up an avatar.

Problem 5: Auto jump-in/jump-out. The ability to jump in and out of the game quickly is great for folks that "get it" and when you have a properly sized room without the "magic area" where you're recognized. However, every time Phoenix accidentally stepped out of the "magic area," her avatar would disappear because it thought she was jumping out of the game, at which point I'd have to try to convince her to come back into the area - but not too close to me - so we could continue.

In the end, we decided it was a better idea to just go watch some Looney Tunes cartoons we picked up at the library. Which, now that I think about it, is sort of the opposite of what Kinect is trying to get you to do - get off the couch and be active. Hmmm.

Over the years I've posted about my home media center developments. Back in 2008 I posted a summary with links to articles, then I did another roundup in 2014.

The problem with this sort of periodic summary is that it's hard to get an accurate picture of how things are working right now. I might forget to blog it, or I'll take some notes on something I found and forget to post it, or whatever.

I was keeping my media center and home networking notes in a personal wiki on PBworks but I figured it was time to make things a bit more official.

My media center and home network documentation is now live at illigmediacenter.readthedocs.org

Diagram of my home network

This is the place I'll add notes or tips on how my media center setup works. I've got everything from the hardware I use to my process for getting video content into the system. I've got my plan and analysis for how I cut cable including cost breakdowns and options. It's all on this site.

My biggest problem in getting my media center going was that I didn't know what I didn't know. Information about all this stuff - hardware, software, how to get things done - is spread out all over the place. I never found a complete guide to help me on my way.

I hope this documentation can help you jump start your media center or improve the one you have. As things change in my system, I'll be keeping the documentation here up to date so it should always have the latest info.

home, media, music, movies

We finally did it: We cut the cable.

On Friday, we took all the cable boxes back to Comcast, cut off the cable TV and the phones, and we're down to internet service and mobile phones only.

I have to say, I know I'm only a few days into it but I haven't really noticed it. Aside from calling my various financial institutions and utilities to change my phone number with them, it's pretty status quo. We were already watching most of our stuff on demand or through online services anyway.

If you'd like to know what I did or how I did it, I documented the whole plan. I'll do a blog entry later for the official release of my media center documentation site, but you can read over there about my cable cutting plan: what we did and the equipment/services we use.

net, vs, ndepend

NDepend 6 was recently released with a ton of new features. I've been working with NDepend for quite some time (my earliest blog entry on it was for version 2.7) and every release gets better. It's been a couple of years since version 5 came out. What's new?

The first new thing you notice when you start it up is the set of additional integrations. It used to be just "install the extension for Visual Studio" but now there are icons for TFS, TeamCity, SonarQube, and Reflector integration.

NDepend 6 integrations

I'm particularly interested in the TeamCity integration because that's the build server I use. I've integrated NDepend by hand in the past using MSBuild and some custom TeamCity configuration, but with the new add-in, I can just drop NDepend on the build server and have all that work done for me. There's even a specific NDepend build step type added and the report magically shows up in the dashboard. There are some great step-by-step walkthrough videos on the NDepend site showing how to set this up.

I decided to analyze some of the new code I've been working on. It was pretty easy to get my project started. I love how NDepend helps you figure out where to go next if you haven't used it before.

NDepend beginner dialog

The report has improved by adding "how to fix" information to rule failures. One of the challenges I've had in the past is that you could see which things failed a rule, but you didn't have anything clearly actionable to tell folks to fix - you had to kind of "know" what a rule meant. Now there's no guesswork.

Report showing how to fix violations

One of my huge complaints with other tools (coverage, analysis) has been addressed - handling of async/await methods. A lot of what I've been working on lately has been Web API code, which is async/await from the ground up. Have you ever looked at that stuff in a decompiler like Reflector? Or a code coverage tool? I've found you don't get any information on it ("Let's just omit it!"); you get incorrect information on it ("You don't have full coverage because you didn't cover all the cases in the generated state machine!"); or you get confusing information ("I'll show you all of the compiler generated methods that don't make sense!").

NDepend's reports are clean and complete, but you don't see the compiler-generated state machine junk. Finally!

The metrics view just doubled in value by adding a second "dimension" to its display. You used to be able to just change the size of an item in the view based on a specific metric; now you can compare one metric to a second metric by adding a sort of "heat map" style coloration to it.

My favorite combination so far is to set the box size by "# IL Instructions" and set the color of the boxes by "IL Cyclomatic Complexity." It gives you a pretty good indication of things that need to be refactored - just look for the huge red boxes!

NDepend metrics view

My favorite new feature is the shareable rule files. We have a standard FxCop ruleset we use on all of our projects. We have a standard StyleCop ruleset we use on all of our projects. We can finally have a standard NDepend ruleset we use on all of our projects.

You can create a rule file with all of your analysis rules stored outside the project file and then tell projects to reference the central/common NDepend rules file.

Create a rules file

Once you have a custom rules file, you can reference it from your project. You will probably want to switch the paths in your project to be relative to the project file so it works on your machine and the build server.

Change paths to relative

With every iteration, NDepend just gets more compelling. I get so much insight from it about our code and areas we need to improve - things that are hard to see when you're neck deep in code and NuGet package references and under a deadline. You owe it to yourself to check it out.

Full disclosure: I got a free personal license from Patrick at NDepend. However, we have also purchased several licenses at work and make use of it to great benefit.

lastpass, security

I use LastPass for a lot of things including storing my personal software license files. I use the "secure note" function to save the license information and attach the license file to the secure note.

I was working on something today and trying to save a license to my machine and kept getting a dialog saying, "Error opening attachment. Error C." Nothing really specific and very confusing. I was able to save the attachment from the LastPass web site but not through the browser extension.

I ended up finding the solution in this forum post.

  1. LastPass Icon > Tools > Advanced Tools > Clear Local Cache
  2. LastPass Icon > Tools > Advanced Tools > Refresh Sites

After doing a clear and refresh, the attachment saved correctly. These are probably good steps to try whenever you get any sort of error with the LastPass browser extensions. Filed for future reference.

vs, coderush

CR_Documentor version 4.0.0 has been released to the Visual Studio Gallery and adds support for Visual Studio 2015.

Head over to the gallery to get your copy or get it through "Extensions and Updates" in the Visual Studio "Tools" menu.

Note: In VS 2015 RC you may notice that after installing the add-in, CR_Documentor is the only CodeRush add-in that shows up. I'm not sure why this is, but it seems to be fixed by clearing out the files in your loader cache in these folders:

%appdata%\CodeRush for VS .NET\1.1\Settings.xml\Loader
%appdata%\CodeRush for VS .NET\1.1\Settings.xml\_Scheme_FrictionFree\Loader

It is safe to delete these files because they will be re-created on the next restart of VS. This will get all the CodeRush features to show up again.
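
If you'd rather clear the cache from a command prompt, something like this should do it (close Visual Studio first; the paths are the same ones listed above):

del /q "%appdata%\CodeRush for VS .NET\1.1\Settings.xml\Loader\*"
del /q "%appdata%\CodeRush for VS .NET\1.1\Settings.xml\_Scheme_FrictionFree\Loader\*"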

I filed an issue with DevExpress about this. If you are having this problem, please add a comment to that issue so they know it's not just me.

media

Back in March 2014 I started converting my DVD rips into MP4 files for use with Plex. I ran two laptops (both with 2.3GHz dual-core CPUs) 24/7 until early March 2015 when I added a third computer - an eight-core 4GHz machine.

Today I finally finished converting all of my disc-based video content to MP4.

Some quick statistics:

  • Total number of files: 4998
  • Total content runtime: 134 days, 8 hours, 56 minutes, 47 seconds
    • SD runtime: 115 days, 12 hours, 25 minutes, 17 seconds
    • HD runtime: 18 days, 20 hours, 31 minutes, 30 seconds
  • Total file size: 5182.3GB
    • SD file size: 3042.04GB
    • HD file size: 2140.26GB
  • Average MB/minute for SD content: 18.73
  • Average MB/minute for HD content: 80.72
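
(If you're curious, those averages are just total size divided by total runtime - for SD content that's 3042.04GB x 1024 MB/GB / 166,345 minutes ≈ 18.73 MB/minute.)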

I'm pretty pleased with how everything has come together. Seeing it all in Plex, nicely organized... it's a good feeling.

I can definitely say CPU power is important in video conversion. My laptops could convert an average SD movie in three or four hours, but an HD movie... I couldn't get one converted in a day. The eight-core behemoth can take the same SD movie and finish in an hour or less, and HD movies take about four hours - the same as SD content on my laptops.

Anyway, if you're looking to convert a bunch of video, it's worth investing in some hefty CPU power. It'll save you tons of time.

Finally, as part of this, I'd like to introduce my media center documentation on ReadTheDocs.

It's a work in progress, so this is sort of a "soft launch," but I think it's fleshed out enough to be of some use. I will probably do a more dedicated blog entry for it when I've got more of it filled out.

Information about how I converted my stuff with Handbrake, including the script I used to pull the report data above, as well as the specs for my behemoth conversion/Plex server, is all over there.

process, security

I feel like I should write a book. It'd be epic like Moby Dick but would start with, "Call me Yossarian." This is going to sound confusing and comedic, straight out of Catch-22, but I assure you it's entirely true. It is happening to me right now.

Serenity Now!

We write a lot of documentation to a wiki at work. I've got permissions on it to add pages, rename pages, move pages... but not delete pages. If I want to delete a page, I have to find someone who has delete rights and ask them to do that, which doesn't make sense because I'm a pretty heavy contributor to the wiki.

I decided to seek out delete permissions for myself.

The wiki is managed by an overseas team. The previous process to get permissions to the wiki was to send an email to their infrastructure distribution list with your request and the issue would be dealt with in a day or two. It was fairly effective from a customer perspective.

The new process to get wiki permissions is to file a ticket in this custom-built ticketing system they've adopted. You find this out by sending an email to the infrastructure distribution list and reading the "out of office" autoresponder thing that comes back.

You can't file a ticket unless you have an account on the ticketing system. That's... well, not unheard of, but a bit inconvenient. Fine, I need to create an account.

In order to get an account on the ticketing system, you need to file a ticket. No joke. As one colleague put it, this is sort of like a secret society - you can't get in unless you already know someone who's in and will "vouch for you" by creating a ticket on your behalf.

Three working days later, I have an account so I log in. The ticketing system is a totally custom beast that was initially written starting in 2001 and hasn't really been updated since 2008. It looks and behaves exactly like you think - it's very bare-bones, there's no significant help, and it's entirely unintuitive to people who don't already use it every day.

Seeking out help, I notice in the autoresponder email there's a wiki link to a guide on how to file tickets. Cool. I visit that link and... I don't have permissions to see the wiki link.

In order to see the guide on how to file tickets, I have to file a ticket. Of course, I'm not sure what kind of ticket to file, since I can't see the guide.

I search around to see if there's any hint pointing me to which ticket type to file since they all have great titles like "DQT No TU Child Case." Totally obvious, right? I end up stumbling onto a screen shot someone has taken and posted to a comment section on an unrelated wiki page referring me to the type of case I need to file.

I don't see the right case type on the list of available tickets I can file. Turns out I don't have ticket system permissions to file that kind of ticket.

I have now opened a ticket so I can get permissions to open a ticket to get permissions to delete pages from the wiki. This is after, of course, the initial "secret society" ticket was filed to get me an account so I can file tickets.

humor, rest

I was browsing around the other day and found your mom's REST API. Naturally, I pulled my client out and got to work.

An abbreviated session follows:

GET /your/mom HTTP/1.1

HTTP/1.1 200 OK

PUT /your/mom HTTP/1.1
":)"

HTTP/1.1 402 Payment Required

POST /your/mom HTTP/1.1
"$"

HTTP/1.1 411 Length Required

PUT /your/mom HTTP/1.1
":)"

HTTP/1.1 406 Not Acceptable
HTTP/1.1 413 Request Entity Too Large
HTTP/1.1 200 OK
.
.
.
HTTP/1.1 200 OK
.
.
HTTP/1.1 200 OK
.
HTTP/1.1 200 OK
HTTP/1.1 200 OK
HTTP/1.1 200 OK
HTTP/1.1 502 Bad Gateway
HTTP/1.1 503 Service Unavailable

I think I need to get a new API key before she gives me the ol' 410. :)

build

In making a package similar to the NuGet.Server package, I needed a way to get, from one project in a solution, the list of build output assemblies for the other projects in that solution.

That is, in a solution like:

  • MySolution.sln
    • Server.csproj
    • Project1.csproj
    • Project2.csproj

...from the Server.csproj I wanted to get the build output assembly paths for the Project1.csproj and Project2.csproj projects.

The technically correct solution is sort of complicated and Sayed Ibrahim Hashimi has documented it on his blog. The problem with the technically correct solution is that it requires you to invoke a build on the target projects.

That build step was causing no end of trouble. Projects were re-running AfterBuild actions, code was getting regenerated at inopportune times, cats and dogs living together - mass hysteria.

I came up with a different way to get the build outputs that is less technically correct but gets the job done and doesn't require you to invoke a build on the target projects.

My solution involves loading the projects in an evaluation context using a custom inline MSBuild task. Below is a snippet showing the task in action. Note that the snippet is in the context of a .targets file that would be added to your .csproj by a NuGet package, so you'll see environment variables used that will only be present in a full build setting:

<Project DefaultTargets="EnumerateOutput" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" >
  <ItemGroup>
    <!-- Include all projects in the solution EXCEPT this one -->
    <ProjectToScan Include="$(SolutionDir)/**/*.csproj" Exclude="$(SolutionDir)/**/$(ProjectName).csproj" />
  </ItemGroup>
  <Target Name="EnumerateOutput" AfterTargets="Build">
    <!-- Call the custom task to get the output -->
    <GetBuildOutput ProjectFile="%(ProjectToScan.FullPath)">
      <Output ItemName="ProjectToScanOutput" TaskParameter="BuildOutput"/>
    </GetBuildOutput>

    <Message Text="%(ProjectToScanOutput.Identity)" />
  </Target>

  <UsingTask TaskName="GetBuildOutput" TaskFactory="CodeTaskFactory" AssemblyFile="$(MSBuildToolsPath)\Microsoft.Build.Tasks.v12.0.dll" >
    <ParameterGroup>
      <ProjectFile ParameterType="System.String" Required="true"/>
      <BuildOutput ParameterType="Microsoft.Build.Framework.ITaskItem[]" Output="true"/>
    </ParameterGroup>
    <Task>
      <Reference Include="System.Xml"/>
      <Reference Include="Microsoft.Build"/>
      <Using Namespace="Microsoft.Build.Evaluation"/>
      <Using Namespace="Microsoft.Build.Utilities"/>
      <Code Type="Fragment" Language="cs">
      <![CDATA[
        // The dollar-properties here get expanded to be the
        // actual values that are present during build.
        var properties = new Dictionary<string, string>
        {
          { "Configuration", "$(Configuration)" },
          { "Platform", "$(Platform)" }
        };

        // Load the project into a separate project collection so
        // we don't get a redundant-project-load error.
        var collection = new ProjectCollection(properties);
        var project = collection.LoadProject(ProjectFile);

        // Dollar sign can't easily be escaped here so we use the char code.
        var expanded = project.ExpandString(((char)36) + @"(MSBuildProjectDirectory)\" + ((char)36) + "(OutputPath)" + ((char)36) + "(AssemblyName).dll");
        BuildOutput = new TaskItem[] { new TaskItem(expanded) };
      ]]>
      </Code>
    </Task>
  </UsingTask>
</Project>

How it works:

  1. Create a dictionary of properties you want to flow from the current build environment into the target project. In this case, the Configuration and Platform properties are what affects the build output location, so I pass those. The $(Configuration) and $(Platform) in the code snippet will actually be expanded on the fly to be the real values from the current build environment.
  2. Create a tiny MSBuild project collection (similar to the way MSBuild does so for a solution). Pass the set of properties into the collection so they can be used by your project. You need this collection so the project doesn't get loaded in the context of the solution. You get an error saying the project is already loaded if you don't do this.
  3. Load the project into your collection. When you do, properties will be evaluated using the global environment - that dictionary provided.
  4. Use the ExpandString method on the project to expand $(MSBuildProjectDirectory)\$(OutputPath)$(AssemblyName).dll into whatever it will be in context of the project with the given environment. This will end up being the absolute path to the assembly being generated for the given configuration and platform. Note the use of (char)36 there - I spent some time trying to figure out how to escape $ but never could, so rather than fight it... there you go.
  5. Return the information from the expansion to the caller.

That step with ExpandString is where the less technically correct bit comes into play. For example, if the project generates an .exe file rather than a .dll - I don't account for that. I could enhance it to accommodate for that, but... well, this covers the majority case for me.

I considered returning a property rather than an item, but I have a need to grab a bunch of build output items and batch/loop over them, so items worked better in that respect.

There's also probably a real way of escaping $ that just didn't pop up in my searches. Leave a comment if you know; I'd be happy to update.

sublime, xml

I already have my build scripts tidy up my XML configuration files but sometimes I'm working on something outside the build and need to tidy up my XML.

There are a bunch of packages that have HTML linting and tidy, but there isn't really a great XML tidy package... and it turns out you don't really need one.

  1. Get a copy of Tidy and make sure it's in your path.
  2. Install the Sublime package "External Command" so you can pipe text in the editor through external commands.
  3. In Sublime, go to Preferences -> Browse Packages... and open the "User" folder.
  4. Create a new file in there called ExternalCommand.sublime-commands. (The name isn't actually important as long as it ends in .sublime-commands but I find it's easier to remember what the file is for with this name.)

Add the following to the ExternalCommand.sublime-commands file:

[
    {
        "caption": "XML: Tidy",
        "command": "filter_through_command",
        "args": { "cmdline": "tidy --input-xml yes --output-xml yes --preserve-entities yes --indent yes --indent-spaces 4 --input-encoding utf8 --indent-attributes yes --wrap 0 --newline lf" }
    }
]

Sublime should immediately pick this up, but sometimes it requires a restart.

Now when you're working in XML and want to tidy it up, go to the command palette (Ctrl+Shift+P) and run the XML: Tidy command. It'll be all nicely cleaned up!

The options I put here match the ones I use in my build scripts. If you want to customize how the XML looks, you can change up the command line in the ExternalCommand.sublime-commands file using the options available to Tidy.
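
If you use this a lot, you can also bind the same command to a key instead of going through the command palette. A minimal sketch for your user keymap (Preferences -> Key Bindings); the ctrl+alt+x chord is just an example:

[
    {
        "keys": ["ctrl+alt+x"],
        "command": "filter_through_command",
        "args": { "cmdline": "tidy --input-xml yes --output-xml yes --preserve-entities yes --indent yes --indent-spaces 4 --input-encoding utf8 --indent-attributes yes --wrap 0 --newline lf" }
    }
]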

aspnet, rest, json

Here's the situation:

You have a custom object type that you want to use in your Web API application. You want full support for it just like a .NET primitive:

  • It should be usable as a route value like api/operation/{customobject}.
  • You should be able to GET the object and it should serialize the same as it does in the route.
  • You should be able to POST an object as the value for a property on another object and that should work.
  • It should show up correctly in ApiExplorer generated documentation like Swashbuckle/Swagger.

This isn't as easy as you might think.

The Demo Object

Here's a simple demo object that I'll use to walk you through the process. It has some custom serialization/deserialization logic.

public class MyCustomObject
{
  public int First { get; set; }

  public int Second { get; set; }

  public string Encode()
  {
    return String.Format(
        CultureInfo.InvariantCulture,
        "{0}|{1}",
        this.First,
        this.Second);
  }

  public static MyCustomObject Decode(string encoded)
  {
    var parts = encoded.Split('|');
    return new MyCustomObject
    {
      First = int.Parse(parts[0]),
      Second = int.Parse(parts[1])
    };
  }
}

We want the object to serialize as a pipe-delimited string rather than a full object representation:

var obj = new MyCustomObject
{
  First = 12,
  Second = 345
};

// This will be "12|345"
var encoded = obj.Encode();

// This will decode back into the original object
var decoded = MyCustomObject.Decode(encoded);

Here we go.

Outbound Route Value: IConvertible

Say you want to generate a link to a route that takes your custom object as a parameter. Your API controller might do something like this:

// For a route like this:
// [Route("api/value/{value}", Name = "route-name")]
// you generate a link like this:
var url = this.Url.Link("route-name", new { value = myCustomObject });

By default, you'll get a link that looks like this, which isn't what you want: http://server/api/value/MyNamespace.MyCustomObject

We can fix that. UrlHelper uses, in this order:

  • IConvertible.ToString()
  • IFormattable.ToString()
  • object.ToString()

So, if you implement one of these, you can control how the object appears in the URL. I like IConvertible because IFormattable also gets picked up by things like String.Format calls, where you might not want the object serialized the same way.

Let's add IConvertible to the object. You really only need to handle the ToString method; for everything else, just bail with InvalidCastException. You'll also need a GetTypeCode implementation and a simple ToType implementation.

using System;
using System.Globalization;

namespace SerializationDemo
{
  public class MyCustomObject : IConvertible
  {
    public int First { get; set; }

    public int Second { get; set; }

    public static MyCustomObject Decode(string encoded)
    {
      var parts = encoded.Split('|');
      return new MyCustomObject
      {
        First = int.Parse(parts[0]),
        Second = int.Parse(parts[1])
      };
    }

    public string Encode()
    {
      return String.Format(
        CultureInfo.InvariantCulture,
        "{0}|{1}",
        this.First,
        this.Second);
    }

    public TypeCode GetTypeCode()
    {
      return TypeCode.Object;
    }

    public override string ToString()
    {
      return this.ToString(CultureInfo.CurrentCulture);
    }

    public string ToString(IFormatProvider provider)
    {
      return String.Format(provider, "<{0}, {1}>", this.First, this.Second);
    }

    string IConvertible.ToString(IFormatProvider provider)
    {
      return this.Encode();
    }

    public object ToType(Type conversionType, IFormatProvider provider)
    {
      return Convert.ChangeType(this, conversionType, provider);
    }

    /* ToBoolean, ToByte, ToChar, ToDateTime,
       ToDecimal, ToDouble, ToInt16, ToInt32,
       ToInt64, ToSByte, ToSingle, ToUInt16,
       ToUInt32, ToUInt64
       all throw InvalidCastException */
  }
}

There are a couple of interesting things to note here:

  • I explicitly implemented IConvertible.ToString. I did that so the value you'll get in a String.Format call or a standard ToString call will be different than the encoded value. To get the encoded value, you have to explicitly cast the object to IConvertible. This allows you to differentiate where the encoded value shows up.
  • ToType pipes to Convert.ChangeType. Convert.ChangeType uses IConvertible where possible, so you kinda get this for free. Another reason IConvertible is better here than IFormattable.
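
To make that distinction concrete, here's a quick sketch using the demo object from above:

var obj = new MyCustomObject { First = 12, Second = 345 };

// Standard ToString - the "display" format: "<12, 345>"
var display = obj.ToString();

// Explicit IConvertible cast - the encoded format used in URLs: "12|345"
var encoded = ((IConvertible)obj).ToString(CultureInfo.InvariantCulture);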

Inbound Route Value, Action Parameter, and ApiExplorer: TypeConverter

When ApiExplorer is generating documentation, it needs to know whether the action parameter can be converted into a string (so it can go in the URL). It does this by getting the TypeConverter for the object and querying CanConvertFrom(typeof(string)). If the answer is false, ApiExplorer assumes the parameter has to be in the body of a request - which wrecks any generated documentation because that thing should be in the route.

To satisfy ApiExplorer, you need to implement a TypeConverter.

When your custom object is used as a route value coming in or otherwise as an action parameter, you also need to be able to model bind the encoded value to your custom object.

There is a built-in TypeConverterModelBinder that uses TypeConverter so implementing the TypeConverter will address model binding as well.

Here's a simple TypeConverter for the custom object:

using System;
using System.ComponentModel;
using System.Globalization;

namespace SerializationDemo
{
  public class MyCustomObjectTypeConverter : TypeConverter
  {
    public override bool CanConvertFrom(
        ITypeDescriptorContext context,
        Type sourceType)
    {
      return sourceType == typeof(string) ||
             base.CanConvertFrom(context, sourceType);
    }

    public override bool CanConvertTo(
        ITypeDescriptorContext context,
        Type destinationType)
    {
      return destinationType == typeof(string) ||
             base.CanConvertTo(context, destinationType);
    }

    public override object ConvertFrom(
        ITypeDescriptorContext context,
        CultureInfo culture,
        object value)
    {
      var encoded = value as String;
      if (encoded != null)
      {
        return MyCustomObject.Decode(encoded);
      }

      return base.ConvertFrom(context, culture, value);
    }

    public override object ConvertTo(
        ITypeDescriptorContext context,
        CultureInfo culture,
        object value,
        Type destinationType)
    {
      var cast = value as MyCustomObject;
      if (destinationType == typeof(string) && cast != null)
      {
        return cast.Encode();
      }

      return base.ConvertTo(context, culture, value, destinationType);
    }
  }
}

And, of course, add the [TypeConverter] attribute to the custom object.

[TypeConverter(typeof(MyCustomObjectTypeConverter))]
public class MyCustomObject : IConvertible
{
  //...
}

Setting Swagger/Swashbuckle Doc

Despite all of this, generated Swagger/Swashbuckle documentation will still show an expanded representation of your object, which is inconsistent with how a user will actually work with it from a client perspective.

At application startup, you need to register a type mapping with the Swashbuckle SwaggerSpecConfig.Customize method to map your custom type to a string.

SwaggerSpecConfig.Customize(c =>
{
  c.MapType<MyCustomObject>(() =>
      new DataType { Type = "string", Format = null });
});

Even More Control: JsonConverter

Newtonsoft.Json should handle converting your type automatically based on the IConvertible and TypeConverter implementations.

However, if you're doing something extra fancy like implementing a custom generic object, you may need to implement a JsonConverter for your object.

There is some great doc on the Newtonsoft.Json site so I won't go through that here.
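
That said, here's a minimal sketch of what a JsonConverter for the demo object might look like, just to show the shape of it (it's not required here - the TypeConverter covers the simple case):

public class MyCustomObjectJsonConverter : JsonConverter
{
  public override bool CanConvert(Type objectType)
  {
    return objectType == typeof(MyCustomObject);
  }

  public override object ReadJson(
    JsonReader reader,
    Type objectType,
    object existingValue,
    JsonSerializer serializer)
  {
    // Decode the pipe-delimited string back into the object.
    var encoded = reader.Value as string;
    return encoded == null ? null : MyCustomObject.Decode(encoded);
  }

  public override void WriteJson(
    JsonWriter writer,
    object value,
    JsonSerializer serializer)
  {
    // Serialize the object as its pipe-delimited string form.
    writer.WriteValue(((MyCustomObject)value).Encode());
  }
}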

Using Your Custom Object

With the IConvertible and TypeConverter implementations, you should be able to work with your object like any other primitive and have it properly appear in route URLs, model bind, and so on.

// You can define a controller action that automatically
// binds the string to the custom object. You can also
// generate URLs that will have the encoded value in them.
[Route("api/increment/{value}", Name = "increment-values")]
public MyCustomObject IncrementValues(MyCustomObject value)
{
  // Create a URL like this...
  var url = this.Url.Link("increment-values", new { value = value });

  // Or work with an automatic model-bound object coming in...
  return new MyCustomObject
  {
    First = value.First + 1,
    Second = value.Second + 1
  };
}
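
For example, if a particular object happened to encode to the string abc123 (a made-up value), the generated link would be /api/increment/abc123, and an incoming request to that URL would model bind right back to the equivalent object.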

Bonus: Using Thread Principal During Serialization

If, for whatever reason, your custom object needs the user's principal on the thread during serialization, you're in for a surprise: while the authenticated principal is on the thread during your ApiController run, HttpServer restores the original (unauthenticated) principal before response serialization happens.

It's recommended you use HttpRequestMessage.GetRequestContext().Principal instead of Thread.CurrentPrincipal, but that's kind of hard by the time you get down to type conversion and serialization - there's no real way to pass the request context around.

The way you can work around this is by implementing a custom JsonMediaTypeFormatter.

The JsonMediaTypeFormatter has a method GetPerRequestFormatterInstance that is called when serialization occurs. It does get the current request message, so you can pull the principal out then and stick it on the thread long enough for serialization to happen.

Here's a simple implementation:

using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Formatting;
using System.Net.Http.Headers;
using System.Security.Principal;
using System.Text;
using System.Threading;

public class PrincipalAwareJsonMediaTypeFormatter : JsonMediaTypeFormatter
{
  // This is the default constructor to use when registering the formatter.
  public PrincipalAwareJsonMediaTypeFormatter()
  {
  }

  // This is the constructor to use per-request.
  public PrincipalAwareJsonMediaTypeFormatter(
    JsonMediaTypeFormatter formatter,
    IPrincipal user)
    : base(formatter)
  {
    this.User = user;
  }

  // For per-request instances, this is the authenticated principal.
  public IPrincipal User { get; private set; }

  // Here's where you create the per-user/request formatter.
  public override MediaTypeFormatter GetPerRequestFormatterInstance(
    Type type,
    HttpRequestMessage request,
    MediaTypeHeaderValue mediaType)
  {
    var requestContext = request.GetRequestContext();
    var user = requestContext == null ? null : requestContext.Principal;
    return new PrincipalAwareJsonMediaTypeFormatter(this, user);
  }

  // When you deserialize an object, throw the principal
  // on the thread first and restore the original when done.
  public override object ReadFromStream(
    Type type,
    Stream readStream,
    Encoding effectiveEncoding,
    IFormatterLogger formatterLogger)
  {
    var originalPrincipal = Thread.CurrentPrincipal;
    try
    {
      if (this.User != null)
      {
        Thread.CurrentPrincipal = this.User;
      }

      return base.ReadFromStream(type, readStream, effectiveEncoding, formatterLogger);
    }
    finally
    {
      Thread.CurrentPrincipal = originalPrincipal;
    }
  }

  // When you serialize an object, throw the principal
  // on the thread first and restore the original when done.
  public override void WriteToStream(
    Type type,
    object value,
    Stream writeStream,
    Encoding effectiveEncoding)
  {
    var originalPrincipal = Thread.CurrentPrincipal;
    try
    {
      if (this.User != null)
      {
        Thread.CurrentPrincipal = this.User;
      }

      base.WriteToStream(type, value, writeStream, effectiveEncoding);
    }
    finally
    {
      Thread.CurrentPrincipal = originalPrincipal;
    }
  }
}

You can register that at app startup with your HttpConfiguration like this:

// Copy any custom settings from the current formatter
// into a new formatter.
var formatter = new PrincipalAwareJsonMediaTypeFormatter(config.Formatters.JsonFormatter);

// Remove the old formatter, add the new one.
config.Formatters.Remove(config.Formatters.JsonFormatter);
config.Formatters.Add(formatter);
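
For what it's worth, in a standard (non-OWIN) Web API app that means anywhere you have the HttpConfiguration handy. A sketch assuming the default Global.asax-style setup, where GlobalConfiguration comes from System.Web.Http:

protected void Application_Start()
{
  var config = GlobalConfiguration.Configuration;

  // Copy any custom settings from the current formatter
  // into the new principal-aware formatter, then swap them.
  var formatter = new PrincipalAwareJsonMediaTypeFormatter(config.Formatters.JsonFormatter);
  config.Formatters.Remove(config.Formatters.JsonFormatter);
  config.Formatters.Add(formatter);
}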

Conclusion

I have to admit, I'm a little disappointed in the different ways the same things get handled here. Why do some things allow IConvertible but others require TypeConverter? It'd be nice if it were consistent.

In any case, once you know how it works, it's not too hard to implement. Knowing is half the battle, right?

Hopefully this helps you in your custom object creation journey!

autofac, aspnet comments edit

We've been silent for a while, but we want you to know we've been working diligently on trying to get a release of Autofac that works with ASP.NET 5.0/vNext.

When it's released, the ASP.NET vNext compatible version will be Autofac 4.0.

Here's a status update on what's been going on:

  • Split repositories for Autofac packages. We had been maintaining all of the Autofac packages - Autofac.Configuration, Autofac.Wcf, and so on - in a single repository. This made it easier to work with but also caused trouble with independent package versioning and codeline release tagging. We've split everything into separate repositories now to address these issues. You can see the repositories by looking at the Autofac organization in GitHub.
  • Switched to Gitflow. Previously we were just working in master and it was pretty easy. Occasionally we'd branch for larger things, but not always. We've switched to using Gitflow so you'll see the 4.0 work going on in a "develop" branch in the repo.
  • Switched the build. We're trying to get the build working using only the new stuff (.kproj/project.json). This is proving to be a bit challenging, which I'll discuss more below.
  • Switched the tests to xUnit. In order to see if we broke something we need to run the tests, and the only runner in town for vNext is xUnit, so... we switched, at least for core Autofac.
  • Working on code conversion. Most of the differences we've seen in the API have to do with the way you access things through reflection. Of course, IoC containers do a lot of that, so there's a lot of code to update and test. The new build system handles things like resources (.resx) slightly differently, too, so we're working on making sure everything comes across and tests out.
  • Moved continuous integration to AppVeyor. You'll see build badges on all of the README files in the respective repos. The MyGet CI NuGet feed is still live and where we publish the CI builds, but the build proper is on AppVeyor. I may have to write a separate blog entry on why we switched, but basically - we had more control at AppVeyor and things are easier to manage. (We are still working on getting a CI build for the vNext stuff going on there.)

Obviously at a minimum we'd like to get core Autofac out sooner rather than later. Ideally we could get a few other items like Autofac.Configuration out, too, so folks can see things in a more "real world" scenario.

Once we can get a reliable Autofac core ported over, we can get the ASP.NET integration piece done. That work is going on simultaneously, but it's hard to get integration done when the core bits are still moving.

There have, of course, been some challenges. Microsoft's working hard on getting things going, but things still aren't quite baked. Most of it comes down to "stuff that will eventually be there but isn't quite done yet."

  • Portable Class Library support isn't there. We switched Autofac to PCL to avoid having a ton of #if ASPNETCORE50 sorts of code in the codebase. We had that early on with things like Silverlight and PCL made this really nice. Unfortunately, the old-style .csproj projects don't have PCL support for ASP.NET vNext yet (though it's supposed to be coming) and we're not able to specify PCL target profiles in project.json. (While net45 works, it doesn't seem that .NETPortable,Version=v4.6,Profile=Profile259 does, or anything like it.) That means we're back to a couple of #if items and still trying to figure out how to get the other platforms supported. UPDATE: Had a Twitter conversation with Dave Kean and it turns out we may need to switch the build back to .csproj to get PCL support, but PCL should allow us to target ASP.NET vNext.
  • Configuration isn't quite baked. Given there's no web.config or ConfigurationElement support in ASP.NET, configuration is handled differently - through Microsoft.Framework.ConfigurationModel. Unfortunately, they don't currently support the notion of arrays/collections, so for Autofac.Configuration if you wanted to register a list of modules... you can't with this setup. There's an issue for it filed but it doesn't appear to have any progress. Sort of a showstopper and may mean we need to roll our own custom serialization for configuration.
  • The build structure has a steep learning curve. I blogged about this before so I won't recap it, but suffice to say, there's not much doc and there's a lot to figure out in there.
  • No strong naming. One of the things they changed about the new platform is the removal of strong naming for assemblies. Personally, I'm fine with that - it's always been a headache - but there's a lot of code access security stuff in Autofac that we'd put into place to make sure it'd work in partial trust; we had [InternalsVisibleTo] attributes in places... and that all has to change. You can't have a strong-named assembly depend on a not-strong-named assembly, and as they move away from strong naming, it basically means everything has to either maintain two builds (strong named and not strong named) or we stop strong naming. I think we're leaning toward not strong naming - for the same reason we tried getting away from the #if statements. One codeline, one release, easy to manage.

None of this is insurmountable, but it is a lot like dominoes - if we can get the foundation stuff up to date, things will just start falling into place. It's just slow to make progress when the stuff you're trying to build on isn't quite there.

aspnet, net, autofac, github comments edit

Alex and I are working on switching Autofac over to ASP.NET vNext and as part of that we're trying to figure out what the proper structure is for a codeline, how a build should look, and so on.

There is a surprisingly small amount of documentation on the infrastructure bits. I get that things are moving quickly, but the near-total lack of detailed docs makes for a steep learning curve and a lot of frustration. I mean, you can read about the schema for project.json, but even that is out of date/incomplete, so you end up diving into the code, trying to reverse-engineer how things come together.

Below is a sort of almost-stream-of-consciousness braindump of things I've found while working on sorting out build and repo structure for Autofac.

No More MSBuild - Sake + KoreBuild

If you're compiling only on a Windows platform you can still use MSBuild, but if you look at the ASP.NET vNext repos, you'll see there's no MSBuild to be found.

This is presumably to support cross-platform compilation of the ASP.NET libraries and the K runtime bits. That's a good goal and it's worth pursuing - we're going that direction for at least core Autofac and a few of the other core libs that need to change (like Autofac.Configuration). Eventually I can see all of our stuff switching that way.

The way it generally works in this system is:

  • A base build.cmd (for Windows) and build.sh (for Linux) use NuGet to download the Sake and KoreBuild packages.
  • The scripts kick off the Sake build engine to run a makefile.shade which is platform-agnostic.
  • The Sake build engine, which is written in cross-platform .NET, handles the build execution process.

The Sake Build System

Sake is a C#-based make/build system that appears to have been around for quite some time. There is pretty much zero documentation on this, which makes figuring it out fairly painful.

From what I gather, it is based on the Spark view engine and uses .shade view files as the build scripts. When you bring in the Sake package, you get several shared .shade files that get included to handle common build tasks like updating assembly version information or running commands.

It enables cross-platform builds because Spark, C#, and the overall execution process works both on Mono and Windows .NET.

One of the nice things it has built in, and a compelling reason to use it beyond the cross-platform support, is a convention-based standard build lifecycle that runs clean/build/test/package targets in a standard order. You can easily hook into this pipeline to add functionality, but you don't have to think about the order of things. It's pretty nice.

The KoreBuild Package

KoreBuild is a build system layered on top of Sake that is used to build K projects. As with Sake, there is zero doc on this.

If you're using the new K build system, though, and you're OK with adopting Sake, there's a lot of value in the KoreBuild package. KoreBuild layers in Sake support for automatic NuGet package restore, native compile support, and other K-specific goodness. The _k-standard-goals.shade file is where you can see the primary set of things it adds.

The Simplest Build Script

Assuming you have committed to the Sake and KoreBuild way of doing things, you can get away with an amazingly simple top-level build script that will run a standard clean/build/test/package lifecycle automatically for you.

var AUTHORS='Your Authors Here'

use-standard-lifecycle
k-standard-goals

At the time of this writing, the AUTHORS value must be present or some of the standard lifecycle bits will fail... but since the real authors for your package are specified in project.json files now, this is really just a placeholder that has to be there. It doesn't appear to matter what the value is.

Embedded Resources Have Changed

The documentation on project.json currently makes no mention of how embedded resources are handled, but if you look at the schema you'll see that you can specify a resources element in project.json the same way you can specify code.

A project with embedded resources might look like this (minus the frameworks element and all the dependencies and such to make it easier to see):

{
    "description": "Enables Autofac dependencies to be registered via configuration.",
    "authors": ["Autofac Contributors"],
    "version": "4.0.0-*",
    "compilationOptions": {
        "warningsAsErrors": true
    },
    "code": ["**\\*.cs"],
    "resources": "**\\*.resx"
    /* Other stuff... */
}

Manifest Resource Path Changes

If you include .resx files as resources, they correctly get converted to .resources files without doing anything. However, if you have other resources, like an embedded XML file...

{
    "code": ["**\\*.cs"],
    "resources": ["**\\*.resx", "Files\\*.xml"]
}

...then you get an odd generated path. Easiest to see with an example. Say you have this:

~/project/
  src/
    MyAssembly/
      Files/
        Embedded.xml

In old Visual Studio/MSBuild, the file would be embedded and the internal manifest resource stream path would be MyAssembly.Files.Embedded.xml - the folders would represent namespaces and path separators would basically become dots.

However, in the new world, you get a manifest resource path Files/Embedded.xml - literally the relative path to the file being embedded. If you have unit tests or other stuff where embedded files are being read, this will throw you for a loop.
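
If you have code that reads those embedded files, the lookup string has to change to match. A quick illustration - the type and assembly names here are hypothetical:

using System.Reflection;

var assembly = typeof(SomeTypeInMyAssembly).GetTypeInfo().Assembly;

// Old VS/MSBuild-style name - no longer matches:
// assembly.GetManifestResourceStream("MyAssembly.Files.Embedded.xml");

// New project.json-style name - the literal relative path:
var stream = assembly.GetManifestResourceStream("Files/Embedded.xml");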

No .resx to .Designer.cs

A nice thing about the resource system in VS/MSBuild was the custom tool that would run to convert .resx files into strongly-typed resources in .Designer.cs files. There's no automatic support for this anymore.

However, if you give in to the KoreBuild way of things, they do package an analogous tool inside KoreBuild that you can run as part of your command-line build script. It won't pick up changes if you add resources to the file in VS, but it'll get you by.

To get .resx building strongly-typed resources, add it into your build script like this:

var AUTHORS='Your Authors Here'

use-standard-lifecycle
k-standard-goals

#generate-resx .resx description='Converts .resx files to .Designer.cs' target='initialize'

What that does is add a generate-resx build target to your build script that runs during the initialize phase of the standard lifecycle. The generate-resx target depends on a target called resx which does the actual conversion to .Designer.cs files. The resx target comes from KoreBuild and is included when you include the k-standard-goals script, but it doesn't run by default, which is why you have to include it yourself.

Gotcha: The way it's currently written, your .resx files must be in the root of your project (it doesn't use the resources value from project.json). They will generate the .Designer.cs files into the Properties folder of your project. This isn't configurable.

ASP.NET Repo Structure is Path of Least Resistance

If you give over to Sake and KoreBuild, it's probably good to also give over to the source repository structure used in the ASP.NET vNext repositories. KoreBuild in particular has tasks that hardcode the assumption that you're using that repo structure.

The structure looks like this:

~/MyProject/
  src/
    MyProject.FirstAssembly/
      Properties/
        AssemblyInfo.cs
      MyProject.FirstAssembly.kproj
      project.json
    MyProject.SecondAssembly/
      Properties/
        AssemblyInfo.cs
      MyProject.SecondAssembly.kproj
      project.json
  test/
    MyProject.FirstAssembly.Test/
      Properties/
        AssemblyInfo.cs
      MyProject.FirstAssembly.Test.kproj
      project.json
    MyProject.SecondAssembly.Test/
      Properties/
        AssemblyInfo.cs
      MyProject.SecondAssembly.Test.kproj
      project.json
  build.cmd
  build.sh
  global.json
  makefile.shade
  MyProject.sln

The key important bits there are:

  • Project source is in the src folder.
  • Tests for the project are in the test folder.
  • There's a top-level solution file (if you're using Visual Studio).
  • The global.json points to the src folder as the place for project source.
  • There are build.cmd and build.sh scripts to kick off the cross-platform builds.
  • The top-level makefile.shade handles build orchestration.
  • The folder names for the source and test projects are the names of the assemblies they generate.
  • Each assembly has...
      • A Properties folder with AssemblyInfo.cs, where the AssemblyInfo.cs doesn't include any versioning information, just other metadata.
      • A .kproj file (if you're using Visual Studio) that is named after the assembly being generated.
      • A project.json that spells out the authors, version, dependencies, and other metadata about the assembly being generated.

Again, a lot of assumptions seem to be built in that you're using that structure. You can save a lot of headaches by switching.

I can see this may cause some long-path problems. Particularly if you are checking out code into a deep file folder and have a long assembly name, you could have trouble. Think...

C:\users\myusername\Documents\GitHub\project\src\MyProject.MyAssembly.SubNamespace1.SubNamespace2\MyProject.MyAssembly.SubNamespace1.SubNamespace2.kproj

That's 152 characters right there. Add in those crazy WCF-generated .datasource files and things are going to start exploding.

Assembly/Package Versioning in project.json

Part of what you put in project.json is your project/package version:

{
    "authors": ["Autofac Contributors"],
    "version": "4.0.0-*",
    /* Other stuff... */
}

There doesn't appear to be a way to keep multiple assemblies in a solution consistently versioned. That is, you can't put the version info in the global.json at the top level, and I'm not sure where else you could store it. You could probably come up with a custom build task to handle centralized versioning, but it'd be nice if there were something built in for it.

XML Doc Compilation Warnings

The old compiler csc.exe had a thing where it would automatically output compiler warnings for XML documentation errors (syntax or reference errors). The K compiler apparently doesn't do this by default so they added custom support for it in the KoreBuild package.

To get XML documentation compilation warnings output in your build, add it into your build script like this:

var AUTHORS='Your Authors Here'

use-standard-lifecycle
k-standard-goals

#xml-docs-test .clean .build-compile description='Check generated XML documentation files for errors' target='test'
  k-xml-docs-test

That adds a new xml-docs-test target that runs during the test part of the lifecycle (after compile). It requires the project to have been cleaned and built before running. When it runs, it calls the k-xml-docs-test target to manually write out XML doc compilation warnings.

Runtime Update Gotchas

Most build.cmd or build.sh build scripts have a line like this:

CALL packages\KoreBuild\build\kvm upgrade -runtime CLR -x86
CALL packages\KoreBuild\build\kvm install default -runtime CoreCLR -x86

Basically:

  • Get the latest K runtime from the feed.
  • Set the latest K runtime as the 'default' one to use.

While I think this is fine early on, I can see a couple of gotchas with this approach.

  • Setting the 'default' modifies the user profile. When you call kvm install default the intent is to set the alias default to refer to the specified K runtime version (in the above example, that's the latest version). When you set this alias, it modifies a file attached to the user profile containing the list of aliases - it's a global change. What happens if you have a build server environment where lots of builds are running in parallel? You're going to get the build processes changing aliases out from under each other.
  • How does backward compatibility work? At this early stage, I do want the latest runtime to be what I build against. Later, though, I'm guessing I want to pin a revision of the runtime in my build script and always build against that to ensure I'm compatible with applications stuck at that runtime version. I guess that's OK, but is there going to be a need for some sort of... "binding redirect" (?) for runtime versions? Do I need to specify some sort of "list of supported runtime versions?"

Testing Means XUnit and aspnet50

At least at this early stage, XUnit seems to be the only game in town for unit testing. The KoreBuild stuff even has XUnit support built right in, so, again, path of least resistance is to switch if you're not already on it.

I did find a gotcha, though: if you want k test to work, your assemblies must target aspnet50.

Which is to say... in your unit test project.json you'll have a line to specify the test runner command:

{
    "commands": {
        "test": "xunit.runner.kre"
    },
    "frameworks": {
        "aspnet50": { }
    }
}

Specifying that will allow you to drop to a command prompt inside the unit test assembly's folder and run k test to execute the unit tests.

In early work for Autofac.Configuration I was trying to get this to work with the Autofac.Configuration assembly targeting only aspnetcore50 and the unit test assembly targeting aspnetcore50. When I ran k test I got a bunch of exceptions (which I didn't keep track of, sorry). After a lot of trial and error, I found that if both my assembly under test (Autofac.Configuration) and my unit test assembly (Autofac.Configuration.Test) targeted aspnet50, everything would run perfectly.

PCL Support is In Progress

It'd be nice if there was a portable class library profile that just handled everything rather than all of these different profiles + aspnet50 + aspnetcore50. There's not. I gather from Twitter conversations that this may be in the works but I'm not holding my breath.

Also, there's a gotcha with Xamarin tools: If you're using a profile (like Profile259) that targets a common subset of a lot of runtimes including mobile platforms, then the output of your project will change based on whether or not you have Xamarin tools installed. For example, without Xamarin installed you might get .nupkg output for portable-net45+win+wpa81+wp80. However, with Xamarin installed that same project will output for portable-net45+win+wpa81+wp80+monotouch+monoandroid.

Configuration Changes

Obviously with the break from System.Web and some of the monolithic framework, you don't really have web.config as such anymore. Instead, the configuration system has become Microsoft.Framework.ConfigurationModel.

It's a pretty nice and flexible abstraction layer that lets you specify configuration in XML, JSON, INI, or environment variable format. You can see some examples here.
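
Basic usage looks roughly like this - a sketch based on the early Microsoft.Framework.ConfigurationModel bits, so the exact API may shift, and the file name and key here are made up:

using Microsoft.Framework.ConfigurationModel;

var config = new Configuration();
config.AddJsonFile("config.json");
config.AddEnvironmentVariables();

// Values come back as strings via colon-delimited keys
// rather than strongly-typed ConfigurationElement objects.
var connectionString = config.Get("Data:DefaultConnection:ConnectionString");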

That said, it's a huge change and takes a lot to migrate.

  • No appSettings. I'm fine with this because appSettings always ended up being a dumping ground, but it means everything you originally had tied to appSettings needs to change.
  • No ConfigurationElement objects. I can't tell you how much I have written against the old ConfigurationElement mechanism. It had validation, strong type parsing, serialization, the whole bit. None of that works in this new system. You can imagine how this affects things like Autofac.Configuration.
  • List and collection support is nonexistent. I've actually filed a GitHub issue about this. A lot of the configuration I have in both Autofac.Configuration and elsewhere is a list of elements that are parameterized. The current XML and JSON parsers for config specifically disallow list/collection support. Everything must be a unique key in a tree-like hierarchy. That sort of renders the new config system, at least for me, pretty much unusable except for the most trivial of things. Hopefully this changes.
  • Everything is file or in-memory. There's no current support for pulling in XML or JSON configuration that comes from, say, a REST API call from a centralized repository. Even in unit testing, all the bits that actually run the configuration parsing on a stream of XML/JSON are internals rather than exposed - you have to load config from a file or manually create it yourself in memory by adding key/value pairs. There's a GitHub issue open for this, too.

As a workaround, I'm considering using custom object serialization and bypassing the new configuration system altogether. I like the flexibility of the new system but the limitations are pretty overwhelming right now.