windows

I develop using an account that is not an administrator because I want to make sure the stuff I'm working on will work without extra privileges. I have a separate local machine administrator account I can use when I need to install something or change settings.

To make my experience a little easier, I add my user account to a few items in Local Security Policy to allow me to do things like restart the machine, debug things, and use the performance monitoring tools.

In setting up a new Windows Server 2012 dev machine, I found that the domain Group Policy had the "Shut down the system" right locked down so there was no way to allow my developer account to shut down or restart. Painful.

To work around this, I created a shortcut on my Start menu that prompts me for the local machine administrator password and restarts using elevated credentials.

Here's how:

Create a small batch file in your Documents folder or some other accessible location. I called mine restart-elevated.bat. Inside it, use the runas and shutdown commands to prompt for credentials and restart the machine:

runas /user:YOURMACHINE\administrator "shutdown -r -f -d up:0:0 -t 5"

The shutdown command I've specified there will...

  • Restart the computer.
  • Force running applications to close.
  • Alert the currently logged-in user and wait five seconds before doing the restart.
  • Set the shutdown reason code as "user code, planned shutdown, major reason 'other,' minor reason 'other.'"

Now that you have the batch file, throw it on your Start menu. Open up C:\Users\yourusername\AppData\Roaming\Microsoft\Windows\Start Menu and make a shortcut to the batch file. It's easy if you just right-drag the script in there and select "Create shortcut."

Give the shortcut a nice name. I called mine "Restart Computer (Elevated)" so it's easy to know what's going to happen.

I also changed the icon so it's not the default batch file icon:

  • Right-click the shortcut and select "Properties."
  • On the "Shortcut" tab, select "Change Icon..."
  • Browse to %SystemRoot%\System32\imageres.dll and select an icon. I selected the multi-colored shield icon that indicates an administrative action.

Change the icon to something neat

Finally, hit the Start button and go to the list of applications installed. Right-click on the new shortcut and select "Pin to Start."

Restart shortcut pinned to Start menu

That's it - now when you need to restart as a non-admin, click that and enter the password for the local administrator account.

windows

I was setting up a new dev machine the other day and whilst attempting to install TestDriven I got a popup complaining about a BEX event.

Looking in the event log, I saw this:

Faulting application name: TestDriven.NET-3.8.2860_Enterprise_Beta.exe, version: 0.0.0.0, time stamp: 0x53e4d386
Faulting module name: TestDriven.NET-3.8.2860_Enterprise_Beta.exe, version: 0.0.0.0, time stamp: 0x53e4d386
Exception code: 0xc0000005
Fault offset: 0x003f78ae
Faulting process id: 0xe84
Faulting application start time: 0x01cfe410a15884fe
Faulting application path: E:\Installers\TestDriven.NET-3.8.2860_Enterprise_Beta.exe
Faulting module path: E:\Installers\TestDriven.NET-3.8.2860_Enterprise_Beta.exe
Report Id: df1b87dd-5003-11e4-80cd-3417ebb288e7

Nothing in there about a BEX error, though... odd.

Doing a little searching yielded this forum post which led me to disable the Data Execution Prevention settings for the installer.

  • Open Control Panel.
  • Go to the "System and Security" section.
  • Open the "System" option.
  • Open "Advanced System Settings."
  • On the "Advanced" tab, click the "Settings..." button under "Performance."
  • On the "Data Execution Prevention" tab you can either turn DEP off entirely or specifically exclude the installer using the whitelist box provided. (DEP is there to help protect you so it's probably better to just exclude the installer unless you're having other issues.)

vs, sublime

As developers, we've all argued over tabs vs. spaces, indentation size, which line endings to use, and so on.

And, of course, each project you work on has different standards for these things. Because why not.

What really kills me about these different settings, and what probably kills you, is remembering to reconfigure all your editors to match each project's settings. Then when you switch projects, you get to reconfigure everything again.

The open source project EditorConfig aims to rescue you from this nightmare. Simply place an .editorconfig file in your project and your editor can pick up the settings from there. Move to the next project (which also uses .editorconfig) and everything dynamically updates.

I don't know why this isn't the most popular Visual Studio add-in ever.

Here's the .editorconfig I use. I like tab indentation except in view markup. We're a Windows shop, so lines end in CRLF. I hate trailing whitespace. I also like to keep the default settings for some project/VS files.

root = true

[*]
end_of_line = CRLF
indent_style = tab
trim_trailing_whitespace = true

[*.ascx]
indent_style = space
indent_size = 4

[*.aspx]
indent_style = space
indent_size = 4

[*.config]
indent_style = space
indent_size = 4

[*.cshtml]
indent_style = space
indent_size = 4

[*.csproj]
indent_style = space
indent_size = 2

[*.html]
indent_style = space
indent_size = 4

[*.resx]
indent_style = space
indent_size = 2

[*.wxi]
indent_style = space
indent_size = 4

[*.wxl]
indent_style = space
indent_size = 4

[*.wxs]
indent_style = space
indent_size = 4

Note that a recent update to the EditorConfig format supports matching multiple patterns in one section, like:

[{*.wxl,*.wxs}]
indent_style = space
indent_size = 4

...but there's a bug in the Sublime Text plugin around this so I've expanded those for now to maintain maximum compatibility.

I've added one of these to Autofac to help our contributors (and ourselves). It makes it really easy to switch from my preferred tab settings to the spaces Autofac uses. No more debate, no more forgetting.

Now, get out there and standardize your editor settings!

testing, vs

I get a lot of questions from people both at work and online asking for help in troubleshooting issues during development. I'm more than happy to help folks out because I feel successful if I help others to be successful.

That said, there's a limited amount of time in the day, and, you know, I have to get stuff done, too. Plus, I'd much rather teach a person to fish than just hand them the fish repeatedly, and I don't want to be the roadblock stopping folks from getting things done. So I figured it'd be good to write up the basic steps I go through when troubleshooting as a .NET developer, in the hope it will help others.

Plus - if you ever do ask for help, this is the sort of stuff I'd ask you for, along the lines of calling tech support and having them ask you to reboot your computer first. "Is it plugged in?" That sort of thing.

Soooo... assuming you're developing an app, not trying to do some crazy debug-in-production scenario...

Change Your Thinking and Recognize Patterns

This is more of a "preparation for debugging" thing. It is very easy to get intimidated when working with new technology or on something with which you're not familiar. It's also easy to think there's no way the error you're seeing is something you can handle or that it's so unique there's no way to figure it out.

  • Don't get overwhelmed. Stop and take a breath. You will figure it out.
  • Don't raise the red flag. Along with not getting overwhelmed... unless you're five minutes from having to ship and your laptop just lit on fire, consider not sending out the all-hands 'I NEED HELP ASAP' email with screen shots and angry red arrows wondering what this issue means.
  • Realize you are not a special snowflake. That sounds mean, but think about it - even if you're working on the newest, coolest thing ever built, you're building that with components that other people have used. Other folks may not have received literally exactly the same error in literally exactly the same circumstances but there's a pretty high probability you're not the first to run into the issue you're seeing.
  • Don't blame the compiler. Sure, software is buggy, and as we use NuGet to pull in third-party dependencies it means there are a lot of bits out of your control that you didn't write... and sure, they might be the cause of the issue. But most likely it's your stuff, so look there first.
  • Use your experience. You may not have seen this exact error in this exact spot, but have you seen it elsewhere? Have you seen other errors in code similar to the code with the error? Do you recognize any patterns (or anti-patterns) in the code that might clue you in?

Read the Exception Message

This is an extension of RTFM - RTFEM. I recognize that there are times when exception messages are somewhat unclear, but in most cases it actually does tell you what happened with enough detail to start fixing the issue.

And don't forget to look at the stack trace. That can be just as helpful as the message itself.

Look at the Inner Exception

Exceptions don't always just stop with a message and a stack trace. Sometimes one error happens, which then causes a sort of "secondary" error that can seem misleading. Why did you get that weird exception? You're not calling anything in there! Look for an inner exception - particularly if you're unclear on the main exception you're seeing, the inner exception may help you make sense of it.

And don't forget to follow the inner exception chain - each inner exception can have its own inner exception. Look at the whole chain and their inner messages/stack traces. This can really help you pinpoint where the problem is.
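
If it helps to see that in code, here's a minimal sketch (mine, not from any particular project) that dumps the whole chain. Exception.ToString() gives you much the same output, but walking it yourself makes the structure obvious:

using System;

static class ExceptionDumper
{
    // Walk from the outermost exception inward, printing each type,
    // message, and stack trace along the way.
    public static void Dump(Exception ex)
    {
        var depth = 0;
        for (var current = ex; current != null; current = current.InnerException)
        {
            Console.WriteLine("[{0}] {1}: {2}", depth++, current.GetType().FullName, current.Message);
            Console.WriteLine(current.StackTrace);
        }
    }
}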

Boogle the Message

You know, "Bing/Google" == "Boogle" right? Seriously, though, put the exception message(s) into your favorite search engine of choice and see what comes up.

  • Remove application-specific values - stuff like variable names or literal string values. You're probably more interested in places where that type of exception happened than in literally the exact same exception.
  • Add "hint words" - like if it happened in an MVC application, throw "MVC" in the query. It can help narrow down the scope of the search.
  • Don't give up after the first search - just because the first hit isn't exactly the answer doesn't mean the answer isn't out there. Modify the query to see if you can get some different results.

Ask a Rubber Duck

Rubber duck debugging is a pretty well-known strategy where you pretend to ask a rubber duck your question and, as you are forced to slow down and ask the duck... you end up answering your own question.

Seriously, though, step back from the keyboard for a second and think through the error you're seeing and what might be causing it. It's easy to get mental blinders on; take 'em off!

Break in a Debugger

Put a breakpoint on the line of code throwing the exception. Use the various debugging windows in Visual Studio to look at the values of the variables in the vicinity. Especially if you're getting something like a NullReferenceException you can pretty quickly figure out what's null and what might be causing trouble.
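
Here's a contrived sketch of the kind of chain worth breaking on (made-up types, obviously). At the breakpoint, check order, order.Customer, and order.Customer.Address in the Locals or Watch window to see which link is actually null:

using System;

public class Order { public Customer Customer { get; set; } }
public class Customer { public Address Address { get; set; } }
public class Address { public string City { get; set; } }

public class Program
{
    public static void Main()
    {
        var order = new Order(); // Customer was never assigned, so it's null.
        Console.WriteLine(order.Customer.Address.City); // NullReferenceException thrown here.
    }
}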

Step Into Third-Party Code

Many popular NuGet packages put symbol/source packages up on SymbolSource.org. If you configure Visual Studio to use SymbolSource, you can step into the source for those packages. You can also step into the Microsoft .NET Framework source (the SymbolSource setup enables both scenarios).

Do this!

If you don't know what's going on, try stepping into the code. Figure out why the error is happening, then follow it back to figure out the root cause.

Use a Decompiler

If you can't step into the third-party source, try looking at the third-party stuff in a decompiler like Reflector, JustDecompile, dotPeek, or ILSpy.

You can use the stack trace to narrow down where the issue might be happening and trace back toward the root cause. You might not get an exact line, but it'll narrow things down a lot.

Create a Reproduction

Usually the crazy hard-to-debug stuff happens in a large, complex system, and figuring out why it's happening can feel overwhelming. Try creating a reproduction in a smaller, standalone project. Doing this is a lot like rubber duck debugging, but it gives you a bit more concrete information.

  • As you work through creating the reproduction, the number of moving pieces becomes easier to visualize.
  • If you can easily reproduce the issue in a smaller environment, you can troubleshoot with many fewer moving pieces and that's easier than doing it in the complex environment. Then you can take that info to the larger system.
  • If you can't easily reproduce the issue then at least you know where the problem isn't. That can sometimes be just as helpful as knowing where the issue is.

Next Steps

Once you've gotten this far, you probably have a lot of really great information with which you can ask a very clear, very smart question. You've probably also learned a ton along the way that you can take with you on your next troubleshooting expedition, making you that much better at what you do. When you do ask your question (e.g., on StackOverflow) be sure to include all the information you've gathered so people can dive right into answering your question.

Good luck troubleshooting!

personal, home

My daughter and I are both big Doctor Who fans. She has a bathroom that she primarily uses, so we decided to make that into a ThinkGeek extravaganza of TARDIS awesomeness. Here's what we got:

Doctor Who TARDIS Bath Mat

The bath mat is pretty decent. It is smaller than the throw rug and works well in the bathroom.

Doctor Who TARDIS Shower Curtain

The shower curtain is OK, but it is a thinner plastic than I'd like. I really wish it was fabric, like what you'd put in front of a plastic curtain; or maybe a nice thick plastic... but it's not. The first one we received arrived damaged - the print on it had rubbed off and one of the metal grommets at the top was ripped out. ThinkGeek support was super awesome and sent us a new one immediately.

Of course, then my stupid cat decided to chew through a section on the bottom of the new one so I had to do my best to disguise that, but it still irritates me. Damn cat.

Doctor Who TARDIS Ceramic Toothbrush Holder

The toothbrush holder is really nice. Looks good and nice quality. My three-year-old daughter's toothbrush is just a tad short for it and falls in, but that's not a fault in the holder. She just needs a bigger toothbrush.

Doctor Who 3-Piece Bath Towel Set

We got two sets of these towels and they are awesome. Very thick, very plush. I wish all towels were nice like this.

Doctor Who TARDIS Shower Rack

We actually have the shower rack hanging on our wall because our shower is one of those fiberglass inserts rather than tile, so the shower head doesn't sit flush with the wall. We have some hair supplies in there. One problem I ran into with this was that the little stickers didn't adhere very well. I had to do a little super glue work to get the stickers stuck down permanently. It could have just been this one unit, but it was less than optimal.

The bathroom looks really good with all this stuff in it, and my daughter is super pleased with it.

vs, wcf

I've run into this issue a couple of times now and I always forget what the answer is, so... blog time.

We have some WCF service references in our projects and we were in the process of updating the generated code (right click on the reference, "Update Service Reference") when we got an assembly reference error:

Could not load file or assembly AssemblyNameHere, Version=1.0.0.0, Culture=neutral, PublicKeyToken=1234567890123456 or one of its dependencies. The located assembly's manifest definition does not match the assembly reference.

Normally you can fix this sort of thing by adding a binding redirect to your app.config or web.config and calling it a day. But we had the binding redirect in place for the assembly already. What the... ?!

As it turns out, svcutil.exe and the service reference code generation process don't use binding redirects from configuration. It didn't matter where we put the redirect, we still got the error.

The fix is to reduce the set of assemblies with types that get reused. Right-click the service reference and select "Configure Service Reference." Narrow the "reuse types in referenced assemblies" setting to be very specific. If you aren't actually reusing types from a particular assembly (especially third-party assemblies you aren't building), don't include it in the list.

We were really only reusing types in one assembly, not the whole giant set of assemblies referenced. Cleaning that up removed the need for the binding redirect and everything started working again as normal.

Note: If you really want to use binding redirects, you can add them to devenv.exe.config so Visual Studio itself uses them. Not awesome, and I wouldn't recommend it, but... technically possible.

testing

I've noticed that some of our unit tests are running a little long and I'm trying to figure out which ones are taking the longest. While TeamCity has some nice NUnit timing info, it's a pain to build the whole thing on the build server when I can just try things out locally.

If you have NUnit writing XML output in your command line build (using the /xml: switch) then you can use Log Parser to query the XML file and write a little CSV report with the timings in it:

LogParser.exe "SELECT name, time FROM Results.xml#//test-case ORDER BY time DESC" -i:xml -fMode:Tree -o:csv

A little fancier: take all of the tests across several reports and write the output to a file rather than the console:

LogParser.exe "SELECT name, time INTO timings.csv FROM *.xml#//test-case ORDER BY time DESC" -i:xml -fMode:Tree -o:csv

And fancier still: Take all of the reports across multiple test runs and get the average times for the tests (by name) so you can see which tests over time run the longest:

LogParser.exe "SELECT name, AVG(time) as averagetime INTO timings.csv FROM *.xml#//test-case GROUP BY name ORDER BY averagetime DESC" -i:xml -fMode:Tree -o:csv

blog

Now that I've moved to GitHub Pages for my blog, I find that I sometimes forget what the YAML front matter for a blog entry should look like, so I end up copying and pasting.

To make the job easier, I've created a little snippet/template for Sublime Text for blog entries. Take this XML block and save it in your User package as Empty GitHub Blog Post.sublime-snippet and it'll be available when you switch syntax to Markdown:

<snippet>
  <content><![CDATA[
---
layout: post
title: "$1"
date: ${2:2014}-${3:01}-${4:01} -0800
comments: true
tags: [$5]
---
$6
]]></content>
  <scope>text.html.markdown</scope>
</snippet>

I've added placeholders so you can tab your way through each of the front matter fields and finally end up at the body of your post.

blog, github

Well, I finally did it.

For quite some time I've been looking at migrating my blog away from Subtext. At first I wanted to go to WordPress, but then... evaluating all the options, the blog engines, etc., I started thinking "less is more." I originally started out using Subtext because I thought I'd want to extend the blog to do a lot of cool things. It was a great .NET blog platform, I'm a .NET guy - it seemed perfect.

The problem was... I didn't do any of that.

I contributed as I could to the project, and there was a lot of great planning for the ever-impending "3.0" release, but... it just never came together. People got busy, stuff happened. Eventually, Subtext development pretty much stopped.

Part of my challenge with Subtext was the complexity of it. So many moving pieces. Database, assemblies, tons of pages and settings, skins, and basically no documentation. (Well, there was some documentation that I had been writing, but an unfortunate server crash lost it all.) I started looking at hosted solutions like WordPress that would be easy to use and pretty common. But, then, the challenge with any of those systems is getting your data in/out, etc. Plus, hosting costs.

So I started leaning toward a code generation sort of system. Fewer moving pieces, simpler data storage. Also, cheap. Because I'm cheap.

I decided on GitHub Pages because it's simple, free, reliable... plus, it's pretty well documented, Jekyll usage is simple, and Markdown is pretty sweet.

Good Stuff About GitHub Pages

  • It's simple. Push a new post in Markdown format to your blog repo and magic happens.
  • It's portable. All the posts are in simple text, right in the repo, so if you need to move somewhere else, it's all right there. No database export, no huge format conversion craziness.
  • It's free. Doesn't get cheaper than that.
  • It's reliable. I'm not saying 100% uptime, but putting your blog in GitHub Pages means you have the whole GitHub team watching to see if the server is down.
  • Browser editor. Create a new post right in the GitHub web interface. Nice and easy.

Less Awesome Stuff About GitHub Pages

  • There's no server-side processing even if you need it. Ideally I'd want a 404 handler that can issue a 302 from the server-side to help people get to broken permalinks. But the 404 is just HTML generated with Jekyll, so you have to rely on JS to do the redirect. Not so awesome for search engines. I have some [really old] blog entries that were on a PHP system where the permalink is querystring-based, so I can't even use jekyll-redirect-from to fix it.
  • The Jekyll plugins are limited. GitHub Pages has very few plugins for Jekyll that it supports. On something like OctoPress you hook up the page generation yourself so you can have whatever plugins you want... but you can't add plugins to GitHub Pages, so the things you can do are kind of limited. (I totally understand why this is the case, doesn't make it awesome.)
  • No post templates, painful preview. With Windows Live Writer or whatever, you didn't have to deal with YAML front matter or any of that. The GitHub web editor interface doesn't have an "add new post" template, so that's a bit rough. Also, to preview your post, you have to commit the post the first time, then you have the "preview" tab you can use to see "changes" in your post. It renders Markdown nicely, but it's sort of convoluted.
  • Drafts are weird. I may be doing this wrong, but it looks like you have to put posts in progress into a _drafts folder in your blog until they're ready to go, at which point you move them to _posts.
  • Comments don't exist. It's not a blog host, really, so you need to use a system like Disqus for your comments. That's not necessarily a bad thing, but it means you have some extra setup.

My Migration Process

A lot of folks who move their blog to GitHub Pages sort of "yadda yadda" away the details. "I exported my content, converted it, and imported it into GitHub Pages." That... doesn't help much. So I'll give you as much detail as I can.

Standing on the Shoulders of Giants

Credit where credit is due:

Phil Haack posted a great article about how he migrated from Subtext to GitHub Pages that was super helpful. He even created a Subtext exporter that was the starting point for my data export. I, uh, liberated a lot of code from his blog around the skin, RSS feed, and so on to get this thing going.

David Ebbo also moved to GitHub Pages and borrowed from Mr. Haack but had some enhancements I liked, like using GitHub user pages for the repository and using "tags" instead of "categories." So I also borrowed some ideas and code from Mr. Ebbo.

If you don't follow these blogs, go subscribe. These are some smart guys.

You Need to Know Jekyll and Liquid

You don't have to be an expert, but it is very, very helpful to know Jekyll (the HTML content generator) and Liquid (the template engine) at least at a high level. As you work through issues and fix styles or config items, that knowledge helps a lot in tracking things down.

Initialize the Repository

I'm using GitHub user pages for my blog, so I created a repository called tillig.github.io to host my blog. For your blog, it'd be yourusername.github.io. The article on user pages is pretty good to get you going.

Get the Infrastructure Right

Clone that repo to your local machine so you can do local dev/test to get things going. Note that if you check things in and push to the repo as you develop, you may get some emails about build failures, so local dev is good.

The GitHub page on using Jekyll tells you about how to get your local dev environment set up to run Jekyll locally.

There's a lot to set up here, from folder structure to configuration, so the easiest way to start is to copy from someone else's blog. This is basically what I did - I grabbed Haack's blog, put that into my local repo, and got it running. Then I started changing the values in _config.yml to match my blog settings and fixed up the various template pieces in the _includes and _layouts folders. You can start with my blog if you like.

GOTCHA: GitHub uses pygments.rb for code syntax highlighting. If you're developing on Windows, there's something about pygments.rb that Windows hates. Or vice versa. Point being, for local dev on Windows you will need to turn off syntax highlighting by setting highlighter: null in your _config.yml.

Add Search

I didn't see any GitHub Pages blogs that had search on them, so I had to figure that one out myself. Luckily, Google Custom Search makes it pretty easy to get this going. Create a new "custom search engine" and set it up to search just your site. You can configure the look and feel of the search box and results page right there and it'll give you the script to include in your site. Boom. Done.

Fix Up RSS

The Octopress-based RSS feed uses a custom plugin, expand_urls, to convert relative URLs like /about.html into absolute URLs like http://yoursite.com/about.html. That no worky in GitHub Pages, so you have to use a manual replace filter on URLs in the RSS feed. (If you look at my atom.xml file you can see this in action.)

Make Last Minute Fixes to Existing Content

I found that it was easier to do any last-minute fixes in my existing blog content rather than doing it post-export. For example, I was hosting my images in ImageShack for a long time, but the reliability of ImageShack (even with a paid account) is total crap. I lost so many images... argh. So I went through a process of moving all of my images to OneDrive and it was easier to do that in my original blog so I could make sure the links were properly updated.

If you have anything like that, do it before export.

Export Your Content and Comments

This was the trickiest part, at least for me.

Haack was running on his own server and had direct database access to his content, so a little SQL and he was done. I was on a shared server without any real SQL Server Management Studio access or remote access to run SQL against my database, so I had to adjust the export mechanism to be more of a two-phase thing: get the data out of my database using an .aspx page that executed in the context of the blog, then take the exported content and transform it into blog posts.

There also wasn't anything in Haack's setup to handle the comment export for use in Disqus, so I had to do that, too.

Oh, and Haack was on some newer/custom version of Subtext where the database schema was different from mine, so I had to fix that to work with Subtext 2.5.2.0.

Here's my forked version of Haack's subtext-jekyll-exporter that you can use for exporting your content and comments. You can also fork it as a starter for your own export process.

  • Drop the JekyllExport.aspx and DisqusCommentExport.aspx files into your Subtext blog.
  • Save the output of each as an XML file.
  • Make your URLs relative. I have a little section on this just below, but it's way easier to deal with local blog development if your URLs don't have the protocol or host info in them for internal links. It's easier to do this in the exported content before running the exporter to process into Markdown.
  • Run the SubtextJekyllExporter.exe on the XML from JekyllExport.aspx to convert it into Markdown. These will be the Markdown pages that go in the _posts/archived folder and they'll have Disqus identifiers ready to go to tie existing comments to the articles.
  • In Disqus, import a "general" WXR file and use the XML from DisqusCommentExport.aspx as the WXR file. It may take a while to import, so give it some time.

You can test this out locally when it's done. Using Jekyll to host your site locally, check the comment section on one of the posts that has comments. They should show up.

Make URLs Relative

It is way easier to test your blog locally if the links work. That means if you have absolute links like http://yoursite.com/link/target.html, they're only going to work against the live site. If, however, you have /link/target.html, then it'll work on your local test machine, it'll work from yourusername.github.io, and it'll work from your final blog site.

I did a crude replacement on my blog entries that seemed to work pretty well.

Replace ="http://www.mysite.com/" with ="/" and that seemed to be enough (using my domain name in there, of course). YMMV on that one.
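
If you'd rather script that than hand-edit, here's a rough sketch of the idea - the folder path and domain are placeholders, not my actual setup:

using System.IO;

class MakeUrlsRelative
{
    static void Main()
    {
        // Point this at wherever your exported posts live.
        foreach (var file in Directory.GetFiles(@"C:\blog-export\_posts", "*.markdown"))
        {
            var content = File.ReadAllText(file);

            // Turn absolute internal links into root-relative ones so they work
            // locally, from yourusername.github.io, and from the final domain.
            content = content.Replace("=\"http://www.mysite.com/", "=\"/");

            File.WriteAllText(file, content);
        }
    }
}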

Push It

Once everything looks good locally, push it to your public repo. (If you're on Windows, don't forget to comment out that highlighter: null in _config.yml.) Give it a few minutes and you should be able to see your blog at http://yourusername.github.io - navigate around and do any further fix-up.

Configure DNS

This was a finicky thing for me. I don't mess with DNS much so it took me a bit to get this just right.

My blog is at www.paraesthesia.com (I like the www part, some folks don't). GitHub has some good info about setting up your custom domain but it was still a little bit confusing and "spread out" for me.

For the www case, like mine, what got me (and wasn't clear) was that you have to set up both the A records and the CNAME record.

Once you do that, yourdomain.com and www.yourdomain.com will both make it to your blog. (If you don't like the www part, make your CNAME file in your repo only contain yourdomain.com instead of www.yourdomain.com.)

Remaining Items

I still have a few things to fix up, but for the most part I'm moved over.

There are still some quirky CSS things I need to fix that I'm not happy with. Looking at the headers in this entry, for example, they have some crazy overlapping with the first line under them.

I have some in-page JS demos that were sort of challenging to set up in Subtext but should be easier in the new setup. I need to move those over; right now they're broken.

I also have the "Command Prompt Here Generator" app that was running on my blog site but is now inaccessible because I have to get it going on a site that can run server-side code. I'll probably use my old blog host site as an "app host" now where I just put little dynamic apps. It'll be easier to do that stuff without Subtext right in the root of the site.

I'll get there, but for now... I'm feeling pretty good.

.NET

One of the most frequent StackOverflow questions we see for Autofac involves some confusion over how to deal with components in applications like MVC and Web API that support per-request dependencies.

To help folks out, I wrote up a fairly comprehensive document on the Autofac doc site addressing the various questions around per-request dependencies.

We’re still working on porting the wiki content over to the doc site. While that’s going on, things are a little “split” between the two sites. If the content is on the doc site, go for that over the wiki. The doc site content is a little more robust and detailed than what the wiki had (which is part of the desire to move to the doc site).

.NET

Over at Autofac we’re trying to get a more robust set of documentation out to help folks. The wiki is nice, but it leaves a lot to be desired.

As part of that, we’re also trying to get some answers published to some of the more frequently asked questions we see popping up on StackOverflow.

Today I pushed out the doc answering that timeless question, “How do I pick a service implementation based on a particular context or consuming object?”

Media

I’ve been ripping a lot of SD video lately, converting my full-disc VIDEO_TS folder images to .m4v files for use with Plex, and I’ve learned quite a bit about what I like (or don’t) and things I have to look for in the final conversion. Surprisingly enough, default settings never seem to work quite right for me.

The settings I use are some minor changes to the “High Profile” default. I’ll note the differences.

Picture

  • Width: 720 (same as the source)
  • Anamorphic: Loose
  • Modulus: 2
  • Cropping: Automatic

Filters

  • Detelecine: Off
  • Decomb: Default
  • Deinterlace: Off
  • Denoise: Off
  • Deblock: Off
  • Grayscale: Unchecked

Video

  • Video Codec: H.264 (x264)
  • Framerate FPS: Same as source
  • Constant Framerate (this is different than High Profile)
  • x264 Preset: Slower
  • x264 Tune: Film, Animation, or Grain (depends on the source – I change this per item ripped; this is different than High Profile)
  • H.264 Profile: High
  • H.264 Level: 4.1
  • Fast Decode: Unchecked
  • Extra Options: Empty
  • Quality: Constant Quality 18 (this is different than High Profile)

Audio

Track 1:

  • Source: The best AC3 sound track on there with the most channels. (It usually does a good job of auto-detecting.)
  • Codec: AAC (faac)
  • Bitrate: 256 (this is different than High Profile)
  • Samplerate: Auto
  • Mixdown: Dolby Pro Logic II
  • DRC: 0.0
  • Gain: 0

Track 2:

  • Source: Same as Track 1.
  • Codec: AC3 Passthru

Track 3 (depending on source)

  • Source: The DTS track, if there is one.
  • Codec: DTS Passthru

Subtitles: Generally none, but there are some movies that need them, in which case I’ll add one track. High Profile (and my settings) generally don’t include this.

  • Source: English (VobSub)
  • Forced Only: Unchecked
  • Burn In: Checked
  • Default: Unchecked
  • Everything else default.

Chapters: I do select “Create chapter markers” but I let the automatic detection do the naming and timing.

This seems to give me the best bang for my buck. I tried with lower quality settings and such, but it never quite got where I wanted it. With these settings, I generally can’t tell the difference between the original source and the compressed version.

Now, when a rip is done, I’ve found that I have to check for a few things to see if something needs to be tweaked or re-ripped.

  • Cropping: About 80% of the time, Handbrake does an awesome job cropping the letterbox off and cleaning up the sides. That other 20%, you get this odd floating black bar on one or more of the sides where the picture wasn’t cropped right. If this happens, I re-rip and change the cropping on the picture settings to “manual” and go through a series of preview/fix/preview/fix until I get it right. I’ve found that a screen ruler program can help with that first “fix” to get it pretty close to where it should be. Anymore, I’ll usually run a five-second preview of the rip to check the crop before letting the machine run the whole thing.
  • Film grain: By default I try the “Film” x264 Tune setting for most movies unless I’m sure there’s grain or a high level of detail. Nevertheless, sometimes I’ll come across a film where the background in dark scenes appears as though it’s “moving” – like a thousand little grains of sand vibrating. If I see that, I re-rip and switch to the “Grain” x264 Tune setting and that fixes it right up. I also sometimes see a film that looks like all the definition was lost and things are blocky – in this case, I’ll also switch to “Grain.”
  • Lip sync: I started out using the default “Variable Framerate” setting on the Video tab. I’m now in the process of re-ripping like a quarter of my movies because I didn’t stop to see if the lips were synchronized with the words in the soundtrack. By switching to “Constant Framerate,” everything syncs up and looks right. I’ve since switched my default setting to Constant Framerate.

.NET, Web Development

I was working on a REST API project and was tasked with creating a DELETE operation that would take the resource ID in the URL path, like:

DELETE /api/someresource/reallylongresourceidhere HTTP/1.1

The resource ID we had was a really, really long base-64 encoded value – about 750 characters long. No, don’t bug me about why that was the case, just… stick with me. I had to get it to work in both IIS and OWIN hosting.
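
For context, the operation itself was nothing fancy – something along these lines (a rough sketch with made-up names, assuming Web API 2 attribute routing):

using System.Net;
using System.Web.Http;

public class SomeResourceController : ApiController
{
    // DELETE /api/someresource/{id} - "id" is the ~750-character base64 value.
    [HttpDelete]
    [Route("api/someresource/{id}")]
    public IHttpActionResult Delete(string id)
    {
        // Hypothetical lookup/removal; the real project did the actual work here.
        return StatusCode(HttpStatusCode.NoContent);
    }
}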

STOP. STOP RIGHT HERE. I’m going to tell you some ways to tweak URL request validation. This is a security thing. Security is Good. IN THE END, I DIDN’T DO THESE. I AM NOT RECOMMENDING YOU DO THEM. But, you know, if you run into one of the issues I ran into… here are some ways you can work around it at your own risk.

Problem 1: The Overall URL Length

By default, ASP.NET has a max URL length set at 260 characters. Luckily, you can change that in web.config:

<configuration>
  <system.web>
    <httpRuntime maxUrlLength="2048" />
  </system.web>
</configuration>

Setting that maxUrlLength value got me past the first hurdle.

Problem 2: URL Decoding

Base 64 includes the “/” character – the path slash. Even if you encode it on the URL like this…

/api/someresource/abc%2Fdef%2fghi

…when .NET reads it, it gets entirely decoded:

/api/someresource/abc/def/ghi

…which then, of course, got me a 404 Not Found because my route wasn’t set up like that.

This is also something you can control through web.config:

<configuration>
  <uri>
    <schemeSettings>
      <add name="http" genericUriParserOptions="DontUnescapePathDotsAndSlashes" />
      <add name="https" genericUriParserOptions="DontUnescapePathDotsAndSlashes" />
    </schemeSettings>
  </uri>
</configuration>

Now that the URL is allowed through and it's not being proactively decoded (so I can get routing working), the last hurdle is...

Problem 3: Max Path Segment Length

The resource ID, if you recall, is about 750 characters long. I can have a URL come through that’s 2048 characters long, but there’s still validation on each path segment length.

The tweak for this is in the registry. Find the registry key HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\HTTP\Parameters and add a DWORD value UrlSegmentMaxLength with the value of the max segment length. The default is 260; I had to update mine to 1024.
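
If you’d rather script that than click through regedit, something like this does the same thing (a sketch – it has to run elevated):

using Microsoft.Win32;

class Program
{
    static void Main()
    {
        // Creates the DWORD value if it doesn't already exist.
        Registry.SetValue(
            @"HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\HTTP\Parameters",
            "UrlSegmentMaxLength",
            1024,
            RegistryValueKind.DWord);
    }
}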

After you change that value, you have to reboot to get it to take effect.

This is the part that truly frustrated me. Even running in the standalone OWIN host, this value is still used. I thought OWIN and OWIN hosting were getting us away from IIS, but the low-level http.sys is still being used in there somewhere. I guess I just didn’t realize that and maybe I should have. I mean, .NET is all just wrappers on unmanaged crap anyway, right? :)

WHAT I ENDED UP DOING

Having to do all that to get this working set me on edge. I don’t mind increasing, say, the max URL length, but I had to tweak a lot, and that left me with a bad taste in my mouth. Deployment pain, potential security pain… not worth it.

Since we had control over how the resource IDs were generated in the first place, I changed the algorithm so we could fit them all under 260 characters – the max path segment length. I left the overall URL length configuration in web.config at a higher number, but shrunk it down to 1024 instead of sticking at 2048. I ditched the registry change – no longer needed.

.NET, Visual Studio

In the Autofac project we maintain all of the various packages and integrations in one solution. In order to make sure each package builds against the right version of Autofac, all references are redirected through NuGet.

A challenge we face is that when we’re testing a new release of Autofac, we want to update specific integration projects to the latest version of Autofac so we can do testing, eventually upgrading everything as needed. Running through the GUI to do something like that is a time sink.

Instead, I use a little script in the Package Manager Console to filter out the list of projects I want to update and then run the update command on those filtered projects. It looks like this:

Get-Project -All | Where-Object { $_.Name -ne "Autofac" -and $_.Name -ne "Autofac.Tests" } | ForEach-Object { Update-Package -Id "Autofac" -ProjectName $_.Name -Version "3.5.0-CI-114" -IncludePrerelease -Source "Autofac MyGet" }

In that little script...

  • Get-Project -All gets the entire list of projects in the current loaded solution.
  • The Where-Object is where you filter out the stuff you don’t want upgraded. I don’t want to run the Autofac upgrade on Autofac itself, but I could add other projects to the filter, too.
  • The ForEach-Object runs the package update for each selected project.
    • The -Version parameter is the build from our MyGet feed that I want to try out.
    • The -Source parameter is the NuGet source name I've added for our MyGet feed.

You might see a couple of errors go by if you don’t filter out the update for a project that doesn’t have a reference to the thing you’re updating (e.g., if you try to update Autofac in a project that doesn’t have an Autofac reference) but that’s OK.

James Chambers has a great roundup of some additional helpful NuGet PowerShell script samples. Definitely something to keep handy.

.NET, Code Snippets

I’ve run across a situation that many folks online seem to have hit: I have a solution with a pretty modular application, and when I build it, I don’t get all the indirect dependencies copied in.

I found a blog article with an MSBuild target in it that supposedly fixes some of this indirect copying nonsense, but as it turns out, it doesn’t actually go far enough.

My app looks something like this (from a reference perspective):

  • Project: App Host
    • Project: App Startup/Coordination
      • Project: Core Utilities
      • Project: Server Utilities
        • NuGet references and extra junk

The application host is where I need everything copied so it all works, but the NuGet references and extra junk way down the stack aren’t making it there, so there are runtime explosions.

I also decided to solve this with MSBuild, but using an inline code task. This task will…

  1. Look at the list of project references in the current project.
  2. Go find the project files corresponding to those project references.
  3. Calculate the path to the project reference output assembly and include that in the list of indirect references.
  4. Calculate the paths to any third-party references that include a <HintPath> (indicating the item isn’t GAC’d) and include those in the list of indirect references.
  5. Look for any additional project references – if they’re found, go to step 2 and continue recursing until there aren’t any project references we haven’t seen.

While it’s sort of the “nuclear option,” it means that my composable application will have everything ready and in place at the host level, so any plugin runtime assemblies dropped in can be confident they’ll find all the platform support they expect.

Before I paste in the code, the standard disclaimers apply: Works on my box; no warranty expressed or implied; no support offered; YMMV; and so on. If you grab this and need to tweak it to fit your situation, go for it. I’m not really looking to make this The Ultimate Copy Paste Solution for Dependency Copy That Works In Every Situation.

And with that, here’s a .csproj file snippet showing how to use the task as well as the task proper:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="12.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- All the stuff normally found in the project, then in the AfterBuild event... -->
  <Target Name="AfterBuild">
    <!-- Here's the call to the custom task to get the list of dependencies -->
    <ScanIndirectDependencies StartFolder="$(MSBuildProjectDirectory)"
                              StartProjectReferences="@(ProjectReference)"
                              Configuration="$(Configuration)">
      <Output TaskParameter="IndirectDependencies" ItemName="IndirectDependenciesToCopy" />
    </ScanIndirectDependencies>

    <!-- Only copy the file in if we won't stomp something already there -->
    <Copy SourceFiles="%(IndirectDependenciesToCopy.FullPath)"
          DestinationFolder="$(OutputPath)"
          Condition="!Exists('$(OutputPath)\%(IndirectDependenciesToCopy.Filename)%(IndirectDependenciesToCopy.Extension)')" />
  </Target>


  <!-- THE CUSTOM TASK! -->
  <UsingTask TaskName="ScanIndirectDependencies" TaskFactory="CodeTaskFactory" AssemblyFile="$(MSBuildToolsPath)\Microsoft.Build.Tasks.v12.0.dll">
    <ParameterGroup>
      <StartFolder Required="true" />
      <StartProjectReferences ParameterType="Microsoft.Build.Framework.ITaskItem[]" Required="true" />
      <Configuration Required="true" />
      <IndirectDependencies ParameterType="Microsoft.Build.Framework.ITaskItem[]" Output="true" />
    </ParameterGroup>
    <Task>
      <Reference Include="System.Xml"/>
      <Using Namespace="Microsoft.Build.Framework" />
      <Using Namespace="Microsoft.Build.Utilities" />
      <Using Namespace="System" />
      <Using Namespace="System.Collections.Generic" />
      <Using Namespace="System.IO" />
      <Using Namespace="System.Linq" />
      <Using Namespace="System.Xml" />
      <Code Type="Fragment" Language="cs">
      <![CDATA[
var projectReferences = new List<string>();
var toScan = new List<string>(StartProjectReferences.Select(p => Path.GetFullPath(Path.Combine(StartFolder, p.ItemSpec))));
var indirectDependencies = new List<string>();

bool rescan;
do{
  rescan = false;
  foreach(var projectReference in toScan.ToArray())
  {
    if(projectReferences.Contains(projectReference))
    {
      toScan.Remove(projectReference);
      continue;
    }

    Log.LogMessage(MessageImportance.Low, "Scanning project reference for other project references: {0}", projectReference);

    var doc = new XmlDocument();
    doc.Load(projectReference);
    var nsmgr = new XmlNamespaceManager(doc.NameTable);
    nsmgr.AddNamespace("msb", "http://schemas.microsoft.com/developer/msbuild/2003");
    var projectDirectory = Path.GetDirectoryName(projectReference);

    // Find all project references we haven't already seen
    var newReferences = doc
          .SelectNodes("/msb:Project/msb:ItemGroup/msb:ProjectReference/@Include", nsmgr)
          .Cast<XmlAttribute>()
          .Select(a => Path.GetFullPath(Path.Combine(projectDirectory, a.Value)));

    if(newReferences.Count() > 0)
    {
      Log.LogMessage(MessageImportance.Low, "Found new referenced projects: {0}", String.Join(", ", newReferences));
    }

    toScan.Remove(projectReference);
    projectReferences.Add(projectReference);

    // Add any new references to the list to scan and mark the flag
    // so we run through the scanning loop again.
    toScan.AddRange(newReferences);
    rescan = true;

    // Include the assembly that the project reference generates.
    var outputLocation = Path.Combine(Path.Combine(projectDirectory, "bin"), Configuration);
    var localAsm = Path.GetFullPath(Path.Combine(outputLocation, doc.SelectSingleNode("/msb:Project/msb:PropertyGroup/msb:AssemblyName", nsmgr).InnerText + ".dll"));
    if(!indirectDependencies.Contains(localAsm) && File.Exists(localAsm))
    {
      Log.LogMessage(MessageImportance.Low, "Added project assembly: {0}", localAsm);
      indirectDependencies.Add(localAsm);
    }

    // Include third-party assemblies referenced by file location.
    var externalReferences = doc
          .SelectNodes("/msb:Project/msb:ItemGroup/msb:Reference/msb:HintPath", nsmgr)
          .Cast<XmlElement>()
          .Select(a => Path.GetFullPath(Path.Combine(projectDirectory, a.InnerText.Trim())))
          .Where(e => !indirectDependencies.Contains(e));

    Log.LogMessage(MessageImportance.Low, "Found new indirect references: {0}", String.Join(", ", externalReferences));
    indirectDependencies.AddRange(externalReferences);
  }
} while(rescan);

// Expand to include pdb and xml.
var xml = indirectDependencies.Select(f => Path.Combine(Path.GetDirectoryName(f), Path.GetFileNameWithoutExtension(f) + ".xml")).Where(f => File.Exists(f)).ToArray();
var pdb = indirectDependencies.Select(f => Path.Combine(Path.GetDirectoryName(f), Path.GetFileNameWithoutExtension(f) + ".pdb")).Where(f => File.Exists(f)).ToArray();
indirectDependencies.AddRange(xml);
indirectDependencies.AddRange(pdb);
Log.LogMessage("Located indirect references:\n{0}", String.Join(Environment.NewLine, indirectDependencies));

// Finally, assign the output parameter.
IndirectDependencies = indirectDependencies.Select(i => new TaskItem(i)).ToArray();
      ]]>
      </Code>
    </Task>
  </UsingTask>
</Project>

Boom! Yeah, that’s a lot of code. And I could probably tighten it up, but I’m only using it once, in one place, and it runs one time during the build. Ain’t broke, don’t fix it, right?

Hope that helps someone out there.

.NET, Code Snippets

I’m messing around with Boxstarter and Chocolatey and one of the things I wanted to do was install the various “Command Prompt Here” context menu extensions I use all the time. These extensions are .inf files and, unfortunately, there isn’t really any documentation on how to create a Chocolatey package that installs an .inf.

So here’s how you do it:

First, package the .inf file in the tools folder of your package alongside the chocolateyInstall.ps1 script. .inf files are pretty small anyway and you want the file to be around for uninstall, so it’s best to just include it.

Next, set your chocolateyInstall.ps1 to run InfDefaultInstall.exe. That’s an easier way to install .inf files than the rundll32.exe way and it’ll work with Vista and later. So… no XP support. Aw, shucks.

Here’s a sample chocolateyInstall.ps1:

$packageName = 'YourPackageNameHere'
$validExitCodes = @(0)

try {
  $scriptPath = split-path -parent $MyInvocation.MyCommand.Definition
  $target = Join-Path $scriptPath "YourInfFileNameHere.inf"
  $infdefaultinstall = Join-Path (Join-Path $Env:SystemRoot "System32") "InfDefaultInstall.exe"
  Start-ChocolateyProcessAsAdmin "$target" "$infdefaultinstall" -validExitCodes $validExitCodes
  Write-ChocolateySuccess "$packageName"
} catch {
  Write-ChocolateyFailure "$packageName" "$($_.Exception.Message)"
  throw
}

To support uninstall, add a chocolateyUninstall.ps1 script. This will have to use rundll32.exe to uninstall, but it's not too bad.

$packageName = 'YourPackageNameHere'
$validExitCodes = @(0)

try {
  $scriptPath = split-path -parent $MyInvocation.MyCommand.Definition
  $target = Join-Path $scriptPath "YourInfFileNameHere.inf"
  Start-ChocolateyProcessAsAdmin "SETUPAPI.DLL,InstallHinfSection DefaultUninstall 132 $target" "rundll32.exe" -validExitCodes $validExitCodes
  Write-ChocolateySuccess "$packageName"
} catch {
  Write-ChocolateyFailure "$packageName" "$($_.Exception.Message)"
  throw
}

That's it! Run the packaging and you’re set to go. This will support both installation and uninstallation of the .inf file.

Note: At one point I was having some trouble getting this to run on a Windows Server 2012 VM using the one-click Boxstarter execution mechanism. I found this while testing an install script that installs something like 40 things. After rolling back the VM to a base snapshot (before running the script) I’m no longer able to see the failure I saw before, so I’m guessing it was something else in the script causing the problem. This INF install mechanism appears to work just fine.

.NET

I was testing out some changes to versioning in Autofac. We have a MyGet feed, but all of the internal dependencies of the various NuGet packages point to the CI versions when they’re built, so it’s sort of hard to stage a test of what things will look like when they’re released. You have to rename each .nupkg file to remove the “-CI-XYZ” build number, open each .nupkg file, change the internal .nuspec file to remove the “-CI-XYZ” build number info, then re-zip everything up. In testing, I had to do this a few times, so I scripted it.

I put everything in a folder structure like this:

  • ~/TestFeed
    • backup – contains all of the original .nupkg files (renamed without the “-CI-XYZ”)
    • msbuildcommunitytasks – contains the MSBuild Community Tasks set

Then I wrote up a quick MSBuild script for doing all the extract/update/rezip stuff. I could have used any other scripting language, but, eh, the batching and file scanning in MSBuild made a few things easy.

msbuild fixrefs.proj /t:Undo puts the original packages (from the backup folder) into the test feed folder.

msbuild fixrefs.proj does the unzip/fix/re-zip.

One of the challenges I ran into was that the zip task in MSBuild Community Tasks seemed to always want to add an extra level of folders into the .nupkg – I couldn’t get the original contents to live right at the root of the package. Rather than fight it, I used 7-Zip to do the re-zipping. I probably could have gotten away from the MSBuild Community Tasks entirely had I some form of sed on my machine because I needed that FileUpdate task. But… Windows. And, you know, path of least resistance. I think this was a five-minute thing. Took longer to write this blog entry than it did to script this.

Here’s “fixrefs.proj”:

<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="All" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0">
  <PropertyGroup>
    <MSBuildCommunityTasksPath>.</MSBuildCommunityTasksPath>
    <SevenZip>C:\Program Files\7-Zip\7z.exe</SevenZip>
  </PropertyGroup>
  <Import Project="$(MSBuildProjectDirectory)\msbuildcommunitytasks\MSBuild.Community.Tasks.Targets"/>
  <ItemGroup>
    <Package Include="*.nupkg"/>
  </ItemGroup>
  <Target Name="All">
    <MakeDir Directories="%(Package.Filename)" />
    <Unzip ZipFileName="%(Package.FullPath)" TargetDirectory="%(Package.Filename)" />
    <ItemGroup>
      <NuSpec Include="**/*.nuspec" />
    </ItemGroup>
    <FileUpdate Files="@(NuSpec)" Regex="(.)\-CI\-\d+" ReplacementText="$1" WarnOnNoUpdate="true" />
    <Delete Files="@(Package)" />
    <CallTarget Targets="ZipNewPackage" />
    <RemoveDir Directories="%(Package.Filename)" />
  </Target>
  <Target Name="Undo">
    <Delete Files="@(Package)" />
    <ItemGroup>
      <Original Include="backup/*.nupkg" />
    </ItemGroup>
    <Copy SourceFiles="@(Original)" DestinationFolder="$(MSBuildProjectDirectory)" />
  </Target>
  <Target Name="ZipNewPackage" Inputs="@(Package)" Outputs="%(Identity).Dummy">
    <Exec
      Command="&quot;$(SevenZip)&quot; a -tzip &quot;$(MSBuildProjectDirectory)\%(Package.Filename)%(Package.Extension)&quot;"
      WorkingDirectory="$(MSBuildProjectDirectory)\%(Package.Filename)" />
  </Target>
</Project>

.NET

Until now, Autofac assemblies have been versioned with a slow-changing assembly version but a standard semantic version for the NuGet package and file versions.

The benefit of that approach is we could avoid some painful assembly redirect issues.

The drawback, of course, is that even minor changes (adding new functionality in a backwards-compatible way) can cause problems – one project uses version 3.0.0.0 of Autofac and works great, a different project also uses version 3.0.0.0 of Autofac but breaks because it needs some of that newer functionality. That’s hard to troubleshoot and pretty much impossible to fix. (It’s the wrong version of 3.0.0.0? That’s a new kind of dependency hell.)

As a compromise to that, we’ve switched to work sort of like MVC and Web API – for major and minor (X.Y) changes, the assembly version will change, but not for patch-level changes; for all changes, the NuGet package and file versions will change.
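
In practice that means the version attributes on a given release look something like this (illustrative numbers, not pulled from an actual Autofac release):

// AssemblyVersion only tracks major/minor, so patch releases don't break
// existing assembly references.
[assembly: System.Reflection.AssemblyVersion("3.5.0.0")]

// The file and informational versions (and the NuGet package version) carry
// the full semantic version, including the patch number.
[assembly: System.Reflection.AssemblyFileVersion("3.5.2.0")]
[assembly: System.Reflection.AssemblyInformationalVersion("3.5.2")]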

This initial switch will potentially be a little painful for folks since it means every Autofac package has to be re-issued to ensure assembly dependencies line up. After that, we should be running smooth again.

You’ll see a 0.0.1 update to the packages – all of those have the new assemblies with the new versions and proper prerequisite references. (Not entirely sure 0.0.1 was the right semantic version increment, but, well, c’est la vie.)

Really sorry about the bit of upgrade pain here. I had hoped we could sneak the change out on a package-by-package basis, but as each integration or extras package gets released, it gets its dependencies set and has assembly references, so we’d end up releasing everything a few times – the first time for when the version of the integration package changes; a second time for when core Autofac changes; and one more time for every time any other dependencies change. For packages like Autofac.Extras.Multitenant.Wcf (which relies on Autofac, Autofac.Integration.Wcf, and Autofac.Extras.Multitenant), it’d mean releasing it a minimum of four times just for the assembly reference changes. Best just to rip the bandage off, right? (I hope?)

NuGet should take care of the assembly redirect issues for you, but if you see assembly dependency conflict warnings in your build, it’s because you’ve not updated all of your Autofac packages.

Relevant GitHub issues: #502, #508

.NET, Web Development comments edit

I just spent a day fighting these so I figured I’d share. You may or may not run into them. They do get pretty low-level, like, “not the common use case.”

PROBLEM 1: Why Isn’t My Data Serializing as XML?

I had set up my media formatters so the XML formatter would kick in and provide some clean looking XML when I provided a querystring parameter, like http://server/api/something?format=xml. I did it like this:

// "configuration" here is the application's HttpConfiguration.
var fmt = configuration.Formatters.XmlFormatter;
fmt.MediaTypeMappings.Add(new QueryStringMapping("format", "xml", "text/xml"));
fmt.UseXmlSerializer = true;
fmt.WriterSettings.Indent = true;

It seemed to work on super simple stuff, but then it seemed to arbitrarily just stop - I'd get XML for some things, but others would always come back in JSON no matter what.

The problem was the fmt.UseXmlSerializer = true; line. I picked the XmlSerializer option because it can create prettier XML without all the extra namespaces and cruft of the standard DataContractSerializer.

UPDATE: I just figured out it's NOT IEnumerable<T> that's the problem - it's an object way deep down in my hierarchy that doesn't have a parameterless constructor.

When I started returning IEnumerable<T> values, that’s when it stopped working. I thought it was because of the IEnumerable<T>, but it turned out that the objects I was enumerating had a property whose type had, in turn, a property whose type didn’t have a parameterless constructor. Yeah, deep in the sticks. No logging or exception handling to explain that one. I had to find it by stepping into the bowels of the XmlMediaTypeFormatter.
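
For illustration, here’s the shape of the problem with some hypothetical types (not my actual model) – XmlSerializer needs a parameterless constructor on every type it touches, no matter how deeply it’s buried:

public class ReportRow                  // the type the API action returns (in an IEnumerable<ReportRow>)
{
  public Customer Customer { get; set; }
}

public class Customer
{
  public Address Address { get; set; }  // the trouble is buried two levels down
}

public class Address
{
  // The only constructor takes arguments, so XmlSerializer can't handle this type.
  // The XmlMediaTypeFormatter quietly gives up and the response falls back to JSON.
  public Address(string line1)
  {
    this.Line1 = line1;
  }

  public string Line1 { get; private set; }
}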

PROBLEM 2: Why Aren't My Format Configurations Being Used?

Somewhat related to the first issue - I had the XML serializer set up for that query string mapping, and I had JSON set up to use camelCase and nice indentation, too. But for some weird reason, none of those settings were getting used at all when I made my requests.

Debugging into it, I could see that on some requests the configuration associated with the inbound request message was all reset to defaults. What?

This was because of some custom route registration stuff.

When you use attribute routes…

  1. The attribute routing mechanism gets the controller selector from the HttpConfiguration object.
  2. The controller selector gets the controller type resolver from the HttpConfiguration object to which it holds a reference.
  3. The controller type resolver locates all the controller types for the controller selector.
  4. The controller selector builds up a cached list of controller name-to-descriptor mappings. Each descriptor gets passed a reference to the HttpConfiguration object.
  5. The attribute routing mechanism gets the action selector from the HttpConfiguration object.
  6. The action selector uses the controller descriptors from the controller selector and creates a cached set of action descriptors. Each action descriptor gets passed a reference to the HttpConfiguration object and gets a reference back to the parent controller descriptor.
  7. The actions from the action selector get looked at for attribute route definitions and routes are built from the action descriptor. Each route has a reference to the descriptor so it knows what to execute.
  8. Execution of an action corresponding to one of these specific routes will use the exact descriptor to which it was tied.

Basically. There’s a little extra complexity in there I yada-yada’d away. The big takeaway here is that you can see all the bajillion places references to the HttpConfiguration are getting stored. There’s some black magic here.

I was trying to do my own sort of scanning for attribute routes (like on plugin assemblies that aren’t referenced by the project), but I didn’t want to corrupt the main HttpConfiguration object so I created little temporary ones that I used during the scanning process just to help coordinate things.

Yeah, you can’t do that.

Those temporary, mostly-default configurations were getting used when my scanned routes executed, rather than the configuration I had set up with OWIN.
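
Roughly what I was doing, as a sketch (this isn’t my actual scanning code, and the route-copying is hand-waved away):

// using System.Web.Http;
// A throwaway configuration used just to coordinate the attribute route scan.
var tempConfig = new HttpConfiguration();
tempConfig.MapHttpAttributeRoutes();
tempConfig.EnsureInitialized();

// ...copy the discovered routes from tempConfig.Routes into the real configuration...
// Problem: the controller and action descriptors built during the scan still hold
// references to tempConfig, so its default formatters and settings get used when
// those routes execute, instead of the ones on the real HttpConfiguration.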

Once I figured all that out, I was able to work around it, but it took most of the day to figure it out. It’d be nice if things like the action descriptor would automatically chain up to the parent controller descriptor (if it’s present) to get configuration rather than holding its own reference. And so on, all the way up the stack, such that routes get their configuration from the route table, which is owned by the root configuration object. Set it and forget it.

.NET, Web Development comments edit

I’m working on a new Web API project where I want to use AutoMapper for some type conversion. As part of that, I have a custom AutoMapper type converter that takes in some constructor parameters so the converter can read configuration values. I’m using Autofac for dependency injection (naturally).

Historically, I’ve been able to hook AutoMapper into dependency injection using the ConstructServicesUsing method and some sort of global dependency resolver, like:

Mapper.Initialize(cfg =>
{
  cfg.ConstructServicesUsing(t => DependencyResolver.Current.GetService(t));
  cfg.CreateMap<Source, Destination>(); // Source/Destination are placeholders for your real mapped types.
});

That works great in MVC or in other applications where there's a global static like that. In those cases, the “request lifetime scope” either doesn’t exist or it’s managed by the implementation of IDependencyResolver the way it is in the Autofac integration for MVC.

Retrieving the per-request lifetime scope is much more challenging in Web API because the request lifetime scope is managed by the inbound HttpRequestMessage. Each inbound message gets a lifetime scope associated, so there’s no “global static” from which you can get the request lifetime. You can get the global dependency resolver, but resolving from that won’t be per-request; it’ll be at the application level.
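
For reference, the request scope hangs off the message itself. A minimal sketch, assuming you’re somewhere (a message handler, a controller) that already has the HttpRequestMessage in hand – IMyService is just a hypothetical dependency:

// The per-request dependency scope travels with the HttpRequestMessage.
var scope = request.GetDependencyScope();
var service = scope.GetService(typeof(IMyService));
// AutoMapper's static ServiceCtor callback has no way to get at "request",
// which is exactly the problem.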

It’s also a challenging situation because AutoMapper really leans you toward using the static Mapper object to do your mapping and you can’t really change the value of ConstructServicesUsing on the static because, well, you know, threading.

So… what to do?

The big step is to change your mindset around the static Mapper object. Instead of using Mapper to map things, take an IMappingEngine as a dependency in the class doing the mapping. Yes, that’s one more dependency you wouldn’t normally have to take, but there’s not really a better way, given that the way the IMappingEngine has to resolve dependencies is actually different per request.

This frees us up to now think about how to register and resolve a per-request version of IMappingEngine.

Before I show you how to do this, standard disclaimers apply: Works on my machine; I’ve not performance tested it; It might not work for you; etc.

Oooookay.

First, we need to understand how the IMappingEngine we build will come together.

  1. The implementation of AutoMapper.IMappingEngine we’ll be using is AutoMapper.MappingEngine (the only implementation available).
  2. MappingEngine takes in an IConfigurationProvider as a constructor parameter.
  3. IConfigurationProvider has a property ServiceCtor that is the factory we need to manipulate to resolve things out of a per-request lifetime scope.
  4. The main AutoMapper.Mapper has a Configuration property of type IConfiguration… but the backing store for it is really an AutoMapper.ConfigurationStore, which is also an IConfigurationProvider. (This is where the somewhat delicate internal part of things comes in. If something breaks in the future, chances are this will be it.)

Since we need an IConfigurationProvider, let’s make one.

We want to leverage the main configuration/initialization that the static Mapper class provides because there’s a little internal work there that we don’t want to copy/paste. The only thing we really want to change is that ServiceCtor property, but that’s not a settable property, so let’s write a quick wrapper around an IConfigurationProvider that lets us override it with our own method.

public class ConfigurationProviderProxy : IConfigurationProvider
{
  private IComponentContext _context;
  private IConfigurationProvider _provider;

  // Take in a configuration provider we're going to wrap
  // and an Autofac context from which we can resolve things.
  public ConfigurationProviderProxy(IConfigurationProvider provider, IComponentContext context)
  {
    this._provider = provider;
    this._context = context;
  }

  // This is the important bit - we use the passed-in
  // Autofac context to resolve dependencies.
  public Func<Type, object> ServiceCtor
  {
    get
    {
      return this._context.Resolve;
    }
  }

  //
  // EVERYTHING ELSE IN THE CLASS IS JUST WRAPPER/PROXY
  // CODE TO PASS THROUGH TO THE BASE PROVIDER.
  //
  public bool MapNullSourceCollectionsAsNull { get { return this._provider.MapNullSourceCollectionsAsNull; } }

  public bool MapNullSourceValuesAsNull { get { return this._provider.MapNullSourceValuesAsNull; } }

  public event EventHandler<TypeMapCreatedEventArgs> TypeMapCreated
  {
    add { this._provider.TypeMapCreated += value; }
    remove { this._provider.TypeMapCreated -= value; }
  }

  public void AssertConfigurationIsValid()
  {
    this._provider.AssertConfigurationIsValid();
  }

  public void AssertConfigurationIsValid(TypeMap typeMap)
  {
    this._provider.AssertConfigurationIsValid(typeMap);
  }

  public void AssertConfigurationIsValid(string profileName)
  {
    this._provider.AssertConfigurationIsValid(profileName);
  }

  public TypeMap CreateTypeMap(Type sourceType, Type destinationType)
  {
    return this._provider.CreateTypeMap(sourceType, destinationType);
  }

  public TypeMap FindTypeMapFor(ResolutionResult resolutionResult, Type destinationType)
  {
    return this._provider.FindTypeMapFor(resolutionResult, destinationType);
  }

  public TypeMap FindTypeMapFor(Type sourceType, Type destinationType)
  {
    return this._provider.FindTypeMapFor(sourceType, destinationType);
  }

  public TypeMap FindTypeMapFor(object source, object destination, Type sourceType, Type destinationType)
  {
    return this._provider.FindTypeMapFor(source, destination, sourceType, destinationType);
  }

  public TypeMap[] GetAllTypeMaps()
  {
    return this._provider.GetAllTypeMaps();
  }

  public IObjectMapper[] GetMappers()
  {
    return this._provider.GetMappers();
  }

  public IFormatterConfiguration GetProfileConfiguration(string profileName)
  {
    return this._provider.GetProfileConfiguration(profileName);
  }
}

That was long, but there's not much logic to it. You could probably do some magic to make this smaller with Castle.DynamicProxy but I'm keeping it simple here.

Now we need to register IMappingEngine with Autofac so that it:

  • Creates a per-request engine that
  • Uses a per-request lifetime scope to resolve dependencies and
  • Leverages the root AutoMapper configuration for everything else.

That’s actually pretty easy:

// Register your mappings here, but don't set any
// ConstructServicesUsing settings.
Mapper.Initialize(cfg =>
{
  cfg.AddProfile<SomeProfile>();
  cfg.AddProfile<OtherProfile>();
});

// Start your Autofac container.
var builder = new ContainerBuilder();

// Register your custom type converters and other dependencies.
builder.RegisterType<DemoConverter>().InstancePerApiRequest();
builder.RegisterApiControllers(Assembly.GetExecutingAssembly());

// Register the mapping engine to use the base configuration but
// a per-request lifetime scope for dependencies.
builder.Register(c =>
{
  var context = c.Resolve<IComponentContext>();
  var config = new ConfigurationProviderProxy(Mapper.Configuration as IConfigurationProvider, context);
  return new MappingEngine(config);
}).As<IMappingEngine>()
.InstancePerApiRequest();

// Build the container.
var container = builder.Build();

Now all you have to do is take an IMappingEngine as a dependency and use that rather than AutoMapper.Mapper for mapping.

public class MyController : ApiController
{
  private IMappingEngine _mapper;

  public MyController(IMappingEngine mapper)
  {
    this._mapper = mapper;
  }

  [Route("api/myaction")]
  public SomeValue GetSomeValue()
  {
    // Do some work and use the IMappingEngine for maps.
    return this._mapper.Map<SomeValue>(otherValue);
  }
}

Following that pattern, any mapping dependencies will be resolved out of the per-request lifetime scope rather than the application root container and you won't have to use any static references or fight with request contexts. When the API controller is resolved (out of the request scope) the dependent IMappingEngine will be as well, as will all of the chained-in dependencies for mapping.

While I've not tested it, this technique should also work in an MVC app to allow you to get away from the static DependencyResolver.Current reference. InstancePerApiRequest and InstancePerHttpRequest do effectively the same thing internally in Autofac, so the registrations are cross-compatible.
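
For the MVC case, the registration would look about the same – an untested sketch, just swapping in the lifetime extension from Autofac.Integration.Mvc:

// Same factory as the Web API registration above, scoped per MVC request.
builder.Register(c =>
{
  var context = c.Resolve<IComponentContext>();
  var config = new ConfigurationProviderProxy(Mapper.Configuration as IConfigurationProvider, context);
  return new MappingEngine(config);
}).As<IMappingEngine>()
.InstancePerHttpRequest();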