February 2011 Blog Posts

Who's Your Favorite Bond?

From a conversation at today's staff meeting…

Dev 1: I've been catching up on all the old James Bond movies recently. We just watched Dr. No last night.

Me: Oh, that's a good one. Hey, which is your favorite Bond? Actor, not movie.

Dev 1: Well, let's see, there's Sean Connery, George Lazenby, Roger Moore, Timothy Dalton, Pierce Brosnan, Daniel Cra-

Dev 2: Chuck Norris.

Me: What?

Dev 2: Chuck Norris. I saw this web site that says he can do anything.

So, folks, there you have it: Chuck Norris is the best James Bond.

Dynamic HttpModule Registration in ASP.NET 4.0

I came across this trick while perusing the Autofac code base with the new MVC3 integration. You no longer have to register the HttpModule that disposes of request lifetime scopes because they do it for you dynamically. Figuring out how they did it revealed two really cool little tricks I've not seen documentation on.

Trick #1: System.Web.PreApplicationStartMethodAttribute

.NET 4 adds a new attribute that allows you to programmatically do things just before application startup (that is, before Application_Start in your Global.asax). ASP.NET MVC3, for example, uses this hook to register build providers for the Razor view engine so you won't have to do it manually in web.config.

To use it, first create a static class with a static method in it that contains your application startup logic. Be sure to guard against it getting called twice by having a flag indicating if it was called (sort of the way you track whether Dispose was called):

namespace MyNamespace
{
    public static class PreApplicationStartCode
    {
        private static bool _startWasCalled = false;

        public static void Start()
        {
            if (_startWasCalled) { return; }
            _startWasCalled = true;
            // Do your startup logic here.
        }
    }
}
You don't have to call your class "PreApplicationStartCode," nor do you have to call the method "Start," but that seems to be the convention.

Once you have that class and method, mark your assembly with the attribute and point to your method:

[assembly: PreApplicationStartMethod(typeof(MyNamespace.PreApplicationStartCode), "Start")]

When the application starts, the System.Web.Hosting.HostingEnvironment.Initialize() method calls System.Web.Compilation.BuildManager.CallPreStartInitMethods() (all of that is internal, of course) and magic happens - your application startup logic runs.

Trick #2: Microsoft.Web.Infrastructure.DynamicModuleHelper.DynamicModuleUtility.RegisterModule

The Microsoft.Web.Infrastructure assembly seems to have appeared along with MVC3 and WebMatrix. The DynamicModuleUtility.RegisterModule method is a ridiculously helpful and equally ridiculously undocumented method that allows you to add an IHttpModule to the request pipeline programmatically so you don't have to put an entry into web.config. You just pass it the type of the IHttpModule implementation and it gets added to the pipeline:

// "MyHttpModule" here stands in for your IHttpModule implementation.
DynamicModuleUtility.RegisterModule(typeof(MyHttpModule));

The only catch is… you need to call it just before application startup. (See where I'm going with this?)

Tie it together: Call DynamicModuleUtility.RegisterModule() from inside PreApplicationStartCode.Start() and you can programmatically add HttpModules to the request pipeline.
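For instance, here's a minimal sketch of the combined pattern. The namespace and module are made up for illustration; the module just stamps a response header so there's something observable:

```csharp
using System.Web;
using Microsoft.Web.Infrastructure.DynamicModuleHelper;

[assembly: PreApplicationStartMethod(typeof(MyNamespace.PreApplicationStartCode), "Start")]

namespace MyNamespace
{
    // Hypothetical module: adds a header to every response.
    public class MyHttpModule : IHttpModule
    {
        public void Init(HttpApplication context)
        {
            context.EndRequest += (sender, e) =>
                ((HttpApplication)sender).Response.AddHeader("X-My-Module", "true");
        }

        public void Dispose()
        {
        }
    }

    public static class PreApplicationStartCode
    {
        private static bool _startWasCalled = false;

        public static void Start()
        {
            if (_startWasCalled) { return; }
            _startWasCalled = true;

            // Trick #2: into the pipeline with no web.config entry required.
            DynamicModuleUtility.RegisterModule(typeof(MyHttpModule));
        }
    }
}
```

No web.config registration anywhere - the attribute gets Start() called before Application_Start, and RegisterModule wires the module into the pipeline.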

Pretty nifty, huh?

Again, I saw this first in the Autofac codebase, so props to Alex Meyer-Gleaves (who added that code to the MVC3 support in Autofac) for figuring that one out.

Running Typemock Isolator Outside Visual Studio

I've blogged before about getting Typemock, NUnit, and NCover all working together in MSBuild. Though that's admittedly a tad stale, with a bit of tweaking the contents of that article still apply.

I got a question about running tests that use Typemock Isolator outside of Visual Studio, though, so I figured I'd post this article with some additional info and clarifications.

First, the setup:

  • NCover 3.4.16
  • Typemock Isolator 6.0.6
  • MSTest with Visual Studio 2010

If you have different versions of these tools, you may need to tweak things. Also, I'm building on a 64-bit machine; since MSTest is a 32-bit runner, you have to use the 32-bit version of NCover to get coverage, so you'll see some paths pointing at the 32-bit tools.

When you have Typemock Isolator installed, running tests through the built-in Visual Studio test runner "just works" because Isolator installs a Visual Studio add-in helper. To get coverage, you can use TestDriven.NET to "Test With -> NCover" and it works great.

If you want to run coverage outside of Visual Studio, though, there are a few things you might think to try, some of which work and some of which don't.

THE BIG TAKEAWAY: You have to start things in a specific order.

  1. Start Typemock so it can link with NCover.
  2. Start NCover so it can run and profile your unit tests.
  3. Start your unit test runner so NCover can gather statistics.
  4. When the test runner ends, NCover automatically ends.
  5. Make sure Typemock stops when everything is over, regardless of whether the tests pass or fail.

If you don't start things in the right order, your tests won't work and you won't get the expected results.

How Typemock Isolator and NCover Interact

The way Isolator works, it's sort of a "pass-through profiler." NCover is a profiler, too - that's how it takes coverage statistics - and you can only have one profiler running at a time. The cool "trick" Isolator does is "linking" with other profilers: calls pass through Isolator first, your mocks get inserted, and then the calls pass along to the linked profiler like NCover. You can actually watch Typemock switch registry entries around on the fly when you start and stop it - it'll temporarily put itself into the registry where you'd expect to see NCover, so if you "start NCover" you're actually starting Typemock, which then chains in NCover.

However, if you try to start the other profiler like NCover first, the linking doesn't happen so your mocks don't show up when you expect them. Problems.

Given that, let's talk about ways to run Typemock Isolator and get coverage when outside of Visual Studio.

Use a Build Script

Running things through a build script is the most common and recommended way of doing things. It allows you to automate the whole build process and use the same script on a developer machine and in a continuous integration server.

Let me drop some code on you and then we'll walk through it:

<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="All" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0">
  <PropertyGroup>
    <!-- Coverage logs and such will be placed here. -->
    <LogDirectory>$(MSBuildProjectDirectory)\log</LogDirectory>
    <!-- Build configuration (Debug or Release). -->
    <BuildConfiguration>Debug</BuildConfiguration>
    <!-- Path to the NCover 32-bit installation (MSTest is 32-bit). -->
    <NCoverPath>C:\Program Files (x86)\NCover\</NCoverPath>
    <!-- Path to the NCover build tasks (different path than NCover 32-bit on a 64-bit machine). -->
    <NCoverBuildTasksPath>C:\Program Files\NCover\</NCoverBuildTasksPath>
    <!-- Path to the Typemock Isolator installation. -->
    <TypemockPath>C:\Program Files (x86)\Typemock\Isolator\6.0</TypemockPath>
    <!-- Path to the unit test assembly for easier test execution. -->
    <UnitTestAssembly>$(MSBuildProjectDirectory)\CoverageDemoTests\bin\$(BuildConfiguration)\CoverageDemoTests.dll</UnitTestAssembly>
  </PropertyGroup>

  <!-- Get the Typemock and NCover build tasks. -->
  <Import Project="$(TypemockPath)\TypeMock.MSBuild.Tasks"/>
  <UsingTask TaskName="NCover.MSBuildTasks.NCover" AssemblyFile="$(NCoverBuildTasksPath)Build Task Plugins\NCover.MSBuildTasks.dll"/>

  <Target Name="All" DependsOnTargets="Clean;Compile;Test" />

  <Target Name="Clean">
    <Message Text="Removing build output artifacts in preparation for a clean build." />
    <RemoveDir Directories="$(LogDirectory)" ContinueOnError="true" />
    <RemoveDir Directories="$(MSBuildProjectDirectory)\CoverageDemo\bin" ContinueOnError="true" />
    <RemoveDir Directories="$(MSBuildProjectDirectory)\CoverageDemo\obj" ContinueOnError="true" />
    <RemoveDir Directories="$(MSBuildProjectDirectory)\CoverageDemoTests\bin" ContinueOnError="true" />
    <RemoveDir Directories="$(MSBuildProjectDirectory)\CoverageDemoTests\obj" ContinueOnError="true" />
  </Target>

  <Target Name="Compile">
    <Message Text="Compiling the solution." />
    <MSBuild Projects="CoverageDemo.sln" Targets="Build" Properties="Configuration=$(BuildConfiguration)" />
  </Target>

  <Target Name="Test">
    <MakeDir Condition="!Exists('$(LogDirectory)')" Directories="$(LogDirectory)"/>
    <CallTarget Targets="Test_BuildTasks;Test_CommandLine" />
  </Target>

  <Target Name="Test_BuildTasks">
    <Message Text="Testing with Typemock and NCover build tasks." />
    <TypeMockStart Link="NCover3.0"/>
    <!-- Attribute names here assume the NCover 3 build task; adjust to your version. -->
    <NCover ToolPath="$(NCoverPath)"
            TestRunnerExe="MSTest.exe"
            TestRunnerArgs="/testcontainer:&quot;$(UnitTestAssembly)&quot;"
            IncludeAssemblies="CoverageDemo"
            LogFile="$(LogDirectory)\Test_BuildTasks.log"
            CoverageFile="$(LogDirectory)\Test_BuildTasks.xml" />
    <CallTarget Targets="__TestFinally"/>
    <OnError ExecuteTargets="__TestFinally"/>
  </Target>

  <Target Name="Test_CommandLine">
    <Message Text="Testing with Typemock and NCover command lines." />
    <PropertyGroup>
      <!-- Path to Typemock Console Runner. -->
      <TMockRunner>$(TypemockPath)\TMockRunner.exe</TMockRunner>
      <!-- Path to NCover.Console Runner. -->
      <NCoverConsole>$(NCoverPath)NCover.Console.exe</NCoverConsole>
    </PropertyGroup>
    <Exec Command="&quot;$(TMockRunner)&quot; -first -link NCover3.0 &quot;$(NCoverConsole)&quot; //x &quot;$(LogDirectory)\Test_CommandLine.xml&quot; //l &quot;$(LogDirectory)\Test_CommandLine.log&quot; //a CoverageDemo MSTest.exe /testcontainer:&quot;$(UnitTestAssembly)&quot;" />
  </Target>

  <Target Name="__TestFinally">
    <!-- Make sure we stop Typemock whether there's an error or success in the tests. -->
    <TypeMockStop/>
  </Target>
</Project>

Now, let's walk through it.

The first thing we do is set up some helpful properties. This will make creation of the various command lines and such a little easier. In this case, it's mostly paths to tools.

Next we include the Typemock and NCover build tasks. That way we can use those to run our tests.

The "All," "Clean," and "Compile" targets are standard fare.

  • The "All" target is our build script entry point. Run that target and it does the full clean/build/test run.
  • The "Clean" target deletes all the binaries and log files so we can get a nice clean build run.
  • The "Compile" target actually builds the assemblies. In this case, I have two - a class library and the corresponding set of unit tests.

The "Test" target creates the folder where we'll dump our coverage logs and then fires off the unit testing.

Now we get to the interesting bit: Showing the two ways you can run tests with coverage.

The Test_BuildTasks target shows coverage using the provided build tasks. This is the recommended way of doing things since the build task interface makes your script a lot more readable and you get some "compile time checking" in case you mistype one of the build script attributes. Plus, in some cases the build script tasks make things a little easier to specify than the longer, more cryptic command lines. You'll notice that we're starting things and stopping them in the order mentioned earlier. That's important, and it's why this works.

The Test_CommandLine target shows coverage using a command line executable. Typemock Isolator comes with a program called "TMockRunner.exe" which is a lot like NCover.Console.exe that comes with NCover - it lets you start up a process that will have Typemock enabled on it. If you dissect that big long command line, you'll see:

  • We lead with TMockRunner.exe, telling it to hook Typemock in as the first profiler in the chain (-first) and to link it to NCover (-link NCover3.0).
  • We run NCover.Console.exe with its usual command line options, telling it where to put logs and which assembly to profile.
  • Finally we run MSTest.exe and tell it where our unit tests are.

In the command line version, we don't have to explicitly shut down Typemock Isolator because it's only enabled for that one process, just as NCover.Console.exe only enables NCover for the one process it starts.

Use the Command Line Outside a Build Script

I showed you a command line in the build script example above, but you don't have to use it inside a build script. It'll work just as well outside the script environment. The only downside to using it alone is that you won't be able to use the handy variables to make the command line more readable the way you can in the build script, but if you make a little batch file or something with the command line in it, that'll work perfectly.
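For example, a little batch file along these lines would do it. This is a sketch: the install paths match the 64-bit-machine setup above, and the test assembly path is made up, so adjust both for your environment:

```shell
@echo off
rem Sketch of a batch file wrapping the TMockRunner command line.
rem Paths are assumptions - adjust for your machine and your test assembly.
"C:\Program Files (x86)\Typemock\Isolator\6.0\TMockRunner.exe" -first -link NCover3.0 ^
  "C:\Program Files (x86)\NCover\NCover.Console.exe" ^
  //x "Test_CommandLine.xml" //l "Test_CommandLine.log" //a CoverageDemo ^
  MSTest.exe /testcontainer:"CoverageDemoTests\bin\Debug\CoverageDemoTests.dll"
```

Double-click the batch file (or run it from a console) and you get the same Typemock-linked coverage run without MSBuild in the picture.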

Use NCover Explorer

NCover Explorer offers a way to start an application and profile it from right in the UI.

NCover Explorer "New Project" settings dialog

This won't work the way you think because NCover Explorer tries to start NCover first. Remember the critical ordering, above, where Typemock Isolator needs to be started first? That doesn't happen here. NCover Explorer expects to start NCover straight away. So how do you get it to work?

Start NCover Explorer using TMockRunner.exe and Typemock will be enabled during your test runs. A sample command line is as follows:

"C:\Program Files (x86)\Typemock\Isolator\6.0\TMockRunner.exe" -first -link NCover3.0 "C:\Program Files\NCover\NCover.Explorer.exe"

When you run that, the console window where you started NCover Explorer will stay open. Leave it. Now when you set up your project, set the application to profile as MSTest.exe and set your "testcontainer" to your unit test assembly:

"Application to Profile" settings in NCover Explorer

And for NCover Path, make sure you point to the 32-bit version of NCover.Console.exe because MSTest.exe is 32-bit:

"NCover Path" settings in NCover Explorer

Now when you click the "Run Coverage" button, things will work as expected because TMockRunner.exe has enabled Typemock Isolator inside NCover Explorer.

What About Other Test Runners?

I know we're using MSTest in this example, but I figured a quick note was in order:

If you're using, say, NUnit and want Typemock to work inside the NUnit GUI, you need the same trick we used with NCover Explorer, above: start the NUnit GUI through TMockRunner.exe. Since NCover isn't involved, just omit the "-first" and "-link NCover3.0" command line options. The same trick holds for other test runner tools - starting the tool through TMockRunner.exe should get you the results you're looking for.


Hopefully this helps you get your tests running with Typemock Isolator outside Visual Studio. Happy testing!

WiX 3.5.2519.0 Incorrect Intermediate Object path for External Files

I'm upgrading a project to the released version of WiX and found an issue that causes .wixobj files to be created in your source tree in unfortunate locations at build time. I've filed the issue on SourceForge, but for folks running across it, I thought I'd post here as well including the workaround.

If you have a .wixproj that contains .wxs files that are included via relative path outside of the folder structure below the .wixproj, the intermediate objects (.wixobj) get placed in odd/incorrect locations based on the source external .wxs files. What this looks like in the wild is that random .wixobj files just sort of "materialize" during the build and you can't figure out where they're coming from.

For example, say you have a folder structure like this:

trunk\
    product\solution\installer\ProductSetup.wixproj
    setup\CustomDialogs.wxs

The ProductSetup.wixproj includes the set of custom dialogs like this:

    <Compile Include="..\..\..\setup\CustomDialogs.wxs" />

(.wixproj simplified for the example)

Given that the OutputPath for the project is relative - bin\$(Configuration) - and the IntermediateOutputPath is also relative - obj\$(Configuration) - I would expect that all .wixobj files get created in obj\$(Configuration)... but they don't.

Alternatively, I could accept (though it'd be unexpected) that intermediate output gets placed in obj\$(Configuration) relative to each .wxs file, so I might see trunk\setup\obj\Debug\CustomDialogs.wixobj in this example. This is also not what happens.

Instead, paths are calculated based on the relative location of the .wxs source combined with the project's intermediate output path. That means, for this example:

trunk\product\solution\installer\obj\Debug (the intermediate output location of the .wixproj project)

combines with

..\..\..\setup (the location of the external .wxs file)

and you find the file

trunk\product\solution\setup\CustomDialogs.wixobj

gets created during the build process.

The workaround for this appears to be to manually specify ObjectPath on any included .wxs files, like:

<Compile Include="..\..\..\setup\CustomDialogs.wxs">
  <ObjectPath>obj\$(Configuration)</ObjectPath>
</Compile>

This forces the .wixobj files to be created in the appropriate location.

UPDATE 3/21/2011: I got a report that just putting obj\$(Configuration) didn't work for one user and they needed to add a trailing backslash, like obj\$(Configuration)\ to the path. I didn't need that, but if the above isn't working for you, try adding the backslash.

This behavior is new in WiX 3.5.2519.0 (the released/official 3.5 version) and did not exist in 3.5.2403.0.

Sharpening the Saw Isn't Just for Technical Skills

There have been a lot of developer-related "sharpening the saw" articles published and almost all of them speak to the technical aspect of becoming a better developer. A couple of popular ones (Hanselman, Atwood) have some suggestions like having technical brown-bags or reading programming blogs. These are wonderful suggestions for improving your technical abilities as a developer.

However, if you look at the actual description of "sharpening the saw," originally from Stephen Covey's 7 Habits of Highly Effective People, it talks about having a "balanced program for self-renewal." If all you do as a developer is focus on increasing your technical skills, you're not really keeping in balance.

Perhaps it's time to broaden your thoughts on what it means, as a developer, to "sharpen the saw." What can you do to increase your skills/value that doesn't involve technical abilities? Here are some ideas:

  • Take a writing course. Your local higher-education facilities (and many correspondence schools) most likely offer courses in basic writing. Not writing code, just writing. Prose. Why is that important? Most likely you aren't writing just code all day. Whether it's email, design documentation, blog posts, or other communications, you're writing. If you want to be sure to be understood and if you want your communications to come across in a reasonably professional and proficient fashion, you need to be able to write in a cohesive fashion with a minimal amount of grammatical problems. This is not to say you need to become a novelist or write for The New Yorker, but especially in this social networking day-and-age where spelling and grammar are pitched out the window in favor of shortcuts and 140-character limits, having a good set of solid, basic writing skills will help you long-term.
  • Learn about UI design and user experience. It's pretty well known that developers generally create some pretty horrible UI out of the box. It's not because developers are incompetent, but because when they're crafting that UI they're not thinking about design principles or the user experience of the thing. Like UI is best left to "those folks on the other side of the office with those fancy turtlenecks and their copies of Photoshop." Just as developers need to understand testing and not defer to QA, it's valuable for developers to understand at least fundamental ideas of UI design and user experience. At the very least, this will allow you to take part in conversations about the UI in a more meaningful fashion and understand not just what you're doing when the UX team asks you to fix something, but also why you're doing it.
  • Take a class on being a mentor. Hanselman mentions in his post that one idea to help sharpen technical skills is to create a mentorship program. That's a really good idea. But do you know how to be a mentor? Do you need a mentor on how to be a mentor? What does "mentoring" even mean? If you have a mentor to help you learn new things, do you know how to be mentored? It sounds like it's a simple thing, but think about it: Has someone ever asked you a question, and when you answered it you knew they didn't understand the answer but you also didn't know how to give them an answer they would understand? Learning how to answer questions (as a mentor) and learning how to ask the right questions (as a mentee) is huge. I've taken the Practical Leader class on peer mentoring and I honestly can't recommend it enough. It's one of the best courses I've ever taken and it totally changed the way I approach helping and teaching people.
  • Improve your interpersonal communication skills. If you've worked in any sort of team, you'll have run into a situation where there was a confrontation between two team members that probably could have been handled better. In coding, for example, sometimes ego gets in the way of what's best for the code. Maybe someone on the team has a strong personality and you don't know how to make suggestions to them. Maybe someone on the team has a milder personality and you don't know how to coax their input/feedback from them when trying to make decisions. Whatever the case may be, strengthening your communications skills will help you work better in the team. One course I've taken and highly recommend is the Vital Smarts "Crucial Conversations" training. It's one of those classes where, as you're going through it, it feels like it's revealing information to you that you already knew but didn't consciously recognize. Suddenly, being conscious of it, you realize all the things you did wrong in previous poor interactions, how things could have been handled better… and how you'll approach things next time you encounter the situation. Well worth the time.

Again, the idea here is that there are things other than technical skills that will help you sharpen your saw as a developer.

What do you do in a non-technical capacity to sharpen your saw?

24 Hours with Kindle

I bought an Amazon Kindle and it arrived yesterday. I got the 6" model with Wi-Fi and 3G. So far, I generally like it, but figured I'd share my thoughts after about a day with it.

Pushing books from the Amazon web site to the Kindle is not super intuitive. I think this is partly my fault. I thought I'd be slick and try to "pre-load" a few things on there using the Kindle app on Android so when I got the Kindle, I'd have some stuff to get reading. After unboxing the thing and setting it up, I looked and... couldn't find my stuff. It was in this "Archived Items" area on the Kindle, but (as a new user) what the hell is that supposed to mean? Doesn't look archived on my phone.

After a bit of poking around I did figure out how to download them onto the Kindle, but I went in thinking all this stuff would be automatically synchronized.

The Kindle app for Android comes with books not in the Kindle store. When you first get the Kindle app for Android you get two or three free books (Pride and Prejudice, Aesop's Fables, and Treasure Island, if I remember right). Thing is, while it's the same content as the ones on the Kindle store, they're not actually connected to your account or the Kindle store, so when you actually get a Kindle, those books aren't on there. Let me tell you how long it took me to figure that one out. I eventually deleted the local ones and "purchased" the free books out of the Kindle store so now I can keep everything in sync, but this definitely contributed to my confusion over how to get things onto the Kindle.

The screen is beautiful. I mean, truly crisp, like a printed page. When you get the Kindle, there's a little diagram on the screen showing how to plug it in and start the battery charging. I sat for a good couple of minutes trying to figure out how to peel that sticker off when I realized it's an image on the screen, not a sticker. Yeah, it's that clear. Of the reading I've done on it, it's nice and easy on the eyes. And when you turn the Kindle off, a classic book cover or portrait of an author comes up on the screen and stays there, and they're all gorgeous looking.

Getting your own files onto the Kindle is confusing. The docs talk about how to get your own books or documents onto the Kindle, but the way it's written is kind of confusing. You get an email address with your Kindle (like "mykindle@kindle.com"), and they tell you that you can send documents to that address and they'll show up on your Kindle, except there's some sort of charge... unless you send to "mykindle@free.kindle.com", in which case there's no charge but there's some sort of manual work involved. Alternatively, you can connect the Kindle to your computer through USB and drop files into a "documents" folder on the Kindle (which appears like a drive on the computer)... but can you create folders inside the "documents" folder to separate your purchased content from personal content, or not? I finally figured it out and have used Calibre to convert a couple of Cory Doctorow books and transfer them to my Kindle, but it wasn't the simplest thing. I can't see non-tech people getting the most out of that functionality; maybe they just suck it up and eat the costs.

Pricing on Kindle books is odd. Sometimes they're really competitive, like The Hunger Games being only $5 on Kindle, but $9 in paperback - Kindle version's about half price. Other times, you wonder about the cost, like Catching Fire (the sequel to The Hunger Games) being $8.52 on Kindle but $8.97 in hardcover - the electronic version costs only $0.45 less than a hardcover physical book? One would think if I'm not taking up the resources of the printing process, the savings could be passed along to me.

The organization of books on Kindle is confusing and kind of annoying. When you first get your Kindle, all the books show up right on the home screen. If you only have like three books, that's not a big deal, but out of the box you've got two dictionaries and the Kindle User Guide listed. I don't need to look at that stuff every time I fire up the Kindle.

Fortunately, they have the notion of "collections." A collection is sort of like a "tag" or a "folder" you can put your books in. You create a collection, then you add books to that collection. You can add a book into multiple collections, too - so if you have, say, a collection called "Science Fiction" and a collection called "Classics," you could put The Time Machine in both collections. You can read more about collections in the Kindle User Guide.

A Kindle home screen organized in collections.

There are three problems with collections.

Collection Problem 1: If a book is in a collection, you can't also pin it to your home screen. That is, once you've got things organized into collections, you generally show your home screen organized by collections so, you know, you can see your organization in action. (Alternatively, if you sort your home screen by, say, author, you see every book listed right on the home screen and no collections at all, which sort of defeats the purpose of collections.) Unfortunately, once you put a book into a collection and you sort your home screen by collections, the only books that appear on the home screen are the ones not in collections. You can't have a book in a collection and say "pin this to the home screen because I'm reading it right now." Granted, you can create a "Current Reading" collection and just use that, but that's sort of a hack, you know?

Collection Problem 2: Collections are always sorted by "most recently updated," not by collection name. This is the most frustrating bit of collections. You set all of your collections up, then sort your home screen by collection, and they appear not in alphabetical order but in descending order from most to least recently updated. (By "updated," I mean either you read a book in that collection or you added/removed an item in the collection.) There's a blog article talking about a workaround for this, but this, too, is a hack. One would think someone would have said, "Hey, you know what? Why don't we separate the 'grouping' concept (collection, title, author) from the 'sorting' concept (alphabetical, date) so people can see things the way they want?" I guess not.

Collection Problem 3: Collections are device-specific. I might go through and totally organize my library on my Kindle, but when I open the Android app it doesn't support collections, so I don't see that. If I were to get another Kindle and connect it to my account, I wouldn't even have the collections there unless I did this process of downloading all the books to the Kindle and then adding the collection from the "Archived Items" folder. (See the "To import a collection from another Kindle" section in the user guide.)

Aside from this stuff, I'm enjoying my Kindle so far. I'm planning on taking it to Australia with me at the end of the month, so that should really be the true test of how I like it.

Working with Windows Identity Foundation in ASP.NET MVC

If you've worked with Windows Identity Foundation, you'll find it very nearly mandates that you implement a passive security token service using classic ASP.NET web forms rather than MVC. It doesn't lend itself well to testability, and in some cases it writes content directly to the response stream without you being able to govern when/how that happens.

All is not lost, though. Here are a couple of helpers and tips when working with Windows Identity Foundation in ASP.NET MVC to create a passive STS.

First, drop what you're doing and go buy a copy of Programming Windows Identity Foundation by Vittorio Bertocci. The documentation on WIF is surprisingly thin and this book is like the lost set of docs that makes everything clear. Will it directly help you specifically with ASP.NET MVC and WIF? No, but it will help you understand what is going on with WIF so you know where you may need to insert yourself. It'll also explain how things are supposed to work, so when you're setting it up in MVC you can tell if things are going right or not.

Reflector is your friend. I am not condoning that you copy/paste anything out of the WIF assemblies, but using Reflector to figure out what it's doing is key.

For example, in WIF samples and in the WIF STS template you'll see a call to FederatedPassiveSecurityTokenServiceOperations.ProcessSignInRequest followed closely by FederatedPassiveSecurityTokenServiceOperations.ProcessSignInResponse. The ProcessSignInResponse method takes in an HttpResponse rather than an HttpResponseBase, which removes you from the ability in your MVC controller to use the System.Web abstractions for testability. However, if you look at what ProcessSignInResponse is actually doing, it's just taking the SignInResponseMessage that comes from ProcessSignInRequest and then it's writing it out to the response stream. You can do the same thing yourself in your controller using the controller's Response property and HttpResponseBase, allowing you to break that tie to the concrete System.Web classes.
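To make that concrete, here's a sketch of what that might look like as a controller action. Treat it as an outline, not the canonical approach: the SignIn action name and the CreateSecurityTokenService helper are placeholders, and the exact ProcessSignInRequest arguments depend on how your STS is configured.

```csharp
// Hypothetical STS controller action. Instead of calling
// FederatedPassiveSecurityTokenServiceOperations.ProcessSignInResponse (which
// demands a concrete HttpResponse), render the WS-Federation response
// ourselves so everything stays on the testable ActionResult path.
public ActionResult SignIn(SignInRequestMessage message)
{
    // Placeholder: however you construct your SecurityTokenService instance.
    var sts = this.CreateSecurityTokenService();

    SignInResponseMessage responseMessage =
        FederatedPassiveSecurityTokenServiceOperations.ProcessSignInRequest(
            message, this.User as IClaimsPrincipal, sts);

    // WriteFormPost renders the same auto-submitting HTML form that
    // ProcessSignInResponse would have written straight to the response stream.
    return this.Content(responseMessage.WriteFormPost(), "text/html");
}
```

Since the action returns a plain ContentResult, you can unit test it without ever touching System.Web.HttpContext.Current.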

Make use of model binding. In the STS template, the Default.aspx page they provide has a big if/then block that switches on query string parameter values to determine which WS-Federation action the incoming message has. Rather than that, wouldn't it be better to have a controller action that looks like this?

// This ostensibly replaces Default.aspx in the STS template.
public class DefaultController : Controller
{
    public ActionResult Index(WSFederationMessage message)
    {
        if (message.Action == WSFederationConstants.Actions.SignIn)
        {
            // Do your signin processing, then return an ActionResult.
        }
        // ...and so on; alternatively you could switch on message.Action.
    }
}

That, of course, assumes you have a model binder that will look at the incoming query string and parse a WSFederationMessage out of it. That's not too hard to do, and we can pretty easily add support for the derived WSFederationMessage types to it, too, like SignInRequestMessage.

using System;
using System.Web.Mvc;
using Microsoft.IdentityModel.Protocols.WSFederation;

namespace MyNamespace.ModelBinders
{
    public class WSFederationMessageBinder : IModelBinder
    {
        public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
        {
            if (controllerContext == null)
            {
                throw new ArgumentNullException("controllerContext");
            }
            if (bindingContext == null)
            {
                throw new ArgumentNullException("bindingContext");
            }

            try
            {
                var message = WSFederationMessage.CreateFromUri(controllerContext.HttpContext.Request.Url);
                if (!bindingContext.ModelType.IsAssignableFrom(message.GetType()))
                {
                    throw new WSFederationMessageException();
                }
                return message;
            }
            catch (WSFederationMessageException ex)
            {
                bindingContext.ModelState.AddModelError("", ex);
                return null;
            }
        }
    }
}
You can then register that model binder for the various WS-Federation message types at app startup:

var binder = new WSFederationMessageBinder();
ModelBinders.Binders[typeof(WSFederationMessage)] = binder;
ModelBinders.Binders[typeof(AttributeRequestMessage)] = binder;
ModelBinders.Binders[typeof(PseudonymRequestMessage)] = binder;
ModelBinders.Binders[typeof(SignInRequestMessage)] = binder;
ModelBinders.Binders[typeof(SignOutRequestMessage)] = binder;
ModelBinders.Binders[typeof(SignOutCleanupRequestMessage)] = binder;

Now you can actually write the controller action the way you'd like, with a strongly-typed WSFederationMessage parameter, and it will work.

Of course, if you look at the Default.aspx in the WIF STS template, it throws an UnauthorizedAccessException if a WS-Federation message comes in and isn't a sign-in or sign-out request. You can do the same thing declaratively in MVC using an authorization filter. That would change your controller action to look more like this:

[RequireWSFederationMessage(AllowedActions = WSFederationMessageActions.SignIn | WSFederationMessageActions.SignOut)]
public ActionResult Index(WSFederationMessage message)
{
  // ...handle the message...
}

Something like that: you allow specific message actions to pass through, and for anything else the user is treated as unauthorized.

Create a filter attribute for ensuring only proper message types are allowed through. First you'll need that WSFederationMessageActions enumeration so you can specify what's allowed and what's not.

using System;

namespace MyNamespace.Filters
{
  [Flags]
  public enum WSFederationMessageActions
  {
    All = WSFederationMessageActions.Attribute | WSFederationMessageActions.Pseudonym | WSFederationMessageActions.SignIn | WSFederationMessageActions.SignOut | WSFederationMessageActions.SignOutCleanup,
    Attribute = 1,
    Pseudonym = 2,
    SignIn = 4,
    SignOut = 8,
    SignOutCleanup = 16
  }
}

Yes, I could have just calculated the result of the "or" operation for the "All" but this way if any values change, I don't need to mess with "All." Do it your way if you're not cool with this.

Next, the filter attribute:

using System;
using System.Collections.Generic;
using System.Web.Mvc;
using Microsoft.IdentityModel.Protocols.WSFederation;

namespace MyNamespace.Filters
{
  [AttributeUsage(AttributeTargets.Method | AttributeTargets.Class, Inherited = true, AllowMultiple = true)]
  public sealed class RequireWSFederationMessageAttribute : FilterAttribute, IAuthorizationFilter
  {
    // Lookup table for converting string actions to the associated flag
    private static readonly Dictionary<string, WSFederationMessageActions> _actionLookup = new Dictionary<string, WSFederationMessageActions>()
    {
      { WSFederationConstants.Actions.Attribute, WSFederationMessageActions.Attribute },
      { WSFederationConstants.Actions.Pseudonym, WSFederationMessageActions.Pseudonym },
      { WSFederationConstants.Actions.SignIn, WSFederationMessageActions.SignIn },
      { WSFederationConstants.Actions.SignOut, WSFederationMessageActions.SignOut },
      { WSFederationConstants.Actions.SignOutCleanup, WSFederationMessageActions.SignOutCleanup }
    };

    public WSFederationMessageActions AllowedActions { get; set; }

    // A unique TypeId per instance so AllowMultiple works correctly.
    private object _typeId = new object();
    public override object TypeId
    {
      get
      {
        return this._typeId;
      }
    }

    public RequireWSFederationMessageAttribute()
    {
      // Default to allowing all actions.
      this.AllowedActions = WSFederationMessageActions.All;
    }

    public bool IsAllowed(string action)
    {
      if (String.IsNullOrWhiteSpace(action) || !_actionLookup.ContainsKey(action))
      {
        return false;
      }
      var enumAction = _actionLookup[action];
      return (this.AllowedActions & enumAction) == enumAction;
    }

    public void OnAuthorization(AuthorizationContext filterContext)
    {
      if (filterContext == null)
      {
        throw new ArgumentNullException("filterContext");
      }

      WSFederationMessage message = null;
      // If you can't parse out a message or if the parsed message
      // isn't an allowed action, deny the request.
      if (!WSFederationMessage.TryCreateFromUri(filterContext.HttpContext.Request.Url, out message) || !this.IsAllowed(message.Action))
      {
        filterContext.Result = new HttpUnauthorizedResult();
      }
    }
  }
}

Now you have a filter attribute that will check to make sure the incoming message is of an expected type and will deny access if it's not.

Hopefully some of this will help you get working with WIF in ASP.NET MVC. It'd have been nice if MVC had been considered in the initial rollout of WIF, but no such luck. I don't even see a Connect page for accepting suggestions. Fingers crossed for the next release…!

I Don't Feel Like a Dad

On a somewhat daily basis, I have this weird sort of "aha!" moment where I realize, again, that I'm a father, and that's really weird to me. I picture fathers as guys who are older than I am, which is to say that "being a father" somehow equates to "higher age," even though that's obviously not remotely true. Of course, I also don't feel like I'm really an adult yet, either, even being in my 30s, given that concerns about getting enough time to play Rock Band overtake my desire to, say, watch the evening news. I own a house, too, and when I think about that in concrete terms it weirds me out as well.

I also have a difficult time equating "baby" and "human," like Phoenix is some sort of small animal that needs to be taken care of (like a house cat). I find myself talking to her the way I talk to the cats, sitting her on the couch and saying, "See, now you're on the human chair!" when, duh, she is a human so of course she's on the human chair. I think that will probably change when she has, you know, motor control and some mechanism of communication beyond "scream."

I don't think Jenn has this sort of cognitive dissonance issue over feeling like she's-a-mom-but-she's-not-a-mom. I think it's sort of clicked for her. I suppose I'll get there eventually. For now, it's just still… surreal.