gists, net, testing, build

Getting TypeMock, NUnit, and NCover to work together in your build script can sometimes be a tricky thing. Getting any one of those things to work individually is easy enough, getting two going is a little tougher, but getting all three together requires a bit of finesse. Add to that the fact that you may run different versions of different products (different NUnit versions, for example) for different source code bases and it gets downright complicated.

The way my product code works, when I check out the codeline I want to work on it comes with all of the dependencies - every third party assembly, every build tool. That includes TypeMock, NUnit, and NCover. That way the build server doesn’t have to have anything installed when it runs a build - it can auto-deploy TypeMock, register NCover, do its thing, and undo all of that when it’s done. (Yes, there are some drawbacks to that - parallel builds are limited when registered versions of profilers change, for example - but we’ve dealt with that sort of thing in other ways.)

On developer workstations, we have TypeMock installed so we can make use of the tracer and other helpful utilities, but on the build server, we auto-deploy TypeMock.

Since TypeMock is a profiler, if you use it in your unit tests, you can’t just run NUnit in the build and have it work - you have to start TypeMock, then run NUnit, then shut TypeMock back down. If you’re using NCover, you have to make sure NCover is registered and linked with TypeMock.

TypeMock comes with some custom build tasks to help you get this working. You will also want NCoverExplorer and NCoverExplorer.Extras: NCoverExplorer will aggregate coverage logs for you, and NCoverExplorer.Extras includes NCover and NCoverExplorer MSBuild tasks.

The general flow of what needs to happen is this:

  1. Register TypeMock with the system (if it’s not a developer workstation - our devs have it installed).
  2. Register NCover with the system.
  3. Start TypeMock and link it with NCover.
  4. Run NCover and pass it the command line parameters to run your NUnit tests. Tell it which assemblies to profile.
  5. Stop TypeMock.
  6. Unregister NCover with the system.
  7. Use NCoverExplorer to aggregate the coverage reports into a single report.
  8. On error, stop TypeMock and unregister NCover.

Here’s the example:

<?xml version="1.0" encoding="utf-8" ?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- Register the tasks necessary for running tests and coverage. -->
  <Import Project="Relative\Path\To\TypeMock.NET\TypeMock.MSBuild.Tasks"/>
  <UsingTask TaskName="NCoverExplorer.MSBuildTasks.NCover" AssemblyFile="Relative\Path\To\NCoverExplorer\NCoverExplorer.MSBuildTasks.dll"/>
  <UsingTask TaskName="NCoverExplorer.MSBuildTasks.NCoverExplorer" AssemblyFile="Relative\Path\To\NCoverExplorer\NCoverExplorer.MSBuildTasks.dll"/>

  <PropertyGroup>
    <!-- Property indicating we're building on a dev machine - build server will set this to false (/p:DeveloperBuild=false). -->
    <DeveloperBuild>true</DeveloperBuild>
    <!-- Property indicating where build logs will go. -->
    <BuildLogDirectory>Your\Build\Logs</BuildLogDirectory>
  </PropertyGroup>

  <Target Name="test">
    <!-- Register TypeMock only if it's the build server - it'll already be on a developer box. -->
    <TypeMockRegister
      Company="YOUR COMPANY"
      License="YOUR LICENSE"
      AutoDeploy="true"
      Condition="'$(DeveloperBuild)' != 'true'"/>

    <!-- Register NCover so it's available for TypeMock. -->
    <Exec Command="regsvr32 /s &quot;Relative\Path\To\NCover\CoverLib.dll&quot;"/>

    <!-- Start TypeMock and link it with NCover. -->
    <TypeMockStart Link="NCover"/>

    <!-- Enumerate the test assemblies you'll be executing with NUnit. -->
    <CreateItem Include="Your\Build\Output\*.Test.dll">
      <Output TaskParameter="Include" ItemName="UnitTestAssemblies"/>
    </CreateItem>

    <!-- Create the folder where unit test and coverage logs will go. -->
    <MakeDir Directories="$(BuildLogDirectory)"/>

    <!-- Run NUnit through NCover so profiling happens. -->
    <!-- Note the use of "batching" so this is equivalent to a "foreach" loop in MSBuild. -->
    <!-- (Attribute names follow the NCoverExplorer.Extras tasks; adjust for your version.) -->
    <NCover
      ToolPath="Relative\Path\To\NCover"
      CommandLineExe="Relative\Path\To\NUnit\nunit-console.exe"
      CommandLineArgs="&quot;%(UnitTestAssemblies.FullPath)&quot; /xml=&quot;$(BuildLogDirectory)\%(UnitTestAssemblies.Filename)-results.xml&quot;"
      CoverageFile="$(BuildLogDirectory)\%(UnitTestAssemblies.Filename)-coverage.xml"
      AssemblyList="%(UnitTestAssemblies.Filename)"/>

    <!-- Stop TypeMock and unregister NCover. -->
    <CallTarget Targets="test-finally"/>

    <!-- Get all of the coverage logs and aggregate them with NCoverExplorer. -->
    <CreateItem Include="$(BuildLogDirectory)\*-coverage.xml">
      <Output TaskParameter="Include" ItemName="CoverageReports"/>
    </CreateItem>
    <NCoverExplorer
      ToolPath="Relative\Path\To\NCoverExplorer"
      CoverageFiles="@(CoverageReports)"
      ReportType="ModuleClassSummary"
      XmlReportName="$(BuildLogDirectory)\coverage-summary.xml"/>

    <!-- In case one of the tests fails, make sure to stop TypeMock and unregister NCover. -->
    <OnError ExecuteTargets="test-finally"/>
  </Target>

  <!-- Stopping TypeMock and unregistering NCover is a separate target because it has to happen -->
  <!-- regardless of success or failure of the unit tests. Like the "finally" in a "try/finally" block. -->
  <Target Name="test-finally">
    <TypeMockStop ContinueOnError="true"/>
    <Exec Command="regsvr32 /u /s &quot;Relative\Path\To\NCover\CoverLib.dll&quot;" ContinueOnError="true"/>
  </Target>
</Project>

In the example, notice how the steps for stopping TypeMock and unregistering NCover have been placed in a separate target called “test-finally” since it’s used a lot like a try/finally block. That’s the sort of thing we’re trying to emulate. You’ll also notice that we’re using MSBuild “batching” to run each test assembly through NCover and generate an individual coverage log.
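To make the batching behavior concrete: MSBuild batches a task whenever one of its attributes references item metadata with the %() syntax, executing the task once per unique metadata value rather than once with the whole item list. A minimal sketch (the Message task here is purely illustrative):

```xml
<!-- Runs once per assembly in @(UnitTestAssemblies) - effectively a "foreach": -->
<Message Text="Covering %(UnitTestAssemblies.Filename)"/>

<!-- Runs a single time, with the entire semicolon-joined list: -->
<Message Text="Covering @(UnitTestAssemblies)"/>
```

That per-item execution is what gives you one results file and one coverage file per test assembly.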

Additional notes:

  • Obviously you’re going to need to change the paths and other placeholder parameters to fit your build.
  • If you’ve got TypeMock installed on dev machines and on the build box, you can skip the TypeMockRegister task and just start/stop TypeMock.
  • If you’ve got NCover installed on dev machines and on the build box, you don’t need to execute “regsvr32” to register/unregister NCover. As long as NCover is registered before you start TypeMock, you’re fine.
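As a sketch of those last two notes: with TypeMock and NCover already installed on every machine, the whole dance collapses to the start/stop pair around the coverage run (same task names as in the full example, with the registration steps dropped):

```xml
<Target Name="test">
  <TypeMockStart Link="NCover"/>
  <!-- ... NCover/NUnit run as in the full example ... -->
  <CallTarget Targets="test-finally"/>
  <OnError ExecuteTargets="test-finally"/>
</Target>

<Target Name="test-finally">
  <TypeMockStop ContinueOnError="true"/>
</Target>
```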

net, vs

I’m playing with the latest release of TypeMock (now “TypeMock Isolator,” as it sounds like they have a suite of products planned beyond their mocking product) and I think my favorite feature is the better debugging support. Sure, you can mock fields now (admittedly, a little scary sounding, but with legitimate applications nonetheless) and they’ve cranked up the performance on it, but how many times have you fired up TestDriven.NET and started your test in the debugger only to get odd behavior because you tried to step into a mocked method?

Now, when you try that, you actually see the method outlined in the debugger so you get a visual cue about what you’re doing:

TypeMock outlines mocked methods in the debugger.

Oh, and you know how you ran into trouble popping open the watch window, or QuickWatch, and evaluating a mocked call multiple times, causing mock verification problems? No more! The debugger works without hitches. Love it, love it, love it.

This is all in the new TypeMock Isolator 4.2.1 beta. If you get a chance, check it out. Good stuff coming from those TypeMock folks.

media, movies

Went to see Cloverfield this weekend.

What I expected: A cool, maybe kinda scary, fun roller-coaster-ride monster movie.

What I got: A headache from too much shaky-cam following stupid people through New York.

Before I get a bunch of people telling me I missed the point, let me stem the tide: I got it. I mean, I get the whole “point of view” thing and how the interesting bit was that rather than tell the monster movie story from an omniscient perspective they drilled down and got it from a “new and different” perspective - that of victims. I get it.

That doesn’t make it good.

Really, I think the shaky-cam thing got old. Let’s ignore the “but that’s the point” argument. I’ve never seen home movies as ridiculously shaky as what they showed. Even the stuff at the beginning, when they weren’t being chased by the monster, was shakier than any home movie I’ve ever seen. By the end of the movie, Jenn couldn’t even watch it - she had to close her eyes and just listen because it was far, far more jarring than even Blair Witch (which also sucked).

Oh, and if you’re being chased by a giant monster and buildings are falling all around you and you happen to drop the camera, are you going to go back and pick it up? Further, are you going to run around with it at eye level the whole time? Hell, no. You’re going to throw the camera at the monster chasing you and you’re going to high-tail it out of there.

Did I mention the thing pretty much just ended, without any resolution to what happened to the monster? Did it die? Did it live? We don’t know.

There were really only three parts I dug about it (and, technically, these are spoilers, but you’re going to be smart and save your money, right?):

  1. The part where the stealth bombers show up to lay waste on the monster.
  2. The part where the girl explodes after getting bitten.
  3. The fact that little mini-creatures get dropped off the one big creature. That shit creeped me out.

Other than that, lame, lame, lame. I wanted less of the morons running around the city and more of the military getting medieval on the monster. I wanted less shaky cam and more ability to actually see what was going on.

Save your money. Or, better, go see Juno. That movie was great.

gaming, xbox

I realized that I hadn’t posted any update on my Xbox Live DRM problems or the dashboard update issues I was having, so here you go.

The DRM Issue

Back in October, I got my third console replacement and ran into the same stupid DRM trouble I had the previous two times: Content I purchased could not be played unless I was logged into Xbox Live - not just signed in, but signed in and online. This isn’t as much a problem for a one-gamertag household, but when you have a two-or-more-gamertag household (like my wife and I have), it means that a game I bought for both of us to play is suddenly only accessible to my wife when I’m signed in, whether I’m actually physically there or not. Lame.

Unfortunately, they’d changed the process for fixing this, so it was different from my previous two go-rounds. I was told it’d take two to four weeks to get a resolution.

The issue was still not resolved at the beginning of this month (January 2008), which is well beyond the two-to-four-weeks promised timeframe. Calling Xbox Live Support did no good - they must have a stock answer for situations like this: “I’m sorry, but I don’t have any additional information. Your case has been escalated to Microsoft and they will get back to you.” No one else you can contact, nothing else you can do.

I ended up contacting Major Nelson about it. I gave him the full details - times, dates, names, and status - and within three days a guy from Xbox Escalations called me. I provided some additional information that they were apparently missing (but never asked for) and got a direct phone number for him so if anything goes wrong, I can call him and he’ll personally take care of it.

Of course, the new “deadline” for getting a fix is February 7, so I don’t actually have an answer yet, but I’m more hopeful than before.

An interesting side note: talking to the guy at Escalations, it turns out the Microsoft folks don’t get special treatment on this. He has a cubicle-mate who is in the same wait-it-out boat that I’m in. Apparently, Microsoft didn’t realize that changing the process for re-authorizing a console would be this much of a challenge. I, personally, am not surprised at all.

The Dashboard Update

In December, while waiting for my DRM problems to be solved, I found that I was suddenly unable to get onto Xbox Live at all because I couldn’t take the latest dashboard update. I blamed it on the DRM problem and, after calling support on this one, too, it turned out that I wasn’t the only one having issues.

That said, after doing some maintenance and troubleshooting, the problem ended up being with my hard drive.

When you get a console replacement, you send in your console, but you keep your hard drive, faceplate, and other peripherals. As such, my hard drive had been attached to four different consoles and had taken dashboard updates just fine for the first three, but when it came time to take an update for the fourth console, it’d had enough. Something got corrupted on the drive and it needed to be formatted.

I spent, literally, over ten hours on the phone with support for this one. I had to do all sorts of ridiculous troubleshooting (they’d tell me it was a problem with my network, which I damn well knew it wasn’t, then they’d say it was something else, like stabbing in the dark), I got lied to several times (they’d tell me I’d get calls back and I never did, they’d tell me they “escalated my call” and they never did), and generally got put through the wringer.

Once I got past the hoop jumping - which I firmly believe was caused in no small part by language barriers - I finally got to a supervisor who said he’d replace my hard drive. But the process for replacing a hard drive is that you send in your old hard drive, they send you a new one… and you lose all of your data. Unacceptable. After explaining how I’d been lied to and how much time I’d spent on the phone already, I convinced them to send me a hard drive without my having to send them mine in return - compensation for putting up with this crap.

Of course, after a week of not hearing anything, I had to call up and do the same convincing all over again with a different supervisor because the supervisor who promised me a new hard drive was in the Xbox Live division of support but the people who can actually grant that sort of thing are in the Hardware division of support.

Anyway, after fighting that out, I got a new hard drive in the mail, formatted it (it still had someone else’s content on it!), was able to take the dashboard update, and moved as much of my stuff as I could over to the new drive.

Oh, did I mention this was in late December when they were having Xbox Live problems? Think about that in context with my DRM issues - I can’t get to my content (which includes moving it) without signing in, but I can’t sign in because Xbox Live is having problems…

It was painful. After a lot of hassle, I got all but my Zuma and Bejeweled 2 save games moved over (for some reason you can’t move or copy them, so I lost all of my progress in both of those games) and formatted my old drive. Now I have one drive for game content and one drive for video content or archiving stuff.

Now, if only they’d remedy this DRM issue…