media, music, home, cats

Jack and Xev are friends, aren't they?

I love my two cats, but the little Siamese-tabby mix Jack is pretty aggressive and he’s been chasing my other cat Xev around a lot, particularly in the last, oh, month or so.

Yeah, you’d never guess that from the picture, right?

Okay, so a couple of weeks back, it’s business as usual - Xev is yowling because Jack is riding her back around the house.  (Yes, he’s fixed. They both are.  He’s just being an asshole.)  They make a mad dash out from behind the couch and…

SLAM!

I turn to see that one of the stands holding one of my Bose Acoustimass 16 cube speakers has been knocked over, and the two cubes that make up the speaker have come apart at the joint where they swivel.

The cubes are broken apart where they swivel.

Looking at the exposed mechanism, it looks like the thing literally just snaps together. Unfortunately, the fit is so tight and so precise that there’s really no way to just snap it back together. The two cubes must get joined pretty early in the manufacturing process, too, because it looks like they’d need to be snapped together before the speaker bits got inserted. The speakers proper still work; the cubes just aren’t attached anymore.

I tried everything short of actually disassembling the damn thing, but it was no use. I ended up taking it to a repair shop that specializes in this sort of thing, and it turns out the speakers aren’t field serviceable. Of course they’re not. Instead, they have to order me a replacement, and that’s going to cost me - wait for it - $151. That’s one-hundred-and-fifty-one American dollars. That’s on top of the $25 I already paid them just to look at the thing (part of which gets credited toward the replacement, which is how it comes down to $151).

Travis == Over A Barrel.  ARGH.

GeekSpeak, dotnet, vs

I started working on converting CR_Documentor over to use XSLT for its documentation transformations this morning and soon realized that it may not be that easy. The goal was to be able to take the XSLT from the various documentation generation engines (NDoc, Sandcastle) and, as fixes or changes happened, “plug in” the new XSLT and have the preview ready to go.

Not so much.

I tried a simple test using the NDoc XSLT and it turns out that I have a few stumbling blocks.

  • The input XML is complex.  The format NDoc expects the XML to be in before the transformation runs is pretty involved.  That’s not really a problem in a post-build scenario where you’re not looking for real-time updates, but just creating the correct XML hierarchy is a pretty big task, let alone then pushing it through the transform engine.
  • Everything is relational.  A lot of the NDoc XSLT assumes you’ve got everything you need to document all in one file.  For example, when you generate the documentation for a method, any cross-reference links get generated too… which means connecting actual URLs to HTML files and setting up links and everything.  To avoid generating bad links, the input XML gets heavily pre-processed first.  Again, not something that can readily happen in real time.
  • Much is assumed to be in the filesystem.  Temporary files, the XSLT, images, script… the XSLT assumes a lot of this is in specific spots in the filesystem, which means I couldn’t use the stylesheets as-is anyway; I’d have to heavily massage them to get them where I want them to be.
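
For context, the “simple test” mentioned above was basically the sketch below: load the NDoc stylesheet with XslCompiledTransform and push a hand-built member document through it.  The method name, file paths, and the shape of the input XML are all made up for illustration; the real NDoc input is far richer and heavily pre-processed.

// Requires System.Xml and System.Xml.Xsl.
private static void RunNDocXsltSpike()
{
  // Hypothetical path - point this at wherever the NDoc stylesheets live.
  XslCompiledTransform transform = new XslCompiledTransform();
  transform.Load(@"C:\temp\ndoc\xslt\member.xslt");

  // A hand-built input document.  NDoc actually expects a much richer,
  // heavily pre-processed hierarchy than this.
  XmlDocument input = new XmlDocument();
  input.LoadXml("<ndoc><assembly name=\"MyAssembly\"><module name=\"MyAssembly\" /></assembly></ndoc>");

  // Transform straight to an HTML file on disk.
  using (XmlWriter output = XmlWriter.Create(@"C:\temp\preview.html"))
  {
    transform.Transform(input, output);
  }
}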

Unfortunately, all of this means using the XSLT directly is pretty much a non-starter.  Even if I could get past the fact that I’d be doing almost as much work creating the input XML as I’m doing right now to generate the whole preview, the relational requirements and the amount of stuff that has to live in the filesystem mean I’m probably better off just hard-coding the transformation the way I’ve been doing, as lame as that is.

I won’t lie; it doesn’t increase my desire to work on the project.  I like it, and I really wish I could just release it to the community open-source style, but since I can’t, I’m sort of stuck.  Motivationally challenged, shall we say.

Well, I guess my next step is to look for opportunities to refactor it and make the code at least a little easier to maintain and update. Maybe that will make it easier to implement new rendering views.

Having gone through the process of being acquired twice in the last few months, I’m getting pretty used to how the information dissemination process works.  In a nutshell: there really isn’t any.

From one point of view, I get it - there’s a lot to coordinate, and legal requirements dictate that certain things can’t be shared until certain times and so on.  I get it.  I get it so much I’m really tired of people reminding me about it because they think I don’t get it.  I promise.  I get it.

The other side - the side I always seem to be on - is “The Dark Side.”  Not like the Dark Side of the Force; more like “the people who are in the dark about any details of what’s going on.”  This is actually the majority of the people most of the time, and regardless of how “transparent” communications are supposed to be, management (the people who “know stuff”) generally seems to believe that “more communication is better,” even when there isn’t actually anything to communicate.

If you haven’t been through this process, I thought I’d help you out by throwing together a little Q&A simulator so you know what this is like.

First, imagine you’ve been notified of a very important all-hands meeting.  It’s mandatory.  You must attend.  Your very life depends on it.

You get to the meeting, and the Person In Charge says, basically, “Hey, folks, we’ve been acquired.  We figured this was the best move for the company.  Any questions?”

Now’s the time you get to ask all the questions you might have.  Try them out in my handy simulator:

Ask your question about the acquisition of the company:

Answer:

…and there you have it.

Now go to three or four of these in close succession - one for the whole company, one for your division, one for your group within the division… you get the idea.  Congratulations!  You’ve been through the acquisition experience.

Let’s say you’re writing a service like an HttpModule that performs an action against each page that gets served up.  Maybe it does something like move the viewstate to the bottom of the page, update a property on the page, or fudge the control hierarchy a bit.

The thing is, you want to unit test it, but how?  Mocking an HttpContext is hard enough, and many times you end up going down the UI automation road. Ugly.

Enter TypeMock.

A few lines of code, setting up the minimum amount of stuff, and you can mock just enough context to actually get a full page request lifecycle to execute - events and all.  So say your service needs to be called during the page PreInit and you want to check the results of whatever you did during Load… here’s what that looks like:

[Test(Description = "Tests an external influence on the page lifecycle.")]
public void MyPageServiceTest()
{
  Page page = new Page();

  // Mock just enough browser capabilities for the page to render HTML.
  MockObject<HttpBrowserCapabilities> mockBrowser = MockManager.MockObject<HttpBrowserCapabilities>(Constructor.NotMocked);
  mockBrowser.ExpectGetAlways("PreferredRenderingMime", "text/html");
  mockBrowser.ExpectGetAlways("PreferredResponseEncoding", "UTF-8");
  mockBrowser.ExpectGetAlways("PreferredRequestEncoding", "UTF-8");
  mockBrowser.ExpectGetAlways("SupportsMaintainScrollPositionOnPostback", false);

  // Mock a simple GET request for a fake page, attached to the mocked browser.
  MockObject<HttpRequest> mockRequest = MockManager.MockObject<HttpRequest>(Constructor.Mocked);
  mockRequest.ExpectGetAlways("FilePath", "/default.aspx");
  mockRequest.ExpectGetAlways("HttpMethod", "GET");
  mockRequest.ExpectGetAlways("Browser", mockBrowser.Object);

  // The response can be entirely mocked; only the page lifecycle matters here.
  MockObject<HttpResponse> mockResponse = MockManager.MockObject<HttpResponse>(Constructor.Mocked);

  // A real HttpContext built from the mocked request and response.
  HttpContext mockContext = new HttpContext(mockRequest.Object, mockResponse.Object);

  using (StringWriter stringWriter = new StringWriter())
  using (HtmlTextWriter htmlWriter = new HtmlTextWriter(stringWriter))
  {
    // Have the mocked browser hand back our writer so page rendering has somewhere to go.
    mockBrowser.AlwaysReturn("CreateHtmlTextWriter", htmlWriter);
    page.PreInit +=
      delegate(object sender, EventArgs e)
      {
        // Perform some action
      };
    page.Load +=
      delegate(object sender, EventArgs e)
      {
        // Check/Assert the results of your action
      };
    // Run the full page request lifecycle against the mocked context.
    page.ProcessRequest(mockContext);
  }
}

Obviously the majority of this could be wrapped up into a library or something, but I show it here to illustrate that, at least in ASP.NET 2.0, this is all it takes.

(You’ll notice that I’m using the Reflective mocks instead of the Natural mocks that I prefer in TypeMock.  The reason is that I’m mocking a couple of internal things and mocking non-public items requires the Reflective mocks.  By mocking these internal convenience methods, I can greatly reduce the amount of setup for this to run.)
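
If you did want to wrap the boilerplate up, it might look something like the hypothetical helper below.  The class and method names are mine, not part of TypeMock or ASP.NET; the TypeMock calls are exactly the ones used in the test above.

// Hypothetical helper - names are made up, but the setup mirrors the test above.
public static class MockPageContext
{
  public static HttpContext Create(HtmlTextWriter writer)
  {
    // Minimal browser capabilities so the page can render HTML to our writer.
    MockObject<HttpBrowserCapabilities> mockBrowser = MockManager.MockObject<HttpBrowserCapabilities>(Constructor.NotMocked);
    mockBrowser.ExpectGetAlways("PreferredRenderingMime", "text/html");
    mockBrowser.ExpectGetAlways("PreferredResponseEncoding", "UTF-8");
    mockBrowser.ExpectGetAlways("PreferredRequestEncoding", "UTF-8");
    mockBrowser.ExpectGetAlways("SupportsMaintainScrollPositionOnPostback", false);
    mockBrowser.AlwaysReturn("CreateHtmlTextWriter", writer);

    // A plain GET against a fake page path, attached to the mocked browser.
    MockObject<HttpRequest> mockRequest = MockManager.MockObject<HttpRequest>(Constructor.Mocked);
    mockRequest.ExpectGetAlways("FilePath", "/default.aspx");
    mockRequest.ExpectGetAlways("HttpMethod", "GET");
    mockRequest.ExpectGetAlways("Browser", mockBrowser.Object);

    // The response is fully mocked; only the lifecycle matters here.
    MockObject<HttpResponse> mockResponse = MockManager.MockObject<HttpResponse>(Constructor.Mocked);

    return new HttpContext(mockRequest.Object, mockResponse.Object);
  }
}

With something like that in place, the test body shrinks to creating the writers, calling MockPageContext.Create(htmlWriter), wiring up the PreInit/Load handlers, and calling page.ProcessRequest() on the result.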