conferences, aspnet, net

The conference technically starts tomorrow, but I’m in town a day early to get settled so I can be there, bright-eyed and bushy-tailed. Or at least bright-eyed.

There was a mashup session that ran from 4:00p to 8:00p alongside registration, but by the time I got here, got registered, got to the room, and got something to eat… well, I also got a little tired and didn’t really feel like throwing down with the mad technical skillz. Instead, I thought it would be prudent to take it easy - it’s been a long day, and I do want to be ready to pay attention and learn in some of the great sessions planned.

My schedule looks like this:

Monday, April 30

- 9:30a - General Session
- 1:30p - Building Rich Web Experiences using Silverlight and JavaScript for Developers
- 3:00p - Using Visual Studio “Orcas” to Design and Develop Rich AJAX Enabled Web Sites
- 4:30p - AJAX Patterns with ASP.NET

Tuesday, May 1

- 8:30a - Front-Ending the Web with Microsoft Office
- 10:15a - Designing with AJAX: Yahoo! Pattern Library
- 11:45a - Developing ASP.NET AJAX Controls with Silverlight
- 2:15p - Go Deep with AJAX
- 4:00p - General Session

Wednesday, May 2

- 8:30a - How to Make AJAX Applications Scream on the Client
- 10:00a - Windows Presentation Foundation for Developers (Part 1 of 2)
- 11:30a - Windows Presentation Foundation for Developers (Part 2 of 2)

Interestingly, this isn’t the schedule as I originally planned it on Friday. Even up to the last minute the times, places, and topics are changing. I don’t know if this is the set of classes I’ll actually be in or not, but we’ll see.

Getting into the spirit of things, I’ve joined Facebook and Twitter since those seem to be ways folks are supposed to coordinate things. I’m not super taken with either one, but then, I’m not a big “social networker.” I’ll withhold judgment for now.

gaming, xbox

So about eight months ago I had to send my Xbox 360 in for repair and they sent me back a refurbished console. Due to the crazy, crappy DRM scheme they have on the content you get from Xbox Live Marketplace (which includes Xbox Live Arcade games), that meant I had to jump through a bunch of hoops to get the games on my system to work correctly again.

Well, I just got my Xbox 360 back from my recent bout with the Red Ring of Death and guess what - they sent me another refurb.

Which, of course, means I get to go through the hoops a second time. That’s right - I get to create a second dummy Xbox Live Silver membership (because I can’t use the dummy account I created last time around), have them refund me points to that account, and then use that account to re-purchase everything. Again.

Net result is that I spent like an hour last night taking inventory of all of the Xbox Live Arcade games we’ve purchased, figuring out which account we originally bought them with, and determining the price for each game as listed in the Xbox Live Marketplace.

I then called Xbox Live Support and after explaining the situation to one of the representatives, he mentioned that I should just be able to go in with the account I purchased the games with, hit Xbox Live Arcade, and select the “re-download” option (without deleting the game from the hard drive first) and it should authorize the new console.

That doesn’t work.

The call got escalated to the supervisor, who spent time going through my account and my wife’s account and totaling up all of the things we’ve purchased. The problem there is that their history only goes back one year, so they don’t actually have a visible record of what you purchased beyond that… so they argue with you when you tell them, say, that you bought one of the Xbox Live Gold packages at a retail outlet over a year ago (because you’ve renewed since then) and it came with a copy of Bankshot Billiards 2, and yes, you’d like to have that re-authorized on the console as well.

After all of that, they still came up with a different number of points that they owe me than I did. You know why? Because they use the number of points you originally spent on the game as a guide, not today’s prices. And prices have gone up, so now the game you paid 400 points for six months ago costs 800 points if you want to buy it today but they only want to give you the 400 points you originally paid. Obviously, that causes a little contention on the phone, but the best the supervisor can do is put a note in there that mentions your concern because…

…there’s a guy named Eric whose job it is, apparently, to call all of the people that this happens to and hash out the whole “Points After Repair” thing (yes, they have an actual name for it, which sort of tells you something). I get to argue with Eric about the difference in what they think they owe me and what they actually owe me, and that discussion will happen in “approximately five business days.”

And there it sits. A couple of hours of work and phone later and I’m hanging on for Eric to call me and give me points so I can re-purchase and re-download the games I already own so my console works like it should again. Awesome.

subtext, blog, xml

I’ve been looking for a while to migrate off this infernal pMachine blog engine I’m on. The major problem is how to migrate my data to the new platform. Enter BlogML.

BlogML is an XML format for the contents of a blog. You can read about it and download it from the CodePlex BlogML site. They’re currently at version 2.0, which implies there was a 1.0 somewhere along the line that I missed.

Anyway, the general idea is that you can export blog contents in BlogML from one blog engine and import into another blog engine, effectively migrating your content. Thus began my journey down the BlogML road.

If you download BlogML from the site it comes with an XSD schema for BlogML, a sample BlogML export file, a .NET API, and a schema validator.

I didn’t use the .NET API because pMachine is in PHP and all of the routines for extracting data are already in PHP, so I wrote my pMachine BlogML exporter in - wait for it - PHP. As such, I can’t really offer any commentary on the quality of the API. That said, a quick perusal of the source shows that there are almost no comments and the rest looks a lot like generated, XmlSerializer-style code.

The schema validator is a pretty basic .NET application that can validate any XML against any schema - you select the schema and the XML files manually and it just runs validation. This actually makes it troublesome to use; you’d think the schema would be embedded by default. If you have some other schema validation tool, feel free to ignore the one that comes with BlogML.

The real meat of BlogML is the schema. That’s where the value of BlogML is - in defining the standard format for the blog contents.

The overall format of things seems to have been thought out pretty well. The schema accounts for posts, comments and trackbacks on each post, categories, attachments, and authors. I was pretty easily able to map the blog contents of pMachine into the requisite structure for BlogML.
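To give you a feel for the shape of things, here’s a rough sketch of what a BlogML export looks like. Element and attribute names here are paraphrased from my memory of the sample file that ships with the download, so don’t treat this as schema-exact - check the XSD for the real thing:

```xml
<blog root-url="/blog/" date-created="2007-05-01T00:00:00">
  <title type="text">My Blog</title>
  <authors>
    <author id="1" email="author@example.com" />
  </authors>
  <categories>
    <category id="1" title="gaming" />
  </categories>
  <posts>
    <post id="123" post-url="/blog/archive/some-post.aspx">
      <title type="text">A Post</title>
      <content type="html"><![CDATA[<p>Post body here.</p>]]></content>
      <comments>
        <comment id="1" user-name="Somebody">A comment.</comment>
      </comments>
      <trackbacks>
        <trackback id="1" url="http://example.com/their-post" />
      </trackbacks>
      <attachments>
        <attachment url="/images/picture.png" mime-type="image/png" />
      </attachments>
    </post>
  </posts>
</blog>
```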

There are three downsides to the schema:

First, the schema could really stand to be cleaned up. This may not be obvious if you’re editing the thing in a straight text editor, but when you throw it into something like XMLSpy, you can see the issues. Things could be made simpler by better use of common base types that get extended. There are odd things like an empty, hanging element sequence in one of the types. Generally speaking, a good tidy-up might make it a lot easier to use, because…

Second, the documentation is super duper light. I think there are like 10 lines of documentation in the schema, tops, and there’s nothing outside the schema that explains it, either. Without going back and forth between the schema and the sample document, I’d have no idea what exactly was supposed to be where, what the format of things needed to be, etc.

Third, and admittedly this may be more pMachine-specific, there’s no notion of distinguishing between a “trackback” and a “pingback.” There’s only a “trackback” entity in the schema, so if your blog supports the notion of a “pingback,” you will lose the differentiation when you export.

Anyway, I planned on importing my blog into Subtext, so I set up a test site on my development machine, ran the export on my pMachine blog (through a utility I wrote; I’m going to do some fine-tuning and release it for all you stranded pMachine users) and did the import. This is where I started noticing the real shortcomings in BlogML proper. These fall into two categories:

Shortcoming 1: Links. If you’ve had a blog for any length of time, you’ve got posts that link to other posts. That works great if your link format doesn’t change. If I’m moving from pMachine to Subtext, though, I don’t want to have to keep my old PHP blog around (hence “moving”), and, if possible, I’d like any intra-site links to get updated. There doesn’t seem to be any notion in BlogML of pre-defining a “new link mapping” (being able to say “for this post here, its new link will be here”) so import engines can convert content on the fly. There’s also no notion of a response from an import engine that says “here’s the old post ID, here’s the new one” so you can set up your own redirection (which you will have to do, regardless of whether you update the links inside the posts).

I think there needs to be a little more with respect to link and post ID handling. BlogML might be great for defining the contents of a blog from an export standpoint, but it doesn’t really help from an import standpoint. Maybe a second schema for old-ID-to-new-ID mapping (or even old-ID-to-new-post-URL) that blog import engines could return when they finish importing would address the mapping issue. As it stands, I’m going to be doing some manual calculation and post-import work.
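For instance, an import engine could hand back something as simple as this. To be clear, this is entirely hypothetical - no such schema exists in BlogML today; the element and attribute names are mine:

```xml
<post-map>
  <!-- One entry per imported post: the exporter's ID, the new
       engine's ID, and the post's new permalink. -->
  <post old-id="123"
        new-id="456"
        new-url="/blog/archive/2007/05/01/some-post.aspx" />
</post-map>
```

With a map like that in hand, you could generate your redirect rules mechanically, and an exporter could even take one as input on a second pass to rewrite intra-site links.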

Shortcoming 2: Non-Text Content. If you’ve got images or downloads or other non-text content on your blog posts, it’s most likely stored in some proprietary folder hierarchy for the blog engine you’re on… and if you’re moving, you won’t have that hierarchy anymore, will you? That means you’ve got to move not only the text content but all the rest of the content into the new blog engine.

There is a notion of attachments in BlogML, but it’s not clear that it solves the issue. You can apparently even embed “attachments” for each entry as base64 encoded entities right in the BlogML. It’s unclear, however, how an attachment relates back to its entry and, further, how a BlogML import will handle it. This could probably be remedied with some documentation, but like I said, there really isn’t any.

This sort of leaves you with one of two options: You can leave the non-text content where it is and leave the proprietary folder structure in place… or you can move the non-text content and process all of the links in all of your posts to point to the new location. One way is less work but also less clean; the other is cleaner but a lot of work. Lose-lose.

Anyway, the end result of my working with BlogML: I like the idea and I’ll be using it as a part of a fairly complex multi-step process to migrate off pMachine. That said, I think it’s got a long way to go for widespread use.

personal, gaming, xbox

Saturday was a hell of a day.

We got up at around 7:00a and got out of the house basically ASAP so we could get to Jenn’s parents’ house in time to get in their motorhome and head down to Eugene for Jenn’s grandma’s 79th birthday party. That’s like an hour and a half drive, which isn’t as bad when you’re riding in the Adamson Bus, but it’s still a long trip.

We got there and Jenn’s grandma was very pleased to see us. It was a surprise party and the entire family was there.

Now, when I say “the entire family,” I mean like 40 or 50 people. The rockin’ part was that it was raining and we had planned to have the party outside, but instead we had it indoors in a space that was, oh, maybe 500 square feet. You can imagine the chaos - not enough chairs (or enough room for chairs) ended up meaning people sitting on the floor, sitting on laps, standing in the hallway… and it got hot.

So that lasted for about four hours. And the thing is, I like Jenn’s family and all, but I don’t really know anyone and every time we get together it’s like this whirlwind of faces that only look mostly familiar and only results in me being confused and claustrophobic. I don’t really have anything to talk about with them because none of them are tech people and I really don’t follow sports or family gossip. So it’s nice to see them, but I won’t lie, it’s not super duper fun.

After that, we hopped back on the bus and headed home. We got back somewhere around the 6:30p timeframe, and on the way home we planned on stopping at my parents’ house because it was my dad’s birthday, too. We got there sometime around quarter to seven, but they weren’t home, so I planned on calling him later in the evening. I actually feel bad I didn’t get in touch with him earlier, but he’s going to have to throw me a bone on this one because I was sort of otherwise occupied.

Sunday was errand day so we ran around and did the shopping and so on. Groceries ran far more than I anticipated because we ended up picking up a lot of high-ticket items (cleaning supplies and so on) that we had been putting off. Not great on the pocketbook, but had to be done.

We also picked up one of those automatic cat boxes. We like to occasionally go away for the weekend, and the new cat generates a looooot of poop, so we want to make sure the box is always clean and doesn’t need to be dealt with for a couple of days at least. We went with the Littermaid Elite after looking at a lot of these things because not only did it seem to be the most popular model, but it also didn’t lock us into proprietary refills or materials beyond the little litter receptacles (which run about $0.30 each and last around five days - so maybe $2/month, which is better than the ongoing costs of some of the other boxes). The only thing we were afraid of was whether they’d use it.

As I was putting it together, I got to about step seven of ten and had to put the clean litter in the box. I poured it in, turned around, put the litter box down, picked up the instruction manual and turned around to do step eight… but the cat was already in the box taking a fresh crap - even before I got the box put together - so I’m no longer afraid they won’t use it. I couldn’t even finish putting it together before it was used.

Jury’s still out on whether I like it or not. It works great on the clumped-up pee balls, but if the cat poop is… well, soft… it sort of attaches to the rake that cleans the box. I cleaned it off the rake manually the first time, but I left it when I saw it again this morning. I’m going to see if the situation somehow rectifies itself.

I also checked on my sick Xbox, which is on its way home from Xbox Hospital. They are sending me another refurbished machine - it has a different serial number than the one I sent in - so I’m going to end up going through the Xbox DRM problems again. Support actually has a name for this process now - “Points After Repair.” I called them and said I noticed that the serial number was different and that I was disappointed I’d have to go through this again and they were all, “Well, just set up [yet another] Xbox silver account before you call, then when you call in give us your repair number and ask about ‘Points After Repair.’ We’ll hook you up.” Ridiculous. Because that didn’t cause all nature of pain in my ass last time.

testing

I follow the whole “design-for-testability vs. design-for-usability” debate and, in the interest of full disclosure, I’m a fan of not designing for testability. Part of what I design has to be a usable API and I can’t have a lot of the cruft required in testable design when I get to my finished product.

I mean, think about it - if the .NET Framework had been built using design-for-testability, there’s no way 80% of the people using it today would have been able to pick it up. Considering how many developers can’t solve a FizzBuzz problem, somehow I don’t think having everything infinitely pluggable and flexible solely to support the testing of the framework would have made it any easier to use.

Now think about the notion of the static utility class. Everyone has one (or more) of those classes that are sealed and only provide a few static helper methods. I mean, for .NET 1.1 folks, who didn’t have their own static String.IsNullOrEmpty implementation?
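For the .NET 1.1 crowd, the kind of thing I mean looked roughly like this (the class and member names here are mine, purely illustrative):

```csharp
public sealed class StringUtility
{
  // Sealed class + private constructor: .NET 1.1 had no static classes,
  // so this was the idiomatic way to build a pure utility class.
  private StringUtility()
  {
  }

  // Does the same job as the String.IsNullOrEmpty that arrived in .NET 2.0.
  public static bool IsNullOrEmpty(string value)
  {
    return value == null || value.Length == 0;
  }
}
```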

On my project, we have a lot of static helpers that read, process, and cache configuration data. The basic algorithm is something like this:

  1. Check the cache for the data. If it’s there, return it.
  2. Data wasn’t in cache, so open the config file.
  3. XML deserialize the file or do whatever other processing needs to happen.
  4. Cache and return the result.

Here’s some basic sample code:

public sealed class ConfigService
{
  private const string CacheKey = "CacheKeyForConfig";
  private const string ConfigPath = "/Config/myfile.config";

  public static System.Collections.ArrayList GetConfiguration()
  {
    System.Collections.ArrayList retVal = null;
    string configPath = ConfigPath;

    if(System.Web.HttpContext.Current != null)
    {
      retVal = System.Web.HttpContext.Current.Cache[CacheKey]
        as System.Collections.ArrayList;
      configPath = System.Web.HttpContext.Current.Server.MapPath(configPath);
    }
    if(retVal == null)
    {
      if(!System.IO.File.Exists(configPath))
      {
        throw new System.IO.FileNotFoundException(
          "Unable to read default configuration file.",
          configPath);
      }

      //... read/process the file... set the return value...

      if(System.Web.HttpContext.Current != null)
      {
        System.Web.HttpContext.Current.Cache.Insert(
          CacheKey,
          retVal,
          new System.Web.Caching.CacheDependency(configPath));
      }
    }

    return retVal;
  }
}

From an API standpoint, it’s a one-liner: ConfigService.GetConfiguration()

But how do you test that? If you’re running FxCop, your static utility class “ConfigService” needs to be sealed. Being sealed limits your ability to mock it with several of the frameworks out there, and the simple fact that it’s a static method is limiting to others.

Now, granted, you could write a class that provides cache retrieval services specific to this helper and go to the trouble of instantiating that class and… you know what, I’m already tired of typing that out. I don’t need all that. I’ll never sub in a different cache provider for anything other than testing. I don’t want the consumer of the method to even have to know about any of that (so they shouldn’t have to pass the cache instance in as a parameter, for example).

But I do want to have this thing tested. If the object is in cache, does it just return the object without further processing? If it’s not, does it read the file and does it then cache the results? I want the design simple, I want it usable, and I don’t want a lot of moving pieces. In fact, ideally, the code would be about as simple as the sample I posted.

So you need to mock a few things, specifically around HttpContext. (Possibly other things based on the implementation, but we’re going for simple here.)

You can’t really readily do that. Or can you? What if your test looked like this:

[TestFixture]
public class ConfigServiceTest
{
  [Test]
  public void GetConfiguration_NoCache_FileExists()
  {
    //...place a known good configuration in a temporary location...
    string pathToTemporaryConfig = ExtractConfigToTempFileAndGetPath();

    // Set up calls for the cache
    MockObject mockCache = MockManager.MockObject(typeof(System.Web.Caching.Cache), Constructor.Mocked);
    mockCache.ExpectGetIndex(null);
    mockCache.ExpectCall("Insert");

    // Set up calls for the server utility
    MockObject mockServer = MockManager.MockObject(typeof(System.Web.HttpServerUtility), Constructor.Mocked);
    mockServer.ExpectAndReturn("MapPath", pathToTemporaryConfig);

    // Set up calls for the context
    MockObject mockContext = MockManager.MockObject(typeof(System.Web.HttpContext), Constructor.Mocked);
    mockContext.ExpectGetAlways("Cache", mockCache.Object);
    mockContext.ExpectGetAlways("Server", mockServer.Object);

    // Use natural mocks to ensure the mock context is always returned
    using(RecordExpectations recorder = RecorderManager.StartRecording())
    {
      // Ensure any call for HttpContext always gets returned
      System.Web.HttpContext dummyContext = System.Web.HttpContext.Current;
      recorder.Return(mockContext.Object);
      recorder.RepeatAlways();
    }

    System.Collections.ArrayList actual = ConfigService.GetConfiguration();
    Assert.IsNotNull(actual, "The configuration returned should not be null.");
    // ... do other assertions that validate the returned config ...

    MockManager.Verify();
  }
}

Using TypeMock, I was able to easily mock a web context and test this code without having to impact the design or the API usability.

It sounds like I’m shilling for TypeMock, and maybe I am a little. On a larger scale, though, I’m just happy I’m able to get full test coverage without sacrificing my usable API.

And if someone reports a defect with this code? Piece of cake to get a mocked test into place that replicates the behavior and even easier to track down the issue because I don’t have all of these additional unnecessary layers of abstraction to fight through. The code is simple - simple to read, simple to understand, and simple to troubleshoot for the next developer who has to try and fix it. You have to love that.