General Ramblings

Well, today’s my last day of work until the 30th of this month. I’ve got tomorrow and Friday off to get any last minute preparations done for the wedding, then Saturday’s the wedding and Sunday morning we’re off to Aruba.

Aruba!

A lot of people have asked if I’m nervous about the wedding. No, I’m not. Jenn and I have been together for about six and a half years now, so this isn’t something we’re jumping into blind - I know what I’m getting into. We’re pretty much already married; the paper and the ceremony are more of a formality than anything else. Granted, it’ll be fun and I’m looking forward to the food and the vacation, but it’s nothing to be nervous about.

I’m a little stressed about work, since I’m leaving right in the middle of a pretty big part of the project. I trust that the other folks on the team can get it done, but it’s still hard not getting closure on it. Plus, they know I’ll beat them senseless with a wet noodle if it’s not done to my precise and exacting standards. Oh! There’s the obsessive-compulsive perfectionist rearing its ugly head.

That’s actually the part about the wedding I’m stressed about - not the actual getting married, and not even the numerous vendors we’ve coordinated (well, Jenn has coordinated) to get stuff done. It’s all the people who want to help. I totally appreciate that folks want to help out, and I really do thank everyone who’s offered. Jenn did a really awesome job getting the whole thing together, though, and really all we need is for the people who returned their RSVP cards and said they’d be there to just show up. Show up, have a great time, enjoy the food, and share the day with us. That’s it! Plus, you really don’t want to put yourself in my critical path right now. Everything is primroses and peach trees until you tell me you’re going to get something done and then don’t do it exactly the way I need it done right now immediately. Problems arise, awkwardness ensues, people’s tires get slashed, and it’s just a mess.

I’m normally not like that, I promise. It’s the wedding that’s doing it. You only get one, and I want things to go off without issue so I can just take it easy and enjoy the day without having to chase things down and make sure everyone’s doing what they need to be doing.

So this is my last post for a while, until after I get back and recuperate from the honeymoon in Aruba. It’ll be my first trip outside the US (yes, even counting Canada and Mexico) so it’s a momentous occasion on several fronts. It’s also a test - the first vacation Jenn and I have gone on where there’s not something to do or see the whole time… just relaxing on the beach. The test is to see if we’re bored or if the relaxation does us good.

We’ll see some of you Saturday. To the rest - see you at the end of the month!

gaming, playstation

I’m digging my PSP. I’ve been playing videos on it, games on it, having a great time with it. I generally use my iPod for music, but I thought I’d try out the ol’ PSP to see if it’d be a good substitute in a pinch.

I grabbed a couple of files at random from my iTunes collection and dropped them into the PSP’s “MUSIC” folder. One file was an MP3, one file was an M4A (AAC). Fired up the PSP, headed over to the music section, and started playing.

The MP3 played fine. It also displayed the artist name, track title, and cover art for the song playing. Very cool - I’ve spent a lot of time getting my song metadata updated, so it’s nice to see something using it.

The M4A (AAC) also played fine… but it didn’t display any of the metadata, just the name of the song file. Lame.

I contacted Sony support and, after several rounds of email (in which they helpfully copy-and-pasted a long explanation of how to get music to play on the PSP - not my problem at all), I got on the phone with a support person who didn’t really know what the word “metadata” meant.

After explaining the situation in great detail and using very small words, the support representative walked me around in circles for a while until I realized one of two things must be true: either the PSP supports it and the rep still has no idea what I’m talking about, or the PSP doesn’t support it and the rep isn’t allowed to say so. (“There must be a problem with your song file, sir.” No, there’s not - it plays fine, iTunes sees the metadata, iPod sees the metadata, Xbox 360 sees the metadata… either the PSP doesn’t support it or it reads it from a different place in the song file than every other player I’ve got.)

I finally cornered the rep and got her to admit the PSP doesn’t support it. This morning I filed a question/comment on the issue with Sony requesting an update to the PSP system software to allow display of the AAC metadata. Hopefully they’ll resolve it for the next release.

net, testing, process

Last week I posted a short discussion about whether mock objects are too powerful for most developers. The question arose from the impression that people may use mock objects incorrectly and effectively invalidate the unit tests they write. For example, with mock objects you might set up a test around a system state that can never actually be reached. Not so great.

In the end I came to the conclusion that it’s more of an education thing - as long as the person using the tool has a full understanding of what they’re doing, mocks are a great thing.

I used a couple of mock frameworks and then came across TypeMock, a mock framework for .NET that has significantly more power and features than other .NET mock frameworks out there. I instantly fell in love with it because it not only made mocking so easy with its natural syntax and trace utility (among other things), but it has the ability to mock things that many other frameworks don’t - calls to private methods, constructors, static method calls, etc.

Wanna Be Startin' Somethin'

That’s where the debate really heats up. There are effectively two schools of thought - “design for testability” and “test what’s designed.”

In the “design for testability” school, a lot of effort goes into designing very loosely coupled systems where any individual component can be swapped out for another at test time. These systems are generally very “pluggable” because, in order to test them, you need to be able to substitute test/mock objects during the unit tests. Test-driven development traditionally yields a system designed for testability, since unit tests have to cover whatever’s coded.
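To make the distinction concrete, here’s a minimal sketch of the “design for testability” style (the class and interface names are hypothetical, invented purely for illustration): every collaborator hides behind an interface so a test can swap in a fake.

```csharp
// "Design for testability" style: the collaborator is an interface,
// so a unit test can substitute a fake implementation.
public interface IPriceCalculator
{
    decimal GetPrice(string sku);
}

public class OrderProcessor
{
    private readonly IPriceCalculator _calculator;

    // Production code injects the real calculator; a unit test
    // injects a hand-rolled fake or a framework-generated mock.
    public OrderProcessor(IPriceCalculator calculator)
    {
        _calculator = calculator;
    }

    public decimal Total(string sku, int quantity)
    {
        return _calculator.GetPrice(sku) * quantity;
    }
}
```

The trade-off is exactly the one discussed below: in production, an interface like IPriceCalculator may only ever get a single implementation.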

The “test what’s designed” school comes at it from the other direction. There’s a notion of what the system needs to do and the software gets written to do that, but unit tests are generally written after the fact (yes, shame, shame, shame). “Pluggability” is specifically crafted into the places where it needs to be rather than everywhere. Test-driven development hasn’t generally worked for systems like this.

In some cases it’s a religious argument, and in some cases it’s not. Miki Watts picked up my post and you can tell there’s a definite trend toward “design for testability” there. He argues that my mocking example isn’t valid - that proper design might have dictated I pass around an interface to the factory and mock the factory’s interface rather than mocking the static call to the factory to get the desired behavior in testing.

Eli Lopian (CTO of TypeMock) picked up both my post and Miki’s and argues that lowering the coupling that way - passing around an interface to the factory, an interface that didn’t previously need to exist and that consuming classes really shouldn’t know or care about - also lowers the cohesion of the code.

I won’t lie - I’m a “test what’s designed” person. The thing I always come back to, which the “design for testability” folks can’t seem to account for, is when the API itself is a deliverable. The code needs to look a certain way to the folks consuming it because the customer is a developer. Sure, it’s a niche, but it’s a valid requirement. If I design for testability, my API gets fat. Really fat. I have interfaces that only ever get one implementation in a production setting. I have public methods exposed that should never - and will never - be called by consumers wanting to use the product.

My experience has shown that it’s also a pain to be a consumer of products architected like that. Let’s use a recent experience with CruiseControl.NET, the continuous integration build server. That thing is architected to the nines - everything has an interface, everything is super-pluggable, it’s all dependency-injection and factories… amazing architecture in this thing.

Ever try to write a plugin for CruiseControl that does something non-trivial? It’s the very definition of pain. The API clutter (combined with, granted, the less-than-stellar body of documentation) makes it almost impossible to figure out which interfaces to implement, how they should be implemented, and what’s available to you at any given point in execution. It’s a pain to write for and a pain to debug. The architecture gets in the way of the API.

Catch-22

Yeah, you could make the “internals” of the product designed for testability and throw a facade on it for consumers of the API. You still have to test that facade, though, so you get into a catch-22 situation.

Enter “testing what’s designed.” You have a defined API that is nice and clean for consumers. You know what needs to go in and you know what needs to come out. But you can’t have 150 extra interfaces that only ever have a single implementation solely for testing. You can’t abstract away every call to any class outside the one you’re working on.

The problem with “testing what’s designed” was that you couldn’t do it in a test-driven fashion - the two notions were sort of counter to each other. With a framework like TypeMock, that’s no longer true: I can keep the API clean and still move to a test-driven methodology.

Here’s another example that is maybe a better one than last time. Let’s say you have a class that reads settings from the application configuration file (app.config or web.config, as the case may be). You use System.Configuration.ConfigurationSettings.AppSettings, right? Let’s even abstract that away: You have a class that “uses settings” and you have an interface that “provides settings.” The only implementation of that interface is a pass-through to ConfigurationSettings.AppSettings. Either way, at some point the rubber has to meet the road and you’re going to have to test some code that talks to ConfigurationSettings.AppSettings - either it’s the class that needs the settings or it’s the implementation of the interface that passes through the settings.
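Here’s a minimal sketch of that abstraction (the interface and class names are hypothetical): the interface “provides settings,” and its one production implementation is a pure pass-through to ConfigurationSettings.AppSettings.

```csharp
using System.Configuration;

// The "provides settings" abstraction described above.
public interface ISettingsProvider
{
    string GetSetting(string key);
}

// The only production implementation: a pass-through to
// ConfigurationSettings.AppSettings. Something, somewhere, still has
// to test code that touches the real AppSettings call.
public class AppConfigSettingsProvider : ISettingsProvider
{
    public string GetSetting(string key)
    {
        return ConfigurationSettings.AppSettings[key];
    }
}
```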

How do you test the various settings coming back from ConfigurationSettings.AppSettings? Say the setting needs to be parsed into a System.Int32, and if there’s a problem or the setting isn’t there, a default value gets returned. You’ll want to test the various scenarios, right? Well, in that case, you can overwrite the app.config file for each test, which isn’t necessarily the best way to go because putting the original app.config file back is error prone… or you can set up an intricate system where you spawn a new application that uses your class and a specified test configuration file (waaaaay too much work)… or you can mock the call to ConfigurationSettings.AppSettings. In this case, I’m going to mock that bad boy.
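Here’s roughly what that looks like using TypeMock’s reflective API of the time - MockManager.Init, MockManager.Mock, ExpectGetAlways - though I’m writing the exact call names from memory, so treat them as assumptions. SettingsReader is a hypothetical class under test, invented for the example.

```csharp
using System.Collections.Specialized;
using System.Configuration;
using NUnit.Framework;
using TypeMock;

// Hypothetical class under test: parses a named app setting as an
// Int32 and falls back to a default if it's missing or malformed.
public class SettingsReader
{
    public static int GetInt32(string key, int defaultValue)
    {
        string raw = ConfigurationSettings.AppSettings[key];
        if (raw == null)
        {
            return defaultValue;
        }
        try
        {
            return int.Parse(raw);
        }
        catch (FormatException)
        {
            return defaultValue;
        }
    }
}

[TestFixture]
public class SettingsReaderTests
{
    [Test]
    public void MissingSettingFallsBackToDefault()
    {
        MockManager.Init();
        try
        {
            // Intercept the static AppSettings property getter so the
            // test never reads (or has to rewrite) the real app.config.
            Mock configMock = MockManager.Mock(typeof(ConfigurationSettings));
            configMock.ExpectGetAlways("AppSettings", new NameValueCollection());

            Assert.AreEqual(30, SettingsReader.GetInt32("timeoutSeconds", 30));
        }
        finally
        {
            MockManager.Verify();
        }
    }
}
```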

Let’s say you disagree with that - either you don’t think you need to test the interface implementation or you should go down the intricate temporary application route: More power to you. Seriously. I don’t think “not testing” is an option, but I also have a deadline and writing a bajillion lines of code to test a one-shot interface implementation is a time consumer.

On the other hand, let’s say you agree - that mocking the call to ConfigurationSettings.AppSettings is legit. That sort of negates the need for the one-off interface, then, doesn’t it? From a “you ain’t gonna need it” standpoint, you’ve then got an interface and one implementation of the interface that really are unnecessary in light of the fact you could just call ConfigurationSettings.AppSettings directly.

But if it’s okay to mock the call to ConfigurationSettings.AppSettings, why isn’t it okay to mock a call to a factory (or settings provider, or whatever) that I create?

Hence my original example of mocking the call to the factory - if there’s no need for all the extra interfaces and loose coupling solely to abstract things away for testing (or you don’t have the option because you can’t clutter your API), then mocking the call to the factory is perfectly legitimate. I’m not testing the output of the factory, I’m testing how a particular method that uses the factory will react to different outputs. Sounds like the perfect place for mocking to me.
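Same hedged TypeMock-style sketch for the factory case (WidgetFactory and ReportBuilder are made-up names, and again the exact TypeMock calls are from memory): I’m not testing what the factory produces, just forcing one of its possible outputs to see how the consuming method reacts.

```csharp
using NUnit.Framework;
using TypeMock;

public class Widget
{
    public string Name;
}

// Hypothetical static factory - the kind of call the "design for
// testability" school would hide behind an injected interface.
public class WidgetFactory
{
    public static Widget Create()
    {
        return new Widget();
    }
}

// Hypothetical consumer: its behavior depends on the factory output.
public class ReportBuilder
{
    public string BuildTitle()
    {
        Widget widget = WidgetFactory.Create();
        return widget == null ? "No widget available" : widget.Name;
    }
}

[TestFixture]
public class ReportBuilderTests
{
    [Test]
    public void NullFromFactoryYieldsFallbackTitle()
    {
        MockManager.Init();
        try
        {
            // Not testing the factory's output - testing how BuildTitle
            // reacts when the static factory call returns null.
            Mock factoryMock = MockManager.Mock(typeof(WidgetFactory));
            factoryMock.ExpectAndReturn("Create", null);

            Assert.AreEqual("No widget available", new ReportBuilder().BuildTitle());
        }
        finally
        {
            MockManager.Verify();
        }
    }
}
```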

Miki also calls me out on testing abstract classes: “Speaking of abstract classes, I don’t think they should be tested as a separate part in a unit test… abstract by definition means that something else will inherit from it and possibly add to the default behaviour, but that class on its own will never exist as an instance, so I’m not sure where mocking comes into the picture.”

Let’s use the .NET Framework as an example. Take the class System.Web.UI.WebControls.ListControl - it’s an abstract class. It doesn’t stand on its own - other classes inherit from it - but there is specific default behavior that exists and often isn’t overridden. From the above statement, I get the impression that Miki believes every class deriving from System.Web.UI.WebControls.ListControl has its own obligation to test the inherited ListControl behavior. To me that would create a lot of redundant test code, so I instead opt to have separate tests for my abstract classes that cover the default behavior. That releases the derived classes from having to test it unless they specifically override or add to the default behavior, and it lets you isolate the testing for the abstract class away from any given derived type.

But since you can’t instantiate an abstract class directly, you end up having to create a dummy class that derives from the abstract class and then test against that dummy class… or you could mock it and test the behavior against the mock. Again, the easier of the two routes seems to me to be to mock it.
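For completeness, here’s roughly what the dummy-class route looks like (hypothetical names, NUnit-style); the mock-based route gets you the same isolated test of the default behavior without having to write and maintain the stub.

```csharp
using NUnit.Framework;

// Hypothetical abstract class with default behavior worth testing once.
public abstract class ListControlBase
{
    // Default behavior that derived classes usually inherit unchanged.
    public virtual string FormatItem(string item)
    {
        return item == null ? string.Empty : item.Trim();
    }
}

// The "dummy class" whose only job is to make the abstract type
// instantiable for the test; it adds no behavior of its own.
internal class StubListControl : ListControlBase
{
}

[TestFixture]
public class ListControlBaseTests
{
    [Test]
    public void FormatItemTrimsWhitespaceByDefault()
    {
        ListControlBase control = new StubListControl();
        Assert.AreEqual("item", control.FormatItem("  item  "));
    }
}
```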

Maybe Miki is right - maybe I don’t get the idea of unit testing or the benefits that come from them. That said, using success as a metric, somehow I feel like I’m getting by.

costumes

I usually go all-out for Halloween and create a pretty elaborate costume. This year I don’t think I’ll be able to because the wedding planning and execution will be taking up too much time.

I’ve always wanted to go as a Ghostbuster, though. I’ve never really found great plans for one, and I don’t own a machine shop to do all the custom work needed. This guy made a pretty dang good one, mostly out of wood, which is something I can work with Dremel-style.

Turns out the key to the whole thing is decent plans (as you’d have guessed), which he got from this site. That and some patches will take you a long way.

Maybe I’ll get on this for next year.

media, movies, tv

I blogged a bit ago that I was going to try MediaPortal and Daemon Tools as a solution for storing all of my DVDs on a home theater PC and having it work like a big movie jukebox - but with hard drives instead of DVDs.

I have the day off, so I thought I’d give it a run.

I downloaded Daemon Tools and installed it. Fine. I downloaded MediaPortal and installed it. Fine. Spent quite some time configuring MediaPortal and trolling through the available options (there are a lot). I got all of that set up to a point where I figured it would work. I configured MediaPortal to know where Daemon Tools got installed and was ready to go.

I grabbed a movie at random from the cabinet in the other room (it happened to be Collateral Damage) and brought it into the computer room to get an ISO of the disc. ISO ripped, software installed, ready to rumble.

Fired up MediaPortal and told it where to get the ISO I just ripped. That’s when I ran into my first complaint about MediaPortal - pretty much any configuration change requires a full restart of the application. Add a new folder where movies are stored - restart. Add “.iso” as a movie extension - restart. Lame.

Finally got all that configured and found the ISO. Clicked it to play, and Daemon Tools pops a warning about secure command lines. Click OK a bunch of times and set Daemon Tools to not be in secure mode. In the meantime, MediaPortal isn’t doing anything. Try again.

Clicked the ISO again and MediaPortal tells me it’s loading, but nothing seems to be happening… until the stupid InterActual media player thing fires up and tries to install. No, no, I don’t want that. I just want to see the movie. Cancel install.

No luck. After fussing around with it for some time, I concluded that MediaPortal generally defers to whatever default movie player is assigned in Windows when it plays back an ISO.

I decided to try it with a regular DVD and the magic combo for that seems to be to disable any autoplay for DVD movies. Even then, the playback seems to be a little choppy. Of course, that might just be my hardware setup - it’s really not designed for this sort of thing.

Back to the ISO now that I have the DVD stuff mostly working. It’s still giving me the stupid InterActual player notification.

I messed around with it for a lot longer, and it turns out there are two things you need to do to get this working.

First, set up the Daemon Tools virtual drive so it does not give the Auto Insert Notification. This is key so when the ISO is mounted it doesn’t do its autoplay garbage and try to install crap.

Uncheck the 'Auto Insert Notification' box in the Daemon Tools device parameters.

Second, in the MediaPortal configuration, make sure the drive letter that MediaPortal is using for the Daemon Tools drive matches the one you set up to not autoplay. I’ve set Daemon Tools drive 0 to be drive ‘M:’ (for “movies”). MediaPortal needs to know that information, too.

Ensure that the Daemon Tools section in MediaPortal points to the proper virtual drive.

Once you do that, things seem to work. But getting the planets to align is definitely an undocumented process.

Now I just have to figure out whether it’s going to be worth it. Hard drives are cheap, but I’m going to have to get a new computer for this (connected to the TV) with hardware fast enough to handle this configuration. I can probably put it together piece by piece, but when all is said and done I’m sure I’ll be looking at a couple of grand or more. I also need to figure out how the integration with DVD Profiler works (if it integrates at all), since I have all of my movies cataloged in there.