gaming, playstation

I’m digging my PSP. I’ve been playing videos on it, games on it, having a great time with it. I generally use my iPod for music, but I thought I’d try out the ol’ PSP to see if it’d be a good substitute in a pinch.

I grabbed a couple of files at random from my iTunes collection and dropped them into the PSP’s “MUSIC” folder. One file was an MP3, one file was an M4A (AAC). Fired up the PSP, headed over to the music section, and started playing.

The MP3 played fine. It also displayed the artist name, track title, and cover art for the song playing. Very cool - I’ve spent a lot of time getting my song metadata updated so it’s nice to see something using it.

The M4A (AAC) also played fine… but it didn’t display any of the metadata, just the name of the song file. Lame.

I contacted Sony support and, after several rounds of email (where they helpfully copy-and-pasted a long discussion of how to get music to play on the PSP - not the problem at all), I got on the phone with a support person who didn’t really know what the word “metadata” means.

After explaining the situation in great detail and using very small words, the support representative walked me around in circles for a while until I realized one of two things must be true: either the PSP supports it and the rep still has no idea what I’m talking about, or the PSP doesn’t support it and the rep isn’t allowed to say so. (“There must be a problem with your song file, sir.” No, there’s not - it plays fine, iTunes sees the metadata, iPod sees the metadata, Xbox 360 sees the metadata… either the PSP doesn’t support it or it reads it from a different place in the song file than every other player I’ve got.)

I finally cornered the rep and got her to admit the PSP doesn’t support it. This morning I filed a question/comment on the issue with Sony requesting an update to the PSP system software to allow display of the AAC metadata. Hopefully they’ll resolve it for the next release.

net, testing, process

I posted last week a short discussion about whether mock objects are too powerful for most developers. The question stems from the impression that people may use mock objects incorrectly and effectively invalidate the unit tests they write. For example, using mock objects you might set up a test and mock out a system state that will never actually be reached. Not so great.

In the end I came to the conclusion that it’s more of an education thing - as long as the person using the tool has a full understanding of what they’re doing, mocks are a great thing.

I used a couple of mock frameworks and then came across TypeMock, a mock framework for .NET that has significantly more power and features than other .NET mock frameworks out there. I instantly fell in love with it because it not only made mocking so easy with its natural syntax and trace utility (among other things), but it has the ability to mock things that many other frameworks don’t - calls to private methods, constructors, static method calls, etc.

Wanna Be Startin' Somethin'

That’s where the debate really heats up. There are effectively two schools of thought - “design for testability” and “test what’s designed.”

In the “design for testability” school, a lot of effort goes into designing very loosely coupled systems where any individual component can be substituted out for any other individual component at test time. The systems here are generally very “pluggable” because in order to test it out, you need to be able to swap test/mock objects in during the unit tests. Test driven development traditionally yields a system that was designed for testability since unit tests have to cover whatever’s coded.
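The “design for testability” style can be sketched in a few lines. Here’s a rough, hypothetical Python analogue (none of these class names come from the post): the consumer takes its collaborator through the constructor, so a test can swap in a stub without touching any real infrastructure.

```python
class SettingsProvider:
    """Interface-like base; the production implementation reads real config."""
    def get(self, key):
        raise NotImplementedError

class ProductionSettings(SettingsProvider):
    def get(self, key):
        # Would read the real configuration store here.
        ...

class ReportGenerator:
    def __init__(self, settings):
        self._settings = settings  # injected, so tests can substitute a stub

    def title(self):
        return self._settings.get("report.title") or "Untitled"

# In a test, a trivial stub stands in for the real provider:
class StubSettings(SettingsProvider):
    def get(self, key):
        return None

assert ReportGenerator(StubSettings()).title() == "Untitled"
```

The price of this flexibility is exactly the complaint later in the post: `SettingsProvider` exists purely so tests can substitute it, and production only ever has one implementation.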

The “test what’s designed” school comes at it from the other direction. There’s a notion of what the system needs to do and the software gets written to do that, but unit tests are generally written after the fact (yes, shame, shame, shame). “Pluggability” is specifically crafted into the places it needs to be rather than everywhere. Test driven development hasn’t generally worked for systems like this.

In some cases it’s a religious argument, and in some cases it’s not. Miki Watts picked up my post and you can tell there’s a definite trend toward “design for testability” there. He argues that my mocking example isn’t valid - that proper design might have dictated I pass around an interface to the factory and mock the factory’s interface rather than mocking the static call to the factory to get the desired behavior in testing.

Eli Lopian (CTO of TypeMock) picked up my post as well as Miki’s post and argues that the lower coupling of the code (passing around the interface to the factory - an interface that didn’t previously need to exist and that consuming classes really shouldn’t know/care about) lowers the cohesion of the code.

I won’t lie - I’m a “test what’s designed” person. The thing that I always come back to, that the “design for testability” folks can’t seem to account for, is when the API is a deliverable. The code needs to look a certain way to the folks consuming it because the customer is a developer. Sure, it’s a niche, but it’s a valid requirement. If I design for testability, my API gets fat. Really fat. I have interfaces that only ever get one implementation in the production setting. I have public methods exposed that should/will never be called by consumers wanting to use the product.

My experience has shown that it’s also a pain to be a consumer of products architected like that. Let’s use a recent experience with CruiseControl.NET, the continuous integration build server. That thing is architected to the nines - everything has an interface, everything is super-pluggable, it’s all dependency-injection and factories… amazing architecture in this thing.

Ever try to write a plugin for CruiseControl that does something non-trivial? It’s the very definition of pain. The API clutter (combined with, granted, the less-than-stellar body of documentation) makes it almost impossible to figure out which interfaces to implement, how they should be implemented, and what’s available to you at any given point in execution. It’s a pain to write for and a pain to debug. The architecture gets in the way of the API.

Catch-22

Yeah, you could make the “internals” of the product designed for testability and throw a facade on it for consumers of the API. You still have to test that facade, though, so you get into a catch-22 situation.

Enter “testing what’s designed.” You have a defined API that is nice and clean for consumers. You know what needs to go in and you know what needs to come out. But you can’t have 150 extra interfaces that only ever have a single implementation solely for testing. You can’t abstract away every call to any class outside the one you’re working on.

The problem with “testing what’s designed” used to be that you couldn’t do it in a test-driven fashion - the two notions were sort of counter to each other. With a framework like TypeMock, that’s no longer true: I can keep that API clean and still move to a test-driven methodology.

Here’s another example that is maybe a better one than last time. Let’s say you have a class that reads settings from the application configuration file (app.config or web.config, as the case may be). You use System.Configuration.ConfigurationSettings.AppSettings, right? Let’s even abstract that away: You have a class that “uses settings” and you have an interface that “provides settings.” The only implementation of that interface is a pass-through to ConfigurationSettings.AppSettings. Either way, at some point the rubber has to meet the road and you’re going to have to test some code that talks to ConfigurationSettings.AppSettings - either it’s the class that needs the settings or it’s the implementation of the interface that passes through the settings.

How do you test various settings coming back from ConfigurationSettings.AppSettings? Say the setting needs to be parsed into a System.Int32 and if there’s a problem or the setting isn’t there, a default value gets returned. You’ll want to test the various scenarios, right? Well, in that case, you can overwrite the app.config file for each test, which isn’t necessarily the best way to go because putting the original app.config file back is going to be error prone… or you can set up an intricate system where you spawn a new application that uses your class and has the specified test configuration file (waaaaay too much work)… or you can mock the call to ConfigurationSettings.AppSettings. In this case, I’m going to mock that bad boy.

Let’s say you disagree with that - either you don’t think you need to test the interface implementation or you should go down the intricate temporary application route: More power to you. Seriously. I don’t think “not testing” is an option, but I also have a deadline and writing a bajillion lines of code to test a one-shot interface implementation is a time consumer.

On the other hand, let’s say you agree - that mocking the call to ConfigurationSettings.AppSettings is legit. That sort of negates the need for the one-off interface, then, doesn’t it? From a “you ain’t gonna need it” standpoint, you’ve then got an interface and one implementation of the interface that really are unnecessary in light of the fact you could just call ConfigurationSettings.AppSettings directly.

But if it’s okay to mock the call to ConfigurationSettings.AppSettings, why isn’t it okay to mock a call to a factory (or settings provider, or whatever) that I create?

Hence my original example of mocking the call to the factory - if there’s no need for all the extra interfaces and loose coupling solely to abstract things away for testing (or you don’t have the option because you can’t clutter your API), then mocking the call to the factory is perfectly legitimate. I’m not testing the output of the factory, I’m testing how a particular method that uses the factory will react to different outputs. Sounds like the perfect place for mocking to me.
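In the same Python terms (hypothetical names again - `WidgetFactory` isn’t from the post), that move looks like this: patch the factory’s static call and check how the consuming method reacts to each output.

```python
from unittest import mock

# Hypothetical factory and consumer. The test targets how the consumer
# reacts to what the factory hands back - not the factory itself.
class WidgetFactory:
    @staticmethod
    def create():
        raise RuntimeError("talks to the real system; unavailable in tests")

def describe_widget():
    widget = WidgetFactory.create()
    return "no widget" if widget is None else f"widget: {widget}"

# Patch the static factory call and drive the consumer through both branches.
with mock.patch.object(WidgetFactory, "create", return_value=None):
    assert describe_widget() == "no widget"
with mock.patch.object(WidgetFactory, "create", return_value="sprocket"):
    assert describe_widget() == "widget: sprocket"
```

No interface to the factory ever needs to exist for this to work - which is the whole point of the argument.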

Miki also calls me out on testing abstract classes: Speaking of abstract classes, I don’t think they should be tested as a separate part in a unit test… abstract by definition means that something else will inherit from it and possibly add to the default behaviour, but that class on its own will never exist as an instance, so I’m not sure where mocking comes into the picture.

Let’s use the .NET framework as an example. Take the class System.Web.UI.WebControls.ListControl - it’s an abstract class. It doesn’t stand on its own - other classes inherit from it - but there is some specific default behavior that exists and many times isn’t overridden. From the above statement, I get the impression that Miki believes every class that derives from System.Web.UI.WebControls.ListControl has its own obligation to test the inherited behavior of System.Web.UI.WebControls.ListControl. I feel like this would create a lot of redundant test code and instead opt to have separate tests for my abstract classes to test the default behavior. That releases the derived classes from having to test that unless they specifically override or add to the default behavior, and it allows you to isolate the testing for the abstract class away from any given derived object.

But since you can’t create an abstract class directly, you end up having to create a dummy class that derives from the abstract class and then test against that dummy class… or you could mock it and test the behavior against the mock. Again, the easier of the two routes seems to me to be to mock it.
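A Python sketch of that route (a made-up `ListControl` analogue, not the real WebControls class): rather than hand-writing a dummy subclass, blank out the class’s abstract-method set so the base class can be instantiated just long enough to exercise its default behavior.

```python
import abc
from unittest import mock

# Hypothetical abstract class with concrete default behavior, along the
# lines of System.Web.UI.WebControls.ListControl.
class ListControl(abc.ABC):
    def __init__(self):
        self.items = []

    @abc.abstractmethod
    def render(self):
        ...

    def add_item(self, item):
        # Default behavior worth testing exactly once: ignore duplicates.
        if item not in self.items:
            self.items.append(item)

# Patch away the abstract-method set so the base class is instantiable
# for the duration of the test - no dummy subclass needed.
with mock.patch.object(ListControl, "__abstractmethods__", set()):
    control = ListControl()
    control.add_item("a")
    control.add_item("a")  # duplicate ignored by the default behavior
    assert control.items == ["a"]
```

The default behavior gets one isolated test, and derived classes only need tests for what they actually override or add.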

Maybe Miki is right - maybe I don’t get the idea of unit testing or the benefits that come from them. That said, using success as a metric, somehow I feel like I’m getting by.

General Ramblings

I usually go all-out for Halloween and create a pretty elaborate costume. This year I don’t think I’ll be able to because the wedding planning and execution will be taking up too much time.

I’ve always wanted to go as a Ghostbuster, though. I’ve never really found great plans for it, and I don’t own a machine shop to do all the custom work needed. This guy made a pretty dang good one, though, and mostly out of wood, which is something I can work with Dremel-style.

Turns out the key to the whole thing is decent plans (as you’d have guessed), which he got from this site. That and some patches will take you a long way.

Maybe I’ll get on this for next year.

media, movies, tv

I blogged a bit ago that I was going to try MediaPortal and Daemon Tools as a solution for storing all of my DVDs on a home theater PC and having it work like a big movie jukebox - but with hard drives instead of DVDs.

I have the day off, so I thought I’d give it a run.

I downloaded Daemon Tools and installed it. Fine. I downloaded MediaPortal and installed it. Fine. Spent quite some time configuring MediaPortal and trolling through the available options (there are a lot). I got all of that set up to a point where I figured it would work. I configured MediaPortal to know where Daemon Tools got installed and was ready to go.

I grabbed a movie at random from the cabinet in the other room (it happened to be Collateral Damage) and brought it into the computer room to get an ISO of the disc. ISO ripped, software installed, ready to rumble.

Fired up MediaPortal, told it where to get the ISO I just ripped. That’s when I ran into the first complaint I have about MediaPortal - pretty much any configuration option you change requires a full restart of the application. Add a new folder where movies are stored - restart. Add the extension “.iso” as a movie extension - restart. Lame.

Finally got all that configured and found the ISO. Clicked it to play, and Daemon Tools pops a warning about secure command lines. Click OK a bunch of times and set Daemon Tools to not be in secure mode. In the meantime, MediaPortal isn’t doing anything. Try again.

Clicked the ISO again and MediaPortal tells me it’s loading, but nothing seems to be happening… until the stupid InterActual media player thing fires up and tries to install. No, no, I don’t want that. I just want to see the movie. Cancel install.

No luck. After fussing around with it for some time, it looks like MediaPortal will generally defer to the default movie player assigned when it uses the ISO stuff.

I decided to try it with a regular DVD and the magic combo for that seems to be to disable any autoplay for DVD movies. Even then, the playback seems to be a little choppy. Of course, that might just be my hardware setup - it’s really not designed for this sort of thing.

Back to the ISO now that I have the DVD stuff mostly working. It’s still giving me the stupid InterActual player notification.

I messed around with it for a lot longer and there are two things that need to be done to get this thing to work.

First, set up the Daemon Tools virtual drive so it does not give the Auto Insert Notification. This is key so when the ISO is mounted it doesn’t do its autoplay garbage and try to install crap.

Uncheck the 'Auto Insert Notification' box in Daemon Tools device parameters.

Second, in the MediaPortal configuration, make sure the drive letter that MediaPortal is using for the Daemon Tools drive matches the one you set up to not autoplay. I’ve set Daemon Tools drive 0 to be drive ‘M:’ (for “movies”). MediaPortal needs to know that information, too.

Ensure that the Daemon Tools section in MediaPortal points to the proper virtual drive.

Once you do that, things seem to work. But it’s definitely an undocumented process in getting the planets to align.

Now I just have to figure out whether it’s going to be worth it. You can get hard drives cheap, but I’m going to have to get a new computer for this (connected to the TV) with all of the stuff fast enough to work in this configuration. I can probably put it together for not a lot, but when all is said and done I’m sure I’ll be looking at a couple of grand or more. I also need to figure out how the integration with DVD Profiler works (if it integrates at all) since I have all of my movies cataloged in there.

General Ramblings

Stu hangs out with a showgirl outside Bally's

I’m recovering today from a pretty crazy weekend in Vegas. Stu, Jason, and Adam went with me down to Vegas to do the bachelor party up right.

We headed down there on Friday and checked into the Aladdin (which is being refashioned into the “Planet Hollywood Resort and Casino”). We took most of Friday to trek around the central and northern portions of the strip, seeing the Bellagio, Caesar’s Palace (and the Forum Shops), and the Paris (still my favorite place down there). We even ran past a place where Richard Kiel (“Jaws” from the James Bond movies) was signing autographs. Very cool.

We also went to the Star Trek Experience and rode the two really cool motion simulator rides (Jenn’s not a big Trek fan so I didn’t see that last time I was there) and after almost more Trek than one person can handle, we ended the night at Scores (it’s a bachelor party, right?).

On the way back to the Aladdin, the cab driver lady, in severely broken English, asked us if “we liked the girls.” Yeah, we did. Then, in further broken English, informed us “there are places you can get fuck.” Whoa, whoa, whoa. Nice shootin’, Tex. Let’s just reel that in a couple of notches, there. Why don’t you just drop us off back at the Aladdin and we’ll call it good, shall we? Man, the cabbies there are an interesting bunch (we had several other interesting cabbies while we were there, providing us with a variety of entertainment and near-death experiences).

Saturday morning found Stu and me chowing down at the buffet at the Paris, which is my favorite buffet down there. The line was incredibly long but moved along at a reasonable pace.

After breakfast, we walked down to the Wynn so Stu could check out the Ferrari store in there. It was a hell of a walk to get there, but the store was really cool and Stu came out with a cap and a polo shirt for a not unreasonably gouging price.

We walked back down toward the Aladdin and stopped in at various places on the way, making another run through the Forum Shops and such, before taking a break for an hour or so to let our legs stop aching like death.

Jason and Adam, during this time, were playing in a poker tournament at Caesar’s Palace. I don’t remember exactly how well they did, but I do recall later that night hearing mumblings about certain shirts being unlucky and not doing as well as planned.

We recovered from the walk and then headed down to see Luxor, New York New York, Excalibur, and MGM Grand. After the poker tournament, Jason and Adam met us at the MGM Grand and we ate dinner at the Rainforest Cafe.

Once dinner was over, we hung out for a little while longer, and Stu and I parted ways with Jason and Adam and we headed over to see KA.

KA is, in a word, awesome. And if you’re in the center of the front row, it really feels like the only people in the entire auditorium are you and the performers. It makes the show super personal, and I don’t think I’d have traded that. Most of the action happened no more than ten feet away from me - less than that, most times - and it really made me feel much more involved than in some of the other Cirque shows I’ve seen.

Speaking of other Cirque shows, this one is very different than the others. The shows I’d previously seen were loose, abstract stories where some pretty cool acts were strung together to make sense. KA plays out more like a story and the stuff they do is more like a stage performance than a circus act. Not only that, but the set pieces involved in this show are spectacular. Huge sections of the stage fly around and rotate, most of the time with performers on them. It’s really not like anything you’ve ever seen, and I really can’t recommend it more highly.

So we got to KA and our tickets were taken by a fairly brutal looking Chinese guy (in character, of course - everyone’s in character) who harshly pointed us to the entry we should take to get to our seats. It was restroom break time, though, so I headed over to the restroom.

There was this bathroom attendant in there, another Chinese guy, but he looked like he was 70 if he was a day. There was this large trough in there with spigots coming out of the wall where you washed your hands when you were done. The attendant (who was very reminiscent of Mr. Miyagi) turned the water on for you by waving his hand over the spigot. Pretty cool. I got my hands wet, soaped up, and the water turned off. Here’s the thing - I couldn’t get that stupid water on to save my life. The little Chinese guy came over, waved his hand, and water came out. Water stopped again, and again I couldn’t figure this thing out. Three or four times the guy had to come over and turn the water on for me, and man, he was getting the hugest kick out of it. Just laughing and laughing and mumbling in Chinese. Too funny.

Another fairly harsh Chinese guy directed us to our seats up front and center and told us it was nice knowing us as fireballs shot into the air from the stage a few feet in front of our seats. A nice lady (the “wife of the village mayor”) told us that the Emperor would be very angry with us if we put our feet on the edge of the stage or if we leaned over the edge to look in, so we decided that we’d refrain so as not to incur the Emperor’s wrath.

The show, which was indescribably amazing and involving, started at 9:30p and ended at 11:30p. There was no intermission (which is different from other Cirque shows), which was sort of unfortunate because I really needed a Red Bull or something. We’d been walking all over during the day, and some of the music of the show was beautiful and calming, so as amazing as the show was, I felt so relaxed my eyelids started getting heavy. No, I did not fall asleep - but I could have used a little caffeine.

KA is now my favorite Cirque show, and the $150 ticket for front row center was well worth it. I can’t imagine seeing it any other way, and that feeling of personal involvement in the show made it all that much more memorable.

After the show, Stu and I walked back to the Aladdin and crashed, just in time to get a couple hours’ sleep before leaving for the airport at 6:15a.

We were home in Portland by 2:00p and I unpacked all of my stuff and lazed on the couch for a while before Jenn got home from a baby shower she was at. The story of the trip was relayed, gifts were given, and the evening proceeded as usual.

I’m glad I took today off to recover as my feet and legs are killing me. That always seems to happen when I go to Vegas, but it’s always worth it. It was a hell of a weekend and I totally can’t wait to go back.

Something I noticed that is probably good in some cases but kind of sucks, too - I didn’t get a lot of pictures while I was down there, and those I did get don’t have me in them. That’s actually why the picture up at the top is Stu with a showgirl and not me - there was just so much going on that I never remembered to take pictures and no one else really brought a camera with them. But, then again… maybe it’s better that way. Hehehe.

Only two weeks until the wedding and a trip to Aruba. I anticipate some great times there, too. 2006 has been a hell of a year and it’s not even over yet. Gonna be hard to beat.