January 2010 Blog Posts

The Difference Between a Workaround and a Fix

I just had an interesting [to me] interaction on Twitter that got me thinking:

Workaround... fix... tomato tomahto... same same.

Ignoring the original issue - that iTunes cover flow doesn't handle similarly named albums properly - the "workaround... fix... same same" thing got me.

To a person who doesn't develop software, I bet a workaround and a fix are the same thing. To people who develop software, they're very different, and the distinction is important.

What's the difference?

A workaround means a problem has been identified, there's no official solution for it, but if you do some sort of temporary change on your end you can get things to function within reason. It may not be 100% correct behavior, but it'll get you past the problem - in a way, you need to change your expectations to accept a workaround as a solution. In this case, the workaround would be for me to modify the metadata on all of my music to "fool" iTunes into behaving correctly. The important bit here is that the change is applied to how you use the product, not the product proper. The problem in the product still exists and the use of a workaround is expected to be temporary.

A fix means the problem has been officially solved so, once applied, the expected behavior will be the actual behavior. In this case, if the issue was fixed then I wouldn't have to change the metadata on any of my songs - the iTunes cover flow would work properly. The important bit here is that the change is applied to the product proper. The problem in the product no longer exists because it's actually been fixed.

This doesn't sound like it's a big deal, but from a language precision standpoint (particularly for a software developer), it's huge. If someone files a defect on one of my products and I provide a workaround, I'm still expected to fix it.

(Note that this is no reflection on Alex, who's a smart guy and friend of mine. It just got me thinking, is all.)

Separating Metadata Classes from Model Classes in DataAnnotations Using Custom TypeDescriptionProviders

I haven't done much work with ASP.NET Dynamic Data but in a recent project I started working with it and instantly ran into a dilemma. First, let me explain the project setup.

I have a database project that outlines the schema of the database, the stored procedures, all of that. I have a database client project that has some services and a LINQ to SQL data context that I can distribute to clients who want to go that way. I then have a Dynamic Data project for managing the data and a separate web application that will consume the data, both of which need the LINQ to SQL data context.

Switch on your suspension of disbelief for a second with respect to the design. I could do a way better design going more SOA, or using a repository pattern, or whatever, but it's a spike project and part of the goal is for me to learn something about Dynamic Data, LINQ to SQL, and so on.

Now, Dynamic Data uses the LINQ to SQL data context - from the client assembly - to do its work and generate its screens. Here's the problem:

In order to control the rendering of the Dynamic Data screens, I have to have a metadata "buddy class" to describe it. In order to have a "metadata buddy" class, I have to add an attribute to the generated LINQ to SQL model class that points to the metadata type.

See the problem? The Dynamic Data app is the only thing that cares about the metadata "buddy class," so that's where the class will live... but if I have to mark up the original LINQ to SQL class in a separate assembly to get that to happen, I'm hosed.

Here's what a standard scenario looks like:

[MetadataType(typeof(ResourceMetadata))]
public partial class Resource
{
  // Resource is a class in the LINQ to SQL
  // generated data context. A partial class
  // declaration allows us to put the metadata
  // attribute on it.
}

public class ResourceMetadata
{
  // The metadata class can define hints for
  // the Dynamic Data UI as to how to render
  // view/edit controls for the similarly named
  // property on the LINQ to SQL model class.
  // This declaration says 'render this as a
  // ResourceValue type.'
  [UIHint("ResourceValue")]
  public object Value;
}
As you can see, we have to mark up the LINQ to SQL class with that MetadataTypeAttribute. I don't want to do that... but how to keep the metadata separate from the model?

The key is in the Global.asax.cs of your Dynamic Data project, specifically the line where you register the data context with the application:

MetaModel model = new MetaModel();
model.RegisterContext(typeof(DataLibrary.ResourceDataContext), new ContextConfiguration()
{
  ScaffoldAllTables = true
});

See that "new ContextConfiguration" bit? One of the parameters you can pass is "MetadataProviderFactory." That parameter is a delegate that creates an instance of something deriving from "System.ComponentModel.TypeDescriptionProvider." The default behavior is similar to this:

MetaModel model = new MetaModel();
model.RegisterContext(typeof(DataLibrary.ResourceDataContext), new ContextConfiguration()
{
  ScaffoldAllTables = true,
  MetadataProviderFactory =
    (type) =>
    {
      return new AssociatedMetadataTypeTypeDescriptionProvider(type);
    }
});

The default MetadataProviderFactory is System.ComponentModel.DataAnnotations.AssociatedMetadataTypeTypeDescriptionProvider. That provider uses an internal type (of course it's internal) that gets the metadata type for a model class through reflection.

In order to get your metadata class from somewhere other than reflection, you need to make your own TypeDescriptionProvider.

Fortunately, that's not actually too hard.

First, let's decide what we want to do: We want to have a static mapping, similar to the MVC route table, that lets us manually map a LINQ to SQL type to any metadata type we want. If there's no manual mapping, we want to fall back to default behavior - get it through reflection.

Now that we know what we want the outcome to be, let's get cracking. Throw together a place where you can hold the metadata mappings:

using System;
using System.Collections.Generic;

namespace DynamicDataProject
{
  public static class DisconnectedMetadata
  {
    public static Dictionary<Type, Type> Map { get; private set; }

    static DisconnectedMetadata()
    {
      Map = new Dictionary<Type, Type>();
    }
  }
}

I suppose if you wanted to get really fancy with it you could have add/remove/clear methods that have a bunch of thread locking around them and such, but this is a simple way to go and most likely you're only going to be registering mappings at app startup so all of that would just be overkill.
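If you did want the fancier, locked version, it might look something like this. This is a sketch only: the SafeDisconnectedMetadata name and the Register/TryGetMetadataType methods are hypothetical additions for illustration, not part of the project above.

```csharp
using System;
using System.Collections.Generic;

namespace DynamicDataProject
{
    // Hypothetical thread-safe variant of the simple mapping class.
    // The plain Dictionary above is fine if you only register
    // mappings at application startup; this shows what the "overkill"
    // locked version would look like.
    public static class SafeDisconnectedMetadata
    {
        private static readonly object _sync = new object();
        private static readonly Dictionary<Type, Type> _map = new Dictionary<Type, Type>();

        public static void Register(Type modelType, Type metadataType)
        {
            // Serialize writers so concurrent registration can't corrupt the map.
            lock (_sync)
            {
                _map[modelType] = metadataType;
            }
        }

        public static bool TryGetMetadataType(Type modelType, out Type metadataType)
        {
            // Readers take the same lock so they never see a mid-write state.
            lock (_sync)
            {
                return _map.TryGetValue(modelType, out metadataType);
            }
        }
    }
}
```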

Next we have to create a System.ComponentModel.CustomTypeDescriptor. What a CustomTypeDescriptor does is get all of the information about the various metadata - attributes and properties - on your buddy class. The thing is, Microsoft already did all of that for us; they just inconveniently marked the type they use - System.ComponentModel.DataAnnotations.AssociatedMetadataTypeTypeDescriptor - as internal. With a little fancy, maybe slightly unsupported, reflection work we can pretty easily make use of the code that's already there. Instead of doing a giant full implementation of a new CustomTypeDescriptor, we can write a wrapper around the existing one.

using System;
using System.ComponentModel;
using System.ComponentModel.DataAnnotations;
using System.Reflection;

namespace DynamicDataProject
{
  public class DisconnectedMetadataTypeDescriptor : CustomTypeDescriptor
  {
    private static Type AssociatedMetadataTypeTypeDescriptor =
      typeof(AssociatedMetadataTypeTypeDescriptionProvider).Assembly
        .GetType("System.ComponentModel.DataAnnotations.AssociatedMetadataTypeTypeDescriptor", true);

    public Type Type { get; private set; }
    public Type AssociatedMetadataType { get; private set; }
    private object _associatedMetadataTypeTypeDescriptor;

    public DisconnectedMetadataTypeDescriptor(ICustomTypeDescriptor parent, Type type)
      : this(parent, type, GetAssociatedMetadataType(type))
    {
    }

    public DisconnectedMetadataTypeDescriptor(ICustomTypeDescriptor parent, Type type, Type associatedMetadataType)
      : base(parent)
    {
      this._associatedMetadataTypeTypeDescriptor = Activator.CreateInstance(AssociatedMetadataTypeTypeDescriptor, parent, type, associatedMetadataType);
      this.Type = type;
      this.AssociatedMetadataType = associatedMetadataType;
    }

    public override AttributeCollection GetAttributes()
    {
      return AssociatedMetadataTypeTypeDescriptor.InvokeMember(
        "GetAttributes",
        BindingFlags.Instance | BindingFlags.Public | BindingFlags.InvokeMethod,
        null,
        this._associatedMetadataTypeTypeDescriptor,
        new object[] { }) as AttributeCollection;
    }

    public override PropertyDescriptorCollection GetProperties()
    {
      return AssociatedMetadataTypeTypeDescriptor.InvokeMember(
        "GetProperties",
        BindingFlags.Instance | BindingFlags.Public | BindingFlags.InvokeMethod,
        null,
        this._associatedMetadataTypeTypeDescriptor,
        new object[] { }) as PropertyDescriptorCollection;
    }

    public static Type GetAssociatedMetadataType(Type type)
    {
      if (type == null)
      {
        throw new ArgumentNullException("type");
      }

      // Try the map first...
      if (DisconnectedMetadata.Map.ContainsKey(type))
      {
        return DisconnectedMetadata.Map[type];
      }

      // ...and fall back to the standard mechanism.
      MetadataTypeAttribute[] customAttributes = (MetadataTypeAttribute[])type.GetCustomAttributes(typeof(MetadataTypeAttribute), true);
      if (customAttributes != null && customAttributes.Length > 0)
      {
        return customAttributes[0].MetadataClassType;
      }
      return null;
    }
  }
}

There are a few interesting things going on here to be aware of:

  • On static initialization, we get a handle on the original AssociatedMetadataTypeTypeDescriptor - the internal type that does all the attribute reflection action. If we don't get a reference to that type for some reason, we'll throw an exception so we immediately know.
  • We have a GetAssociatedMetadataType method that you can pass any type to - ostensibly a LINQ to SQL model type - and you should come back with the correct metadata buddy class type. First we check the type mapping class that we created before and if it's not there, we fall back to the default behavior - getting the MetadataTypeAttribute off the LINQ to SQL class.
  • The two-parameter constructor, which is used by Dynamic Data, is where we call our GetAssociatedMetadataType method. That's our point of interception.
  • The three-parameter constructor, which lets a developer manually specify the associated metadata type, creates an instance of the original AssociatedMetadataTypeTypeDescriptor and passes the information into it first. We do that because that type has a bunch of validation it runs through to make sure everything is OK with the metadata type. Rather than re-implementing all of that validation, we'll use what's there. We'll hang onto that created object so we can use it later.
  • The GetAttributes and GetProperties overrides call the corresponding overrides in that AssociatedMetadataTypeTypeDescriptor object we created. We do that because there's a lot of crazy stuff that goes into recursing down the metadata class tree to generate all of the metadata information and we don't want to replicate all of that. Again, use what's there.

That's the big work, seriously. A wrapper around AssociatedMetadataTypeTypeDescriptor pretty much does it. Last thing we have to do is create a System.ComponentModel.TypeDescriptionProvider that will generate our descriptors. That's easy:

using System;
using System.ComponentModel;

namespace DynamicDataProject
{
  public class DisconnectedMetadataTypeDescriptionProvider : TypeDescriptionProvider
  {
    public DisconnectedMetadataTypeDescriptor Descriptor { get; private set; }

    public DisconnectedMetadataTypeDescriptionProvider(Type type)
      : base(TypeDescriptor.GetProvider(type))
    {
      this.Descriptor =
        new DisconnectedMetadataTypeDescriptor(
          base.GetTypeDescriptor(type, null),
          type);
    }

    public override ICustomTypeDescriptor GetTypeDescriptor(Type objectType, object instance)
    {
      return this.Descriptor;
    }
  }
}

As you can see, this basically just provides an override for the "GetTypeDescriptor" method and hands back our custom descriptor.

That's the entirety of the infrastructure:

  • The map.
  • A CustomTypeDescriptor that looks in the map and then falls back to reflection.
  • A TypeDescriptionProvider that uses our CustomTypeDescriptor.

To use this mechanism, in your Dynamic Data project you need to register the mappings and register the TypeDescriptionProvider. Remember the "model.RegisterContext" call in the Global.asax.cs file in the RegisterRoutes method? Add your mappings there and when you call RegisterContext, add a MetadataProviderFactory:

public static void RegisterRoutes(RouteCollection routes)
{
  // Register types with the map
  DisconnectedMetadata.Map.Add(typeof(DataLibrary.Resource), typeof(DynamicDataProject.ResourceMetadata));

  // When you register the LINQ to SQL data context,
  // also register a MetadataProviderFactory pointing
  // to the custom provider.
  MetaModel model = new MetaModel();
  model.RegisterContext(typeof(DataLibrary.ResourceDataContext), new ContextConfiguration()
  {
    ScaffoldAllTables = true,
    MetadataProviderFactory =
      (type) =>
      {
        return new DisconnectedMetadataTypeDescriptionProvider(type);
      }
  });

  // ...and the rest of the method as usual.
}

That's it. Now you don't have to mark up your LINQ to SQL objects with partial classes - you can put your metadata "buddy classes" anywhere you want.

There are some optimizations you could make for performance purposes that I didn't do here for clarity. For example, rather than call "InvokeMember" on every call to GetAttributes and GetProperties in the CustomTypeDescriptor, you could cache references during static construction to the MemberInfo corresponding to the two methods and invoke the cached references. This should get the idea across, though.
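As a rough sketch of that caching idea: the DescriptorMethodCache name and its MethodResolved property are hypothetical additions for illustration, and this assumes the internal descriptor type resolves the same way it does in the CustomTypeDescriptor above.

```csharp
using System;
using System.ComponentModel;
using System.ComponentModel.DataAnnotations;
using System.Reflection;

// Hypothetical sketch of the caching optimization: resolve the
// MethodInfo for the internal descriptor's GetAttributes once, at
// static construction, instead of re-resolving the member through
// Type.InvokeMember on every call.
public static class DescriptorMethodCache
{
    private static readonly Type _descriptorType =
        typeof(AssociatedMetadataTypeTypeDescriptionProvider).Assembly
            .GetType("System.ComponentModel.DataAnnotations.AssociatedMetadataTypeTypeDescriptor", true);

    private static readonly MethodInfo _cachedGetAttributes =
        _descriptorType.GetMethod("GetAttributes", BindingFlags.Instance | BindingFlags.Public);

    // Lets callers verify the reflection lookup succeeded.
    public static bool MethodResolved
    {
        get { return _cachedGetAttributes != null; }
    }

    public static AttributeCollection GetAttributes(object descriptorInstance)
    {
        // Invoking the cached MethodInfo skips the per-call member
        // lookup that Type.InvokeMember performs.
        return (AttributeCollection)_cachedGetAttributes.Invoke(descriptorInstance, new object[0]);
    }
}
```

The same treatment would apply to GetProperties; a second cached MethodInfo resolved in the static constructor covers it.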

And, of course, the usual disclaimers apply: YMMV, I'm not responsible if this code burns your house down or crashes your app or whatever, etc., etc. Works on My Machine!

Reduce Build Overhead with Better Code Organization

In a really large system the build can take a long time. A really long time. Long enough to make continuous integration sort of meaningless. You may not be able to do a whole lot about it, but something to look at is your project's code organization. The compiler and linker have some startup overhead you may be able to get rid of by reducing the number of solutions/projects you have.

For example, I threw together a set of three test codebases. Each has 100 (empty) classes, but they're organized in different ways. I then built them a few times and compared the average times.

Project Format                                                        Time to Rebuild (Clean/Build)   Working Copy Size Post-Build
100 separate solutions, 1 project per solution, 1 class per project   42s                             4.29MB
1 solution with 100 projects, 1 class per project                     42s                             3.52MB
1 solution with 1 project, 100 classes in the project                 1s                              256KB

I noticed two interesting things here:

  1. From a time perspective, you don't get much if you have 100 solutions or 100 projects - the real gain (and it's significant) is if you put everything into the same project/assembly.
  2. The working copy size post-build (the amount of disk space taken by the source and build output) is orders of magnitude smaller if you put everything into the same project/assembly.

This isn't to say everyone should start shipping mammoth assemblies. Just be careful how you organize things. Choose your assembly boundaries carefully. You may gain yourself some time in the build - and some space on your disk.

11 of My Favorite Authors

A while ago a friend of mine asked me for the names of some of the authors I read. This got me thinking and I figured I'd make a list of a few of my favorites. So, in no particular order...


William Gibson: I've read Neuromancer so many times the book has nearly fallen apart. I like how he describes things enough to get a vivid image in your head but not with such numbing detail you get bogged down. Plus, how can you deny the guy who coined the term "cyberspace?" I've read all of his books and I have yet to find one I didn't like.

Richard K. Morgan: Morgan has created a future world where your personality lives in a "cortical stack" at the base of your skull and a character named Takeshi Kovacs is an ex special-forces soldier turned investigator for hire. Action packed and a really fun read, Altered Carbon is the first book in that series.

Tom Clancy: Clancy's sort of hit-or-miss. I've read some of his books that just take freaking forever to get where they're going, but others are really exciting. Most of the books revolve around a modern-day wartime environment. I particularly liked Rainbow Six.

Jeff Noon: Noon writes in a very distinctive style that seems to resonate for some folks but not as much for others. If it works for you, it really works, and the books are amazing. His primary series revolves around a world where we exchange objects between our world and a parallel world and what comes back is a powerful hallucinogenic drug... that you take by sticking a feather in your mouth. If you think it sounds weird, you're right, it is... but it's a very compelling universe, too. The first book in the series, Vurt, brings with it the added challenge that it's written in an invented dialect so it may take a bit to get into, but give it a shot. When I was done with it, I immediately flipped back to the first page and read it again.

Steve Aylett: I'll admit I've only read one of Aylett's books - Slaughtermatic - but it was so good I have to include him here. In this particular book, the world is a place where crime is a form of recreation and time travel is possible. It's admittedly a little convoluted, but a really fantastic read.

Philip Pullman: Specifically, Pullman's "His Dark Materials" trilogy which starts with The Golden Compass (also made into a movie). In this world, your soul is physically embodied as a "daemon" - an animal that travels with you. When the movie came out a lot of stink got raised about the social commentary on organized religion that these books present, but I really feel like that was a lot of crap. Is there some commentary? Sure. Is it as important or prevalent as the folks out there would like you to think? I don't think so. This is another set of books I've read several times and enjoy every time.

Frank Beddor: I am an Alice in Wonderland freak. I love the story, the characters, and I love imaginative derivatives of it. Beddor has created my favorite reimagining with his "Looking Glass Wars" trilogy - the idea being that Wonderland is a real place and Alyss Heart is a real person who ends up crossing into our world and getting trapped. Not difficult reads but some of my favorites. There's even a soundtrack that goes along with them.

Neal Stephenson: Again, sort of hit-or-miss for me, but I can't recommend more strongly that everyone read Snow Crash. Where else would you find a place where pizza delivery is controlled by the mob and it arrives at your house via a guy you refer to as "The Deliverator?" Cyberpunk action at its finest.

Neil Gaiman: Is there a "favorite authors" list Gaiman isn't on? Every story is different and imaginative in its own right, but my absolute, all-time favorite, and one that I have yet to find anyone disappointed with, is his collaboration with Terry Pratchett: Good Omens. Of course, if you were offended by the "commentary" in Philip Pullman's "His Dark Materials" series, Good Omens is probably not for you... but otherwise it's a must-read.

Douglas Adams: The Hitchhiker's Guide books are favorites of mine (and of a ton of other people), and I've loved them since I was a kid. One of the best birthday gifts I've gotten was a leather-bound copy of the first four books in the series. I've read them, listened to the audio books (on tape!), seen the movies/TV shows... I can't get enough. Truly funny stuff.

Max Barry: You also might see him listed as "Maxx Barry" but he seems to have changed that in recent times. I discovered Barry through his book Jennifer Government: Welcome to a place where enterprise has taken over enough that your last name is the name of the company where you work and a shoe manufacturer tries to earn "street cred" for his shoes by killing people who buy them. His subsequent efforts are no less interesting or imaginative. Barry's another one where I've read all of his books and love them all.


If you're looking for something new to read, maybe check some of these out. If you do, drop me a line and let me know what you think.

Performance Profiler Showdown for .NET

I recently had to do some performance profiler evaluation for .NET applications and I figured I'd share my results. Note that it's as scientific as I could make a subjective review (e.g., "friendly UI" might mean something different to you than to me), but maybe it'll help you out. Also, I'm not a "profiler expert" and, while I've used profilers before and understand generally what I'm looking at, this isn't my primary job function.

The five profilers I tried out: VSTS 2008 (the Visual Studio Team System profiler), ANTS Performance Profiler 5.2, VTune 9.1, dotTrace 3.1, and AQtime 6.

I put an explanation of what each feature "means" in tooltip form, so put your cursor over it if you don't understand what I'm talking about. An "X" in the box means it has the feature.

Testing was done on a dual-2.8GHz processor machine with 4GB RAM running Windows Server 2008 R2 64-bit.

                            VSTS 2008   ANTS Perf 5.2   VTune 9.1   dotTrace 3.1   AQtime 6
User Interface
Visual Studio integration       X                                        X             X
Standalone application                        X             X            X             X
Friendly/easy to use            X             X                          X
Robust reporting                              X             ?                          X
Measurement Style
Sampling                        X             X             X            X             X
Instrumentation                 X             X             X            X             X
Measurements Recorded
CPU time                                      X             X            X             X
Wall-clock time                 X             X             X            X             X
Additional perf counters                      X                                        X
Notes from testing:

  • VSTS 2008: This requires Visual Studio, which means you have to have VS installed on the machine running the app you're profiling. That said, this was the easiest to get results from and the easiest to interpret.
  • ANTS Perf 5.2: In general this appeared to be the best balance between "robust" and "usable," but I couldn't actually see the report that came out because it locked up the UI thread on the machine and ate 3GB of memory. I've asked about this in the forums. Turns out this is fixed in the next version, 6, currently in EAP.
  • VTune 9.1: I couldn't actually get a profile to run using VTune since it complained of being "unable to determine the processor architecture." As such, I don't know how well the reporting works.
  • dotTrace 3.1: When I ran dotTrace 3.1 on a multi-proc system, I got several timings that came out with negative numbers (-1,000,289 msec?). You can fix this by setting the proc affinity for the thing you're profiling. I tried a nightly build of dotTrace 4.0 and that's fixed. dotTrace 4.0 will also let you profile a remote application - something the others don't support.
  • AQtime 6: AQtime has a lot of power behind it but lacks the usability of some of the other profilers. It appears that if you take the time to really tweak around on your profile project settings, you can get very specific data from an analysis run, but doing that tweaking isn't a small feat. I spent a good hour figuring out how to profile an ASP.NET application in the VS dev server and setting it up. Also, while it very well may be due to my lack of experience with the tool, AQtime had the most noticeable impact on the application's runtime performance. It took several minutes for the first page of my app to load in the browser.

For now, it looks like the VSTS profiler is my best bet. If I could figure out the UI problem with ANTS, or if dotTrace 4.0 was out, I'd say those options tie for my second choice. The VTune profiler seems to be the most... technical... but it also desperately needs a UI refresh and seems geared toward profiling unmanaged code, where managed code is a "nice to have" rather than a first-class feature.

UPDATE 1/21/2010: I added AQtime to the list of profilers I tried out. Also, I removed the "VS integration" checkmark from ANTS because, while it adds a menu entry to VS, all it does is start up the external application. I'm not counting that. Finally, I found out my ANTS problem is fixed in the next version, 6, currently in EAP. Since it's not released, I still have to go with the VSTS profiler, but once it's out, I'd vote ANTS.

Ripping, Storing, and Playing Blu-Rays

Great post by Media Center MVP Pete Stagman on how he handles ripping and playing Blu-ray discs with Media Center and Windows Home Server. Doesn't sound too different from how I have my Media Center stuff set up except he's using MyMovies to list the titles and I'm using the built-in Media Center movie library.

Grand Theft Auto 4 - Episodes from Liberty City

Over my holiday break I spent some time playing Grand Theft Auto 4: Episodes from Liberty City. I'm not quite finished with it, but I've gotten far enough that I have an opinion.

Let me give you a frame of reference so you understand where I'm coming from.

I'm a huge Grand Theft Auto fan. Whenever a new Grand Theft Auto comes out, I take a whole week off work and dedicate the entire week just to playing the game. I get the expensive edition with all the bells and whistles. I finished GTA4 with 100% completion. I am, as far as I can tell, the target market for Grand Theft Auto.

Given that, when I heard there were expansion packs coming for GTA4, I was really excited. That, the holiday season, and one of those frozen buckets of daiquiri and I've got the most awesome cheap vacation ever. They released it as separate downloads or as a self-contained disc (same price), so I got the disc (love that Xbox DRM), the strategy guide, and when vacation time came around, I was in.

There are two expansions: "The Lost and Damned" and "The Ballad of Gay Tony." I have very different feelings on each expansion.

In "The Lost and Damned," you play the leader of a biker gang. You go on several missions to fight rival gangs, to get money for your own gang, and to help out friends. The missions themselves are fairly standard GTA fare, but they're fun. I was partial to the gun battles (there were several) since many were fairly large scale with lots of enemies. A couple of missions were difficult, but I never really struggled with them - I might have to run them two or three times, but I could get them. The feel of the whole thing was good and got you involved with the story. Very nice.

I did not play "The Lost and Damned" through to 100% completion. There were some race missions that were optional to the completion of the story and I've always absolutely despised race missions in any of the Grand Theft Auto games. Driving around the town at my own pace is fun, but the driving really isn't the goal in my opinion; it's a means to get you somewhere, so not having pressure when driving is key. The driving controls are pretty sloppy on most vehicles and shooting while driving takes some serious getting used to. Anyway, I didn't do the races because I don't like the race missions, but I suppose I could have if I really cared.

So, overall, I liked "The Lost and Damned."

After "The Lost and Damned," I moved on to "The Ballad of Gay Tony." In that, you play a guy who runs some night clubs with a business partner, Gay Tony, who is always getting into trouble and you get to bail him out. I'm not done with it yet, but this one is where I start having some fairly strong opinions.

First off, I like the characters in the expansion a little better than the ones in "The Lost and Damned." They're a lot more colorful and more fun, so big plus for the story. I particularly like the additional tie-ins with the characters from the original GTA4 game. There were a few tie-ins in "The Lost and Damned," but the connections with "The Ballad of Gay Tony" are, like the characters, more fun and colorful.

I also like the addition of the "base jumping/parachuting" activities. It reminds me of Grand Theft Auto: San Andreas, which was a really fun game.

That said, I'm actually pretty irritated with "The Ballad of Gay Tony" and here's why:

They grade you on how well you did on each mission.

Whenever you complete a mission, you get this statistics screen that pops up and tells you percentage-wise how well you did. There are some [seemingly] arbitrary criteria that you're supposed to fulfill to get 100% completion. Of course, you're not told what the criteria are until after you've finished the mission, so there's problem number one.

The real problem is that many missions are unreasonably difficult.

As it stands, on most missions I'm having to run them three, four, five times to even complete them. While I've played all the Grand Theft Auto games and invest quite a lot of time in them, I'm not 14 years old anymore. I don't have the time or inclination to run, re-run, and re-re-run missions to get 100% on them. I'm not interested in (and possibly not even capable of) developing the Nintendo-timing snap-reflexes required to fulfill the criteria you won't tell me about until after I've run the mission six times and barely completed it.

Not only that, but "grading me" on how well I did is a huge distraction. I'm not in the "sandbox environment" anymore, where I can do whatever I want or solve the mission however I see fit. Now I'm more on rails, having to do things within a predefined time limit using a specific set of resources in a specific way. That totally defeats the purpose of the thing, in my opinion, and makes it feel less like I'm my own character in my own environment and more like... well, more like I'm just playing a standard platformer. If I wanted that, I'd get the standard platform game and move on.

I won't even get into the fact that you can't save wherever you'd like to in GTA games - you have to save between missions and you have to go all the way back to your safehouse to do it. If you're in the middle of a really difficult mission that takes 20 minutes to finish and you get killed at the 19th minute... well, too bad. Get killed on the way back to your safehouse? Tough cookies. (They did implement an "autosave" feature in GTA4 so you at least don't lose your progress if you complete a mission, but still.) Yeah, that's how it's always been, but it doesn't make it right.

I won't even get into some of the tedious missions they added like "Club Management," where you're supposed to help manage a night club and basically walk around from room to room and get forced to "scan the floor for trouble" by looking around with a thumbstick. Sometimes you even get to go run errands for people like escorting a VIP from one club to another. No, thanks.

"The Ballad of Gay Tony" has turned GTA from a game into a chore. I don't want to compete with other peoples' standings on how well they did on the missions. I don't want to be pulled out of the sandbox environment. I don't want to "replay missions" to see if I can improve my score. I liked the "pass/fail" that was going on before... but now that there's grading and achievements attached to your "percentage complete" on the missions, I'll never get 100% on this thing, and really, I'm not interested. And I'm not interested in tedious crap that just eats up time but isn't actually fun.

Should that matter to Rockstar? Yeah, it should. I'm not interested in getting 100% because it's too hard, but I am a completionist, too, so not being able to get 100% irritates me. It makes me feel like I've been ripped off because I bought a game I want to finish but really won't be able to. It's frustrating and irritating.

Yeah, you can argue that it's all in my head and I'm not being forced to get 100%, and that folks who want that additional challenge now get it and I should just ignore it. I just have to question how many folks actually wanted that challenge and whether that group of folks is really the target market. Maybe they are and I'm alone. I sure hope not. I like my vacations in Liberty City and I hope I don't have to find somewhere else to go.

Net result: Go for "The Lost and Damned" but skip "The Ballad of Gay Tony."

I'd love to be able to give this feedback to Rockstar somehow but I'm not sure how. If you know, leave it in the comments for me.

Windows Home Server Storage Upgraded

I found I was running out of space with all of my DVDs and such, even after adding an eSATA port multiplier and a few 1TB drives. I only have one drive slot left, and while at first I thought I'd fill it, I realized that doesn't leave me much wiggle room in the event of a real emergency where I need to do some fancy drive swapping. As such, I decided to replace one of the 500GB drives with a 2TB drive. The 500GB drive I took out will stand ready as a replacement for the system drive should catastrophe strike.

I started the upgrade with about 750GB free because I wanted to be sure there was enough free space to remove the 500GB drive without losing any data.

Post upgrade, I have a total of 7.73TB in storage with 2.08TB free.

[Screenshot: My WHS storage screen.]

Given that I've figured DVD images run about 6.7GB each, that gives me room for another 300 DVDs before I run out of space. Of course, when I hit a bit over 1TB free, I'll have to consider what my upgrade options are in case I need to remove a 1TB drive to replace it with something larger.
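For the curious, the back-of-the-envelope math checks out - here it is as a quick shell one-liner (the 300 figure above is just my 310 rounded down for a little slack):

```shell
# Free space divided by average DVD image size gives a rough capacity count.
# 2.08 TB free ~= 2080 GB; DVD images run about 6.7 GB each.
awk 'BEGIN { printf "%d\n", 2080 / 6.7 }'   # → 310, call it ~300 with slack
```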

UPDATE 1/9/2010: Turns out I got a bad drive. The first night it was in I got a bunch of errors from WNAS.SYS telling me something about "VRM temperature too high." Doesn't make a ton of sense, but that's what happened. Anyway, that first night it totally disappeared from the storage pool, as if by magic. The second night I decided to re-seat it (thinking "bad connection") and run chkdsk on all drives. Got the WNAS errors again and a bunch of disk errors, so... back it goes. Most of my drives are Western Digital and the drive I tried out was a Samsung. Being a little technology-superstitious, I'll probably get a WD drive as a replacement.

UPDATE 1/15/2010: I put a Western Digital Caviar Green 2TB drive in and I'm back up to 7.73TB. So far I'm not seeing any of the weird WNAS.SYS errors I was seeing before, which leads me to believe I had a bad drive. Every other drive in the system, save the system drive, is a WD Caviar Green, and I've had good luck with them, so I'm hoping my luck will hold.

UPDATE 1/16/2010: I'm seeing the WNAS.SYS temperature warning errors again, but it appears that so many in quick succession is generally understood to be a bug in the driver rather than a real health issue. The system didn't restart itself or anything, so I guess I'll just watch it. One thing I found while looking for a solution is an article over on the HP site explaining that Samsung SpinPoint drives will suddenly "disappear" from the system due to a compatibility issue. As it turns out, that's exactly the type of drive that failed on me - a Samsung SpinPoint 2TB. Looks like the WNAS.SYS errors and the drive failure were unrelated. I'm still watching how this WD drive behaves. I can ignore false errors in the logs (though it's fishy that they show up when I add a 2TB drive - maybe I'm crossing some size boundary that triggers the bug?), but if a drive "disappears" on me, that's trouble. I'll probably wait a week or so before putting any additional data on the server that might make it so I can't remove the drive.

UPDATE 6/16/2010: Be careful of using the WD Green drives. Only some model numbers appear to be good.

How Do Non-Geeks Survive?

In dealing with today's technology, I feel like I'm inundated with what I usually refer to as "fiddly shit": constant, tiny maintenance tasks to make sure things are still working together correctly. No one task is a big deal; most take under five minutes to fix. Some are larger or more chronic issues that require research and troubleshooting over the course of weeks. Let me throw out some examples of recent issues:

  • D-Link DAP-1522 wireless access point/bridge. Wireless networking at home. I got Verizon FiOS and the router they provide only does wireless-G networking. I wanted a faster network to accommodate my gaming and my media, so I added a wireless-N access point. This added a ton of fiddly shit to my list.
    • Access point setup and maintenance. I bought a DAP-1522 for its supposedly easy setup. Setting the thing up was not nearly as straightforward as the instructions would have you think. Even now that it's set up, I sometimes find it won't connect things at wireless-N speeds, dropping them back to wireless-G. Rebooting the access point (pulling the plug) sometimes fixes this, but it also requires me to reboot anything that was previously connected to the network because, for some reason, things won't just reconnect.
    • Conversion to WPA security. There's also an undocumented "feature" on the DAP-1522: if you use WEP security, the access point won't let you have wireless-N connectivity - everything only ever connects at wireless-G. That's not documented anywhere, mind you, so some time was spent on the phone with support. I was able to connect at N speeds after switching to WPA security... but I have devices (like a Nintendo DS) that only support WEP, so now I either have to run two different wireless networks (WPA with wireless-N via the access point and WEP with wireless-G via the FiOS router) or just not connect the old devices. Right now: no old devices.
    • USB wireless adapters and Windows 7. I upgraded everything to Windows 7 at home and while I love the OS, the drivers don't seem to be quite up to snuff for any of the USB wireless-N adapters I have. They work... mostly. I found that in some cases you have to install not only the driver but also the stupid "configuration utility" that the manufacturer provides and then things work, even if you don't use that utility or even ever open it. Also, if the computer goes to sleep and wakes up, it disconnects and reconnects to the network over the course of about the first minute after you log in. It's stable after that, but come on. Oh, and the wireless-N built-in adapter on my Dell Studio Hybrid just will not connect at N speeds, always preferring G. Still don't know what's up with that.
  • HP MediaSmart Server running Windows Home Server. I love my Windows Home Server, don't get me wrong, but there are some fiddly things that crop up.
    • Random disk errors. Every two or three months I'll get an error that says my system disk is having problems. I run the repair, everything checks out, and all is well with the world again for the next two or three months. Is it the weird disk replication thing they have going on? Is it PerfectDisk for Windows Home Server doing a disk defragmentation number on me? Disk actually going bad? Who knows.
    • More random disk errors. Since upgrading to Power Pack 3, I had a thing where every week or so the server would just stop responding to any requests at all. You ended up having to reboot the server hard and it would all come back. The lockup seemed to correspond to a scheduled task I had running on a client machine that would do a full directory listing of a really large set of files and archive the list. (My DVD library isn't duplicated, so if a drive dies and I lose files, at least I'll know what to re-rip.) Error log looked like it just stopped communicating with the eSATA port multiplier. I found some upgraded drivers and we'll see how that goes.
  • Media sharing. I've got my media center solution that I'm working on and one of the biggest challenges is figuring out what format the media should be in. DLNA spec supports some formats, Windows Home Server supports some formats, Windows Media Center supports some formats... but which is the right one for me? I'm lucky to have found something like Asset UPnP that will transcode audio formats into something more universal, but that's just audio. What about video?
  • Video editing. I got a Creative Vado HD for Christmas. I like the recording capability, but the little editor that comes with it is severely lacking. If you don't want to use that editor, at least on Windows, you're looking at something like Sony Vegas. And if you want to edit the videos the Vado records there, you first have to figure out that there's another codec you need to install.
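As an aside, the listing-and-archive job I mentioned under the server lockups is simple enough to sketch. This is a POSIX sketch with throwaway demo paths (the real thing ran as a Windows scheduled task against the server share):

```shell
# Sketch of a "catalog the library" job: list every file under the media
# folder and keep a dated copy, so a dead drive still leaves a re-rip list.
# The /tmp paths below are demo placeholders, not real share paths.
MEDIA_DIR="/tmp/media-demo"
rm -rf "$MEDIA_DIR" && mkdir -p "$MEDIA_DIR/DVDs"
touch "$MEDIA_DIR/DVDs/movie1.iso" "$MEDIA_DIR/DVDs/movie2.iso"
LIST="/tmp/dvd-list-$(date +%Y%m%d).txt"
find "$MEDIA_DIR" -type f | sort > "$LIST"
wc -l < "$LIST"   # → 2
```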

My point on all this is that I'm a geek and I have the knowledge and skills to at least know where to start to figure this sort of thing out. What do the non-geeks do? Do they just give up and use everything out of the box and accept the crappy limitations of the products and complain they don't work? Do they get a geek friend/family member to just continually come fix it?

I can see the appeal of things like the homogeneous environment. If you just give in and accept the box in which a specific brand (Apple, Sony, whatever) places you, everything pretty much works together. If they don't have a product that does what you want, well, it's just "not possible" right now and you wait.

As I get older, I won't lie - this sort of thing appeals to me. I'm tired of tweaking and fixing and fighting the fiddly shit that is inherent with getting all this to work together. I don't mind investing time in getting things set up and configured appropriately as long as I don't have to keep adjusting and reconfiguring and troubleshooting. I just want it to work. It should. Why doesn't it?

Sync Any Folder with Dropbox via Symbolic Links

I have some files (my local Subversion repository, some documents, etc.) that I need to sync between computers, and Dropbox was recommended to me as the way to get that done. I signed up, installed it, and it works brilliantly.

That said, my primary complaint is that it only synchronizes files inside a special "My Dropbox" folder that it creates. Anything you want to synchronize has to live in there. Thing is, while I don't mind changing the location of some things, like my documents, I really would rather not change the location of other things, like my local Subversion repository. I like it in "C:\LocalSVN" rather than "C:\Documents and Settings\tillig\My Documents\My Dropbox\LocalSVN" or whatever.

Turns out you can use the magic of symbolic links to fix that right up. If you create a symbolic link (or, on Windows XP, a junction point) inside "My Dropbox" that points to content actually living outside "My Dropbox", the content gets synchronized just fine but can live wherever you want.

If you're on Windows XP, you'll need to go get a free copy of Junction and put it somewhere in your path, like your C:\Windows\System32 folder. On Windows Vista or Windows 7, you'll use the built-in "mklink" command.

  1. Get Dropbox set up and synchronizing on your computer without the external content.
  2. Open a command prompt as an administrator on your machine.
  3. Change to the "My Dropbox" folder that you set up.
    In Vista or Windows 7 it'll be like:
    cd "\Users\yourusername\Documents\My Dropbox"
    In Windows XP it'll be like:
    cd "\Documents and Settings\yourusername\My Documents\My Dropbox"
  4. Create a directory link to the folder containing the external content.
    In Vista or Windows 7 it'll be like:
    mklink /d "External Content" "C:\Path\To\External Content"
    In Windows XP it'll be like:
    junction "External Content" "C:\Path\To\External Content"

That's it. Dropbox will see the symbolic directory link as a new folder with content it needs to synchronize and it'll get done.
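If you want to see why this works, here's the same trick in POSIX terms - mklink /d and junction are the Windows analogues of ln -s. (The paths below are throwaway examples, not real Dropbox paths.)

```shell
# Content outside a "sync" folder becomes reachable through a symbolic
# link placed inside it - the sync tool just sees a normal directory.
rm -rf /tmp/external /tmp/dropbox
mkdir -p /tmp/external /tmp/dropbox
echo "hello" > /tmp/external/file.txt
ln -s /tmp/external "/tmp/dropbox/External Content"
cat "/tmp/dropbox/External Content/file.txt"   # → hello
```

Dropbox walks the linked folder like any other directory, which is why the external content gets picked up.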

Note that you can do things the other way around, too - move the content into the "My Dropbox" folder and then create a symbolic link from the original location to the moved content - but doing it this way means you don't have to move anything to begin with. Admittedly, I kinda wish I had figured this out before I moved everything, but now I know.