media, windows

Back in June 2009 I picked up a copy of PerfectDisk for Windows Home Server as a solution for defragmenting the system. At the time I hadn’t expanded things too far storage-wise, but since then I’ve increased my storage capacity to nearly 8TB.

Between June and December 2009, I noticed I would get reasonably frequent (roughly weekly) health warnings on my system drive. Running a “repair” on the drive would return things to normal. I prepared myself for it to fail, researching how to recover, replace the system disk, etc. In the meantime, I decided to stop running PerfectDisk on it since the system drive never really got any more fragmented than it already was. Why strain a failing drive, right?

Stopping PerfectDisk on my system drive stopped the health warnings from showing up. It’s been several months (maybe four) since I stopped running PD on that drive and I’ve not seen a single health warning. Failing drive… or PerfectDisk? Before you answer, let me finish the story.

Toward the latter half of the year, a couple of months into my PerfectDisk usage, I noticed the system would occasionally lock up such that you couldn’t access the Windows Home Server console, connect via Remote Desktop, or reach any file shares. You had to power down hard and reboot to get things responding again. Looking in the event logs, I saw what looked like hardware issues:

Source: disk Error: The device, \Device\Harddisk5, is not ready for access yet.

Source: mv61xx Error: The device, \Device\Scsi\mv61xx1, did not respond within the timeout period.

Sounds hardware-ish to me, and that worries me. It always seemed to happen when I was running a scheduled task that backed up some data to another computer on my network (so there was a lot of disk I/O) and the PerfectDisk full defrag was running at the same time. On a hunch, on December 27, 2009, I stopped PerfectDisk from running on my system by disabling all of the jobs.

Windows Home Server started running without a single disk or mv61xx error. As part of my recent storage upgrade issue (where I got an incompatible drive), I ended up running extended diagnostics (both “chkdsk /x /r” and Western Digital disk diagnostics) on all of the drives in the system with no errors detected. Again, no errors - all the way through to yesterday, over a month later.

Yesterday I re-enabled PerfectDisk and set it to run a full defrag. Around 30 minutes into the defrag, I decided to sync my iPod - and all of my music is on the Windows Home Server, so that meant a lot of simultaneous disk I/O.

Lockup.

Looking in the error log - same errors as before from “disk” and “mv61xx.”

Since I was able to run a bunch of diagnostics on the disks with no issues, I have a rough time thinking it’s a hardware problem. I might buy that there’s a driver issue and PerfectDisk brings it out by doing so much disk I/O so fast or something, but I don’t have any evidence to back it up. I did notice that when I see these errors, they seem to be related to the disks in my eSATA port multiplier, so maybe something is going on there. Again, I can cruise along for months with no issues, streaming videos, streaming music, sharing files, etc., until I run PerfectDisk, so I have a rough time thinking there’s no connection at all.

I’m currently working through this with PerfectDisk support, but so far they’re calling it a “hardware issue,” claiming they “use the Microsoft-provided defrag APIs.” I’m curious whether the defrag APIs don’t quite work the same on Windows Home Server and/or whether they play nicely with my eSATA setup.

I’ll update this post if I find out anything new. Until then, I’ve got PerfectDisk disabled and I’m thinking, worst-case-scenario, I’m out the $40 I paid for the license.

UPDATE 6/16/2010: It appears that the WD Green drives I was using were not performing well. Removing them from the system allowed PerfectDisk to function properly.

downloads, vs, coderush

It’s been almost a year, but I’ve finally got the new CR_Documentor out the door. There are several bug fixes and a couple of new features, including:

  • Ability to “pause” rendering - “pause” the preview window and navigate around without having it update. Helpful if you’re using the documentation preview as a reference while developing.
  • Assignable shortcut actions - set up shortcuts for many of the actions previously only available in the context menu like “convert selection to XML doc comment” or “collapse all XML documentation blocks.”

Still free - head over to check out the release notes and see all the changes or just grab the latest now.

process

I just had an interesting [to me] interaction on Twitter that got me thinking:

Workaround... fix... tomato tomahto... same same.

Ignoring the original issue - that iTunes cover flow doesn’t handle similarly named albums properly - the “workaround… fix… same same” thing got me.

To a person who doesn’t develop software, I bet a workaround and a fix are the same thing. To people who develop software, they’re very different, and the distinction is important.

What’s the difference?

A workaround means a problem has been identified, there’s no official solution for it, but if you make some sort of temporary change on your end you can get things to function within reason. It may not be 100% correct behavior, but it’ll get you past the problem - in a way, you need to change your expectations to accept a workaround as a solution. In this case, the workaround would be for me to modify the metadata on all of my music to “fool” iTunes into behaving correctly. The important bit here is that the change is applied to how you use the product, not to the product proper. The problem in the product still exists, and the use of a workaround is expected to be temporary.

A fix means the problem has been officially solved so, once applied, the expected behavior will be the actual behavior. In this case, if the issue was fixed then I wouldn’t have to change the metadata on any of my songs - the iTunes cover flow would work properly. The important bit here is that the change is applied to the product proper. The problem in the product no longer exists because it’s actually been fixed.

This doesn’t sound like it’s a big deal, but from a language precision standpoint (particularly for a software developer), it’s huge. If someone files a defect on one of my products and I provide a workaround, I’m still expected to fix it.

(Note that this is no reflection on Alex, who’s a smart guy and friend of mine. It just got me thinking, is all.)

dotnet, aspnet, gists, csharp

I haven’t done much work with ASP.NET Dynamic Data but in a recent project I started working with it and instantly ran into a dilemma. First, let me explain the project setup.

I have a database project that outlines the schema of the database, the stored procedures, all of that. I have a database client project that has some services and a LINQ to SQL data context that I can distribute to clients who want to go that way. I then have a Dynamic Data project for managing the data and a separate web application that will consume the data, both of which need the LINQ to SQL data context.

Switch on your suspension of disbelief for a second with respect to the design. I could do a way better design going more SOA, or using a repository pattern, or whatever, but it’s a spike project and part of the goal is for me to learn something about Dynamic Data, LINQ to SQL, and so on.

Now, Dynamic Data uses the LINQ to SQL data context - from the client assembly - to do its work and generate its screens. Here’s the problem:

In order to control the rendering of the Dynamic Data screens, I have to have a metadata “buddy class” to describe it. In order to have a “metadata buddy” class, I have to add an attribute to the generated LINQ to SQL model class that points to the metadata type.

See the problem? The Dynamic Data app is the only thing that cares about the metadata “buddy class,” so that’s where the class will live… but if I have to mark up the original LINQ to SQL class in a separate assembly to get that to happen, I’m hosed.

Here’s what a standard scenario looks like:

[MetadataType(typeof(ResourceMetadata))]
public partial class Resource
{
  // Resource is a class in the LINQ to SQL
  // generated data context. A partial class
  // declaration allows us to put the metadata
  // attribute on it.
}

public class ResourceMetadata
{
  // The metadata class can define hints for
  // the Dynamic Data UI as to how to render
  // view/edit controls for the similarly named
  // property on the LINQ to SQL model class.
  // This declaration says 'render this as a
  // ResourceValue type.'

  [UIHint("ResourceValue")]
  public object Value;
}

As you can see, we have to mark up the LINQ to SQL class with that MetadataTypeAttribute. I don’t want to do that… but how to keep the metadata separate from the model?

The key is in the Global.asax.cs of your Dynamic Data project. The line where you register the data context with the application:

MetaModel model = new MetaModel();
model.RegisterContext(typeof(DataLibrary.ResourceDataContext), new ContextConfiguration()
{
  ScaffoldAllTables = true
});

See that “new ContextConfiguration” bit? One of the parameters you can pass is “MetadataProviderFactory.” That parameter is a delegate that creates an instance of something deriving from “System.ComponentModel.TypeDescriptionProvider.” The default behavior is similar to this:

MetaModel model = new MetaModel();
model.RegisterContext(typeof(DataLibrary.ResourceDataContext), new ContextConfiguration()
{
  ScaffoldAllTables = true,
  MetadataProviderFactory =
    (type) => {
      return new AssociatedMetadataTypeTypeDescriptionProvider(type);
    }
});

The default MetadataProviderFactory is System.ComponentModel.DataAnnotations.AssociatedMetadataTypeTypeDescriptionProvider. That provider uses an internal type (of course it’s internal) that gets the metadata type for a model class through reflection.
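
If you’re curious, you can confirm that with a quick, throwaway reflection check - just an illustrative sketch, not anything you’d ship:

using System;
using System.ComponentModel.DataAnnotations;

// Peek at the internal descriptor type the default provider uses.
Type descriptorType = typeof(AssociatedMetadataTypeTypeDescriptionProvider)
  .Assembly
  .GetType("System.ComponentModel.DataAnnotations.AssociatedMetadataTypeTypeDescriptor");
Console.WriteLine(descriptorType);          // the type is there...
Console.WriteLine(descriptorType.IsPublic); // ...but IsPublic is False - it's internal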

In order to get your metadata class from somewhere other than reflection, you need to make your own TypeDescriptionProvider.

Fortunately, that’s not actually too hard.

First, let’s decide what we want to do: We want to have a static mapping, similar to the MVC route table, that lets us manually map a LINQ to SQL type to any metadata type we want. If there’s no manual mapping, we want to fall back to default behavior - get it through reflection.

Now that we know what we want the outcome to be, let’s get cracking. Throw together a place where you can hold the metadata mappings:

using System;
using System.Collections.Generic;

namespace DynamicDataProject
{
  public static class DisconnectedMetadata
  {
    public static Dictionary<Type, Type> Map { get; private set; }

    static DisconnectedMetadata()
    {
      Map = new Dictionary<Type, Type>();
    }
  }
}

I suppose if you wanted to get really fancy with it you could have add/remove/clear methods with a bunch of thread locking around them and such, but this is a simple way to go, and most likely you’re only going to be registering mappings at app startup, so all of that would just be overkill.
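
For what it’s worth, here’s a minimal sketch of what that fancier version might look like - a hypothetical variant for illustration, not what the rest of this post uses:

using System;
using System.Collections.Generic;

namespace DynamicDataProject
{
  // Hypothetical thread-safe variant: hide the dictionary behind
  // locked accessors instead of exposing it directly.
  public static class SynchronizedDisconnectedMetadata
  {
    private static readonly Dictionary<Type, Type> _map = new Dictionary<Type, Type>();
    private static readonly object _syncRoot = new object();

    public static void Add(Type modelType, Type metadataType)
    {
      lock (_syncRoot)
      {
        _map.Add(modelType, metadataType);
      }
    }

    public static bool TryGetMetadataType(Type modelType, out Type metadataType)
    {
      lock (_syncRoot)
      {
        return _map.TryGetValue(modelType, out metadataType);
      }
    }
  }
}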

Next we have to create a System.ComponentModel.CustomTypeDescriptor. What a CustomTypeDescriptor does is get all of the information about the various metadata - attributes and properties - on your buddy class. The thing is, Microsoft already did all of that for us; they just inconveniently marked the type they use - System.ComponentModel.DataAnnotations.AssociatedMetadataTypeTypeDescriptor - as internal. With a little fancy, maybe slightly unsupported, reflection work we can pretty easily make use of the code that’s already there. Instead of doing a giant full implementation of a new CustomTypeDescriptor, we can write a wrapper around the existing one.

using System;
using System.ComponentModel;
using System.ComponentModel.DataAnnotations;
using System.Reflection;

namespace DynamicDataProject
{
  public class DisconnectedMetadataTypeDescriptor : CustomTypeDescriptor
  {
    private static Type AssociatedMetadataTypeTypeDescriptor =
      typeof(AssociatedMetadataTypeTypeDescriptionProvider)
        .Assembly
        .GetType("System.ComponentModel.DataAnnotations.AssociatedMetadataTypeTypeDescriptor", true);

    public Type Type { get; private set; }
    public Type AssociatedMetadataType { get; private set; }
    private object _associatedMetadataTypeTypeDescriptor;

    public DisconnectedMetadataTypeDescriptor(ICustomTypeDescriptor parent, Type type)
      : this(parent, type, GetAssociatedMetadataType(type))
    {
    }

    public DisconnectedMetadataTypeDescriptor(ICustomTypeDescriptor parent, Type type, Type associatedMetadataType)
      : base(parent)
    {
      this._associatedMetadataTypeTypeDescriptor = Activator.CreateInstance(AssociatedMetadataTypeTypeDescriptor, parent, type, associatedMetadataType);
      this.Type = type;
      this.AssociatedMetadataType = associatedMetadataType;
    }

    public override AttributeCollection GetAttributes()
    {
      return AssociatedMetadataTypeTypeDescriptor.InvokeMember(
        "GetAttributes",
        BindingFlags.Instance | BindingFlags.Public | BindingFlags.InvokeMethod,
        null,
        this._associatedMetadataTypeTypeDescriptor,
        new object[] { }) as AttributeCollection;
    }

    public override PropertyDescriptorCollection GetProperties()
    {
      return AssociatedMetadataTypeTypeDescriptor.InvokeMember(
        "GetProperties",
        BindingFlags.Instance | BindingFlags.Public | BindingFlags.InvokeMethod,
        null,
        this._associatedMetadataTypeTypeDescriptor,
        new object[] { }) as PropertyDescriptorCollection;
    }

    public static Type GetAssociatedMetadataType(Type type)
    {
      if (type == null)
      {
        throw new ArgumentNullException("type");
      }

      // Try the map first...
      if (DisconnectedMetadata.Map.ContainsKey(type))
      {
        return DisconnectedMetadata.Map[type];
      }

      // ...and fall back to the standard mechanism.
      MetadataTypeAttribute[] customAttributes = (MetadataTypeAttribute[])type.GetCustomAttributes(typeof(MetadataTypeAttribute), true);
      if (customAttributes != null && customAttributes.Length > 0)
      {
        return customAttributes[0].MetadataClassType;
      }
      return null;
    }
  }
}
    

We’re doing a few interesting things here to be aware of:

  • On static initialization, we get a handle on the original AssociatedMetadataTypeTypeDescriptor - the internal type that does all the attribute reflection action. If we don’t get a reference to that type for some reason, we’ll throw an exception so we immediately know.
  • We have a GetAssociatedMetadataType method that you can pass any type to - ostensibly a LINQ to SQL model type - and you should come back with the correct metadata buddy class type. First we check the type mapping class that we created before, and if it’s not there, we fall back to the default behavior - getting the MetadataTypeAttribute off the LINQ to SQL class. (There’s a quick sketch of this resolution order just after this list.)
  • The two-parameter constructor, which is used by Dynamic Data, is where we call our GetAssociatedMetadataType method. That’s our point of interception.
  • The three-parameter constructor, which lets a developer manually specify the associated metadata type, creates an instance of the original AssociatedMetadataTypeTypeDescriptor and passes the information into it first. We do that because that type has a bunch of validation it runs through to make sure everything is OK with the metadata type. Rather than re-implementing all of that validation, we’ll use what’s there. We’ll hang onto that created object so we can use it later.
  • The GetAttributes and GetProperties overrides call the corresponding overrides in that AssociatedMetadataTypeTypeDescriptor object we created. We do that because there’s a lot of crazy stuff that goes into recursing down the metadata class tree to generate all of the metadata information and we don’t want to replicate all of that. Again, use what’s there.
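
To make that resolution order concrete, here’s a quick illustrative sketch using the Resource and ResourceMetadata types from earlier:

// With a mapping registered, the map wins:
DisconnectedMetadata.Map.Add(
  typeof(DataLibrary.Resource),
  typeof(DynamicDataProject.ResourceMetadata));
Type buddy = DisconnectedMetadataTypeDescriptor.GetAssociatedMetadataType(
  typeof(DataLibrary.Resource));
// buddy == typeof(DynamicDataProject.ResourceMetadata)

// For a type with no map entry, the method falls back to looking for
// [MetadataType] on the class itself; with neither, it returns null.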

That’s the big work, seriously. A wrapper around AssociatedMetadataTypeTypeDescriptor pretty much does it. Last thing we have to do is create a System.ComponentModel.TypeDescriptionProvider that will generate our descriptors. That’s easy:

using System;
using System.ComponentModel;

namespace DynamicDataProject
{
  public class DisconnectedMetadataTypeDescriptionProvider : TypeDescriptionProvider
  {
    public DisconnectedMetadataTypeDescriptor Descriptor { get; private set; }
    public DisconnectedMetadataTypeDescriptionProvider(Type type)
      : base(TypeDescriptor.GetProvider(type))
    {
      this.Descriptor =
        new DisconnectedMetadataTypeDescriptor(
          base.GetTypeDescriptor(type, null),
          type);
    }

    public override ICustomTypeDescriptor GetTypeDescriptor(Type objectType, object instance)
    {
      return this.Descriptor;
    }
  }
}

As you can see, this basically just provides an override for the “GetTypeDescriptor” method and hands back our custom descriptor.

That’s the entirety of the infrastructure:

  • The map.
  • A CustomTypeDescriptor that looks in the map and then falls back to reflection.
  • A TypeDescriptionProvider that uses our CustomTypeDescriptor.

To use this mechanism, in your Dynamic Data project you need to register the mappings and register the TypeDescriptionProvider. Remember the “model.RegisterContext” call in the Global.asax.cs file in the RegisterRoutes method? Add your mappings there and when you call RegisterContext, add a MetadataProviderFactory:

public static void RegisterRoutes(RouteCollection routes)
{
  // Register types with the map
  DisconnectedMetadata.Map.Add(typeof(DataLibrary.Resource), typeof(DynamicDataProject.ResourceMetadata));

  // When you register the LINQ to SQL data context,
  // also register a MetadataProviderFactory pointing
  // to the custom provider.
  MetaModel model = new MetaModel();
  model.RegisterContext(typeof(DataLibrary.ResourceDataContext), new ContextConfiguration()
  {
    ScaffoldAllTables = true,
    MetadataProviderFactory =
    (type) =>
    {
      return new DisconnectedMetadataTypeDescriptionProvider(type);
    }
  });

  // ...and the rest of the method as usual.
}

That’s it. Now you don’t have to mark up your LINQ to SQL objects with partial classes - you can put your metadata “buddy classes” anywhere you want.
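
That means the buddy class from the first example can now stand completely alone - no partial class declaration, no MetadataTypeAttribute on the model:

using System.ComponentModel.DataAnnotations;

namespace DynamicDataProject
{
  // Nothing in the DataLibrary assembly knows this class exists;
  // the map registration in RegisterRoutes ties it to Resource.
  public class ResourceMetadata
  {
    [UIHint("ResourceValue")]
    public object Value;
  }
}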

There are some optimizations you could make for performance purposes that I didn’t do here for clarity. For example, rather than call “InvokeMember” on every call to GetAttributes and GetProperties in the CustomTypeDescriptor, you could cache references during static construction to the MemberInfo corresponding to the two methods and invoke the cached references. This should get the idea across, though.
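
For example, a rough sketch of that caching inside the descriptor class (reusing the member names from above) might look like:

// Cached once instead of resolved on every call. Declare this after
// the AssociatedMetadataTypeTypeDescriptor field so the static
// initializers run in the right order.
private static readonly MethodInfo _getAttributesMethod =
  AssociatedMetadataTypeTypeDescriptor.GetMethod(
    "GetAttributes",
    BindingFlags.Instance | BindingFlags.Public);

public override AttributeCollection GetAttributes()
{
  return (AttributeCollection)_getAttributesMethod.Invoke(
    this._associatedMetadataTypeTypeDescriptor,
    new object[] { });
}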

And, of course, the usual disclaimers apply: YMMV, I’m not responsible if this code burns your house down or crashes your app or whatever, etc., etc. Works on My Machine!