dotnet, aspnet, gists, csharp

I haven’t done much work with ASP.NET Dynamic Data but in a recent project I started working with it and instantly ran into a dilemma. First, let me explain the project setup.

I have a database project that outlines the schema of the database, the stored procedures, all of that. I have a database client project that has some services and a LINQ to SQL data context that I can distribute to clients who want to go that way. I then have a Dynamic Data project for managing the data and a separate web application that will consume the data, both of which need the LINQ to SQL data context.

Switch on your suspension of disbelief for a second with respect to the design. I could do a way better design going more SOA, or using a repository pattern, or whatever, but it’s a spike project and part of the goal is for me to learn something about Dynamic Data, LINQ to SQL, and so on.

Now, Dynamic Data uses the LINQ to SQL data context - from the client assembly - to do its work and generate its screens. Here’s the problem:

In order to control the rendering of the Dynamic Data screens, I have to have a metadata “buddy class” to describe it. In order to have a “metadata buddy” class, I have to add an attribute to the generated LINQ to SQL model class that points to the metadata type.

See the problem? The Dynamic Data app is the only thing that cares about the metadata “buddy class,” so that’s where the class will live… but if I have to mark up the original LINQ to SQL class in a separate assembly to get that to happen, I’m hosed.

Here’s what a standard scenario looks like:

[MetadataType(typeof(ResourceMetadata))]
public partial class Resource
{
  // Resource is a class in the LINQ to SQL
  // generated data context. A partial class
  // declaration allows us to put the metadata
  // attribute on it.
}

public class ResourceMetadata
{
  // The metadata class can define hints for
  // the Dynamic Data UI as to how to render
  // view/edit controls for the similarly named
  // property on the LINQ to SQL model class.
  // This declaration says 'render this as a
  // ResourceValue type.'

  [UIHint("ResourceValue")]
  public object Value;
}

As you can see, we have to mark up the LINQ to SQL class with that MetadataTypeAttribute. I don’t want to do that… but how to keep the metadata separate from the model?

The key is in the Global.asax.cs of your Dynamic Data project - specifically, the line where you register the data context with the application:

MetaModel model = new MetaModel();
model.RegisterContext(typeof(DataLibrary.ResourceDataContext), new ContextConfiguration()
{
  ScaffoldAllTables = true
});

See that “new ContextConfiguration” bit? One of the parameters you can pass is “MetadataProviderFactory.” That parameter is a delegate that creates an instance of something deriving from “System.ComponentModel.TypeDescriptionProvider.” The default behavior is similar to this:

MetaModel model = new MetaModel();
model.RegisterContext(typeof(DataLibrary.ResourceDataContext), new ContextConfiguration()
{
  ScaffoldAllTables = true,
  MetadataProviderFactory =
    (type) => {
      return new AssociatedMetadataTypeTypeDescriptionProvider(type);
    }
});

The default MetadataProviderFactory is System.ComponentModel.DataAnnotations.AssociatedMetadataTypeTypeDescriptionProvider. That provider uses an internal type (of course it’s internal) that gets the metadata type for a model class through reflection.

In order to get your metadata class from somewhere other than reflection, you need to make your own TypeDescriptionProvider.

Fortunately, that’s not actually too hard.

First, let’s decide what we want to do: We want to have a static mapping, similar to the MVC route table, that lets us manually map a LINQ to SQL type to any metadata type we want. If there’s no manual mapping, we want to fall back to default behavior - get it through reflection.

Now that we know what we want the outcome to be, let's get cracking. Throw together a place where you can hold the metadata mappings:

using System;
using System.Collections.Generic;

namespace DynamicDataProject
{
  public static class DisconnectedMetadata
  {
    public static Dictionary<Type, Type> Map { get; private set; }

    static DisconnectedMetadata()
    {
      Map = new Dictionary<Type, Type>();
    }
  }
}

I suppose if you wanted to get really fancy with it you could add add/remove/clear methods with a bunch of thread locking around them, but this is a simple way to go, and since you're most likely only going to be registering mappings at app startup, all of that would just be overkill. (If you're curious anyway, there's a sketch of the locked version below.)
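For the record, the locked version wouldn't be much code. Here's a minimal sketch - note the Register and Lookup methods are hypothetical replacements for the bare dictionary, and nothing later in this post depends on them:

using System;
using System.Collections.Generic;

namespace DynamicDataProject
{
  // Hypothetical thread-safe alternative to the simple Map property
  // above; the rest of this post uses the plain dictionary version.
  public static class DisconnectedMetadata
  {
    private static readonly object _syncRoot = new object();
    private static readonly Dictionary<Type, Type> _map = new Dictionary<Type, Type>();

    public static void Register(Type modelType, Type metadataType)
    {
      if (modelType == null)
      {
        throw new ArgumentNullException("modelType");
      }
      lock (_syncRoot)
      {
        _map[modelType] = metadataType;
      }
    }

    public static Type Lookup(Type modelType)
    {
      lock (_syncRoot)
      {
        // Return the mapped metadata type, or null if none registered.
        Type metadataType;
        return _map.TryGetValue(modelType, out metadataType) ? metadataType : null;
      }
    }
  }
}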

Next we have to create a System.ComponentModel.CustomTypeDescriptor. What a CustomTypeDescriptor does is get all of the information about the various metadata - attributes and properties - on your buddy class. The thing is, Microsoft already did all of that for us; they just inconveniently marked the type they use - System.ComponentModel.DataAnnotations.AssociatedMetadataTypeTypeDescriptor - as internal. With a little fancy, maybe slightly unsupported, reflection work we can pretty easily make use of the code that's already there. Instead of doing a giant full implementation of a new CustomTypeDescriptor, we can write a wrapper around the existing one.

using System;
using System.ComponentModel;
using System.ComponentModel.DataAnnotations;
using System.Reflection;

namespace DynamicDataProject
{
  public class DisconnectedMetadataTypeDescriptor : CustomTypeDescriptor
  {
    private static Type AssociatedMetadataTypeTypeDescriptor =
      typeof(AssociatedMetadataTypeTypeDescriptionProvider)
        .Assembly
        .GetType("System.ComponentModel.DataAnnotations.AssociatedMetadataTypeTypeDescriptor", true);

    public Type Type { get; private set; }
    public Type AssociatedMetadataType { get; private set; }
    private object _associatedMetadataTypeTypeDescriptor;

    public DisconnectedMetadataTypeDescriptor(ICustomTypeDescriptor parent, Type type)
      : this(parent, type, GetAssociatedMetadataType(type))
    {
    }

    public DisconnectedMetadataTypeDescriptor(ICustomTypeDescriptor parent, Type type, Type associatedMetadataType)
      : base(parent)
    {
      this._associatedMetadataTypeTypeDescriptor = Activator.CreateInstance(AssociatedMetadataTypeTypeDescriptor, parent, type, associatedMetadataType);
      this.Type = type;
      this.AssociatedMetadataType = associatedMetadataType;
    }

    public override AttributeCollection GetAttributes()
    {
      return AssociatedMetadataTypeTypeDescriptor.InvokeMember(
        "GetAttributes",
        BindingFlags.Instance | BindingFlags.Public | BindingFlags.InvokeMethod,
        null,
        this._associatedMetadataTypeTypeDescriptor,
        new object[] { }) as AttributeCollection;
    }

    public override PropertyDescriptorCollection GetProperties()
    {
      return AssociatedMetadataTypeTypeDescriptor.InvokeMember(
        "GetProperties",
        BindingFlags.Instance | BindingFlags.Public | BindingFlags.InvokeMethod,
        null,
        this._associatedMetadataTypeTypeDescriptor,
        new object[] { }) as PropertyDescriptorCollection;
    }

    public static Type GetAssociatedMetadataType(Type type)
    {
      if (type == null)
      {
        throw new ArgumentNullException("type");
      }

      // Try the map first...
      if (DisconnectedMetadata.Map.ContainsKey(type))
      {
        return DisconnectedMetadata.Map[type];
      }

      // ...and fall back to the standard mechanism.
      MetadataTypeAttribute[] customAttributes = (MetadataTypeAttribute[])type.GetCustomAttributes(typeof(MetadataTypeAttribute), true);
      if (customAttributes != null && customAttributes.Length > 0)
      {
        return customAttributes[0].MetadataClassType;
      }
      return null;
    }
  }
}


We’re doing a few interesting things here to be aware of:

  • On static initialization, we get a handle on the original AssociatedMetadataTypeTypeDescriptor - the internal type that does all the attribute reflection action. If we don’t get a reference to that type for some reason, we’ll throw an exception so we immediately know.
  • We have a GetAssociatedMetadataType method that you can pass any type to - ostensibly a LINQ to SQL model type - and you should come back with the correct metadata buddy class type. First we check the type mapping class that we created before and if it's not there, we fall back to the default behavior - getting the MetadataTypeAttribute off the LINQ to SQL class. (There's a quick sketch of this resolution order right after this list.)
  • The two-parameter constructor, which is used by Dynamic Data, is where we call our GetAssociatedMetadataType method. That’s our point of interception.
  • The three-parameter constructor, which lets a developer manually specify the associated metadata type, creates an instance of the original AssociatedMetadataTypeTypeDescriptor and passes the information into it first. We do that because that type has a bunch of validation it runs through to make sure everything is OK with the metadata type. Rather than re-implementing all of that validation, we’ll use what’s there. We’ll hang onto that created object so we can use it later.
  • The GetAttributes and GetProperties overrides call the corresponding overrides in that AssociatedMetadataTypeTypeDescriptor object we created. We do that because there’s a lot of crazy stuff that goes into recursing down the metadata class tree to generate all of the metadata information and we don’t want to replicate all of that. Again, use what’s there.
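
Here's that resolution order in action - a hypothetical console check, using the Resource and ResourceMetadata types from the earlier example:

using System;

// Register a mapping, just like we'll do at app startup later.
DisconnectedMetadata.Map.Add(typeof(DataLibrary.Resource), typeof(ResourceMetadata));

// Mapped type: resolved from the dictionary - no attribute on the
// LINQ to SQL class required.
Type viaMap = DisconnectedMetadataTypeDescriptor.GetAssociatedMetadataType(typeof(DataLibrary.Resource));
Console.WriteLine(viaMap); // DynamicDataProject.ResourceMetadata

// Unmapped type: falls back to looking for [MetadataType] via
// reflection, and returns null when the attribute isn't there.
Type viaFallback = DisconnectedMetadataTypeDescriptor.GetAssociatedMetadataType(typeof(string));
Console.WriteLine(viaFallback == null); // True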

That’s the big work, seriously. A wrapper around AssociatedMetadataTypeTypeDescriptor pretty much does it. Last thing we have to do is create a System.ComponentModel.TypeDescriptionProvider that will generate our descriptors. That’s easy:

using System;
using System.ComponentModel;

namespace DynamicDataProject
{
  public class DisconnectedMetadataTypeDescriptionProvider : TypeDescriptionProvider
  {
    public DisconnectedMetadataTypeDescriptor Descriptor { get; private set; }
    public DisconnectedMetadataTypeDescriptionProvider(Type type)
      : base(TypeDescriptor.GetProvider(type))
    {
      this.Descriptor =
        new DisconnectedMetadataTypeDescriptor(
          base.GetTypeDescriptor(type, null),
          type);
    }

    public override ICustomTypeDescriptor GetTypeDescriptor(Type objectType, object instance)
    {
      return this.Descriptor;
    }
  }
}

As you can see, this basically just provides an override for the “GetTypeDescriptor” method and hands back our custom descriptor.
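
Outside of Dynamic Data you can also exercise the provider directly, which makes for a quick sanity check - a minimal sketch, assuming the DataLibrary.Resource type from earlier (Dynamic Data normally constructs the provider for you via the MetadataProviderFactory delegate):

using System;
using System.ComponentModel;

// Build the provider by hand and ask it for a descriptor.
var provider = new DisconnectedMetadataTypeDescriptionProvider(typeof(DataLibrary.Resource));
ICustomTypeDescriptor descriptor = provider.GetTypeDescriptor(typeof(DataLibrary.Resource), null);

// The properties and attributes now include the buddy class metadata.
foreach (PropertyDescriptor property in descriptor.GetProperties())
{
  Console.WriteLine("{0}: {1} attribute(s)", property.Name, property.Attributes.Count);
}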

That’s the entirety of the infrastructure:

  • The map.
  • A CustomTypeDescriptor that looks in the map and then falls back to reflection.
  • A TypeDescriptionProvider that uses our CustomTypeDescriptor.

To use this mechanism, in your Dynamic Data project you need to register the mappings and register the TypeDescriptionProvider. Remember the “model.RegisterContext” call in the Global.asax.cs file in the RegisterRoutes method? Add your mappings there and when you call RegisterContext, add a MetadataProviderFactory:

public static void RegisterRoutes(RouteCollection routes)
{
  // Register types with the map
  DisconnectedMetadata.Map.Add(typeof(DataLibrary.Resource), typeof(DynamicDataProject.ResourceMetadata));

  // When you register the LINQ to SQL data context,
  // also register a MetadataProviderFactory pointing
  // to the custom provider.
  MetaModel model = new MetaModel();
  model.RegisterContext(typeof(DataLibrary.ResourceDataContext), new ContextConfiguration()
  {
    ScaffoldAllTables = true,
    MetadataProviderFactory =
    (type) =>
    {
      return new DisconnectedMetadataTypeDescriptionProvider(type);
    }
  });

  // ...and the rest of the method as usual.
}

That's it. Now you don't have to mark up your LINQ to SQL objects with partial classes - you can put your metadata "buddy classes" anywhere you want.

There are some optimizations you could make for performance purposes that I didn’t do here for clarity. For example, rather than call “InvokeMember” on every call to GetAttributes and GetProperties in the CustomTypeDescriptor, you could cache references during static construction to the MemberInfo corresponding to the two methods and invoke the cached references. This should get the idea across, though.
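
A minimal sketch of that caching idea, assuming these members live inside the DisconnectedMetadataTypeDescriptor class above (declared after the static Type field, which they depend on):

// Cache the MethodInfo references once, at static construction.
private static readonly MethodInfo GetAttributesMethod =
  AssociatedMetadataTypeTypeDescriptor.GetMethod(
    "GetAttributes",
    BindingFlags.Instance | BindingFlags.Public);

// GetProperties has two public overloads, so pass Type.EmptyTypes to
// pick the parameterless one and avoid an AmbiguousMatchException.
private static readonly MethodInfo GetPropertiesMethod =
  AssociatedMetadataTypeTypeDescriptor.GetMethod(
    "GetProperties",
    BindingFlags.Instance | BindingFlags.Public,
    null,
    Type.EmptyTypes,
    null);

public override AttributeCollection GetAttributes()
{
  // Invoking the cached MethodInfo skips the member lookup that
  // Type.InvokeMember would otherwise perform on every call.
  return (AttributeCollection)GetAttributesMethod.Invoke(
    this._associatedMetadataTypeTypeDescriptor, null);
}

public override PropertyDescriptorCollection GetProperties()
{
  return (PropertyDescriptorCollection)GetPropertiesMethod.Invoke(
    this._associatedMetadataTypeTypeDescriptor, null);
}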

And, of course, the usual disclaimers apply: YMMV, I’m not responsible if this code burns your house down or crashes your app or whatever, etc., etc. Works on My Machine!

dotnet, vs

In a really large system the build can take a long time. A really long time. Long enough to make continuous integration sort of meaningless. You may not be able to do a whole lot about it, but something to look at is your project’s code organization. The compiler and linker have some startup overhead you may be able to get rid of by reducing the number of solutions/projects you have.

For example, I threw together a set of three test codebases. Each has 100 (empty) classes, but they’re organized in different ways. I then built them a few times and compared the average times.

Project Format                                                      | Time to Rebuild (Clean/Build) | Working Copy Size Post-Build
--------------------------------------------------------------------|-------------------------------|-----------------------------
100 separate solutions, 1 project per solution, 1 class per project | 42s                           | 4.29MB
1 solution with 100 projects, 1 class per project                   | 42s                           | 3.52MB
1 solution with 1 project, 100 classes in the project               | 1s                            | 256KB

I noticed two interesting things here:

  1. From a time perspective, you don’t get much if you have 100 solutions or 100 projects - the real gain (and it’s significant) is if you put everything into the same project/assembly.
  2. The working copy size post-build (the amount of disk space taken by the source and build output) is orders of magnitude smaller if you put everything into the same project/assembly.

This isn’t to say everyone should start shipping mammoth assemblies. Just be careful how you organize things. Choose your assembly boundaries carefully. You may gain yourself some time in the build - and some space on your disk.

General Ramblings

A while ago a friend of mine asked me for the names of some of the authors I read. This got me thinking and I figured I’d make a list of a few of my favorites. So, in no particular order…

"Neuromancer" by William
Gibson

William Gibson: I’ve read Neuromancer so many times the book has nearly fallen apart. I like how he describes things enough to get a vivid image in your head but not with such numbing detail you get bogged down. Plus, how can you deny the guy who coined the term “cyberspace?” I’ve read all of his books and I have yet to find one I didn’t like.

Richard K. Morgan: Morgan has created a future world where your personality lives in a "cortical stack" at the base of your skull and a character named Takeshi Kovacs is an ex-special-forces soldier turned investigator for hire. Action-packed and a really fun read, Altered Carbon is the first book in that series.

Tom Clancy: Clancy’s sort of hit-or-miss. I’ve read some of his books that just take freaking forever to get where they’re going, but others are really exciting. Most of the books revolve around a modern-day wartime environment. I particularly liked Rainbow Six.

Jeff Noon: Noon writes in a very distinctive style that seems to resonate for some folks but not as much for others. If it works for you, it really works, and the books are amazing. His primary series revolves around a world where we exchange objects between our world and a parallel world and what comes back is a powerful hallucinogenic drug… that you take by sticking a feather in your mouth. If you think it sounds weird, you're right, it is… but it's a very compelling universe, too. The first book in the series, Vurt, brings with it the added challenge that it's written in an invented dialect so it may take a bit to get into, but give it a shot. When I was done with it, I immediately flipped back to the first page and read it again.

"Vurt" by Jeff
Noon

Steve Aylett: I’ll admit I’ve only read one of Aylett’s books - Slaughtermatic - but it was so good I have to include him here. In this particular book, the world is a place where crime is a form of recreation and time travel is possible. It’s admittedly a little convoluted, but a really fantastic read.

Philip Pullman: Specifically, Pullman’s “His Dark Materials” trilogy which starts with The Golden Compass (also made into a movie). In this world, your soul is physically embodied as a “daemon” - an animal that travels with you. When the movie came out a lot of stink got raised about the social commentary on organized religion that these books present, but I really feel like that was a lot of crap. Is there some commentary? Sure. Is it as important or prevalent as the folks out there would like you to think? I don’t think so. This is another set of books I’ve read several times and enjoy every time.

"The Looking Glass Wars" by Frank
Beddor

Frank Beddor: I am an Alice in Wonderland freak. I love the story, the characters, and I love imaginative derivatives of it. Beddor has created my favorite reimagining with his "Looking Glass Wars" trilogy - the idea being that Wonderland is a real place and Alyss Heart is a real person who ends up crossing into our world and getting trapped. Not difficult reads but some of my favorites. There's even a soundtrack that goes along with them.

Neal Stephenson: Again, sort of hit-or-miss for me, but I can't recommend more strongly that everyone read Snow Crash. Where else would you find a place where pizza delivery is controlled by the mob and it arrives at your house via a guy you refer to as "The Deliverator?" Cyberpunk action at its finest.

Neil Gaiman: Is there a "favorite authors" list Gaiman isn't on? Every story is different and imaginative in its own right, but my absolute, all-time favorite, and one that I have yet to find anyone disappointed with, is his collaboration with Terry Pratchett: Good Omens. Of course, if you were offended by the "commentary" in Philip Pullman's "His Dark Materials" series, Good Omens is probably not for you… but otherwise it's a must-read.

Douglas Adams: The Hitchhiker’s Guide series are some favorites for me (and a ton of other people) and I’ve loved them since I was a kid. One of the best birthday gifts I’ve gotten was a leather-bound copy of the first four books in the series. I’ve read them, listened to the audio books (on tape!), seen the movies/TV shows… I can’t get enough. Truly funny stuff.

"Jennifer Government" by Max
Barry

Max Barry: You also might see him listed as “Maxx Barry” but he seems to have changed that in recent times. I discovered Barry through his book Jennifer Government: Welcome to a place where enterprise has taken over enough that your last name is the name of the company where you work and a shoe manufacturer tries to earn “street cred” for his shoes by killing people who buy them. His subsequent efforts are no less interesting or imaginative. Barry’s another one where I’ve read all of his books and love them all.

If you’re looking for something new to read, maybe check some of these out. If you do, drop me a line and let me know what you think.

dotnet, testing

I recently had to do some performance profiler evaluation for .NET applications and I figured I’d share my results. Note that it’s as scientific as I could make a subjective review (e.g., “friendly UI” might mean something different to you than to me), but maybe it’ll help you out. Also, I’m not a “profiler expert” and, while I’ve used profilers before and understand generally what I’m looking at, this isn’t my primary job function.

The five profilers I tried out:

  • The profiler built into Visual Studio Team System 2008 ("VSTS 2008")
  • Red Gate ANTS Performance Profiler 5.2 ("ANTS Perf 5.2")
  • Intel VTune Performance Analyzer 9.1 ("VTune 9.1")
  • JetBrains dotTrace 3.1
  • AutomatedQA AQtime 6

An "X" in the table below means the profiler has that feature; a "?" means I couldn't verify it.

Testing was done on a dual-2.8GHz processor machine running Windows Server 2008 R2 64-bit with 4GB RAM.

Feature                     | VSTS 2008 | ANTS Perf 5.2 | VTune 9.1 | dotTrace 3.1 | AQtime 6
----------------------------|-----------|---------------|-----------|--------------|---------
User Interface              |           |               |           |              |
- Visual Studio integration | X         |               | X         |              | X
- Standalone application    |           | X             | X         | X            | X
- Friendly/easy to use      | X         | X             |           | X            |
- Robust reporting          | X         | ?             |           |              | X
Measurement Style           |           |               |           |              |
- Sampling                  | X         | X             | X         | X            | X
- Instrumentation           | X         | X             | X         | X            | X
Measurements Recorded       |           |               |           |              |
- CPU time                  | X         | X             | X         |              | X
- Wall-clock time           | X         | X             | X         | X            | X
- Additional perf counters  | X         |               | X         |              |

Notes:

VSTS 2008: This requires Visual Studio, which means you have to have VS installed on the machine running the app you're profiling. That said, this was the easiest to get results from and the easiest to interpret.

ANTS Perf 5.2: In general this appeared to be the best balance between "robust" and "usable," but I couldn't actually see the report that came out because it locked up the UI thread on the machine and ate 3GB of memory. I've asked about this in the forums. Turns out this is fixed in the next version, 6, currently in EAP.

VTune 9.1: I couldn't actually get a profile to run using VTune since it complained of being "unable to determine the processor architecture." As such, I don't know how well the reporting works.

dotTrace 3.1: When I ran dotTrace 3.1 on a multi-proc system, I got several timings that came out with negative numbers (-1,000,289 msec?). You can fix this by setting the proc affinity for the thing you're profiling (see the sketch after these notes). I tried a nightly build of dotTrace 4.0 and that's fixed. dotTrace 4.0 will also let you profile a remote application - something the others don't support.

AQtime 6: AQtime has a lot of power behind it but lacks the usability of some of the other profilers. It appears that if you take the time to really tweak around on your profile project settings, you can get very specific data from an analysis run, but doing that tweaking isn't a small feat. I spent a good hour figuring out how to profile an ASP.NET application in the VS dev server and setting it up. Also, while it very well may be due to my lack of experience with the tool, AQtime had the most noticeable impact on the application's runtime performance. It took several minutes for the first page of my app to load in the browser.
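
Since the processor-affinity workaround comes up in the dotTrace note, here's what it looks like in code - a minimal sketch, where "MyApp" is a hypothetical placeholder for the name of the process you're profiling:

using System;
using System.Diagnostics;

// Pin the profiled process to a single CPU so timings taken across
// cores can't come out negative. "MyApp" is a placeholder - use the
// actual process name of the thing you're profiling.
Process process = Process.GetProcessesByName("MyApp")[0];
process.ProcessorAffinity = (IntPtr)1; // bitmask: CPU 0 only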

For now, it looks like the VSTS profiler is my best bet. If I could figure out the UI problem with ANTS, or if dotTrace 4.0 was out, I’d say those options tie for my second choice. The VTune profiler seems to be the most… technical… but it also desperately needs a UI refresh and seems geared toward profiling unmanaged code, where managed code is a “nice to have” rather than a first-class feature.

UPDATE 1/21/2010: I added AQtime to the list of profilers I tried out. Also, I removed the “VS integration” checkmark from ANTS because, while it adds a menu entry to VS, all it does is start up the external application. I’m not counting that. Finally, I found out my ANTS problem is fixed in the next version, 6, currently in EAP. Since it’s not released, I still have to go with the VSTS profiler, but once it’s out, I’d vote ANTS.