November 2007 Blog Posts

Linked Windows Live IDs and Email Address Verification

I noticed, about a month ago, that whenever I would log in to Windows Live Messenger, I'd get a little toast popup telling me to verify my email address:

You need to verify your e-mail address with Windows Live Messenger. Click here.

I clicked, I followed the verification process, and I thought everything was cool.  Not so.  The prompt kept coming up.  On subsequent times I tried to verify my address, though, it would tell me that I didn't need to verify.  What gives?

I had recently taken advantage of the "linked ID" feature of Windows Live IDs.  The misleading thing here: I did need to verify an email address, just not the one I was signing in with.  Curiously, signing in with the other IDs didn't prompt me to verify... but that's neither here nor there.  After fighting my way through support for a month on this (and specifically asking them if it had to do with my linked IDs), it came back that, yes, it was something to do with one of the other accounts.

Anyway, if you're getting prompted and you recently linked your IDs, it's probably a problem with one of the linked accounts.  Make sure they're all verified.

Combining Skins and Localized Strings in ASP.NET

Let's say you have some text on your web site that has an image inline.  Something like this:

Fields with a ! icon indicate failed validation.

You've seen that before.  You've probably done that before.  Now let's say you're using ASP.NET themes and skins to control your look and feel, and that icon is different based on your theme.  It might change to be:

Fields with a ! icon indicate failed validation.

You could have some ASPX code that handles that with the SkinID of the image, right?

Fields with a <asp:Image ID="icon" runat="server" SkinID="validationIcon" /> icon indicate failed validation.

Okay, that works acceptably, provided your skin defines an Image that has a SkinID "validationIcon."  But you're a savvy ASP.NET developer and you know better than to put literal strings like that right in your web application - you put your strings in resx files and look them up at runtime so your app can be localized.  Now what?
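For reference, here's roughly what the matching entry in your theme's .skin file could look like.  (The theme name and image path here are placeholders I made up - use whatever your theme actually ships.)

```aspx
<asp:Image runat="server" SkinID="validationIcon"
    ImageUrl="~/App_Themes/MyTheme/images/validation-error.gif"
    AlternateText="!" />
```

Each theme defines its own Image with the "validationIcon" SkinID, so the icon swaps automatically when the theme changes.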

There are a few ideas you could try, but they don't work (or not well):

  • String.Format - You could leave a {0} in the page text and try to String.Format the image into place.  That might work if you weren't skinning, but you need to take advantage of ASP.NET skins... so that won't work.
  • Client-side script - You could try some client-side script to render the image to a hidden place on the page and then place it into a <span /> or something in the instructions, but that's kind of painful.
  • Break the text up into separate controls - You could make the text three controls: the text before the image, the image, and the text after the image.  This could become hard to manage from a localization perspective (what if there isn't any text after the image?) and it's not terribly flexible.

The answer: ParseControl.

The System.Web.UI.Page class has a ParseControl method on it that it inherits from TemplateControl.  You can use this to your advantage - just put the markup inline into your resource string and use the ParseControl method to create a control hierarchy on the fly.  Then swap the parsed control hierarchy into your page where you want the text to display.  Put a Literal in your ASPX and do your string lookup...

<asp:Literal ID="pageText" runat="server" Text="<%$Resources: MyResources, PageText %>"/>

And in your resx file, put the entire text, including the image markup:

<data name="PageText" xml:space="preserve">
  <value>Fields with a &lt;asp:Image ID="icon" runat="server" SkinID="validationIcon" /&gt; icon indicate failed validation.</value>
</data>

See how that looks just like ASPX page markup?  Perfect.  Now in the codebehind of your page class, replace the Literal with the control hierarchy that gets parsed from the text in that very Literal:

public partial class MyPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        Control parsed = this.ParseControl(this.pageText.Text);
        Control parent = this.pageText.Parent;
        int index = parent.Controls.IndexOf(this.pageText);
        parent.Controls.AddAt(index, parsed);
        parent.Controls.Remove(this.pageText);
    }
}

In that example code, we:

  • Parse the text that got looked up from resources into a control hierarchy.  This lets us account for localization while still providing the inline themed icon we're looking for.
  • Grab the parent of the original control.  This is important because you need to insert the parsed control hierarchy into the proper spot in the control tree.
  • Find the exact location of the original control in the parent's control structure.  This tells us where to put the parsed control hierarchy.
  • Insert the parsed control hierarchy into the proper spot.
  • Remove the original control from the hierarchy - we've replaced it, so it's no longer needed.

Pretty nifty, eh?  There are lots of other things you might want to consider if you do this, but I'll leave them as "an exercise for the reader," so to speak:

  • The example replaces the control with the parsed hierarchy; you might instead wish to add the parsed hierarchy as a child of the control you're "replacing."
  • If you use the same text twice in the same page, you may end up with control naming clashes; you might want to wrap the parsed control hierarchy in a naming container to avoid that.
  • You probably don't want to repeat this code over and over in every page; you'd want to have a single service class that does this for you.
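As a sketch of those last two ideas - note that the class and method names here are my own inventions for illustration, not anything official - a shared helper might look something like this:

```csharp
using System.Web.UI;
using System.Web.UI.WebControls;

public static class ParsedLiteralHelper
{
    // Minimal naming container so parsed control IDs from one
    // replacement don't clash with IDs from another on the same page.
    private class ParsedContentContainer : Control, INamingContainer { }

    public static void ReplaceWithParsedControls(TemplateControl page, Literal literal)
    {
        // Parse the resource-looked-up markup into a control hierarchy.
        Control parsed = page.ParseControl(literal.Text);

        // Wrap the parsed hierarchy in a naming container.
        ParsedContentContainer container = new ParsedContentContainer();
        container.Controls.Add(parsed);

        // Swap the container in where the Literal was.
        Control parent = literal.Parent;
        int index = parent.Controls.IndexOf(literal);
        parent.Controls.AddAt(index, container);
        parent.Controls.Remove(literal);
    }
}
```

Then any page can just call ParsedLiteralHelper.ReplaceWithParsedControls(this, this.pageText) from its Load handler instead of repeating the parse/swap logic.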

A tiny word of warning:  you probably don't want to do this on every single string you localize. It's not super expensive, but it isn't necessarily "free," either.  Here's the timing from trace output - you can see the obvious difference in the amount of time the page Load event (where I'm doing this) takes vs. the other events.

Timings for parsing inline controls.

It's only ~0.01 seconds, but you wouldn't want to, say, put this in the middle of a big databinding block.

Dad's Icy Car Accident

Dad's Car Accident
Dad's Car Accident

My dad was headed to work yesterday and hit an icy patch in the road.  His car spun 180 degrees, went into the ditch, and hit a tree.

He's okay, albeit on some serious pain killers and with some chipped vertebrae.  No one else was involved or injured and after some physical therapy and some time to rest up, he should be good as new.

Here are some pictures of what was left of his car.  Absolutely totaled.  He had to climb out the passenger window because the frame was so bent up neither door would open.

Of course, the thing he seems to be most concerned with is that, somewhere in the chaos, he lost his glasses.  Heh.

Domestic Dispute

Woke up last night at 2:45a to the neighbors out in their yard arguing.  No physical violence, and it was mostly the guy yelling, but super loud.  I don't even think I'd heard them before, except for their dog barking incessantly a year or so back.  Sigh.

Jenn ended up calling the cops who couldn't actually find our house on a map.  She kept having to give a bunch of nearby cross-streets and directions.  That makes me feel really safe.  I imagine a time where there's a psycho killer in my house trying to stab me and the 911 conversation goes like this:

Travis: Oh my God, help, there's someone in my house trying to stab me!  Here's my address - get here as soon as possible!
Dispatcher: Hmmm... you don't seem to be on the map. What's your nearest cross-street?
Travis: I think he's coming up the stairs! Wait - cross-street? Uh... Foo Ave.
Dispatcher: Yeeeaaaah, um, is that north or south of Bar St.?
Travis: South.  Hurry!
Dispatcher: Are you sure? Where are you in relation to Baz Pl.?
Travis: -stabbed to death-

That's the first time I've called the cops on someone.  When they finally arrived, it was in total force - like, two cars and another cop running up a side street.  I'm surprised the SWAT team wasn't there.

Had a hell of a time getting back to sleep, though.  It took me forever to get to sleep the first time, then this.  I'm dead tired today.  Damn, anyway.

I Might Just Suck As A Drummer

Okay, so for a while I thought it might be that I got a bum set of drums with my copy of Rock Band because I'm only able to get like 98% on Easy level.  I mean, I swear I'm hitting them right, but it's just not registering.

To test, I went into practice mode and played a song segment that basically has no drums at all - so I wouldn't get confused by notes being hit or missed.  I did a roll on each drum to see how it went and all of the hits were registered.  I also hit various combinations of two pads simultaneously and all of those hits were registered.  So I was/am encountering a combination of issues, all adding up to Bad News for Me:

  1. The timing on the drums is utterly unforgiving.  If you're playing guitar, you get a little plus/minus room for strumming and hitting a note.  Sort of a "close enough" buffer.  With drums, you're either ON or you're NOT.  No fudge room.  A millisecond or two off, and it won't count.
  2. The USB hub that comes with the game is pretty bad and seems to introduce a little latency.  Not a lot, but with timing having to be pretty precise, I notice a little better response when I plug the drums into the Xbox directly and everything else goes in the hub.
  3. Calibrating the game is hugely important.  Sometimes the presets work; I manually calibrated mine and things seemed to be a little better.
  4. I may just be a sucky drummer.  After checking out the responsiveness of the drums, calibrating everything, and plugging the drums right into the Xbox, I'm doing a little better but am still missing a couple of notes per song on Easy mode.  It's usually during a spot where I have to hit two pads simultaneously and I think it's that I'm hitting them close to correctly but actually hitting one pad slightly earlier than the other, so it doesn't count.

I'm pretty good at Donkey Konga bongos.  I think playing drums with your hands and playing them with sticks might be two different beasts.  I'm OK with that - knowing that it means practice is one thing; doubting yourself without knowing if it's the equipment or not is a whole other set of insecurities.

Rocking Out to Rock Band

My copy of Rock Band arrived yesterday and it well and truly does rock.  The thing I was excited for was the drum kit; I've always fancied myself a drummer, and I finally get to test that out.

Turns out, I'm not all that bad.

A breakdown by instrument...

Guitar (Lead/Bass)
I've played my fair share of Guitar Hero 2 and Guitar Hero 3, so I knew what to expect out of the guitar experience.  You can play lead or bass, your choice.  The actual playing experience with the guitar is roughly the same, so I won't go too far into it.  That said, I did notice the difficulty level was significantly lower than GH3, and possibly slightly lower than GH2.  That's not a bad thing - it just makes you feel cooler.

I can't say much about the controller it ships with, though, because mine arrived broken (the directional pad on it is stuck so it constantly thinks the "down" arrow is being pushed - throws a wrench in the works, let me tell you).  Fortunately their support is really good and you can very quickly get an RMA and a replacement through their automated online system - no need for massive escalation or trouble.  Ordered my replacement this morning.

Microphone (Vocals/Tambourine)
I haven't played SingStar but I gather the vocals portion here is the same as that.  It's a little harder than I expected and it does require you to really know the words and the tune - including all the little fluctuations the singer makes while singing.  In some cases you can get by with volume over accuracy (which isn't far from real life, right?) but generally you do need to know the song pretty well.

For example, I sing along to Bon Jovi's "Wanted Dead or Alive" in the car, but it turns out I really only know about half the song... and thinking back, I do sort of hum my way through a lot in the car.  That said, I was able to get 100% (on Easy level) for The Clash song "Should I Stay Or Should I Go?"  So it's not impossible, just sort of hard.

During musical solos, you can tap the mic in time with some "beats" that appear and play the tambourine.  This sort of reminds me of the clapping that you have to do in Donkey Konga.

Drums
This is what I was waiting for.  It's a heck of a setup and isn't really small sitting in your living room, but it's hella fun.  There are "notes" on the screen for hitting each of the four drum pads (just like Guitar Hero) and the kick drum is signified by a hard line that crosses all of the four note positions at once.  It takes a little getting used to, but once you figure it out, you're in.  I'm working through Easy difficulty right now because I tried Medium and... well, there's too much going on and I haven't quite got drums down yet.

In some cases, though, I can't figure out whether my drum controller is faulty or if I'm just a sucky drummer.  I swear I hit the right pad at the right time, it just says I didn't.  Other times, everything's fine.  I can get like 98% in some cases, I just miss a couple.  I would think if the controller was faulty it would be... more predictable.  Again, the 60 day warranty might be good here.  Drums are definitely less forgiving than the guitar - with the guitar, you can "pretty much" get the note and it'll count it; with drums, you either get it dead on or you don't get credit.  I think that's more likely what I'm running into.

That said, I really hope someone comes out with some after-market drums for this game.  I like the drums, but I'd like some of it to be a little more adjustable.  I'm 6'2" and I feel sort of cramped by the kit due to the placement of the kick drum pedal.  Maybe I just need to try some different positioning.

The Game
The game itself is pretty good.  Far better set list than Guitar Hero 3 had.  I also really like the way the campaign is set up as a "world tour" where you start in small venues and gather money and fans - it feels more tangible than the arbitrary progression you get in Guitar Hero.  You also have a really nice character creation system that allows you to personalize your character including face, hair, clothes... very cool.  There were only three points of confusion I had:

Point of Confusion 1:  Bands are attached to an Xbox Live profile and to a Rock Band character.  When you create a band, the person who's creating the band gets the band saved to their Xbox Live profile.  Further, as you select (or create) your character in the band, that specific character has to play in the band for the entire life of the band - they're the band leader.  This is very important because...

Point of Confusion 2:  Characters can't change instruments.  Once you create a character for a particular instrument (guitar/drums/mic), they can't switch.  Jenn and I created a band where I was playing drums and she was playing guitar.  She wanted to try the drums out, so we tried to get it so her character was playing drums and mine was on guitar.  No dice - both of us were only allowed to create new characters.  And since my profile was the one with the band leader, and the band leader was playing drums, we either had to play under the other person's profile or create a new band.  We created a new band.  (The third option, really, was to back out and do a "quick play" where you can form a band impromptu with anyone playing any instrument - no leader required.)

Point of Confusion 3: Once the instruments are attached to the Xbox 360, you can't change their position.  So, say I attach the instruments and the drums are player 1, the guitar is player 2, and the mic is player 3.  By default, my console signs me in when it turns on, so I'm signed in as player 1 and I'm stuck on drums.  The easy way to fix this - connect all of the controllers to the Xbox, turn it on, and sign everyone out.  Everyone pick up the control they want to play with and sign in from there.  During the game if you want to change instruments, you can sign out and sign back in without exiting the game, so just do that - sign out on your current instrument and sign in on the one you want to switch to.  It sounds like a no-brainer when I say it here, but trust me, this was a huge problem for us to figure out.

Again, by-and-large, it's an awesome game.  I'm also super happy [so far] with the ease of customer support.

Hey, since I've got the day off, I should probably go do a little rockin' right now.

"Command Prompt Here" Round-Up

NOTE: I'm no longer maintaining the Command Prompt Round-Up. Instead, visit the Command Prompt Here Generator.

With the release of VS 2008 and yet-another-Visual-Studio-command-prompt, I figured I'd do a round-up of all of the "Command Prompt Here" power toys that I've gathered to assist me in keeping this all working.

  • Doshere.inf - Standard, no-frills command prompt (basically the original PowerToy).
  • powershellhere.inf - PowerShell command prompt (from Scott Hanselman).
  • VSNet2003cmdhere.inf - Visual Studio 2003 command prompt (from Scott Hanselman).
  • VSNet2005cmdhere.inf - Visual Studio 2005 command prompt (from Scott Hanselman).
  • VSNet2008cmdhere.inf - Visual Studio 2008 command prompt for x86 (my own).
  • VSNet2008Admincmdhere.inf - Visual Studio 2008 elevated/Administrator command prompt for x86 (my own).
  • VSNet2010cmdhere.inf - Visual Studio 2010 command prompt for x86 (my own).

Pick any or all of them, your choice.  Of course, if you only use one of the VS command prompts, you can just set it so your command prompt is always a VS command prompt, but that's less intriguing when you have to support side-by-side VS installs.

Oh, and all of these together look pretty crazy.  But it's useful.

All of the Command Prompt Here options working together.

Yours for the taking!

NOTE: I'm no longer maintaining the Command Prompt Round-Up zip file. Instead, visit the Command Prompt Here Generator.

UPDATE 3/12/09: If you need more in the way of easy run-as-admin elevation stuff, like the VS 2008 Admin command prompt tool, you'll want to check out the Elevation Power Toys on TechNet. Everything from "elevate this script" to "PowerShell Admin prompt here" is waiting for you.

VS 2008 Now My Worst Install Experience EVAR

I just got done installing Visual Studio 2008.  It is now the installation experience I will rank other installation experiences against to see how badly they suck - on a scale of "Awesome" to "VS 2008."  It went something like this:

  • Download VS 2008 ISO from MSDN.
  • Start up Virtual CD Control Panel and mount the ISO as a drive.
  • Install VS 2008.
  • Get about halfway finished and get asked to reboot.  Click the "OK" button and reboot.  No option available to not reboot.
  • Log in and watch the setup alert me, after "loading setup files" for 10 minutes or so, that setup has failed and I need to restart.
  • Figure out that there's no option to automatically mount an ISO at startup using the Virtual CD Control Panel and guess the issue is probably that the drive disappeared after my reboot.  Hard to say, though, with no specific error message.
  • Mount the ISO again figuring I'll give it another run.
  • Get about 75% finished and, again, get asked to reboot.  Again, no option not to reboot.  Click "OK" and reboot.
  • Watch very closely as I get logged in again and haul ass to get the Virtual CD Control Panel up and re-mount the ISO before the "loading setup files" gives me the failure message.  Get the drive mounted but still get the failure.
  • Decide to uninstall the bits that got installed, figuring something got corrupted.
  • 15 minutes into "generating setup script" for the uninstall, get a notice that I need to close Outlook because it has Word open.  Close Outlook.  Watch in horror as the "generating setup script" bit starts over from the beginning.
  • Finish the uninstall and reboot for good measure.
  • Decide that the Virtual CD Control Panel is bad news for me and that it's time to burn the ISO to a DVD.
  • Realize after 20 minutes of futzing around that of the two computers in my office, one only has a CD-RW burner and the other has a DVD burner so old that the Windows Server 2003 OS doesn't actually recognize it as a DVD burner.  No drivers, discontinued support.  No one else in my general cube vicinity has a DVD burner either.  What the...?
  • Download an ISO extraction utility and extract the contents of the ISO to my drive and start the installation from there.
  • Shut down everything on the machine that could remotely be construed as productive for fear that the install will be mad at me and restart midstream.  Includes Outlook, IE, Word, Messenger, etc.
  • Finally get through the install of both VS 2008 and the associated documentation.  Takes somewhere between "forever" and "holy crap" to finish.
  • ...and now I'm installing all of my add-ins.

Elapsed time from start to installing add-ins: 6 HOURS.  Absolutely ridiculous.  From what I hear, I'm not the only one eating it on this one.  I don't remember the betas kicking my ass like this.  What happened?

Bought My XO Laptop

The XO laptop - One Laptop Per Child program.

The One Laptop Per Child foundation has their "Give One, Get One" program going on where you buy two of these little XO laptops and one comes to your house, one gets donated to a child in a developing country.  It's $400, and $200 of that is tax-deductible (the cost of the laptop that gets sent to the developing country).  The rest goes toward the laptop that hits your hot little hands.  (Ostensibly to give to a child in your life, but for me it's more likely a toy to hack.)

A charity to help teach kids computing skills? Oh, hellz yeah.  I bought mine; have you got yours?


On Writing Good XML Documentation Comments

XML doc comment screenshot

It occurred to me the other day that there's information out there about the technical aspects of writing XML doc comments in .NET code (i.e., the markup tags), but there's nothing out there about what you should put in that markup.  While not every developer is also a technical writer or novelist, sometimes all the users of your code have to go on is the documentation you generate, so it's important to write it well.

And, no, you can't just refer your users to Reflector.  You'd actually be amazed at how many people don't even know what Reflector is.

TL;DR - THE GOLDEN RULES OF DOCUMENTATION

There's a lot here.  If you don't take anything else away, please at least take these two things:

  • Write like it’s MSDN. After you write the documentation, read it back to yourself, maybe even out loud. Does it sound like something you’d read from MSDN? How’s the grammar? They have smart people writing docs over there - learn from them the same as you "View Source" to learn good HTML.
  • Write like the reader doesn’t have the source code. Write the doc, then collapse all the method definitions so all you see is XML doc. Go get a coffee. Come back. Now read the documentation. Does it tell you everything you will need to know to work with the function? Pretend the reader doesn't have Reflector (you'd be surprised, lots don't).

Given those two rules, here are some tips on writing better XML doc comments:

  1. Think about what you'd like to see and write that.  I intentionally made this the number one rule because it's the most important.  When you're writing your documentation, step outside yourself for a minute and think, "If I was handed just this assembly and a help doc, and I didn't have access to Reflector or anything like that, what sort of documentation would help me to understand how to use this code?"  Remember:
    • Your users won't necessarily have the source to refer to, and even if they do, you shouldn't force them to resort to that.
    • Not everyone knows everything you know about the code.
    • The flow of control may not actually be as obvious as you think.
  2. Learn the markup.  If you only know about the <summary/> tag that gets put in when you hit /// (or ''', for you VB people), you don't know XML doc comments.  MSDN has the reference for the base tag set but there is a good reference here that includes a set of fairly widely accepted extensions.  There's a lot more to comments than the summary.
  3. Keep the <summary/> short.  The content in the <summary/> and <param/> tags shows up in Intellisense in the Visual Studio IDE.  Don't write a novel there - one sentence, maybe two tops is all you need.  Leave the detailed comments for the <remarks/> section.
  4. Don't explain something in terms of itself.  The documentation is where you should expound on what's going on and, in some cases, why.  Say you have a custom "ICoolThing" interface and you implement that in a "ReallyCoolThing" class.  A bad <summary/> would be, "An implementation of the ICoolThing interface that is really cool."  That's not at all helpful - it doesn't tell you anything.  Instead, try something like, "Cool thing used to render XML doc comments."  (Or whatever it's used for.)  Explaining something in terms of itself isn't clarifying, it's just redundant.
  5. Write in complete sentences...  Writing code is a very terse experience.  There's a grammar, and it [generally] reads well enough, but it's a different beast than writing documentation.  Documentation is where you need to describe in full, complete sentences and paragraphs what's going on.
  6. ...But be straightforward and don't go overboard with verbosity.  Basically, "know when to say when."  You're not writing a legal document.  You're not writing a scientific research paper.  (Or maybe you are, but you know what I mean.)  Don't "fluff up" your docs with extra language.  Don't over-formalize the language.  Make it easy to read, explain what's going on, and call it a day.
  7. I can has grammarz?  Use proper spelling, grammar, and punctuation.  If you're not confident in your writing abilities, have someone who is good at this proofread for you.  (Or, better still, integrate the proofreading into your code review process.  You have a code review process, right?)  It may seem unimportant, but these things can make your documentation far easier to understand and may even give users more confidence in your code.  (If the person writing the code can't write a decent sentence, would you really have the confidence that all of the error handling and such is done right?)  The only time you can write your docs with bad spelling, grammar, and punctuation is if you're writing in LOLCODE.
  8. Read your own documentation.  Once you've written your docs, read them through to see if they make sense.  This sounds like common sense, but it's amazing how many times I've seen docs get written and the author never actually read through them to see if they were intelligible.  Docs aren't a write-only stream - read and revise as necessary.
  9. Remember that whitespace doesn't render.  Or at least not like you think it does.  Don't forget that you're writing in XML - throwing in a standard line break isn't going to actually get you onto a new line.  So, for example, this:

    /// <remarks>
    /// This is the first line.
    /// This is the second line.
    /// </remarks>

    Renders as:

    This is the first line. This is the second line.

    Create paragraphs by using <para>...</para> tags (similar to the <p>...</p> in HTML - put your content between the <para> and </para> tags).  Generally you won't actually want single line breaks anywhere because in the 80% case, you'll actually be wanting to use a different construct - a list, a table, or paragraphs.  A revised version of the above block would be:

    /// <remarks>
    /// <para>
    /// This is the first line.
    /// </para>
    /// <para>
    /// This is the second line.
    /// </para>
    /// </remarks>

    The exception to this rule is the <code/> tag - whitespace is respected in there because it's assumed to be a code snippet.
  10. Hyperlink, hyperlink, hyperlink.  The beauty of XML doc comments is how easily you can cross-reference related topics.  If you're talking about one method from the documentation on another method, use a <see /> tag to add a link to the relevant method right from the comment body.  If there are related topics that are really important but may not have been linked to from the body of the comment (or maybe they warrant special attention via additional links), use <seealso /> tags at the bottom of your comment block.
  11. Use special markup for reserved words.  One of the extensions that NDoc added is the ability to use <see /> tags on certain reserved words.  When the documentation rendering engine sees these reserved words, it can apply special formatting or perform common expansions on them.  The syntax for this is <see langword="reservedword" />.  For example:
    <see langword="true" />
    ...renders as...
    true
    <see langword="null" />
    ...renders as...
    a null reference (Nothing in Visual Basic)

    The recognized words are:
    • abstract
    • false
    • null
    • sealed
    • static
    • true
    • virtual
  12. Add valid code samples wherever possible.  Nothing helps a developer like seeing a code snippet.  The key is to not only add these snippets where possible (in a nested <code/> tag inside the <example/> tag) but also to make sure they're valid.  This is sometimes a hard task.  A good way to come up with a valid snippet is to actually write a small demo program and copy the code from that - that way you know the snippet works.
  13. Don't forget to XML encode entities.  Again, you're writing XML, so don't forget that < needs to be &lt;, > needs to be &gt;, and so on.  The compiler will generally catch errors for you, but sometimes things work when they shouldn't and you'll get some unexpected results.
  14. Update your doc when you update your code.  The worst problem you'll run into is that the doc you wrote six months ago doesn't actually reflect what the code is doing.  It's easy to overlook updating your docs because the build doesn't break when your docs are wrong.  Incorrect documentation is actually worse than bad documentation because while bad docs are hard to read, incorrect docs will potentially lead your users to spin wheels wondering why things aren't working as documented.
  15. Make documentation a priority.  Don't let documentation be a second class citizen to cranking out the code.  Give it equal rights in your development process and let developers on your team know that documentation is important, too.  If documentation isn't seen to be important, it won't get the focus it needs.  Add documentation to your checklist of what needs to be finished before a task is marked complete.
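Pulling several of those tips together - short summary, <para/> breaks in the remarks, <see/> and <seealso/> cross-references, langword markup, and a code sample - a documented method might look like this sketch.  (The type and member names are made up for illustration; they're not from any real API.)

```csharp
/// <summary>
/// Renders the validation icon markup for a field.
/// </summary>
/// <param name="fieldName">The name of the field being validated.</param>
/// <returns>
/// The rendered icon markup, or <see langword="null" /> if the field
/// passed validation.
/// </returns>
/// <remarks>
/// <para>
/// Use this method when building a validation summary by hand rather
/// than through the standard validator controls.
/// </para>
/// <para>
/// The markup produced respects the current ASP.NET theme; see
/// <see cref="GetIconSkinId" /> for how the skin is selected.
/// </para>
/// </remarks>
/// <example>
/// <code>
/// string markup = validator.RenderIcon("EmailAddress");
/// </code>
/// </example>
/// <seealso cref="GetIconSkinId" />
public string RenderIcon(string fieldName)
{
    // ...
}
```

Notice the summary is one sentence, the "why and when" lives in the remarks, and the related member gets linked instead of just being name-dropped.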

There are a few tools out there that can help you improve your XML doc comment writing experience.  My favorites are:

  • GhostDoc - Gives you a starting point for writing XML documentation.  Really helpful when you're implementing interfaces or overriding methods because it can grab the docs from the base method and use that as your starting point.
  • CR_Documentor - Shows a preview of what your documentation will look like when rendered.  Also adds some XML documentation templates to the editor context menu.
  • CodeRush Templates - Expand templates to write documentation quickly and consistently.

Write the docs you'd like to see. Start with that, and the rest should fall into place for you.

First Ikea Trip Ever

Sunday was my first trip to an Ikea store ever.

I never really understood what the big deal with Ikea was.  Everyone I talked to who'd been there would start salivating when I talked about Ikea, barely able to contain their feelings about how awesome it is.  My uncle (and many, many others) swears by the meatballs.  I've got friends who would only ever shop at Ikea (and no other stores, ever) if they had that option.  I just never got it.

Sunday, Jenn and I packed up and, with our friends Angela and Keaka, headed to the Portland Ikea store.  I went with a skeptical mindset, fully expecting to be underwhelmed.

Ikea freaking rocks.

Ikea is sort of like... well, if General Electric or Mitsubishi decided to open a store that just sold everything they made, that's what Ikea is.  They sell everything.  Batteries.  Furniture.  Art.  Food.  Toys.  Seriously, it's just overwhelming.  Angela told us she usually allocates about three hours to an Ikea trip and I can see why - you can't actually make it through the place in much under two.  We took two and a half hours.

I didn't expect to buy anything.  I've seen the catalog and never been much interested in anything in it.  I ended up purchasing a chair, an ottoman, a rug, and some artwork for our front room.  All put together, it's got a really nice lounge sort of feel to it, and it's nice to get some furniture in that room.  We've been in the house for a few years now and have been looking for the right stuff.  I think a key factor here was price - we got all that for less than $750.  Not bad.

Of course, less than 24 hours went by before Jenn's cat peed on the ottoman and the rug.  Maybe the low cost paid off.  (As far as I can tell, the stains came out, but still.  Come on.)

We also had the obligatory meatball lunch in the cafe there and I liked it.  I'm not all crazy over the meatballs, but they're pretty decent.  I also discovered that I like lingonberry and several of the desserts there.

So I'm an Ikea convert.  I finally get it.  I don't know that I'll have a Pavlovian response or anything, but if someone says they want to venture out to Ikea, I'll go.  As long as we have at least three hours to kill.

Microsoft Patterns and Practices Summit Photo Album

I posted the pictures I took at the MS Patterns & Practices summit over on Picasa.  If you're interested, go check it out.

Microsoft Patterns & Practices Summit 2007

Microsoft Patterns & Practices Summit 2007 - Day 5

The topic of Day 5: Applications.

Keynote - Scott Hanselman

Yes, you're reading that right - Hanselman keynoted two days in a row.  This time the presentation tended toward the humorous and sort of tied things in with a message to the community that was basically the end of Bill and Ted's Excellent Adventure:  "Be excellent to each other... and party on, dudes!"  You know, in so many words.

There was a snap-on demo of some more MVC, but generally that was it.  I like Scott - he's a friend - and I thought the presentation was hilarious, but I had a little difficulty tying it in to "patterns and practices" or to the theme of the day - "Applications."  That said, if he posts the video of the presentation to his blog, watch it.  It's a crack-up.

Future of Patterns and Practices - Rick Maguire

Maguire discussed the challenges that the patterns and practices team faces and talked about where things are headed.  Challenges they see are things like technology changes (so many changes so quickly), increasing complexity of software, and compliance with standards and regulatory issues.

What the future boils down to: They've focused previously on tools.  They're switching focus to developer centers and documentation - helping people find which tools will help them get the job done.

Evolving Client Architecture - Billy Hollis

Hollis discussed some of the recent changes in client technology - specifically around WPF and Silverlight.  He gives the impression that WPF is the Way and the Light.  I think it's interesting stuff, but somehow I don't think it's the End All Be All.  At least, not yet.

The basic idea, though, was that it's a good thing if you're looking at XAML.  It's got a good programming model and will allow you to get reuse that you weren't able to achieve before.  (But it'll be better when Silverlight 1.1 is done.)

Introduction to the Microsoft Client Continuum - Kathy Kam

This was almost a continuation of the talk Hollis gave, talking about the variety of clients you can target with .NET technologies.  The discussion here was more on the variation between having wide reach with your app - standard HTML via ASP.NET - and having a rich experience - using WPF in a native app.

The interesting thing here was an illustration of how you can reuse components across some of these.  For example, say you have a straight HTML app.  Not rich, but very client-accessible.  In a basic Silverlight app, you can take the same HTML app you had and add richer interactivity in select portions of the app (like replacing an image with a XAML content block).  In your native app, you can take the XAML that you used in the Silverlight/HTML app and use it in your WPF app.  Very cool.

The technologies she reviewed, on the scale of "reach" to "rich":

  • ASP.NET 2.0
  • ASP.NET 3.5
  • Silverlight 1.0
  • Silverlight 1.1
  • WPF 3.0
  • WPF 3.5

Fresh Cracked CAB - Ward Bell

This was one of the talks that I think I can take back and immediately start using some of the ideas from.  Bell showed how he uses the Composite UI Application Block to better architect applications.  (There's a Composite Web Application Block as part of the Web Client Software Factory... but I don't know how applicable this was.  Still, this was an interesting thing.)

There was some explanation about how the CAB works, which was good, but it got really good when he started talking about some of the patterns used.  Of particular interest was a slight addition he made to the MVP pattern - MicroViewControllers (yeah, it's "MVC," but not in the sense we normally think about "MVC").

Think about this - how many times do you basically have what amounts to generated code where you...

  • Data bind model information to controls?
  • Set error provider information?
  • Set control visibility/editability?
  • Format data in the view?
  • Localize control text?

All that just fattens up the interfaces and makes code cumbersome.  The idea of the MicroViewController is that it's a facade over all of these things - a single object shared between the view and the presenter to handle all of that.

Think code like:
cvc.AddDescriptor(ageTextBox, properties.Age).WithLabel(ageLabel).WithEditability(Editability.ReadOnly);

Very cool stuff.
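For illustration, here's a rough sketch of how a facade like that might hang together in C#.  The type names, signatures, and method bodies are my own guesses extrapolated from the one-line snippet above - not Bell's actual implementation:

```csharp
using System.Windows.Forms;

// Hypothetical sketch of a MicroViewController-style facade.  The view
// and presenter share one of these; each control gets a descriptor that
// centralizes data binding, labeling, and editability in one place.
public enum Editability { Editable, ReadOnly }

public class ControlDescriptor
{
    private readonly Control _control;

    public ControlDescriptor(Control control, object dataSource, string boundProperty)
    {
        _control = control;
        // Bind the control's Text to the model property.
        _control.DataBindings.Add("Text", dataSource, boundProperty);
    }

    // Fluent configuration - each call returns the descriptor.
    public ControlDescriptor WithLabel(Label label)
    {
        label.Text = _control.Name; // or a localized string lookup
        return this;
    }

    public ControlDescriptor WithEditability(Editability editability)
    {
        _control.Enabled = (editability == Editability.Editable);
        return this;
    }
}

public class MicroViewController
{
    public ControlDescriptor AddDescriptor(Control control, object dataSource, string boundProperty)
    {
        // A real implementation would track the descriptors so the
        // presenter can push model and state changes back through them.
        return new ControlDescriptor(control, dataSource, boundProperty);
    }
}
```

The payoff is that the data binding, error provider, visibility, and formatting boilerplate all collapse into one chained call per control instead of fattening up the view interface.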

Wrap-Up - Billy Hollis

A fantastic and entertaining rant from Hollis about how, frankly, there's just too much out there to learn.  You'll never know everything you need to know, especially with the changes coming at us fast and furious.  And it's not just technology - it's even IDE features.  Which just goes to show we can't solve this problem with more features - that just adds to the complexity.

And we have no one to blame but ourselves.

The question now is - how do we fix it?  Maybe some ideas here...

The Simplicity Manifesto v1.0 (per Billy Hollis):

  • Stop adding features.
  • Make help helpful.
  • Fix the bugs.
  • CRUD for free.
  • Hide the plumbing.
  • Get better names.

IE Component Activation Removal

I hate that stupid thing where a Flash or Silverlight object isn't "active" in IE until you click on it.  Annoying as hell.  Looks like I'm not the only one who thought so - they're removing it in April 2008.  I can hardly freaking wait.

Consideration - Domain Name Change

So I'm thinking about the fact that folks generally can't remember how to spell "Paraesthesia."  I like the name, I've had it for a while, but I think there's a memory barrier there.  I own "" (which currently redirects you to my site).

What do folks out there think about the idea of making "" my primary domain and having Paraesthesia redirect?  Would it be easier?  Do you care?  It would be no trivial amount of work and I'd lose a bit in the way of Google results/ranking for a while, so I want to be sure if I start thinking about doing that.  (I would definitely make sure any existing links out there would continue to work.)

What do you all think?


LOLCODE on the DLR

And you thought there were no good uses of the DLR.  John Lam and Martin Maly have implemented LOLCODE on the DLR.  Go check that out.

posted @ Friday, November 09, 2007 9:56 AM | Feedback (0) | Filed Under [ .NET ]

P&P Nerd Dinner

Went to Hanselman's Nerd Dinner last night and must say, it was well worth it.  A lot of folks turned out for it, including Omar Shahine, Adam Kinney, John Lam, Jason Haley... oh, yeah, and Lutz Roeder.  A great social event and a piroshky.  Hellz yeah.

Oh, I bugged Lutz about it again, and he's letting me release CR_Documentor as open source.  Apparently there was some political hoopla going on before that would have been bad had I released it, but that seems to have passed so it's cool now.  All you folks wanting new features and wondering why I've been stagnating - now you can contribute.  I'm going to finish up my refactor of it so it's stable and get it out there.  This definitely gives me a renewed interest in finishing up the next release.  Yay!

Microsoft Patterns & Practices Summit 2007 - Day 4

The topic of Day 4: Software Factories.

Keynote - Scott Hanselman

Hanselman's keynote was a demonstration of the upcoming web MVC framework that Phil Haack et al. are working on.

Much of this can already be seen via the videos posted over on his site - we saw a pretty basic CRUD interface for working with customers and products against the Northwind database.  I'm interested academically in this, but the demos always sort of go for "super simple."  How many people actually only do basic CRUD?  Where's my input validation? Where's my localization?  I need some more meat in my demos before I'm sold.

Domain-Specific Development with VS DSL Tools - Gareth Jones

At this point in the conference we really started hitting what, I believe, David Trowbridge referred to as "meta-moments."  Jones showed us how the Visual Studio DSL tools allow you to model your own domain-specific language and generate a Visual Studio designer and toolbox set that allow you to get developers modeling and generating code right from your DSL.

The problem we had was that nothing ever ended up being concrete.  I recall hearing things like, "Okay, say we have a class that refers to our model.  We'll call that 'ModelClass.'  It has an attribute.  We'll call it 'Attribute.'"  That sort of thing.  It would have hit home a bit more had it been a little more concrete.  Model me an ordering system or something.  This didn't actually make much sense to me until a later talk in the day about the web service software factory.

Patterns of Software Factories - Wojtek Kozaczynski

This was a discussion of how the current set of software factories work and the design patterns you can see used in each one.  It was interesting to see it all from an academic standpoint and see all of the patterns work together (as well as how you might make a huge enterprise application using all of the software factories together), but I'm not sure how much of this I'm going to be able to take back with me and use immediately.  Definitely one of those presentations I'll keep around; when I'm trying to solve a problem I know appears in one of the software factories, I'll check back to see how they solved it.

Introducing the Aikido Project - Andres Aguiar

A thinly-disguised Infragistics sales presentation on the Aikido AJAX web control framework built on top of ASP.NET AJAX.  Looks like it might make some of the ASP.NET AJAX stuff easier to work with, but come on - another framework?

That said, I may have just been grumpy and unreceptive - we got "boxed lunches" so we could watch the presentation during lunch and, fast as I tried to get up to the lunch line, all that was left when I got there was a choice between turkey and tuna, neither of which I'll eat.  The ham and the roast beef disappeared, as if by magic.  I ended up eating chips and a cookie for lunch and getting a headache later on.  Note to conference organizers: Catered boxed lunches are always a bullshit cop-out.  If you're going to do that, at least give folks enough time to go out and get something else if they don't like what you've pre-packaged.

Service Factory: Modeling Edition - Bob Brumfield, Ade Miller

As mentioned earlier, this is where the Visual Studio DSL Tools started becoming concrete for me.  This presentation showed a designer for modeling services and generating service code (including the request/response and domain objects) that was actually the output of the Visual Studio DSL Tools.  Aha!  Now I get it!

This looked like a pretty compelling way to get services jumpstarted.  While it doesn't have the full functionality of schema, you can use schema to augment it so the flexibility exists, albeit not all in the designer.  Definitely something I can take back and use.

Web Client Software Factory - Chris Tavares, Blaine Wastell, Michael Puleio

This one I was really into because it was very obviously directly relevant to what I do.  It was a great walkthrough of the Web Client Software Factory - what's there now and what's on the way.  Showing some of the stuff they have - role-based UI, the Composite Web Application Block, easier stories for management/deployment of apps - was really interesting because it's not something you can just "pick up and use."  It takes time to get these things hooked up, so seeing it all working, and the possibilities available, helps justify that time.

Some of the things coming up in the future include:

  • Suggestion pattern (autocomplete).
  • Live form pattern (validate data as the user enters it into the form).
  • User controls that can be used cross-module via dependency injection.
  • More focus on formalizing the MVP pattern.
  • Page composability - build a single page view out of multiple components.

There was also a valuable compare/contrast of the MVC and MVP patterns.

Benefits:

  • Integrates well with WebForms (able to use existing controls/services for WebForms)
  • Enables testability
  • Highly decoupled
  • Testable out of the box
  • More maintainable, extensible
  • Fewer moving parts
  • Part of Microsoft platform (soon)

Drawbacks:

  • Extra classes and code (view/model interfaces, forwarding functions in views)
  • Steep learning curve
  • Most of the controls and services you're used to now won't work.  (They're still working on the control story.)

Combine that with the fact that MVP and MVC fit at different architectural levels - MVP picks the view for you, MVC lets you pick the view - and it boils down to just picking the one that works best for you.

Another interesting item: they use WatiN to do their UI automation tests.  Not that that's necessarily an endorsement, but it surely says something.

Team Factories - David Trowbridge

This was an investigation of how teams can use software factories to more easily come together and work on a well-architected system.

An interesting situation, but it occurred to me how very high the level of discipline in your team would have to be to get this working.  I have a feeling it might fail in most environments because, when it comes down to it, a feature needs to be created and someone is going to feel time pressure and just hack the thing together to get it to work.  That sort of seat-of-the-pants development, which I do not endorse but acknowledge exists, sort of throws a wrench in the works here.

Build Your Own Software Factory - Wojtek Kozaczynski, Bob Brumfield, Ade Miller

This was a discussion of what it would take for you to create your own software factory - tools, recipes, etc. - based on the experience of building the Web Service Software Factory.

At a high level, you should expect creating a software factory to take two to three times longer than it would take to create a one-off product.  You can expect return on your investment somewhere around the third to fifth instance of factory usage.

If you're going to do this, they recommend starting with the Web Service Software Factory and customizing/changing from there.  Building the Web Service Software Factory took about three and a half months over an eight-month period.  If you start from it, you can save yourself ~60% of the effort - the part that goes into the software factory platform: model extensions, validation, the code generation framework, and so on.

Recommendations if you do decide to go this route:

  • Have a VS extensibility expert on the team.
  • Have an installation expert (WiX) available to the team early on.
  • Work with domain experts up front to minimize model changes.  Changing the model is expensive.
  • Provide drops to your user community early and often.
  • Reuse as much of the Web Service Software Factory as you can.

Testing Out UrlAbsolutifierModule

I figured out the problem with my URL absolutifier HttpModule (which I'm calling "UrlAbsolutifierModule" - gasp).  I was hooking into things too early in the request lifecycle, so when the Subtext RSS handler attached a GZip filter to compress the RSS feed, it was GZipping things first and then I was trying to process URLs.  Attaching later in the lifecycle allows me to wrap the GZip filter so I can do my processing before the contents get encoded.
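For the curious, the fix amounts to installing the response filter from a later pipeline event so it wraps whatever filter is already attached.  A rough sketch of the idea - the specific event and the UrlAbsolutifierStream class are placeholders for illustration, not necessarily what shipped:

```csharp
using System;
using System.Web;

public class UrlAbsolutifierModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        // Subscribe late in the pipeline (after the handler has run and
        // attached its own filters) rather than at BeginRequest.
        context.PostRequestHandlerExecute += new EventHandler(OnPostRequestHandlerExecute);
    }

    private void OnPostRequestHandlerExecute(object sender, EventArgs e)
    {
        HttpResponse response = ((HttpApplication)sender).Response;

        // The most recently assigned filter sees the output first, so by
        // wrapping the existing chain here our stream rewrites URLs
        // *before* the content reaches the GZip filter for compression.
        response.Filter = new UrlAbsolutifierStream(response.Filter);
    }

    public void Dispose() { }
}
```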

If you're reading this site through RSS, pictures should now properly display.  Here's a little picture of my cat so the RSS folks can see it in action - if you see the picture in your RSS reader, you've got the cleaned up feed:

My cat. If you see this picture in RSS, the UrlAbsolutifier is working.

If you don't see the picture in RSS, it's not working.  Should that be the case, please leave me a comment to that effect.

I'll test this out for a while before I release it.  If it works, I'll put it out there for folks to consume.

Microsoft Patterns & Practices Summit 2007 - Day 3

The topic of Day 3: Development.

Keynote - John Lam

Lam's keynote was primarily a demo of IronRuby and an explanation of how they arrived at where they're at with the project as well as where they're going.  It was very interesting to see a Ruby app on Mono running Windows Forms... but I realized as I watched this that I don't think I'm nearly as interested in the whole Dynamic Language Runtime thing as everyone else out there is.  I mean, it's cool and all, and maybe I'm just burned out on it, but when people say "DLR" I don't instantly think "Yes!"

The Right Tools for the Right Job - Rocky Lhotka

This was less a presentation on tools (as it sounds like it might be) and more a presentation on application architecture urging you to use the right tools - and patterns - for the solution you're creating.  In most cases, this boiled down to the fact that you need to have the discipline to keep your application layers (presentation, business, data) separate so you can appropriately accommodate technology changes.

Model-Based Design - David Trowbridge, Suhail Dutta

This talk was specifically geared around the modeling tools built into Visual Studio Rosario.  Three modeling tools were shown:

  • Logical class diagram - An enhanced version of the existing class diagram functionality.  Generate class stubs based on the diagram and update the diagram based on code changes.
  • Sequence diagram - An extension from the logical class diagram.  Show how classes interact in a standard sequence diagram.  As you add method calls to the sequence diagram, it updates the class diagram, which allows you to generate code.  What I didn't see here was whether the actual sequencing in the diagram generates any code.
  • Dependency analysis - They called this "Progression."  Pleading ignorance, I don't recall why.  Anyway, this frankly looked like a watered-down version of NDepend.

Dependency Injection Frameworks - Scott Densmore, Peter Provost

A discussion on the principles of dependency injection more than specific framework usage, which was just fine.  I won't go over the whole thing because there's plenty out there on dependency injection.  The two things I liked were the list of different types of dependency injection and the potential drawbacks.

Types of dependency injection they mentioned (who knew there were so many?):

  • Service locator (not really dependency injection, more late-binding to services)
  • Interface injection
  • Setter injection
  • Constructor injection
  • Method call injection
  • Getter injection

...and drawbacks of dependency injection.  (I liked this because proponents of dependency injection rarely mention these things as drawbacks, instead calling it "good design," which is debatable.)

  • Lots of little objects - you generally have to break things down into very, very small pieces.  Rather than two 1,000-line objects, you might have twenty 100-line objects.
  • Runtime wire-up can be complicated and difficult to visualize - figuring out which objects were populated by what context and how the dependency came to be can be hard to wrap your head around, especially in systems of any size.  Couple that with the "lots of little objects" drawback and you might realize you have a defect... but which of the bajillion little objects is it in?
  • Interface explosion - everything gets an interface because everything's gotta be pluggable.

They recommended that if you write reusable libraries with these techniques, you should wrap the public facing stuff with a facade to mask this confusion from the library consumers.
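For reference, the two flavors you see most often look like this in C# - a minimal illustration of my own, not code from the talk:

```csharp
using System;

public interface ILogger { void Write(string message); }

public class ConsoleLogger : ILogger
{
    public void Write(string message) { Console.WriteLine(message); }
}

// Constructor injection: the dependency is required, explicit, and
// immutable - you can't construct the object without supplying it.
public class OrderProcessor
{
    private readonly ILogger _logger;

    public OrderProcessor(ILogger logger) { _logger = logger; }

    public void Process() { _logger.Write("Processing order..."); }
}

// Setter injection: the dependency can be supplied (or swapped) after
// construction, at the cost of a possible null if nobody ever sets it.
public class ReportGenerator
{
    private ILogger _logger;

    public ILogger Logger
    {
        get { return _logger; }
        set { _logger = value; }
    }

    public void Generate() { Logger.Write("Generating report..."); }
}
```

At test time you'd pass in a fake ILogger instead of a ConsoleLogger - which is also where the "interface explosion" drawback comes from: every collaborator needs an interface to be swappable.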

Designing for Workflow - Ted Neward

A two-part talk on things to keep in mind when designing for workflow (specifically, Windows Workflow Foundation).  The first part started out by basically saying that there's not enough info out there to be able to identify best practices for workflow development.  That said, keep in mind the goals:

  • Capture long-running processes.  (Be able to "pause" and "resume" a long-running process.)
  • Provide "knowledge workers" with the ability to edit a process.
  • Provide a component market.  (Developers create activities - components - that knowledge workers can use to compose workflows.)
  • Keep workflows decoupled from the environment.  (What if you started a process on a Blackberry and resumed it when you got to work and logged into the web application?)
  • Embrace flexibility in workflow hosting.  (You might host the workflow in your web app, in a Windows forms app, etc.)

The second half of the talk was open discussion.  The key that came out here was that, when working with workflow and looking for patterns, don't neglect work that's already been done.  Check out the Workflow Patterns site for some documented workflow patterns.

Panel: The Future of Design Patterns - Dragos Manolescu, Wojtek Kozaczynski, Ade Miller, Jason Hogg

An open forum to debate whether future investment in pattern education for the masses should occur in tools (creating tools that more easily allow you to introduce patterns into your code) or in materials (web sites and books that educate you about patterns).

No real resolution was reached, but there were definitely some strong feelings on both sides.  Some felt that simply giving people tools would make it too easy for junior folks who don't understand the patterns to shoot themselves in the foot by misusing the tools and making bad code even worse.  Others felt that there's already enough material out there and investing in even more would be a waste.  And, of course, there are the middle-ground folks who say we need both.

But if you can only have one of those things, which would you take?

EntLib Devolved - Scott Densmore

An exploratory discussion on why the Enterprise Library is the way it is and ideas on how it might be made easier to use.  Wouldn't it be nice to be able to say EnterpriseLibrary.Get<Database>("Sales"); or something as simple as that?  What's stopping us?

The answer: Nothing.

They're working on it.

An Evening With Microsoft Research - Jim Larus

A peek at some of the stuff Microsoft Research has been working on.  You'd be surprised (or maybe not) at the breadth of topics they look at.

I think my favorite one was the analysis they did on a developer's day including all of the interruptions and task switching that goes on - things you might not even notice - and how that impacts not only that developer but others around them.  They call it "Human Interactions in Programming."  Looking at a graphical representation of a 90 minute period that shows interruptions for several developers was fascinating.  They even analyzed what the most frequent question types were that people interrupted to ask ("Why is my code behaving like this?" sorts of things) and how satisfied they were with the answers they got back.

Neat stuff.

Microsoft Patterns & Practices Summit 2007 - Day 2

The topic of Day 2: Agile.

Keynote - Steve McConnell

McConnell gave one of his usual interesting and insightful presentations on Agile development practices.  I think the thing I liked the best was that he talked about how you don't have to stick to every single Agile ideal to the letter to call yourself Agile - in practice, doing what works for your team and your company is what's important.

A couple of interesting quotes:

"We see XP fail more often than it succeeds."

"We see Scrum succeed more often than it fails."

Practices he's seen succeed in Agile environments:

  • Short release cycles.
  • Highly interactive release planning.
  • Timebox development.
  • Empowered, small, cross-functional teams.
  • Involvement of active management.
  • Coding standards.
  • Frequent integration and test.
  • Automated regression tests.

On the other hand, things like daily stand-ups should be evaluated - make sure you're not meeting just for the sake of meeting.  And don't oversimplify your design - YAGNI is a good principle, but don't use it as an excuse for a design that is too inflexible to accommodate change.

Agile is More Than Monkey-See Monkey Do - Peter Provost

Provost started this talk with an altogether-too-close-to-home story called "An Agile Tragedy" about a team that attempted to adopt Agile practices only to sacrifice certain key tenets and have a project fail miserably and wind up with a very unhappy team.

Basically, just following Agile practices doesn't make you Agile.  You have to actually subscribe to the principles, not just go through the motions.

Empirical Evidence of Agile Methods - Grigori Melnik

This talk was a discussion about the metrics we have that support the value of Agile development practices.  What it brought to light is that we don't actually have a lot of metrics - Agile is largely measurement-free and most experiments that have been done are too trivial to count or have inherent design flaws.

What's New in Rosario's Process Templates - Alan Ridlehoover

"Rosario" is the version of Visual Studio that comes after Visual Studio 2008 (Orcas).  It's built on the VS 2008 technology and adds features.

This talk focused on the Team System features they're adding to Rosario to support a more integrated Agile development process.  Specifically, they showed some of the templates they're adding that allow you to manage your backlog and work items.  It looked, to me, a lot like VersionOne meets Visual Studio.

Other features that stuck out to me:

  • Continuous integration support - They're building in a continuous integration server that's supposedly better than CruiseControl.  I'll have to see that to believe it.
  • Drop management - Once you've built something in your continuous integration server, where does it go? How long do you maintain it? That's what this does.
  • Test impact analysis - If you change a line of code, this will tell you which tests need to be run to validate the change you made.

Lessons Learned in Unit Testing - Jim Newkirk

Some very interesting discussion about things learned in the creation of NUnit and other experiences in unit testing.

  • Lesson 1: Just do it.  You have to write your tests and they have to be first-class citizens.
  • Lesson 2: Write tests using the 3A pattern.  Arrange, Act, Assert.  Each test should have code that does those things in that order.
  • Lesson 3: Keep your tests close.  Close to the original code, that is.  Consider putting them in the same assembly as the code they test and shipping the tests.  One possibility to still maintain the ability to not ship tests is using multi-module assemblies - put your production code in one module and your tests in another.  When you're debugging/testing, compile both modules into the assembly; when you release, only include the product module.  Unfortunately, Visual Studio doesn't support creating this sort of assembly.
  • Lesson 4: Use alternatives to ExpectedException.  The ExpectedException attribute, part of NUnit, breaks the 3A principle because it puts the "Assert" - the ExpectedException attribute - at the top.
  • Lesson 5: Small fixtures.  Keeping test fixtures small helps readability and maintainability.  One idea is to create one main test fixture class and have each method's tests go in a nested class/fixture.  (Of course, this does introduce nested classes, which aren't supported by all test runners...)
  • Lesson 6: Don't use SetUp or TearDown.  The problem is that they become a dumping ground for every test's setup/teardown even though not all of it applies to every test.  Forcing each test to do its own setup and teardown makes each test isolated and more readable... but it will introduce duplicate initialization code.
  • Lesson 7: Improve testability with inversion of control.  This was sort of a nod to "design-for-testability" with interfaces that allow you to swap in test implementations of objects at test time.  (Dependency injection centralizes the management of this.)  The benefits are better test isolation and decoupled class implementation.  The drawbacks are that it decreases encapsulation and risks "interface explosion" (a high proliferation of interfaces - every object ends up with a corresponding interface, even if it's just for testing).  Plus, in many cases a dependency injection framework is overkill.
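To illustrate lessons 2 and 4, here's roughly what 3A-style tests look like, using an Assert.Throws-style helper to keep the assert at the bottom of the test rather than in an attribute at the top.  (Assert.Throws ships with Newkirk's xUnit.net; the NUnit-style attributes and the Account class here are just for illustration.)

```csharp
using System;
using NUnit.Framework;

[TestFixture]
public class AccountTests
{
    [Test]
    public void Withdraw_ReducesBalance()
    {
        // Arrange
        Account account = new Account(100m);

        // Act
        account.Withdraw(40m);

        // Assert
        Assert.AreEqual(60m, account.Balance);
    }

    [Test]
    public void Withdraw_MoreThanBalance_Throws()
    {
        // Arrange
        Account account = new Account(100m);

        // Act + Assert: the assertion wraps the act, so it stays at the
        // bottom of the test instead of hiding in an [ExpectedException]
        // attribute above the method.
        Assert.Throws<InvalidOperationException>(
            delegate { account.Withdraw(200m); });
    }
}
```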

Very interesting stuff, even though I disagree with some of the lessons (no SetUp/TearDown, inversion of control/design for testability).

Agile Security - David LeBlanc

This was a talk about how secure coding practices like threat modeling can work into an Agile project.  There were some good general ideas, but the main point was that you need to work it into your own process - there's no one way to get it in there.

Ideas include:

  • Appoint a security owner, ideally someone who's interested in it.  That person will be responsible for ensuring the team meets security goals.
  • Agile threat modeling is sometimes just as good as a heavyweight process.  Sketch data flow diagrams on the whiteboard and make sure threat mitigations get added to the backlog.
  • Use code scanning tools daily or weekly.  Also use peer code review - this can not only catch functional defects but security defects, too.
  • Build security tests at the same time you build your other tests.

"Yet Another Agile Talk On Agility" - Peter Provost

This was an interactive session where we actually used an Agile process to, as a group, ask questions about Agile, add them to a backlog, rank the priority of each question, and get the questions answered.

An interesting exercise and lively discussion about a wide variety of Agile development topics.

"Open Source in the Enterprise" - Discussion Panel hosted by CodePlex

Ted Neward, Jim Newkirk, Rocky Lhotka, and Sara Ford sat in on a discussion panel to talk about different topics related to open source - getting management buy-off to allow use of open source projects in development, contributing to open source projects, mitigating risk when using open source projects, etc.

After a while, it became more audience-participation-oriented and speakers started swapping out.  For a time, I was even up there with Jim Newkirk, Sara Ford, Stuart Celarier, and Rory Plaire.  I have to say, it was pretty sweet sitting up there with that crowd.  Maybe I need to seek me out some speaking gigs.

Still Working the Absolute URL RSS Problem

I'm still working on a decent solution to the absolute URL problem I'm seeing in my RSS feed (which is why the images in my RSS feed appear broken - the images are sourced from relative URLs, like "/images/foo.gif", which, paired with FeedBurner, makes it look like the images are supposed to come from FeedBurner, and they're not).

Anyway, I have a sort of general-purpose HttpModule for filtering response output and converting URLs to absolute format, but it's not working with Subtext's RSS feed when compression is turned on.  I think I'm inserting myself too late in the request lifecycle so my filter is trying to process GZipped content and is puking on it.

So... I've got some more testing and coding to do.

Another stumbling block I hit and wasn't even thinking of - I wrote my first run at the module to filter HTML... but what it really needs to filter is XML with embedded encoded HTML because that's what RSS is.
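For the curious, the core transformation is simple enough to sketch.  My actual module is an ASP.NET HttpModule in C#, which I'm not posting here; this is just a rough Python approximation of the idea, and the function name, base URL, and the simplified handling of entity-encoded quotes are all made up for illustration:

```python
import re

def absolutize(markup, base="https://example.com"):
    """Rewrite root-relative src/href URLs to absolute form.

    Handles both raw HTML (src="/x") and the entity-encoded HTML that
    RSS carries (src=&quot;/x&quot;).  `base` is a placeholder domain.
    Simplification: the path match stops at '&', so query strings with
    multiple parameters would need a smarter pattern.
    """
    pattern = re.compile(r'(src|href)=(&quot;|"|\')(/[^"\'&]*)\2')
    return pattern.sub(
        lambda m: f"{m.group(1)}={m.group(2)}{base}{m.group(3)}{m.group(2)}",
        markup,
    )
```

The real trick, per the above, isn't this string rewrite - it's doing it at the right point in the response pipeline, before compression gets hold of the bytes.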

That leaves me with a little bit of a design quandary - I can make it a general purpose module at the cost of increased development and testing or I can narrow the scope to my specific case and reduce the set of customers that would find it useful.  Ah, isn't that just the typical development dilemma?

Microsoft Patterns & Practices Summit 2007 - Day 1

The topic of Day 1: Architecture.

Keynote - Anders Hejlsberg

Anders showed a great demo of LINQ.  I haven't had time to do much with LINQ myself, so it was nice to see several of the features working, learn a little more about how LINQ works from the inside, and see some of the C# 3.0 features.

The idea behind LINQ is that we've pretty much run the gamut of possibilities in imperative programming - declarative programming still has a lot of new ground to cover.  Rather than spending time imperatively writing out not only what data you want but how you want to get it, LINQ lets you declaratively write what data you want and let the framework take care of the work.  Easier to write, easier to maintain.
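To make that imperative/declarative contrast concrete - this is Python rather than C# (I don't have a LINQ snippet handy), and the data is made up, but the same idea carries over:

```python
# Hypothetical order data, just for illustration.
orders = [
    {"customer": "Ada", "total": 120},
    {"customer": "Bob", "total": 45},
    {"customer": "Cy",  "total": 300},
]

# Imperative: spell out *how* to collect the results, step by step.
big_imperative = []
for order in orders:
    if order["total"] > 100:
        big_imperative.append(order["customer"])

# Declarative: state *what* you want; the runtime handles the iteration.
big_declarative = [o["customer"] for o in orders if o["total"] > 100]
```

Both produce the same list; the second version says what you want and leaves the mechanics to the language, which is the shift LINQ is after.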

The biggest source of conflict I have with LINQ is that age-old argument of whether you write SQL in your code and query the database tables directly or whether you use stored procedures.  I'm a stored procedure guy. (Which, peripherally, explains why I'm not a big fan of the Active Record pattern - I don't want my database schema extended into my code.  A class per table?  What happens when my schema changes? No, no, no.)

Luckily, Microsoft officially abstains from this battle.  You can use LINQ that generates SQL or you can use stored procedures.  Everyone's happy.  I'm looking forward to this.

A Software Patterns Study: Where Do You Stand? - Dragos Manolescu

This was more of an interactive presentation where Manolescu brought to our attention (via polling the audience) that while we all claim to use software patterns, most of us don't really know where the resources are to read up on new pattern developments and contribute to the community.  Publicity is a problem for the patterns community and that needs to be fixed.

Architecture of the Microsoft ESB Guidance - Marty Masznicky

I'm not sure if it was intended to be this way, but this was less a presentation on enterprise service bus guidance than a sales pitch for BizTalk Server.  We learned a lot about how BizTalk handles things like exceptions and logging... and that's about it.

Pragmatic Architecture - Ted Neward

Neward's talk was sort of a reality check for folks who claim to be architects.  He started out by talking about the Joel On Software "Hammer Factory" example - "Why I Hate Frameworks."  The danger: following patterns for the sake of following patterns.  Doing things in a purist fashion for the sake of idealism.  While it's important to have a good system architecture, you can't ignore the end goal - working software.

Architects need to understand project goals and constraints and reassess these when change happens.  Architects need to evaluate new tools, technologies and processes to determine their usefulness to a given project.  Don't just implement something because it's new and cool or because it's "best practice" - do what makes sense.

Architecting a Scalable Platform - Chris Brown

This was a discussion of things to think about when you're working on a scalable platform.  Things like using content distribution networks and unified logging were touched on.

The biggest point here was the notion of building in fault tolerance.  One example is the "gold box" on the web site.  The "gold box" is actually an independent service that has a certain amount of time to respond.  If it doesn't respond in time, the page renders without the "gold box" feature - it gracefully degrades.  Scalable systems need to consider how to handle faults and appropriately degrade (or report to the user) when things go wrong.
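That time-budget pattern is easy to sketch.  Here's a rough Python approximation - the service, the delays, and the markup are all made up (the real thing would be a service call inside a web front end), but the shape of it is the same:

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError
import time

def render_gold_box(delay):
    """Stand-in for the independent 'gold box' service (hypothetical)."""
    time.sleep(delay)
    return '<div class="gold-box">deals</div>'

def render_page(gold_box_delay, budget=0.1):
    """Give the gold box a fixed time budget; degrade gracefully if it misses."""
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(render_gold_box, gold_box_delay)
    try:
        gold_box = future.result(timeout=budget)
    except TimeoutError:
        gold_box = ""  # render the page without the feature
    pool.shutdown(wait=False)
    return f"<html>{gold_box}<p>rest of page</p></html>"
```

The page never blocks on the slow feature; the feature either makes its deadline or silently disappears.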

Grid Security - Jason Hogg

The discussion here was on SecPAL - the Microsoft Research "Security Policy Assertion Language."  It's basically a query language that makes it easy to determine whether a user is authorized to do something.  Using a common language and infrastructure, you can fairly easily implement things like delegation in a system.  There are even visualizers and tools to help you determine how authorization decisions were made - very cool.

I won't lie - some of this got a little above my head.  There's a lot here and I can see some great applications for it in our online banking application, but the concrete notion of exactly how I'd go about implementing it and what it means is something I'm going to have to noodle on for a while and maybe do a couple of test projects.

Moving Beyond Industrial Software - Harry Pierson

Pierson's idea is that we need to stop thinking about software in a "factory" sense - cranking out applications - and start thinking about software in a different sense.  Put the user in control.  Stop trying to directly address ever-changing business needs and enable business people to address their own needs.  Think outside the box.

The canonical example offered here was SharePoint - it's not really an application so much as an infrastructure.  Users create their own spaces for their own needs in SharePoint and it's not something that needs interaction from IT or the application developers.  It puts the users in control.

This is another one I'm going to have to think about.  This sounds like it applies more to IT development than it does with "off-the-shelf" style product development.  How we, as product developers, think outside the box and how we can change for the better is something to consider.

Guitar Hero 3

I picked up my copy of Guitar Hero 3 at Costco about a week ago.  It sort of snuck up on me and I didn't actually realize it was coming out this soon, so it was a pretty big surprise to see it.  Regardless, we knew we wanted it, so we grabbed it.

If you haven't played a Guitar Hero game, it's time to climb out of the hole you've been living in.  It's good times.  The thing I really liked about Guitar Hero 2 was how playing the songs really made me feel cool.  I'm not super good at it - I can only really play acceptably on the "normal" difficulty - but it's just inherently fun.  Not only that, but I really like most of the songs so playing them was cool.  With Guitar Hero 3, I expected "more" and "better."

It's good, but... I dunno.  There's just something missing.  I think that the combination of a few of the changes sort of put me off.

First, and foremost, the songs.  I've played through co-op career mode on "Easy" and I'm almost through solo career mode on "Medium," and I think I really only like maybe 25% of the songs.  I'm a mainstream rock fan.  I like, for example, the Poison and Guns n' Roses songs they included.  Some of the more popular classics are cool, too, like "Paint It Black" by The Rolling Stones.  I'm all over that stuff.  But that's sort of the minority of the songs.  The rest?  Eh.  I more... "tolerate them" than I do "like them."  I mean, "The Seeker" by The Who?  Mildly acceptable.  "Kool Thing" by Sonic Youth (or anything by Sonic Youth)?  Lame.  The redeeming tune is "Cult of Personality" by Living Colour.  I've wanted that song since I first played Guitar Hero.  But, generally speaking, mediocre fare song-wise.  (Here's the complete song list on Wikipedia.)

The other thing is the difficulty level. In GH2, "Easy" was easy and "Normal" was slightly more difficult, but not so bad that you couldn't just pick up and play and have fun.  "Hard" was actually hard and "Expert" was for the hardcore folks only.  In GH3, everything is about 50% harder.  "Easy" isn't nearly as easy as the GH2 "Easy," and "Normal" isn't just pick-it-up-and-rock - it looks like it'll take some practice (I'm only halfway through).  "Hard" will definitely require practice and I won't even look at the "Expert" level.  The difficulty in GH2 reflected the idea that casual gamers could pop in and play something a little more than "Easy" and still have fun.  In GH3... you're either dedicated or you're stuck on "Easy."

I'm not as concerned as other folks that some focus has moved to competition.  The co-op career is fun and I feel like it compensates for the competition aspect that's been added.  They needed a little something new and the competition aspect is an interesting direction.  Jury's still out on whether I think they should go further in that direction, but it's not bad.  There are some interesting glitches with the co-op achievements where if you're playing co-op career mode both partners will get the co-op note streak achievements but only the person logged in as "player 1" will get the career completion achievements.  Hopefully that will be fixed in a patch.

In all... I like GH3, but I think it could have been better.  Even just choosing better songs would have made it better for me.  I'm having fun with it, and I'll keep playing it, but I hope that GH4, if they come out with it, has better songs.  I did order Rock Band and I'm looking forward to it.  I think the new instruments (and a mildly better, albeit slightly overlapping, song list) will be a nice change.

139 Trick-or-Treaters

In a downward trend from the last two years, we came in at 139 trick-or-treaters this year.  More older kids came by this time, many in that "hey, maybe you should have actually worn a costume" state.

The graph:

139 Trick-or-Treaters for 2007

The 6:30 - 7:30 hour was the most productive, and once again 6:30 to 7:00 seems to be prime candy-grabbing time.  Two Costco bags of candy were sufficient with about a quarter-bag left over, though instead of mini candy bars like we had last year, this year we handed out more of a "candy assortment" (many more small candies rather than fewer large candies).  We ran a half-hour longer than we did last year due to the poor turnout of the first half-hour starting at 6:00.

Still, it was a pretty decent-sized reduction in kids this year, and I think one or more of several factors may have been at play:

  • Average age of the neighborhood kids increases as time goes by - fewer locals seeking candy.
  • This is the first year daylight saving time was changed for that energy bill - it's darker a little earlier until we switch over and that may have stopped the earlier/smaller kids from venturing out.
  • Last year we had a projector showing an animated Halloween scene on our garage.  I got home too late to put it out this year.  Less decoration - less enticing to knock on the door.

I think next year I'll make it a point to put the projector out and see if that changes things.  The average age of the kids can't be helped, and the DST schedule will be the same next year, so the projector is the one variable I can actually test.