subtext, xml, blog

I’m still working on a decent solution to the absolute URL problem I’m seeing in my RSS feed.  The images in the feed are sourced from relative URLs, like “/images/foo.gif”, which, paired with FeedBurner, makes it look like the images are supposed to come from FeedBurner - they’re not, which is why they appear broken.

Anyway, I have a sort of general-purpose HttpModule for filtering response output and converting URLs to absolute format, but it’s not working with Subtext’s RSS feed when compression is turned on.  I think I’m inserting myself too late in the request lifecycle, so my filter is trying to process GZipped content and is puking on it.
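The actual module is a C#/ASP.NET response filter, but the core rewrite it performs - turning root-relative `src`/`href` attributes into absolute URLs - is easy to sketch. Here's a rough Python analogy; the `example.com` base is a placeholder, not my real domain:

```python
import re

BASE = "https://example.com"  # placeholder base URL, not the real blog's address

def absolutize(html, base=BASE):
    """Rewrite root-relative src/href attributes to absolute URLs.

    Already-absolute URLs (anything not starting with '/') are left alone.
    """
    return re.sub(
        r'((?:src|href)=")(/[^"]*)"',
        lambda m: m.group(1) + base + m.group(2) + '"',
        html,
    )

print(absolutize('<img src="/images/foo.gif" />'))
# → <img src="https://example.com/images/foo.gif" />
```

The GZip problem is separate from the rewrite itself: a filter like this has to be attached to the response stream *before* the compression module wraps it, or it sees compressed bytes instead of markup.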

So… I’ve got some more testing and coding to do.

Another stumbling block I hit and wasn’t even thinking of - I wrote my first run at the module to filter HTML… but what it really needs to filter is XML with embedded encoded HTML because that’s what RSS is.
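That decode-rewrite-re-encode round trip can be sketched in a few lines. This is a Python illustration, not the real module (which is C#), and the feed snippet and domain are made up; the point is that an XML parser decodes the entities on the way in and re-encodes them on the way out:

```python
import re
import xml.etree.ElementTree as ET

BASE = "https://example.com"  # placeholder domain for illustration

# RSS carries item HTML as entity-encoded text inside <description>.
rss = (
    '<rss><channel><item>'
    '<description>&lt;img src="/images/foo.gif" /&gt;</description>'
    '</item></channel></rss>'
)

root = ET.fromstring(rss)
for desc in root.iter("description"):
    # The parser has already decoded the entities, so desc.text is plain HTML.
    desc.text = re.sub(r'(src=")(/)', r'\1' + BASE + r'\2', desc.text)

# Serialization re-encodes the markup inside <description>.
out = ET.tostring(root, encoding="unicode")
```

So a filter that regex-matches raw `<img src="...">` tags in the response stream never sees them in a feed - they're hiding inside encoded text nodes.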

That leaves me with a little bit of a design quandary - I can make it a general purpose module at the cost of increased development and testing or I can narrow the scope to my specific case and reduce the set of customers that would find it useful.  Ah, isn’t that just the typical development dilemma?

GeekSpeak

The topic of Day 2: Agile.

Keynote - Steve McConnell

McConnell gave one of his usual interesting and insightful presentations on Agile development practices.  I think the thing I liked the best was that he talked about how you don’t have to stick to every single Agile ideal to the letter to call yourself Agile - in practice, doing what works for your team and your company is what’s important.

A couple of interesting quotes:

“We see XP fail more often than it succeeds.”

“We see Scrum succeed more often than it fails.”

Practices he’s seen succeed in Agile environments:

  • Short release cycles.
  • Highly interactive release planning.
  • Timeboxed development.
  • Empowered, small, cross-functional teams.
  • Involvement of active management.
  • Coding standards.
  • Frequent integration and test.
  • Automated regression tests.

On the other hand, things like daily stand-ups should be evaluated - make sure you’re not meeting just for the sake of meeting.  And don’t oversimplify your design - YAGNI is a good principle, but don’t use it as an excuse for a design that is too inflexible to accommodate change.

Agile is More Than Monkey-See Monkey Do - Peter Provost

Provost started this talk with an altogether-too-close-to-home story called “An Agile Tragedy” about a team that attempted to adopt Agile practices only to sacrifice certain key tenets and have a project fail miserably and wind up with a very unhappy team.

Basically, just following Agile practices doesn’t make you Agile.  You have to actually subscribe to the principles, not just go through the motions.

Empirical Evidence of Agile Methods - Grigori Melnik

This talk was a discussion about the metrics we have that support the value of Agile development practices.  What it brought to light is that we don’t actually have a lot of metrics - Agile is largely measurement-free and most experiments that have been done are too trivial to count or have inherent design flaws.

What’s New in Rosario’s Process Templates - Alan Ridlehoover

“Rosario” is the version of Visual Studio that comes after Visual Studio 2008 (Orcas).  It’s built on the VS 2008 technology and adds features.

This talk focused on the Team System features they’re adding to Rosario to support a more integrated Agile development process.  Specifically, they showed some of the templates they’re adding that allow you to manage your backlog and work items.  It looked, to me, a lot like VersionOne meets Visual Studio.

Other features that stuck out to me:

  • Continuous integration support - They’re building in a continuous integration server that’s supposedly better than CruiseControl. I’ll have to see that to believe it.
  • Drop management - Once you’ve built something in your continuous integration server, where does it go? How long do you maintain it? That’s what this does.
  • Test impact analysis - If you change a line of code, this will tell you which tests need to be run to validate the change you made.

Lessons Learned in Unit Testing - Jim Newkirk

Some very interesting discussion about things learned in the creation of NUnit and other experiences in unit testing.

  • Lesson 1: Just do it.  You have to write your tests and they have to be first-class citizens.
  • Lesson 2: Write tests using the 3A pattern.  Arrange, Act, Assert. Each test should have code that does those things in that order.
  • Lesson 3: Keep your tests close.  Close to the original code, that is.  Consider putting them in the same assembly as the code they test and ship the tests.  One possibility that still maintains the ability to not ship tests is using multi-module assemblies - put your production code in one module and your tests in another. When you’re debugging/testing, compile both modules into the assembly; when you release, only include the product module. Unfortunately, Visual Studio doesn’t support creating this sort of assembly.
  • Lesson 4: Use alternatives to ExpectedException.  The ExpectedException attribute, part of NUnit, breaks the 3A principle because it puts the “Assert” - the ExpectedException attribute - at the top.
  • Lesson 5: Small fixtures.  Keeping test fixtures small helps readability and maintainability.  One idea is to create one main test fixture class and each method’s tests go in a nested class/fixture.  (Of course, this does introduce nested classes, which isn’t supported by all test runners…)
  • Lesson 6: Don’t use SetUp or TearDown.  The problem is that they become a dumping ground for every test’s setup/teardown code even though not all of it applies to every test.  Forcing each test to do its own setup and teardown makes each test isolated and more readable… but it will introduce duplicate initialization code.
  • Lesson 7: Improve testability with inversion of control.  This was sort of a nod to “design-for-testability” with interfaces that allow you to swap in test implementations of objects at test time. (Dependency injection centralizes the management of this.)  The benefits are better test isolation and decoupled class implementation.  The drawbacks are that it decreases encapsulation and risks “interface explosion” (a high proliferation of interfaces - every object ends up with a corresponding interface, even if it’s just for testing).  Plus, in many cases a dependency injection framework is overkill.
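Lessons 2 and 4 are easy to show concretely.  Here’s a sketch using Python’s unittest standing in for NUnit; the Stack class is invented just to give the tests something to exercise, and the context-manager assert plays the role of an Assert.Throws-style alternative to the ExpectedException attribute:

```python
import unittest

class Stack:
    """Tiny stack, invented only so the tests have something to test."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

class StackTests(unittest.TestCase):
    def test_pop_returns_last_pushed_item(self):
        # Arrange
        stack = Stack()
        stack.push(42)
        # Act
        result = stack.pop()
        # Assert
        self.assertEqual(42, result)

    def test_pop_on_empty_stack_raises(self):
        # Arrange
        stack = Stack()
        # Act + Assert: the expected exception is declared at the point of
        # the call, not in an attribute hoisted to the top of the test.
        with self.assertRaises(IndexError):
            stack.pop()
```

The second test is the point of lesson 4: with an attribute-based ExpectedException, the “Assert” migrates to the top of the test and you can’t tell *which* line was supposed to throw.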

Very interesting stuff, even though I disagree with some of the lessons (no SetUp/TearDown, inversion of control/design for testability).

Agile Security - David LeBlanc

This was a talk about how secure coding practices like threat modeling can work into an Agile project.  There were some good general ideas, but the main point was that you need to work it into your own process - there’s no one way to get it in there.

Ideas include:

  • Appoint a security owner, ideally someone who’s interested in it. That person will be responsible for ensuring the team meets security goals.
  • Agile threat modeling is sometimes just as good as a heavyweight process.  Sketch data flow diagrams on the whiteboard and make sure threat mitigations get added to the backlog.
  • Use code scanning tools daily or weekly.  Also use peer code review - this can not only catch functional defects but security defects, too.
  • Build security tests at the same time you build your other tests.

“Yet Another Agile Talk On Agility” - Peter Provost

This was an interactive session where we actually used an Agile process to, as a group, ask questions about Agile, add them to a backlog, rank the priority of each question, and get the questions answered.

An interesting exercise and lively discussion about a wide variety of Agile development topics.

“Open Source in the Enterprise” - Discussion Panel hosted by CodePlex

Ted Neward, Jim Newkirk, Rocky Lhotka, and Sara Ford sat in on a discussion panel to talk about different topics related to open source - getting management buy-off to allow use of open source projects in development, contributing to open source projects, mitigating risk when using open source projects, etc.

After a while, it became more audience-participation-oriented and speakers started swapping out.  For a time, I was even up there with Jim Newkirk, Sara Ford, Stuart Celarier, and Rory Plaire.  I have to say, it was pretty sweet sitting up there with that crowd.  Maybe I need to seek me out some speaking gigs.

GeekSpeak

The topic of Day 1: Architecture.

Keynote - Anders Hejlsberg

Anders showed a great demo of LINQ.  Not having had time myself to do much with LINQ, it was nice to see several of the features working and learn a little more about how LINQ works from the inside as well as seeing some of the C# 3.0 features.

The idea behind LINQ is that we’ve pretty much run the gamut of possibilities in imperative programming - declarative programming still has a lot of new ground to cover.  Rather than spending time imperatively writing out not only what data you want but how you want to get it, LINQ lets you declaratively write what data you want and let the framework take care of the work.  Easier to write, easier to maintain.
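LINQ itself is C#, but the imperative-versus-declarative split Anders described can be shown in any language.  A rough Python analogy, with made-up sample data:

```python
people = [("Anders", 58), ("Ted", 45), ("Sara", 33)]  # invented sample data

# Imperative: spell out *how* to collect the results, step by step.
adults_imperative = []
for name, age in people:
    if age >= 40:
        adults_imperative.append(name)

# Declarative: state *what* you want; the language handles the iteration.
adults_declarative = [name for name, age in people if age >= 40]

assert adults_imperative == adults_declarative == ["Anders", "Ted"]
```

The LINQ version of the declarative form reads much the same way (`from p in people where p.Age >= 40 select p.Name`), except the framework can also decide to turn it into SQL instead of an in-memory loop.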

The biggest source of conflict I have with LINQ is that age-old argument of whether you write SQL in your code and query the database tables directly or whether you use stored procedures.  I’m a stored procedure guy. (Which, peripherally, explains why I’m not a big fan of the Active Record pattern - I don’t want my database schema extended into my code. A class per table?  What happens when my schema changes? No, no, no.)

Luckily, Microsoft officially abstains from this battle.  You can use LINQ that generates SQL or you can use stored procedures.  Everyone’s happy.  I’m looking forward to this.

A Software Patterns Study: Where Do You Stand? - Dragos Manolescu

This was more of an interactive presentation where Manolescu brought to our attention (via polling the audience) that while we all claim to use software patterns, most of us don’t really know where the resources are to read up on new pattern developments and contribute to the community. Publicity is a problem for the patterns community and that needs to be fixed.

Architecture of the Microsoft ESB Guidance - Marty Masznicky

I’m not sure if it was intended to be this way, but this was less a presentation on enterprise service bus guidance than it was a sales pitch for BizTalk Server.  We learned a lot about how BizTalk handles things like exceptions and logging… and that’s about it.

Pragmatic Architecture - Ted Neward

Neward’s talk was sort of a reality check for folks who claim to be architects.  He started out by talking about the Joel On Software “Hammer Factory” example - “Why I Hate Frameworks.” The danger: following patterns for the sake of following patterns. Doing things in a purist fashion for the sake of idealism.  While it’s important to have a good system architecture, you can’t ignore the end goal - working software.

Architects need to understand project goals and constraints and reassess these when change happens.  Architects need to evaluate new tools, technologies and processes to determine their usefulness to a given project.  Don’t just implement something because it’s new and cool or because it’s “best practice” - do what makes sense.

Architecting a Scalable Platform - Chris Brown

This was a discussion of things to think about when you’re working on a scalable platform.  Things like using content distribution networks and unified logging were touched on.

The biggest point here was the notion of building in fault tolerance. One example is the “gold box” on the Amazon.com web site.  The “gold box” is actually an independent service that has a certain amount of time to respond.  If it doesn’t respond, the page will render without rendering the “gold box” feature - it gracefully degrades.  Scalable systems need to consider how to handle fault tolerance and appropriately degrade (or report to the user) when things go wrong.
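That call-with-a-deadline pattern is simple to sketch.  Here’s a Python illustration; the service, the timings, and the markup are all invented for the example:

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError
import time

def gold_box_service():
    """Stand-in for an optional per-page feature service (invented)."""
    time.sleep(0.3)  # simulate a service that is too slow today
    return "<div>gold box</div>"

def render_page(timeout=0.05):
    """Render the page, dropping the optional feature if it misses its deadline."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(gold_box_service)
        try:
            gold_box = future.result(timeout=timeout)
        except TimeoutError:
            gold_box = ""  # degrade gracefully: render without the feature
    return "<html>" + gold_box + "<p>rest of page</p></html>"

print(render_page())  # → <html><p>rest of page</p></html>
```

The key design point: the deadline lives in the *caller*, so one slow optional service can never take the whole page down with it.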

Grid Security - Jason Hogg

The discussion here was on SecPAL - the Microsoft Research “Security Policy Assertion Language.”  It’s basically a query language that allows you to easily write queries to determine if a user is authorized to do something.  Using a common language and infrastructure, you can fairly easily implement things like delegation in a system.  There are even visualizers and things to help you determine how authorization decisions were made - very cool.

I won’t lie - some of this got a little above my head.  There’s a lot here and I can see some great applications for it in our online banking application, but the concrete notion of exactly how I’d go about implementing it and what it means is something I’m going to have to noodle on for a while and maybe do a couple of test projects.

Moving Beyond Industrial Software - Harry Pierson

Pierson’s idea is that we need to stop thinking about software in a “factory” sense - cranking out applications - and start thinking about software in a different sense.  Put the user in control.  Stop trying to directly address ever-changing business needs and enable business people to address their own needs.  Think outside the box.

The canonical example offered here was SharePoint - it’s not really an application so much as an infrastructure.  Users create their own spaces for their own needs in SharePoint and it’s not something that needs interaction from IT or the application developers.  It puts the users in control.

This is another one I’m going to have to think about.  This sounds like it applies more to IT development than it does with “off-the-shelf” style product development.  How we, as product developers, think outside the box and how we can change for the better is something to consider.

gaming, xbox

I picked up my copy of Guitar Hero 3 at Costco about a week ago.  It sort of snuck up on me and I didn’t actually realize it was coming out this soon, so it was a pretty big surprise to see it.  Regardless, we knew we wanted it, so we grabbed it.

If you haven’t played a Guitar Hero game, it’s time to climb out of the hole you’ve been living in.  It’s good times.  The thing I really liked about Guitar Hero 2 was how playing the songs really made me feel cool.  I’m not super good at it - I can only really play acceptably on the “normal” difficulty - but it’s just inherently fun.  Not only that, but I really like most of the songs so playing them was cool.  With Guitar Hero 3, I expected “more” and “better.”

It’s good, but… I dunno.  There’s just something missing.  I think that the combination of a few of the changes sort of put me off.

First, and foremost, the songs.  I’ve played through co-op career mode on “Easy” and I’m almost through solo career mode on “Medium” and I think I really only like maybe 25% of the songs.  I’m a mainstream rock fan.  I like, for example, the Poison and Guns n’ Roses songs they included.  Some of the more popular classics are cool, too, like “Paint It Black” by The Rolling Stones.  I’m all over that stuff.  But that’s sort of the minority of the songs.  The rest?  Eh.  I more… “tolerate them” than I do “like them.”  I mean, “The Seeker” by The Who?  Mildly acceptable.  “Kool Thing” by Sonic Youth (or anything by Sonic Youth)?  Lame.  The redeeming tune is “Cult of Personality” by Living Colour.  I’ve wanted that song since I first played Guitar Hero.  But, generally speaking, mediocre fare song-wise.  (Here’s the complete song list on Wikipedia.)

The other thing is the difficulty level. In GH2, “Easy” was easy and “Normal” was slightly more difficult, but not so bad that you couldn’t just pick up and play and have fun.  “Hard” was actually hard and “Expert” was for the hardcore folks only.  In GH3, everything is about 50% harder.  “Easy” isn’t nearly as easy as the GH2 “Easy,” and “Normal” isn’t just pick-it-up-and-rock - it looks like it’ll take some practice (I’m only halfway through).  “Hard” will definitely require practice and I won’t even look at the “Expert” level.  The difficulty in GH2 reflected the idea that casual gamers could pop in and play something a little more than “Easy” and still have fun.  In GH3… you’re either dedicated or you’re stuck on “Easy.”

I’m not as concerned as other folks that some focus has moved to competition.  The co-op career is fun and I feel like it compensates for the competition aspect that’s been added.  They needed a little something new and the competition aspect is an interesting direction. Jury’s still out on whether I think they should go further in that direction, but it’s not bad.  There are some interesting glitches with the co-op achievements where if you’re playing co-op career mode both partners will get the co-op note streak achievements but only the person logged in as “player 1” will get the career completion achievements. Hopefully that will be fixed in a patch.

In all… I like GH3, but I think it could have been better.  Even just choosing better songs would have made it better for me.  I’m having fun with it, and I’ll keep playing it, but I hope that GH4, if they come out with it, has better songs.  I did order Rock Band and I’m looking forward to it.  I think the new instruments (and a mildly better, albeit slightly overlapping, song list) will be a nice change.

halloween, costumes

In a downward trend from the last two years, we came in at 139 trick-or-treaters this year.  More older kids came by, many in that “hey, maybe you should have actually worn a costume” state.

The graph:

[Graph: 139 Trick-or-Treaters for 2007]

The 6:30 - 7:30 hour was the most productive, and once again 6:30 to 7:00 seems to be prime candy-grabbing time.  Two Costco bags of candy were sufficient with about a quarter-bag left over, though instead of mini candy bars like we had last year, this year we handed out more of a “candy assortment” (many more small candies rather than fewer large candies).  We ran a half-hour longer than we did last year due to the poor turnout of the first half-hour starting at 6:00.

Still, it was a pretty decent sized reduction in kids this year, and I think it may have been one or more of several factors at play:

  • Average age of the neighborhood kids increases as time goes by - fewer locals seeking candy.
  • This is the first year daylight saving time was changed for that energy bill - it’s darker a little earlier until we switch over and that may have stopped the earlier/smaller kids from venturing out.
  • Last year we had a projector showing an animated Halloween scene on our garage.  I got home too late to put it out this year.  Less decoration - less enticing to knock on the door.

I think next year I’ll make it a point to put the projector out and see if that changes things.  The average age of kids can’t be helped, but the DST schedule will be the same next year, so that factor is ruled out.