GeekSpeak

The topic of Day 3: Development.

Keynote - John Lam

Lam’s keynote was primarily a demo of IronRuby and an explanation of how the project got to where it is, as well as where it’s going.  It was very interesting to see a Ruby app on Mono running Windows Forms… but I realized as I watched this that I don’t think I’m nearly as interested in the whole Dynamic Language Runtime thing as everyone else out there is.  I mean, it’s cool and all, and maybe I’m just burned out on it, but when people say “DLR” I don’t instantly think “Yes!”

The Right Tools for the Right Job - Rocky Lhotka

This was less a presentation on tools (as it sounds like it might be) and more a presentation on application architecture urging you to use the right tools - and patterns - for the solution you’re creating.  In most cases, this boiled down to the fact that you need to have the discipline to keep your application layers (presentation, business, data) separate so you can appropriately accommodate technology changes.

Model-Based Design - David Trowbridge, Suhail Dutta

This talk was specifically geared around the modeling tools built into Visual Studio Rosario.  Three modeling tools were shown:

  • Logical class diagram - An enhanced version of the existing class diagram functionality.  Generate class stubs based on the diagram and update the diagram based on code changes.
  • Sequence diagram - An extension from the logical class diagram.  Show how classes interact in a standard sequence diagram.  As you add method calls to the sequence diagram, it updates the class diagram, which allows you to generate code.  What I didn’t see here was whether the actual sequencing in the diagram generates any code.
  • Dependency analysis - They called this “Progression.”  Pleading ignorance, I don’t recall why.  Anyway, this frankly looked like a watered-down version of NDepend.

Dependency Injection Frameworks - Scott Densmore, Peter Provost

A discussion on the principles of dependency injection more than specific framework usage, which was just fine.  I won’t go over the whole thing because there’s plenty out there on dependency injection.  The two things I liked were the list of different types of dependency injection and the potential drawbacks.

Types of dependency injection they mentioned (who knew there were so many?):

  • Service locator (not really dependency injection, more late-binding to services)
  • Interface injection
  • Setter injection
  • Constructor injection
  • Method call injection
  • Getter injection
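Constructor and setter injection are probably the two most familiar styles on that list.  In miniature (all type names here are hypothetical, just for illustration):

```csharp
using System;

// Hypothetical types illustrating two of the injection styles above.
public interface IMessageSender
{
    void Send(string message);
}

public class ConsoleSender : IMessageSender
{
    public void Send(string message) { Console.WriteLine(message); }
}

// Constructor injection: the dependency is required and supplied up front.
public class OrderProcessor
{
    private readonly IMessageSender _sender;

    public OrderProcessor(IMessageSender sender)
    {
        if (sender == null) throw new ArgumentNullException("sender");
        _sender = sender;
    }

    public void Process(string orderId)
    {
        _sender.Send("Processed order " + orderId);
    }
}

// Setter injection: the dependency has a default but can be swapped
// out after construction (say, with a test double).
public class ReportGenerator
{
    private IMessageSender _sender = new ConsoleSender();

    public IMessageSender Sender
    {
        get { return _sender; }
        set { _sender = value; }
    }

    public void Generate()
    {
        _sender.Send("Report complete");
    }
}
```

A container just automates that hand-wiring; the shape of the dependency is the same either way.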

…and drawbacks of dependency injection.  (I liked this because proponents of dependency injection rarely mention these things as drawbacks, instead calling it “good design,” which is debatable.)

  • Lots of little objects - you generally have to break things down into very, very small pieces.  Rather than two 1,000-line objects, you might have twenty 100-line objects.
  • Runtime wire-up can be complicated and difficult to visualize - figuring out which objects were populated by what context and how the dependency came to be can be hard to wrap your head around, especially in systems of any size.  Couple that with the “lots of little objects” drawback and you might realize you have a defect… but which of the bajillion little objects is it in?
  • Interface explosion - everything gets an interface because everything’s gotta be pluggable.

They recommended that if you write reusable libraries with these techniques, you should wrap the public facing stuff with a facade to mask this confusion from the library consumers.
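That facade idea might look something like this - one public entry point that hides the wire-up and the little objects from consumers (all names here are made up for illustration):

```csharp
using System;

// Hypothetical internals of a DI-heavy library: small pluggable pieces.
public interface IParser { string Parse(string raw); }
public interface IValidator { bool IsValid(string value); }

internal class TrimParser : IParser
{
    public string Parse(string raw) { return raw.Trim(); }
}

internal class NonEmptyValidator : IValidator
{
    public bool IsValid(string value) { return value.Length > 0; }
}

// The facade: one public entry point that does the composition
// internally, so library consumers never see the container or the
// swarm of little interfaces.
public static class ImportFacade
{
    public static bool TryImport(string raw, out string result)
    {
        IParser parser = new TrimParser();
        IValidator validator = new NonEmptyValidator();

        result = parser.Parse(raw);
        return validator.IsValid(result);
    }
}
```

Power users can still compose the internals themselves; everyone else gets one obvious method to call.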

Designing for Workflow - Ted Neward

A two-part talk on things to keep in mind when designing for workflow (specifically, Windows Workflow Foundation).  The first part started out by basically saying that there’s not enough info out there to be able to identify best practices for workflow development.  That said, keep in mind the goals:

  • Capture long-running processes.  (Be able to “pause” and “resume” a long-running process.)
  • Provide “knowledge workers” with the ability to edit a process.
  • Provide a component market.  (Developers create activities - components - that knowledge workers can use to compose workflows.)
  • Keep workflows decoupled from the environment.  (What if you started a process on a Blackberry and resumed it when you got to work and logged into the web application?)
  • Embrace flexibility in workflow hosting.  (You might host the workflow in your web app, in a Windows forms app, etc.)

The second half of the talk was open discussion.  The key that came out here was that, when working with workflow and looking for patterns, don’t neglect work that’s already been done.  Check out the Workflow Patterns site for some documented workflow patterns.

Panel: The Future of Design Patterns - Dragos Manolescu, Wojtek Kozaczynski, Ade Miller, Jason Hogg

An open forum to debate whether future investment in pattern education for the masses should occur in tools (creating tools that more easily allow you to introduce patterns into your code) or in materials (web sites and books that educate you about patterns).

No real resolution was reached, but there were definitely some strong feelings on both sides.  Some felt that simply giving people tools would make it too easy for junior folks who don’t understand the patterns to shoot themselves in the foot by misusing the tools and making bad code even worse.  Others felt that there’s already enough material out there and investing in even more would be a waste.  And, of course, there are the middle-ground folks who say we need both.

But if you can only have one of those things, which would you take?

EntLib Devolved - Scott Densmore

An exploratory discussion on why the Enterprise Library is the way it is and ideas on how it might be made easier to use.  Wouldn’t it be nice to be able to say EnterpriseLibrary.Get<Database>("Sales"); or something as simple as that?  What’s stopping us?

The answer: Nothing.

They’re working on it.
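Just to show the shape of that one-liner, here’s a toy registry sketch.  None of these names are actual Enterprise Library APIs - it’s only the idea of a single generic entry point keyed by name:

```csharp
using System;
using System.Collections.Generic;

// Toy sketch of a generic, name-keyed accessor.  Not real Enterprise
// Library code - just the shape of the proposed API.
public static class EnterpriseLibrary
{
    private static readonly IDictionary<string, object> Registry =
        new Dictionary<string, object>();

    // Somebody (config, bootstrap code) registers instances up front...
    public static void Register<T>(string name, T instance)
    {
        Registry[typeof(T).FullName + "/" + name] = instance;
    }

    // ...and consumers get them back with one simple call.
    public static T Get<T>(string name)
    {
        return (T)Registry[typeof(T).FullName + "/" + name];
    }
}
```

Under a scheme like that, EnterpriseLibrary.Get&lt;Database&gt;("Sales") is just a keyed lookup - the real work is deciding who does the registering and when.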

An Evening With Microsoft Research - Jim Larus

A peek at some of the stuff Microsoft Research has been working on.  You’d be surprised (or maybe not) at the breadth of topics they look at.

I think my favorite one was the analysis they did on a developer’s day including all of the interruptions and task switching that goes on - things you might not even notice - and how that impacts not only that developer but others around them.  They call it “Human Interactions in Programming.”  Looking at a graphical representation of a 90 minute period that shows interruptions for several developers was fascinating.  They even analyzed what the most frequent question types were that people interrupted to ask (“Why is my code behaving like this?” sorts of things) and how satisfied they were with the answers they got back.

Neat stuff.

subtext, xml, blog

I’m still working on a decent solution to the absolute URL problem I’m seeing in my RSS feed (which is why the images in my RSS feed appear broken - the images are sourced from a relative URL, like “/images/foo.gif”, which, paired with FeedBurner, makes it look like the images are supposed to come from FeedBurner, and they’re not).

Anyway, I have a sort of general-purpose HttpModule for filtering response output and converting URLs to absolute format, but it’s not working with Subtext’s RSS feed when compression is turned on.  I think I’m inserting myself too late in the request lifecycle so my filter is trying to process GZipped content and is puking on it.
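The core of the rewriting idea looks something like this - a response filter stream that fixes up relative src/href attributes (names are hypothetical; real code would also have to handle tags split across Write calls, and, per the above, has to be installed *before* any compression filter or it sees GZip bytes instead of markup):

```csharp
using System;
using System.IO;
using System.Text;
using System.Text.RegularExpressions;

// Sketch of a response-filter stream that rewrites root-relative
// src/href URLs to absolute ones before passing output downstream.
// In ASP.NET this would be hooked up via Response.Filter.
public class AbsoluteUrlFilter : MemoryStream
{
    private readonly Stream _inner;
    private readonly string _baseUrl;

    public AbsoluteUrlFilter(Stream inner, string baseUrl)
    {
        _inner = inner;
        _baseUrl = baseUrl.TrimEnd('/');
    }

    public override void Write(byte[] buffer, int offset, int count)
    {
        // Naive: assumes the chunk contains whole attributes.  A real
        // filter would buffer across writes before rewriting.
        string markup = Encoding.UTF8.GetString(buffer, offset, count);
        string rewritten = Regex.Replace(
            markup,
            "(src|href)=\"/",
            "$1=\"" + _baseUrl + "/");
        byte[] bytes = Encoding.UTF8.GetBytes(rewritten);
        _inner.Write(bytes, 0, bytes.Length);
    }
}
```

The filter-ordering point is the one biting me: if the GZip filter wraps the response first, this thing gets compressed bytes and the regex finds nothing (or worse).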

So… I’ve got some more testing and coding to do.

Another stumbling block I hit and wasn’t even thinking of - I wrote my first run at the module to filter HTML… but what it really needs to filter is XML with embedded encoded HTML because that’s what RSS is.

That leaves me with a little bit of a design quandary - I can make it a general purpose module at the cost of increased development and testing or I can narrow the scope to my specific case and reduce the set of customers that would find it useful.  Ah, isn’t that just the typical development dilemma?

GeekSpeak

The topic of Day 2: Agile.

Keynote - Steve McConnell

McConnell gave one of his usual interesting and insightful presentations on Agile development practices.  I think the thing I liked the best was that he talked about how you don’t have to stick to every single Agile ideal to the letter to call yourself Agile - in practice, doing what works for your team and your company is what’s important.

A couple of interesting quotes:

“We see XP fail more often than it succeeds.”

“We see Scrum succeed more often than it fails.”

Practices he’s seen succeed in Agile environments:

  • Short release cycles.
  • Highly interactive release planning.
  • Timebox development.
  • Empowered, small, cross-functional teams.
  • Involvement of active management.
  • Coding standards.
  • Frequent integration and test.
  • Automated regression tests.

On the other hand, things like daily stand-ups should be evaluated - make sure you’re not meeting just for the sake of meeting.  And don’t oversimplify your design - YAGNI is a good principle, but don’t use it as an excuse for a design that is too inflexible to accommodate change.

Agile is More Than Monkey-See Monkey Do - Peter Provost

Provost started this talk with an altogether-too-close-to-home story called “An Agile Tragedy” about a team that attempted to adopt Agile practices only to sacrifice certain key tenets and have a project fail miserably and wind up with a very unhappy team.

Basically, just following Agile practices doesn’t make you Agile.  You have to actually subscribe to the principles, not just go through the motions.

Empirical Evidence of Agile Methods - Grigori Melnik

This talk was a discussion about the metrics we have that support the value of Agile development practices.  What it brought to light is that we don’t actually have a lot of metrics - Agile is largely measurement-free and most experiments that have been done are too trivial to count or have inherent design flaws.

What’s New in Rosario’s Process Templates - Alan Ridlehoover

“Rosario” is the version of Visual Studio that comes after Visual Studio 2008 (Orcas).  It’s built on the VS 2008 technology and adds features.

This talk focused on the Team System features they’re adding to Rosario to support a more integrated Agile development process.  Specifically, they showed some of the templates they’re adding that allow you to manage your backlog and work items.  It looked, to me, a lot like VersionOne meets Visual Studio.

Other features that stuck out to me:

  • Continuous integration support - They’re building in a continuous integration server that’s supposedly better than CruiseControl.  I’ll have to see that to believe it.
  • Drop management - Once you’ve built something in your continuous integration server, where does it go? How long do you maintain it? That’s what this does.
  • Test impact analysis - If you change a line of code, this will tell you which tests need to be run to validate the change you made.

Lessons Learned in Unit Testing - Jim Newkirk

Some very interesting discussion about things learned in the creation of NUnit and other experiences in unit testing.

  • Lesson 1: Just do it.  You have to write your tests and they have to be first-class citizens.
  • Lesson 2: Write tests using the 3A pattern.  Arrange, Act, Assert.  Each test should have code that does those things in that order.
  • Lesson 3: Keep your tests close.  Close to the original code, that is.  Consider putting them in the same assembly as the code they test and ship the tests.  One possibility for keeping the ability to not ship tests is using multi-module assemblies - put your production code in one module and your tests in another.  When you’re debugging/testing, compile both modules into the assembly; when you release, only include the product module.  Unfortunately, Visual Studio doesn’t support creating this sort of assembly.
  • Lesson 4: Use alternatives to ExpectedException.  The ExpectedException attribute, part of NUnit, breaks the 3A principle because it puts the “Assert” - the ExpectedException attribute - at the top.
  • Lesson 5: Small fixtures.  Keeping test fixtures small helps readability and maintainability.  One idea is to create one main test fixture class and have each method’s tests go in a nested class/fixture.  (Of course, this does introduce nested classes, which aren’t supported by all test runners…)
  • Lesson 6: Don’t use SetUp or TearDown.  The problem is that they become a dumping ground for every test’s setup/teardown code, even though not all of it applies to every test.  Forcing each test to do its own setup and teardown makes each test isolated and more readable… but it will introduce duplicate initialization code.
  • Lesson 7: Improve testability with inversion of control.  This was sort of a nod to “design-for-testability” with interfaces that allow you to swap in test implementations of objects at test time.  (Dependency injection centralizes the management of this.)  The benefits are better test isolation and decoupled class implementation.  The drawbacks are that it decreases encapsulation and risks “interface explosion” (a high proliferation of interfaces - every object ends up with a corresponding interface, even if it’s just for testing).  Plus, in many cases a dependency injection framework is overkill.

Very interesting stuff, even though I disagree with some of the lessons (no SetUp/TearDown, inversion of control/design for testability).
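To make Lessons 2 and 4 concrete, here’s a hand-rolled sketch of a 3A-style test that avoids ExpectedException.  The Record.Exception helper is my own stand-in for the sort of helper frameworks like xUnit.net provide, and SimpleStack is a throwaway class under test:

```csharp
using System;
using System.Collections.Generic;

// Throwaway class under test.
public class SimpleStack<T>
{
    private readonly List<T> _items = new List<T>();

    public void Push(T item) { _items.Add(item); }

    public T Pop()
    {
        if (_items.Count == 0)
        {
            throw new InvalidOperationException("Stack is empty.");
        }
        T top = _items[_items.Count - 1];
        _items.RemoveAt(_items.Count - 1);
        return top;
    }
}

// Stand-in for a framework helper: run the code, hand back any exception.
public static class Record
{
    public static Exception Exception(Action code)
    {
        try { code(); return null; }
        catch (Exception ex) { return ex; }
    }
}

public class SimpleStackTests
{
    public void PopOnEmptyStackThrows()
    {
        // Arrange
        SimpleStack<int> stack = new SimpleStack<int>();

        // Act
        Exception ex = Record.Exception(delegate { stack.Pop(); });

        // Assert - the expectation reads in order, at the bottom,
        // instead of an ExpectedException attribute up top.
        if (!(ex is InvalidOperationException))
        {
            throw new Exception("Expected InvalidOperationException");
        }
    }
}
```

The point is that the test reads top to bottom in the same order it executes - no jumping back up to an attribute to find out what the expected outcome was.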

Agile Security - David LeBlanc

This was a talk about how secure coding practices like threat modeling can work into an Agile project.  There were some good general ideas, but the main point was that you need to work it into your own process - there’s no one way to get it in there.

Ideas include:

  • Appoint a security owner, ideally someone who’s interested in it.  That person will be responsible for ensuring the team meets security goals.
  • Agile threat modeling is sometimes just as good as a heavyweight process.  Sketch data flow diagrams on the whiteboard and make sure threat mitigations get added to the backlog.
  • Use code scanning tools daily or weekly.  Also use peer code review - this can not only catch functional defects but security defects, too.
  • Build security tests at the same time you build your other tests.

“Yet Another Agile Talk On Agility” - Peter Provost

This was an interactive session where we actually used an Agile process to, as a group, ask questions about Agile, add them to a backlog, rank the priority of each question, and get the questions answered.

An interesting exercise and lively discussion about a wide variety of Agile development topics.

“Open Source in the Enterprise” - Discussion Panel hosted by CodePlex

Ted Neward, Jim Newkirk, Rocky Lhotka, and Sara Ford sat in on a discussion panel to talk about different topics related to open source - getting management buy-off to allow use of open source projects in development, contributing to open source projects, mitigating risk when using open source projects, etc.

After a while, it became more audience-participation-oriented and speakers started swapping out.  For a time, I was even up there with Jim Newkirk, Sara Ford, Stuart Celarier, and Rory Plaire.  I have to say, it was pretty sweet sitting up there with that crowd.  Maybe I need to seek me out some speaking gigs.

GeekSpeak

The topic of Day 1: Architecture.

Keynote - Anders Hejlsberg

Anders showed a great demo of LINQ.  Not having had time myself to do much with LINQ, it was nice to see several of the features working and learn a little more about how LINQ works from the inside as well as seeing some of the C# 3.0 features.

The idea behind LINQ is that we’ve pretty much run the gamut of possibilities in imperative programming - declarative programming still has a lot of new ground to cover.  Rather than spending time imperatively writing out not only what data you want but how you want to get it, LINQ lets you declaratively write what data you want and let the framework take care of the work.  Easier to write, easier to maintain.
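The imperative-versus-declarative contrast looks something like this (a made-up filtering example):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class OrderQueries
{
    // Imperative: spell out *how* to build the result - loop, test,
    // accumulate, sort.
    public static List<int> BigOrdersImperative(int[] orders)
    {
        List<int> results = new List<int>();
        foreach (int amount in orders)
        {
            if (amount > 100)
            {
                results.Add(amount);
            }
        }
        results.Sort();
        return results;
    }

    // Declarative (LINQ): state *what* you want; the framework decides
    // how to iterate, filter, and order.
    public static IEnumerable<int> BigOrdersDeclarative(int[] orders)
    {
        return from amount in orders
               where amount > 100
               orderby amount
               select amount;
    }
}
```

Both produce the same result; the LINQ version is the one you can read aloud and, with a different provider, the same query shape can run against a database instead of an in-memory array.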

The biggest source of conflict I have with LINQ is that age-old argument of whether you write SQL in your code and query the database tables directly or whether you use stored procedures.  I’m a stored procedure guy. (Which, peripherally, explains why I’m not a big fan of the Active Record pattern - I don’t want my database schema extended into my code.  A class per table?  What happens when my schema changes? No, no, no.)

Luckily, Microsoft officially abstains from this battle.  You can use LINQ that generates SQL or you can use stored procedures.  Everyone’s happy.  I’m looking forward to this.

A Software Patterns Study: Where Do You Stand? - Dragos Manolescu

This was more of an interactive presentation where Manolescu brought to our attention (via polling the audience) that while we all claim to use software patterns, most of us don’t really know where the resources are to read up on new pattern developments and contribute to the community.  Publicity is a problem for the patterns community and that needs to be fixed.

Architecture of the Microsoft ESB Guidance - Marty Masznicky

I’m not sure if it was intended to be this way, but this was less a presentation on enterprise service bus guidance than it was a sales pitch for BizTalk Server.  We learned a lot about how BizTalk handles things like exceptions and logging… and that’s about it.

Pragmatic Architecture - Ted Neward

Neward’s talk was sort of a reality check for folks who claim to be architects.  He started out by talking about the Joel On Software “Hammer Factory” example - “Why I Hate Frameworks.”  The danger: following patterns for the sake of following patterns.  Doing things in a purist fashion for the sake of idealism.  While it’s important to have a good system architecture, you can’t ignore the end goal - working software.

Architects need to understand project goals and constraints and reassess these when change happens.  Architects need to evaluate new tools, technologies and processes to determine their usefulness to a given project.  Don’t just implement something because it’s new and cool or because it’s “best practice” - do what makes sense.

Architecting a Scalable Platform - Chris Brown

This was a discussion of things to think about when you’re working on a scalable platform.  Things like using content distribution networks and unified logging were touched on.

The biggest point here was the notion of building in fault tolerance.  One example is the “gold box” on the web site.  The “gold box” is actually an independent service that has a certain amount of time to respond.  If it doesn’t respond, the page will render without rendering the “gold box” feature - it gracefully degrades.  Scalable systems need to consider how to handle fault tolerance and appropriately degrade (or report to the user) when things go wrong.
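The degrade-on-timeout idea can be sketched in a few lines (all names here are hypothetical): give the feature a fixed time budget and render without it when the budget runs out.

```csharp
using System;
using System.Threading.Tasks;

// Sketch of graceful degradation: a page feature gets a time budget;
// if it doesn't respond in time, the page renders without it rather
// than failing or hanging.
public static class PageRenderer
{
    public static string Render(Func<string> renderFeature, TimeSpan budget)
    {
        Task<string> feature = Task.Run(renderFeature);

        // Wait returns false if the budget expires first - in that case
        // we degrade and ship the page without the feature.
        if (feature.Wait(budget))
        {
            return "<page>" + feature.Result + "</page>";
        }
        return "<page></page>";
    }
}
```

A real implementation would also have to decide what to do with the still-running feature task (cancel it, log it) and probably report the miss to monitoring, but the consumer-facing behavior is the same: a slow dependency costs you a feature, not the page.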

Grid Security - Jason Hogg

The discussion here was on SecPAL - the Microsoft Research “Security Policy Assertion Language.”  It’s basically a query language that allows you to easily write queries to determine if a user is authorized to do something.  Using a common language and infrastructure, you can fairly easily implement things like delegation in a system.  There are even visualizers and things to help you determine how authorization decisions were made - very cool.

I won’t lie - some of this got a little above my head.  There’s a lot here and I can see some great applications for it in our online banking application, but the concrete notion of exactly how I’d go about implementing it and what it means is something I’m going to have to noodle on for a while and maybe do a couple of test projects.

Moving Beyond Industrial Software - Harry Pierson

Pierson’s idea is that we need to stop thinking about software in a “factory” sense - cranking out applications - and start thinking about software in a different sense.  Put the user in control.  Stop trying to directly address ever-changing business needs and enable business people to address their own needs.  Think outside the box.

The canonical example offered here was SharePoint - it’s not really an application so much as an infrastructure.  Users create their own spaces for their own needs in SharePoint and it’s not something that needs interaction from IT or the application developers.  It puts the users in control.

This is another one I’m going to have to think about.  This sounds like it applies more to IT development than it does with “off-the-shelf” style product development.  How we, as product developers, think outside the box and how we can change for the better is something to consider.

gaming, xbox

I picked up my copy of Guitar Hero 3 at Costco about a week ago.  It sort of snuck up on me and I didn’t actually realize it was coming out this soon, so it was a pretty big surprise to see it.  Regardless, we knew we wanted it, so we grabbed it.

If you haven’t played a Guitar Hero game, it’s time to climb out of the hole you’ve been living in.  It’s good times.  The thing I really liked about Guitar Hero 2 was how playing the songs really made me feel cool.  I’m not super good at it - I can only really play acceptably on the “normal” difficulty - but it’s just inherently fun.  Not only that, but I really like most of the songs so playing them was cool.  With Guitar Hero 3, I expected “more” and “better.”

It’s good, but… I dunno.  There’s just something missing.  I think that the combination of a few of the changes sort of put me off.

First, and foremost, the songs.  I’ve played through co-op career mode on “Easy” and I’m almost through solo career mode on “Medium” and I think I really only like maybe 25% of the songs.  I’m a mainstream rock fan.  I like, for example, the Poison and Guns n’ Roses songs they included.  Some of the more popular classics are cool, too, like “Paint It Black” by The Rolling Stones.  I’m all over that stuff.  But that’s sort of the minority of the songs.  The rest?  Eh.  I more… “tolerate them” than I do “like them.”  I mean, “The Seeker” by The Who?  Mildly acceptable.  “Kool Thing” by Sonic Youth (or anything by Sonic Youth)?  Lame.  The redeeming tune is “Cult of Personality” by Living Colour.  I’ve wanted that song since I first played Guitar Hero.  But, generally speaking, mediocre fare song-wise.  (Here’s the complete song list on Wikipedia.)

The other thing is the difficulty level. In GH2, “Easy” was easy and “Normal” was slightly more difficult, but not so bad that you couldn’t just pick up and play and have fun.  “Hard” was actually hard and “Expert” was for the hardcore folks only.  In GH3, everything is about 50% harder.  “Easy” isn’t nearly as easy as the GH2 “Easy,” and “Normal” isn’t just pick-it-up-and-rock - it looks like it’ll take some practice (I’m only halfway through).  “Hard” will definitely require practice and I won’t even look at the “Expert” level.  The difficulty in GH2 reflected the idea that casual gamers could pop in and play something a little more than “Easy” and still have fun.  In GH3… you’re either dedicated or you’re stuck on “Easy.”

I’m not as concerned as other folks that some focus has moved to competition.  The co-op career is fun and I feel like it compensates for the competition aspect that’s been added.  They needed a little something new and the competition aspect is an interesting direction.  Jury’s still out on whether I think they should go further in that direction, but it’s not bad.  There are some interesting glitches with the co-op achievements where if you’re playing co-op career mode both partners will get the co-op note streak achievements but only the person logged in as “player 1” will get the career completion achievements.  Hopefully that will be fixed in a patch.

In all… I generally like GH3, but I think it could generally have been better.  Even just choosing better songs would have made it better for me.  I’m having fun with it, and I’ll keep playing it, but I hope that GH4, if they come out with it, has better songs.  I did order Rock Band and I’m looking forward to it.  I think the new instruments (and a mildly better, albeit slightly overlapping, song list) will be a nice change.