
A few weeks back we had a rock [somehow/magically] hit the frame of the screen door that sits in front of our sliding glass back door. It tweaked the frame enough that we had to get a new screen door. After a trip to Lowe’s this weekend, I learned more than enough about screen doors. I figured I’d impart the knowledge, maybe save you a trip.

Make sure you actually need a screen door. If the screen is ripped, you can just replace the screen. New screen and spline (the cord-like stuff that holds the screen in) and maybe a spline tool (the tool to push the cord in between the screen and the frame) will run you less than $20. It’s not a five-minute repair job, but it’s not too bad and will save you some money. In our case, the frame of the door was messed up, so we actually needed a new door.

Take precise measurements of your door. Get the height, width, and depth. You’ll want a door that matches all three measurements and the store may or may not actually have them in stock.

Look at what kind of track the door sits in. Your screen door (and sliding glass door) are held in by a track. What’s that made out of? It’ll either be metal or vinyl. This is important because…

Universal screen doors only work in metal tracks. I learned this the hard way and had it confirmed by the guys in the door department at Lowe’s. You might find a door that is the proper height and width (I did) but when you try to put it into the track you’ll find it’s just a little too thick to properly sit in there. There went $45 and now I have a “spare” door in my garage that I can’t use.

For doors in vinyl tracks, you’ll probably have to custom order. It’d be awesome if there were universal doors that fit in vinyl tracks, but since there don’t seem to be, you’ll have to go into the store and see if one of the three or four models in stock will fit. Don’t buy a door that almost fits - you want a door with the exact same measurements as the ones you took. You can custom order a door if there isn’t one in stock. It will be more expensive than the universal door you wish would fit in the track. The one we ended up with was almost double the price, but it’s also a more sturdy frame. The “basic model” was still about 50% more than the universal door.

Doors come preassembled. This sounds like a dumb thing to mention, but if you don’t have a car/truck that can fit a full-sized door in back, you’ll need to arrange for one. I always sort of thought screen doors would come in “kits” the way some picture frames do, so you can take the kit home and assemble it. A kit would have fit in the back of my car. Full-size door, not so much.

My new door will be here in a couple of weeks. Looking forward to getting that installed.

dotnet, process

I was watching some Twitter stream by and caught a bit of a discussion asking why people haven’t moved to xUnit.net for unit testing yet. It started here.

Legitimate, good question. xUnit is a nice unit test framework.

The thing is, I see a lot of these things fly past - Why haven’t you updated to ASP.NET MVC3? Why haven’t you switched your project to .NET 4.0 from .NET 3.5? How come you’re not installing every third-party dependency in your project through NuGet now? What? You’re still on jQuery 1.5.2? But 1.6 is out! You’re still using Rhino Mocks? But Moq is totally the way to go now! Why aren’t you on the latest and greatest framework?

There’s no denying that there are some pretty compelling reasons to do technology upgrades. Easier and cheaper feature implementation is usually a pretty key driver. But I think some of the folks that push for staying on the latest and greatest sometimes forget some of the hidden costs of staying on that cutting edge. (Not that I’m saying @lazycoder, above, is one of these; that’s just a tweet that got me thinking.)

Upgrade costs. Using the xUnit.net example above, I have to question what the upgrade cost would be to convert 4000+ unit tests in NUnit to xUnit.net. Is it worth it? Probably not. So then you might say, “Oh, then only use the latest and greatest in new projects rather than existing projects.” I’m not sure where you work, but in companies with long-established product lines, my experience tells me that there’s not as much opportunity for new project work as there is in “adding features to existing projects.” So when you add features do you do it with the existing toolset or do you try to introduce a new tool/dependency at that time?

Too many ways to do the same thing. Continuing that thought - if you add a new dependency into the mix when you add a new feature in an existing product, you invariably introduce a new way to do the same thing. That is, say you switch from Moq to Typemock Isolator or something. You’ll be writing mocks in some tests one way and in some tests another. How do people know which way to go? You might laugh at that question, but if you’re on a large distributed team of varying skill levels, you can’t really have people “making it up as they go” because, while it may be “intuitive” to some, there are others who will “guess wrong.” To minimize the guesswork, you need to have some [minimal] development standards. Ever try to add an “if/then/else” to development standards? How’d that work out for you? (I’m not saying code should be rubber-stamped out or that you need guidelines for everything you do… just that diverse styles and skill levels become a larger issue the larger/more distributed your team gets and you can run into maintainability issues pretty quickly if people don’t at least have some sort of basic standard and common approach to things.)

Training costs. It’s really easy to say “people just need to raise their personal bars” when throwing a new version of a framework or tool into the mix, but the truth is, some folks adapt faster than others. If your team is reasonably small, you can probably get away with this a little easier than if, say, you have a team of 40+ engineers of all skill levels jamming on the same code base. There are going to be some road bumps unless you do a little training, which isn’t free, even if you do it in-house during lunchtime seminars or whatever. Not everyone out there is reading tech blogs daily, working on personal projects, and trying to “sharpen the saw” at every opportunity. I think this fact is pretty easily forgotten by people who have the luxury of staying up to date.

Other dependencies. In some cases, you have two dependencies in your product that also rely on each other. For example, if you want to integrate NUnit into TeamCity build reporting, the TeamCity build agent needs to have a compatible NUnit test runner (or you need to do some manual hackery for less than perfect integration). You may have every opportunity to update your code to the latest NUnit, but that other dependency requires you to stay back a version or two. That also may limit your choices of tools - if I have to take a component that only works if I use log4net for logging (arbitrary example), then I’m sort of stuck with log4net even if I want to use Enterprise Library logging.

Corporate policy. In large enough organizations you inevitably get some sort of review board that approves (or rejects) dependencies based on various policies/analyses - security, legal, or what-have-you. That, too, can limit your options.

Customer acceptance. Depending on your customer base, some customers don’t actually want to be on the latest and greatest. They want “tried and true.” The government and financial institutions come to mind here. Maybe you can’t upgrade to .NET 4.0 until SP1 comes out for it or something. Point being, your customers may not allow you to upgrade even if you want to.

I love working on the latest stuff. It keeps me interested. It keeps me learning. I encourage you to do the same. But I understand if your project is still stuck in .NET 2.0 in Visual Studio 2005 because sometimes there are really good reasons you can’t upgrade. Keep looking for opportunities to move forward. You’ll get there.

I recently had the opportunity to chat with Scott Hanselman about our experiences with Synology DiskStation products and Windows Home Server. Looks like the podcast is up now so go check that out if you haven’t already.

If you’re coming in here after having listened to the podcast, here are some links to related blog entries you may be interested in:

Also, if you’re a regular Hanselminutes listener, your trivia for the day (which we touch on in the podcast): The original definition of Hanselminutes.

Finally, thanks again to Scott for having me on the show. A good time was had by all.

dotnet, aspnet

I could also have called this “wildcard .NET mapping in IIS Express from web.config.”

I’m sure that, like, everyone out there but me has figured this out by now, but… well, I’ll blog it anyway.

Problem: Your ASP.NET web site has a VirtualPathProvider that serves static files (e.g., .jpg, .css, etc.). It works great in the Visual Studio development web server, but when you switch to IIS Express, it suddenly doesn’t work.

My team has just such a provider that serves static files out of embedded resources. We switched from Cassini over to IIS Express and couldn’t for the life of us figure out why it suddenly stopped working. I mean, it’s “integrated pipeline,” right? WTF?
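For context, a provider along these lines looks roughly like the sketch below. This is not our actual implementation - the class names and the resource-naming convention are illustrative assumptions - but it shows the basic shape of a VirtualPathProvider that serves static files out of embedded resources:

```csharp
using System.IO;
using System.Reflection;
using System.Web;
using System.Web.Hosting;

// Sketch: serve static files (e.g., .css, .jpg) from embedded resources.
// The "MyAssembly" resource naming convention here is an assumption.
public class EmbeddedResourcePathProvider : VirtualPathProvider
{
    // "~/Styles/site.css" -> "MyAssembly.Styles.site.css"
    private static string ToResourceName(string virtualPath)
    {
        string appRelative = VirtualPathUtility.ToAppRelative(virtualPath);
        return "MyAssembly" + appRelative.TrimStart('~').Replace('/', '.');
    }

    private static bool ResourceExists(string virtualPath)
    {
        Assembly assembly = typeof(EmbeddedResourcePathProvider).Assembly;
        return assembly.GetManifestResourceInfo(ToResourceName(virtualPath)) != null;
    }

    public override bool FileExists(string virtualPath)
    {
        // Fall through to the previously registered provider (usually disk).
        return ResourceExists(virtualPath) || this.Previous.FileExists(virtualPath);
    }

    public override VirtualFile GetFile(string virtualPath)
    {
        return ResourceExists(virtualPath)
            ? new EmbeddedResourceVirtualFile(virtualPath, ToResourceName(virtualPath))
            : this.Previous.GetFile(virtualPath);
    }
}

public class EmbeddedResourceVirtualFile : VirtualFile
{
    private readonly string _resourceName;

    public EmbeddedResourceVirtualFile(string virtualPath, string resourceName)
        : base(virtualPath)
    {
        _resourceName = resourceName;
    }

    public override Stream Open()
    {
        // Stream the embedded resource back as the file contents.
        return typeof(EmbeddedResourcePathProvider).Assembly
            .GetManifestResourceStream(_resourceName);
    }
}
```

You’d register something like this at application startup with `HostingEnvironment.RegisterVirtualPathProvider(new EmbeddedResourcePathProvider())`. The key point for this story: it only gets consulted if a managed handler is serving the request.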

OK, so my first “duh!” moment was when I realized that it’s integrated pipeline, not “.NET is responsible for handling each request.” That is, you have a managed request pipeline but the actual handler that serves the content may or may not be managed. It’s one of those things you know, then forget you know, then remember again when you hit a snag.

At that point I went looking in config to see what the handler was for static files and I saw this in the system.webServer/handlers section of applicationhost.config:

<add name="StaticFile" path="*" verb="*" modules="StaticFileModule,DefaultDocumentModule,DirectoryListingModule" resourceType="Either" requireAccess="Read" />

This is where I made my mistake. I know what the line there says, but in my mind, I read it as “Use the StaticFileHandler for any files not previously mentioned.” So I’m thinking System.Web.StaticFileHandler, right? It’s integrated, so that’s your built-in wildcard mapping… right?

That’s not what it says.

It says, “When all else fails, use the unmanaged default mechanism to serve up the static content.” Which, further, means “skip all VirtualPathProviders and go right to disk.”

My teammate, Sagar, figured that one out and we were both slapping our foreheads. Of course. Again, integrated pipeline, not “.NET handles all requests.”

The fix is to add the .NET static file handler back into your pipeline. You can do this in your web.config in system.webServer/handlers:

<add name="AspNetStaticFileHandler" path="*" verb="*" type="System.Web.StaticFileHandler" />

We did that, and suddenly things were working again. Bam! Done.

Note that doing this has some performance and caching implications. The unmanaged, standard IIS static file handler is pretty well optimized for performance; more so than the managed static file handler. Also, the managed static file handler doesn’t write caching-related information (e.g., ETag or Expires headers) for virtual files that are not served up from disk. Something to consider.
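If the performance hit worries you, a possible middle ground - assuming your VirtualPathProvider only serves a few known extensions - is to route just those extensions through the managed handler and leave everything else on the fast native handler. The extensions below are examples, not a definitive list:

```xml
<system.webServer>
  <handlers>
    <!-- Only the extensions the VirtualPathProvider serves go through the
         managed static file handler; all other static content stays on the
         optimized native StaticFile handler. -->
    <add name="AspNetStaticCss" path="*.css" verb="GET,HEAD" type="System.Web.StaticFileHandler" />
    <add name="AspNetStaticJpg" path="*.jpg" verb="GET,HEAD" type="System.Web.StaticFileHandler" />
  </handlers>
</system.webServer>
```

Handlers added in web.config take precedence over the inherited applicationhost.config entries, so these mappings should win for the listed extensions while the wildcard StaticFile entry keeps handling the rest.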

media, music

While I’m on a recommendation kick, I figured I’d throw out five of my favorites that most people out there are probably not listening to right now, and may not have even heard of. Try something new!

In no particular order…

Pop Will Eat Itself - Cure For Sanity - You may recognize the name Clint Mansell from his film score work, most recently for the movie Black Swan. Mansell was originally the lead singer for PWEI. I like most of PWEI’s stuff, but this one is my favorite. Slightly interesting personal note: it’s also one of only two albums (along with Pretty Hate Machine) my mom ever confiscated from me when I was a kid because she didn’t like the lyrics or, I assume, the electronic/percussive nature of the music. Sorry, Mom. I think I had another copy like a week after that. It’s been one of my favorites since I originally heard it carpooling with my friend Molly to school. Still is.

2nu - Ponderous - I’m not even sure how to classify this. It’s more… “ambient spoken word” than straight music. If you recognize anything at all from this album, it’ll be the short-radio-run title song, “This is Ponderous.” “I had this dream the other night. I went to work one day and nobody remembered who I was. So I decided to take the day off.”

Afghan Whigs - Gentlemen - Great alternative album and a fantastic live show. This was actually the first concert I ever went to, with my buddy Mike - a super intimate venue that isn’t there anymore, where I also saw George Clinton and Parliament Funkadelic (also a hell of a show)… but I digress. This is an awesome driving-around-town album. Go get it.

Republica - Republica - You probably heard this one (maybe even had it) and forgot about it entirely. Electronic rock you want to play with the volume turned up and the windows rolled down.

Utah Saints - Utah Saints - Great house music, period. If you’re into that sort of thing and you don’t have this one, it’s a must-have.