May 2010 Blog Posts

My Problem Is I Can't Leave Well Enough Alone

It's not my only problem by any means, but it's a big one.

I'm a perfectionist. I admit that. I'm also an engineer, a side effect of which is that I'm always trying to make stuff better in some unquantifiable fashion.

I've blogged before about my media center setup and some of the issues I've run into. I've also wondered aloud how non-geeks survive all the fiddly shit that comes along with getting all of these devices and things to work together.

Recently I've moved my DVD library off my Windows Home Server onto a Synology DS1010+. In doing that, I think I figured out some of the reliability issues my home server was running into, so I almost have the system nice and stable.

In the back of my mind, there's a voice. It's asking me, "What can you do to make it better?"

I know, consciously, that I need to stop messing around with the damn thing because it's working perfectly. Subconsciously, though, there's a constant drive to enhance.

How can I make the network faster? This would involve getting gigabit adapters for devices that don't have them, fiddling with jumbo frames, setting up dual-band wireless-N for the devices that need it... but the network, while it could be improved, is at least reasonably stable right now and doing some of those things is just going to upset equilibrium.

How can I improve the media center front end PC? I think it's underpowered and some of the drivers are out of date because the devices don't all have recent 64-bit drivers... but they do have 32-bit drivers, so do I step the OS down to 32-bit, get a different system, or...? But it does what it needs to just fine - play DVD images off the network.

How can I make the home theater easier to use? I have a reasonable remote control right now and it's not that bad to switch on the various components, but it'd be nice to have something like a Harmony One to round out the functionality. Of course, that involves the setup and maintenance of an additional piece of equipment, plus training costs (if you know what I mean).

...and so on. Do any of these things really need to be dealt with? No. Does something inside me drive me to want to? Yes. And, of course, doing so will create work - work that, once I'm in the thick of it, I really wish I didn't have to do. Like I'm punishing myself for something.

I really need to learn to just leave well enough alone.

Calendar and Contact Sync Software Recommendation: gSyncit

I'm on a Blackberry Curve right now and I'd like to move to an Android-based phone. Since Android plays so nicely with Google apps (calendar, contacts, etc.) and I'd like to have everything in a nice central location, I figured I needed to get my info Googlefied.

I need to keep the following in sync:

  • Outlook: My current system of record. Meetings get scheduled here, I do most of my daily work here.
  • Blackberry Curve: My phone (for now).
  • Google: A centrally accessible place, plus my interface with the Android phone I want to get (not sure on model yet).

For the Blackberry, the Blackberry Desktop Manager software works fine to sync. It's not awesome - actually, it feels very fragile - but it mostly works. For the calendar, there are some odd issues with recurring appointments, reminders, and meeting attendees that I'm not thrilled with, but no showstoppers. Contacts appear to sync perfectly.

That leaves getting Outlook synchronized with Google, which is not an uncommon problem.

Failure 1: Plaxo

I've had a Plaxo account for a long time and a year or two back I upgraded to the premium account because I had some Outlook profile issues that caused all of my contacts to be lost. Having the automated remote backup for contacts was a lifesaver and still gives me peace of mind.

Plaxo has the ability to synchronize with various places, which is cool. Unfortunately, Google used to be one of those places but is not anymore. Calendar sync was working great until a couple of months ago, but contact sync never worked. Odd since Plaxo is a contact-based product. Research in the forums tells me they're aware of the issue but there's no schedule for a fix.

Failure 2: SyncMyCal

SyncMyCal came recommended to me by a couple of different friends who have been using it successfully, though I'm not sure how. I didn't find any problems during my trial period, so I purchased... and then instantly found four problems.

  • Some Outlook contact fields don't sync right. I couldn't figure out exactly what the pattern here was, but I noticed that some of my contacts were not synchronizing. SyncMyCal tries to synchronize some Outlook contact fields as "user defined fields" in Google (because Outlook has so many fields on a contact) but sometimes it only sends over the field value and forgets to send a key. In Google, each custom contact field has to be a key/value pair. You end up getting a Google API error if you don't do it right... and SyncMyCal doesn't always do it right.
  • Reminder information isn't synchronized if you don't set up your Google calendar with a default reminder. In order for meeting reminder information to be synchronized to Google, you have to go into your Google calendar settings and configure a default reminder value. If you don't, none of the appointments that get synchronized will have the reminders attached. This was a hard one to figure out, but at least there's a workaround.
  • Contacts with multiple mailing addresses don't synchronize all addresses. Say you have a contact with a work address and a home address. SyncMyCal picks one (apparently arbitrarily) and that's the one that gets synchronized with Google. The other address(es) don't get synchronized or even acknowledged. (Multiple email addresses synchronize fine; it's multiple physical mailing addresses that have problems.)
  • Recurrence exceptions don't synchronize from Google to Outlook correctly. Set up a recurring appointment in Google that runs every weekday for two weeks. On the second week, delete the Tuesday and Thursday appointments. Move the Wednesday appointment to one hour later. Now sync back to Outlook - SyncMyCal will still show the deleted appointments and the moved appointment will still be in its original slot. It doesn't properly bring those exceptions back from Google. (It does, however, send exceptions properly from Outlook to Google.)

Finally, support for SyncMyCal is horrendous. You file a ticket, you get back a copy/paste response about how they're sorry for the inconvenience... but no real solution. A month later, they'll send you a patch for the old version of their product and give you a bunch of steps to run through involving backing up your data, uninstalling/reinstalling SyncMyCal, etc. When you finally do it, the patch they send won't even communicate with Google, let alone synchronize. You report that, rinse and repeat. It's like they didn't actually try any of the patches they're sending you.

Why are they sending me patches for the old version of the product? That doesn't even make sense. I asked about that, too, and got an unclear answer about design problems or something.

Anyway, I reported all of these issues over four months ago and have had no resolution on any of them. I can't really turn on two-way synchronization if neither calendar nor contact sync actually works. There goes $25.

Failure 3: Google Calendar Sync

Google Calendar Sync wasn't that bad, but I found that it didn't actually sync all of my meetings properly. I couldn't ascertain the pattern here, either, except that it would sync appointments (no attendees) and some meetings... just not all meetings. I'd get an error in my synchronization log saying "Participant is neither attendee  nor organizer."

There are tons of forum posts about this with just as many different things that "fixed it" for people. I tried all of the fixes people recommended and none of them got all of my meetings synchronizing. (Though, interestingly enough, two-way sync didn't delete the meetings, either. They just sort of got ignored.)

Success: gSyncit

gSyncit does calendar and contact sync... but also task and memo sync, which is more than the above products do. I've been running two-way sync on the calendar, tasks, and memos now for a couple of weeks and it correctly synchronizes everything - recurrence exceptions, reminders, everything.

The only problems I've had with calendar sync involve really crazy recurrence exceptions and time zones.

I've caught it a couple of times where I was messing around and created a recurrence exception on Google, synchronized to Outlook, updated it in Google, and it didn't properly update back to Outlook... but I was intentionally testing the boundaries so I probably did something really edge case there. (Just be aware, is all.) I tried to set up a specific reproduction but haven't figured out quite the exact set of steps.

Also, I've had a couple of weird issues involving time zones - like if a meeting organizer sets something up at 12:15 EST, that's 9:15 PST... but somehow it gets interpreted as 9:15 EST - the local wall-clock time tagged with the remote time zone - and ends up appearing at the wrong local time (in this example, 6:15 PST). It's only happened for two meetings (neither of which I was going to anyway...). I reported it this morning and got an answer from support within 15 minutes. (The SLA is 24 - 48 hours, but I won't complain about a 15-minute turnaround!) This is apparently an issue with Google Calendar itself not handling time zones well. The author is working on a fix that may resolve the issue.

I have not yet run contact sync two-way, but one-way from Outlook to Google works perfectly. It caught the multiple mailing addresses without issue and correctly located the contacts I already had in Google, adding the Outlook details to their profiles just fine. Honestly, the only reason I haven't done two-way sync is that I have to clean up my contacts in Google a bit - I'm afraid I'm going to get a ton of junk flooding into Outlook that I don't want. I have full faith that the two-way sync will work fine.

UPDATE 5/25/2010 12:00P: I enabled two-way contact sync between Outlook and Google "My Contacts" folder after doing some cleanup and it worked very well. It did add some contacts to Outlook that were in my Google "My Contacts" group that I didn't want, but after I moved them out of "My Contacts" into "All Contacts," they were properly removed from Outlook. I also had a couple of duplicates appear where in Outlook I had one email address for a person and in Google I had a different address. A little manual merge action fixed that up without issue and now I'm two-way-syncing my way to freedom and leisure.

It's pretty flexible - you can sync multiple Outlook calendars to Google calendars (and you choose the mappings). You can sync your contacts with specific groups (e.g., the "My Contacts" group in Google rather than the "All Contacts" group). Memos get synchronized as Google docs and you can put them in a specific GDocs folder to keep them separate.

The only weird thing is that tasks synchronize with a separate calendar and show up as events. The reason is, apparently, that there's no Google API to interface with the actual task list. I'm OK with that.

Downside: You can't really tell at a glance when you last synchronized or how many items were synchronized. The best you can do is look at the debug/error log, but it's not straightforward.

Anyway, if you're looking for sync software, check out gSyncit. I really like it, and for $15 (at the time of this writing), you can't really beat it.

Moving to a Synology DS1010+

I've been having some trouble with my Windows Home Server involving some potentially misbehaving hardware when put under load. This really only manifests itself when I run PerfectDisk to defrag it, but I'm gathering it's really a hardware or driver issue and not PerfectDisk's fault. When you defrag, the server hangs entirely until you reboot it. Occasionally I'll get file conflicts or lose my backup database. Not great.

Anyway, I have a lot of data stored on that Windows Home Server - terabytes of DVD rips (from discs I own) - and with the problems I'm having, it doesn't give me a lot of confidence, especially since I can't turn on file duplication given the size of the data. I don't have enough storage to handle keeping double copies of it. Not only that, but I've noticed that, on occasion, just streaming the DVD rips (not HD, just regular old DVD rips) can get a little slow. Again, not great.

Since the WHS works reasonably well for music, backups, and other videos, I figured I'd find somewhere else to put the DVD rips and get them off the WHS. Once they're off, I can remove some of [what I believe to be] the problem drives and figure out what the real issue is. Either way, finding a different NAS solution for my DVDs is a must so that if a hard drive goes out, I don't have to re-rip a bunch of stuff.

I did some research on NAS solutions that support RAID of various levels and I ended up on the Synology DS1010+. Why?

  • Speed. Looking at various reviews for NAS devices, Synology devices seem to consistently rate high for speed, usually higher than the competition.
  • Expandability. Most consumer-grade NAS solutions come with a max of four drive bays. After that, you can expand with eSATA (like I did for my Windows Home Server) and be at the mercy of the compatibility of the NAS with the port replicator or whatever. The DS1010+ actually has a specific port replicator that Synology sells that ensures the fast performance you expect and gives you a total of 10 drives' worth of storage.
  • Data protection. As mentioned earlier, I can't duplicate my DVD rips because I don't have the room to store everything twice. In a RAID 5 array, though, I have protection for my data if a drive dies but I don't have to have double the storage capacity to do it.
  • Flexibility. This thing appears to be a reasonable answer to WHS as far as features are concerned. You can have it run FTP, iTunes library sharing, DLNA media serving, client PC backup, security camera monitoring/recording, or a ton of other stuff. (I'm not going to do that all immediately; right now, just storing the DVD images is enough.)
  • Confidence. This is more a psychological thing, but... after having so many troubles with this WHS and the disks in it, I've lost some of the confidence I once had with it. I've started compulsively checking the light on the front to see if there's a "Network Health Critical" warning. I never know if the thing's going to hang up or fail. I need to find something new I can have some confidence in and put my mind at ease. That's not a new WHS.

I picked the diskless NAS up at Amazon for $980. Next, drives.

Synology has a compatibility list for the drives it supports in its various devices. For the DS1010+, drives basically fall into two categories: "We've tested it and it works" and "It should probably work." Given my current hardware issues, I wanted drives that were in the "We've tested it and it works" category. I wanted 2TB drives, I wanted reasonable performance (it doesn't have to be an SSD to store DVD rips), and I didn't want to go broke on it.

I settled on Seagate Barracuda ST32000542AS 2TB 5900RPM drives for $120/each at Amazon. Why?

  • Reasonable reviews. I found that unless you get into really expensive drives, most hard drives have poor reviews. The general reason, it appears, is that sometimes folks get a DOA drive and instantly go for the one-star rating rather than resolving the issue and then rating the drive itself. You'll also get the folks who had to call support and had a bad time, which factors in some but doesn't really say anything about the drive. Excluding those, it looks like [assuming you get fully functional drives] they're pretty good.
  • Reasonable speed. They're not 7200 RPM drives, but they are faster than 5400 RPM drives and even appear to compare favorably with some older 7200 RPM drives.
  • Price. There's a gap in 2TB drive pricing between some of the 5400 RPM drives and the faster 7200 RPM drives - it jumps from around $150/drive up to $280/drive with nothing in the middle. For the price versus the supposed performance, I couldn't really beat $120 each.

I picked up four of those drives, so my total cost was $980 + (4 * $120) = $1460. That's not a cheap bit of kit up front, but if I consider the storage and what I've already put in, it's not that bad.
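
For perspective, some rough math on what that money buys (ballpark figures that ignore filesystem and formatting overhead):

Raw capacity = 4 drives * 2TB = 8TB
RAID 5 usable capacity = (4 - 1) * 2TB = 6TB (one drive's worth of space goes to parity)
Cost = $1460 / 6TB = roughly $243 per usable, protected terabyte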

Interesting side note on my Windows Home Server issue: While I was researching drives, I came across a note in the Synology forums talking about issues people have seen with WD Green drives - the drives I have! Even on the Synology compatibility list you'll see that there are only a couple of sub-models that performed reasonably in testing. I went through the drives in my WHS and it turns out only about half of them are the decent/performant models; the others are models that have tested poorly, with performance that degrades over time. That very well could explain my problems. After I get my DS1010+ set up with all the DVDs moved over, I'll be removing the problem disks to see if that fixes things.

UPDATE 6/16/2010: Removing the problem drives appears to have stabilized my WHS.

I got the NAS and the disks today. Love that Amazon Prime. Here are the boxes and then the unbox:

I installed all of the drives following the instructions in the quick start guide (very easy), plugged it into my UPS, connected it to the network, and turned it on. Here's the NAS under my desk. From left to right: Synology DS1010+, Tripp-Lite UPS, Rosewill RSV-S5 eSATA port replicator, and HP EX475 Windows Home Server. You'll notice that the DS1010+ is about the same size as the Home Server, just laid out horizontally instead of vertically.

Once it was plugged in, it was time to install the firmware. To do that, you use a program called "Synology Assistant" that installs on your client computer. The Assistant detects your NAS and allows you to install the "DSM" or "DiskStation Manager" software/firmware over the network. It's a lot like installing Windows Home Server in that respect - the NAS is headless and you install and configure it all over the net.

I downloaded the latest Synology Assistant and DSM patch from the Synology download site rather than using the ones that came on the included CD. I wanted to be sure I had the latest version of everything rather than installing an old version and upgrading later. I unzipped it all in a folder and away I went.

I installed the Synology Assistant and there was a second of panic when I couldn't find the icon for it in my Start menu - the reason is that I was running as a non-admin user and the installer only installs a shortcut for the user it installs under. In this case, the local machine Administrator was the credential set I entered when the installer asked for better credentials so that's who got the icon. Rather than log out and log back in, I just ran the DSAssistant.exe program found in the install folder.

After unblocking it from Windows Firewall, I got this screen showing the detection of the DS1010+ and that no firmware was installed.

I double clicked on the server and it took me to an installation screen. First, I selected the DSM "patch" I had downloaded.

Then I walked through setting up the name of the NAS, the admin password, network settings, etc. Note that I used the "Step By Step" setup rather than the "One-Click." Seeing as how I left everything as defaults except the administrator password, the one-click setup probably would have been fine.

After finishing the install, I went back to the Synology Assistant management screen (using the icons at the top) and it sort of freaked me out because the server status appeared hung on "Starting services." I did a manual refresh (using the not-so-intuitive "Search" button) and the status updated to "Ready."

I selected the DiskStation and clicked the "Connect" button which brought up the web interface to log in. I could also have just gone to port 5000 on the DiskStation by manually entering a URL in a browser.

After logging in, I went into the "Management" section and then into Storage -> Volume Manager, which automatically started the Volume Creation Wizard. I used the web-based wizard to create a RAID 5 volume out of the installed disks. Two notes on this:

  1. I used the "Custom Volume" option rather than the "Standard Volume" option because I wasn't clear on what would happen in a multi-disk volume in "Standard" mode. I wanted RAID 5, so I specified.
  2. I selected the option to check/remap all the bad sectors. There shouldn't be any on the new drives, but I also wanted to do some burn-in/health checking and this appeared to be the way to do it. That said, it takes FOREVER. Click the "go" button and leave it overnight. Note that you don't have to stay connected to the web-based manager - you can close it up and let it run. To give you an idea, I let it run for about a half hour and got to 7% before deciding to let it be.

Once the volume was created, I wanted to make sure the disks were running in good order, so I ran an extended SMART Test on them. Granted, it's not like a major stress test or anything, but it's good to check what the drive's reported condition is.

I let that run because the extended test takes 255 minutes. In the end, the results came back "Normal."

And here's the detailed info for one of the drives:

So, the disks seem to be working.

I noticed that these particular drives are not always quiet. When they "woke up" the next morning (I left volume creation running overnight and logged in the next day), there was a noticeable amount of disk noise coming from them. I'd read a little about this in some of the user reviews. During the SMART test, and even during the volume creation, they were reasonably quiet, but I/O can sometimes be a little noisy. They appear to test out fine, though, so if it's just noise, I can handle that. It's under my desk in the office, not sitting next to my TV while I'm watching a movie.

With the disks tested and ready for content, I had to make sure Windows file sharing was enabled. I also ensured the NAS was in the "WORKGROUP" workgroup so we can use our Windows credentials. (All of my machines are in the default "WORKGROUP" workgroup so this was fine.) Easy enough through the web console:

I then went in and created a user account on the system for each of the users in the workgroup. I made sure to give them the same usernames and passwords as on the local machines so that Windows pass-through auth will work.

Finally, I had to create a shared folder for my DVDs to be stored in - also easy:

Note that I left the permissions read/write for the default system group. Since all the users are in that group, it means everyone has read/write permissions, which, for my purposes, is perfect.

From a general user standpoint, the web-based management utility is really nice and clean. If you didn't know better, you'd think you were using a native application. It's a little more confusing than the WHS console, but then, it also does a lot more right out of the box.

Last thing to do: a little [really rough] speed test. I decided to copy a DVD rip I had made to both the home server and the new NAS. I used the speed estimate that shows up in the Windows copy dialog box, so it's not "a benchmark" so much as a general indicator. Also, my laptop only has a 100 Mbit card in it, so even though I'm connected to a gigabit switch, it's negotiating down. (I tried a wireless-N connection that negotiated at 135 Mbit, but the wireless interference in my house - which is horrible - ended up making it slower than a wired 100 Mbit connection.)

Write speed: Copying to Windows Home Server went between 10.5MB/sec and 10.8MB/sec, usually sticking around 10.7MB/sec. Copying to the Synology DS1010+ went between 10.6MB/sec and 11.1MB/sec, usually sticking at 11.0MB/sec. Not the major performance increase I thought it would be, but it's a little faster.

Read speed: Copying from the Windows Home Server went between 10.9MB/sec and 11.2MB/sec. Copying from the Synology DS1010+ stuck pretty consistently between 11.1MB/sec and 11.3MB/sec. Again, not the major performance increase I thought it would be, but, again, a little faster.

Considering that I'm actually getting some level of data protection and a slight boost in speed, I can't really complain. With my WHS setup, if a disk goes, I'm re-ripping. With the NAS, I've got a little RAID 5 overhead but I'm protected if a disk goes.
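
To put rough numbers on that trade-off (same ballpark math as before):

Duplication: protecting 6TB of data takes 12TB of raw disk
RAID 5 (4 x 2TB): the same 6TB of data fits in 8TB of raw disk, with one drive's worth (2TB) spent on parity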

Also, again, that's over a 100 Mbit connection, so ostensibly an actual gigabit connection could raise the ceiling by 10x - though the drives and protocol overhead would probably cap it well below that. I'd be curious to see the results. Maybe I'll have to get a different adapter or try a different computer.
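
For the curious, the back-of-the-envelope math on why both boxes land right around 11MB/sec:

100 Mbit/sec / 8 bits per byte = 12.5MB/sec theoretical maximum
1000 Mbit/sec / 8 bits per byte = 125MB/sec theoretical maximum

At ~11MB/sec I'm already close to saturating the 100 Mbit link, so this test says at least as much about my laptop's network card as it does about either server.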

This sort of helps me in diagnosing some of the issues I've been seeing with Windows Media Center and DVD file sharing. I wonder now if my media center PC is a little underpowered to be driving a 1080p display. Maybe. I digress.

All in all, with the benefits listed earlier, I think this is a good move. I think the peace of mind alone will probably make up for the cost. Maybe that's just me.

Anyway, I'm going to get my DVDs moved over to this and decommission some of the problem drives on my WHS and see how that goes.

UPDATE 5/6/2011: I had an opportunity to talk about my experience with the DS1010+ on the Hanselminutes podcast with Scott Hanselman.

Unblocking Multiple Files at Once

When you download a file from the internet and save it to your Windows computer, it "knows" where the file came from, and you have to right-click it, open Properties, and click an "Unblock" button to allow it to run. It's a security thing, and generally it's a good idea.

The "Unblock" button for downloaded files.

What happens if you have 10, 100, or even 1000 different files you need to unblock? You don't want to do that manually.

  1. Go download the SysInternals "Streams" utility.
  2. Run it on the files you want to unblock using the "-d" option to delete the alternate data streams. It will look something like this if you're unblocking a ton of help documents:
    streams -d *.chm

The reason this works is that the information about where you downloaded the file(s) from is stored in an NTFS alternate data stream attached to each file. Nuking those alternate streams means Windows will treat the file as a local one and stop blocking it.
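
If you're curious, you can peek at the stream from a command prompt before you delete it. The file name here is made up, but the stream Windows uses for the "blocked" flag is called Zone.Identifier and its contents look something like this:

    more < SomeDownloadedFile.chm:Zone.Identifier

    [ZoneTransfer]
    ZoneId=3

ZoneId=3 is the "Internet" zone, which is what triggers the blocking. Deleting the stream with "streams -d" (which removes all alternate streams on the file) is what clears that flag.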

Diagnosing Google Calendar Sync Issues

I'm using Google Calendar Sync to keep my Outlook calendar and Google calendar synchronized and I've noticed a couple of meetings that don't quite get synchronized right - the error message being "Participant is neither attendee  nor organizer." (Yes, there are two spaces between "attendee" and "nor.").

I haven't figured out what the problem there is but I did find this interesting nugget to help you troubleshoot issues:

  1. Go to your Google Calendar Sync log folder. On WinXP that'll be something like C:\Documents and Settings\YOURUSERNAME\Local Settings\Application Data\Google\Google Calendar Sync\logs
  2. Put a text file in there called "level.txt" and put one word in it: VERBOSE
  3. Run a sync from Google Calendar Sync. The log will come out a lot larger and will have a ton more logging information in it.
  4. Delete the "level.txt" file. You don't want verbose logging all the time.
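
If you'd rather do steps 1 and 2 from a command prompt, it's something like this (using the WinXP path from above - adjust for your username and OS):

    cd /d "C:\Documents and Settings\YOURUSERNAME\Local Settings\Application Data\Google\Google Calendar Sync\logs"
    echo VERBOSE> level.txt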

Interestingly, for me the appointments that won't sync are all meetings that my boss organized. Is Google Calendar trying to tell me something? :)

UPDATE 5/27/10: I switched to gSyncit to sync my calendar.

Laser Hair Removal: Before and After

Some of you reading this blog may have seen entries in the past talking about my experience with laser hair removal. After 30 treatments, I'm "done" and here are the results.

I did laser hair removal because my beard was so thick and coarse that I was having all manner of problems. I'd get really bad ingrown hairs if I let it get too long, so my dermatologist told me I'd always have to be clean shaven or suffer the consequences. I destroyed pillowcases and the necks of my shirts. Since I had to keep it shaved anyway, I figured, why not get it removed?

Here are the links to the various blog entries from the treatments I documented: 1, 2, 3, 5, 6, 7, 9, 11, 12, 26.

I didn't keep a timeline of photos after each treatment because... well, I didn't really think about it, to be honest. I did do before and after, though, so here's that.

Before the treatment, you can see my beard in any picture. Here's me in my wedding photo:

Jennifer and Travis Illig: October 14, 2006

That's clean-shaven. Still a pretty dark beard line. I got some closer-up photos three treatments in to see how the progress was going:

Left side, three treatments in.

Right side, three treatments in.

Front, three treatments in.

You can see there's a little bit of "patchiness" in the chin and a little on the sides. You can also see a couple of my famous ingrowns.

I got some pictures four treatments in, too, to see if there was a difference across treatments:

Left side, four treatments in.

Right side, four treatments in.

Front, four treatments in.

You'll notice that between treatments three and four there wasn't much change. It seemed that way for quite some time in the beginning. At that point we were using the Dermo Flash IPL (intense pulsed light) - it was good for thinning things down, but it isn't quite as effective at getting the thicker, coarser hair like I have in my beard. It was still important to do this, though, because starting with a laser (we tried a little in my first treatment) was so insanely painful that anything that reduced the amount of hair the laser would eventually have to hit was a good thing.

In the fifth treatment I resumed use of the actual laser (a MeDioStar) and it hurt like hell, but it started getting better results. In later treatments, around #18 or so, we started alternating between the MeDioStar laser and a Syneron eLaser, which hits the hair not only with laser light but also with a pulse of radio frequency energy.

I ran for 30 treatments and here are my results:

Left side, after 30 treatments.

Right side, after 30 treatments.

Front, after 30 treatments.

You'll notice that the sides and neck are pretty well clear, but there's still some lingering around my lips and chin. The upper lip is the most painful area to get, so we didn't focus as much on it as we probably could have. You also can't get too close to the lips because you don't want the laser hitting them. The chin was a stubborn area to begin with because the hairs are so plentiful and are at their thickest/coarsest there.

After about 26 treatments I started seeing diminishing returns so I decided after the end of my 30th I'd call it "good enough." I don't ruin shirts anymore, it doesn't look patchy at the end of the day, and I'm free of ingrowns. Basically, success.

Notes based on my experience to people considering getting laser hair removal:

  • Prepare for the long haul. The clinic might sell you treatments in bundles of six or something, but you will probably need more than that, particularly in areas where you have more hair and/or where the hair is coarse.
  • It hurts a LOT. I can't overstate this. You may hear people tell you "it's like a rubber band snap." The Dermo Flash IPL is actually like that - a quick snap and you're done. (For me, about 10 quick snaps and you're done.) On the other hand, it's only really effective on the thinner hair, so if there's any significant amount of hair, you'll probably need something stronger like a laser. Lasers hurt really bad. I've heard of guys who have full back tattoos and have had laser hair removal, and they said the laser hair removal hurt more. I don't have any tattoos so I can't vouch for that, but I think that says something. I can't express it in words, really. It's not like any other kind of pain I've experienced. Particularly in early treatments when there's a lot of hair, it's instant-eye-watering-please-I'll-tell-you-anything-just-stop kind of pain. Once you get further in, it eases up, but some things still hurt. My upper lip makes me wince just thinking about it.
  • It only works well on dark hair. The basic premise of the thing is that the laser heat is drawn to the hair pigment. The heat transfers down through the hair and cooks the root. If you have blonde hair on light skin, you're kind of hosed because there's not much pigment for the heat to be drawn to. If you have dark skin, the heat can't really differentiate between the hair pigment and your skin pigment. What this means for me is that the areas where my beard was "salt and pepper" are now just "salt" - I have a few spots where there is some thick, coarse white hair. Laser hair removal will never get that.
  • Once you start, you're committed. This is more for the folks doing visible areas like the face, but it's good to be aware of. When the hair starts coming out, it's not necessarily "even." There were points where my beard looked a little like a zebra pattern because the hair was coming out in odd swaths. This lasted for around 15 treatments in the middle of my full series. Had I decided to quit, I'd have had a really weirdly growing beard that you'd notice even when it was shaved. Once they start removing hair, you're committed to the whole procedure, however long it takes, because if you quit before the hair's all gone it'll look weird.
  • You will not end up hairless. You will still have to shave. I did not fully realize this at the outset, but I can see that it's somewhat unavoidable. The combination of diminishing returns as I neared the 30th treatment and the white hairs in my beard that weren't going to be removed anyway means that no matter what I do, I'm still shaving. I have to assume that's the case for any area - it'll thin the hair down a lot, maybe enough that you don't have to shave as often, but you'll still have to shave.

Given all that, would I still do it? Yeah, I think I would. I like being able to look down and read a book in the evening without giving my own neck a rash or pilling up my shirt. I like being able to lie down and roll over without hearing a sandpaper noise that indicates my face is destroying another pillowcase. Just go in informed and knowing that it's not going to be six months of pain-free treatments and you'll be fine.

Note: I get a lot of comments on my laser hair removal entries that are spam, people trying to sell laser hair removal, or people telling me that their laser hair removal clinic would have done a better job. I will delete these non-constructive comments, so please save us all some time by not leaving them.

UPDATE 2/27/2012: I get a lot of questions about how I've fared since I wrote this entry nearly two years ago so I'll answer them here:

  • Have I had any regrowth? A bit, but not a ton. My cheeks and neck are still really clear, just like in the photos. If I've had regrowth, it's been in my lip/chin region, which, as you can see, didn't come clear anyway.
  • Do I have to shave? Yes. I've always had to shave, even immediately following treatment. You won't end up hairless.
  • Does it look patchy? Not as long as I stay shaved. Again, you don't end up hairless, so you will have to keep yourself shaved. When I'm shaved you'd never notice that I had anything done at all except that I don't have that super-dark beard line I used to have. When I wake up in the morning it is a little patchy looking, but not too bad. I wouldn't go a full day or more without shaving, though.
  • Does it look feminine? Not from what I can tell. Like I said, it just looks like I've shaved. Shave your own face and decide if you look feminine. That's your answer.
  • Would I do it again? Yes. I can't tell you how much of a pain it was to be tearing up my shirt necks and sheets and such with the beard I had. Not having to deal with that has been worth it.

Failing the Build with NCover 3.4.x

I've spent the last week working on getting NCover 3.4.2 (and, later, 3.4.3) working in my environment. I was previously using the older free NCover with the original NCoverExplorer reporting tasks, but in moving up to .NET 4, it was also time to move up to a newer NCover.

One of the shortcomings I've found with NCover is that it's really hard to get a simple set of summary coverage numbers from inside the build script. It's pretty well geared around dumping out reports and summaries in XML or HTML, but even then, the XML summaries don't have all the numbers in an easily consumable format.

Further, the new division between the "Classic" licenses (ostensibly for the everyday dev) and the "Complete" licenses (for your build server) means that only the "Complete" license supports failing the build based on coverage. I'm not sure why; that's just how it is. Oh, and the "Complete" license costs over twice what the "Classic" license costs, so it's a little cost-prohibitive to buy all your devs a "Complete" license just so they can fail a local build.

Unfortunately, that doesn't really work for me. I'm going to run unit tests on my local machine before I check my code into the repo so I don't break the build. I kind of also want to know if I'm going to break the build because I went under the minimum coverage requirements.

Fortunately, you can do this, it's just a little tricky. You'll have to stick with me while we jump through a few hoops together.

I'm working with the following tools:

  • .NET 4.0
  • MSBuild (with the .NET 4.0 tools version)
  • NCover 3.4.3 Classic

The basic algorithm:

  1. Run your tests with the <NCover> MSBuild task and get your coverage numbers.
  2. Run the <NCoverReporting> MSBuild task to create a "SymbolModule" summary report.
  3. Use XSLT inside the <NCoverReporting> task to transform the output of the "SymbolModule" report into something you can more easily use with actual coverage percentages in it.
  4. Use the <XmlPeek> task to get the minimum coverage requirements out of the MSBuild script.
  5. Use the <WriteLinesToFile> task to create a temporary XML file that contains the minimum coverage requirements and the actual coverage information.
  6. Use the <XslTransformation> task to transform that temporary XML file into something that has simple pass/fail data in it.
  7. Use the <XmlPeek> task to look in that simplified report and determine if there are any failures.
  8. Use the <Error> task to fail the build if there are any coverage failures.

If this seems like a lot of hoops to jump through, you're right. It's a huge pain. Longer term, you could probably encapsulate steps 4 – 8 in a single custom MSBuild task, but for the purposes of explaining what's going on (and trying to use things that come out of the box with MSBuild and NCover), I haven't done that.

You may get lost here. Like I said, it's a huge number of steps. At the end I put all the steps together in an MSBuild snippet so it might make more sense when you get there. I'll walk you through the steps, and then I'll show you the summary. Follow all the way through to the end. If you get bored and start skipping steps or skimming, you'll miss something.

On with the show.

Run your tests with the <NCover> MSBuild task and get your coverage numbers.

Your build script will have some properties set up and you'll use the <NCover> task to run NUnit or whatever. I won't get into the details on this one because this is the easy part.

<PropertyGroup>
  <NCoverPath>$(ProgramW6432)\NCover\</NCoverPath>
  <TestCommandLineExe>$(ProgramW6432)\NUnit\NUnit-Console.exe</TestCommandLineExe>
  <RawCoverageFile>$(MSBuildProjectDirectory)\Coverage.Unit.xml</RawCoverageFile>
</PropertyGroup>
<UsingTask TaskName="NCover.MSBuildTasks.NCover" AssemblyFile="$(NCoverPath)Build Task Plugins\NCover.MSBuildTasks.dll"/>
<Target Name="Test">
  <!-- Define all of your unit test command line, the assemblies to profile, etc., then...-->
  <NCover
    ContinueOnError="false"
    ToolPath="$(NCoverPath)"
    TestRunnerExe="$(TestCommandLineExe)"
    TestRunnerArgs="$(TestCommandLineArgs)"
    IncludeAssemblies="@(AssembliesToProfile)"
    LogFile="Coverage.Unit.log"
    CoverageFile="$(RawCoverageFile)"
    ExcludeAttributes="CoverageExcludeAttribute;System.CodeDom.Compiler.GeneratedCodeAttribute"
    IncludeAutoGenCode="false"
    RegisterProfiler="false"/>
</Target>

In this example, when you run the Test target in your MSBuild script, NUnit will run and be profiled by NCover. You'll get a data file out the back called "Coverage.Unit.xml" - remember where that coverage file ends up, because you'll need it. I recommend setting an MSBuild variable with the location of your coverage file output so you can use it later.

Run the <NCoverReporting> MSBuild task to create a "SymbolModule" summary report.

At some time after you run the <NCover> task, you're going to need to generate some nature of consumable report from the output. To do that, you'll run the <NCoverReporting> task. For our purposes, we specifically want to create a "SymbolModule" report since we will be failing the build based on assembly-level coverage statistics.

You need to define the set of reports that will be run as a property in a <PropertyGroup> and pass that info to the <NCoverReporting> task. It will look something like this:

<PropertyGroup>
  <NCoverPath>$(ProgramW6432)\NCover\</NCoverPath>
  <RawCoverageFile>$(MSBuildProjectDirectory)\Coverage.Unit.xml</RawCoverageFile>
  <SimplifiedReportXsltPath>$(MSBuildProjectDirectory)\SimplifiedCoverageStatistics.xsl</SimplifiedReportXsltPath>
  <SimplifiedCoverageReportPath>$(MSBuildProjectDirectory)\CoverageReport.Simplified.xml</SimplifiedCoverageReportPath>
  <SimplifiedCoverageReportOutputs>
    <Report>
      <ReportType>SymbolModule</ReportType>
      <Format>Html</Format>
      <OutputPath>$(SimplifiedCoverageReportPath)</OutputPath>
    </Report>
  </SimplifiedCoverageReportOutputs>
  <MinimumCoverage>
    <Threshold>
      <CoverageMetric>SymbolCoverage</CoverageMetric>
      <Type>Assembly</Type>
      <Value>95.0</Value>
      <Pattern>YourAssembly</Pattern>
    </Threshold>
    <Threshold>
      <CoverageMetric>SymbolCoverage</CoverageMetric>
      <Type>Assembly</Type>
      <Value>95.0</Value>
      <Pattern>YourOtherAssembly</Pattern>
    </Threshold>
  </MinimumCoverage>
</PropertyGroup>
<UsingTask
  TaskName="NCover.MSBuildTasks.NCoverReporting"
  AssemblyFile="$(NCoverPath)Build Task Plugins\NCover.MSBuildTasks.dll"/>
<Target Name="CoverageReport">
  <NCoverReporting
    ContinueOnError="false"
    ToolPath="$(NCoverPath)"
    CoverageDataPaths="$(RawCoverageFile)"
    OutputPath="$(MSBuildProjectDirectory)"
    OutputReport="$(SimplifiedCoverageReportOutputs)"
    MinimumCoverage="$(MinimumCoverage)"
    XsltOverridePath="$(SimplifiedReportXsltPath)"
    />
</Target>

Now, there are a few interesting things to notice here.

  • There's a variable called "SimplifiedReportXsltPath" that points to an XSLT file you don't have yet. I'll give that to you in a minute.
  • SimplifiedCoverageReportPath will eventually have the easy XML summary of the stuff we're interested in. Keep that around.
  • The SimplifiedCoverageReportOutputs variable follows the format for defining a report to generate as outlined in the NCover documentation. NCover Classic doesn't support many reports, but SymbolModule is one it does support.
  • The SymbolModule report is defined as an Html format report rather than Xml. This is important because when we define it as "Html," the report automatically gets run through our XSLT transform. The result of the transformation doesn't actually have to be HTML.
  • The MinimumCoverage variable is defined in the format used to fail the build if you're running under NCover Complete. This format is also defined in the documentation. The parameter as passed to the <NCoverReporting> task will be ignored if you run it under Classic but will actually act to fail the build if run under Complete. The point here is that we'll be using the same definition for minimum coverage that <NCoverReporting> uses.
  • An XsltOverridePath is specified on the <NCoverReporting> task. This lets us use our custom XSLT (which I'll give you in a minute) to create a nice summary report.

Use XSLT inside the <NCoverReporting> task to transform the output of the "SymbolModule" report into something you can more easily use with actual coverage percentages in it.

Basically, you need to create a little XSLT that will generate some summary numbers for you. The problem is, you will have to do some manual calculation to get those summary numbers.

The math is simple but a little undiscoverable. For symbol coverage, you'll need to get the total number of sequence points available and the number visited, then calculate the percentage:

Coverage Percent = (Visited Sequence Points / (Unvisited Sequence Points + Visited Sequence Points)) * 100

Or, smaller:

cp = (vsp / (usp + vsp)) * 100

You can get the USP and VSP numbers for the entire coverage run or on a per-assembly basis by looking in the appropriate places in the SymbolModule report.
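
As a quick sanity check on that math, with made-up numbers - say an assembly has 1,907 visited and 93 unvisited sequence points:

cp = (1907 / (93 + 1907)) * 100 = (1907 / 2000) * 100 = 95.35%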

I won't show you the XML that comes out of <NCoverReporting> natively, but I will give you the XSLT that will calculate this for you:

<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <xsl:element name="symbolCoverage">
      <xsl:call-template name="display-symbol-coverage">
        <xsl:with-param name="key">__Summary</xsl:with-param>
        <xsl:with-param name="stats" select="//trendcoveragedata/stats" />
      </xsl:call-template>
      <xsl:for-each select="//trendcoveragedata/mod">
        <xsl:call-template name="display-symbol-coverage">
          <xsl:with-param name="key" select="assembly/text()" />
          <xsl:with-param name="stats" select="stats" />
        </xsl:call-template>
      </xsl:for-each>
    </xsl:element>
  </xsl:template>
  <xsl:template name="display-symbol-coverage">
    <xsl:param name="key" />
    <xsl:param name="stats" />
    <xsl:variable name="percentage" select="format-number(($stats/@vsp div ($stats/@usp + $stats/@vsp)) * 100, '0.00')" />
    <xsl:element name="coverage">
      <xsl:attribute name="module"><xsl:value-of select="$key" /></xsl:attribute>
      <xsl:attribute name="percentage">
        <xsl:choose>
          <xsl:when test="$percentage='NaN'">100</xsl:when>
          <xsl:otherwise><xsl:value-of select="$percentage" /></xsl:otherwise>
        </xsl:choose>
      </xsl:attribute>
    </xsl:element>
  </xsl:template>
</xsl:stylesheet>

Save that file as SimplifiedCoverageStatistics.xsl. That's the SimplifiedReportXsltPath document we referred to earlier in MSBuild. When you look at the output of <NCoverReporting> after using this, the SymbolModule report you generated will look something like this:

<?xml version="1.0" encoding="utf-8"?>
<symbolCoverage>
  <coverage module="__Summary" percentage="95.36" />
  <coverage module="YourAssembly" percentage="91.34" />
  <coverage module="YourOtherAssembly" percentage="99.56" />
</symbolCoverage>

If you're only reporting some statistics, you're pretty much done. The special "__Summary" module is the overall coverage for the entire test run; each other module is an assembly that got profiled and its individual coverage. You could use the <XmlPeek> task from here and look in that file to dump out some numbers. For example, you can report out to TeamCity using a <Message> task and the "__Summary" number in that XML report.
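
For example, here's a rough sketch of what that TeamCity reporting might look like. The @(OverallSymbolCoverage) item name and the "SymbolCoveragePercent" statistic key are just names I picked for illustration, not anything NCover or TeamCity requires:

<!-- Pull the overall percentage out of the simplified report. -->
<XmlPeek
  XmlInputPath="$(SimplifiedCoverageReportPath)"
  Query="/symbolCoverage/coverage[@module='__Summary']/@percentage">
  <Output TaskParameter="Result" ItemName="OverallSymbolCoverage" />
</XmlPeek>
<!-- TeamCity picks up "service messages" written to the build log. -->
<Message
  Text="##teamcity[buildStatisticValue key='SymbolCoveragePercent' value='@(OverallSymbolCoverage)']"
  Condition="'@(OverallSymbolCoverage)' != ''" />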

However, if you want the build to fail based on coverage failure, you still have to compare those numbers to the expectations.

Use the <XmlPeek> task to get the minimum coverage requirements out of the MSBuild script.

You can't just use the $(MinimumCoverage) variable directly because there's no real way to get nested values from it. MSBuild sees that as an XML blob. (If it were an "Item" rather than a "Property" it'd be easier to manage, but NCover needs it as a "Property" so we've got work to do.) We'll use <XmlPeek> to get the values out in a usable format. That <XmlPeek> call looks like this:

<XmlPeek
  Namespaces="&lt;Namespace Prefix='msb' Uri='http://schemas.microsoft.com/developer/msbuild/2003'/&gt;"
  XmlContent="&lt;Root xmlns='http://schemas.microsoft.com/developer/msbuild/2003'&gt;$(MinimumCoverage)&lt;/Root&gt;" 
  Query="/msb:Root/msb:Threshold[msb:Type='Assembly']">
  <Output TaskParameter="Result" ItemName="ModuleCoverageRequirements" />
</XmlPeek>

More crazy stuff going on here.

First we have to define the MSBuild namespace on the <XmlPeek> task so we can do an XPath statement on the $(MinimumCoverage) property - again, it's an XML blob.

Next, we're specifying some "XmlContent" on that <XmlPeek> task because we already have the variable and don't need to re-read it from a file. However, the variable is really an XML fragment - there may be several <Threshold> elements defined in it - so we wrap it with a <Root> element to make it a proper XML document.

The "Query" parameter uses some XPath to find all of the <Threshold> elements defined in $(MinimumCoverage) that are assembly-level thresholds. We can't really do anything with, say, cyclomatic-complexity thresholds (at least, not in this article) so we're only getting the values we can do something about.

Finally, we're sticking the <Threshold> nodes we found into a @(ModuleCoverageRequirements) array variable. Each item in that array will be one <Threshold> node (as an XML string).

Use the <WriteLinesToFile> task to create a temporary XML file that contains the minimum coverage requirements and the actual coverage information.

We have the report at $(SimplifiedCoverageReportPath) that <NCoverReporting> generated containing the actual coverage percentages. We also have @(ModuleCoverageRequirements) with the associated required coverage percentages. Let's create a single, larger XML document that has both of these sets of data in it. We can do that with an <XmlPeek> to get the nodes out of the simplified coverage report and then a <WriteLinesToFile> task:

<PropertyGroup>
  <BuildCheckCoverageReportPath>$(MSBuildProjectDirectory)\CoverageReport.BuildCheck.xml</BuildCheckCoverageReportPath>
</PropertyGroup>
<!-- Get the actuals out of the transformed summary report. -->
<XmlPeek
  XmlInputPath="$(SimplifiedCoverageReportPath)"
  Query="/symbolCoverage/coverage">
  <Output
    TaskParameter="Result"
    ItemName="ModuleCoverageActuals"/>
</XmlPeek>
<!-- Merge the requirements and actuals into a single document. -->
<WriteLinesToFile
  File="$(BuildCheckCoverageReportPath).tmp"
  Lines="&lt;BuildCheck&gt;&lt;Requirements&gt;;@(ModuleCoverageRequirements);&lt;/Requirements&gt;&lt;Actuals&gt;;@(ModuleCoverageActuals);&lt;/Actuals&gt;&lt;/BuildCheck&gt;"
  Overwrite="true" />

Yes, we're generating "yet another" XML document here, but it's temporary, so don't worry.

We're using <XmlPeek> to get all of the <coverage> elements out of the simplified report we generated earlier. (Look up a little bit in the article to see a sample of what that report looks like.)

Finally, we use <WriteLinesToFile> to wrap some XML around the requirements and the actuals and generate a larger report. Notice we stuck a ".tmp" extension onto the actual file path in the "File" attribute on <WriteLinesToFile> - that's important.

This temporary report will look something like this:

<BuildCheck>
  <Requirements>
    <Threshold xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
      <CoverageMetric>SymbolCoverage</CoverageMetric>
      <Type>Assembly</Type>
      <Value>95.0</Value>
      <Pattern>YourAssembly</Pattern>
    </Threshold>
    <Threshold xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
      <CoverageMetric>SymbolCoverage</CoverageMetric>
      <Type>Assembly</Type>
      <Value>95.0</Value>
      <Pattern>YourOtherAssembly</Pattern>
    </Threshold>
  </Requirements>
  <Actuals>
    <coverage module="__Summary" percentage="95.36" />
    <coverage module="YourAssembly" percentage="91.34" />
    <coverage module="YourOtherAssembly" percentage="99.56" />
  </Actuals>
</BuildCheck>

Use the <XslTransformation> task to transform that temporary XML file into something that has simple pass/fail data in it.

We need to take that temporary report and make it a little more easily consumable. We'll use another XSLT to transform it.

First, save this XSLT as "BuildCheckCoverageStatistics.xsl":

<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msb="http://schemas.microsoft.com/developer/msbuild/2003">
  <xsl:template match="/">
    <xsl:element name="symbolCoverage">
      <xsl:for-each select="/BuildCheck/Requirements/msb:Threshold[msb:Type='Assembly']">
        <xsl:variable name="module"><xsl:value-of select="msb:Pattern/text()" /></xsl:variable>
        <xsl:variable name="expected"><xsl:value-of select="msb:Value/text()" /></xsl:variable>
        <xsl:variable name="actual"><xsl:value-of select="/BuildCheck/Actuals/coverage[@module=$module]/@percentage" /></xsl:variable>
        <xsl:if test="$actual != ''">
          <xsl:element name="coverage">
            <xsl:attribute name="module"><xsl:value-of select="$module" /></xsl:attribute>
            <xsl:attribute name="expected"><xsl:value-of select="$expected" /></xsl:attribute>
            <xsl:attribute name="actual"><xsl:value-of select="$actual" /></xsl:attribute>
            <xsl:attribute name="pass">
              <xsl:choose>
                <xsl:when test="$actual >= $expected">true</xsl:when>
                <xsl:otherwise>false</xsl:otherwise>
              </xsl:choose>
            </xsl:attribute>
          </xsl:element>
        </xsl:if>
      </xsl:for-each>
    </xsl:element>
  </xsl:template>
</xsl:stylesheet>

What that XSLT does is look at the requirements and the actuals in the XML file; if it finds an actual that matches a defined requirement, it outputs a node with the name of the assembly, the expected and actual coverage percentages, and a simple pass/fail indicator.

The reason it doesn't just include all of the requirements is that NCover Classic doesn't allow you to merge the results from different test runs into a single data set. As such, we may need to run this transformation a few times over different data sets and we don't want to fail the build just because there's a requirement defined for an assembly that wasn't tested in the given test run.

Now transform the temporary XML file using <XslTransformation> like this:

<PropertyGroup>
  <BuildCheckReportXsltPath>$(MSBuildProjectDirectory)\BuildCheckCoverageStatistics.xsl</BuildCheckReportXsltPath>
</PropertyGroup>
<XslTransformation
  OutputPaths="$(BuildCheckCoverageReportPath)"
  XmlInputPaths="$(BuildCheckCoverageReportPath).tmp"
  XslInputPath="$(BuildCheckReportXsltPath)" />

As an input, we're taking that ".tmp" file we generated with <WriteLinesToFile> earlier. The "OutputPaths" attribute is the $(BuildCheckCoverageReportPath) that we defined earlier. The "XslInputPath" is the XSLT above.

The resulting report will be a nice, simple document like this:

<?xml version="1.0" encoding="utf-8"?>
<symbolCoverage>
  <coverage module="YourAssembly" expected="95.0" actual="91.34" pass="false" />
  <coverage module="YourOtherAssembly" expected="95.0" actual="99.56" pass="true" />
</symbolCoverage>

The top-level "__Summary" data is gone (because we're only dealing with assembly-level requirements) and you can see easily what the expected and actual coverage percentages are. Even easier, there's a "pass" attribute that tells you whether there was success.

Notice in my sample report that one of the assemblies passed and the other failed because it didn't meet minimum coverage. We want to fail the build when that happens.

After the transformation, you should do a little cleanup. We have some little temporary files and, really, we only want one simplified report - the one we just generated. Use the <Delete> and <Move> tasks to do that cleanup:

<Delete Files="$(BuildCheckCoverageReportPath).tmp;$(SimplifiedCoverageReportPath)" />
<Move
  SourceFiles="$(BuildCheckCoverageReportPath)"
  DestinationFiles="$(SimplifiedCoverageReportPath)" />

The net result of that:

  • The .tmp file will be deleted.
  • The $(SimplifiedCoverageReportPath) will now be that final report with the pass/fail marker in it.

Use the <XmlPeek> task to look in that simplified report and determine if there are any failures.

With such a simple report, the <XmlPeek> call to see if there are any failing coverage items is fairly self-explanatory:

<XmlPeek
  XmlInputPath="$(SimplifiedCoverageReportPath)"
  Query="/symbolCoverage/coverage[@pass!='true']">
  <Output TaskParameter="Result" ItemName="FailedCoverageItems"/>
</XmlPeek>

That gives us a new item list called @(FailedCoverageItems), where each item contains one node for a failed coverage entry.

Use the <Error> task to fail the build if there are any coverage failures.

Last step! Use <Error> with a "Condition" attribute to fail the build if there is anything found in @(FailedCoverageItems):

<Error
  Text="Failed coverage: @(FailedCoverageItems)"
  Condition="'@(FailedCoverageItems)' != ''" />

That'll do it!

If we put all of the MSBuild together, it'll look something like the following.

NOTE: THIS IS NOT A COPY/PASTE READY SCRIPT. IT WILL NOT RUN BY ITSELF. IT IS A SAMPLE ONLY.

<PropertyGroup>
  <NCoverPath>$(ProgramW6432)\NCover\</NCoverPath>
  <RawCoverageFile>$(MSBuildProjectDirectory)\Coverage.Unit.xml</RawCoverageFile>
  <SimplifiedReportXsltPath>$(MSBuildProjectDirectory)\SimplifiedCoverageStatistics.xsl</SimplifiedReportXsltPath>
  <BuildCheckReportXsltPath>$(MSBuildProjectDirectory)\BuildCheckCoverageStatistics.xsl</BuildCheckReportXsltPath>
  <BuildCheckCoverageReportPath>$(MSBuildProjectDirectory)\CoverageReport.BuildCheck.xml</BuildCheckCoverageReportPath>
  <SimplifiedCoverageReportPath>$(MSBuildProjectDirectory)\CoverageReport.Simplified.xml</SimplifiedCoverageReportPath>
  <SimplifiedCoverageReportOutputs>
    <Report>
      <ReportType>SymbolModule</ReportType>
      <Format>Html</Format>
      <OutputPath>$(SimplifiedCoverageReportPath)</OutputPath>
    </Report>
  </SimplifiedCoverageReportOutputs>
  <MinimumCoverage>
    <Threshold>
      <CoverageMetric>SymbolCoverage</CoverageMetric>
      <Type>Assembly</Type>
      <Value>95.0</Value>
      <Pattern>YourAssembly</Pattern>
    </Threshold>
    <Threshold>
      <CoverageMetric>SymbolCoverage</CoverageMetric>
      <Type>Assembly</Type>
      <Value>95.0</Value>
      <Pattern>YourOtherAssembly</Pattern>
    </Threshold>
  </MinimumCoverage>
</PropertyGroup>
<UsingTask TaskName="NCover.MSBuildTasks.NCoverReporting" AssemblyFile="$(NCoverPath)Build Task Plugins\NCover.MSBuildTasks.dll"/>
<Target Name="CoverageReport">
  <!-- This assumes you've run the NCover task, etc. and have a $(RawCoverageFile) to report on. -->
  <NCoverReporting
    ContinueOnError="false"
    ToolPath="$(NCoverPath)"
    CoverageDataPaths="$(RawCoverageFile)"
    OutputPath="$(MSBuildProjectDirectory)"
    OutputReport="$(SimplifiedCoverageReportOutputs)"
    MinimumCoverage="$(MinimumCoverage)"
    XsltOverridePath="$(SimplifiedReportXsltPath)"
    />
  <XmlPeek
    Namespaces="&lt;Namespace Prefix='msb' Uri='http://schemas.microsoft.com/developer/msbuild/2003'/&gt;"
    XmlContent="&lt;Root xmlns='http://schemas.microsoft.com/developer/msbuild/2003'&gt;$(MinimumCoverage)&lt;/Root&gt;" 
    Query="/msb:Root/msb:Threshold[msb:Type='Assembly']">
    <Output TaskParameter="Result" ItemName="ModuleCoverageRequirements" />
  </XmlPeek>
  <XmlPeek XmlInputPath="$(SimplifiedCoverageReportPath)" Query="/symbolCoverage/coverage">
    <Output TaskParameter="Result" ItemName="ModuleCoverageActuals"/>
  </XmlPeek>
  <WriteLinesToFile
    File="$(BuildCheckCoverageReportPath).tmp"
    Lines="&lt;BuildCheck&gt;&lt;Requirements&gt;;@(ModuleCoverageRequirements);&lt;/Requirements&gt;&lt;Actuals&gt;;@(ModuleCoverageActuals);&lt;/Actuals&gt;&lt;/BuildCheck&gt;"
    Overwrite="true" />
  <XslTransformation
    OutputPaths="$(BuildCheckCoverageReportPath)"
    XmlInputPaths="$(BuildCheckCoverageReportPath).tmp"
    XslInputPath="$(BuildCheckReportXsltPath)" />
  <Delete Files="$(BuildCheckCoverageReportPath).tmp;$(SimplifiedCoverageReportPath)" />
  <Move
    SourceFiles="$(BuildCheckCoverageReportPath)"
    DestinationFiles="$(SimplifiedCoverageReportPath)" />
  <XmlPeek
    XmlInputPath="$(SimplifiedCoverageReportPath)"
    Query="/symbolCoverage/coverage[@pass!='true']">
    <Output TaskParameter="Result" ItemName="FailedCoverageItems"/>
  </XmlPeek>
  <Error
    Text="Failed coverage: @(FailedCoverageItems)"
    Condition="'@(FailedCoverageItems)' != ''" />
</Target>

There are exercises left to the reader. THIS IS NOT A COPY/PASTE READY SCRIPT.

There are some obvious areas where you'll need to make choices. For example, you probably don't want to dump all of these reports into the same folder as the MSBuild script, so set the various paths appropriately. You may want to put the <NCoverReporting> task call in a separate target from the crazy build-time-analysis bit to keep things manageable and clean. Filenames may need to change based on dynamic variables - for example, if you're running the reporting task after each solution in a multi-solution build - so you'll have to adjust for that. This should basically get you going.
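
As a rough sketch of that target split - this is just one way to slice it, and the target names here are placeholders I made up - it might look something like:

<Target Name="RunCoverageReport">
  <!-- Produces the simplified coverage report from the raw coverage data. -->
  <NCoverReporting
    ContinueOnError="false"
    ToolPath="$(NCoverPath)"
    CoverageDataPaths="$(RawCoverageFile)"
    OutputPath="$(MSBuildProjectDirectory)"
    OutputReport="$(SimplifiedCoverageReportOutputs)"
    MinimumCoverage="$(MinimumCoverage)"
    XsltOverridePath="$(SimplifiedReportXsltPath)" />
</Target>
<Target Name="CheckCoverage" DependsOnTargets="RunCoverageReport">
  <!-- The <XmlPeek>, <WriteLinesToFile>, <XslTransformation>, <Delete>, <Move>, and <Error> steps from the script above go here. -->
</Target>

That way you can run just the coverage check with msbuild /t:CheckCoverage, or hang it off your existing build targets.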

Remind me again... WTF? Why all these hoops?

NCover Classic won't let you fail the build based on coverage. I have my thoughts on that and other shortcomings that I'll save for a different blog entry. Suffice it to say, without creating a custom build task to encompass all of this, or just abandoning hope for failing the build based on coverage, this is about all you can do. Oh, or you could buy every developer in your organization an NCover Complete license.

HELP! Why doesn't XYZ work for me?

Unfortunately, there are a lot of moving pieces here, and I don't really have the ability to offer individual support. If you find an actual problem, leave a comment on this blog entry and I'll look into it; if you grabbed all of these pieces and your copy just isn't quite doing what you think it should, there's not much I can do for you. From a troubleshooting perspective:

  • Add the various build tasks one at a time and run the build after each addition. Look at the output, see what files get created, etc.
  • Use <Message> and <Error> tasks to debug the script.
  • Make sure you're 100% aware of what each call does and where every file is going.
  • Make sure you specified all of the properties for <NCoverReporting> correctly and didn't leave a typo in the minimum coverage or report output properties (e.g., make sure the SymbolModule report is an "Html" report, not "Xml").

There are a lot of steps, but they're simple steps, so you should be able to work through it.
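
On the <Message> point, a few throwaway messages dropped in between the steps will show you exactly what each <XmlPeek> found and what ended up in each item list (these lines are purely for debugging - pull them back out when you're done):

<Message Importance="high" Text="Coverage requirements: @(ModuleCoverageRequirements)" />
<Message Importance="high" Text="Coverage actuals: @(ModuleCoverageActuals)" />
<Message Importance="high" Text="Failed coverage items: @(FailedCoverageItems)" />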

Also, drop NCover a line and let them know you'd be interested in seeing better direct support for something like this. I've told them myself, but the more people interested in it, the more likely it is to see the light of day in the next product release.

posted @ Thursday, May 06, 2010 12:27 PM | Feedback (1) | Filed Under [ .NET ]

Barkdust for 2010

It was once again time for new barkdust, so, just as we did a couple of years back, we called Grimm's Advanced Bark Blowing and got our one unit of medium fresh fir. It was still $295, just as it was two years ago, so it was nice to see the prices didn't go up. Flowerbeds are looking nice again. Now we just have to go out and dig out the sprinkler heads since they're buried in... barkdust.

Connect to the D-Link DAP-1522 Access Point Configuration Manually

I had a problem this morning where my D-Link DAP-1522 access point had to be reset to factory defaults. After pressing the reset button on the back and letting it reboot, I was unable to get to the configuration page by following the instructions (visit 192.168.0.50 and log in). Totally inaccessible.

I ended up calling D-Link support and they explained how to connect to the access point more manually. Basically, the DHCP server wasn't enabled, so I couldn't get an IP address when connecting directly to the access point; I had to mangle my network settings just long enough to connect and set things up.

  1. Connect your computer to the access point with an Ethernet cable.
  2. Go into the adapter settings for the network adapter you've connected to the access point.
  3. Update the TCP/IPv4 settings on the adapter so it's no longer using DHCP. Use these settings (or see the netsh equivalent after this list):
    • IP = 192.168.0.99
    • Subnet Mask = 255.255.255.0
    • Gateway = 192.168.0.50
  4. Now open up a browser and go to 192.168.0.50 as you normally would to get to the configuration page. It should come up.
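
If you'd rather skip the adapter settings dialog, the command-line equivalent of step 3 from an elevated command prompt is something like this (I'm assuming your wired adapter is named "Local Area Connection" - substitute whatever yours is actually called; the trailing "1" is just the gateway metric):

netsh interface ip set address "Local Area Connection" static 192.168.0.99 255.255.255.0 192.168.0.50 1

When you're done configuring the access point, flip the adapter back to DHCP the same way:

netsh interface ip set address "Local Area Connection" dhcp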

I kinda wish that had been in the instruction manual, but since it's not, there you go.

Enable Typemock Isolator for a Non-Admin User

Generally speaking, it's good practice to develop as a non-administrative user so you can make sure your applications will run for non-admin users and so you won't do any damage to your environment as you develop. Unfortunately, some things end up forcing you to develop as an admin because they require rights that most non-administrative users don't have.

Typemock Isolator no longer has to be one of those things that forces you to run as an administrator.

The Isolator install guide has a "Security" section that outlines the various registry keys and files that Isolator needs read/write access to. If you give your non-admin user rights to those keys and files, that user can start and stop Typemock Isolator and link it with other profilers.

In a recent round of troubleshooting, I ended up writing a program to modify the ACL on the requisite keys and files as found on the target machine. The result is EnableTypemockForNonAdmin - a command-line program that automates this permissions setup process.

This program will make permissions changes to files and your registry. Read the enclosed readme file and make sure you fully understand what's going to happen before you run it.

Usage is simple. Open a command prompt as an administrator and run the program, passing in the name of the non-admin user you want to have access to Typemock Isolator.

EnableTypemockForNonAdmin.exe YOURDOMAIN\yourusername

Standard disclaimers apply - I'm not responsible for any damage done by the program; YMMV; use at your own risk; etc.

UPDATE 5/4/2010: Typemock Isolator 6.0.3 (not yet released at the time of this writing) may fix these issues when using Typemock Isolator with TestDriven.NET, making this program unnecessary. Jamie Cansdale from TestDriven.NET has commented below and left a link to a registry file you can install to make things work without changing permissions. I will leave this program available as it is still helpful for earlier versions of Typemock Isolator and/or TD.NET, and may still be required for command-line builds. (We'll have to see once Isolator 6.0.3 comes out.)

UPDATE 5/5/2010: I verified that with Typemock Isolator 6.0.3 and NCover 3.4.3 the registry additions provided by Jamie Cansdale will allow you to run as a non-admin user (both using the Typemock Config Tool and TestDriven.NET), though I can't speak to earlier versions of Isolator or linking with profilers other than NCover. These keys are also custom additions to your registry, so it's a little "non-standard." YMMV. I think the permissions change is probably the route I'll continue to go until the profiler companies and/or Typemock start shipping these tweaks as supported items out of the box.

UPDATE 1/20/2011: Typemock Isolator 6.0.6 now requests read/write permissions on the registry key where the license info is kept right when the config tool starts up, regardless of whether you're going to modify the value. I updated the EnableTypemockForNonAdmin tool to version 1.0.1.0 and added that registry key to the list of keys to give your non-admin user permissions to.

Download now - free!

[EnableTypemockForNonAdmin - 1.0.1.0 (zip)]

[EnableTypemockForNonAdmin Source - 1.0.1.0 (zip)]

Typemock Isolator, NCover, and the #20000 Error

If you are running Typemock Isolator along with another profiler like NCover and a crash occurs (e.g., the parent build process gets killed), the crash has the potential to corrupt the registry. That means subsequent operations to link/unlink your coverage profiler may not work properly.

For NCover, you may see the build fail with exit code #20000 and the message "NCover.Console is returning exit code #20000. NCover couldn't create a coverage report." The reason it couldn't create a coverage report is that Isolator and NCover weren't linked correctly, so NCover wasn't actually running.

To fix it, repair your NCover installation. This will fix the corrupt registry keys and subsequent Typemock Isolator/NCover linkages will work correctly.

Thanks to Ohad at Typemock, Alan at NCover, and Jamie Cansdale from TestDriven.NET for helping me track this one down.

posted @ Monday, May 03, 2010 1:57 PM | Feedback (0) | Filed Under [ .NET ]

How to Run a Different NUnit Version with TestDriven.NET

Here's the setup:

You have a project that uses NUnit 2.5.5. You don't actually have NUnit installed - you have it checked in along with your project's source as a third-party dependency. (You did it that way so you can have different projects using different NUnit versions without having to install/uninstall things.) You're using TestDriven.NET to run tests inside Visual Studio but you noticed that it ships with NUnit 2.5.3 - an earlier version of NUnit - and you want to use the version your project references.

How do you tell TestDriven.NET to use your project's version of NUnit?

First, make sure your checked-in version of NUnit keeps the same directory structure it's distributed with. That means you have a folder that contains the appropriate version of NUnit-Console.exe, etc., and a subfolder called "framework" that has the nunit.framework.dll in it, like this:

YourProject
|
+-lib
  |
  +-NUnit // Has NUnit-Console.exe in it
    |
    +- framework // Has nunit.framework.dll in it
    |
    +- lib
    |
    +- tests

Ensure you're referencing nunit.framework.dll from the NUnit "framework" folder. There's probably an nunit.framework.dll in the NUnit folder itself, too, but don't reference that one - reference the one in the "framework" folder.
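
If you're curious what that looks like in the .csproj itself, the reference ends up as something like this (the HintPath here assumes the lib\NUnit layout shown above and is relative to wherever your project file lives, so adjust it for your tree):

<ItemGroup>
  <Reference Include="nunit.framework">
    <SpecificVersion>False</SpecificVersion>
    <HintPath>..\lib\NUnit\framework\nunit.framework.dll</HintPath>
  </Reference>
</ItemGroup>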

Open the TestDriven.NET install folder. This will be something like C:\Program Files\TestDriven.NET 3. On a 64-bit system it might be in C:\Program Files (x86)\TestDriven.NET 3 or the like.

Go into the TestDriven NUnit folder for the version you're referencing. You should see a folder called "NUnit" in the TestDriven.NET install folder. Open that. Inside there you'll see different folders for each version of NUnit. Right now there's "2.2," "2.4," and "2.5." In this example, we're looking at using NUnit 2.5.5 instead of 2.5.3, so we'll open up the "2.5" folder. You should now be in a folder like C:\Program Files\TestDriven.NET 3\NUnit\2.5.

Copy the nunit.tdnet.dll file into your lib\NUnit folder. Look in the TestDriven.NET NUnit version folder you should be in right now. You'll see a file called "nunit.tdnet.dll." Copy that into your checked-in lib\NUnit folder - the same folder that has NUnit-Console.exe in it. You will need to check this in along with your NUnit dependency.

Go into the TestDriven.NET "framework" folder. Still in that C:\Program Files\TestDriven.NET 3\NUnit\2.5 folder - open the "framework" folder under that. You should be in C:\Program Files\TestDriven.NET 3\NUnit\2.5\framework.

Copy the nunit.framework.dll.tdnet file into your lib\NUnit\framework folder. In that C:\Program Files\TestDriven.NET 3\NUnit\2.5\framework folder you should see a file called "nunit.framework.dll.tdnet". Copy that into your lib\NUnit\framework folder - the same folder that has nunit.framework.dll in it. You will need to check this in along with your NUnit dependency.

Run TestDriven.NET. Now when you run your tests with TestDriven.NET you should see it report that it's using the version of NUnit you have checked in along with your project. That wasn't too hard, now, was it?

What if I need to customize the locations? What if I don't have the whole NUnit/framework folder structure and such? The basic principle is that nunit.tdnet.dll needs to be in the same folder as NUnit-Console.exe, and nunit.framework.dll.tdnet needs to be in the same folder as nunit.framework.dll. You may need to open the nunit.framework.dll.tdnet file in a text editor (it's an XML file) and modify the "AssemblyPath" node in there. I haven't actually tried this myself, so YMMV, but it should work.

posted @ Monday, May 03, 2010 12:23 PM | Feedback (0) | Filed Under [ .NET ]