android

My phone is a Samsung Galaxy S3 on Verizon.

If you already know about running custom ROMs and customizing your Android phone, you're probably laughing right now. Not knowing any better, I took all the standard over-the-air ("OTA") updates all the way through the current Android 4.4.2, figuring that when the time came I could follow whatever the latest rooting process was and update to something like Cyanogenmod. Oh, how wrong I was.

The problem was mostly in the things I didn't understand, or thought I understood, about the whole process of putting a custom ROM on the phone. There is a ton of information out there, but there isn't a guide that tells you both how to do the upgrade and what you're actually doing - that is, why each step is required.

I learned so much in failing to flash my phone. I failed miserably, getting the phone into a state where it would mostly boot up, but would sometimes fail with some security warning ("soft-bricking" the phone; fully "bricked" would imply I couldn't do anything with it at all).

So given all that, I figured rather than write a guide to how to put a custom ROM on your phone, I'd just write up all the stuff I learned so maybe folks trying this themselves will understand more about what's going on.

Disclaimers, disclaimers: I'm a Windows guy, though I have some limited Linux experience. Things that might be obvious to Linux folks may not be obvious to me. I also may not have the 100% right description at a technical level for things, but this outlines how I understand it. My blog is on GitHub - if you want to correct something, feel free to submit a pull request.

Background/Terminology

An "OS image" that you want to install on your phone is a ROM. I knew this going in, but just to level-set, you should know the terminology. A ROM generally contains a full default setup for a version of Android, and there are a lot of them. The ones you get from your carrier are "stock" or "OTA" ROMs. Other places, like Cyanogenmod, build different configurations of Android and let you install their version.

ROMs generally include software to run your phone's modem. At least, the "stock" ROMs do. This software tells the phone how to connect to the carrier network, how to connect to wireless, etc. I don't actually know if custom ROMs also include modem software, but I'm guessing not since these seem to be carrier-specific.

You need "root" access on your phone to do any low-level administrative actions. You'll hear this referred to as "rooting" the phone. ("root" is the name of the superuser account in Linux, like "administrator" in Windows.) Carriers lock their stock ROMs down so software can't do malicious things... and so you can't uninstall the crapware they put on your phone. The current favorite I've seen is Towelroot.

With every update to the stock ROM, carriers try to "plug the holes" that allow you to get root access. Sometimes they also remove root access you might already have.

You need this root access so you can install a custom "recovery mode" on your phone. (I'll get to what "recovery" is in a minute.)

When you turn on your phone or reboot, a "bootloader" is responsible for starting up the Android OS. This is a common thing in computer operating systems. Maybe you've seen computers that "dual boot" two different operating systems; or maybe you've used a special menu to go into "safe mode" during startup. The bootloader is what allows that to happen.

In Android, the bootloader lets you do basically one of three things:

  • Boot into the Android OS installed.
  • Boot into "recovery mode," which allows you to do some maintenance functions.
  • Boot into "download mode," which allows you to connect your phone to your computer to do special software installations.

You don't ever actually "see" the bootloader. It's just software behind the scenes making decisions about what to do when the power button gets pushed.

Recovery mode on your phone provides access to maintenance functions. If you really get into a bind, you may want to reset your phone to factory defaults. Or you may need to clear some cached data the system has that's causing incorrect behavior. The "recovery mode" menu on the phone allows you to do these things. This is possible because it's all happening before the Android OS starts up.

What's interesting is that people have created "custom recovery modes" that you can install on the phone that give the phone different/better options here. This is the gateway for changing the ROM on your phone or making backups of your current ROM.

Download mode on your phone lets you connect the phone to a computer to do custom software installations. It's the complement to recovery mode: you connect the phone to a computer with a USB cable and push a ROM from the computer over to the phone.

Odin is software for Samsung devices that uses download mode to flash a ROM onto a device. When you go into download mode on the phone, something has to be running on your computer to push the software to the phone. For Samsung devices, this software is called "Odin." I can't really find an "official" download for Odin, which is sort of scary and kind of sucks. (You can apparently also use software called Heimdall, but I didn't try that.)

The Process (And Where I Failed)

Now that you know the terminology, understanding what's going on when you're putting a custom ROM on the phone should make a bit more sense. It should also help you figure out better what's gone wrong (should something go wrong) so you know where to look to fix it.

First you need to root the phone. You'll need the administrative access so you can install some software that will work at a superuser level to update the recovery mode on your phone.

Rooting the phone for me was pretty easy. Towelroot did the trick with one button click.

Next you need to install a custom recovery mode. A very popular one is ClockworkMod ROM Manager. You can get this from the Google Play store or from their site. It is sad how lacking the documentation is. There's nothing on their web site but download links; and other "how to use" guides are buried in forums.

If you do use ClockworkMod ROM Manager, though, there's a button inside the app that lets you flash the ClockworkMod Recovery Mode. Doing this will update the recovery mode menu and start letting you use options that ClockworkMod provides, like installing a custom ROM image or backing up your current ROM.

THIS IS WHERE THINGS WENT WRONG FOR ME. Remember how you get into the recovery mode by going through the bootloader? Verizon has very annoyingly locked down the bootloader on the Galaxy S3 on more recent stock ROM images such that it detects if you've got a custom recovery mode installed. If you do, you get a nasty warning message telling you that some unrecognized software is installed and you have to go to Verizon to fix it.

Basically, by installing ClockworkMod Recovery, I had soft-bricked my phone. Everything looked like it was going to work... but it didn't.

This is apparently a fairly recent thing with later OTA updates from Verizon. Had I not taken the updates, I could have done this process. But... I took the updates, figuring someone would have figured out a way around it by the time I was interested in going the custom ROM route, and I was wrong.

If the custom recovery works for your phone, then switching to a custom ROM is a matter of using the custom recovery menu to select a ROM and just "switch" to it. The recovery software takes care of things for you. ROMs are available for download all over the place, like right off the Cyanogenmod site. Throw the ROM on your SD card, boot into recovery, choose the ROM, and hang tight. You're set.

If the custom recovery doesn't work for your phone then you're in my world and it's time to figure out what to do.

The way to un-soft-brick my phone was to manually restore the stock ROM. Again, there are really no official download links for this stuff, so it was a matter of searching and using (what appeared to be) reputable places to get the software.

  • Install the Odin software on your computer.
  • Boot the phone into "download mode" so it's ready to get the software.
  • Connect the phone to the computer.
  • Tell the phone to start downloading.
  • In Odin, select the stock ROM in "AP" or "Phone" mode. (You can't downgrade - I tried that. The best I could do was reinstall the same thing I had before.)
  • Hit the Odin "Start" button and be scared for about 10 minutes while it goes about its work and reboots.

After re-flashing the stock ROM, I was able to reboot without any security warnings. Of course, I had to reinstall all of my apps, re-customize my home screens, and all that...

...But I was back to normal. Almost.

My current problem is that I'm having trouble connecting to my wireless network. The phone sees the network and says it's getting an IP address, but it gets hung at the "determining the quality of your internet connection" step. This is a new problem that I didn't have before.

It seems to be a fairly common problem with no great solution. Some people fix it by rebooting their wireless router (didn't fix it for me). Some people fix it by telling the phone to "forget" the network and then manually reconnecting to it (didn't fix it for me).

My current attempt at solving it involves re-flashing the modem software on the phone. Remember how I mentioned that the stock ROM comes with modem software in it? You can also get the modem software separately and use Odin to flash just the modem on the phone. Some folks say this solves it. I did the Odin part just this morning and while I'm connected to wireless now, the real test is whether it survives a phone restart. I'll keep an eye on it.

Hopefully this helps you in your Android modding travels. I learned a lot, but knowing how the pieces work together would have helped me panic a lot less when things went south and would have helped me know what to look for when fixing things.

media, hardware, synology, home, music

Way back in 2008 I put up an overview of my media server solution based on the various requirements I had at the time - what I wanted out of it, what I wasn't so interested in.

I've tried to keep that up to date somewhat, but I figured it was time to provide a nice, clean update with everything I've got set up thus far and a little info on where I'm planning on taking it. Some of my requirements have changed, some of the ideas about what I want out of it have changed.

Requirements

  • Access to my DVD collection: I want to be able to get to all of the movies and TV shows in my collection. I am not terribly concerned with keeping the menus or extra features, but I do want the full audio track and video without noticeably reduced fidelity.
  • Family acceptance factor: I want my wife and daughter to be able to navigate through the system and find what they want to watch with minimal effort.
  • Access to my pictures: I want to be able to see my family photos from a place outside my office where the computers generally sit.
  • Access to my music: I want to be able to listen to my music collection from any room in the house.
  • As compatible as possible: When choosing formats, software, communication protocols, etc., I want it to be compatible with as many of the devices I own as possible. I have an Android phone, an iPod classic, an iPad, Windows machines, a PS4, an Xbox 360, a Kindle Fire, and a Google Chromecast.

Hardware

My hardware footprint has changed a bit since I started, but I'm in a pretty comfortable spot with my current setup and I think it has a good way forward.

  • Synology DS1010+: I use the Synology DS1010+ for my movie storage and as the Plex server (more on Plex in the software section). The 1010+ is an earlier version of the Synology DS1513+ and is amazingly flexible and extensible.
  • HP EX475 MediaSmart Server: This little machine was my first home server and was originally going to be my full end-to-end solution. Right now it serves as picture and audio storage as well as the audio server.
  • Playstation 3: My main TV has an Xbox 360, a PS3, and a small home theater PC attached to it... but I primarily use the PS3 for the front end for all of this stuff. The Xbox 360 may become the primary item once the Plex app is released for it. The PC was primary for a while but it's pretty underpowered and cumbersome to turn on, put to sleep, etc.
  • Google Chromecast: Upstairs I have the Chromecast and an Xbox 360 on it. The Chromecast does pretty well as the movie front end. I sort of switch between this and the 360, but I find I spend more time with the Chromecast when it comes to media.

Software

I use a fairly sizable combination of software to manage my media collection, organize the files, and convert things into compatible formats.

  • Picasa: I use Picasa to manage my photos. I mostly like it, though I've had some challenges as I have moved it from machine to machine over the years in keeping all of the photo album metadata and the ties to the albums synchronized online. Even with these challenges, it is the one tool I've seen with the best balance of flexibility and ease of use. My photos are stored on a network mounted drive on the HP MediaSmart home server.
  • Asset UPnP: Asset UPnP is the most flexible audio DLNA server I've found. You can configure the junk out of it to make sure it transcodes audio into the most compatible formats for devices, and you can even get your iTunes playlists in there. I run Asset UPnP on the HP MediaSmart server.
  • Plex: I switched from XBMC/Kodi to Plex for serving video, and I've also got Plex serving up my photos. The beauty of Plex is that it has a client on darn near every platform; it has a beautiful front end menu system; and it's really flexible so you can have it, say, transcode different videos into formats the clients require (if you're using the Plex client). Plex is a DLNA server, so if you have a client like the Playstation 3 that can play videos over DLNA, you don't even need a special client. Plex can allow you to stream content outside your local network so I can get to my movies from anywhere, like my own personal Netflix. Plex is running on the Synology DS1010+ for the server; and I have the Plex client on my iPad, Surface RT, home theater PC, Android phones, and Kindle Fire.
  • Handbrake: Handbrake is great for taking DVD rips and converting to MP4 format. (See below for why I am using MP4.) I blogged my settings for what I use when converting movies.
  • DVDFab HD Decrypter: I've been using DVDFab for ripping DVDs to VIDEO_TS images in the past. It works really well for that. These rips easily feed into Handbrake for getting MP4s.
  • MakeMKV: Recently I've been doing some rips from DVD using MakeMKV. I've found sometimes there are odd lip sync issues when ripping with DVDFab that don't show up with MakeMKV. (And vice versa - sometimes ripping with MakeMKV shows some odd sync issues that you don't see with DVDFab.) When I get to ripping Blu-ray discs, MakeMKV will probably be my go-to.
  • DVD Profiler: I use this for tracking my movie collection. I like the interface and the well-curated metadata it provides. I also like the free online collection interface - it helps a lot while I'm at the store browsing for new stuff to make sure I don't get any duplicates. Also helpful for insurance purposes.
  • Music Collector: I use this for tracking my music collection. The feature set is nice, though the metadata isn't quite as clean. Again, big help when looking at new stuff to make sure I don't get duplicates as well as for insurance purposes.
  • CrashPlan: I back up my music and photo collection using CrashPlan. I don't have my movies backed up because I figured I can always re-rip from the original media... but with CrashPlan it's unlimited data, so I could back it up if I wanted. CrashPlan runs on my MediaSmart home server right now; if I moved everything to Plex, I might switch CrashPlan to run on the DS1010+ instead.

Media Formats and Protocols

  • DLNA: I've been a fan from the start of DLNA, but the clients and servers just weren't quite there when I started out. This seems to be much less problematic nowadays. The PS3 handles DLNA really well and I even have a DLNA client on my Android phone so I can easily stream music. This is super helpful in getting compatibility out there.
  • Videos are MP4: I started out with full DVD rips for video, but as I've moved to Plex I've switched to MP4. While it can be argued that MKV is a more flexible container, MP4 is far more compatible with my devices. The video codec I use is x264. For audio, I put the first track as a 256kbps AAC track (for compatibility) and make the second track the original AC3 (or whatever) for the home theater benefit. I blogged my settings info, and there's a rough command-line version sketched right after this list.
  • Audio is MP3, AAC, and Apple Lossless: I like MP3 and get them from Amazon on occasion, but I am still not totally convinced that 256kbps MP3 is the way and the light. I still get a little scared that there'll be some better format at some point and if I bought the MP3 directly I won't be able to switch readily. I still buy CDs and I rip those into Apple Lossless format. (Asset UPnP will transcode Apple Lossless for devices that need the transcoding; or I can plug the iPod/iPad in and play the lossless directly from there.) And I have a few AAC files, but not too many.
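
Just for reference, that MP4 setup translates to roughly this HandBrakeCLI command line. This is a sketch from memory rather than a copy of my settings post - the exact flag names vary a bit between HandBrake versions (older builds use faac instead of av_aac, for example) and the paths are placeholders:

HandBrakeCLI -i "D:\Rips\Avatar (2009)\VIDEO_TS" `
             -o "V:\Movies\Avatar (2009).mp4" `
             -f mp4 -e x264 -q 20 `
             -a "1,1" -E "av_aac,copy:ac3" -B 256

The -a "1,1" selects the main audio track twice: the first copy gets encoded as the 256kbps AAC track and the second gets passed through as the original AC3.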

Media Organization

Videos are organized using the Plex recommendations: I have a share on the Synology DS1010+ called "video" and in there I have "Movies," "TV," and "Home Movies" folders. I have Plex associating the appropriate data scrapers for each folder.

/videos
    /Home Movies
        /2013
        /2014
            /20140210 Concert 01.mp4
            /20140210 Concert 02.mp4
    /Movies
        /Avatar (2009).mp4
        /Batman Begins (2005).mp4
    /TV
        /Heroes
            /Season 01
                /Heroes.s01e01.mp4
                /Heroes.s01e02.mp4

You can read about the Plex media naming recommendations on the Plex support site.

Audio is kept auto-organized in iTunes: I just checked the box in iTunes to keep media automatically organized and left it at that. The media itself is on a mapped network drive on the HP MediaSmart server and that works reasonably enough, though at times the iTunes UI hangs as it transfers data over the network.

Photos are organized in folders by year and major event: I've not found a good auto-organization method that isn't just "a giant folder that dumps randomly named pictures into folders by year." I want it a little more organized than that, though it means manual work on my part. If I have a large number of photos corresponding to an event, I put those in a separate folder. For "one-off photos" I keep a separate monthly folder. Files generally have the date and time in YYYYMMDD_HHMMSS format so it's sortable.

/photos
    /2012
    /2013
    /2014
        /20140101 Random Pictures
            /20140104_142345 Lunch at McMenamins.jpg
            /20140117_093542 Traffic Jam.jpg
        /20140307 Birthday Party
            /20140307_112033.jpg
            /20140307_112219.jpg

Picasa works well with this sort of folder structure and it appears nicely in DLNA clients when they browse the photos by folder via Plex.
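
The YYYYMMDD_HHMMSS prefixes are mostly a manual habit, but when I'm importing a big batch from the camera it's easy enough to script. Here's a rough PowerShell sketch that prefixes every photo in a folder using the file's last-write time - it assumes the camera clock is close to correct, and the filter and naming are just illustrative:

# Prefix each photo with a YYYYMMDD_HHMMSS timestamp based on LastWriteTime.
# Skips files that already start with a date so it's safe to re-run.
Get-ChildItem *.jpg | Where-Object { $_.Name -notmatch '^\d{8}_' } | ForEach-Object {
    $stamp = $_.LastWriteTime.ToString("yyyyMMdd_HHmmss")
    Rename-Item -Path $_.FullName -NewName "$stamp $($_.Name)"
}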

Network

My main router is a Netgear WNDR3700v2 and I love it. I've been through a few routers and wireless access points in the past, but this thing has been solid and flexible enough with the out-of-the-box firmware that I don't have to tweak it to get things working. It just works.

I have wired network downstairs between the office/servers and the main TV/PS3/Xbox 360/HTPC. This works well and is pretty much zero maintenance. I have two D-Link switches (one in the office, one in the TV room) to reach all the devices. (Here's the updated version of the ones I use.)

The router provides simultaneous dual-band 2.4GHz and 5GHz wireless-N through the house which covers almost everywhere except a few corners. I've just recently added some Netgear powerline adapters to start getting wired networking upstairs into places where the wireless won't reach.

The Road Ahead

This setup works pretty well so far. I'm really enjoying the accessibility of my media collection and I find I'm using it even more often than I previously was. So where do I go next?

  • Plex on Xbox 360: The only reason I still have that home theater PC in my living room is that it's running the Plex app and if I want a nice interface with which to browse my movies, the HTPC is kinda the way to go. Plex has just come out with an app for Xbox One and should shortly be available for Xbox 360. This will remove the last reason I have an HTPC at all.
  • Add a higher-powered Plex server: My Synology DS1010+ does a great job running Plex right now, but it can't transcode video very well. Specifically, if I have a high-def video and I want to watch it on my phone, the server wants to transcode that to accommodate bandwidth constraints and whatnot... but the Synology is too underpowered to handle that. I'd like to see about getting a more powerful server running as the actual Plex server - store the data on the Synology, but use a different machine to serve it up, handle transcoding, and so forth. (That little HTPC in the living room isn't powerful enough, so I'll have to figure something else out.)
  • Add wireless coverage upstairs: It's great that I can hook the Xbox upstairs to wired networking using the powerline adapters but that doesn't work so well for, say, my phone or the Chromecast. I'd like to add some wireless coverage upstairs (maybe chain another WNDR3700 in?) so I can "roam" in my house. I think even with the powerline stuff in there, it'd be fast enough for my purposes.
  • Integrate music into Plex: I haven't tried the Plex music facilities and I'm given to understand that not all Plex clients support music streaming. This is much lower priority for me given my current working (and awesome) Asset UPnP installation, but it'd be nice long-term to just have one primary server streaming content rather than having multiple endpoints to get different things.

synology, security

A few months back Cory Doctorow stopped by the local library and did a great talk on security and copyright issues. Very cool stuff which inspired me to look into how to secure my public/open wifi usage.

I have a Synology DS1010+ with a ton of helpful packages and features on it, so that seemed like the best place to start. It took a while, but I got it working. I'm going to show you how.

Truly, Synology has made this super easy. I'm not sure this would have been something I could have done nearly as easily without that device and the amazing Diskstation Manager "OS" they have on it. If you haven't got one of their NAS devices, just go get one. I've loved mine since I got it and it just keeps getting more features with every DSM release they put out.

So, with that, the general steps are:

  • Set up user accounts on your Synology NAS.
  • Make your Synology NAS publicly accessible.
  • Add a proxy server to the NAS.
  • Add VPN support to the NAS.
  • Make sure the firewall and router allow the VPN to connect.
  • Configure your client (e.g., phone) to use the VPN and proxy.

I'll walk you through each step.

Don't skim and skip steps. I can't stress this enough. Getting this up and running requires some virtual "planets to align" as it were, so if you skip something, the process will break down and it is kind of tough to troubleshoot.

You need to set up user accounts for people accessing the VPN. Chances are if you have your NAS set up already, you have these accounts - these are the same accounts you use to grant access to NAS files and other resources. There is a nice detailed walkthrough on the Synology site showing how to do this.

Now you need to set up your Synology NAS so you can access it from outside your home network. This is accomplished through a service called "dynamic DNS" or "DDNS." But you don't really need to know too much about that because, built right into the DSM interface, is a program called "EZ-Internet" that will do all the work for you. For the easiest solution, you'll need to set up a user account with Synology, but that's free... and if you use their DDNS system (a "synology.me" domain name) then that's also free. They have a really super tutorial on getting this set up. Focus specifically on the EZ-Internet part of the tutorial - the QuickConnect stuff is neat and good to set up, but it won't work for VPN usage.

It took me something like (seriously) five minutes to get this part working from start to finish. Some of the steps may seem "scary" if you've not set it up before, but Synology has made this really painless and if you don't know what to do, accept the defaults. They're good defaults.

When that's done, you'll see your DDNS setup in the Synology control panel under "External Access."

The DDNS settings will show your NAS

Next, install the Proxy Server and VPN Server packages using the DSM Package Station package manager. Installing packages is a point-and-click sort of thing - just select them from the list of available packages and click "Install." Make sure you set them as "Running" if they don't automatically start up. Once they're installed, you'll see them in the list of installed packages.

Proxy Server and VPN Server packages installed

Let's configure the proxy server. From the application manager (the top-left corner icon in the DSM admin panel) select the "Proxy Server" application. There isn't much to this. Just go to the main "Settings" tab and...

  • Put your email address in the "Proxy server manager's email" box.
  • Make a note of the "Proxy server port" value because you'll need it later.

You can optionally disable caching on the proxy server if you're not interested in your Synology doing caching for you. I didn't want that - I wanted fresh data every time - so I unchecked that box. You can also optionally change the proxy server port but I left it as the default value provided.

Proxy server settings updated

Done with the proxy server! Close that out.

Now let's configure the VPN server. This is a bit more complex than the proxy server, but not too bad.

Again from the application manager (the top-left corner icon in the DSM admin panel) select the "VPN Server" application.

On the "Overview" pane in the VPN server you you will start out showing no VPNs listed. Once you've finished configuring the VPN, you'll see what I see - the NAS running the VPN and the VPN showing as enabled.

My overview tab after the VPN has been enabled

The VPN Server application offers several different VPN types to choose from, and you can read about the differences between them in this article. I chose PPTP for my VPN for compatibility reasons - it was the easiest to get set up and running, and I had some challenges trying to get different devices hooked up using the others. I am not specifically recommending you use PPTP; that's just what I'm using. The steps here show how to set up PPTP, but it isn't too different to set up the other VPN types.

On the PPTP tab, check the "Enable PPTP VPN server" option. That's pretty much it. That gets it working.

Check the PPTP enabled box

That's it for the VPN configuration.

To allow people to connect to the VPN on the NAS, we need to set up the firewall on the NAS. In the Synology DSM control panel, go to the "Security" tab on the left, then select "Firewall" at the top. Click the "Create" button to create a new firewall rule.

Start creating a new firewall rule

When prompted, choose the "Select from a list of built-in applications" option on the "Create Firewall Rules" page. This makes it super easy - the DSM already knows which ports to open for the VPN server.

Select from a list of built-in applications

Scroll through the list of applications and check the box next to "VPN Server (PPTP)" to open the firewall ports for the VPN.

Select the VPN from the list of applications

The firewall settings will be applied and you'll see it in the list of rules.

The last thing to do on the NAS is to set up the router port forwarding configuration. DSM can automatically configure your router right from the NAS to enable the VPN connection to come through.

In the DSM Control Panel, go to the "External Access" tab on the left and choose "Router Configuration" from the top. This is almost identical to the firewall configuration process. Click the "Create" button to add a new rule and you'll be prompted to choose from a list of existing applications. Do that, and select the VPN server from the list.

Choose "Built-in application" and select the VPN

Once it's configured, the DSM will issue some commands to your router and the rule will show up in the list.

The router rule in DSM control panel

That's it for your server configuration! Now you have to connect your clients to it.

The rest of this walkthrough shows how I got my Android 4 phone connected to the VPN. I don't have walkthroughs for other devices. Sorry.

Go to the main settings screen. From here, you're going to choose "More settings."

Choose "More settings"

Scroll down to the VPN settings and click that.

Choose "VPN"

For a PPTP VPN, select "Basic VPN" from the list.

Choose "Basic VPN"

Give your VPN a memorable name and put the DDNS name for your server in the "Server address" box.

Name your VPN and put the DDNS name as the server address

When you connect to the VPN you'll be asked for a username and password. Use the username and password from your user account on the Synology NAS. (Remember that first step of setting up user accounts? This is why.)

The last configuration step is to set the proxy server. Android 4 has this hidden inside the wifi configuration for each wifi hotspot. For the hotspot you're connected to, edit the settings and check the "Show advanced options" box. Fill in the proxy details using the local machine name of your NAS (not the DDNS name) and the proxy server port you have configured.

The proxy server configuration in the wifi hotspot

Now connect to the VPN and the wifi hotspot at the same time. Go back through the Settings => More settings => VPN path to find the VPN you configured. Connect to it and if you haven't previously set up credentials you'll be prompted. Connect to the wifi hotspot as well so it's using the proxy server.

When you're connected to both the VPN and the hotspot with the proxy settings, things work! You will see a little "key" at the top of the phone showing you're connected to a VPN. You can pull up some VPN details from there.

The VPN details will show connection information

And here's a screen shot of me surfing my blog through my VPN and proxy server, securely from an open wifi hotspot. Note the key at the top!

Secure browsing through VPN and proxy

I'm still working out a few things and may change my setup as time goes on, but this is the easiest DIY VPN/proxy setup I've seen.

Stuff I'd like to do next...

  • Switch from PPTP to a different VPN type (or maybe offer more than one VPN type so I can be compatible with devices requiring PPTP but offer better security for devices that can handle it).
  • Figure out if caching helps. I've found that some stuff is pretty fast, but other stuff is slow (or doesn't flow quite right through the proxy). I'm not sure why that is. Maybe there are additional proxy settings I'm not aware of yet?

And, finally, again - thanks to Cory Doctorow for prodding me into researching this; and thanks to Synology for making it easy. Part of what Doctorow was saying at his visit is that Security is Hard, particularly the implementation of decent security for the lay person. Synology is as close to point-and-click easy setup as I've ever seen for this.

If you're looking for one of these devices, the Synology DS214se is pretty budget-friendly right now, though the Synology DS414j might give you a little room to grow. I have the DS1010+, which is basically the previous model of the Synology DS1513+, which is more spendy but is super extensible. All of the Synology products run the DSM so you really can't go wrong.

powershell, teamcity

We have a nice TeamCity build server at work and we somewhat-recently updated it to use a MySQL database instead of XML for the data storage (like for the VCS roots).

We have a number of service accounts we use for interacting with the version control systems and they periodically need their passwords changed. It used to be that we could modify the XML document search-and-replace style, but now it's hidden in the database somewhere and is less straightforward to update.

Thankfully, TeamCity offers a REST API you can work with, so I decided to play with PowerShell and the Invoke-RestMethod command to automate the drudgery of going through the something-like-50 VCS roots we have defined and updating the passwords for selected accounts.

Here's the code for a small one-function module:

<#
.Synopsis
   Updates the password for a user account in TeamCity associated with VCS root entries.
.DESCRIPTION
   Iterates through the VCS roots defined in TeamCity and updates the password associated with the specified user for all VCS roots.
.EXAMPLE
   $credential = Get-Credential
   Update-TeamCityVcsAccount -TeamCityUrl "http://your-teamcity-dash/" -TeamCityCredential $credential -VcsUserName "serviceaccount" -VcsPassword "TheNewPassword"
.NOTES
   This command uses the TeamCity REST API to iterate through the VCS roots and update the password for matching accounts.
#>
function Update-TeamCityVcsAccount
{
    [CmdletBinding()]
    Param
    (
        # The URL to the TeamCity dashboard.
        [Parameter(Mandatory=$true,
                   ValueFromPipeline=$false)]
        [ValidateNotNull()]
        [ValidateNotNullOrEmpty()]
        [Uri]
        $TeamCityUrl,

        # The credentials of the TeamCity administrator account to make changes.
        [Parameter(Mandatory=$true,
                   ValueFromPipeline=$false)]
        [ValidateNotNull()]
        [ValidateNotNullOrEmpty()]
        [PSCredential]
        $TeamCityCredential,

        # The username of the VCS user that should be updated.
        [Parameter(Mandatory=$true,
                   ValueFromPipeline=$false)]
        [ValidateNotNull()]
        [ValidateNotNullOrEmpty()]
        [String]
        $VcsUserName,

        # The new password for the VCS user.
        [Parameter(Mandatory=$true,
                   ValueFromPipeline=$false)]
        [ValidateNotNull()]
        [ValidateNotNullOrEmpty()]
        [String]
        $VcsPassword
    )

    Begin
    {
        $updated = @()
        $progressActivity = "Updating VCS root passwords for $VcsUserName..."
    }
    Process
    {
        $vcsRootsUri = New-Object -TypeName System.Uri -ArgumentList $TeamCityUrl, "/httpAuth/app/rest/vcs-roots"
        $allRoots = Invoke-RestMethod -Uri $vcsRootsUri -Method Get -Credential $TeamCityCredential
        foreach($href in $allRoots.'vcs-roots'.'vcs-root'.href)
        {
            $rootHref = New-Object -TypeName System.Uri -ArgumentList $TeamCityUrl, $href
            $vcsRoot = Invoke-RestMethod -Uri $rootHref -Method Get -Credential $TeamCityCredential
            $currentVcsUserName = $vcsRoot.'vcs-root'.properties.property | Where-Object { $_.name -eq "user" } | Select-Object -ExpandProperty "value"
            if($currentVcsUserName -ne $VcsUserName)
            {
                continue;
            }

            # secure:svn-password == Subversion Repo
            # secure:tfs-password == TFS Repo
            # Making the assumption all the password fields have this
            # name format...
            $propToChange = $vcsRoot.'vcs-root'.properties.property  | Where-Object { ($_.name -like 'secure:*') -and ($_.name -like '*-password') }  | Select-Object -ExpandProperty "name"
            $propHref = New-Object -TypeName System.Uri -ArgumentList $rootHref, "$href/properties/$propToChange"

            Write-Progress -Activity $progressActivity -Status "VCS root: $href"
            Invoke-RestMethod -Uri $propHref -Method Put -Credential $TeamCityCredential -Body $VcsPassword | Out-Null
            $updated += $propHref;
        }
    }
    End
    {
        Write-Progress -Activity $progressActivity -Completed -Status "VCS roots updated."
        return $updated
    }
}

Export-ModuleMember -Function Update-TeamCityVcsAccount

Save that as TeamCity.psm1 and then you can do this:

Import-Module .\TeamCity.psm1
$credential = Get-Credential
Update-TeamCityVcsAccount -TeamCityUrl "http://your-teamcity-dash/" -TeamCityCredential $credential -VcsUserName "serviceaccount" -VcsPassword "TheNewPassword"

When you run Get-Credential you'll be prompted for some credentials. Enter your TeamCity username and password. Fill in the appropriate values for the parameters and you'll see progress rolling by for the password updates. The return value is the list of VCS root URLs that got updated.

Now that I have a reasonably-working pattern for this, it should be easy enough to use the REST API on TeamCity to automate other common admin tasks we do. Neat!
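
For example, listing all of the build configurations on the server is just another call against the same REST API. Here's a quick sketch - the buildTypes endpoint works the same way as the vcs-roots one above; adjust the URL for your own server:

# Dump the id, name, and project for every build configuration on the server.
$credential = Get-Credential
$response = Invoke-RestMethod -Uri "http://your-teamcity-dash/httpAuth/app/rest/buildTypes" -Method Get -Credential $credential
$response.buildTypes.buildType | Select-Object id, name, projectName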

vs, azure

I have an MSDN subscription at work which comes with some Azure services like virtual machines. I'm using one of these VMs to explore the VS 14 CTP.

The problem is... port 3389 isn't open through the firewall at work, so using the default port for Terminal Services doesn't work for me.

Luckily, you can change the public port your VM uses for Terminal Services. Since I won't be hosting a web site on this VM, switching it to port 80 (which is open through the firewall at work) makes things easy.

First, open up the VM in the Azure Portal and click the "Settings" button.

Click the Settings button on the VM

Now click the "Endpoints" entry on the list of settings.

Click Endpoints in the settings menu

We want the public port for Terminal Services to be port 80. Click the Terminal Services entry to edit it.

We want TS on port 80

Update the public port to 80 and click the Save button at the top.

Update the public port to 80

Now go back to the main VM dashboard and click the "Connect" button.

Click the Connect button

A small .rdp file will download. If you open it in a text editor it will look like this:

full address:s:yourmachine.cloudapp.net:3389
prompt for credentials:i:1

Change that port at the end to 80.

full address:s:yourmachine.cloudapp.net:80
prompt for credentials:i:1

Save that and double-click the file to start a Terminal Service session. Boom! Done.
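
If you end up re-downloading that .rdp file a lot, you can script the tweak instead of hand-editing it every time. A quick PowerShell sketch (the file name is just a placeholder for whatever the portal hands you):

# Rewrite the Terminal Services port in the downloaded connection file.
(Get-Content .\yourmachine.rdp) -replace ':3389', ':80' | Set-Content .\yourmachine.rdp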

autofac, github

All Autofac documentation has moved to our official documentation site at http://autofac.readthedocs.org/.

Since moving from Google Code to GitHub we've had documentation spread all over, some of which was getting pretty stale from not being maintained. We wanted to get control over that and set a good stage going forward, so we consolidated everything to our site on Read the Docs.

Doing this provides a lot of benefit:

  • Documentation is searchable.
  • You can get the docs in multiple formats (online, PDF, epub).
  • Docs are readable on a mobile browser.
  • We can start versioning the documentation.
  • We can update docs in one spot, inside the source tree, and not worry about wikis all spread out getting stale.

As part of this, you will see some changes to our wikis:

  • All of the pages in our GitHub wiki have been removed except for the release notes pages. We'll only be maintaining release notes in the wiki. If you want docs, you need to go to the doc site. This may break some links in things like StackOverflow answers, but the other choice was to keep a bunch of placeholder redirect pages in place, which would be just painful to maintain. Instead we ripped the bandage off.
  • All of the pages in the Google Code wiki have been cleared out and replaced with some text pointing to the new documentation location. There are a substantially larger number of articles and answers linking to the old wiki and that wiki doesn't change anymore so putting some pseudo-redirects in there was a simple one-time effort.

Apologies if this causes some issue with broken links.

It's taken a long time to get here, but we think this will provide a better documentation experience for everyone now and going forward.

personal, tv, costumes, halloween

For Halloween this year I went as the Tenth Doctor from Doctor Who (originally played by David Tennant).

David Tennant as the Tenth Doctor

I make my costume every year (well, pretty much every year) and I enjoy sewing so it was fun to take this on. However, I don't normally post "behind-the-scenes" stuff and there are folks who don't really realize what goes into making a costume so I figured this year I'd do it. Oh, and if you want to see the pictures in a larger format, I have an annotated photo album on Google+.

Before doing anything else, I did some research. The Making My Tennant Suit blog was the best resource I found for info on the suit, the fabrics, and so forth. It has a really good fabric comparison showing different fabrics and sources that match/approximate the fabric from the suit. I also gathered a few pictures from the web to help me pick the right pieces.

I was due for some new glasses, so I picked some out that both look good on me (IMHO) and are close to the ones seen in the show.

My new Tennant-style glasses

I went to Jo-Ann Fabrics and searched for a pattern. None were exact, but I found that Vogue pattern 8890 was pretty close. I figured I could take "View A" jacket from the pattern, change it from a two-button jacket to four buttons, and add a custom breast pocket. The "View D" pants could be done unmodified.

Vogue Pattern 8890

The pattern was actually pretty ambitious. Given that it wasn't a "costume pattern," it was fully lined with all the extra stuff you'd find if you bought a suit - nicely finished pockets, extra give/pleats in the lining for movement... Definitely the most complex thing I've taken on to date.

The fabric I picked was ordered online from Hancock Fabrics. It's item #3859071 "Brown and Teal Pinstripe Suiting." I got it on sale half-off so I bought something like eight yards so I wouldn't run out if I made a mistake or had to lengthen the pants/sleeves on the suit.

My Tenth Doctor fabric from Hancock

This particular fabric was a little challenging to work with because it was somewhat light and stretchy. When you work with cotton or wool, it's not really stretchy so you can cut and pin it without worrying about it moving on you or changing shape. With this, I had to be really careful about pinning it, making sure I wasn't stretching it while it was getting cut, and so on.

The buttons I used were some pretty standard tortoise shell ones off the shelf.

The buttons I used on the suit

Thread was Coats & Clark #8960. It was the perfect brown to match the fabric so hems and seams were nice and hidden. I think I went through three of these spools of thread.

Coats & Clark #8960

The pocket insides, waistband lining, and other strong internals were all done with some off-the-shelf brown cotton twill. You don't really see this from the outside, but it is a nice shade to offset the suiting. Not that I had a lot of choice; there was only one color of brown twill available when I went shopping and I wasn't feeling too picky.

My cotton twill

After I got all the materials together, I got down to work. I ironed the pattern (yes, ironed the pattern - on low heat, to make it easy to cut out and all flat), cut it out, and pinned the pattern to the fabric. There were something like 15 pieces to the pants and 30 pieces to the jacket.

Pinning the pattern

I did the pants first (though I didn't get any pictures of the making of the pants). Normally I've found Vogue patterns run a little small, so I took my measurements and did the pants the next size up. This pattern seemed to run pretty true to size, so I had to take the pants in when they were done. I haven't yet figured out how to fit a pattern on myself before it's finished.

Doing the pants first helped me figure out that I needed to make the jacket true to size.

The first part of the jacket to be done is the main body outside. In this picture you can see I've replaced the breast pocket from the pattern with one of my own design so it matches the Tenth Doctor. I did that without a pattern, sort of taking an average measurement on width/height of pockets on other garments and fudging something together. This custom pocket is about 5.5" wide and 6" tall.

The outside jacket body

After the body of the jacket was done, it was time to sew the arms in. Putting arms in a jacket is always a real pain because the fabric at the top part of the arm is larger than the arm hole on the jacket body. They do that so you can move around, but it means you have to be really careful about putting the arm in and evenly distributing the extra fabric or you'll get gathers along the seam where the fabric folds over onto itself. This is a particular problem with stretchy fabric, which likes to move around a lot. I had to rip out and redo a couple of areas to remove the gathering, but I got the arms in.

The right sleeve sewn in

Here's the jacket with both sleeves sewn in but the lining not yet put in. The white stuff you see on the collar is interfacing - a sort of mesh-like fabric that you attach to make other fabric less flexible. You have interfacing in collars and cuffs, for example. I used "fusible interfacing" which is basically iron-on to attach. This pattern called for "hair canvas" interfacing, which is really expensive and much harder to work with. If I was making this as a suit and not as a costume, I probably would have tried to work with the hair canvas.

Both sleeves in, but no lining

With the outside done, it was time to do the lining. The first bit of lining was the inside front - the part with the inside pocket. Here's the inside of the right front. You can see in the image a diagonal line where the collar is intended to fold over. You can also see a small, thin rectangle where the inside pocket will eventually go.

The inside right front, minus the inside pocket

Here's the inside right front after getting the inside pocket in. You can see a small loop hanging down off the top of the pocket that will be used to button the pocket closed. The pattern called for 2" of ribbon (I used bias tape) for the loop, but that turned out to be too small to fold around the button that will be later attached below the pocket. If I were to do it again, I'd use 3" or 3.5" of ribbon. You can always move the button down a bit, but I had to sew my button right on the pocket welt (the twill "lip" lining the pocket).

The inside right front, this time with the inside pocket

Here's what the lining looks like fully assembled - both inside front pieces, the back, and the sleeves. If you've never lined a coat before, it's sort of like making a second copy of the coat, just inside-out. Then you take the lining, put it in the jacket, and sew along the edges. Basically.

In the picture on the left you see the inside pocket as you'll view it when wearing the jacket; on the right is the other side - that brown square is the other inside pocket.

The lining, fully assembled

Once you put the lining in, you have to attach it. The back was able to be machine-sewn in, but the sleeves required hand sewing. Here you see I have the sleeve lining pinned in place so I can hand sew it in.

Sleeve lining pinned in place

Here's the same sleeve lining after the hand sewing. I also have the sleeve buttons attached, so this sleeve is done.

The sleeve with the lining and buttons attached

Once the lining is in, the last thing to happen is the front buttons. Here's the jacket entirely finished. You can see in the photo the white marks around the button holes on the front where I was sketching out the button locations.

Finished jacket with button hole markings

I did a little cleanup on the markings and here's how it turned out.

First time wearing the complete jacket

And, once the whole costume was on, here's how it looked. I think it turned out pretty well.

Travis as the Tenth Doctor

For those interested: The shoes are unbleached white Converse Chuck Taylors. The shirt is one I already had; any old white dress shirt will do. The sonic screwdriver is the toy version that's been out for a while. The tie is a maroon polka dot tie by Chevalier.

I don't know how much time it took exactly, but I know that I watch TV/Netflix while I'm working and I made it through three seasons of Kyle XY, the Jekyll miniseries, a couple of movies, and half a season of The Blacklist... and I wasn't watching something the whole time. So... it took a while.

As far as cost, that's another thing I didn't really keep track of, but roughly (guessing on a few of these)...

  • Shoes: $45
  • Tie: $15
  • Pinstripe Suiting: $50
  • Lining: $10
  • Interfacing: $10
  • Felt (for the collar): $5
  • Twill: $10
  • Thread, buttons, zipper, notions: $30

So... uh... $175? Give or take. It's not cheap. Even if you take out the cost for the shoes and tie, which I can wear elsewhere, you're still looking at over $100. Plus the time.

This definitely increases my admiration and respect for folks who do this on a convention circuit.

Again, if you want to see the pictures in a larger format, I have an annotated photo album on Google+.

testing, culture

One of the projects I work on has some dynamic culture-aware currency formatting stuff and we, of course, have tests around that.

I'm in the process of moving our build from Windows Server 2008R2 to Windows Server 2012 and I found that a lot of our tests are failing. I didn't change any of the code, just updated a couple of lines of build script. What gives?

It appears Windows Server 2012 has different culture settings installed than the previous platforms. Per the documentation, "Windows versions or service packs can change the available cultures" and it appears I'm getting hit by that.

I cobbled together a quick program to do some testing using LINQPad.

var nfi = CultureInfo.CreateSpecificCulture("as-IN").NumberFormat;
Console.WriteLine("{0}:{1}:{2}",
  nfi.CurrencyNegativePattern,
  nfi.CurrencyPositivePattern,
  nfi.CurrencySymbol);

The results were the same on Windows 7 and Windows Server 2008R2 but different on Windows Server 2012:

Item                      Windows 7   Windows 2008R2   Windows 2012
CurrencyNegativePattern   12          12               12
CurrencyPositivePattern   1           1                2
CurrencySymbol

Notice the positive pattern is different? Yeah. That's not the only culture or item that differs across the installed cultures.

So... now I have to figure out a way to craft our tests to be a little more... dynamic(?)... about the expected value vs. the actual value.
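
In the meantime, here's a quick way to compare the culture data between machines - a PowerShell sketch that dumps the same NumberFormat values the LINQPad snippet above looks at, so you can diff the output from each build server. The culture list is just an example:

# Dump currency formatting info for a few cultures as installed on THIS machine.
$cultures = "as-IN", "en-US", "fr-FR"
foreach ($name in $cultures)
{
    $nfi = [System.Globalization.CultureInfo]::CreateSpecificCulture($name).NumberFormat
    "{0}: negative={1} positive={2} symbol={3}" -f $name, $nfi.CurrencyNegativePattern, $nfi.CurrencyPositivePattern, $nfi.CurrencySymbol
}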

halloween, costumes

It was raining this year and I think that put a damper on the trick-or-treat count. We also didn't put out our "Halloween projector" that puts a festive image on our garage, so I think the rain, combined with lack of decor, resulted in quite a bit fewer kids showing up. When it was busy, it was really busy; but when it wasn't... it was dead.

2014: 176 trick-or-treaters.

The graph is starting to look like a big mess so I will probably start keeping more like "the last five years" on there. I'll also keep an overall average graph to keep the bigger picture.

Average Trick-or-Treaters by Time Block

The table's also starting to get pretty wide; might have to switch it so time block goes across the top and year goes down.

Cumulative data:

Time Block       2006   2007   2008   2009   2010   2011   2013   2014
6:00p - 6:30p      52      5     14     17     19     31     28     19
6:30p - 7:00p      59     45     71     51     77     80     72     54
7:00p - 7:30p      35     39     82     72     76     53    113     51
7:30p - 8:00p      16     25     45     82     48     25     80     42
8:00p - 8:30p       0     21     25     21     39      0      5     10
Total             162    139    237    243    259    189    298    176

My costume this year was the Tenth Doctor from Doctor Who. Jenn was Anna from Frozen. We both made our costumes and I posted a different blog article walking through how I made the suit. Phoenix decided she was going to be Sleeping Beauty this time, which was a time-saver for us since she already has a ton of princess costumes.

Travis and Jenn

ndepend

NDepend is awesome and I use it to analyze all sorts of different projects.

One of the nice things in NDepend is you can define queries that help qualify what is your code (JustMyCode) and what isn't (notmycode).

I've seen two challenges lately that make rule analysis a bit tricky.

  • async and await: These generate state machines behind the scenes and NDepend always flags the generated code as complex (because it is). However, you can't just exclude that code, because the generated state machine effectively contains your code - excluding the state machine will exclude some of your code too.
  • Anonymous types: I see these a lot in MVC code, for example, where the anonymous type is being used as a dictionary of values to truck around.

I haven't figured out the async and await thing yet... but here's how to exclude anonymous types from the JustMyCode set of code:

First, in the "Queries and Rules Explorer" window in your project, go to the "Defining JustMyCode" group.

Defining JustMyCode

In there, create a query like this:

// <Name>Discard anonymous types from JustMyCode</Name>
notmycode
from t in Application.Types where
t.Name.Contains("<>f__AnonymousType")
select new { t, t.NbLinesOfCode }

Save that query.

Now when you run your code analysis, you won't see anonymous types causing any violations in queries.

windows

I develop using an account that is not an administrator because I want to make sure the stuff I'm working on will work without extra privileges. I have a separate local machine administrator account I can use when I need to install something or change settings.

To make my experience a little easier, I add my user account to a few items in Local Security Policy to allow me to do things like restart the machine, debug things, and use the performance monitoring tools.

In setting up a new Windows 2012 dev machine, I found that the domain Group Policy had the "Shut down the system" right locked down so there was no way to allow my developer account to shut down or restart. Painful.

To work around this, I created a shortcut on my Start menu that prompts me for the local machine administrator password and restarts using elevated credentials.

Here's how:

Create a small batch file in your Documents folder or some other accessible location. I called mine restart-elevated.bat. Inside it, use the runas and shutdown commands to prompt for credentials and restart the machine:

runas /user:YOURMACHINE\administrator "shutdown -r -f -d up:0:0 -t 5"

The shutdown command I've specified there will...

  • Restart the computer.
  • Force running applications to close.
  • Alert the currently logged-in user and wait five seconds before doing the restart.
  • Set the shutdown reason code as "user code, planned shutdown, major reason 'other,' minor reason 'other.'"
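
If you'd rather have PowerShell than a raw batch file, roughly the same thing works with Start-Process, which also prompts for the password. This is a sketch that assumes the local admin account is literally named "administrator":

# Prompt for the local admin credentials, then run shutdown.exe as that account.
$cred = Get-Credential "YOURMACHINE\administrator"
Start-Process -FilePath "shutdown.exe" -ArgumentList "-r", "-f", "-d", "up:0:0", "-t", "5" -Credential $cred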

Now that you have the batch file, throw it on your Start menu. Open up C:\Users\yourusername\AppData\Roaming\Microsoft\Windows\Start Menu and make a shortcut to the batch file. It's easy if you just right-drag the script in there and select "Create shortcut."

Give the shortcut a nice name. I called mine "Restart Computer (Elevated)" so it's easy to know what's going to happen.

I also changed the icon so it's not the default batch file icon:

  • Right-click the shortcut and select "Properties."
  • On the "Shortcut" tab, select "Change Icon..."
  • Browse to %SystemRoot%\System32\imageres.dll and select an icon. I selected the multi-colored shield icon that indicates an administrative action.

Change the icon to something neat

Finally, hit the Start button and go to the list of applications installed. Right-click on the new shortcut and select "Pin to Start."

Restart shortcut pinned to Start menu

That's it - now when you need to restart as a non-admin, click that and enter the password for the local administrator account.

windows

I was setting up a new dev machine the other day and whilst attempting to install TestDriven I got a popup complaining about a BEX event.

Looking in the event log, I saw this:

Faulting application name: TestDriven.NET-3.8.2860_Enterprise_Beta.exe, version: 0.0.0.0, time stamp: 0x53e4d386
Faulting module name: TestDriven.NET-3.8.2860_Enterprise_Beta.exe, version: 0.0.0.0, time stamp: 0x53e4d386
Exception code: 0xc0000005
Fault offset: 0x003f78ae
Faulting process id: 0xe84
Faulting application start time: 0x01cfe410a15884fe
Faulting application path: E:\Installers\TestDriven.NET-3.8.2860_Enterprise_Beta.exe
Faulting module path: E:\Installers\TestDriven.NET-3.8.2860_Enterprise_Beta.exe
Report Id: df1b87dd-5003-11e4-80cd-3417ebb288e7

Nothing in there about a BEX error, though... odd.

Doing a little searching yielded this forum post which led me to disable the Data Execution Prevention settings for the installer.

  • Open Control Panel.
  • Go to the "System and Security" section.
  • Open the "System" option.
  • Open "Advanced System Settings."
  • On the "Advanced" tab, click the "Settings..." button under "Performance."
  • On the "Data Execution Prevention" tab you can either turn DEP off entirely or specifically exclude the installer using the whitelist box provided. (DEP is there to help protect you, so it's probably better to just exclude the installer unless you're having other issues.)

vs, sublime comments edit

As developers, we've all argued over tabs vs. spaces, indentation size, what line endings to use, and so on.

And, of course, each project you work on has different standards for these things. Because why not.

What really kills me about these different settings, and what probably kills you, is remembering to reconfigure all your editors to match the project settings. Then when you switch, reconfigure again.

The open source project EditorConfig aims to rescue you from this nightmare. Simply place an .editorconfig file in your project and your editor can pick up the settings from there. Move to the next project (which also uses .editorconfig) and everything dynamically updates.

I don't know why this isn't the most popular Visual Studio add-in ever.

Here's the deal - this is the .editorconfig I use. I like tab indentation except in view markup. We're a Windows shop, so lines end in CRLF. I hate trailing whitespace. I also like to keep the default settings for some project/VS files.

root = true

[*]
end_of_line = CRLF
indent_style = tab
trim_trailing_whitespace = true

[*.ascx]
indent_style = space
indent_size = 4

[*.aspx]
indent_style = space
indent_size = 4

[*.config]
indent_style = space
indent_size = 4

[*.cshtml]
indent_style = space
indent_size = 4

[*.csproj]
indent_style = space
indent_size = 2

[*.html]
indent_style = space
indent_size = 4

[*.resx]
indent_style = space
indent_size = 2

[*.wxi]
indent_style = space
indent_size = 4

[*.wxl]
indent_style = space
indent_size = 4

[*.wxs]
indent_style = space
indent_size = 4

Note there's a recent update to the EditorConfig format that supports multiple matching, like:

[{*.wxl,*.wxs}]
indent_style = space
indent_size = 4

...but there's a bug in the Sublime Text plugin around this so I've expanded those for now to maintain maximum compatibility.

I've added one of these to Autofac to help us and our contributors. It makes it really easy to switch from my preferred tab settings to the spaces Autofac uses. No more debate, no more forgetting.

Now, get out there and standardize your editor settings!

testing, vs comments edit

I get a lot of questions from people both at work and online asking for help in troubleshooting issues during development. I'm more than happy to help folks out because I feel successful if I help others to be successful.

That said, there's a limited amount of time in the day, and, you know, I have to get stuff done, too. Plus, I'd much rather teach a person to fish than just hand them the fish repeatedly, and I don't want to be the roadblock stopping folks from getting things done. So I figured it'd be good to write up the basic steps I go through when troubleshooting stuff as a .NET developer in the hope it will help others.

Plus - if you ever do ask for help, this is the sort of stuff I'd ask you for, sort of along the lines of calling tech support and having them ask you to reboot your computer first or check that it's plugged in. That sort of thing.

Soooo... assuming you're developing an app, not trying to do some crazy debug-in-production scenario...

Change Your Thinking and Recognize Patterns

This is more of a "preparation for debugging" thing. It is very easy to get intimidated when working with new technology or on something with which you're not familiar. It's also easy to think there's no way the error you're seeing is something you can handle or that it's so unique there's no way to figure it out.

  • Don't get overwhelmed. Stop and take a breath. You will figure it out.
  • Don't raise the red flag. Along with not getting overwhelmed... unless you're five minutes from having to ship and your laptop just lit on fire, consider not sending out the all-hands 'I NEED HELP ASAP' email with screen shots and angry red arrows wondering what this issue means.
  • Realize you are not a special snowflake. That sounds mean, but think about it - even if you're working on the newest, coolest thing ever built, you're building that with components that other people have used. Other folks may not have received literally exactly the same error in literally exactly the same circumstances but there's a pretty high probability you're not the first to run into the issue you're seeing.
  • Don't blame the compiler. Sure, software is buggy, and as we use NuGet to pull in third-party dependencies it means there are a lot of bits out of your control that you didn't write... and sure, they might be the cause of the issue. But most likely it's your stuff, so look there first.
  • Use your experience. You may not have seen this exact error in this exact spot, but have you seen it elsewhere? Have you seen other errors in code similar to the code with the error? Do you recognize any patterns (or anti-patterns) in the code that might clue you in?

Read the Exception Message

This is an extension of RTFM - RTFEM. I recognize that there are times when exception messages are somewhat unclear, but in most cases it actually does tell you what happened with enough detail to start fixing the issue.

And don't forget to look at the stack trace. That can be just as helpful as the message itself.

Look at the Inner Exception

Exceptions don't always just stop with a message and a stack trace. Sometimes one error happens, which then causes a sort of "secondary" error that can seem misleading. Why did you get that weird exception? You're not calling anything in there! Look for an inner exception - particularly if you're unclear on the main exception you're seeing, the inner exception may help you make sense of it.

And don't forget to follow the inner exception chain - each inner exception can have its own inner exception. Look at the whole chain and their inner messages/stack traces. This can really help you pinpoint where the problem is.
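
If I'm really stuck I'll sometimes just dump the whole chain. Exception.ToString() already includes the inner exceptions, but a tiny helper like this sketch (the class and method names are just mine for illustration) makes it easy to eyeball each link in the chain:

using System;

static class ExceptionDumper
{
    // Walk the InnerException chain and write out each exception's type, message, and stack trace.
    public static void Dump(Exception ex)
    {
        for (var current = ex; current != null; current = current.InnerException)
        {
            Console.WriteLine(current.GetType().FullName);
            Console.WriteLine(current.Message);
            Console.WriteLine(current.StackTrace);
            Console.WriteLine("----");
        }
    }
}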

Boogle the Message

You know, "Bing/Google" == "Boogle" right? Seriously, though, put the exception message(s) into your favorite search engine of choice and see what comes up.

  • Remove application-specific values - stuff like variable names or literal string values. You're probably more interested in places where that type of exception happened than in literally the exact same exception.
  • Add "hint words" - like if it happened in an MVC application, throw "MVC" in the query. It can help narrow down the scope of the search.
  • Don't give up after the first search - just because the first hit isn't exactly the answer doesn't mean the answer isn't out there. Modify the query to see if you can get some different results.

Ask a Rubber Duck

Rubber duck debugging is a pretty well-known strategy where you pretend to ask a rubber duck your question and, as you are forced to slow down and ask the duck... you end up answering your own question.

Seriously, though, step back from the keyboard for a second and think about the error you're seeing and what might be causing it. It's easy to get mental blinders on; take 'em off!

Break in a Debugger

Put a breakpoint on the line of code throwing the exception. Use the various debugging windows in Visual Studio to look at the values of the variables in the vicinity. Especially if you're getting something like a NullReferenceException you can pretty quickly figure out what's null and what might be causing trouble.

Step Into Third-Party Code

Many popular NuGet packages put symbol/source packages up on SymbolSource.org. If you configure Visual Studio to use these packages you can step into the source for these. You can also step into Microsoft .NET framework source (the SymbolSource setup enables both scenarios).

Do this!

If you don't know what's going on, try stepping into the code. Figure out why the error is happening, then follow it back to figure out the root cause.

Use a Decompiler

If you can't step into the third-party source, try looking at the third-party stuff in a decompiler like Reflector, JustDecompile, dotPeek, or ILSpy.

You can use the stack trace to narrow down where the issue might be going on and try tracing back the root cause. You might not get an exact line, but it'll narrow it down for you a lot.

Create a Reproduction

Usually the crazy hard-to-debug stuff happens in a large, complex system, and figuring out why can feel overwhelming. Try creating a reproduction in a smaller, standalone project. Doing this is a lot like rubber duck debugging, but it gives you a bit more concrete information to work with.

  • As you work through creating the reproduction, the number of moving pieces becomes easier to visualize.
  • If you can easily reproduce the issue in a smaller environment, you can troubleshoot with far fewer moving pieces, which is easier than working in the complex environment. Then you can take that info back to the larger system.
  • If you can't easily reproduce the issue then at least you know where the problem isn't. That can sometimes be just as helpful as knowing where the issue is.

Next Steps

Once you've gotten this far, you probably have a lot of really great information with which you can ask a very clear, very smart question. You've probably also learned a ton along the way that you can take with you on your next troubleshooting expedition, making you that much better at what you do. When you do ask your question (e.g., on StackOverflow) be sure to include all the information you've gathered so people can dive right into answering your question.

Good luck troubleshooting!

personal, home comments edit

My daughter and I are both big Doctor Who fans. She has a bathroom that she primarily uses, so we decided to make that into a ThinkGeek extravaganza of TARDIS awesomeness. Here's what we got:

Doctor Who TARDIS Bath Mat

The bath mat is pretty decent. It is smaller than the throw rug and works well in the bathroom.

Doctor Who TARDIS Shower Curtain

The shower curtain is OK, but it is a thinner plastic than I'd like. I really wish it was fabric, like what you'd put in front of a plastic curtain; or maybe a nice thick plastic... but it's not. The first one we received arrived damaged - the print on it had rubbed off and one of the metal grommets at the top was ripped out. ThinkGeek support was super awesome and sent us a new one immediately.

Of course, then my stupid cat decided to chew through a section on the bottom of the new one so I had to do my best to disguise that, but it still irritates me. Damn cat.

Doctor Who TARDIS Ceramic Toothbrush Holder

The toothbrush holder is really nice. Looks good and nice quality. My three-year-old daughter's toothbrush is just a tad short for it and falls in, but that's not a fault in the holder. She just needs a bigger toothbrush.

Doctor Who 3-Piece Bath Towel Set

We got two sets of these towels and they are awesome. Very thick, very plush. I wish all towels were nice like this.

Doctor Who TARDIS Shower Rack

We actually have the shower rack hanging on our wall because our shower is one of those fiberglass inserts rather than tile, so the shower head doesn't sit flush with the wall. We have some hair supplies in there. One problem I ran into with this was that the little stickers didn't adhere very well. I had to do a little super glue work to get the stickers stuck down permanently. It could have just been this one unit, but it was less than optimal.

The bathroom looks really good with all this stuff in it, and my daughter is super pleased with it.

vs, wcf comments edit

I've run into this issue a couple of times now and I always forget what the answer is, so... blog time.

We have some WCF service references in our projects and we were in the process of updating the generated code (right-click the reference, "Update Service Reference") when we got an assembly reference error:

Could not load file or assembly AssemblyNameHere, Version=1.0.0.0, Culture=neutral, PublicKeyToken=1234567890123456 or one of its dependencies. The located assembly's manifest definition does not match the assembly reference.

Normally you can fix this sort of thing by adding a binding redirect to your app.config or web.config and calling it a day. But we had the binding redirect in place for the assembly already. What the... ?!
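
If you haven't seen one, a binding redirect lives in the runtime section of app.config/web.config and looks something like this (sketched with the assembly info from the error above; the version numbers here are just examples):

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- Redirect references to older versions over to the version actually deployed. -->
        <assemblyIdentity name="AssemblyNameHere" publicKeyToken="1234567890123456" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-1.0.0.0" newVersion="1.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>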

As it turns out, svcutil.exe and the service reference code generation process don't use binding redirects from configuration. It didn't matter where we put the redirect; we still got the error.

The fix is to reduce the set of assemblies with types that get reused. Right-click the service reference and select "Configure Service Reference." Change the setting to reuse types in referenced assemblies so it's very specific about which assemblies get included. If you aren't actually reusing types from a particular assembly (especially third-party assemblies you aren't building), don't include it in the list.

We were really only reusing types in one assembly, not the whole giant set of assemblies referenced. Cleaning that up removed the need for the binding redirect and everything started working again as normal.

Note: If you really want to use binding redirects, you can add them to devenv.exe.config so Visual Studio itself uses them. Not awesome, and I wouldn't recommend it, but... technically possible.

testing comments edit

I've noticed that some of our unit tests are running a little long and I'm trying to figure out which ones are taking the longest. While TeamCity has some nice NUnit timing info, it's a pain to build the whole thing on the build server when I can just try things out locally.

If you have NUnit writing XML output in your command line build (using the /xml: switch) then you can use Log Parser to query the XML file and write a little CSV report with the timings in it:

LogParser.exe "SELECT name, time FROM Results.xml#//test-case ORDER BY time DESC" -i:xml -fMode:Tree -o:csv

A little fancier: take all of the tests across several reports and write the output to a file rather than the console:

LogParser.exe "SELECT name, time INTO timings.csv FROM *.xml#//test-case ORDER BY time DESC" -i:xml -fMode:Tree -o:csv

And fancier still: Take all of the reports across multiple test runs and get the average times for the tests (by name) so you can see which tests over time run the longest:

LogParser.exe "SELECT name, AVG(time) as averagetime INTO timings.csv FROM *.xml#//test-case GROUP BY name ORDER BY averagetime DESC" -i:xml -fMode:Tree -o:csv

blog comments edit

Now that I've moved to GitHub Pages for my blog I find that I sometimes forget what all the YAML front matter should be for a blog entry so I end up copy/pasting.

To make the job easier, I've created a little snippet/template for Sublime Text for blog entries. Take this XML block and save it in your User package as Empty GitHub Blog Post.sublime-snippet and it'll be available when you switch syntax to Markdown:

<snippet>
  <content><![CDATA[
---
layout: post
title: "$1"
date: ${2:2014}-${3:01}-${4:01} -0800
comments: true
tags: [$5]
---
$6
]]></content>
  <scope>text.html.markdown</scope>
</snippet>

I've added placeholders so you can tab your way through each of the front matter fields and finally end up at the body of your post.
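
Just so you can see where you end up, here's roughly what the expanded snippet looks like after tabbing through the placeholders (the title, date, and tags here are made up):

---
layout: post
title: "My Shiny New Post"
date: 2014-10-01 -0800
comments: true
tags: [blog]
---
Body of the post goes here.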

blog, github comments edit

Well, I finally did it.

For quite some time I've been looking at migrating my blog away from Subtext. At first I wanted to go to WordPress, but then... evaluating all the options, the blog engines, etc., I started thinking "less is more." I originally started out using Subtext because I thought I'd want to extend the blog to do a lot of cool things. It was a great .NET blog platform and I'm a .NET guy, so it seemed perfect.

The problem was... I didn't do any of that.

I contributed as I could to the project, and there was a lot of great planning for the ever-impending "3.0" release, but... it just never came together. People got busy, stuff happened. Eventually, Subtext development pretty much stopped.

Part of my challenge with Subtext was the complexity of it. So many moving pieces. Database, assemblies, tons of pages and settings, skins, and basically no documentation. (Well, there was some documentation that I had been writing, but an unfortunate server crash lost it all.) I started looking at hosted solutions like WordPress that would be easy to use and pretty common. But, then, the challenge with any of those systems is getting your data in/out, etc. Plus, hosting costs.

So I started leaning toward a static content generation sort of system. Fewer moving pieces, simpler data storage. Also, cheap. Because I'm cheap.

I decided on GitHub Pages because it's simple, free, reliable... plus, it's pretty well documented, Jekyll usage is simple, and Markdown is pretty sweet.

Good Stuff About GitHub Pages

  • It's simple. Push a new post in Markdown format to your blog repo and magic happens.
  • It's portable. All the posts are in simple text, right in the repo, so if you need to move somewhere else, it's all right there. No database export, no huge format conversion craziness.
  • It's free. Doesn't get cheaper than that.
  • It's reliable. I'm not saying 100% uptime, but putting your blog in GitHub Pages means you have the whole GitHub team watching to see if the server is down.
  • Browser editor. Create a new post right in the GitHub web interface. Nice and easy.

Less Awesome Stuff About GitHub Pages

  • There's no server-side processing, even if you need it. Ideally I'd want a 404 handler that can issue a 302 from the server side to help people get to broken permalinks. But the 404 is just HTML generated with Jekyll, so you have to rely on JS to do the redirect. Not so awesome for search engines. I have some [really old] blog entries that were on a PHP system where the permalink is querystring-based, so I can't even use jekyll-redirect-from to fix those.
  • The Jekyll plugins are limited. GitHub Pages supports very few Jekyll plugins. On something like Octopress you hook up the page generation yourself so you can have whatever plugins you want... but you can't add plugins to GitHub Pages, so the things you can do are kind of limited. (I totally understand why this is the case; that doesn't make it awesome.)
  • No post templates, painful preview. With Windows Live Writer or whatever, you didn't have to deal with YAML front matter or any of that. The GitHub web editor interface doesn't have an "add new post" template, so that's a bit rough. Also, to preview your post, you have to commit the post the first time, then you have the "preview" tab you can use to see "changes" in your post. It renders Markdown nicely, but it's sort of convoluted.
  • Drafts are weird. I may be doing this wrong, but it looks like you have to put "posts in progress" into a _drafts folder in your blog until they're ready to go, at which point you move them to _posts.
  • Comments don't exist. It's not a blog host, really, so you need to use a system like Disqus for your comments. That's not necessarily a bad thing, but it means you have some extra setup.

My Migration Process

A lot of folks who move their blog to GitHub Pages sort of "yadda yadda" away the details. "I exported my content, converted it, and imported it into GitHub Pages." That... doesn't help much. So I'll give you as much detail as I can.

Standing on the Shoulders of Giants

Credit where credit is due:

Phil Haack posted a great article about how he migrated from Subtext to GitHub Pages that was super helpful. He even created a Subtext exporter that was the starting point for my data export. I, uh, liberated a lot of code from his blog around the skin, RSS feed, and so on to get this thing going.

David Ebbo also moved to GitHub Pages and borrowed from Mr. Haack but had some enhancements I liked, like using GitHub user pages for the repository and using "tags" instead of "categories." So I also borrowed some ideas and code from Mr. Ebbo.

If you don't follow these blogs, go subscribe. These are some smart guys.

You Need to Know Jekyll and Liquid

You don't have to be an expert, but it is very, very helpful to know Jekyll (the HTML content generator) and Liquid (the template engine) at least at a high level. As you work through issues and fix styles or config items, that knowledge helps a lot in tracking things down.

Initialize the Repository

I'm using GitHub user pages for my blog, so I created a repository called tillig.github.io to host my blog. For your blog, it'd be yourusername.github.io. The article on user pages is pretty good to get you going.

Get the Infrastructure Right

Clone that repo to your local machine so you can do local dev/test to get things going. Note that if you check things in and push to the repo as you develop, you may get some emails about build failures, so local dev is good.

The GitHub page on using Jekyll tells you about how to get your local dev environment set up to run Jekyll locally.

There's a lot to set up here, from folder structure to configuration, so the easiest way to start is to copy from someone else's blog. This is basically what I did - I grabbed Haack's blog, put that into my local repo, and got it running. Then I started changing the values in _config.yml to match my blog settings and fixed up the various template pieces in the _includes and _layouts folders. You can start with my blog if you like.

GOTCHA: GitHub uses pygments.rb for code syntax highlighting. If you're developing on Windows, there's something about pygments.rb that Windows hates. Or vice versa. Point being, for local dev on Windows, you will need to turn off syntax highlighting during local Windows dev by setting highlighter: null in your _config.yml.

Add Search

I didn't see any GitHub Pages blogs that had search on them, so I had to figure that one out myself. Luckily, Google Custom Search makes it pretty easy to get this going. Create a new "custom search engine" and set it up to search just your site. You can configure the look and feel of the search box and results page right there and it'll give you the script to include in your site. Boom. Done.

Fix Up RSS

The Octopress-based RSS feed uses a custom plugin, expand_urls, to convert relative URLs like /about.html into absolute URLs like http://yoursite.com/about.html. That no worky in GitHub Pages, so you have to use a manual replace filter on URLs in the RSS feed. (If you look at my atom.xml file you can see this in action.)
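
If you're curious, the manual version ends up looking something along these lines inside the atom.xml post loop (a sketch - hardcode your own absolute URL, and note my actual filter chain may differ a bit):

{{ post.content | replace: 'href="/', 'href="http://www.paraesthesia.com/' | xml_escape }}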

Make Last Minute Fixes to Existing Content

I found that it was easier to do any last-minute fixes in my existing blog content rather than doing it post-export. For example, I was hosting my images in ImageShack for a long time, but the reliability of ImageShack (even with a paid account) is total crap. I lost so many images... argh. So I went through a process of moving all of my images to OneDrive and it was easier to do that in my original blog so I could make sure the links were properly updated.

If you have anything like that, do it before export.

Export Your Content and Comments

This was the trickiest part, at least for me.

Haack was running on his own server and had direct database access to his content, so a little SQL and he was done. I was on a shared server without any real SQL Management Console access or remote access to run SQL against my database, so I had to adjust my export mechanism to be more of a two-phase thing: get the data out of my database using an .aspx page that executed in the context of the blog, then take the exported content and transform that into blog posts.

There also wasn't anything in Haack's setup to handle the comment export for use in Disqus, so I had to do that, too.

Oh, and Haack was on some newer/custom version of Subtext where the database schema was different from mine, so I had to fix that to work with Subtext 2.5.2.0.

Here's my forked version of Haack's subtext-jekyll-exporter that you can use for exporting your content and comments. You can also fork it as a starter for your own export process.

  • Drop the JekyllExport.aspx and DisqusCommentExport.aspx files into your Subtext blog.
  • Save the output of each as an XML file.
  • Make your URLs relative. I have a little section on this just below, but it's way easier to deal with local blog development if your URLs don't have the protocol or host info in them for internal links. It's easier to do this in the exported content before running the exporter to process into Markdown.
  • Run the SubtextJekyllExporter.exe on the XML from JekyllExport.aspx to convert it into Markdown. These will be the Markdown pages that go in the _posts/archived folder and they'll have Disqus identifiers ready to go to tie existing comments to the articles.
  • In Disqus, import a "general" WXR file and use the XML from DisqusCommentExport.aspx as the WXR file. It may take a while to import, so give it some time.

You can test this out locally when it's done. Use Jekyll to host your site locally and check the comment section on one of the posts that has comments. They should show up.

Make URLs Relative

It is way easier to test your blog locally if the links work. That means if you have absolute links like http://yoursite.com/link/target.html, they're only going to work if that site is actually live. If, however, you have /link/target.html, then it'll work on your local test machine, it'll work from yourusername.github.io, and it'll work from your final blog site.

I did a crude replacement on my blog entries that seemed to work pretty well.

Replace ="http://www.mysite.com/" with ="/" and that seemed to be enough (using my domain name in there, of course). YMMV on that one.

Push It

Once everything looks good locally, push it to your public repo. (If you're on Windows, don't forget to comment out that highlighter: null in _config.yml.) Give it a few minutes and you should be able to see your blog at http://yourusername.github.io - navigate around and do any further fix-up.

Configure DNS

This was a finicky thing for me. I don't mess with DNS much so it took me a bit to get this just right.

My blog is at www.paraesthesia.com (I like the www part, some folks don't). GitHub has some good info about setting up your custom domain but it was still a little bit confusing and "spread out" for me.

For the www case, like mine, what got me (it wasn't clear at first) is that you have to set up both records: an A record pointing your bare domain at GitHub's Pages servers, and a CNAME record pointing www at yourusername.github.io.

Once you do that, yourdomain.com and www.yourdomain.com will both make it to your blog. (If you don't like the www part, make your CNAME file in your repo only contain yourdomain.com instead of www.yourdomain.com.)

Remaining Items

I still have a few things to fix up, but for the most part I'm moved over.

There are still some quirky CSS things I'm not happy with and need to fix. The headers in this entry, for example, overlap weirdly with the first line under them.

I have some in-page JS demos that were sort of challenging to set up in Subtext but should be easier in the new setup. I need to move those over; right now they're broken.

I also have the "Command Prompt Here Generator" app that was running on my blog site but is now inaccessible because it needs a host that can run dynamic code. I'll probably use my old blog host as an "app host" now, where I just put little dynamic apps. It'll be easier to do that stuff without Subtext sitting in the root of the site.

I'll get there, but for now... I'm feeling pretty good.