media

Over the last six months, give or take, I’ve noticed that my Netflix streaming performance over my Comcast internet has taken a dive.

If I stream a show during the day, I can get a full 1080p HD signal. Watching through my PS3, I get Super HD. It’s awesome.

However, watching that same show once it hits “prime time” – between, say, 7:30p and 10:30p – I get a maximum of 480SD.

I saw this article that went around about a fellow who caught Verizon supposedly doing some traffic shaping around Amazon Web Services and it got me wondering if Comcast was doing the same thing.

I called Comcast support and got the expected runaround. The internet people sent me to the TV people (because you watch Netflix on your TV?!); the TV people sent me to the home network support people (because no way is it a Comcast issue), then the home network people said they would transfer me to their dedicated Netflix support team…

…which transferred me to actual Netflix support. No affiliation with Comcast at all.

Netflix support ran me through the usual network troubleshooting steps, which basically amount to “reboot everything and use a wired connection” and which I’d already done, and then we ended up with “call your ISP.” That’s how I got here in the first place. Sigh.

I reached out this time to @comcastcares on Twitter and had a much better result. I got in touch with a very helpful person, @comcastcamille, who did a few diagnostics on their end and then got me in touch with their “executive support” department.

The executive support department sent a tech to my house who replaced my aging cable modem. That actually improved my speed tests – I used to only occasionally make it to ~25Mbps down, but with the new modem I consistently get between 25Mbps and 30Mbps. Unfortunately, that didn’t fix the Netflix issue, so I called back.

This time I got ahold of a network tech named Miguel who not only spoke very clearly but also knew what he was talking about. A rare find in tech support.

First we did a speed test on two different sites, and the results looked good. On that same computer, I then tried streaming Netflix. 480SD. Lame.

Then he mentioned something I hadn’t considered: Amazon Prime is also backed by AWS. Same computer, streamed an Amazon Prime video… full HD with less than three seconds of buffering.

For giggles, we tried streaming Netflix and running the speed test at the same time and got similar results as the first speed test. I also ran the Net Neutrality speed test and got great results.

Of course, as mentioned on the Net Neutrality test site, much of the Netflix traffic doesn’t actually flow from AWS, but through Open Connect peering agreements. Ars Technica has a nice article about how several providers are having trouble keeping up with Netflix and it may not necessarily be intentional traffic shaping so much as sheer volume.

In the end, Miguel convinced me that it may not be entirely a Comcast problem. He also mentioned that he, himself, switched from Netflix to Amazon Prime because the quality is so much better. Something to consider.

Of course, Google Fiber is now looking at Portland, so that may be a good alternative.

For the record, I’ve never really had any problems with Comcast the way many people have. I admit I’m possibly an exception. Other than the phone runaround, which you often get with any type of service provider, Comcast service has been reliable and good for me. Netflix aside, the TV works, the phone works, the internet gets good speed and is always up… I can’t complain. (Well, the prices do continue to go up, which sucks, but that’s only peripherally related to the service quality discussion.) We tried Frontier, the primary local competitor, and I had the sort of experiences with them that other people seem to report with Comcast: constant outages, support that pretty much refused to help… and changes on their end that required me to periodically reset my router and fully reconfigure the network.

But, you know, Google Fiber…

net

Let me say up front, I’m no TFS guru. I’m sure there’s something simple I’m overlooking here. I just feel like this was far more complicated than it had to be, so I can’t get over the idea that I’m missing a simple switch flip.

Anyway.

We have a bunch of TFS 2012 build agents. They all have VS 2012 installed and they build VS 2012 solutions really well. But we’d like to start working in VS 2013, using VS 2013 tools, so I undertook the adventure of figuring this out.

I thought just installing Visual Studio 2013 on the build agent would be enough, but… not so.

I’m guessing most folks haven’t run into trouble with this, or maybe they have the option of upgrading to TFS 2013 and they bypass the issue entirely. The first sign of trouble I ran into was our custom FxCop rules: they were built against VS 2012 (FxCop 11.0) assemblies, so if you run the build in VS 2012 it works great, but in VS 2013 there are assembly binding problems when it loads up the custom rules. It went downhill from there.

I’ll skip to the end so you don’t have to follow me on this journey. Suffice to say, it was a long day.

Here’s what I had to do to get a TFS 2012 build agent running with a full VS 2013 stack – no falling back to VS 2012 tools:

On the build agent

  1. Install Visual Studio 2013.
  2. Update the build agent configuration to have a tag “vs2013” – you need a tag so you can target your build configurations to agents that support the new requirements.

In your project/solution

  1. Update all of your .csproj and MSBuild scripts to specify ToolsVersion="12.0" at the top in the <Project> element. In VS 2012 this used to be ToolsVersion="4.0" so you might be able to search for that. (There’s a sketch of what this looks like just after this list.)
  2. Update any path references in your scripts, project files, custom FxCop ruleset definitions, etc., to point to the VS 2013 install location. That is, switch from C:\Program Files (x86)\Microsoft Visual Studio 11.0\... to C:\Program Files (x86)\Microsoft Visual Studio 12.0\...; or from VS110COMNTOOLS to VS120COMNTOOLS.
  3. If you’re using NuGet and have package restore enabled, make sure you have the latest NuGet.targets file. You can get that by setting up a new project really quickly and just enabling package restore on that, then stealing the NuGet.targets. You may need to tweak the ToolsVersion at the top.
  4. Update your project’s TFS build configuration so…
    • It requires a build agent with the “vs2013” tag.
    • In the “MSBuild Arguments” setting, pass /p:VisualStudioVersion=12.0 so it knows to use the latest tools.
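
To make the ToolsVersion and path changes concrete, here’s a rough sketch of what the top of a converted project file might look like. This isn’t from our actual projects (the property names FxCopDir and VsCommonTools are placeholders for illustration), but the ToolsVersion attribute and the 12.0 paths/variables are the parts that matter:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Step 1: bump ToolsVersion from "4.0" to "12.0" on the root element. -->
<Project ToolsVersion="12.0"
         DefaultTargets="Build"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- Step 2: anything that pointed at the VS 2012 (11.0) install now
         points at VS 2013 (12.0). These property names are placeholders. -->
    <FxCopDir>C:\Program Files (x86)\Microsoft Visual Studio 12.0\Team Tools\Static Analysis Tools\FxCop\</FxCopDir>
    <VsCommonTools>$(VS120COMNTOOLS)</VsCommonTools>
  </PropertyGroup>
  <!-- ...the rest of the project file is unchanged... -->
</Project>
```

The same sort of ToolsVersion bump applies to the NuGet.targets file mentioned in step 3.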

Once you’ve done all that, the build should run with all VS 2013 tools. You can verify it by turning the build logging up to diagnostic, then opening the final MSBuild log and searching for “11.0” – if any paths or properties still point to the VS 2012 install location, you missed a reference. You will still probably see the VS110COMNTOOLS environment variable, but it won’t be getting used anywhere.

process, github

Yesterday we moved Autofac over to GitHub.

I’m cool with that. There’s a lot of momentum behind Git and GitHub in the source control community and I understand that. Nick Blumhardt’s post on the Autofac forum and the linked Eric Raymond post in the Emacs developer list hit close to home for me – I wish Mercurial had “won,” but Git’s fine, too. I don’t feel super strongly about it.*

In moving, we got a lot of really kind and supportive tweets and posts, and it’s been a nice welcoming party. That’s cool.

Then there have been some puzzling ones, though, and here’s where I switch out of my “Autofac project coordinator person” hat and into my “I’m just a dev” hat:

> Thank you for switching! Its been fun to get int eh codebase and look around. :) OSS FTW!

> I sometimes get excited by new OSS projects by my heart sinks when I see they're not on github and I lose interest

Again, I’m not picking on these folks personally, because I respect them and their skills. I’ve seen a few of these and I know (hope?) they won’t take it personally that I grabbed theirs out of the bunch. What I want to address is more my puzzlement around the sentiment I see here:

Why is source control, or a particular source control system, such a barrier?

In a world where the polyglot programmer is becoming more the norm than the exception; where it’s pretty common that a single developer can work in more than one environment, on more than one OS, and/or in more than one language… I have a difficult time understanding why version control systems don’t also fit into that bucket.

I get that folks might have a personal preference. I’m a Sublime Text guy, not Notepad++. I’m a CodeRush guy, not ReSharper. That’s not to say I can’t use those other tools, just that I have a preference and I’m more productive when using my preference.

When I see something like “It’s been fun to get in the codebase and look around,” though, and the [implied] reason that was somehow possible now when it wasn’t before is because of a switch from Mercurial/Google Code to Git/GitHub?

That doesn’t make sense to me.

When I see a project that sparks my interest and makes me think I could make use of it, I don’t really care what version control system they use.** There’s a bit of a “when in Rome” for me there. I mean, honestly, once I pull the NuGet package (or gem, or whatever) down and start using it, and I go to their site to learn more… I don’t really lose interest if they’re hosting their source in some source control system or with a host I don’t prefer. I don’t know of an open source site that doesn’t let you browse the source through a web interface anymore, so even if I wanted to dig into the code a little, it’s not like I have to install any new tools. The “issues” systems on most open source hosting sites are roughly the same, too, so there’s no trouble if I want to file a defect or enhancement request.

Sure, there are some things about GitHub that make certain aspects of open source easier, but it’s primarily around contribution. I concede that pull requests are nice – I’ve made a few, and I’ve taken a few.*** That said, there are some well established conventions around things like patch files (remember those, kids?) that have worked for a long time.

Having a different mechanism of contribution has also never really stopped me. Have you let it stop you? Why?

I guess I look at these other systems more as an opportunity to learn than as a barrier. Just like I have to learn about your coding conventions in your project, your project’s style, the right way to fix the issue I found (or add the enhancement) before I can contribute, the version control may be one of those things, too. It’s not really that big of a deal, especially considering there’s really only Mercurial, Git, and Subversion out there in the open source world and you’ve covered the 99% case.

Don’t get me wrong. I think removing friction is great. Making easy jobs easy and hard jobs possible is awesome. GitHub has been great at that, and I applaud them for it. I just don’t think folks should let source control be a barrier. Add the skills to your portfolio. Sharpen the saw. Have a preference, but don’t let it shackle you.


* All that said, I can’t remember a time when I had more trouble with trivial crap like line endings with any source control system other than Git. And until fairly recently, setting up a decent, working Git environment in Windows (where I spend most of my time) wasn’t as straightforward as all that. Obviously my experiences may differ from yours.

** Except TFS version control. Anything beyond even the simplest operations is a ridiculous pain. “Workspaces?” Really? Still? It’s VSS with “big boy” pants.

*** And, as we all know, pull requests aren’t Git, they’re GitHub (or the host) because they’re workflow items, not source control specific. BitBucket has pull requests for Mercurial.

dotnet, aspnet

For reasons I won’t get into, we recently ended up with a scenario in MVC where we needed to use RenderAction to get some data into a view. Some of the data was exposed via async calls to services.

The challenge is that RenderAction doesn’t support asynchronous controller actions. To accomplish the task, we ended up with a synchronous controller action that used Task.Run to get data from certain async calls. And, yeah, I know that’s not really the greatest thing in the world but there wasn’t a great way around that.

That landed us with a new challenge: HttpContext.Current was null in the Task.Run action but not in the partial view the controller action returned.

Normally a service call not having a web request context wouldn’t bother me, but due to a certain chain of logic (again, which I won’t get into) we had a call to DependencyResolver.Current in the asynchronous action. We’re using Autofac, which creates a lifetime scope per web request, but without any request context – explosions.

In the end, we had two solutions.

The first solution was to manually set the call context in the asynchronous task:

// CallContext is in System.Runtime.Remoting.Messaging.
// Capture the ambient HttpContext before handing work off to the thread pool.
var context = System.Web.HttpContext.Current;
return Task.Run(() =>
  {
    // HttpContext.Current reads from CallContext.HostContext, so setting it
    // here makes the context visible inside the task.
    CallContext.HostContext = context;
    return this.AsyncCallThatReturnsTask();
  }).Result;

That worked in some simple cases, but for some reason it didn’t stick in certain chained async/await calls down the stack.

The second solution was to rewrite certain things to be synchronous and only make async calls on things that don’t need HttpContext. That’s sort of a cop-out, but we couldn’t really find a way around it without getting… really, really deep. This is where we actually ended up.

I have a feeling there is something more that could be done by cloning bits of the current SynchronizationContext and/or ExecutionContext, setting up a custom TaskFactory, and firing up the async calls through that, but given that the problem we’re solving is sort of a one-off, and given the high risk of deadlock or something crazy breaking under load… it wasn’t worth diving that deep.

It would be nice if MVC would support asynchronous RenderAction calls, though.

halloween, costumes

Record year this year despite Halloween being on a weekday. The weather was pretty nice, which I’m guessing made people more willing to be out, but otherwise I’m not sure why we got such a boost. We even shut down half an hour early - at 8:00p instead of 8:30p - to get Phoenix to bed. (We had a couple of kids knock after we shut the lights off, so you see those in that final time block.)

2013: 298 trick-or-treaters.

Cumulative data:

| Time Block    | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 |
| ------------- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| 6:00p - 6:30p | 52   | 5    | 14   | 17   | 19   | 31   | --   | 28   |
| 6:30p - 7:00p | 59   | 45   | 71   | 51   | 77   | 80   | --   | 72   |
| 7:00p - 7:30p | 35   | 39   | 82   | 72   | 76   | 53   | --   | 113  |
| 7:30p - 8:00p | 16   | 25   | 45   | 82   | 48   | 25   | --   | 80   |
| 8:00p - 8:30p | 0    | 21   | 25   | 21   | 39   | 0    | --   | 5    |
| **Total**     | 162  | 139  | 237  | 243  | 259  | 189  | --   | 298  |

The costume this year was a BioShock splicer.

![Travis as a splicer](https://www.paraesthesia.com/images/20131101_splicer.jpg)

Jenn didn't get her costume done, but is working on a splicer costume for the Halloween party we're attending this weekend. Phoenix was, at various points, a fairy; a princess; and Merida from *Brave*.

People in general were much more pleasant this year, but it probably helped that Phoenix was the one handing out the candy most of the time. It's hard to be pissed off with a two year old fairy putting candy in your bag. Even the older kids who are usually sort of belligerent got really friendly. Plus, Phoe had a great time with it and talked to all of them like they were best friends.

This was also Phoe's first trick-or-treat year. She went to Jenn's work, my mom's work, and my work; and she also ran up and down our block. There's more candy at our house than we know what to do with, and she had a total blast.