net, vs

I was working yesterday on a solution in Visual Studio and noticed that every time I’d rebuild, VS would report the build as failed… but without any error messages.

I thought it was just a fluke, but then I had to update a service reference. When I tried, I got the following error message:

Could not resolve mscorlib for target framework “.NETFramework,v4.0”. This can happen if the target framework is not installed or if the framework moniker is incorrectly formatted.

I searched all over and verified the TargetFramework settings on every project. No luck. Tried removing the service references so I could re-create them. Got the error and couldn’t remove the references. Rebooted the computer, you know, because that’s what you do. Still got the error. At which point I was like…

Fuuuuuuuuuuuu

And then I found this blog entry that saved my life. I was hitting a maximum path length error.

I’m on Windows 2008, not XP like in the article, but MAX_PATH is still 260 characters. My project was only about 100 characters deep, but look at the files VS generates when it updates a service reference: each filename is the fully qualified name of the proxy type being generated plus a “.datasource” suffix (e.g., “Some.Really.Super.Long.Namespace.That.May.Be.Inside.Your.Project.datasource”), so a single filename can easily run 100 characters on its own. Add it all up and I was bumping against the max path length.
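
If you want to check whether you’re bumping into the same limit, a quick scan of your solution folder will show which files are getting close. This is just a sketch - the default folder is a placeholder and the warning threshold is arbitrary:

```csharp
using System;
using System.IO;

class PathLengthCheck
{
    const int WarningThreshold = 240; // MAX_PATH is 260; leave some headroom

    static void Main(string[] args)
    {
        // Placeholder path - point this at your own solution folder.
        string root = args.Length > 0 ? args[0] : @"C:\dev\project";

        // Note: in .NET 4.0, just enumerating a path that's already over
        // MAX_PATH throws PathTooLongException - a diagnosis in itself.
        foreach (string path in Directory.GetFiles(root, "*", SearchOption.AllDirectories))
        {
            if (path.Length >= WarningThreshold)
            {
                Console.WriteLine("{0} chars: {1}", path.Length, path);
            }
        }
    }
}
```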

Moving my project closer to the root of my drive resolved the issue (C:\project rather than a C:\dev\project\tasks\taskname\trunk sort of depth) and I was able to build again.

I’m guessing that something in there isn’t using the Unicode “\\?\” path syntax that would allow for a 32,767-character max path length. Hopefully that will be fixed in the next VS… but I’m not holding my breath.
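
For the curious, that long-path form looks like the sketch below. This isn’t anything VS actually does - the deep file path is made up - it just illustrates the “\\?\” prefix the Unicode Win32 APIs accept, which the managed file APIs in .NET 4.0 don’t use because they normalize paths and enforce MAX_PATH:

```csharp
using System;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

class LongPathDemo
{
    // Win32 CreateFileW; the W (Unicode) variant honors the \\?\ prefix.
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern SafeFileHandle CreateFileW(
        string lpFileName, uint dwDesiredAccess, uint dwShareMode,
        IntPtr lpSecurityAttributes, uint dwCreationDisposition,
        uint dwFlagsAndAttributes, IntPtr hTemplateFile);

    static void Main()
    {
        const uint GENERIC_READ = 0x80000000;
        const uint OPEN_EXISTING = 3;

        // Hypothetical deep path - the \\?\ prefix bypasses the normal
        // MAX_PATH check on the native side.
        using (SafeFileHandle handle = CreateFileW(
            @"\\?\C:\dev\project\some\very\deep\path\Some.Long.Namespace.datasource",
            GENERIC_READ, 0, IntPtr.Zero, OPEN_EXISTING, 0, IntPtr.Zero))
        {
            Console.WriteLine(handle.IsInvalid ? "Open failed." : @"Opened via \\?\ prefix.");
        }
    }
}
```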

gaming, xbox, media

I spent far more time than I’d like to admit in troubleshooting this issue so I figured I’d at least blog it.

Symptoms: When you run the “network connection test” on the Xbox it consistently succeeds. When you try to connect to Xbox Live, either to sign in with your profile or do a “Recover Gamertag,” it fails and tells you to go run the network connection test.

Let me tell you how frustrating this behavior is. The connection test says everything’s fine, but when you try to connect it fails and tells you to… go run the test?!

My network is currently set up so my wireless router downstairs feeds a wireless bridge (a D-Link DAP-1522) upstairs in the game room. The Xbox is connected through the bridge. Everything was working wonderfully until about two weeks ago. Nothing changed on the network to my knowledge - no configuration changes, no devices added or removed. Just… magic. Things started failing.

The first thing I did was disconnect the wired connection to the bridge and connect using the Xbox onboard wireless adapter. Same symptoms, only this time I could occasionally connect to Xbox Live if I tried signing in five or six times in a row. Not a lot of progress, but progress.

I rebooted and reset every network device I own with no luck.

I contacted Xbox Support via email and they directed me to this page about troubleshooting connection issues. None of the items there helped, but that’s understandable - there’s no way they could have guessed what was wrong.

The breakthrough came when I powered down the bridge and then connected to Xbox Live via wireless. Instant success. Something about that wireless bridge was interfering hardcore with the rest of the wireless network.

I ended up resetting the DAP-1522 bridge to factory defaults, doing a firmware upgrade (not sure if that was necessary, but there was a minor-version update available, so I figured why not), and reconfiguring the whole thing from scratch.

The Xbox is connected through the bridge once more, but now it signs in correctly.

This isn’t the first time that DAP-1522 has given me grief. When I was using it as an access point rather than a bridge, there were also a couple of times I had to reset it to factory defaults and start over. It’s as if running for an extended period of time causes some sort of “buildup” that has to be flushed out. I may have to replace it with something more reliable.

gaming, xbox

Now that Call of Duty: Modern Warfare 3 is out, I’ve had some folks asking when I was getting it.

I’m not. At least, not yet.

Why not?

I’m a single-player campaign guy. I like the stories that go along with those campaigns. I like being able to pick a game up in the 17 minutes I have between getting home from work and the arrival of my wife and daughter, at which point it’s time to get back to work around the house.

I also like co-op play. Borderlands was really spectacular for this. Modern Warfare 2 was OK with its “spec ops” mode, but I really want a co-op campaign. I like working with my dad, uncle, and friends toward a common goal. “Spec ops” only being two-person… was limiting.

What I’m not interested in is what’s widely termed “multiplayer” but basically boils down to “100 different free-for-all modes.” Deathmatch and Team Deathmatch are roughly identical - the only difference is that in the latter, half the people aren’t shooting at you. “Horde mode,” “Capture the Flag,” and other almost-goal-based modes are only tolerable (to me) for a little while before I get really bored. I want a reason to do what I’m doing, not just endless waves of guys to shoot.

I did get Battlefield 3, which various previews had led me to believe would have a decent co-op mode.

Eh… not so much. I did a separate review of that and discussed my thoughts there, so I won’t go over it again.

Anyway, I’m starting to feel “done” with the whole military-based first-person-shooter genre. They’re all starting to feel “same-y” to me. The single-player campaigns get shorter, the “multiplayer” gets larger, and co-op gets the short end of the stick. New features get added that I’m totally not interested in (I don’t need “battle log” integration to track my friends’ kills), but none of the stuff I would like (better co-op?) shows up.

At some point down the road, I may pick up MW3. I’m not saying I’ll never get it. Right now, though, I’m wading through Battlefield 3, Halo: Reach, and LA Noire. I haven’t finished the new Portal 2 co-op levels, either. And when Borderlands 2 comes out… I’ll definitely be on that.

process

I’m working on this theory that the more automation you wrap around a development environment - testing, static analysis, etc. - the lazier the developers in that environment get due to increased reliance on the automation.

That’s not necessarily a bad thing, but I’m not so sure it’s good, either.

Let’s say you have a .NET project and you’ve got it totally wired up with automation.

  • You have a continuous integration server running the build on every check-in.
  • You’re running unit tests and failing the build if the test coverage levels fall below a certain limit (a sketch of such a gate follows this list).
  • FxCop runs with almost all the rules turned on to check for consistent behavior.
  • StyleCop runs with almost all the rules turned on to ensure the code is consistently formatted.
  • NDepend runs to check your dependencies and ensure other custom project-specific standards are met.
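
That coverage gate in the second bullet, for instance, doesn’t have to be fancy. Here’s a minimal sketch of the idea - the report schema and XPath are placeholders, since every coverage tool (NCover, OpenCover, etc.) emits its own format:

```csharp
using System;
using System.Xml.XPath;

class CoverageGate
{
    static int Main(string[] args)
    {
        const double minimumCoverage = 95.0; // the team's agreed threshold

        // args[0] is the coverage report produced during the build.
        XPathDocument report = new XPathDocument(args[0]);
        XPathNavigator nav = report.CreateNavigator();

        // Placeholder XPath - point this at wherever your tool records
        // the overall line coverage.
        double actual = (double)nav.Evaluate("number(/coverage/@line-rate) * 100");

        Console.WriteLine("Coverage: {0:F1}% (minimum {1:F1}%)", actual, minimumCoverage);
        return actual >= minimumCoverage ? 0 : 1; // non-zero exit fails the build
    }
}
```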

You even run your code through code reviews to ensure you get a second (or third, or fourth) set of eyes on everything. (No, that’s not automated.)

The point is, you’ve got all of these checks and balances in place so, ideally, the automation will catch a lot of the stuff before it even makes it to code review.

Somehow, though, you still see things creeping through. Things that should have been caught somewhere along the way. Things that don’t make sense.

Maybe it’s a bad naming convention that’s taken hold, one that encourages breaking the Single Responsibility Principle… like HtmlHelper. (If you have to name your class with “Helper” or “Utility,” I’ll put good money down that you don’t really know what it’s supposed to do and that you’ve totally broken SRP. But even if you disagree with my example, stick with me.)

So you add some automation to try to head off the bad naming convention: the build breaks (or at least warns) when the convention is violated. You educate the team on why the rules were updated and folks agree to amend their behavior. Fixed, right?
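
The check itself can be dead simple. Here’s a sketch of the idea as a standalone console tool - not FxCop’s or NDepend’s actual rule APIs, just reflection over the built assembly:

```csharp
using System;
using System.Reflection;

class NamingConventionCheck
{
    static int Main(string[] args)
    {
        // The banned suffixes are the team's convention, nothing more.
        string[] bannedSuffixes = { "Helper", "Utility" };
        int violations = 0;

        // args[0] is the assembly produced during the build.
        foreach (Type type in Assembly.LoadFrom(args[0]).GetTypes())
        {
            foreach (string suffix in bannedSuffixes)
            {
                if (type.Name.EndsWith(suffix, StringComparison.Ordinal))
                {
                    Console.WriteLine("Suspect type name: {0}", type.FullName);
                    violations++;
                }
            }
        }

        return violations == 0 ? 0 : 1; // fail the build on any violation
    }
}
```

Note that it only knows the exact suffixes it’s given - which is precisely the problem.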

Nope. There’s always a way to game the system. Now there’s a new convention where instead of “Helper” or “Utility” it’s “Service,” but the point is the same - it’s a dumping ground for random, only loosely related functionality. But now everyone thinks it’s OK, that it’s not a problem. Why is that?

The automation didn’t catch it, so it must not be a problem.

The automation only catches the exact rule being broken; it can’t enforce the principle behind the rule. Just because no person - you, the developer - caught it doesn’t mean it isn’t an issue.

Another example: you have your test coverage requirement cranked up to 95%. That’s pretty high. It’s not 100% because in some cases that’s not even possible; 95% gives you some wiggle room to leave the genuinely untestable bits uncovered while still fully covering everything you can.

The thing is, once you have a significant codebase, that 5% can be pretty big. In a 100,000-line codebase, 5% is 5,000 uncovered lines - easily several small but important classes with no tests at all.

How come there aren’t any unit tests for this code? “Well, the build didn’t fail, so isn’t it OK?” No, it’s not OK to skip testing your code just because you found a loophole in the automated rules. The cops may not have caught you robbing the bank, but does that mean you didn’t break the law?

The automation didn’t catch it, so it must not be a problem.

The thing is, there’s not always a way to catch this in code review, either. The naming convention thing can be caught… if the reviewer is familiar with the team’s history and why the convention is in place. The coverage gap is actually harder to catch, especially if there’s a lot to review at once. You need real discipline to mentally trace the various execution paths and see how the tests exercise them. You also have to trust that the developer submitting the code for review did their due diligence and actually wrote the tests.

Missing it in code review can mean even worse things. Now, not only did the automation let it slide, but a fellow developer also missed it. That must mean you never have to fix it, even if someone else runs across it later and does catch it, right? Why do you have this 3,000-line method? “It made it through code review.”

Sometimes this means you update the automation to try to catch these things. Cool, now you can’t have a 3,000-line method. But that 2,999-line method is just fine, because the automation didn’t catch it.

That’s why I’m starting to think automation makes you lazy. You can rely on the automation to catch so much that you stop trying to preemptively catch things yourself. You stop using your own heuristics to detect issues with your code and fall back on the automation. And that is sort of the point of automation - to help you catch things you’d otherwise miss… but it doesn’t entirely remove your responsibility as the developer.

Be a professional developer. Use the automation, but also use your brain. Work with the automation. Understand that the automation around your build is a tool there to help you, not robotic police trying to enforce the law. Collaborate with your team and ask questions if you don’t know. Be amenable to refactoring - or rewriting - if something gets discovered after the fact.

Don’t let the automation make you lazy.