The Big Bang Theory - Season 1

Some friends of mine at work told me I needed to watch The Big Bang Theory because I resembled one of the characters. After a couple of mentions of this, I gave in and got the first season from Netflix.

Before we started watching, I sat my wife down and told her we needed to figure out which character I resemble. Up to the task, we started the disc.

About 10 minutes into the first episode, we had a conversation like this:

Jenn: You’re Sheldon.
Travis: What? Are you sure? I could be Leonard.
Jenn: You’re Sheldon.
Travis: I dunno… HOLY CRAP I’M SHELDON.

After watching the first six episodes, I have to say that while I have some Leonard in me, I’m Sheldon. Like this video where Penny sits in Sheldon’s seat… I think I’ve actually had this conversation, or something eerily like it. (Though mine was more around my parking spot at work than my seat at home.)

Anyway, it’s a great show, so if you haven’t seen it, check it out. I’ll be watching the rest as they come in from Netflix.

I don’t do a lot of traveling for business, but when I do I’m sort of caught without proper luggage. It turns into an interesting dance of duffel bags and briefcases trying to figure out the best way to get the clothes and the computer properly ready to carry on the airplane. For MIX09 this year, I decided enough was enough.

I did some research and informal Twitter polls, looking at what people liked and balancing that with cost. In the end, here’s what I came up with:

Brookstone XpressCheck 21” Ballistic Computer Traveler

This carry-on sized bag has a reasonable amount of space for clothes, but the cool bit is the front. There’s a zip-open pocket with a mini-briefcase that’s padded and perfectly holds your laptop and charger - when you get to your destination, zip it out and carry it with you, no separate computer bag needed. There are pockets on the top for easy access to stuff like your boarding pass and your quart-bag of liquids, and it has all the regular stuff you’d expect in a carry-on bag (extendible handle, wheels, handles on two sides, etc.).

I looked at a bunch of carry-on bags (Da Kine, Victorinox, etc.) and this seemed to be the best all-around carry-on with a specific focus on the computer. Granted, you’ll get a little less clothing space here, but the zip-out computer bag and such really is cool.

fūl Backpack 5093 BP

While walking around at MIX, I wanted to have my laptop with me, but I was going to need more space than the little zip-out thing that came with the XpressCheck bag - gotta have somewhere to put your swag bag so you’re not carting that around, right?

To that end, I got some recommendations for various backpacks (lots of folks like the Spire series) but I didn’t want to spend another $200 on a backpack. This $30 model I found at Costco has all the same stuff - a padded area for your computer, looooots of pockets, fully adjustable straps… it was perfect. I wore that thing for three days with no problems at all. If you’re looking for a computer backpack, definitely check it out.

ASP.NET Ninjas On Fire Black Belt Tips

Demo-heavy Haack talk on ASP.NET MVC:

  • CSRF
  • Unit Testing
  • Model Binders
  • Concurrency
  • Expression Helpers
  • Custom Scaffolding
  • AJAX Grid
  • Route Debugger

The first demo started with Haack writing a bank site. A topic close to my heart. And it’s for CSRF protection, which is also interesting.

The [Authorize] attribute on a controller means anyone accessing the controller method needs to be authenticated. Cool.

OK, so the demo is showing a cross-site request forgery on a POST request. You apply a [ValidateAntiForgeryToken] attribute on the controller action and in the form you put a hidden form field with a random value associated with your session using the Html.AntiForgeryToken method. This appears to me to be the MVC answer to ViewStateUserKey and ViewState MAC checking. If the POST is made without the token, an exception is thrown. I was talking to Eilon Lipton at the attendee party a couple of nights back and confirmed that only POST requests can be protected. The problem there is that if the browser is insecure and allows the attacker to create a cross-domain GET to retrieve the form and inspect the results of that GET, then it can grab the anti-forgery token, add it to the POST, and it will succeed. (This is the same case with ViewState MAC checking in web forms.) A full CSRF protection mechanism covers every request, not just select ones. I’ll have to see if I can get that pushed through into MVC. (That would be a pretty compelling solution to get us to switch away from web forms/MVP.)
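A minimal sketch of how the two pieces fit together (the controller action name and parameters here are made up for illustration - the attributes and helper are the MVC 1.0 API as described above):

```csharp
// In the view (ASPX): emits a hidden __RequestVerificationToken input
// whose value is tied to a cookie for this visitor.
//   <%= Html.AntiForgeryToken() %>

// In the controller: only honor POSTs that carry a matching token.
[AcceptVerbs(HttpVerbs.Post)]
[ValidateAntiForgeryToken]
public ActionResult Transfer(int toAccountId, decimal amount)
{
    // ...do the sensitive work...
    return RedirectToAction("Index");
}
```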

Next demo is how to do a controller action unit test. I got this one. Should be using Isolator for mocking, though. :) Showed some good patterns for folks who are unfamiliar with them, though - TDD, dependency injection, repository pattern… valuable stuff to get the community thinking about. Might have been just a liiiittle too fast for some of the folks unfamiliar with the patterns, though.

Next demo is model binding. The [Bind] attribute lets you specify which fields posted to the controller action should be used when populating the action’s parameters. I think more time should have been spent on this because model binding is actually pretty interesting. (Maybe I missed this in the latter half of yesterday’s talk.)

Concurrency. That is, two people editing the same record through the web interface at the same time. The tip here used a timestamp in the database using the “rowversion” data type and setting the “Update Check” value to “true” on that column. When you try to submit an update to the record, it’ll check to see if the row version you’re sending in is different than the one on the actual record in the database. If they’re different, you know the record has changed since you started editing and you throw an exception; if they’re the same, you’re good to go.
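If I understand the demo right, with LINQ to SQL that ends up looking something like this (hypothetical table and property names; the key bit is that SubmitChanges throws when the rowversion check fails):

```csharp
// Assumes a LINQ to SQL model where the Records table has a rowversion
// column with Update Check set to true.
try
{
    Record record = db.Records.Single(r => r.Id == id);
    record.Title = newTitle;
    db.SubmitChanges(); // includes the original rowversion in the WHERE clause
}
catch (ChangeConflictException)
{
    // The row changed since we loaded it - tell the user to re-edit.
}
```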

He’s using stuff from the “Microsoft.Web.Mvc” assembly - the MVC Futures assembly - which isn’t part of the RTM that was announced this week. Not sure I’d be demoing stuff that doesn’t ship… but I understand. Now I’m curious to see what’s in the Futures assembly besides the base64 encoding method he’s showing. (Futures is hard to find on CodePlex. Look for the MVC “source” release - you’ll find it there.)

One of the most confusing things about the [HandleError] attribute is that if you’re using it on localhost, it has the same semantics as the CustomErrors section in web.config. If you want to see the [HandleError] attribute work, you need to set web.config correctly.
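In other words, if you want [HandleError] to show the error view on localhost, web.config needs something like this (mode="On" forces custom errors even for local requests - just an illustration, not the only valid setting):

```xml
<system.web>
  <customErrors mode="On" />
</system.web>
```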

MVC Futures has “expression-based helpers” to render controls based on your model using lambdas. Instead of:

<% Html.TextBox("Title", null, new {width=80}) %>

you can use:

<% Html.TextBoxFor(m => m.Title, new {width=80}) %>

Nice because of the strong typing.

In order to move from string-based to expression-based binding, you need to override the T4 templates that generate the default views. Putting your overrides in your project in a CodeTemplates/AddController or CodeTemplates/AddView folder will get the project to override the defaults for that project. You’ll need to remember to remove the custom tool from the .tt templates or it will try to generate output for them. You can even add your own custom .tt templates in there so when you do File -> New Controller or whatever it will show up in the dialogs.

If you’re doing a lot of T4 editing, the Clarius Visual T4 editor looks nice. It adds syntax highlighting for T4 into Visual Studio. Not sure I’d have included that in the demo, though, since it’s not what the lay-user is going to see.

“Validation in ASP.NET MVC is a little tricky because we don’t have built-in support for DataAnnotations.” There’s an example on CodePlex for this. I’ve played a bit with DataAnnotations and I’m not overly won over. You have to add a partial class to “extend” your data object, put the [MetadataType] attribute on it pointing to a “buddy class,” and then create that buddy class with properties matching the names of the data object properties you want to annotate. Something like this:

[MetadataType(typeof(Question.Metadata))]
public partial class Question
{
  private class Metadata
  {
    [Required]
    [StringLength(10, ErrorMessage="Too long.")]
    public string Title { get; set; }
  }
}

(This is how Dynamic Data does it.) Apparently there’s some way coming out where you can specify that metadata through XML rather than attributes. I think I’ll be more interested when that comes out.

Nice tip here, instead of specifying an error message in your annotation, you can specify a resource. That’s key, since we have to localize everything.

[MetadataType(typeof(Question.Metadata))]
public partial class Question
{
  private class Metadata
  {
    [Required]
    [StringLength(10,
                  ErrorMessageResourceType=typeof(Resources),
                  ErrorMessageResourceName="TitleVerboseError")]
    public string Title { get; set; }
  }
}

Finally, a demo that shows something more complicated around validation. Now to see a demo where the validation parameters aren’t static…

Route debugging. Haack has posted a nice route debugger that puts up a page that shows the various routes in the table and which route was matched based on the incoming URL. Very helpful if you’re having a tough time figuring out why you’re not getting to the controller action you think you should be getting to.

We skipped the demo for the jQuery AJAX grid. He’ll show that in an open space later if you want to see it.

There’s a Little Scripter in All of Us

This is Rob Conery’s challenge to the audience to embrace their inner scripter and move away from the “architecture astronauts.”

First point is the acronyms we get into with ASP.NET. TDD, DRY, KISS, etc. Can we break the rules that ASP.NET generally leads us to? “Not everything is an enterprise app.” Hmm. This is going to be a little interesting for me since I’d actually like to see MORE focus on enterprise app development in ASP.NET. It’s like ASP.NET is hovering in this limbo area where it’s not fully set for enterprise development, but it’s also more than tiny scripting sorts of apps need. Makes me wonder if it’s trying to be too much. Jack of all trades, master of none.

Lots of apologies for the demo. “I’m on a Mac and the tech here doesn’t like it. The CSS on the demo doesn’t like a 1024 x 768 resolution so it looks bad on the screen.” As an audience member I don’t care, I just want to see it working and looking good.

He mentions that he jammed together a truckload of reeeeeally bad JavaScript code to get the MVC Storefront to work. “If I showed you that code, you’d probably throw up. Do I care?” Hmmm. This is getting harder for me to swallow. “Success as a metric” only works if you don’t have to go back and maintain the app, fix bugs, or add features. Oh, or if your team never changes. Just because it works doesn’t mean it’s right.

Oh, there’s another apology. “OpenID should be showing up down there… but I don’t have network connectivity.” Demo FAIL. With all the stuff not working, it’s really not convincing me that the rapid scripter approach to things is the way to go.

Bit of a backtrack - “I’m not giving up on architecture.” Showed some data access stuff - repository pattern, state pattern. Okay… and then we get to see the massive amount of inline script in the view. Wow. My head a-splode.

Here’s the point, I think: He showed this application he downloaded that had like 20 assemblies and when it didn’t work… it was so complex it was impossible to troubleshoot. The architecture might have been great, but it’s not something you could just download and get going. With a flatter application you might have a less “correct” architecture, but it might also be easier to get up and running and in front of the eyes of your users. That, I will buy. Granted, you have to take it with a grain of salt - if you’re making a massive distributed system that has certain scalability and deployment requirements, yeah, it’s going to be complex. On the other hand, if you’re just “making a web site,” you might not need all that. He kind of took it from one far end of the spectrum to the other (which made it a hard sell to me) but I get the idea.

Crap. Battery’s dying. Time to plug in.

Building Microsoft Silverlight Controls

I’ve not done a lot of Silverlight work so seeing this stuff come together is good. The lecture is in the form of building a shopping site using Silverlight. I got here a little late (was eating lunch) and the topic is setting up styles in a separate XAML file (StaticResources). Sort of like CSS for XAML. Good.

The clipboard manager the presenter is using is kind of cool. Curious what it is. Looks WPF.

So, new styling stuff in Silverlight 3 - “BasedOn” styles, so you can basically “derive and override” styling. Also, “merged dictionaries” so you can define styles that are compilations of multiple styles. (Not sure I described that last one well. There was no demo and it was skimmed over.)

Skinning works with custom controls but not user controls or panels. The reason for this is that custom control visuals are in a <ControlTemplate> in XAML and all of the control logic is in code - good separation. User controls, I’m gathering, are more tightly coupled.

“Parts and States Model” - Make it easy to skin your control by separating logic and visuals and defining an explicit control contract. It’s a recommended pattern but is not enforced. “Parts” are named elements (x:Name) in a template that the code manipulates in some way. “States” are a way for you to define the way a control should look in the “mouseover” state or the “pressed” state. You define these with <VisualState> elements. Not all controls have states. “Transitions” are the visual look your control goes through as it moves between states and are defined with a <VisualTransition> element. “State groups” are sets of mutually exclusive states and are defined in <VisualStateGroup> elements. (I’m gathering that the demo here will show this all in action.)
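As a rough sketch of how those pieces nest inside a control template (the control, element, and state names here are made up):

```xml
<ControlTemplate TargetType="local:RatingControl">
  <Grid>
    <VisualStateManager.VisualStateGroups>
      <!-- A state group holds mutually exclusive states. -->
      <VisualStateGroup x:Name="CommonStates">
        <VisualStateGroup.Transitions>
          <!-- Animate smoothly when moving between states. -->
          <VisualTransition GeneratedDuration="0:0:0.2" />
        </VisualStateGroup.Transitions>
        <VisualState x:Name="Normal" />
        <VisualState x:Name="MouseOver">
          <!-- Storyboard that changes the look on hover goes here. -->
        </VisualState>
      </VisualStateGroup>
    </VisualStateManager.VisualStateGroups>
    <!-- A named "part" the control code finds and manipulates. -->
    <ContentPresenter x:Name="ContentElement" />
  </Grid>
</ControlTemplate>
```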

The demo is making a validated text box. Styling of the textbox is done using {TemplateBinding} markup so if someone sets various properties on the text box they can change the style. Another “part” of the text box is the place where the text goes and… oh, she moved too fast. Somehow, by naming that element “ContentElement” with the x:Name attribute, the text magically showed up in the text box. We saw a VisualState setup where mousing over an element on the text box would enlarge it (a little star would grow to twice its original size in the mouseover state). Using VisualTransitions, she animated the transition between the two states so it looked nice and smooth.

The default binding for a text box is, apparently, that whenever the user tabs away, that’s when the “onchanged” event happens. In Silverlight 3 they let you set the binding to be explicit (it will never automatically happen) and then you can add a KeyUp event handler that lets you do the binding every time a key is pressed. Nice. (Seems a little roundabout, but I’m gathering this is a big improvement from Silverlight 2.)

Out of the box, Silverlight 3 will have good, standard-looking validation UI: TextBox, CheckBox, RadioButton, ComboBox, ListBox, PasswordBox. Good. I think we’re fighting validation right now in one of our projects.

I haven’t used Blend a lot before, but I have used Photoshop, Illustrator, AutoCAD, and 3DSMax. Those are listed in order of UI complexity (my opinion based on my experiences with them). Blend seems to fall somewhere between Illustrator and AutoCAD. The demo of hooking up states in Blend is interesting, but… well, not really straightforward. If someone grabbed me right after this there’s no way I could repeat it.

“The coolest and least interesting demo” for people who have used Silverlight 2 - They’ve enabled the ability to change the style of elements at runtime. I’m gathering that wasn’t possible in previous versions. The demo looked basically like a demo that uses JS to change CSS on some HTML at runtime. Glad Silverlight can do… uh… the same thing DHTML has been able to do for years.

Next demo is creation of a custom control showing the control’s contract (attributes that define the various states the control can be in) and the manner you programmatically track the control’s state. The default style for your control should be in “generic.xaml” and needs to be included in the Themes folder of your control assembly as an embedded resource. The custom control created was a five-star “rating” control like you’d see on Netflix or Amazon. Cool.

A lot of the way this seems to work is reminiscent of trying to deliver packaged user controls. The markup (ASCX in user controls, XAML for these Silverlight controls) may or may not have all of the controls they should because the designer may or may not have included them all, so you have to check to see if the nested controls even exist before acting on them.

Just about time for the final session of the day.

Building High-Performance Web Applications and Sites

The tips here should help in all web browsers, not just IE, but the specific stats will be from IE (since the talk is given by an IE team member).

Among the top 100 sites online (I don’t know which those are), IE spends 16% of its time in script; in AJAX-heavy web sites that only increases to 33%. Most time is spent in layout and rendering.

CSS performance.

  • Minimize included styles. Unused styles increase download size and rendering time because failures (CSS selectors that don’t point to anything) cost time.
  • Simplify selectors. Complex selectors are slow. Where possible, use class or ID selectors. Use a child selector (ul > li) instead of a descendant selector (ul li). Don’t use RTL and LTR styles. Minimizing included styles makes this easier.
  • Don’t use expressions. They’re non-standard and they get constantly evaluated.
  • Minimize page re-layouts. Basically, as the site is dynamically updating or the user’s working on things, you want to minimize the number of things that update. The example here was a page that dynamically builds itself and inserts advertisements as they load… and things jump all over the place. When those sorts of changes happen, the browser has to re-layout the page. A better approach would be to have placeholders where the ads go so the page doesn’t re-layout - content just gets inserted and things don’t jump around.

Optimizing JavaScript symbol resolution… Lookups are done by scope - local, intermediate, global - or by prototype - instance, object prototype, DOM. If you can optimize these lookups, your script will run faster. One example showed the difference between using the “var” keyword to declare a local scope variable and forgetting the keyword - if you forget the keyword, the variable isn’t local so the lookups get longer. Another example was showing repeated access of an element’s innerHTML property - rather than doing a bunch of sets on the property, calculate the total value you’re going to set at the end and access innerHTML once. Yet a third example showed a function that got called in a loop - every time it runs, the symbol gets resolved. Making a local scope variable function pointer and resolving the symbol once is better.
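The three examples from the talk, sketched out in simplified form (no DOM here - the innerHTML one is reduced to building the string you’d assign once at the end):

```javascript
// 1. Forgetting "var" makes the variable global, so every read/write
//    walks the whole scope chain; "var" keeps the lookup local and fast.
function sumLocal(n) {
  var total = 0;            // local variable - short lookup
  for (var i = 0; i < n; i++) {
    total += i;
  }
  return total;
}

// 2. Batch up a string and assign it once instead of repeatedly
//    setting an expensive property like element.innerHTML.
function buildList(items) {
  var html = "";
  for (var i = 0; i < items.length; i++) {
    html += "<li>" + items[i] + "</li>";
  }
  return html;              // assign to innerHTML once, at the end
}

// 3. Resolve a deep symbol once and reuse the local reference in a loop.
function roundAll(values) {
  var round = Math.round;   // one lookup instead of one per iteration
  var result = [];
  for (var i = 0; i < values.length; i++) {
    result.push(round(values[i]));
  }
  return result;
}
```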

Of course, you only want to do this sort of optimization when you need to, but how do you know if you need to? There are various JS profilers out there, and the presenter showed the one in IE8 which is pretty sweet and easy to use. I haven’t gotten so far into JS that I needed to profile, but it’s nice to know this sort of thing is out there. Anyway, the interesting point of this part of the demo was showing that optimizing some of the lookup chains (in these simple examples) reduced some execution times from, say, 400ms to 200ms. I guess VS2010 will have this built in.

JavaScript Coding Inefficiencies.

  • Parsing JSON. You do an AJAX call, get some script back and need to turn it into an object. How do you do it? With “eval()” it’s slow and pretty insecure. In a third-party parsing library it’s slower but more secure. The ideal solution is to use the native JSON parsing methods JSON.parse(), JSON.stringify(), and toJSON() on Date/Number/String/Boolean prototypes. This is in IE8 and FF 3.5.
  • The switch statement. In a compiled language, the compiler does some optimization around switch/case statements. Apparently in JavaScript, that optimization doesn’t happen - it turns into huge if/else if blocks. A better way to go is to make a lookup table surrounded by a try/catch block where the catch block is the default operation. Definitely want to run that through the profiler to see if it’s worth it.
  • Property access methods. Instead of getProperty() and setProperty(value) methods (which make for clean code), just access the property backing store directly. Skip the function call and the added symbol resolution.
  • Minimize DOM interaction. As mentioned above, the DOM is the last place that’s looked to resolve symbols. The less you have to do that, the better. (DOM performance has improved, apparently, in IE8.)
  • Smart use of DOM methods. For example, use nextSibling rather than nodes[i] when iterating through a node list. These accessors are optimized to be fast. The querySelectorAll method, new in IE8, is optimized for getting elements by CSS selectors and can be faster than getElementById or iterating through the whole DOM to find groups of elements.
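The first two tips above can be sketched like this (handler names and the "default returns 0" behavior are made up for illustration):

```javascript
// Native JSON parsing (IE8 / Firefox 3.5+) instead of eval() -
// faster and it doesn't execute arbitrary code from the response.
function parseResponse(text) {
  return JSON.parse(text);
}

// A lookup table in place of a big switch/case; the catch block
// plays the role of the "default" branch when the key is missing.
var handlers = {
  add: function (a, b) { return a + b; },
  mul: function (a, b) { return a * b; }
};

function dispatch(op, a, b) {
  try {
    return handlers[op](a, b);  // unknown op throws TypeError
  } catch (e) {
    return 0;                   // default case
  }
}
```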

Through all of this, though, optimize only when needed and consider code maintainability when you do optimize. You don’t just want to blindly implement this stuff.

HTTP Performance. This is a lot of that YSlow stuff you’re already familiar with.

  • Use HTTP compression. Whenever you get a request that says it allows gzip, you can gzip the response. You only want to do this on text or other uncompressed things, though - you don’t want to compress something like a JPEG that’s already compressed. If you do, in some cases, the download to the client might actually get bigger and you’ve wasted both client and server cycles in compressing/decompressing that JPEG.
  • Scaling images. Don’t use the width/height attributes on an image to scale it down - actually scale the image file.
  • File linking. Rather than having a bunch of JS or CSS files, link them all together into a single CSS and a single JS file. You’ll still get client-side caching, but you’ll reduce the number of requests/responses going on.
  • CSS sprites instead of several images. Say you have a bunch of buttons on a toolbar. You could have one image per button… or you could have one composite image and use DIVs and CSS to show the appropriate portion of the composite image on each button.
  • Repeat visits. Use caching headers - for example, the Expires header in a response so the browser knows it can serve the item from local cache without re-requesting it.
  • Script blocking. When a browser hits a <script> tag the browser stops because it doesn’t know if it’s going to change the page or not. Where you can, put the <script> at the bottom of the body so it’s loaded last. This is improved in IE8, but it’s still there.

IE8 has increased the connections-per-domain from two to six by default. No more registry hacking to get that to work.

Tools

  • Fiddler - inspects network traffic.
  • neXpert - plugin for Fiddler to aid performance testing.

And that’s all, folks. Battery’s dead and the conference is over. Time to fly home!

Keynote

Bill Buxton introduced the keynote today, which is about the release of Internet Explorer 8. The intro video, once again, was awesome. I think every web meme in existence showed up in this thing. (Not as good as ScottGu wrestling a bear yesterday, but pretty funny nonetheless.)

The first speaker is Dean Hachamovitch, GM of IE. He has some interesting points. We need a browser that “just works” for people who want to browse. Something that’s secure and stable. We also need a browser that works well for developers (uh… Firefox?). That’s what, apparently, IE8 is supposed to be. Available today, you can go download the final release version of Internet Explorer 8.

Some interesting statistics presented and the way they dealt with them in making IE8: 80% of user navigations are the user going back to a page they already were at. 70% of people have more than one search provider installed. To address that, the search box will return results as you type that come from your history and make that easier to get to. They also added easy buttons at the bottom of the search results box to toggle search providers on and off.

Oh, surprise: when a browser crashes, users don’t care why it crashed, they just don’t want to be interrupted. Not sure what genius figured that one out. The historic problem is you might be doing a bunch of stuff and if the browser crashes, you lose everything. To answer that, they did the thing Chrome did where each tab runs in its own process so if one crashes, the rest don’t. That took long enough.

Some of the performance statistics they’re showing are nice. Comparable to Firefox 3, nice and fast. Faster than Chrome. I’ll have to see if that plays out in more day-to-day scenarios.

Some of the little security stuff they did is nice. The top level domain is highlighted in the address bar so it’s easier to see. Say you went to “http://www.paypal.badguy.com/foo/bar/baz” - it’s not obvious that you’re not on Paypal… but with the “badguy.com” portion highlighted, it is. Oh, and built-in clickjacking prevention, that’s cool.

The standards compliance stuff is compelling… but the side effect of showing that IE8 is really standards compliant is that it shows the other browsers might not be quite as compliant, so you’re still going to be dealing with cross-browser formatting problems. It’ll be more compelling when all of the browsers get behind standards compliance as much as this.

Web slices look like an interesting developer technology. They’re these little HTML snippets that run in a tiny gadget-style window in IE8 so the user doesn’t have to open a whole tab and log in. I can see some interesting potential use cases in some of our projects - let you get your account balances, for example, without having to go to your banking site. Examples they showed on this included the ability to check your Yahoo! mail or look at traffic reports in little web slice windows. Sounds pretty easy to implement, too - just add a few tags around existing content.

Accelerators also look pretty interesting. Context-sensitive functionality like the ability to highlight some text and send it in Gmail, or select an address on a page and get a map. That content, like the slices, shows up in a little gadget-style window. I wonder if it would be interesting to people to be able to, say, highlight a biller’s name and have an accelerator to start a payment to that biller.

He’s making a big point about the fact that “they’re going to listen to the users” in the future. Interesting. I mean, we all know they didn’t listen to us before, but dwelling on it shows they really heard that this was an issue. Let’s hope it sticks.

Next speaker is Deborah Adler, a designer who revolutionized the way pharmaceuticals get packaged and labeled. Not a techie by any means - not even someone who interacts with the tech world. She started out by trying to solve a problem for her grandmother and ended up solving a problem for the world. The problem was that her grandmother mistook her grandfather’s Alzheimer’s medication and took the wrong one. Same medicine, similar names… but different doses. Problems.

Other problems she saw were things like people chewing pills that shouldn’t be chewed because the warning about that got hidden or obscured among a lot of other text on the bottle. Apparently like 60% of Americans make mistakes in taking their medication because of difficulty in reading or understanding the instructions. Showing us the issues made it very clear - poor coloration, far too much text that is difficult to read in tiny print, extra pages of difficult text to accompany the prescription bottle… I’ve seen it myself. It’s not clear. Tiny, poorly printed labels that make sense to the pharmacy but not to the end user.

Her solution - a revised label - is really good. It still has all of the information on it, but formatted in a much clearer manner where the information you need immediately (what the medicine is, how to take it) is prominent and the less important things (the phone number for the pharmacy) are less prominent. Labels get color-coded on a per-person and per-medication basis so my prescription for something will have a different color label than your prescription for the same thing - so I won’t accidentally take your meds. The bottle is reshaped to be flat on the back and slightly round on the front so you don’t have to rotate the bottle 360 degrees to read the information. Warnings go in bold, clear print on the back of the bottle. And a huge improvement - the label will actually get a red X that shows up on the front when the drug has expired so you know not to take it. Automatically. (Like time-release ink.) Standardized warning icons that are clear and easy to understand.

She tried to get it pushed through at the federal level but, while the FDA liked the idea, each state has its own pharmacy board so they couldn’t do it. In the end, Target took it and they’re using the idea now. It’s now called the “ClearRx” system.

This is really cool - it’s designed specifically for good human interaction. Granted, there were challenges in getting it out there (there are 23 different variations in the label to accommodate the different states’ regulatory requirements) but it’s a huge improvement from the crappy orange hard-to-read bottles.

Gonna have to ask Jenn if she’s seen this at the VA. The Surgeon General really likes it.

Makes me wonder what major changes we can make online to help people this much. Working in online banking, I’m sure there’s a lot of improvement we could make to clarify what people are looking at and make online banking easier and more compelling.

Wireframes That Work

Presented by a representative from a company called Cynergy that does contract RIA design, primarily in Adobe Flex. They list Bank of America as a customer, which is interesting to me.

Interesting point one - good design does not necessarily equate with good user experience. The example here was a house in Germany that won Time Magazine design of the year. It looks great… but the people living there aren’t having such a great time. Great design, great look, not great UX.

So here’s a new xDD acronym for you: Purpose-Driven Design. This seems to be the idea that you need to design your experience with the end purpose of the app in mind. Tailoring the experience to the user, the user’s needs, and the overall aim of the application.

Interesting idea that came up (that I happen to agree with) - don’t wait for the users to come back and complain about the experience before you start fixing the problem. Anticipate the issues and fix them up front. How often have you been on a project where you slap some UI on something that you know isn’t awesome but that’s what the stakeholders asked for… only to hear that it’s not the greatest and it needs to be redone?

Everyone comes into the design process with some baggage - tunnel vision (thinking you’re limited by technology or “this is how we’ve always done it”), changing minds (or not making any decision)… In a purposeful design scenario you have to step back from that and look at the problem. Watch the customer do their work. Look at the pain points. Look at the problem you’re trying to solve. Solve it without that baggage.

A tip presented: Turn off your computers when doing high-level design. Use a whiteboard. Use a pencil and paper. Computers are great productivity tools, but how many times do you check your email, get interrupted by IM, get sidetracked? It’s true - I think about how I work and I totally get all of that information coming in all the time. (The computer will obviously have its place in the process, but try doing some of the brainstorming without it.)

And a note on process: Don’t be so rigid in process that it hurts the development effort or the flow of ideas. Hmmm. That’s definitely something I’ll have to take back to work with me next week.

From the presentation:

  • “It hasn’t been hard to make things look interesting or cool. Usefulness and joy can be elusive.”
  • “Design like an architect, refine like a sculptor.”
  • “Don’t be a usability nazi.” (This has to do with the idea of getting too caught up in process and letter-of-the-law usability guidelines like the Jakob Nielsen things like minimizing number of clicks and such. Solving the problem in the best way might break some of those guidelines but will actually provide a better experience.)
  • “In software, the desired goal is often a disruptive solution in the marketplace. Know that this may require a disruptive process.” This is definitely one I want to take back to work with me.

For my projects, I know I have opinions about how we should be doing things. I’m going to have to stop and think now - am I looking at it with my baggage-goggles on? Or am I really solving the problem? I know our UX folks are doing a great job at researching peoples’ needs, and I’ve seen the personality profiles and such that they’ve come up with… but one of the questions I have now that I didn’t think of before - have we talked to people who don’t do online banking and figured out why? Are we solving the problem only for existing users or are we solving it for everyone? How do we solve the problem in such a way that we can increase our user base instead of just retaining the existing folks?

Lunchtime - Microsoft Surface

Got to play a bit with a Microsoft Surface during lunch. It’s sort of hard to really understand the coolness of the tactile experience without actually doing it. The videos and demos you see are neat, but when you actually use it, it makes a lot more sense.

One of the apps they had was a CD player where you set the disc on the table and it [somehow] looks at the case, figures out what the CD is, and starts playing the music from it. And, of course, you’ve seen the demos where the person sets their phone down and starts working with the pictures on it.

What if you could set your wallet on the table and see your account information? See your balances and such for your various accounts and credit cards? Want to pay your credit card bill? Drag a payment from your checking account over to your credit card account. Work with your electronic balances as easily as you work with cash, adding an easy to understand, tactile experience to your online banking. Might be interesting. Now if I can just convince work that I need to get a Surface… you know, for development purposes.

Securing Web Applications

I admittedly got here a few minutes late because I couldn’t find the room, but coming in… it looks like a better title for this would be “How We Improved Security in IE8.” Not quite what I expected. We’ll see.

Oh, yeah, uh… looking at the description - “Learn how to take advantage of browser security improvements to help protect your web applications and visitors.” Might have to go see what other presentations are out there. Recent projects have taught me that the security department won’t let us trust security to the browser - we have to control it all entirely at the server level. So…

Choosing Between ASP.NET Web Forms and MVC

This session is to help you determine what’s better for you - standard ASP.NET web forms or the new ASP.NET MVC framework. The demo shown here is two applications that have identical user interfaces, do exactly the same thing, but one’s web forms and the other’s MVC. Comparing apples to apples, so to speak.

Interesting bit when describing the way the demos were put together - a guy asked why there weren’t any themes used (.skin files, etc.) for the demo and all the styling was done in CSS. The answer - no web controls in MVC, so it doesn’t make sense to use .skin files. Interesting because I’m curious why it wouldn’t work if you were using ASPX as the view engine. Thinking what they meant wasn’t “you can’t use them” so much as “we chose not to.”

The presenter (Rachel Appel) seems to be dwelling on the URL format that MVC routing gives you. She brings up the querystring vs. nice routed URLs… but you can use routing with web forms. I’ve done it. Not sure the URL format is a selling point one way or the other. (Actually, later she mentions that routing will work with both, though she pretty well omitted that and sold hard when talking about MVC.)
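For what it’s worth, the routing idea itself is pretty simple regardless of framework - a route is just a URL pattern with placeholders that get pulled out as values. Here’s a language-agnostic sketch (in Python for brevity, with a made-up route pattern - the real thing is ASP.NET’s System.Web.Routing):

```python
import re

def match_route(pattern, url):
    """Match a routed URL like /dinners/details/5 against a pattern
    like /dinners/details/{id}, returning the extracted values."""
    # Turn {name} placeholders into named regex capture groups.
    regex = re.sub(r"\{(\w+)\}", r"(?P<\1>[^/]+)", pattern)
    m = re.fullmatch(regex, url)
    return m.groupdict() if m else None

# The "nice" routed URL and the querystring URL carry the same data;
# routing is about the shape of the URL, not web forms vs. MVC.
print(match_route("/dinners/details/{id}", "/dinners/details/5"))
# → {'id': '5'}
```

The point being: that extraction step works the same whether the handler on the other end is a controller action or a web forms page.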

She also seems to be talking about using web forms but NOT using the MVP pattern to separate the code out of the codebehind and into a separately testable class. I think that’s missing here. She brings up a lot about separation of concerns, but you can get some pretty good SoC with MVP.
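The MVP idea in a nutshell: the page (the view) stays dumb, and the logic lives in a plain presenter class you can test without spinning up ASP.NET at all. A minimal sketch of the shape of it (Python for illustration, all the names invented):

```python
class DinnerView:
    """Stand-in for a web forms page - the codebehind would just
    forward events to the presenter and expose simple properties."""
    def __init__(self):
        self.displayed_title = None

class DinnerPresenter:
    """The separately testable class: all the logic lives here,
    working against the view and a data repository."""
    def __init__(self, view, repository):
        self.view = view
        self.repository = repository

    def load(self, dinner_id):
        dinner = self.repository.get(dinner_id)
        self.view.displayed_title = dinner["title"]

class FakeRepository:
    """A fake repository is enough to unit test the presenter."""
    def get(self, dinner_id):
        return {"id": dinner_id, "title": "Team dinner"}

view = DinnerView()
presenter = DinnerPresenter(view, FakeRepository())
presenter.load(42)
print(view.displayed_title)  # → Team dinner
```

That’s the “pretty good SoC” you can get out of web forms - the presenter has no idea it’s being driven by a page.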

I think the best part here and the most obvious thing that never gets said: With MVC you get full control over everything… but there’s a corresponding increase in effort to get results out of the box. You don’t get anything for free. Sort of the Spider-Man “with great power comes great responsibility.” Kudos to Appel for saying it. It’s true, and no one ever really mentions that.

Another thing she said that never gets said: when showing a <% foreach %> loop building a table, she mentioned how this is reminiscent of classic ASP. Absolutely. What she doesn’t mention is that the next logical step of creating lots of pages with tables is to create a block of logic that you can call and pass data into so you don’t have to write the <% foreach %> on every page with every table. Isn’t that… server controls?
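That evolution - from an inline loop on every page to a reusable chunk of rendering logic you pass data into - takes about five lines to sketch (Python here just for illustration; the function name is made up, and a real server control obviously does a lot more):

```python
from html import escape

def render_table(headers, rows):
    """The factored-out version of writing a foreach loop inline on
    every page: pass in data, get back the table markup."""
    head = "".join(f"<th>{escape(h)}</th>" for h in headers)
    body = "".join(
        "<tr>" + "".join(f"<td>{escape(str(c))}</td>" for c in row) + "</tr>"
        for row in rows
    )
    return f"<table><tr>{head}</tr>{body}</table>"

print(render_table(["Name"], [["Travis"]]))
# → <table><tr><th>Name</th></tr><tr><td>Travis</td></tr></table>
```

Once you’ve written this helper, you’re most of the way back to the server control idea - which was the point.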

Really this solidifies my thoughts that the best way to go is a sort of middle ground: web forms using MVP, taking advantage of the routing (which shipped separately from MVC, by the way), and having all of that third-party control support and the richness of web forms while also getting your separation of concerns goodness.

Granted, I very well could be convinced otherwise when MVC 2.0 ships, whenever that is. I was talking to Eilon Lipton on the MVC team last night about some of my concerns that never seem to be shown in the MVC demos. Complex input validation and localization. Can it be done? Sure, but it’s not really a great story. Again, with all that control, you get a lot more manual wireup and, in some cases, no help at all. Apparently some of these more complex scenarios are on the list of things to address. Looking forward to seeing that.

File -> New Company: NerdDinner.com

In this one, Hanselman is showing how to easily create a reasonably rich application, his example being a dinner scheduling application. Technologies used include LINQ to SQL and MVC. The data is getting abstracted away with the repository pattern. A very good demo of how you can really rapidly get something going here. Also a good overview of how MVC comes together. Probably a little more useful for the folks who haven’t messed with MVC, but good to see it all come together.

You know how you say a word so many times you forget what it means and it sounds like gibberish? The word “dinner” has been worn out for me now. Dinner dinner dinner dinner dinner. Yup. Meaningless.

New favorite site: sadtrombone.com. (Yes, you can find anything on the web.)

ASP.NET MVC - America’s Next Top Model View Controller Framework

This is an introduction to MVC given by Phil Haack. File -> New Project demo including a walkthrough of the project structure. How controllers get set up, that sort of thing.

I think this should probably have been given on day one to give the people a foundation on which to build over the course of the next two days.

Connecting Applications Across Networks with Microsoft .NET Services

This is an intro to the Microsoft .NET Service Bus, which looks interesting, particularly since we’re doing a lot of WCF in one of my current projects. Clemens Vasters is the presenter on this one.

Lots of interesting features here. For example, they’re working on a feature where you’ll still be able to connect to your service endpoint even if the port is blocked by the firewall. Sounds sort of like the way Google Talk will use port 80 instead of the standard Jabber port 5222 if it’s blocked. No real details but, still, on the horizon.

Another interesting thing - if you have a client talking to a service and the service bus detects that, say, they’re in the same subnet, the bus will detect that and upgrade the connection to get the client talking directly to the service. There’s an event you can listen to that will tell you when that happens. (I’m pretty sure I’m understanding that right, but I admittedly came in a little late.) You can also set connections to be reliable so if a connection breaks it’ll automatically be re-established.

They have a queuing behavior where you can send messages into a queue and the service will pull messages off the queue and respond to them. This is set up through a policy in the service registry. He made a big deal of saying this isn’t, say, MSMQ queuing, but I’m not really sure how specifically it differs. The behavior seems to be the same, but with some REST sort of semantics based on HTTP verbs (like a “GET” on the queue will read a message but leave it there and not dequeue it).
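If I’m following the verb mapping right, the distinction is between a peek (GET, message stays queued) and a destructive read (message actually comes off). An in-memory toy model of that, as I understood it (Python; the method names are my guess at the mapping, not the actual service bus API):

```python
from collections import deque

class RestishQueue:
    """Toy model of the described queue semantics: GET peeks at the
    head message without removing it; a destructive read dequeues."""
    def __init__(self):
        self._messages = deque()

    def post(self, message):
        # POST: enqueue a new message.
        self._messages.append(message)

    def get(self):
        # GET: peek - the message stays on the queue.
        return self._messages[0] if self._messages else None

    def dequeue(self):
        # Destructive read: remove and return the head message.
        return self._messages.popleft() if self._messages else None

q = RestishQueue()
q.post("hello")
print(q.get())      # → hello (still on the queue)
print(q.dequeue())  # → hello (now removed)
print(q.get())      # → None
```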

Something else interesting - if you want to see what’s subscribed to a certain message set, you can do a GET on a router subscriptions feed and get an ATOM document back with the list of all subscriptions. Do a POST to create a new subscription, DELETE to unsubscribe… all RESTful semantics around that subscription endpoint.
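As a mental model, that makes the subscription endpoint a little resource collection: GET lists what’s there, POST adds, DELETE removes. A toy in-memory version of those semantics (Python, everything here invented for illustration - the real thing speaks HTTP and returns ATOM):

```python
class SubscriptionFeed:
    """Toy model of the RESTful subscription endpoint: GET returns
    the current subscriptions (the ATOM feed, conceptually), POST
    subscribes a new listener, DELETE unsubscribes one."""
    def __init__(self):
        self._subs = set()

    def get(self):
        # GET: list all current subscriptions.
        return sorted(self._subs)

    def post(self, subscriber):
        # POST: create a new subscription.
        self._subs.add(subscriber)

    def delete(self, subscriber):
        # DELETE: unsubscribe.
        self._subs.discard(subscriber)

feed = SubscriptionFeed()
feed.post("chat-client-1")
feed.post("chat-client-2")
print(feed.get())   # → ['chat-client-1', 'chat-client-2']
feed.delete("chat-client-1")
print(feed.get())   # → ['chat-client-2']
```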

Good demo just sort of solidified it for me, though. Sort of like a chat app. Two Silverlight applications subscribing to a service on the bus listen for messages. Someone enters some text and submits it to the service. The service turns around and sends a message to the subscribers - the listening chat clients - and both get the text that was submitted. Basically Twitter. Got it. I see what’s going on now. (Oh, hey, the demo’s called “Text140!” I get it!) Was feeling a little out of sorts for a bit, not really knowing what I was looking at. Messages, at least in the demo, all take the form of ATOM entries.

OK. I get it. REST + ATOM + pub/sub + cloud = Microsoft.ServiceBus. Basically. Nice. Unfortunately, with the cloud portion, I don’t think we’ll be able to use it for the project I’m on (banks + cloud isn’t gonna happen) but I can see that it could be very useful in other scenarios. Twitter competitor? :) (Didn’t realize it was an Azure service until pretty late in the game. Again, probably from being late to the show here.)