gaming, xbox

When Xbox One was first announced the whole game licensing thing, admittedly, had me a little worried.

I don’t buy stuff from iTunes because the DRM has always been a pain in the ass. I have an iPod, my wife has an iPod, we each have our own separate user accounts, and making sure that music or videos that I buy are playable on her device is just a huge pain.

I have a few Kindle books, but I find that Kindle DRM is also a pain in the ass. I buy a book, my wife wants to read it… and it’s (of course) not one of the books that’s “eligible” to be loaned to someone. The publisher or whomever has locked it down. My wife has to physically borrow my Kindle to read it, which means I can’t read any of my other books. That’s crap.

We have two Xbox 360s at my house - one upstairs, one downstairs. My wife and I each have an Xbox Live account. Sometimes I want to play a game upstairs while she plays a different game downstairs (or vice versa). Sometimes we buy two copies of a game so we can play each other.

So when I saw that there was this whole licensing discussion going on, I have to say, I headed for anti-DRM territory and got sort of scared.

However, it was pretty early and the details were sketchy. Some of the stuff sounded like the same crap - “Anyone can play your games on your console - regardless of whether you are logged in or their relationship to you.” That’s how the Xbox Live Arcade works now, where it gets licensed both to your user account and to a particular console. That doesn’t account for the multi-console household like we have, which makes some things really hard. If I buy some Rock Band tracks on one console, but we want to play on the other console… I have to be logged in. Having had the red ring of death three times, I’m no stranger to the stupid “license transfer” process to move from one console to another. Seeing this come back with “even more move toward digital-only content” had me scared.

I also got scared by the notes about trading in disc-based games. “…Publishers can enable you to trade in your games at participating retailers.” That sounds like Kindle to me. Book publishers can allow me to loan people my Kindle books but almost none do. Why would I believe game publishers would be any different?

Some of the stuff did sound pretty cool - the “ten-person family sharing” thing was actually pretty great. I could buy a game, play it, and loan it to my uncle in the next state without having to mail him a disc. They had also figured out how to give your games to other people when you’re done with them, which is also pretty slick.

The once-every-24-hours-phone-home thing didn’t bother me a bit. I don’t play many online/multiplayer games, but I’m always connected to the network and I’m always signed in with my profile. It doesn’t bug me if they want to run a ping once a day. I’m sure it’d be connected to other services more often than that for other things anyway. I know a lot of folks were worked up about that one. I think that’s a mountains-out-of-mole-hills thing, but that’s just me.

But I guess that’s all kind of moot now, since they’ve changed their tune. We’re basically going back to the original model, which doesn’t hurt my feelings but does make me wonder what could have been. It sounds like there are some folks who are pretty disappointed that we’re going back to same-old-same-old and I’m a little disappointed, too, but probably not as much as this guy on Gizmodo.

I think the problem with the Gizmodo point of view is that there were a lot of hopes listed about things that could have been with the new licensing model but no actual concrete facts. Let’s address each of the points in that article:

  • Each game you buy would be tied to your account: That’s not necessarily a good thing. I mentioned the trouble I’ve had in the past in properly being able to loan Kindle books and in the XBLA content I’ve purchased. I think this is a fixable problem, but it wasn’t addressed in the original communication.
  • Publishers could create hubs to resell games: “Could” is a key word there. We’ve heard no intention from anyone that it would happen. And I’m not super interested in going to each individual publisher’s hub to resell stuff. What happened to the free market?
  • Publishers could make money on resold games: That is true. I’m not convinced that’s a benefit to me.
  • New games could be cheaper because publishers will make that money on resold games: Again, “could” is a key word. I’ve yet to see a console release where games for the new console were cheaper than the previous generation. What’s the impetus to lower the price?
  • You’d get a better return on your used games: That sounds like it’d be up to these hypothetical markets we have no information on. It’d be possible, but no guarantees.
  • We know this is possible because of Steam: I think there are a lot of factors that went into making Steam what it is today. FWIW, I’m not a Steam user or a PC gamer. It’d be nice to see other examples of similar marketplaces working - one example is not sufficient evidence beyond proving something is technically possible. Michael Jordan has a 48” vertical leap. I don’t see a ton of other folks pulling that one off.
  • Sharing games would have been cooler: I’ll give them this one. That 10-person family sharing thing sounded neat.

I’m not sure going back to the “old way” was the right choice - maybe they could have kept a few of the new ideas? What if you had the 10-person-sharing-plan AND a “five-console-ownership” plan? I register the two consoles I own, my wife registers those same consoles, and now the games will work on those consoles OR with the people on my plan? Honestly, that’d be perfect. You might get people abusing it, registering consoles they don’t own, but they’ll do that with the family plan anyway.

Or a guarantee that I can trade in disc-based games rather than “publishers can allow me to?” I want full control over the content I purchased. It’s mine. Once you throw “publishers can allow this or that” into the mix, all bets are off. I’ve stopped listening because you just rented me content instead of selling me content.

I think the real failure here was in the communication about the licensing model. We got a couple of press releases with some bullet points but no real detailed information. Particularly around the resale hubs and so on - all that was talking-head-level-stuff, no real concrete… anything. I think a lot of fears could have been assuaged by just having those details ready up front. Can you show me one of these resale hubs? Can you give me any idea about where prices might start in there? What have publishers actually said about this stuff? There’s a lot of fear, uncertainty, and doubt there, and I think it fueled the masses. They should have been ready with a ton of details, but they weren’t, and now we are where we are.

What I’m curious about is if they’ll “phase in” the new model. Start off with the old, but then add components in one at a time. You can buy a disc-based game, but if you buy a digital game… now you have that 10-person-sharing-plan - a benefit of going digital instead of disc. Get people used to some of the cool parts without doing the “rip-off-the-bandage” approach to changing it up. I don’t see why they couldn’t.

I love the Disney Parks Blog. They post interesting stuff (if you’re a Disney fan like me), particularly if you’re into behind-the-scenes things.

Today’s post on Mickey and the Magical Map is cool and tells you about how they put that show together.

Seeing some of that behind-the-scenes stuff reminds me of a time back when I was younger, maybe… I don’t know, 10? There was some sort of promotion going on at a local department store. I think it was JC Penney, but I don’t remember exactly. They had some Disney character artists touring through and they’d give little demonstrations every hour and show you how to draw the characters.

I remember being really excited to see it, sitting on the floor in front of a small raised platform that had a chair and a drawing table. A bunch of other kids had gathered around, too, and we were all anxious to see what was going to happen.

Eventually a lady came out and talked all about drawing the characters - how you could imagine all of them as basic shapes, then sort of “tweak” the shapes to get a little closer to what you wanted. Donald Duck’s head is basically round, while Chip and Dale’s heads are more oval. That sort of thing. Looking back, it was all sort of basic drawing techniques - putting the cross-shaped guidelines on the head to place the eyes and nose, and so on. Simple stuff when I look at it now, but so inspiring and magical when you’re young, watching your favorite characters basically materialize in front of you from some simple pencil marks.

The artist gave her drawings away to the audience members as she finished them. I remember wanting one really badly but not getting one, being jealous of the kid in front of me who got the one of Chip.

I also got an opportunity back in college to work for a short amount of time at Will Vinton Studios. It was really cool to see the little sets that the stop motion animation was done on, how the cameras and the figures all came together to create this magical moving picture. I ended up writing some conversion tools for Kuper motion control cameras to help integrate computer animation with physical camera movements so computer animation and clay animation could coexist. I don’t know how useful it was, but I heard they liked it and used it quite a bit.

I still love seeing that stuff. How the animation is done, how the shows are put together… and when I see it, and remember, it makes me wish I was part of that magic. That I was helping to put on the shows, or create the animation, or make that happen for other people. My friends Sheldon and Jason are doing that, and I admit it makes me a little jealous.

Hey, Pixar… need any remote developers in Oregon?

net, vs

In working on some NuGet packages, one thing I wanted to do was set up some configuration files in preparation for SlowCheetah integration. Instead of seeing a folder structure like this in the project…

  • ~/Config
    • MyConfig.config
    • MyConfig.Debug.config
    • MyConfig.Release.config

I wanted to see the file dependencies set up like you usually get with Web.config:

  • ~/Config
    • MyConfig.config
      • MyConfig.Debug.config
      • MyConfig.Release.config

That’s not really a straightforward thing to do, as it turns out.

Luckily, NuGet provides your package the ability to have a PowerShell script run at install time, and part of what it passes you is a reference to the EnvDTE project into which the package is being installed.

EnvDTE is the way you automate Visual Studio for things like custom tools and add-ins. I’ve messed around with EnvDTE before (though lately I prefer using CodeRush for my automation tasks) so this wasn’t too hard to get back into. Here’s the script for Install.ps1:

param($installPath, $toolsPath, $package, $project)
# Sets the configuration files to have dependent transforms (.Debug/.Release).
# Selections of items in the project are done with Where-Object rather than direct
# access into the ProjectItems collection because if the object is moved or doesn't
# exist then Where-Object will give us a null response rather than the error that
# DTE will give us.

$configFolder = $project.ProjectItems | Where-Object { $_.Properties.Item("Filename").Value -eq "Config" -and  $_.ProjectItems.Count -gt 0 }
if($configFolder -eq $null)
{
  # Upgrade scenario - user has moved/removed the Config folder
  # or has moved/removed the configuration files out of the folder.
  return
}

$baseConfig = $configFolder.ProjectItems | Where-Object { $_.Properties.Item("Filename").Value -eq "MyConfig.config" -and $_.ProjectItems.Count -eq 0 }
if($baseConfig -eq $null)
{
  # Upgrade scenario - user has moved/removed the MyConfig.config file
  # or it already has the dependent items set.
  return
}

# Config file exists, so update the properties.
$baseConfig.Properties.Item("SubType").Value = "Designer"

$debugConfig = $configFolder.ProjectItems | Where-Object { $_.Properties.Item("Filename").Value -eq "MyConfig.Debug.config" }
if($debugConfig -eq $null)
{
  # Upgrade scenario - user has moved/removed the MyConfig.Debug.config file
  # or it's already set as a dependent item. (Dependent items show up as children
  # of the file on which they depend, not as a child of the folder.)
  return
}

# Handle the update for MyConfig.Debug.config - set it as BuildAction = None
# and move it to be a dependency of MyConfig.config. (Adding the file under the
# base item's ProjectItems is what creates the DependentUpon relationship.)
$debugConfig.Properties.Item("ItemType").Value = "None"
$baseConfig.ProjectItems.AddFromFile($debugConfig.Properties.Item("FullPath").Value) | Out-Null

$releaseConfig = $configFolder.ProjectItems | Where-Object { $_.Properties.Item("Filename").Value -eq "MyConfig.Release.config" }
if($releaseConfig -eq $null)
{
  # Upgrade scenario - user has moved/removed the MyConfig.Release.config file
  # or it's already set as a dependent item. (Dependent items show up as children
  # of the file on which they depend, not as a child of the folder.)
  return
}

# Handle the update for MyConfig.Release.config - set it as BuildAction = None
# and move it to be a dependency of MyConfig.config.
$releaseConfig.Properties.Item("ItemType").Value = "None"
$baseConfig.ProjectItems.AddFromFile($releaseConfig.Properties.Item("FullPath").Value) | Out-Null

What this does is switch this .csproj snippet…

  <Content Include="MyConfig.config" />
  <Content Include="MyConfig.Debug.config" />
  <Content Include="MyConfig.Release.config" />

Into this:

  <Content Include="MyConfig.config">
    <SubType>Designer</SubType>
  </Content>
  <None Include="MyConfig.Debug.config">
    <DependentUpon>MyConfig.config</DependentUpon>
  </None>
  <None Include="MyConfig.Release.config">
    <DependentUpon>MyConfig.config</DependentUpon>
  </None>

What I’ve not yet figured out is how to get a new custom element <TransformOnBuild>true</TransformOnBuild> to show up on the MyConfig.config element. From this article on MSDN, it appears there’s a much more involved bit of work to do and I’m not sure that I have access to all the requisite DTE objects from inside the script.
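For reference, here’s the shape I’m after - a sketch of the desired end state, not something the script above actually produces:

```xml
<Content Include="MyConfig.config">
  <SubType>Designer</SubType>
  <TransformOnBuild>true</TransformOnBuild>
</Content>
```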

We’ve had a recent issue where Phoenix won’t do something because she claims she’s “scared”.

“Phoenix, can you come over here?”

“No,” she says. “I scared.”

“What are you scared of?” As if we didn’t already know.

“Bears. Bears eat my shoes.”

That’s right, she’s scared of bears eating her shoes. Or her coat. Or my car. Pretty much anything out there is something waiting for a bear to eat it. At night, we have this somewhat covered.

“The bear can’t get you, Phoenix, because you have your unicorn to protect you. Unicorns stop bears.” We’ll hand her this little stuffed unicorn and all is well.

“My un-corn.” Long “u” is a hard sound, I guess, so it’s not “unicorn,” it’s “un-corn.” Whatever.

This morning in the car, though, I didn’t have the unicorn and the bear talk started. I tried to think up something new.

“Daddy, I scared.”

Sigh. No unicorn. Well, let’s just get down to it. “Are you scared of bears?”

“Yes. Bears eat my shoes.”

“I know. But you like dragons, right?”

“I not dragon, I princen.” Hard “s” is also difficult, so “princess” becomes “princen.”

“Yes, you’re a princess… are you princess of the dragons?” I think you Game of Thrones folks see where I’m going with this.

“I princen of dragons!”

“That’s right, you’re the Khaleesi.”

“I kee-see!”

“Now, tell your dragons to stop the bears. Dragons can stop bears.”

“No, I not tell dragons.”

Dammit. “Why not?”

“Dragons scared of bears.”

Are you freaking kidding me? “Are you sure?” Then, out of nowhere…

“PA-KOW! PA-KOW! I shoot bear!”


Wait, what? “Phoe, you shot the bear?”

“PA-KOW! I shoot bear!”

Um. Well, uh… I’m not really sure where she picked that one up, but… I guess… bear problem solved, right?

net, aspnet

We do a lot of interesting stuff with FluentValidation at work and more than a few times I’ve had to give the whiteboard presentation of how a server-side FluentValidation validator makes it to jQuery validation rules on the client and back. I figured it was probably time to just write it up so I can refer folks as needed.

Let’s start out with a simple model we want to validate. We’ll carry these examples with us in the walkthrough so you have something concrete to tie back to.

public class MyModel
{
  public string Name { get; set; }
  public int Age { get; set; }
}

On the server, we’d validate that model using FluentValidation by implementing FluentValidation.AbstractValidator<T> like this:

public class MyModelValidator : AbstractValidator<MyModel>
{
  public MyModelValidator()
  {
    RuleFor(m => m.Name)
      .NotEmpty()
      .WithMessage("Please provide a name.");

    RuleFor(m => m.Age)
      .GreaterThan(21)
      .WithMessage("You must be over 21 to access this site.");
  }
}

First let’s look at the way rules are set up in FluentValidation. You’ll see that each call to RuleFor points to a property we want to validate. Calling RuleFor sort of starts a “stack” of validators that are associated with that property. Each method that adds a validation (NotEmpty, GreaterThan) adds that validator to the “stack” and then sets it as “active” so other extensions that modify behavior (WithMessage) will know which validator they’re modifying.

A more complex setup might look like this:

RuleFor(m => m.Name)                        // Start a new validation "rule" with a set of validators attached
  .NotEmpty()                               // Add a NotEmptyValidator to the stack and make it "active"
  .WithMessage("Please provide a name.")    // Set the message for the NotEmptyValidator
  .Matches("[a-zA-Z]+")                     // Add a RegularExpressionValidator to the stack and make it "active"
  .WithMessage("Please use only letters."); // Set the message for the RegularExpressionValidator

In a server-side only scenario, when you validate an object each “rule” gets iterated through and each validator associated with that “rule” gets executed. More or less. There’s a bit of complexity to it, but that’s a good way to explain it without getting into the weeds.
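To make that concrete, here’s what plain server-side execution looks like with the validator above - no MVC involved yet, just FluentValidation on its own:

```csharp
var validator = new MyModelValidator();
var result = validator.Validate(new MyModel { Name = "", Age = 30 });

// Name is empty, so the NotEmptyValidator on Name fails and result.IsValid
// is false; the failure carries the message we set with WithMessage.
foreach (var error in result.Errors)
{
    Console.WriteLine("{0}: {1}", error.PropertyName, error.ErrorMessage);
}
```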

ASP.NET MVC ships with an abstraction around model validation that starts with a ModelValidatorProvider. Out of the box, MVC has support for DataAnnotations attributes. (Rachel Appel has a great walkthrough of how that works.) From the name, you can guess that what this does is provide validators for your models. When MVC wants to validate your model (or determine how it’s validated), it asks the set of registered ModelValidatorProvider objects and they return the validators. It’s dumb when I say it out loud, but the class names start getting really long and a little confusing, so I just wanted to bring this up now: if you start getting confused, really stop to read the name of the class you’re confused about. It’ll help, trust me, because they all start sounding the same after a while.

FluentValidation has an associated FluentValidation.Mvc library that has the MVC adapter components. In there is a FluentValidationModelValidatorProvider. To get this hooked in, you need to register that FluentValidationModelValidatorProvider with MVC at application startup. The easiest way to do this is by calling FluentValidationModelValidatorProvider.Configure(), which automatically adds the provider to the list of available providers and also lets you do some additional configuration as needed.

Things you can do in FluentValidationModelValidatorProvider.Configure:

  • Specify the validator factory to use on the server to retrieve FluentValidation validators corresponding to models.
  • Add mappings for custom validators that map between the server-side FluentValidation validator and the MVC client-side validation logic.

Right now we’ll talk about the validator factory piece. I’ll get to the custom validator mappings later.

As mentioned, when you run FluentValidationModelValidatorProvider.Configure(), you can tell it which FluentValidation.IValidatorFactory to use for mapping server-side validators (like the MyModelValidator) to models (like MyModel). Out of the box, FluentValidation ships with the FluentValidation.Attributes.AttributedValidatorFactory. This factory type lets you attach validators to your models with attributes, like this:

[Validator(typeof(MyModelValidator))]
public class MyModel
{
  // ...
}

That’s one way to go, but I find that to be somewhat inflexible. I also don’t like my code too closely tied together like that, so instead, in MVC, I like to use the DependencyResolver to get my model-validator mappings. FluentValidation doesn’t ship with a factory that uses DependencyResolver, but it’s pretty easy to implement:

public class DependencyResolverModelValidatorFactory : IValidatorFactory
{
  public IValidator GetValidator(Type type)
  {
    if (type == null)
    {
      throw new ArgumentNullException("type");
    }

    return DependencyResolver.Current.GetService(typeof(IValidator<>).MakeGenericType(type)) as IValidator;
  }

  public IValidator<T> GetValidator<T>()
  {
    return DependencyResolver.Current.GetService<IValidator<T>>();
  }
}

Using that validator factory, you need to register your validators with your chosen DependencyResolver so that they’re exposed as IValidator<T> (like IValidator<MyModel>). Luckily AbstractValidator<T> implements that interface, so you’re set. In Autofac (my IoC container of choice) the validator registration during container setup would look like…
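A minimal sketch of that Autofac registration (assuming your validators live in the same assembly as MyModelValidator):

```csharp
var builder = new ContainerBuilder();

// Explicitly register a single validator as its closed IValidator<T> interface...
builder.RegisterType<MyModelValidator>().As<IValidator<MyModel>>();

// ...or scan an assembly and register every validator in one shot.
builder.RegisterAssemblyTypes(typeof(MyModelValidator).Assembly)
       .AsClosedTypesOf(typeof(IValidator<>));
```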


To register the model validator factory with FluentValidation, we’d do that during FluentValidationModelValidatorProvider.Configure() at application startup, like this:

  FluentValidationModelValidatorProvider.Configure(provider =>
    {
      provider.ValidatorFactory = new DependencyResolverModelValidatorFactory();
    });

Let’s checkpoint. Up to now we have:

  • A custom validator for our model.
  • A validator factory for FluentValidation that will tie our validator to our model using dependency resolution.
  • FluentValidation configured in MVC to be a source of model validations.

Next let’s talk about how the FluentValidation validators get their results into the MVC ModelState for server-side validation.

ASP.NET MVC has a class called ModelValidator that is used to abstract away the concept of model validation. It’s responsible for generating the validation errors that end up in ModelState as well as the set of client validation rules that define how the browser-side behavior gets wired up. The ModelValidator class has two methods, each of which corresponds to one of these responsibilities.

  • Validate: Executes server-side validation of the model and returns the list of results.
  • GetClientValidationRules: Gets metadata in the form of ModelClientValidationRule objects to send to the client so script can do client-side validation.

For DataAnnotations, there are implementations of ModelValidator corresponding to each attribute. For FluentValidation, there are ModelValidator implementations that correspond to each server-side validation type. For example, looking at our MyModel.Name property, we have a NotEmptyValidator attached to it. That NotEmptyValidator maps to a FluentValidation.Mvc.RequiredFluentValidationPropertyValidator. These mappings are maintained by the FluentValidationModelValidatorProvider. (Remember I said you could add custom mappings during the call to Configure? This is what I was talking about.)

When MVC needs the set of ModelValidator implementations associated with a model, the basic process is this:

  • The ModelValidatorProviders.Providers.GetValidators method is called. This iterates through the registered providers to get all the validators available. Eventually the FluentValidationModelValidatorProvider is queried.
  • The FluentValidationModelValidatorProvider uses the registered IValidatorFactory (DependencyResolverModelValidatorFactory) to get the FluentValidation validator (MyModelValidator) associated with the model (MyModel).
  • The FluentValidationModelValidatorProvider then looks at the rules associated with the property being validated (MyModel.Name) and converts each validator in the rule (NotEmptyValidator) into its mapped MVC ModelValidator type (RequiredFluentValidationPropertyValidator).
  • The list of mapped ModelValidator instances is returned to MVC for execution.

Here’s where it starts coming together.

When you do an MVC HtmlHelper call to render a data entry field for a model property, it looks something like this:

@Html.TextBoxFor(m => m.Name)

Internally, during the textbox generation, a bit more happens under the covers:

  • The input field HTML generation process calls ModelValidatorProviders.Providers.GetValidators for the field. (This is the process mentioned earlier.)
  • The ModelValidator(s) returned each have GetClientValidationRules called to get the set of ModelClientValidationRule data defining the client-side validation info.
  • The ModelClientValidationRule objects get converted to data-val- attributes that will be attached to the input field.

Using our example, if we did that Html.TextBoxFor(m => m.Name) call…

  • The FluentValidationModelValidatorProvider would, through the process mentioned earlier, yield a RequiredFluentValidationPropertyValidator corresponding to the NotEmptyValidator we’re using on the Name field.
  • That RequiredFluentValidationPropertyValidator would have GetClientValidationRules called to get the client-side validation information. I happen to know that the RequiredFluentValidationPropertyValidator returns a ModelClientValidationRequiredRule.
  • Each of the ModelClientValidationRule objects (in this case, just the ModelClientValidationRequiredRule) would be converted to data-val- attributes on the textbox.

What’s in a ModelClientValidationRule and how does it translate to attributes?

Basically, each ModelClientValidationRule corresponds to a jQuery validation type. The ModelClientValidationRequiredRule is basically just a pre-populated ModelClientValidationRule derivative that looks like this:

new ModelClientValidationRule
{
  ValidationType = "required",
  ErrorMessage = "Please provide a name."
};

The ValidationType corresponds to the name of a jQuery validation type and the error message is the one we specified way back when we defined our FluentValidation validator.

When that gets converted into data-val- attributes, it ends up looking like this:

<input type="text" ... data-val-required="Please provide a name." />

That’s how the validation information gets from the server to the client. Once it’s there, it’s time for script to pick it up.

MVC uses jQuery validation to execute the client-side validation, with the jQuery Unobtrusive Validation library on top to parse the data-val- attributes into client-side logic. The attributes correspond fairly well one-to-one with existing jQuery validation rules. For example, data-val-required corresponds to the required() validation method. jQuery Unobtrusive Validation reads the attributes and maps them (and their values) into jQuery validation rules attached to the form. Standard jQuery validation takes it from there.

The whole process for validating a submitted form is:

  • The Html.TextBoxFor call gets the set of validators for the field and attaches the data-val- attributes to it using the process described earlier.
  • jQuery Unobtrusive Validation parses the attributes on the client and sets up the client-side validation.
  • When the user submits the form, jQuery validation executes. Assuming it passes, the form gets submitted to the server.
  • During model binding, the MVC DefaultModelBinder gets the list of ModelValidator instances for the model using the process described earlier.
  • Each ModelValidator gets its Validate method called. Any failures will get added to the ModelState by the DefaultModelBinder. In our case, the RequiredFluentValidationPropertyValidator will pass through to our originally configured NotEmptyValidator on the server side and the results of the NotEmptyValidator will be translated into ModelValidationResult objects for use by the DefaultModelBinder.

Whew! That’s a lot of moving pieces. But that should explain how it comes together so you at least know what you’re looking at.

When you want to add a custom server-side validator, you have to hook into that process. Knowing the steps outlined above, you can see you have a few things to do (and a few places where things can potentially break down).

The basic steps for adding a new custom FluentValidation validator in MVC are:

  • Create your server-side validator. This means implementing FluentValidation.Validators.IPropertyValidator. It’s what gets used on the server for validation but doesn’t do anything on the client. I’d look at existing validators to get ideas on how to do this, since it’s a little different based on whether, say, you’re comparing two properties on the same object or comparing a property with a fixed value. I’d recommend starting by looking at a validator that does something similar to what you want to do. The beauty of open source.
  • Add an extension method that allows your new validator type to participate in the fluent grammar. The standard ones are in FluentValidation.DefaultValidatorExtensions. This is how you get nice syntax like RuleFor(m => m.Name).MyCustomValidator(); for your custom validator.
  • Create a FluentValidation.Mvc.FluentValidationPropertyValidator corresponding to your server-side validator. This will be responsible for creating the appropriate ModelClientValidationRule objects to tie your server-side validator to the client and for executing the corresponding server-side logic. (Really the work is in the client-side part of things. The base FluentValidationPropertyValidator already passes-through logic to your custom server-side validator so you don’t have to really do anything to get that to happen.)
  • Add a mapping between your server-side validator and your FluentValidationPropertyValidator. This takes place in FluentValidationModelValidatorProvider.Configure. Let’s talk about that.
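To illustrate the extension-method step, a sketch might look like this (MyCustomPropertyValidator here is a hypothetical IPropertyValidator implementation standing in for your real one):

```csharp
public static class MyValidatorExtensions
{
    // Hooks the hypothetical MyCustomPropertyValidator into the fluent grammar
    // so it can be chained like NotEmpty() or Matches().
    public static IRuleBuilderOptions<T, string> MyCustomValidator<T>(
        this IRuleBuilder<T, string> ruleBuilder)
    {
        return ruleBuilder.SetValidator(new MyCustomPropertyValidator());
    }
}
```

With that in place, RuleFor(m => m.Name).MyCustomValidator().WithMessage(...) chains just like the built-in validators.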

Remember earlier I said we’d talk about adding custom validator mappings to the FluentValidationModelValidatorProvider during configuration time? You need to do that if you write your own custom validator. When you want to map a server-side validator to a client-side validator, it looks like this:

  FluentValidationModelValidatorProvider.Configure(provider =>
    {
      provider.ValidatorFactory = new DependencyResolverModelValidatorFactory();
      provider.Add(
        typeof(MyCustomPropertyValidator),
        (metadata, context, rule, validator) => new MyCustomFluentValidationPropertyValidator(metadata, context, rule, validator));
    });

There are mappings already for the built-in validators. You just have to add your custom ones. Unfortunately, the actual list of mappings itself isn’t documented anywhere I can find, so you’ll need to look at the static constructor on FluentValidationModelValidatorProvider to see what maps to what by default.

I won’t walk through the full creation of an end-to-end custom validator here. That’s probably another article since this is already way, way too long without enough pictures to break up the monotony. Instead, I’ll just mention a couple of gotchas.


Say you have a custom email validator. You decide you want to shortcut a few things by just deriving from the original email validator and overriding a couple of things. Sounds fine, right?

public class MyCustomEmailValidator : EmailValidator

Or maybe you don’t do that, but you do decide to implement the standard FluentValidation email validator interface:

public class MyCustomEmailValidator : PropertyValidator, IEmailValidator

The gotcha here is that server-side-to-client-side mapping that I showed you earlier. FluentValidation maps by type and already ships with mappings for the out-of-the-box validators. If you implement your custom validator like this, chances are you’re going to end up with the default client-side logic rather than your custom one. The default was registered first, right? It’ll never get to your override… and there’s not really a way to remove mappings.

You also don’t want to override the mapping like this:

  provider.Add(
    typeof(EmailValidator),
    (metadata, context, rule, validator) => new MyCustomFluentValidationPropertyValidator(metadata, context, rule, validator));

You don’t want to do that because then when you use the out-of-the-box email validator, it’ll end up with your custom logic.

This is really painful to debug. Sometimes this is actually what you want - reuse of client (or server) logic but not both… but generally, you want a unique custom validator from end to end.

Recommendation: Don’t implement the standard interfaces unless you plan on replacing the standard behavior across the board and not using the out-of-the-box validator of that type. (That is, if you want to totally replace everything about client and server validation for IEmailValidator, then implement IEmailValidator on your server-side validator and don’t use the standard email validator anymore.)


Say you don’t want to implement a whole custom validator and would rather use the FluentValidation lambda syntax like this:

RuleFor(m => m.Age).Must(val => val - 2 > 6);

That lambda will only run on the server. Maybe that’s OK for a quick one-off, but if you want corresponding client-side validation you actually need to implement the end-to-end solution. There’s no automated functionality for translating the lambdas into script, nor is there an automatic facility for converting this into an AJAX remote validation call. (But that’s a pretty neat idea.)


If you dive deep enough, you’ll notice that certain server-side FluentValidation validators boil down to the same ModelClientValidationRule values when it’s time to emit the data-val- attributes. For example, the FluentValidation GreaterThanValidator and LessThanValidator both end up with ModelClientValidationRule values that say the validation type is “range” (not “min” or “max” as you might expect). What that means is that on the server side, this looks sweet and works:

RuleFor(m => m.Age).GreaterThan(5).LessThan(10);

But when you try to render the corresponding textbox, you’re going to get an exception telling you that you can’t put “range” validation twice on the same field. Problem.

What it means is that when you create custom validations, you have to be mindful of which jQuery rule you’re going to associate it with on the client. It also means you have to be creative, sometimes, about how you set up MVC validations on the server to make sure you don’t have problems when it comes to adapting things to the client-side. Stuff that works on the server won’t necessarily work in MVC.
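For this particular collision, one workaround (assuming an integer Age where the two bounds collapse cleanly) is to express both bounds as a single validator so only one “range” rule gets emitted:

```csharp
// One validator means one "range" client rule - no duplicate-attribute exception.
RuleFor(m => m.Age).InclusiveBetween(6, 9);
```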

For reference, here’s a SUPER ABBREVIATED SEQUENCE DIAGRAM of how it comes together. Again, there are several steps omitted here, but this should at least jog your memory and help you visualize all the stuff I mentioned above.