process, security

I feel like I should write a book. It’d be epic like Moby Dick but would start with, “Call me Yossarian.” This is going to sound confusing and comedic, straight out of Catch-22, but I assure you it’s entirely true. It is happening to me right now.

Serenity Now!

We write a lot of documentation to a wiki at work. I’ve got permissions on it to add pages, rename pages, move pages… but not delete pages. If I want to delete a page, I have to find someone who has delete rights and ask them to do that, which doesn’t make sense because I’m a pretty heavy contributor to the wiki.

I decided to seek out delete permissions for myself.

The wiki is managed by an overseas team. The previous process to get permissions to the wiki was to send an email to their infrastructure distribution list with your request and the issue would be dealt with in a day or two. It was fairly effective from a customer perspective.

The new process to get wiki permissions is to file a ticket in this custom-built ticketing system they’ve adopted. You find this out by sending an email to the infrastructure distribution list and reading the “out of office” autoresponder thing that comes back.

You can’t file a ticket unless you have an account on the ticketing system. That’s… well, not unheard of, but a bit inconvenient. Fine, I need to create an account.

In order to get an account on the ticketing system, you need to file a ticket. No joke. As one colleague put it, this is sort of like a secret society - you can’t get in unless you already know someone who’s in and will “vouch for you” by creating a ticket on your behalf.

Three working days later, I have an account, so I log in. The ticketing system is a totally custom beast, initially written in 2001 and not really updated since 2008. It looks and behaves exactly like you think - it’s very bare-bones, there’s no significant help, and it’s entirely unintuitive to people who don’t already use it every day.

Seeking out help, I notice in the autoresponder email there’s a wiki link to a guide on how to file tickets. Cool. I visit that link and… I don’t have permission to see the page.

In order to see the guide on how to file tickets, I have to file a ticket. Of course, I’m not sure what kind of ticket to file, since I can’t see the guide.

I search around to see if there’s any hint pointing me to which ticket type to file, since they all have great titles like “DQT No TU Child Case.” Totally obvious, right? I end up stumbling onto a screenshot someone posted in a comment section on an unrelated wiki page, referring me to the type of case I need to file.

I don’t see the right case type on the list of available tickets I can file. Turns out I don’t have ticket system permissions to file that kind of ticket.

I have now opened a ticket so I can get permissions to open a ticket to get permissions to delete pages from the wiki. This is after, of course, the initial “secret society” ticket was filed to get me an account so I can file tickets.

humor, rest

I was browsing around the other day and found your mom’s REST API. Naturally, I pulled my client out and got to work.

An abbreviated session follows:

GET /your/mom HTTP/1.1

HTTP/1.1 200 OK

PUT /your/mom HTTP/1.1
":)"

HTTP/1.1 402 Payment Required

POST /your/mom HTTP/1.1
"$"

HTTP/1.1 411 Length Required

PUT /your/mom HTTP/1.1
":)"

HTTP/1.1 406 Not Acceptable
HTTP/1.1 413 Request Entity Too Large
HTTP/1.1 200 OK
.
.
.
HTTP/1.1 200 OK
.
.
HTTP/1.1 200 OK
.
HTTP/1.1 200 OK
HTTP/1.1 200 OK
HTTP/1.1 200 OK
HTTP/1.1 502 Bad Gateway
HTTP/1.1 503 Service Unavailable

I think I need to get a new API key before she gives me the ol’ 410. :)

build

In making a package similar to the NuGet.Server package, I needed to get, from one project in the solution, the list of build output assemblies for the other projects in the same solution.

That is, in a solution like:

  • MySolution.sln
    • Server.csproj
    • Project1.csproj
    • Project2.csproj

…from the Server.csproj I wanted to get the build output assembly paths for the Project1.csproj and Project2.csproj projects.

The technically correct solution is sort of complicated and Sayed Ibrahim Hashimi has documented it on his blog. The problem with the technically correct solution is that it requires you to invoke a build on the target projects.

That build step was causing no end of trouble. Projects were re-running AfterBuild actions, code was getting regenerated at inopportune times, cats and dogs living together - mass hysteria.

I came up with a different way to get the build outputs that is less technically correct but gets the job done and doesn’t require you to invoke a build on the target projects.

My solution involves loading the projects in an evaluation context using a custom inline MSBuild task. Below is a snippet showing the task in action. Note that the snippet is in the context of a .targets file that would be added to your .csproj by a NuGet package, so you’ll see environment variables used that will only be present in a full build setting:

<Project DefaultTargets="EnumerateOutput" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" >
  <ItemGroup>
    <!-- Include all projects in the solution EXCEPT this one -->
    <ProjectToScan Include="$(SolutionDir)/**/*.csproj" Exclude="$(SolutionDir)/**/$(ProjectName).csproj" />
  </ItemGroup>
  <Target Name="EnumerateOutput" AfterTargets="Build">
    <!-- Call the custom task to get the output -->
    <GetBuildOutput ProjectFile="%(ProjectToScan.FullPath)">
      <Output ItemName="ProjectToScanOutput" TaskParameter="BuildOutput"/>
    </GetBuildOutput>

    <Message Text="%(ProjectToScanOutput.Identity)" />
  </Target>

  <UsingTask TaskName="GetBuildOutput" TaskFactory="CodeTaskFactory" AssemblyFile="$(MSBuildToolsPath)\Microsoft.Build.Tasks.v12.0.dll" >
    <ParameterGroup>
      <ProjectFile ParameterType="System.String" Required="true"/>
      <BuildOutput ParameterType="Microsoft.Build.Framework.ITaskItem[]" Output="true"/>
    </ParameterGroup>
    <Task>
      <Reference Include="System.Xml"/>
      <Reference Include="Microsoft.Build"/>
      <Using Namespace="Microsoft.Build.Evaluation"/>
      <Using Namespace="Microsoft.Build.Utilities"/>
      <Code Type="Fragment" Language="cs">
      <![CDATA[
        // The dollar-properties here get expanded to be the
        // actual values that are present during build.
        var properties = new Dictionary<string, string>
        {
          { "Configuration", "$(Configuration)" },
          { "Platform", "$(Platform)" }
        };

        // Load the project into a separate project collection so
        // we don't get a redundant-project-load error.
        var collection = new ProjectCollection(properties);
        var project = collection.LoadProject(ProjectFile);

        // Dollar sign can't easily be escaped here so we use the char code.
        var expanded = project.ExpandString(((char)36) + @"(MSBuildProjectDirectory)\" + ((char)36) + "(OutputPath)" + ((char)36) + "(AssemblyName).dll");
        BuildOutput = new TaskItem[] { new TaskItem(expanded) };
      ]]>
      </Code>
    </Task>
  </UsingTask>
</Project>

How it works:

  1. Create a dictionary of properties you want to flow from the current build environment into the target project. In this case, the Configuration and Platform properties are what affects the build output location, so I pass those. The $(Configuration) and $(Platform) in the code snippet will actually be expanded on the fly to be the real values from the current build environment.
  2. Create a tiny MSBuild project collection (similar to the way MSBuild does so for a solution). Pass the set of properties into the collection so they can be used by your project. You need this collection so the project doesn’t get loaded in the context of the solution. You get an error saying the project is already loaded if you don’t do this.
  3. Load the project into your collection. When you do, properties will be evaluated using the global environment - that dictionary provided.
  4. Use the ExpandString method on the project to expand $(MSBuildProjectDirectory)\$(OutputPath)$(AssemblyName).dll into whatever it will be in context of the project with the given environment. This will end up being the absolute path to the assembly being generated for the given configuration and platform. Note the use of (char)36 there - I spent some time trying to figure out how to escape $ but never could, so rather than fight it… there you go.
  5. Return the information from the expansion to the caller.

That step with ExpandString is where the less technically correct bit comes into play. For example, if the project generates an .exe file rather than a .dll, I don’t account for that. I could enhance the task to handle it, but… well, this covers the majority case for me.

I considered returning a property rather than an item, but I have a need to grab a bunch of build output items and batch/loop over them, so items worked better in that respect.
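As an illustration of that batching, a follow-on target can run once per enumerated output item using the %(...) syntax. This is just a hedged sketch building on the snippet above - the Copy destination here is purely hypothetical:

```xml
<Target Name="CopyOutputs" AfterTargets="EnumerateOutput">
  <!-- %(...) batching runs the Copy once per enumerated output assembly -->
  <Copy SourceFiles="%(ProjectToScanOutput.Identity)"
        DestinationFolder="$(OutDir)collected" />
</Target>
```

Returning a property instead would have flattened everything into one string, losing the ability to batch like this.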

There’s also probably a real way of escaping $ that just didn’t pop up in my searches. Leave a comment if you know; I’d be happy to update.

sublime, xml, gists

I already have my build scripts tidy up my XML configuration files, but sometimes I’m working on something outside the build and need to tidy up my XML.

There are a bunch of packages that have HTML linting and tidy, but there isn’t really a great XML tidy package… and it turns out you don’t really need one.

  1. Get a copy of Tidy and make sure it’s in your path.
  2. Install the Sublime package “External Command” so you can pipe text in the editor through external commands.
  3. In Sublime, go to Preferences -> Browse Packages... and open the “User” folder.
  4. Create a new file in there called ExternalCommand.sublime-commands. (The name isn’t actually important as long as it ends in .sublime-commands but I find it’s easier to remember what the file is for with this name.)

Add the following to the ExternalCommand.sublime-commands file:

[
    {
        "caption": "XML: Tidy",
        "command": "filter_through_command",
        "args": { "cmdline": "tidy --input-xml yes --output-xml yes --preserve-entities yes --indent yes --indent-spaces 4 --input-encoding utf8 --indent-attributes yes --wrap 0 --newline lf" }
    }
]

Sublime should immediately pick this up, but sometimes it requires a restart.

Now when you’re working in XML and want to tidy it up, go to the command palette (Ctrl+Shift+P) and run the XML: Tidy command. It’ll be all nicely cleaned up!

The options I put here match the ones I use in my build scripts. If you want to customize how the XML looks, you can change up the command line in the ExternalCommand.sublime-commands file using the options available to Tidy.

aspnet, rest, json

Here’s the situation:

You have a custom object type that you want to use in your Web API application. You want full support for it just like a .NET primitive:

  • It should be usable as a route value like api/operation/{customobject}.
  • You should be able to GET the object and it should serialize the same as it does in the route.
  • You should be able to POST an object as the value for a property on another object and that should work.
  • It should show up correctly in ApiExplorer generated documentation like Swashbuckle/Swagger.

This isn’t as easy as you might think.

The Demo Object

Here’s a simple demo object that I’ll use to walk you through the process. It has some custom serialization/deserialization logic.

public class MyCustomObject
{
  public int First { get; set; }

  public int Second { get; set; }

  public string Encode()
  {
    return String.Format(
        CultureInfo.InvariantCulture,
        "{0}|{1}",
        this.First,
        this.Second);
  }

  public static MyCustomObject Decode(string encoded)
  {
    var parts = encoded.Split('|');
    return new MyCustomObject
    {
      First = int.Parse(parts[0]),
      Second = int.Parse(parts[1])
    };
  }
}

We want the object to serialize as a pipe-delimited string rather than a full object representation:

var obj = new MyCustomObject
{
  First = 12,
  Second = 345
};

// This will be "12|345"
var encoded = obj.Encode();

// This will decode back into the original object
var decoded = MyCustomObject.Decode(encoded);

Here we go.

Outbound Route Value: IConvertible

Say you want to generate a link to a route that takes your custom object as a parameter. Your API controller might do something like this:

// For a route like this:
// [Route("api/value/{value}", Name = "route-name")]
// you generate a link like this:
var url = this.Url.Link("route-name", new { value = myCustomObject });

By default, you’ll get a link that looks like this, which isn’t what you want: http://server/api/value/MyNamespace.MyCustomObject

We can fix that. UrlHelper uses, in this order:

  • IConvertible.ToString()
  • IFormattable.ToString()
  • object.ToString()

So, if you implement one of these things, you can control how the object appears in the URL. I like IConvertible because IFormattable runs into other things like String.Format calls, where you might not want the object serialized the same.

Let’s add IConvertible to the object. You really only need to handle the ToString method; everything else, just bail with InvalidCastException. You also have to deal with the GetTypeCode implementation and a simple ToType implementation.

using System;
using System.Globalization;

namespace SerializationDemo
{
  public class MyCustomObject : IConvertible
  {
    public int First { get; set; }

    public int Second { get; set; }

    public static MyCustomObject Decode(string encoded)
    {
      var parts = encoded.Split('|');
      return new MyCustomObject
      {
        First = int.Parse(parts[0]),
        Second = int.Parse(parts[1])
      };
    }

    public string Encode()
    {
      return String.Format(
        CultureInfo.InvariantCulture,
        "{0}|{1}",
        this.First,
        this.Second);
    }

    public TypeCode GetTypeCode()
    {
      return TypeCode.Object;
    }

    public override string ToString()
    {
      return this.ToString(CultureInfo.CurrentCulture);
    }

    public string ToString(IFormatProvider provider)
    {
      return String.Format(provider, "<{0}, {1}>", this.First, this.Second);
    }

    string IConvertible.ToString(IFormatProvider provider)
    {
      return this.Encode();
    }

    public object ToType(Type conversionType, IFormatProvider provider)
    {
      return Convert.ChangeType(this, conversionType, provider);
    }

    /* ToBoolean, ToByte, ToChar, ToDateTime,
       ToDecimal, ToDouble, ToInt16, ToInt32,
       ToInt64, ToSByte, ToSingle, ToUInt16,
       ToUInt32, ToUInt64
       all throw InvalidCastException */
  }
}

There are a couple of interesting things to note here:

  • I explicitly implemented IConvertible.ToString. I did that so the value you’ll get in a String.Format call or a standard ToString call will be different than the encoded value. To get the encoded value, you have to explicitly cast the object to IConvertible. This allows you to differentiate where the encoded value shows up.
  • ToType pipes to Convert.ChangeType. Convert.ChangeType uses IConvertible where possible, so you kinda get this for free. Another reason IConvertible is better here than IFormattable.
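To make that distinction concrete, here’s a little sketch showing how the display value and the encoded value differ depending on how you call ToString. This is a condensed version of the class above, keeping only the members that matter for the demo (the elided IConvertible members just throw):

```csharp
using System;
using System.Globalization;

var obj = new MyCustomObject { First = 12, Second = 345 };

// The public override is what String.Format and debuggers see.
Console.WriteLine(obj.ToString());                     // <12, 345>

// Casting to IConvertible selects the explicit implementation,
// which is the encoded value UrlHelper puts in the route.
Console.WriteLine(((IConvertible)obj).ToString(null)); // 12|345

public class MyCustomObject : IConvertible
{
  public int First { get; set; }
  public int Second { get; set; }

  public string Encode() =>
    String.Format(CultureInfo.InvariantCulture, "{0}|{1}", this.First, this.Second);

  public TypeCode GetTypeCode() => TypeCode.Object;

  public override string ToString() => this.ToString(CultureInfo.CurrentCulture);

  public string ToString(IFormatProvider provider) =>
    String.Format(provider, "<{0}, {1}>", this.First, this.Second);

  // The explicit implementation returns the route-friendly encoding.
  string IConvertible.ToString(IFormatProvider provider) => this.Encode();

  public object ToType(Type conversionType, IFormatProvider provider) =>
    Convert.ChangeType(this, conversionType, provider);

  // The remaining IConvertible members just refuse the cast.
  bool IConvertible.ToBoolean(IFormatProvider p) => throw new InvalidCastException();
  byte IConvertible.ToByte(IFormatProvider p) => throw new InvalidCastException();
  char IConvertible.ToChar(IFormatProvider p) => throw new InvalidCastException();
  DateTime IConvertible.ToDateTime(IFormatProvider p) => throw new InvalidCastException();
  decimal IConvertible.ToDecimal(IFormatProvider p) => throw new InvalidCastException();
  double IConvertible.ToDouble(IFormatProvider p) => throw new InvalidCastException();
  short IConvertible.ToInt16(IFormatProvider p) => throw new InvalidCastException();
  int IConvertible.ToInt32(IFormatProvider p) => throw new InvalidCastException();
  long IConvertible.ToInt64(IFormatProvider p) => throw new InvalidCastException();
  sbyte IConvertible.ToSByte(IFormatProvider p) => throw new InvalidCastException();
  float IConvertible.ToSingle(IFormatProvider p) => throw new InvalidCastException();
  ushort IConvertible.ToUInt16(IFormatProvider p) => throw new InvalidCastException();
  uint IConvertible.ToUInt32(IFormatProvider p) => throw new InvalidCastException();
  ulong IConvertible.ToUInt64(IFormatProvider p) => throw new InvalidCastException();
}
```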

Inbound Route Value, Action Parameter, and ApiExplorer: TypeConverter

When ApiExplorer is generating documentation, it needs to know whether the action parameter can be converted into a string (so it can go in the URL). It does this by getting the TypeConverter for the object and querying CanConvertFrom(typeof(string)). If the answer is false, ApiExplorer assumes the parameter has to be in the body of a request - which wrecks any generated documentation because that thing should be in the route.

To satisfy ApiExplorer, you need to implement a TypeConverter.

When your custom object is used as a route value coming in or otherwise as an action parameter, you also need to be able to model bind the encoded value to your custom object.

There is a built-in TypeConverterModelBinder that uses TypeConverter, so implementing the TypeConverter will address model binding as well.

Here’s a simple TypeConverter for the custom object:

using System;
using System.ComponentModel;
using System.Globalization;

namespace SerializationDemo
{
  public class MyCustomObjectTypeConverter : TypeConverter
  {
    public override bool CanConvertFrom(
        ITypeDescriptorContext context,
        Type sourceType)
    {
      return sourceType == typeof(string) ||
             base.CanConvertFrom(context, sourceType);
    }

    public override bool CanConvertTo(
        ITypeDescriptorContext context,
        Type destinationType)
    {
      return destinationType == typeof(string) ||
             base.CanConvertTo(context, destinationType);
    }

    public override object ConvertFrom(
        ITypeDescriptorContext context,
        CultureInfo culture,
        object value)
    {
      var encoded = value as String;
      if (encoded != null)
      {
        return MyCustomObject.Decode(encoded);
      }

      return base.ConvertFrom(context, culture, value);
    }

    public override object ConvertTo(
        ITypeDescriptorContext context,
        CultureInfo culture,
        object value,
        Type destinationType)
    {
      var cast = value as MyCustomObject;
      if (destinationType == typeof(string) && cast != null)
      {
        return cast.Encode();
      }

      return base.ConvertTo(context, culture, value, destinationType);
    }
  }
}

And, of course, add the [TypeConverter] attribute to the custom object.

[TypeConverter(typeof(MyCustomObjectTypeConverter))]
public class MyCustomObject : IConvertible
{
  //...
}
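To sanity-check the wiring, here’s a hedged sketch showing the two checks that matter: the CanConvertFrom question ApiExplorer asks, and the string-to-object conversion the TypeConverterModelBinder performs. The types are condensed versions of the ones above, trimmed to the relevant members:

```csharp
using System;
using System.ComponentModel;
using System.Globalization;

// This is the question ApiExplorer asks to decide whether the
// parameter can live in the route instead of the request body.
var converter = TypeDescriptor.GetConverter(typeof(MyCustomObject));
Console.WriteLine(converter.CanConvertFrom(typeof(string))); // True

// And this is essentially what model binding does with the route value.
var bound = (MyCustomObject)converter.ConvertFromInvariantString("12|345");
Console.WriteLine(bound.First);  // 12
Console.WriteLine(bound.Second); // 345

[TypeConverter(typeof(MyCustomObjectTypeConverter))]
public class MyCustomObject
{
  public int First { get; set; }
  public int Second { get; set; }

  public static MyCustomObject Decode(string encoded)
  {
    var parts = encoded.Split('|');
    return new MyCustomObject
    {
      First = int.Parse(parts[0]),
      Second = int.Parse(parts[1])
    };
  }
}

public class MyCustomObjectTypeConverter : TypeConverter
{
  public override bool CanConvertFrom(ITypeDescriptorContext context, Type sourceType) =>
    sourceType == typeof(string) || base.CanConvertFrom(context, sourceType);

  public override object ConvertFrom(ITypeDescriptorContext context, CultureInfo culture, object value) =>
    value is string encoded
      ? MyCustomObject.Decode(encoded)
      : base.ConvertFrom(context, culture, value);
}
```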

Setting Swagger/Swashbuckle Doc

Despite all of this, generated Swagger/Swashbuckle documentation will still show an expanded representation of your object, which is inconsistent with how a user will actually work with it from a client perspective.

At application startup, you need to register a type mapping with the Swashbuckle SwaggerSpecConfig.Customize method to map your custom type to a string.

SwaggerSpecConfig.Customize(c =>
{
  c.MapType<MyCustomObject>(() =>
      new DataType { Type = "string", Format = null });
});

Even More Control: JsonConverter

Newtonsoft.Json should handle converting your type automatically based on the IConvertible and TypeConverter implementations.

However, if you’re doing something extra fancy like implementing a custom generic object, you may need to implement a JsonConverter for your object.

There is some great doc on the Newtonsoft.Json site so I won’t go through that here.

Using Your Custom Object

With the IConvertible and TypeConverter implementations, you should be able to work with your object like any other primitive and have it properly appear in route URLs, model bind, and so on.

// You can define a controller action that automatically
// binds the string to the custom object. You can also
// generate URLs that will have the encoded value in them.
[Route("api/increment/{value}", Name = "increment-values")]
public MyCustomObject IncrementValues(MyCustomObject value)
{
  // Create a URL like this...
  var url = this.Url.Link("increment-values", new { value = value });

  // Or work with an automatic model-bound object coming in...
  return new MyCustomObject
  {
    First = value.First + 1,
    Second = value.Second + 1
  };
}

Bonus: Using Thread Principal During Serialization

If, for whatever reason, your custom object needs the user’s principal on the thread during serialization, you’re in for a surprise: While the authenticated principal is on the thread during your ApiController run, HttpServer restores the original (unauthenticated) principal before response serialization happens.

It’s recommended you use HttpRequestMessage.GetRequestContext().Principal instead of Thread.CurrentPrincipal, but that’s hard to get at by the time you’re down in type conversion, and there’s no real way to pass it around.

The way you can work around this is by implementing a custom JsonMediaTypeFormatter.

The JsonMediaTypeFormatter has a method GetPerRequestFormatterInstance that is called when serialization occurs. It does get the current request message, so you can pull the principal out then and stick it on the thread long enough for serialization to happen.

Here’s a simple implementation:

public class PrincipalAwareJsonMediaTypeFormatter : JsonMediaTypeFormatter
{
  // This is the default constructor to use when registering the formatter.
  public PrincipalAwareJsonMediaTypeFormatter()
  {
  }

  // This is the constructor to use per-request.
  public PrincipalAwareJsonMediaTypeFormatter(
    JsonMediaTypeFormatter formatter,
    IPrincipal user)
    : base(formatter)
  {
    this.User = user;
  }

  // For per-request instances, this is the authenticated principal.
  public IPrincipal User { get; private set; }

  // Here's where you create the per-user/request formatter.
  public override MediaTypeFormatter GetPerRequestFormatterInstance(
    Type type,
    HttpRequestMessage request,
    MediaTypeHeaderValue mediaType)
  {
    var requestContext = request.GetRequestContext();
    var user = requestContext == null ? null : requestContext.Principal;
    return new PrincipalAwareJsonMediaTypeFormatter(this, user);
  }

  // When you deserialize an object, throw the principal
  // on the thread first and restore the original when done.
  public override object ReadFromStream(
    Type type,
    Stream readStream,
    Encoding effectiveEncoding,
    IFormatterLogger formatterLogger)
  {
    var originalPrincipal = Thread.CurrentPrincipal;
    try
    {
      if (this.User != null)
      {
        Thread.CurrentPrincipal = this.User;
      }

      return base.ReadFromStream(type, readStream, effectiveEncoding, formatterLogger);
    }
    finally
    {
      Thread.CurrentPrincipal = originalPrincipal;
    }
  }

  // When you serialize an object, throw the principal
  // on the thread first and restore the original when done.
  public override void WriteToStream(
    Type type,
    object value,
    Stream writeStream,
    Encoding effectiveEncoding)
  {
    var originalPrincipal = Thread.CurrentPrincipal;
    try
    {
      if (this.User != null)
      {
        Thread.CurrentPrincipal = this.User;
      }

      base.WriteToStream(type, value, writeStream, effectiveEncoding);
    }
    finally
    {
      Thread.CurrentPrincipal = originalPrincipal;
    }
  }
}

You can register that at app startup with your HttpConfiguration like this:

// Copy any custom settings from the current formatter
// into a new formatter.
var formatter = new PrincipalAwareJsonMediaTypeFormatter(config.Formatters.JsonFormatter);

// Remove the old formatter, add the new one.
config.Formatters.Remove(config.Formatters.JsonFormatter);
config.Formatters.Add(formatter);

Conclusion

I have to admit, I’m a little disappointed in the different ways the same things get handled here. Why do some things allow IConvertible but others require TypeConverter? It’d be nice if it was consistent.

In any case, once you know how it works, it’s not too hard to implement. Knowing is half the battle, right?

Hopefully this helps you in your custom object creation journey!