javascript, home comments edit

I have, like, 1,000 of those little keyring cards for loyalty/rewards. You do, too. There are a ton of apps for your phone that manage them, and that's cool.

Loyalty card phone apps never work for me.

For some reason, I seem to go to all the stores where they've not updated the scanners to be able to read barcodes off a phone screen. I've tried different phones and different apps, all to no avail.

You know what always works? The card in my wallet. Which means I'm stuck carrying around these 1,000 stupid cards.

There are sites, some of them connected to the phone apps, that will let you buy a combined physical card. But I'm cheap and need to update just frequently enough that it's not worth paying the $5 each time. I used to use a free site called "JustOneClubCard" to create a combined loyalty card but that site has gone offline. I think it was purchased by one of the phone app manufacturers. (Seriously.)

So...

Enter: LoyaltyCard

I wrote my own app: LoyaltyCard. You can go there right now and make your own combined loyalty card.

You can use the app to enter up to eight bar codes and then download the combined card as a PDF to print out. Make as many as you like.

And if you want to save your card? Just bookmark the page with the codes filled in. Done. Come back and edit anytime you like.

Go make a loyalty card.

Behind the Scenes

I made the app not only for this but also as a way to play with some JavaScript libraries. The whole app runs in the client, with the exception of one tiny server-side piece that loads the high-resolution barcodes for the PDF.

You can check out the source over on GitHub.

vs comments edit

I installed Visual Studio 2015 today. I had the RC installed and updated to the RTM.

One of the minor-yet-annoying things I found about the RTM version showed up when I pinned it to my taskbar next to VS2013:

Confusing icons on the taskbar

Sigh.

Luckily it's an easy fix.

Windows 7 / Server 2008

First, unpin VS2015 from your taskbar. You'll put it back after you've fixed the icon.

Open up your Start menu and right-click on the "Visual Studio 2015" shortcut in there. On the context menu, choose "Properties." Click the "Change Icon" button.

Click the 'Change Icon' button

VS2015 actually comes with a few icons. They're not all awesome, but they're at least different than the VS2013 icon. I chose the one with the little arrow because it's, you know, upgraded from VS2013.

Pick a better icon

Click OK enough times to close all the property dialogs. You'll see the icon in the Start menu has changed. Now right-click that and pin it to the taskbar. Problem solved.

At least you can tell which is which now

Windows 8 / Server 2012

If you haven't pinned VS2015 to your taskbar yet, do that now so you can get a shortcut.

Open up the taskbar icons folder. This is at C:\Users\yourusername\AppData\Roaming\Microsoft\Internet Explorer\Quick Launch\User Pinned\TaskBar.

Copy the "Visual Studio 2015" shortcut out of that folder and onto your desktop.

Unpin VS2015 from your taskbar. The shortcut in that TaskBar folder will disappear.

Right-click on the "Visual Studio 2015" shortcut you copied to your desktop. On the context menu, choose "Properties." Click the "Change Icon" button.

Click the 'Change Icon' button

VS2015 actually comes with a few icons. They're not all awesome, but they're at least different than the VS2013 icon. I chose the one with the little arrow because it's, you know, upgraded from VS2013.

Pick a better icon

Click OK enough times to close all the property dialogs. You'll see the icon on your desktop has changed.

Right-click on the icon on your desktop and pin that one to your taskbar. A new shortcut with the correct icon will be added to that TaskBar folder and will appear on the taskbar. You can now delete the one from your desktop.

At least you can tell which is which now

gaming, xbox comments edit

I tried playing a couple of Xbox 360 Kinect games with my four-year-old daughter, Phoenix. We had less than stellar results.

The first game was "Sesame Street TV." Basically it's interactive Sesame Street. We picked it up from the library to try it and I'm glad it was free.

Problem 1: She's very small compared to me. If the Kinect sees me, it somehow stops seeing her. And vice versa - if it sees her, it stops detecting me. There seemed to be a sort of very small "magic area" in the room where it'd find both of us.

Problem 2: The interaction for that game isn't constant. It's more like: they sing a song, then you have a small bit of interaction, then they tell a story, then there's a small bit of interaction. She'll watch or she'll interact, but she loses interest in interacting once you switch to watching.

Problem 3: Slight misrepresentation of the game on the box. The concept behind the game is like you going into the TV and being on Sesame Street. There is a picture on the box to illustrate the concept. Phoenix wants that to be the reality. It is really hard to explain that the box just shows an idea of what it's like, that you don't really transfer yourself into the television.

After a bit of Sesame Street, we tried "Kinect Adventures." I did this thinking that the constant interaction would keep her engaged.

We still ran into the problem of the small "magic area" where it recognized us both, but it was compounded by a couple of new problems.

Problem 4: Many of the games aren't obvious to four-year-olds. In particular, the game where you have to walk from side to side and jump to control the raft - that was entirely unintuitive to Phoenix. She was far more concerned with whether or not the avatar on the raft actually looked like her, which then led to a half-hour diversion where we had to set up an avatar.

Problem 5: Auto jump-in/jump-out. The ability to jump in and out of the game quickly is great for folks that "get it" and when you have a properly sized room without the "magic area" where you're recognized. However, every time Phoenix accidentally stepped out of the "magic area," her avatar would disappear because it thought she was jumping out of the game, at which point I'd have to try to convince her to come back into the area - but not too close to me - so we could continue.

In the end, we decided it was a better idea to just go watch some Looney Tunes cartoons we picked up at the library. Which, now that I think about it, is sort of the opposite of what Kinect is trying to get you to do - get off the couch and be active. Hmmm.

Over the years I've posted about my home media center developments. Back in 2008 I posted a summary with links to articles, then I did another roundup in 2014.

The problem with this sort of periodic summary is that it's hard to get an accurate picture of how things are working right now. I might forget to blog it, or I'll take some notes on something I found and forget to post it, or whatever.

I was keeping my media center and home networking notes in a personal wiki on PBworks but I figured it was time to make things a bit more official.

My media center and home network documentation is now live at illigmediacenter.readthedocs.org

Diagram of my home network

This is the place I'll add notes or tips on how my media center setup works. I've got everything from the hardware I use to my process for getting video content into the system. I've got my plan and analysis for how I cut cable including cost breakdowns and options. It's all on this site.

My biggest problem in getting my media center going was that I didn't know what I didn't know. Information about all this stuff - hardware, software, how to get things done - is spread out all over the place. I never found a complete guide to help me on my way.

I hope this documentation can help you jump start your media center or improve the one you have. As things change in my system, I'll be keeping the documentation here up to date so it should always have the latest info.

home, media, music, movies comments edit

We finally did it: We cut the cable.

On Friday, we took all the cable boxes back to Comcast, cut off the cable TV and the phones, and we're down to internet service and mobile phones only.

I have to say, I know I'm only a few days into it but I haven't really noticed it. Aside from calling my various financial institutions and utilities to change my phone number with them, it's pretty status quo. We were already watching most of our stuff on demand or through online services anyway.

If you'd like to know what I did or how I did it, I documented the whole plan. I'll do a blog entry later for the official release of my media center documentation site, but you can read over there about my cable cutting plan: what we did and the equipment/services we use.

net, vs, ndepend comments edit

NDepend 6 was recently released with a ton of new features. I've been working with NDepend for quite some time (my earliest blog entry on it was for version 2.7) and every release gets better. It's been a couple of years since version 5 came out. What's new?

The first thing you notice when you start things up is the additional integrations they've added. It used to be just "install the extension for Visual Studio" but now there are icons for TFS, TeamCity, SonarQube, and Reflector integration.

NDepend 6 integrations

I'm particularly interested in the TeamCity integration because that's the build server I use. I have manually integrated it in the past using MSBuild and some manual TeamCity configuration, but with the new add-in, I can just drop NDepend on the build server and have all that work done for me. There's even a specific NDepend build step type added and the report magically shows up in the dashboard. There are some great step-by-step walkthrough videos on the NDepend site showing how to set this up.

I decided to analyze some of the new code I've been working on. It was pretty easy to get my project started. I love how NDepend helps you figure out where to go next if you haven't used it before.

NDepend beginner dialog

The report has improved by adding "how to fix" information to rule failures. One of the challenges I've had in the past is that you could see what things might have failed a rule, but you didn't really have anything clearly "actionable" you could tell folks to fix. You had to kind of "know" what a rule meant. Now there's no guesswork.

Report showing how to fix violations

One of my huge complaints with other tools (coverage, analysis) has been addressed - handling of async/await methods. A lot of what I've been working on lately has been Web API code, which is async/await from the ground up. Have you ever looked at that stuff in a decompiler like Reflector? Or a code coverage tool? I've found you either get no information on it ("Let's just omit it!"), incorrect information on it ("You don't have full coverage because you didn't cover all the cases in the generated state machine!"), or confusing information ("I'll show you all of the compiler generated methods that don't make sense!").

NDepend's reports, on the other hand, are very clean and complete - you don't see the compiler-generated state machine junk. Finally!

The metrics view just doubled in value by adding a second "dimension" to its display. You used to be able to just change the size of an item in the view based on a specific metric; now you can compare one metric to a second metric by adding a sort of "heat map" style coloration to it.

My favorite combination so far is to set the box size by "# IL Instructions" and set the color of the boxes by "IL Cyclomatic Complexity." It gives you a pretty good indication of things that need to be refactored - just look for the huge red boxes!

NDepend metrics view

My favorite new feature is the shareable rule files. We have a standard FxCop ruleset we use on all of our projects. We have a standard StyleCop ruleset we use on all of our projects. We can finally have a standard NDepend ruleset we use on all of our projects.

You can create a rule file with all of your analysis rules stored outside the project file and then tell projects to reference the central/common NDepend rules file.

Create a rules file

Once you have a custom rules file, you can reference it from your project. You will probably want to switch the paths in your project to be relative to the project file so it works on your machine and the build server.

Change paths to relative

With every iteration, NDepend just gets more compelling. I get so much insight from it about our code and areas we need to improve - things that are hard to see when you're neck deep in code and NuGet package references and under a deadline. You owe it to yourself to check it out.

Full disclosure: I got a free personal license from Patrick at NDepend. However, we have also purchased several licenses at work and make use of it to great benefit.

lastpass, security comments edit

I use LastPass for a lot of things including storing my personal software license files. I use the "secure note" function to save the license information and attach the license file to the secure note.

I was working on something today and trying to save a license to my machine and kept getting a dialog saying, "Error opening attachment. Error C." Nothing really specific and very confusing. I was able to save the attachment from the LastPass web site but not through the browser extension.

I ended up finding the solution in this forum post.

  1. LastPass Icon > Tools > Advanced Tools > Clear Local Cache
  2. LastPass Icon > Tools > Advanced Tools > Refresh Sites

After doing a clear and refresh, the attachment saved correctly. These are probably good steps to try whenever you get any sort of error with the LastPass browser extensions. Filed for future reference.

vs, coderush comments edit

CR_Documentor version 4.0.0 has been released to the Visual Studio Gallery and adds support for Visual Studio 2015.

Head over to the gallery to get your copy or get it through "Extensions and Updates" in the Visual Studio "Tools" menu.

Note: In VS 2015 RC you may notice that after installing the add-in the only add-in that shows up for CodeRush is CR_Documentor. I'm not sure why this is, but it seems to be fixed by clearing out the files in your loader cache in these folders:

%appdata%\CodeRush for VS .NET\1.1\Settings.xml\Loader
%appdata%\CodeRush for VS .NET\1.1\Settings.xml\_Scheme_FrictionFree\Loader

It is safe to delete these files because they will be re-created on the next restart of VS. This will get all the CodeRush features to show up again.

I filed an issue with DevExpress about this. If you are having this problem, please add a comment to that issue so they know it's not just me.

media comments edit

Back in March 2014 I started converting my DVD rips into MP4 files for use with Plex. I ran two laptops (both with 2.3GHz dual-core CPUs) 24/7 until early March 2015 when I added a third computer - an eight-core 4GHz machine.

Today I finally finished converting all of my disc-based video content to MP4.

Some quick statistics:

  • Total number of files: 4998
  • Total content runtime: 134 days, 8 hours, 56 minutes, 47 seconds
    • SD runtime: 115 days, 12 hours, 25 minutes, 17 seconds
    • HD runtime: 18 days, 20 hours, 31 minutes, 30 seconds
  • Total file size: 5182.3GB
    • SD file size: 3042.04GB
    • HD file size: 2140.26GB
  • Average MB/minute for SD content: 18.73
  • Average MB/minute for HD content: 80.72

I'm pretty pleased with how everything has come together. Seeing it all in Plex, nicely organized... it's a good feeling.

I can definitely say CPU power is important in video conversion. My laptops could convert an average SD movie in three or four hours, but an HD movie... I couldn't get one converted in a day. The eight-core behemoth can take the same SD movie and finish in an hour or less; and HD movies take about four hours - same as SD content on my laptops.

Anyway, if you're looking to convert a bunch of video, it's worth investing in some hefty CPU power. It'll save you tons of time.

Finally, as part of this, I'd like to introduce my media center documentation on ReadTheDocs.

It's a work in progress, so this is sort of a "soft launch," but I think it's fleshed out enough to be of some use. I will probably do a more dedicated blog entry for it when I've got more of it filled out.

Information about how I converted my stuff with Handbrake, including the script I used to pull the report data above, as well as the specs for my behemoth conversion/Plex server, is all over there.

process, security comments edit

I feel like I should write a book. It'd be epic like Moby Dick but would start with, "Call me Yossarian." This is going to sound confusing and comedic, straight out of Catch-22, but I assure you it's entirely true. It is happening to me right now.

Serenity Now!

We write a lot of documentation to a wiki at work. I've got permissions on it to add pages, rename pages, move pages... but not delete pages. If I want to delete a page, I have to find someone who has delete rights and ask them to do that, which doesn't make sense because I'm a pretty heavy contributor to the wiki.

I decided to seek out delete permissions for myself.

The wiki is managed by an overseas team. The previous process to get permissions to the wiki was to send an email to their infrastructure distribution list with your request and the issue would be dealt with in a day or two. It was fairly effective from a customer perspective.

The new process to get wiki permissions is to file a ticket in this custom-built ticketing system they've adopted. You find this out by sending an email to the infrastructure distribution list and reading the "out of office" autoresponder thing that comes back.

You can't file a ticket unless you have an account on the ticketing system. That's... well, not unheard of, but a bit inconvenient. Fine, I need to create an account.

In order to get an account on the ticketing system, you need to file a ticket. No joke. As one colleague put it, this is sort of like a secret society - you can't get in unless you already know someone who's in and will "vouch for you" by creating a ticket on your behalf.

Three working days later, I have an account so I log in. The ticketing system is a totally custom beast that was initially written starting in 2001 and hasn't really been updated since 2008. It looks and behaves exactly like you think - it's very bare-bones, there's no significant help, and it's entirely unintuitive to people who don't already use it every day.

Seeking out help, I notice in the autoresponder email there's a wiki link to a guide on how to file tickets. Cool. I visit that link and... I don't have permissions to see the wiki link.

In order to see the guide on how to file tickets, I have to file a ticket. Of course, I'm not sure what kind of ticket to file, since I can't see the guide.

I search around to see if there's any hint pointing me to which ticket type to file since they all have great titles like "DQT No TU Child Case." Totally obvious, right? I end up stumbling onto a screen shot someone has taken and posted to a comment section on an unrelated wiki page referring me to the type of case I need to file.

I don't see the right case type on the list of available tickets I can file. Turns out I don't have ticket system permissions to file that kind of ticket.

I have now opened a ticket so I can get permissions to open a ticket to get permissions to delete pages from the wiki. This is after, of course, the initial "secret society" ticket was filed to get me an account so I can file tickets.

humor, rest comments edit

I was browsing around the other day and found your mom's REST API. Naturally, I pulled my client out and got to work.

An abbreviated session follows:

GET /your/mom HTTP/1.1

HTTP/1.1 200 OK

PUT /your/mom HTTP/1.1
":)"

HTTP/1.1 402 Payment Required

POST /your/mom HTTP/1.1
"$"

HTTP/1.1 411 Length Required

PUT /your/mom HTTP/1.1
":)"

HTTP/1.1 406 Not Acceptable
HTTP/1.1 413 Request Entity Too Large
HTTP/1.1 200 OK
.
.
.
HTTP/1.1 200 OK
.
.
HTTP/1.1 200 OK
.
HTTP/1.1 200 OK
HTTP/1.1 200 OK
HTTP/1.1 200 OK
HTTP/1.1 502 Bad Gateway
HTTP/1.1 503 Service Unavailable

I think I need to get a new API key before she gives me the ol' 410. :)

build comments edit

In making a package similar to the NuGet.Server package, I had a need to, from one project in the solution, get the list of build output assemblies from other projects in the same solution.

That is, in a solution like:

  • MySolution.sln
    • Server.csproj
    • Project1.csproj
    • Project2.csproj

...from the Server.csproj I wanted to get the build output assembly paths for the Project1.csproj and Project2.csproj projects.

The technically correct solution is sort of complicated and Sayed Ibrahim Hashimi has documented it on his blog. The problem with the technically correct solution is that it requires you to invoke a build on the target projects.

That build step was causing no end of trouble. Projects were re-running AfterBuild actions, code was getting regenerated at inopportune times, cats and dogs living together - mass hysteria.

I came up with a different way to get the build outputs that is less technically correct but gets the job done and doesn't require you to invoke a build on the target projects.

My solution involves loading the projects in an evaluation context using a custom inline MSBuild task. Below is a snippet showing the task in action. Note that the snippet is in the context of a .targets file that would be added to your .csproj by a NuGet package, so you'll see environment variables used that will only be present in a full build setting:

<Project DefaultTargets="EnumerateOutput" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" >
  <ItemGroup>
    <!-- Include all projects in the solution EXCEPT this one -->
    <ProjectToScan Include="$(SolutionDir)/**/*.csproj" Exclude="$(SolutionDir)/**/$(ProjectName).csproj" />
  </ItemGroup>
  <Target Name="EnumerateOutput" AfterTargets="Build">
    <!-- Call the custom task to get the output -->
    <GetBuildOutput ProjectFile="%(ProjectToScan.FullPath)">
      <Output ItemName="ProjectToScanOutput" TaskParameter="BuildOutput"/>
    </GetBuildOutput>

    <Message Text="%(ProjectToScanOutput.Identity)" />
  </Target>

  <UsingTask TaskName="GetBuildOutput" TaskFactory="CodeTaskFactory" AssemblyFile="$(MSBuildToolsPath)\Microsoft.Build.Tasks.v12.0.dll" >
    <ParameterGroup>
      <ProjectFile ParameterType="System.String" Required="true"/>
      <BuildOutput ParameterType="Microsoft.Build.Framework.ITaskItem[]" Output="true"/>
    </ParameterGroup>
    <Task>
      <Reference Include="System.Xml"/>
      <Reference Include="Microsoft.Build"/>
      <Using Namespace="Microsoft.Build.Evaluation"/>
      <Using Namespace="Microsoft.Build.Utilities"/>
      <Code Type="Fragment" Language="cs">
      <![CDATA[
        // The dollar-properties here get expanded to be the
        // actual values that are present during build.
        var properties = new Dictionary<string, string>
        {
          { "Configuration", "$(Configuration)" },
          { "Platform", "$(Platform)" }
        };

        // Load the project into a separate project collection so
        // we don't get a redundant-project-load error.
        var collection = new ProjectCollection(properties);
        var project = collection.LoadProject(ProjectFile);

        // Dollar sign can't easily be escaped here so we use the char code.
        var expanded = project.ExpandString(((char)36) + @"(MSBuildProjectDirectory)\" + ((char)36) + "(OutputPath)" + ((char)36) + "(AssemblyName).dll");
        BuildOutput = new TaskItem[] { new TaskItem(expanded) };
      ]]>
      </Code>
    </Task>
  </UsingTask>
</Project>

How it works:

  1. Create a dictionary of properties you want to flow from the current build environment into the target project. In this case, the Configuration and Platform properties are what affects the build output location, so I pass those. The $(Configuration) and $(Platform) in the code snippet will actually be expanded on the fly to be the real values from the current build environment.
  2. Create a tiny MSBuild project collection (similar to the way MSBuild does so for a solution). Pass the set of properties into the collection so they can be used by your project. You need this collection so the project doesn't get loaded in the context of the solution. You get an error saying the project is already loaded if you don't do this.
  3. Load the project into your collection. When you do, properties will be evaluated using the global environment - that dictionary provided.
  4. Use the ExpandString method on the project to expand $(MSBuildProjectDirectory)\$(OutputPath)$(AssemblyName).dll into whatever it will be in context of the project with the given environment. This will end up being the absolute path to the assembly being generated for the given configuration and platform. Note the use of (char)36 there - I spent some time trying to figure out how to escape $ but never could, so rather than fight it... there you go.
  5. Return the information from the expansion to the caller.

That step with ExpandString is where the less technically correct bit comes into play. For example, if the project generates an .exe file rather than a .dll, I don't account for that. I could enhance it to accommodate that, but... well, this covers the majority case for me.
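
If you did need to handle the .exe case, here's a minimal sketch of how the inline task body could check the project's OutputType property to pick the right extension. This is just an idea - I haven't battle-tested it:

// Hypothetical tweak to the inline task body above: pick the output extension
// based on the project's OutputType instead of always assuming .dll.
var outputType = project.GetPropertyValue("OutputType");
var extension =
  string.Equals(outputType, "Exe", StringComparison.OrdinalIgnoreCase) ||
  string.Equals(outputType, "WinExe", StringComparison.OrdinalIgnoreCase)
    ? ".exe"
    : ".dll";
var expanded = project.ExpandString(
  ((char)36) + @"(MSBuildProjectDirectory)\" +
  ((char)36) + "(OutputPath)" +
  ((char)36) + "(AssemblyName)" + extension);
BuildOutput = new TaskItem[] { new TaskItem(expanded) };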

I considered returning a property rather than an item, but I have a need to grab a bunch of build output items and batch/loop over them, so items worked better in that respect.

There's also probably a real way of escaping $ that just didn't pop up in my searches. Leave a comment if you know; I'd be happy to update.

sublime, xml comments edit

I already have my build scripts tidy up my XML configuration files but sometimes I'm working on something outside the build and need to tidy up my XML.

There are a bunch of packages that have HTML linting and tidy, but there isn't really a great XML tidy package... and it turns out you don't really need one.

  1. Get a copy of Tidy and make sure it's in your path.
  2. Install the Sublime package "External Command" so you can pipe text in the editor through external commands.
  3. In Sublime, go to Preferences -> Browse Packages... and open the "User" folder.
  4. Create a new file in there called ExternalCommand.sublime-commands. (The name isn't actually important as long as it ends in .sublime-commands but I find it's easier to remember what the file is for with this name.)

Add the following to the ExternalCommand.sublime-commands file:

[
    {
        "caption": "XML: Tidy",
        "command": "filter_through_command",
        "args": { "cmdline": "tidy --input-xml yes --output-xml yes --preserve-entities yes --indent yes --indent-spaces 4 --input-encoding utf8 --indent-attributes yes --wrap 0 --newline lf" }
    }
]

Sublime should immediately pick this up, but sometimes it requires a restart.

Now when you're working in XML and want to tidy it up, go to the command palette (Ctrl+Shift+P) and run the XML: Tidy command. It'll be all nicely cleaned up!

The options I put here match the ones I use in my build scripts. If you want to customize how the XML looks, you can change up the command line in the ExternalCommand.sublime-commands file using the options available to Tidy.

aspnet, rest, json comments edit

Here's the situation:

You have a custom object type that you want to use in your Web API application. You want full support for it just like a .NET primitive:

  • It should be usable as a route value like api/operation/{customobject}.
  • You should be able to GET the object and it should serialize the same as it does in the route.
  • You should be able to POST an object as the value for a property on another object and that should work.
  • It should show up correctly in ApiExplorer generated documentation like Swashbuckle/Swagger.

This isn't as easy as you might think.

The Demo Object

Here's a simple demo object that I'll use to walk you through the process. It has some custom serialization/deserialization logic.

public class MyCustomObject
{
  public int First { get; set; }

  public int Second { get; set; }

  public string Encode()
  {
    return String.Format(
        CultureInfo.InvariantCulture,
        "{0}|{1}",
        this.First,
        this.Second);
  }

  public static MyCustomObject Decode(string encoded)
  {
    var parts = encoded.Split('|');
    return new MyCustomObject
    {
      First = int.Parse(parts[0]),
      Second = int.Parse(parts[1])
    };
  }
}

We want the object to serialize as a pipe-delimited string rather than a full object representation:

var obj = new MyCustomObject
{
  First = 12,
  Second = 345
};

// This will be "12|345"
var encoded = obj.Encode();

// This will decode back into the original object
var decoded = MyCustomObject.Decode(encoded);

Here we go.

Outbound Route Value: IConvertible

Say you want to generate a link to a route that takes your custom object as a parameter. Your API controller might do something like this:

// For a route like this:
// [Route("api/value/{value}", Name = "route-name")]
// you generate a link like this:
var url = this.Url.Link("route-name", new { value = myCustomObject });

By default, you'll get a link that looks like this, which isn't what you want: http://server/api/value/MyNamespace.MyCustomObject

We can fix that. UrlHelper uses, in this order:

  • IConvertible.ToString()
  • IFormattable.ToString()
  • object.ToString()

So, if you implement one of these things, you can control how the object appears in the URL. I like IConvertible because IFormattable runs into other things like String.Format calls, where you might not want the object serialized the same.

Let's add IConvertible to the object. You really only need to handle the ToString method; everything else, just bail with InvalidCastException. You also have to deal with the GetTypeCode implementation and a simple ToType implementation.

using System;
using System.Globalization;

namespace SerializationDemo
{
  public class MyCustomObject : IConvertible
  {
    public int First { get; set; }

    public int Second { get; set; }

    public static MyCustomObject Decode(string encoded)
    {
      var parts = encoded.Split('|');
      return new MyCustomObject
      {
        First = int.Parse(parts[0]),
        Second = int.Parse(parts[1])
      };
    }

    public string Encode()
    {
      return String.Format(
        CultureInfo.InvariantCulture,
        "{0}|{1}",
        this.First,
        this.Second);
    }

    public TypeCode GetTypeCode()
    {
      return TypeCode.Object;
    }

    public override string ToString()
    {
      return this.ToString(CultureInfo.CurrentCulture);
    }

    public string ToString(IFormatProvider provider)
    {
      return String.Format(provider, "<{0}, {1}>", this.First, this.Second);
    }

    string IConvertible.ToString(IFormatProvider provider)
    {
      return this.Encode();
    }

    public object ToType(Type conversionType, IFormatProvider provider)
    {
      return Convert.ChangeType(this, conversionType, provider);
    }

    /* ToBoolean, ToByte, ToChar, ToDateTime,
       ToDecimal, ToDouble, ToInt16, ToInt32,
       ToInt64, ToSByte, ToSingle, ToUInt16,
       ToUInt32, ToUInt64
       all throw InvalidCastException */
  }
}

There are a couple of interesting things to note here:

  • I explicitly implemented IConvertible.ToString. I did that so the value you'll get in a String.Format call or a standard ToString call will be different than the encoded value. To get the encoded value, you have to explicitly cast the object to IConvertible. This allows you to differentiate where the encoded value shows up. (There's a quick sketch of the difference right after this list.)
  • ToType pipes to Convert.ChangeType. Convert.ChangeType uses IConvertible where possible, so you kinda get this for free. Another reason IConvertible is better here than IFormattable.
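
To make that first point concrete, here's a quick usage sketch with the demo object from above:

var obj = new MyCustomObject { First = 12, Second = 345 };

// The regular ToString/String.Format path gets the "friendly" representation.
Console.WriteLine(obj.ToString());                   // "<12, 345>"
Console.WriteLine(String.Format("Value: {0}", obj)); // "Value: <12, 345>"

// Casting to IConvertible - which is effectively what UrlHelper does, per the
// ordering above - gets the encoded form that belongs in the URL.
Console.WriteLine(((IConvertible)obj).ToString(CultureInfo.InvariantCulture)); // "12|345"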

Inbound Route Value, Action Parameter, and ApiExplorer: TypeConverter

When ApiExplorer is generating documentation, it needs to know whether the action parameter can be converted into a string (so it can go in the URL). It does this by getting the TypeConverter for the object and querying CanConvertFrom(typeof(string)). If the answer is false, ApiExplorer assumes the parameter has to be in the body of a request - which wrecks any generated documentation because that thing should be in the route.

To satisfy ApiExplorer, you need to implement a TypeConverter.

When your custom object is used as a route value coming in or otherwise as an action parameter, you also need to be able to model bind the encoded value to your custom object.

There is a built-in TypeConverterModelBinder that uses TypeConverter so implementing the TypeConverter will address model binding as well.

Here's a simple TypeConverter for the custom object:

using System;
using System.ComponentModel;
using System.Globalization;

namespace SerializationDemo
{
  public class MyCustomObjectTypeConverter : TypeConverter
  {
    public override bool CanConvertFrom(
        ITypeDescriptorContext context,
        Type sourceType)
    {
      return sourceType == typeof(string) ||
             base.CanConvertFrom(context, sourceType);
    }

    public override bool CanConvertTo(
        ITypeDescriptorContext context,
        Type destinationType)
    {
      return destinationType == typeof(string) ||
             base.CanConvertTo(context, destinationType);
    }

    public override object ConvertFrom(
        ITypeDescriptorContext context,
        CultureInfo culture,
        object value)
    {
      var encoded = value as String;
      if (encoded != null)
      {
        return MyCustomObject.Decode(encoded);
      }

      return base.ConvertFrom(context, culture, value);
    }

    public override object ConvertTo(
        ITypeDescriptorContext context,
        CultureInfo culture,
        object value,
        Type destinationType)
    {
      var cast = value as MyCustomObject;
      if (destinationType == typeof(string) && cast != null)
      {
        return cast.Encode();
      }

      return base.ConvertTo(context, culture, value, destinationType);
    }
  }
}

And, of course, add the [TypeConverter] attribute to the custom object.

[TypeConverter(typeof(MyCustomObjectTypeConverter))]
public class MyCustomObject : IConvertible
{
  //...
}
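
If you want to sanity-check the converter outside of Web API, this is roughly what the built-in TypeConverterModelBinder ends up doing under the covers - look up the converter via the attribute and convert the string:

// Quick sketch: exercise the TypeConverter the same way model binding does.
var converter = TypeDescriptor.GetConverter(typeof(MyCustomObject));

var bound = (MyCustomObject)converter.ConvertFromString("12|345");
// bound.First == 12, bound.Second == 345

var encoded = converter.ConvertToString(bound);
// encoded == "12|345"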

Setting Swagger/Swashbuckle Doc

Despite all of this, generated Swagger/Swashbuckle documentation will still show an expanded representation of your object, which is inconsistent with how a user will actually work with it from a client perspective.

At application startup, you need to register a type mapping with the Swashbuckle SwaggerSpecConfig.Customize method to map your custom type to a string.

SwaggerSpecConfig.Customize(c =>
{
  c.MapType<MyCustomObject>(() =>
      new DataType { Type = "string", Format = null });
});

Even More Control: JsonConverter

Newtonsoft.Json should handle converting your type automatically based on the IConvertible and TypeConverter implementations.

However, if you're doing something extra fancy like implementing a custom generic object, you may need to implement a JsonConverter for your object.

There is some great doc on the Newtonsoft.Json site so I won't go through that here.
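
That said, for completeness, here's a minimal sketch of what a JsonConverter for the demo object might look like. The class name here is just for illustration, and the automatic handling above may already cover you:

using System;
using Newtonsoft.Json;

namespace SerializationDemo
{
  // Reads and writes the same pipe-delimited encoding used everywhere else.
  public class MyCustomObjectJsonConverter : JsonConverter
  {
    public override bool CanConvert(Type objectType)
    {
      return objectType == typeof(MyCustomObject);
    }

    public override object ReadJson(
      JsonReader reader,
      Type objectType,
      object existingValue,
      JsonSerializer serializer)
    {
      // Expect the encoded "first|second" string form.
      var encoded = reader.Value as string;
      return encoded == null ? null : MyCustomObject.Decode(encoded);
    }

    public override void WriteJson(
      JsonWriter writer,
      object value,
      JsonSerializer serializer)
    {
      if (value == null)
      {
        writer.WriteNull();
        return;
      }

      writer.WriteValue(((MyCustomObject)value).Encode());
    }
  }
}

You'd hook that up with a [JsonConverter(typeof(MyCustomObjectJsonConverter))] attribute on the class or by adding it to the serializer settings.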

Using Your Custom Object

With the IConvertible and TypeConverter implementations, you should be able to work with your object like any other primitive and have it properly appear in route URLs, model bind, and so on.

// You can define a controller action that automatically
// binds the string to the custom object. You can also
// generate URLs that will have the encoded value in them.
[Route("api/increment/{value}", Name = "increment-values")]
public MyCustomObject IncrementValues(MyCustomObject value)
{
  // Create a URL like this...
  var url = this.Url.Link("increment-values", new { value = value });

  // Or work with an automatic model-bound object coming in...
  return new MyCustomObject
  {
    First = value.First + 1,
    Second = value.Second + 1
  };
}

Bonus: Using Thread Principal During Serialization

If, for whatever reason, your custom object needs the user's principal on the thread during serialization, you're in for a surprise: While the authenticated principal is on the thread during your ApiController run, HttpServer restores the original (unauthenticated) principal before response serialization happens.

It's recommended you use HttpRequestMessage.GetRequestContext().Principal instead of Thread.CurrentPrincipal but that's kind of hard by the time you get to type conversion and so forth and there's no real way to pass that around.

The way you can work around this is by implementing a custom JsonMediaTypeFormatter.

The JsonMediaTypeFormatter has a method GetPerRequestFormatterInstance that is called when serialization occurs. It does get the current request message, so you can pull the principal out then and stick it on the thread long enough for serialization to happen.

Here's a simple implementation:

public class PrincipalAwareJsonMediaTypeFormatter : JsonMediaTypeFormatter
{
  // This is the default constructor to use when registering the formatter.
  public PrincipalAwareJsonMediaTypeFormatter()
  {
  }

  // This is the constructor to use per-request.
  public PrincipalAwareJsonMediaTypeFormatter(
    JsonMediaTypeFormatter formatter,
    IPrincipal user)
    : base(formatter)
  {
    this.User = user;
  }

  // For per-request instances, this is the authenticated principal.
  public IPrincipal User { get; private set; }

  // Here's where you create the per-user/request formatter.
  public override MediaTypeFormatter GetPerRequestFormatterInstance(
    Type type,
    HttpRequestMessage request,
    MediaTypeHeaderValue mediaType)
  {
    var requestContext = request.GetRequestContext();
    var user = requestContext == null ? null : requestContext.Principal;
    return new PrincipalAwareJsonMediaTypeFormatter(this, user);
  }

  // When you deserialize an object, throw the principal
  // on the thread first and restore the original when done.
  public override object ReadFromStream(
    Type type,
    Stream readStream,
    Encoding effectiveEncoding,
    IFormatterLogger formatterLogger)
  {
    var originalPrincipal = Thread.CurrentPrincipal;
    try
    {
      if (this.User != null)
      {
        Thread.CurrentPrincipal = this.User;
      }

      return base.ReadFromStream(type, readStream, effectiveEncoding, formatterLogger);
    }
    finally
    {
      Thread.CurrentPrincipal = originalPrincipal;
    }
  }

  // When you serialize an object, throw the principal
  // on the thread first and restore the original when done.
  public override void WriteToStream(
    Type type,
    object value,
    Stream writeStream,
    Encoding effectiveEncoding)
  {
    var originalPrincipal = Thread.CurrentPrincipal;
    try
    {
      if (this.User != null)
      {
        Thread.CurrentPrincipal = this.User;
      }

      base.WriteToStream(type, value, writeStream, effectiveEncoding);
    }
    finally
    {
      Thread.CurrentPrincipal = originalPrincipal;
    }
  }
}

You can register that at app startup with your HttpConfiguration like this:

// Copy any custom settings from the current formatter
// into a new formatter.
var formatter = new PrincipalAwareJsonMediaTypeFormatter(config.Formatters.JsonFormatter);

// Remove the old formatter, add the new one.
config.Formatters.Remove(config.Formatters.JsonFormatter);
config.Formatters.Add(formatter);

Conclusion

I have to admit, I'm a little disappointed in the different ways the same things get handled here. Why do some things allow IConvertible but others require TypeConverter? It'd be nice if it was consistent.

In any case, once you know how it works, it's not too hard to implement. Knowing is half the battle, right?

Hopefully this helps you in your custom object creation journey!

autofac, aspnet comments edit

We've been silent for a while, but we want you to know we've been working diligently on trying to get a release of Autofac that works with ASP.NET 5.0/vNext.

When it's released, the ASP.NET vNext compatible version will be Autofac 4.0.

Here's a status update on what's been going on:

  • Split repositories for Autofac packages. We had been maintaining all of the Autofac packages - Autofac.Configuration, Autofac.Wcf, and so on - in a single repository. This made it easier to work with but also caused trouble with independent package versioning and codeline release tagging. We've split everything into separate repositories now to address these issues. You can see the repositories by looking at the Autofac organization in GitHub.
  • Switched to Gitflow. Previously we were just working in master and it was pretty easy. Occasionally we'd branch for larger things, but not always. We've switched to using Gitflow so you'll see the 4.0 work going on in a "develop" branch in the repo.
  • Switched the build. We're trying to get the build working using only the new stuff (.kproj/project.json). This is proving to be a bit challenging, which I'll discuss more below.
  • Switched the tests to xUnit. In order to see if we broke something we need to run the tests, and the only runner in town for vNext is xUnit, so... we switched, at least for core Autofac.
  • Working on code conversion. Most of the differences we've seen in the API have to do with the way you access things through reflection. Of course, IoC containers do a lot of that, so there's a lot of code to update and test. The new build system handles things like resources (.resx) slightly differently, too, so we're working on making sure everything comes across and tests out.
  • Moved continuous integration to AppVeyor. You'll see build badges on all of the README files in the respective repos. The MyGet CI NuGet feed is still live and where we publish the CI builds, but the build proper is on AppVeyor. I may have to write a separate blog entry on why we switched, but basically - we had more control at AppVeyor and things are easier to manage. (We are still working on getting a CI build for the vNext stuff going on there.)

Obviously at a minimum we'd like to get core Autofac out sooner rather than later. Ideally we could also get a few other items like Autofac.Configuration out, too, so folks can see things in a more "real world" scenario.

Once we can get a reliable Autofac core ported over, we can get the ASP.NET integration piece done. That work is going on simultaneously, but it's hard to get integration done when the core bits are still moving.

There have, of course, been some challenges. Microsoft's working hard on getting things going, but things still aren't quite baked. Most of it comes down to "stuff that will eventually be there but isn't quite done yet."

  • Portable Class Library support isn't there. We switched Autofac to PCL to avoid having a ton of #if ASPNETCORE50 sorts of code in the codebase. We had that early on with things like Silverlight and PCL made this really nice. Unfortunately, the old-style .csproj projects don't have PCL support for ASP.NET vNext yet (though it's supposed to be coming) and we're not able to specify PCL target profiles in project.json. (While net45 works, it doesn't seem that .NETPortable,Version=v4.6,Profile=Profile259 does, or anything like it.) That means we're back to a couple of #if items and still trying to figure out how to get the other platforms supported. UPDATE: Had a Twitter conversation with Dave Kean and it turns out we may need to switch the build back to .csproj to get PCL support, but PCL should allow us to target ASP.NET vNext.
  • Configuration isn't quite baked. Given there's no web.config or ConfigurationElement support in ASP.NET, configuration is handled differently - through Microsoft.Framework.ConfigurationModel. Unfortunately, they don't currently support the notion of arrays/collections, so for Autofac.Configuration if you wanted to register a list of modules... you can't with this setup. There's an issue for it filed but it doesn't appear to have any progress. Sort of a showstopper and may mean we need to roll our own custom serialization for configuration.
  • The build structure has a steep learning curve. I blogged about this before so I won't recap it, but suffice to say, there's not much doc and there's a lot to figure out in there.
  • No strong naming. One of the things they changed about the new platform is the removal of strong naming for assemblies. Personally, I'm fine with that - it's always been a headache - but there's a lot of code access security stuff in Autofac that we'd put into place to make sure it'd work in partial trust; we had [InternalsVisibleTo] attributes in places... and that all has to change. You can't have a strong-named assembly depend on a not-strong-named assembly, and as they move away from strong naming, it basically means everything has to either maintain two builds (strong named and not strong named) or we stop strong naming. I think we're leaning toward not strong naming - for the same reason we tried getting away from the #if statements. One codeline, one release, easy to manage.

None of this is insurmountable, but it is a lot like dominos - if we can get the foundation stuff up to date, things will just start falling into place. It's just slow to make progress when the stuff you're trying to build on isn't quite there.

aspnet, net, autofac, github comments edit

Alex and I are working on switching Autofac over to ASP.NET vNext and as part of that we're trying to figure out what the proper structure is for a codeline, how a build should look, and so on.

There is a surprisingly small amount of documentation on the infrastructure bits. I get that things are moving quickly, but the amazing lack of detailed docs makes for a steep learning curve and a lot of frustration. I mean, you can read about the schema for project.json but even that is out of date/incomplete so you end up diving into the code, trying to reverse-engineer how things come together.

Below is a sort of almost-stream-of-consciousness braindump of things I've found while working on sorting out build and repo structure for Autofac.

No More MSBuild - Sake + KoreBuild

If you're compiling only on a Windows platform you can still use MSBuild, but if you look at the ASP.NET vNext repos, you'll see there's no MSBuild to be found.

This is presumably to support cross-platform compilation of the ASP.NET libraries and the K runtime bits. That's a good goal and it's worth pursuing - we're going that direction for at least core Autofac and a few of the other core libs that need to change (like Autofac.Configuration). Eventually I can see all of our stuff switching that way.

The way it generally works in this system is:

  • A base build.cmd (for Windows) and build.sh (for Linux) use NuGet to download the Sake and KoreBuild packages.
  • The scripts kick off the Sake build engine to run a makefile.shade which is platform-agnostic.
  • The Sake build engine, which is written in cross-platform .NET, handles the build execution process.

The Sake Build System

Sake is a C#-based make/build system that appears to have been around for quite some time. There is pretty much zero documentation on this, which makes figuring it out fairly painful.

From what I gather, it is based on the Spark view engine and uses .shade view files as the build scripts. When you bring in the Sake package, you get several shared .shade files that get included to handle common build tasks like updating assembly version information or running commands.

It enables cross-platform builds because Spark, C#, and the overall execution process works both on Mono and Windows .NET.

One of the nice things it has built in, and a compelling reason to use it beyond the cross-platform support, is a convention-based standard build lifecycle that runs clean/build/test/package targets in a standard order. You can easily hook into this pipeline to add functionality but you don't have to think about the order of things. It's pretty nice.

The KoreBuild Package

KoreBuild is a build system layered on top of Sake that is used to build K projects. As with Sake, there is zero doc on this.

If you're using the new K build system, though, and you're OK with adopting Sake, there's a lot of value in the KoreBuild package. KoreBuild layers in Sake support for automatic NuGet package restore, native compile support, and other K-specific goodness. The _k-standard-goals.shade file is where you can see the primary set of things it adds.

The Simplest Build Script

Assuming you have committed to the Sake and KoreBuild way of doing things, you can get away with an amazingly simple top-level build script that will run a standard clean/build/test/package lifecycle automatically for you.

var AUTHORS='Your Authors Here'

use-standard-lifecycle
k-standard-goals

At the time of this writing, the AUTHORS value must be present or some of the standard lifecycle bits will fail... but since the real authors for your package are specified in project.json files now, this is really just a placeholder that has to be there. It doesn't appear to matter what the value is.

Embedded Resources Have Changed

There is currently no mention of how embedded resources are handled in the documentation on project.json but if you look at the schema you'll see that you can specify a resources element in project.json the same way you can specify code.

A project with embedded resources might look like this (minus the frameworks element and all the dependencies and such to make it easier to see):

{
    "description": "Enables Autofac dependencies to be registered via configuration.",
    "authors": ["Autofac Contributors"],
    "version": "4.0.0-*",
    "compilationOptions": {
        "warningsAsErrors": true
    },
    "code": ["**\\*.cs"],
    "resources": "**\\*.resx"
    /* Other stuff... */
}

Manifest Resource Path Changes

If you include .resx files as resources, they correctly get converted to .resources files without doing anything. However, if you have other resources, like an embedded XML file...

{
    "code": ["**\\*.cs"],
    "resources": ["**\\*.resx", "Files\\*.xml"]
}

...then you get an odd generated path. Easiest to see with an example. Say you have this:

~/project/
  src/
    MyAssembly/
      Files/
        Embedded.xml

In old Visual Studio/MSBuild, the file would be embedded and the internal manifest resource stream path would be MyAssembly.Files.Embedded.xml - the folders would represent namespaces and path separators would basically become dots.

However, in the new world, you get a manifest resource path Files/Embedded.xml - literally the relative path to the file being embedded. If you have unit tests or other stuff where embedded files are being read, this will throw you for a loop.
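
To make that concrete, here's a small sketch of how the lookup changes at runtime. The assembly and type names are the hypothetical ones from the example above:

using System.Reflection;

// Grab the assembly via any type that lives in it (SomeTypeInMyAssembly is hypothetical).
var assembly = typeof(SomeTypeInMyAssembly).Assembly;

// Old VS/MSBuild embedding used a namespace-style name:
// assembly.GetManifestResourceStream("MyAssembly.Files.Embedded.xml");

// New project.json embedding uses the relative path, forward slashes and all:
var stream = assembly.GetManifestResourceStream("Files/Embedded.xml");

// When in doubt, dump what actually got embedded:
foreach (var name in assembly.GetManifestResourceNames())
{
  System.Console.WriteLine(name);
}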

No .resx to .Designer.cs

A nice thing about the resource system in VS/MSBuild was the custom tool that would run to convert .resx files into strongly-typed resources in .Designer.cs files. There's no automatic support for this anymore.

However, if you give in to the KoreBuild way of things, they do package an analogous tool inside KoreBuild that you can run as part of your command-line build script. It won't pick up changes if you add resources to the file in VS, but it'll get you by.

To get .resx building strongly-typed resources, add it into your build script like this:

var AUTHORS='Your Authors Here'

use-standard-lifecycle
k-standard-goals

#generate-resx .resx description='Converts .resx files to .Designer.cs' target='initialize'

What that does is add a generate-resx build target to your build script that runs during the initialize phase of the standard lifecycle. The generate-resx target depends on a target called resx which does the actual conversion to .Designer.cs files. The resx target comes from KoreBuild and is included when you include the k-standard-goals script, but it doesn't run by default, which is why you have to include it yourself.

Gotcha: The way it's currently written, your .resx files must be in the root of your project (it doesn't use the resources value from project.json). They will generate the .Designer.cs files into the Properties folder of your project. This isn't configurable.

ASP.NET Repo Structure is Path of Least Resistance

If you give over to Sake and KoreBuild, it's probably good to also give over to the source repository structure used in the ASP.NET vNext repositories. In KoreBuild particularly, certain tasks have hardcoded assumptions that you're using that repo structure.

The structure looks like this:

~/MyProject/
  src/
    MyProject.FirstAssembly/
      Properties/
        AssemblyInfo.cs
      MyProject.FirstAssembly.kproj
      project.json
    MyProject.SecondAssembly/
      Properties/
        AssemblyInfo.cs
      MyProject.SecondAssembly.kproj
      project.json
  test/
    MyProject.FirstAssembly.Test/
      Properties/
        AssemblyInfo.cs
      MyProject.FirstAssembly.Test.kproj
      project.json
    MyProject.SecondAssembly.Test/
      Properties/
        AssemblyInfo.cs
      MyProject.SecondAssembly.Test.kproj
      project.json
  build.cmd
  build.sh
  global.json
  makefile.shade
  MyProject.sln

The key important bits there are:

  • Project source is in the src folder.
  • Tests for the project are in the test folder.
  • There's a top-level solution file (if you're using Visual Studio).
  • The global.json points to the src folder as the place for project source.
  • There are build.cmd and build.sh scripts to kick off the cross-platform builds.
  • The top-level makefile.shade handles build orchestration.
  • The folder names for the source and test projects are the names of the assemblies they generate.
  • Each assembly has...
    • Properties with AssemblyInfo.cs where the AssemblyInfo.cs doesn't include any versioning information, just other metadata.
    • A .kproj file (if you're using Visual Studio) that is named after the assembly being generated.
    • A project.json that spells out the authors, version, dependencies, and other metadata about the assembly being generated.

Again, a lot of assumptions seem to be built in that you're using that structure. You can save a lot of headaches by switching.

I can see this may cause some long-path problems. Particularly if you are checking out code into a deep file folder and have a long assembly name, you could have trouble. Think C:\users\myusername\Documents\GitHub\project\src\MyProject.MyAssembly.SubNamespace1.SubNamespace2\MyProject.MyAssembly.SubNamespace1.SubNamespace2.kproj. That's 152 characters right there. Add in those crazy WCF-generated .datasource files and things are going to start exploding.

Assembly/Package Versioning in project.json

Part of what you put in project.json is your project/package version:

{
    "authors": ["Autofac Contributors"],
    "version": "4.0.0-*",
    /* Other stuff... */
}

There doesn't appear to be a way to keep multiple assemblies in a solution consistently versioned. That is, you can't put the version info in the global.json at the top level and I'm not sure where else you could store it. You could probably come up with a custom build task to handle centralized versioning, but it'd be nice if there was something built in for it.

XML Doc Compilation Warnings

The old compiler csc.exe had a thing where it would automatically output compiler warnings for XML documentation errors (syntax or reference errors). The K compiler apparently doesn't do this by default so they added custom support for it in the KoreBuild package.

To get XML documentation compilation warnings output in your build, add it into your build script like this:

var AUTHORS='Your Authors Here'

use-standard-lifecycle
k-standard-goals

#xml-docs-test .clean .build-compile description='Check generated XML documentation files for errors' target='test'
  k-xml-docs-test

That adds a new xml-docs-test target that runs during the test part of the lifecycle (after compile). It requires the project to have been cleaned and built before running. When it runs, it calls the k-xml-docs-test target to manually write out XML doc compilation warnings.

Runtime Update Gotchas

Most build.cmd or build.sh build scripts have a couple of lines like this:

CALL packages\KoreBuild\build\kvm upgrade -runtime CLR -x86
CALL packages\KoreBuild\build\kvm install default -runtime CoreCLR -x86

Basically:

  • Get the latest K runtime from the feed.
  • Set the latest K runtime as the 'default' one to use.

While I think this is fine early on, I can see a couple of gotchas with this approach.

  • Setting the 'default' modifies the user profile. When you call kvm install default, the intent is to set the alias default to refer to the specified K runtime version (in the above example, that's the latest version). When you set this alias, it modifies a file attached to the user profile containing the list of aliases - it's a global change. What happens if you have a build server environment where lots of builds are running in parallel? You're going to get build processes changing aliases out from under each other.
  • How does backward compatibility work? At this early stage I do want to build against the latest runtime. Later, though, I'm guessing I'll want to pin a specific runtime revision in my build script (see the sketch after this list) and always build against that to ensure I stay compatible with applications stuck at that runtime version. That's probably fine, but is there going to be a need for some sort of... "binding redirect" (?) for runtime versions? Do I need to specify some sort of "list of supported runtime versions"?
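
For the pinning scenario, I'm assuming the build script change would look something like the lines below - installing a specific version rather than upgrading to the latest. The version number here is made up purely for illustration, and in theory this also sidesteps the shared 'default' alias problem since nothing gets re-aliased.

CALL packages\KoreBuild\build\kvm install 1.0.0-beta1 -runtime CLR -x86
CALL packages\KoreBuild\build\kvm install 1.0.0-beta1 -runtime CoreCLR -x86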

Testing Means XUnit and aspnet50

At least at this early stage, XUnit seems to be the only game in town for unit testing. The KoreBuild stuff even has XUnit support built right in, so, again, path of least resistance is to switch if you're not already on it.

I did find a gotcha, though: if you want k test to work, your assemblies must target aspnet50.

Which is to say... in your unit test project.json you'll have a line to specify the test runner command:

{
    "commands": {
        "test": "xunit.runner.kre"
    },
    "frameworks": {
        "aspnet50": { }
    }
}

Specifying that will allow you to drop to a command prompt inside the unit test assembly's folder and run k test to execute the unit tests.

In early work on Autofac.Configuration I was trying to get this working with both the Autofac.Configuration assembly and the unit test assembly targeting only aspnetcore50. When I ran k test I got a bunch of exceptions (which I didn't keep track of, sorry). After a lot of trial and error, I found that if both the assembly under test (Autofac.Configuration) and the unit test assembly (Autofac.Configuration.Test) targeted aspnet50, everything ran perfectly.
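
For reference, a project.json can list both frameworks at once - something like the sketch below, with any framework-specific dependencies omitted for brevity (targeting aspnetcore50 for real generally means adding the individual BCL packages). The takeaway from my experiments is just that aspnet50 needs to be in the mix for k test to behave.

{
    "commands": {
        "test": "xunit.runner.kre"
    },
    "frameworks": {
        "aspnet50": { },
        "aspnetcore50": { }
    }
}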

PCL Support is In Progress

It'd be nice if there was a portable class library profile that just handled everything rather than all of these different profiles + aspnet50 + aspnetcore50. There's not. I gather from Twitter conversations that this may be in the works but I'm not holding my breath.

Also, there's a gotcha with Xamarin tools: If you're using a profile (like Profile259) that targets a common subset of a lot of runtimes including mobile platforms, then the output of your project will change based on whether or not you have Xamarin tools installed. For example, without Xamarin installed you might get .nupkg output for portable-net45+win+wpa81+wp80. However, with Xamarin installed that same project will output for portable-net45+win+wpa81+wp80+monotouch+monoandroid.

Configuration Changes

Obviously, with the break from System.Web and the rest of the monolithic framework, you don't really have web.config as such anymore. Instead, the configuration system has become Microsoft.Framework.ConfigurationModel.

It's a pretty nice and flexible abstraction layer that lets you specify configuration in XML, JSON, INI, or environment variable format. You can see some examples here.
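
To give a rough idea of the shape of the thing, here's a minimal sketch of how you compose configuration sources and read values with the beta-era bits. The file name and key path are just examples of mine, not anything standard.

// Minimal sketch against Microsoft.Framework.ConfigurationModel (beta-era API).
// "config.json" and the key path are illustrative.
using Microsoft.Framework.ConfigurationModel;

public class ConfigDemo
{
    public static string GetConnectionString()
    {
        // Sources compose in order; later sources can override earlier ones.
        var configuration = new Configuration();
        configuration.AddJsonFile("config.json");
        configuration.AddEnvironmentVariables();

        // Keys flatten into colon-delimited paths rather than strongly-typed sections.
        return configuration.Get("Data:DefaultConnection:ConnectionString");
    }
}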

That said, it's a huge change and takes a lot to migrate.

  • No appSettings. I'm fine with this because appSettings always ended up being a dumping ground, but it means everything you originally tied to appSettings needs to change.
  • No ConfigurationElement objects. I can't tell you how much I've written against the old ConfigurationElement mechanism. It had validation, strong type parsing, serialization, the whole bit. None of that works in this new system. You can imagine how this affects things like Autofac.Configuration.
  • List and collection support is nonexistent. I've actually filed a GitHub issue about this. A lot of the configuration I have in both Autofac.Configuration and elsewhere is a list of elements that are parameterized. The current XML and JSON parsers for config specifically disallow list/collection support. Everything must be a unique key in a tree-like hierarchy. That sort of renders the new config system, at least for me, pretty much unusable except for the most trivial of things. Hopefully this changes.
  • Everything is file or in-memory. There's no current support for pulling in XML or JSON configuration that comes from, say, a REST API call from a centralized repository. Even in unit testing, all the bits that actually run the configuration parsing on a stream of XML/JSON are internals rather than exposed - you have to load config from a file or manually create it yourself in memory by adding key/value pairs. There's a GitHub issue open for this, too.

As a workaround, I'm considering using custom object serialization and bypassing the new configuration system altogether. I like the flexibility of the new system but the limitations are pretty overwhelming right now.
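
Concretely, what I mean by custom object serialization is about as boring as it sounds: define your own settings classes and deserialize them straight from JSON, skipping the configuration abstraction entirely. A minimal sketch using Json.NET - the ComponentRegistration type here is made up for illustration, but it shows how lists and nested objects "just work" with a plain serializer.

using System.Collections.Generic;
using System.IO;
using Newtonsoft.Json;

// Made-up settings type; lists and dictionaries deserialize without any ceremony.
public class ComponentRegistration
{
    public string Type { get; set; }
    public Dictionary<string, string> Parameters { get; set; }
}

public class SettingsLoader
{
    public static List<ComponentRegistration> Load(string path)
    {
        var json = File.ReadAllText(path);
        return JsonConvert.DeserializeObject<List<ComponentRegistration>>(json);
    }
}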

android comments edit

A couple of years back I bought some Samsung TecTiles for use with my Galaxy S3. I created a tag that would easily switch my phone to vibrate mode at work - get to work, scan it, magic.

Within the last couple of weeks I upgraded to a Galaxy Note 4 and when I tried to use the Note 4 on my TecTile I got a message saying the tag type wasn't supported.

A little research revealed that the TecTiles I bought are "MIFARE Classic" format, which are apparently not universally compatible. So... crap. If I want to mess around with NFC, I'm going to need to get some different tags. These TecTiles can't be read by any device I have in my home anymore.

android comments edit

My phone is a Samsung Galaxy S3 on Verizon.

If you already know about running custom ROMs and customizing your Android phone, you're probably laughing right now. Not knowing any better, I took all the standard over-the-air ("OTA") updates all the way through current Android 4.4.2, figuring when the time came I could follow whatever the latest rooting process is and update to something like Cyanogenmod. Oh, how wrong I was.

The problem was mostly in the things I didn't understand, or thought I understood, about the whole process of putting a custom ROM on the phone. There is so much information out there, but there isn't a guide that tells you both how to do the upgrade and what you're actually doing - that is, why each step is required.

I learned so much in failing to flash my phone. I failed miserably, getting the phone into a state where it would mostly boot up, but would sometimes fail with some security warning ("soft-bricking" the phone; fully "bricked" would imply I couldn't do anything with it at all).

So given all that, I figured rather than write a guide to how to put a custom ROM on your phone, I'd just write up all the stuff I learned so maybe folks trying this themselves will understand more about what's going on.

Disclaimers, disclaimers: I'm a Windows guy, though I have some limited Linux experience. Things that might be obvious to Linux folks may not be obvious to me. I also may not have the 100% right description at a technical level for things, but this outlines how I understand it. My blog is on GitHub - if you want to correct something, feel free to submit a pull request.

Background/Terminology

An "OS image" that you want to install on your phone is a ROM. I knew this going in, but just to level-set, you should know the terminology. A ROM generally contains a full default setup for a version of Android, and there are a lot of them. The ones you get from your carrier are "stock" or "OTA" ROMs. Other places, like Cyanogenmod, build different configurations of Android and let you install their version.

ROMs generally include software to run your phone's modem. At least, the "stock" ROMs do. This software tells the phone how to connect to the carrier network, how to connect to wireless, etc. I don't actually know if custom ROMs also include modem software, but I'm guessing not since these seem to be carrier-specific.

You need "root" access on your phone to do any low-level administrative actions. You'll hear this referred to as "rooting" the phone. ("root" is the name of the superuser account in Linux, like "administrator" in Windows.) Carriers lock their stock ROMs down so software can't do malicious things... and so you can't uninstall the crapware they put on your phone. The current favorite I've seen is Towelroot.

With every update to the stock ROM, carriers try to "plug the holes" that allow you to get root access. Sometimes they also remove root access you might already have.

You need this root access so you can install a custom "recovery mode" on your phone. (I'll get to what "recovery" is in a minute.)

When you turn on your phone or reboot, a "bootloader" is responsible for starting up the Android OS. This is a common thing in computer operating systems. Maybe you've seen computers that "dual boot" two different operating systems; or maybe you've used a special menu to go into "safe mode" during startup. The bootloader is what allows that to happen.

In Android, the bootloader lets you do basically one of three things:

  • Boot into the Android OS installed.
  • Boot into "recovery mode," which allows you to do some maintenance functions.
  • Boot into "download mode," which allows you to connect your phone to your computer to do special software installations.

You don't ever actually "see" the bootloader. It's just software behind the scenes making decisions about what to do when the power button gets pushed.

Recovery mode on your phone provides access to maintenance functions. If you really get into a bind, you may want to reset your phone to factory defaults. Or you may need to clear some cached data the system has that's causing incorrect behavior. The "recovery mode" menu on the phone allows you to do these things. This is possible because it's all happening before the Android OS starts up.

What's interesting is that people have created "custom recovery modes" that you can install on the phone that give the phone different/better options here. This is the gateway for changing the ROM on your phone or making backups of your current ROM.

Download mode on your phone lets you connect the phone to a computer to do custom software installations. It's the complement to recovery mode: you connect the phone to a computer with a USB cable and push a ROM from the computer over to the phone.

Odin is software for Samsung devices that uses download mode to flash a ROM onto a device. When you go into download mode on the phone, something has to be running on your computer to push the software to the phone. For Samsung devices, this software is called "Odin." I can't really find an "official" download for Odin, which is sort of scary and kind of sucks. (You can apparently also use software called Heimdall, but I didn't try that.)

The Process (And Where I Failed)

Now that you know the terminology, understanding what's going on when you're putting a custom ROM on the phone should make a bit more sense. It should also help you figure out better what's gone wrong (should something go wrong) so you know where to look to fix it.

First you need to root the phone. You'll need the administrative access so you can install some software that will work at a superuser level to update the recovery mode on your phone.

Rooting the phone for me was pretty easy. Towelroot did the trick with one button click.

Next you need to install a custom recovery mode. A very popular one is ClockworkMod ROM Manager. You can get this from the Google Play store or from their site. It is sad how lacking the documentation is. There's nothing on their web site but download links; and other "how to use" guides are buried in forums.

If you do use ClockworkMod ROM Manager, though, there's a button inside the app that lets you flash the ClockworkMod Recovery Mode. Doing this will update the recovery mode menu and start letting you use options that ClockworkMod provides, like installing a custom ROM image or backing up your current ROM.

THIS IS WHERE THINGS WENT WRONG FOR ME. Remember how you get into the recovery mode by going through the bootloader? Verizon has very annoyingly locked down the bootloader on the Galaxy S3 on more recent stock ROM images such that it detects if you've got a custom recovery mode installed. If you do, you get a nasty warning message telling you that some unrecognized software is installed and you have to go to Verizon to fix it.

Basically, by installing ClockworkMod Recovery, I had soft-bricked my phone. Everything looked like it was going to work... but it didn't.

This is apparently a fairly recent thing with later OTA updates from Verizon. Had I not taken the updates, I could have done this process. But... I took the updates, figuring someone would have figured out a way around it by the time I was interested in going the custom ROM route, and I was wrong.

If the custom recovery works for your phone, then switching to a custom ROM is a matter of using the custom recovery menu to select a ROM and "switching" to it. The recovery software takes care of things for you. ROMs are available for download all over the place, like right off the Cyanogenmod site. Throw the ROM on your SD card, boot into recovery, choose the ROM, and hang tight. You're set.

If the custom recovery doesn't work for your phone then you're in my world and it's time to figure out what to do.

The way to un-soft-brick my phone was to manually restore the stock ROM. Again, there are really no official download links for this stuff, so it was a matter of searching and using (what appeared to be) reputable places to get the software.

  • Install the Odin software on your computer.
  • Boot the phone into "download mode" so it's ready to get the software.
  • Connect the phone to the computer.
  • Tell the phone to start downloading.
  • In Odin, select the stock ROM in "AP" or "Phone" mode. (You can't downgrade - I tried that. The best I could do was reinstall the same thing I had before.)
  • Hit the Odin "Start" button and be scared for about 10 minutes while it goes about its work and reboots.

After re-flashing the stock ROM, I was able to reboot without any security warnings. Of course, I had to reinstall all of my apps, re-customize my home screens, and all that...

...But I was back to normal. Almost.

My current problem is that I'm having trouble connecting to my wireless network. It sees the network, it says it's getting an IP address, but it hangs at the "determining the quality of your internet connection" step. This is a new problem that I didn't have before.

It seems to be a fairly common problem with no great solution. Some people fix it by rebooting their wireless router (didn't fix it for me). Some people fix it by telling the phone to "forget" the network and then manually reconnecting to it (didn't fix it for me).

My current attempt at solving it involves re-flashing the modem software on the phone. Remember how I mentioned that the stock ROM comes with modem software in it? You can also get the modem software separately and use Odin to flash just the modem on the phone. Some folks say this solves it. I did the Odin part just this morning, and while I'm connected to wireless now, the real test will be whether it survives a phone restart. I'll keep watch on it.

Hopefully this helps you in your Android modding travels. I learned a lot, but knowing how the pieces work together would have helped me panic a lot less when things went south and would have helped me know what to look for when fixing things.

media, music, movies, hardware, home, synology comments edit

UPDATE 7/8/2015 - All current documentation for my media center and home network is at illigmediacenter.readthedocs.org.

Way back in 2008 I put up an overview of my media server solution based on the various requirements I had at the time - what I wanted out of it, what I wasn't so interested in.

I've tried to keep that up to date somewhat, but I figured it was time to provide a nice, clean update with everything I've got set up thus far and a little info on where I'm planning on taking it. Some of my requirements have changed, some of the ideas about what I want out of it have changed.

Requirements

  • Access to my DVD collection: I want to be able to get to all of the movies and TV shows in my collection. I am not terribly concerned with keeping the menus or extra features, but I do want the full audio track and video without noticeably reduced fidelity.
  • Family acceptance factor: I want my wife and daughter to be able to navigate through the system and find what they want to watch with minimal effort.
  • Access to my pictures: I want to be able to see my family photos from a place outside my office where the computers generally sit.
  • Access to my music: I want to be able to listen to my music collection from any room in the house.
  • As compatible as possible: When choosing formats, software, communication protocols, etc., I want things to be compatible with as many of the devices I own as possible. I have an Android phone, an iPod classic, an iPad, Windows machines, a PS4, an Xbox 360, a Kindle Fire, and a Google Chromecast.

Hardware

My hardware footprint has changed a bit since I started, but I'm in a pretty comfortable spot with my current setup and I think it has a good way forward.

  • Synology DS1010+: I use the Synology DS1010+ for my movie storage and as the Plex server (more on Plex in the software section). The 1010+ is an earlier version of the Synology DS1513+ and is amazingly flexible and extensible.
  • HP EX475 MediaSmart Server: This little machine was my first home server and was originally going to be my full end-to-end solution. Right now it serves as picture and audio storage as well as the audio server.
  • Playstation 3: My main TV has an Xbox 360, a PS3, and a small home theater PC attached to it... but I primarily use the PS3 for the front end for all of this stuff. The Xbox 360 may become the primary item once the Plex app is released for it. The PC was primary for a while but it's pretty underpowered and cumbersome to turn on, put to sleep, etc.
  • Google Chromecast: Upstairs I have a TV with a Chromecast and an Xbox 360 attached to it. The Chromecast does pretty well as the movie front end. I sort of switch between it and the 360, but I find I spend more time with the Chromecast when it comes to media.

Software

I use a fairly sizable combination of software to manage my media collection, organize the files, and convert things into compatible formats.

  • Picasa: I use Picasa to manage my photos. I mostly like it, though I've had some challenges as I have moved it from machine to machine over the years in keeping all of the photo album metadata and the ties to the albums synchronized online. Even with these challenges, it is the one tool I've seen with the best balance of flexibility and ease of use. My photos are stored on a network mounted drive on the HP MediaSmart home server.
  • Asset UPnP: Asset UPnP is the most flexible audio DLNA server I've found. You can configure the junk out of it to make sure it transcodes audio into the most compatible formats for devices, and you can even get your iTunes playlists in there. I run Asset UPnP on the HP MediaSmart server.
  • Plex: I switched from XBMC/Kodi to Plex for serving video, and I've also got Plex serving up my photos. The beauty of Plex is that it has a client on darn near every platform; it has a beautiful front end menu system; and it's really flexible so you can have it, say, transcode different videos into formats the clients require (if you're using the Plex client). Plex is a DLNA server, so if you have a client like the Playstation 3 that can play videos over DLNA, you don't even need a special client. Plex can allow you to stream content outside your local network so I can get to my movies from anywhere, like my own personal Netflix. Plex is running on the Synology DS1010+ for the server; and I have the Plex client on my iPad, Surface RT, home theater PC, Android phones, and Kindle Fire.
  • Handbrake: Handbrake is great for taking DVD rips and converting to MP4 format. (See below for why I am using MP4.) I blogged my settings for what I use when converting movies.
  • DVDFab HD Decrypter: I've been using DVDFab for ripping DVDs to VIDEO_TS images in the past. It works really well for that. These rips easily feed into Handbrake for getting MP4s.
  • MakeMKV: Recently I've been doing some rips from DVD using MakeMKV. I've found sometimes there are odd lip sync issues when ripping with DVDFab that don't show up with MakeMKV. (And vice versa - sometimes ripping with MakeMKV shows some odd sync issues that you don't see with DVDFab.) When I get to ripping Blu-ray discs, MakeMKV will probably be my go-to.
  • DVD Profiler: I use this for tracking my movie collection. I like the interface and the well-curated metadata it provides. I also like the free online collection interface - it helps a lot while I'm at the store browsing for new stuff to make sure I don't get any duplicates. Also helpful for insurance purposes.
  • Music Collector: I use this for tracking my music collection. The feature set is nice, though the metadata isn't quite as clean. Again, big help when looking at new stuff to make sure I don't get duplicates as well as for insurance purposes.
  • CrashPlan: I back up my music and photo collection using CrashPlan. I don't have my movies backed up because I figured I can always re-rip from the original media... but with CrashPlan it's unlimited data, so I could back it up if I wanted. CrashPlan runs on my MediaSmart home server right now; if I moved everything to Plex, I might switch CrashPlan to run on the DS1010+ instead.

Media Formats and Protocols

  • DLNA: I've been a fan from the start of DLNA, but the clients and servers just weren't quite there when I started out. This seems to be much less problematic nowadays. The PS3 handles DLNA really well and I even have a DLNA client on my Android phone so I can easily stream music. This is super helpful in getting compatibility out there.
  • Videos are MP4: I started out with full DVD rips for video, but as I've moved to Plex I've switched to MP4. While it can be argued that MKV is a more flexible container, MP4 is far more compatible with my devices. The video codec I use is x264. For audio, I put the first track as a 256kbps AAC track (for compatibility) and make the second track the original AC3 (or whatever) for the home theater benefit. I blogged my settings info.
  • Audio is MP3, AAC, and Apple Lossless: I like MP3 and get them from Amazon on occasion, but I am still not totally convinced that 256kbps MP3 is the way and the light. I still get a little scared that there'll be some better format at some point and if I bought the MP3 directly I won't be able to switch readily. I still buy CDs and I rip those into Apple Lossless format. (Asset UPnP will transcode Apple Lossless for devices that need the transcoding; or I can plug the iPod/iPad in and play the lossless directly from there.) And I have a few AAC files, but not too many.

Media Organization

Videos are organized using the Plex recommendations: I have a share on the Synology DS1010+ called "video" and in there I have "Movies," "TV," and "Home Movies" folders. I have Plex associating the appropriate data scrapers for each folder.

/videos
    /Home Movies
        /2013
        /2014
            /20140210 Concert 01.mp4
            /20140210 Concert 02.mp4
    /Movies
        /Avatar (2009).mp4
        /Batman Begins (2014).mp4
    /TV
        /Heroes
            /Season 01
                /Heroes.s01e01.mp4
                /Heroes.s01e02.mp4

You can read about the media naming recommendations in the Plex documentation.

Audio is kept auto-organized in iTunes: I just checked the box in iTunes to keep media automatically organized and left it at that. The media itself is on a mapped network drive on the HP MediaSmart server and that works reasonably enough, though at times the iTunes UI hangs as it transfers data over the network.

Photos are organized in folders by year and major event: I've not found a good auto-organization method that isn't just "a giant folder that dumps randomly named pictures into folders by year." I want it a little more organized than that, though it means manual work on my part. If I have a large number of photos corresponding to an event, I put those in a separate folder. For "one-off photos" I keep a separate monthly folder. Files generally have the date and time in YYYYMMDD_HHMMSS format so it's sortable.

/photos
    /2012
    /2013
    /2014
        /20140101 Random Pictures
            /20140104_142345 Lunch at McMenamins.jpg
            /20140117_093542 Traffic Jam.jpg
        /20140307 Birthday Party
            /20140307_112033.jpg
            /20140307_112219.jpg

Picasa works well with this sort of folder structure and it appears nicely in DLNA clients when they browse the photos by folder via Plex.

Network

My main router is a Netgear WNDR3700v2 and I love it. I've been through a few routers and wireless access points in the past but this thing has been solid and flexible enough with the out-of-the-box firmware such that I don't have to tweak with it to get things working. It just works.

I have wired network downstairs between the office/servers and the main TV/PS3/Xbox 360/HTPC. This works well and is pretty much zero maintenance. I have two D-Link switches (one in the office, one in the TV room) to reach all the devices. (Here's the updated version of the ones I use.)

The router provides simultaneous dual-band 2.4GHz and 5GHz wireless-N through the house which covers almost everywhere except a few corners. I've just recently added some Netgear powerline adapters to start getting wired networking upstairs into places where the wireless won't reach.

The Road Ahead

This setup works pretty well so far. I'm really enjoying the accessibility of my media collection and I find I'm using it even more often than I previously was. So where do I go next?

  • Plex on Xbox 360: The only reason I still have that home theater PC in my living room is that it's running the Plex app and if I want a nice interface with which to browse my movies, the HTPC is kinda the way to go. Plex has just come out with an app for Xbox One and should shortly be available for Xbox 360. This will remove the last reason I have an HTPC at all.
  • Add a higher-powered Plex server: My Synology DS1010+ does a great job running Plex right now, but it can't transcode video very well. Specifically, if I have a high-def video and I want to watch it on my phone, the server wants to transcode it to accommodate bandwidth constraints and whatnot... but the Synology is too underpowered to handle that. I'd like to see about getting a more powerful machine running as the actual Plex server - store the data on the Synology, but use a different machine to serve it up, handle transcoding, and so forth. (That little HTPC in the living room isn't powerful enough, so I'll have to figure something else out.)
  • Add wireless coverage upstairs: It's great that I can hook the Xbox upstairs to wired networking using the powerline adapters but that doesn't work so well for, say, my phone or the Chromecast. I'd like to add some wireless coverage upstairs (maybe chain another WNDR3700 in?) so I can "roam" in my house. I think even with the powerline stuff in there, it'd be fast enough for my purposes.
  • Integrate music into Plex: I haven't tried the Plex music facilities and I'm given to understand that not all Plex clients support music streaming. This is much lower priority for me given my current working (and awesome) Asset UPnP installation, but it'd be nice long-term to just have one primary server streaming content rather than having multiple endpoints to get different things.