I wanted to be able to not only tidy my JSON objects, but also sort by property. I wanted to do this so I could unify my project.json and config.json files while working in .NET Core. Figuring out where people were adding keys, finding redundant things added to files, and so on… having a predictable order makes it all that much easier.

Up front, I’ll tell you this is a total hack. I got it to work as a user package (code in your User folder) but haven’t taken it as far as putting it into a repo or adding it to Package Control. That’s probably the next step. I just wanted to get this out there.

I’ll also say this is instructions for a Windows environment. The places you’ll have to adjust for Linux should be obvious, but I don’t have guidance or instructions to help you. Sorry.

First, install the External Command package. This is a great general-purpose package for setting up external commands and pushing Sublime Text buffers through. Select some text and have that text passed to an external shell command on stdin. (No selection? It runs the whole file.)

Next, create a folder called SortJson in your User package folder. This is where we’ll put the contents of the user module.

If you don’t have Node installed… why not? Really, though, if you don’t, go get it and install it. We need it because we use the Node json-stable-stringify package to do the work.

Drop to a command prompt in the SortJson folder and install the json-stable-stringify module.

npm install json-stable-stringify

You should get a node_modules folder under that SortJson folder and inside you’ll have json-stable-stringify (and maybe dependencies, but that’s fine).

Now we need a little script to take the contents of stdin and pass it through json-stable-stringify.

Create a script called sort-json.js in the SortJson folder. In that script, put this:

var stringify = require('json-stable-stringify');
var opts = {
    "space": 2
};

var stdin = process.stdin,
    stdout = process.stdout,
    inputChunks = [];

stdin.resume();
stdin.setEncoding('utf8');

stdin.on('data', function (chunk) {
    inputChunks.push(chunk);
});

stdin.on('end', function () {
    var inputJSON = inputChunks.join(""),
        parsedData = JSON.parse(inputJSON),
        outputJSON = stringify(parsedData, opts);
    stdout.write(outputJSON);
    stdout.write('\n');
});

Unfortunately, the External Command package doesn’t let you set a working directory, so you can’t just fire up Node and run the sort-json.js directly. We have to create a little batch file that helps our script find the json-stable-stringify module at runtime.

Create a batch script called sort-json.cmd in the SortJson folder. In that script, put this:

@SETLOCAL
@SET NODE_PATH=%~dp0node_modules
@node "%~dp0sort-json.js" %*

That temporarily sets the NODE_PATH environment variable to the SortJson\node_modules folder (%~dp0 expands to the folder the batch file lives in), so Node can find the json-stable-stringify module when it runs the sort-json.js script.

The last thing you need is a tie to the Sublime Text command palette so you can run the command to sort JSON.

Create a file called sort-json.sublime-commands in the SortJson folder. In that file, put this:

[
    {
        "caption": "JSON: Sort Object",
        "command": "filter_through_command",
        "args": { "cmdline": "\"%APPDATA%\\Sublime Text 3\\Packages\\User\\SortJson\\sort-json.cmd\"" }
    }
]

You’ll have to restart Sublime, but when you do you’ll see a command in the palette “JSON: Sort Object”. Load up a file with a JSON object and run that command. You should get a sorted JSON object.

I try to pair this with the JsFormat package (for JSBeautify integration) as well as SublimeLinter-json (for linting/error checking), both of which are in Package Control. If you want to tweak the formatting that comes out of the sort, the opts variable at the top of sort-json.js holds the options passed to json-stable-stringify.

personal, movies, humor comments edit

I was watching Cutthroat Island this weekend with my daughter, who loves pirate movies, when I started thinking about these giant treasure chests full of gold you see in such films.

The stereotypical treasure chest

While I get that it’s a movie, it was fun to think about how practical carting around that treasure chest of doubloons might be.

Assumptions:

  • Doubloons are 22k gold (not pure 24k) and weigh about 6.867g each.
  • To keep the math simple, treat the coins as solid gold with a density of 19.3g per cubic centimeter.
  • Coins dumped loosely into a chest pack at roughly 35% efficiency - that is, about 35% of the chest’s interior volume ends up being actual gold.

Now, let’s say the treasure chest is like 90cm x 60cm x 60cm on the inside. A little large-ish, but not unheard of in a pirate movie.

  • The chest has 324,000cc interior capacity.
  • Multiplied by the 35% packing efficiency, you’d have 113,400cc of gold.
  • 113,400cc x 19.3g per cc = 2,188,620g = 2188.62kg (4825.1lb).

There is no way pirates are carrying around 5000lb gold chests.

Let’s figure a couple of guys - one on each end of the chest - need to cart the chest through the jungle or something. They’re strong, but they’re not Hafþór Júlíus Björnsson. You’re looking at something like 115kg (253.5lb) before it gets unwieldy.

Working backwards, 115kg is 5958.55cc of gold. With the packing ratio, that’s a chest with 17024.43cc total capacity. To make the math easy, let’s say it’s a cube-shaped chest. That’d yield a chest with internal dimensions of roughly 25.73cm (10.13 inches) on a side.
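
For reference, here’s that working-backwards arithmetic in one line, using the same gold density and packing efficiency assumed above:

\[
\frac{115{,}000\ \text{g}}{19.3\ \text{g/cm}^3} \approx 5{,}958.55\ \text{cm}^3
\;\Rightarrow\;
\frac{5{,}958.55\ \text{cm}^3}{0.35} \approx 17{,}024.43\ \text{cm}^3
\;\Rightarrow\;
\sqrt[3]{17{,}024.43\ \text{cm}^3} \approx 25.73\ \text{cm}
\]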

That’s a pretty tiny treasure chest.

At least, tiny in comparison to what you usually see in a pirate movie.

Now, I could be generous with my packing efficiency. Maybe it’s far less than 35%, or it could be that the chest isn’t packed to the top with gold, or both.

If you had that 90cm x 60cm x 60cm chest and limited yourself to the 115kg weight, that’d put the packing efficiency of the doubloons at closer to 2% (5,958.55cc of actual gold in a 324,000cc chest); or, keeping the 35% packing efficiency, it’d mean the chest is only about 5% full (17,024.43cc of coins in that same 324,000cc).

Just for fun, we can also calculate the value of such treasure. The price of gold today (as I write this) is $39,175.66 USD per kg.

  • 115kg of gold = $4,505,200.90 USD
  • 2188.62kg of gold = $85,740,632.99 USD

If the chest were full of doubloons (which, again, are actually 22k gold, not 24k), at 6.867g per doubloon you’d have…

  • 115kg of doubloons = 16,746 doubloons
  • 2188.62kg of doubloons = 318,715 doubloons

Doubloons seem to be specified by weight rather than physical size (or, at least, I didn’t see an average size listed anywhere in my two minutes of searching), so I’m not sure how big a chest holding that many doubloons would need to be. I can’t imagine it’s too far off from my original calculation.

Anyway, it was kind of fun to think about. It makes for a better movie to have the giant chest of treasure, so it’s all good.

dotnet, aspnet, azure comments edit

I got the opportunity to hit the Microsoft Build conference this year. Last time I was able to make it was 2012, so it was good to be able to get back in and see what’s new in person.

I’m going to review this from my perspective as a web / web service / REST API sort of developer. As a different person or developer, you may have picked up something different you thought was super cool that I totally missed or tuned out. So.

Usually there are some “key themes” that get pushed at Build. Back in 2012, it was all Windows 8 applications. This year it was:

  • Internet of Things
  • Office and Cortana Integration
  • Cross-platform Applications
  • Microservices and Bots

Keeping in mind my status as a web developer, the microservice/bot stuff was the most interesting thing to me. I don’t work with hardware, so IoT is neat but not valuable. I don’t really need to integrate with Office, and Cortana isn’t web-based. I can maybe see doing some cross-platform stuff for mobile apps that talk to my REST APIs, but this was largely outside the scope of what I do, too.

I was fairly disappointed by the keynotes. They usually have some big reveal in the keynotes, and day one left me wanting - something about 22 new machine learning APIs being released that I won’t use. Day two, the big reveal for me was free Xamarin for everyone. Again, cross-platform dev isn’t my thing, but still, that’s pretty cool.

The sessions were impossible to get into. In 2012 they hosted the conference in Redmond on the Microsoft campus and I got into every session I was interested in. Since then they’ve hosted it in San Francisco at the Moscone Center. I can’t speak for other years, but this year you just couldn’t get into the sessions. If you weren’t lined up half an hour early to get in, forget it - there weren’t enough seats. In three days, I got into a total of three sessions. I don’t feel like I should have to catch up on the conference I paid to attend by watching videos of the sessions I wanted to see.

The schedule wasn’t public very early. I normally like to check out the web site and figure out which sessions I want to see when I get there. They didn’t release the speaker schedule (to my knowledge) until the day before the conference. I may well have canceled my reservation had I known the list of topics ahead of time. Maybe that’s why they didn’t release it.

There were great code labs. Something they didn’t have as much of in previous years was interactive labs where you could learn new tech. They did a really good job of this, with several physical labs with hardware all set up so you could try stuff out. This was super valuable and the majority of my conference takeaways came from these labs. In particular, I finally got a good feel for Docker by doing this lab.

There was no hardware giveaway this year which makes me wonder why the price was so high. I get that there are big parties and so on, but I would rather the price go down or there be some hardware than just keep the price cranked up. That said, I did come out with a Raspberry Pi 2 Azure IoT starter kit, so I can at least experiment with some of the IoT things they announced. Who knows? Maybe I’ll turn into an IoT aficionado.

There was a pitifully small amount of information about .NET Core. .NET Core and ASP.NET Core are on the top of my mind lately. Most of my current projects, including Autofac, are working through the challenges of RC1 and getting to RC2. There were something like three sessions total on .NET Core, most of which was just intro information. Any target dates on RC2? What’s the status on dotnet CLI? Honestly, I was hoping the big keynote announcement would be the .NET Core RC2 release. No such luck.

Access to the actual product teams was awesome. This almost (but not quite) makes up for the sessions being full. The ability to talk directly to various product team members for things like Visual Studio Online, ASP.NET Core, NuGet, Visual Studio, and Azure offerings was fantastic. It can be so hard sometimes to get questions answered or get the straight scoop on what’s going on with a project - cutting through the red tape and just talking to people is the perfect answer to that.

There was a big to-do around HoloLens - everything from conceptual demos to a full demo of walking on Mars. The lines for it were ridiculous, and I didn’t get a chance to try it myself; a couple of colleagues tried it and said it wasn’t as mind-blowing as it was promoted to be.

Nutshell:

  • Logistics: Not good. If you sell out in five minutes and don’t have enough seats for sessions, that’s not cool.
  • Topics: Not good. I get there’s a focus on a certain subset of topics, but I can usually find something cool I’m excited about. Not this time.
  • Educational Value: OK. I didn’t get much from sessions but the labs and the on-hand staff were great.
  • Networking Value: Good. I don’t normally “network” with people in the whole “sales” context, but being able to meet up with people from different vendors and product teams and speak face to face was a valuable thing.

net comments edit

I’ve been working a bit with Serilog and ASP.NET Core lately. In both cases, there are constructs that use CallContext to store data across an asynchronous flow. For Serilog, it’s the LogContext class; for ASP.NET Core it’s the HttpContextAccessor.

Running tests, I’ve noticed some inconsistent behavior depending on how I set up the test fakes. For example, when testing some middleware that modifies the Serilog LogContext, I might set it up like this:

var mw = new SomeMiddleware(ctx => Task.FromResult(0));

Note the next RequestDelegate I set up is just a Task.FromResult call because I don’t really care what’s going on in there - the point is to see if the LogContext is changed after the middleware executes.

Unfortunately, what I’ve found is that the static Task methods, like Task.FromResult and Task.Delay, don’t behave consistently with respect to using CallContext to store data across async calls.

To illustrate the point, I’ve put together a small set of unit tests here:

using System.Runtime.Remoting.Messaging;
using System.Threading.Tasks;
using Xunit;

public class CallContextTest
{
  [Fact]
  public void SimpleCallWithoutAsync()
  {
    var value = new object();
    SetCallContextData(value);
    Assert.Same(value, GetCallContextData());
  }

  [Fact]
  public async void AsyncMethodCallsTaskMethod()
  {
    var value = new object();
    await NoOpTaskMethod(value);
    Assert.Same(value, GetCallContextData());
  }

  [Fact]
  public async void AsyncMethodCallsAsyncFromResultMethod()
  {
    var value = new object();
    await NoOpAsyncMethodFromResult(value);

    // THIS FAILS - the call context data
    // will come back as null.
    Assert.Same(value, GetCallContextData());
  }

  private static object GetCallContextData()
  {
    return CallContext.LogicalGetData("testdata");
  }

  private static void SetCallContextData(object value)
  {
    CallContext.LogicalSetData("testdata", value);
  }

  /*
   * Note the difference between these two methods:
   * One _awaits_ the Task.FromResult, one returns it directly.
   * This could also be Task.Delay.
   */

  private async Task NoOpAsyncMethodFromResult(object value)
  {
    // Using this one will cause the CallContext
    // data to be lost.
    SetCallContextData(value);
    await Task.FromResult(0);
  }

  private Task NoOpTaskMethod(object value)
  {
    SetCallContextData(value);
    return Task.FromResult(0);
  }
}

As you can see, changing from return Task.FromResult(0) in a non-async method to await Task.FromResult(0) in an async method suddenly breaks things. No amount of configuration I could find fixes it. From what I can tell, the logical call context is copy-on-write: when an async method sets data, it gets its own copy of the context, so the change flows down into methods it calls but never back up to the caller.

StackOverflow has related questions and there are forum posts on similar topics, but this is the first time this has really bitten me.

I gather this is why AsyncLocal<T> exists, which means maybe I should look into that a bit deeper.
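
I haven’t dug into it deeply yet, but here’s a rough sketch (mine, not from any of the code above - the class and field names are made up) of how AsyncLocal<T> behaves in the same kind of scenario. Note the value still only flows one way, from the caller down into awaited methods, never back out:

using System;
using System.Threading;
using System.Threading.Tasks;

public static class AsyncLocalSketch
{
    // AsyncLocal<T> plays the role CallContext.LogicalGetData/LogicalSetData
    // played in the tests above, and it exists in .NET Core.
    private static readonly AsyncLocal<string> _testData = new AsyncLocal<string>();

    public static async Task Main()
    {
        _testData.Value = "set in caller";

        // A value set before an await survives the await in the same method.
        await Task.FromResult(0);
        Console.WriteLine(_testData.Value); // "set in caller"

        // A value set inside an awaited async method does NOT flow back up
        // to the caller - the flow is one-way, just like the logical call
        // context in the tests above.
        await SetInsideAsync();
        Console.WriteLine(_testData.Value); // still "set in caller"
    }

    private static async Task SetInsideAsync()
    {
        _testData.Value = "set in child";
        await Task.FromResult(0);
        Console.WriteLine(_testData.Value); // "set in child" here
    }
}

So switching to AsyncLocal<T> wouldn’t change the one-way behavior my failing test shows, but as far as I can tell it’s the construct that’s available going forward (CallContext isn’t part of .NET Core).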

personal, culture comments edit

There has been a lot of push lately for people to learn to code. From Hour of Code to the President of the United States pushing for more coders, the movement towards everyone coding is on.

What gets lost in the hype, drowned out by the fervor of people everywhere jamming keys on keyboards, is that simply being able to code is not software development.

OK, sure, technically speaking, when you write code that executes a task, you have just developed a piece of software. Also, technically speaking, when you fumble out Chopsticks on a keyboard while walking through Costco, you just played the piano. That doesn’t make you a pianist any more than taking an hour to learn to code makes you a software developer.

Here’s where my unpopular opinion comes out. Here’s where I call out the elephant in the room, and the politically-correct majority gasps at how I can be so discouraging to these folks learning to code.

Software development is an art, not a science.

Not everyone can be a software developer in the same way not everyone can be a pianist, a painter, or a sculptor. Anyone can learn to play the piano reasonably well; anyone can learn to paint or sculpt passably. That doesn’t mean just anyone can make a living doing these things.

It’s been said that if you spend 10,000 hours practicing a task you can become great at it. 10,000 hours is basically five years of a full-time job. So, ostensibly, if you spent five years coding full-time, you’d be a developer.

However, we’ve all heard that argument about experience: Have you had 20 years of experience? Or one year of experience 20 times? Does spending 10,000 hours coding make you a developer? Or does it just mean you spent a lot of time coding?

Regardless of your time in any field, you have probably run across both of these people - the ones who really have 20 years’ experience, and the ones who have been working for 20 years yet leave you wondering how they’ve advanced so far in their careers.

I say that to be a good developer - or a good artist - you need three things: skills, aptitude, and passion.

Skills are the rote abilities you learn when you start with that Hour of Code or first take a class on coding. Pretty much anyone can learn a certain level of skill in nearly any field. It’s learned ability, and it takes brainpower and dedication.

Aptitude is a fuzzier quality meaning your natural ability to do something. This is where the “art” part of development starts coming in. You may have learned the skills to code, but do you have any sort of natural ability to perform those skills?

Passion is your enthusiasm - in this case, the strong desire to execute the skills you have and continue to improve on them. This is also part of the “art” of development. You might be really good at jamming out code, but if you don’t like doing it you probably won’t come up with the best solutions to the problems with which you’re faced.

Without all three, you may be able to code but you won’t really be a developer.

A personal anecdote to help make this a bit more concrete: When I went to college, I told my advisors that I really wanted to be a 3D graphics animator/modeler. My dream job was (and still kind of is) working for Industrial Light and Magic on special effects. As a college kid, I didn’t know any better, so when the advisors said I should get a Computer Science degree, I did. Only later did I find out that wouldn’t get me into ILM or Pixar. Why? In their opinion (at the time, in my rejection letters), “you can teach computer science to an artist but you can’t teach art to a computer scientist.”

The first interesting thing I find there is that, at least at the time, the thought there was that art “isn’t teachable.” For the most part, I agree - without the skills, aptitude, and passion for art, you’re not going to be a really great artist.

The more interesting thing I find is the lack of recognition that solving computer science problems, in itself, is an art.

If you’ve dived into code, you’re sure to have seen this, though maybe you didn’t realize it.

  • Have you ever seen a really tough problem solved in an amazingly elegant way that you’d never have thought of yourself? What about the converse - a really tough problem solved in such a brute force manner that you can’t imagine why that’s good?
  • Have you ever picked up someone else’s code and found that it’s entirely unreadable? If you hand someone else your code, can they make heads or tails of it? What about code that was so clearly written you didn’t even need any comments to understand how it worked?
  • Have you ever seen code that’s so deep and unnecessarily complicated that if anything went wrong with it you could never fix it? What about code that’s so clear you could easily fix anything with it if a problem was discovered?

We’ve all seen this stuff. We’ve all written this stuff. I know I have… and still do. Sometimes we even laugh about it.

The important part is that those three factors - skill, aptitude, and passion - work together to improve us as developers.

I don’t laugh at a beginner’s code because their skills aren’t there yet. However, their aptitude and passion may help to motivate them to raise their skill level, which will make them overall better at what they do.

The art of software development isn’t about the quantity of code churned out, it’s about quality. It’s about constant improvement. It’s about change. These are the unquantifiable things that separate the coders from the developers.

Every artist constantly improves. I’m constantly improving, and I hope you are, too. It’s the artistic aspect of software development that drives us to do so, to solve the problems we’re faced with. Don’t just be a software developer, be a software artist. And be the best artist you can be.