csharp, javascript

I saw a Twitter thread the other day that got me thinking:

One of the responses hit home for me:

Warning: I’m going off on a bit of a tear here, and I color a little outside the lines of the argument. I’m having a tough time trying to convey a lot of frustration I’ve had recently with Go and Spring Boot / Java, and reading about folks loving the removal of boilerplate as a feature is touching on a pain point.

My daily work is C# and TypeScript. However, I sometimes also work in Terraform and the related SDKs, which are all in Go. So… I do have to use Go. Sometimes, but not often. When I do, I, too, find that I need to relearn nearly from the ground up. I also work, very occasionally, with Java, mostly apps based on Spring. Same thing there - I see the stuff, I sorta figure out what I need to, but if I come back to it a month later, I have no idea what’s going on.

I’m trying to figure out why that is. Like, if I have to get into a Python program and figure something out or add to it, I don’t have that feeling of being instantly just “lost.” I totally feel that with Go.

I think this part hints at it: “I … have no tolerance for excessive boilerplate.” I’m curious what, exactly, that means. I can guess, having messed around with Go - the convention-over-configuration for project structure, for example.

This is a lot of what I see Spring Boot in Java trying to address. Removing boilerplate. Convention over configuration. Auto-wireup. Just make it happen - code less, get more done.

But… if you’re removing boilerplate, you have to supplement that with easy-to-locate, comprehensive documentation that explains how to get things done.

The command go test runs your unit tests. Cool. What are the parameters you can pass to that? Go search for “go test parameters.” I’ll wait. You know what the first several results are? The testing package. Doesn’t tell you anything. OK, cool, here’s a hint - just run go and you can start tracing down the help stack. Eventually you’ll get to go help testflag. Hmmm. Seems like a lot of references to GOMAXPROCS in there. What does that mean? Back to searching…

Here’s another one - I wanted to turn up the log level on a Spring Boot app that was in a Docker container. This seems like it should be easy. There is no reasonable set of search terms that will get you to the point where you just see a clear explanation that setting -Dlogging.level.root=TRACE or setting LOGGING_LEVEL_ROOT in the environment is the thing you want. I’ll save you the trouble, it’s not there. It’s just layers upon layers of abstractions in an effort to remove boilerplate, but you have to know what’s being abstracted in order to understand how to work adequately with the abstraction.
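For what it’s worth, once you do know the magic words, the fix itself is tiny. For example, a docker-compose fragment like this (the service and image names are made up for illustration) sets the root log level through the environment:

```yaml
# Illustrative compose fragment - service/image names are hypothetical.
services:
  my-spring-app:
    image: example/my-spring-app:latest
    environment:
      # Same effect as passing -Dlogging.level.root=TRACE to the JVM
      - LOGGING_LEVEL_ROOT=TRACE
```

Two lines of YAML. The pain is entirely in discovering that LOGGING_LEVEL_ROOT is the thing to set.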

This seems to be a sort of conflicting goal in the tweet - the person doesn’t have a “tolerance for excessive boilerplate” but also doesn’t want “overabstracted APIs.” By definition, I think, removing the boilerplate necessarily implies there’s some abstraction bridging that gap.

Look at the C# 9 feature of “top-level programs” - that is, a program that doesn’t require a Main() method. Seems like nice boilerplate removal, right? Except, read that bit about args - “args is available as a ‘magic’ parameter.”


I’m not sure writing a class and a method declaration as an explicit entry point for my program was really the stumbling block for getting things done. And what’s the debugging story? When you have to figure out where the “magic parameter” comes from… how do you do that?

I think that’s the crux of my whole issue here. I’m not a fan of just throwing away keystrokes (though you’d be hard pressed to realize that from this rant), but there’s gotta be a balance between “fewer keystrokes,” “ease of use,” and “maintainability.” If I need to spend a year learning everything that sits under Spring Boot so I can understand how to change the log levels in an app, that’s not maintainable. It might save keystrokes, it might be easy to use ‘if you know,’ but… if you don’t?

I wonder if this contributes to the polarization of people working with languages. It becomes harder and harder to be that polyglot programmer because the stack you have to know for each individual language just grows. Eventually it’s not worth trying to span all the languages because you aren’t getting anything done. So you get the C# fans, or the Go fans, or the Python fans, or the JavaScript fans, and they all love their individual languages and ecosystems, but only because they spend all day every day in there. They know the right search terms to plug in, they know the stack and why the boilerplate was removed. When they switch to something else (as it is with me), it’s a “somebody moved my cheese” situation and the tolerance for pain is far lower than the desire to just get back to being productive.

ndepend, net, vs

I love NDepend and I’ve been using it for a long time. It’s a great way to get a big-picture view of your application code whilst still being able to drill in for good details. However, I have to say, the latest version - v2020.1 - has the coolest dependency graph functionality. It’s a total overhaul of the feature and I think this is going to be my new go-to view in reports.

First, it’s worth checking out the video that walks you through how to use it. The help videos are great and super valuable.

I watched this and had to try it out. I went and grabbed the Orchard Core codebase which is one solution with 156 projects in it, at the time of this writing.

I got the new version of NDepend and installed the VS extension. It’s interesting to note there’s still a VS 2010 version of the extension available. Are you still using VS 2010? Please upgrade. Please. Also stop using IE.

Install NDepend extension for VS

I loaded up the Orchard Core solution, fired up a new NDepend project for it, and ran the analysis. (If you don’t know how to do this, there is so much help on it, just waiting for you.)

After that, I dived right into the new dependency graph. And… wow. Orchard Core is big. Here’s the initial eye chart you get, but don’t be overwhelmed - it’s a high-level view, right, like looking at a map of the world. I didn’t even bother letting you click to zoom in because it’s just too much.

The Orchard Core map of the world, in dependency graph form

I decided to start by looking at things the main OrchardCore.Cms.Web application references, to get a more specific picture of what the app is doing. I went to Solution Explorer and dragged the OrchardCore.Cms.Web project right into the Dependency Graph. This highlighted the project on the graph so I could zoom in.

OrchardCore.Cms.Web dependency graph

Looking at that, I noticed the red double-ended arrows under OrchardCore.DisplayManagement.Liquid. Hmmm. Let’s zoom in and check that out.

OrchardCore.DisplayManagement.Liquid dependency graph

Hmmm. There are three codependencies there, but let’s just pick one to figure out. Seems the OrchardCore.DisplayManagement.Liquid namespace uses stuff from OrchardCore.DisplayManagement.Liquid.Tags, but the Liquid.Tags namespace also uses stuff from the parent Liquid namespace. What exactly is that?

Easy enough to find out. Double-click on that double-arrow or right-click and select “Build a Graph made of Code Elements involved in this dependency.”

Dive into the OrchardCore.DisplayManagement.Liquid codependency graph

Whoa! Another really tall graph. You can see OrchardCore.DisplayManagement.Liquid.LiquidViewTemplate calls a bunch of stuff in the OrchardCore.DisplayManagement.Liquid.Tags namespace… but… what’s that little arrow at the bottom being called by OrchardCore.DisplayManagement.Liquid.Tags.HelperStatement.WriteToAsync?

OrchardCore.DisplayManagement.Liquid codependency graph

That looks like the culprit. Let’s just zoom in.

OrchardCore.DisplayManagement.Liquid.ViewBufferTextWriterContent is the culprit!

Aha! There is one class being referenced from the Liquid.Tags namespace back to the Liquid namespace - OrchardCore.DisplayManagement.Liquid.ViewBufferTextWriterContent. That’s something that may need some refactoring.

All done here? Click the “application map” button to get back to the top level world map view.

The application map button

That’s a really simple example showing how easy it is to start at the world view of things and explore your code. There’s so much more to see, too. You can…

  • Change the clustering and grouping display to get as high or low level as you want - maybe in simpler apps you don’t want to group as much but in more complex apps you want to be more specific about the grouping.
  • Show or hide third-party code - by default it’s hidden, but have you ever wondered how many dependencies you have on something like log4net?
  • Run a code query using CQL, put your cursor over a result, and see the corresponding item(s) highlighted in the dependency graph.
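As a tiny example of that last bullet, a CQLinq query along these lines (a sketch - the 30-line threshold is arbitrary) gives you a result list, and putting the cursor over each result lights it up on the graph:

```csharp
// CQLinq sketch: list larger methods to go inspect on the dependency graph
from m in Methods
where m.NbLinesOfCode > 30
orderby m.NbLinesOfCode descending
select m
```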

Truly, check out the video, it’s six minutes of your time well spent.

I’m going to go start loading up some source from some other projects (things I maybe can’t provide screen shots of?) and see if there are some easy refactorings I can find to improve them.

azure, git

I use Azure DevOps wikis a lot and I love me some PlantUML diagrams - they’re far easier to maintain than dragging lines and boxes around.

Unfortunately, Azure DevOps wiki doesn’t support PlantUML. There’s Mermaid.js support but it’s a pretty old version that doesn’t support newer diagram types so it’s very limited. They’re being very slow to update to the latest Mermaid.js version, too, so it kind of leaves you stuck. Finally, it doesn’t seem like there’s any traction on getting PlantUML into Azure DevOps, so… we have to bridge that gap.

I bridged it by creating an automatic image generation script for PlantUML files. If you’re super anxious, here’s the code. Otherwise, let me explain how it works.

First, I made sure my wiki was published from a Git repository. I need to be able to access the files.

I used a combination of node-plantuml for generating PNG files from PlantUML diagrams along with watch to notify me of filesystem changes.

Once that script is running, I can create a .puml file with my diagram. Let’s call it Simple-Diagram.puml:

@startuml
Bob->Alice : hello
@enduml

When I save that file, the script will see the new .puml file and kick off the image generator. It will generate a .png file with the same name as the .puml (so Simple-Diagram.png in this case). As the .puml file changes the generated image will update. If you delete the .puml file, the image will also be removed.

Simple diagram example from PlantUML

Now in your wiki page, you just include the image.

![PlantUML Diagram](Simple-Diagram.png)

The drawback to this approach is that you have to render these on your editing client - it’s not something that happens via a build. Since I didn’t want a build generating images and committing them back to the repo anyway, I don’t mind that too much; you can look at integrating it into a build if you like.

The benefit is that it doesn’t require a PlantUML server, and it doesn’t require you to run things in a container to get it working… it just works. Now, I think that under the covers the node-plantuml module runs a PlantUML .jar file to do the rendering, but I’m fine with that.

The editing experience is pretty decent. Using a Markdown preview extension, you can see the diagram update in real-time.

VS Code and PlantUML

I have an example repo here with everything wired up! It has the watcher script, VS Code integration, the whole thing. You could just take that and add it to any existing repo with an Azure DevOps wiki and you’re off to the races.

azure, build, javascript

I’m in the process of creating some custom pipeline tasks for Azure DevOps build and release. I’ve hit some gotchas that hopefully I (and you!) can avoid. Some of these are undocumented, some are documented but maybe not so easy to find.

Your Task Must Be In a Folder

You can’t put your task.json in the root of the repo. Convention assumes your task lives in a folder with the same name as the task, and Azure DevOps won’t be able to find the task in the extension if it’s in the root. This layout does not work:


+- task.json
+- vss-extension.json

I messed with it for a really long time - you really just need that task folder. They show it that way in the tutorial, but they never explain its significance. This layout does work:


+- YourTaskName/
|  +- task.json
+- vss-extension.json

You Can Have Multiple Task Versions in One Extension

You may have seen tasks show up like NuGetInstaller@0 and NuGetInstaller@1 in Azure DevOps. You can create a versioned task using a special folder structure where the name of the task is at the top and each version is below:

+- YourTaskName/
|  +- YourTaskNameV0/
|     +- task.json
|  +- YourTaskNameV1/
|     +- task.json
+- vss-extension.json

This repo has a nice example showing it in action.

Don’t Follow the Official Azure Pipeline Tasks Repo

The official repo with the Azure DevOps pipeline tasks is a great place to learn how to use the task SDK and write a good task… but don’t follow their pattern for your repo layout or build. Their tasks don’t get packaged as a VSIX or deployed as an extension the way your custom tasks will, so looking for hints on packaging will lead you down a pretty crazy path.

This repo is a bit of a better example, with some tasks that… help you build task extensions. It’s a richer sample of how to build and package a task extension.

You Have Two Versions to Change

In a JavaScript/TypeScript-based project, there are like four different files that have versions:

  • The root package.json
  • The VSIX manifest vss-extension.json
  • The task-level package.json
  • The task-level task.json

The only versions that matter in Azure DevOps are the VSIX manifest vss-extension.json and the task-level task.json. The Node package.json versions don’t matter.

In order to get a new version of your VSIX published, the vss-extension.json version must change. Even if the VSIX fails validation on the Marketplace, that version is considered “published” and you can’t retry without updating the version.

Azure DevOps appears to cache versions of your tasks, too, so if you publish a VSIX and forget to update your task.json version, your build may not get the latest version of the task. You need to increment that task.json version for your new task to be used.
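So a release really means bumping two files in lockstep, and the version fields have different shapes in each. Something like this (ids and numbers are illustrative):

vss-extension.json - the VSIX-level version is a simple string:

```json
{
  "manifestVersion": 1,
  "id": "my-extension",
  "version": "1.0.1"
}
```

task.json - the task-level version is an object:

```json
{
  "id": "00000000-0000-0000-0000-000000000000",
  "name": "YourTaskName",
  "version": {
    "Major": 1,
    "Minor": 0,
    "Patch": 1
  }
}
```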

You’re Limited to 50MB

Your VSIX package must be less than 50MB in size or the Visual Studio Marketplace will reject it.

This is important to know, because it combines with…

Each Task Gets Its Own node_modules

Visual Studio Marketplace extensions pre-date the loader mechanism so your extension, as a package, isn’t actually used as an atomic entity. Let’s say you have two tasks in an extension:

+- FirstTask/
|  +- node_modules/
|  +- package.json
|  +- task.json
+- SecondTask/
|  +- node_modules/
|  +- package.json
|  +- task.json
+- node_modules/
+- package.json
+- vss-extension.json

Your build/package process might have some devDependencies like TypeScript in the root package.json and each task might have its own set of dependencies in its child package.json.

At some point, you might see that both tasks need mostly the same stuff and think to move the dependency up to the top level - Node module resolution will still find it, TypeScript will still compile things fine, all is well on your local box.

Don’t do that. Keep all the dependencies for a task at the task level. It’s OK to share devDependencies; don’t share dependencies.

The reason is that, even if Node module resolution works on your dev box and build machine, when the Azure DevOps agent runs the task, the “root” is wherever that task.json file is. No modules will be found above there.
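In practice, that means each task-level package.json carries its own copy of the runtime dependencies - something like this (version numbers are illustrative):

```json
{
  "name": "your-task-name",
  "version": "1.0.0",
  "dependencies": {
    "azure-pipelines-task-lib": "^2.9.3"
  },
  "devDependencies": {}
}
```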

What that means is you’re going to have a lot of duplication across tasks (and task versions). At a minimum, you know every task requires the Azure Pipeline Task SDK, and a pretty barebones “Hello World” level task packs in at around 1MB in the VSIX if you’re not being careful.

This means you can pretty easily hit that 50MB limit if you have a few tasks in a single extension.

It also means if you have some shared code, like a “common” library, it can get a little tricky. You can’t really resolve stuff outside that task folder.

You might think “I can Webpack it, right?” Nope.

You Can’t Webpack Build Tasks

I found this one out the hard way. It may have been in the past this was possible, but as of right now the localization functions in the Azure Pipeline Task SDK are hard-tied to the filesystem. If the SDK needs to display an error message, it goes to look up its localized message data which requires several .resjson files in the same folder as the SDK module. Failing that, it tries some fallback… and eventually you get a stack overflow. The Azure Pipelines Tool SDK also has localized stuff, so even if you figured out how to move all the Task SDK localization files to the right spot, you’d also have to merge in the localization files from other libraries.

The best you can do is make use of npm install --production and/or npm prune --production to reduce the node_modules folder as much as possible. You could also selectively delete files, like you could remove all the *.ts files from the node_modules folder. A few of these tricks can save a lot of space.

It’s Best to Have One Task Per Repo Per Extension

All of the size restrictions, complexity around trimming node_modules, and so on means that it’s really going to make your life easier if you stick to one task per extension. Further, it’ll be even simpler if you keep that in its own repo. It could be a versioned task, but one VSIX, one task [with all its versions], one repo.

  • You won’t exceed the 50MB size limit.
  • You can centralize/automate the versioning - the VSIX vss-extension.json and the task task.json versions can stay in sync to make it easier to track and manage.
  • Your build will generally be less complex. You don’t have to recurse into different tasks/versions to build things, repo size will be smaller, fewer moving pieces.
  • VS Code integration is easier. Having a bunch of different tasks with their own tests and potentially different versions of Mocha (or whatever) all over… it makes debugging and running tests harder.
  • Shared code will have to be published as a Node module, possibly on an internal feed, so it can be shipped along with the extension/task.
  • GitHub Actions require one task per repo.

Wait, what? GitHub Actions? We’re talking about Azure DevOps Pipelines.

With Microsoft’s acquisition of GitHub, more and more integration is happening between GitHub and Azure DevOps. You can trigger Azure Pipelines from GitHub Actions and a lot of work is going into GitHub Actions. Some of the Microsoft tasks are looking at ways to share logic between both pipeline types - write once, use both places. Having one task per repo will make it easier to support this sort of functionality without having to refactor or pull your extension apart.