net, aspnet

Here’s the situation:

  • I have a .NET Core / ASP.NET Core (DNX) web app. (Currently it’s an RC1 app.)
  • When I start it in Visual Studio, I get IIS Express listening for requests and handing off to DNX.
  • When I start the app from a command line, I want the same experience as VS - IIS Express listening and handing off to DNX.

Now, I know I can just run dnx web and get Kestrel working from a simple self-host perspective. I really want IIS Express here. Searching around, I’m not the only one who does, though everyone’s reasons are different.

Since the change to the IIS hosting model, you can’t really do what the ASP.NET Music Store sample used to do - copy AspNet.Loader.dll to your bin folder and have magic happen when you start IIS Express.

When Visual Studio starts up your application, it actually creates an all-new applicationhost.config file with some special entries that allow things to work. I’m going to tell you how to update your per-user IIS Express applicationhost.config file so things can work outside VS just like they do inside.

There are three pieces to this:

  1. Update your applicationhost.config (one time) to add the httpPlatformHandler module so IIS Express can “proxy” to DNX.
  2. Use appcmd.exe to configure your applications in IIS Express.
  3. Set environment variables and start IIS Express with the application names you configured via appcmd.exe.

Let’s walk through each step.

applicationhost.config Updates

Before you can host DNX apps in IIS Express, you need to update your default IIS Express applicationhost.config to know about the httpPlatformHandler module that DNX uses to start up its child process.

You only have to do this one time. Once you have it in place, you’re good to go and can just configure your apps as needed.

To update the applicationhost.config file I used the XML transform mechanism you see in web.config transforms - those web.Debug.config and web.Release.config deals. However, I didn’t want to go through MSBuild for it so I did it in PowerShell.

First, save this file as applicationhost.dnx.xml - this is the set of transforms for applicationhost.config that the PowerShell script will use.

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
    <configSections>
        <sectionGroup name="system.webServer"
                      xdt:Locator="Match(name)">
            <section name="httpPlatform"
                     overrideModeDefault="Allow"
                     xdt:Locator="Match(name)"
                     xdt:Transform="InsertIfMissing" />
        </sectionGroup>
    </configSections>
    <location path=""
              xdt:Locator="Match(path)">
        <system.webServer>
            <modules>
                <add name="httpPlatformHandler"
                     xdt:Locator="Match(name)"
                     xdt:Transform="InsertIfMissing" />
            </modules>
        </system.webServer>
    </location>
    <system.webServer>
        <globalModules>
            <add name="httpPlatformHandler"
                 image="C:\Program Files (x86)\Microsoft Web Tools\HttpPlatformHandler\HttpPlatformHandler.dll"
                 xdt:Locator="Match(name)"
                 xdt:Transform="InsertIfMissing" />
        </globalModules>
    </system.webServer>
</configuration>

I have it structured so you can run it over and over without corrupting the configuration - so if you forget and accidentally run the transform twice, don’t worry, it’s cool.

Here’s the PowerShell script you’ll use to run the transform. Save this as Merge.ps1 in the same folder as applicationhost.dnx.xml:

function script:Merge-XmlConfigurationTransform
{
    [CmdletBinding()]
    Param(
        [Parameter(Mandatory=$True)]
        [ValidateNotNullOrEmpty()]
        [String]
        $SourceFile,

        [Parameter(Mandatory=$True)]
        [ValidateNotNullOrEmpty()]
        [String]
        $TransformFile,

        [Parameter(Mandatory=$True)]
        [ValidateNotNullOrEmpty()]
        [String]
        $OutputFile
    )

    Add-Type -Path "${env:ProgramFiles(x86)}\MSBuild\Microsoft\VisualStudio\v14.0\Web\Microsoft.Web.XmlTransform.dll"

    $transformableDocument = New-Object 'Microsoft.Web.XmlTransform.XmlTransformableDocument'
    $xmlTransformation = New-Object 'Microsoft.Web.XmlTransform.XmlTransformation' -ArgumentList "$TransformFile"

    try
    {
        $transformableDocument.PreserveWhitespace = $false
        $transformableDocument.Load($SourceFile) | Out-Null
        $xmlTransformation.Apply($transformableDocument) | Out-Null
        $transformableDocument.Save($OutputFile) | Out-Null
    }
    finally
    {
        $transformableDocument.Dispose();
        $xmlTransformation.Dispose();
    }
}

$script:ApplicationHostConfig = Join-Path -Path ([System.Environment]::GetFolderPath([System.Environment+SpecialFolder]::MyDocuments)) -ChildPath "IISExpress\config\applicationhost.config"
Merge-XmlConfigurationTransform -SourceFile $script:ApplicationHostConfig -TransformFile (Join-Path -Path $PSScriptRoot -ChildPath applicationhost.dnx.xml) -OutputFile "$($script:ApplicationHostConfig).tmp"
Move-Item -Path "$($script:ApplicationHostConfig).tmp" -Destination $script:ApplicationHostConfig -Force

Run that script and transform your applicationhost.config.
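That config file drives all your IIS Express sites, so if you want a safety net, back it up before running the merge. A minimal sketch, assuming the default per-user config location:

$config = Join-Path ([Environment]::GetFolderPath('MyDocuments')) 'IISExpress\config\applicationhost.config'
Copy-Item -Path $config -Destination "$config.bak"

# Then run the merge from the folder containing Merge.ps1 and applicationhost.dnx.xml:
.\Merge.ps1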

Note that the HttpPlatformHandler isn’t actually a DNX-specific thing. It’s an IIS 8+ module that can be used for any sort of proxying/process management situation. However, it doesn’t come set up by default on IIS Express so this adds it in.

Now you’re set for the next step.

Configure Apps with IIS Express

I know you can run IIS Express with a bunch of command line parameters, and if you want to do that, go for it. However, it’s a lot easier if you set the app up as an application within IIS Express so you can launch it by name.
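For reference, a one-off command-line launch (no applicationhost.config application entry needed) looks something like this - the path and port here are hypothetical:

"C:\Program Files (x86)\IIS Express\iisexpress.exe" /path:C:\some\folder\src\MyApplication\wwwroot /port:8080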

Set up applications pointing to the wwwroot folder.

A simple command to set up an application looks like this:

"C:\Program Files (x86)\IIS Express\appcmd.exe" add app /site.name:"MyApplication" /path:/ /physicalPath:C:\some\folder\src\MyApplication\wwwroot

Whether you use the command line parameters to launch every time or set up your app like this, make sure the path points to the wwwroot folder.
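You can also sanity-check what you’ve registered (or remove an application you got wrong) with appcmd:

"C:\Program Files (x86)\IIS Express\appcmd.exe" list app
"C:\Program Files (x86)\IIS Express\appcmd.exe" delete app "MyApplication/"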

Set Environment Variables and Start IIS Express

If you look at your web.config file in wwwroot you’ll see something like this:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <system.webServer>
        <handlers>
            <add name="httpPlatformHandler"
                 path="*"
                 verb="*"
                 modules="httpPlatformHandler"
                 resourceType="Unspecified" />
        </handlers>
        <httpPlatform processPath="%DNX_PATH%"
                      arguments="%DNX_ARGS%"
                      stdoutLogEnabled="false"
                      startupTimeLimit="3600" />
    </system.webServer>
</configuration>

The important bits there are the two variables DNX_PATH and DNX_ARGS.

  • DNX_PATH points to the dnx.exe executable for the runtime you want for your app.
  • DNX_ARGS are the arguments to dnx.exe, as if you were running it on a command line.

A very simple PowerShell script that will launch an IIS Express application looks like this:

$env:DNX_PATH = "$($env:USERPROFILE)\.dnx\runtimes\dnx-clr-win-x86.1.0.0-rc1-update1\bin\dnx.exe"
$env:DNX_ARGS = "-p `"C:\some\folder\src\MyApplication`" web"
Start-Process "${env:ProgramFiles(x86)}\IIS Express\iisexpress.exe" -ArgumentList "/site:MyApplication"

Obviously you’ll want to set the runtime version and paths accordingly, but this is basically the equivalent of running dnx web and having IIS Express use the site settings you configured above as the listening endpoint.

windows, azure, security

I’ve been experimenting with Azure Active Directory Domain Services (currently in preview) and it’s pretty neat. If you have a lot of VMs you’re working with, it helps quite a bit in credential management.

However, it hasn’t all been “fall-down easy.” There are a couple of gotchas I’ve hit that folks may be interested in.

##Active Directory Becomes DNS Control for the Domain

When you join an Azure VM to your domain, you have to set the network for that VM to use the Azure Active Directory as the DNS server. This results in any DNS entries for the domain - for machines on that network - only being resolved by Active Directory.

This is clearer with an example: Let’s say you own the domain mycoolapp.com and you enable Azure AD Domain Services for mycoolapp.com. You also have…

  • A VM named webserver.
  • A cloud service responding to mycoolapp.cloudapp.net that’s associated with the VM.

You join webserver to the domain. The full domain name for that machine is now webserver.mycoolapp.com. You want to expose that machine to the outside (outside the domain, outside of Azure) to serve up your new web application. It needs to respond to www.mycoolapp.com.

You can add a public DNS entry mapping www.mycoolapp.com to the mycoolapp.cloudapp.net public IP address. You can now get to www.mycoolapp.com correctly from outside your Azure domain. However, you can’t get to it from inside the domain. Why not?

You can’t because Active Directory is serving DNS inside the domain and there’s no VM named www. It doesn’t proxy external DNS records for the domain, so you’re stuck.

There is not currently a way to manage the DNS for your domain within Azure Active Directory.

Workaround: Rename the VM to match the desired external DNS entry. Which is to say, call the VM www instead of webserver. That way you can reach the same machine using the same DNS name both inside and outside the domain.
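You can watch this play out from a domain-joined VM with nslookup (the names here match the example above). Before the rename:

nslookup webserver.mycoolapp.com
nslookup www.mycoolapp.com

The first query resolves because Active Directory knows about the joined machine; the second fails inside the domain even though it works fine from outside.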

##Unable to Set User Primary Email Address

When you enable Azure AD Domain Services you get the ability to start authenticating against joined VMs using your domain credentials. However, if you try managing users with the standard Active Directory MMC snap-ins, you’ll find some things don’t work.

A key challenge is that you can’t set the primary email address field for a user. It’s totally disabled in the snap-in.

This is really painful if you are trying to manage a cloud-only domain. Domain Services sort of assumes that you’re synchronizing an on-premise AD with the cloud AD and that the workaround would be to change the user’s email address in the on-premise AD. However, if you’re trying to go cloud-only, you’re stuck. There’s no workaround for this.

##Domain Services Only Connects to a Single ASM Virtual Network

When you set up Domain Services, you have to associate it with a single virtual network (the vnet your VMs are on), and it must be an Azure Service Manager style network. If you created a vnet with Azure Resource Manager, you’re kinda stuck. If you have ARM VMs you want to join (which must be on ARM vnets), you’re kinda stuck. If you have more than one virtual network on which you want Domain Services, you’re kinda stuck.

Workaround: Join the “primary vnet” (the one associated with Domain Services) to other vnets using VPN gateways.

There is not a clear “step-by-step” guide for how to do this. You need to sort of piece together the information from the Azure articles on connecting virtual networks with VPN gateways.

##Active Directory Network Ports Need to be Opened

Just attaching the Active Directory Domain Services to your vnet and setting it as the DNS server may not be enough. Especially when you get to connecting things through VPN, you need to make sure the right ports are open through the network security group or you won’t be able to join the domain (or you may be able to join but you won’t be able to authenticate).

Here’s the list of ports required by all of Domain Services. Which is not to say you need all of them open, just that you’ll want that for reference.

I found that enabling these ports outbound for the network seemed to cover joining and authenticating against the domain. YMMV. There is no specific guidance (that I’ve found) to explain exactly what’s required.

  • LDAP: Any/389
  • LDAP SSL: TCP/636
  • DNS: Any/53
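For what it’s worth, here’s roughly what scripting one of those rules looks like with the classic (ASM) Azure PowerShell cmdlets - the NSG name and priority are made up, and you’d repeat this for each port:

Get-AzureNetworkSecurityGroup -Name "my-vnet-nsg" |
    Set-AzureNetworkSecurityRule -Name "Allow-LDAP-Out" `
        -Type Outbound -Priority 200 -Action Allow `
        -SourceAddressPrefix 'VIRTUAL_NETWORK' -SourcePortRange '*' `
        -DestinationAddressPrefix '*' -DestinationPortRange '389' -Protocol '*'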

personal, gaming, toys, xbox

This year for Christmas, Jenn and I decided to get a larger “joint gift” for each other since neither of us really needed anything. That gift ended up being an Xbox One (the Halo 5 bundle), the LEGO Dimensions starter pack, and a few expansion packs.

LEGO Dimensions Starter Pack

Never having played one of these collectible toy games before, I wasn’t entirely sure what to expect beyond similar gameplay to other LEGO video games. We like the other LEGO games so it seemed like an easy win.

LEGO Dimensions is super fun. If you like the other LEGO games, you’ll like this one.

The story is, basically, that a master bad guy is gathering up all the other bad guys from the other LEGO worlds (which come from the licensed LEGO properties like Portal, DC Comics, Lord of the Rings, and so on). Your job is to stop him from taking over these “dimensions” (each licensed property is a “dimension”) by visiting the various dimensions and saving people or gathering special artifacts.

With the starter pack you get Batman, Gandalf, and Wildstyle characters with which you can play the game. These characters will allow you to beat the main story.

So why get expansion packs?

  • There are additional dimensions you can visit that you can’t get to without characters from that dimension. For example, while the main game lets you play through a Doctor Who level, you can’t visit the other Doctor Who levels unless you buy the associated expansion pack.
  • As with the other LEGO games, you can’t unlock certain hidden areas or collectibles unless you have special skills. For example, only certain characters have the ability to destroy metal LEGO bricks. With previous LEGO games you could unlock these characters by beating levels; with LEGO Dimensions you unlock characters by buying the expansion packs.

Picking the right packs to get the best bang for your buck is hard. IGN has a good page outlining the various character abilities, which pack contains each, and some recommendations on which ones will get you the most if you’re starting fresh.

The packs Jenn and I have (after getting some for Christmas and grabbing a couple of extras) are:

  • Portal level pack
  • Back to the Future level pack
  • Emmet fun pack
  • Zane fun pack
  • Gollum fun pack
  • Eris fun pack
  • Wizard of Oz Wicked Witch fun pack
  • Doctor Who level pack
  • Unikitty fun pack

Admittedly, this is a heck of an investment in a game. We’re suckers. We know.

This particular combination of packs unlocks just about everything. There are still things we can’t get to - levels we can’t enter, a few hidden things we can’t reach - but this is a good 90%. Most of the stuff we can’t get to is because there are characters where only that one character has such-and-such ability. For example, Aquaman (for whatever reason) seems to have one or two abilities unique to him for which we’ve run across the need. Unikitty is also a character with unique abilities (which we ended up getting). I’d encourage you as you purchase packs to keep consulting the character ability matrix to determine which packs will best help you.

I have to say… There’s a huge satisfaction in flying the TARDIS around or getting the Twelfth Doctor driving around in the DeLorean. It may make that $15 or whatever worth it.

If you’re a LEGO fan anyway, the packs actually include minifigs and models that are detachable - you can play with them with other standard LEGO sets once you get tired of the video game. It’s a nice dual-purpose that other collectible games don’t provide.

Finally, it’s something fun Jenn and I can play together to do something more interactive than just watch TV. I don’t mind investing in that.

In any case, if you’re looking at one of the collectible toy games, I’d recommend LEGO Dimensions. We’re having a blast with it.

personal

It’s been a busy year, and in particular a pretty crazy last-three-months, so I’m rounding out my 2015 by finally using up my paid time off at work and effectively taking December off.

What that means is I probably won’t be seen on StackOverflow or handling Autofac issues or working on the Autofac ASP.NET 5 conversion.

I love coding, but I also have a couple of challenges if I do that on my time off:

  • I stress out. I’m not sure how other people work, but when I see questions and issues come in I feel like there’s a need I’m not addressing or a problem I need to solve, somehow, immediately right now. Even if that just serves as a backlog of things to prioritize, it’s one more thing on the list of things I’m not checking off. I want to help people and I want to provide great stuff with Autofac and the other projects I work on, but there is a non-zero amount of stress involved with that. It can pretty quickly turn from “good, motivating stress” to “bad, overwhelming stress.” It’s something I work on from a personal perspective, but taking a break from that helps me regain some focus.
  • I lose time. There are so many things I want to do that I don’t have time for. I like sewing and making physical things - something I don’t really get a chance to do in the software world. If I sit down and start coding stuff, pretty soon the day is gone and I may have made some interesting progress on a code-related project, but I just lost a day I could have addressed some of the other things I want to do. Since I code for a living (and am lucky enough to be able to get Autofac time in as part of work), I try to avoid doing much coding on my time off unless it’s helping me contribute to my other hobbies. (For example, I just got an embroidery machine - I may code to create custom embroidery patterns.)

I don’t really take vacation time during the year so I often end up in a “use it or lose it” situation come December, which works out well because there are a ton of holidays to work around anyway. Why not de-stress, unwind, and take the whole month off?

I may even get some time to outline some of the blog entries I’ve been meaning to post. I’ve been working on some cool stuff from Azure to Roslyn code analyzers, not to mention the challenges we’ve run into with Autofac/ASP.NET 5. I’ve just been slammed enough that I haven’t been able to get those out. We’ll see. I should at least start keeping a list.

halloween, costumes

It was raining again this year and that definitely took down the number of visitors. Again this year we also didn’t put out our “Halloween projector” that puts a festive image on our garage. In general, it was pretty slow all around. I took Phoenix out this year while Jenn answered the door so I got to see what was out there firsthand. Really hardly anyone out there this year.

##Trick-or-Treaters

2015: 85 trick-or-treaters.

Average Trick-or-Treaters by Time Block

The table’s also starting to get pretty wide; might have to switch it so time block goes across the top and year goes down.

Cumulative data:

| Year | 6:00p - 6:30p | 6:30p - 7:00p | 7:00p - 7:30p | 7:30p - 8:00p | 8:00p - 8:30p | Total |
| ---- | ------------- | ------------- | ------------- | ------------- | ------------- | ----- |
| 2006 | 52 | 59 | 35 | 16 | 0 | 162 |
| 2007 | 5 | 45 | 39 | 25 | 21 | 139 |
| 2008 | 14 | 71 | 82 | 45 | 25 | 237 |
| 2009 | 17 | 51 | 72 | 82 | 21 | 243 |
| 2010 | 19 | 77 | 76 | 48 | 39 | 259 |
| 2011 | 31 | 80 | 53 | 25 | 0 | 189 |
| 2013 | 28 | 72 | 113 | 80 | 5 | 298 |
| 2014 | 19 | 54 | 51 | 42 | 10 | 176 |
| 2015 | 13 | 14 | 30 | 28 | 0 | 85 |

##Costumes

My costume this year was Robin Hood. Jenn was Merida from Brave so we were both archers. Phoenix had two costumes - for trick-or-treating at Jenn’s work she was a bride with a little white dress and veil; for going out in the neighborhood she was a ninja.

The finished Robin Hood costume

Costume with the cloak closed

I posted some in-progress pictures of my costume on social media, but as part of the statistical breakdown of Halloween this year I thought it’d be interesting to dive into exactly what went into the making beyond the time and effort - the actual dollars put in.

On my costume, I made the shirt, the doublet, the pants, and the cape. I bought the boots, the tights, and the bow.

###Accessories and Props

Let’s start with the pieces I bought:

Total: $99.10

###The Shirt

The shirt is made of a gauzy fabric that was pretty hard to work with. The pattern was also not super helpful because you’d see “a step” in the pattern consisting of several actions.

Confusing shirt pattern

I did learn how to use an “even foot” (sometimes called a “walking foot”) on our sewing machine, which was a new thing for me.

Even foot on the sewing machine

  • Shirt, doublet, and pants pattern - $10.17
  • Gauze fabric - $6.59
  • Thread - $3.29
  • Buttons - $5.99
  • Interfacing - $0.62

Total: $26.66

###The Pants

I don’t have any in-progress shots of the pants being made, but they were pretty simple pants. I will say I thought I should make the largest size due to my height… but then the waist turned out pretty big so I had to do some adjustments to make them fit. Even after adjusting they were pretty big. I should probably have done more but ran out of time.

  • Shirt, doublet, and pants pattern - (included in shirt cost)
  • Black gabardine fabric - $23.73
  • Thread - $4.00
  • Buttons - $1.90
  • Eyelets - $2.39
  • Ribbon - $1.49
  • Interfacing - (I had some already for this)

Total: $33.51

###The Doublet

The doublet was interesting to make. It had a lot of pieces, but they came together really well and I learned a lot while doing it. Did you know those little “flaps” on the bottom are called “peplum?” I didn’t.

I hadn’t really done much with adding trim, so this was a learning experience. For example, this trim had a sort of “direction” or “grain” to it - if you sewed with the “grain,” it went on really smoothly. If you went against the “grain,” the trim would get all caught up on the sewing machine foot. I also found that sewing trim to the edge of a seam is really hard on thick fabric so I ended up adding a little margin between the seam and the trim.

Putting trim on the body of the doublet

These are the peplums that go around the bottom of the doublet. You can see the trim pinned to the one on the right.

Sewing peplums

Once all the peplums were done, I pinned and machine basted them in place. Getting them evenly spaced was a challenge, but it turned out well.

Pinning peplums

After the machine basting, I ran the whole thing through the serger which gave them a strong seam and trimmed off the excess. This was the first project I’d done with a serger and it’s definitely a time saver. It also makes finished seams look really professional.

Serging peplums

To cover the seam where the peplums are attached, the lining in the doublet gets hand sewn over the top. There was a lot of hand sewing in this project, which was the largest time sink.

Slip-stitching the doublet lining

Here’s the finished doublet.

The finished doublet

  • Shirt, doublet, and pants pattern - (included in shirt cost)
  • Quilted fabric (exterior) - $15.73
  • Brown broadcloth fabric (lining) - $5.99
  • Thread - $8.29
  • Eyelets - $8.28
  • Trim - $23.95
  • Leather lacing - $2.50
  • Interfacing - $6.11

Total: $70.85

###The Cape

The cape was the least complex of the things to make but took the most time due to the size. Just laying out and cutting the pattern pieces took a couple of evenings.

As you can see, I had to lay them out in our hallway.

Cutting the exterior cape pieces

I learned something while cutting the outside of the cape: The pattern was a little confusing in that the diagrams of how the pattern should be laid out were inconsistent with the notation they describe. This resulted in my cutting one of the pattern pieces backwards and Jenn being kind enough to go back to the fabric store all the way across town and get the last bit of fabric from the bolt. I was very lucky there was enough to re-cut the piece the correct way.

I used binder clips on the edges in an attempt to stop the two fabric layers from slipping around. It was mildly successful.

Cutting the cape lining

I found with the serger I had to keep close track of the tension settings to make sure the seams were sewn correctly. Depending on the thread and weight of the fabric being sewn, I had to tweak some things.

To help me remember settings, I took photos with my phone of the thread, the fabric being sewn, and the dials on the serger so I’d know exactly what was set.

Here are the settings for sewing together two layers of cape lining.

Cape lining serger settings

And the settings for attaching the lining to the cape exterior.

Serger settings for attaching lining to cape exterior

I took a shot of my whole work area while working on the cape. It gets pretty messy, especially toward the end of a project. I know where everything is, though.

You can also see I’ve got things set up so I can watch TV while I work. I got through a couple of different TV seasons on Netflix during this project.

My messy work area

One of the big learning things for me with this cape was that with a thicker fabric it’s hard to get the seams to lay flat. I ironed the junk out of that thing and all the edge seams were rounded and puffy. I had to edgestitch the seams to make sure they laid flat.

Edge stitching the cape hem

  • Green suedecloth (exterior) - $55.82
  • Gold satin (lining) - $40.46
  • Dark green taffeta (hood lining) - $5.99
  • Interfacing - (I had some already for this)
  • Thread - $11.51
  • Silver “conchos” (the metal insignias on the neck) - $13.98
  • Scotchgard (for waterproofing) - $5.99

Total: $133.75

###Total

  • Accessories and props - $99.10
  • Shirt - $26.66
  • Pants - $33.51
  • Doublet - $70.85
  • Cape - $133.75

Total: $363.87

That’s probably reasonably accurate. I know I had some coupons where I saved some money on fabric (you definitely need to be watching for coupons!) so the costs on those may be lower, but I also know I had to buy some incidentals like more sewing machine needles after I broke a couple, so it probably roughly balances out.

I get a lot of folks asking why I don’t just rent a costume. Obviously from a money and time perspective it’d be a lot cheaper to do that.

The thing is… I really like making the costume. I’m a software engineer, so generally when I “make something,” it’s entirely intangible - it’s electronic bits stored somewhere that make other bits do things. I don’t come out with something I can hold in my hands and say I did this. When I make a shirt or a costume or whatever, there’s something to be said for having a physical object and being able to point to it and be proud of the craftsmanship that went into it. It’s a satisfying feeling.

home

At home, especially after a long day, I’ve noticed my phone may be low on power even though I’d like to continue using it. All of our chargers are in rooms other than the living room where we spend most of our time and I didn’t want to move one into the living room because I didn’t want cords all over the place or a bajillion different things to plug in.

I finally figured out the answer.

##Materials

Charger, cables, and adhesive

##Assembly

Use one of the Command strips to affix the USB charger under the lip on the back of your end table. I went with Command strips because they’re reasonably strong but generally won’t ruin the finish on your table because they can be easily removed.

Plug one or more of the one-foot cables into the charger.

If you get the Sabrent USB cables I mentioned earlier, they have a little bit of Velcro on them you can use to your benefit. Put a little Velcro under a nearby area of the table and push the Velcro tie on the USB cable to the end. You can then attach the end of the cable in easy reach from your couch using the Velcro.

Charger attached to the back of the table

##Usage

The nice thing about this is it’s entirely unobtrusive. Charge your phone on the end table while you’re sitting and watching TV, but when you’re done you can drop the cable back behind the table (it’s only a foot long so it won’t drag on the ground or look messy); or if you have the Velcro you can affix the cable under the lip of the table in easy reach for the next usage.

Charging a phone

net, aspnet, build, autofac

We recently released Autofac 4.0.0-beta8-157 to NuGet to coincide with the DNX beta 8 release. As part of that update, we re-added the classic PCL target .NETPortable,Version=v4.5,Profile=Profile259 (which is portable-net45+dnxcore50+win+wpa81+wp80+MonoAndroid10+Xamarin.iOS10+MonoTouch10) because older VS versions and some project types were having trouble finding a compatible version of Autofac 4.0.0 - they didn’t rectify the dotnet target framework as a match.

If you’re not up on the dotnet target framework moniker, Oren Novotny has some great articles that help a lot:

I’m now working on a beta 8 compatible version of Autofac.Configuration. For beta 7 we’d targeted dnx451, dotnet, and net45. I figured we could just update to start using Autofac 4.0.0-beta8-157, rebuild, and call it good.

Instead, I started getting a lot of build errors when targeting the dotnet framework moniker.

Building Autofac.Configuration for .NETPlatform,Version=v5.0
  Using Project dependency Autofac.Configuration 4.0.0-beta8-1
    Source: E:\dev\opensource\Autofac\Autofac.Configuration\src\Autofac.Configuration\project.json

  Using Package dependency Autofac 4.0.0-beta8-157
    Source: C:\Users\tillig\.dnx\packages\Autofac\4.0.0-beta8-157
    File: lib\dotnet\Autofac.dll

  Using Package dependency Microsoft.Framework.Configuration 1.0.0-beta8
    Source: C:\Users\tillig\.dnx\packages\Microsoft.Framework.Configuration\1.0.0-beta8
    File: lib\dotnet\Microsoft.Framework.Configuration.dll

  Using Package dependency Microsoft.Framework.Configuration.Abstractions 1.0.0-beta8
    Source: C:\Users\tillig\.dnx\packages\Microsoft.Framework.Configuration.Abstractions\1.0.0-beta8
    File: lib\dotnet\Microsoft.Framework.Configuration.Abstractions.dll

  Using Package dependency System.Collections 4.0.11-beta-23409
    Source: C:\Users\tillig\.dnx\packages\System.Collections\4.0.11-beta-23409
    File: ref\dotnet\System.Collections.dll

  (...and some more package dependencies that got resolved, then...)

  Unable to resolve dependency fx/System.Collections

  Unable to resolve dependency fx/System.ComponentModel

  Unable to resolve dependency fx/System.Core

  (...and a lot more fx/* items unresolvable.)

This was, at best, confusing. I mean, in the same target framework, I see these two things together:

  Using Package dependency System.Collections 4.0.11-beta-23409
    Source: C:\Users\tillig\.dnx\packages\System.Collections\4.0.11-beta-23409
    File: ref\dotnet\System.Collections.dll

  Unable to resolve dependency fx/System.Collections

So it found System.Collections, but it didn’t find System.Collections. Whaaaaaaa?!

After a lot of searching (with little success) I found David Fowler’s indispensable article on troubleshooting dependency issues in ASP.NET 5. This led me to the dnu list --details command, where I saw this:

[Target framework .NETPlatform,Version=v5.0 (dotnet)]

Framework references:
  fx/System.Collections  - Unresolved
    by Package: Autofac 4.0.0-beta8-157

  fx/System.ComponentModel  - Unresolved
    by Package: Autofac 4.0.0-beta8-157

  (...and a bunch more of these...)


Package references:
* Autofac 4.0.0-beta8-157
    by Project: Autofac.Configuration 4.0.0-beta8-1

* Microsoft.Framework.Configuration 1.0.0-beta8
    by Project: Autofac.Configuration 4.0.0-beta8-1

  Microsoft.Framework.Configuration.Abstractions 1.0.0-beta8
    by Package: Microsoft.Framework.Configuration 1.0.0-beta8

  System.Collections 4.0.11-beta-23409
    by Package: Autofac 4.0.0-beta8-157...
    by Project: Autofac.Configuration 4.0.0-beta8-1

  (...and so on.)

Hold up - Autofac 4.0.0-beta8-157 needs both the framework assembly and the dependency package for System.Collections?

Looking in the generated .nuspec file for the updated core Autofac, I see:

<?xml version="1.0"?>
<package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">
  <metadata>
    <!-- ... -->
    <dependencies>
      <group targetFramework="DNX4.5.1" />
      <group targetFramework=".NETPlatform5.0">
        <dependency id="System.Collections" version="4.0.11-beta-23409" />
        <dependency id="System.Collections.Concurrent" version="4.0.11-beta-23409" />
        <!-- ... -->
      </group>
      <group targetFramework=".NETFramework4.5" />
      <group targetFramework=".NETCore4.5" />
      <group targetFramework=".NETPortable4.5-Profile259" />
    </dependencies>
    <frameworkAssemblies>
      <!-- ... -->
      <frameworkAssembly assemblyName="System.Collections" targetFramework=".NETPortable4.5-Profile259" />
      <frameworkAssembly assemblyName="System.ComponentModel" targetFramework=".NETPortable4.5-Profile259" />
      <frameworkAssembly assemblyName="System.Core" targetFramework=".NETPortable4.5-Profile259" />
      <frameworkAssembly assemblyName="System.Diagnostics.Contracts" targetFramework=".NETPortable4.5-Profile259" />
      <frameworkAssembly assemblyName="System.Diagnostics.Debug" targetFramework=".NETPortable4.5-Profile259" />
      <frameworkAssembly assemblyName="System.Diagnostics.Tools" targetFramework=".NETPortable4.5-Profile259" />
      <!-- ... -->
    </frameworkAssemblies>
  </metadata>
</package>

The list of failed fx/* dependencies is exactly the same as the list of frameworkAssembly references that target .NETPortable4.5-Profile259 in the .nuspec.

By removing the dotnet target framework moniker from Autofac.Configuration and compiling for specific targets, everything resolves correctly.
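In project.json terms, that means the frameworks section lists concrete targets rather than the dotnet moniker. A rough sketch (the exact target list for Autofac.Configuration may differ):

{
  "frameworks": {
    "net45": { },
    "dnx451": { },
    "dnxcore50": { }
  }
}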

What I originally thought was that dotnet indicated, basically, “I support what my dependencies support,” which I took to mean, “we’ll figure out the lowest common denominator of all the dependencies and that’s the set of stuff this supports.”

What dotnet appears to actually mean is, “I support the superset of everything my dependencies support.”

The reason I take that away is that the Microsoft.Framework.Configuration 1.0.0-beta8 package targets net45, dnx451, dnxcore50, and dotnet - but it doesn’t specifically support .NETPortable,Version=v4.5,Profile=Profile259. I figured Autofac.Configuration, targeting dotnet, would rectify to support the common frameworks that both core Autofac and Microsoft.Framework.Configuration support… which would mean none of the <frameworkAssembly /> references targeting .NETPortable4.5-Profile259 would need to be resolved to build Autofac.Configuration.

Since they do, apparently, need to be resolved, I have to believe dotnet implies superset rather than subset.

This appears to mostly just be a gotcha if you have a dependency that targets one of the older PCL framework profiles. If everything down the stack just targets dotnet it seems to magically work.

If you’d like to try this and see it in action, check out the Autofac.Configuration repo at 14c10b5bf6 and run the build.ps1 build script.

autofac, net

As part of DNX RC1, the Microsoft.Framework.* packages are getting renamed to Microsoft.Extensions.*.

The Autofac.Framework.DependencyInjection package was named to follow along with the pattern established by those libraries: Microsoft.Framework.DependencyInjection -> Autofac.Framework.DependencyInjection.

With the RC1 rename of the Microsoft packages, we’ll be updating the name of the Autofac package to maintain consistency: Autofac.Extensions.DependencyInjection. This will happen for Autofac as part of beta 8.

We’ll be doing the rename as part of the beta 8 drop since beta 8 appears to have been pushed out by a week and we’d like to get a jump on things. For beta 8 we’ll still refer to the old Microsoft dependency names to maintain compatibility but you’ll have a new Autofac dependency. Then when RC1 hits, you won’t have to change the Autofac dependency because it’ll already be in line.
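If you’re consuming the package, the change in your project.json is just a dependency rename - something like this, with the versions here being illustrative:

"dependencies": {
  "Autofac": "4.0.0-beta8-*",
  "Autofac.Extensions.DependencyInjection": "4.0.0-beta8-*"
}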

You can track the rename on Autofac issue #685.

powershell, vs, net

I love PowerShell, but I do a lot with the Visual Studio developer prompt and that’s still good old cmd.

Luckily, you can make your PowerShell prompt also a Visual Studio prompt.

First, go get the PowerShell Community Extensions. If you’re a PowerShell user you should probably have these already. You can either get them from the CodePlex site directly or install via Chocolatey.

Now, in your PowerShell profile (stored at %UserProfile%\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1) add the following:

Import-Module Pscx

# Add Visual Studio 2015 settings to PowerShell
Import-VisualStudioVars 140 x86

# If you'd rather use VS 2013, comment out the
# above line and use this one instead:
# Import-VisualStudioVars 2013 x86

Next time you open a PowerShell prompt, it’ll automatically load up the Visual Studio variables to also make it a Visual Studio prompt.
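A quick sanity check in a fresh prompt - if the import worked, the Visual Studio tools should resolve:

Get-Command msbuild
msbuild /version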

Take this to the next level by getting a “PowerShell Prompt Here” context menu extension and you’re set!

net, build, tfs

I’m working with some builds in Visual Studio Online which, right now, is basically Team Foundation Server 2015. As part of that, I wanted to do some custom work in my build script but only if the build was running in TFS.

The build runs using the Visual Studio Build build step. I couldn’t find any documentation for what additional/special environment variables were available, so I ran a small build script like this…

<Exec Command="set" />

…and got the environment variables. For reference, here’s what you get when you run the Visual Studio Build task on an agent with Visual Studio 2015:

agent.jobstatus=Succeeded
AGENT_BUILDDIRECTORY=C:\a\ba99c3da
AGENT_HOMEDIRECTORY=C:\LR\MMS\Services\Mms\TaskAgentProvisioner\Tools
AGENT_ID=1
AGENT_JOBNAME=Build
AGENT_MACHINENAME=TASKAGENT-0001
AGENT_NAME=Hosted Agent
AGENT_ROOTDIRECTORY=C:\a
AGENT_WORKFOLDER=C:\a
AGENT_WORKINGDIRECTORY=C:\a\SourceRootMapping\2808d0ee-383a-4503-86cd-e9c64da409e3\Job-070e8691-db73-41a3-88e3-97deaf4dd9a1
ALLUSERSPROFILE=C:\ProgramData
ANDROID_HOME=C:\java\androidsdk\android-sdk
ANDROID_NDK_HOME=C:\java\androidsdk\android-ndk-r10d
ANT_HOME=C:\java\ant\apache-ant-1.9.4
APPDATA=C:\Users\buildguest\AppData\Roaming
build.fetchtags=false
BUILDCONFIGURATION=release
BUILDPLATFORM=any cpu
BUILD_ARTIFACTSTAGINGDIRECTORY=C:\a\ba99c3da\artifacts
BUILD_BUILDID=15
BUILD_BUILDNUMBER=2015.09.17.3
BUILD_BUILDURI=vstfs:///Build/Build/15
BUILD_CONTAINERID=85945
BUILD_DEFINITIONNAME=Continuous Integration
BUILD_DEFINITIONVERSION=8
BUILD_QUEUEDBY=[DefaultCollection]\Project Collection Service Accounts
BUILD_QUEUEDBYID=a75bc823-f51a-48bc-8ec8-4d7dacaf7dc9
BUILD_REPOSITORY_CLEAN=True
BUILD_REPOSITORY_GIT_SUBMODULECHECKOUT=False
BUILD_REPOSITORY_LOCALPATH=C:\a\ba99c3da\MyProject
BUILD_REPOSITORY_NAME=MyProject
BUILD_REPOSITORY_PROVIDER=TfsGit
BUILD_REPOSITORY_URI=https://myvsoproject.visualstudio.com/DefaultCollection/_git/MyProject
BUILD_REQUESTEDFOR=Your Name Here
BUILD_REQUESTEDFORID=8c477f14-acc1-4765-b1a0-ec6cfb88740d
BUILD_SOURCEBRANCH=refs/heads/master
BUILD_SOURCEBRANCHNAME=master
BUILD_SOURCESDIRECTORY=C:\a\ba99c3da\MyProject
BUILD_SOURCESDIRECTORYHASH=ba99c3da
BUILD_SOURCEVERSION=91c1ef45e3fcc91873cd599d4a7e2e1adf15d9a5
BUILD_STAGINGDIRECTORY=C:\a\ba99c3da\staging
CommonProgramFiles=C:\Program Files (x86)\Common Files
CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files
CommonProgramW6432=C:\Program Files\Common Files
COMPUTERNAME=TASKAGENT-0001
ComSpec=C:\Windows\system32\cmd.exe
CORDOVA_CACHE=C:\cordova\cli
CORDOVA_DEFAULT_VERSION=5.1.1
CORDOVA_HOME=C:\cordova\cli\_cordova
EnableNuGetPackageRestore=True
FP_NO_HOST_CHECK=NO
GRADLE_USER_HOME=C:\java\gradle\user
GTK_BASEPATH=C:\Program Files (x86)\GtkSharp\2.12\
JAVA_HOME=C:\java\jdk\jdk1.8.0_25
LOCALAPPDATA=C:\Users\buildguest\AppData\Local
M2_HOME=C:\java\maven\apache-maven-3.2.2
MSBuildLoadMicrosoftTargetsReadOnly=true
NPM_CONFIG_CACHE=C:\NPM\Cache
NPM_CONFIG_PREFIX=C:\NPM\Modules
NUMBER_OF_PROCESSORS=2
OS=Windows_NT
Path=C:\LR\MMS\Services\Mms\TaskAgentProvisioner\Tools\agent\worker\Modules\Microsoft.TeamFoundation.DistributedTask.Task.Internal\NativeBinaries\amd64;C:\ProgramData\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\;C:\Program Files\Microsoft SQL Server\100\Tools\Binn\;C:\Program Files\Microsoft SQL Server\100\DTS\Binn\;C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\;C:\Program Files\Microsoft SQL Server\110\Tools\Binn\;C:\Program Files (x86)\Microsoft SDKs\TypeScript\1.0\;C:\Program Files\Microsoft SQL Server\120\Tools\Binn\;C:\Users\VssAdministrator\.dnx\bin;C:\Program Files\Microsoft DNX\Dnvm\;C:\Program Files (x86)\Windows Kits\10\Windows Performance Toolkit\;C:\Program Files (x86)\Microsoft Emulator Manager\1.0\;C:\Program Files (x86)\GtkSharp\2.12\bin;C:\Program Files (x86)\Microsoft SDKs\TypeScript\1.4\;C:\Program Files (x86)\Git\bin;C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\ManagementStudio\;C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\;C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\PrivateAssemblies\;C:\Program Files (x86)\Microsoft SQL Server\110\DTS\Binn\;C:\Program Files (x86)\Microsoft SQL Server\120\Tools\Binn\ManagementStudio\;C:\Program Files (x86)\Microsoft SQL Server\120\Tools\Binn\;C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\PrivateAssemblies\;C:\Program Files (x86)\Microsoft SQL Server\120\DTS\Binn\;C:\Program Files\Microsoft\Web Platform Installer\;C:\NPM\Modules;C:\Program Files\nodejs\;C:\NPM\Modules;C:\cordova;C:\java\ant\apache-ant-1.9.4\bin;
PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC;.CPL
PLUGMAN_HOME=C:\cordova\cli\_plugman
PROCESSOR_ARCHITECTURE=x86
PROCESSOR_ARCHITEW6432=AMD64
PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 45 Stepping 7, GenuineIntel
PROCESSOR_LEVEL=6
PROCESSOR_REVISION=2d07
ProgramData=C:\ProgramData
ProgramFiles=C:\Program Files (x86)
ProgramFiles(x86)=C:\Program Files (x86)
ProgramW6432=C:\Program Files
PROMPT=$P$G
PSModulePath=C:\Users\buildguest\Documents\WindowsPowerShell\Modules;C:\Program Files\WindowsPowerShell\Modules;C:\Windows\system32\WindowsPowerShell\v1.0\Modules\;C:\Program Files\SharePoint Online Management Shell\;C:\Program Files (x86)\Microsoft SQL Server\110\Tools\PowerShell\Modules\;C:\Program Files (x86)\Microsoft SQL Server\120\Tools\PowerShell\Modules\;C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ServiceManagement;C:\LR\MMS\Services\Mms\TaskAgentProvisioner\Tools\agent\worker\Modules
PUBLIC=C:\Users\Public
SYSTEM=mstf
SystemDrive=C:
SystemRoot=C:\Windows
SYSTEM_ARTIFACTSDIRECTORY=C:\a\ba99c3da
SYSTEM_COLLECTIONID=2808d0ee-383a-4503-86cd-e9c64da409e3
SYSTEM_DEFAULTWORKINGDIRECTORY=C:\a\ba99c3da\MyProject
SYSTEM_DEFINITIONID=1
SYSTEM_HOSTTYPE=build
SYSTEM_TEAMFOUNDATIONCOLLECTIONURI=https://myvsoproject.visualstudio.com/DefaultCollection/
SYSTEM_TEAMFOUNDATIONSERVERURI=https://myvsoproject.visualstudio.com/DefaultCollection/
SYSTEM_TEAMPROJECT=MyProject
SYSTEM_TEAMPROJECTID=a9f2e0a9-752d-4529-a657-35a421584815
SYSTEM_WORKFOLDER=C:\LR\MMS\Services\Mms\TaskAgentProvisioner\Tools\_work
TEMP=C:\Users\BUILDG~1\AppData\Local\Temp
TF_BUILD=True
TMP=C:\Users\BUILDG~1\AppData\Local\Temp
USERDOMAIN=TASKAGENT-0001
USERNAME=buildguest
USERPROFILE=C:\Users\buildguest
VS100COMNTOOLS=C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\Tools\
VS110COMNTOOLS=C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\Tools\
VS120COMNTOOLS=C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\Tools\
VS140COMNTOOLS=C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\Tools\
VSSDK110Install=C:\Program Files (x86)\Microsoft Visual Studio 11.0\VSSDK\
VSSDK120Install=C:\Program Files (x86)\Microsoft Visual Studio 12.0\VSSDK\
VSSDK140Install=C:\Program Files (x86)\Microsoft Visual Studio 14.0\VSSDK\
windir=C:\Windows
WIX=C:\Program Files (x86)\WiX Toolset v3.7\
XNAGSShared=C:\Program Files (x86)\Common Files\Microsoft Shared\XNA\

Note that BUILDCONFIGURATION and BUILDPLATFORM are parameters to the Visual Studio Build task. You’ll see that text right in the TFS build system dashboard.

The BUILD_BUILDNUMBER value is the one I’m the most interested in - that’s what I can key off to set the assembly version. TF_BUILD seems to be what you use to determine if the build is running in TFS.
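Since MSBuild surfaces environment variables as properties, keying off these in a build script is straightforward. A minimal sketch of the kind of conditional work I mean (the target name is made up):

<Target Name="TfsOnlyVersioning" Condition=" '$(TF_BUILD)' == 'True' ">
  <!-- Only runs under TFS; BUILD_BUILDNUMBER would drive the assembly version stamp. -->
  <Message Text="TFS build number: $(BUILD_BUILDNUMBER)" Importance="high" />
</Target>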

autofac, net

In the continuing journey toward the Autofac 4.0.0 release, some integration/extras packages have been released:

  • Autofac.Wcf 4.0.0: Doesn’t require Autofac 4.0 but is tested to be compatible with it. .NET 4.0 framework support is removed; minimum is now .NET 4.5. No interface changes were made - the major semver increment is for the .NET compatibility break. AllowPartiallyTrustedCallers and other pre-4.5 security markings are also removed.
  • Autofac.Mef 4.0.0: Doesn’t require Autofac 4.0 but is tested to be compatible with it. .NET 4.0 framework support is removed; minimum is now .NET 4.5. No interface changes were made - the major semver increment is for the .NET compatibility break. AllowPartiallyTrustedCallers and other pre-4.5 security markings are also removed.
  • Autofac.Extras.AttributeMetadata 4.0.0: This is a rename from Autofac.Extras.Attributed to Autofac.Extras.AttributeMetadata to better express what the package actually does. Doesn’t require Autofac 4.0 but is tested to be compatible with it. Requires Autofac.Mef 4.0 due to the security attribute changes and minimum .NET 4.5 requirement.
  • Autofac.Multitenant.Wcf 4.0.0-beta7-223: The WCF component compatible with Autofac.Multitenant 4.0 beta. This will stay in beta until Autofac.Multitenant can also be fully released. This is a rename from Autofac.Extras.Multitenant.Wcf to match the rename of Autofac.Extras.Multitenant to Autofac.Multitenant. Requires Autofac.Wcf 4.0 due to the security attribute changes and minimum .NET 4.5 requirement.

There’s a lot going on to try and keep up with DNX compatibility (where possible) and to check and test existing integration packages to ensure they still work with Autofac 4. For the most part, it seems just updating things to use .NET 4.5 instead of .NET 4.0 allows us to retain forward compatibility, though in some cases the code access security attributes also had to change.

We’re working hard on it. Watch our Twitter feed for announcements!

autofac, net

I just pushed Autofac.Multitenant 4.0.0-beta8-216 to NuGet.

This update includes some breaking changes from the previous Autofac.Extras.Multitenant package:

  • Multitenant support has been promoted to first-class. The package is now Autofac.Multitenant and not Autofac.Extras.Multitenant. (Note the “Extras” part is gone. We always talked about how, at some point, an “Extras” package might become a core feature, and we figured it was time to finally do that.)
  • This package requires Autofac 4.0.0-beta8-* and up because…
  • The multitenant support has been updated to support the same set of portable platforms that core Autofac supports (dnx451, dotnet, net45, netcore45).
  • This builds on top of DNX beta 8.

If you’re using DNX and want to try out multitenancy, give this a shot. If you find any issues, please let us know at the Autofac.Multitenant repo!

personal, costumes

I normally don’t sew too much outside of Halloween, when it becomes more an excuse to set aside time to make cool stuff than anything else. However, I saw this “Exploding TARDIS” fabric at Jo-Ann the other day and had to make something out of it.

Since my daughter Phoenix is a Whovian like myself, I figured I’d make her a little dress. I went with Butterick “See & Sew” pattern B5443 because it was a whopping $3 and it was a fairly simple thing. I also got some shiny blue lining to go in it.

The exploding TARDIS fabric, lining, and pattern

The most time consuming part was, as usual, pinning and cutting all the pieces.

Cutting the main pieces

The back closes in a zipper, which is usually a painful experience but actually went smoothly this time. Here I’m pinning it in…

Pinning the zipper

…and here it’s finished.

The finished zipper

I did a bit of a modification and let the blue lining hang slightly below the body of the dress for a peek of that shiny blue. I also finished the waist with a ribbon that has gold sun, moon, and stars printed on it.

The 'exploding TARDIS' dress

And here is the proud four-year-old Whovian in her new exploding TARDIS dress.

Phoenix in her new dress

All told it took about a day and a half from start to finish. I really started around 10 or 11 on Saturday, ran it through until around 10 Saturday night, and finished up in a couple of hours on Sunday morning. Not too bad.

This was the first thing I’ve made since I got Jenn a Brother 1034D serger for Christmas, and let me tell you - the serger makes all the difference. The seams come out very professional looking and the garment has a much more “store bought” quality to it.

I bought enough of the fabric to make myself a shirt using Vogue pattern 8800. I’ve used that pattern before and it comes out well, if a bit snug, so this next go-round with it I’ll make one size larger for some breathing room.

synology, security

In November of last year I set up a PPTP VPN on my Synology NAS so I could do more secure browsing on public wifi. Since then, I’ve updated my setup to use OpenVPN and made the connection a lot easier.

I based the steps to get my connection working on this forum post, but I didn’t quite do the extra work with the certificates.

Assuming you’ve got the VPN package installed and ready to go on your Synology NAS (which I walk through in the previous article), the next steps to get OpenVPN going are:

  • Open the VPN Server application in the Diskstation Manager.
  • Enable the OpenVPN protocol by checking the box. Leave everything else as default.
  • At the bottom of the OpenVPN panel, click “Export Configuration.” This will give you the profile you’ll need for your devices to connect to the VPN.
  • In the Control Panel, go to the “Security” tab. On the “certificate” panel, click “Export Certificate.” Save that somewhere and call it ca.crt. This is a little different than what I was expecting - I had hoped the certificates that come in the OpenVPN zip file (when you export that configuration) would just work, but it turns out I needed to get this particular certificate. YMMV on this.
  • Just like with the PPTP VPN, make sure the firewall has a rule to allow port 1194 (the OpenVPN port) through. You also need to create a port forwarding rule for port 1194 with your router. You can see how to do this in my other article.

You should have OpenVPN up and running. That part, at least for me, was the easiest part. The harder part was getting my Android phone connected to it and trying to automate that.

First things first, let’s get connected.

Install the OpenVPN Connect app for Android. There are several OpenVPN apps out there; I use this one and the rest of my article will assume you do, too. The app is free, so there’s no risk if you don’t like it.

Open the zip file of exported OpenVPN configuration you got from the Synology and pull out the openvpn.ovpn file. Pop that open in a text editor and make sure that…

  • The remote line at the top points to your public DNS entry for your Synology, like yourdiskstation.synology.me or whatever you set up.
  • The ca line has ca.crt in it.

Here’s what it should generally look like. I’ve left the comments in that are there by default.

dev tun
tls-client

remote yourdiskstation.synology.me 1194

# The "float" tells OpenVPN to accept authenticated packets from any address,
# not only the address which was specified in the --remote option.
# This is useful when you are connecting to a peer which holds a dynamic address
# such as a dial-in user or DHCP client.
# (Please refer to the manual of OpenVPN for more information.)

#float

# If redirect-gateway is enabled, the client will redirect it's
# default network gateway through the VPN.
# It means the VPN connection will firstly connect to the VPN Server
# and then to the internet.
# (Please refer to the manual of OpenVPN for more information.)

redirect-gateway

# dhcp-option DNS: To set primary domain name server address.
# Repeat this option to set secondary DNS server addresses.

#dhcp-option DNS DNS_IP_ADDRESS

pull

# If you want to connect by Server's IPv6 address, you should use
# "proto udp6" in UDP mode or "proto tcp6-client" in TCP mode
proto udp

script-security 2

ca ca.crt

comp-lzo

reneg-sec 0

auth-user-pass

Put the ca.crt certificate you exported and the openvpn.ovpn file on your Android device. Make sure it’s somewhere you can find later.

Open the OpenVPN Connect app and select “Import Profile.” Select the openvpn.ovpn file you pushed over. Magic should happen and you will see your VPN show up in the app.

Now’s a good time to test the connection to your VPN. Enter your username and password into the OpenVPN Connect app, check the Save box to save your credentials, and click the “Connect” button. It should find your VPN and connect. When you connect you may see a little “warning” icon saying network communication could be monitored by a third party - that’s Android seeing your Synology’s certificate. You should also see OpenVPN Connect telling you you’re connected.

OpenVPN Connect showing the connection is active

It’s important to save your credentials in OpenVPN Connect or the automation of connecting to the VPN later will fail.

If you’re not able to connect, it could be a number of different things. Troubleshooting this is the biggest pain of the whole thing. Feel good if things worked the first time; I struggled figuring out all the certificates and such. Things to check:

  • Did you enter your username/password correctly using an account defined on the Synology?
  • Does the account you used have permissions to the VPN? (By default it should, but you may be trying to use a limited access account, so check that.)
  • Did the router port forwarding get set up?
  • Did the firewall rule get set up?
  • Is your dynamic DNS entry working?
  • Is the ca.crt in the same folder on your Android device as the openvpn.ovpn file?
  • If that ca.crt isn’t working, did you try the one that came in the zip file with the OpenVPN configuration you exported? (The one in that zip didn’t work for me, but it might work for you.)
  • Consider trying the instructions in this forum post to embed the certificate info right in the openvpn.ovpn file (there’s a sketch of the idea just after this list).
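If you do go the embedding route, OpenVPN supports inline certificate blocks - the idea is to replace the ca ca.crt line with the certificate contents directly (paste your real ca.crt in between the markers):

<ca>
-----BEGIN CERTIFICATE-----
(contents of ca.crt go here)
-----END CERTIFICATE-----
</ca>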

From here on out, I assume you can connect to your VPN.

Now we want to make it so you connect automatically to the VPN when you’re on a wifi network that isn’t your own. I even VPN in when I’m on a “secure” network like at a hotel where you need a password because, well, there are a lot of people on there with you and do you trust them all? I didn’t think so.

Install the Tasker app for Android. This one will cost you $3 but it’s $3 well spent. Tasker helps you automate things on your Android phone and you don’t even need root access.

I found the instructions for setting up Tasker with OpenVPN Connect over on the OpenVPN forums via a reddit thread. I’ll put them here for completeness, but total credit to the folks who originally figured this out.

The way Tasker works is this: You create “tasks” to run on your phone, like “show an alert” or “send an email to Mom.” You then set up “contexts” so Tasker knows when to run your tasks. A “context” is like “when I’m at this location” or “when I receive an SMS text message” - it’s a condition that Tasker can recognize to raise an event and say, “run a task now!” Finally, you can tie multiple “contexts” together with “tasks” in a profile - “when I’m at this location AND I receive an SMS text message THEN send an email to Mom.”

We’re going to set up a task to connect to the VPN when you’re on a network not your own and then disconnect from the VPN when you leave the network.

You need to know the name of your OpenVPN Connect profile - the text that shows at the top of OpenVPN Connect when you’re logging in. For this example, let’s say it’s yourdiskstation.synology.me [openvpn].

  1. Create a new task in Tasker. (You want to create the task first because it’s easier than doing it in the middle of creating a profile.)
    1. Call the task Connect To Home VPN.
    2. Use System -> Send Intent as the action.
    3. Fill in the Send Intent fields like this (it is case-sensitive, so be exact; also, these are all just one line, so if you see line wraps, ignore that):
      • Action: android.intent.action.VIEW
      • Category: None
      • Mime Type:
      • Data:
      • Extra: net.openvpn.openvpn.AUTOSTART_PROFILE_NAME: yourdiskstation.synology.me [openvpn]
      • Extra:
      • Extra:
      • Package: net.openvpn.openvpn
      • Class: net.openvpn.openvpn.OpenVPNClient
      • Target: Activity
  2. Create a second new task in Tasker.
    1. Call the task Disconnect From Home VPN.
    2. Use System -> Send Intent as the action.
    3. Fill in the Send Intent fields like this (it is case-sensitive, so be exact; also, these are all just one line, so if you see line wraps, ignore that):
      • Action: android.intent.action.VIEW
      • Category: None
      • Mime Type:
      • Data:
      • Extra:
      • Extra:
      • Extra:
      • Package: net.openvpn.openvpn
      • Class: net.openvpn.openvpn.OpenVPNDisconnect
      • Target: Activity
  3. Create a new profile in Tasker and add a context.
    1. Use State -> Net -> Wifi Connected as the context.
    2. In the SSID field put the SSID of your home/trusted network. If you have more than one, separate with slashes like network1/network2.
    3. Check the Invert box. You want the context to run when you’re not connected to these networks.
  4. When asked for a task to associate with the profile, select Connect To Home VPN.
  5. On the home screen of Tasker you should see the name of the profile you created and, just under that, a “context” showing something like Not Wifi Connected network1/network2.
  6. Long-press on the context and it’ll pop up a menu allowing you to add another context.
    1. Use State -> Net -> Wifi Connected as the context.
    2. Leave all the other fields blank and do not check the Invert box.
  7. On the home screen of Tasker you should now see the profile has two contexts - one for Not Wifi Connected network1/network2 and one for Wifi Connected *,*,*. This profile will match when you’re on a wifi network that isn’t in your “whitelist” of trusted networks. Next to the contexts you should see a little green arrow pointing to Connect To Home VPN - this means when you’re on a wifi network not in your “whitelist” the VPN connection will run.
  8. Long-press on the Connect To Home VPN task next to those contexts. You’ll be allowed to add an “Exit Task.” Do that.
  9. Select the Disconnect From Home VPN task you created as the exit task. Now when you disconnect from the untrusted wifi network, you’ll also disconnect from the VPN.

You can test the Tasker tasks out by going to the “Tasks” page in Tasker and running each individually. Running the Connect To Home VPN task should quickly run OpenVPN Connect, log you in, and be done. Disconnect From Home VPN should log you out.
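
If you have adb set up and want to sanity-check the intent outside of Tasker, you can fire the equivalent intent from a computer with the device attached. This is just a sketch mirroring the Send Intent fields above (same profile name; the quotes matter because of the space):

adb shell am start -a android.intent.action.VIEW -n net.openvpn.openvpn/.OpenVPNClient --es net.openvpn.openvpn.AUTOSTART_PROFILE_NAME "yourdiskstation.synology.me [openvpn]"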

If you’re unable to get the Connect To Home VPN task working, things to check:

  • Did you save your credentials in the OpenVPN Connect app?
  • Do you have a typo in any of the task fields?
  • Did you copy your OpenVPN Connect profile name correctly?

You should now have an Android device that automatically connects to your Synology-hosted OpenVPN whenever you’re on someone else’s network.

The cool thing about OpenVPN, which I didn’t see with PPTP, is that I don’t have to set up a proxy with it. I got some comments on my previous article from folks who were lucky enough to not need a proxy at all. I somehow needed it with PPTP but don’t need it anymore with OpenVPN. Nice.

NOTE: I can’t offer you individual support on this. Much as I’d like to be able to help everyone, I just don’t have time. I ask questions and follow forum threads like everyone else. If you run into trouble, Synology has a great forum where you can ask questions so I’d suggest checking that out. The above worked for me. I really hope it works for you. But it’s not fall-down easy and sometimes weird differences in network setup can make or break you.

autofac, net, testing comments edit

Autofac DNX support is under way and as part of that we’re supporting both DNX and DNX Core platforms. As of DNX beta 6, you can sign DNX assemblies using your own strong name key.

To use your own key, you need to add it to the compilationOptions section of your project.json file:

{
  "compilationOptions": {
    "keyFile": "myApp.snk"
  }
}

Make sure not to specify keyFile and strongName at the same time - you can only have one or the other.
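
If you’re on Windows and want to double-check that the output really got signed with your key, the strong name tool can display the public key token. The path here is purely an example - point it at your own build output:

sn -T .\artifacts\bin\MyApp\Release\dnx451\MyApp.dll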

The challenge we ran into was with testing: We wanted to run our tests under both DNX and DNX Core to verify the adjustments we made to handle everything in a cross-platform fashion. Basically, we wanted this:

dnvm use 1.0.0-beta6 -r CLR
dnx test/Autofac.Test test
dnvm use 1.0.0-beta6 -r CoreCLR
dnx test/Autofac.Test test

Unfortunately, that yields an error:

System.IO.FileLoadException : Could not load file or assembly 'Autofac, Version=4.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. General Exception (Exception from HRESULT: 0x80131500)
---- Microsoft.Framework.Runtime.Roslyn.RoslynCompilationException : warning DNX1001: Strong name generation is not supported on CoreCLR. Skipping strongname generation.
error CS7027: Error signing output with public key from file '../../Build/SharedKey.snk' -- Assembly signing not supported.
Stack Trace:
     at Autofac.Test.ContainerBuilderTests.SimpleReg()
  ----- Inner Stack Trace -----
     at Microsoft.Framework.Runtime.Roslyn.RoslynProjectReference.Load(IAssemblyLoadContext loadContext)
     at Microsoft.Framework.Runtime.Loader.ProjectAssemblyLoader.Load(AssemblyName assemblyName, IAssemblyLoadContext loadContext)
     at Microsoft.Framework.Runtime.Loader.ProjectAssemblyLoader.Load(AssemblyName assemblyName)
     at dnx.host.LoaderContainer.Load(AssemblyName assemblyName)
     at dnx.host.DefaultLoadContext.LoadAssembly(AssemblyName assemblyName)
     at Microsoft.Framework.Runtime.Loader.AssemblyLoaderCache.GetOrAdd(AssemblyName name, Func`2 factory)
     at Microsoft.Framework.Runtime.Loader.LoadContext.Load(AssemblyName assemblyName)
     at System.Runtime.Loader.AssemblyLoadContext.LoadFromAssemblyName(AssemblyName assemblyName)
     at System.Runtime.Loader.AssemblyLoadContext.Resolve(IntPtr gchManagedAssemblyLoadContext, AssemblyName assemblyName)

I ended up filing an issue about it to get some help figuring it out.

Under the covers, DNX rebuilds the assembly under test rather than using the already-built artifacts. This was entirely unclear to me since you don’t see any rebuild happen. If you turn DNX tracing on (set DNX_TRACE=1) you’ll see that Roslyn is recompiling.
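
For example, on Windows:

set DNX_TRACE=1
dnx test/Autofac.Test test

With tracing enabled, the Roslyn compilation messages make the otherwise-invisible rebuild obvious.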

If you want to test the same build output under different runtimes, you need to publish your tests as though they are applications. Which is to say, you need to use the dnu publish command on your unit test projects, like this:

dnu publish test\Your.Test --configuration Release --no-source --out C:\temp\Your.Test

When you run dnu publish you’ll get all of the build output copied to the specified output directory, along with some small scripts corresponding to the commands in the project.json. For a unit test project, that means you’ll see test.cmd in the output folder. To execute the unit tests, you run test.cmd rather than dnx test\Your.Test test.

The Autofac tests now run (basically) like this:

dnvm use 1.0.0-beta6 -r CLR
dnu publish test\Autofac.Test --configuration Release --no-source --out .\artifacts\tests
.\artifacts\tests\test.cmd
dnvm use 1.0.0-beta6 -r CoreCLR
.\artifacts\tests\test.cmd

Publishing the unit tests bypasses the Roslyn recompile, letting you sign the assembly with your own key while still testing under CoreCLR.

I published an example project on GitHub showing this in action. In there you’ll see two build scripts - one that breaks because it doesn’t use dnu publish and one that works because it publishes the tests before executing.

autofac comments edit

Today we pushed a set of updated Autofac packages with support for DNX beta 6.

This marks the first release of the Autofac.Configuration package for DNX and includes a lot of changes.

Previous Autofac.Configuration packages relied on web.config or app.config integration to support configuration. With DNX, configuration flows through Microsoft.Framework.Configuration - external configuration files that aren’t part of web.config or app.config.

While this makes for a cleaner configuration story with a lot of great flexibility, it means if you want to switch to the new Autofac.Configuration, you have some migration to perform.

There is a lot of documentation with examples on the Autofac doc site showing how the new configuration works.

A nice benefit is you can now use JSON to configure Autofac, which can make things a bit easier to read. A simple configuration file might look like this:

{
    "defaultAssembly": "Autofac.Example.Calculator",
    "components": [
        {
            "type": "Autofac.Example.Calculator.Addition.Add, Autofac.Example.Calculator.Addition",
            "services": [
                {
                    "type": "Autofac.Example.Calculator.Api.IOperation"
                }
            ],
            "injectProperties": true
        },
        {
            "type": "Autofac.Example.Calculator.Division.Divide, Autofac.Example.Calculator.Division",
            "services": [
                {
                    "type": "Autofac.Example.Calculator.Api.IOperation"
                }
            ],
            "parameters": {
                "places": 4
            }
        }
    ]
}

If you want, you can still use XML, but it’s not the same as the old XML - it has to be shaped the way Microsoft.Framework.Configuration expects. Here’s the above JSON config converted to XML (note the name="0" and name="1" attributes - more on those after the example):

<?xml version="1.0" encoding="utf-8" ?>
<autofac defaultAssembly="Autofac.Example.Calculator">
    <components name="0">
        <type>Autofac.Example.Calculator.Addition.Add, Autofac.Example.Calculator.Addition</type>
        <services name="0" type="Autofac.Example.Calculator.Api.IOperation" />
        <injectProperties>true</injectProperties>
    </components>
    <components name="1">
        <type>Autofac.Example.Calculator.Division.Divide, Autofac.Example.Calculator.Division</type>
        <services name="0" type="Autofac.Example.Calculator.Api.IOperation" />
        <parameters>
            <places>4</places>
        </parameters>
    </components>
</autofac>
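
Those name="0" and name="1" attributes aren’t arbitrary - Microsoft.Framework.Configuration flattens collections into colon-delimited keys with numeric index segments, and the name attribute is how the XML provider gets the index. Here’s a minimal sketch of the idea (assuming the JSON and XML configuration packages of that era; exact API names shifted between betas):

``` c#
// Both files flatten to the same keys, e.g. "components:0:type",
// "components:0:services:0:type", "components:1:parameters:places".
var builder = new ConfigurationBuilder();
builder.AddJsonFile("autofac.json"); // or builder.AddXmlFile("autofac.xml")
var config = builder.Build();

// The JSON array index and the XML name="0" attribute both become the "0" segment.
Console.WriteLine(config["components:0:type"]);
```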

When you want to register configuration, you do that by building up your configuration model first and then registering that with Autofac:

``` c#
// Add the configuration to the ConfigurationBuilder.
var config = new ConfigurationBuilder();
config.AddJsonFile("autofac.json");

// Register the ConfigurationModule with Autofac.
var module = new ConfigurationModule(config.Build());
var builder = new ContainerBuilder();
builder.RegisterModule(module);
```
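
From there it’s standard Autofac usage - build the container and resolve. Continuing the snippet above (a sketch that assumes the IOperation service from the earlier sample config):

``` c#
// Add and Divide were both registered as IOperation in the sample config,
// so resolve the whole set. (Requires using System.Collections.Generic.)
var container = builder.Build();
var operations = container.Resolve<IEnumerable<IOperation>>();
```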

Again, check out the documentation for additional detail, including some of the differences and the new things we’re supporting with this model.

Finally, big thanks to the Microsoft.Framework.Configuration team for working to get collection/array support into the configuration model.

javascript, home comments edit

I have, like, 1,000 of those little keyring cards for loyalty/rewards. You do, too. There are a ton of apps for your phone that manage them, and that’s cool.

Loyalty card phone apps never work for me.

For some reason, I seem to go to all the stores where they’ve not updated the scanners to be able to read barcodes off a phone screen. I’ve tried different phones and different apps, all to no avail.

You know what always works? The card in my wallet. Which means I’m stuck carrying around these 1,000 stupid cards.

There are sites, some of them connected to the phone apps, that will let you buy a combined physical card. But I’m cheap and need to update just frequently enough that it’s not worth paying the $5 each time. I used to use a free site called “JustOneClubCard” to create a combined loyalty card but that site has gone offline. I think it was purchased by one of the phone app manufacturers. (Seriously.)

So…

Enter: LoyaltyCard

I wrote my own app: LoyaltyCard. You can go there right now and make your own combined loyalty card.

You can use the app to enter up to eight barcodes and then download the combined card as a PDF to print out. Make as many as you like.

And if you want to save your card? Just bookmark the page with the codes filled in. Done. Come back and edit anytime you like.

Go make a loyalty card.

Behind the Scenes

I made the app not only for this but as a way to play with some JavaScript libraries. The whole app runs in the client with the exception of one tiny server-side piece that loads the high-resolution barcodes for the PDF.

You can check out the source over on GitHub.

vs comments edit

I installed Visual Studio 2015 today. I had the RC installed and updated to the RTM.

One of the minor-yet-annoying things I found about the RTM version showed up when I pinned it to my taskbar next to VS2013:

Confusing icons on the taskbar

Sigh.

Luckily it’s an easy fix.

Windows 7 / Server 2008

First, unpin VS2015 from your taskbar. You’ll put it back after you’ve fixed the icon.

Open up your Start menu and right-click on the “Visual Studio 2015” shortcut in there. On the context menu, choose “Properties.” Click the “Change Icon” button.

Click the 'Change Icon' button

VS2015 actually comes with a few icons. They’re not all awesome, but they’re at least different than the VS2013 icon. I chose the one with the little arrow because it’s, you know, upgraded from VS2013.

Pick a better icon

Click OK enough times to close all the property dialogs. You’ll see the icon in the Start menu has changed. Now right-click that and pin it to the taskbar. Problem solved.

At least you can tell which is which now

Windows 8 / Server 2012

If you haven’t pinned VS2015 to your taskbar yet, do that now so you can get a shortcut.

Open up the taskbar icons folder. This is at C:\Users\yourusername\AppData\Roaming\Microsoft\Internet Explorer\Quick Launch\User Pinned\TaskBar.
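
If you’d rather not click through Explorer, you can open that folder straight from a Run dialog or command prompt (same location, expressed via the APPDATA environment variable):

explorer "%APPDATA%\Microsoft\Internet Explorer\Quick Launch\User Pinned\TaskBar"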

Copy the “Visual Studio 2015” shortcut out of that folder and onto your desktop.

Unpin VS2015 from your taskbar. The shortcut in that TaskBar folder will disappear.

Right-click on the “Visual Studio 2015” shortcut you copied to your desktop. On the context menu, choose “Properties.” Click the “Change Icon” button.

Click the 'Change Icon' button

VS2015 actually comes with a few icons. They’re not all awesome, but they’re at least different than the VS2013 icon. I chose the one with the little arrow because it’s, you know, upgraded from VS2013.

Pick a better icon

Click OK enough times to close all the property dialogs. You’ll see the icon on your desktop has changed.

Right-click on the icon on your desktop and pin that one to your taskbar. A new shortcut with the correct icon will be added to that TaskBar folder and will appear on the taskbar. You can now delete the one from your desktop.

At least you can tell which is which now

gaming, xbox comments edit

I tried playing a couple of Xbox 360 Kinect games with my four-year-old daughter, Phoenix. We had less than stellar results.

The first game was “Sesame Street TV.” Basically it’s interactive Sesame Street. We picked it up from the library to try it and I’m glad it was free.

Problem 1: She’s very small compared to me. If the Kinect sees me, it somehow stops seeing her, and vice versa - if it sees her, it stops detecting me. There seemed to be a sort of very small “magic area” in the room where it’d find both of us.

Problem 2: The interaction for that game isn’t constant. It’s more like: they sing a song, then you have a small bit of interaction, then they tell a story, then there’s a small bit of interaction. She’ll watch or she’ll interact, but she loses interest in interacting once you switch to watching.

Problem 3: Slight misrepresentation of the game on the box. The concept behind the game is that you go into the TV and are right there on Sesame Street, and there’s a picture on the box illustrating that concept. Phoenix wants that to be the reality. It is really hard to explain that the box just shows an idea of what it’s like - that you don’t really transfer yourself into the television.

After a bit of Sesame Street, we tried “Kinect Adventures.” I did this thinking that the constant interaction would keep her engaged.

We still ran into the problem where there was basically the small area where it recognized us both, but then it was compounded with a couple of new problems.

Problem 4: Many of the games aren’t obvious to four-year-olds. In particular, the game where you have to walk from side to side and jump to control the raft - that was entirely unintuitive to Phoenix. She was far more concerned with whether or not the avatar on the raft actually looked like her, which then led to a half-hour diversion where we had to set up an avatar.

Problem 5: Auto jump-in/jump-out. The ability to jump in and out of the game quickly is great for folks who “get it” and who have a properly sized room that isn’t reduced to a tiny “magic area” where you’re recognized. However, every time Phoenix accidentally stepped out of the “magic area,” her avatar would disappear because the game thought she was jumping out, at which point I’d have to convince her to come back into the area - but not too close to me - so we could continue.

In the end, we decided it was a better idea to just go watch some Looney Tunes cartoons we picked up at the library. Which, now that I think about it, is sort of the opposite of what Kinect is trying to get you to do - get off the couch and be active. Hmmm.

Over the years I’ve posted about my home media center developments. Back in 2008 I posted a summary with links to articles, then I did another roundup in 2014.

The problem with this sort of periodic summary is that it’s hard to get an accurate picture of how things are working right now. I might forget to blog it, or I’ll take some notes on something I found and forget to post it, or whatever.

I was keeping my media center and home networking notes in a personal wiki on PBworks but I figured it was time to make things a bit more official.

My media center and home network documentation is now live at illigmediacenter.readthedocs.org

Diagram of my home network

This is the place I’ll add notes or tips on how my media center setup works. I’ve got everything from the hardware I use to my process for getting video content into the system. I’ve got my plan and analysis for how I cut cable including cost breakdowns and options. It’s all on this site.

My biggest problem in getting my media center going was that I didn’t know what I didn’t know. Information about all this stuff - hardware, software, how to get things done - is spread out all over the place. I never found a complete guide to help me on my way.

I hope this documentation can help you jump start your media center or improve the one you have. As things change in my system, I’ll be keeping the documentation here up to date so it should always have the latest info.