build, process

Pre-emptive disclaimer: This set of anecdotes doesn’t refer to a specific product or a specific repository. Some things come from past life experience, some come from open source projects, some come from other places I’ve encountered. As they say in the movies, “The story, all names, characters, and incidents portrayed in this production are fictitious. No identification with actual persons (living or deceased), places, buildings, and products is intended or should be inferred.”

Some of what I do involves continuous integration and continuous delivery, build tooling, that sort of thing. A particular pain inflicted on me is when I run into a monolith repository - a huge, multi-project, multi-solution repo with tons of different things that are all intended to work together so they all live in one source code repository.

Autofac lived in one of these monoliths until our move to GitHub a few years ago. There were good reasons for it to be that way, but in the end we split it up due to some of the challenges.

Here are a few of the challenges I’ve seen in working with monolith repos.

Slow Builds

Once a repository gets to a certain size, the time it takes to build becomes unbearably long. I’ve seen estimates of about 10-15 minutes being a reasonable build time, including tests. I think that’s about right. Much longer and folks stop building everything.

Don’t get me wrong, it’s OK to let the build server do its job and offload the task of running huge builds to detect things that are broken. What’s not OK is when your developers can’t build the whole thing - either because the build run takes several hours or because various tooling changes and build optimizations (see below) have made it so only build agents have the right configuration to enable a build to get out the door.

“Views” on the Source

If it takes too long to build, or the source is so big it’s hard to work in, folks end up working in logical “views” over the source. For the most part this is a reasonable thing to consider, but this has an unforeseen effect on how you build, which leads to some of the challenges I’ll explain below: incorrect build optimizations, inconsistent tools, non-modular modules, and so on.

This becomes more challenging when people want their own custom views. For example, a person who might be working on a specific application also wants to be able to refactor things in dependencies of that application. Creating that custom “view” can give the impression that it’s OK to just change things within that view and that it won’t have any impact on anything outside.

Incorrect Build Optimizations

Once you have slow builds going on, folks are going to want to speed the build up, especially for their “view” (if you went that direction).

One way you might try optimizing is to run all the tests in a separate build - build one time to compile, provide feedback on whether it even compiled or not, and then run tests with additional feedback on pass/fail. The problem here is that it’s pretty easy to get something compiling that totally won’t work at runtime. How long do those tests take? Who caused them to fail? If you have too many compilation builds queuing up for tests to run, you’ll end up batching them, which means finger pointing. “I’m sure it wasn’t me, so someone else can fix it.”

Another way you might try optimizing is to use package management systems to handle dependencies. Each “view” can build independently and in any order, so you can have separate builds if you want, one for each “view” or logical component. That’s actually a pretty decent idea as long as you do that consistently and with a lot of discipline… and as long as you acknowledge that building everything in the entire repository won’t necessarily ensure that the latest versions of everything work together. With package management, the build output of this part of the build won’t necessarily be the build input to some other part.

Stale and Inconsistent Tools

Once the build gets too large or complicated, it’s easy to fall into a pattern of tweaking one or two code files, checking them in, and letting the build server do all the work to verify things.

However, if developers never actually run the build, they may start using newer tools than the build agent uses, and the actual build tooling becomes stale.

Alternatively, different developers or teams working in different areas of the monolith may want to update the tooling used to build their “view” of the monolith repo. As long as you don’t want to build everything together, that’s fine. Chances are, though, the decision of the team on their component now puts additional requirements on build agents and makes it so all devs can’t build the whole set of components.

A great example of this is MSBuild and Visual Studio: Say you start your project when Visual Studio 2010 is out. As time goes on, developers update their machines to Visual Studio 2012, 2013, and beyond. Parts of the monolith that don’t change much remain back on 2010 while people move on. Eventually no developer has VS 2010 installed, so no developer can actually build that solution anymore. In the meantime, the build agent requires all of those Visual Studio versions be installed because there wasn’t a concerted effort to unify on a version.

Another MSBuild / Visual Studio example: Again, say you start your project with Visual Studio 2010. New features and functions get added but in an effort to not change tooling for everyone, people start manually tweaking files used by the tooling - solution and project files. Now you have a situation where if you actually open the solution in VS 2010 you get an error saying there are project types in the solution that aren’t supported… but if you open the solution in a newer version of Visual Studio you get a notice that the solution is on old tooling and must be upgraded to work.

Bad Versioning Strategies

Generally a build of a codebase corresponds to a single version of something. In the case where the code has multiple logical applications or components, the build might correspond to a single release of those things.

In a more microservice scenario, you likely wouldn’t have each microservice building and deploying as part of a monolith repository. Instead you might have a “build” that continuously runs integration tests over the deployed microservices to ensure they’re still working together as expected.

Anyway, since the build number (or build version, or whatever) can really only track one version number at a time, you have three choices, none of which are awesome:

  1. Version everything together. That means if you change one line in one component and the build kicks off, the version on every component in the entire repo changes. If you only ever deploy the entire monolith at the same time and it all builds together (e.g., not through a package management mechanism or using “views” on the repo), that can work.
  2. Implement an alternative independent versioning mechanism. In this case you’d have to figure out a custom way to indicate the version of each component in the system that can “version independently.” That version will not be tied to the build server version/number. You may build the same version of a component multiple times if the overall build gets kicked off and the component version hasn’t been incremented. This gets more complex if you want to see in which build a particular component version originated.
  3. Never change the version. This really only works if it’s a small project and everything is entirely, like, “software as a service” or something where you always only ever have the latest version deployed and you never have to report on it.

We tried ideas one and two in Autofac before splitting the repos in GitHub. I’ve seen idea three in other projects.

Circular Builds

If you build as a monolith, it’s really easy to accidentally create a circular build dependency.

Say you have some custom build tasks or scripts that help you with your build, like custom MSBuild tasks. That’s fine as long as the custom tasks are entirely independent of the code they’re building. However, say you have a custom build task that has a dependency on one of the assemblies being built… and that assembly being built requires the use of the build tasks to succeed.

Bad times. You need to unravel that.

Package management decoupling can also make it a piece of cake in the monolith to create a circular build dependency. Component A takes package B as part of its dependencies. Component A builds, publishes package A. Now component B builds… and takes package A as a dependency. This can be a really hard thing to detect, especially if package A and package B don’t themselves properly declare package dependencies.

Non-Modular Modules

Once the build gets broken up into logical components, applications, microservices, or modules, it’s far too easy to “just add a dependency” on some other piece of the source code repository and ignore the application and process isolation required to ensure that module actually stays modular. A lot of times you’ll see inconsistent application of dependencies - some come from a package management system, some come from the local repository’s build output from some other component.

Ever-Changing Framework

If the shared libraries or shared dependencies you use are in the same repo as your consuming components, it takes a lot of discipline to not start new functionality by instantly putting new items right in the common code. Your new application or component is going to need to validate phone numbers - why wouldn’t you add a whole shareable framework component for phone number validation? Hey, just throw that in the lowest level dependency so it’s readily available anywhere at any time!

Of course, that means from that point on anyone using your shared library will assume the new functionality is there and removing it will be a breaking change… so… uh… maybe that’s not the best idea.

It Failed, but Really Succeeded

The build server may say the build failed, but if you build “a view” of the monolith in isolation, that same piece may actually succeed. Which one is right? Is it a problem with the overall ordering of how the repository builds the logical components? Is the build server actually the system of record anymore?

It Succeeded, but Really Failed

After a certain level of complexity gets introduced, it gets pretty easy to start ignoring the warnings that get generated or to inadvertently cause errors to be ignored.

For example, say your build uses MSBuild in some areas and PowerShell in others. MSBuild calls a PowerShell script which then ends up calling MSBuild. Errors reported in that innermost MSBuild execution may not actually cause the overall build to fail… which means the build will show as successful even though it’s not.

Illig’s Law of Monolithic Repositories

The amount of discipline required to maintain a build is directly proportional to the size of the source code repository. The amount of discipline actually used is inversely proportional.

Most of the monolith repository problems you see could be avoided with enough developer due diligence and discipline. However, the larger a repository gets, the less personal responsibility folks start to feel for keeping the build performing and running clean. It’s too easy to complain about the size and complexity, passing general housekeeping off as technical debt to be addressed later. Eventually people become complacent (“That’s just how it is, we can’t fix it.”) and nothing ever does get fixed.

It’s the opposite of “too big to fail”: It’s “too big to fix.”

Our receiver, a Marantz SR5010, is in a cabinet. While it supports an on-screen display to show current volume levels and input info, it seems to be fairly well known in the community (forums and the like) that getting the OSD to work in conjunction with a 4K display is more luck than skill.

We had no problem with the OSD in standard 1080p HD, but once we got a 4K TV the OSD basically stopped appearing. Turning everything off and back on again might see it reappear for a 10 or 15 minute span, but after that it disappears again.

The challenge: We like to see the receiver’s power/volume/input status but we don’t want to leave the cabinet hanging open.

The solution: An Arduino-based volume monitor to provide a remote display for the receiver.

Finished volume monitor

Here’s a video where you can see it in action.

Volume monitor in action

Parts

Prices listed are the prices I paid - they may have changed since I bought them, etc.

In that list I didn’t include the box you may or may not want to put the finished product in; and little stuff like solder and a length of wire you’ll need to patch the 1602 shield.

How It Works

Marantz receivers have an HTTP API used for remote control programs and general network interaction. By making a GET request to http://<receiver-ip-address>/goform/formMainZone_MainZoneXml.xml you will get a fairly large XML document that has all the information about the receiver’s current status.

The Arduino volume monitor polls this endpoint and displays values based on the contents of the response.

The basic algorithm is:

  1. Make a request to get the XML status from the receiver.
  2. If the receiver is OFF, wait for five seconds before polling again.
  3. If the receiver is ON…
    1. Parse the XML to get the volume, selected input, and audio type.
    2. Update the 1602 display with the latest information.
    3. Immediately poll again.
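
To make that flow concrete, here’s a rough sketch of the same polling loop written as a C# console app for a desktop machine (the real monitor code is Arduino C++). The XML element names used here - Power, MasterVolume, InputFuncSelect - and the receiver address are assumptions for illustration; see the repo wiki for the actual response layout.

// Rough desktop-side sketch of the polling loop, not the actual Arduino code.
// The element names (Power, MasterVolume, InputFuncSelect) are assumptions -
// check the repo wiki for the real response layout.
using System;
using System.Net.Http;
using System.Threading.Tasks;
using System.Xml.Linq;

class ReceiverPoller
{
    private static readonly HttpClient Client = new HttpClient();

    static async Task Main()
    {
        // Hypothetical receiver address - yours will differ.
        var statusUrl = "http://192.168.1.50/goform/formMainZone_MainZoneXml.xml";

        while (true)
        {
            var xml = XDocument.Parse(await Client.GetStringAsync(statusUrl));
            var power = (string)xml.Root.Element("Power")?.Element("value");

            if (power != "ON")
            {
                // Receiver is off - wait five seconds before polling again.
                await Task.Delay(TimeSpan.FromSeconds(5));
                continue;
            }

            // Receiver is on - show the latest values and poll again immediately.
            var volume = (string)xml.Root.Element("MasterVolume")?.Element("value");
            var input = (string)xml.Root.Element("InputFuncSelect")?.Element("value");
            Console.WriteLine($"{input}: {volume}");
        }
    }
}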

The wiki on GitHub where I posted my code has a lot more details on specifically what the Marantz API responses look like and how that works.

Assembling the Hardware

The hardware itself is pretty easy to assemble. We’re going to stack the shields in this order (from bottom to top): Arduino, Ethernet shield, 1602 shield. We’ll do that after we do two things.

The Arduino and shields, ready to stack

First, we need to patch the 1602 shield. The W5100 Ethernet shield needs digital pin 10 on the Arduino for transmission. The problem is, the 1602 shield (at least the one that I bought) also wants pin 10 for control of enable/disable on the display. If you just stack them up now, things go all haywire - the Ethernet shield never really transmits correctly and the 1602 display basically stays disabled all the time.

  1. On the 1602 shield, clip off pin 10. You don’t want the 1602 shield picking up anything from the Ethernet shield. I clipped this off with a small set of flush cutters.
  2. Solder a small jumper wire across the top of the 1602 shield from pin 10 to pin 3. You could choose a different pin if you like, but pin 3 seemed reasonable.

Now if you want to control the display enable/disable toggle, you can write to pin 3. It won’t interfere with the Ethernet shield and it works great.

In the picture below, you can see me pointing with a screwdriver at the clipped-off pin and my purple jumper wire.

Patch the 1602 shield so it doesn't interfere with the Ethernet shield

Second, you need to create some risers out of stackable shield headers. The Ethernet jack on the W5100 shield is too tall to just stack another shield on top. I used stackable shield headers to create some small extensions/risers to ensure the 1602 shield didn’t run into the Ethernet jack.

I did clip the stackable header pins down a bit so they sat nice and flush with the Ethernet shield headers.

Create extensions with stackable headers

Now just stack ‘em up and you’re ready to go!

Installing the Software

I published the software on GitHub. You can head over there, grab it, and upload it to your Arduino.

I used the Visual Micro extension for Visual Studio when developing, so you’ll see some Visual Studio files in the source, but you should be able to load it up in the standard Arduino IDE and use it without issue. If you find a problem, file an issue and let me know.

You may need to adjust the button resistance tolerances. In DFRobotLCDShield.h I have some threshold values for how the buttons read on analog pin 0. These don’t match the values I saw in any other code snippet or posted data sheet. I don’t know if yours will match mine, but if they don’t, you’ll have to tweak them.

Using the Software

When you first start up the Arduino it will get a DHCP address and then try to read the configured IP address for your Marantz receiver. If none is configured, you’ll be sent into a setup routine to configure the receiver’s IP address. Alternatively, you can push the “SELECT” button on the 1602 shield keypad and enter the setup routine.

In setup, use the left and right buttons to move the cursor to the right spot in the IP address and up/down to increment/decrement. When you have the IP address entered, press “SELECT” again to save it and continue.

The Arduino will use the receiver’s IP address as the location to make the HTTP GET request as noted above. When the receiver is off, the display on the 1602 will be off; when the receiver is on, the display will be on and showing current data.

It may take a few seconds between turning the receiver on and seeing the display come on. This is because the Arduino only polls for status every five seconds when the receiver is off.

The Arduino is not a super-powered CPU, so the data may be delayed by half a second or so. The HTTP request and subsequent processing of the response takes about a half second, give or take. It polls as fast as it can, but that still means you’ll get maybe three updates a second. As such, if you hold down the receiver’s volume button, you’ll see the Arduino display “jump” in increments instead of incrementing and decrementing smoothly. It also may be slightly behind.

Say you are holding down the volume button on the receiver so it’s constantly going up:

Time   Action                               Arduino Display   Actual Volume
0.0s   Arduino makes HTTP request           0                 0
0.1s   Receiver sends response              0                 1
0.2s   Arduino starts processing response   0                 2
0.3s   Arduino still processing             0                 3
0.4s   Arduino still processing             0                 4
0.5s   Arduino updates display              1                 5

The Arduino is going to display the volume at the time the receiver sent the response, which may not be the same volume at the time of display. Not to worry, it should catch up on the next request. At most you’ll be about a half second behind, which isn’t so bad.

Finishing Touches

I put my volume monitor in a box. I used one of those little unfinished boxes from a craft store and stained it dark. I padded the inside with a little craft foam to keep it in place.

The unfinished box

Once it was all put together, it looked pretty good on the shelf!

The finished monitor on the shelf

Interesting Points

I learned a lot while working on this project.

I was going to do dynamic discovery of the receiver during the setup process so you didn’t have to manually enter the receiver IP address, but that requires UDP multicast for SSDP. I found out the W5100 series shield doesn’t support UDP multicast… or if it does, I couldn’t get it working. There really aren’t any examples out there. All the standard Arduino Ethernet library support for UDP multicast seems to be for the later W5500 shield.

I noticed that a lot of projects skip the “nice UI” thing and hardcode a lot of stuff. For example, most projects seem to hardcode destination IP addresses right in the program. I suppose that’s fine, but if (for whatever reason) I need to change the receiver’s IP address, I really don’t want to have to reprogram the Arduino. That’s why I put the setup UI in there, though it does take some of my program space and RAM to support its existence.

Since you only get 2K of RAM to work with, the Marantz HTTP response being in XML was challenging. Even if it were in JSON, it’d still be far too large to read in its entirety, so I had to do some pretty basic string parsing to read the XML and process it as a stream. I’m kind of surprised there aren’t SAX parsers for Arduino, though I suppose these projects generally avoid XML altogether.
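
Just to illustrate the idea (the real parser in the repo is Arduino C++), here’s a tiny C# sketch of that kind of stream scan: watch incoming characters for an opening tag and capture whatever text follows up to the next ‘<’, without ever buffering the whole document. The tag name in the usage comment is only an example.

// Minimal sketch of scanning a stream for a tag's text without buffering
// the whole document. Not the actual Arduino code - just the idea, in C#.
using System.IO;
using System.Text;

static class StreamScanner
{
    // Reads characters one at a time; once openTag has been fully matched,
    // captures everything up to the next '<' and returns it.
    public static string ScanForValue(TextReader reader, string openTag)
    {
        int matched = 0;
        int c;
        while ((c = reader.Read()) != -1)
        {
            if ((char)c == openTag[matched])
            {
                matched++;
                if (matched == openTag.Length)
                {
                    var value = new StringBuilder();
                    while ((c = reader.Read()) != -1 && (char)c != '<')
                    {
                        value.Append((char)c);
                    }
                    return value.ToString();
                }
            }
            else
            {
                // Restart the match - naive, but good enough for XML tags.
                matched = (char)c == openTag[0] ? 1 : 0;
            }
        }
        return null;
    }
}

// Usage (illustrative tag name): ScanForValue(responseReader, "<value>")
// returns the text between that tag and the next '<'.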

The Repository

The code is all free on GitHub. I included a lot of more technical info in the wiki for that repo.

Go check it out!

build, security

ProGet is a very cool, very flexible package and symbol server. They offer a free edition, but when you get into the paid licenses you can secure your feeds using LDAP or Active Directory credentials.

NOTE: This is written against ProGet 4.7.6. Inedo is pretty good about getting new and cool features out quickly, so if you don’t see all the stuff I’m talking about here it may have been rolled up into “more official” UI.

Given that disclaimer…

ProGet supports three user directories:

  1. Built-in - the local user store, no Active Directory or LDAP.
  2. LDAP or Single Domain Active Directory - uses a single LDAP directory for users.
  3. Active Directory with Multiple Domains - enables the local Active Directory as well as an additional domain.

General Troubleshooting

There’s a hidden integrated auth debugging page at http://yourprogetserver/debug/integrated-auth and it dumps out a bunch of data about which user directory you’re using, which user account you’re using to authenticate currently, and so on.

I’m not putting a screen shot here because the view would be nearly useless given the amount of info I’d have to redact. Just try it, you’ll see.

Built-in Directory

There’s not much to say about the built-in directory. It’s very simple. Members, groups, done. However, you may, at some point, want to test searching for specific users or groups if folks are reporting trouble.

There is a hidden test page for the built-in directory here: http://yourprogetserver/administration/security/configure-directory?directoryId=1 (on 4.7.14 this has moved to http://yourprogetserver/administration/security/directories/edit?userDirectoryId=1)

That page allows you to do a test search. There’s really nothing to configure otherwise, so it’s just diagnostics.

Built-in directory test page

LDAP or Single Domain Active Directory

Connecting to the local Active Directory is pretty straightforward. There’s good doc on how to do that.

However, there are additional properties you can configure, like which LDAP OU you want users to be in, and there’s no UI… or so it would seem.

There is a hidden configuration and test page for LDAP or Single Domain Active Directory here: http://yourprogetserver/administration/security/configure-directory?directoryId=2 (on 4.7.14 this has moved to http://yourprogetserver/administration/security/directories/edit?userDirectoryId=2)

LDAP/single AD test page

This is super useful if you need to authenticate as a service account for AD queries or otherwise modify the LDAP query.

I didn’t find a way to change the LDAP server here. I may be misunderstanding, but the description on this option makes it sound like that’s possible: “ProGet will use the user and group information of a single domain or a generic LDAP directory.” In practice this option will only use the Active Directory of the account running the ProGet web app. If the web app is running as a local machine service account (e.g., SYSTEM or NETWORK SERVICE) that means the Active Directory to which the local machine belongs.

In order to use a different domain than the service account or machine you need to switch to the “Active Directory with Multiple Domains” option.

Active Directory with Multiple Domains

This option lets you authenticate users against a different or second directory. In this case, “Multiple Domains” means “the default domain and an additional one.”

There is a hidden configuration and test page for Active Directory with Multiple Domains here: http://yourprogetserver/administration/security/configure-directory?directoryId=3 (on 4.7.14 this has moved to http://yourprogetserver/administration/security/directories/edit?userDirectoryId=3)

AD/multiple domains test page

I think the “AD with Multiple Domains” option also uses the settings from the “LDAP or Single Domain Active Directory” option, but I can’t promise that. It seems like the configuration page should have the same sorts of values on it but it doesn’t.

vs, uwp, wpf

In working on a recent project I started getting a bunch of big red X marks in the XAML designer due to various issues at design time. To fix them, I had to debug into the XAML designer itself.

There are several articles out there that say “just attach to XDesProc.exe with the debugger,” but this alone didn’t work for me because the errors I was encountering were happening in just such a spot that I couldn’t attach to the designer process before they had already occurred. (We’ll get into why later on.)

Here’s how to debug the XAML designer:

  1. Close all instances of Visual Studio.
  2. Open a command prompt window.
  3. Enter the command setx XPROCESS_PROMPT 1 and hit enter.
  4. Close the command prompt.
  5. Open the problem project in Visual Studio and get to the designer to trigger the problem.
  6. When the designer is about to start, you’ll get a little dialog box telling you exactly which process to debug.

    Attach debugger now!

  7. Open a second instance of Visual Studio and attach to that process. Make sure in the “Attach to Process” dialog you select “Managed (v4.6, v4.5, v4.0)” - you may have to manually change the value here.

    Attach to managed code

  8. If you’re interested in setting a breakpoint on your project code, open your project in that second copy of Visual Studio (the one attached to XDesProc.exe) and set your breakpoints.
  9. Click the “OK” button in the “Attach debugger now!” dialog to allow the first instance of VS to continue the design-time rendering.

When you’re finished debugging you need to turn the prompt off or it’ll keep happening when you start the designer.

  1. Close all instances of Visual Studio.
  2. Open a command prompt window.
  3. Enter the command setx XPROCESS_PROMPT "" (yes, empty double-quotes) and hit enter.
  4. Close the command prompt.

Gotcha: This doesn’t help with assembly binding problems.

The problem I was having was an assembly binding problem. The red X in the designer pointed to an error where an assembly couldn’t be found at design time. It was there fine at runtime, but for some reason the designer couldn’t find it.

It turns out the designer has a “shadow cache” where it keeps a copy of your app for execution during design time. It’s in a location that looks something like this:

C:\Users\yourusername\AppData\Local\Microsoft\VisualStudio\15.0_317bf9c1\Designer\ShadowCache

Each app gets a uniquely-named directory under there. What you’ll find is:

  • The designer creates a separate folder in the unique app directory for each assembly it shadow-copies.
  • Not everything in your app’s bin folder actually makes it to the shadow copy.

That latter point - that not everything makes it to the shadow copy - was causing my problem.

I never did figure out the logic behind what does get to the shadow copy vs. what doesn’t get there. Adding a direct reference to the assembly or NuGet package doesn’t necessarily guarantee it’ll make it.

When you run into this, you get managed C++ errors trying to marshal FileNotFoundException info back to .NET, but without the requisite inner exception or stack trace details. It’s basically impossible to figure out unless you want to start trawling through memory dumps and WinDBG. That would tell you what’s missing but still wouldn’t reveal the logic as to why it’s missing.

I worked around the problem with a two-pronged approach:

  • Make judicious use of design-time checking via if (DesignMode.DesignModeEnabled) checks in appropriate places to behave differently in design.
  • Use knowledge of how the JIT compiler works to separate code that requires the missing assemblies from code that needs to run at design time.

Let me expand a bit on that latter one, since it’s key.

When the XAML designer executes code, it only compiles the code it requires to run. If it touches a type in the system, it compiles it (from the MSIL code in the assembly) so it can be used.

Say you have a control MyControl. MyControl has a view model MyViewModel which then uses an EF DbContext to get some data. If all of that is in code in MyControl then the designer needs to compile MyControl, MyViewModel, and the DbContext. If the designer can’t find all the required EF assemblies, the DbContext compilation will fail, which means MyViewModel will fail, which will bring the whole design time thing crashing down and yield a red X.

To get around that, you can create an interface IViewModel with the properties on MyViewModel. Add IViewModel to MyViewModel and create a second view model just for design time - DesignViewModel - that also implements IViewModel with some simple properties. Now in the constructor of your control, you set your control’s view model to the DesignViewModel when DesignMode.DesignModeEnabled is true. MyViewModel can be passed in as a constructor parameter from elsewhere or set as the data context later in other code.

Point being, you’ve now separated your control code from the view model code by using an interface - and the designer won’t need to JIT compile the full “real” view model, so it won’t look for assemblies that aren’t there and you’ll get a good design time experience.
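
Here’s a condensed sketch of that pattern. The names (IViewModel, DesignViewModel, MyControl) are just the illustrative ones from above, and this assumes a UWP-style control where Windows.ApplicationModel.DesignMode.DesignModeEnabled is available; adapt as needed for your project type.

// Condensed sketch of the design-time/runtime view model split described above.
using Windows.ApplicationModel;
using Windows.UI.Xaml.Controls;

public interface IViewModel
{
    // Whatever properties the XAML actually binds to.
    string Title { get; }
}

// Design-time stand-in: simple hardcoded properties, no EF/DbContext anywhere,
// so the designer never has to JIT-compile any data access code.
public class DesignViewModel : IViewModel
{
    public string Title => "Design-time data";
}

public sealed partial class MyControl : UserControl
{
    public MyControl()
    {
        this.InitializeComponent();

        if (DesignMode.DesignModeEnabled)
        {
            this.DataContext = new DesignViewModel();
        }

        // At runtime the "real" MyViewModel (which also implements IViewModel)
        // gets set as the DataContext elsewhere - constructor parameter,
        // dependency injection, or other code.
    }
}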

Yeah, it’s sorta complicated. If I could figure out the logic of what causes the designer to bring an assembly into the shadow copy I’d just, uh, “flag” the missing assemblies (or whatever) and bypass this whole complex thing.

UPDATE 10/25/2017: This may be fixed as part of this issue in VS 2017 15.5.

autofac, dotnet

Alex and I have been working on deprecating the ability to update an Autofac container after it’s already been built. There are lots of reasons for this, and if you’re curious about that or have feedback on it, we have a discussion issue set up. You can also see ways to work around the need to update the container in that issue, so check it out.

However, one of the main reasons we’ve heard about why people want to update the container is to handle conditional registrations. For example, “I only want ComponentB registered in the container if ComponentA is not registered.”

To that end, in version 4.4.0 we’ve added OnlyIf() and IfNotRegistered() extensions to support conditional registration.

OnlyIf() lets you provide a lambda that acts on an IComponentRegistry. You can check if something is or isn’t registered and have some other registration execute only if the predicate returns true.

IfNotRegistered() is a convenience method built on OnlyIf() that allows you to execute a registration if some other service is not registered.

The documentation has been updated to explain how it works including examples but here’s a taste:

var builder = new ContainerBuilder();

// Only ServiceA will be registered.
// Note the IfNotRegistered takes the SERVICE TYPE to
// check for (the As<T>), NOT the COMPONENT TYPE
// (the RegisterType<T>).
builder.RegisterType<ServiceA>()
       .As<IService>();
builder.RegisterType<ServiceB>()
       .As<IService>()
       .IfNotRegistered(typeof(IService));

// HandlerA WILL be registered - it's running
// BEFORE HandlerB has a chance to be registered
// so the IfNotRegistered check won't find it.
//
// HandlerC will NOT be registered because it
// runs AFTER HandlerB. Note it can check for
// the type "HandlerB" because HandlerB registered
// AsSelf() not just As<IHandler>(). Again,
// IfNotRegistered can only check for "As"
// types.
builder.RegisterType<HandlerA>()
       .AsSelf()
       .As<IHandler>()
       .IfNotRegistered(typeof(HandlerB));
builder.RegisterType<HandlerB>()
       .AsSelf()
       .As<IHandler>();
builder.RegisterType<HandlerC>()
       .AsSelf()
       .As<IHandler>()
       .IfNotRegistered(typeof(HandlerB));

// Manager will be registered because both an IService
// and HandlerB are registered. The OnlyIf predicate
// can allow a lot more flexibility.
builder.RegisterType<Manager>()
       .As<IManager>()
       .OnlyIf(reg =>
         reg.IsRegistered(new TypedService(typeof(IService))) &&
         reg.IsRegistered(new TypedService(typeof(HandlerB))));

// This is when the conditionals actually run. Again,
// they run in the order the registrations were added
// to the ContainerBuilder.
var container = builder.Build();

If that’s something you’re into, head over to NuGet, grab v4.4.0 of Autofac, and try it out. Find something not working? Let us know!