`IServer`: converting hierarchy to composition

Quino has long included support for connecting to an application server instead of connecting directly to databases or other sources. The application server uses the same model as the client and provides modeled services (application-specific) as well as CRUD for non-modeled data interactions.

We wrote the first version of the server in 2008. Since then, it's acquired better authentication and authorization capabilities as well as routing and state-handling. We've always based it on the .NET HttpListener.

Old and Busted

As late as Quino 2.0-beta2 (which we had already deployed in production environments), the server hierarchy looked like the screenshot below, pulled from issue QNO-4927:

[screenshot: the server class hierarchy in Quino 2.0-beta2]

This screenshot was captured after a few unneeded interfaces had already been removed. As you can see by the class names, we'd struggled heroically to deal with the complexity that arises when you use inheritance rather than composition.

The state-handling was welded onto an authentication-enabled server, and the base machinery for supporting authentication was spread across three hierarchy layers. The hierarchy only hints at composition in its naming: the "Stateful" part of the class name CoreStatefulHttpServerBase<TState> had already been moved to a state provider and a state creator in previous versions. That support is unchanged in the 2.0 version.

Implementation Layers

We mentioned above that implementation was "spread across three hierarchy layers". There's nothing wrong with that, in principle. In fact, it's a good idea to encapsulate higher-level patterns in a layer that doesn't introduce too many dependencies and to introduce dependencies in other layers. This allows applications not only to use a common implementation without pulling in unwanted dependencies, but also to profit from the common tests that ensure the components work as advertised.

In Quino, the following three layers are present in many components:

  1. Abstract: a basic encapsulation of a pattern with almost no dependencies (generally just Encodo.Core).
  2. Standard: a functional implementation of the abstract pattern with dependencies on non-metadata assemblies (e.g. Encodo.Application, Encodo.Connections and so on).
  3. Quino: an enhancement of the standard implementation that makes use of metadata to fill in implementation left abstract in the previous layer. Dependencies can include any of the Quino framework assemblies (e.g. Quino.Meta, Quino.Application and so on).

The New Hotness1

The diagram below2 shows the new hotness in Quino 2.2.

[diagram: the new server components in Quino 2.2]

The hierarchy is now extremely flat. There is an IServer interface and a Server implementation, both generic in TListener, which is constrained to IServerListener. The server manages a single instance of an IServerListener.

The listener, in turn, has an IHttpServerRequestHandler, the main implementation of which uses an IHttpServerAuthenticator.
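A minimal sketch of this shape is shown below. The type names match the diagram; the member names are illustrative assumptions, and the actual Quino interfaces are richer than this.

using System;

public interface IServerListener
{
  void Start();
  void Stop();
}

public interface IServerListenerFactory<TListener>
  where TListener : IServerListener
{
  TListener CreateListener();
}

public interface IServer<TListener>
  where TListener : IServerListener
{
  TListener Listener { get; }

  void Start();
  void Stop();
}

public class Server<TListener> : IServer<TListener>
  where TListener : IServerListener
{
  private readonly TListener _listener;

  public Server(IServerListenerFactory<TListener> listenerFactory)
  {
    // The server manages exactly one listener, created by the factory.
    _listener = listenerFactory.CreateListener();
  }

  public TListener Listener
  {
    get { return _listener; }
  }

  public void Start()
  {
    _listener.Start();
  }

  public void Stop()
  {
    _listener.Stop();
  }
}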

As mentioned above, the IServerStateProvider is included in this diagram, but is unchanged from Quino 2.0-beta3, except that it is now used by the request handler rather than directly by the server.

You can see how the abstract layer is enhanced by an HTTP-specific layer (the Encodo.Server.Http namespace) and how the metadata-specific layer is nicely encapsulated in three classes in the Quino.Server assembly.

Server Components and Flow

This type hierarchy has decoupled the main elements of the workflow of handling requests for a server (a sketch in code follows the list):

  • The server manages listeners (currently a single listener), created by a listener factory
  • The listener, in turn, dispatches requests to the request handler
  • The request handler uses the route handler to figure out where to direct the request
  • The route handler uses a registry to map requests to response items
  • The request handler asks the state provider for the state for the given request
  • The state provider checks its cache for the state (the default support uses persistent states to cache sessions for a limited time); if not found, it creates a new one
  • Finally, the request handler checks whether the user for the request is authenticated and/or authorized to execute the action and, if so, executes the response items
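In code, the heart of the request handler might look something like the sketch below. The component names come from the list above; the member names and control flow are illustrative, and the generic TState parameter of the actual interfaces is dropped for brevity.

using System.Collections.Generic;
using System.Linq;
using System.Net;

public interface IServerState { }

public interface IResponseItem
{
  void Execute(HttpListenerContext context, IServerState state);
}

public interface IServerRouteRegistry
{
  // Maps a request to the response items that will handle it.
  IEnumerable<IResponseItem> GetResponseItems(HttpListenerRequest request);
}

public interface IServerStateProvider
{
  // Returns a cached state (e.g. a persisted session) or creates a new one.
  IServerState GetState(HttpListenerRequest request);
}

public interface IHttpServerAuthenticator
{
  bool IsAuthorized(IServerState state, IEnumerable<IResponseItem> items);
}

public class HttpServerRequestHandler
{
  private readonly IServerRouteRegistry _routeRegistry;
  private readonly IServerStateProvider _stateProvider;
  private readonly IHttpServerAuthenticator _authenticator;

  public HttpServerRequestHandler(
    IServerRouteRegistry routeRegistry,
    IServerStateProvider stateProvider,
    IHttpServerAuthenticator authenticator)
  {
    _routeRegistry = routeRegistry;
    _stateProvider = stateProvider;
    _authenticator = authenticator;
  }

  public void HandleRequest(HttpListenerContext context)
  {
    // The route handler uses the registry to map the request to response items.
    var items = _routeRegistry.GetResponseItems(context.Request).ToList();

    // The state provider returns a cached state or creates a new one.
    var state = _stateProvider.GetState(context.Request);

    // Authentication/authorization gates execution of the response items.
    if (!_authenticator.IsAuthorized(state, items))
    {
      context.Response.StatusCode = 401;
      context.Response.Close();
      return;
    }

    foreach (var item in items)
    {
      item.Execute(context, state);
    }
  }
}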

It is important to note that this behavior is unchanged from the previous version -- it's just that now each step is encapsulated in its own component. The components are small and easily replaced, with clear and concise interfaces.

Note also that the current implementation of the request handler is for HTTP servers only. Should the need arise, however, it would be relatively easy to abstract away the HttpListener dependency and generalize most of the logic in the request handler for any kind of server, regardless of protocol and networking implementation. Only the request handler is affected by the HTTP dependency, though: authentication, state-provision and listener-management can all be re-used as-is.

Also of note is that the only full-fledged implementation is for metadata-based applications. At the bottom of the diagram, you can see the metadata-specific implementations for the route registry, state provider and authenticator. This is reflected in the standard registration in the IOC.

These are the service registrations from Encodo.Server:

return handler
  .RegisterSingle<IServerSettings, ServerSettings>()
  .RegisterSingle<IServerListenerFactory<HttpServerListener>, HttpServerListenerFactory>()
  .Register<IServer, Server<HttpServerListener>>();

And these are the service registrations from Quino.Server:

handler
  .RegisterSingle<IServerRouteRegistry<IMetaServerState>, StandardMetaServerRouteRegistry>()
  .RegisterSingle<IServerStateProvider<IMetaServerState>, MetaPersistentServerStateProvider>()
  .RegisterSingle<IServerStateCreator<IMetaServerState>, MetaServerStateCreator>()
  .RegisterSingle<IHttpServerAuthenticator<IMetaServerState>, MetaHttpServerAuthenticator>()
  .RegisterSingle<IHttpServerRequestHandler, HttpServerRequestHandler<IMetaServerState>>();

As you can see, the registration is extremely fine-grained and allows very precise customization as well as easy mocking and testing.



  1. Any Men in Black fans out there? Tommy Lee Jones was "old and busted" while Will Smith was "the new hotness"? No? Just me? All righty then...

  2. This diagram brought to you by the diagramming and architecture tools in ReSharper 9.2. Just select the files or assemblies you want to diagram in the Solution Explorer and choose the option to show them in a diagram. You can right-click any type or assembly to show dependent or referenced modules or types. For type diagrams, you can easily control which relationships are to be shown (e.g. I hide aggregations to avoid clutter) and how the elements are to be grouped (e.g. I grouped by namespace to include the boxes in my diagram).

Iterating with NDepend to remove cyclic dependencies (Part II)

In the previous article, we discussed the task of Splitting up assemblies in Quino using NDepend. In this article, I'll discuss both the high-level and low-level workflows I used with NDepend to efficiently clear up these cycles.

Please note that what follows is a description of how I have used the tool -- so far -- to get my very specific tasks accomplished. If you're looking to solve other problems or want to solve the same problems more efficiently, you should take a look at the official NDepend documentation.

What were we doing?

To recap briefly: we are reducing dependencies among top-level namespaces in two large assemblies, in order to be able to split them up into multiple assemblies. The resulting assemblies will have dependencies on each other, but the idea is to make at least some parts of the Encodo/Quino libraries opt-in.

The plan of attack

On a high-level, I tackled the task in the following loosely defined phases.

Remove direct, root-level dependencies

This is the big first step -- to get rid of the little black boxes. I made NDepend show only direct dependencies at first, to reduce clutter. More on specific techniques below.

Remove indirect dependencies

Crank up the magnification to show indirect dependencies as well. This will help you root out the remaining cycles, which can be trickier to find if you're not showing enough detail. Conversely, if you turn on indirect dependencies too soon, you'll be overwhelmed by darkness (see the depressing initial state of the Encodo assembly to the right).

Examine dependencies between root-level namespaces

Even once you've gotten rid of all cycles, you may still have unwanted dependencies that hinder splitting namespaces into the desired constellation of assemblies.

For example, the plan is to split all logging and message-recording into an assembly called Encodo.Logging. However, the IRecorder interface (with a single method, Log()) is used practically everywhere. For some very central interfaces and support classes, it quickly becomes necessary to split the interfaces from their implementations -- which have many more potential dependencies -- into two assemblies. In this specific case, I moved IRecorder to Encodo.Core.
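The shape of that split looks something like the following; the Log() signature is an assumption, since the text above only specifies that IRecorder has a single method:

// Encodo.Core: the interface alone, with no dependencies, so that any
// assembly can accept an IRecorder without referencing Encodo.Logging.
namespace Encodo.Core
{
  public interface IRecorder
  {
    void Log(string message); // assumed signature
  }
}

// Encodo.Logging: the implementations, which carry the heavier dependencies.
namespace Encodo.Logging
{
  public class ConsoleRecorder : Encodo.Core.IRecorder
  {
    public void Log(string message)
    {
      System.Console.WriteLine(message);
    }
  }
}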

Even after you've conquered the black hole, you might still have quite a bit of work to do. Never fear, though: NDepend is there to help root out those dependencies as well.

Examine cycles in non-root namespaces

Because we can split off smaller assemblies regardless, these dependencies are less important to clean up for our current purposes. However, once this code is packed into its own assembly, its namespaces become root namespaces of their own and -- voilà! -- you have more potentially nasty dependencies to deal with. Granted, the problem is less severe because you're dealing with a logically smaller component.

In Quino, we use non-root namespaces more for organization and less for defining components. Still, cycles are cycles; they're worth examining and at least plucking the low-hanging fruit.

Removing root-level namespace cycles

With the high-level plan described above in hand, I repeated the following steps for the many dependencies I had to untangle. Don't despair if it looks like your library has a ton of unwanted dependencies. If you're smart about the ones you untangle first, you can make excellent -- and, most importantly, rewarding -- progress relatively quickly.1

  1. Show the dependency matrix
  2. Choose the same assembly in the row and column
  3. Choose a square that's black
  4. Click the name of the namespace in the column to show sub-namespaces
  5. Do the same in a row
  6. Keep zooming until you can see where there are dependencies that you don't want
  7. Refactor/compile/run NDepend analysis to show changes
  8. GOTO 1

Once again, with pictures!

The high-level plan of attack sounded interesting, but might have left you cold with its abstraction. Then there was the promise of detail with a focus on root-level namespaces but, alas, you might still be left wondering: how exactly do you reduce these much-hated cycles?

I took some screenshots as I worked on Quino, to document my process and point out parts of NDepend I thought were eminently helpful.

Show only namespaces

I mentioned above that you should "[k]eep zooming in", but how do you do that? A good first step is to zoom all the way out and show only direct namespace dependencies. This focuses only on using references instead of the much-more-frequent member accesses. In addition, I changed the default setting to show dependencies in only one direction -- when a column references a row (blue), but not vice versa (green).

As you can see, the diagrams are considerably less busy than the one shown above. Here, we can see a few black spots that indicate cycles, but it's not so many as to be overwhelming.2 You can hover over the offending squares to show more detail in a popup.

Show members

If you don't see any more cycles between namespaces, switch the detail level to "Members". Another very useful feature is "Bind Matrix", which forces the columns and rows to be shown in the same order and concentrates the cycles in a smaller area of the matrix.

As you can see in the diagram, NDepend then highlights the offending area and you can even click the upper-left corner to focus the matrix only on that particular cycle.

Drill down to classes

Once you're looking at members, it isn't enough to know just the namespaces involved -- you need to know which types are referencing which types. The powerful matrix view lets you drill down through namespaces to show classes as well.

If your classes are large -- another no-no, but one thing at a time -- then you can drill down to show which method is calling which method to create the cycle. In the screenshot to the right, you can see where I had to do just that in order to finally figure out what was going on.

In that screenshot, you can also see something that I only discovered after using the tool for a while: the direction of usage is indicated with an arrow. You can turn off the tooltips -- which are informative, but can be distracting for this task -- and you no longer have to remember which color (blue or green) corresponds to which direction of usage.

Indirect dependencies

Once you've drilled your way down from namespaces-only to member dependencies, then to classes and even individual members, your diagram should be shaping up quite well.

On the right, you'll see a diagram of all direct dependencies for the remaining area with a problem. You don't see any black boxes, which means that all direct dependencies are gone. So we have to turn up the power of our microscope further to show indirect dependencies.

On the left, you can see that the scary, scary black hole from the start of our journey has been whittled down to a small, black spot. And that's with all direct and indirect dependencies as well as both directions of usage turned on (i.e. the green boxes are back). This picture is much more pleasing, no?

Queries and graphs

For the last cluster of indirect dependencies shown above, I had to unpack another feature: NDepend queries. You can select any element and run a query to show using/used-by assemblies/namespaces.3 The results are shown in a panel, where you can edit the query and see live updates immediately.

Even with a highly zoomed-in view on the cycle, I still couldn't see the problem, so I took NDepend's suggestion and generated a graph of the final indirect dependency between Culture and Enums (through Expression). At this zoom level, the graph becomes more useful (for me) and illuminates problems that remain muddy in the matrix (see right).

Crossing the finish line

In order to finish the job efficiently, here are a handful of miscellaneous tips that are useful, but didn't fit into the guide above.


  • I set NDepend to automatically re-run an analysis on a successful build. The matrix updates automatically to reflect changes from the last analysis and won't lose your place.
  • If you have ReSharper, you'll generally be able to tell whether you've fixed the dependencies because the usings will be grayed out in the offending file. You can make several fixes at once before rebuilding and rerunning the analysis.
  • At higher zoom levels (e.g. having drilled down to methods), it is useful to toggle display of row dependencies back on because the dependency issue is only clear when you see the one green box in a sea of blue.
  • Though Matrix Binding is useful for localizing, remember to toggle it off when you want to drill down in the row independently of the namespace selected in the column.

And BOOM! just like that4, phase 1 (root namespaces) for Encodo was complete! Now, on to Quino.dll...

Conclusion

Depending on what shape your library is in, do not underestimate the work involved. Even with NDepend riding shotgun and barking out the course like a rally navigator, you still have to actually make the changes. That means lots of refactoring, lots of building, lots of analysis, lots of running tests and lots of reviews of at-times quite-sweeping changes to your code base. The destination is worth the journey, but do not embark on it lightly -- and don't forget to bring the right tools.5



  1. This can be a bit distracting: you might get stuck trying to figure out which of all these offenders to fix first.

  2. I'm also happy to report that my initial forays into maintaining a relatively clean library -- as opposed to cleaning it -- with NDepend have been quite efficient.

  3. And much more: I don't think I've even scratched the surface of the analysis and reporting capabilities offered by this ability to directly query the dependency data.

  4. I'm just kidding. It was a lot of time-consuming work.

  5. In this case, in case it's not clear: NDepend for analysis and good ol' ReSharper for refactoring. And ReSharper's new(ish) architecture view is also quite good, though not even close to detailed enough to replace NDepend: it shows assembly-level dependencies only.

Splitting up assemblies in Quino using NDepend (Part I)

A lot of work has been put into Quino 2.0,1 with almost no stone left unturned. Almost every subsystem has been refactored and simplified, including but not limited to the data driver, the schema migration, generated code and metadata, model-building, security and authentication, service-application support and, of course, configuration and execution.

Two of the finishing touches before releasing 2.0 are to reorganize all of the code into a more coherent namespace structure and to reduce the size of the two monolithic assemblies: Encodo and Quino.

A Step Back

The first thing to establish is: why are we doing this? Why do we want to reduce dependencies and reduce the size of our assemblies? There are several reasons, but a major reason is to improve the discoverability of patterns and types in Quino. Two giant assemblies are not inviting -- they are, in fact, daunting. Replace these assemblies with dozens of smaller ones and users of your framework will be more likely to (A) find what they're looking for on their own and (B) build their own extensions with the correct dependencies and patterns. Neither of these is guaranteed, but smaller modules are a great start.

Another big reason is portability. .NET Core was released as open-source software some time ago, and more and more .NET source code is added to it each day. There are portable targets, non-Windows targets, Universal-build targets and much more. It makes sense to split code up into highly portable units with as few dependencies as possible. That is, the dependencies should be explicit and intended.

Not only that, but NuGet packaging has come to the fore more than ever. Quino was originally designed to keep third-party boundaries clear, but we wanted to make it as easy as possible to use Quino. Just include Encodo and Quino and off you went. However, with NuGet, you can now say you want to use Quino.Standard and you'll get Quino.Core, Encodo.Core, Encodo.Services.SimpleInjector, Quino.Services.SimpleInjector and other packages.

With so much interesting code in the Quino framework, we want to make it available as much as possible not only for our internal projects but also for customer projects where appropriate and, also, possibly for open-source distribution.

NDepend

I've used NDepend before2 to clean up dependencies. However, the last analysis I did about a year ago showed quite deep problems3 that needed to be addressed before any further dependency analysis could bear fruit at all. With that work finally out of the way, I'm ready to re-engage with NDepend and see where we stand with Quino.

As luck would have it, NDepend is in version 6, released at the start of summer 2015. As was the case last year, NDepend has generously provided me with an upgrade license to allow me to test and evaluate the new version with a sizable and real-world project.

Here is some of the feedback I sent to NDepend:

I really, really like the depth of insight NDepend gives me into my code. I find myself thinking "SOLID" much more often when I have NDepend shaking its head sadly at me, tsk-tsking at all of the dependency snarls I've managed to build.

  • It's fast and super-reliable. I can work these checks into my workflow relatively easily.
  • I'm using the matrix view a lot more than the graphs because even NDepend recommends I don't use a graph for the number of namespaces/classes I'm usually looking at
  • Where the graph view is super-useful is for examining indirect dependencies, which are harder to decipher with the matrix
  • I've found so many silly mistakes/lazy decisions that would lead to confusion for developers new to my framework
  • I'm spending so much time with it and documenting my experiences because I want more people at my company to use it
  • I haven't even scratched the surface of the warnings/errors but want to get to that, as well (the Dashboard tells me of 71 rules violated; 9 critical; I'm afraid to look :-)

Use Cases

Before I get more in-depth with NDepend, please note that there are at least two main use cases for this tool4:

  1. Clean up a project or solution that has never had a professional dependency checkup
  2. Analyze and maintain separation and architectural layers in a project or solution

These two use cases are vastly different. The first is like cleaning a gas-station bathroom for the first time in years; the second is more like the weekly once-over you give your bathroom at home. The tools you'll need for the two jobs are similar, but quite different in scope and power. The same goes for NDepend: how you'll use it to claw your way back to architectural purity is different than how you'll use it to occasionally clean up an already mostly-clean project.

Quino is much better than it was the last time we peeked under the covers with NDepend, but we're still going to need a bucket of industrial cleaner before we're done.5

The first step is to make sure that you're analyzing the correct assemblies. Show the project properties to see which assemblies are included. You should remove all assemblies from consideration that don't currently interest you (especially if your library is not quite up to snuff, dependency-wise; afterwards, you can leave as many clean assemblies in the list as you like).6

Industrial-strength cleaner for Quino

Running an analysis with NDepend 6 generates a nice report, which includes the following initial dependency graph for the assemblies.

[image: initial assembly dependency graph]

As you can see, Encodo and Quino depend only on system assemblies, but there are components that pull in other references where they might not be needed. The initial dependency matrices for Encodo and Quino both look much better than they did when I last generated one. The images below show what we have to work with in the Encodo and Quino assemblies.

[images: initial dependency matrices for the Encodo and Quino assemblies]

It's not as terrible as I've made out, right? There is far less namespace-nesting, so it's much easier to see where the bidirectional dependencies are. There are only a handful of cyclic dependencies in each library, with Encodo edging out Quino because of (A) the nature of the code and (B) the extra effort I'd already put into Encodo.

I'm not particularly surprised to see that this is relatively clean because we've put effort into keeping the external dependencies low. It's the internal dependencies in Encodo and Quino that we want to reduce.

Small and Focused Assemblies

[images: the assemblies in Encodo and Quino and a partial overview of the dependency graph]

The goal, as stated in the title of this article, is to split Encodo and Quino into separate assemblies. While removing cyclic dependencies is required for such an operation, it's not sufficient. Even without cycles, it's still possible that a given assembly is still too dependent on other assemblies.

Before going any farther, I'm going to list the assemblies we'd like to have. By "like to have", I mean the list that we'd originally planned plus a few more that we added while doing the actual splitting.7 The images on the right show the assemblies in Encodo, Quino and a partial overview of the dependency graph (calculated with the ReSharper Architecture overview rather than with NDepend, just for variety).

Of these, the following assemblies and their dependencies are of particular interest8:

  • Encodo.Core: System dependencies only
  • Encodo.Application: basic application support9
  • Encodo.Application.Standard: configuration methods for non-metadata applications that don't want to pick and choose packages/assemblies
  • Encodo.Expressions: depends only on Encodo.Core
  • Quino.Meta: depends only on Encodo.Core and Encodo.Expressions
  • Quino.Meta.Standard: Optional, but useful metadata extensions
  • Quino.Application: depends only on Encodo.Application and Quino.Meta
  • Quino.Application.Standard: configuration methods for metadata applications that don't want to pick and choose packages/assemblies
  • Quino.Data: depends on Quino.Application and some Encodo.* assemblies
  • Quino.Schema: depends on Quino.Data

This seems like a good spot to stop, before getting into the nitty-gritty detail of how we used NDepend in practice. In the next article, I'll discuss both the high-level and low-level workflows I used with NDepend to efficiently clear up these cycles. Stay tuned!


Articles about design:

    * [Encodo's configuration library for Quino: part I](/blogs/developer-blogs/encodos-configuration-library-for-quino-part-i/)
    * [Encodo's configuration library for Quino: part II](/blogs/developer-blogs/encodos-configuration-library-for-quino-part-ii/)
    * [Encodo's configuration library for Quino: part III](/blogs/developer-blogs/encodos-configuration-library-for-quino-part-iii/)
    * [API Design: Running an Application (Part I)](/blogs/developer-blogs/api-design-running-an-application-part-i/)
    * [API Design: To Generic or not Generic? (Part II)](/blogs/developer-blogs/api-design-to-generic-or-not-generic-part-ii/)


  1. Release notes for 2.0 betas:

    * [v2.0-beta1: Configuration, services and web](/blogs/developer-blogs/v20-beta1-configuration-services-and-web/)
    * [v2.0-beta2: Code generation, IOC and configuration](/blogs/developer-blogs/v20-beta2-code-generation-ioc-and-configuration/)
    

  2. I published a two-parter in August and November of 2014.

    * [The Road to Quino 2.0: Maintaining architecture with NDepend (part I)](/blogs/developer-blogs/the-road-to-quino-20-maintaining-architecture-with-ndepend-part-i/)
    * [The Road to Quino 2.0: Maintaining architecture with NDepend (part II)](/blogs/developer-blogs/the-road-to-quino-20-maintaining-architecture-with-ndepend-part-ii/)
    

  3. You can see a lot of the issues associated with these changes in the release notes for Quino 2.0-beta1 (mostly the first point in the "Highlights" section) and Quino 2.0-beta2 (pretty much all of the points in the "Highlights" section).

  4. I'm sure there are more, but those are the ones I can think of that would apply to my project (for now).

  5. ...to stretch the gas-station metaphor even further.

  6. Here I'm going to give you a tip that confused me for a while, but that I think was due to particularly bad luck and is actually quite a rare occurrence. If you already see the correct assemblies in the list, you should still check that NDepend picked up the right paths. That is, if you haven't followed the advice in NDepend's white paper and still have a different `bin` folder for each assembly, you may see something like the following in the tooltip when you hover over the assembly name:

    "Several valid .NET assemblies with the name have been found. They all have the same version. The one with the biggest file has been chosen."

    If NDepend has accidentally found an older copy of your assembly, you must delete that assembly. Even if you add an assembly directly, NDepend will not honor the path from which you added it. This isn't as bad as it sounds, since it's a very strange constellation of circumstances that led to this assembly hanging around anyway:

    * The project is no longer included in the latest Quino but lingers in my workspace
    * The version number is unfortunately the same, even though the assembly is wildly out of date

    I only noticed because I knew I didn't have that many dependency cycles left in the Encodo assembly.

  7. Especially for larger libraries like Quino, you'll find that your expectations about dependencies between modules will be largely correct, but that the modules will still have gossamer filaments connecting them that prevent a clean split. In those cases, we just created new assemblies to hold these common dependencies. Once an initial split is complete, we'll iterate and refactor to reduce some of these ad-hoc assemblies.

  8. Screenshots, names and dependencies are based on a pre-release version of Quino, so while the likelihood is small, everything is subject to change.

  9. Stay tuned for an upcoming post on the details of starting up an application, which is the support provided in Encodo.Application.

API Design: To Generic or not Generic? (Part II)

In this article, I'm going to continue the discussion started in Part I, where we laid some groundwork about the state machine that is the startup/execution/shutdown feature of Quino. As we discussed, this part of the API still suffers from "several places where generic TApplication parameters [are] cluttering the API". Here, we'll take a closer look at different design approaches to this concrete example -- and see how we decided whether to use generic type parameters.

Consistency through Patterns and API

Any decision you take with a non-trivial API is going to involve several stakeholders and aspects. It's often not easy to decide which path is best for your stakeholders and your product.

For any API you design, consider how others are likely to extend it -- and whether your pattern is likely to deteriorate from neglect. Even a very clever solution has to be balanced with simplicity and elegance if it is to have a hope in hell of being used and standing the test of time.

In Quino 2.0, the focus has been on ruthlessly eradicating properties on the IApplication interface as well as getting rid of the descendant interfaces, ICoreApplication and IMetaApplication. Because Quino now uses a pattern of placing sub-objects in the IOC associated with an IApplication, there is far less need for a generic TApplication parameter in the rest of the framework. See Encodo's configuration library for Quino: part I for more information and examples.

This focus raised an API-design question: if we no longer want descendant interfaces, should we eliminate parameters generic in that interface? Or should we continue to support generic parameters for applications so that the caller will always get back the type of application that was passed in?

Before getting too far into the weeds1, let's look at a few concrete examples to illustrate the issue.

Do Fluent APIs require generic return-parameters?

As discussed in detail in Encodo's configuration library for Quino: part III, Quino applications are configured with the "Use*" pattern, where the caller includes functionality in an application by calling methods like UseRemoteServer() or UseCommandLine(). The latest version of this API pattern in Quino recommends returning the application that was passed in, to allow chaining and fluent configuration.

For example, the following code chains the aforementioned methods together without creating a local variable or other clutter.

return new CodeGeneratorApplication().UseRemoteServer().UseCommandLine();

What should the return type of such standard configuration operations be? Taking a method above as an example, it could be defined as follows:

public static IApplication UseCommandLine(this IApplication application, string[] args) { ... }

This seems like it would work fine, but the original type of the application that was passed in is lost, which is not exactly in keeping with the fluent style. In order to maintain the type, we could define the method as follows:

public static TApplication UseCommandLine<TApplication>(this TApplication application, string[] args)
  where TApplication : IApplication
{ ... }

This style is not as succinct but has the advantage that the caller loses no type information. On the other hand, it's more work to define methods in this way and there is a strong likelihood that many such methods will simply be written in the style in the first example.

Why would other coders do that? Because it's easier to write code without generics, and because the stronger result type is not needed in 99% of the cases. If every configuration method expects and returns an IApplication, then the stronger type will never come into play. If the compiler isn't going to complain, you can expect a higher rate of entropy in your API right out of the gate.

One way the more-derived type would come in handy is if the caller wanted to define the application-creation method with their own type as a result, as shown below:

private static CodeGeneratorApplication CreateApplication()
{
  return new CodeGeneratorApplication().UseRemoteServer().UseCommandLine();
}

If the library methods expect and return IApplication values, the result of UseCommandLine() will be IApplication and requires a cast to be used as defined above. If the library methods are defined generic in TApplication, then everything works as written above.
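To make that concrete, here's the same caller written against the non-generic API -- a sketch to show where the cast lands:

private static CodeGeneratorApplication CreateApplication()
{
  // With non-generic library methods, UseCommandLine() returns IApplication,
  // so the original type has to be recovered with an explicit cast.
  return (CodeGeneratorApplication)new CodeGeneratorApplication()
    .UseRemoteServer()
    .UseCommandLine();
}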

This is definitely an advantage, in that the user gets the exact type back that they created. Generics definitely offer advantages, but it remains to be seen how much those advantages are worth.2

Another example: The IApplicationManager

Before we examine the pros and cons further, let's look at another example.

In Quino 1.x, applications were created directly by the client program and passed into the framework. In Quino 2.x, the IApplicationManager is responsible for creating and executing applications. A caller passes in two functions: one to create an application and another to execute an application.

A standard application startup looks like this:

new ApplicationManager().Run(CreateApplication, RunApplication);

The question is: what should the types of the two function parameters be? Does CreateApplication return an IApplication or a caller-specific derived type? What is the type of the application parameter passed to RunApplication? Also IApplication? Or the more derived type returned by CreateApplication?

As with the previous example, if the IApplicationManager is to return a derived type, then it must be generic in TApplication and both function parameters will be generically typed as well. These generic types will trigger an avalanche of generic parameters™ throughout the other extension methods, interfaces and classes involved in initializing and executing applications.

That sounds horrible. This seems like a pretty easy decision, then. Why are we even considering the alternative? Well, because it can be very advantageous if the application can declare RunApplication with a strictly typed signature, as shown below.

private static void RunApplication(CodeGeneratorApplication application) { ... }

Neat, right? I've got my very own type back.

Where Generics Go off the Rails

However, if the IApplicationManager is to call this function, then the signatures of CreateAndStartUp() and Run() have to be generic, as shown below.

TApplication CreateAndStartUp<TApplication>(
  Func<IApplicationCreationSettings, TApplication> createApplication
)
 where TApplication : IApplication;

IApplicationExecutionTranscript Run<TApplication>(
  Func<IApplicationCreationSettings, TApplication> createApplication,
  Action<TApplication> run
)
  where TApplication : IApplication;

These are quite messy -- and kinda scary -- signatures.3 If these core methods are already so complex, any other methods involved in startup and execution would have to be equally complex -- including helper methods created by calling applications.4

The advantage here is that the caller will always get back the type of application that was created. The compiler guarantees it. The caller is not obliged to cast an IApplication back up to the original type. The disadvantage is that all of the library code is infected by a generic parameter with its attendant IApplication generic constraint.5

Don't add Support for Conflicting Patterns

The title of this section seems pretty self-explanatory, but we as designers must remain vigilant against the siren call of what seems like a really elegant and strictly typed solution.

The generics above establish a pattern that must be adhered to by subsequent extenders and implementors. And to what end? So that a caller can attach properties to an application and access those in a statically typed manner, i.e. without casting?

But aren't properties on an application exactly what we just worked so hard to eliminate? Isn't the recommended pattern to create a "settings" object and add it to the IOC instead? That is, as of Quino 2.0, you get an IApplication and obtain the desired settings from its IOC. Technically, the cast is still taking place in the IOC somewhere, but that seems somehow less bad than a direct cast.

If the framework recommends that users don't add properties to an application -- and ruthlessly eliminated all standard properties and descendants -- then why would the framework turn around and add support -- at considerable cost in maintenance, readability and extensibility -- for callers that expect a certain type of application?

Wrapping up

Let's take a look at the non-generic implementation and see what we lose or gain. The final version of the IApplicationManager API is shown below, which properly balances the concerns of all stakeholders and hopefully will stand the test of time (or at least last until the next major revision).

IApplication CreateAndStartUp(
  Func<IApplicationCreationSettings, IApplication> createApplication
);

IApplicationExecutionTranscript Run(
  Func<IApplicationCreationSettings, IApplication> createApplication,
  Action<IApplication> run
);
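
Combined with the startup example from above, a caller that really wants its own application type back pays for it with a single, local cast instead of with generics throughout the framework. A sketch:

internal static void Main()
{
  new ApplicationManager().Run(CreateApplication, RunApplication);
}

private static IApplication CreateApplication(
  IApplicationCreationSettings applicationCreationSettings
)
{
  return new CodeGeneratorApplication().UseRemoteServer().UseCommandLine();
}

private static void RunApplication(IApplication application)
{
  // The one place the caller needs its own type back: a single cast.
  var codeGeneratorApplication = (CodeGeneratorApplication)application;

  // ...execute the application's main task...
}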

These are the hard questions of API design: ensuring consistency, enforcing intent and balancing simplicity and cleanliness of code with expressiveness.



  1. A predilection of mine, I'll admit, especially when writing about a topic about which I've thought quite a lot. In those cases, the instinct to just skip "the object" and move on to the esoteric details that stand in the way of an elegant, perfect solution, is very, very strong.

  2. This more-realized typing was so attractive that we used it in many places in Quino without properly weighing the consequences. This article is the result of reconsidering that decision.

  3. Yes, the C# compiler will allow you to elide generics for most method calls (so long as the compiler can determine the types of the parameters without it). However, generics cannot be removed from constructor calls. These must always specify all generic parameters, which makes for messier-looking, lengthy code in the caller, e.g. when creating the ApplicationManager were it to have been defined with generic parameters. Yet another thing to consider when choosing how to define your API.

  4. As already mentioned elsewhere (but it bears repeating): callers can, of course, eschew the generic types and use IApplication everywhere -- and most probably will, because the advantage offered by making everything generic is vanishingly small. If your API looks this scary, entropy will eat it alive before the end of the week, to say nothing of its surviving to the next major version.

  5. A more subtle issue that arises is if you do end up -- even accidentally -- mixing generic and non-generic calls (i.e. using IApplication as the extended parameter in some cases and TApplication in others). This issue is in how the application object is registered in the IOC. During development, when the framework was still using generics everywhere (or almost everywhere), some parts of the code were retrieving a reference to the application using the most-derived type whereas the application had been registered in the container as a singleton using IApplication. The call to retrieve the most derived type returned a new instance of the application rather than the pre-registered singleton, which was a subtle and difficult bug to track down.

API Design: Running an Application (Part I)

In this article, we're going to discuss a bit more about the configuration library in Quino 2.0.

Other entries on this topic have been the articles about Encodo's configuration library for Quino: part I, part II and part III.

The goal of this article is to discuss a concrete example of how we decided whether to use generic type parameters throughout the configuration part of Quino. The meat of that discussion will be in a part 2 because we're going to have to lay some groundwork about the features we want first. (Requirements!)

A Surfeit of Generics

As of Quino 2.0-beta2, the configuration library consisted of a central IApplication interface which has a reference to an IOC container and a list of startup and shutdown actions.

As shown in part III, these actions no longer have a generic TApplication parameter. This makes it not only much easier to use the framework, but also easier to extend it. In this case, we were able to remove the generic parameter without sacrificing any expressiveness or type-safety.

As of beta2, there were still several places where generic TApplication parameters were cluttering the API. Could we perhaps optimize further? Throw out even more complexity without losing anything?

Starting up an application

One of these places is the actual engine that executes the startup and shutdown actions. This code is a bit trickier than just a simple loop because Quino supports execution in debug mode -- without exception-handling -- and release mode -- with global exception-handling and logging.

As with any application that uses an IOC container, there is a configuration phase, during which the container can be changed, and an execution phase, during which the container produces objects but can no longer be re-configured.

Until 2.0-beta2, the execution engine was encapsulated in several extension methods called Run(), StartUp() and so on. These methods were generally generic in TApplication. I write "generally" because there were some inconsistencies with extension methods for custom application types like Winform or Console applications.

While extension methods can be really useful, this usage was not really appropriate as it violated the open/closed principle. For the final release of Quino, we wanted to move this logic into an IApplicationManager so that applications using Quino could (A) choose their own logic for starting an application and (B) add this startup class to a non-Quino IOC container if they wanted to.

Application Execution Modes

So far, so good. Before we discuss how to rewrite the application manager/execution engine, we should quickly revisit what exactly this engine is supposed to do. As it turns out, not only do we want to make an architectural change to render the design more open for extension, but the basic algorithm for starting an application changed as well.

What does it mean to run an application?

Quino has always acknowledged and kinda/sorta supported the idea that a single application can be run in different ways. Even an execution that results in immediate failure technically counts as an execution, as a traversal of the state machine defined by the application.

If we view an application as the state machine that it is, then every application has at least two terminal nodes: OK and Error.

But what does OK mean for an application? In Quino, it means that all startup actions were executed without error and the run() action passed in by the caller was also executed without error. Anything else results in an exception and is shunted to Error.

But is that true, really? Can you think of other ways in which an application could successfully execute without really having failed? For most applications, the answer is yes. Almost every application -- and certainly every Quino application -- supports a command line. One of the default options for the command line of a Quino application is -h, which shows a manual for the other command-line options.

If the application is running in a console, this manual is printed to the console; for a Winform application, a dialog box is shown; and so on.

This "help" mode is actually a successful execution of the application that did not result in the main event loop of the application being executed.

Thought of in this way, any command-line option that controls application execution could divert the application to another type of terminal node in the state machine. A good example is when an application provides support for importing or exporting data via the command line.

"Canceled" Terminal Nodes

A terminal node is also not necessarily only Crashed or Ok. Almost any application will also need to have a Canceled mode that is a perfectly valid exit state. For example,

  • If the application requires a login during execution (startup), but the user aborts authentication
  • If the application supports schema migration, but the user aborts without migrating the schema

These are two ways in which a standard Quino application could run to completion without crashing but without having accomplished any of its main tasks. It ran and it didn't crash, but it also didn't do anything useful.

Intermediate Nodes in the Application State Machine

This section title sounds a bit pretentious, but that's exactly what we want to discuss here. Instead of having just start and terminal nodes, the Quino startup supports cycles through intermediate nodes as well. What the hell does that mean? It means that some nodes may trigger Quino to restart in a different mode in order to handle a particular kind of error condition that could be repaired.1

A concrete example is desperately needed here, I think. The main use of this feature in Quino right now is to support on-the-fly schema-migration without forcing the user to restart the application. This feature has been in Quino from the very beginning and is used almost exclusively by developers during development. The use case to support is as follows:

  1. Developer is running an application
  2. Developer makes a change to the model (or pulls changes from the server)
  3. Developer runs the application with a pending schema change
  4. Application displays migration tool; developer can easily migrate the schema and continue working

This workflow minimizes the amount of trouble that a developer has when either making changes or when integrating changes from other developers. In all cases in which the application model is different from the developer's database schema, it's very quick and easy to upgrade and continue working.

"Rescuing" an application in Quino 2.0

How does this work internally in Quino 2.0? The application starts up but somehow encounters an error that indicates that a schema migration might be required. This can happen in one of two ways:

  1. The schema-verification step in the standard Quino startup detects a change in the application model vis-à-vis the data schema
  2. Some other part of the startup accesses the database and runs into a DatabaseException that is indicative of a schema-mismatch

In both cases, the application that was running throws an ApplicationRestartException, which the standard IApplicationManager implementation knows how to handle. It handles it by shutting down the running application instance and asking the caller to create a new application, but this time one that knows how to handle the situation that caused the exception. Concretely, the exception includes an IApplicationCreationSettings descendant that the caller can use to decide how to customize the application to handle that situation.

The manager then runs this new application to completion (or until a new RestartApplicationException is thrown), shuts it down, and asks the caller to create the original application again, to give it another go.

In the example above, if the user has successfully migrated the schema, then the application will start on this second attempt. If not, then the manager enters the cycle again, attempting to repair the situation so that it can get to a terminal node. Naturally, the user can cancel the migration and the application also exits gracefully, with a Canceled state.
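A sketch of that restart cycle is shown below. ApplicationRestartException and IApplicationCreationSettings come from the description above; the stubs and member names are assumptions, and cancellation and the execution transcript are elided to keep the example small.

using System;

public interface IApplication : IDisposable { }

public interface IApplicationCreationSettings { }

public class ApplicationRestartException : Exception
{
  public ApplicationRestartException(IApplicationCreationSettings settings)
  {
    ApplicationCreationSettings = settings;
  }

  public IApplicationCreationSettings ApplicationCreationSettings { get; private set; }
}

public class ApplicationManager
{
  public void Run(
    Func<IApplicationCreationSettings, IApplication> createApplication,
    Action<IApplication> run)
  {
    IApplicationCreationSettings settings = null; // standard settings on the first pass

    while (true)
    {
      var application = createApplication(settings);
      try
      {
        run(application);

        if (settings == null)
        {
          return; // the original application ran to completion
        }

        settings = null; // the rescue application succeeded; retry the original
      }
      catch (ApplicationRestartException exception)
      {
        // Create an application that knows how to handle the situation
        // (e.g. the schema-migration tool) on the next iteration.
        settings = exception.ApplicationCreationSettings;
      }
      finally
      {
        application.Dispose(); // shut down the running instance
      }
    }
  }
}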

A few examples of possible application execution paths:

  • Standard => OK
  • Standard => Error
  • Standard => Canceled
  • Standard => Restart => Migrator => Standard => OK
  • Standard => Restart => Migrator => Canceled

The pattern is the same for interactive, client applications as for headless applications like test suites, which attempt migration once and abort if not successful. Applications like web servers or other services will generally only support the OK and Error states and fail when they encounter a RestartApplicationException.

Still, it's nice to know that the pattern is there, should you need it. It fits relatively cleanly into the rest of the API without making it more complicated. The caller passes two functions to the IApplicationManager: one to create an application and one to run it.

An example from the Quino CodeGeneratorApplication is shown below:

internal static void Main()
{
  new ApplicationManager().Run(CreateApplication, GenerateCode);
}

private static IApplication CreateApplication(
  IApplicationCreationSettings applicationCreationSettings
) { ... }

private static void GenerateCode(IApplication application) { ... }

We'll see in the next post what the final API looks like and how we arrived at the final version of that API in Quino 2.0.



  1. Or rescued, using the nomenclature from Eiffel exception-handling, which actually does something very similar. The exception handling in most languages lets you clean up and move on, but the intent isn't necessarily to re-run the code that failed. In Eiffel, this is exactly how exception-handling works: fix whatever was broken and re-run the original code. Quino now works very much like this as well.

ReSharper Unit Test Runner 9.x update

Way back in February, I wrote about my experiences with ReSharper 9 when it first came out. The following article provides an update, this time with version 9.2, released just last week.

tl;dr: I'm back to ReSharper 8.2.3 and am a bit worried about the state of the 9.x series of ReSharper. Ordinarily, JetBrains has eliminated performance, stability and functional issues by the first minor version-update (9.1), to say nothing of the second (9.2).

Test Runner

In the previous article, my main gripe was with the unit-test runner, which was unusable due to flakiness in the UI, execution and change-detection. With the release of 9.2, the UI and change-detection problems have been fixed, but the runner is still quite flaky at executing tests.

What follows is the text of the report that I sent to JetBrains when they asked me why I uninstalled R# 9.2.

As with 9.0 and 9.1, I am unable to productively use the 9.2 Test Runner with many of my NUnit tests. These tests are not straight-up, standard tests, but R# 8.2.3 handled them without any issues whatsoever.

What's special about my tests?

There are quite a few base classes providing base functionality. The top layers provide scenario-specific input via a generic type parameter.

- **TestsBase**
  - **OtherBase<TMixin>**
     (7 of these, one with an NUnit CategoryAttribute)
    - **ConcreteTests<TMixin>**
       (defines tests with NUnit TestAttributes)
      - **ProviderAConcreteTests<TMixin>**
         (CategoryAttribute)
        - **ProtocolAProviderAConcreteTests**
          (TMixin = ProtocolAProviderA; TestFixtureAttribute, CategoryAttributes)
        - **ProtocolBProviderAConcreteTests**
          (TMixin = ProtocolBProviderA; TestFixtureAttribute, CategoryAttributes)
      - **ProviderBConcreteTests<TMixin>**
         (CategoryAttribute)
        - **ProtocolAProviderBConcreteTests**
          (TMixin = ProtocolAProviderB; TestFixtureAttribute, CategoryAttributes)
        - **ProtocolBProviderBConcreteTests**
          (TMixin = ProtocolBProviderB; TestFixtureAttribute, CategoryAttributes)

The test runner in 9.2 is not happy with this at all. The test explorer shows all of the tests correctly, with the test counts correct. If I select a node for all tests for ProviderB and ProtocolA (696 tests in 36 fixtures), R# loads 36 non-expandable nodes into the runner and, after a bit of a wait, marks them all as inconclusive. Running an individual test-fixture node does not magically cause the tests to load or appear and also shows inconclusive (after a while; it seems the fixture setup executes as expected but the results are not displayed).

If I select a specific, concrete fixture and add or run those tests, R# loads and executes the runner correctly. If I select multiple test fixtures in the explorer and add them, they also show up as expandable nodes, with the correct test counts, and can be executed individually (per fixture). However, if I elect to run them all by running the parent node, R# once again marks everything as inconclusive.

As I mentioned, 8.2.3 handles this correctly and I feel R# 9.2 isn't far off -- the unit-test explorer does, after all, show the correct tests and counts. In 9.2, it's not only inconvenient, but I'm worried that my tests are not being executed with the expected configuration.

Also, I really missed the StyleCop plugin for 9.2. There's a beta version for 9.1 that caused noticeable lag, so I'm still waiting for a more unobtrusive version for 9.2 (or any version at all).

While it's possible that there's something I'm doing wrong, or there's something in my installation that's strange, I don't think that's the problem. As I mentioned, test-running for the exact same solution with 8.2.3 is error-free and a pleasure to use. In 9.2, the test explorer shows all of the tests correctly, so R# is clearly able to interpret the hierarchy and attributes (noted above) as I've intended them to be interpreted. This feels very much like a bug or a regression for which JetBrains doesn't have test coverage. I will try to work with them to help them get coverage for this case.

Real-Time StyleCop rules

Additionally, the StyleCop plugin is absolutely essential for my workflow and there still isn't an official release for any of the 9.x versions. ReSharper 9.2 isn't supported at all yet, even in prerelease form. The official Codeplex page shows the latest official version as 4.7, released in January of 2012 for ReSharper 8.2 and Visual Studio 2013. One would imagine that VS2015 support is in the works, but it's hard to say. There is a page for StyleCop in the ReSharper extensions gallery but that shows a beta4, released in April of 2015, that only works with ReSharper 9.1.x, not 9.2. I tested it with 9.1.x, but it noticeably slowed down the UI. While typing was mostly unaffected, scrolling and switching file-tabs was very laggy. Since StyleCop is essential for so many developers, it's hard to see why the plugin gets so little love from either JetBrains or Microsoft.

GoTo Word

The "Go To Word" plugin is not essential but it is an extremely welcome addition, especially with so much more client-side work depending on text-based bindings that aren't always detected by ReSharper. In those cases, you can find -- for example -- all the references of a Knockout template by searching just as you would for a type or member. Additionally, you benefit from the speed of the ReSharper indexing engine and search UI instead of using the comparatively slow and ugly "Find in Files" support in Visual Studio. Alternatives suggested in the comments to the linked issue above all depend on building yet another index of data (e.g. Sando Code Search Tool). JetBrains has pushed off integrating go-to-word until version 10. Again, not a deal-breaker, but a shame nonetheless, as I'll have to do without it in 9.x until version 10 is released.

With so much more client-side development going on in Visual Studio and with dynamic languages and data-binding languages that use name-matching for data-binding, GoToWord is more and more essential. Sure, ReSharper can continue to integrate native support for finding such references, but until that happens, we're stuck with the inferior Find-in-Files dialog or other extensions that increase the memory pressure for larger solutions.

Encodo Git Handbook 3.0

Encodo first published a Git Handbook for employees in September 2011 and last updated it in July of 2012. Since then, we've continued to use Git, refining our practices and tools. Although a lot of the content is still relevant, some parts are quite outdated and the overall organization suffered through several subsequent, unpublished updates.

What did we change from version 2.0?

  • We removed all references to the Encodo Git Shell. This shell was a custom environment based on Cygwin. It configured the SSH agent, set up environment variables and so on. Since tools for Windows have improved considerably, we no longer need this custom tool. Instead, we've moved to PowerShell and PoshGit to handle all of our Git command-line needs.
  • We removed all references to Enigma. This was a Windows desktop application developed by Encodo to provide an overview, eager-fetching and batch tasks for multiple Git repositories. We stopped development on this when SmartGit included all of the same functionality in versions 5 and 6.
  • We removed all detailed documentation for Git submodules. Encodo stopped using submodules (except for one legacy project) several years ago. We used to use submodules to manage external binary dependencies but have long since moved to NuGet instead.
  • We reorganized the chapters to lead off with a quick overview of Basic Concepts followed by a focus on Best Practices and our recommended Development Process. We also reorganized the Git-command documentation to use a more logical order.

You can download version 3 of the Git Handbook or get the latest copy from here.

Chapter 3, Best Practices and chapter 4, Development Process have been included in their entirety below.

3 Best Practices

3.1 Focused Commits

Focused commits are required; small commits are highly recommended. Keeping the number of changes per commit tightly focused on a single task helps in many cases.

  • When merge conflicts occur, they are easier to resolve
  • They can be more easily merged/rebased by Git
  • If a commit addresses only one issue, it is easier for a reviewer or reader to decide whether it should be examined.

For example, if you are working on a bug fix and discover that you need to refactor a file as well, or clean up the documentation or formatting, you should finish the bug fix first, commit it and then reformat, document or refactor in a separate commit.

Even if you have made a lot of changes all at once, you can still separate those changes into multiple commits to keep each commit focused. Git even allows you to split changes from a single file over multiple commits (the Git GUI provides this functionality, as does the index editor in SmartGit).
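A short command-line sketch of this technique (the file name and commit messages are invented for illustration):

# Interactively stage only the hunks that belong to the bug fix
git add --patch src/Startup.cs
git commit -m "Fix null reference during startup"

# Stage and commit the remaining cleanup changes separately
git add src/Startup.cs
git commit -m "Clean up formatting"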

3.2 Snapshots

Use the staging area to make quick snapshots without committing changes but still being able to compare them against more recent changes.

For example, suppose you want to refactor the implementation of a class (a command-line sketch follows this list).

  • Make some changes and run the tests; if everything's OK, stage those changes
  • Make more changes; now you can diff these new changes not only against the version in the repository but also against the version in the index (that you staged).
  • If the new version is broken, you can revert to the staged version or at least more easily figure out where you went wrong (because there are fewer changes to examine than if you had to diff against the original)
  • If the new version is OK, you can stage it and continue working
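A rough command-line sketch of this workflow (the same steps can be performed with the Git GUI or SmartGit):

git add .             # snapshot: stage everything that currently works
# ...make more changes...
git diff              # compare the working tree against the staged snapshot
git diff --staged     # compare the snapshot against the last commit
git checkout -- .     # if broken: discard the new changes and restore the staged snapshot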

3.3 Developing New Code

Where you develop new code depends entirely on the project release plan.

  • Code for releases should be committed to the release branch (if there is one) or to the develop branch if there is no release branch for that release
  • If the new code is a larger feature, then use a feature branch. If you are developing a feature in a hotfix or release branch, you can use the optional base parameter to base the feature on that branch instead of the develop branch, which is the default (see the sketch below).
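In plain Git commands, this might look like the following (branch names are invented for illustration):

# Create a feature branch based on develop (the default)
git checkout -b feature/invoicing develop

# ...or base it on a release branch instead
git checkout -b feature/invoicing release/v1.1.0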

3.4 Merging vs. Rebasing

Follow these rules for which command to use to combine two branches:

  • If both branches have already been pushed, then merge. There is no way around this, as you won't be able to push a non-merged result back to the origin.
  • If you work with branches that are part of the standard branching model (e.g. release, feature, etc.), then merge.
  • If both you and someone else made changes to the same branch (e.g. develop), then rebase. This is the default behavior during development (see the sketch below).
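A short sketch of the last two cases (branch names are invented for illustration):

# Someone else also pushed to develop: rebase local commits onto the latest state
git pull --rebase origin develop

# Finishing a feature branch from the standard model: merge it
git checkout develop
git merge feature/invoicing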

4 Development Process

A branching model is required in order to successfully manage a non-trivial project.

Whereas a trivial project generally has a single branch and few or no tags, a non-trivial project has a stable release with tags and possible hotfix branches, as well as a development branch with possible feature branches.

A common branching model in the Git world is called Git Flow. Previous versions of this manual included more specific instructions for using the Git Flow plugin for Git, but experience has shown that a less complex branching model is sufficient and that using standard Git commands is more transparent.

However, since Git Flow is a very widely used branching model, retaining the naming conventions helps new developers more easily understand how a repository is organized.

4.1 Branch Types

The following list shows the branch types as well as the naming convention for each type:

  • master is the main development branch. All other branches should be merged back to this branch (unless the work is to be discarded). Developers may apply commits and create tags directly on this branch.
  • feature/name is a feature branch. Feature branches are for changes that require multiple commits or coordination between multiple developers. When the feature is completed and stable, it is merged to the master branch after which it should be removed. Multiple simultaneous feature branches are allowed.
  • release/vX.X.X is a release branch. Although a project can be released (and tagged) directly from the master branch, some projects require a longer stabilization and testing phase before a release is ready. Using a release branch allows development on the master branch to continue normally without affecting the release candidate. Multiple simultaneous release branches are strongly discouraged.
  • hotfix/vX.X.X is a hotfix branch. Hotfix branches are always created from the release tag for the version in which the hotfix is required. These branches are generally very short-lived. If a hotfix is needed in a feature or release branch, it can be merged there as well (see the optional arrow in the following diagram).

The main difference from the Git Flow branching model is that there is no explicit stable branch. Instead, the last version tag serves the purpose just as well and is less work to maintain. For more information on where to develop code, see 3.3 Developing New Code.

4.2 Example

To get a better picture of how these branches are created and merged, the following diagram depicts many of the situations outlined above.

The diagram tells the following story:

  • Development began on the master branch
  • v1.0 was released directly from the master branch
  • Development on feature B began
  • A bug was discovered in v1.0 and the v1.0.1 hotfix branch was created to address it
  • Development on feature A began
  • The bug was fixed, v1.0.1 was released and the fix was merged back to the master branch
  • Development continued on master as well as features A and B
  • Changes from master were merged to feature A (optional merge)
  • Release branch v1.1 was created
  • Development on feature A completed and was merged to the master branch
  • v1.1 was released (without feature A), tagged and merged back to the master branch
  • Changes from master were merged to feature B (optional merge)
  • Development continued on both the master branch and feature B
  • v1.2 was released (with feature A) directly from the master branch

image

Legend:

  • Circles depict commits
  • Blue balloons are the first commit in a branch
  • Grey balloons are a tag
  • Solid arrows are a required merge
  • Dashed arrows are an optional merge

How Encodo sets up new workstations


We've recently set up a few new workstations with Windows 8.1 and wanted to share the process we use, in case it might come in handy for others.

Windows can take a long time to install, as can Microsoft Office and, most especially, Visual Studio with all of its service packs. If we installed everything manually every time we needed a new machine, we'd lose a day each time.

To solve this problem, we decided to define the Encodo Windows Base Image, which includes all of the standard software that everyone should have installed. Using this image saves a lot of time when you need to either install a new workstation or you'd like to start with a fresh installation if your current one has gotten a bit crufty.

Encodo doesn't have a lot of workstations, so we don't really need anything too enterprise-y, but we do want something that works reliably and quickly.

After a lot of trial and error, we've come up with the following scheme.

  • Maintain a Windows 8.1 image in a VMDK file
  • Use VirtualBox to run the image
  • Use Chocolatey for (almost) all software installation
  • Use Ubuntu Live on a USB stick (from which to boot)
  • Use Clonezilla to copy the image to the target drive

Installed Software

The standard loadout for developers comprises the following applications.

These are updated by Windows Update.

  • Windows 8.1 Enterprise
  • Excel
  • PowerPoint
  • Word
  • Visio
  • German Office Proofing Tools
  • Visual Studio 2013

These applications must be updated manually.

  • ReSharper Ultimate
  • Timesnapper

The rest of the software is maintained with Chocolatey.

  • beyondcompare (file differ)
  • conemu (PowerShell enhancement)
  • fiddler4 (HTTP traffic analyzer)
  • firefox
  • flashplayerplugin
  • git (source control)
  • googlechrome
  • greenshot (screenshot tool)
  • jitsi (VOIP/SIP)
  • jre8 (Java)
  • keepass (Password manager)
  • nodejs
  • pidgin (XMPP chat)
  • poshgit (PowerShell/Git integration)
  • putty (SSH)
  • smartgit (Git GUI)
  • stylecop (VS/R# extension)
  • sublimetext3 (text editor)
  • sumatrapdf (PDF viewer)
  • truecrypt (Drive encryption)
  • vlc (video/audio player/converter)
  • winscp (SSH file-copy tool)
  • wireshark (TCP traffic analyzer)

Maintaining the Image

This part has gotten quite simple.

  1. Load the VM with the Windows 8.1 image
  2. Apply Windows Updates
  3. Update ReSharper, if necessary
  4. Run choco upgrade all to update all Chocolatey packages
  5. Shut down the VM cleanly

Writing the image to a new SSD

The instructions we maintain internally are more detailed, but the general gist is as follows:

  1. Install the SSD in the target machine
  2. Plug in the Ubuntu Live USB stick
  3. Plug in the USB drive that has the Windows image and Clonezilla on it
  4. Boot to the Ubuntu desktop
  5. Make sure you have network access
  6. Install VirtualBox in Ubuntu from the App Center
  7. Create a VMDK file for the target SSD
  8. Start VirtualBox and create a new VM with the Windows image and SSD VMDK as drives and Clonezilla configured as a CD
  9. Start the VM and boot to Clonezilla
  10. Follow instructions, choose options and then wait 40 minutes to clone data
  11. Power off Clonezilla
  12. Shut down Ubuntu Live
  13. Unplug the USB drive and stick
  14. Boot your newly minted Windows 8.1 from the SSD
  15. Install Lenovo System Update (if necessary) and update drivers (if necessary)
  16. Add the machine to the Windows domain
  17. Remote-install Windows/Office licenses and activate Windows
  18. Remote-install Avira Antivirus
  19. Grant administrator rights to the owner of the laptop
  20. Use sysprep /generalize to reset Windows to an out-of-box experience (OOBE) for the new owner (see the sketch below)
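For reference, a typical invocation from an elevated command prompt looks something like this (the exact options depend on the deployment scenario):

C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown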

Conclusion

We're pretty happy with this approach and the loadout but welcome any feedback or suggestions to improve them. We've set up two notebooks in the last three weeks, but that's definitely a high-water mark for us. We expect to use this process one more time this year (in August, when a new hire arrives), but it's nice to know that we now have a predictable process.

v2.0-beta2: Code generation, IOC and configuration

The summary below describes major new features, items of note and breaking changes. The full list of issues is also available for those with access to the Encodo issue tracker.

Highlights

In beta1, we covered changes to configuration, the data-driver architecture, DDL commands, and security and access control in web applications.

In beta2, we made the following additional improvements:

Goodbye, old friends

This release addressed some issues that have been bugging us for a while (almost 3 years in one case).

  • QNO-3765 (32 months): After a schema migration caused by a DatabaseException on login, restart the application
  • QNO-4117 (27 months): PreferredType registration for models is not always executed
  • QNO-4408 (18 months): When access to the remoting server is unauthorized, the web site should respond with an error
  • QNO-4506 (14 months): The code generator should generate the persistent object and metadata references in separate classes
  • QNO-4507 (14 months): Business objects for modules should not rely on GlobalContext in generated code

You will not be missed.

Breaking changes

As we've mentioned before, this release is absolutely merciless in regard to backwards compatibility. Old code was not retained and marked as Obsolete; instead, a project upgrading to 2.0 will encounter compile errors.

That said, if you arm yourself with a bit of time, ReSharper and the release notes (and possibly an Encodo employee on speed-dial), the upgrade is not difficult. It consists mainly of letting ReSharper update namespace references for you. In cases where the update is not so straightforward, we've provided release notes.

V1 generated code support

One of the few things you'll be able to keep (at least for a minor version or two) is the old-style generated code. We made this concession because, while even a large solution can be upgraded from 1.13.0 to 2.0 relatively painlessly in about an hour (we've converted our own internal projects to test), changing the generated-code format is potentially a much larger change. Again, an upgrade to the generated-code format isn't complicated but it might require more than an hour or two's worth of elbow grease to complete.

Therefore, not only will you be able to retain your old generated code, but the code generator will continue to support the old-style code-generation format for further development. Expect the grace period to be relatively short, though.

Regardless of whether you elect to keep the old-style generated code, you'll have to do a little bit of extra work just to be able to generate code again.

  1. Manually update a couple of generated files, as shown below.
  2. Compile the solution
  3. Generate code with the Quino tools

Before you can regenerate, you'll have to manually update your previously generated code in the main model file, as shown below.

Previous version

static MyModel()
{
  Messages = new InMemoryRecorder();
  Loader = new ModelLoader(() => Instance, () => Messages, new MyModelGenerator());
}

public static IMetaModel CreateModel(IExtendedRecorder recorder)
{
  if (recorder == null) { throw new ArgumentNullException("recorder"); }

  var result = Loader.Generator.CreateModel(recorder);

  result.Configure();

  return result;
}

// More code ...

/// <inheritdoc/>
protected override void DoConfigure()
{
  base.DoConfigure();

  ConfigurePreferredTypes();
  ApplyCustomConfiguration();
}

Manually updated version

static MyModel()
{
  Messages = new InMemoryRecorder();
  Loader = new ModelLoader(() => Instance, () => Messages, new MyModelGenerator());
}

public static IMetaModel CreateModel(IExtendedRecorder recorder)
{
  if (recorder == null) { throw new ArgumentNullException("recorder"); }

  var result = (MyModel)new MyModelGenerator().CreateModel(
    ServiceLocator.Current.GetInstance<IExpressionParser>(),
    ServiceLocator.Current.GetInstance<IMetaExpressionFactory>(),
    recorder
  );

  result.ConfigurePreferredTypes();
  result.ApplyCustomConfiguration();

  return result;
}

/// <inheritdoc/>
protected override void DoConfigure()
{
  base.DoConfigure();

  ConfigurePreferredTypes();
  ApplyCustomConfiguration();
}

Integrate into the model builder

In the application configuration, the first time you generate code with Quino 2.0, you should use:

ModelLoader = MyModel.Loader;
this.UseMetaSimpleInjector();
this.UseModelLoader(MyModel.CreateModel);

After regenerating code, you should use the following for version-2 generated code:

ModelLoader = MyModel.Loader;
this.UseMetaSimpleInjector();
this.UseModelLoader(MyModelExtensions.CreateModelAndMetadata);

...and the following for version-1 generated code:

ModelLoader = MyModel.Loader;
this.UseMetaSimpleInjector();
this.UseModelLoader(MyModel.CreateModel);

Still to do by RTM

As you can see, we've already done quite a bit of work in beta1 and beta2. We have a few more tasks planned for the feature-complete release candidate for 2.0.

Move the schema-migration metadata table to a module.

The Quino schema migration extracts most of the information it needs from the database schema itself. It also stores extra metadata in a special table. This table has been with Quino since before modules were supported (over seven years) and hence was built in a completely custom manner. Moving this support to a Quino metadata module will remove unnecessary implementation and make the migration process more straightforward. (QNO-4888)

Separate the collection algorithm from the storage/display method in IRecorder and descendants.

The recording/logging library has a very good interface, but the implementation for the standard recorders has become too complex as we added support for multi-threading, custom disposal and so on. We want to clean this up to make it easier to extend the library with custom loggers. (QNO-4888)

Split up the Encodo and Quino assemblies based on functionality.

There are only a very few dependencies left to untangle (QNO-4678, QNO-4672, QNO-4670); after that, we'll split up the two main Encodo and Quino assemblies along functional lines. (QNO-4376)

Finish integrating building and publishing NuGet and symbol packages into Quino's release process.

And, finally, once we have the assemblies split up to our liking, we'll finalize the NuGet packages for the Quino library and leave the direct-assembly-reference days behind us, ready for Visual Studio 2015. (QNO-4376)

That's all we've got for now. See you next month for the next (and, hopefully, final) update!

Encodo's configuration library for Quino: part III

image

This discussion about configuration spans three articles:

  1. part I discusses the history of the configuration system in Quino as well as a handful of principles we kept in mind while designing the new system
  2. part II discusses the basic architectural changes and compares an example from the old configuration system to the new.
  3. part III takes a look at configuring the "execution order" -- the actions to execute during application startup and shutdown

Introduction

Registering with an IOC is all well and good, but something has to make calls into the IOC to get the ball rolling.

Something has to actually make calls into the IOC to get the ball rolling.

Even service applications -- which start up quickly and wait for requests to do most of their work -- have basic operations to execute before declaring themselves ready.

Things can get complex when starting up registered components and performing basic checks and non-IOC configuration.

  • In which order are the components and configuration elements executed?
  • How do you indicate dependencies?
  • How can an application replace a piece of the standard startup?
  • What kind of startup components are there?

Part of the complexity of configuration and startup is that developers quickly forget all of the things that they've come to expect from a mature product and start from zero again with each application. Encodo and Quino applications take advantage of prior work to include standard behavior for a lot of common situations.

Configuration Patterns

Some components can be configured once and directly by calling a method like UseMetaTranslations(string filePath), which includes all of the configuration options directly in the composition call. This pattern is perfect for options that are used by only one action or that wouldn't make sense to override in a subsequent action.

So, for simple actions, an application can just replace the existing action with its own, custom action. In the example above, an application for which translations had already been configured would just call UseMetaTranslations() again in order to override that behavior with its own.
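A minimal sketch of this pattern (the file names are invented for illustration):

// Standard composition configures translations once...
application.UseMetaTranslations("Data/translations.xml");

// ...and an application overrides that behavior simply by calling the method again.
application.UseMetaTranslations("Data/custom-translations.xml");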

Most applications will replace standard actions or customize standard settings.

Some components, however, will want to expose settings that can be customized by actions before they are used to initialize the component.

For example, there is an action called SetUpLoggingAction, which configures logging for the application. This action uses IFileLogSettings and IEventLogSettings objects from the IOC during execution to determine which types of logging to configure.

An application is, of course, free to replace the entire SetUpLoggingAction action with its own, completely custom behavior. However, an application that just wanted to change the log-file behavior or turn on event-logging could use the Configure<TService>() method1, as shown below.

application.Configure<IFileLogSettings>(
  s => s.Behavior = LogFileBehavior.MultipleFiles
);
application.Configure<IEventLogSettings>(
  s => s.Enabled = true
);

Actions

A Quino application object has a list of StartupActions and a list of ShutdownActions. Most standard middleware methods register objects with the IOC and add one or more actions to configure those objects during application startup.

Actions have existed for quite a while in Quino. In Quino 2, they have been considerably simplified and streamlined to the point where all but a handful are little more than a functional interface2.
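The shape of such a middleware method is roughly the following sketch (UseFileLogging, RegisterSingle and FileLogSettings are invented names for illustration; SetUpLoggingAction and IFileLogSettings appear above):

public static IApplication UseFileLogging(this IApplication application)
{
  // Hypothetical registration call: register the settings with the IOC so that
  // subsequent actions (or the application itself) can customize them.
  application.RegisterSingle<IFileLogSettings, FileLogSettings>();

  // Add the action that consumes those settings during startup.
  application.StartupActions.Add(new SetUpLoggingAction());

  return application;
}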

The list below will give you an idea of the kind of configuration actions we're talking about.

  • Load configuration data
  • Process command line
  • Set up logging
  • Upgrade settings/configuration (e.g. silent upgrade)
  • Log a header (e.g. user/date/file locations/etc.; for console apps, this might be mirrored to the console)
  • Load plugins
  • Set up standard locations (e.g. file-system locations)

For installed/desktop/mobile applications, there's also:

  • Initialize UI components
  • Provide loading feedback
  • Check/manage multiple running instances
  • Check software update
  • Login/authentication

Quino applications also have actions to configure metadata:

  • Configure expression engine
  • Load metadata
  • Load metadata-overlays
  • Validate metadata
  • Check data-provider connections
  • Check/migrate schema
  • Generate default data

Application shutdown has a smaller set of vital cleanup chores that:

  • dispose of connection managers and other open resources
  • write out to the log, flush it and close it
  • show final feedback to the user

Anatomy of an Action

The following example3 is for the 1.x version of the relatively simple ConfigureDisplayLanguageAction.

public class ConfigureDisplayLanguageAction<TApplication> 
  : ApplicationActionBase<TApplication>
  where TApplication : ICoreApplication
{
  public ConfigureDisplayLanguageAction()
    : base(CoreActionNames.ConfigureDisplayLanguage)
  {
  }

  protected override int DoExecute(
    TApplication application, ConfigurationOptions options, int currentResult)
  {
    // Configuration code...
  }
}

What is wrong with this startup action? The following list illustrates the main points, each of which is addressed in more detail in its own section further below.

  • The ConfigurationOptions parameter introduces an unnecessary layer of complexity
  • The generic parameter TApplication complicates declaration, instantiation and extension methods that use the action
  • The int return type, along with the currentResult parameter, is a bad way of controlling flow.

The same startup action in Quino 2.x has the following changes from the Quino 1.x version above (legend: lines prefixed with + are additions; lines prefixed with - are deletions).

-public class ConfigureDisplayLanguageAction<TApplication>
-  : ApplicationActionBase<TApplication>
-  where TApplication : ICoreApplication
+public class ConfigureDisplayLanguageAction : ApplicationActionBase
{
  public ConfigureDisplayLanguageAction()
    : base(CoreActionNames.ConfigureDisplayLanguage)
  {
  }

-  protected override int DoExecute(
-    TApplication application, ConfigurationOptions options, int currentResult)
+  public override void Execute()
  {
    // Configuration code...
  }
}

As you can see, quite a bit of code and declaration text was removed, all without sacrificing any functionality. The final form is quite simple, inheriting from a simple base class that manages the name of the action and overrides a single parameter-less method. It is now much easier to see what an action does and the barrier to entry for customization is much lower.

public class ConfigureDisplayLanguageAction : ApplicationActionBase
{
  public ConfigureDisplayLanguageAction()
    : base(CoreActionNames.ConfigureDisplayLanguage)
  {
  }

  public override void Execute()
  {
    // Configuration code...
  }
}

In the following sections, we'll take a look at each of the problems indicated above in more detail.

Remove the ConfigurationOptions parameter

These options are a simple enumeration with values like Client, Testing, Service and so on. They were used only by a handful of standard actions.

These options made it more difficult to decide how to implement the action for a given task. If two tasks were completely different, then a developer would know to create two separate actions. However, if two tasks were similar, but could be executed differently depending on application type (e.g. testing vs. client), then the developer could still have used two separate actions, but could also have used the configuration options. Multiple ways of doing the exact same thing is all kinds of bad.

Multiple ways of doing the exact same thing is all kinds of bad.

Parameters like this conflict conceptually with the idea of using composition to build an application. To keep things simple, Quino applications should be configured exclusively by composition. Composing an application with service registrations and startup actions and then passing options to the startup introduced an unneeded level of complexity.

Instead, an application now defines a separate action for each set of options. For example, most applications will need to set up the display language to use -- be it for a GUI, a command-line or just to log messages in the correct language. For that, the application can add a ConfigureDisplayLanguageAction to the startup actions or call the standard method UseCore(). Desktop or single-user applications can use the ConfigureGlobalDisplayLanguageAction or call UseGlobalCore() to make sure that global language resources are also configured.
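A sketch, using the actions named above and assuming the StartupActions list is directly accessible:

// A typical application composes the standard action...
application.StartupActions.Add(new ConfigureDisplayLanguageAction());

// ...while a desktop or single-user application composes the global variant instead.
application.StartupActions.Add(new ConfigureGlobalDisplayLanguageAction());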

Remove the TApplication generic parameter

The generic parameter to this interface complicates the IApplication<TApplication> interface and causes no end of trouble in MetaApplication, which actually inherits from IApplication<IMetaApplication> for historical reasons.

There is no need to maintain statelessness for a single-use object.

Originally, this parameter guaranteed that an action could be stateless. However, each action object is attached to exactly one application (in the IApplication<TApplication>.StartupActions list). So the action that is attached to an application is technically stateless, and a completely different application than the one to which the action is attached could be passed to IApplicationAction.Execute...which makes no sense whatsoever.

Luckily, this never happens, and only the application to which the action is attached is passed to that method. If that's the case, though, why not just create the action with the application as a constructor parameter when the action is added to the StartupActions list? There is no need to maintain statelessness for a single-use object.

This way, there is no generic parameter for the IApplication interface, all of the extension methods are much simpler and applications are free to create custom actions that work with descendants of IApplication simply by requiring that type in the constructor parameter.
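For example, a minimal sketch of a custom action (the action name is invented for illustration) that requires a more specific application type via its constructor:

public class ConfigureReportingAction : ApplicationActionBase
{
  private readonly IMetaApplication _application;

  public ConfigureReportingAction(IMetaApplication application)
    : base("ConfigureReporting")
  {
    if (application == null) { throw new ArgumentNullException("application"); }

    _application = application;
  }

  public override void Execute()
  {
    // Work with the strongly typed _application here...
  }
}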

Debugging is important

A global exception handler is terrible for debugging

The original startup avoided exceptions, preferring an integer return result instead.

In release mode, a global exception handler is active and is there to help the application exit more or less smoothly -- e.g. by logging the error, closing resources where possible, and so on.

A global exception handler is terrible for debugging, though. For exceptions that are caught, the default behavior of the debugger is to stop where the exception is caught rather than where it is thrown. Instead, you want exceptions raised by your application to stop the debugger where they are thrown.

So that's part of the reason why the startup and shutdown in 1.x used return codes rather than exceptions.

Multiple valid code paths

The other reason Quino used result codes is that most non-trivial applications actually have multiple paths through which they could successfully run.

Exactly which path the application should take depends on startup conditions, parameters and so on. Some common examples are:

  • Show command-line help
  • Migrate an application schema
  • Import, export or generate data

To show command-line help, an application executes its startup actions in order. It reaches the action that checks whether the user requested command-line help. This action processes the request, displays that help and then wants to exit the application smoothly. The "main" path -- perhaps showing the user a desktop application -- should no longer be executed.

Non-trivial applications have multiple valid run profiles.

Similarly, the action that checks the database schema might determine that the schema in the data provider doesn't match the model. In this case, it would like to offer the user (usually a developer) the option to update the schema. Once the schema is updated, though, startup should be restarted from the beginning, trying again to run the main path.

Use exceptions to indicate errors

The Quino 1.x startup addressed the design requirements above with return codes, but this imposed an undue burden on implementors. There was also confusion as to when it was OK to actually throw an exception rather than returning a special code.

Instead, the Quino 2.x startup always uses exceptions to indicate errors. There are a few special types of exceptions recognized by the startup code that can indicate whether the application should silently -- and successfully -- exit or whether the startup should be attempted again.
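The following sketch uses invented exception and action names purely to illustrate the flow:

// Stand-ins for the special exceptions recognized by the startup code.
public class ExitApplicationException : Exception { }
public class RestartApplicationException : Exception { }

public class ProcessCommandLineAction : ApplicationActionBase
{
  public ProcessCommandLineAction()
    : base("ProcessCommandLine")
  {
  }

  public override void Execute()
  {
    if (Array.IndexOf(Environment.GetCommandLineArgs(), "--help") >= 0)
    {
      Console.WriteLine("usage: myapp [options]");

      // Not an error: skip the remaining startup actions and exit successfully.
      throw new ExitApplicationException();
    }
  }
}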

Conclusion

There is, of course, more detail we could go into on much of what we discussed in these three articles, but this should suffice for an overview of the Quino configuration library.



  1. This pattern is echoed in the latest beta of the ASP.NET libraries, as described in the article Strongly typed routing for ASP.NET MVC 6 with IApplicationModelConvention.

  2. If C# had them, that is. See Java 8 for an explanation of what they are.

  3. Please note that formatting for the code examples has been adjusted to reduce horizontal space. The formatting does not conform to the Encodo C# Handbook.