API Design: To Generic or not Generic? (Part II)

In this article, I'm going to continue the discussion started in Part I, where we laid some groundwork about the state machine that is the startup/execution/shutdown feature of Quino. As we discussed, this part of the API still suffers from "several places where generic TApplication parameters [are] cluttering the API". We'll take a closer look at different design approaches to this concrete example -- and see how we decided whether to use generic type parameters.

Consistency through Patterns and API

Any decision you take with a non-trivial API is going to involve several stakeholders and aspects. It's often not easy to decide which path is best for your stakeholders and your product.

For any API you design, consider how others are likely to extend it -- and whether your pattern is likely to deteriorate from neglect. Even a very clever solution has to be balanced with simplicity and elegance if it is to have a hope in hell of being used and standing the test of time.

In Quino 2.0, the focus has been on ruthlessly eradicating properties on the IApplication interface as well as getting rid of the descendant interfaces, ICoreApplication and IMetaApplication. Because Quino now uses a pattern of placing sub-objects in the IOC associated with an IApplication, there is far less need for a generic TApplication parameter in the rest of the framework. See Encodo's configuration library for Quino: part I for more information and examples.

This focus raised an API-design question: if we no longer want descendant interfaces, should we eliminate parameters generic in that interface? Or should we continue to support generic parameters for applications so that the caller will always get back the type of application that was passed in?

Before getting too far into the weeds[^1], let's look at a few concrete examples to illustrate the issue.

Do Fluent APIs require generic return-parameters?

As discussed in Encodos configuration library for Quino: part III in detail, Quino applications are configured with the "Use*" pattern, where the caller includes functionality in an application by calling methods like UseRemoteServer() or UseCommandLine(). The latest version of this API pattern in Quino recommends returning the application that was passed in to allow chaining and fluent configuration.

For example, the following code chains the aforementioned methods together without creating a local variable or other clutter.

return new CodeGeneratorApplication().UseRemoteServer().UseCommandLine();

What should the return type of such standard configuration operations be? Taking a method above as an example, it could be defined as follows:

public static IApplication UseCommandLine(this IApplication application, string[] args) { ... }

This seems like it would work fine, but the original type of the application that was passed in is lost, which is not exactly in keeping with the fluent style. In order to maintain the type, we could define the method as follows:

public static TApplication UseCommandLine<TApplication>(this TApplication application, string[] args)
  where TApplication : IApplication
{ ... }

This style is not as succinct but has the advantage that the caller loses no type information. On the other hand, it's more work to define methods in this way and there is a strong likelihood that many such methods will simply be written in the style of the first example.

Why would other coders do that? Because it's easier to write code without generics, and because the stronger result type is not needed in 99% of the cases. If every configuration method expects and returns an IApplication, then the stronger type will never come into play. If the compiler isn't going to complain, you can expect a higher rate of entropy in your API right out of the gate.

One way the more-derived type would come in handy is if the caller wanted to define the application-creation method with their own type as a result, as shown below:

private static CodeGeneratorApplication CreateApplication()
{
  return new CodeGeneratorApplication().UseRemoteServer().UseCommandLine();
}

If the library methods expect and return IApplication values, the result of UseCommandLine() will be IApplication and requires a cast to be used as defined above. If the library methods are defined generic in TApplication, then everything works as written above.
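For example, here is roughly what the application-creation method would look like if the library methods were defined non-generically -- a sketch, with the cast being the point:

private static CodeGeneratorApplication CreateApplication()
{
  // The chain's static type degrades to IApplication after the first
  // non-generic Use* call, so the caller must cast to get its type back.
  return (CodeGeneratorApplication)new CodeGeneratorApplication()
    .UseRemoteServer()
    .UseCommandLine();
}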

This is definitely an advantage, in that the user gets the exact type back that they created. Generics definitely offer advantages, but it remains to be seen how much those advantages are worth.[^2]

Another example: The IApplicationManager

Before we examine the pros and cons further, let's look at another example.

In Quino 1.x, applications were created directly by the client program and passed into the framework. In Quino 2.x, the IApplicationManager is responsible for creating and executing applications. A caller passes in two functions: one to create an application and another to execute an application.

A standard application startup looks like this:[^3]

new ApplicationManager().Run(CreateApplication, RunApplication);

The question is: what should the types of the two function parameters be? Does CreateApplication return an IApplication or a caller-specific derived type? What is the type of the application parameter passed to RunApplication? Also IApplication? Or the more derived type returned by CreateApplication?

As with the previous example, if the IApplicationManager is to return a derived type, then it must be generic in TApplication and both function parameters will be generically typed as well. These generic types will trigger an avalanche of generic parameters(tm) throughout the other extension methods, interfaces and classes involved in initializing and executing applications.

That sounds horrible. This sounds like a pretty easy decision. Why are we even considering the alternative? Well, because it can be very advantageous if the application can declare RunApplication with a strictly typed signature, as shown below.

private static void RunApplication(CodeGeneratorApplication application) { ... }

Neat, right? I've got my very own type back.

Where Generics Go off the Rails

However, if the IApplicationManager is to call this function, then the signatures of CreateAndStartUp() and Run() have to be generic, as shown below.

TApplication CreateAndStartUp<TApplication>(
  Func<IApplicationCreationSettings, TApplication> createApplication
)
 where TApplication : IApplication;

IApplicationExecutionTranscript Run<TApplication>(
  Func<IApplicationCreationSettings, TApplication> createApplication,
  Action<TApplication> run
)
  where TApplication : IApplication;

These are quite messy -- and kinda scary -- signatures.[^3] If these core methods are already so complex, any other methods involved in startup and execution would have to be equally complex -- including helper methods created by calling applications.[^4]

The advantage here is that the caller will always get back the type of application that was created. The compiler guarantees it. The caller is not obliged to cast an IApplication back up to the original type. The disadvantage is that all of the library code is infected by a generic parameter with its attendant IApplication generic constraint.[^5]

Don't add Support for Conflicting Patterns

The title of this section seems pretty self-explanatory, but we as designers must remain vigilant against the siren call of what seems like a really elegant and strictly typed solution.

The generics above establish a pattern that must be adhered to by subsequent extenders and implementors. And to what end? So that a caller can attach properties to an application and access those in a statically typed manner, i.e. without casting?

But aren't properties on an application exactly what we just worked so hard to eliminate? Isn't the recommended pattern to create a "settings" object and add it to the IOC instead? That is, as of Quino 2.0, you get an IApplication and obtain the desired settings from its IOC. Technically, the cast is still taking place in the IOC somewhere, but that seems somehow less bad than a direct cast.
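Here's a sketch of that recommended pattern, using a hypothetical ICodeGeneratorSettings object; the service-locator call mirrors the style used elsewhere in these articles, but all names are illustrative:

// A hypothetical settings object, registered in the IOC during
// configuration and retrieved in a statically typed manner during
// execution -- no application-level property and no direct cast at
// the call site.
public interface ICodeGeneratorSettings
{
  string OutputPath { get; }
}

private static void RunApplication(IApplication application)
{
  var settings = ServiceLocator.Current.GetInstance<ICodeGeneratorSettings>();

  // ...generate code into settings.OutputPath...
}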

If the framework recommends that users don't add properties to an application -- and ruthlessly eliminated all standard properties and descendants -- then why would the framework turn around and add support -- at considerable cost in maintenance, readability and extensibility -- for callers that expect a certain type of application?

Wrapping up

Let's take a look at the non-generic implementation and see what we lose or gain. The final version of the IApplicationManager API, which properly balances the concerns of all stakeholders and will hopefully stand the test of time (or at least last until the next major revision), is shown below.

IApplication CreateAndStartUp(
  Func<IApplicationCreationSettings, IApplication> createApplication
);

IApplicationExecutionTranscript Run(
  Func<IApplicationCreationSettings, IApplication> createApplication,
  Action<IApplication> run
);
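For callers, using this API looks much like before; a caller that really does need its concrete application type back pays for it with a single, local cast (a sketch):

private static void RunApplication(IApplication application)
{
  // One cast at the boundary of the calling application instead of
  // generic parameters threaded throughout the framework.
  var codeGeneratorApplication = (CodeGeneratorApplication)application;

  // ...
}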

These are the hard questions of API design: ensuring consistency, enforcing intent and balancing simplicity and cleanliness of code with expressiveness.



  1. A predilection of mine, I'll admit, especially when writing about a topic about which I've thought quite a lot. In those cases, the instinct to just skip "the object" and move on to the esoteric details that stand in the way of an elegant, perfect solution, is very, very strong.

  2. This more-realized typing was so attractive that we used it in many places in Quino without properly weighing the consequences. This article is the result of reconsidering that decision.

  3. Yes, the C# compiler will allow you to elide generics for most method calls (so long as the compiler can determine the types of the parameters without it). However, generics cannot be removed from constructor calls. These must always specify all generic parameters, which makes for messier-looking, lengthy code in the caller, e.g. when creating the ApplicationManager, were it to have been defined with generic parameters. Yet another thing to consider when choosing how to define your API.

  4. As already mentioned elsewhere (but it bears repeating): callers can, of course, eschew the generic types and use IApplication everywhere -- and most probably will, because the advantage offered by making everything generic is vanishingly small. If your API looks this scary, entropy will eat it alive before the end of the week, to say nothing of its surviving to the next major version.

  5. A more subtle issue arises if you do end up -- even accidentally -- mixing generic and non-generic calls (i.e. using IApplication as the extended parameter in some cases and TApplication in others). The issue lies in how the application object is registered in the IOC. During development, when the framework was still using generics everywhere (or almost everywhere), some parts of the code were retrieving a reference to the application using the most-derived type whereas the application had been registered in the container as a singleton using IApplication. The call to retrieve the most-derived type returned a new instance of the application rather than the pre-registered singleton, which was a subtle and difficult bug to track down.

API Design: Running an Application (Part I)

In this article, we're going to discuss a bit more about the configuration library in Quino 2.0.

Other entries on this topic have been the articles about Encodo's configuration library for Quino: part I, part II and part III.

The goal of this article is to discuss a concrete example of how we decided whether to use generic type parameters throughout the configuration part of Quino. The meat of that discussion will be in part II because we're going to have to lay some groundwork about the features we want first. (Requirements!)

A Surfeit of Generics

As of Quino 2.0-beta2, the configuration library consisted of a central IApplication interface which has a reference to an IOC container and a list of startup and shutdown actions.

As shown in part III, these actions no longer have a generic TApplication parameter. This makes it not only much easier to use the framework, but also easier to extend it. In this case, we were able to remove the generic parameter without sacrificing any expressiveness or type-safety.

As of beta2, there were still several places where generic TApplication parameters were cluttering the API. Could we perhaps optimize further? Throw out even more complexity without losing anything?

Starting up an application

One of these places is the actual engine that executes the startup and shutdown actions. This code is a bit trickier than just a simple loop because Quino supports execution in debug mode -- without exception-handling -- and release mode -- with global exception-handling and logging.

As with any application that uses an IOC container, there is a configuration phase, during which the container can be changed, and an execution phase, during which the container produces objects but can no longer be re-configured.

Until 2.0-beta2, the execution engine was encapsulated in several extension methods called Run(), StartUp() and so on. These methods were generally generic in TApplication. I write "generally" because there were some inconsistencies with extension methods for custom application types like Winform or Console applications.

While extension methods can be really useful, this usage was not really appropriate as it violated the open/closed principle. For the final release of Quino, we wanted to move this logic into an IApplicationManager so that applications using Quino could (A) choose their own logic for starting an application and (B) add this startup class to a non-Quino IOC container if they wanted to.

Application Execution Modes

So far, so good. Before we discuss how to rewrite the application manager/execution engine, we should quickly revisit what exactly this engine is supposed to do. As it turns out, not only do we want to make an architectural change to make the design more open for extension, but the basic algorithm for starting an application changed, as well.

What does it mean to run an application?

Quino has always acknowledged and kinda/sorta supported the idea that a single application can be run in different ways. Even an execution that results in immediate failure technically counts as an execution, as a traversal of the state machine defined by the application.

If we view an application as the state machine that it is, then every application has at least two terminal nodes: OK and Error.

But what does OK mean for an application? In Quino, it means that all startup actions were executed without error and the run() action passed in by the caller was also executed without error. Anything else results in an exception and is shunted to Error.

But is that true, really? Can you think of other ways in which an application could successfully execute without really having failed? For most applications, the answer is yes. Almost every application -- and certainly every Quino application -- supports a command line. One of the default options for the command line of a Quino application is -h, which shows a manual for the other command-line options.

If the application is running in a console, this manual is printed to the console; for a Winform application, a dialog box is shown; and so on.

This "help" mode is actually a successful execution of the application that did not result in the main event loop of the application being executed.

Thought of in this way, any command-line option that controls application execution could divert the application to another type of terminal node in the state machine. A good example is when an application provides support for importing or exporting data via the command line.

"Canceled" Terminal Nodes

A terminal node is also not necessarily only Crashed or Ok. Almost any application will also need to have a Canceled mode that is a perfectly valid exit state. For example,

  • If the application requires a login during execution (startup), but the user aborts authentication
  • If the application supports schema migration, but the user aborts without migrating the schema

These are two ways in which a standard Quino application could run to completion without crashing but without having accomplished any of its main tasks. It ran and it didn't crash, but it also didn't do anything useful.

Intermediate Nodes in the Application State Machine

This section title sounds a bit pretentious, but that's exactly what we want to discuss here. Instead of having just start and terminal nodes, the Quino startup supports cycles through intermediate nodes as well. What the hell does that mean? It means that some nodes may trigger Quino to restart in a different mode in order to handle a particular kind of error condition that could be repaired.[^1]

A concrete example is desperately needed here, I think. The main use of this feature in Quino right now is to support on-the-fly schema-migration without forcing the user to restart the application. This feature has been in Quino from the very beginning and is used almost exclusively by developers during development. The use case to support is as follows:

  1. Developer is running an application
  2. Developer makes a change to the model (or pulls changes from the server)
  3. Developer runs the application with a schema-change
  4. Application displays migration tool; developer can easily migrate the schema and continue working

This workflow minimizes the amount of trouble that a developer has when either making changes or when integrating changes from other developers. In all cases in which the application model is different from the developer's database schema, it's very quick and easy to upgrade and continue working.

"Rescuing" an application in Quino 2.0

How does this work internally in Quino 2.0? The application starts up but somehow encounters an error that indicates that a schema migration might be required. This can happen in one of two ways:

  1. The schema-verification step in the standard Quino startup detects a change in the application model vis-à-vis the data schema
  2. Some other part of the startup accesses the database and runs into a DatabaseException that is indicative of a schema-mismatch

In both cases, the application that was running throws an ApplicationRestartException, which the standard IApplicationManager implementation knows how to handle. It handles it by shutting down the running application instance and asking the caller to create a new application, but this time one that knows how to handle the situation that caused the exception. Concretely, the exception includes an IApplicationCreationSettings descendant that the caller can use to decide how to customize the application to handle that situation.

The manager then runs this new application to completion (or until a new ApplicationRestartException is thrown), shuts it down, and asks the caller to create the original application again, to give it another go.

In the example above, if the user has successfully migrated the schema, then the application will start on this second attempt. If not, then the manager enters the cycle again, attempting to repair the situation so that it can get to a terminal node. Naturally, the user can cancel the migration and the application also exits gracefully, with a Canceled state.
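A minimal sketch of this restart cycle is shown below, with illustrative names: the real IApplicationManager also produces an IApplicationExecutionTranscript, takes only the two function parameters and treats cancellation of a rescue application as a terminal state; the sketch also assumes that IApplication is disposable.

public void Run(
  Func<IApplicationCreationSettings, IApplication> createApplication,
  Action<IApplication> run,
  IApplicationCreationSettings standardSettings)
{
  var settings = standardSettings;
  var finished = false;

  while (!finished)
  {
    var application = createApplication(settings);
    try
    {
      run(application);

      // Ran to a terminal node. If this was a rescue run (e.g. the
      // migrator), try the standard application again; otherwise stop.
      finished = ReferenceEquals(settings, standardSettings);
      settings = standardSettings;
    }
    catch (ApplicationRestartException exception)
    {
      // A repairable situation: restart with settings that describe
      // how to handle it (e.g. show the schema migrator).
      settings = exception.ApplicationCreationSettings;
    }
    finally
    {
      application.Dispose(); // shut down the instance in all cases
    }
  }
}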

A few examples of possible application execution paths:

  • Standard => OK
  • Standard => Error
  • Standard => Canceled
  • Standard => Restart => Migrator => Standard => OK
  • Standard => Restart => Migrator => Canceled

The pattern is the same for interactive client applications as for headless applications like test suites, which attempt migration once and abort if not successful. Applications like web servers or other services will generally only support the OK and Error states and fail when they encounter an ApplicationRestartException.

Still, it's nice to know that the pattern is there, should you need it. It fits relatively cleanly into the rest of the API without making it more complicated. The caller passes two functions to the IApplicationManager: one to create an application and one to run it.

An example from the Quino CodeGeneratorApplication is shown below:

internal static void Main()
{
  new ApplicationManager().Run(CreateApplication, GenerateCode);
}

private static IApplication CreateApplication(
  IApplicationCreationSettings applicationCreationSettings
) { ... }

private static void GenerateCode(IApplication application) { ... }

We'll see in the next post what the final API looks like and how we arrived at the final version of that API in Quino 2.0.



  1. Or rescued, using the nomenclature from Eiffel exception-handling, which actually does something very similar. The exception handling in most languages lets you clean up and move on, but the intent isn't necessarily to re-run the code that failed. In Eiffel, this is exactly how exception-handling works: fix whatever was broken and re-run the original code. Quino now works very much like this as well.

ReSharper Unit Test Runner 9.x update

Way back in February, I wrote about my experiences with ReSharper 9 when it first came out. The following article provides an update, this time with version 9.2, released just last week.

tl;dr: I'm back to ReSharper 8.2.3 and am a bit worried about the state of the 9.x series of ReSharper. Ordinarily, JetBrains has eliminated performance, stability and functional issues by the first minor version-update (9.1), to say nothing of the second (9.2).

Test Runner

In the previous article, my main gripe was with the unit-test runner, which was unusable due to flakiness in the UI, execution and change-detection. With the release of 9.2, the UI and change-detection problems have been fixed, but the runner is still quite flaky at executing tests.

What follows is the text of the report that I sent to JetBrains when they asked me why I uninstalled R# 9.2.

As with 9.0 and 9.1, I am unable to productively use the 9.2 Test Runner with many of my NUnit tests. These tests are not straight-up, standard tests, but R# 8.2.3 handled them without any issues whatsoever.

What's special about my tests?

There are quite a few base classes providing base functionality. The top layers provide scenario-specific input via a generic type parameter.

- **TestsBase**
  - **OtherBase<TMixin>**
     (7 of these, one with an NUnit CategoryAttribute)
    - **ConcreteTests<TMixin>**
       (defines tests with NUnit TestAttributes)
      - **ProviderAConcreteTests<TMixin>**
         (CategoryAttribute)
        - **ProtocolAProviderAConcreteTests**
          (TMixin = ProtocolAProviderA; TestFixtureAttribute, CategoryAttributes)
        - **ProtocolBProviderAConcreteTests**
          (TMixin = ProtocolBProviderA; TestFixtureAttribute, CategoryAttributes)
      - **ProviderBConcreteTests<TMixin>**
         (CategoryAttribute)
        - **ProtocolAProviderBConcreteTests**
          (TMixin = ProtocolAProviderB; TestFixtureAttribute, CategoryAttributes)
        - **ProtocolBProviderBConcreteTests**
          (TMixin = ProtocolBProviderB; TestFixtureAttribute, CategoryAttributes)
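In compressed C# form, the hierarchy above looks something like this (a sketch with placeholder names; only one provider/protocol pair is shown):

using NUnit.Framework;

public abstract class TestsBase { }

public abstract class OtherBase<TMixin> : TestsBase
  where TMixin : new() { }

public abstract class ConcreteTests<TMixin> : OtherBase<TMixin>
  where TMixin : new()
{
  [Test]
  public void TestSomething() { /* ... */ }
}

[Category("ProviderA")]
public abstract class ProviderAConcreteTests<TMixin> : ConcreteTests<TMixin>
  where TMixin : new() { }

public class ProtocolAProviderA { }

[TestFixture]
[Category("ProtocolA")]
public class ProtocolAProviderAConcreteTests
  : ProviderAConcreteTests<ProtocolAProviderA> { }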

The test runner in 9.2 is not happy with this at all. The test explorer shows all of the tests correctly, with the test counts correct. If I select a node for all tests for ProviderB and ProtocolA (696 tests in 36 fixtures), R# loads 36 non-expandable nodes into the runner and, after a bit of a wait, marks them all as inconclusive. Running an individual test-fixture node does not magically cause the tests to load or appear and also shows inconclusive (after a while; it seems the fixture setup executes as expected but the results are not displayed).

If I select a specific, concrete fixture and add or run those tests, R# loads and executes the runner correctly. If I select multiple test fixtures in the explorer and add them, they also show up as expandable nodes, with the correct test counts, and can be executed individually (per fixture). However, if I elect to run them all by running the parent node, R# once again marks everything as inconclusive.

As I mentioned, 8.2.3 handles this correctly and I feel R# 9.2 isn't far off -- the unit-test explorer does, after all, show the correct tests and counts. In 9.2, it's not only inconvenient, but I'm worried that my tests are not being executed with the expected configuration.

Also, I really missed the StyleCop plugin for 9.2. There's a beta version for 9.1 that caused noticeable lag, so I'm still waiting for a more unobtrusive version for 9.2 (or any version at all).

While it's possible that there's something I'm doing wrong, or there's something in my installation that's strange, I don't think that's the problem. As I mentioned, test-running for the exact same solution with 8.2.3 is error-free and a pleasure to use. In 9.2, the test explorer shows all of the tests correctly, so R# is clearly able to interpret the hierarchy and attributes (noted above) as I've intended them to be interpreted. This feels very much like a bug or a regression for which JetBrains doesn't have test coverage. I will try to work with them to help them get coverage for this case.

Real-Time StyleCop rules

Additionally, the StyleCop plugin is absolutely essential for my workflow and there still isn't an official release for any of the 9.x versions. ReSharper 9.2 isn't supported at all yet, even in prerelease form. The official Codeplex page shows the latest official version as 4.7, released in January of 2012 for ReSharper 8.2 and Visual Studio 2013. One would imagine that VS2015 support is in the works, but it's hard to say. There is a page for StyleCop in the ReSharper extensions gallery but that shows a beta4, released in April of 2015, that only works with ReSharper 9.1.x, not 9.2. I tested it with 9.1.x, but it noticeably slowed down the UI. While typing was mostly unaffected, scrolling and switching file-tabs was very laggy. Since StyleCop is essential for so many developers, it's hard to see why the plugin gets so little love from either JetBrains or Microsoft.

GoTo Word

The "Go To Word" plugin is not essential but it is an extremely welcome addition, especially with so much more client-side work depending on text-based bindings that aren't always detected by ReSharper. In those cases, you can find -- for example -- all the references of a Knockout template by searching just as you would for a type or member. Additionally, you benefit from the speed of the ReSharper indexing engine and search UI instead of using the comparatively slow and ugly "Find in Files" support in Visual Studio. Alternatives suggested in the comments to the linked issue above all depend on building yet another index of data (e.g. Sando Code Search Tool). JetBrains has pushed off integrating go-to-word until version 10. Again, not a deal-breaker, but a shame nonetheless, as I'll have to do without it in 9.x until version 10 is released.

With so much more client-side development going on in Visual Studio and with dynamic languages and data-binding languages that use name-matching for data-binding, GoToWord is more and more essential. Sure, ReSharper can continue to integrate native support for finding such references, but until that happens, we're stuck with the inferior Find-in-Files dialog or other extensions that increase the memory pressure for larger solutions.

Encodo Git Handbook 3.0

Encodo first published a Git Handbook for employees in September 2011 and last updated it in July of 2012. Since then, we've continued to use Git, refining our practices and tools. Although a lot of the content is still relevant, some parts are quite outdated and the overall organization suffered through several subsequent, unpublished updates.

What did we change from version 2.0?

  • We removed all references to the Encodo Git Shell. This shell was a custom environment based on Cygwin. It configured the SSH agent, set up environment variables and so on. Since tools for Windows have improved considerably, we no longer need this custom tool. Instead, we've moved to PowerShell and PoshGit to handle all of our Git command-line needs.
  • We removed all references to Enigma. This was a Windows desktop application developed by Encodo to provide an overview, eager-fetching and batch tasks for multiple Git repositories. We stopped development on this when SmartGit included all of the same functionality in versions 5 and 6.
  • We removed all detailed documentation for Git submodules. Encodo stopped using submodules (except for one legacy project) several years ago. We used to use submodules to manage external binary dependencies but have long since moved to NuGet instead.
  • We reorganized the chapters to lead off with a quick overview of Basic Concepts followed by a focus on Best Practices and our recommended Development Process. We also reorganized the Git-command documentation to use a more logical order.

You can download version 3 of the Git Handbook or get the latest copy from here.

Chapter 3, Best Practices and chapter 4, Development Process have been included in their entirety below.

3 Best Practices

3.1 Focused Commits

Focused commits are required; small commits are highly recommended. Keeping the number of changes per commit tightly focused on a single task helps in many cases.

  • Merge conflicts that occur are easier to resolve
  • They can be more easily merged/rebased by Git
  • If a commit addresses only one issue, it is easier for a reviewer or reader to decide whether it should be examined.

For example, if you are working on a bug fix and discover that you need to refactor a file as well, or clean up the documentation or formatting, you should finish the bug fix first, commit it and then reformat, document or refactor in a separate commit.

Even if you have made a lot of changes all at once, you can still separate changes into multiple commits to keep those commits focused. Git even allows you to split changes from a single file over multiple commits (the Git Gui provides this functionality as does the index editor in SmartGit).

3.2 Snapshots

Use the staging area to make quick snapshots without committing changes but still being able to compare them against more recent changes.

For example, suppose you want to refactor the implementation of a class.

  • Make some changes and run the tests; if everything's OK, stage those changes
  • Make more changes; now you can diff these new changes not only against the version in the repository but also against the version in the index (that you staged).
  • If the new version is broken, you can revert to the staged version or at least more easily figure out where you went wrong (because there are fewer changes to examine than if you had to diff against the original)
  • If the new version is ok, you can stage it and continue working

3.3 Developing New Code

Where you develop new code depends entirely on the project release plan.

  • Code for releases should be committed to the release branch (if there is one) or to the develop branch if there is no release branch for that release
  • If the new code is a larger feature, then use a feature branch. If you are developing a feature in a hotfix or release branch, you can use the optional base parameter to base the feature on that branch instead of the develop branch, which is the default.

3.4 Merging vs. Rebasing

Follow these rules for which command to use to combine two branches:

  • If both branches have already been pushed, then merge. There is no way around this, as you won't be able to push a non-merged result back to the origin.
  • If you work with branches that are part of the standard branching model (e.g. release, feature, etc.), then merge.
  • If both you and someone else made changes to the same branch (e.g. develop), then rebase. This will be the default behavior during development.

4 Development Process

A branching model is required in order to successfully manage a non-trivial project.

Whereas a trivial project generally has a single branch and few or no tags, a non-trivial project has a stable release (with tags and possible hotfix branches) as well as a development branch (with possible feature branches).

A common branching model in the Git world is called Git Flow. Previous versions of this manual included more specific instructions for using the Git Flow-plugin for Git but experience has shown that a less complex branching model is sufficient and that using standard Git commands is more transparent.

However, since Git Flow is a very widely used branching model, retaining the naming conventions helps new developers more easily understand how a repository is organized.

4.1 Branch Types

The following list shows the branch types as well as the naming convention for each type:

  • master is the main development branch. All other branches should be merged back to this branch (unless the work is to be discarded). Developers may apply commits and create tags directly on this branch.
  • feature/name is a feature branch. Feature branches are for changes that require multiple commits or coordination between multiple developers. When the feature is completed and stable, it is merged to the master branch after which it should be removed. Multiple simultaneous feature branches are allowed.
  • release/vX.X.X is a release branch. Although a project can be released (and tagged) directly from the master branch, some projects require a longer stabilization and testing phase before a release is ready. Using a release branch allows development on the master branch to continue normally without affecting the release candidate. Multiple simultaneous release branches are strongly discouraged.
  • hotfix/vX.X.X is a hotfix branch. Hotfix branches are always created from the release tag for the version in which the hotfix is required. These branches are generally very short-lived. If a hotfix is needed in a feature or release branch, it can be merged there as well (see the optional arrow in the following diagram).

The main difference from the Git Flow branching model is that there is no explicit stable branch. Instead, the last version tag serves the purpose just as well and is less work to maintain. For more information on where to develop code, see 3.3 Developing New Code.

4.2 Example

To get a better picture of how these branches are created and merged, the following diagram depicts many of the situations outlined above.

The diagram tells the following story:

  • Development began on the master branch
  • v1.0 was released directly from the master branch
  • Development on feature B began
  • A bug was discovered in v1.0 and the v1.0.1 hotfix branch was created to address it
  • Development on feature A began
  • The bug was fixed, v1.0.1 was released and the fix was merged back to the master branch
  • Development continued on master as well as features A and B
  • Changes from master were merged to feature A (optional merge)
  • Release branch v1.1 was created
  • Development on feature A completed and was merged to the master branch
  • v1.1 was released (without feature A), tagged and merged back to the master branch
  • Changes from master were merged to feature B (optional merge)
  • Development continued on both the master branch and feature B
  • v1.2 was released (with feature A) directly from the master branch

[Diagram: the example branching history described above]

Legend:

  • Circles depict commits
  • Blue balloons are the first commit in a branch
  • Grey balloons are a tag
  • Solid arrows are a required merge
  • Dashed arrows are an optional merge

How Encodo sets up new workstations

We've recently set up a few new workstations with Windows 8.1 and wanted to share the process we use, in case it might come in handy for others.

Windows can take a long time to install, as can Microsoft Office and, most especially, Visual Studio with all of its service packs. If we installed everything manually every time we needed a new machine, we'd lose a day each time.

To solve this problem, we decided to define the Encodo Windows Base Image, which includes all of the standard software that everyone should have installed. Using this image saves a lot of time when you need to either install a new workstation or you'd like to start with a fresh installation if your current one has gotten a bit crufty.

Encodo doesn't have a lot of workstations, so we don't really need anything too enterprise-y, but we do want something that works reliably and quickly.

After a lot of trial and error, we've come up with the following scheme.

  • Maintain a Windows 8.1 image in a VMDK file
  • Use VirtualBox to run the image
  • Use Chocolatey for (almost) all software installation
  • Use Ubuntu Live on a USB stick (from which to boot)
  • Use Clonezilla to copy the image to the target drive

Installed Software

The standard loadout for developers comprises the following applications.

These are updated by Windows Update.

  • Windows 8.1 Enterprise
  • Excel
  • Powerpoint
  • Word
  • Visio
  • German Office Proofing Tools
  • Visual Studio 2013

These applications must be updated manually.

  • ReSharper Ultimate
  • Timesnapper

The rest of the software is maintained with Chocolatey.

  • beyondcompare (file differ)
  • conemu (PowerShell enhancement)
  • fiddler4 (HTTP traffic analyzer)
  • firefox
  • flashplayerplugin
  • git (source control)
  • googlechrome
  • greenshot (screenshot tool)
  • jitsi (VOIP/SIP)
  • jre8 (Java)
  • keepass (Password manager)
  • nodejs
  • pidgin (XMPP chat)
  • poshgit (Powershell/Git integration)
  • putty (SSH)
  • smartgit (GIT GUI)
  • stylecop (VS/R# extension)
  • sublimetext3 (text editor)
  • sumatrapdf (PDF viewer)
  • truecrypt (Drive encryption)
  • vlc (video/audio player/converter)
  • winscp (SSH file-copy tool)
  • wireshark (TCP traffic analyzer)

Maintaining the Image

This part has gotten quite simple.

  1. Load the VM with the Windows 8.1 image
  2. Apply Windows Updates
  3. Update ReSharper, if necessary
  4. Run choco upgrade all to update all Chocolatey packages
  5. Shut down the VM cleanly

Writing the image to a new SSD

The instructions we maintain internally are more detailed, but the general gist is to do the following:

  1. Install the SSD in the target machine
  2. Plug in the Ubuntu Live USB stick
  3. Plug in the USB drive that has the Windows image and Clonezilla on it
  4. Boot to the Ubuntu desktop
  5. Make sure you have network access
  6. Install VirtualBox in Ubuntu from the App Center
  7. Create a VMDK file for the target SSD
  8. Start VirtualBox and create a new VM with the Windows image and SSD VMDK as drives and Clonezilla configured as a CD
  9. Start the VM and boot to Clonezilla
  10. Follow instructions, choose options and then wait 40 minutes to clone data
  11. Power off Clonezilla
  12. Shut down Ubuntu Live
  13. Unplug the USB drive and stick
  14. Boot your newly minted Windows 8.1 from the SSD
  15. Install Lenovo System Update (if necessary) and update drivers (if necessary)
  16. Add the machine to the Windows domain
  17. Remote-install Windows/Office licenses and activate Windows
  18. Remote-install Avira Antivirus
  19. Grant administrator rights to the owner of the laptop
  20. Use sysprep /generalize to reset Windows to an OOB (Out-of-box) experience for the new owner

Conclusion

We're pretty happy with this approach and the loadout but welcome any feedback or suggestions to improve them. We've set up two notebooks in the last three weeks, but that's definitely a high-water mark for us. We expect to use this process one more time this year (in August, when a new hire arrives), but it's nice to know that we now have a predictable process.

v2.0-beta2: Code generation, IOC and configuration

The summary below describes major new features, items of note and breaking changes. The full list of issues is also available for those with access to the Encodo issue tracker.

Highlights

In beta1, we read about changes to configuration, the data driver architecture, DDL commands, and security and access control in web applications.

In beta-2, we made the following additional improvements:

Goodbye, old friends

This release addressed some issues that have been bugging us for a while (almost 3 years in one case).

  • QNO-3765 (32 months): After a schema migration caused by a DatabaseException on login, restart the application
  • QNO-4117 (27 months): PreferredType registration for models is not always executed
  • QNO-4408 (18 months): When access to the remoting server is unauthorized, the web site should respond with an error
  • QNO-4506 (14 months): The code generator should generate the persistent object and metadata references in separate classes
  • QNO-4507 (14 months): Business objects for modules should not rely on GlobalContext in generated code

You will not be missed.

Breaking changes

As we've mentioned before, this release is absolutely merciless in regard to backwards compatibility. Old code has not been retained and marked as Obsolete. Instead, a project upgrading to 2.0 will encounter compile errors.

That said, if you arm yourself with a bit of time, ReSharper and the release notes (and possibly keep an Encodo employee on speed-dial), the upgrade is not difficult. It consists mainly of letting ReSharper update namespace references for you. In cases where the update is not so straightforward, we've provided release notes.

V1 generated code support

One of the few things you'll be able to keep (at least for a minor version or two) is the old-style generated code. We made this concession because, while even a large solution can be upgraded from 1.13.0 to 2.0 relatively painlessly in about an hour (we've converted our own internal projects to test), changing the generated-code format is potentially a much larger change. Again, an upgrade to the generated-code format isn't complicated but it might require more than an hour or two's worth of elbow grease to complete.

Therefore, not only will you be able to retain your old generated code, but the code generator will also continue to support the old-style code-generation format for further development. Expect the grace period to be relatively short, though.

Regardless of whether you elect to keep the old-style generated code, you'll have to do a little bit of extra work just to be able to generate code again.

  1. Manually update a couple of generated files, as shown below.
  2. Compile the solution
  3. Generate code with the Quino tools

Before you can regenerate, you'll have to manually update your previously generated code in the main model file, as shown below.

Previous version

static MyModel()
{
  Messages = new InMemoryRecorder();
  Loader = new ModelLoader(() => Instance, () => Messages, new MyModelGenerator());
}

public static IMetaModel CreateModel(IExtendedRecorder recorder)
{
  if (recorder == null) { throw new ArgumentNullException("recorder"); }

  var result = Loader.Generator.CreateModel(recorder);

  result.Configure();

  return result;
}

// More code ...

/// <inheritdoc/>
protected override void DoConfigure()
{
  base.DoConfigure();

  ConfigurePreferredTypes();
  ApplyCustomConfiguration();
}

Manually updated version

static MyModel()
{
  Messages = new InMemoryRecorder();
  Loader = new ModelLoader(() => Instance, () => Messages, new MyModelGenerator());
}

public static IMetaModel CreateModel(IExtendedRecorder recorder)
{
  if (recorder == null) { throw new ArgumentNullException("recorder"); }

  var result = (MyModel)new MyModelGenerator().CreateModel(
    ServiceLocator.Current.GetInstance<IExpressionParser>(),
    ServiceLocator.Current.GetInstance<IMetaExpressionFactory>(),
    recorder
  );

  result.ConfigurePreferredTypes();
  result.ApplyCustomConfiguration();

  return result;
}

/// <inheritdoc/>
protected override void DoConfigure()
{
  base.DoConfigure();

  ConfigurePreferredTypes();
  ApplyCustomConfiguration();
}

Integrate into the model builder

In the application configuration, the first time you generate code with Quino 2.0, you should use:

ModelLoader = MyModel.Loader;
this.UseMetaSimpleInjector();
this.UseModelLoader(MyModel.CreateModel);

After regenerating code, you should use the following for version-2 generated code:

ModelLoader = MyModel.Loader;
this.UseMetaSimpleInjector();
this.UseModelLoader(MyModelExtensions.CreateModelAndMetadata);

...and the following for version-1 generated code:

ModelLoader = MyModel.Loader;
this.UseMetaSimpleInjector();
this.UseModelLoader(MyModel.CreateModel);

Still to do by RTM

As you can see, we've already done quite a bit of work in beta1 and beta2. We have a few more tasks planned for the feature-complete release candidate for 2.0.

Move the schema-migration metadata table to a module

The Quino schema migration extracts most of the information it needs from the database schema itself. It also stores extra metadata in a special table. This table has been with Quino since before modules were supported (over seven years) and hence was built in a completely custom manner. Moving this support to a Quino metadata module will remove unnecessary implementation and make the migration process more straightforward. (QNO-4888)

Separate the collection algorithm from the storage/display method in IRecorder and descendants

The recording/logging library has a very good interface but the implementation for the standard recorders has become too complex as we added support for multi-threading, custom disposal and so on. We want to clean this up to make it easier to extend the library with custom loggers. (QNO-4888)

Split up the Encodo and Quino assemblies based on functionality

There are only a very few dependencies left to untangle (QNO-4678, QNO-4672, QNO-4670); after that, we'll split up the two main Encodo and Quino assemblies along functional lines. (QNO-4376)

Finish integrating building and publishing NuGet and symbol packages into Quino's release process

And, finally, once we have the assemblies split up to our liking, we'll finalize the NuGet packages for the Quino library and leave the direct-assembly-reference days behind us, ready for Visual Studio 2015. (QNO-4376)

That's all we've got for now. See you next month for the next (and, hopefully, final update)!

Encodo's configuration library for Quino: part III

This discussion about configuration spans three articles:

  1. part I discusses the history of the configuration system in Quino as well as a handful of principles we kept in mind while designing the new system
  2. part II discusses the basic architectural changes and compares an example from the old configuration system to the new.
  3. part III takes a look at configuring the "execution order" -- the actions to execute during application startup and shutdown

Introduction

Registering with an IOC is all well and good, but something has to make calls into the IOC to get the ball rolling.

Even service applications -- which start up quickly and wait for requests to do most of their work -- have basic operations to execute before declaring themselves ready.

Things can get complex when starting up registered components and performing basic checks and non-IOC configuration.

  • In which order are the components and configuration elements executed?
  • How do you indicate dependencies?
  • How can an application replace a piece of the standard startup?
  • What kind of startup components are there?

Part of the complexity of configuration and startup is that developers quickly forget all of the things that they've come to expect from a mature product and start from zero again with each application. Encodo and Quino applications take advantage of prior work to include standard behavior for a lot of common situations.

Configuration Patterns

Some components can be configured once and directly by calling a method like UseMetaTranslations(string filePath), which includes all of the configuration options directly in the composition call. This pattern is perfect for options that are used only by one action or that it wouldn't make sense to override in a subsequent action.

So, for simple actions, an application can just replace the existing action with its own, custom action. In the example above, an application for which translations had already been configured would just call UseMetaTranslations() again in order to override that behavior with its own.
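In sketch form, such a method might look like the following; the StartupActions list exists in Quino, but the Replace() helper and the action class shown here are hypothetical:

public static IApplication UseMetaTranslations(
  this IApplication application, string filePath)
{
  // Replacing by action name: adding an action with the same name as an
  // existing one overrides the previously configured behavior
  // (hypothetical helper and action class).
  application.StartupActions.Replace(
    new ConfigureTranslationsAction(application, filePath));

  return application;
}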

Most applications will replace standard actions or customize standard settings

Some components, however, will want to expose settings that can be customized by actions before they are used to initialize the component.

For example, there is an action called SetUpLoggingAction, which configures logging for the application. This action uses IFileLogSettings and IEventLogSettings objects from the IOC during execution to determine which types of logging to configure.

An application is, of course, free to replace the entire SetUpLoggingAction action with its own, completely custom behavior. However, an application that just wanted to change the log-file behavior or turn on event-logging could use the Configure<TService>() method[^1], as shown below.

application.Configure<IFileLogSettings>(
  s => s.Behavior = LogFileBehavior.MultipleFiles
);
application.Configure<IEventLogSettings>(
  s => s.Enabled = true
);

Actions

A Quino application object has a list of StartupActions and a list of ShutdownActions. Most standard middleware methods register objects with the IOC and add one or more actions to configure those objects during application startup.

Actions have existed for quite a while in Quino. In Quino 2, they have been considerably simplified and streamlined to the point where all but a handful are little more than a functional interface[^2].

The list below will give you an idea of the kind of configuration actions we're talking about.

  • Load configuration data
  • Process command line
  • Set up logging
  • Upgrade settings/configuration (e.g. silent upgrade)
  • Log a header (e.g. user/date/file locations/etc.; for console apps, this might be mirrored to the console)
  • Load plugins
  • Set up standard locations (e.g. file-system locations)

For installed/desktop/mobile applications, there's also:

  • Initialize UI components
  • Provide loading feedback
  • Check/manage multiple running instances
  • Check software update
  • Login/authentication

Quino applications also have actions to configure metadata:

  • Configure expression engine
  • Load metadata
  • Load metadata-overlays
  • Validate metadata
  • Check data-provider connections
  • Check/migrate schema
  • Generate default data

Application shutdown has a smaller set of vital cleanup chores that:

  • dispose of connection managers and other open resources
  • write out to the log, flush it and close it
  • show final feedback to the user

Anatomy of an Action

The following example[^3] is for the 1.x version of the relatively simple ConfigureDisplayLanguageAction.

public class ConfigureDisplayLanguageAction<TApplication> 
  : ApplicationActionBase<TApplication>
  where TApplication : ICoreApplication
{
  public ConfigureDisplayLanguageAction()
    : base(CoreActionNames.ConfigureDisplayLanguage)
  {
  }

  protected override int DoExecute(
    TApplication application, ConfigurationOptions options, int currentResult)
  {
    // Configuration code...
  }
}

What is wrong with this startup action? The following list illustrates the main points, each of which is addressed in more detail in its own section further below.

  • The ConfigurationOptions parameter introduces an unnecessary layer of complexity
  • The generic parameter TApplication complicates declaration, instantiation and extension methods that use the action
  • The int return type along with the currentResult parameter are a bad way of controlling flow.

The same startup action in Quino 2.x has the following changes from the Quino 1.x version above (legend: lines prefixed with + are additions; lines prefixed with - are deletions).

-public class ConfigureDisplayLanguageAction<TApplication>
-  : ApplicationActionBase<TApplication>
-  where TApplication : ICoreApplication
+public class ConfigureDisplayLanguageAction : ApplicationActionBase
 {
   public ConfigureDisplayLanguageAction()
     : base(CoreActionNames.ConfigureDisplayLanguage)
   {
   }

-  protected override int DoExecute(
-    TApplication application, ConfigurationOptions options, int currentResult)
+  public override void Execute()
   {
     // Configuration code...
   }
 }

As you can see, quite a bit of code and declaration text was removed, all without sacrificing any functionality. The final form is quite simple, inheriting from a simple base class that manages the name of the action and overrides a single parameter-less method. It is now much easier to see what an action does and the barrier to entry for customization is much lower.

public class ConfigureDisplayLanguageAction : ApplicationActionBase
{
  public ConfigureDisplayLanguageAction()
    : base(CoreActionNames.ConfigureDisplayLanguage)
  {
  }

  public override void Execute()
  {
    // Configuration code...
  }
}

In the following sections, we'll take a look at each of the problems indicated above in more detail.

Remove the ConfigurationOptions parameter

These options are a simple enumeration with values like Client, Testing, Service and so on. They were used only by a handful of standard actions.

These options made it more difficult to decide how to implement the action for a given task. If two tasks were completely different, then a developer would know to create two separate actions. However, if two tasks were similar, but could be executed differently depending on application type (e.g. testing vs. client), then the developer could still have used two separate actions, but could also have used the configuration options. Multiple ways of doing the exact same thing is all kinds of bad.

Parameters like this conflict conceptually with the idea of using composition to build an application. To keep things simple, Quino applications should be configured exclusively by composition. Composing an application with service registrations and startup actions and then passing options to the startup introduced an unneeded level of complexity.

Instead, an application now defines a separate action for each set of options. For example, most applications will need to set up the display language to use -- be it for a GUI, a command-line or just to log messages in the correct language. For that, the application can add a ConfigureDisplayLanguageAction to the startup actions or call the standard method UseCore(). Desktop or single-user applications can use the ConfigureGlobalDisplayLanguageAction or call UseGlobalCore() to make sure that global language resources are also configured.

Remove the TApplication generic parameter

The generic parameter to this interface complicates the IApplication<TApplication> interface and causes no end of trouble in MetaApplication, which actually inherits from IApplication<IMetaApplication> for historical reasons.

Originally, this parameter guaranteed that an action could be stateless. However, each action object is attached to exactly one application (in the IApplication<TApplication>.StartupActions list). So the action that is attached to an application is technically stateless, yet a completely different application than the one to which the action is attached could be passed to IApplicationAction.Execute()...which makes no sense whatsoever.

Luckily, this never happens, and only the application to which the action is attached is passed to that method. If that's the case, though, why not just create the action with the application as a constructor parameter when the action is added to the StartupActions list? There is no need to maintain statelessness for a single-use object.

This way, there is no generic parameter for the IApplication interface, all of the extension methods are much simpler and applications are free to create custom actions that work with descendants of IApplication simply by requiring that type in the constructor parameter.
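For example, a custom action for a more-derived application type might look something like this sketch (the action name, the IMetaApplication requirement and the base-constructor argument are illustrative):

public class ValidateCustomModelAction : ApplicationActionBase
{
  private readonly IMetaApplication _application;

  public ValidateCustomModelAction(IMetaApplication application)
    : base("ValidateCustomModel") // hypothetical action name
  {
    _application = application;
  }

  public override void Execute()
  {
    // Work with the strongly typed application here; no generic
    // parameter and no cast required.
  }
}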

Debugging is important

A global exception handler is terrible for debugging

The original startup avoided exceptions, preferring an integer return result instead.

In release mode, a global exception handler is active and is there to help the application exit more or less smoothly -- e.g. by logging the error, closing resources where possible, and so on.

A global exception handler is terrible for debugging, though. For exceptions that are caught, the default behavior of the debugger is to stop where the exception is caught rather than where it is thrown. Instead, you want exceptions raised by your application to stop the debugger where they are thrown.
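
One common way to get both behaviors is to compile the global handler in for release builds only. The following is a general sketch, not Quino's actual startup code; RunApplication is a hypothetical helper.

static int Main(string[] args)
{
#if DEBUG
  // Let the debugger stop where exceptions are thrown.
  return RunApplication(args);
#else
  try
  {
    return RunApplication(args);
  }
  catch (Exception exception)
  {
    // Log the error, close resources where possible and exit
    // more or less smoothly.
    Console.Error.WriteLine(exception);
    return 1;
  }
#endif
}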

So that's part of the reason why the startup and shutdown in 1.x used return codes rather than exceptions.

Multiple valid code paths

The other reason Quino used result codes is that most non-trivial applications actually have multiple paths through which they could successfully run.

Exactly which path the application should take depends on startup conditions, parameters and so on. Some common examples are:

  • Show command-line help
  • Migrate an application schema
  • Import, export or generate data

To show command-line help, an application executes its startup actions in order until it reaches the action that checks whether the user requested command-line help. This action processes the request, displays that help and then wants to exit the application smoothly. The "main" path -- perhaps showing the user a desktop application -- should no longer be executed.

Non-trivial applications have multiple valid run profiles.

Similarly, the action that checks the database schema might determine that the schema in the data provider doesn't match the model. In this case, it would like to offer the user (usually a developer) the option to update the schema. Once the schema is updated, though, startup should be restarted from the beginning, trying again to run the main path.

Use exceptions to indicate errors

The Quino 1.x startup addressed the design requirements above with return codes, but this imposed an undue burden on implementors. There was also confusion as to when it was OK to actually throw an exception rather than returning a special code.

Instead, the Quino 2.x startup always uses exceptions to indicate errors. There are a few special types of exceptions recognized by the startup code that can indicate whether the application should silently -- and successfully -- exit or whether the startup should be attempted again.
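
The exception-type names in the following sketch are illustrative rather than Quino's actual types, but a startup loop built on this design might look like this:

public class SilentExitException : Exception { }

public class RestartStartupException : Exception { }

public static void RunStartup(IEnumerable<IApplicationAction> actions)
{
  var restart = true;
  while (restart)
  {
    restart = false;
    try
    {
      foreach (var action in actions)
      {
        action.Execute();
      }
    }
    catch (SilentExitException)
    {
      // E.g. command-line help was shown; exit successfully.
      return;
    }
    catch (RestartStartupException)
    {
      // E.g. the schema was migrated; run the startup again.
      restart = true;
    }
  }
}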

Conclusion

There is of course more detail into which we could go on much of what we discussed in these three articles, but that should suffice for an overview of the Quino configuration library.



  1. If C# had them, that is. See Java 8 for an explanation of what they are.

  2. This pattern is echoed in the latest beta of the ASP.NET libraries, as described in the article Strongly typed routing for ASP.NET MVC 6 with IApplicationModelConvention.

  3. Please note that formatting for the code examples has been adjusted to reduce horizontal space. The formatting does not conform to the Encodo C# Handbook.

Encodo's configuration library for Quino: part II

In this article, we'll continue the discussion about configuration started in part I. We wrapped up that part with the following principles to keep in mind while designing the new system.

  • Consistency
  • Opt-in configuration
  • Inversion of Control
  • Configuration vs. Execution
  • Common Usage

Borrowing from ASP.NET vNext

Quino's configuration inconsistencies and issues have been well-known for several versions -- and years -- but the opportunity to rewrite it comes only now with a major-version break.

Luckily for us, ASP.NET has been going through a similar struggle and evolution. We were able to model some of our terminology on the patterns from their next version. For example, ASP.NET has moved to a pattern where an application-builder object is passed to user code for configuration. The pattern there is to include middleware (what we call "configuration") by calling extension methods starting with "Use".

Quino has had a similar pattern for a while, but the method names varied: "Integrate", "Add", "Include"; these methods have now all been standardized to "Use" to match the prevailing .NET winds.

Begone configuration and feedback

Additionally, Quino used to make a distinction between an application instance and its "configuration" -- the template on which an application is based. No more. Too complicated. This design decision, coupled with the promotion of a platform-specific "Feedback" object to first-class citizen, led to an explosion of generic type parameters.1

The distinction between configuration (template) and application (instance) has been removed. Instead, there is just an application object to configure.

The feedback object is now to be found in the service locator. An application registers a platform-specific feedback to use as it would any other customization.

Hello service locator

ASP.NET vNext has made the service locator a first-class citizen. In ASP.NET, applications receive an IApplicationBuilder in one magic "Configure" method and an IServiceCollection in another magic "ConfigureServices" method.

In Quino 2.x, the application is in charge of creating the service container, though Quino provides a method to create and configure a standard one (SimpleInjector). That service locator is passed to the IApplication object and subsequently accessible there.

Services can of course be registered directly or by calling pre-packaged Middleware methods. Unlike ASP.NET vNext, Quino 2.x makes no distinction between configuring middleware and including the services required by that middleware.
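
For example -- with the caveat that GetServices() and CustomRecorder are hypothetical names used purely for illustration:

using (var application = new CustomApplication())
{
  // Pre-packaged middleware registers the services it needs...
  application.UseMetaWinformDx();

  // ...and the application can register or replace services directly.
  // GetServices() and CustomRecorder are illustrative, not actual
  // Quino API.
  application.GetServices().Register<IRecorder, CustomRecorder>();
}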

Begone configuration hierarchy

Quino's configuration library has its roots in a time before we were using an IOC container. The configuration was defined as a hierarchy of configuration classes that modeled the following layers.

  • A base implementation that makes only the most primitive assumptions about an application. For example, that it has a RunMode ("debug" or "release") or an exit code or that it has a logging mechanism (e.g. IRecorder).
  • The "Core" layer comprises application components that are very common, but do not depend on Quino's metadata.
  • And, finally, the "Meta" layer includes configuration for application components that extend the core with metadata-dependent versions as well as specific components required by Quino applications.

While these layers are still somewhat evident, the move to middleware packages has blurred the distinction between them. Instead of choosing a concrete configuration base class, an application now calls a handful of "Use" methods to indicate what kind of application to build.

There are, of course, still helpful top-level methods -- e.g. UseCore() and UseMeta() -- that pull in all of the middleware for the standard application types. But, crucially, the application is free to tweak this configuration with more granular calls to register custom configuration in the service locator.

This is a flexible and transparent improvement over passing esoteric parameters to monolithic configuration methods, as in the previous version.

An example: Configure a software updater

Just as a simple example, whereas a Quino 1.x standalone application would set ICoreConfiguration.UseSoftwareUpdater to true, a Quino 2.x application calls UseSoftwareUpdater(). Where a Quino 1.x Winform application would inherit from the WinformFeedback in order to return a customized ISoftwareUpdateFeedback, a Quino 2.x application calls UseSoftwareUpdateFeedback().

The software-update feedback class is defined below and is used by both versions.

public class CustomSoftwareUpdateFeedback : WinformSoftwareUpdateFeedback<IMetaApplication>
{
  protected override ResponseType DoConfirmUpdate(IMetaApplication application, ...)
  {
    ...
  }
}

That's where the similarities end, though. The code samples below show the stark difference between the old and new configuration systems.

Quino 1.x

As explained above, Quino 1.x did not allow registration of a sub-feedback like the software-updater. Instead, the application had to inherit from the main feedback and override a method to create the desired sub-feedback.

class CustomWinformFeedback : WinformFeedback
{
  public override ISoftwareUpdateFeedback<TApplication> GetSoftwareUpdateFeedback<TApplication, TConfiguration, TFeedback>()
  {
    return new CustomSoftwareUpdateFeedback(this);
  }
}

var configuration = new CustomConfiguration()
{
  UseSoftwareUpdater = true
};

WinformDxMetaConfigurationTools.Run(configuration, app => new CustomMainForm(app), new CustomWinformFeedback());

The method-override in the feedback was hideous and scared off a good many developers. Not only that, the pattern was to use a magical, platform-specific WinformDxMetaConfigurationTools.Run method to create an application, run it and dispose it.

Quino 2.x

Software-update feedback-registration in Quino 2.x adheres to the principles outlined at the top of the article: it is consistent and uses common patterns (functionality is included and customized with methods named "Use"), configuration is opt-in, and the IOC container is used throughout (albeit implicitly with these higher-level configuration methods).

using (var application = new CustomApplication())
{
  application.UseMetaWinformDx();
  application.UseSoftwareUpdater();
  application.UseSoftwareUpdateFeedback(new CustomSoftwareUpdateFeedback());
  application.Run(app => new CustomMainForm(app));
}

Additionally, the program has complete control over creation, running and disposal of the application. No more magic and implicit after-the-fact configuration.

What comes after configuration?

In the next and (hopefully) final article, we'll take a look at configuring execution -- the actions to execute during startup and shutdown. Registering objects in a service locator is all well and good, but calls into the service locator have to be made in order for anything to actually happen.

Keeping this system flexible and addressing standard application requirements is a challenging but not insurmountable problem. Stay tuned.


  1. The CustomWinformFeedback in the Quino 1.x code at the end of this article provides a glaring example.

Encodo's configuration library for Quino: part I

In this article, I'll continue the discussion about configuration improvements mentioned in the release notes for Quino 2.0-beta1. With beta2 development underway, I thought I'd share some more of the thought process behind the forthcoming changes.

Software Libraries

What sort of patterns integrate and customize the functionality of libraries in an application?

An application comprises multiple tasks, only some of which are part of that application's actual domain. For those parts not in the application domain, software developers use libraries. A library captures a pattern or a particular way of doing something, making it available through an abstraction. These abstractions simplify and smooth away detail irrelevant to the application.

A runtime and its standard libraries provide many such abstractions: for reading/writing files, connecting to networks and so on. Third-party libraries provide others, like logging, IOC, task-scheduling and more.

Because Encodo's been writing software for a long time, we have a lot of patterns that we've come up with for our applications. These libraries are split into two main groups:

  • Encodo.*: extensions to the .NET framework or third-party libraries that don't depend on Quino metadata.
  • Quino.*: extensions to the .NET framework, third-party libraries or Encodo libraries that depend on Quino metadata.

A sort of "meta" library that lies on top of all of this is configuration and startup of applications that use these libraries. That is, what sort of patterns integrate and customize the functionality of libraries in an application?

Balancing K.I.S.S. and D.R.Y.

Almost nowhere in an application is the balance between K.I.S.S. and D.R.Y. more difficult to maintain than in configuration and startup.

So if we already know all of that, why does Quino need a new configuration library?

As mentioned above, there is a lot of commonality between applications in this area. An application will definitely want to incorporate such common configuration from a library. Updates and improvements to that library will then be applied as for any other. This is a good thing.

However, an application will also want to be able to tweak almost any given facet of this shared configuration. That is: just keep the good parts, have those upgraded when they're changed, but apply customization and extend functionality for the application's domain. Easy, right?

It is here that a good configuration library will find just the right level of granularity for customization. Too coarse? Then an application ends up throwing out too much common configuration in order to customize a small part of it. Too fine? Then the configuration system is too verbose or complex and the application avoids using it.

Instead, a configuration system should establish clear patterns -- optimally, just one -- for how to apply customization.

  • The builder of the underlying configuration library has to consider the myriad situations that might face a library developer and distill those requirements to a common pattern.
  • The library developer needs to think about which parts an application might want to customize and think about how to expose them.

So if we already know all of that, then why does Quino need a new configuration library? Well...

History of Quino's Configuration Library

It's really easy to make things over-complicated and muddy. It's really easy to end up growing several different kinds of extension systems over the years. Quino ended up with a generics-heavy API that made declaring new configuration components very wordy.

The core of Quino is the metadata definition for an application domain. That part has barely changed at all since we first wrote it lo so many years ago. We declared it to be our core business -- the part that we are better than others at -- the part we wanted to have under our own control. Our first draft1 has held up remarkably well.

Many of the other components have undergone quite a bit of flux: changes in requirements and the components themselves as well as new development processes and patterns all contributed to change. Over time, various applications had different needs and made adjustments to a different iteration of the configuration library. We moved from supporting only single-threaded, single-user desktop applications to also supporting multi-user, multi-threaded services and web servers.

...we were left with an ugly configuration system that no-one wanted to extend or use -- so yet another would be invented.

For all of these different applications, we naturally wanted to maintain the common configuration where possible -- but customizations for new platforms stretched the capabilities of the configuration library.

Customization would be made to a new version of that library, but applications that couldn't be upgraded immediately forced backwards-compatibility and thus resulted in several different concurrent ways of configuring a particular facet of an application.

In order to keep things in one place, we ended up breaking the interface-separation rule. Dependencies started clumping drastically, but it was OK because nobody was trying to use one thing without the other ten. But it was hard to see what was going on; customization became a black box for all but one or two gurus. On and on it went, until we were left with an ugly configuration system that no-one wanted to extend or use -- so yet another would be invented, ad-hoc. And so it went.

Principles for Quino 2.0 Configuration

With Quino 2.0, we examined the existing system and came up with a list of principles.

  • Consistency: there should only be one way of customizing settings and components. When a developer asks how to change something, the answer should always be the same pattern. If not, there had better be a damned good reason (see "Configuration vs. Execution" below).
  • Opt-in configuration: No more magic methods or base classes that automatically add components and settings in black boxes. Even if the application has to call one or two more methods, it's better to be declarative than clever(tm).
  • Inversion of Control: Standardize configuration to use an IOC container or service locator wherever possible. Instead of clumping settings in configuration or application objects, create discrete settings and put them in the container. Make dependencies explicit (constructor parameters!) and resolved through the container wherever possible.
  • Configuration vs. Execution: Be very aware of the difference between the "configuration" phase and the "execution" phase. During configuration, the service locator is used in write-only mode; during execution, the service locator is in read-only mode. Code executed during configuration must rely only on explicit dependency-injection via constructor. (See the sketch after this list.)
  • Common Usage: Establish a pattern for calling configuration methods, from least to most specific. E.g. call Quino's base configuration methods before any application-specific customization. Establish patterns for how to configure a single startup action or how to create settings for a larger component that could be further customized in subsequent phases.
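
To make the configuration-vs.-execution distinction concrete, here is a sketch that reuses the method names from the Quino 2.x example earlier in this document; the phase boundaries are marked in comments.

using (var application = new CustomApplication())
{
  // Configuration phase: the service locator is write-only.
  application.UseMetaWinformDx();
  application.UseSoftwareUpdater();

  // Execution phase: the service locator is read-only; dependencies
  // are resolved (ideally via constructor injection), not registered.
  application.Run(app => new CustomMainForm(app));
}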

In the next part, we'll take a look at some concrete examples and documentation for the new patterns.2



  1. To be fair, it wasn't our first attempt at metadata. In one way or another, we'd been defining metadata structures for generic programming for more years than we'd be comfortable divulging. A h/t of course to Opus Software's Atlas libraries -- 1 and 2 -- where many of us contributed. Also, I had experience with cross-platform, generic libraries in C++ stretching all the way back to the late 90s as well as the generalized/meta elements of the earthli WebCore. So it was more like the fourth or fifth shot at it, if we're going to be honest -- but at least we got it right. :-)

  2. In particular, I'll add more detail about "Common Usage" for those who might feel I've left them hanging a bit in the last bullet point. Sorry 'bout that. The day is only so long. See you next time...

v2.0-beta1: Configuration, services and web

The summary below describes major new features, items of note and breaking changes. The full list of issues is also available for those with access to the Encodo issue tracker.

Highlights

These are the big ones that forced a major-version change.

Some smaller, but important changes:

  • Added support for RunInTransaction attribute. Specify the attribute on any IMetaTestFixture to wrap a test or every test in a fixture in a transaction. (QNO-4682)
  • Shared connection manager is now disposed when an application is disposed. (QNO-4752)

Breaking changes

Oh yeah. You betcha. This is a major release and we've knowingly made a decision not to maintain backwards-compatibility at all costs. The good news, though, is that the changes are relatively straightforward and easy to make if you've got a tool like ReSharper that can update using statements automatically.

Namespace changes

As we saw in part I and part II of the guide to using NDepend, Quino 2.0 has unsnarled quite a few dependency issues. A large number of classes and interfaces have been moved out of the Encodo.Tools namespace. Many have been moved to Encodo.Core but others have been scattered into more appropriate and more specific namespaces.

This is one part of the larger changes, easily addressed by using ReSharper to Alt + Enter your way through the compile errors.

Logging changes

Another large change is in renaming IMessageRecorder to IRecorder and IMessageStore to IInMemoryRecorder. Judicious use of search/replace or just a bit of elbow grease will get you through these as well.
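
For example, a typical field declaration changes only in its type names:

// Quino 1.x
private readonly IMessageRecorder _recorder;
private readonly IMessageStore _messages;

// Quino 2.0
private readonly IRecorder _recorder;
private readonly IInMemoryRecorder _messages;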

Configuration changes

Finally, probably the most far-reaching change is in merging IConfiguration into IApplication. In previous versions of Quino, applications would create a configuration object and pass that to a platform-dependent Quino Run() method. Some configuration was provided by the application and some by the platform-specific method.

The example for Quino 1.13.0 below comes from the JobVortex Winform application.

var configuration = new JobVortexConfiguration
{
  MainSettings = Settings.Default
};

configuration.Add(new JobVortexClientConfigurationPackage());

if (!string.IsNullOrEmpty(Settings.Default.DisplayLanguage))
{
  configuration.DisplayLanguage = new Language(Settings.Default.DisplayLanguage);
}

WinformDxMetaConfigurationTools.Run(
  configuration, 
  app => new MainForm(app)
);

In Quino 2.0, the code above has been rewritten as shown below.

using (IMetaApplication application = new JobVortexApplication())
{
  application.MainSettings = Settings.Default;
  application.UseJobVortexClient();

  if (!string.IsNullOrEmpty(Settings.Default.DisplayLanguage))
  {
    application.DisplayLanguage = new Language(Settings.Default.DisplayLanguage);
  }

  application.Run(app => new MainForm(app));
}

As you can see, instead of creating a configuration, the program creates an application object. Instead of using configuration packages mixed with extension methods named "Integrate", "Configure" and so on, the new API uses "Use" everywhere. This should be comfortable for people familiar with the OWIN/Katana configuration pattern.

It does, however, mean that IConfiguration, ICoreConfiguration and IMetaConfiguration don't exist anymore. Instead, use IApplication, ICoreApplication and IMetaApplication. Again, a bit of elbow grease will be needed to get through these compile errors, but there's little to no risk or need for high-level decisions.

There are a lot of these prepackaged methods to help you create common kinds of applications:

  • UseCoreConsole() (a non-Quino application that uses the console)
  • UseMetaConsole() (a Quino application that uses the console)
  • UseCoreWinformDx() (a non-Quino application that uses Winform)
  • UseMetaWinformDx() (a Quino application that uses Winform)
  • UseReporting()
  • UseRemotingServer()
  • Etc.

I think you get the idea. Once we have a final release for Quino 2.0, we'll write more about how to use this new pattern.

Looking ahead to 2.0 Final

This is still just an internal beta; more changes are on the way before the 2.0 final version, including but not limited to:

  • Remove IConfigurationPackage and standardize the configuration API to be named "Use" everywhere (QNO-4771)
  • GenericObject improvements (QNO-4761, QNO-4762)
  • Change compile location for all projects (QNO-4756)
  • Move a lot of properties from ICoreApplication and IMetaApplication to configuration objects in the service locator. Also improve use of and configuration of service locator (QNO-4659)
  • More improvements to the recorders and logging (QNO-4688)
  • Changes to how ORM objects and metadata are generated. (QNO-4506)
  • Separate Encodo and Quino assemblies into multiple, smaller assemblies (QNO-4376)

See you there!