Which type should you register in an IOC container?

Use Case

I just ran into an issue recently where a concrete implementation registered as a singleton was suddenly not registered as a singleton because of architectural changes.

The changes involved creating mini-applications within a main application, each of which has its own IOC. Instead of creating controllers using the main application, I was now creating controllers with the mini-application instead (to support multi-tenancy, of which more in an upcoming post).

Silent Replacement of Singleton with Transient

Controllers are, by their nature, transient; a new controller is created to handle each incoming request.

In the original architecture, the concrete singleton was injected into the controller and all controller instances used the same shared instance. In the new architecture, the registration was not present in the mini-application (at first), which led to a (relatively) subtle bug: a transient and freshly created instance was injected into each new controller.

In cases where the singleton is a stateless algorithm, this wouldn't be a logical problem at all. At the very worst, you're over-allocating---but you probably wouldn't notice that, either. In this case, the singleton was a settings object, configured at application startup. The configured object was still in the main application's IOC, but not registered in the mini-application's IOC.

Because the singleton was registered on a concrete type rather than an interface, the semantic error occurred silently instead of throwing a lifestyle-mismatch or unregistered-interface exception.
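The lifetime semantics at play can be sketched in a few lines. This is a hypothetical, minimal container written in TypeScript (not the C# container from the post): a registered singleton is cached and shared, while an unregistered concrete type is silently constructed anew on every resolution.

```typescript
// Hypothetical minimal IOC container illustrating singleton vs. transient lifetimes.
class Container {
  private singletons = new Map<Function, object>();
  private registrations = new Set<Function>();

  registerSingle(type: new () => object): void {
    this.registrations.add(type);
  }

  resolve<T extends object>(type: new () => T): T {
    if (this.registrations.has(type)) {
      // Registered as singleton: create once, then reuse the cached instance.
      if (!this.singletons.has(type)) {
        this.singletons.set(type, new type());
      }
      return this.singletons.get(type) as T;
    }
    // Unregistered concrete type: silently constructed fresh each time.
    // This is the subtle behavior described in the post.
    return new type();
  }
}

class ApiSettings { endpoint = ''; }

const main = new Container();
main.registerSingle(ApiSettings);
const a = main.resolve(ApiSettings);
const b = main.resolve(ApiSettings);
// a === b: the shared, configured instance.

const mini = new Container(); // no registration carried over
const c = mini.resolve(ApiSettings);
const d = mini.resolve(ApiSettings);
// c !== d: each resolution creates a fresh, unconfigured instance.
```

With an interface as the anchoring type, the mini-application's container would instead have to fail fast, since an interface can't be constructed on the fly.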

A Straightforward Fix

This is only one of the reasons that I recommend using interfaces as the anchoring type of an IOC registration.

To fix the issue, I did exactly this: I extracted an interface from the class and used the interface everywhere (except as the implementing type of the registration). Re-running the test caused an immediate exception rather than the strange data bug I'd seen before (the default configuration in the concrete type had been just correct enough to let it limp to a result).

To show an example, instead of the following,

application.RegisterSingle<ApiSettings>()

I used,

application.RegisterSingle<IApiSettings, ApiSettings>()

This still didn't fix the crash because the mini-application doesn't get that registration automatically.

I also can't use the same registration as above because that would just create a new unconfigured ApiSettings in each mini-application (the same as I had before, but now as a singleton). To go that route, I would have to replicate the configuration-loading for the ApiSettings as well. And I don't want to do that.

Instead, I just injected the IApiSettings from the main application to the component responsible for creating the mini-application and registered the object as a singleton directly, as shown below.

public class MiniApplicationFactory
{
  public MiniApplicationFactory([NotNull] IApiSettings apiSettings)
  {
    if (apiSettings == null) { throw new ArgumentNullException(nameof(apiSettings)); }

    _apiSettings = apiSettings;
  }

  IApplication CreateApplication()
  {
    return new Application().UseRegisterSingle(_apiSettings);
  }

  [NotNull]
  private readonly IApiSettings _apiSettings;
}

On a side note, while C# syntax has become more concise and powerful from version to version, I still think it has a way to go in terms of terseness for such simple objects. For such things, Kotlin and TypeScript nicely illustrate what such a syntax could look like.1
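For illustration, here is real TypeScript syntax for the same shape of object (the IApiSettings member is a hypothetical stand-in): parameter properties declare and assign a field directly from the constructor signature.

```typescript
interface IApiSettings {
  endpoint: string;
}

class MiniApplicationFactory {
  // "private readonly" on the parameter declares the field and assigns
  // the argument to it: no separate declaration or assignment needed.
  constructor(private readonly apiSettings: IApiSettings) {}

  describe(): string {
    return `factory for ${this.apiSettings.endpoint}`;
  }
}

const factory = new MiniApplicationFactory({ endpoint: 'https://example.com/api' });
// factory.describe() → 'factory for https://example.com/api'
```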

Other Drawbacks

I mentioned above that this is only "one" of the reasons I don't like registering concrete singletons. The other two reasons are:

  1. Complicates replacement: If the registered type is a concrete instance, then any replacement must inherit from this instance. The base class has to be constructed more carefully in order to allow for all foreseeable customizations. With an interface, the implementor is completely free to either use the existing class as a base or to re-implement the interface entirely.
  2. Limits Mocking: Related to the first reason is that mocking is limited in its ability to override non-virtual methods. Even without a mocking library, you're just as hard-pressed to work around unwanted behavior in a hand-coded mock as you are with an actual replacement (as described above). Such limitations are non-existent with interfaces.
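The mocking point is easy to see with a small sketch (TypeScript here; IApiSettings and its members are hypothetical): a hand-coded test double only has to implement the interface, so no base-class constructors or non-virtual methods get in the way.

```typescript
interface IApiSettings {
  getTimeoutInSeconds(): number;
}

// The production implementation; its internals are irrelevant to callers.
class ApiSettings implements IApiSettings {
  getTimeoutInSeconds(): number {
    return 30;
  }
}

// A hand-coded mock: implements the interface directly instead of
// having to inherit from the concrete class and override behavior.
class FakeApiSettings implements IApiSettings {
  getTimeoutInSeconds(): number {
    return 1;
  }
}

// Code under test depends only on the interface.
function isPatient(settings: IApiSettings): boolean {
  return settings.getTimeoutInSeconds() >= 10;
}
```

Both the real and the fake implementation are interchangeable wherever IApiSettings is expected, which is exactly the freedom the concrete registration takes away.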


  1. I'm still waiting for C# to clean up a bit more of this syntax for me. The [NotNull] should be a language feature checked by the compiler so that the ArgumentNullException is no longer needed. On top of that, I'd like to see parameter properties, as in TypeScript (this is where you can prefix a constructor parameter with a keyword to declare and initialize it as a property). With a few more C#-language iterations that included non-nullable reference types and parameter properties, the example could look like the code below:

    public class MiniApplicationFactory
    {
      public MiniApplicationFactory(private IApiSettings apiSettings)
      {
      }

      IApplication CreateApplication()
      {
        return new Application().UseRegisterSingle(apiSettings);
      }
    }
    

Learning Quino: a roadmap for documentation and tutorials

In recent articles, we outlined a roadmap to .NET Standard and .NET Core and a roadmap for deployment and debugging. These two roadmaps taken together illustrate our plans to extend as much of Quino as possible to other platforms (.NET Standard/Core) and to make development with Quino as convenient as possible (getting/upgrading/debugging).

To round it off, we've made good progress on another vital piece of any framework: documentation.

Introducing docs.encodo.ch

We recently set up a new server to host Quino documentation. There, you can find documentation for current releases. Going forward, we'll also retain documentation for any past releases.

We're generating our documentation with DocFX, which is the same system that powers Microsoft's own documentation web site. We've integrated documentation-generation as a build step in Quino's nightly build on TeamCity, so it's updated every night (Zürich time).1

The documentation includes conceptual documentation which provides an overview/tutorials/FAQ for basic concepts in Quino. The API Reference includes comprehensive documentation about the types and methods available in Quino.

Next Steps

While we're happy to announce that we have publicly available documentation for Quino, we're aware that we've got work to do. The next steps are:

Even though there's still work to do, this is a big step in the right direction. We're very happy to have found DocFX, which is a very comprehensive, fast and nice-looking solution to generating documentation for .NET code.2

--


  1. If the build succeeds, naturally. :-)

  2. We used to use Sandcastle many years ago, but dropped support because it took forever to generate documentation, required its own solution file, didn't look very nice out-of-the-box, wasn't so easily customized and didn't have a very good search (which also didn't work without an IIS running it).

Delivering Quino: a roadmap for deployment and debugging

In a recent article, we outlined a roadmap to .NET Standard and .NET Core. We've made really good progress on that front: we have a branch of Quino-Standard that targets .NET Standard for class libraries and .NET Core for utilities and tests. So far, we've smoke-tested these packages with Quino-WebApi. Our next steps there are to convert Quino-WebApi to .NET Standard and .NET Core as well. We'll let you know when it's ready, but progress is steady and promising.

With so much progress on several fronts, we want to address how we get Quino from our servers to our customers and users.

Getting Quino

Currently, we provide access to a private fileshare for customers. They download the NuGet packages for the release they want. They copy these to a local folder and bind it as a NuGet source for their installations.

In order to make a build available to customers, we have to publish that build by deploying it and copying the files to our file share. This process has been streamlined considerably so that it really just involves telling our CI server (TeamCity) to deploy a new release (official or pre-). From there, we download the ZIP and copy it to the fileshare.

Encodo developers don't have to use the fileshare because we can pull packages directly from TeamCity as soon as they're available. This is a much more comfortable experience and feels much more like working with nuget.org directly.

Debugging Quino

The debugging story with external code in .NET is much better than it used to be (spoiler: it was almost impossible, even with Microsoft sources), but it's not as smooth as it should be. This is mostly because NuGet started out as a packaging mechanism for binary dependencies published by vendors of proprietary/commercial products. It's only in recent years that packages have become predominantly open-source.

In fact, debugging with third-party sources – even without NuGet involved – has never been easy with .NET/Visual Studio.

Currently, all Quino developers must download the sources separately (also available from TeamCity or the file-share) in order to use source-level debugging.

Binding these sources to the debugger is relatively straightforward but cumbersome. Binding these sources to ReSharper is even more cumbersome and somewhat unreliable, to boot. I've created the issue Add an option to let the user search for external sources explicitly (as with the VS debugger) when navigating in the hopes that this will improve in a future version. JetBrains has already fixed one of my issues in this area (Navigate to interface/enum/non-method symbol in Nuget-package assembly does not use external sources), so I'm hopeful that they'll appreciate this suggestion as well.

The use case I cited in the issue above is,

Developers using NuGet packages that include sources or for which sources are available want to set breakpoints in third-party source code. Ideally, a developer would be able to use R# to navigate through these sources (e.g. via F12) to drill down into the code and set a breakpoint that will actually be triggered in the debugger.

As it is, navigation in these sources is so spotty that you often end up in decompiled code and are forced to use the file-explorer in Windows to find the file and then drag/drop it to Visual Studio where you can set a breakpoint that will work.

The gist of the solution I propose is to have R# ask the user where missing sources are before decompiling (as the Visual Studio debugger does).

Nuget Protocol v3 to the rescue?

There is hope on the horizon, though: Nuget is going to address the debugging/symbols/sources workflow in an upcoming release. The overview is at NuGet Package Debugging & Symbols Improvements and the issue is Improve NuGet package debugging and symbols experience.

Once this feature lands, Visual Studio will offer seamless support for debugging packages hosted on nuget.org. Since we're using TeamCity to host our packages, we need JetBrains to Add support for NuGet Server API v3 (https://youtrack.jetbrains.com/issue/TW-47289) in order to benefit from the improved experience. Currently, our customers are out of luck even if JetBrains releases simultaneously (because our TeamCity is not available publicly).

Quino goes public?

I've created an issue for Quino, Make Quino Nuget packages available publicly, to track our progress in providing Quino packages to our customers in a more convenient way that also benefits from improvements to the debugging workflow with Nuget packages.

If we published Quino packages to NuGet (or MyGet, which allows private packages), then we would have the benefit of the latest Nuget protocol/improvements for both ourselves and our customers as soon as it's available. Alternatively, we could also proxy our TeamCity feed publicly. We're still considering our options there.

As you can see, we're always thinking about the development experience for both our developers and our customers. We're fine-tuning on several fronts to make developing and debugging with Quino a seamless experience for all developers on all platforms.

We'll keep you posted.

Quino's Roadmap to .NET Standard and .NET Core

With Quino 5, we've gotten to a pretty good place organizationally. Dependencies are well-separated into projects—and there are almost 150 of them.

We can use code-coverage, solution-wide-analysis and so on without a problem. TeamCity runs the ~10,000 tests quickly enough to provide feedback in a reasonable time. The tests run even more quickly on our desktops. It's a pretty comfortable and efficient experience, overall.

Monolithic Solution: Pros and Cons

As of Quino 5, all Quino-related code was still in one repository and included in a single solution file. Luckily for us, Visual Studio 2017 (and Rider and Visual Studio for Mac) were able to keep up quite well with such a large solution. Recent improvements to performance kept the experience quite comfortable on a reasonably equipped developer machine.

Having everything in one place is both an advantage and disadvantage: when we make adjustments to low-level shared code, the refactoring is applied in all dependent components, automatically. If it's not 100% automatic, at least we know where we need to make changes in dependent components. This provides immediate feedback on any API changes, letting us fine-tune and adjust until the API is appropriate for known use cases.

On the other hand, having everything in one place means that you must make sure that your API not only works for, but also compiles and tests against, components that you may not immediately be interested in.

For example, we've been pushing much harder on the web front lately. Changes we make in the web components (or in the underlying Quino core) must also work immediately for dependent Winform and WPF components. Otherwise, the solution doesn't compile and tests fail.

While this setup had its benefits, the drawbacks were becoming more painful. We wanted to be able to work on one platform without worrying about all of the others.

On top of that, all code in one place is no longer possible with cross-platform support. Some code—Winform and WPF—doesn't run on Mac or Linux.1

The time had come to separate Quino into a few larger repositories.

Separate Solutions

We decided to split along platform-specific lines.

  • Quino-Standard: all common code, including base libraries, application, configuration and IOC support, metadata, builders and all data drivers
  • Quino-WebApi: all web-related code, including remaining ASP.NET MVC support
  • Quino-Windows: all Windows-platform-only code (Windows-only APIs (i.e. native code) as well as Winform and WPF)

The Quino-WebApi and Quino-Windows solutions will consume Quino-Standard via NuGet packages, just like any other Quino-based product. And, just like any Quino-based product, they will be able to choose when to upgrade to a newer version of Quino-Standard.

Quino-Standard

Part of the motivation for the split is cross-platform support. The goal is to target all assemblies in Quino-Standard to .NET Standard 2.0. The large core of Quino will be available on all platforms supported by .NET Core 2.0 and higher.

This work is quite far along and we expect to complete it by August 2018.

Quino-WebApi

As of Quino 5.0.5, we've moved web-based code to its own repository and set up a parallel deployment for it. Currently, the assemblies still target .NET Framework, but the goal here is to target class libraries to .NET Standard and to use .NET Core for all tests and sample web projects.

We expect to complete this work by August 2018 as well.

Quino-Windows

We will be moving all Winform and WPF code to its own repository, setting it up with its own deployment (as we did with Quino-WebApi). These projects will remain targeted to .NET Framework 4.6.2 (the lowest version that supports interop with .NET Standard assemblies).

We expect this work to be completed by July 2018.

Quino-Mobile

One goal we have with this change is to be able to use Quino code from Xamarin projects. Any support we build for mobile projects will proceed in a separate repository from the very beginning.

We'll keep you posted on work and improvements and news in this area.

Conclusion

Customers will, for the most part, not notice this change, except in minor version numbers. Core and platform versions may (and almost certainly will) diverge between major versions. For major versions, we plan to ship all platforms with a single version number.



  1. I know, Winform can be made to run on Mac using Mono. And WPF may eventually become a target of Xamarin. But a large part of our Winform UI uses the Developer Express components, which aren't going to run on a Mac. And the plans for WPF on Mac/Linux are still quite up in the air right now.

v5.0.5: Split out Quino-WebApi repository, improve authorization data-driver

The summary below describes major new features, items of note and breaking changes. The full list of issues is in the release notes and is available to those with access to the Encodo issue tracker.

Highlights

  • Fixed Creation of Database during schema-migration (PostgreSql-only) (QNO-5938)
  • Use Authorization in data driver for count/loadValues/reload (QNO-5937)
  • Added Wrapper properties (for backward-compatibility only) (QNO-5928)
  • Fixed command-line logging output for code-generator (QNO-5921)
  • Moved web support to Quino-WebApi (QNO-5903, QNO-5907)

Notes

All web support in Quino has been moved to a separate repository. The Quino repository has been renamed to Quino-Standard. The only effect this has on customers is that minor version numbers for web components may diverge from those of Quino-Standard. In a subsequent release, we will be moving all Windows-platform–specific projects (Windows, Winform and WPF) to a Quino-Windows repository. Again, users of Quino will be unaffected other than minor version-numbers diverging slightly.

The reasoning behind this change is as follows:

We are in the process of targeting Quino-Standard to .NET Standard 2.0. This work is nearing completion, but Windows-based components will remain targeted to .NET Framework.

Parts of the web framework are being developed more quickly than either Winform/WPF or Quino-Standard itself. We wanted to allow those components to be developed individually to allow more freedom for innovation and to allow the logical components to choose when to upgrade (i.e. both Quino-WebApi and Quino-Windows are/will be consumers of Quino-Standard libraries, just like any customer product).

Breaking changes

  • No known breaking changes.

Compile-check LESS/CSS classnames using TypeScript and Webpack

As I am making myself familiar with modern frontend development based on React, TypeScript, Webpack and others, I learned something really cool. I'd like to write this down not only for you – dear reader – but also for my own reference.

The problem

Let’s say you have a trivial React component like this, where you specify a className to tell which CSS class should be used:

const MyComponent = (props: MyComponentProps) => (
  <MySubComponent className='myclass'>
    ...
  </MySubComponent>
);

export default MyComponent;

The problem with this is that we don’t have any compile-time check to ensure this class myclass really exists in our LESS file. So if we make a typo, or later change the LESS file, we cannot be sure all classes/selectors are still valid. Not even the browser will show that: it silently breaks. Bad thing!

A solution

Using Webpack and the LESS loader, one can fix this by checking it at compile time. To do so, you define the style and its classname in the LESS file and import it into the .tsx files. The LESS loader for webpack will expose the LESS variables below to the build process, where the TypeScript loader (used for the .tsx files) can pick them up.

MyComponent.less:

@my-class: ~':local(.myClass)';
 
@{my-class}{
  width: 100%;
  background-color: green;
}
...

Note the local() function supported by the LESS loader (see webpack config at the end) which scopes that class to a local scope.

The above LESS file can be typed and imported into the .tsx file like this:

MyComponent.tsx:

type TStyles = {
  myClass: string;
};
 
const styles: TStyles = require('./MyComponent.less');
 
const MyComponent = (props: MyComponentProps) => (
  <MySubComponent className={styles.myClass}>
    ...
  </MySubComponent>
);
 
export default MyComponent;

Then, when you fire up your build, the .less file gets picked up by the require() call and checked against the TypeScript type TStyles. The property myClass will contain the LESS/CSS classname as defined in the .less file.

I can then use styles.myClass instead of the string literal in the original code.
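As an aside: if you'd rather not repeat a TStyles annotation in every component, TypeScript also accepts an ambient module declaration covering all .less imports (a sketch; the file name and location are arbitrary). Note that this blanket typing is looser than the explicit TStyles approach shown above, since any property name will compile:

```typescript
// typings/less.d.ts (hypothetical location)
// Tells the compiler that requiring/importing any .less file yields
// an object mapping class names to generated classname strings.
declare module '*.less' {
  const styles: { [className: string]: string };
  export = styles;
}
```

The per-component TStyles type is therefore the stricter choice when you want the compiler to catch a renamed or deleted class.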

To get this working, ensure you have the LESS loader included in your webpack configuration (you probably already have it if you are already using LESS):

webpack.config.js:

module: {
  rules: [
    {
      test: /\.tsx?$/,
      loader: "ts-loader"
    },
    {
      test: /\.less$/,
      use: ExtractTextPlugin.extract({
        use: [
          {
            loader: "css-loader",
            options: {
              localIdentName: '[local]--[hash:5]',
              sourceMap: true
            }
          },
          {
            loader: "less-loader",
            options: {
              sourceMap: true
            }
          }
        ],
        fallback: "style-loader",
        ...
      })
    },
    ...
  ]
},
...

Note: The samples use LESS stylesheets, but one can do the same with SCSS/SASS – I guess. You just have to use the corresponding webpack loader and the syntax it supports.

No broken CSS classnames anymore – isn’t this cool? Let me know your feedback.

This is a cross-post from Marc's personal blog at https://marcduerst.com/2018/03/08/compile-check-less-css-classnames-using-typescript-and-webpack/

Finding deep assembly dependencies

Quino contains a Sandbox in the main solution that lets us test a lot of the Quino subsystems in real-world conditions. The Sandbox has several application targets:

  • WPF
  • Winform
  • Remote Data Server
  • WebAPI Server
  • Console

The targets that connect directly to a database (e.g. WPF, Winform) were using the PostgreSql driver by default. I wanted to configure all Sandbox applications to be easily configurable to run with SqlServer.

Just add the driver, right?

This is pretty straightforward for a Quino application. The driver can be selected directly in the application (directly linking the corresponding assembly) or it can be configured externally.

Naturally, if the Sandbox loads the driver from configuration, some mechanism still has to make sure that the required data-driver assemblies are available.

The PostgreSql driver was in the output folder. This was expected, since that driver works. The SqlServer driver was not in the output folder. This was also expected, since that driver had never been used.

I checked the direct dependencies of the Sandbox Winform application, but it didn't include the PostgreSql driver. That's not really good, as I would like both SqlServer and PostgreSql to be configured in the same way. As it stood, though, I would be referencing SqlServer directly and PostgreSql would continue to show up by magic.

Before doing anything else, I was going to have to find out why PostgreSql was included in the output folder.

I needed to figure out assembly dependencies.
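Conceptually, "which assemblies use me, directly or indirectly?" is a reverse transitive closure over the assembly-reference graph. A sketch in TypeScript (the graph literal below is a made-up miniature and the assembly names are illustrative, not Quino's actual reference list):

```typescript
// Map each assembly to the assemblies it references directly.
const references: { [assembly: string]: string[] } = {
  'Sandbox.Model': ['Quino.Testing.Models.Generated'],
  'Quino.Testing.Models.Generated': ['Quino.Tests.Base'],
  'Quino.Tests.Base': ['Quino.Data.PostgreSql'],
  'Quino.Data.PostgreSql': [],
};

// Breadth-first search over the *reversed* graph: collect everything
// that references the target, directly or transitively.
function usersOf(target: string): string[] {
  const users: string[] = [];
  const queue = [target];
  const seen = new Set([target]);
  while (queue.length > 0) {
    const current = queue.shift()!;
    for (const [assembly, refs] of Object.entries(references)) {
      if (refs.includes(current) && !seen.has(assembly)) {
        seen.add(assembly);
        users.push(assembly);
        queue.push(assembly);
      }
    }
  }
  return users;
}
```

A query like usersOf('Quino.Data.PostgreSql') walks back up the whole chain; this is the kind of answer a dependency-analysis tool has to produce, and it's exactly what I went looking for next.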

Visual Studio?

My natural inclination was to reach for NDepend, but I thought maybe I'd see what the other tools have to offer first.

Does Visual Studio include anything that might help? The "Project Dependencies" shows only assemblies on which a project is dependent. I wanted to find assemblies that were dependent on PostgreSql. I have the Enterprise version of Visual Studio and I seem to recall an "Architecture" menu, but I discovered that these tools are no longer installed by default.

According to the VS support team in that link, you have to install the "Visual Studio extension development" workload in the Visual Studio installer. In this package, the "Architecture and analysis tools" feature is available, but not included by default.

Hovering this feature shows a tooltip indicating that it contains "Code Map, Live Dependency Validation and Code Clone detection". The "Live Dependency Validation" sounds like it might do what I want, but it also sounds quite heavyweight and somewhat intrusive, as described in this blog from the end of 2016. Instead of further modifying my VS installation (and possibly slowing it down), I decided to try another tool.

ReSharper?

What about ReSharper? For a while now, it's included project-dependency graphs and hierarchies. Try as I might, I couldn't get the tools to show me the transitive dependency on PostgreSql that Sandbox Winform was pulling in from somewhere. The hierarchy view is live and quick, but it doesn't show all transitive usages.

The graph view is nicely rendered, but shows dependencies by default instead of dependencies and usages. At any rate, the Sandbox wasn't showing up as a transitive user of PostgreSql.

I didn't believe ReSharper at this point because something was causing the data driver to be copied to the output folder.

NDepend to the rescue

So, as expected, I turned to NDepend. I took a few seconds to run an analysis and then right-clicked the PostgreSql data-driver project to select NDepend => Select Assemblies... => That are Using Me (Directly or Indirectly) to show the following query and results.

Bingo. Sandbox.Model is indirectly referencing the PostgreSql data driver, via a transitive-dependency chain of 4 assemblies. Can I see which assemblies they are? Of course I can: this kind of information is best shown on a graph, so you can show a graph of any query results by clicking "Export to Graph" to show the graph below.

Now I can finally see that Sandbox.Model pulls in Quino.Testing.Models.Generated (to use the BaseTypes module) which, in turn, has a reference to Quino.Tests.Base which, of course, includes the PostgreSql driver because that's the default testing driver for Quino tests.

Now that I know how the reference is coming in, I can fix the problem. Here I'm on my own: I have to solve this problem without NDepend. But at least NDepend was able to show me exactly what I have to fix (unlike VS or ReSharper).

I ended up moving the test-fixture base classes from Quino.Testing.Models.Generated into a new assembly called Quino.Testing.Models.Fixtures. The latter assembly still depends on Quino.Tests.Base and thus the PostgreSql data driver, but it's now possible to reference the Quino testing models without transitively referencing the PostgreSql data driver.

A quick re-analysis with NDepend and I can see that the same query now shows a clean view: only testing code and testing assemblies reference the PostgreSql driver.

Finishing up

And now to finish my original task! I ran the Winform Sandbox application with the PostgreSql driver configured and was greeted with an error message that the driver could not be loaded. I now had parity between PostgreSql and SqlServer.

The fix? Obviously, make sure that the drivers are available by referencing them directly from any Sandbox application that needs to connect to a database. This was the obvious solution from the beginning, but we had to quickly fix a problem with dependencies first. Why? Because we hate hacking. :-)

Two quick references added, a build and I was able to connect to both SQL Server and PostgreSql.

Tools for maintaining Quino

The Quino roadmap shows you where we're headed. How do we plan to get there?

A few years back, we made a big leap in Quino 2.0 to split up dependencies in anticipation of the initial release of .NET Core. Three tools were indispensable: ReSharper, NDepend and, of course, Visual Studio. Almost all .NET developers use Visual Studio, many use ReSharper and most should have at least heard of NDepend.

At the time, I wrote a series of articles on the migration from two monolithic assemblies (Encodo and Quino) to dozens of layered and task-specific assemblies that allow applications to include our software in a much more fine-grained manner. As you can see from the articles, NDepend was the main tool I used for finding and tracking dependencies.1 I used ReSharper to disentangle them.

Since then, I've not taken advantage of NDepend's features for maintaining architecture as much as I'd like. I recently fired it up again to see where Quino stands now, with 5.0 in beta.

But, first, let's think about why we're using yet another tool for examining our code. Since I started using NDepend, other tools have improved their support for helping a developer maintain code quality.

  • ReSharper itself has introduced tools for visualizing project and type dependencies with very nice graphs. However, there is currently no support for establishing boundaries and getting ReSharper to tell me when I've inadvertently introduced new dependencies. In fact, ReSharper's only improved its support for quickly pulling in a dependency with its excellent Nuget-Package integration. ReSharper is excellent for finding lower-level code smells, like formatting, style and null-reference issues, as well as language usage, missing documentation and code-complexity (with an extension). DotCover provides test-coverage data but I haven't used it for real-time analysis yet (I don't use continuous testing with ReSharper on Quino because I feel it would destroy my desktop).
  • Visual Studio has also been playing catch-up with ReSharper and has done an excellent job in the last couple of years. VS 2017 is much, much faster than its predecessors; without it, we would be foundering badly with a Quino solution with almost 150 projects.2 Visual Studio provides Code Analysis and Portability Analysis and can calculate Code Metrics. Code Analysis is mostly covered by ReSharper, although it has a few extra inspections related to proper application and usage of the IDisposable pattern. The Portability Analysis is essential for moving libraries to .NET Standard but doesn't offer any insight into architectural violations like NDepend does.
  • We've recently started working with SonarQube on our TeamCity build server because a customer wanted to use it. It has a very nice UI and very nice reports, but doesn't go much farther than VS/R# inspections. Also, the report isn't in the IDE, so it's not as quick to jump into the code. I don't want to review it here, since we only recently started working with it. It looks promising and is a welcome addition to that project. Hopefully more will reveal itself in time.
  • TeamCity provides a lot of the services that ReSharper also provides: inspections and code-coverage for builds. This takes quite a while, though, so we only run inspections and coverage for the Quino nightly build. The reports are nice but, as with SonarQube, of limited use because of the tenuous integration with Visual Studio. The integration works, but it's balky and we don't use it very much. Instead, we analyze inspections in real-time in Visual Studio with ReSharper and don't use real-time code-coverage.3
  • NDepend integrates right into Visual Studio and has a super-fast analysis with a very nice dashboard overview, from which you can drill down into myriad issues and reports and analyses, from technical debt (with very daunting but probably accurate estimates for repair) to type- and assembly-interdependency problems. NDepend can also integrate code-coverage results from DotCover to show how you're doing on that front on the dashboard as well. As with TeamCity and SonarQube, the analyses are retained as snapshots. With NDepend, you can quickly compare them (and comparing against a baseline is even included by default in the dashboard), which is essential to see if you're making progress or regressing.4 NDepend also integrates with TeamCity, but we haven't set that up (yet).

With a concrete .NET Core/Standard project under development, we're finally ready to finish our push to make Quino Core cross-platform. For that, we're going to need NDepend's help, I think. Let's take a look at where we stand today.

The first step is to choose what you want to cover. In the past, I've selected specific assemblies that corresponded to the "Core". I usually do the same when building code-coverage results, because the UI assemblies tend to skew the results heavily. As noted in a footnote below, we're starting an effort to separate Quino into high-level components (roughly, a core with satellites like Winform, WPF and Web). Once we've done that, the health of the core itself should be more apparent (I hope).

For starters, though, I've thrown all assemblies in for both NDepend analysis as well as code coverage. Let's see how things stand overall.

The amount of information can be quite daunting but the latest incarnation of the dashboard is quite easy to read. All data is presented with a current number and a delta from the analysis against which you're comparing. Since I haven't run an analysis in a while, there's no previous data against which to compare, but that's OK.

  • Lines of Code
  • Code Elements (Types, Methods, etc.)
  • Comments (documentation)
  • Technical Debt
  • Code Coverage 5
  • Quality Gates / Rules / Issues

Let's start with the positive.

  • The Quino sources contain almost 50% documentation. That's not unexpected. The XML documentation from which we generate our developer documentation 6 is usually as long as or longer than the method itself.
  • We have a solid B rating for technical debt, which is really not bad, all things considered. I take that to mean that, even without looking, we instinctively produce code with a reasonable level of quality.

Now to the cool part: you can click anything in the NDepend dashboard to see a full list of all of the data in the panel.

Click the "B" on technical debt and you'll see an itemized and further-drillable list of the grades for all code elements. From there, you can see what led to the grade. By clicking the "Explore Debt" button, you get a drop-down list of pre-selected reports like "Types Hot Spots".

Click lines of code and you get a breakdown of which projects/files/types/methods have the most lines of code.

Click failed quality gates to see where you've got the most major problems (Quino currently has 3 categories).

Click "Critical" or "Violated" rules to see architectural rules that you're violating. As with everything in NDepend, you can pick and choose which rules should apply. I use the default set of rules in Quino.

Most of our critical issues are for mutually-dependent namespaces. This is most likely not root namespaces crossing each other (though we'd like to get rid of those ASAP) but sub-namespaces that refer back to the root and vice-versa. This isn't necessarily a no-go, but it's definitely something to watch out for.
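Checks like this are expressed in NDepend's query language. The following is a simplified sketch of what a mutual-namespace-dependency rule looks like in CQLinq; the actual built-in rule ("Avoid namespaces mutually dependent") is considerably more elaborate, so treat this as illustrative only:

```csharp
// Illustrative CQLinq sketch, NOT NDepend's actual built-in rule:
// flag pairs of namespaces that use each other.
warnif count > 0
from n in Application.Namespaces
from other in n.NamespacesUsed
where other.IsUsing(n) // 'other' also uses 'n' => mutual dependency
select new { n, other }
```

The `warnif count > 0` prefix is what turns an ordinary query into a rule that fires when any results come back.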

There are so many interesting things in these reports:

  • Don't create threads explicitly (this is something we've been trying to reduce; I already knew about the one remaining, but it's great to see it in a report as a tracked metric)
  • Methods with too many parameters (you can adjust the threshold, of course)
  • Types too big: we'd have to check these because some of them are probably generated code, in which case we'd remove them from analysis.
  • Abstract constructors should be protected: ReSharper also indicates this one, but we have it as a suggestion, not a warning, so it doesn't get regularly cleaned up. It's not critical, but a code-style thing. I find the NDepend report much easier to browse than the inspection report in TeamCity.

Click the "Low" issues (Quino has over 46,000!) and you can see that NDepend analyzes your code at an incredibly low level of granularity.

  • There are almost 10,000 cases where methods could have a lower visibility. This is good to know, but definitely low-priority.
  • Namespace does not correspond to file location: I'm surprised to see 4,400 violations because I thought that ReSharper managed that for us quite well. This one bears investigating – maybe NDepend found something ReSharper didn't or maybe I need to tweak NDepend's settings.

Finally, there's absolutely everything, which includes boxing/unboxing issues 7, method names that are too long, large interfaces, and large instances (which could also be generated classes).

These are already marked as low, so don't worry that NDepend just rains information down on you. Stick to the critical/high violations and you'll have real issues to deal with (i.e. code that might actually lead to bugs, rather than code that leads to maintenance issues or incurs technical debt, both of which are more long-term issues).

What you'll also notice in the screenshots is that NDepend doesn't just provide pre-baked reports: everything is based on its query language. NDepend's analysis is lightning-fast (it takes only a few seconds for all of Quino), during which it builds up a huge database of information about your code that it then queries in real-time. NDepend provides a ton of pre-built queries linked from all over the UI, but you can adjust any of those queries in the pane at the top to tweak the results. The syntax is LINQ, and there are a ton of comments in each query to help you figure out what else you can do with it.
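As an example of the kind of tweak you can make in the query pane, here's a hedged sketch (the threshold and ordering are my own choices, not a stock rule verbatim) that lists overly long methods, worst offenders first:

```csharp
// Sketch of an edited CQLinq query (threshold is my own choice,
// not an NDepend default): list methods longer than 40 lines.
warnif count > 0
from m in Application.Methods
where m.NbLinesOfCode > 40
orderby m.NbLinesOfCode descending
select new { m, m.NbLinesOfCode }
```

Because the query re-runs as you edit it, you see the adjusted result set immediately, which makes experimenting with thresholds cheap.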

As noted above, the amount of information can be overwhelming, but just hang in there and figure out what NDepend is trying to tell you. You can pin or hide a lot of the floating windows if it's all just a bit too much at first.

In our case, the test assemblies have more technical debt than the code they test. This isn't optimal, but it's better than the other way around. You might be tempted to exclude test assemblies from the analysis to boost your grade, but I think that's a bad idea. Testing code is production code. Make it just as good as the code it tests to ensure overall quality.

I did a quick comparison between Quino 4 and Quino 5 and we're moving in the right direction: the estimation of work required to get to grade A was already cut in half, so we've made good progress even without NDepend. I'm quite looking forward to using NDepend more regularly in the coming months. I've got my work cut out for me.

--


  1. Many thanks to Patrick Smacchia of NDepend for generously providing an evaluator's license to me over the years.

  2. We came up with a plan for reducing the size of the core solution in a recent architecture meeting. More on that in a subsequent blog post.

  3. Quino has 10,000 tests, many of which are integration tests, so a change to a highly shared component would trigger thousands of tests to run, possibly for minutes. I can't see how it would be efficient to run tests continuously as I type in Quino. I've used continuous testing in smaller projects and it's really wonderful (both with ReSharper and also Wallaby for TypeScript), but it doesn't work so well with Quino because of its size and highly generalized nature.

  4. I ran the analysis on both Quino 4 and Quino 5, but wasn't able to directly compare results because I think I inadvertently threw them away with our nant clean command. I'd moved the NDepend output folder to the common folder and our command wiped out the previous results. I'll work on persisting those better in the future.

  5. I generated coverage data using DotCover, but realized only later that I should have configured it to generate NDepend-compatible coverage data (as detailed in NDepend Coverage Data). I'll have to do that and run it again. For now, no coverage data in NDepend. This is what it looks like in DotCover, though. Not too shabby.

  6. Getting that documentation out to our developers is also a work-in-progress. Until recently, we've been stymied by the lack of a good tool and ugly templates. But recently we added DocFX support to Quino and the generated documentation is gorgeous. There'll be a post hopefully soon announcing the public availability of Quino documentation.

  7. There's probably a lot of low-hanging fruit of inadvertent allocations here. On the other hand, if they're not code hot paths, then they're mostly harmless. It's more a matter of coding consistently. There's also an extension for ReSharper (the "Heap Allocations Viewer") that indicates allocations directly in the IDE, in real-time. I have it installed, and it's nice to see where I'm incurring allocations.

v4.1.7: Winform bug fixes and resource captions for modules

The summary below describes major new features, items of note and breaking changes. The full list of issues is in the release notes and is available to those with access to the Encodo issue tracker.

Highlights

  • Fixed Custom Controls in Winform Navigation (QNO-5889)
  • Use Resource Captions for all standard modules (QNO-5883, QNO-5884)

Note

Unless we find a blocking issue that can't be fixed with a patch to the product, this will be the last release on the 4.x branch.

Breaking changes

  • IExternalLoggerFactory has been renamed to IExternalLoggerProvider
  • ExternalLoggerFactory has been renamed to ExternalLoggerProvider
  • NullExternalLoggerFactory has been renamed to NullExternalLoggerProvider
  • IUserCredentials.AuthenticationToken is now an IToken instead of a string

v4.1.6: Winform / DevExpress improvements

The summary below describes major new features, items of note and breaking changes. The full list of issues is in the release notes and is available to those with access to the Encodo issue tracker.

Highlights

Breaking changes

  • The property ReportDefinitionParameter.Hidden now has the default value false. Integrating this release will trigger a schema migration to adjust that value in the database.