Adventures in .NET Standard 2.0-preview1

.NET Standard 2.0 is finally publicly available as a preview release. I couldn't help myself and took a crack at converting parts of Quino to .NET Standard just to see where we stand. To keep me honest, I did all of my investigations on my MacBook Pro in MacOS.

IDEs and Tools

I installed Visual Studio for Mac, the latest JetBrains Rider EAP and .NET Standard 2.0-preview1. I already had Visual Studio Code with the C#/OmniSharp extensions installed. Everything installed easily and quickly and I was up-and-running in no time.

Armed with 3 IDEs and a powerful command line, I waded into the task.

Porting Quino to .NET Standard

Quino is an almost decade-old .NET Framework solution that has seen continuous development and improvement. It's quite modern and well-modularized, but we still ran into considerable trouble when experimenting with .NET Core 1.1 almost a year ago. At the time, we dropped our attempts to work with .NET Core, but were encouraged when Microsoft shifted gears from the extremely low--surface-area API of .NET Core to the more inclusive though still considerably cleaned-up API of .NET Standard.

Since it's an older solution, Quino projects use the older csproj file-format: the one where you have to whitelist the files to include. Instead of re-using these projects, I figured a good first step would be to use the dotnet command-line tool to create a new solution and projects and then copy files over. That way, I could be sure that I was really only including the code I wanted -- instead of random cruft generated into the project files by previous versions of Visual Studio.

The dotnet Command

The dotnet command is really very nice and I was able to quickly build up a list of core projects in a new solution using the following commands:

  • dotnet new sln
  • dotnet new classlib -n {name}
  • dotnet add reference {../otherproject/otherproject.csproj}
  • dotnet add package {nuget-package-name}
  • dotnet clean
  • dotnet build

That's all I've used so far, but it was enough to investigate this brave new world without needing an IDE. Spoiler alert: I like it very much. The API is so straightforward that I don't even need to include descriptions for the commands above. (Right?)

Everything really seems to be coming together: even the documentation is clean, easy-to-navigate and has very quick and accurate search results.

Initial Results

  • Encodo.Core compiles (almost) without change. The only change required was to move project-description attributes that used to be in the AssemblyInfo.cs file to the project file instead (where they admittedly make much more sense). If you don't do this, the compiler complains about "[CS0579] Duplicate 'System.Reflection.AssemblyCompanyAttribute' attribute" and so on.
  • Encodo.Expressions references System.Windows.Media for Color and the Colors constants. I changed those references to System.Drawing and Color, respectively -- something I knew I would have to do.
  • Encodo.Connections references the .NET-Framework--only WindowsIdentity. I will have to move these references to an Encodo.Core.Windows project and move creation of the CurrentCredentials, AnonymousCredentials and UserCredentials to a factory in the IOC.
  • Quino.Meta references the .NET-Framework--only WeakEventManager. There are only two references and these are used to implement a CollectionChanged feature that is nearly unused. I will probably have to copy/implement the WeakEventManager for now until we can deprecate those events permanently.
  • Quino.Data depends on Quino.Meta.Standard, which references System.Windows.Media (again) as well as a few other things. The Quino.Meta.Standard potpourri will have to be split up.

I discovered all of these things using just VS Code and the command-line build. It was pretty easy and straightforward.

So far, porting to .NET Standard is a much more rewarding process than our previous attempt at porting to .NET Core.

The Game Plan

At this point, I had a shadow copy of a bunch of the core Quino projects with new project files as well as a handful of ad-hoc changes and commented code in the source files. While OK for investigation, this was not a viable strategy for moving forward on a port for Quino.

I want to be able to work in a branch of Quino while I further investigate the viability of:

  • Targeting parts of Quino to .NET Standard 2.0 while keeping other parts targeting the lowest version of .NET Framework that is compatible with .NET Standard 2.0 (4.6.1). This will, eventually, be only the Winforms and WPF projects, which will never be supported under .NET Standard.
  • Using the new project-file format for all projects, regardless of target (which IDEs can I still use? Certainly the latest versions of Visual Studio et al.)

To test things out, I copied the new Encodo.Core project file back to the main Quino workspace and opened the old solution in Visual Studio for Mac and JetBrains Rider.

IDE Pros and Cons

Visual Studio for Mac

Visual Studio for Mac says it's a production release, but it stumbled right out of the gate: it failed to compile Encodo.Core even though dotnet build had compiled it without complaint from the get-go. Visual Studio for Mac claimed that OperatingSystem was not available. However, according to the documentation, OperatingSystem is available for .NET Standard -- but not in .NET Core. My theory is that Visual Studio for Mac was somehow misinterpreting my project file.

Update: After closing and re-opening the IDE, though, this problem went away and I was able to build Encodo.Core as well. Shaky, but at least it works now.

Unfortunately, working with this IDE remained difficult. It stumbled again on the second project that I changed to .NET Standard. Encodo.Core and Encodo.Expressions both have the same framework property in their project files -- <TargetFramework>netstandard2.0</TargetFramework> -- but, as you can see in the screenshot to the left, while both are identified as .NETStandard.Library, one has version 2.0.0-preview1-25301-01 and the other has version 1.6.1. I have no idea where that second version number is coming from -- it looks like this IDE is mashing up the .NET Framework version and the .NET Standard versions. Not quite ready for primetime.

Also, the application icon is mysteriously the bog-standard MacOS-app icon instead of something more...Visual Studio-y.

JetBrains Rider EAP (April 27th)

JetBrains Rider built the assembly without complaint, just as dotnet build did on the command line. Rider didn't stumble as hard as Visual Studio for Mac; unlike that IDE, it had no problems building projects after the framework had changed. That said, it wasn't always so easy to figure out what to do to get the framework downloaded and installed. Rider still has a bit of a way to go before I would make it my main IDE.

I also noticed that, while Rider's project/dependencies view accurately reflects .NET Standard projects, the "project properties" dialog shows the framework version as just "2.0". The list of version numbers makes this look like I'm targeting .NET Framework 2.0.

Additionally, Rider's error messages in the build console are almost always truncated. The image to the right is of the IDE trying to inform me that Encodo.Logging (which was still targeting .NET Framework 4.5) cannot reference Encodo.Core (which targets .NET Standard 2.0). If you copy/paste the message into an editor, you can see that's what it says.1

Visual Studio Code

I don't really know how to get Visual Studio Code to do much more than syntax-highlight my code and expose a terminal from which I can manually call dotnet build. They write about Roslyn integration where "[o]n startup the best matching projects are loaded automatically but you can also choose your projects manually". While I saw that the solution was loaded and recognized, I never saw any error-highlighting in VS Code. The documentation does say that it's "optimized for cross-platform .NET Core development" and my projects targeted .NET Standard so maybe that was the problem. At any rate, I didn't put much time into VS Code yet.

Next Steps

  1. Convert all Quino projects to use the new project-file format and target .NET Framework. Once that's all running with the new project-file format, it will be much easier to start targeting .NET Standard with certain parts of the framework.
  2. Change the target for all projects to .NET Framework 4.6.1 to ensure compatibility with .NET Standard once I start converting projects.
  3. Convert projects to .NET Standard wherever possible. As stated above, Encodo.Core already works and there are only minor adjustments needed to be able to compile Encodo.Expressions and Quino.Meta.
  4. Continue with conversion until I can compile Quino.Schema, Quino.Data.PostgreSql, Encodo.Parsers.Antlr and Quino.Web. With this core, we'd be able to run the WebAPI server we're building for a big customer on a Mac or a Linux box.
  5. Given this proof-of-concept, a next step would be to deploy as an OWIN server to Linux on Amazon and finally see a Quino-based application running on a much leaner OS/Web-server stack than the current Windows/IIS one.

I'll keep you posted.2



  1. Encodo.Expressions.AssemblyInfo.cs(14, 12): [CS0579] Duplicate 'System.Reflection.AssemblyCompanyAttribute' attribute
     Microsoft.NET.Sdk.Common.targets(77, 5): [null] Project '/Users/marco/Projects/Encodo/quino/src/libraries/Encodo.Core/Encodo.Core.csproj' targets '.NETStandard,Version=v2.0'. It cannot be referenced by a project that targets '.NETFramework,Version=v4.5'.

  2. Update: I investigated a bit farther and I'm having trouble using NETStandard2.0 from NETFramework462 (the Mono version on Mac). I was pretty sure that's how it's supposed to work, but NETFramework (any version) doesn't seem to want to play with NETStandard right now. Visual Studio for Mac tells me that Encodo.Core (NETStandard2.0) cannot be used from Encodo.Expressions (Net462), which doesn't seem right, but I'm not going to fight with it on this machine anymore. I'm going to try it on a fully updated Windows box next -- just to remove the Mono/Mac/NETCore/Visual Studio for Mac factors from the equation. Once I've got things running on Windows, I'll prepare a NETStandard project-only solution that I'll try on the Mac.

Beware the Hype: .NET Core

The article .NET Core, a call to action by Mark Rendle exhorts everyone to "go go go".

I say, "pump the brakes."

RC => Beta => Alpha

Mark says, "The next wave of work must be undertaken by the wider .NET community, both inside and outside Microsoft."

No. The next wave of work must be undertaken by the team building the product. This product is not even Beta yet. They have called the last two releases RC, but they aren't: the API is still changing quite dramatically. For example, the article Announcing .NET Core RC2 and .NET Core SDK Preview 11 lists all sorts of changes and the diff of APIs between RC1 and RC2 is gigantic -- the original article states that "[w]e added over a 1000 new APIs in .NET Core RC2".

What?!?!

That is a huge API-surface change between release candidates. That's why I think these designations are largely incorrect. Maybe they just mean, "hey, if y'all can actually work with this puny footprint, then we'll call it a final release. If not, we'll just add a bunch more stuff until y'all can compile again." Then, yeah, I guess each release is a "candidate".

But then they should just release 1.0 because this whole "RC" business is confusing. What they're really releasing are "alpha" builds. The quality is high, maybe even production-quality, but they're still massive changes vis-a-vis previous builds.

An Example: Project files

As an example of why that doesn't sound like "RC" to me, look at the project-file format, project.json.

Mark also noted that there are "no project.json files in the repository" for the OData project that comes from Microsoft. That's not too surprising, considering the team behind .NET Core just backed off of the project.json format considerably, as concisely documented in The Future of project.json in ASP.NET Core by Shawn Wildermuth. The executive summary is that they've decided "to phase out project.json in deference to MSBuild". Anyone who's based any of their projects on the already-available-in-VS-2015 project templates that use that format will have to convert them to whatever the final format is.

Wildermuth also wrote that "Microsoft has decided after the RTM of the ASP.NET Core framework to phase out project.json and use MSBuild for build data" (emphasis added). I was confused (again) but am pretty sure that he's wrong about RTM because, just a couple of days later, MS published an article Announcing ASP.NET Core RC2 -- and I'm pretty sure that RCs come before RTM.

Our Experience

At Encodo, we took a shot at porting the base assembly of Quino to .NET Core. Its only dependencies are on framework-provided assemblies in the GAC, so that eliminated any issues with third-party support, but it does provide helper methods for AppDomains and Reflection, which made a port to .NET Core nontrivial.

Here are a few things we learned that made the port take much longer than we expected.

  • Multi-target project.json works with the command-line tools. Create the project file and compile with dotnet.
  • Multi-target project.json files do not work in Visual Studio; you have to choose a single target. Otherwise, the same project that just built on the command line barely loads.
  • Also, Visual Studio ignores any #IFDEFs you use for platform-specific code (see the sketch after this list). So, even if you've gotten everything compiling on the command-line, be prepared to do it all over again differently if you actually want it to work in VS2015.
  • If you do have to change code per-platform (e.g. for framework-only), then you have to put that code in its own assembly if you want to use Visual Studio.
  • If you go to all the trouble to change your API surface to accommodate .NET Core, then you might have done the work for nothing: many of the missing APIs that we had to work around in porting Encodo.Core are suddenly back in RC2. That means that if we'd waited, we'd have saved a lot of time and ended up in the same place.
  • There are several versions and RCs available, but only the beta channel was usable for us (e.g. the RC3 versions didn't work at all when we tried them).
  • In the end, we didn't have to make a lot of changes to get Encodo.Core compiling under .NET Core.
  • We learned a lot and know that we won't have too much trouble porting at least some assemblies, but the tools and libraries are still not working together in a helpful way -- and that ends up eating a lot of time and effort.
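
To illustrate the platform-specific #if code mentioned in the list above, here's a minimal sketch. The NET451 symbol and the helper itself are assumptions for illustration; the actual conditional-compilation symbols depend on how the frameworks are declared in project.json.

using System;

public static class AppDomainTools
{
#if NET451
  public static string GetBaseDirectory()
  {
    // Full .NET Framework: AppDomain is available.
    return AppDomain.CurrentDomain.BaseDirectory;
  }
#else
  public static string GetBaseDirectory()
  {
    // .NET Core (at the time) had no AppDomain support, so fall back to AppContext.
    return AppContext.BaseDirectory;
  }
#endif
}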

With so much in flux -- APIs and project format -- we're not ready to invest more time and money in helping MS figure out what the .NET Core target needs. We're going to sit it out until there's an actual RTM. Even at that point, if we make a move, we'll try a small slice of Quino again and see how long it takes. If it's still painful, then we'll wait until the first service pack (as is our usual policy with development tools and libraries).

Conclusion

I understand Mark's argument that "the nature of a package-based ecosystem such as NuGet can mean that Project Z can't be updated until Project Y releases .NET Core packages, and Project Y may be waiting on Project X, and so on". But I just don't, as he says, "trust that what we have now in RC2 is going to remain stable in API terms", so I wouldn't recommend "that OSS project maintainers" do so, either. It's just not ready yet.

If you jump on the .NET Core train now, be prepared to shovel coal. Oh, and you might just have to walk to the next station, too. At noon. Carrying your overseas trunk on your back. Once you get there, though, you might be just in time for the 1.0.1 or 1.0.2 express arriving at the station. You can get on -- you might not even have to buy a new ticket -- and you'll arrive at the same time as everyone else.



  1. The Mark Rendle article states boldly that "Yesterday we finally got our hands on the first Release Candidate of .NET Core [...]" but I don't know what he's talking about. The project just released RC2 and there are even RC3 packages available in the channel already -- but these are totally useless and didn't work at all in our projects.

C# Handbook Rewrite Coming Soon

Encodo published its first C# Handbook on its web site in 2008. At the time, we also published it to several other standard places and got some good, positive feedback. Over the next year, I made some more changes and published new versions. The latest version is 1.5.2 and is available from Encodo's web site. Since then, I've made a few extra notes and corrected a few errors, but never published an official version again.

This is not because Encodo hasn't improved or modernized its coding guidelines, but because of several issues, listed below.

  • At 72 pages, it's really quite long
  • A more compact, look-up reference would be nice
  • It contains a mix of C#-specific, Encodo-specific and library-specific advice
  • It's maintained in Microsoft Word
  • Code samples are manually formatted
  • New versions are simply new copies in versioned folders (no source control)
  • Collaboration is nearly impossible
  • There is nothing about any .NET version newer than 3.5
  • There is no mention of any other programming language (e.g. TypeScript, JavaScript)
  • A lot of stuff is overly complicated (e.g. var advice) or just plain wrong (e.g. var advice)

To address these issues and to accommodate the new requirements, here's what we're going to do:

  • Convert the entire document from Word to Markdown and put it in a Git repository

    • Collaboration? Pull requests. Branches.
    • Versioning? Standard diffing of commits.
    • Code samples? Automatic highlighting from GitLab (Encodo's internal server) or GitHub (external repository).
  • Separate the chapters into individual files and keep them shorter and more focused on a single topic

  • Separate all of the advice and rules into the following piles:

    • General programming advice and best practices
    • C#-specific
    • Encodo-specific
    • Library-specific (e.g. Quino)

These are the requirements and goals for a new version of the C# handbook.

The immediate next steps are:

  1. Convert current version from Microsoft Word to Markdown (done)
  2. Add everything to a Git repository (done)
  3. Overhaul the manual to remove incorrect and outdated material; address issues above (in progress)
  4. Mirror externally (GitHub or GitLab or both)

I hope to have an initial, modern version ready within the next month or so.

API Design: The Road Not Taken

Unwritten code requires no maintenance and introduces no cognitive load.

As I was working on another part of Quino the other day, I noticed that the oft-discussed registration and configuration methods1 were a bit clunkier than I'd have liked. To wit, the methods that I tended to use together for configuration had different return types and didn't allow me to freely mix calls fluently.

The difference between Register and Use

The return type for Register methods is IServiceRegistrationHandler and the return type for Use methods is IApplication (a descendant of IServiceRegistrationHandler). The Register* methods come from the IOC interfaces, while the application builds on top of this infrastructure with higher-level Use* configuration methods.
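
To make that difference concrete, here's a simplified sketch of the two layers. These are not the actual Quino declarations -- the real interfaces have more members -- but they show why mixing the two kinds of calls breaks the fluent chain.

public interface IServiceRegistrationHandler
{
  // Registration methods return the IOC-level interface...
  IServiceRegistrationHandler Register<TService, TImplementation>()
    where TService : class
    where TImplementation : class, TService;

  IServiceRegistrationHandler RegisterSingle<TService, TImplementation>()
    where TService : class
    where TImplementation : class, TService;
}

public interface IApplication : IServiceRegistrationHandler
{
  // Application-specific members elided.
}

public static class ApplicationExtensions
{
  // ...while Use* methods are written against IApplication and keep the more
  // specific type in the fluent chain. Calling a Register* method "downgrades"
  // the static type of the chain to IServiceRegistrationHandler.
  public static IApplication UseStandard(this IApplication application)
  {
    // ...register standard services...
    return application;
  }
}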

This forces developers to write code in the following way to create and configure an application.

public IApplication CreateApplication()
{
  var result =
    new Application()
    .UseStandard()
    .UseOtherComponent();

  result
    .RegisterSingle<ICodeHandler, CustomCodeHandler>()
    .Register<ICodePacket, FSharpCodePacket>();

  return result;
}

That doesn't look too bad, though, does it? It doesn't seem like it would cramp anyone's style too much, right? Aren't we being a bit nitpicky here?

That's exactly why Quino 2.0 was released with this API. However, here we are, months later, and I've written a lot more configuration code and it's really starting to chafe that I have to declare a local variable and sort my method invocations.

So I think it's worth addressing. Anything that disturbs me as the writer of the framework -- that gets in my way or makes me write more code than I'd like -- is going to disturb the users of the framework as well.

Whether they're aware of it or not.

Developers are the Users of a Framework

In the best of worlds, users will complain about your crappy API and make you change it. In the world we're in, though, they will cheerfully and unquestioningly copy/paste the hell out of whatever examples of usage they find and cement your crappy API into their products forever.

Do not underestimate how quickly calls to your inconvenient API will proliferate. In my experience, programmers really tend to just add a workaround for whatever annoys them instead of asking you to fix the problem at its root. This is a shame. I'd rather they just complained vociferously that the API is crap rather than using it and making me support it side-by-side with a better version for what usually feels like an eternity.

Maybe it's because I very often have control over framework code that I will just not deal with bad patterns or repetitive code. Also I've become very accustomed to having a wall of tests at my beck and call when I bound off on another initially risky but in-the-end rewarding refactoring.

If you're not used to this level of control, then you just deal with awkward APIs or you build a workaround as a band-aid for the symptom rather than going after the root cause.

Better Sooner than Later

So while the code above doesn't trigger warning bells for most, once I'd written it a dozen times, my fingers were already itching to add [Obsolete] on something.

I am well-aware that this is not a simple or cost-free endeavor. However, I happen to know that there aren't that many users of this API yet, so the damage can be controlled.

If I wait, then replacing this API with something better later will take a bunch of versions, obsolete warnings, documentation and re-training until the old API is finally eradicated. It's much better to use your own APIs -- if you can -- before releasing them into the wild.

Another more subtle reason why the API above poses a problem is that it's more difficult to discover, to learn. The difference in return types will feel arbitrary to product developers. Code-completion is less helpful than it could be.

It would be much nicer if we could offer an API that helped users discover it at their own pace instead of making them step back and learn new concepts. Ideally, developers of Quino-based applications shouldn't have to know the subtle difference between the IOC and the application.

A Better Way

Something like the example below would be nice.

return
  new Application()
  .UseStandard()
  .RegisterSingle<ICodeHandler, CustomCodeHandler>()
  .UseOtherComponent()
  .Register<ICodePacket, FSharpCodePacket>();

Right? Not a gigantic change, but if you can imagine how a user would write that code, it's probably a lot easier and more fluid than writing the first example. In the second example, they would just keep asking code-completion for the next configuration method and it would just be there.

Attempt #1: Use a Self-referencing Generic Parameter

To do this, I'd already created an issue in our tracker to parameterize the IServiceRegistrationHandler type so that registration methods could return the proper type.

I'll show below what I mean, but I took a crack at it recently because I'd just watched the very interesting video Fun with Generics by Benjamin Hodgson, which starts off with a technique identical to the one I'd planned to use -- and that I'd already used successfully for the IQueryCondition interface.2

Let's redefine the IServiceRegistrationHandler interface as shown below,

public interface IServiceRegistrationHandler<TSelf>
{
  TSelf Register<TService, TImplementation>()
      where TService : class
      where TImplementation : class, TService;

  // ...
}

Can you see how we pass the type we'd like to return as a generic type parameter? Then the descendants would be defined as,

public interface IApplication : IServiceRegistrationHandler<IApplication>
{
}

In the video, Hodgson notes that the technique has a name in formal notation, "F-bounded quantification" but that a snappier name comes from the C++ world, "curiously recurring template pattern". I've often called it a self-referencing generic parameter, which seems to be a popular search term as well.

This is only the first step, though. The remaining work is to update all usages of the formerly non-parameterized interface IServiceRegistrationHandler. This means that a lot of extension methods like the one below

public static IServiceRegistrationHandler RegisterCoreServices(
  [NotNull] this IServiceRegistrationHandler handler)
{
}

will now look like this:

public static TSelf RegisterCoreServices<TSelf>(
  [NotNull] this IServiceRegistrationHandler<TSelf> handler)
  where TSelf : IServiceRegistrationHandler<TSelf>
{
}

This makes defining such methods more complex (again).3 In my attempt at implementing this, Visual Studio indicated 170 errors remaining after I'd already updated a couple of extension methods.

Attempt #2: Simple Extension Methods

Instead of continuing down this path, we might just want to follow the pattern we established in a few other places, by defining both a Register method, which uses the IServiceRegistrationHandler, and a Use method, which uses the IApplication.

Here's an example of the corresponding "Use" method:

public static IApplication UseCoreServices(
  [NotNull] this IApplication application)
{
  if (application == null) { throw new ArgumentNullException("application"); }

  application
    .RegisterCoreServices()
    .RegisterSingle(application.GetServices())
    .RegisterSingle(application);

  return application;
}

Though the technique involves a bit more boilerplate, it's easy to write and understand (and reason about) these methods. As mentioned in the initial sentence of this article, the cognitive load is lower than the technique with generic parameters.

The only place where it would be nice to have an IApplication return type is from the Register* methods defined on the IServiceRegistrationHandler itself.

We already decided that self-referential generic constraints would be too messy. Instead, we could define some extension methods that return the correct type. We can't name the method the same as the one that already exists on the interface4, though, so let's prepend the word Use, as shown below:

public static IApplication UseRegister<TService, TImplementation>(
  [NotNull] this IApplication application)
    where TService : class
    where TImplementation : class, TService
{
  if (application == null) { throw new ArgumentNullException("application"); }

  application.Register<TService, TImplementation>();

  return application;
}

That's actually pretty consistent with the other configuration methods. Let's take it for a spin and see how it feels. Now that we have an alternative way of registering types fluently without "downgrading" the result type from IApplication to IServiceRegistrationHandler, we can rewrite the example from above as:

return
  new Application()
  .UseStandard()
  .UseRegisterSingle<ICodeHandler, CustomCodeHandler>()
  .UseOtherComponent()
  .UseRegister<ICodePacket, FSharpCodePacket>();

Instead of increasing cognitive load by trying to push the C# type system to places it's not ready to go (yet), we use tiny methods to tweak the API and make it easier for users of our framework to write code correctly.5


Perhaps an example of the method-resolution behavior mentioned in footnote 4 is in order:

interface IA 
{
  IA RegisterSingle<TService, TConcrete>();
}

interface IB : IA { }

static class BExtensions
{
  public static IB RegisterSingle<TService, TConcrete>(this IB b) { return b; }

  public static IB UseStuff(this IB b) { return b; }
}

Let's try to call the method from BExtensions:

public void Configure(IB b)
{
  b.RegisterSingle<IFoo, Foo>().UseStuff();
}

The call to UseStuff cannot be resolved because the return type of the matched RegisterSingle method is the IA of the interface method, not the IB of the extension method. There is a solution, but you're not going to like it (I know I don't).

public void Configure(IB b)
{
  BExtensions.RegisterSingle<IFoo, Foo>(b).UseStuff();
}

You have to specify the extension-method class's name explicitly, which engenders awkward fluent chaining -- you'll have to nest these calls if you have more than one -- but the desired method-resolution was obtained.

But at what cost? The horror...the horror.


  1. See Encodo's configuration library for Quino Part 1, Part 2 and Part 3 as well as API Design: Running an Application Part 1 and Part 2 and, finally, Starting up an application, in detail.

  2. The video goes into quite a bit of depth on using generics to extend the type system in the direction of dependent types. Spoiler alert: he doesn't make it because the C# type system can't be abused in this way, but the journey is informative.

  3. As detailed in the links in the first footnote, I'd just gotten rid of this kind of generic constraint in the configuration calls because it was so ugly and offered little benefit.

  4. If you define an extension method for a descendant type that has the same name as a method of an ancestor interface, the method-resolution algorithm for C# will never use it. Why? Because the directly defined method matches the name and all the types and is a "stronger" match than an extension method.

  5. The final example does not run against Quino 2.2, but will work in an upcoming version of Quino, probably 2.3 or 2.4.

Profiling: that critical 3% (Part II)

In part I of this series, we discussed some core concepts of profiling. In that article, we discussed not only the problem at hand, but also how to think about fixing performance problems -- and reducing the likelihood that they get out of hand in the first place.

In this second part, we'll go into detail and try to fix the problem.

Reevaluating the Requirements

Since we have new requirements for an existing component, it's time to reconsider the requirements for all stakeholders. In terms of requirements, the IScope can be described as follows:

  1. Hold a list of objects in LIFO order
  2. Hold a list of key/value pairs with a unique name as the key
  3. Return the value/reference for a key
  4. Return the most appropriate reference for a given requested type. The most appropriate object is the one that was added with exactly the requested type. If no such object was added, then the first object that conforms to the requested type is returned
  5. These two piles of objects are entirely separate: if an object is added by name, we do not expect it to be returned when a request for an object of a certain type is made

There is more detail, but that should give you enough information to understand the code examples that follow.
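
Pulling those requirements together, a minimal sketch of the contract might look like the interface below. GetInstances(), AddUnnamed() and the string indexer appear in the code samples later in this article; the other member names are assumptions and the real IScope has more members.

using System.Collections.Generic;

public interface IScope
{
  // Requirement 1: unnamed objects, conceptually held in LIFO order
  void AddUnnamed(object value);

  // Requirement 2: named objects with a unique name as the key
  void Add(string key, object value);

  // Requirement 3: the value/reference for a key (null if nothing was added under that name)
  object this[string key] { get; }

  // Requirements 4 & 5: the object added with exactly the requested type first,
  // followed by any other unnamed objects that conform to that type
  IEnumerable<TService> GetInstances<TService>();
}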

Usage Patterns

There are many ways of implementing the functional requirements listed above. While you can implement the feature with only the requirements in hand, it's very helpful to know the usage patterns when trying to optimize code.

Therefore, we'd like to know exactly what kind of contract our code has to implement -- and to not implement any more than was promised.

Sometimes a hopeless optimization task gets a lot easier when you realize that you only have to optimize for a very specific situation. In that case, you can leave the majority of the code alone and optimize a single path through the code to speed up 95% of the calls. All other calls, while perhaps a bit slow, will at least still yield the correct results.

And "optimized" doesn't necessarily mean that you have to throw all of your language's higher-level constructs out the window. Once your profiling tool tells you that a particular bit of code has introduced a bottleneck, it often suffices to just examine that particular bit of code more closely. Just picking the low-hanging fruit will usually be more than enough to fix the bottleneck.1

Create scopes faster2

I saw in the profiler that creating the ExpressionContext had gotten considerably slower. Here's the code in the constructor.

foreach (var value in values.Where(v => v != null))
{
  Add(value);
}

I saw a few potential problems immediately.

  • The call to Add() had gotten more expensive in order to return the most appropriate object from the GetInstances() method
  • The Linq code had replaced a call to AddRange()

The faster version is below:

var scope = CurrentScope;
for (var i = 0; i < values.Length; i++)
{
  var value = values[i];
  if (value != null)
  {
    scope.AddUnnamed(value);
  }
}

Why is this version faster? The code now uses the fact that we know we're dealing with an indexable list to avoid allocating an enumerator and to use non-allocating means of checking null. While the Linq code is highly optimized, a for loop is always going to be faster because it's guaranteed not to allocate anything. Furthermore, we now call AddUnnamed() to use the faster registration method because the more involved method is never needed for these objects.

The optimized version is less elegant and harder to read, but it's not terrible. Still, you should use these techniques only if you can prove that they're worth it.

Optimizing CurrentScope

Another minor improvement is that the call to retrieve the scope is made only once regardless of how many objects are added. On the one hand, we might expect only a minor improvement since we noted above that most use cases only ever add one object anyway. On the other, however, we know that we call the constructor 20 million times in at least one test, so it's worth examining.

The call to CurrentScope gets the last element of the list of scopes. Even something as innocuous as calling the Linq extension method Last() can get more costly than it needs to be when your application calls it millions of times. Of course, Microsoft has decorated its Linq calls with all sorts of compiler hints for inlining and, of course, if you decompile, you can see that the method itself is implemented to check whether the target of the call is a list and use indexing, but it's still slower. There is still an extra stack frame (unless inlined) and there is still a type-check with as.

Replacing a call to Last() with getting the item at the index of the last position in the list is not recommended in the general case. However, making that change in a provably performance-critical area shaved a percent or two off a test run that takes about 45 minutes. That's not nothing.

// Before: Linq extension method
protected IScope CurrentScope
{
  get { return _scopes.Last(); }
}

// After: indexing directly into the list
protected IScope CurrentScope
{
  get { return _scopes[_scopes.Count - 1]; }
}

That takes care of the creation & registration side, where I noticed a slowdown when creating the millions of ExpressionContext objects needed by the data driver in our product's test suite.

Get objects faster

Let's now look at the evaluation side, where objects are requested from the context.

The offending, slow code is below:

public IEnumerable<TService> GetInstances<TService>()
{
  var serviceType = typeof(TService);
  var rawNameMatch = this[serviceType.FullName];

  var memberMatches = All.OfType<TService>();
  var namedMemberMatches = NamedMembers.Select(
    item => item.Value
  ).OfType<TService>();

  if (rawNameMatch != null)
  {
    var nameMatch = (TService)rawNameMatch;

    return
      nameMatch
      .ToSequence()
      .Union(namedMemberMatches)
      .Union(memberMatches)
      .Distinct(ReferenceEqualityComparer<TService>.Default);
  }

  return namedMemberMatches.Union(memberMatches);
}

As you can readily see, this code isn't particularly concerned about performance. It is, however, relatively easy to read and to figure out the logic behind returning objects. As long as no-one really needs this code to be fast -- if it's not used that often and not used in tight loops -- it doesn't matter. What matters more is legibility and maintainability.

But we now know that we need to make it faster, so let's focus on the most-likely use cases. I know the following things:

  • Almost all Scope instances are created with a single object in them and no other objects are ever added.
  • Almost all object-retrievals are made on such single-object scopes
  • Though the scope should be able to return all matching instances, sorted by the rules laid out in the requirements, all existing calls get the FirstOrDefault() object.

These extra bits of information will allow me to optimize the already-correct implementation to be much, much faster for the calls that we're likely to make.

The optimized version is below:

public IEnumerable<TService> GetInstances<TService>()
{
  var members = _members;

  if (members == null)
  {
    yield break;
  }

  if (members.Count == 1)
  {
    if (members[0] is TService)
    {
      yield return (TService)members[0];
    }

    yield break;
  }

  object exactTypeMatch;
  if (TypedMembers.TryGetValue(typeof(TService), out exactTypeMatch))
  {
    yield return (TService)exactTypeMatch;
  }

  foreach (var member in members.OfType<TService>())
  {
    if (!ReferenceEquals(member, exactTypeMatch))
    {
      yield return member;
    }
  }
}

Given the requirements, the handful of use cases and decent naming, you should be able to follow what's going on above. The code contains many more escape clauses for common and easily handled conditions, handling them in an allocation-free manner wherever possible.

  1. Handle empty case
  2. Handle single-element case
  3. Return exact match
  4. Return all other matches3

You'll notice that returning a value added by-name is not a requirement and has been dropped. Improving performance by removing code for unneeded requirements is a perfectly legitimate solution.

Test Results

And, finally, how did we do? I created tests for the following use cases:

  • Create scope with multiple objects
  • Get all matching objects in an empty scope
  • Get first object in an empty scope
  • Get all matching objects in a scope with a single object
  • Get first object in a scope with a single object
  • Get all matching objects in a scope with multiple objects
  • Get first object in a scope with multiple objects

Here are the numbers from the automated tests.


  • Create scope with multiple objects -- 12x faster
  • Get all matching objects in an empty scope -- almost 2.5x faster
  • Get first object in an empty scope -- almost 3.5x faster
  • Get all matching objects in a scope with a single object -- over 3x faster
  • Get first object in a scope with a single object -- over 3.25x faster
  • Get all matching objects in a scope with multiple objects -- almost 3x faster
  • Get first object in a scope with multiple objects -- almost 2.25x faster

This looks amazing but remember: while the optimized solution may be faster than the original, all we really know is that we've just managed to claw our way back from the atrocious performance characteristics introduced by a recent change. We expect to see vast improvements versus a really slow version.

Since I know that these calls showed up as hotspots and were made millions of times in the test, the performance improvement shown by these tests is enough for me to deploy a pre-release of Quino via TeamCity, upgrade my product to that version and run the tests again. Wish me luck.4



  1. The best approach at this point is to create issues for the other performance investigations you could make. For example, I opened an issue called Optimize allocations in the data handlers (start with IExpressionContexts), documented everything I had analyzed and quickly got back to the issue on which I'd started.

  2. For those with access to the Quino Git repository, the diffs shown below come from commit a825d5030ce6f65a452e1db85a308e1351288b96.

  3. If you're following along very, very carefully, you'll recall at this point that the requirement stated above is that objects are returned in LIFO order. The faster version of the code returns objects in FIFO order. You can't tell that the original, slow version did guarantee LIFO ordering, but only because the call to get All members contained a hidden call to the Linq call Reverse(), which slowed things down even more! I removed the call to reverse all elements because (A) I don't actually have any tests for the LIFO requirement nor (B) do I have any other code that expects it to happen. I wasn't about to make the code even more complicated and possibly slower just to satisfy a purely theoretical requirement. That's the kind of behavior that got me into this predicament in the first place.

  4. Spoiler alert: it worked. ;-) The fixes cut the testing time from about 01:30 to about 01:10 for all tests on the build server, so we won back the lost 25%.

Profiling: that critical 3% (Part I)

An oft-quoted bit of software-development sagacity is

Premature optimization is the root of all evil.

As is so often the case with quotes -- especially those on the Internet1 -- this one has a slightly different meaning in context. The snippet above invites developers to overlook the word "premature" and interpret the received wisdom as "you don't ever need to optimize."

Instead, Knuth's full quote actually tells you how much of your code is likely to be affected by performance issues that matter (highlighted below).

Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.

An Optimization Opportunity in Quino2

In other articles, I'd mentioned that we'd upgraded several solutions to Quino 2 in order to test that the API was solid enough for a more general release. One of these products is both quite large and has a test suite of almost 1500 tests. The product involves a lot of data-import and manipulation and the tests include several scenarios where Quino is used very intensively to load, process and save data.

These tests used to run in a certain amount of time, but started taking about 25% longer after the upgrade to Quino 2.

Measuring Execution Speed

Before doing anything else -- making educated guesses as to what the problem could be, for example -- we measure. At Encodo, we use JetBrains DotTrace to collect performance profiles.

There is no hidden secret: the standard procedure is to take a measurement before and after the change and to compare them. However, so much had changed from Quino 1.13 to Quino 2 -- e.g. namespaces and type names had changed -- that while DotTrace was able to show some matches, the comparisons were not as useful as usual.

A comparison between codebases that hadn't changed so much is much easier, but I didn't have that luxury.

Tracking the Problem

Even excluding the less-than-optimal comparison, it was an odd profile. Ordinarily, one or two issues stick out right away, but the slowness seemed to suffuse the entire test run. Since the direct profiling comparison was difficult, I downloaded test-speed measurements as CSV from TeamCity for the product where we noticed the issue.

How much slower, you might ask? The test that I looked at most closely took almost 4 minutes (236,187ms) in the stable version, but took 5:41 in the latest build.


This test was definitely one of the largest and longest tests, so it was particularly impacted. Most other tests that imported and manipulated data ranged anywhere from 10% to 30% slower.

When I looked for hot-spots, the profile unsurprisingly showed me that database access took up the most time. The issue was more subtle: while database-access still used the most time, it was using a smaller percentage of the total time. Hot-spot analysis wasn't going to help this time. Sorting by absolute times and using call counts in the tracing profiles yielded better clues.

The tests were slower when saving and also when loading data. But I knew that the ORM code itself had barely changed at all. And, since the product was using Quino so heavily, the stack traces ran quite deep. After a lot of digging, I noticed that creating the ExpressionContext to hold an object while evaluating expressions locally seemed to be taking longer than before. This was my first, real clue.

Once I was on the trail, I found that evaluating calls (getting objects) that used local evaluation was also always slower.

Don't Get Distracted

Once you start looking for places where performance is not optimal, you're likely to start seeing them everywhere. However, as noted above, 97% of them are harmless.

To be clear, we're not optimizing because we feel that the framework is too slow but because we've determined that the framework is now slower than it used to be and we don't know why.

Even after we've finished restoring the previous performance (or maybe even making it a little better), we might still be able to easily optimize further, based on other information that we gleaned during our investigation.

But we want to make sure that we don't get distracted and start trying to FIX ALL THE THINGS instead of just focusing on one task at a time. While it's somewhat disturbing that we seem to be creating 20 million ExpressionContext objects in a 4-minute test, that is also how we've always done it, and no-one has complained about the speed up until now.

Sure, if we could reduce that number to only 2 million, we might be even faster3, but the point is that we used to be faster on the exact same number of calls -- so fix that first.

A Likely Culprit: Scope

I found a likely candidate in the Scope class, which implements the IScope interface. This type is used throughout Quino, but the two use-cases that affect performance are:

  1. As a base for the ExpressionContext, which holds the named values and objects to be used when evaluating the value of an IExpression. These expressions are used everywhere in the data driver.
  2. As a base for the poor-man's IOC used in Stage 2 of application execution.4

The former usage has existed unchanged for years; its implementation is unlikely to be the cause of the slowdown. The latter usage is new and I recall having made a change to the semantics of which objects are returned by the Scope in order to make it work there as well.

How could this happen?

You may already be thinking: smooth move, moron. You changed the behavior of a class that is used everywhere for a tacked-on use case. That's definitely a valid accusation to make.

In my defense, my instinct is to reuse code wherever possible. If I already have a class that holds a list of objects and gives me back the object that matches a requested type, then I will use that. If I discover that the object that I get back isn't as predictable as I'd like, then I improve the predictability of the API until I've got what I want. If the improvement comes at no extra cost, then it's a win-win situation. However, this time I paid for the extra functionality with degraded performance.

Where I really went wrong was that I'd made two assumptions:

  1. I assumed that all other usages were also interested in improved predictability.
  2. I assumed that all other usages were not performance-critical. When I wrote the code you'll see below, I distinctly remember thinking: it's not fast, but it'll do and I'll make it faster if it becomes a problem. Little did I know how difficult it would be to find the problem.

Preventing future slippage

Avoid changing a type shared by different systems without considering all stakeholder requirements.

I think a few words on process here are important. Can we improve the development process so that this doesn't happen again? One obvious answer would be to avoid changing a type shared by different systems without considering all stakeholder requirements. That's a pretty tall order, though. Including this in the process will most likely lead to less refactoring and improvement out of fear of breaking something.

We discussed above how completely reasonable assumptions and design decisions led to the performance degradation. So we can't be sure it won't happen again. What we would like, though, is to be notified quickly when there is performance degradation, so that it appears as a test failure.

Notify quickly when there is performance degradation

Our requirements are captured by tests. If all of the tests pass, then the requirements are satisfied. Performance is a non-functional requirement. Where we could improve Quino is to include high-level performance tests that would sound the alarm the next time something like this happens.5
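
As a rough idea of what such a test could look like, here's a minimal sketch using NUnit's MaxTime attribute. The fixture, the constructor call and the threshold are invented for illustration; real thresholds would have to be calibrated on the build server.

using NUnit.Framework;

[TestFixture]
public class ScopePerformanceTests
{
  [Test, MaxTime(2000)]
  public void CreatingManyExpressionContextsIsFastEnough()
  {
    for (var i = 0; i < 100000; i++)
    {
      // The constructor signature is assumed from the description above
      // (it accepts the objects to add to the scope).
      new ExpressionContext(new object());
    }
  }
}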

Enough theory: in part II, we'll describe the problem in detail and take a crack at improving the speed. See you there.



  1. In fairness, the quote is at least properly attributed. It really was Donald Knuth who wrote it.

  2. By "opportunity", of course, I mean that I messed something up that made Quino slower in the new version.

  4. See the article Quino 2: Starting up an application, in detail for more information on this usage.

  5. I'm working on this right now, in issue Add standard performance tests for release 2.1.

ReSharper Unit Test Runner 9.x update

Way back in February, I wrote about my experiences with ReSharper 9 when it first came out. The following article provides an update, this time with version 9.2, released just last week.

tl;dr: I'm back to ReSharper 8.2.3 and am a bit worried about the state of the 9.x series of ReSharper. Ordinarily, JetBrains has eliminated performance, stability and functional issues by the first minor version-update (9.1), to say nothing of the second (9.2).

Test Runner

In the previous article, my main gripe was with the unit-test runner, which was unusable due to flakiness in the UI, execution and change-detection. With the release of 9.2, the UI and change-detection problems have been fixed, but the runner is still quite flaky at executing tests.

What follows is the text of the report that I sent to JetBrains when they asked me why I uninstalled R# 9.2.

As with 9.0 and 9.1, I am unable to productively use the 9.2 Test Runner with many of my NUnit tests. These tests are not straight-up, standard tests, but R# 8.2.3 handled them without any issues whatsoever.

What's special about my tests?

There are quite a few base classes providing base functionality. The top layers provide scenario-specific input via a generic type parameter.

- **TestsBase**
  - **OtherBase<TMixin>**
     (7 of these, one with an NUnit CategoryAttribute)
    - **ConcreteTests<TMixin>**
       (defines tests with NUnit TestAttributes)
      - **ProviderAConcreteTests<TMixin>**
         (CategoryAttribute)
        - **ProtocolAProviderAConcreteTests**
          (TMixin = ProtocolAProviderA; TestFixtureAttribute, CategoryAttributes)
        - **ProtocolBProviderAConcreteTests**
          (TMixin = ProtocolBProviderA; TestFixtureAttribute, CategoryAttributes)
      - **ProviderBConcreteTests<TMixin>**
         (CategoryAttribute)
        - **ProtocolAProviderBConcreteTests**
          (TMixin = ProtocolAProviderB; TestFixtureAttribute, CategoryAttributes)
        - **ProtocolBProviderBConcreteTests**
          (TMixin = ProtocolBProviderB; TestFixtureAttribute, CategoryAttributes)

The test runner in 9.2 is not happy with this at all. The test explorer shows all of the tests correctly, with the test counts correct. If I select a node for all tests for ProviderB and ProtocolA (696 tests in 36 fixtures), R# loads 36 non-expandable nodes into the runner and, after a bit of a wait, marks them all as inconclusive. Running an individual test-fixture node does not magically cause the tests to load or appear and also shows inconclusive (after a while; it seems the fixture setup executes as expected but the results are not displayed).

If I select a specific, concrete fixture and add or run those tests, R# loads and executes the runner correctly. If I select multiple test fixtures in the explorer and add them, they also show up as expandable nodes, with the correct test counts, and can be executed individually (per fixture). However, if I elect to run them all by running the parent node, R# once again marks everything as inconclusive.

As I mentioned, 8.2.3 handles this correctly and I feel R# 9.2 isn't far off -- the unit-test explorer does, after all, show the correct tests and counts. In 9.2, it's not only inconvenient, but I'm worried that my tests are not being executed with the expected configuration.

Also, I really missed the StyleCop plugin for 9.2. There's a beta version for 9.1 that caused noticeable lag, so I'm still waiting for a more unobtrusive version for 9.2 (or any version at all).

While it's possible that there's something I'm doing wrong, or there's something in my installation that's strange, I don't think that's the problem. As I mentioned, test-running for the exact same solution with 8.2.3 is error-free and a pleasure to use. In 9.2, the test explorer shows all of the tests correctly, so R# is clearly able to interpret the hierarchy and attributes (noted above) as I've intended them to be interpreted. This feels very much like a bug or a regression for which JetBrains doesn't have test coverage. I will try to work with them to help them get coverage for this case.

Real-Time StyleCop rules

Additionally, the StyleCop plugin is absolutely essential for my workflow and there still isn't an official release for any of the 9.x versions. ReSharper 9.2 isn't supported at all yet, even in prerelease form. The official Codeplex page shows the latest official version as 4.7, released in January of 2012 for ReSharper 8.2 and Visual Studio 2013. One would imagine that VS2015 support is in the works, but it's hard to say. There is a page for StyleCop in the ReSharper extensions gallery but that shows a beta4, released in April of 2015, that only works with ReSharper 9.1.x, not 9.2. I tested it with 9.1.x, but it noticeably slowed down the UI. While typing was mostly unaffected, scrolling and switching file-tabs was very laggy. Since StyleCop is essential for so many developers, it's hard to see why the plugin gets so little love from either JetBrains or Microsoft.

GoTo Word

The "Go To Word" plugin is not essential but it is an extremely welcome addition, especially with so much more client-side work depending on text-based bindings that aren't always detected by ReSharper. In those cases, you can find -- for example -- all the references of a Knockout template by searching just as you would for a type or member. Additionally, you benefit from the speed of the ReSharper indexing engine and search UI instead of using the comparatively slow and ugly "Find in Files" support in Visual Studio. Alternatives suggested in the comments to the linked issue above all depend on building yet another index of data (e.g. Sando Code Search Tool). JetBrains has pushed off integrating go-to-word until version 10. Again, not a deal-breaker, but a shame nonetheless, as I'll have to do without it in 9.x until version 10 is released.

With so much more client-side development going on in Visual Studio and with dynamic languages and data-binding languages that use name-matching for data-binding, GoToWord is more and more essential. Sure, ReSharper can continue to integrate native support for finding such references, but until that happens, we're stuck with the inferior Find-in-Files dialog or other extensions that increase the memory pressure for larger solutions.

C# 6 Features and C# 7 Design Notes

Microsoft has recently made a lot of their .NET code open-source. Not only is the code for many of the base libraries open-source but also the code for the runtime itself. On top of that, basic .NET development is now much more open to community involvement.

In that spirit, even endeavors like designing the features to be included in the next version of C# are online and open to all: C# Design Meeting Notes for Jan 21, 2015 by Mads Torgersen.

C# 6 Recap

You may be surprised at the version number "7" -- aren't we still waiting for C# 6 to be officially released? Yes, we are.

If you'll recall, the primary feature added to C# 5 was support for asynchronous operations through the async/await keywords. Most .NET programmers are only getting around to using this rather far- and deep-reaching feature, to say nothing of the new C# 6 features that are almost officially available.

C# 6 brings a slew of new features with it and can already be used in the CTP versions of Visual Studio 2015 or downloaded from the Roslyn project.

Some of the more interesting features of C# 6 are listed below; a short sketch combining several of them follows the list:

  • Auto-Property Initializers: initialize a property in the declaration rather than in the constructor or on an otherwise unnecessary local variable.
  • Out Parameter Declaration: An out parameter can now be declared inline with var or a specific type. This avoids the ugly variable declaration outside of a call to a Try* method.
  • Using Static Class: using can now be used with a static class as well as a namespace. Direct access to methods and properties of a static class should clean up some code considerably.
  • String Interpolation: Instead of using string.Format() and numbered parameters for formatting, C# 6 allows expressions to be embedded directly in a string (à la PHP): e.g. $"{Name} logged in at {Time}"
  • nameof(): This language feature gets the name of the element passed to it; useful for data-binding, logging or anything that refers to variables or properties.
  • Null-conditional operator: This feature reduces conditional, null-checking cruft by returning null when the target of a call is null. E.g. company.People?[0]?.ContactInfo?.BusinessAddress.Street includes three null-checks.
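
To make these features more concrete, here's a minimal sketch that combines several of them (the User class and its properties are invented for illustration):

using System;
using static System.Console;              // Using Static Class

public class User
{
    // Auto-property initializer
    public string Name { get; set; } = "Anonymous";

    public DateTime? LastLogin { get; set; }

    // Expression-bodied member with string interpolation
    public override string ToString() => $"{Name} logged in at {LastLogin}";
}

public static class Program
{
    public static void Main()
    {
        var user = new User { Name = "bob", LastLogin = DateTime.Now };

        // nameof() avoids magic strings in logging and data-binding
        WriteLine($"Set {nameof(user.Name)} to {user.Name}");

        // Null-conditional operator: no NullReferenceException if LastLogin is null
        WriteLine(user.LastLogin?.ToString("u") ?? "never");
    }
}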

Looking ahead to C# 7

If the idea of using await correctly or wrapping your head around the C# 6 features outlined above doesn't already make your poor head spin, then let's move on to language features that aren't even close to being implemented yet.

That said, the first set of design notes for C# 7 by Mads Torgersen includes several interesting ideas as well.

  • Pattern-matching: C# has been ogling its similarly named colleague F# for a while. One of the major ideas on the table for C# is improving the ability to represent as well as match against various types of pure data, with an emphasis on immutable data.
  • Metaprogramming: Another focus for C# is reducing boilerplate and capturing common code-generation patterns. They're thinking of delegation of interfaces through composition. Also welcome would be an improvement in the expressiveness of generic constraints.

    Related User Voice issues:

    * [Expand Generic Constraints for constructors](http://visualstudio.uservoice.com/forums/121579-visual-studio/suggestions/2122427-expand-generic-constraints-for-constructors)
    * [[p]roper (generic) type ali[a]sing](http://visualstudio.uservoice.com/forums/121579-visual-studio/suggestions/2315417-proper-generic-type-alising)

  • Controlling Nullability: Another idea is to be able to declare reference types that can never be null at compile-time (where reasonable -- they do acknowledge that they may end up with a "less ambitious approach").
  • Readonly parameters and locals: Being able to express when change is allowed is a powerful form of expressiveness. C# 7 may include the ability to make local variables and parameters readonly. This will help avoid accidental side-effects.
  • Lambda capture lists: One of the issues with closures is that they currently just close over any referenced variables. The compiler makes this happen automatically and, for the most part, it works as expected. When it doesn't work as expected, it creates subtle bugs that lead to leaks, race conditions and all sorts of hairy situations that are difficult to debug.

If you throw in the increased use and nesting of lambdas, you end up with subtle bugs buried in frameworks and libraries that are nearly impossible to tease out.
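
To illustrate the kind of subtle bug that capture lists are meant to catch, here's the classic example of closing over a loop variable (a minimal sketch; the capture-list syntax in the final comment is hypothetical):

using System;
using System.Collections.Generic;

public static class CaptureExample
{
    public static void Main()
    {
        var actions = new List<Action>();

        // All three lambdas close over the *same* variable i,
        // not over its value at the time each lambda was created.
        for (var i = 0; i < 3; i++)
        {
            actions.Add(() => Console.WriteLine(i));
        }

        // Prints "3 3 3" instead of the expected "0 1 2".
        foreach (var action in actions)
        {
            action();
        }

        // Today's workaround is an explicit local copy inside the loop;
        // a capture list would let the compiler enforce and document the
        // intent, e.g. (hypothetical syntax) [i]() => Console.WriteLine(i).
    }
}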

The idea of this feature is to allow a lambda to explicitly capture variables and perhaps even indicate whether the capture is read-only. Any additional capture would be flagged by the compiler or tools as an error.

  • Contracts(!): And, finally, this is the feature I'm most excited about, because I've been waiting for integrated language support for Design by Contract for literally decades1, ever since I first read Object-Oriented Software Construction (OOSC2). The design document doesn't say much about it, but mentions that ".NET already has a contract system", the weaknesses of which I've written about before. Torgersen writes:

"When you think about how much code is currently occupied with arguments and result checking, this certainly seems like an attractive way to reduce code bloat and improve readability."

...and expressiveness and provability!
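
For reference, here's roughly what the existing, library-based contract system looks like today with System.Diagnostics.Contracts (a minimal sketch; the Account class is invented, and language-integrated contracts would presumably express the same intent with dedicated syntax):

using System.Diagnostics.Contracts;

public class Account
{
    private decimal _balance;

    public decimal Withdraw(decimal amount)
    {
        // Preconditions
        Contract.Requires(amount > 0);
        Contract.Requires(amount <= _balance);

        // Postcondition: the returned balance is never negative
        Contract.Ensures(Contract.Result<decimal>() >= 0);

        _balance -= amount;
        return _balance;
    }

    [ContractInvariantMethod]
    private void ObjectInvariant()
    {
        // Class invariant: checked after every public method
        Contract.Invariant(_balance >= 0);
    }
}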

There are a bunch of User Voice issues that I can't encourage you enough to vote for so we can finally get this feature:

* [Integrate Code Contracts more deeply in the .NET Framework](http://visualstudio.uservoice.com/forums/121579-visual-studio/suggestions/2304022-integrate-code-contract-keywords-into-the-main-ne)
* [Integrate Code Contract Keywords into the main .Net Languages](http://visualstudio.uservoice.com/forums/121579-visual-studio/suggestions/2304022-integrate-code-contract-keywords-into-the-main-ne)

With some or all of these improvements, C# 7 would move much closer to a provable language at compile-time, an improvement over being a safe language at run-time.

We can already indicate that instance data or properties are readonly. We can already mark methods as static to prevent the use of this. We can use ReSharper [NotNull] attributes to (kinda) enforce non-null references without using structs and incurring the debt of value-passing and -copying semantics.
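
As a small example of that attribute-based approach, here's a sketch using the JetBrains.Annotations package (the Greeter class is invented):

using JetBrains.Annotations;

public static class Greeter
{
    // ReSharper warns at call sites that pass a possibly-null argument
    // and warns in the method body if the return value could be null.
    [NotNull]
    public static string Greet([NotNull] string name)
    {
        return "Hello, " + name;
    }
}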

I'm already quite happy with C# 5, but if you throw in some or all of the stuff outlined above, I'll be even happier. I'll still have stuff I can think of to increase expressiveness -- covariant return types for polymorphic methods or anchored types or relaxed contravariant type-conformance -- but this next set of features being discussed sounds really, really good.



  1. I love the features of the language Eiffel, but haven't ever been able to use it for work. The tools and IDE are a bit stuck in the past (very dated on Windows; X11 required on OS X). The language is super-strong, with native support for contracts, anchored types, null-safe programming, contravariant type-conformance, covariant return types and probably much more that C# is slowly but surely including with each version. Unfair? I've been writing about this progress for years (from newest to oldest):

    * [.NET 4.5.1 and Visual Studio 2013 previews are available](/blogs/developer-blogs/net-451-and-visual-studio-2013-previews-are-available/)
    * [A provably safe parallel language extension for C#](/blogs/developer-blogs/a-provably-safe-parallel-language-extension-for-c/)
    * [Waiting for C# 4.0: A casting problem in C# 3.5](/blogs/developer-blogs/waiting-for-c-40-a-casting-problem-in-c-35/)
    * [Microsoft Code Contracts: Not with a Ten-foot Pole](/blogs/developer-blogs/microsoft-code-contracts-not-with-a-ten-foot-pole/)
    * [Generics and Delegates in C#](/blogs/developer-blogs/generics-and-delegates-in-c/)
    * [Wildcard Generics](/blogs/developer-blogs/wildcard-generics/) (this one was actually about Java)
    * [An analysis of C# language design](http://earthli.com/news/view_article.php?id=892)
    * [Static-typing for languages with covariant parameters](http://earthli.com/news/view_article.php?id=820)
    * [What is .NET?](/blogs/developer-blogs/v1110-improvements-to-local-evaluation-remoting/)
    

Are you ready for ReSharper 9? Not for testing, you aren't.

We've been using ReSharper at Encodo since version 4. And we regularly use a ton of other software from JetBrains1 -- so we're big fans.

How to Upgrade R#

As long-time users of ReSharper, we've become accustomed to the following pattern of adoption for new major versions:

EAP

  1. Read about cool new features and improvements on the JetBrains blog
  2. Check out the EAP builds page
  3. Wait for star ratings to get higher than 2 out of 5
  4. Install EAP of next major version
  5. Run into issues/problems that make testing EAP more trouble than it's worth
  6. Re-install previous major version

RTM

  1. Major version goes RTM
  2. Install immediately; new features! Yay!
  3. Experience teething problems in x.0 version
  4. Go through hope/disappointment cycle for a couple of bug-fix versions (e.g. x.0.1, x.0.2)
  5. Install first minor-version release immediately; stability! Yay!

This process can take anywhere from several weeks to a couple of months. The reason we do it almost every time is that the newest version of ReSharper almost always has a few killer features. For example, version 8 had initial TypeScript support. Version 9 carries with it a slew of support improvements for Gulp, TypeScript and other web technologies.

Unfortunately, if you need to continue to use the test-runner with C#, you're in for a bumpy ride.

History of the Test Runner

Any new major version of ReSharper can be judged by its test runner. The test runner seems to be rewritten from the ground-up in every major version. Until the test runner has settled down, we can't really use that version of ReSharper for C# development.

The 6.x and 7.x versions were terrible at handling the NUnit TestCase and Values attributes. They were so bad that we actually converted tests away from using those attributes. While 6.x had trouble reliably compiling and executing those tests, 7.x was better at noticing that something had changed without forcing the user to manually rebuild everything.

Unfortunately, this new awareness in 7.x came at a cost: it slowed editing in larger NUnit fixtures down to a crawl, using a tremendous amount of memory and sending VS into a 1.6GB+ memory-churn that made you want to tear your hair out.

8.x fixed all of this and, by 8.2.x, was a model of stability and usefulness, getting the hell out of the way and reliably compiling, displaying and running tests.

The 9.x Test Runner

And then along came 9.x, with a whole slew of sexy new features that just had to be installed. I tried the new features and they were good. They were fast. I was looking forward to using the snazzy new editor to create our own formatting template. ReSharper seemed to be using less memory, felt snappier, it was lovely.

And then I launched the test runner.

And then I uninstalled 9.x and reinstalled 8.x.

And then I needed the latest version of DotMemory and was forced to reinstall 9.x. So I tried the test runner again, which inspired this post.2

So what's not to love about the test runner? It's faster and seems much more asynchronous. However, it gets quite confused about which tests to run, how to handle test cases and how to handle abstract unit-test base classes.

Just like 6.x, ReSharper 9.x can't seem to keep track of which assemblies need to be built based on changes made to the code and which test(s) the user would like to run.


To be fair, we have some abstract base classes in our unit fixtures. For example, we define all ORM query tests in multiple abstract test-fixtures and then create concrete descendants that run those tests for each of our supported databases. If I make a change to a common assembly and run the tests for PostgreSql, then I expect -- at the very least -- that the base assembly and the PostgreSql test assemblies will be rebuilt. 9.x isn't so good at that yet, forcing you to "Rebuild All" -- something that I no longer had to do with 8.2.x.
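
To give a sense of the structure that seems to confuse the 9.x runner, here's a minimal sketch of the pattern (type names and connection strings are invented; the real fixtures contain dozens of query tests):

using NUnit.Framework;

// Shared ORM query tests, defined once in an abstract fixture...
[TestFixture]
public abstract class QueryTests
{
    // Each database-specific descendant supplies its own connection string.
    protected abstract string ConnectionString { get; }

    [Test]
    public void ConnectionStringIsConfigured()
    {
        Assert.That(ConnectionString, Is.Not.Empty);
    }
}

// ...and executed once per supported database via concrete descendants.
[TestFixture]
public class PostgreSqlQueryTests : QueryTests
{
    protected override string ConnectionString
    {
        get { return "Host=localhost;Database=quino_tests"; }
    }
}

[TestFixture]
public class SqlServerQueryTests : QueryTests
{
    protected override string ConnectionString
    {
        get { return "Server=localhost;Database=quino_tests"; }
    }
}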

TestCases and the Unit Test Explorer

It's the same with TestCases: whereas 8.x was able to reliably show changes and to make sure that the latest version was run, 9.x suffers from the same issue that 6.x and 7.x had: sometimes the test is shown as a single node without children and sometimes it's shown with the wrong children. Running these tests results in a spinning cursor that never ends. You have to manually abort the test-run, rebuild all, reload the runner with the newly generated tests from the explorer and try again. This is a gigantic pain in the ass compared to 8.x, which just showed the right tests -- if not in the runner, then at least very reliably in the explorer.
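
For context, the TestCase usage that trips up the runner is nothing exotic; here's a minimal example of such a parameterized test (names and values invented):

using System;
using NUnit.Framework;

[TestFixture]
public class GuidParsingTests
{
    // Each TestCase appears as a child node in the test explorer; 9.x
    // sometimes shows the parent without children or with stale children.
    [TestCase("", false)]
    [TestCase("not-a-guid", false)]
    [TestCase("6dbb30e1-4d33-4ba7-8cbe-1c2c0e3d2f3a", true)]
    public void TestTryParse(string text, bool expectedValid)
    {
        Guid guid;

        Assert.That(Guid.TryParse(text, out guid), Is.EqualTo(expectedValid));
    }
}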


And the explorer in 9.x! It's a hyperactive, overly sensitive, eager-to-please puppy that reloads, refreshes, expands nodes and scrolls around -- all seemingly with a mind of its own! Tests wink in and out of existence, groups expand seemingly at random, the scrollbar extends and extends and extends to accommodate all of the wonderful things that the unit-test explorer wants you to see -- needs for you to see. Again, it's possible that this is due to our abstract test fixtures, but this is new to 9.x. 8.2.x is perfectly capable of displaying our tests in a far less effusive and frankly hyperactive manner.

One last thing: output-formatting

Even the output formatting has changed in 9.x, expanding all CR/LF pairs from single-spacing to double-spacing. It's not a deal-breaker, but it's annoying: copying text is harder, reading stack traces is harder. How could no one have noticed this in testing?


Conclusion

The install/uninstall process is painless and supports jumping back and forth between versions quite well, so I'll keep trying new versions of 9.x until the test runner is as good as the one in 8.2.x is. For now, I'm back on 8.2.3. Stay tuned.



  1. In no particular order, we have used or are using:

    * DotMemory
    * DotTrace
    * DotPeek
    * DotCover
    * TeamCity
    * PHPStorm
    * WebStorm
    * PyCharm
    

  2. Although I was unable to install DotMemory without upgrading to ReSharper 9.x, I was able to uninstall ReSharper 9.x afterwards and re-install ReSharper 8.x.

Configure IIS for passing static-file requests to ASP.Net/MVC

At Encodo, we had several ASP.Net MVC projects that needed to serve some files with a custom MVC Controller/Action. The general problem is that IIS tries hard to serve simple files like PDFs, pictures, etc. with its static-file handler. That's generally fine, but not for files -- or, rather, file content -- served by our own action.

The goal is to switch off the static-file handling of IIS for certain paths. One of our current projects came up with the following requirements, so I did some research into how we could do this better than we had in past projects.

Requirements:

  1. Switch it off only for /Data/...
  2. Switch it off for ALL file-types, as we don't yet know what kinds of files the authors will store there.

This means that the default static-file handling of IIS must be switched off with some "magic" IIS configuration. In other apps, we had switched it off on a per-file-type basis for the entire application. I finally came up with the following IIS config (in web.config). It sets up a local configuration for the "data" location only. Then I used a simple "*" wild-card as the path (yes, this is possible) to transfer all requests under that location to ASP.Net. It looks like this:

<location path="data">
  <system.webServer>
    <handlers>
      <add name="nostaticfile" path="*" verb="GET" type="System.Web.Handlers.TransferRequestHandler" preCondition="integratedMode,runtimeVersionv4.0" />
    </handlers>
  </system.webServer>
</location>

Alternative: Instead of a controller, one could also use a custom HttpHandler to serve such special URLs/resources. In this project, I decided to use an action because of the central custom security that I also needed for the /Data/... requests -- something I got for free by using an action instead of an HttpHandler.
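
For completeness, here's a minimal sketch of what such a controller action might look like (the controller, the App_Data location and the security check are invented placeholders; it also assumes a catch-all route like "Data/{*path}" is registered):

using System.Web.Mvc;

public class DataController : Controller
{
    // Handles GET /Data/{*path} once IIS hands the request over to
    // ASP.Net instead of serving it with the static-file handler.
    [HttpGet]
    public ActionResult Get(string path)
    {
        // Central, custom security check (greatly simplified here).
        if (!User.Identity.IsAuthenticated)
        {
            return new HttpStatusCodeResult(403);
        }

        // A real implementation must validate 'path' to prevent directory traversal.
        var fullPath = Server.MapPath("~/App_Data/" + path);

        if (!System.IO.File.Exists(fullPath))
        {
            return HttpNotFound();
        }

        // Let MVC stream the file; the MIME type is hard-coded for brevity.
        return File(fullPath, "application/octet-stream");
    }
}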