Are you ready for ReSharper 9? Not for testing, you aren't.

We've been using ReSharper at Encodo since version 4. And we regularly use a ton of other software from JetBrains1 -- so we're big fans.

How to Upgrade R#

As long-time users of ReSharper, we've become accustomed to the following pattern of adoption for new major versions:


  1. Read about cool new features and improvements on the JetBrains blog
  2. Check out the EAP builds page
  3. Wait for star ratings to get higher than 2 out of 5
  4. Install EAP of next major version
  5. Run into issues/problems that make testing EAP more trouble than it's worth
  6. Re-install previous major version


  1. Major version goes RTM
  2. Install immediately; new features! Yay!
  3. Experience teething problems in x.0 version
  4. Go through hope/disappointment cycle for a couple of bug-fix versions (e.g. x.0.1, x.0.2)
  5. Install first minor-version release immediately; stability! Yay!

This process can take anywhere from several weeks to a couple of months. The reason we do it almost every time is that the newest version of ReSharper almost always has a few killer features. For example, version 8 had initial TypeScript support. Version 9 carries with it a slew of support improvements for Gulp, TypeScript and other web technologies.

Unfortunately, if you need to continue to use the test-runner with C#, you're in for a bumpy ride.

History of the Test Runner

Any new major version of ReSharper can be judged by its test runner. The test runner seems to be rewritten from the ground-up in every major version. Until the test runner has settled down, we can't really use that version of ReSharper for C# development.

The 6.x and 7.x versions handled the NUnit TestCase and Values attributes terribly. They were so bad that we actually converted tests away from using those attributes. While 6.x had trouble reliably compiling and executing those tests, 7.x was better at noticing that something had changed without forcing the user to manually rebuild everything.

Unfortunately, this new awareness in 7.x came at a cost: it slowed editing in larger NUnit fixtures down to a crawl, using a tremendous amount of memory and sending VS into a 1.6GB+ memory-churn that made you want to tear your hair out.

8.x fixed all of this and, by 8.2.x, was a model of stability and usefulness, getting the hell out of the way and reliably compiling, displaying and running tests.

The 9.x Test Runner

And then along came 9.x, with a whole slew of sexy new features that just had to be installed. I tried the new features and they were good. They were fast. I was looking forward to using the snazzy new editor to create our own formatting template. ReSharper seemed to be using less memory, felt snappier, it was lovely.

And then I launched the test runner.

And then I uninstalled 9.x and reinstalled 8.x.

And then I needed the latest version of dotMemory and was forced to reinstall 9.x. So I tried the test runner again, which inspired this post.2

So what's not to love about the test runner? It's faster and seems much more asynchronous. However, it gets quite confused about which tests to run, how to handle test cases and how to handle abstract unit-test base classes.

Just like 6.x, ReSharper 9.x can't seem to keep track of which assemblies need to be built based on changes made to the code and which test(s) the user would like to run.


To be fair, we have some abstract base classes in our unit fixtures. For example, we define all ORM query tests in multiple abstract test-fixtures and then create concrete descendants that run those tests for each of our supported databases. If I make a change to a common assembly and run the tests for PostgreSql, then I expect -- at the very least -- that the base assembly and the PostgreSql test assemblies will be rebuilt. 9.x isn't so good at that yet, forcing you to "Rebuild All" -- something I no longer had to do with 8.2.x.

TestCases and the Unit Test Explorer

It's the same with TestCases: whereas 8.x was able to reliably show changes and to make sure that the latest version was run, 9.x suffers from the same issue that 6.x and 7.x had: sometimes the test is shown as a single node without children and sometimes it's shown with the wrong children. Running these tests results in a spinning cursor that never ends. You have to manually abort the test-run, rebuild all, reload the runner with the newly generated tests from the explorer and try again. This is a gigantic pain in the ass compared to 8.x, which just showed the right tests -- if not in the runner, then at least very reliably in the explorer.


And the explorer in 9.x! It's a hyperactive, overly sensitive, eager-to-please puppy that reloads, refreshes, expands nodes and scrolls around -- all seemingly with a mind of its own! Tests wink in and out of existence, groups expand seemingly at random, the scrollbar extends and extends and extends to accommodate all of the wonderful things that the unit-test explorer wants you to see -- needs for you to see. Again, it's possible that this is due to our abstract test fixtures, but this is new to 9.x. 8.2.x is perfectly capable of displaying our tests in a far less effusive and frankly hyperactive manner.

One last thing: output-formatting

Even the output formatting has changed in 9.x, expanding all CR/LF pairs from single-spacing to double-spacing. It's not a deal-breaker, but it's annoying: copying text is harder, reading stack traces is harder. How could no one have noticed this in testing?



The install/uninstall process is painless and supports jumping back and forth between versions quite well, so I'll keep trying new versions of 9.x until the test runner is as good as the one in 8.2.x is. For now, I'm back on 8.2.3. Stay tuned.

  1. In no particular order, we have used or are using:

    * dotMemory
    * dotTrace
    * dotPeek
    * dotCover
    * TeamCity
    * PhpStorm
    * WebStorm
    * PyCharm

  2. Although I was unable to install dotMemory without upgrading to ReSharper 9.x, I was able to uninstall ReSharper 9.x afterwards and re-install ReSharper 8.x.

Configure IIS for passing static-file requests to ASP.Net/MVC

At Encodo we had several ASP.Net MVC projects that needed to serve some files through a custom MVC controller/action. The general problem is that IIS tries hard to serve simple files like PDFs, images, etc. with its static-file handler, which is generally fine -- but not for files, or rather file content, served by our own action.

The goal is to switch off IIS's static-file handling for certain paths. One of our current projects came up with the following requirements, so I did some research into how we could do this better than we had in past projects.


  1. Switch it off only for /Data/...
  2. Switch it off for ALL file-types, as we don't yet know what kinds of files the authors will store there.

This means that the default static-file handling of IIS must be switched off by some "magic" IIS config. In other apps, we had switched it off on a per-file-type basis for the entire application. I finally came up with the following IIS config (in web.config). It sets up a local configuration for the "data" location only. Then I used a simple "*" wild-card as the path (yes, this is possible) to transfer requests to ASP.NET. It looks like this:

<location path="data">
  <system.webServer>
    <handlers>
      <add name="nostaticfile" path="*" verb="GET" type="System.Web.Handlers.TransferRequestHandler" preCondition="integratedMode,runtimeVersionv4.0" />
    </handlers>
  </system.webServer>
</location>

Alternative: instead of a controller, one could also use a custom HttpHandler to serve such special URLs/resources. In this project, I decided to use an action because of the central custom security, which I needed for the /Data/... requests as well and got for free when using an action instead of an HttpHandler.
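For completeness, here's a sketch of what such an action might look like. The controller name, route parameter and document lookup are hypothetical (they're not code from the project); the [Authorize] attribute stands in for the central custom security mentioned above.

```csharp
using System.Web.Mvc;

// Hypothetical controller that serves files under /Data/... once IIS hands
// the request over to ASP.NET via the TransferRequestHandler config above.
public class DataController : Controller
{
  // e.g. GET /Data/documents/report.pdf => path = "documents/report.pdf"
  [Authorize]
  public ActionResult Get(string path)
  {
    var document = FindDocument(path);
    if (document == null)
    {
      return HttpNotFound();
    }

    return File(document.Content, document.MimeType, document.FileName);
  }

  private StoredDocument FindDocument(string path)
  {
    // Placeholder: look the file up in a database, CMS or file store.
    return null;
  }

  private class StoredDocument
  {
    public byte[] Content { get; set; }
    public string MimeType { get; set; }
    public string FileName { get; set; }
  }
}
```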

ASP.Net MVC Areas

After some initial skepticism regarding Areas, I now use them more and more when building new Web-Applications using ASP.Net MVC. Therefore, I decided to cover some of my thoughts and experiences in a blog post so others may get some inspiration out of it.

Before we start, here's a link to a general introduction to the area feature of MVC. Check out this article if you are not yet familiar with Areas.

Furthermore, this topic is based on MVC 5 and C# 4 but may also apply to older versions, as Areas are not really a new thing: they were first introduced with MVC 2.


Areas are intended to structure an MVC web application. Let's say you're building a line-of-business application. You may want to separate your modules on one hand from each other and on the other hand from central pieces of your web application like layouts, HTML helpers, etc.

An area should therefore be as self-contained as possible, with as little interaction with other areas as you can manage; otherwise, the dependencies between the modules and their implementations grow and grow, undercutting the separation and reducing maintainability.

How one draws the borders of each area depends on the needs of the application and the company. For example, modules can be separated by functionality or by developer/developer-team. In our line-of-business application, we may have an area for "Customer Management", one for "Order Entry", one for "Bookkeeping" and one for "E-Banking", as they have largely separate functionality and will likely be built by different developers or even teams.

The nice thing about areas -- besides modularization/organization of the app -- is that they are built into MVC and are therefore supported by most tools, like Visual Studio and R#, and by most libraries. On the negative side, support in the standard HtmlHelpers is lacking: one needs to specify the area as a routing-object property and pass the name of the area as a string. There is no dedicated parameter for the area. To put that downside into perspective, though, this is only needed when generating a URL to an area other than the current one.

Application modularization using Areas

In my point of view, modularization using areas has two major advantages. The first one is the separation from other parts of the application and the second is the fact that area-related files are closer together in the Solution Explorer.

The separation -- apart from the usual separation advantages -- is helpful for reviews, as the reviewer can easily see what has changed within the area and what changed at the core level of the application, which therefore needs an even closer look. Separation also means that, for larger applications and teams, there are fewer merge conflicts when pushing to the central code repository, as each team has its own playground. Last but not least, it's nice for me as an application developer because I know that as long as I only make changes within my own area, I won't break other developers' work.

As I am someone who uses the Solution Explorer a lot, I like the fact that with areas I normally have to scroll and search less and have a good overview of the folder- and file-tree of the feature-set I am currently working on. This happens because I moved all area-related stuff into the area itself and leave the general libraries, layouts, base-classes and helpers outside. This results in a less cluttered folder-tree for my areas, where I normally spend the majority of my time developing new features.

Tips and tricks

  • Move all files related to the area into the area itself, including style-sheets (CSS, LESS, SASS) and client-side scripts (JavaScript, TypeScript).
  • Configure bundles for your area or even for single pages within your area in the area itself. I do this in an enhanced area-registration file.
  • Enhance the default area registration to configure more aspects of your area.
  • When generating links in global views/layouts, add the area = "" routing attribute so the URL always points to the central stuff instead of being area-relative.

For example: if your application uses '@Html.ActionLink()' in your global _layout.cshtml, use:

@Html.ActionLink("Go to home", "Index", "Home", new { area = "" });

Area Registration / Start Up

And here is a sample of one of my application's area registrations:

public class BookkeepingAreaRegistration : AreaRegistration
{
  public override string AreaName
  {
    get { return "Bookkeeping"; }
  }

  public override void RegisterArea(AreaRegistrationContext context)
  {
    RegisterRoutes(context);
    RegisterBundles(BundleTable.Bundles);
  }

  private void RegisterRoutes(AreaRegistrationContext context)
  {
    if (context == null) { throw new ArgumentNullException("context"); }

    context.MapRoute(
      "Bookkeeping_default",
      "Bookkeeping/{controller}/{action}/{id}",
      new { controller = "Home", action = "Index", id = UrlParameter.Optional }
    );
  }

  private void RegisterBundles(BundleCollection bundles)
  {
    if (bundles == null) { throw new ArgumentNullException("bundles"); }

    // Bookings bundles (bundle contents illustrative)
    bundles.Add(new ScriptBundle("~/bundles/bookkeeping/booking")
      .Include("~/Areas/Bookkeeping/Scripts/booking.js"));

    bundles.Add(new StyleBundle("~/bookkeeping/css/booking")
      .Include("~/Areas/Bookkeeping/Content/booking.css"));

    // Account Overview bundle elided for brevity
  }
}

As you can see in this example, I enhanced the area registration a little so that area-specific bundles are registered in the area registration too. I try to place all area-specific start-up code here.

Folder Structure

As I wrote in one of the tips, I strongly recommend storing all area-related files within the area's folder. This includes style-sheets, client-side scripts (JS, TypeScript), content, controller, views, view-models, view-model builders, HTML Helpers, etc. My goal is to make an area self-contained so all work can be done within the area's folder.

So the folder structure of my MVC Apps look something like this:

  • App_Start
  • Areas
    • Bookkeeping (one folder per area)
      • Content
      • Controllers
      • Core
      • Models
        • Builders
      • Scripts
      • Views
  • bin
  • Content
  • Controllers
  • HtmlHelpers
  • Models
    • Builders
  • Scripts
  • Views

As you can see, each area is something like a mini-application within the main application.

Add-ons using Areas deployed as NuGet packages

Besides structuring an entire MVC application, another nice use of areas is hosting "add-ons" in their own area.

For example, lately I wrote a web user interface for the database-schema migration of our metadata framework Quino. Instead of pushing it all into old-school web resources and deploying them as binaries, I built it as an area, packed it into a NuGet package (.nupkg) and published it to our local NuGet repository.

Applications that want to use the web-based schema-migration UI can just install that package using the NuGet UI or console. The package adds the area with all the sources I wrote and, because the area registration is called by MVC automatically, it's ready to go without any manual steps. If I publish an update to the NuGet package, applications can get it as usual through NuGet. A nice side-effect of this deployment is that the web application contains all the sources, so developers can have a look at them if they like; it doesn't just include some binary files. Another nice thing is that the add-on can define its own bundles, which get hosted the same way as the MVC app's own bundles. No fancy web resources or custom bundling and minification are needed.

To keep conflicts to a minimum with such add-on areas, the name should be unique and the area should be self-contained as written above.

Should you return `null` or an empty list?

I've seen a bunch of articles addressing this topic of late, so I've decided to weigh in.

The reason we frown on returning null from a method that returns a list or sequence is that we want to be able to freely use these sequences or lists in a functional manner.

It seems to me that the proponents of "no nulls" are generally those who have a functional language at their disposal and the antagonists do not. In functional languages, we almost always return sequences instead of lists or arrays.

In C# and other languages with functional elements, we want to be able to do this:

var names = GetOpenItems()
  .Where(i => i.OverdueByTwoWeeks)
  .SelectMany(i => i.GetHistoricalAssignees()
    .Select(a => new { a.FirstName, a.LastName }));

foreach (var name in names)
{
  Console.WriteLine("{1}, {0}", name.FirstName, name.LastName);
}

If either GetHistoricalAssignees() or GetOpenItems() might return null, then we'd have to write the code above as follows instead:

var openItems = GetOpenItems();
if (openItems != null)
{
  var names = openItems
    .Where(i => i.OverdueByTwoWeeks)
    .SelectMany(i => (i.GetHistoricalAssignees() ?? Enumerable.Empty<Person>())
      .Select(a => new { a.FirstName, a.LastName }));

  foreach (var name in names)
  {
    Console.WriteLine("{1}, {0}", name.FirstName, name.LastName);
  }
}

This seems like exactly the kind of code we'd like to avoid writing, if possible. It's also the kind of code that calling clients are unlikely to write, which will lead to crashes with NullReferenceExceptions. As we'll see below, there are people that seem to think that's perfectly OK. I am not one of those people, but I digress.

The post Is it Really Better to 'Return an Empty List Instead of null'? / Part 1 by Christian Neumanns serves as a good example of an article that looks like it's providing information but mostly just muddies the waters. He introduces his topic with the following vagueness:

If we read through related questions in Stackoverflow and other forums, we can see that not all people agree. There are many different, sometimes truly opposite opinions. For example, the top rated answer in the Stackoverflow question Should functions return null or an empty object? (related to objects in general, not specifically to lists) tells us exactly the opposite:

Returning null is usually the best idea ...

The statement "we can see that not all people agree" is a tautology. I would split the people into groups of those whose opinions we should care about and everyone else. The statement "There are many different, sometimes truly opposite opinions" is also tautological, given the nature of the matter under discussion -- namely, a question that can only be answered as "yes" or "no". Such questions generally result in two camps with diametrically opposed opinions.

As the extremely long-winded pair of articles writes: sometimes you can't be sure of what an external API will return. That's correct. You have to protect against those with ugly, defensive code. But don't use that as an excuse to produce even more methods that may return null. Otherwise, you're just part of the problem.

The second article Is it Really Better to 'Return an Empty List Instead of null'? - Part 2 by Christian Neumanns includes many more examples.

I just don't know what to say about people that write things like "Bugs that cause NullPointerExceptions are usually easy to debug because the cause and effect are short-distanced in space (i.e. location in source code) and time." While this is kind of true, it's also even more true that you can't tell the difference between such an exception being caused by a savvy programmer who's using it to his advantage and a non-savvy programmer whose code is buggy as hell.

He has a ton of examples that try to distinguish between a method that returns an empty sequence being different from a method that cannot properly answer a question. This is a concern and a very real distinction to make, but the answer is not to return null to indicate nonsensical input. The answer is to throw an exception.

The method providing the sequence should not be making decisions about whether an empty sequence is acceptable for the caller. For sequences that cannot logically be empty, the method should throw an exception instead of returning null to indicate "something went wrong".

A caller may impart semantic meaning to an empty result and also throw an exception (as in his example with a cycling team that has no members). If the display of such a sequence on a web page is incorrect, then that is the fault of the caller, not of the provider of the sequence.

  • If data is not yet available, but should be, throw an exception
  • If data is not available but the provider isn't qualified to decide, return an empty sequence
  • If the caller receives an empty sequence and knows that it should not be empty, then it is responsible for indicating an error.
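The rules above can be sketched in code. This is just an illustration; the repository and all of its members are hypothetical names, not from any of the articles under discussion.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical provider illustrating the three rules above.
public class OrderRepository
{
  private readonly List<string> _orders;
  private readonly bool _loaded;

  public OrderRepository(IEnumerable<string> orders, bool loaded)
  {
    _orders = orders.ToList();
    _loaded = loaded;
  }

  public IEnumerable<string> GetOrders()
  {
    // Rule 1: the data should be available but isn't => throw.
    if (!_loaded) { throw new InvalidOperationException("Orders have not been loaded yet."); }

    // Rule 2: the provider can't judge emptiness => return an
    // empty sequence, never null.
    return _orders;
  }
}

public static class Program
{
  public static void Main()
  {
    var repository = new OrderRepository(new string[0], loaded: true);

    // Rule 3: the *caller* decides whether an empty result is an error.
    var orders = repository.GetOrders();
    if (!orders.Any())
    {
      Console.WriteLine("No orders; the caller decides whether that's an error.");
    }
  }
}
```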

That there exists calling code that makes assumptions about return values that are incorrect is no reason to start returning values that will make calling code crash with a NullPointerException.

All of his examples are similar: he tries to make the pure-data call to retrieve a sequence of elements simultaneously validate some business logic. That's not a good idea. If this is really necessary, then the validity check should go in another method.

The example he cites for getting the amount from a list of PriceComponents is exactly why most aggregation functions in .NET throw an exception when the input sequence is empty. But that's a much better way of handling it -- with a precise exception -- than by returning null to try to force an exception somewhere in the calling code.
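This behavior is easy to see with LINQ's own aggregation operators: Average() has no sensible answer for an empty sequence and throws a precise exception, while Sum() has an identity (zero) and simply returns it.

```csharp
using System;
using System.Linq;

public static class Program
{
  public static void Main()
  {
    var empty = new int[0];

    // Sum has a well-defined identity for an empty sequence: 0.
    Console.WriteLine(empty.Sum());

    // Average does not: it throws InvalidOperationException
    // ("Sequence contains no elements") rather than returning null
    // or some magic value.
    try
    {
      Console.WriteLine(empty.Average());
    }
    catch (InvalidOperationException e)
    {
      Console.WriteLine("Average over empty sequence: " + e.Message);
    }
  }
}
```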

But the upshot for me is: I am not going to write code that, when I call it, forces me to litter other code with null-checks. That's just ridiculous.

Working with EF Migrations and branches

The version of EF Migrations discussed in this article is 5.0.20627. The version of Quino is less relevant: the features discussed have been supported for years. For those in a hurry, there is a tl;dr near the end of the article.

We use Microsoft Entity Framework (EF) Migrations in one of our projects where we are unable to use Quino. We were initially happy to be able to automate database-schema changes. After using it for a while, we have decidedly mixed feelings.

As developers of our own schema migration for the Quino ORM, we're always on the lookout for new and better ideas to improve our own product. If we can't use Quino, we try to optimize our development process in each project to cause as little pain as possible.

EF Migrations and branches

We ran into problems in integrating EF Migrations into a development process that uses feature branches. As long as a developer stays on a given branch, there are no problems and EF functions relatively smoothly.1

However, if a developer switches to a different branch -- with different migrations -- EF Migrations is decidedly less helpful. It is, in fact, quite cryptic and blocks progress until you figure out what's going on.

Assume the following not-uncommon situation:

  • The project is created in the master branch
  • The project has an initial migration BASE
  • Developers A and B migrate their databases to BASE
  • Developer A starts branch feature/A and includes migration A in her database
  • Developer B starts branch feature/B and includes migration B in his database

We now have the situation in which two branches have different code and each has its own database schema. Switching from one branch to another with Git quickly and easily addresses the code differences. The database is, unfortunately, a different story.

Let's assume that developer A switches to branch feature/B to continue working there. The natural thing for A to do is to call "update-database" from the Package Manager Console2. This yields the following message -- all-too-familiar to EF Migrations developers.


Unable to update database to match the current model because there are pending changes and automatic migration is disabled. Either write the pending changes to a code-based migration or enable automatic migration. [...]

This situation happens regularly when working with multiple branches. It's even possible to screw up a commit within a single branch, as illustrated in the following real-world example.

  • Add two fields to an existing class
  • Generate a migration with code that adds two fields
  • Migrate the database
  • Realize that you don't need one of the two fields
  • Remove the C# code from the migration for that field
  • Tests run green
  • Commit everything and push it

As far as you're concerned, you committed a single field to the model. When your co-worker runs that migration, it will be applied, but EF Migrations immediately thereafter complains that there are pending model changes to make. How can that be?

Out-of-sync migrations != outdated database

Just to focus, we're actually trying to get real work done, not necessarily debug EF Migrations. We want to answer the following questions:

  1. Why is EF Migrations having a problem updating the schema?
  2. How do I quickly and reliably update my database to use the current schema if EF Migrations refuses to do it?

The underlying reason why EF Migrations has problems is that it does not actually know what the schema of the database is. It doesn't read the schema from the database itself, but relies instead on a copy of the EF model that it stored in the database when it last performed a successful migration.

That copy of the model is also stored in the resource file generated for the migration. EF Migrations does this so that the migration includes information about which changes it needs to apply and about the model to which the change can be applied.

If the model stored in the database does not match the model stored with the migration that you're trying to apply, EF Migrations will not update the database. This is probably for the best, but leads us to the second question above: what do we have to do to get the database updated?

Generate a migration for those "pending changes"

The answer has already been hinted at above: we need to fix the model stored in the database for the last migration.

Let's take a look at the situation above in which your colleague downloaded what you thought was a clean commit.

From the Package Manager Console, run add-migration foo to scaffold a migration for the so-called "pending changes" that EF Migrations detected. That's interesting: EF Migrations thinks that your colleague should generate a migration to drop the column that you'd only temporarily added but never checked in.

That is, the column isn't in his database, it's not in your database, but EF Migrations is convinced that it was once in the model and must be dropped.

How does EF Migrations even know about a column that you added to your own database but that you removed from the code before committing? What dark magic is this?

The answer is probably obvious: you did check in the change. The part that you can easily remove (the C# code) is only half of the migration. As mentioned above, the other part is a binary chunk stored in the resource file associated with each migration. These BLOBs are stored in the __MigrationHistory table in the database.


How to fix this problem and get back to work

Here's the tl;dr: generate a "fake" migration, remove all of the C# code that would apply changes to the database (shown below) and execute update-database from the Package Manager Console.
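Once stripped, such a "fake" migration is nothing but an empty pair of Up/Down methods. A sketch (the class name matches whatever you passed to add-migration):

```csharp
using System.Data.Entity.Migrations;

// The scaffolded Up()/Down() bodies have been emptied out: applying this
// migration changes no tables, but it does store the *current* EF model
// in the __MigrationHistory table, bringing the database back in sync.
public partial class Foo : DbMigration
{
  public override void Up()
  {
  }

  public override void Down()
  {
  }
}
```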


This may look like it does exactly nothing. What actually happens is that it includes the current state of the EF model in the binary data for the last migration applied to the database (because you just applied it).

Once you've applied the migration, delete the files and remove them from the project. This migration was only generated to fix your local database; do not commit it.

Everything's cool now, right?

Applying the fix above doesn't mean that you won't get database errors. If your database schema does not actually match the application model, EF will crash when it assumes fields or tables are available which do not exist in your database.

Sometimes, the only way to really clean up a damaged database -- especially if you don't have the code for the migrations that were applied there3 -- is to remove the misapplied migrations from your database, undo all of the changes to the schema (manually, of course) and then generate a new migration that starts from a known good schema.

Conclusions and comparison to Quino

The obvious answer to the complaint "it hurts when I do this" is "stop doing that". We would dearly love to avoid these EF Migrations-related issues but developing without any schema-migration support is even more unthinkable.

We'd have to create upgrade scripts manually or maintain scripts to generate a working development database -- and this in each branch. When branches are merged, the database-upgrade scripts would have to be merged and tested as well. This would be a significant addition to our development process, would introduce maintainability and quality issues, and would probably slow us down even more.

And we're certainly not going to stop developing with branches, either.

We were hoping to avoid all of this pain by using EF Migrations. That EF Migrations makes us think of going back to manual schema migration is proof that it's not nearly as elegant a solution as our own Quino schema migration, which never gave us these problems.

Quino actually reads the schema in the database and compares that model directly against the current application model. The schema migrator generates a custom list of differences that map from the current schema to the desired schema and applies them. There is user intervention but it's hardly ever really required. This is an absolute godsend during development where we can freely switch between branches without any hassle.4
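As a toy illustration of this diff-based paradigm -- this is not the actual Quino API, just a sketch of the idea -- compare the columns that actually exist in the database with the columns the model wants, and emit only the statements needed to get from one to the other:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Toy schema differ: no stored model copy, no up/down scripts -- just
// the set difference between the actual and the desired schema.
public static class SchemaDiff
{
  public static IEnumerable<string> ComputeChanges(
    string table,
    ISet<string> databaseColumns,
    ISet<string> modelColumns)
  {
    // Columns the model wants but the database lacks.
    foreach (var missing in modelColumns.Except(databaseColumns))
    {
      yield return string.Format("ALTER TABLE {0} ADD COLUMN {1}", table, missing);
    }

    // Columns the database has but the model no longer wants.
    foreach (var obsolete in databaseColumns.Except(modelColumns))
    {
      yield return string.Format("ALTER TABLE {0} DROP COLUMN {1}", table, obsolete);
    }
  }
}
```

Because the diff is computed from the database's real state, it doesn't matter which branch's migrations were applied last; the computed changes always lead from "what is" to "what should be".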

Quino doesn't recognize "upgrade" versus "downgrade" but instead applies "changes". This paradigm has proven to be a much better fit for our agile, multi-branch style of development and lets us focus on our actual work rather than fighting with tools and libraries.

  1. EF Migrations as we use it is tightly bound to SQL Server. Just as one example, the inability of SQL Server to resolve cyclic cascade dependencies is in no way shielded by EF Migrations. Though the drawback originates in SQL Server, EF Migrations simply propagates it to the developer, even though it purports to provide an abstraction layer. Quino, on the other hand, does the heavy lifting of managing triggers to circumvent this limitation.

  2. As an aside, this is a spectacularly misleading name for a program feature. It should just be called "Console".

  3. I haven't ever been able to use the Down() method that is generated with each migration, but perhaps someone with more experience could explain how to properly apply such a thing. If that doesn't work, the method outlined above is your only fallback.

  4. The aforementioned database-script maintenance or having only very discrete schema-update points or maintaining a database per branch and switching with configuration files or using database backups or any other schemes that end up distracting you from working.

Dealing with improper disposal in WCF clients

There's an old problem in generated WCF clients in which the Dispose() method calls Close() on the client irrespective of whether there was a fault. If there was a fault, then the method should call Abort() instead. Failure to do so causes another exception, which masks the original exception. Client code will see the subsequent fault rather than the original one. A developer running the code in debug mode will be misled as to what really happened.

You can see WCF Clients and the "Broken" IDisposable Implementation by David Barrett for a more in-depth analysis, but that's the gist of it.

This issue is still present in the ClientBase implementation in .NET 4.5.1. The linked article shows how you can add your own implementation of the Dispose() method in each generated client. An alternative is to use a generic adaptor if you don't feel like adding a custom dispose to every client you create.1

public class SafeClient<T> : IDisposable
  where T : ICommunicationObject, IDisposable
{
  public SafeClient(T client)
  {
    if (client == null) { throw new ArgumentNullException("client"); }

    Client = client;
  }

  public T Client { get; private set; }

  public void Dispose() { Dispose(true); }

  protected virtual void Dispose(bool disposing)
  {
    if (disposing && Client != null)
    {
      if (Client.State == CommunicationState.Faulted)
      {
        Client.Abort();
      }
      else
      {
        Client.Close();
      }

      Client = default(T);
    }
  }
}

To use your WCF client safely, you wrap it in the class defined above, as shown below.

```csharp
using (var safeClient = new SafeClient<SystemLoginServiceClient>(new SystemLoginServiceClient(...)))
{
  var client = safeClient.Client;

  // Work with "client"
}
```

If you can figure out how to initialize your clients without passing parameters to the constructor, you could slim this down by adding a new() generic constraint to the type parameter T of SafeClient and then using it as follows:

```csharp
using (var safeClient = new SafeClient<SystemLoginServiceClient>())
{
  var client = safeClient.Client;

  // Work with "client"
}
```

  1. The code included in this article is a sketch of a solution and has not been tested. It does compile, though.

Entity Framework Generated SQL

This article originally appeared on earthli News and has been cross-posted here.

Microsoft just recently released Visual Studio 2013, which includes Entity Framework 6 and introduces a lot of new features. It reminded me of the following query that EF generated for me, way, way back when it was still version 3.5. Here's hoping that they've taken care of this problem since then.

So, the other day EF (v3.5) seemed to be taking quite a while to execute a query on SQL Server. This was a pretty central query and involved a few joins and restrictions, but wasn't anything too wild. All of the restrictions and joins were on numeric fields backed by indexes.

In these cases, it's always best to just fire up the profiler and see what kind of SQL is being generated by EF. It was a pretty scary thing (I've lost it unfortunately), but I did manage to take a screenshot of the query plan, shown below.


It doesn't look too bad until you notice that the inset on the bottom right (the black smudgy thing) is a representation of the entire query ... and that it just kept going on down the page.

.NET 4.5.1 and Visual Studio 2013 previews are available

The article Announcing the .NET Framework 4.5.1 Preview provides an incredible amount of detail about a relatively exciting list of improvements for .NET developers.

x64 Edit & Continue

First and foremost, the Edit-and-Continue feature is now available for x64 builds as well as x86 builds. Whereas an appropriate cynical reaction is that "it's about damn time they got that done", another appropriate reaction is to just be happy that they will finally support x64-debugging as a first-class feature in Visual Studio 2013.

Now that they have feature-parity for all build types, they can move on to other issues in the debugger (see the list of suggestions at the end).

Async-aware debugging

We haven't had much opportunity to experience the drawbacks of the current debugger vis-à-vis asynchronous debugging, but the experience outlined in the call-stack screenshot below is familiar to anyone who's done multi-threaded (or multi-fiber, etc.) programming.


Instead of showing the actual stack location in the thread within which the asynchronous operation is being executed, the new and improved version of the debugger shows a higher-level interpretation that places the current execution point within the context of the async operation. This is much more in keeping with the philosophy of the async/await feature in .NET 4.5, which lets developers write asynchronous code in what appears to be a serial fashion. This improved readability has now been translated to the debugger as well.


Return-value inspection

The VS2013 debugger can now show the "direct return values and the values of embedded methods (the arguments)" for the current line.1 Instead of manually selecting the text segment and using the Quick Watch window, you can now just see the chain of values in the "Autos" debugger pane.


NuGet Improvements

We are also releasing an update in Visual Studio 2013 Preview to provide better support for apps that indirectly depend on multiple versions of a single NuGet package. You can think of this as sane NuGet library versioning for desktop apps.

We've been bitten by the aforementioned issue and are hopeful that the solution in Visual Studio 2013 will fill the gaps in the current release. The article describes several other improvements to the NuGet services, including integration with Windows Update for large-scale deployment. It also mentions "a curated list of Microsoft .NET Framework NuGet Packages to help you discover these releases, published in OData format on the NuGet site", but doesn't say whether the NuGet UI in VS2013 has been improved. The current UI, while not as awful and slow as initial versions, is still not very good for discovery and is quite clumsy for installation and maintenance.

User Voice for Visual Studio/.NET

You're not limited to just waiting on the sidelines to see which features Microsoft has decided to implement in the latest version of .NET/Visual Studio. Head over to the User Voice for Visual Studio site to get an account and vote for the issues you'd like them to work on next.

Here's a list of the ones I found interesting, and some of which I've voted on.

  1. In a similar vein, I found the issue Bring back Classic Visual Basic, an improved version of VB6 to be interesting, simply because of the large number of votes for it (1712 at the time of writing). While it's understandable that VB6 developers don't understand the programming paradigm that came with the transition to .NET, the utterly reactionary desire to go back to VB6 is somewhat unfathomable. It's 2013, you can't put the dynamic/lambda/jitted genie back in the bottle. If you can't run with the big dogs, you'll have to stay on the porch...and stop being a developer. There isn't really any room for software written in a glorified batch language anymore.

  2. This feature has been available for the unmanaged-code debugger (read: C++) for a while now.

Deleting multiple objects in Entity Framework

Many improvements have been made to Microsoft's Entity Framework (EF) since we here at Encodo last used it in production code. In fact, we'd last used it waaaaaay back in 2008 and 2009 when EF had just been released. Instead of EF, we've been using the Quino ORM whenever we can.

However, we've recently started working on a project where EF5 is used (EF6 is in the late stages of release but is not yet generally available for production use). Though we'd been following the latest EF developments via the ADO.NET blog, we finally had a good excuse to become more familiar with the latest version through some hands-on experience.

Our history with EF

Entity Framework: Be Prepared was the first article we wrote about working with EF. It's quite long and documents the pain of using a 1.0 product from Microsoft. That version supported only a database-first approach, the designer was slow and the ORM's SQL mapper was quite primitive. Most of the tips and advice in the linked article, while perhaps amusing, are no longer necessary (especially if you're using the Code-first approach, which is highly recommended).

Our next update, The Dark Side of Entity Framework: Mapping Enumerated Associations, discusses a very specific issue related to mapping enumerated types in an entity model (something that Quino does very well). This shortcoming in EF has also been addressed but we haven't had a chance to test it yet.

Our final article was on performance, Pre-generating Entity Framework (EF) Views, which, while still pertinent, no longer needs to be done manually (there's an Entity Framework Power Tools extension for that now).

So let's just assume that that was the old EF; what's the latest and greatest version like?

Well, as you may have suspected, you're not going to get an article about Code-first or database migrations.1 While a lot of things have been fixed and streamlined to be not only much more intuitive but also work much more smoothly, there are still a few operations that aren't so intuitive (or that aren't supported by EF yet).

Standard way to delete objects

One such operation is deleting multiple objects from the database. It's not impossible, but the only solution that immediately presents itself is to:

  • load the objects to delete into memory,
  • remove these objects from the context,
  • and finally save changes to the context, which removes them from the database.

The following code illustrates this pattern for a hypothetical list of users.

```csharp
var users = context.Users.Where(u => u.Name == "John").ToList();

foreach (var u in users)
{
  context.Users.Remove(u);
}

context.SaveChanges();
```

This seems somewhat roundabout and quite inefficient.2

Support for batch deletes?

While the method above is fine for deleting a small number of objects -- and is quite useful when removing different types of objects from various collections -- it's not very useful for a large number of objects. Retrieving objects into memory only to delete them is neither intuitive nor logical.

The question is: is there a way to tell EF to delete objects based on a query from the database?

I found an example attached as an answer to the post Simple delete query using EF Code First. The gist of it is shown below.

```csharp
var users = context.Database.SqlQuery<User>(
  "DELETE FROM Users WHERE Name = @name",
  new [] { new SqlParameter("@name", "John") }
);
```

To be clear right from the start, using ESQL is already sub-optimal because the identifiers are not statically checked. This query will cause a run-time error if the model changes so that the "Users" table no longer exists or the "Name" column no longer exists or is no longer a string.

Since I hadn't found anything else more promising, though, I continued with this approach, aware that it might not be usable as a pattern because of the compile-time trade-off.

Although the answer had four up-votes, it is not clear that either the author or any of his fans have actually tried to execute the code. The code above returns an IEnumerable<User> but doesn't actually do anything.

After I'd realized this, I went to MSDN for more information on the SqlQuery method. The documentation is not encouraging for our purposes (still trying to delete objects without first loading them), as it describes the method as follows (emphasis added),

Creates a raw SQL query that will return elements of the given generic type. The type can be any type that has properties that match the names of the columns returned from the query, or can be a simple primitive type.

This does not bode well for deleting objects using this method. Still, creating the enumerable does very little on its own; in order to actually execute the query, you have to evaluate it.

Die Hoffnung stirbt zuletzt3 as we like to say on this side of the pond, so I tried evaluating the enumerable. A foreach should do the trick.

```csharp
var users = context.Database.SqlQuery<User>(
  "DELETE FROM Users WHERE Name = @name",
  new [] { new SqlParameter("@name", "John") }
);

foreach (var u in users)
{
  // NOP?
}
```

As indicated by the "NOP?" comment, it's unclear what one should actually do in this loop because the query already includes the command to delete the selected objects.

Our hopes are finally extinguished with the following error message:

System.Data.EntityCommandExecutionException : The data reader is incompatible with the specified 'Demo.User'. A member of the type, 'Id', does not have a corresponding column in the data reader with the same name.

That this approach does not work is actually a relief because it would have been far too obtuse and confusing to use in production.

It turns out that the SqlQuery only works with SELECT statements, as was strongly implied by the documentation.

```csharp
var users = context.Database.SqlQuery<User>(
  "SELECT * FROM Users WHERE Name = @name",
  new [] { new SqlParameter("@name", "John") }
);
```

Once we've converted to this syntax, though, we can just use the much clearer and compile-time-checked version that we started with, repeated below.

```csharp
var users = context.Users.Where(u => u.Name == "John").ToList();

foreach (var u in users)
{
  context.Users.Remove(u);
}

context.SaveChanges();
```

So we're back where we started, but perhaps a little wiser for having tried.
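As an aside: if you can live with the loss of compile-time checking discussed above, EF does expose Database.ExecuteSqlCommand for statements that don't return rows. It shares all the drawbacks of the raw-SQL approach, so it's not a pattern I'd recommend, but it does at least execute the delete without loading anything into memory:

```csharp
// Executes a non-query statement directly against the database.
// Same caveat as above: "Users" and "Name" are not statically checked.
var rowsAffected = context.Database.ExecuteSqlCommand(
  "DELETE FROM Users WHERE Name = @name",
  new SqlParameter("@name", "John"));
```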

Deleting objects with Quino

As a final footnote, I just want to point out how you would perform multiple deletes with the Quino ORM. It's quite simple, really: any query that you can use to select objects, you can also use to delete objects.4

So, how would I execute the query above in Quino?

```csharp
Session.Delete(Session.CreateQuery<User>().WhereEquals(User.MetaProperties.Name, "John").Query);
```

To make it a little clearer instead of showing off with a one-liner:

```csharp
var query = Session.CreateQuery<User>();
query.WhereEquals(User.MetaProperties.Name, "John");

Session.Delete(query.Query);
```

Quino doesn't support using LINQ to create queries, but its query API is still more statically checked than ESQL. You can see how the query could easily be extended to restrict on much more complex conditions, even including fields on joined tables.

Some combination of these reasons possibly accounts for EF's lack of support for batch deletes.

  1. As I wrote, we're using Code-first, which is much more comfortable than using the database-diagram editor of old. We're also using the nascent "Migrations" support, which has so far worked OK, though it's nowhere near as convenient as Quino's automated schema-migration.

  2. Though it is inefficient, it's better than a lot of other examples out there, which almost unilaterally include the call to context.SaveChanges() inside the foreach-loop. Doing so is wasteful and does not give EF an opportunity to optimize the delete calls into a single SQL statement (see footnote below).

  3. Translates to: "Hope is the last (thing) to die."

  4. With the following caveats, which generally apply to all queries with any ORM:

    * Many databases use a different syntax and provide different support for `DELETE` vs. `SELECT` operations.
    * Therefore, it is more likely that more complex conditions are not supported for `DELETE` operations on some database back-ends
    * Since the syntax often differs, it's more likely that a more complex query will fail to map properly in a `DELETE` operation than in a `SELECT` operation simply because that particular combination has never come up before.
    * That said, Quino has quite good support for deleting objects with restrictions not only on the table from which to delete data but also from other, joined tables.

A provably safe parallel language extension for C#

This article originally appeared on earthli News and has been cross-posted here.

The paper Uniqueness and Reference Immutability for Safe Parallelism by Colin S. Gordon, Matthew J. Parkinson, Jared Parsons, Aleks Bromfield, Joe Duffy is quite long (26 pages), detailed and involved. To be frank, most of the notation was foreign to me -- to say nothing of making heads or tails of most of the proofs and lemmas -- but I found the higher-level discussions and conclusions quite interesting.

The abstract is concise and describes the project very well:

A key challenge for concurrent programming is that side-effects (memory operations) in one thread can affect the behavior of another thread. In this paper, we present a type system to restrict the updates to memory to prevent these unintended side-effects. We provide a novel combination of immutable and unique (isolated) types that ensures safe parallelism (race freedom and deterministic execution). The type system includes support for polymorphism over type qualifiers, and can easily create cycles of immutable objects. Key to the system's flexibility is the ability to recover immutable or externally unique references after violating uniqueness without any explicit alias tracking. Our type system models a prototype extension to C# that is in active use by a Microsoft team. We describe their experiences building large systems with this extension. We prove the soundness of the type system by an embedding into a program logic.

The project proposes a type-system extension with which developers can write provably safe parallel programs -- i.e. "race freedom and deterministic execution" -- with the amount of actual parallelism determined when the program is analyzed and compiled rather than decided by a programmer creating threads of execution.

Isolating objects for parallelism

The "isolation" part of this type system reminds me a bit of the way that SCOOP addresses concurrency. That system also allows programs to designate objects as "separate" from other objects while also releasing the program from the onus of actually creating and managing separate execution contexts. That is, the syntax of the language allows a program to be written in a provably correct way (at least as far as parallelism is concerned; see the "other provable-language projects" section below). In order to execute such a program, the runtime loads not just the program but also another file that specifies the available virtual processors (commonly mapped to threads). Sections of code marked as "separate" can be run in parallel, depending on the available number of virtual processors. Otherwise, the program runs serially.

In SCOOP, methods are used as a natural isolation barrier, with input parameters marked as "separate". See SCOOP: Concurrency for Eiffel and SCOOP (software) for more details. The paper also contains an entire section listing other projects -- many implemented on the JVM -- that have attempted to make provably safe programming languages.

The system described in this paper goes much further, adding immutability as well as isolation (the same concept as "separate" in SCOOP). An interesting extension to the type system is that isolated object trees are free to have references to immutable objects (since those can't negatively impact parallelism). This allows for globally shared immutable state and reduces argument-passing significantly. Additionally, there are readable and writable references: the former can only be read but may be modified by other objects (otherwise it would be immutable); the latter can be read and written and is equivalent to a "normal" object in C# today. In fact, "[...] writable is the default annotation, so any single-threaded C# that does not access global state also compiles with the prototype."

Permission types

In this safe-parallel extension, a standard type system is extended so that every type can be assigned such a permission and there is "support for polymorphism over type qualifiers", which means that the extended type system includes the permission in the type, so that, given that B derives from A, a reference to readable B can be passed to a method that expects an immutable A. In addition, covariance is also supported for generic parameter types.

When they say that the "[k]ey to the system's flexibility is the ability to recover immutable or externally unique references after violating uniqueness without any explicit alias tracking", they mean that the type system allows programs to specify sections that accept isolated references as input, lets them convert to writable references and then convert back to isolated objects -- all without losing provably safe parallelism. This is quite a feat since it allows programs to benefit from isolation, immutability and provably safe parallelism without significantly changing common programming practice. In essence, it suffices to decorate variables and method parameters with these permission extensions to modify the types and let the compiler guide you as to further changes that need to be made. That is, an input parameter for a method will be marked as immutable so that it won't be changed and subsequent misuse has to be corrected.
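In hypothetical syntax (paraphrasing the paper's annotations; this is not shipping C#), the four permissions might be written as follows:

```csharp
// Hypothetical permission annotations, paraphrased from the paper:
immutable Person template;   // deeply immutable; safe to share across threads
isolated  Order  order;      // externally unique; can be handed between threads
readable  Person view;       // read-only view; the referent may change elsewhere
writable  Person person;     // the default: an ordinary C# reference today
```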

Even better, they found that, in practice, it is possible to use extension methods to allow parallel and standard implementations of collections (lists, maps, etc.) to share most code.

A fully polymorphic version of a map() method for a collection can coexist with a parallelized version pmap() specialized for immutable or readable collections. [...] Note that the parallelized version can still be used with writable collections through subtyping and framing as long as the mapped operation is pure; no duplication or creation of an additional collection just for concurrency is needed.

Real projects and performance impact

Much of the paper is naturally concerned with proving that their type system actually does what it says it does. As mentioned above, at least 2/3 of the paper is devoted to lemmas and large swaths of notation. For programmers, the more interesting part is the penultimate section that discusses the extension to C# and the experiences in using it for larger projects.

A source-level variant of this system, as an extension to C#, is in use by a large project at Microsoft, as their primary programming language. The group has written several million lines of code, including: core libraries (including collections with polymorphism over element permissions and data-parallel operations when safe), a webserver, a high level optimizing compiler, and an MPEG decoder.

Several million lines of code is, well, an enormous amount of code. I'm not sure how many programmers they have, how they're counting lines or how efficiently they write their code, but millions of lines of code suggests generated code of some kind. Still, taken with the next statement on performance, that much code more than proves that the type system is viable.

These and other applications written in the source language are performance-competitive with established implementations on standard benchmarks; we mention this not because our language design is focused on performance, but merely to point out that heavy use of reference immutability, including removing mutable static/global state, has not come at the cost of performance in the experience of the Microsoft team.

Not only is performance not impacted, but the nature of the typing extensions allows the compiler to know much more about which values and collections can be changed, which affects how aggressively this data can be cached or inlined.

In fact, the prototype compiler exploits reference immutability information for a number of otherwise-unavailable compiler optimizations. [...] Reference immutability enables some new optimizations in the compiler and runtime system. For example, the concurrent GC can use weaker read barriers for immutable data. The compiler can perform more code motion and caching, and an MSIL-to-native pass can freeze immutable data into the binary.

Incremental integration ("unstrict" blocks)

In the current implementation, there is an unstrict block that allows the team at Microsoft to temporarily turn off the new type system and to ignore safety checks. This is a pragmatic approach which allows the software to be run before it has been proven 100% parallel-safe. This is still better than having no provably safe blocks at all. Their goal is naturally to remove as many of these blocks as possible -- and, in fact, this requirement drives further refinement of the type system and library.

We continue to work on driving the number of unstrict blocks as low as possible without over-complicating the type system's use or implementation.

The project is still a work-in-progress but has seen quite a few iterations, which is promising. The paper was written in 2012; it would be very interesting to take it for a test drive in a CTP.

Other provable-language projects

A related project at Microsoft Research, Spec#, contributed a lot of basic knowledge about provable programs. The authors even state that the "[...] type system grew naturally from a series of efforts at safe parallelism. [...] The earliest version was simply copying Spec#'s [Pure] method attribute, along with a set of carefully designed task- and data-parallelism libraries." Spec#, in turn, is a "[...] formal language for API contracts (influenced by JML, AsmL, and Eiffel), which extends C# with constructs for non-null types, preconditions, postconditions, and object invariants".

Though the implementation of this permissions-based type system may have started with Spec#, the primary focus of that project was more about bringing Design-by-Contract principles (examples and some discussion here) to the .NET world via C#. Though Spec# has downloadable code, the project hasn't really been updated in years. This is a shame, as support for Eiffel in .NET, mentioned above as one of the key influences of Spec#, was dropped by ISE Eiffel long ago.

Spec#, in turn, was mostly replaced by Microsoft Research's Contracts project (an older version of which was covered in depth in Microsoft Code Contracts: Not with a Ten-foot Pole). The Contracts project seems to be alive and well: the most recent release is from October 2012. I have not checked it out since my initial thumbs-down review (linked above) but did note in passing that the implementation is still (A) library-only and (B) does not support Visual Studio 2012.

The library-only restriction is particularly galling, as such an implementation can lead to repeated code and unwieldy anti-patterns. As documented in the Contracts FAQ, the current implementation of the "tools take care of enforcing consistency and proper inheritance of contracts" but this is presumably accomplished with compiler errors that require the programmer to include contracts from base methods in overrides.

The seminal work Object-oriented Software Construction by Bertrand Meyer (vol. II in particular) goes into tremendous detail on a type system that incorporates contracts directly. The type system discussed in this article covers only parallel safety: null-safety and other contracts are not covered at all. If you're at all interested in these types of language extensions, the vol.2 of OOSC is a great read. The examples are all in Eiffel but should be relatively accessible. Though some features -- generics, notably but also tuples, once routines and agents -- have since made their way into C# and other more commonly used languages, many others -- such as contracts, anchored types (contravariance is far too constrained in C# to allow them), covariant return types, covariance everywhere, multiple inheritance, explicit feature removal, loop variants and invariants, etc. -- are still not available. Subsequent interesting work has also been done on extensions that allow creation of provably null-safe programs, something also addressed in part by Microsoft Research's Contracts project.