Using Unity, Collab and Git

If you're familiar with the topic, you might be recoiling in horror. It would be unclear, though, whether you're recoiling from the "using Collab" part or the "using Collab with Git" part.

Neither is as straightforward as I'd hoped.

tl;dr: If you have to use Collab with Unity, but want to back it up with Git, disable core.autocrlf[1] and add * -text to the .gitattributes file.

Collab's Drawbacks

Collab is the source-control system integrated into the Unity IDE.

It was built for designers to be able to do some version control, but not much more. Even with its limited scope, it's a poor tool.

The functionality horror

  • The system never shows you any differences, neither in the web UI nor in the local UI, neither for uncommitted nor for committed files
  • Some changes cannot be reverted. No reason is given.
  • You can only delete new files from the file system.
  • There is no support for renaming
  • Reverting to a known commit has worked for me exactly once out of about 10 tries. The operation fails with an "Error occurred" message and no further information. If you really get stuck, your only choice is to restore the entire workspace by re-cloning/re-downloading it.
  • Conflict resolution is poorly supported, although it works better than expected (it integrates with BeyondCompare, thank goodness).

The usability horror

  • The UI only lets you commit all changed files at once.
      • There is no notion of "commits".
      • You can’t commit individual files or chunks.
      • There is no staging area.
      • You can't exclude files.
      • You can ignore them completely, but that doesn't help.
  • The UI is only accessible via mouse from the menu bar.
  • You can sometimes revert folders (sometimes you can't, again with an "Error occurred" message), but you can't revert arbitrary groups of files.
  • The UI is almost entirely in that custom drop-down menu.
  • You can scroll through your changed files, but you can't expand the menu to show more files at once.
  • You can show a commit history, but there are no diffs. None.
  • There aren't even any diffs in the web version of the UI, which is marginally better, but read-only.

Pair Git with Collab

Relying on Collab alone is really dangerous, especially with Unity projects: there is so much in a Unity project without a proper "Undo" that you very often want to return to a known good version.

So what can we do to improve this situation? We would like to use Git instead of Collab.

However, we have to respect the capabilities and know-how of the designers on our team, who don't know how to use Git.

On our current project, there's no time to train everyone on Git—and they already know how to use Collab and don't feel tremendously limited by it.

Remember, any source control is better than no source control. The designers are regularly backing up their work now. In its defense, Collab is definitely better than nothing (or using a file-share or some other weak form of code-sharing).

Instead, those of us who know Git are using Git alongside Collab.

It kind of works...

We started naively, with all of our default settings in Git. Our workflow was:

  1. Pull in Unity/Collab
  2. Fetch from Git/Rebase to head (we actually just use "pull with rebase")

Unfortunately, we would often end up with a ton of files marked as changed in Collab. These were always line-ending differences. As mentioned above, Collab is not a good tool for reverting changes.

The project has time constraints—it's a prototype for a conference, with a hard deadline—so, despite its limitations, we reverted in Collab and updated Git with the line-endings that Collab expected.

We limped along like this for a bit, but with two developers on Git/Collab on Windows and one designer on Collab on Mac, we were spending too much time "fixing up" files. The benefit of having Git was outweighed by the problems it caused with Collab.

Know Your Enemy

So we investigated what was really going on. The following screenshots show that Collab doesn't seem to care about line-endings. They're all over the map.

JSON file with mixed line-endings

CS file with CRLF line-endings

.unity file with LF line-endings

Configuring Git

Git, on the other hand, really cares about line-endings. By default, Git will transform the line-endings in files that it considers to be text files (this part is important later) to the line-ending of the local platform.

In the repository, all text files are LF-only. If you work on MacOS or Linux, line-endings in the workspace are unchanged; if you work on Windows, Git changes all of these line-endings to CRLF on checkout—and back to LF on commit.

Our first "fix" was to turn off the core.autocrlf option in the local Git repository.

git config --local core.autocrlf false

We thought this would fix everything since now Git was no longer transforming our line-endings on commit and checkout.

This turned out to be only part of the problem, though. As you can see above, the text files in the repository have an arbitrary mix of line-endings already. Even with the feature turned off, Git was still normalizing line-endings to LF on Windows.

The only thing we'd changed so far was to stop Git from converting LF to CRLF on checkout. Any time we did a git reset, for example, the line-endings in our workspace would still end up different from what was in Git or Collab.

Git: Stop doing stuff

What we really want is for Git to stop changing any line-endings at all.

This isn't part of the command-line configuration, though. Instead, you have to set up .gitattributes. Git has default settings that determine which files it treats as which types. We wanted to adjust these default settings by telling Git that, in this repository, it should treat no files as text.

Once we knew this, it was quite easy to configure. Simply add a .gitattributes file to the root of the repository, with the following contents:

* -text

This translates to "do not treat any file as text" (i.e. match all files; disable text-handling).

Conclusion

With these settings, the two developers were able to reset their workspaces and both Git and Collab were happy. Collab is still a sub-par tool, but we can now work with designers and still have Git to allow the developers to use a better workflow.

The designers using only Collab were completely unaffected by our changes.



  1. Technically, I don't think you have to change the autocrlf setting. Turning off text-handling in Git should suffice. However, I haven't tested with this feature left on and, due to time constraints, am not going to risk it.

Breaking Changes in C#

Due to the nature of the language, there are some API changes that almost inevitably lead to breaking changes in C#.

Change constructor parameters

While you can easily add another constructor and mark the old one(s) as obsolete, if you use an IOC container that allows only a single public constructor, you're forced to either

  • remove the obsolete constructor or
  • mark the obsolete constructor as protected.

In either case, the user has a compile error.
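
For illustration, here is a minimal sketch of the second option; the type names (Cache, ICacheSettings, ILogger) are invented for the example and are not taken from any particular library:

public class Cache
{
  // Old constructor: demoted to protected and marked obsolete so that the IOC
  // container still sees exactly one public constructor. External callers that
  // used this signature now get a compile error, because it is no longer public.
  [Obsolete("Use the constructor without the logger parameter.")]
  protected Cache(ICacheSettings settings, ILogger logger)
    : this(settings)
  {
  }

  // New, single public constructor used by the IOC container.
  public Cache(ICacheSettings settings)
  {
    Settings = settings;
  }

  public ICacheSettings Settings { get; }
}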

Virtual methods/Interfaces

There are several known issues with introducing new methods or changing existing methods on an existing interface. For many of these situations, there are relatively smooth upgrade paths.

I encountered a situation recently that I thought worth mentioning. I wanted to introduce a new overload on an existing type.

Suppose you have the following method:

bool TryGetValue<T>(
  out T value,
  TKey key = default(TKey), 
  [CanBeNull] ILogger logger = null
);

We would like to remove the logger parameter. So we deprecate the method above and declare the new method.

bool TryGetValue<T>(
  out T value, 
  TKey key = default(TKey)
);

Now the compiler/ReSharper notifies you that there will be an ambiguity if a caller does not pass a logger. How to resolve this? Well, we can just remove the default value for that parameter in the obsolete method.

bool TryGetValue<T>(
  out T value,
  TKey key = default(TKey),
  [CanBeNull] ILogger logger
);

But now you've got another problem: The parameter logger cannot come after the key parameter because it doesn't have a default value.

So, now you'd have to move the logger parameter in front of the key parameter. This will cause a compile error in clients, which is what we were trying to avoid in the first place.

In this case, we have a couple of sub-optimal options.

Multiple Releases

Use a different name for the new API (e.g. TryGetValueEx à la Windows) in the next major version, then switch the name back in the version after that and finally remove the obsolete member in yet another version, as sketched after the list below.

That is,

  • in version n, TryGetValue (with logger) is obsolete and users are told to use TryGetValueEx (no logger)
  • in version n+1, TryGetValueEx (no logger) is obsolete and users are told to use TryGetValue (no logger)
  • in version n+2, we finally remove TryGetValueEx.
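
As a sketch, version n of the API might look something like this (the containing interface is invented for illustration; the method signatures are the ones from the example above):

public interface ICache<TKey>
{
  // Version n: the old method still exists, but is marked obsolete.
  [Obsolete("Use TryGetValueEx() instead; the logger parameter is no longer required.")]
  bool TryGetValue<T>(
    out T value,
    TKey key = default(TKey),
    [CanBeNull] ILogger logger = null
  );

  // Temporary name; renamed back to TryGetValue() in version n+1 and
  // removed entirely in version n+2.
  bool TryGetValueEx<T>(
    out T value,
    TKey key = default(TKey)
  );
}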

This is a lot of work and requires three upgrades to accomplish. You really need to stay on the ball in order to get this kind of change integrated and it takes a non-trivial amount of time and effort.

We generally don't use this method, as our customers are developers and can deal with a compile error or two, especially when it's noted in the release notes and the workaround is fairly obvious (e.g. the logger parameter is just no longer required).

Remove instead of deprecating

Accept that there will be a compile error and soften the landing as much as possible for customers by noting it in the release notes.

Version numbers in .NET Projects

Any software product should have a version number. This article will answer the following questions about how Encodo works with them.

  • How do we choose a version number?
  • What parts does a version number have?
  • What do these parts mean?
  • How do different stakeholders interpret the number?
  • What conventions exist for choosing numbers?
  • Who chooses and sets these parts?

Stakeholders

In decreasing order of expected expertise,

  • Developers: Write the software; may change version numbers
  • Testers: Test the software; highly interested in version numbers that make sense
  • End users: Use the software as a black box

The intended audience of this document is developers.

Definitions and Assumptions

  • Build servers, not developer desktops, produce artifacts
  • The source-control system is Git
  • The quino command-line tool is installed on all machines. This tool can read and write version numbers for any .NET solution, regardless of which of the many version-numbering methods a given solution actually uses.
  • A software library is a package or product that has a developer as an end user
  • A breaking change in a software library causes one of the following
    • a build error
    • an API to behave differently in a way that cannot be justified as a bug fix

Semantic versions

Encodo uses semantic versions. This scheme has a strict ordering that allows you to determine which version is "newer". It indicates pre-releases (e.g. alphas, betas, rcs) with a "minus", as shown below.

Version numbers come in two flavors:

  • Official releases: [Major].[Minor].[Patch].[Build]
  • Pre-releases: [Major].[Minor].[Patch]-[Label][Build]

See Microsoft's NuGet Package Version Reference for more information.

Examples

  • 0.9.0-alpha34: A pre-release of 0.9.0
  • 0.9.0-beta48: A pre-release of 0.9.0
  • 0.9.0.67: An official release of 0.9.0
  • 1.0.0-rc512: A pre-release of 1.0.0
  • 1.0.0.523: An official release of 1.0.0

The numbers are strictly ordered. The first three parts indicate the "main" version. The final part counts strictly upward.

Parts

The following list describes each of the parts and explains what to expect when it changes.

Build

  • Identifies the build task that produced the artifact
  • Strictly increasing

Label

  • An arbitrary designation for the "type" of pre-release

Patch

  • Introduces bug fixes but no features or API changes
  • May introduce obsolete members
  • May not introduce breaking changes

This part is also known as "Maintenance" (see Software versioning on Wikipedia).

Minor

  • Introduces new features that extend existing functionality
  • May include bug fixes
  • May cause minor breaking changes
  • May introduce obsolete members that cause compile errors
  • Workarounds must be documented in release notes or obsolete messages

Major

  • Introduces major new features
  • Introduces breaking changes that require considerable effort to integrate
  • Introduces a new data or protocol format that requires migration

Conventions

Uniqueness for official releases

There will only ever be one artifact of an official release corresponding to a given "main" version number.

That is, if 1.0.0.523 exists, then there will never be a 1.0.0.524. This is due to the fact that the build number (e.g. 524) is purely for auditing.

For example, suppose your software uses a NuGet package with version 1.0.0.523. NuGet will never offer to upgrade to 1.0.0.524, because no such artifact will ever exist.

Pre-release Labels

There are no restrictions on the labels for pre-releases. However, it's recommended to use one of the following:

  • alpha
  • beta
  • rc

Be aware that if you choose a different label, then it is ordered alphabetically relative to the other pre-releases.

For example, if you were to use the label prealpha to produce the version 0.9.0-prealpha21, then that version is considered to be higher than 0.9.0-alpha34. A tool like NuGet will not see the latter version as an upgrade.

Release branches

The name of a release branch should be the major version of that release. E.g. release/1 for version 1.x.x.x.

Pre-release branches

The name of a pre-release branch should be of the form feature/[label] where [label] is one of the labels recommended above. It's also OK to use a personal branch to create a pre-release build, as in mvb/[label].

Setting the base version

A developer uses the quino tool to set the version.

For example, to set the version to 1.0.1, execute the following:

quino fix -v 1.0.1.0

The tool will have updated the version number in all relevant files.

Calculating final version

The build server calculates a release's version number as follows (a sketch of how the parts are combined appears after the list):

  • major: Taken from the solution
  • minor: Taken from the solution
  • patch (a.k.a. maintenance): Taken from the solution
  • label: Taken from the Git branch (see below for details)
  • build: Provided by the build server
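
As a sketch (this is not the actual build-server logic), the final version string is assembled from those parts roughly like this:

static string GetFinalVersion(string baseVersion, string label, int buildNumber)
{
  // baseVersion is the value set with "quino fix", e.g. "0.9.0.0";
  // drop the trailing build part to get the "main" version, e.g. "0.9.0".
  var mainVersion = baseVersion.Substring(0, baseVersion.LastIndexOf('.'));

  // Official releases have no label: 0.9.0.522
  // Pre-releases append the label and the build number: 0.9.0-rc522
  return string.IsNullOrEmpty(label)
    ? $"{mainVersion}.{buildNumber}"
    : $"{mainVersion}-{label}{buildNumber}";
}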

Git Branches

The name of the Git branch determines which kind of release to produce.

  • If the name of the branch matches the glob **/release/*, then it's an official release
  • Everything else is a pre-release

For example, each of the following branch names produces an official release:

  • origin/release/1
  • origin/production/release/new
  • origin/release/
  • release/1
  • production/release/new
  • release/

The name of the branch doesn't influence the version number since an official release doesn't have a label.
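
A sketch of that check (not the actual build-server code) might look like this:

static bool IsOfficialRelease(string branchName)
{
  // Matches the glob **/release/*: the next-to-last path segment must be "release".
  var parts = branchName.Split('/');

  return parts.Length >= 2 && parts[parts.Length - 2] == "release";
}

// IsOfficialRelease("origin/release/1")  => true  (official release)
// IsOfficialRelease("origin/feature/rc") => false (pre-release)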

Pre-release labels

The label is taken from the last part of the branch name.

For example,

  • origin/feature/beta yields beta
  • origin/feature/rc yields rc
  • origin/mvb/rc yields rc

The following algorithm ensures that the label can be part of a valid semantic version (a code sketch follows the examples below).

  • Remove invalid characters
  • Append an X after a trailing digit
  • Use X if the label is empty (or becomes empty after having removed invalid characters)

For example,

  • origin/feature/rc1 yields rc1X
  • origin/feature/linux_compat yields linuxcompat
  • origin/feature/12 yields X
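
A sketch of the sanitization rules described above; the treatment of purely numeric labels is inferred from the last example, and the real quino tool may differ in the details:

using System.Linq;
using System.Text.RegularExpressions;

static class VersionLabels
{
  public static string Sanitize(string branchLabel)
  {
    // Remove characters that aren't valid in a semantic-version label.
    var label = Regex.Replace(branchLabel ?? string.Empty, "[^A-Za-z0-9]", string.Empty);

    // A purely numeric label is treated as empty (e.g. "12" yields "X").
    if (label.All(char.IsDigit))
    {
      label = string.Empty;
    }

    if (label.Length == 0)
    {
      return "X";
    }

    // Append an X after a trailing digit so that the label and the build
    // number don't run together (e.g. "rc1" + build 512 => "rc1X512").
    return char.IsDigit(label[label.Length - 1]) ? label + "X" : label;
  }
}

// Sanitize("rc1")          => "rc1X"
// Sanitize("linux_compat") => "linuxcompat"
// Sanitize("12")           => "X"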

Examples

Assume that,

  • the version number in the solution is 0.9.0.0
  • the build counter on the build server is at 522

Then,

  • Deploying from branch origin/release/1 produces artifacts with version number 0.9.0.522
  • Deploying from branch origin/feature/rc produces artifacts with version number 0.9.0-rc522

Release Workflow

The following are very concise guides for how to produce artifacts.

Pre-release

  • Ensure you are on a non-release branch (e.g. feature/rc, master)
  • Verify or set the base version (e.g. quino fix -v 1.0.2.0)
  • Push any changes to Git
  • Execute the "deploy" task against your branch on the build server

Release

  • Ensure you are on a release branch (e.g. release/1)
  • Verify or set the base version (e.g. quino fix -v 1.0.2.0)
  • Push any changes to Git
  • Execute the "deploy" task against your branch on the build server

v6.1/6.2: Cross-platform, SourceLink and Docker

The summary below describes major new features, items of note and breaking changes.

The links above require a login.

Few changes so far, other than that Quino-Windows targets .NET Framework 4.7.2 and DevExpress 18.2.5.

Highlights

  • Inline Documentation: Quino packages now consistently include inline/developer documentation, in the form of *.xml files. IDEs use these files to provide real-time documentation in tooltips and code-completion.
  • Nullability Attributes: Attributes like NotNull, CanBeNull, [Pure], etc. are now included in Quino assemblies. Tools like R# use these attributes during code-analysis to find possible bugs and improve warnings and suggestions.
  • Online Documentation: The online documentation is once again up-to-date. See the release/6 documentation or default documentation (master branch).
  • Debugging: We've laid a bunch of groundwork for SourceLink. If the NuGet server supports this protocol, then Visual Studio automatically offers to download source code for debugging. This feature will be fully enabled in subsequent releases, after we've upgraded our package server.
  • Cross-platform: We've made more improvements to how Quino-Standard compiles and runs under Linux or MacOS. All tests now run on Linux or MacOS as well as Windows.
  • Containers: Improved integration and usage of Docker for local development and on build servers
  • Roslyn: Encodo.Compilers now uses Roslyn to provide compiling services. Quino-Standard uses these from tests to verify generated code. As of this release, C#6 and C#7 features are supported. Also, the compiler support is available on all platforms.

Breaking Changes

  • UseRunModeCommand() is no longer available by default. Applications have to opt in to the rm -debug setting. Please see the Debugging documentation for assistance in converting to the new configuration methods.
  • KeyValueNode.GetValue() no longer returns null; use TryGetValue() instead.
  • KeyValueNode.GetValue() no longer accepts a logger parameter. All logging is now done to the central logger. If you still need to collect messages from that operation, then see ConfigurableLoggerExtensions.UseLogger() or ConfigurableLoggerExtensions.UseInMemoryLogger(). (A migration sketch follows this list.)
  • IDatabaseProperties.Collation is now of type string rather than Collation. This change was made to allow an application to specify exactly the desired collation without having Quino reinterpret it or do matching.
  • Similarly, ISqlServerCollationTools.GetEncodingAndCollation() now returns a tuple of (Encoding, string) rather than a tuple of (Encoding, Collation).
  • The constructor of NamedValueNode has changed. Instead, you should use the abstraction INamedValueNodeTools.CreateNode() or INamedValueNodeTools.CreateRootNode().
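
A hypothetical migration sketch for the GetValue()/TryGetValue() change follows; the variable names are invented and the exact signature may differ, but the shape of the change is from a null-returning getter to the Try-pattern shown earlier in this document:

// Before (5.x): a missing value was signalled by a null return.
// var connectionString = node.GetValue<string>();

// After (6.1/6.2): use the Try-pattern instead.
if (node.TryGetValue(out string connectionString))
{
  // Use connectionString only if it was actually present.
}
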
QQL: A Query Language for Quino

In late 2011 and early 2012, Encodo designed a querying language for Quino. Quino has an ORM that, combined with .NET Linq, provides a powerful querying interface for developers. QQL is a DSL that brings this power to non-developers.

QQL never made it to implementation, only to specification. In the meantime, the world moved on and we now have common, generic querying APIs like OData. The time for QQL is past, but the specification is still an interesting artifact in its own right.

Who knows? Maybe we'll get around to implementing some of it, at some point.

At any rate, you can download the specification from the downloads section.

The following excerpts should give you an idea of what you're in for, should you download and read the 80-page document.

Details

The TOC lists the following top-level chapters:

  1. Introduction
  2. Examples
  3. Context & Scopes
  4. Standard Queries
  5. Grouping Queries
  6. Evaluation
  7. Syntax
  8. Data Types and Operators
  9. Libraries
  10. Best Practices
  11. Implementation Details
  12. Future Enhancements

From the abstract in the document:

The Quino Query Language (QQL) defines a syntax and semantics for formulating data requests against hierarchical data structures. It is easy to read and learn both for those familiar with SQL and non-programmers with a certain capacity for abstract thinking (i.e. power users). Learning only a few basic rules is enough to allow a user to quickly determine which data will be returned by all but the more complex queries. As with any other language, more complex concepts result in more complex texts, but the syntax of QQL limits these cases.

From the overview:

QQL defines a syntax and semantics for writing queries against hierarchical data structures. A query describes a set of data by choosing an initial context in the data and specifying which data are to be returned and how the results are to be organized. An execution engine generates this result by applying the query to the data.

Examples

Standard Projections

The following is from chapter 2.1, "Simple Standard Query":

The following query returns the first and last name of all active people as well as their 10 most recent time entries, reverse-sorted first by last name, then by first name.

Person
{
  select
  {
    FirstName; LastName;
    Sample:= TimeEntries { orderby Date desc; limit 10 }
  }
  where Active
  orderby
  {
    LastName desc;
    FirstName desc;
  }
}

In chapter 2, there are also "2.2 Intermediate Standard Query" and "2.3 Complex Standard Query" examples.

Grouping Projections

The following is from chapter 2.4, "Simple Grouping Query":

The following query groups active people by last name and returns the age of the youngest person and the maximum contracts for each last name. Results are ordered by the maximum contracts for each group and then by last name.

group Person
{
  groupby LastName;
  select
  {
    default;
    Age:= (Now - BirthDate.Min).Year;
    MaxContracts:= Contracts.Count.Max
  }
  where Active;
  orderby
  {
    MaxContracts desc;
    LastName desc;
  }
}

In chapter 2, there are also "2.5 Complex Grouping Query", "2.6 Standard Query with Grouping Query" and "2.7 Nested Grouping Queries" examples.

v5.2.9: Console, Services and Bug Fixes

The summary below describes major new features, items of note and breaking changes.

The links above require a login.

Overview

This is the last planned release on the 5.x branch of Quino. The future lies with Quino 6, which targets .NET Standard and .NET Core wherever possible.

Highlights

  • Improved Console Support for Windows Services (QNO-5938)

Breaking changes

  • Renamed AddDefaultExceptionDetailsFormatters() to UseDefaultExceptionDetailsFormatters()
  • Moved crash-reporter types to Encodo.Monitor
  • Renamed ProcessExtensions.StopChild() to StopAndClose()
  • Renamed IProcessParameters.Timeout to IProcessParameters.TimeoutInMilliseconds
  • Renamed IDataGeneratePluginInitializer to IDataGeneratorPluginInitializer
  • Plugin support has been moved from Encodo.Application.Standard to Encodo.Plugins
  • PasswordEncryptionType has been replaced with HashAlgorithm
  • GetSearchClass(IMetaClass) was moved from Quino.Meta to Quino.Builders
  • GetLayout no longer has a default value for the layoutType parameter (pass LayoutType.Detail to get the previous behavior)
  • IKeyValueNode.GetValue() has been moved to an extension method. You may need to include the namespace Encodo.Nodes.
  • Namespaces in Quino.Bookmarks have been changed from Encodo.Quino.Bookmarks.Bookmarks to Encodo.Quino.Bookmarks.
  • WPF schema-migration types have been moved from the Quino.Module.Schema.Wpf namespace to Encodo.Quino.Module.Schema.Wpf.
  • Some testing support types have been moved from the Tests to the Testing namespace (e.g. EncodoServicesBasedTestsBase and EncodoTestsBase).
  • Introduced the IFinalizerBuilderProvider so that applications no longer need to include the MetaStandardBuilder in models.

Finalizer Provider

While an application no longer needs to manage the insertion point of the MetaStandardBuilder (and therefore no longer needs to override PostFinalize() instead of a more relevant method), this is a breaking change for certain situations: for example, a builder that overrode PostFinalize() and expected the finalizer builders included in MetaStandardBuilder to have already run.

The workaround is to either:

  • Rewrite the logic to avoid depending on expressions/model added by the finalizer builders. (this is what we ended up doing in Punchclock).
  • Add the builder to the list of finalizer builders (in which case it will be executed last). You can customize the finalizer builders, as with any other provider, using something like the following in your application startup:

application.Configure<IFinalizerBuilderProvider>(
  p =>
  {
    p.Register<MyOwnFinalizerBuilder>();
    p.Remove<FallbackLayoutBuilder>();
  }
);

Relation Captions

Automatic generation of relation captions has changed slightly.

Whereas previously, the captions of the target class were always used, the FallbackCaptionBuilder now only does this if the name of the relation has not been changed from the default.

If the name was changed to something else, then that name is used for relation captions. For example, if you make a relation from a Company to a Person called People, then Quino will still use the captions from the class. This allows your application to set the caption on the class and it will be used for many relations.

However, if you create a similar relation but rename it to CEO, Quino will no longer use the class captions, since Person and People (and their translations) are no longer appropriate; CEO and CEOs are more appropriate.

In that case, the captions in the default language will be more appropriate, but the captions for other languages, previously taken from the class, will no longer be used on the relation.

v6.0: .NET Standard & Authentication

The summary below describes major new features, items of note and breaking changes.

The links above require a login.

Overview

At long last, Quino enters the world of .NET Standard and .NET Core. Libraries target .NET Standard 2.0, which means they can all be used with any .NET runtime on any .NET platform (e.g. Mac and Linux). Sample applications and testing assemblies target .NET Core 2.0. Tools like quinogenerate and quinofix target .NET Core 2.1 to take advantage of the standardized external tool-support there.

Furthermore, the Windows, Winform and WPF projects have moved to a separate solution/repository called Quino-Windows.

Quino-Standard is the core on which both Quino-Windows and Quino-WebAPI build.

  • All core assemblies target .NET Standard 2.0.
  • All assemblies in Quino-Windows target .NET Framework 4.6.2 because that's the first framework that can interact with .NET Standard (and under which Windows-specific code runs).
  • All assemblies in Quino-WebAPI currently target .NET Framework 4.6.2. We plan on targeting .NET Core in an upcoming version (tentatively planned for v7).

Highlights

  • Target .NET Standard and .NET Core from Quino-Standard
  • Split Windows-specific code to Quino-Windows
  • Improve authentication API to use IIdentity everywhere (deprecating ICredentials and IUserCredentials).

Breaking Changes

6.0 is a pretty major break from the 5.x release. Although almost all assembly names have stayed the same, we had to move some types around to accommodate targeting .NET Standard with 85% of Quino's code.

APIs

We've tried to support existing code wherever possible, but some compile errors will be unavoidable (e.g. from namespace changes or missing references). In many cases, R#/VS should be able to help repair these errors.

These are the breaking changes that are currently known.

  • Moved IRunSettings and RunMode from Encodo.Application to Encodo.Core.

References

Any .NET Framework executable that uses assemblies targeting .NET Standard must reference .NET Standard itself. The compiler (MSBuild) in Visual Studio will alert you to add a reference to .NET Standard using NuGet. This applies not just to Winform executables, but also to any unit-test assemblies.

Tools

One piece that has changed significantly is the tool support formerly provided with Quino.Utils. As of version 6, Quino no longer uses NAnt, instead providing dotnet-compatible tools that you can install using common .NET commands. Currently, Quino supports:

  • dotnet quinofix
  • dotnet quinogenerate
  • dotnet quinopack

Please see the tools documentation for more information on how to install and use the new tools.

The standalone Winforms-based tools are in the Quino-Windows download, in the Tools.zip archive.

  • Quino.Migrator
  • Quino.PasswordEncryptor

Quino.Utils is no longer supported as a NuGet package.

v5.0.15: bug fixes for Winform and Report Manager

The summary below describes major new features, items of note and breaking changes. The full list of issues is in the release notes and is available to those with access to the Encodo issue tracker.

Highlights

Breaking changes

  • No known breaking changes.

SPA state management libraries

Abstract

Encodo has updated its technology stack for SPAs. Current and future projects will use a combination of React Component States, React Contexts and Redux:

  • Use React Component States to manage state that is used only in a single component.
  • Use React Contexts to manage presentational state for component hierarchies.
  • Use Redux to manage global or persistent state.

The following article provides justification and reasoning for the conclusions listed above.

Overview

Encodo have undertaken a number of Single Page Application (SPA) projects over the last several years. During this time, web technologies, common standards and best practices have changed significantly. As such, these projects each had different configurations and used different sets of web technologies.

The last two years have brought a reduction in churn in web technologies. Encodo have therefore decided to evaluate SPA libraries with the goal of defining a stack that will be stable for the next few years. The outcome of this evaluation proposes a set of best practices in architecting an SPA, and most importantly, architecting an SPA’s state.

Participants

  • Marc Duerst
  • Marco von Ballmoos
  • Remo von Ballmoos
  • Richi Buetzer
  • Tom Szpytman

Introduction

Having undertaken an earlier evaluation of SPA rendering libraries, Encodo’s SPA projects have all relied upon the Javascript library React. To date, the company still feels that React provides an adequate and stable platform upon which to build SPAs.

Where the company feels its knowledge can be improved is in how state should be structured in an SPA, and in which SPA state libraries, or combination of libraries, provide the most maintainable, stable and readable architectures. As such, this evaluation will only focus on discussing SPA-state libraries and not SPA-rendering libraries.

Requirements

Encodo focusses on both the building and maintenance of elegant solutions. It is therefore paramount that these software solutions are stable, yet future-proof in an ever-changing world. Many of Encodo’s clients request that solutions be backward-compatible with older browsers. An SPA state library must therefore adhere to the following criteria:

  • The library must have a moderately-sized community
  • The library must have a solid group of maintainers and contributors
  • Typescript typings must be available (and maintained)
  • The library must be open-source
  • The library must support all common browsers, as far back as IE11
  • The library must have a roadmap / future
  • Code using the library must be maintainable and readable (and support refactoring in a useful manner)

Candidates

Redux

Redux was released over three years ago and has amassed over 40,000 stars on Github. The project team provides a thorough set of tutorials, and software developers are additionally able to find a plethora of other resources online. Furthermore, with almost 15,000 related questions on StackOverflow, the chances of finding unique problems that other developers haven’t previously encountered are slim. Redux has over 600 contributors, and although its main contributors currently work for Facebook, the library is open source in its own right and not owned by Facebook.

Redux is an implementation of the Flux pattern; that is, you have a set of stores that each hold and maintain part of an application state. Stores register with a dispatcher, and by connecting to the dispatcher, receive notifications of events (usually as a result of a user input, but often also as a result of an automated process, e.g. a timer which emits an event each second). In the Redux world, these events are called actions. An action is no more than a Javascript object containing a type (a descriptive unique id) and optional additional information associated with that event.

Example 1 – A basic Redux action

Imagine a scenario where a user clicks on a button. Clicking the button toggles the application’s background colour. If the button were connected to Redux, it would then call the dispatcher and pass it an action:

{
  type: 'TOGGLE_BACKGROUND_COLOUR'
}

Example 2 – A Redux action with a payload

Suppose now the application displayed an input field which allowed the user to specify a background colour. The value of the text field could be extracted and added as data to the action:

{
  type: 'TOGGLE_BACKGROUND_COLOUR',
  payload: {
    colour: 'red' // Taken from the value of the text field, for example
  }
}

When the dispatcher receives an action, it passes the action down to its registered stores. If the store is configured to process an action of that type, it runs the action through its configuration and emits a modified state. If not, it simply ignores the action.

Example 3 – Processing an action

Suppose an application had two stores:

  • Store X has an initial state { colour: 'red' } and is configured to process TOGGLE_BACKGROUND_COLOUR actions. If it encounters an action of this type, it is configured to set its state to the colour in the action's payload.

  • Store Y has an initial state { users: [] } and is configured to process USER_LOAD actions. If it encounters an action of this type, it is configured to set its state to the users in the action's payload.

Suppose the following occurs:

  • A TOGGLE_BACKGROUND_COLOUR action is sent to the dispatcher with payload { colour: 'green' }.

The result would be:

  • Store Y ignores this action as it is not configured to process actions of the TOGGLE_BACKGROUND_COLOUR type. It therefore maintains its state as { users: [] }.

  • Store X, on the other hand, is configured to process TOGGLE_BACKGROUND_COLOUR actions and emits the modified state { colour: 'green' }.

Views bind the stores and dispatcher together. Views are, as their name suggests, the application's visual components.

When using Redux in combination with React, Views, in the Redux sense, are React components that render parts of the state (e.g. the application’s background colour) and re-render when those parts of the state change.

Redux doesn’t have to be used in conjunction with React, so the general definition of a view is a construct that re-renders every time the part of the state it watches changes. Views are additionally responsible for creating and passing actions to the dispatcher. As an example, a view might render two elements; a div displaying the current colour and a toggle button which, when clicked, sends a TOGGLE_BACKGROUND_COLOUR action to the dispatcher.

Figure 1: The flow of a Redux application

Pros

Software written following Redux's guidelines is readable, quick to learn and easy to work with. Whilst Redux's verbosity is often cited as a pitfall, the level of detail that verbosity provides helps debugging. Debugging is also aided by a highly detailed, well-constructed browser extension: Redux DevTools. At any given point in time, a developer can open up Redux DevTools and not only be presented with an overview of an application's state, but also with the effect of each action on that state. That's certainly a tick in the box for Redux when it comes to ease of debugging.

Pairing Redux with React is as simple as installing and configuring the react-redux library. The library enforces a certain pattern of integrating the two, and as such, React-Redux projects are generally structured in similar ways. This is incredibly beneficial for developers, as the learning curve when jumping into new Redux-based projects is significantly reduced.

Redux also allows applications to rehydrate their state. In non-Redux terms, this means that when a Redux application starts, a developer can provide the application with a previously saved state. This is incredibly useful in instances where an application’s state needs to be persisted between sessions, or when data from the server needs to be cached. As an example, suppose our application sends data to and from an authenticated API and needs to send an authentication token on each request. It’d be impractical if this token were to be lost every time the page was refreshed or the browser closed. Redux could instead be configured so that the authentication token always be persisted and then re-retrieved from the browser’s Local Storage when the application started. The ability to re-hydrate a state can also lead to significantly faster application start-up times. In an application which displays a list of users, Redux could be configured to cache/persist the list of users, and on startup, display the cached version of that list until the application has time to make an API call to fetch the latest, updated list of users.

All in all, Redux proves itself to be a library which is easy to learn, use and maintain. It provides excellent React integration and the community around it offer a plethora of tools that help optimise and simplify complicated application scenarios.

Cons

As previously mentioned, Redux is considered verbose; that is, a software developer has to write a lot of code in order to connect a View to a Dispatcher. Many regard this as ‘boilerplate’ code, however, the author considers this a misuse of the word ‘boilerplate’, as code is not repeated, but rather, a developer has to write a lot of it.

Additionally, while Redux describes the flow and states very well, its precision negatively impacts refactoring and maintainability. If there is a significant change to the structure of the components, it's very difficult to modify the existing actions and reducers. It's easy to lose time trying to find the balance between refactoring what you had and just starting from scratch.

Example 4 – The disadvantages of Redux

class Foo extends React.Component {
  render() {
    return (
      <div onClick={this.props.click}>
        {this.props.hasBeenClicked
          ? "I've been clicked"
          : "I haven't been clicked yet"
        }
      </div>
    );
  }
}

Consider the bare-bones example above that illustrates:

  • A component which starts by displaying the string “I haven’t been clicked yet” and then changes to display the string “I’ve been clicked” when the initial string is clicked.

If we were to use Redux as the state store for this scenario, we’d have to:

  • Create and define a reducer (Redux’s term for a store’s configuration function) and a corresponding action
  • Configure this component to use Redux. This would involve wiring up the various prop types (those passed down from the parent component, a click action to send to the dispatcher and a hasBeenClicked prop that needs to be read out from the Redux state)

What could remain a fairly small file if we were to use, say, class Foo’s component state (see the React Component State chapter for details), would end up as a series of long files if we were to use Redux. Clearly Redux’s forte doesn’t lie in managing a purely presentational component’s state.

Furthermore, suppose we had fifty presentational components like Foo, whose states were only used by the components themselves. Storing each component’s UI state in the global application state would not only pollute the Redux state tree (imagine having fifty different reducers/actions just to track tiny UI changes), but would actually slow down Redux’s performance. There’d be a lot of state changes, and each time the state changed, Redux would have to calculate which views were listening on that changed state and notify them of the changes.

Managing the state of simple UI/presentational components is therefore not a good fit for Redux.

Summary

Redux’s strengths lie in acting as an application’s global state manager. That is, Redux works extremely well for state which needs to be accessed from across an application at various depths. Its enforcement of common patterns and its well-constructed developer tools means that developers can reasonably quickly open up unfamiliar Redux-based projects and understand the code. Finally, the out of the box ability to save/restore parts of the state means that Redux outweighs most other contenders as a global state manager.

Mobx

At the time of writing, as with Redux, Mobx was first released over three years ago. Although still sizeable, its community is much smaller than Redux’s; it has over 16,000 stars on Github and almost 700 related questions on StackOverflow. The library is maintained by one main contributor, and although other contributors do exist, the combined number of their commits is dwarfed by those from the main contributor.

In its most basic form, Mobx implements the observer pattern. In practice, this means that Mobx allows an object to be declared as an ‘observable’ which ‘observers’ can then subscribe to and receive notifications of the observable object’s changes. When combining Mobx with React, observers can take the form of React components that re-render each time the observables they’re subscribed to change, or basic functions which re-run when the observables they reference change.

Pros

What Mobx lacks in community support, it makes up for in its ease of setup. Re-implementing Example 4 above using Mobx, a developer would simply:

  • Declare the component an observer
  • Give the class a boolean field property and register it as an observable

Example 5 – A simple Mobx setup

@observer
class Foo extends React.Component {

  hasBeenClicked = observable.box(false);

  render() {
    return (
      <div onClick={() => this.hasBeenClicked.set(true)}>
        {this.hasBeenClicked.get()
          ? "I've been clicked"
          : "I haven't been clicked yet"
        }
      </div>
    );
  }
}

Short and sweet. Mobx at its finest.

The tree-like state structures required by Redux can feel rather limiting. In contrast, a developer using Mobx as a global state manager could encapsulate the global state in several singleton objects, each making up part of the state. Although there are recommended guidelines for this approach (Mobx Store, Mobx project structure), Mobx doesn't enforce them as strictly as Redux enforces its recommended ways of structuring code.

Mobx proves itself as a worthy candidate for managing the state of UI/presentational components. Furthermore, it offers the flexibility of being able to declare observables/observers anywhere in the application, thus preventing pollution of the global application state and allowing some states to be encapsulated locally within React components. Finally, when used as a global application state manager, the ability to model the global state in an object-orientated manner can also seem more logical than the tree structure enforced by Redux.

Cons

Mobx seems great so far; it’s a small, niche library which does exactly what it says on the tin. Where could it possibly go wrong? Lots of places…

For starters, Mobx's debugging tool is far inferior to the host of tools offered by the Redux community. Mobx-trace works perfectly well when trying to ascertain who/what triggered an update to an observable, or why an observer re-rendered/re-executed, but in contrast to Redux DevTools, it lacks the ability to provide an overview of the entire application state at any given point in time.

Moreover, Mobx doesn’t come with any out of the box persist/restore functionality, and although there are libraries out there to help, these libraries have such small user bases that they don’t provide Typescript support. The Mobx creator has, in the past, claimed that it wouldn’t be too hard for a developer to write a custom persistence library, but having simple, out of the box persist/restore functionality as Redux does is still favourable.

Beyond the simplicity presented in Example 5, Mobx is a library that provides an overwhelming number of low-level functions. In doing so, and in not always providing clear guidelines describing the use-cases of each function, the library allows developers to trip over themselves. As examples, developers could read the Mobx documentation and still be left with the questions:

  • When is it best to use autorun vs reaction vs a simple observer?
  • When should I “dispose” of observable functions?

In summary, the relatively small community surrounding Mobx has led to the library lacking in a solid set of developer tools, add-on libraries and resources to learn about its intricacies. Ultimately this is a huge negative aspect and should be heavily considered when opting to use Mobx in an SPA.

Summary

As a library, Mobx has huge potential; its core concepts are simple and the places in which it lacks could easily be improved upon. Its downfall, however, is the fact that it only has one main contributor and a small community surrounding it. This means that the library lacks, and will lack, essentials such as in-depth tutorials and development tools.

Added to this, as of Mobx v5, the library dropped support for IE11. In doing so, the library now fails to meet Encodo’s cross-compatibility requirements. The current claim is that Mobx v4 will remain actively supported, but with a limited number of contributors, it is debatable whether or not support for v4 will remain a priority.

Beyond the lack of IE11 support, Mobx's lack of coherent guidelines, sub-par debugging tools and the free rein given to developers to architect projects as they please make for problematic code maintenance.

React Component State

React was initially released over five years ago and, from its inception, has always offered a way of managing and maintaining state. Here we must note that, whilst a state-management system exists, it is not intended to be used as a global state-management system. Instead, it is designed to function as a local state system for UI/presentational components. As such, the evaluation of React Component State will only focus on the benefits and drawbacks of using it as a local state manager.

Pros

React component state is an easy to learn, easy to use framework for managing a small UI state.

Example 6 – Component state

class Foo extends React.Component {
  state = {
    hasBeenClicked: false
  };

  render() {
    return (
      <div
        onClick={() => this.setState({ hasBeenClicked: true })}
      >
        {this.state.hasBeenClicked
          ? "I've been clicked"
          : "I haven't been clicked yet"
        }
      </div>
    );
  }
}

Refreshingly simple.

Configuring a Component state can be as simple as:

  • Defining a component’s initial state
  • Configuring the component’s render function to display that state
  • Binding to UI triggers (e.g. onClick methods) which update the component’s state, thus forcing a re-render of the component

React Component State is just as concise as the similar MobX example above, without introducing a separate library.

Cons

React component states aren't well suited to managing the state of hierarchies of UI components.

Example 7 – The drawbacks of using Component state

class Foo extends React.Component {
  state = {
     hasBeenClicked: false,
     numberOfClicks: 0
  };

  onClick = () => {
    return this.setState(
      (previousState) =>
        ({
          hasBeenClicked: true,
          numberOfClicks: previousState.numberOfClicks + 1
        })
    );
  }

  render() {
    return (
      <div>
        <FooDisplay
          hasBeenClicked={this.state.hasBeenClicked}
          numberOfClicks={this.state.numberOfClicks}
        />
        <Button onClick={this.onClick} />
      </div>
    );
  }
}


class FooDisplay extends React.Component {
  render() {
    return (
      <div>
        {this.props.hasBeenClicked
          ? "I've been clicked"
          : <CountDisplay
              numberOfClicks={this.props.numberOfClicks}
            />
        }
      </div>
    );
  }
}


class CountDisplay extends React.Component {
  render() {
    return (
      <div>
        I’ve been clicked {this.props.numberOfClicks} times
      </div>
    );
  }
}


class Button extends React.Component {
  render () {
    return (
      <button onClick={this.props.onClick}>
        Click me
      </button>
    );
  }
}

Although basic, the example above attempts to illustrate the following:

  • State/state modifying functions have to be passed down through the hierarchy of components as props. If the components were split into multiple files (as is common to do in a React project), it’d become cumbersome to trace the source of CountDisplay’s props
  • FooDisplay’s only use for its numberOfClicks prop is to pass it down further. This feels a bit sloppy, but is the only way of getting numberOfClicks down to CountDisplay when using Component State.

Summary

React Component States are often overlooked. Yes, they are limited and only work well for a single, specific use case (managing the UI state of a single component), but they do this extremely well. Software developers often claim that they need more fully-fledged state-management libraries such as Redux or Mobx, but if they only use them to manage UI state, they're probably misusing those libraries.

React component state is, as its name suggests, a way of managing state for a single component. React has this functionality built in, which begs the question: is there ever really a use-case for using an alternative library to manage a single component's state?

React Contexts

React 16.3 introduced a public-facing 'Context' API. Contexts were part of the library prior to the public API, and other libraries such as Redux were already making use of them as early as two years ago.

React Contexts excel where the Component State architecture begins to crumble; with Contexts, applications no longer have to pass state data down through the component tree to the consumer of the data. Like Redux, React Contexts aren’t well suited to tracking the state of single presentational components; an application tracking single component states with Contexts would end up being far too complicated. Rather, React Contexts are useful for managing the state of hierarchies of presentational components.

Pros

React Contexts allow developers to encapsulate a set of UI components without affecting the rest of the application. By encapsulating the state of a hierarchy of UI components, the hierarchy can be used within any application, at any depth of a component tree. Contexts furthermore allow developers to model UI state in an OO structure. In this sense, React Contexts (in addition to React Component State) provide many of the advantages of Mobx (again, without pulling in a separate library).

UI states are quite often a set of miscellaneous booleans and other variables which don't necessarily fit into hierarchical tree structures. The ability to encapsulate these variables into one or several objects is a much better fit. A final benefit of using Contexts is that they allow all components below the hierarchy's root component to retrieve the state without interfering with intermediary components in the process.

Example 8 – React Contexts

class VisibilityStore {
  isVisible = true;
  toggle = () => this.isVisible = !this.isVisible;
}

const VisibilityContext = React.createContext(new VisibilityStore());

class Visibility extends React.Component {
  store = new VisibilityStore();
  render() {
    return (
       <VisibilityContext.Provider value={this.store}>
         <VisibilityButton />
         <VisibilityDisplay />
       </VisibilityContext.Provider>
    );
  }
}

class VisibilityButton extends React.Component {
  render() {
    return (
      <VisibilityContext.Consumer>
        {(context) => <button onClick={context.toggle} />}
      </VisibilityContext.Consumer>
    );
  }
}

class VisibilityDisplay extends React.Component {
  render() {
    return (
      <VisibilityContext.Consumer>
        {
          (context) =>
            <div>
              {context.isVisible
                ? 'Visible'
                : 'Invisible'
              }
            </div>
        }
      </VisibilityContext.Consumer>
    );
  }
}

The example above illustrates modelling the UI state as an object (VisibilityStore), retrieving the UI state (VisibilityDisplay) and finally updating the state (VisibilityButton). Although simple, it shows how state can be accessed at various depths of the component tree without affecting intermediary nodes.

Cons

Using Contexts to manage the state of single components would be overkill. Contexts are also ill-equipped to be used as global state managers; they lack a persist/re-load mechanism, and additionally, lack debugging tools which would help provide an overview of the application’s state at any given point in time.

Summary

React Contexts are well suited to a single use-case; managing the state of a group of UI components. Contexts, on their own, aren’t the solution to managing the state of an entire SPA, but the React team’s public release of the Context API comes at a time where it is common to see SPA states bloated full of UI-related state. Developers should therefore seriously consider trimming down their global application states by making use of Contexts.

Alternatives

Although the main React-compatible state management libraries have already been evaluated in this document, it is important to evaluate alternative libraries that are growing in popularity.

Undux

Undux was first released a year ago and sells itself as a lightweight Redux alternative. In just under a year it has amassed nearly 1000 Github stars. That being said, the library still lacks a community around it; there’s still only one main contributor and resources on the library are scarce. Having a single contributor means that the library suffers from under-delivering on essential features like state selectors.

That aside, Undux seems like a promising library; it strips out the verbosity of Redux, works with React's debugging tools, supports Typescript and is highly cross-browser compatible. If the size of Undux's community and number of contributors were to increase, it could be a real contender to Redux.

React Easy State

Like Undux, React Easy State was released over a year ago and has amassed just over 1000 Github stars. It sells itself as an alternative to Mobx and has gained a strong community around it. Both official and non-official resources are plentiful, Typescript support comes out of the box and the library’s API looks extremely promising. React Easy State, however, cannot be considered an SPA management library for Encodo’s purposes as it doesn’t support (and states it will never support) Internet Explorer.

Conclusion

Software libraries are built out of a need to solve a specific problem, or a set of specific problems. Software developers should be mindful of using libraries to solve these sets of problems, and not overstretch libraries to solve problems they weren’t ever designed to solve. Dan Abramov’s blogpost on why Redux shouldn’t be used as the go-to library for all SPA state management problems highlights this argument perfectly.

In light of this, Encodo propose that the use of multiple libraries to solve different problems is beneficial, so long as there are clear rules detailing when one library should be used over another. Having evaluated several different SPA state management libraries, Encodo conclude by suggesting that SPAs should use a combination of Redux, React Contexts and React Component states:

  • React Component states should be used to manage the state of individual presentational components whose states aren't required by the rest of the application.
  • React Contexts should be used to manage the state of hierarchies of presentational components. Again, beyond the hierarchies, the states encapsulated by Contexts shouldn’t be required by the rest of the application.
  • Redux should be used to store any state that needs to be used across the application, or needs to be persisted and then re-initialised.

MobX has been omitted from the list of recommendations because, upon evaluation, Encodo conclude that it does not meet their requirements. MobX exposes a large surface area, offering solutions to a wide range of problems rather than a small set of optimised ones. Many of its advantages, such as mapping state in an OO manner and concise, simple bindings, are already provided by React Component states and React Contexts.

React Easy State, the contender to MobX, has also been omitted from Encodo's recommendations: although it is certainly a promising library with a growing community, it doesn't support Internet Explorer and therefore does not fulfil Encodo's requirements.

Finally, although Undux could be a strong contender to replace Redux, at the time of writing, Encodo feel that the library is not mature enough to be a production-ready, future-proof choice and therefore also exclude it from their list of recommendations.

Removing unwanted references to .NET 4.6.1 from web applications

The title is a bit specific for this blog post, but that's the gist of it: we ended up with a bunch of references to an in-between version of .NET (4.6.1) that was falsely advertising itself as a better candidate for satisfying 4.6.2 dependencies. This is a known issue; there are several links to MS GitHub issues below.

In this blog, I will discuss direct vs. transient dependencies as well as build-time vs. run-time dependencies.

tl;dr

If you've run into problems with an application targeting .NET Framework 4.6.2 that does not compile on certain machines, it's possible that the binding redirects Visual Studio generated for you reference versions of assemblies that aren't installed anywhere except on a machine with Visual Studio.

How I solved this issue:

  • Remove the C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\Microsoft\Microsoft.NET.Build.Extensions\net461\ directory
  • Remove all System* binding redirects
  • Clean out all bin/ and obj/ folders
  • Delete the .vs folder (may not be strictly necessary)
  • Build in Visual Studio
  • Observe that a few binding-redirect warnings appear
  • Double-click them to re-add the binding redirects, but this time to actual 4.6.2 versions (you may need to add <AutoGenerateBindingRedirects>true</AutoGenerateBindingRedirects> to your project)
  • Rebuild and verify that you have no more warnings

The product should now run locally and on other machines.

For more details, background and the story of how I ran into and solved this problem, read on.

Building Software

What do we mean when we say that we "build" an application?

Building is the process of taking a set of inputs and producing an artifact targeted at a certain runtime. Some of these inputs are included directly while others are linked externally.

  • Examples of direct inputs are the binary artifacts produced from the source code that comprises your application
  • Examples of external inputs are OS components and runtime environments

The machine does exactly what you tell it to, so it's up to you to make sure that your instructions are as precise as possible. However, you also want your application to be flexible so that it can run on as wide an array of environments as possible.

Your source code consists of declarations. We've generally got the direct inputs under control. The code compiles and produces artifacts as expected. It's the external-input declarations where things go awry.

What kind of external inputs does our application have?

  • System dependencies in the runtime target (assemblies like System.Runtime, System.Data, etc.), each with a minimum version
  • Third-party dependencies pulled via NuGet, each with a minimum version

How is this stitched together to produce the application that is executed?

  • The output folder contains our application, our own libraries and the assemblies from NuGet dependencies
  • All other dependencies (e.g. system dependencies) are pulled from the environment

The NuGet dependencies are resolved at build time. All resources are pulled and added to the release on the build machine. There are no run-time decisions to make about which versions of which assemblies to use.
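
For illustration, a direct NuGet dependency declared with the PackageReference format looks something like this (the package name and version are only illustrative):

<ItemGroup>
  <!-- A direct dependency: NuGet resolves and copies this at build time -->
  <PackageReference Include="Newtonsoft.Json" Version="11.0.2" />
</ItemGroup>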

Dependencies come in two flavors:

  • Direct: A reference in the project itself
  • Transient: A reference inherited indirectly from another direct or transient reference

It is with the transient references that we run into issues. The following situations can occur:

  • A transient dependency is referenced one or more times with the same version. This is no problem, as the builder simply uses that version or substitutes a newer version if that version is no longer available (rare, but possible)
  • A transient dependency is referenced in different versions. In this case, the builder tries to substitute a single version for all requirements. This generally works OK since most dependencies require a given version or higher. It may be that one or another library cannot work with all newer versions, but this is also rare. In this case, the top-level assembly (the application) must include a hint (an assembly-binding redirect) that indicates that the substitution is OK. More on these below.
  • A transient dependency requires a lower version than the version that is directly referenced. This is also not a problem, as the transient dependency is satisfied by the direct dependency with the higher version. In this case, the top-level application must also include an assembly-binding redirect to allow the substitution without warning (see the example just after this list).
  • A transient dependency requires a higher version than the version that is directly referenced. This is an error (no longer just a warning) that must be solved by either downgrading the dependency that leads to the problematic transient dependency or upgrading the direct dependency. Generally, the application will upgrade the direct dependency.
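
To make the third case concrete, here's a hypothetical setup: the application directly references version 11.0.0.0 of a library, while one of its NuGet dependencies was compiled against version 10.0.0.0 of that same library. A redirect like the following (explained in the next section) tells the runtime that the newer, directly referenced assembly may satisfy the older request:

<!-- Hypothetical: any request up to 11.0.0.0 is satisfied by the directly referenced 11.0.0.0 -->
<bindingRedirect oldVersion="0.0.0.0-11.0.0.0" newVersion="11.0.0.0"/>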

Assembly-Binding Redirects

An application generally includes an app.config (desktop applications or services) or web.config XML file that includes a section where binding redirects are listed. A binding redirect indicates the range of versions that can be mapped (or redirected) to a certain fixed version (which is generally also included as a direct dependency).

A redirect looks like this (a more-complete form is further below):

<bindingRedirect oldVersion="0.0.0.0-4.0.1.0" newVersion="4.0.1.0"/>

When the direct dependency is updated, the binding redirect must be updated as well (generally by updating the maximum version number in the range and the version number of the target of the redirect). NuGet does this for you when you're using packages.config. If you're using PackageReference, you must update these manually. This situation is currently not so good, as it increases the likelihood that your binding redirects remain too restrictive.
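
As a sketch (version numbers are illustrative): if the direct dependency moves from assembly version 4.0.1.0 to 4.0.2.0, the redirect's range and target have to be widened to match:

<!-- Before the upgrade -->
<bindingRedirect oldVersion="0.0.0.0-4.0.1.0" newVersion="4.0.1.0"/>

<!-- After the upgrade: with packages.config, NuGet updates this for you; with PackageReference, you do it by hand -->
<bindingRedirect oldVersion="0.0.0.0-4.0.2.0" newVersion="4.0.2.0"/>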

NuGet Packages

NuGet packages are resolved at build time. These dependencies are delivered as part of the deployment. If they could be resolved on the build machine, then they are unlikely to cause issues on the deployment machine.

System Dependencies

Where the trouble comes in is with dependencies that are resolved at execution time rather than build time. The .NET Framework assemblies are resolved in this manner. That is, an application that targets .NET Framework expects certain versions of certain assemblies to be available on the deployment machine.

We mentioned above that the algorithm sometimes chooses the desired version or higher. This is not the case for dependencies that are in the assembly-binding redirects. Adding an explicit redirect locks the version that can be used.

This is generally a good idea as it increases the likelihood that the application will only run in a deployment environment that is extremely close or identical to the development, building or testing environment.

Aside: Other Bundling Strategies

How can we avoid these pesky run-time dependencies? There are several ways that people have come up with, in increasing order of flexibility:

  • Deliver hardware and software together. This is common in industrial applications and used to be much more common for businesses, as well. Nearly bulletproof. If it worked in the factory, it will work for the customer.
  • Deliver a VM (virtual machine) as your application. This includes the entire execution environment right down to the hardware. Safe, but inefficient.
  • Use a container (e.g. Docker) to deliver a description of the execution environment. The image is built to match the declaration. This is also quite stable and can avoid many of the substitution errors outlined above. If components are outdated, the machine fails to start and the definition must first be updated (and, presumably, tested). This type of deployment is getting more reliable but is also overkill for many applications.
  • Deliver the runtime with the application instead of describing the runtime you'd like to have. Targeting .NET Core instead of .NET Framework includes the runtime. This seems like a nice alternative and it's not surprising that Microsoft went in this direction with .NET Core. It's a good solution to the external-dependency issues outlined above.

To sum up:

  • A VM delivers the OS, runtime and application.
  • A Container delivers a description of the OS and runtime as well as the application itself.
  • .NET Core includes the runtime and application and is OS-agnostic (within reason).
  • .NET Framework includes only the application and some directives on the remaining components to obtain from the runtime environment.

Our application targets .NET Framework (for now). We're looking into .NET Core, but aren't ready to take that step yet.

Where can the deployment go wrong?

To sum up the information from above, problems arise when the build machine contains components that are not available on the deployment machine.

How can this happen? Won't the deployment machine just use the best match for the directives included in the build?

Ordinarily, it would. However, if you remember our discussion of assembly-binding redirects above, those are set in stone. What if you included binding redirects that required versions of system dependencies that are only available on your build machine ... or even your developer machine?

Special Tip for Web Applications

We actually discovered an issue in our deployment because the API server was running, but the Authentication server was not. The Authentication server was crashing because it couldn't find the runtime it needed in order to compile its Razor views (it has ASP.NET MVC components). We only discovered this issue on the deployment server because the views were only ever compiled on-the-fly.

To catch these errors earlier in the deployment process, you can enable pre-compilation of views in release mode so that the build server fails to compile instead of producing a build that will sometimes fail to run.

Add <MvcBuildViews>true</MvcBuildViews> to the PropertyGroup for the release build of any MVC projects, as shown in the example below:

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
  <DebugType>pdbonly</DebugType>
  <Optimize>true</Optimize>
  <OutputPath>bin</OutputPath>
  <DefineConstants>TRACE</DefineConstants>
  <ErrorReport>prompt</ErrorReport>
  <WarningLevel>4</WarningLevel>
  <LangVersion>6</LangVersion>
  <MvcBuildViews>true</MvcBuildViews>
</PropertyGroup>

How do I create a redirect?

We mentioned above that NuGet is capable of updating these redirects when the target version changes. An example is shown below. As you can see, they're not very easy to write:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="System.Reflection.Extensions" publicKeyToken="B03F5F7F11D50A3A" culture="neutral"/>
        <bindingRedirect oldVersion="0.0.0.0-4.0.1.0" newVersion="4.0.1.0"/>
      </dependentAssembly>
      <!-- Other bindings... -->
    </assemblyBinding>
  </runtime>
</configuration>

Most binding redirects are created automatically when MSBuild emits a warning that one would be required in order to avoid potential runtime errors. If you compile with MSBuild in Visual Studio, the warning indicates that you can double-click it to automatically generate a binding redirect.

If the warning doesn't indicate this, then it will tell you that you should add the following to your project file:

<AutoGenerateBindingRedirects>true</AutoGenerateBindingRedirects>

After that, you can rebuild to show the new warning, double-click it and generate your assembly-binding redirect.
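
For reference, that element typically lives in a PropertyGroup of the project file; a minimal sketch:

<PropertyGroup>
  <!-- Lets MSBuild generate assembly-binding redirects during the build -->
  <AutoGenerateBindingRedirects>true</AutoGenerateBindingRedirects>
</PropertyGroup>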

How did we get the wrong redirects?

When MSBuild generates a redirect, it uses the highest version of the dependency that it found on the build machine. In most cases, this will be the developer machine. A developer machine tends to have more versions of the runtime targets installed than either the build or the deployment machine.

A Visual Studio installation, in particular, includes myriad runtime targets, including many that you're not using or targeting. These are available to MSBuild but are ordinarily ignored in favor of more appropriate ones.

That is, unless there's a bit of a bug in one or more of the assemblies included with one of the SDKs...as there is with the net461 distribution in Visual Studio 2017.

Even if you are targeting .NET Framework 4.6.2, MSBuild will still sometimes reference assemblies from the net461 distribution because those assemblies are incorrectly marked as having a higher version than the ones in 4.6.2 and are therefore taken first.

I found the following resources somewhat useful in explaining the problem (though none really offer a solution):

How can you fix the problem if you're affected?

You'll generally have a crash on the deployment server that indicates a certain assembly could not be loaded (e.g. System.Runtime). If you show the properties for that reference in your web application, do you see the path C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\Microsoft\Microsoft.NET.Build.Extensions\net461 somewhere in there? If so, then your build machine is linking in references to this incorrect version. If you let MSBuild generate binding redirects with those referenced paths, they will refer to versions of runtime components that do not generally exist on a deployment machine.

Tips for cleaning up:

  • Use MSBuild to debug this problem. R# Build is nice, but not as good as MSBuild for this task.
  • Clean and Rebuild to force all warnings
  • Check your output carefully.
    • Do you see warnings related to package conflicts?
    • Ambiguities?
    • Do you see the path C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\Microsoft\Microsoft.NET.Build.Extensions\net461 in the output?

A sample warning message:

[ResolvePackageFileConflicts] Encountered conflict between 'Platform:System.Collections.dll' and 'CopyLocal:C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\Microsoft\Microsoft.NET.Build.Extensions\net461\lib\System.Collections.dll'.  Choosing 'CopyLocal:C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\Microsoft\Microsoft.NET.Build.Extensions\net461\lib\System.Collections.dll' because AssemblyVersion '4.0.11.0' is greater than '4.0.10.0'.
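
Based on the version numbers in that warning, the redirect that MSBuild then generates looks something like the first entry below (a reconstruction, not verbatim output): it targets the 4.0.11.0 assembly that only exists in the net461 folder on the build machine. The corrected redirect targets the 4.0.10.0 version that actually ships with .NET Framework 4.6.2:

<!-- Generated against the net461 copy on the build machine; 4.0.11.0 exists only there -->
<bindingRedirect oldVersion="0.0.0.0-4.0.11.0" newVersion="4.0.11.0"/>

<!-- Corrected to the version installed with .NET Framework 4.6.2 -->
<bindingRedirect oldVersion="0.0.0.0-4.0.10.0" newVersion="4.0.10.0"/>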

The Solution

As mentioned above, but reiterated here, this is what I did to finally stabilize my applications:

  • Remove the C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\Microsoft\Microsoft.NET.Build.Extensions\net461\ directory
  • Remove all System* binding redirects
  • Clean out all bin/ and obj/ folders
  • Delete the .vs folder (may not be strictly necessary)
  • Build in Visual Studio
  • Observe that a few binding-redirect warnings appear
  • Double-click them to re-add the binding redirects, but this time to actual 4.6.2 versions (you may need to add <AutoGenerateBindingRedirects>true</AutoGenerateBindingRedirects> to your project)
  • Rebuild and verify that you have no more warnings
  • Deploy and TADA!

One more thing

When you install any update to Visual Studio, it will silently repair these missing files for you. So be aware, and check the folder after any installation or upgrade to make sure that the problem doesn't creep up on you again.