v5.0.15: bug fixes for Winform and Report Manager

The summary below describes major new features, items of note and breaking changes. The full list of issues is in the release notes and is available to those with access to the Encodo issue tracker.

Highlights

Breaking changes

  • No known breaking changes.
SPA state management libraries

Abstract

Encodo has updated its technology stack for SPAs. Current and future projects will use a combination of React Component States, React Contexts and Redux:

  • Use React Component States to manage state that is used only in a single component.
  • Use React Contexts to manage presentational state for component hierarchies.
  • Use Redux to manage global or persistent state.

The following article provides justification and reasoning for the conclusions listed above.

Overview

Encodo have undertaken a number of Single Page Application (SPA) projects over the last several years. During this time, web technologies, common standards and best practices have changed significantly. As such, these projects each had different configurations and used different sets of web technologies.

The last two years have brought a reduction in churn in web technologies. Encodo have therefore decided to evaluate SPA libraries with the goal of defining a stack that will be stable for the next few years. The outcome of this evaluation is a set of proposed best practices for architecting an SPA and, most importantly, an SPA’s state.

Participants

  • Marc Duerst
  • Marco von Ballmoos
  • Remo von Ballmoos
  • Richi Buetzer
  • Tom Szpytman

Introduction

Following an earlier evaluation of SPA rendering libraries, Encodo’s SPA projects have all relied upon the Javascript library React. To date, the company still feels that React provides an adequate and stable platform upon which to build SPAs.

Where the company feels its knowledge can be improved is in how state should be structured in an SPA, and in which SPA state libraries, or combinations of libraries, provide the most maintainable, stable and readable architectures. As such, this evaluation focuses only on SPA-state libraries, not SPA-rendering libraries.

Requirements

Encodo focusses on both the building and maintenance of elegant solutions. It is therefore paramount that these software solutions are stable, yet future-proof in an ever-changing world. Many of Encodo’s clients request that solutions be backward-compatible with older browsers. An SPA state library must therefore adhere to the following criteria:

  • The library must have a moderately-sized community
  • The library must have a solid group of maintainers and contributors
  • Typescript typings must be available (and maintained)
  • The library must be open-source
  • The library must support all common browsers, as far back as IE11
  • The library must have a roadmap / future
  • Code using the library must be maintainable and readable (and support refactoring in a useful manner)

Candidates

Redux

Redux was released over three years ago and has amassed over 40,000 stars on Github. The project team provides a thorough set of tutorials, and software developers are additionally able to find a plethora of other resources online. Furthermore, with almost 15,000 related questions on StackOverflow, the chances of finding unique problems that other developers haven’t previously encountered are slim. Redux has over 600 contributors, and although its main contributors currently work for Facebook, the library is open source in its own right and not owned by Facebook.

Redux is an implementation of the Flux pattern; that is, you have a set of stores that each hold and maintain part of an application state. Stores register with a dispatcher, and by connecting to the dispatcher, receive notifications of events (usually as a result of a user input, but often also as a result of an automated process, e.g. a timer which emits an event each second). In the Redux world, these events are called actions. An action is no more than a Javascript object containing a type (a descriptive unique id) and optional additional information associated with that event.

Example 1 – A basic Redux action

Imagine a scenario where a user clicks on a button. Clicking the button toggles the application’s background colour. If the button were connected to Redux, it would then call the dispatcher and pass it an action:

{
  type: 'TOGGLE_BACKGROUND_COLOUR'
}

Example 2 – A Redux action with a payload

Suppose now the application displayed an input field which allowed the user to specify a background colour. The value of the text field could be extracted and added as data to the action:

{
  type: 'TOGGLE_BACKGROUND_COLOUR',
  payload: {
    colour: 'red' // Taken from the value of the text field, for example
  }
}

When the dispatcher receives an action, it passes the action down to its registered stores. If the store is configured to process an action of that type, it runs the action through its configuration and emits a modified state. If not, it simply ignores the action.

Example 3 – Processing an action

Suppose an application had two stores:

  • Store X has an initial state { colour: 'red' } and is configured to process TOGGLE_BACKGROUND_COLOUR actions. If it encounters an action of this type, it is configured to set its state to the colour in the action’s payload.

  • Store Y has an initial state { users: [] } and is configured to process USER_LOAD actions. If it encounters an action of this type, it is configured to set its state to the users in the action’s payload.

Suppose the following occurs:

  • A TOGGLE_BACKGROUND_COLOUR action is sent to the dispatcher with payload { colour: 'green' }.

The result would be:

  • Store Y ignores this action as it is not configured to process actions of the TOGGLE_BACKGROUND_COLOUR type. It therefore maintains its state as { users: [] }.

  • Store X, on the other hand, is configured to process TOGGLE_BACKGROUND_COLOUR actions and emits the modified state { colour: 'green' }.
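
For illustration, here is a minimal sketch of Stores X and Y written as Redux reducers (the reducer names are illustrative; Redux would merge them into a single store with combineReducers):

const backgroundReducer = (state = { colour: 'red' }, action) => {
  switch (action.type) {
    case 'TOGGLE_BACKGROUND_COLOUR':
      // Store X: emit a modified state containing the payload's colour
      return { colour: action.payload.colour };
    default:
      // Ignore all other action types
      return state;
  }
};

const usersReducer = (state = { users: [] }, action) => {
  switch (action.type) {
    case 'USER_LOAD':
      // Store Y: emit a modified state containing the payload's users
      return { users: action.payload.users };
    default:
      return state;
  }
};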

Views bind the stores and dispatcher together. Views are, as their name suggests, the application’s visual components.

When using Redux in combination with React, Views, in the Redux sense, are React components that render parts of the state (e.g. the application’s background colour) and re-render when those parts of the state change.

Redux doesn’t have to be used in conjunction with React, so the general definition of a view is a construct that re-renders every time the part of the state it watches changes. Views are additionally responsible for creating and passing actions to the dispatcher. As an example, a view might render two elements: a div displaying the current colour and a toggle button which, when clicked, sends a TOGGLE_BACKGROUND_COLOUR action to the dispatcher.
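
A rough sketch of such a view, bound with react-redux’s connect (the ColourView component and all names are illustrative):

import React from 'react';
import { connect } from 'react-redux';

// A presentational view that renders the colour and dispatches the action.
const ColourView = ({ colour, toggle }) => (
  <div style={{ background: colour }}>
    <button onClick={() => toggle('green')}>Toggle colour</button>
  </div>
);

// Expose the relevant slice of the state as props.
const mapStateToProps = (state) => ({
  colour: state.colour
});

// Expose a prop that sends the action to the dispatcher.
const mapDispatchToProps = (dispatch) => ({
  toggle: (colour) => dispatch({
    type: 'TOGGLE_BACKGROUND_COLOUR',
    payload: { colour }
  })
});

export default connect(mapStateToProps, mapDispatchToProps)(ColourView);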

Figure 1: The flow of a Redux application

Pros

Software written following Redux’s guidelines is readable, quick to learn and easy to work with. Whilst Redux’s verbosity is often cited as a pitfall, the level of detail it provides helps debugging. Debugging is also aided by a highly detailed, well-constructed browser extension: Redux DevTools. At any given point in time, a developer can open up Redux DevTools and be presented not only with an overview of an application’s state, but also with the effect of each action on that state. That’s certainly a tick in the box for Redux when it comes to ease of debugging.

Pairing Redux with React is as simple as installing and configuring the react-redux library. The library enforces a certain pattern of integrating the two, and as such, React-Redux projects are generally structured in similar ways. This is incredibly beneficial for developers, as the learning curve when jumping into new Redux-based projects is significantly reduced.

Redux also allows applications to rehydrate their state. In non-Redux terms, this means that when a Redux application starts, a developer can provide the application with a previously saved state. This is incredibly useful in instances where an application’s state needs to be persisted between sessions, or when data from the server needs to be cached. As an example, suppose our application sends data to and from an authenticated API and needs to send an authentication token on each request. It’d be impractical if this token were lost every time the page was refreshed or the browser closed. Redux could instead be configured so that the authentication token is always persisted and then re-retrieved from the browser’s Local Storage when the application starts.

The ability to rehydrate a state can also lead to significantly faster application start-up times. In an application which displays a list of users, Redux could be configured to cache/persist the list of users and, on startup, display the cached version of that list until the application has time to make an API call to fetch the latest, updated list of users.
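
The principle can be sketched in a few lines (a simplified example; rootReducer is assumed, and a dedicated library such as redux-persist would handle this more robustly in practice):

import { createStore } from 'redux';

// Rehydrate: load any previously saved state from Local Storage.
// rootReducer is assumed to be defined elsewhere in the application.
const saved = localStorage.getItem('appState');
const preloadedState = saved ? JSON.parse(saved) : undefined;

const store = createStore(rootReducer, preloadedState);

// Persist: save the state on every change.
store.subscribe(() => {
  localStorage.setItem('appState', JSON.stringify(store.getState()));
});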

All in all, Redux proves itself to be a library which is easy to learn, use and maintain. It provides excellent React integration and the community around it offer a plethora of tools that help optimise and simplify complicated application scenarios.

Cons

As previously mentioned, Redux is considered verbose; that is, a software developer has to write a lot of code in order to connect a View to a Dispatcher. Many regard this as ‘boilerplate’ code; however, the author considers this a misuse of the word ‘boilerplate’, as the code is not repeated; rather, a developer simply has to write a lot of it.

Additionally, while Redux describes flows and states very well, its precision negatively impacts refactoring and maintainability. If there is a significant change to the structure of the components, it’s very difficult to modify the existing actions and reducers. It’s easy to lose time trying to find the balance between refactoring what you had and starting from scratch.

Example 4 – The disadvantages of Redux

class Foo extends React.Component {
  render() {
    return (
      <div onClick={this.props.click}>
        {this.props.hasBeenClicked
          ? "I've been clicked"
          : "I haven't been clicked yet"
        }
      </div>
    );
  }
}

Consider the bare-bones example above that illustrates:

  • A component which starts by displaying the string “I haven’t been clicked yet” and then changes to display the string “I’ve been clicked” when the initial string is clicked.

If we were to use Redux as the state store for this scenario, we’d have to:

  • Create and define a reducer (Redux’s term for a store’s configuration function) and a corresponding action
  • Configure this component to use Redux. This would involve wiring up the various prop types (those passed down from the parent component, a click action to send to the dispatcher and a hasBeenClicked prop that needs to be read out from the Redux state)
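
Sketched out, that wiring might look something like the following (a rough sketch; all names are illustrative):

import { connect } from 'react-redux';

// The action type and its creator
const FOO_CLICKED = 'FOO_CLICKED';
const click = () => ({ type: FOO_CLICKED });

// The reducer (Redux's term for a store's configuration function)
const fooReducer = (state = { hasBeenClicked: false }, action) => {
  switch (action.type) {
    case FOO_CLICKED:
      return { hasBeenClicked: true };
    default:
      return state;
  }
};

// Wiring the component to Redux; assumes fooReducer is mounted at 'foo'
const mapStateToProps = (state) => ({
  hasBeenClicked: state.foo.hasBeenClicked
});

export default connect(mapStateToProps, { click })(Foo);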

What could remain a fairly small file if we were to use, say, class Foo’s component state (see the React Component State chapter for details), would end up as a series of long files if we were to use Redux. Clearly Redux’s forte doesn’t lie in managing a purely presentational component’s state.

Furthermore, suppose we had fifty presentational components like Foo, whose states were only used by the components themselves. Storing each component’s UI state in the global application state would not only pollute the Redux state tree (imagine having fifty different reducers/actions just to track tiny UI changes), but would actually slow down Redux’s performance. There’d be a lot of state changes, and each time the state changed, Redux would have to calculate which views were listening on that changed state and notify them of the changes.

Managing the state of simple UI/presentational components is therefore not a good fit for Redux.

Summary

Redux’s strengths lie in acting as an application’s global state manager. That is, Redux works extremely well for state which needs to be accessed from across an application at various depths. Its enforcement of common patterns and its well-constructed developer tools means that developers can reasonably quickly open up unfamiliar Redux-based projects and understand the code. Finally, the out of the box ability to save/restore parts of the state means that Redux outweighs most other contenders as a global state manager.

Mobx

As with Redux, Mobx was first released over three years ago. Although still sizeable, its community is much smaller than Redux’s: at the time of writing, it has over 16,000 stars on Github and almost 700 related questions on StackOverflow. The library is maintained by one main contributor, and although other contributors do exist, the combined number of their commits is dwarfed by those of the main contributor.

In its most basic form, Mobx implements the observer pattern. In practice, this means that Mobx allows an object to be declared as an ‘observable’ which ‘observers’ can then subscribe to and receive notifications of the observable object’s changes. When combining Mobx with React, observers can take the form of React components that re-render each time the observables they’re subscribed to change, or basic functions which re-run when the observables they reference change.

Pros

What Mobx lacks in community support, it makes up for in its ease of setup. Re-implementing Example 4 above using Mobx, a developer would simply:

  • Declare the component an observer
  • Give the class a boolean field property and register it as an observable

Example 5 – A simple Mobx setup

import { observable } from 'mobx';
import { observer } from 'mobx-react';

@observer
class Foo extends React.Component {

  // observable.box wraps a primitive value in an observable container
  hasBeenClicked = observable.box(false);

  render() {
    return (
      <div onClick={() => this.hasBeenClicked.set(true)}>
        {this.hasBeenClicked.get()
          ? "I've been clicked"
          : "I haven't been clicked yet"
        }
      </div>
    );
  }
}

Short and sweet. Mobx at its finest.

The tree-like state structures required by Redux can feel rather limiting. In contrast, a developer using Mobx as a global state manager could encapsulate the global state in several singleton objects, each making up part of the state. Although there are recommended guidelines for this approach (e.g. Mobx Store, Mobx project structure), Mobx doesn’t enforce them as readily as Redux enforces its recommended ways of structuring code.
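
A minimal sketch of that approach (the store name and fields are illustrative; the decorators are from the Mobx v4 API):

import { observable, action } from 'mobx';

class UserStore {
  @observable users = [];

  @action
  setUsers(users) {
    this.users = users;
  }
}

// A singleton making up part of the global state; observers that
// reference userStore.users re-render/re-run when it changes.
export const userStore = new UserStore();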

Mobx proves itself as a worthy candidate for managing the state of UI/presentational components. Furthermore, it offers the flexibility of being able to declare observables/observers anywhere in the application, thus preventing pollution of the global application state and allowing some states to be encapsulated locally within React components. Finally, when used as a global application state manager, the ability to model the global state in an object-orientated manner can also seem more logical than the tree structure enforced by Redux.

Cons

Mobx seems great so far; it’s a small, niche library which does exactly what it says on the tin. Where could it possibly go wrong? Lots of places…

For starters, Mobx’s debugging tooling is far inferior to the host of tools offered by the Redux community. Mobx’s trace works perfectly well when trying to ascertain who/what triggered an update to an observable, or why an observer re-rendered/re-executed, but in contrast to Redux DevTools, it lacks the ability to provide an overview of the entire application state at any given point in time.

Moreover, Mobx doesn’t come with any out of the box persist/restore functionality, and although there are libraries out there to help, these libraries have such small user bases that they don’t provide Typescript support. The Mobx creator has, in the past, claimed that it wouldn’t be too hard for a developer to write a custom persistence library, but having simple, out of the box persist/restore functionality as Redux does is still favourable.

Beyond the simplicity presented in Example 5, Mobx is a library that provides an overwhelming number of low-level functions. In doing so, and in not always providing clear guidelines describing the use-cases of each function, the library allows developers to trip themselves up. As examples, developers could read the Mobx documentation and still be left with the following questions:

  • When is it best to use autorun vs reaction vs a simple observer?
  • When should I “dispose” of observable functions?
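
To make those questions concrete, here is a brief sketch of the two functions and the disposers they return (based on the documented Mobx v4 API):

import { observable, autorun, reaction } from 'mobx';

const temperature = observable.box(20);

// autorun runs once immediately, then re-runs whenever any
// observable it dereferences changes.
const stopLogging = autorun(() => {
  console.log(`Temperature: ${temperature.get()}`);
});

// reaction separates the tracked data (first function) from the
// side effect (second function), which only runs on changes.
const stopReacting = reaction(
  () => temperature.get(),
  (value) => console.log(`Changed to ${value}`)
);

// Both return disposer functions that must be called to stop observing.
stopLogging();
stopReacting();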

In summary, the relatively small community surrounding Mobx has led to the library lacking in a solid set of developer tools, add-on libraries and resources to learn about its intricacies. Ultimately this is a huge negative aspect and should be heavily considered when opting to use Mobx in an SPA.

Summary

As a library, Mobx has huge potential; its core concepts are simple and the places in which it lacks could easily be improved upon. Its downfall, however, is the fact that it only has one main contributor and a small community surrounding it. This means that the library lacks, and will lack, essentials such as in-depth tutorials and development tools.

Added to this, as of Mobx v5, the library dropped support for IE11. In doing so, the library now fails to meet Encodo’s cross-compatibility requirements. The current claim is that Mobx v4 will remain actively supported, but with a limited number of contributors, it is debatable whether or not support for v4 will remain a priority.

Beyond the lack of IE11 support, Mobx’s lack of coherent guidelines, sub-par debugging tools and the free rein given to developers to architect projects as they please make for problematic code maintenance.

React Component State

React was initially released over five years ago and, from its inception, has always offered a way of managing and maintaining state. Here we must note that, whilst a state management system exists, it is not intended to be used as a global state management system. Instead, it is designed to function as a local state system for UI/presentational components. As such, the evaluation of React Component State will only focus on the benefits and drawbacks of using it as a local state manager.

Pros

React component state is an easy-to-learn, easy-to-use framework for managing a small UI state.

Example 6 – Component state

class Foo extends React.Component {
  state = {
    hasBeenClicked: false
  };

  render() {
    return (
      <div
        onClick={() => this.setState({ hasBeenClicked: true })}
      >
        {this.state.hasBeenClicked
          ? "I've been clicked"
          : "I haven't been clicked yet"
        }
      </div>
    );
  }
}

Refreshingly simple.

Configuring a Component state can be as simple as:

  • Defining a component’s initial state
  • Configuring the component’s render function to display that state
  • Binding to UI triggers (e.g. onClick methods) which update the component’s state, thus forcing a re-render of the component

React Component State is just as concise as the similar Mobx example above, without introducing a separate library.

Cons

React component states aren’t well suited to managing the states of hierarchies of UI components.

Example 7 – The drawbacks of using Component state

class Foo extends React.Component {
  state = {
     hasBeenClicked: false,
     numberOfClicks: 0
  };

  onClick = () => {
    this.setState(
      (previousState) => ({
        hasBeenClicked: true,
        numberOfClicks: previousState.numberOfClicks + 1
      })
    );
  }

  render() {
    return (
      <div>
        <FooDisplay
          hasBeenClicked={this.state.hasBeenClicked}
          numberOfClicks={this.state.numberOfClicks}
        />
        <Button onClick={this.onClick} />
      </div>
    );
  }
}


class FooDisplay extends React.Component {
  render() {
    return (
      <div>
        {this.props.hasBeenClicked
          ? "I've been clicked"
          : <CountDisplay
              numberOfClicks={this.props.numberOfClicks}
            />
        }
      </div>
    );
  }
}


class CountDisplay extends React.Component {
  render() {
    return (
      <div>
        I’ve been clicked {this.props.numberOfClicks} times
      </div>
    );
  }
}


class Button extends React.Component {
  render () {
    return (
      <button onClick={this.props.onClick}>
        Click me
      </button>
    );
  }
}

Although basic, the example above attempts to illustrate the following:

  • State/state modifying functions have to be passed down through the hierarchy of components as props. If the components were split into multiple files (as is common to do in a React project), it’d become cumbersome to trace the source of CountDisplay’s props
  • FooDisplay’s only use for its numberOfClicks prop is to pass it down further. This feels a bit sloppy, but is the only way of getting numberOfClicks down to CountDisplay when using Component State.

Summary

React Component States are often overlooked. Yes, they are limited and only work well for a single specific use case (managing the UI state of a single component), but component states do this extremely well. Software developers often claim that they need more fully-fledged state management libraries such as Redux or Mobx, but if those libraries are only being used to manage UI states, they’re probably being misused.

React component state is, as its name suggests, a way of managing state for a single component. React has this functionality built in, begging the question: is there ever really a use-case for using an alternative library to manage a single component’s state?

React Contexts

React 16.3 introduced a public-facing ‘Context’ API. Contexts were part of the library prior to the public API, and other libraries such as Redux were already making use of them as early as two years ago.

React Contexts excel where the Component State architecture begins to crumble; with Contexts, applications no longer have to pass state data down through the component tree to the consumer of the data. Like Redux, React Contexts aren’t well suited to tracking the state of single presentational components; an application tracking single component states with Contexts would end up being far too complicated. Rather, React Contexts are useful for managing the state of hierarchies of presentational components.

Pros

React Contexts allow developers to encapsulate a set of UI components without affecting the rest of the application. By encapsulating the state of a hierarchy of UI components, the hierarchy can be used within any application, at any depth of a component tree. Contexts furthermore allow developers to model UI state in an OO structure. In this sense, React Contexts (in addition to React Component State) provide many of the advantages of Mobx (again, without pulling in a separate library).

UI states are quite often a set of miscellaneous booleans and other variables which don’t necessarily fit into hierarchical tree structures. The ability to encapsulate these variables into one or several objects is much more fitting. A final benefit of using Contexts is that they allow all components below the hierarchy’s root component to retrieve the state without interfering with intermediary components in the process.

Example 8 – React Contexts

class VisibilityStore {
  isVisible = true;
  toggle = () => this.isVisible = !this.isVisible;
}

// The default value is only used where no Provider is present.
const VisibilityContext = React.createContext(new VisibilityStore());

class Visibility extends React.Component {
  // Note: this sketch focuses on how the state is accessed; a full
  // implementation would also trigger a re-render when the store
  // changes (e.g. by keeping the flag in component state).
  store = new VisibilityStore();
  render() {
    return (
       <VisibilityContext.Provider value={this.store}>
         <VisibilityButton />
         <VisibilityDisplay />
       </VisibilityContext.Provider>
    );
  }
}

class VisibilityButton extends React.Component {
  render() {
    return (
      <VisibilityContext.Consumer>
        {(context) => <button onClick={context.toggle} />}
      </VisibilityContext.Consumer>
    );
  }
}

class VisibilityDisplay extends React.Component {
  render() {
    return (
      <VisibilityContext.Consumer>
        {
          (context) =>
            <div>
              {context.isVisible
                ? 'Visible'
                : 'Invisible'
              }
            </div>
        }
      </VisibilityContext.Consumer>
    );
  }
}

The example above illustrates modelling the UI state as an object (VisibilityStore), retrieving the UI state (VisibilityDisplay) and, finally, updating the state (VisibilityButton). Although simple, the example depicts how state can be accessed at various depths of the component tree without affecting intermediary nodes.

Cons

Using Contexts to manage the state of single components would be overkill. Contexts are also ill-equipped to be used as global state managers; they lack a persist/re-load mechanism, and additionally, lack debugging tools which would help provide an overview of the application’s state at any given point in time.

Summary

React Contexts are well suited to a single use-case; managing the state of a group of UI components. Contexts, on their own, aren’t the solution to managing the state of an entire SPA, but the React team’s public release of the Context API comes at a time where it is common to see SPA states bloated full of UI-related state. Developers should therefore seriously consider trimming down their global application states by making use of Contexts.

Alternatives

Although the main React-compatible state management libraries have already been evaluated in this document, it is important to evaluate alternative libraries that are growing in popularity.

Undux

Undux was first released a year ago and sells itself as a lightweight Redux alternative. In just under a year, it has amassed nearly 1,000 Github stars. That being said, the library still lacks a community around it; there’s still only one main contributor and resources on the library are scarce. Having a single contributor means that the library suffers from under-delivering on essential features like state selectors.

That aside, Undux seems like a promising library; it strips out the verbosity of Redux, works with Redux’s debugging tools, supports Typescript and is highly cross-browser compatible. If the size of Undux’s community and number of contributors were to increase, it could be a real contender to Redux.

React Easy State

Like Undux, React Easy State was released over a year ago and has amassed just over 1,000 Github stars. It sells itself as an alternative to Mobx and has gained a strong community around it. Both official and non-official resources are plentiful, Typescript support comes out of the box and the library’s API looks extremely promising. React Easy State, however, cannot be considered an SPA management library for Encodo’s purposes as it doesn’t support (and states it will never support) Internet Explorer.

Conclusion

Software libraries are built out of a need to solve a specific problem, or a set of specific problems. Software developers should be mindful of using libraries to solve these sets of problems, and not overstretch libraries to solve problems they weren’t ever designed to solve. Dan Abramov’s blogpost on why Redux shouldn’t be used as the go-to library for all SPA state management problems highlights this argument perfectly.

In light of this, Encodo propose that the use of multiple libraries to solve different problems is beneficial, so long as there are clear rules detailing when one library should be used over another. Having evaluated several different SPA state management libraries, Encodo conclude by suggesting that SPAs should use a combination of Redux, React Contexts and React Component states:

  • React Component states should be used to manage the state of individual presentational components whose states aren't required by the rest of the application.
  • React Contexts should be used to manage the state of hierarchies of presentational components. Again, beyond the hierarchies, the states encapsulated by Contexts shouldn’t be required by the rest of the application.
  • Redux should be used to store any state that needs to be used across the application, or needs to be persisted and then re-initialised.

Mobx has been omitted from the list of recommendations, as upon evaluation, Encodo conclude that it does not meet their requirements. Mobx is a library which exposes a large surface area, thereby offering solutions to a wide range of problems, but not providing a small set of optimised solutions. Many of the advantages of Mobx – mapping state in an OO manner and concise, simple bindings – are provided by React Component State and React Context.

The contender to Mobx, React Easy State, has also been omitted from Encodo’s recommendations, as although it is certainly a promising library with a growing community surrounding it, the library doesn’t support Internet Explorer and therefore does not fulfil Encodo’s requirements.

Finally, although Undux could be a strong contender in replacing Redux, at the time of writing, Encodo feel that the library is not mature enough to be a production-ready, future proof choice and therefore also exclude it from their list of recommendations.

Removing unwanted references to .NET 4.6.1 from web applications

The title is a bit specific for this blog post, but that's the gist of it: we ended up with a bunch of references to an in-between version of .NET (4.6.1) that was falsely advertising itself as a more optimal candidate for satisfying 4.6.2 dependencies. This is a known issue; there are several links to MS GitHub issues below.

In this blog, I will discuss direct vs. transient dependencies as well as internal vs. runtime dependencies.

tl;dr

If you've run into problems with an application targeted to .NET Framework 4.6.2 that does not compile on certain machines, it's possible that the binding redirects Visual Studio has generated for you use versions of assemblies that aren't installed anywhere but on a machine with Visual Studio installed.

How I solved this issue:

  • Remove the C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\Microsoft\Microsoft.NET.Build.Extensions\net461\ directory
  • Remove all System* binding redirects
  • Clean out all bin/ and obj/ folders
  • Delete the .vs folder (may not be strictly necessary)
  • Build in Visual Studio
  • Observe that a few binding-redirect warnings appear
  • Double-click them to re-add the binding redirects, but this time to actual 4.6.2 versions (you may need to add <AutoGenerateBindingRedirects>true</AutoGenerateBindingRedirects> to your project)
  • Rebuild and verify that you have no more warnings

The product should now run locally and on other machines.

For more details, background and the story of how I ran into and solved this problem, read on.

Building Software

What do we mean when we say that we "build" an application?

Building is the process of taking a set of inputs and producing an artifact targeted at a certain runtime. Some of these inputs are included directly while others are linked externally.

  • Examples of direct inputs are the binary artifacts produced from the source code that comprises your application
  • Examples of external inputs are OS components and runtime environments

The machine does exactly what you tell it to, so it's up to you to make sure that your instructions are as precise as possible. However, you also want your application to be flexible so that it can run on as wide an array of environments as possible.

Your source code consists of declarations. We've generally got the direct inputs under control. The code compiles and produces artifacts as expected. It's the external-input declarations where things go awry.

What kind of external inputs does our application have?

  • System dependencies in the runtime target (assemblies like System.Runtime, System.Data, etc.), each with a minimum version
  • Third-party dependencies pulled via NuGet, each with a minimum version

How is this stitched together to produce the application that is executed?

  • The output folder contains our application, our own libraries and the assemblies from NuGet dependencies
  • All other dependencies (e.g. system dependencies) are pulled from the environment

The NuGet dependencies are resolved at build time. All resources are pulled and added to the release on the build machine. There are no run-time decisions to make about which versions of which assemblies to use.

Dependencies come in two flavors:

  • Direct: A reference in the project itself
  • Transient: A direct reference inherited from another direct or transient reference

It is with the transient references that we run into issues. The following situations can occur:

  • A transient dependency is referenced one or more times with the same version. This is no problem, as the builder simply uses that version or substitutes a newer version if that version is no longer available (rare, but possible)
  • A transient dependency is referenced in different versions. In this case, the builder tries to substitute a single version for all requirements. This generally works OK since most dependencies require a given version or higher. It may be that one or another library cannot work with all newer versions, but this is also rare. In this case, the top-level assembly (the application) must include a hint (an assembly-binding redirect) that indicates that the substitution is OK. More on these below.
  • A transient dependency requires a lower version than the version that is directly referenced. This is also not a problem, as the transient dependency is satisfied by the direct dependency with the higher version. In this case, the top-level application must also include an assembly-binding redirect to allow the substitution without warning.
  • A transient dependency requires a higher version than the version that is directly referenced. This is an error (no longer just a warning) that must be solved by either downgrading the dependency that leads to the problematic transient dependency or upgrading the direct dependency. Generally, the application will upgrade the direct dependency.

Assembly-Binding Redirects

An application generally includes an app.config (desktop applications or services) or web.config XML file that includes a section where binding redirects are listed. A binding redirect indicates the range of versions that can be mapped (or redirected) to a certain fixed version (which is generally also included as a direct dependency).

A redirect looks like this (a more-complete form is further below):

<bindingRedirect oldVersion="0.0.0.0-4.0.1.0" newVersion="4.0.1.0"/>

When the direct dependency is updated, the binding redirect must be updated as well (generally by updating the maximum version number in the range and the version number of the target of the redirect). NuGet does this for you when you’re using packages.config. If you’re using Package References, you must update these manually. This situation is currently not so good, as it increases the likelihood that your binding redirects remain too restrictive.

NuGet Packages

NuGet packages are resolved at build time. These dependencies are delivered as part of the deployment. If they could be resolved on the build machine, then they are unlikely to cause issues on the deployment machine.

System Dependencies

Where the trouble comes in is with dependencies that are resolved at execution time rather than build time. The .NET Framework assemblies are resolved in this manner. That is, an application that targets .NET Framework expects certain versions of certain assemblies to be available on the deployment machine.

We mentioned above that the algorithm sometimes chooses the desired version or higher. This is not the case for dependencies that are in the assembly-binding redirects. Adding an explicit redirect locks the version that can be used.

This is generally a good idea as it increases the likelihood that the application will only run in a deployment environment that is extremely close or identical to the development, building or testing environment.

Aside: Other Bundling Strategies

How can we avoid these pesky run-time dependencies? There are several ways that people have come up with, in increasing order of flexibility:

  • Deliver hardware and software together. This is common in industrial applications and used to be much more common for businesses, as well. Nearly bulletproof. If it worked in the factory, it will work for the customer.
  • Deliver a VM (virtual machine) as your application. This includes the entire execution environment right down to the hardware. Safe, but inefficient.
  • Use a container (e.g. Docker) to deliver a description of the execution environment. The image is built to match the declaration. This is also quite stable and can avoid many of the substitution errors outlined above. If components are outdated, the machine fails to start and the definition must first be updated (and, presumably, tested). This type of deployment is getting more reliable but is also overkill for many applications.
  • Deliver the runtime with the application instead of describing the runtime you'd like to have. Targeting .NET Core instead of .NET Framework includes the runtime. This seems like a nice alternative and it's not surprising that Microsoft went in this direction with .NET Core. It's a good solution to the external-dependency issues outlined above.

To sum up:

  • A VM delivers the OS, runtime and application.
  • A Container delivers a description of the OS and runtime as well as the application itself.
  • .NET Core includes the runtime and application and is OS-agnostic (within reason).
  • .NET Framework includes only the application and some directives on the remaining components to obtain from the runtime environment.

Our application targets .NET Framework (for now). We're looking into .NET Core, but aren't ready to take that step yet.

Where can the deployment go wrong?

To sum up the information from above, problems arise when the build machine contains components that are not available on the deployment machine.

How can this happen? Won't the deployment machine just use the best match for the directives included in the build?

Ordinarily, it would. However, if you remember our discussion of assembly-binding redirects above, those are set in stone. What if you included binding redirects that required versions of system dependencies that are only available on your build machine ... or even your developer machine?

Special Tip for Web Applications

We actually discovered an issue in our deployment because the API server was running, but the Authentication server was not. The Authentication server was crashing because it couldn't find the runtime it needed in order to compile its Razor views (it has ASP.Net MVC components). We only discovered this issue on the deployment server because the views were only ever compiled on-the-fly.

To catch these errors earlier in the deployment process, you can enable pre-compiling views in release mode so that the build server will fail to compile instead of producing a build that will sometimes fail to run.

Add <MvcBuildViews>true</MvcBuildViews> to the PropertyGroup for the release build in any MVC projects, as shown in the example below:

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
  <DebugType>pdbonly</DebugType>
  <Optimize>true</Optimize>
  <OutputPath>bin</OutputPath>
  <DefineConstants>TRACE</DefineConstants>
  <ErrorReport>prompt</ErrorReport>
  <WarningLevel>4</WarningLevel>
  <LangVersion>6</LangVersion>
  <MvcBuildViews>true</MvcBuildViews>
</PropertyGroup>

How do I create a redirect?

We mentioned above that NuGet is capable of updating these redirects when the target version changes. An example is shown below. As you can see, they're not very easy to write:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="System.Reflection.Extensions" publicKeyToken="B03F5F7F11D50A3A" culture="neutral"/>
        <bindingRedirect oldVersion="0.0.0.0-4.0.1.0" newVersion="4.0.1.0"/>
      </dependentAssembly>
      <!-- Other bindings... -->
    </assemblyBinding>
  </runtime>
</configuration>

Most bindings are created automatically when MSBuild emits a warning that one would be required in order to avoid potential runtime errors. If you compile with MSBuild in Visual Studio, the warning indicates that you can double-click the warning to automatically generate a binding.

If the warning doesn't indicate this, then it will tell you that you should add the following to your project file:

<AutoGenerateBindingRedirects>true</AutoGenerateBindingRedirects>

After that, you can rebuild to show the new warning, double-click it and generate your assembly-binding redirect.

How did we get the wrong redirects?

When MSBuild generates a redirect, it uses the highest version of the dependency that it found on the build machine. In most cases, this will be the developer machine. A developer machine tends to have more versions of the runtime targets installed than either the build or the deployment machine.

A Visual Studio installation, in particular, includes myriad runtime targets, including many that you're not using or targeting. These are available to MSBuild but are ordinarily ignored in favor of more appropriate ones.

That is, unless there's a bit of a bug in one or more of the assemblies included with one of the SDKs...as there is with the net461 distribution in Visual Studio 2017.

Even if you are targeting .NET Framework 4.6.2, MSBuild will still sometimes reference assemblies from the net461 distribution because the assemblies are incorrectly marked as having a higher version than those in 4.6.2 and are taken first.

I found the following resources somewhat useful in explaining the problem (though none really offer a solution):

How can you fix the problem if you're affected?

You'll generally have a crash on the deployment server that indicates a certain assembly could not be loaded (e.g. System.Runtime). If you show the properties for that reference in your web application, do you see the path C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\Microsoft\Microsoft.NET.Build.Extensions\net461 somewhere in there? If so, then your build machine is linking in references to this incorrect version. If you let MSBuild generate binding redirects with those referenced paths, they will refer to versions of runtime components that do not generally exist on a deployment machine.

Tips for cleaning up:

  • Use MSBuild to debug this problem. R# Build is nice, but not as good as MSBuild for this task.
  • Clean and Rebuild to force all warnings
  • Check your output carefully.
    • Do you see warnings related to package conflicts?
    • Ambiguities?
    • Do you see the path C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\Microsoft\Microsoft.NET.Build.Extensions\net461 in the output?

A sample warning message:

[ResolvePackageFileConflicts] Encountered conflict between 'Platform:System.Collections.dll' and 'CopyLocal:C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\Microsoft\Microsoft.NET.Build.Extensions\net461\lib\System.Collections.dll'.  Choosing 'CopyLocal:C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\Microsoft\Microsoft.NET.Build.Extensions\net461\lib\System.Collections.dll' because AssemblyVersion '4.0.11.0' is greater than '4.0.10.0'.

The Solution

As mentioned above, but reiterated here, this is what I did to finally stabilize my applications:

  • Remove the C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\Microsoft\Microsoft.NET.Build.Extensions\net461\ directory
  • Remove all System* binding redirects
  • Clean out all bin/ and obj/ folders
  • Delete the .vs folder (may not be strictly necessary)
  • Build in Visual Studio
  • Observe that a few binding-redirect warnings appear
  • Double-click them to re-add the binding redirects, but this time to actual 4.6.2 versions (you may need to add <AutoGenerateBindingRedirects>true</AutoGenerateBindingRedirects> to your project)
  • Rebuild and verify that you have no more warnings
  • Deploy and TADA!

One more thing

When you install any update of Visual Studio, it will silently repair these missing files for you. So be aware and check the folder after any installations or upgrades to make sure that the problem doesn't creep up on you again.

Which type should you register in an IOC container?

Use Case

I just ran into an issue recently where a concrete implementation registered as a singleton was suddenly not registered as a singleton because of architectural changes.

The changes involved creating mini-applications within a main application, each of which has its own IOC. Instead of creating controllers using the main application, I was now creating controllers with the mini-application instead (to support multi-tenancy, of which more in an upcoming post).

Silent Replacement of Singleton with Transient

Controllers are, by their nature, transient; a new controller is created to handle each incoming request.

In the original architecture, the concrete singleton was injected into the controller and all controller instances used the same shared instance. In the new architecture, the registration was not present in the mini-application (at first), which led to a (relatively) subtle bug: a transient and freshly created instance was injected into each new controller.

In cases where the singleton is a stateless algorithm, this wouldn’t be a logical problem at all. At the very worst, you’re over-allocating – but you probably wouldn’t notice that, either. In this case, the singleton was a settings object, configured at application startup. The configured object was still in the main application’s IOC, but not registered in the mini-application’s IOC.

Because the singleton was registered on a concrete type rather than an interface, the semantic error occurred silently instead of throwing a lifestyle-mismatch or unregistered-interface exception.

A Straightforward Fix

This is only one of the reasons that I recommend using interfaces as the anchoring type of an IOC registration.

To fix the issue, I did exactly this: I extracted an interface from the class and used the interface everywhere (except for the implementing type of the registration). Re-running the test caused an immediate exception rather than a strange data bug (which resulted because the default configuration in the concrete type was just correct enough to allow it to limp to a result).

To show an example, instead of the following,

application.RegisterSingle<ApiSettings>()

I used,

application.RegisterSingle<IApiSettings, ApiSettings>()

This still didn't fix the crash because the mini-application doesn't get that registration automatically.

I also can't use the same registration as above because that would just create a new unconfigured ApiSettings in each mini-application (the same as I had before, but now as a singleton). To go that route, I would have to replicate the configuration-loading for the ApiSettings as well. And I don't want to do that.

Instead, I just injected the IApiSettings from the main application to the component responsible for creating the mini-application and registered the object as a singleton directly, as shown below.

public class MiniApplicationFactory
{
  public MiniApplicationFactory([NotNull] IApiSettings apiSettings)
  {
    if (apiSettings == null) { throw new ArgumentNullException(nameof(apiSettings)); }

    _apiSettings = apiSettings;
  }

  IApplication CreateApplication()
  {
    return new Application().UseRegisterSingle(_apiSettings);
  }

  [NotNull]
  private readonly IApiSettings _apiSettings;
}

On a side note, whereas C# syntax has become more concise and powerful from version to version, I still think it has a way to go in terms of terseness for such simple objects. For such things, Kotlin and TypeScript nicely illustrate what such a syntax could look like.1

Other Drawbacks

I mentioned above that this is only "one" of the reasons I don't like registering concrete singletons. The other two reasons are:

  1. Complicates replacement: If the registered type is a concrete instance, then any replacement must inherit from this instance. The base class has to be constructed more carefully in order to allow for all foreseeable customizations. With an interface, the implementor is completely free to either use the existing class as a base or to re-implement the interface entirely.
  2. Limits Mocking: Related to the first reason is that mocking is limited in its ability to override non-virtual methods. Even without a mocking library, you're just as hard-pressed to work around unwanted behavior in a hand-coded mock as you are with an actual replacement (as described above). Such limitations are non-existent with interfaces.
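
To illustrate the second point: with an interface-based registration, a hand-coded mock is trivial to write (a sketch; the IApiSettings member shown is assumed for illustration):

// Freely re-implement the interface without inheriting unwanted behavior.
public class FakeApiSettings : IApiSettings
{
  public string BaseUrl => "http://localhost/fake";
}

// In a test, register the fake instead of the real implementation:
// application.RegisterSingle<IApiSettings, FakeApiSettings>();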


  1. I'm still waiting for C# to clean up a bit more of this syntax for me. The [NotNull] should be a language feature checked by the compiler so that the ArgumentNullException is no longer needed. On top of that, I'd like to see parameter properties, as in TypeScript (this is where you can prefix a constructor parameter with a keyword to declare and initialize it as a property). With a few more C#-language iterations that included non-nullable reference types and parameter properties, the example could look like the code below:

    public class MiniApplicationFactory
    {
      public MiniApplicationFactory(private IApiSettings apiSettings)
      {
      }

      IApplication CreateApplication()
      {
        return new Application().UseRegisterSingle(apiSettings);
      }
    }
    

Learning Quino: a roadmap for documentation and tutorials

In recent articles, we outlined a roadmap to .NET Standard and .NET Core and a roadmap for deployment and debugging. These two roadmaps taken together illustrate our plans to extend as much of Quino as possible to other platforms (.NET Standard/Core) and to make development with Quino as convenient as possible (getting/upgrading/debugging).

To round it off, we've made good progress on another vital piece of any framework: documentation.

Introducing docs.encodo.ch

We recently set up a new server to host Quino documentation. There, you can find documentation for current releases. Going forward, we'll also retain documentation for any past releases.

We're generating our documentation with DocFX, which is the same system that powers Microsoft's own documentation web site. We've integrated documentation-generation as a build step in Quino's nightly build on TeamCity, so it's updated every night (Zürich time) 1.

The documentation includes conceptual documentation which provides an overview/tutorials/FAQ for basic concepts in Quino. The API Reference includes comprehensive documentation about the types and methods available in Quino.

Next Steps

While we're happy to announce that we have publicly available documentation for Quino, we're aware that we've got work to do. The next steps are:

Even though there's still work to do, this is a big step in the right direction. We're very happy to have found DocFX, which is a very comprehensive, fast and nice-looking solution to generating documentation for .NET code.2

--


  1. If the build succeeds, naturally. :-)

  2. We used to use Sandcastle many years ago, but dropped support because it took forever to generate documentation, required its own solution file, didn't look very nice out-of-the-box, wasn't so easily customized and didn't have a very good search (which also didn't work without an IIS running it).

Delivering Quino: a roadmap for deployment and debugging

In a recent article, we outlined a roadmap to .NET Standard and .NET Core. We've made really good progress on that front: we have a branch of Quino-Standard that targets .NET Standard for class libraries and .NET Core for utilities and tests. So far, we've smoke-tested these packages with Quino-WebApi. Our next steps there are to convert Quino-WebApi to .NET Standard and .NET Core as well. We'll let you know when it's ready, but progress is steady and promising.

With so much progress on several fronts, we want to address how we get Quino from our servers to our customers and users.

Getting Quino

Currently, we provide access to a private fileshare for customers. They download the NuGet packages for the release they want. They copy these to a local folder and bind it as a NuGet source for their installations.

In order to make a build available to customers, we have to publish that build by deploying it and copying the files to our file share. This process has been streamlined considerably so that it really just involves telling our CI server (TeamCity) to deploy a new release (official or pre-). From there, we download the ZIP and copy it to the fileshare.

Encodo developers don't have to use the fileshare because we can pull packages directly from TeamCity as soon as they're available. This is a much more comfortable experience and feels much more like working with nuget.org directly.

Debugging Quino

The debugging story with external code in .NET is much better than it used to be (spoiler: it was almost impossible, even with Microsoft sources), but it’s not as smooth as it should be. This is mostly because NuGet started out as a packaging mechanism for binary dependencies published by vendors of proprietary/commercial products. It’s only in recent years that packages have become predominantly open-source.

In fact, debugging with third-party sources – even without NuGet involved – has never been easy with .NET/Visual Studio.

Currently, all Quino developers must download the sources separately (also available from TeamCity or the file-share) in order to use source-level debugging.

Binding these sources to the debugger is relatively straightforward but cumbersome. Binding these sources to ReSharper is even more cumbersome and somewhat unreliable, to boot. I’ve created the issue Add an option to let the user search for external sources explicitly (as with the VS debugger) when navigating in the hopes that this will improve in a future version. JetBrains has already fixed one of my issues in this area (Navigate to interface/enum/non-method symbol in Nuget-package assembly does not use external sources), so I’m hopeful that they’ll appreciate this suggestion, as well.

The use case I cited in the issue above is,

Developers using NuGet packages that include sources or for which sources are available want to set breakpoints in third-party source code. Ideally, a developer would be able to use R# to navigate through these sources (e.g. via F12) to drill down into the code and set a breakpoint that will actually be triggered in the debugger.

As it is, navigation in these sources is so spotty that you often end up in decompiled code and are forced to use the file-explorer in Windows to find the file and then drag/drop it to Visual Studio where you can set a breakpoint that will work.

The gist of the solution I propose is to have R# ask the user where missing sources are before decompiling (as the Visual Studio debugger does).

Nuget Protocol v3 to the rescue?

There is hope on the horizon, though: Nuget is going to address the debugging/symbols/sources workflow in an upcoming release. The overview is at NuGet Package Debugging & Symbols Improvements and the issue is Improve NuGet package debugging and symbols experience.

Once this feature lands, Visual Studio will offer seamless support for debugging packages hosted on nuget.org. Since we’re using TeamCity to host our packages, we need JetBrains to Add support for NuGet Server API v3 (https://youtrack.jetbrains.com/issue/TW-47289) in order to benefit from the improved experience. Currently, our customers are out of luck even if JetBrains releases simultaneously (because our TeamCity is not available publicly).

Quino goes public?

I've created an issue for Quino, Make Quino Nuget packages available publicly to track our progress in providing Quino packages to our customers in a more convenient way that also benefits from improvements to the debugging workflow with Nuget Packages.

If we published Quino packages to NuGet (or MyGet, which allows private packages), then we would have the benefit of the latest Nuget protocol/improvements for both ourselves and our customers as soon as it's available. Alternatively, we could also proxy our TeamCity feed publicly. We're still considering our options there.

As you can see, we're always thinking about the development experience for both our developers and our customers. We're fine-tuning on several fronts to make developing and debugging with Quino a seamless experience for all developers on all platforms.

We'll keep you posted.

Quino's Roadmap to .NET Standard and .NET Core

With Quino 5, we've gotten to a pretty good place organizationally. Dependencies are well-separated into projects—and there are almost 150 of them.

We can use code-coverage, solution-wide analysis and so on without a problem. TeamCity runs the ~10,000 tests quickly enough to provide feedback in a reasonable time. The tests run even more quickly on our desktops. It's a pretty comfortable and efficient experience, overall.

Monolithic Solution: Pros and Cons

As of Quino 5, all Quino-related code was still in one repository and included in a single solution file. Luckily for us, Visual Studio 2017 (and Rider and Visual Studio for Mac) were able to keep up quite well with such a large solution. Recent improvements to performance kept the experience quite comfortable on a reasonably equipped developer machine.

Having everything in one place is both an advantage and disadvantage: when we make adjustments to low-level shared code, the refactoring is applied in all dependent components, automatically. If it's not 100% automatic, at least we know where we need to make changes in dependent components. This provides immediate feedback on any API changes, letting us fine-tune and adjust until the API is appropriate for known use cases.

On the other hand, having everything in one place means that you must make sure that your API not only works for, but also compiles and tests against, components that you may not immediately be interested in.

For example, we've been pushing much harder on the web front lately. Changes we make in the web components (or in the underlying Quino core) must also work immediately for dependent Winform and WPF components. Otherwise, the solution doesn't compile and tests fail.

While this setup had its benefits, the drawbacks were becoming more painful. We wanted to be able to work on one platform without worrying about all of the others.

On top of that, keeping all code in one place is no longer possible with cross-platform support. Some code—Winform and WPF—doesn't run on Mac or Linux.1

The time had come to separate Quino into a few larger repositories.

Separate Solutions

We decided to split along platform-specific lines.

  • Quino-Standard: all common code, including base libraries, application, configuration and IOC support, metadata, builders and all data drivers
  • Quino-WebApi: all web-related code, including remaining ASP.NET MVC support
  • Quino-Windows: all Windows-platform-only code (Windows-only APIs (i.e. native code) as well as Winform and WPF)

The Quino-WebApi and Quino-Windows solutions will consume Quino-Standard via NuGet packages, just like any other Quino-based product. And, just like any Quino-based product, they will be able to choose when to upgrade to a newer version of Quino-Standard.

Quino-Standard

Part of the motivation for the split is cross-platform support. The goal is to target all assemblies in Quino-Standard to .NET Standard 2.0. The large core of Quino will be available on all platforms supported by .NET Core 2.0 and higher.

This work is quite far along and we expect to complete it by August 2018.

Quino-WebApi

As of Quino 5.0.5, we've moved web-based code to its own repository and set up a parallel deployment for it. Currently, the assemblies still target .NET Framework, but the goal here is to target class libraries to .NET Standard and to use .NET Core for all tests and sample web projects.

We expect to complete this work by August 2018 as well.

Quino-Windows

We will be moving all Winform and WPF code to its own repository, setting it up with its own deployment (as we did with Quino-WebApi). These projects will remain targeted to .NET Framework 4.6.2 (the lowest version that supports interop with .NET Standard assemblies).

We expect this work to be completed by July 2018.

Quino-Mobile

One goal we have with this change is to be able to use Quino code from Xamarin projects. Any support we build for mobile projects will proceed in a separate repository from the very beginning.

We'll keep you posted on progress and news in this area.

Conclusion

Customers will, for the most part, not notice this change, except in minor version numbers. Core and platform versions may (and almost certainly will) diverge between major versions. For major versions, we plan to ship all platforms with a single version number.



  1. I know, Winform can be made to run on Mac using Mono. And WPF may eventually become a target of Xamarin. But a large part of our Winform UI uses the Developer Express components, which aren't going to run on a Mac. And the plans for WPF on Mac/Linux are still quite up in the air.

v5.0.5: Split out Quino-WebApi repository, improve authorization data-driver

The summary below describes major new features, items of note and breaking changes. The full list of issues is in the release notes and is available to those with access to the Encodo issue tracker.

Highlights

  • Fixed Creation of Database during schema-migration (PostgreSql-only) (QNO-5938)
  • Use Authorization in data driver for count/loadValues/reload (QNO-5937)
  • Added Wrapper properties (for backward-compatibility only) (QNO-5928)
  • Fixed command-line logging output for code-generator (QNO-5921)
  • Moved web support to Quino-WebApi (QNO-5903, QNO-5907)

Notes

All web support in Quino has been moved to a separate repository. The Quino repository has been renamed to Quino-Standard. The only effect this has on customers is that minor version numbers for web components may diverge from those of Quino-Standard. In a subsequent release, we will be moving all Windows-platform–specific projects (Windows, Winform and WPF) to a Quino-Windows repository. Again, users of Quino will be unaffected, other than minor version numbers diverging slightly.

The reasoning behind this change is as follows:

We are in the process of targeting Quino-Standard to .NET Standard 2.0. This work is nearing completion, but Windows-based components will remain targeted to .NET Framework.

Parts of the web framework are being developed more quickly than either Winform/WPF or Quino-Standard itself. We wanted those components to be developed individually, giving them more freedom to innovate and letting each logical component choose when to upgrade (i.e. both Quino-WebApi and Quino-Windows are/will be consumers of Quino-Standard libraries, just like any customer product).

Breaking changes

  • No known breaking changes.

Compile-check LESS/CSS classnames using TypeScript and Webpack

While making myself familiar with modern frontend development based on React, TypeScript, Webpack and others, I learned something really cool. I'd like to write this down not only for you – dear reader – but also for my own reference.

The problem

Let's say you have a trivial React component like this, where you specify a className to indicate which CSS class should be used:

import * as React from 'react';

const MyComponent = (props: MyComponentProps) => (
  <MySubComponent className='myclass'>
    ...
  </MySubComponent>
);

export default MyComponent;

The problem with this is that there is no compile-time check ensuring that the class myclass really exists in our LESS file. If we make a typo, or later change the LESS file, we cannot be sure that all classes/selectors are still valid. The browser won't complain either. It silently breaks. Bad thing!
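To see why this is bad, consider this minimal sketch (the component is made up for illustration, and 'myclas' is a deliberate typo):

import * as React from 'react';

// 'myclas' is just a string literal to the compiler, so the typo
// compiles cleanly and only breaks silently in the browser.
const Broken = () => <div className='myclas'>Whoops</div>;

export default Broken;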

A solution

Using webpack and the LESS loader, one can check this at compile time. To do so, define the style and its classname in the LESS file and import it into the .tsx files. The LESS loader exposes the following LESS variables to the build process, where the TypeScript loader (used for the .tsx files) can pick them up.

MyComponent.less:

@my-class: ~':local(.myClass)';
 
@{my-class}{
  width: 100%;
  background-color: green;
}
...

Note the :local() syntax, part of the CSS Modules support in the loader chain (see the webpack config at the end), which scopes the class locally rather than globally.

The above LESS file can be typed and imported into the .tsx file like this:

MyComponent.tsx:

import * as React from 'react';

type TStyles = {
  myClass: string;
};

// require() returns the classname map produced by the build;
// typing it as TStyles lets the compiler verify each property.
const styles: TStyles = require('./MyComponent.less');

const MyComponent = (props: MyComponentProps) => (
  <MySubComponent className={styles.myClass}>
    ...
  </MySubComponent>
);

export default MyComponent;

When you fire up your build, the .less file gets picked up by the require() call and checked against the TypeScript type TStyles. The property myClass will contain the LESS/CSS classname as defined in the .less file.

I can then use styles.myClass instead of the string literal from the original code.
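As a quick check (continuing the example above, with a deliberately misspelled property), the compiler now catches exactly the kind of error that used to slip through silently:

// A misspelled classname is now rejected at compile time:
const broken = styles.myClas;
// error TS2339: Property 'myClas' does not exist on type 'TStyles'.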

To get this working, ensure you have the LESS loader included in your webpack configuration (you probably already have it if you are already using LESS):

webpack.config.js:

module: {
  rules: [
    {
      // compile .ts/.tsx files with the TypeScript loader
      test: /\.tsx?$/,
      loader: "ts-loader"
    },
    {
      // pipe .less files through less-loader, then css-loader,
      // and extract the result into a separate CSS file
      test: /\.less$/,
      use: ExtractTextPlugin.extract({
        use: [
          {
            loader: "css-loader",
            options: {
              localIdentName: '[local]--[hash:5]',
              sourceMap: true
            }
          },
          {
            loader: "less-loader",
            options: {
              sourceMap: true
            }
          }
        ],
        fallback: "style-loader",
        ...
      })
    },
    ...
  ]
},
...

Note: The samples use LESS stylesheets, but one can presumably do the same with SCSS/Sass; you just have to use another webpack loader and the syntax supported by that loader.
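As an aside, if typing each require() by hand becomes tedious, a wildcard ambient declaration is – I believe – a common alternative. Note the trade-off, though: its index signature accepts any property name, so the per-class check this article is about is lost; tools such as typed-css-modules can instead generate explicit per-file .d.ts typings that keep it.

styles.d.ts:

// A wildcard declaration so plain ES imports work, e.g.
//   import styles from './MyComponent.less';
// The index signature accepts any property name, so misspelled
// classnames are no longer caught by the compiler.
declare module '*.less' {
  const styles: { [className: string]: string };
  export default styles;
}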

No broken CSS classnames anymore – isn’t this cool? Let me know your feedback.

This is a cross-post from Marc's personal blog at https://marcduerst.com/2018/03/08/compile-check-less-css-classnames-using-typescript-and-webpack/

Finding deep assembly dependencies

Quino contains a Sandbox in the main solution that lets us test a lot of the Quino subsystems in real-world conditions. The Sandbox has several application targets:

  • WPF
  • Winform
  • Remote Data Server
  • WebAPI Server
  • Console

The targets that connect directly to a database (e.g. WPF, Winform) were using the PostgreSql driver by default. I wanted all Sandbox applications to be easily configurable to run against SqlServer as well.

Just add the driver, right?

This is pretty straightforward for a Quino application. The driver can be selected directly in the application (by linking the corresponding assembly) or it can be configured externally.

Naturally, if the Sandbox loads the driver from configuration, some mechanism still has to make sure that the required data-driver assemblies are available.

The PostgreSql driver was in the output folder. This was expected, since that driver works. The SqlServer driver was not in the output folder. This was also expected, since that driver had never been used.

I checked the direct dependencies of the Sandbox Winform application, but they didn't include the PostgreSql driver. That's not really good, as I would like both SqlServer and PostgreSql to be configured in the same way. As it stood, though, I would be referencing SqlServer directly while PostgreSql continued to show up by magic.

Before doing anything else, I was going to have to find out why PostgreSql was included in the output folder.

I needed to figure out assembly dependencies.

Visual Studio?

My natural inclination was to reach for NDepend, but I thought maybe I'd see what the other tools have to offer first.

Does Visual Studio include anything that might help? The "Project Dependencies" shows only assemblies on which a project is dependent. I wanted to find assemblies that were dependent on PostgreSql. I have the Enterprise version of Visual Studio and I seem to recall an "Architecture" menu, but I discovered that these tools are no longer installed by default.

According to the VS support team in that link, you have to install the "Visual Studio extension development" workload in the Visual Studio installer. In this package, the "Architecture and analysis tools" feature is available, but not included by default.

Hovering over this feature shows a tooltip indicating that it contains "Code Map, Live Dependency Validation and Code Clone detection". The "Live Dependency Validation" sounds like it might do what I want, but it also sounds quite heavyweight and somewhat intrusive, as described in this blog from the end of 2016. Instead of further modifying my VS installation (and possibly slowing it down), I decided to try another tool.

ReSharper?

What about ReSharper? For a while now, it's included project-dependency graphs and hierarchies. Try as I might, I couldn't get the tools to show me the transitive dependency on PostgreSql that Sandbox Winform was pulling in from somewhere. The hierarchy view is live and quick, but it doesn't show all transitive usages.

The graph view is nicely rendered, but shows dependencies by default instead of dependencies and usages. At any rate, the Sandbox wasn't showing up as a transitive user of PostgreSql.

I didn't believe ReSharper at this point because something was causing the data driver to be copied to the output folder.

NDepend to the rescue

So, as expected, I turned to NDepend. I took a few seconds to run an analysis and then right-clicked the PostgreSql data-driver project to select NDepend => Select Assemblies... => That are Using Me (Directly or Indirectly) to show the following query and results.

Bingo. Sandbox.Model is indirectly referencing the PostgreSql data driver via a transitive-dependency chain of four assemblies. Can I see which assemblies they are? Of course I can: this kind of information is best shown on a graph, and NDepend can render any query results as one. Clicking "Export to Graph" produced the graph below.

Now I can finally see the chain: Sandbox.Model pulls in Quino.Testing.Models.Generated (to use the BaseTypes module), which in turn references Quino.Tests.Base, which of course includes the PostgreSql driver, because that's the default testing driver for Quino tests.

Now that I know how the reference is coming in, I can fix the problem. Here I'm on my own: I have to solve this problem without NDepend. But at least NDepend was able to show me exactly what I have to fix (unlike VS or ReSharper).

I ended up moving the test-fixture base classes from Quino.Testing.Models.Generated into a new assembly called Quino.Testing.Models.Fixtures. The latter assembly still depends on Quino.Tests.Base and thus the PostgreSql data driver, but it's now possible to reference the Quino testing models without transitively referencing the PostgreSql data driver.

A quick re-analysis with NDepend and I can see that the same query now shows a clean view: only testing code and testing assemblies reference the PostgreSql driver.

Finishing up

And now to finish my original task! I ran the Winform Sandbox application with the PostgreSql driver configured and was greeted with an error message that the driver could not be loaded. I now had parity between PostgreSql and SqlServer: neither driver was available unless referenced explicitly.

The fix? Make sure that the drivers are available by referencing them directly from any Sandbox application that needs to connect to a database. This was the obvious solution from the beginning, but we first had to fix a problem with dependencies. Why? Because we hate hacking. :-)

Two quick references and a build later, I was able to connect to both SQL Server and PostgreSql.