In a recent article, we outlined a roadmap to .NET Standard and .NET Core. We've made really good progress on that front: we have a branch of Quino-Standard that targets .NET Standard for class libraries and .NET Core for utilities and tests. So far, we've smoke-tested these packages with Quino-WebApi. Our next steps there are to convert Quino-WebApi to .NET Standard and .NET Core as well. We'll let you know when it's ready, but progress is steady and promising.
With so much progress on several fronts, we want to address how we get Quino from our servers to our customers and users.
Currently, we provide access to a private fileshare for customers. They download the NuGet packages for the release they want. They copy these to a local folder and bind it as a NuGet source for their installations.
In order to make a build available to customers, we have to publish that build by deploying it and copying the files to our fileshare. This process has been streamlined considerably, so that it really just involves telling our CI server (TeamCity) to deploy a new release (official or pre-release). From there, we download the ZIP and copy it to the fileshare.
Encodo developers don't have to use the fileshare because we can pull packages directly from TeamCity as soon as they're available. This is a much more comfortable experience and feels much more like working with nuget.org directly.
The debugging story with external code in .NET is much better than it used to be (spoiler: it was almost impossible, even with Microsoft sources), but it's not as smooth as it should be. This is mostly because NuGet started out as a packaging mechanism for binary dependencies published by vendors of proprietary/commercial products. It's only in recent years that packages have become predominantly open-source.
In fact, debugging with third-party sources – even without NuGet involved – has never been easy with .NET/Visual Studio.
Currently, all Quino developers must download the sources separately (also available from TeamCity or the file-share) in order to use source-level debugging.
Binding these sources to the debugger is relatively straightforward but cumbersome. Binding these sources to ReSharper is even more cumbersome and somewhat unreliable, to boot. I've created the issue Add an option to let the user search for external sources explicitly (as with the VS debugger) when navigating in the hopes that this will improve in a future version. JetBrains has already fixed one of my issues in this area (Navigate to interface/enum/non-method symbol in Nuget-package assembly does not use external sources), so I'm hopeful that they'll appreciate this suggestion as well.
The use case I cited in the issue above is:
Developers using NuGet packages that include sources or for which sources are available want to set breakpoints in third-party source code. Ideally, a developer would be able to use R# to navigate through these sources (e.g. via F12) to drill down into the code and set a breakpoint that will actually be triggered in the debugger.
As it is, navigation in these sources is so spotty that you often end up in decompiled code and are forced to use the file-explorer in Windows to find the file and then drag/drop it to Visual Studio where you can set a breakpoint that will work.
The gist of the solution I propose is to have R# ask the user where missing sources are before decompiling (as the Visual Studio debugger does).
There is hope on the horizon, though: Nuget is going to address the debugging/symbols/sources workflow in an upcoming release. The overview is at NuGet Package Debugging & Symbols Improvements and the issue is Improve NuGet package debugging and symbols experience.
Once this feature lands, Visual Studio will offer seamless support for debugging packages hosted on nuget.org. Since we're using TeamCity to host our packages, we need JetBrains to Add support for NuGet Server API v3 (https://youtrack.jetbrains.com/issue/TW-47289) in order to benefit from the improved experience. Currently, our customers are out of luck even if JetBrains releases simultaneously (because our TeamCity is not available publicly).
I've created an issue for Quino, Make Quino Nuget packages available publicly, to track our progress in providing Quino packages to our customers in a more convenient way that also benefits from improvements to the debugging workflow with Nuget packages.
If we published Quino packages to NuGet (or MyGet, which allows private packages), then we would have the benefit of the latest Nuget protocol/improvements for both ourselves and our customers as soon as it's available. Alternatively, we could also proxy our TeamCity feed publicly. We're still considering our options there.
As you can see, we're always thinking about the development experience for both our developers and our customers. We're fine-tuning on several fronts to make developing and debugging with Quino a seamless experience for all developers on all platforms.
We'll keep you posted.
With Quino 5, we've gotten to a pretty good place organizationally. Dependencies are well-separated into projects—and there are almost 150 of them.
We can use code-coverage, solution-wide-analysis and so on without a problem. TeamCity runs the ~10,000 tests quickly enough to provide feedback in a reasonable time. The tests run even more quickly on our desktops. It's a pretty comfortable and efficient experience, overall.
As of Quino 5, all Quino-related code was still in one repository and included in a single solution file. Luckily for us, Visual Studio 2017 (and Rider and Visual Studio for Mac) were able to keep up quite well with such a large solution. Recent improvements to performance kept the experience quite comfortable on a reasonably equipped developer machine.
Having everything in one place is both an advantage and disadvantage: when we make adjustments to low-level shared code, the refactoring is applied in all dependent components, automatically. If it's not 100% automatic, at least we know where we need to make changes in dependent components. This provides immediate feedback on any API changes, letting us fine-tune and adjust until the API is appropriate for known use cases.
On the other hand, having everything in one place means that you must make sure that your API not only works for, but also compiles and tests against, components that you may not immediately be interested in.
For example, we've been pushing much harder on the web front lately. Changes we make in the web components (or in the underlying Quino core) must also work immediately for dependent Winform and WPF components. Otherwise, the solution doesn't compile and tests fail.
While this setup had its benefits, the drawbacks were becoming more painful. We wanted to be able to work on one platform without worrying about all of the others.
On top of that, all code in one place is no longer possible with cross-platform support. Some code—Winform and WPF—doesn't run on Mac or Linux.1
The time had come to separate Quino into a few larger repositories.
We decided to split along platform-specific lines.
The Quino-WebApi and Quino-Windows solutions will consume Quino-Standard via NuGet packages, just like any other Quino-based product. And, just like any Quino-based product, they will be able to choose when to upgrade to a newer version of Quino-Standard.
Part of the motivation for the split is cross-platform support. The goal is to target all assemblies in Quino-Standard to .NET Standard 2.0. The large core of Quino will be available on all platforms supported by .NET Core 2.0 and higher.
This work is quite far along and we expect to complete it by August 2018.
As of Quino 5.0.5, we've moved web-based code to its own repository and set up a parallel deployment for it. Currently, the assemblies still target .NET Framework, but the goal here is to target class libraries to .NET Standard and to use .NET Core for all tests and sample web projects.
We expect to complete this work by August 2018 as well.
We will be moving all Winform and WPF code to its own repository, setting it up with its own deployment (as we did with Quino-WebApi). These projects will remain targeted to .NET Framework 4.6.2 (the lowest version that supports interop with .NET Standard assemblies).
We expect this work to be completed by July 2018.
One goal we have with this change is to be able to use Quino code from Xamarin projects. Any support we build for mobile projects will proceed in a separate repository from the very beginning.
We'll keep you posted on work and improvements and news in this area.
Customers will, for the most part, not notice this change, except in minor version numbers. Core and platform versions may (and almost certainly will) diverge between major versions. For major versions, we plan to ship all platforms with a single version number.
I know, Winform can be made to run on Mac using Mono. And WPF may eventually become a target of Xamarin. But a large part of our Winform UI uses the Developer Express components, which aren't going to run on a Mac. And the plans for WPF on Mac/Linux are still quite up in the air right now.↩
The summary below describes major new features, items of note and breaking changes. The full list of issues is in the release notes and is available to those with access to the Encodo issue tracker.
All web support in Quino has been moved to a separate repository. The Quino repository has been renamed to Quino-Standard. The only effect this has on customers is that minor version numbers for web components may diverge from those of Quino-Standard. In a subsequent release, we will be moving all Windows-platform–specific projects (Windows, Winform and WPF) to a Quino-Windows repository. Again, users of Quino will be unaffected other than minor version numbers diverging slightly.
The reasoning behind this change is as follows:
- We are in the process of targeting Quino-Standard to .NET Standard 2.0. This work is nearing completion, but Windows-based components will remain targeted to .NET Framework.
- Parts of the web framework are being developed more quickly than either Winform/WPF or Quino-Standard itself. We wanted to allow those components to be developed individually, to allow more freedom for innovation and to let the logical components choose when to upgrade (i.e. both Quino-WebApi and Quino-Windows are/will be consumers of Quino-Standard libraries, just like any customer product).
While making myself familiar with modern frontend development based on React, TypeScript, Webpack and others, I learned something really cool. I’d like to write it down not only for you – dear reader – but also for my own reference.
Let’s say you have a trivial React component like this, where you specify a className to tell which CSS class should be used:
const MyComponent = (props: MyComponentProps) => (
  <MySubComponent className='myclass'>
    ...
  </MySubComponent>
);
export default MyComponent;
The problem with this is that we don’t have any compiler check to ensure this class myclass really exists in our LESS file. So if we have a typo or we later change the LESS file, we cannot be sure all classes/selectors are still valid. Not even the browser will show that. It silently breaks. Bad thing!
Using Webpack and the LESS loader, one can fix this with a check at compile time. To do so, you define the style and its class name in the LESS file and import it into the .tsx files. The LESS loader for webpack exposes the LESS variables to the build process, where the TypeScript loader (used for the .tsx files) can pick them up.
MyComponent.less:
@my-class: ~':local(.myClass)';

@{my-class} {
  width: 100%;
  background-color: green;
}
...
Note the local() function supported by the LESS loader (see the webpack config at the end), which gives the class local scope.
The above LESS file can be typed and imported into the .tsx file like this:
MyComponent.tsx:
type TStyles = {
  myClass: string;
};

const styles: TStyles = require('./MyComponent.less');

const MyComponent = (props: MyComponentProps) => (
  <MySubComponent className={styles.myClass}>
    ...
  </MySubComponent>
);
export default MyComponent;
When you then fire up your build, the .less file gets picked up by the require() function and checked against the TypeScript type TStyles. The property myClass contains the LESS/CSS class name as defined in the .less file. I can then use styles.myClass instead of the string literal from the original code.
To get this working, ensure you have the LESS loader included in your webpack configuration (you probably already have it if you are already using LESS):
webpack.config.js:
module: {
  rules: [
    {
      test: /\.tsx?$/,
      loader: "ts-loader"
    },
    {
      test: /\.less$/,
      use: ExtractTextPlugin.extract({
        use: [
          {
            loader: "css-loader",
            options: {
              localIdentName: '[local]--[hash:5]',
              sourceMap: true
            }
          },
          {
            loader: "less-loader",
            options: {
              sourceMap: true
            }
          }
        ],
        fallback: "style-loader",
        ...
      }),
      ...
    },
    ...
Note: The samples use LESS stylesheets, but one can do the same with SCSS/SASS, I guess – you just have to use another loader for webpack and therefore the syntax supported by that loader.
No broken CSS classnames anymore – isn’t this cool? Let me know your feedback.
This is a cross-post from Marc's personal blog at https://marcduerst.com/2018/03/08/compile-check-less-css-classnames-using-typescript-and-webpack/
Quino contains a Sandbox in the main solution that lets us test a lot of the Quino subsystems in real-world conditions. The Sandbox has several application targets:
The targets that connect directly to a database (e.g. WPF, Winform) were using the PostgreSql driver by default. I wanted to configure all Sandbox applications to be easily configurable to run with SqlServer.
This is pretty straightforward for a Quino application. The driver can be selected directly in the application (directly linking the corresponding assembly) or it can be configured externally.
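Purely as a hypothetical sketch – the names below are illustrative, not the actual Quino API – the two approaches look something like this:

// Hypothetical sketch only; the real Quino registration API differs.
// Option 1: select the driver in code, which links the driver assembly
// directly and guarantees that it lands in the output folder.
application.UseDataDriver<PostgreSqlDataDriver>();

// Option 2: resolve the driver from external configuration. Nothing here
// references the driver assembly, so nothing ensures that it is deployed.
var driverName = settings["DataDriver"]; // e.g. "PostgreSql" or "SqlServer"
application.UseDataDriver(driverName);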
Naturally, if the Sandbox loads the driver from configuration, some mechanism still has to make sure that the required data-driver assemblies are available.
The PostgreSql driver was in the output folder. This was expected, since that driver works. The SqlServer driver was not in the output folder. This was also expected, since that driver had never been used.
I checked the direct dependencies of the Sandbox Winform application, but it didn't include the PostgreSql driver. That's not really good, as I would like both SqlServer and PostgreSql to be configured in the same way. As it stood, though, I would be referencing SqlServer directly and PostgreSql would continue to show up by magic.
Before doing anything else, I was going to have to find out why PostgreSql was included in the output folder.
I needed to figure out assembly dependencies.
My natural inclination was to reach for NDepend, but I thought maybe I'd see what the other tools have to offer first.
Does Visual Studio include anything that might help? The "Project Dependencies" shows only assemblies on which a project is dependent. I wanted to find assemblies that were dependent on PostgreSql. I have the Enterprise version of Visual Studio and I seem to recall an "Architecture" menu, but I discovered that these tools are no longer installed by default.
According to the VS support team in that link, you have to install the "Visual Studio extension development" workload in the Visual Studio installer. In this package, the "Architecture and analysis tools" feature is available, but not included by default.
Hovering this feature shows a tooltip indicating that it contains "Code Map, Live Dependency Validation and Code Clone detection". The "Live Dependency Validation" sounds like it might do what I want, but it also sounds quite heavyweight and somewhat intrusive, as described in this blog from the end of 2016. Instead of further modifying my VS installation (and possibly slowing it down), I decided to try another tool.
What about ReSharper? For a while now, it's included project-dependency graphs and hierarchies. Try as I might, I couldn't get the tools to show me the transitive dependency on PostgreSql that Sandbox Winform was pulling in from somewhere. The hierarchy view is live and quick, but it doesn't show all transitive usages.
The graph view is nicely rendered, but shows dependencies by default instead of dependencies and usages. At any rate, the Sandbox wasn't showing up as a transitive user of PostgreSql.
I didn't believe ReSharper at this point because something was causing the data driver to be copied to the output folder.
So, as expected, I turned to NDepend. I took a few seconds to run an analysis and then right-clicked the PostgreSql data-driver project to select NDepend => Select Assemblies... => That are Using Me (Directly or Indirectly) to show the following query and results.
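The query NDepend generates is CQLinq; sketched roughly (the driver-assembly name is my assumption and the real query contains more refinements), it looks like this:

// Assemblies using the PostgreSql driver, directly or indirectly.
from a in Application.Assemblies
where a.DepthOfIsUsing("Quino.Data.PostgreSql") >= 1 // depth 1 = direct reference
orderby a.Name
select new { a, depth = a.DepthOfIsUsing("Quino.Data.PostgreSql") }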
Bingo. Sandbox.Model is indirectly referencing the PostgreSql data driver via a transitive-dependency chain of 4 assemblies. Can I see which assemblies they are? Of course I can: this kind of information is best shown on a graph, so you can show a graph of any query results by clicking "Export to Graph" to show the graph below.
Now I can finally see that Sandbox.Model pulls in Quino.Testing.Models.Generated (to use the BaseTypes module) which, in turn, has a reference to Quino.Tests.Base which, of course, includes the PostgreSql driver because that's the default testing driver for Quino tests.
Now that I know how the reference is coming in, I can fix the problem. Here I'm on my own: I have to solve this problem without NDepend. But at least NDepend was able to show me exactly what I have to fix (unlike VS or ReSharper).
I ended up moving the test-fixture base classes from Quino.Testing.Models.Generated into a new assembly called Quino.Testing.Models.Fixtures. The latter assembly still depends on Quino.Tests.Base and thus the PostgreSql data driver, but it's now possible to reference the Quino testing models without transitively referencing the PostgreSql data driver.
A quick re-analysis with NDepend and I can see that the same query now shows a clean view: only testing code and testing assemblies reference the PostgreSql driver.
And now to finish my original task! I ran the Winform Sandbox application with the PostgreSql driver configured and was greeted with an error message that the driver could not be loaded. I now had parity between PostgreSql and SqlServer.
The fix? Obviously, make sure that the drivers are available by referencing them directly from any Sandbox application that needs to connect to a database. This was the obvious solution from the beginning, but we had to quickly fix a problem with dependencies first. Why? Because we hate hacking. :-)
Two quick references added, a build and I was able to connect to both SQL Server and PostgreSql.
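As an aside: if you want such references to be self-documenting rather than looking accidental, one common pattern is to touch a type from each driver assembly in code. A minimal sketch – the driver type names are assumptions, not the actual Quino types:

// Referencing one type per driver assembly documents the dependency and
// prevents the reference from being flagged (or removed) as unused.
internal static class RequiredDataDrivers
{
    private static readonly Type[] Drivers =
    {
        typeof(PostgreSqlDataDriver), // hypothetical driver type
        typeof(SqlServerDataDriver)   // hypothetical driver type
    };
}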
The Quino roadmap shows you where we're headed. How do we plan to get there?
A few years back, we made a big leap in Quino 2.0 to split up dependencies in anticipation of the initial release of .NET Core. Three tools were indispensable: ReSharper, NDepend and, of course, Visual Studio. Almost all .NET developers use Visual Studio, many use ReSharper and most should have at least heard of NDepend.
At the time, I wrote a series of articles on the migration from two monolithic assemblies (Encodo and Quino) to dozens of layered and task-specific assemblies that allow applications to include our software in a much more fine-grained manner. As you can see from the articles, NDepend was the main tool I used for finding and tracking dependencies.1 I used ReSharper to disentangle them.
Since then, I've not taken advantage of NDepend's features for maintaining architecture as much as I'd like. I recently fired it up again to see where Quino stands now, with 5.0 in beta.
But, first, let's think about why we're using yet another tool for examining our code. Since I started using NDepend, other tools have improved their support for helping a developer maintain code quality.
Other tools now detect many low-level problems, such as misuse of the IDisposable pattern. The Portability Analysis is essential for moving libraries to .NET Standard, but doesn't offer any insight into architectural violations like NDepend does.

With a concrete .NET Core/Standard project in the wings/under development, we're finally ready to finish our push to make Quino Core ready for cross-platform development. For that, we're going to need NDepend's help, I think. Let's take a look at where we stand today.
The first step is to choose what you want to cover. In the past, I've selected specific assemblies that corresponded to the "Core". I usually do the same when building code-coverage results, because the UI assemblies tend to skew the results heavily. As noted in a footnote below, we're starting an effort to separate Quino into high-level components (roughly, a core with satellites like Winform, WPF and Web). Once we've done that, the health of the core itself should be more apparent (I hope).
For starters, though, I've thrown all assemblies in for both NDepend analysis as well as code coverage. Let's see how things stand overall.
The amount of information can be quite daunting but the latest incarnation of the dashboard is quite easy to read. All data is presented with a current number and a delta from the analysis against which you're comparing. Since I haven't run an analysis in a while, there's no previous data against which to compare, but that's OK.
Let's start with the positive.
Now to the cool part: you can click anything in the NDepend dashboard to see a full list of all of the data in the panel.
Click the "B" on technical debt and you'll see an itemized and further-drillable list of the grades for all code elements. From there, you can see what led to the grade. By clicking the "Explore Debt" button, you get a drop-down list of pre-selected reports like "Types Hot Spots".
Click lines of code and you get a breakdown of which projects/files/types/methods have the most lines of code.
Click failed quality gates to see where you've got the most major problems (Quino currently has 3 categories).
Click "Critical" or "Violated" rules to see architectural rules that you're violating. As with everything in NDepend, you can pick and choose which rules should apply. I use the default set of rules in Quino.
Most of our critical issues are for mutually-dependent namespaces. This is most likely not root namespaces crossing each other (though we'd like to get rid of those ASAP) but sub-namespaces that refer back to the root and vice-versa. This isn't necessarily a no-go, but it's definitely something to watch out for.
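For the curious, NDepend's default rule for this is itself written in CQLinq. Simplified from memory – treat it as a sketch, since the shipped rule includes more refinements – it looks roughly like this:

// <Name>Avoid namespaces mutually dependent</Name>
warnif count > 0
from n in Application.Namespaces
from m in n.NamespacesUsed
where m.IsUsing(n) // n uses m and m uses n: a cycle
select new { n, m }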
There are so many interesting things in these reports:
Click the "Low" issues (Quino has over 46,000!) and you can see that NDepend analyzes your code at an incredibly low level of granularity
Finallly, there's absolutely everything, which includes boxing/unboxing issues 7, method-names too long, large interfaces, large instances (could also be generated classes).
These are already marked as low, so don't worry that NDepend just rains information down on you. Stick to the critical/high violations and you'll have real issues to deal with (i.e. code that might actually lead to bugs, rather than code that leads to maintenance issues or incurs technical debt, both of which are longer-term concerns).
What you'll also notice in the screenshots is that NDepend doesn't just provide pre-baked reports: everything is based on its query language. NDepend's analysis is lightning-fast (it takes only a few seconds for all of Quino), during which it builds up a huge database of information about your code that it then queries in real time. NDepend provides a ton of pre-built queries linked from all over the UI, but you can adjust any of those queries in the pane at the top to tweak the results. The syntax is LINQ – NDepend calls it CQLinq – and there are a ton of comments in each query to help you figure out what else you can do with it.
As noted above, the amount of information can be overwhelming, but just hang in there and figure out what NDepend is trying to tell you. You can pin or hide a lot of the floating windows if it's all just a bit too much at first.
In our case, the test assemblies have more technical debt than the code they test. This isn't optimal, but it's better than the other way around. You might be tempted to exclude test assemblies from the analysis to boost your grade, but I think that's a bad idea. Testing code is production code. Make it just as good as the code it tests to ensure overall quality.
I did a quick comparison between Quino 4 and Quino 5 and we're moving in the right direction: the estimation of work required to get to grade A was already cut in half, so we've made good progress even without NDepend. I'm quite looking forward to using NDepend more regularly in the coming months. I've got my work cut out for me.
--
Many thanks to Patrick Smacchia of NDepend for generously providing an evaluator's license to me over the years.↩
We came up with a plan for reducing the size of the core solution in a recent architecture meeting. More on that in a subsequent blog post.↩
Quino has 10,000 tests, many of which are integration tests, so a change to a highly shared component would trigger thousands of tests to run, possibly for minutes. I can't see how it would be efficient to run tests continuously as I type in Quino. I've used continuous testing in smaller projects and it's really wonderful (both with ReSharper and also Wallaby for TypeScript), but it doesn't work so well with Quino because of its size and highly generalized nature.↩
I ran the analysis on both Quino 4 and Quino 5, but wasn't able to directly compare results because I think I inadvertently threw them away with our nant clean command. I'd moved the ndepend out folder to the common folder and our command wiped out the previous results. I'll work on persisting those better in the future.↩
I generated coverage data using DotCover, but realized only later that I should have configured it to generate NDepend-compatible coverage data (as detailed in NDepend Coverage Data). I'll have to do that and run it again. For now, no coverage data in NDepend. This is what it looks like in DotCover, though. Not too shabby:↩
Getting that documentation out to our developers is also a work-in-progress. Until recently, we've been stymied by the lack of a good tool and ugly templates. But recently we added DocFX support to Quino and the generated documentation is gorgeous. There'll be a post hopefully soon announcing the public availability of Quino documentation.↩
There's probably a lot of low-hanging fruit of inadvertent allocations here. On the other hand, if they're not code hot paths, then they're mostly harmless. It's more a matter of coding consistently. There's also an extension for ReSharper (the "Heap Allocations Viewer") that indicates allocations directly in the IDE, in real-time. I have it installed, and it's nice to see where I'm incurring allocations.↩
The summary below describes major new features, items of note and breaking changes. The full list of issues is in the release notes and is available to those with access to the Encodo issue tracker.
Unless we find a blocking issue that can't be fixed with a patch to the product, this will be the last release on the 4.x branch.
- IExternalLoggerFactory has been renamed to IExternalLoggerProvider
- ExternalLoggerFactory has been renamed to ExternalLoggerProvider
- NullExternalLoggerFactory has been renamed to NullExternalLoggerProvider
- IUserCredentials.AuthenticationToken is now an IToken instead of a string

The summary below describes major new features, items of note and breaking changes. The full list of issues is in the release notes and is available to those with access to the Encodo issue tracker.
ReportDefinitionParameter.Hidden now has the default value false. Integrating this release will trigger a schema migration to adjust that value in the database.

Consider the following scenarios:
Under the stresses that come with the combination of these two scenarios, software developers often overlook one critical aspect to a successful, future-proof project: external package-maintenance.
I recently sat down and wrote an email explaining how I go about package-maintenance and thought it would be useful to write up those notes and share them with others.
The tech world moves quickly; new code styles, frameworks and best practices evolve in the blink of an eye. Before you know it, the packages you’d installed the previous year are no longer documented and there aren’t any blog posts describing how to upgrade them to their latest versions. Nightmare.
My general rule of thumb to avoid this ill-fated destiny is to set aside some time each sprint to upgrade packages. The process isn’t really involved, but it can be time-consuming if you upgrade a handful of packages at once and find that one of them breaks your code. You then have to go through the packages one by one, downgrading each in turn, to figure out which is the culprit.
My upgrade procedure (in this case using the yarn package manager) is:
- yarn outdated
- yarn add clean-webpack-plugin@latest or yarn add clean-webpack-plugin@VERSION_NUMBER to install a specific version

Tom Szpytman is a Software Developer at Encodo and works primarily on the React/Typescript stack
The summary below describes major new features, items of note and breaking changes. The full list of issues is in the release notes and is available to those with access to the Encodo issue tracker.
tl;dr: Applications might have to include the System.ValueTuple NuGet package in some assemblies.
This release adds an overload for creating delegate expressions that returns a tuple (object, bool). This improvement allows applications to more easily specify a lambda that returns a value and a flag indicating whether the value is valid.
There are several overloads available for creating a DelegateExpression. The simplest of these assumes that a value can be calculated and is appropriate for constant values or values whose calculation does not depend on the IExpressionContext passed to the expression.
However, many (if not most) delegates should indicate whether a value can be calculated by returning true or false and setting an out object value parameter instead. This is still the standard API, but 4.1.5 introduces an overload that supports tuples, which makes it easier to call.
In 4.1.4, an application had the following two choices for using the "tryGetValue" variant of the API.
Elements.Classes.A.AddCalculatedProperty("FullText", f => f.CreateDelegate(GetFullText));
private bool GetFullText(IExpressionContext context, out object value)
{
    var obj = context.GetInstance<IMetaObject>();
    if (obj != null)
    {
        value = obj.ToString();
        return true;
    }

    value = null;
    return false;
}
If the application wanted to inline the lambda, the types have to be explicitly specified:
Elements.Classes.A.AddCalculatedProperty("FullText", f => f.CreateDelegate((IExpressionContext context, out object value) =>
{
    var obj = context.GetInstance<IMetaObject>();
    if (obj != null)
    {
        value = obj.ToString();
        return true;
    }

    value = null;
    return false;
}));
The overload that expects a tuple makes this a bit simpler:
Elements.Classes.A.AddCalculatedProperty("FullText", f => f.CreateDelegate(context =>
{
    var obj = context.GetInstance<IMetaObject>();
    return obj != null ? (obj.ToString(), true) : (null, false);
}));
Previously, a DelegateExpression would always return false for a call to TryGetValue() made with an empty context. This has been changed so that the DelegateExpression no longer has any logic of its own. This means, though, that an application must be a little more careful to properly return false when it is missing the information that it needs to calculate the value of an expression.
All but the lowest-level overloads and helper methods are unaffected by this change. An application that uses factory.CreateDelegate<T>() will not be affected. Only calls to new DelegateExpression() need to be examined on upgrade. It is strongly urged to convert direct calls to the constructor to use the IMetaExpressionFactory instead.
Imagine if an application used a delegate for a calculated expression as shown below.
Elements.Classes.Person.AddCalculatedProperty("ActiveUserCount", MetaType.Boolean, new DelegateExpression(GetActiveUserCount));
// ...
private object GetActiveUserCount(IExpressionContext context)
{
    return context.GetInstance<Person>().Company.ActiveUsers;
}
When Quino processes the model during startup, expressions are evaluated in order to determine whether they can be executed without a context (caching, optimization, etc.). This application was safe in 4.1.4 because Quino automatically ignored an empty context and never called this code.
However, as of 4.1.5, an application that calls this low-level version will have to handle the case of an empty or insufficient context on its own.
It is highly recommended to move to using the expression factory and typed arguments instead, as shown below:
Elements.Classes.Person.AddCalculatedProperty<Person>("ActiveUserCount", MetaType.Boolean, f => f.CreateDelegate(GetActiveUserCount));
// ...
private object GetActiveUserCount(Person person)
{
    return person.Company.ActiveUsers;
}
If you just want to add your own handling for empty contexts, you can do the following (note that you have to change the signature of GetActiveUserCount):
Elements.Classes.Person.AddCalculatedProperty("ActiveUserCount", MetaType.Boolean, new DelegateExpression(GetActiveUserCount));
// ...
private bool GetActiveUserCount(IExpressionContext context, out object value)
{
    var person = context.GetInstanceOrDefault<Person>();
    if (person == null)
    {
        value = null;
        return false;
    }

    value = person.Company.ActiveUsers;
    return true;
}