v4.1: Layouts, captions, multiple requests-per-connection

The summary below describes major new features, items of note and breaking changes. The full list of issues is in the release notes and is available to those with access to the Encodo issue tracker.


Breaking changes

  • Usages of {{RunWinform}} should be updated to {{RunMetaWinform}}. The method {{RunWinform}} is now defined with a function parameter that expects an {{IApplication}} parameter instead of an {{IDataSession}} parameter. The change was made to allow applications more flexibility in configuring startup for applications with multiple main forms (QNO-4922) while still re-using as much shared code in Quino as possible. {{RunMetaWinform}} extends the {{RunWinform}} support to create and configure an {{IDataSession}} per form.

v4.1.3: Fixes for search layouts, languages, reconnects, descendant relations

The summary below describes major new features, items of note and breaking changes. The full list of issues is in the release notes and is available to those with access to the Encodo issue tracker.


  • Fixed reconnect for ADO database connections (QNO-5776)
  • Fixed occasional startup crash when generating data (QNO-5768)
  • Improved search-layout / search-class generation (QNO-5767)
  • Restored support / example for toggling data languages in the UI (QNO-5764)
  • Fixed save for relations with descendant endpoints (QNO-5753)

Breaking changes

  • None

Windows 10 Fall Creators Update

At Encodo, we've become much more cautious about installing massive Windows updates. Since a couple of us (including me) had started experiencing memory leaks under the previous version, we installed this update on select machines first.

Memory Leak fixed?

The memory leak we were experiencing was only on a couple of machines. It manifested as Task Manager reporting a very high RAM-usage percentage and, occasionally, Windows popping up a message box asking to close applications. Also, Win + S no longer responded on the first try (i.e. the Windows shell became only partially responsive).

Investigating with the RAMMap tool from Microsoft revealed a large amount (8GB) of "Process Private" RAM that couldn't all be accounted for in Task Manager or the Resource Monitor.

Initial results are better and seem to indicate normal behavior: if an application that uses a lot of RAM (e.g. Visual Studio) is closed, the reported RAM usage drops correspondingly.

NB: this is not at all a scientific conclusion. We applied the update, and memory management on a previously misbehaving machine is better. That's all.

Task Manager

The Task Manager has two immediately obvious improvements:

  • All of an application's related processes are now collected under that application's node in the Task Manager. This is most obvious for web browsers, for which a much more realistic -- and, possibly, scary -- RAM-usage figure is shown. Other applications, like Visual Studio and even iTunes (as shown in the screenshot below), benefit as well. This gives the user a much clearer picture of which applications are actually using resources, even if they have been split into multiple processes.
  • % GPU usage is now a default column. You can now see that your web browsers are making good use of all hardware, where appropriate.

One drawback, though, is that you can no longer see which solution is open in which instance of Visual Studio.

Aero Shake

Microsoft, as usual, has re-enabled settings that you may have turned off. They did this with the mind-boggling feature called "Aero Shake": when you grab a window's title bar and shake it with the mouse, all other windows are minimized. At first, just the feature was bizarre; now, it's Microsoft's fixation with re-enabling it that is truly worrying.

We've disabled it in the group policies on our domain controller so our users never have to suffer again.


We have not found any drawbacks to this update with our software and development tools and will roll it out to the rest of our users immediately.

v5.0: Bootstrap IOC, authorization driver, improve dependencies

The summary below describes major new features, items of note and breaking changes. The full list of issues is in the release notes below and is available to those with access to the Encodo issue tracker.


Quino 4 was released on August 11, 2017. Quino 4.1.7 brings that venerable line to a close. Products are encouraged to upgrade to this version as the next stable build, with the improvements outlined below.


Breaking changes

  • ApplicationBasedTestsBase.CreateApplication() has a new parameter (bool isFixtureApplication). Applications that override this method will have to adjust their definition.
  • PathTools methods are deprecated. In some cases, these methods have been marked as [Obsolete] with the error flag set, which causes a compile error. The obsolete message indicates which method to use instead.
  • ICommandSetManager.Apply() no longer has an IValueTools parameter
  • RegisterStandardConnectionServices has been renamed to RegisterApplicationConnectionServices
  • IValueTools.EnumParser is no longer settable; register your object/implementation in the Bootstrap instead
  • Methods from DataMetaExpressionFactoryExtensions have been removed. Instead of creating expressions directly with the expression factory, use the query or query table.
  • IQueryCondition no longer has a type parameter (it used to be IQueryCondition<TSelf>)

v4.1.2: Expressions and code-generation

The summary below describes major new features, items of note and breaking changes. The full list of issues is in the release notes and is available to those with access to the Encodo issue tracker.


  • Fix code-generation for modules with no classes (QNO-5722)
  • Restore ExpressionConstants.Now (QNO-5720)
  • Improve backwards-compatibility for wrappers (QNO-5744)

Breaking changes

  • None

2017 Migration from ESXi 5.5 to Proxmox 5.0

Here at Encodo, we host our services in our own infrastructure which, after 12 years, has grown quite large. But this article is about our migration away from VMWare.

So, here's how we proceeded:

We set up a test environment as close as possible to the new one before buying the new server, so we could test everything. This was our first contact with software RAIDs and their monitoring capabilities.

Install the Hypervisor

Installation time, here it goes:

  • Install the latest Proxmox[1]: this is very straightforward, so I won't go into that part.
  • After the installation is done, log in via ssh and check the syslog for errors (we had some NTP issues, so I fixed that before doing anything else).

Check Disks

We have three disks for our RAID5. We don't have a lot of files to store, so 1TB disks should still be OK (see Why RAID 5 stops working in 2009 as to why you shouldn't use RAID5 anymore).

We set up Proxmox on a 256GB SSD. Our production server will have 4x 1TB SSDs, one of which is a spare. Note down the serial number of all your disks. I don't care how you do it -- take pictures or whatever -- but if you ever need to know which slot contains which disk, or whether the failing disk is actually in that slot, having solid documentation really helps a ton.

You should check your disks for errors beforehand! Do a full smartctl check and find out which disk is which. This is key: we even took pictures prior to inserting them into the server (and put them in our wiki) so we have the serial number available for each slot.

See which disk is which:

for x in {a..e}; do smartctl -a /dev/sd$x | grep 'Serial' | xargs echo "/dev/sd$x: "; done

Start a long test for each disk:

for x in {a..e}; do smartctl -t long /dev/sd$x; done

See SMART tests with smartctl for more detailed information.

Disk Layout & Building the RAID

We'll assume the following hard disk layout:

/dev/sda = System Disk (Proxmox installation)
/dev/sdb = RAID 5, 1
/dev/sdc = RAID 5, 2
/dev/sdd = RAID 5, 3
/dev/sde = RAID 5 Spare disk
/dev/sdf = RAID 1, 1
/dev/sdg = RAID 1, 2
/dev/sdh = Temporary disk for migration

When the check is done (usually a few hours), you can verify the test result with

smartctl -a /dev/sdX

Now that we know our disks are OK, we can proceed creating the software RAID. Make sure you get the correct disks:

mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

The RAID5 will start building immediately, but you can also start using it right away. Since I had other things on my plate, I waited for it to finish.
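Building takes a while; one way to keep an eye on it is to pull the recovery line out of /proc/mdstat. Below is a minimal sketch of such a check, demonstrated against sample text (on the real host, you would pipe `cat /proc/mdstat` into it instead):

```shell
# Extract the recovery/progress line from mdstat-style output.
rebuild_progress() {
  grep -o 'recovery = [0-9.]*%' || echo "no rebuild in progress"
}

# Sample mdstat text; on the server: cat /proc/mdstat | rebuild_progress
printf 'md0 : active raid5 sdd[3] sdc[1] sdb[0]\n      [==>.....]  recovery = 12.6%% (123456/976762584)\n' | rebuild_progress
```
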

Add the spare disk (if you have one) and export the configuration to the config:

mdadm --add /dev/md0 /dev/sde
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

Configure Monitoring

Edit the email address in /etc/mdadm/mdadm.conf to a valid mail address within your network and test it via

mdadm --monitor --scan --test -1

Once you know that your monitoring mails come through, add active monitoring for the raid device:

mdadm --monitor --daemonise --mail=valid@domain.com --delay=1800 /dev/md0

To finish up monitoring, it's important to read mismatch_cnt from /sys/block/md0/md/mismatch_cnt periodically to make sure the hardware is OK. We use our very old Nagios installation for this and got a working script for the check from Mdadm checkarray by Thomas Krenn.
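The core of such a check boils down to something like the following sketch (the messages and OK/WARNING split are assumptions; the linked script is more thorough). It takes the sysfs path as a parameter so it can be exercised against a stand-in file:

```shell
# Report OK/WARNING based on the mismatch_cnt value in the given sysfs file.
check_mismatch() {
  count=$(cat "$1")
  if [ "$count" -eq 0 ]; then
    echo "OK - mismatch_cnt is $count"
  else
    echo "WARNING - mismatch_cnt is $count"
  fi
}

# On the server: check_mismatch /sys/block/md0/md/mismatch_cnt
# Demonstrated against a stand-in file:
echo 0 > /tmp/mismatch_cnt
check_mismatch /tmp/mismatch_cnt
```
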

Creating and Mounting Volumes

Back to building! We now need to make the created storage available to Proxmox. To do this, we create a PV, a VG and an LV thin pool. We use 90% of the storage for the pool; since we need to migrate other devices as well, the remaining 10% is enough for us to migrate two VMs at a time. We format the migration volume with XFS:

pvcreate /dev/md0
vgcreate raid5vg /dev/md0
lvcreate -l 90%FREE -T raid5vg/raid5lv
lvcreate -n migrationlv -l +100%FREE raid5vg
mkfs.xfs /dev/mapper/raid5vg-migrationlv

Mount the formatted migration logical volume (if you want to reboot, add it to fstab obviously):

mkdir /mnt/migration
mount /dev/mapper/raid5vg-migrationlv /mnt/migration
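If you do add it to fstab, the entry would look something like this (a sketch; mount options left at defaults):

```
/dev/mapper/raid5vg-migrationlv  /mnt/migration  xfs  defaults  0  2
```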

If you don't have the disk space to migrate the VMs like this, add an additional disk (/dev/sdh in our case). Create a new partition on it with

fdisk /dev/sdh

Accept all the defaults for maximum size. Then format the partition with XFS and mount it:

mkfs.xfs /dev/sdh1
mkdir /mnt/largemigration
mount /dev/sdh1 /mnt/largemigration

Now you can go to your Proxmox installation and add the thin pool (and your largemigration partition, if you have it) under Datacenter -> Storage -> Add. Give it an ID (I called it raid5 because I'm very creative), Volume Group: raid5vg, Thin Pool: raid5lv.
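For reference, those GUI steps should produce an entry along these lines in /etc/pve/storage.cfg (a sketch based on Proxmox conventions; verify against your own file):

```
lvmthin: raid5
        thinpool raid5lv
        vgname raid5vg
        content images
```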

Extra: Upgrade Proxmox

At this time, we'd bought our Proxmox license and did a dist-upgrade from 4.4 to 5.0, which had just been released. To do that, follow the upgrade document from the Proxmox wiki. Or install 5.0 right away.

Migrating VMs

Now that the storage is in place, we are all set to create our VMs and do the migration. Here's the process we used -- there are probably more elegant and efficient ways to do it, but this way works for both our Ubuntu installations and our Windows VMs:

  1. In ESXi: Shut down the VM to migrate
  2. Download the vmdk file from the ESXi storage, or activate ssh on ESXi and scp the vmdk, including the flat file (important!), directly to /mnt/migration (or largemigration, respectively).
  3. Shrink the vmdk if you downloaded it locally (use the non-flat file as input if the flat file doesn't work):[2]
     vdiskmanager-windows.exe -r vmname.vmdk -t 0 vmname-pve.vmdk
  4. Copy the new file (vmname-pve.vmdk) to Proxmox via scp into the migration directory /mnt/migration (or largemigration, respectively)
  5. Ssh into your Proxmox installation and convert the disk to qcow2:
     qemu-img convert -f vmdk /mnt/migration/vmname-pve.vmdk -O qcow2 /mnt/migration/vmname-pve.qcow2
  6. In the meantime you can create a new VM:
    1. In general: give it the same resources as it had in the old hypervisor
    2. Do not attach a cd/dvd
    3. Set the disk to at least the size of the vmdk image
    4. Make sure the image is in the "migration" storage
    5. Note the ID of the VM; you'll need it in the next step
  7. Once the conversion to qcow2 is done, override the existing image with the converted one. Make sure you get the correct ID and that the target .qcow2 file exists. Override with no remorse:
     mv /mnt/migration/vmname-pve.qcow2 /mnt/migration/images/<vm-id>/vm-<vm-id>-disk-1.qcow2
  8. When this is done, boot the image and test whether it comes up and runs
  9. If it does, go to Proxmox and move the disk to the RAID5:
    1. Select the VM you just started
    2. Go to Hardware
    3. Click on Hard Disk
    4. Click on Move Disk
    5. Select the Raid5 Storage and check the checkbox Delete Source
    6. This will happen live

That's it. Now repeat these last steps for all the VMs -- in our case around 20, which is just barely manageable without any automation. If you have more VMs, you could automate more, like copying the VMs directly from ESXi to Proxmox via scp and doing the initial conversion there.
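The per-VM command sequence from the steps above can be sketched as a small helper that prints the commands for review rather than running them. The ESXi datastore path and disk naming here are assumptions for illustration:

```shell
# Print the migration commands for one VM (dry run; review before executing).
migration_plan() {
  vm="$1"   # vmdk base name, e.g. "webserver" (hypothetical)
  id="$2"   # Proxmox VM id created in step 6
  echo "scp esxi:/vmfs/volumes/datastore1/$vm/$vm-pve.vmdk /mnt/migration/"
  echo "qemu-img convert -f vmdk /mnt/migration/$vm-pve.vmdk -O qcow2 /mnt/migration/$vm-pve.qcow2"
  echo "mv /mnt/migration/$vm-pve.qcow2 /mnt/migration/images/$id/vm-$id-disk-1.qcow2"
}

migration_plan webserver 101
```
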

  1. We initially installed Proxmox 4.4, then upgraded to 5.0 during the migration.

  2. You can get the vdiskmanager from Repairing a virtual disk in Fusion 3.1 and Workstation 7.1 (1023856) under "Attachments"

v4.0: New modeling API, expanded UI support and data improvements

The summary below describes major new features, items of note and breaking changes. The full list of issues is in the release notes below and is available to those with access to the Encodo issue tracker.


Metadata & Modeling

Most of the existing metadata-building API has been deprecated and replaced with a fluent API that is consistent and highly extensible.




  • Improve compatibility of generated code with StyleCop/best practices (QNO-5252, QNO-5584, QNO-5515)
  • Add support for integrating interfaces into generated code (QNO-5585)
  • Finalize support for including generated code in a separate assembly. Generated code can now be in a separate assembly from the modeling code.
  • Removed WinformDx code generator (QNO-5324)


  • Improve debugging with Quino sources and Nuget packages (QNO-5473)
  • Improve directory-services integration (QNO-5421)
  • Reworked the plugins system (QNO-2525)
  • Improve assembly-loading in tests and tools (QNO-5538, QNO-5571)
  • Improve registration API for external loggers; integrate Serilog (QNO-5591)
  • Improve schema-migration logging (QNO-5586)
  • Allow customization of exception and message formatting (QNO-5551, QNO-5550)

Breaking changes

Metadata & Modeling

  • The Encodo.Quino.Builders.Extensions namespace has been removed. All members were moved to Encodo.Quino.Meta or Encodo.Quino.Builders instead.
  • The assembly Quino.Meta.Standard no longer exists and may have to be removed manually if Nuget does not remove it for you.
  • Added default CreateModel() to MetaBuilderBasedModelBuilderBase
  • Added empty constructor to MetaBuilderBasedModelBuilderBase
  • GetSubModules() and GetModules() now return IMetaModule instead of IModuleAspect
  • Even deprecated versions of AddSort(), AddSortOrderProperty(), AddEnumeratedClass(), AddValueListProperty() all expect a parameter of type IMetaExpressionFactory or IExpressionConstants now.


  • The IDataSessionAwareList is used instead of IMetaAwareList
  • Two constructors of DataList have been made private
  • GenericObject.DoSetDedicatedSession() is no longer called or overridable
  • None of the classes derived from AuthenticatorBase accept an IApplication as a constructor parameter anymore. Instead, use the Application or Session to create the authenticator with GetInstance<TService>(). E.g., if before you created a TokenAuthenticator with new TokenAuthenticator(Application), you should now create it with Application.GetInstance<TokenAuthenticator>(). You are also free to call the new constructor directly, but construction using the IOC is strongly recommended.
  • The constructor for DataSession has changed; this shouldn't cause too many problems as applications should be using the IDataSessionFactory to construct instances anyway.
  • DataGenerators have changed considerably. Implement the IDataGenerator interface instead of using the DataGenerator base class.
  • The names of ISchemaDifference have changed, so the output of a migration plan will also be different. Software that depended on scraping the plan to determine outcomes may no longer work.
  • Default values are no longer implicitly set. A default value for a required property will only be supplied if one is set in the model. Otherwise, a NULL-constraint violation will be thrown by the database. Existing applications will have to be updated: either set a default value in the metadata or set the property value before saving objects.


  • The generated filename for builders has changed from "Extensions.cs" to "Builders.cs". When you regenerate code for the V2 format, you will have to include the new files and remove the old ones from your project.
  • Data-language-specific properties are no longer generated by default because there is no guarantee that languages are available in a given application. You can still enable code-generation by calling SetCodeGenerated() on the multi-language or value-list property.
  • The MetaModelBuilder classes are no longer generated (QNO-5515)


  • LanguageTools.GetCaption() no longer defaults to GetDescription() because this is hardly ever what you wanted to happen.
  • CaptionExtensions are now in CaptionTools and are no longer extension methods on object.
  • ReflectionExtensions are now in ReflectionTools and are also no longer extension methods on object.


  • Redeclared Operation<> with new method signature


Some Windows-specific functionality has been moved to new assemblies. These assemblies are automatically included for Winform and WPF applications (as before). Applications that want to use the Windows-specific functionality will have to reference the following packages:

  • For WindowsIdentity-based code, use the Encodo.Connections.Windows package and call UseWindowsConnectionServices()
  • For ApplicationSettingsBase support, use the Encodo.Application.Windows package and call UseWindowsApplication()
  • For Directory Services support, use the Encodo.Security.Windows package and call UseWindowsSecurityServices().

C# Handbook 7.0

I announced almost exactly one year ago that I was rewriting the Encodo C# Handbook. The original was published almost exactly nine years ago. There were a few more releases as well as a few unpublished chapters.

I finally finished a version that I think I can once again recommend to my employees at Encodo. The major changes are:

  • The entire book is now a Git Repository. All content is now in Markdown. Pull requests are welcome.
  • I've rewritten pretty much everything. I removed a lot of redundancies, standardized formulations and used a much more economical writing style than in previous versions.
  • Recommendations now include all versions of C# up to 7
  • There is a clearer distinction between general and C#-specific recommendations
  • There are now four main sections: Naming, Formatting, Usage and Best Practices. The last is broken into Design, Safe Programming, Error-handling, Documentation and a handful of other, smaller topics.

Here's the introduction:

The focus of this document is on providing a reference for writing C#. It includes naming, structural and formatting conventions as well as best practices for writing clean, safe and maintainable code. Many of the best practices and conventions apply equally well to other languages.

Check out the whole thing! Or download the PDF that I included in the repository.

Adventures in .NET Standard 2.0-preview1

.NET Standard 2.0 is finally publicly available as a preview release. I couldn't help myself and took a crack at converting parts of Quino to .NET Standard just to see where we stand. To keep me honest, I did all of my investigations on my MacBook Pro in MacOS.

IDEs and Tools

I installed Visual Studio for Mac, the latest JetBrains Rider EAP and .NET Standard 2.0-preview1. I already had Visual Studio Code with the C#/OmniSharp extensions installed. Everything installed easily and quickly and I was up-and-running in no time.

Armed with 3 IDEs and a powerful command line, I waded into the task.

Porting Quino to .NET Standard

Quino is an almost decade-old .NET Framework solution that has seen continuous development and improvement. It's quite modern and well-modularized, but we still ran into considerable trouble when experimenting with .NET Core 1.1 almost a year ago. At the time, we dropped our attempts to work with .NET Core, but were encouraged when Microsoft shifted gears from the extremely low-surface-area API of .NET Core to the more inclusive, though still considerably cleaned-up, API of .NET Standard.

Since it's an older solution, Quino projects use the older csproj file-format: the one where you have to whitelist the files to include. Instead of re-using these projects, I figured a good first step would be to use the dotnet command-line tool to create a new solution and projects and then copy files over. That way, I could be sure that I was really only including the code I wanted -- instead of random cruft generated into the project files by previous versions of Visual Studio.

The dotnet Command

The dotnet command is really very nice and I was able to quickly build up a list of core projects in a new solution using the following commands:

  • dotnet new sln
  • dotnet new classlib -n {name}
  • dotnet add reference {../otherproject/otherproject.csproj}
  • dotnet add package {nuget-package-name}
  • dotnet clean
  • dotnet build

That's all I've used so far, but it was enough to investigate this brave new world without needing an IDE. Spoiler alert: I like it very much. The API is so straightforward that I don't even need to include descriptions for the commands above. (Right?)

Everything really seems to be coming together: even the documentation is clean, easy-to-navigate and has very quick and accurate search results.
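To give a feel for the workflow, here's a sketch of the sequence for recreating one project and wiring it into the solution. It prints the commands (a dry run) so the order can be inspected; the project names are examples, not the actual Quino layout:

```shell
# Print the dotnet commands needed to recreate a project and add it to the solution.
bootstrap_project() {
  name="$1"   # new project name
  ref="$2"    # optional project reference (path to a .csproj)
  echo "dotnet new classlib -n $name"
  echo "dotnet sln add $name/$name.csproj"
  if [ -n "$ref" ]; then
    echo "dotnet add $name/$name.csproj reference $ref"
  fi
  echo "dotnet build $name"
}

bootstrap_project Encodo.Expressions Encodo.Core/Encodo.Core.csproj
```
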

Initial Results

  • Encodo.Core compiles (almost) without change. The only change required was to move project-description attributes that used to be in the AssemblyInfo.cs file to the project file instead (where they admittedly make much more sense). If you don't do this, the compiler complains about "[CS0579] Duplicate 'System.Reflection.AssemblyCompanyAttribute' attribute" and so on.
  • Encodo.Expressions references System.Windows.Media for Color and the Colors constants. I changed those references to System.Drawing and Color, respectively -- something I knew I would have to do.
  • Encodo.Connections references the .NET-Framework--only WindowsIdentity. I will have to move these references to an Encodo.Core.Windows project and move creation of the CurrentCredentials, AnonymousCredentials and UserCredentials to a factory in the IOC.
  • Quino.Meta references the .NET-Framework--only WeakEventManager. There are only two references and these are used to implement a CollectionChanged feature that is nearly unused. I will probably have to copy/implement the WeakEventManager for now until we can deprecate those events permanently.
  • Quino.Data depends on Quino.Meta.Standard, which references System.Windows.Media (again) as well as a few other things. The Quino.Meta.Standard potpourri will have to be split up.
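The AssemblyInfo change from the first bullet looks something like this in the new project-file format (property names follow SDK-style csproj conventions; the values are examples):

```
<!-- In the .csproj, replacing attributes formerly in AssemblyInfo.cs -->
<PropertyGroup>
  <Company>Encodo Systems AG</Company>
  <Product>Quino</Product>
  <Copyright>Copyright (c) Encodo Systems AG</Copyright>
</PropertyGroup>
```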

I discovered all of these things using just VS Code and the command-line build. It was pretty easy and straightforward.

So far, porting to .NET Standard is a much more rewarding process than our previous attempt at porting to .NET Core.

The Game Plan

At this point, I had a shadow copy of a bunch of the core Quino projects with new project files as well as a handful of ad-hoc changes and commented code in the source files. While OK for investigation, this was not a viable strategy for moving forward on a port for Quino.

I want to be able to work in a branch of Quino while I further investigate the viability of:

  • Targeting parts of Quino to .NET Standard 2.0 while keeping other parts targeting the lowest version of .NET Framework that is compatible with .NET Standard 2.0 (4.6.1). This will, eventually, be only the Winform and WPF projects, which will never be supported under .NET Standard.
  • Using the new project-file format for all projects, regardless of target (which IDEs can I still use? Certainly the latest versions of Visual Studio et al.)
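For context, a new-format project file is tiny by comparison; the split described above amounts to choosing the target per project. A sketch:

```
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- netstandard2.0 where possible; net461 for the Winform/WPF projects -->
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>
```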

To test things out, I copied the new Encodo.Core project file back to the main Quino workspace and opened the old solution in Visual Studio for Mac and JetBrains Rider.

IDE Pros and Cons

Visual Studio for Mac

Visual Studio for Mac says it's a production release, but it stumbled right out of the gate: it failed to compile Encodo.Core even though dotnet build had compiled it without complaint from the get-go. Visual Studio for Mac claimed that OperatingSystem was not available. However, according to the documentation, OperatingSystem is available for .NET Standard -- but not in .NET Core. My theory is that Visual Studio for Mac was somehow misinterpreting my project file.

Update: After closing and re-opening the IDE, though, this problem went away and I was able to build Encodo.Core as well. Shaky, but at least it works now.

Unfortunately, working with this IDE remained difficult. It stumbled again on the second project that I changed to .NET Standard. Encodo.Core and Encodo.Expressions both have the same framework property in their project files -- <TargetFramework>netstandard2.0</TargetFramework> -- but, as you can see in the screenshot to the left, both are identified as .NETStandard.Library, yet one has version 2.0.0-preview1-25301-01 and the other has version 1.6.1. I have no idea where that second version number is coming from -- it looks like this IDE is mashing up the .NET Framework version and the .NET Standard versions. Not quite ready for primetime.

Also, the application icon is mysteriously the bog-standard MacOS-app icon instead of something more...Visual Studio-y.

JetBrains Rider EAP (April 27th)

JetBrains Rider built the assembly without complaint, just as dotnet build did on the command line. Rider didn't stumble as hard as Visual Studio for Mac, but it did have problems building projects after the framework had changed. On top of that, it wasn't always easy to figure out what to do to get the framework downloaded and installed. Rider still has a bit of a way to go before I would make it my main IDE.

I also noticed that, while Rider's project/dependencies view accurately reflects .NET Standard projects, the "project properties" dialog shows the framework version as just "2.0". The list of version numbers makes this look like I'm targeting .NET Framework 2.0.

Additionally, Rider's error messages in the build console are almost always truncated. The image to the right is of the IDE trying to inform me that Encodo.Logging (which was still targeting .NET Framework 4.5) cannot reference Encodo.Core (which targets .NET Standard 2.0). If you copy/paste the message into an editor, you can see that's what it says.[1]

Visual Studio Code

I don't really know how to get Visual Studio Code to do much more than syntax-highlight my code and expose a terminal from which I can manually call dotnet build. They write about Roslyn integration where "[o]n startup the best matching projects are loaded automatically but you can also choose your projects manually". While I saw that the solution was loaded and recognized, I never saw any error-highlighting in VS Code. The documentation does say that it's "optimized for cross-platform .NET Core development" and my projects targeted .NET Standard so maybe that was the problem. At any rate, I didn't put much time into VS Code yet.

Next Steps

  1. Convert all Quino projects to use the new project-file format and target .NET Framework. Once that's all running with the new project-file format, it will be much easier to start targeting .NET Standard with certain parts of the framework.
  2. Change the target for all projects to .NET Framework 4.6.1 to ensure compatibility with .NET Standard once I start converting projects.
  3. Convert projects to .NET Standard wherever possible. As stated above, Encodo.Core already works and there are only minor adjustments needed to be able to compile Encodo.Expressions and Quino.Meta.
  4. Continue with conversion until I can compile Quino.Schema, Quino.Data.PostgreSql, Encodo.Parsers.Antlr and Quino.Web. With this core, we'd be able to run the WebAPI server we're building for a big customer on a Mac or a Linux box.
  5. Given this proof-of-concept, a next step would be to deploy as an OWIN server to Linux on Amazon and finally see a Quino-based application running on a much leaner OS/Web-server stack than the current Windows/IIS one.

I'll keep you posted.[2]

  1. Encodo.Expressions.AssemblyInfo.cs(14, 12): [CS0579] Duplicate 'System.Reflection.AssemblyCompanyAttribute' attribute Microsoft.NET.Sdk.Common.targets(77, 5): [null] Project '/Users/marco/Projects/Encodo/quino/src/libraries/Encodo.Core/Encodo.Core.csproj' targets '.NETStandard,Version=v2.0'. It cannot be referenced by a project that targets '.NETFramework,Version=v4.5'.

  2. Update: I investigated a bit farther and I'm having trouble using NETStandard2.0 from NETFramework462 (the Mono version on Mac). I was pretty sure that's how it's supposed to work, but NETFramework (any version) doesn't seem to want to play with NETStandard right now. Visual Studio for Mac tells me that Encodo.Core (NETStandard2.0) cannot be used from Encodo.Expressions (Net462), which doesn't seem right, but I'm not going to fight with it on this machine anymore. I'm going to try it on a fully updated Windows box next -- just to remove the Mono/Mac/NETCore/Visual Studio for Mac factors from the equation. Once I've got things running on Windows, I'll prepare a NETStandard project-only solution that I'll try on the Mac.