2017 Migration from ESXi 5.5 to Proxmox 5.0

Here at Encodo, we host our services in our own infrastructure which, after 12 years, has grown quite large. But this article is about our migration away from VMWare.

So, here's how we proceeded:

We set up a test environment as close as possible to the new one before buying the new server, so we could test everything. This was our first contact with software RAID and its monitoring capabilities.

Install the Hypervisor

Installation time, here it goes:

  • Install latest Proxmox1: This is very straightforward; I won't go into that part.
  • After the installation is done, log in via ssh and check the syslog for errors (we had some NTP issues, so I fixed that before doing anything else).

Check Disks

We have three disks for our RAID 5. We don't have a lot of files to store, so 1TB disks should still be OK (see Why RAID 5 stops working in 2009 as to why you shouldn't do RAID 5 anymore).

We set up Proxmox on a 256GB SSD. Our production server will have 4x 1TB SSDs, one of which is a spare. Note down the serial number of all your disks. I don't care how you do it -- take pictures or whatever -- but if you ever need to know which slot contains which disk, or whether the failing disk is actually in that slot, solid documentation helps a ton.

You should check your disks for errors beforehand! Do a full smartctl check and find out which disk is which. This is key: we even took pictures before inserting them into the server (and put them in our wiki) so we have the serial number available for each slot.

See which disk is which:

for x in {a..e}; do smartctl -a /dev/sd$x | grep 'Serial' | xargs echo "/dev/sd$x: "; done

Start a long test for each disk:

for x in {a..e}; do smartctl -t long /dev/sd$x; done

See SMART tests with smartctl for more detailed information.

Disk Layout & Building the RAID

We'll assume the following hard disk layout:

/dev/sda = System Disk (Proxmox installation)
/dev/sdb = RAID 5, 1
/dev/sdc = RAID 5, 2
/dev/sdd = RAID 5, 3
/dev/sde = RAID 5 Spare disk
/dev/sdf = RAID 1, 1
/dev/sdg = RAID 1, 2
/dev/sdh = Temporary disk for migration

When the check is done (usually a few hours), you can verify the test result with

smartctl -a /dev/sdX

Now that we know our disks are OK, we can proceed with creating the software RAID. Make sure you specify the correct disks:

mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

The RAID 5 will start building immediately, but you can also start using it right away. Since I had other things on my plate, I waited for it to finish.

Add the spare disk (if you have one) and write the configuration to /etc/mdadm/mdadm.conf:

mdadm --add /dev/md0 /dev/sde
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

Configure Monitoring

Edit the email address in /etc/mdadm/mdadm.conf to a valid mail address within your network and test it via

mdadm --monitor --scan --test -1

Once you know that your monitoring mails come through, add active monitoring for the raid device:

mdadm --monitor --daemonise --mail=valid@domain.com --delay=1800 /dev/md0

To finish up monitoring, it's important to read mismatch_cnt from /sys/block/md0/md/mismatch_cnt periodically to make sure the hardware is OK. We use our very old Nagios installation for this and got a working script for the check from Mdadm checkarray by Thomas Krenn.
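The basic idea of such a check can be sketched as a small shell function (a hypothetical sketch, not the Thomas Krenn script; the path argument lets you point it at the real sysfs counter):

```shell
#!/bin/sh
# Hypothetical Nagios-style check for an md array's mismatch_cnt.
# Pass the path to the sysfs counter, e.g. /sys/block/md0/md/mismatch_cnt.
check_md_mismatch() {
    cnt=$(cat "$1")
    if [ "$cnt" -eq 0 ]; then
        echo "OK - mismatch_cnt is 0"
        return 0          # Nagios exit code for OK
    else
        echo "CRITICAL - mismatch_cnt is $cnt"
        return 2          # Nagios exit code for CRITICAL
    fi
}

# Production usage (from cron or as a Nagios plugin):
# check_md_mismatch /sys/block/md0/md/mismatch_cnt
```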

Creating and Mounting Volumes

Back to building! We now need to make the created storage available to Proxmox. To do this, we create a PV, a VG and an LV thin pool. We use 90% of the storage for the pool, since we need to migrate other devices as well; the remaining 10% is enough for us to migrate two VMs at a time. We format the migration volume with XFS:

pvcreate /dev/md0
vgcreate raid5vg /dev/md0
lvcreate -l 90%FREE -T raid5vg/raid5lv
lvcreate -n migrationlv -l 100%FREE raid5vg
mkfs.xfs /dev/mapper/raid5vg-migrationlv

Mount the formatted migration logical volume (if you want it to survive a reboot, add it to fstab):

mkdir /mnt/migration
mount /dev/mapper/raid5vg-migrationlv /mnt/migration
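For reference, the corresponding /etc/fstab line might look like this (a sketch assuming the device path above; adjust the options to your needs):

```
/dev/mapper/raid5vg-migrationlv  /mnt/migration  xfs  defaults  0  2
```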

If you don't have the disk space to migrate the VMs like this, add an additional disk (/dev/sdh in our case). Create a new partition on it with

fdisk /dev/sdh

Accept all the defaults for maximum size, then format the partition with XFS and mount it:

mkfs.xfs /dev/sdh1
mkdir /mnt/largemigration
mount /dev/sdh1 /mnt/largemigration

Now you can go to your Proxmox installation and add the thin pool (and your largemigration partition if you have it) in the Datacenter -> Storage -> Add. Give it an ID (I called it raid5 because I'm very creative), Volume Group: raid5vg, Thin Pool: raid5lv.
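Behind the scenes, the GUI writes this configuration to /etc/pve/storage.cfg; the resulting entry should look roughly like this (a sketch based on the names above):

```
lvmthin: raid5
        vgname raid5vg
        thinpool raid5lv
        content rootdir,images
```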

Extra: Upgrade Proxmox

At this point, we'd bought our Proxmox license and did a dist-upgrade from 4.4 to 5.0, which had just been released. To do that, follow the upgrade document from the Proxmox wiki. Or install 5.0 right away.

Migrating VMs

Now that the storage is in place, we're all set to create our VMs and do the migration. Here's the process we used -- there are probably more elegant and efficient ways, but this one works for both our Ubuntu installations and our Windows VMs:

  1. In ESXi: Shut down the VM to migrate
  2. Download the vmdk file from the ESXi storage, or activate ssh on ESXi and scp the vmdk, including the flat file (important!), directly to /mnt/migration (or largemigration, respectively).
  3. Shrink the vmdk if you actually downloaded it locally (use the non-flat file as input if the flat doesn't work):2
     vdiskmanager-windows.exe -r vmname.vmdk -t 0 vmname-pve.vmdk
  4. Copy the new file (vmname-pve.vmdk) to proxmox via scp into the migration directory /mnt/migration (or largemigration respectively)
  5. SSH into your Proxmox installation and convert the disk to qcow2:
     qemu-img convert -f vmdk /mnt/migration/vmname-pve.vmdk -O qcow2 /mnt/migration/vmname-pve.qcow2
  6. In the meantime you can create a new VM:
    1. In general: give it the same resources as it had in the old hypervisor
    2. Do not attach a cd/dvd
    3. Set the disk to at least the size of the vmdk image
    4. Make sure the image is in the "migration" storage
    5. Note the ID of the VM; you're gonna need it in the next step
  7. Once the conversion to qcow2 is done, overwrite the existing image with the converted one. Make sure you get the correct ID and that the target .qcow2 file exists. Overwrite with no remorse:
     mv /mnt/migration/vmname-pve.qcow2 /mnt/migration/images/<vm-id>/vm-<vm-id>-disk-1.qcow2
  8. When this is done, boot the image and test if it comes up and runs
  9. If it does, go to Proxmox and move the disk to the RAID 5:
    1. Select the VM you just started
    2. Go to Hardware
    3. Click on Hard Disk
    4. Click on Move Disk
    5. Select the Raid5 Storage and check the checkbox Delete Source
    6. This will happen live

That's it. Now repeat these last steps for all the VMs - in our case around 20, which is just barely manageable without any automation. If you have more VMs you could automate more things, like copying the VMs directly from ESXi to Proxmox via scp and do the initial conversion there.
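Stitched together, the per-VM steps above look roughly like this (a sketch with a hypothetical VM name and ID; the ESXi datastore path is an assumption -- adjust everything to your environment):

```shell
#!/bin/sh
# Hypothetical example values -- substitute your own.
VM=webserver01
VMID=101
SRC=/mnt/migration

# Step 2: copy the vmdk (including the -flat file!) from the ESXi host.
# scp "root@esxi:/vmfs/volumes/datastore1/$VM/$VM*.vmdk" "$SRC/"

# Step 5: convert the (shrunken) vmdk to qcow2.
# qemu-img convert -f vmdk "$SRC/$VM-pve.vmdk" -O qcow2 "$SRC/$VM-pve.qcow2"

# Step 7: overwrite the empty disk Proxmox created for VM $VMID.
TARGET="$SRC/images/$VMID/vm-$VMID-disk-1.qcow2"
# mv "$SRC/$VM-pve.qcow2" "$TARGET"
echo "$TARGET"
```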

  1. We initially installed Proxmox 4.4, then upgraded to 5.0 during the migration.

  2. You can get the vdiskmanager from Repairing a virtual disk in Fusion 3.1 and Workstation 7.1 (1023856) under "Attachments"

v4.0: New modeling API, expanded UI support and data improvements

The summary below describes major new features, items of note and breaking changes. The full list of issues is in the release notes below and is available to those with access to the Encodo issue tracker.


Metadata & Modeling

Most of the existing metadata-building API has been deprecated and replaced with a fluent API that is consistent and highly extensible.




  • Improve compatibility of generated code with StyleCop/best practices (QNO-5252, QNO-5584, QNO-5515)
  • Add support for integrating interfaces into generated code (QNO-5585)
  • Finalize support for including generated code in a separate assembly. Generated code can now be in a separate assembly from the modeling code.
  • Removed WinformDx code generator (QNO-5324)


  • Improve debugging with Quino sources and Nuget packages (QNO-5473)
  • Improve directory-services integration (QNO-5421)
  • Reworked the plugins system (QNO-2525)
  • Improve assembly-loading in tests and tools (QNO-5538, QNO-5571)
  • Improve registration API for external loggers; integrate Serilog (QNO-5591)
  • Improve schema-migration logging (QNO-5586)
  • Allow customization of exception and message formatting (QNO-5551, QNO-5550)

Breaking changes

Metadata & Modeling

  • The Encodo.Quino.Builders.Extensions namespace has been removed. All members were moved to Encodo.Quino.Meta or Encodo.Quino.Builders instead.
  • The assembly Quino.Meta.Standard no longer exists and may have to be removed manually if Nuget does not remove it for you.
  • Added default CreateModel() to MetaBuilderBasedModelBuilderBase
  • Added empty constructor to MetaBuilderBasedModelBuilderBase
  • GetSubModules() and GetModules() now return IMetaModule instead of IModuleAspect
  • Even deprecated versions of AddSort(), AddSortOrderProperty(), AddEnumeratedClass() and AddValueListProperty() now expect a parameter of type IMetaExpressionFactory or IExpressionConstants.


  • The IDataSessionAwareList is used instead of IMetaAwareList
  • Two constructors of DataList have been made private
  • GenericObject.DoSetDedicatedSession() is no longer called or overridable
  • None of the classes derived from AuthenticatorBase accept an IApplication as a constructor parameter anymore. Instead, use the Application or Session to create the authenticator with GetInstance<TService>(). E.g. where you previously created a TokenAuthenticator with new TokenAuthenticator(Application), you should now create it with Application.GetInstance<TokenAuthenticator>(). You are also free to call the new constructor directly, but construction via the IOC is strongly recommended.
  • The constructor for DataSession has changed; this shouldn't cause too many problems as applications should be using the IDataSessionFactory to construct instances anyway.
  • DataGenerators have changed considerably. Implement the IDataGenerator interface instead of using the DataGenerator base class.
  • The names of the ISchemaDifference implementations have changed, so the output of a migration plan will also be different. Software that depended on scraping the plan to determine outcomes may no longer work.
  • Default values are no longer implicitly set. A default value for a required property will only be supplied if one is set in the model. Otherwise, a NULL-constraint violation will be thrown by the database. Existing applications will have to be updated: either set a default value in the metadata or set the property value before saving objects.


  • The generated filename for builders has changed from "Extensions.cs" to "Builders.cs". When you regenerate code for the V2 format, you will have to include the new files and remove the old ones from your project.
  • Data-language-specific properties are no longer generated by default because there is no guarantee that the languages are available in a given application. You can still enable code-generation by calling SetCodeGenerated() on the multi-language or value-list property.
  • The MetaModelBuilder classes are no longer generated (QNO-5515)


  • LanguageTools.GetCaption() no longer defaults to GetDescription() because this is hardly ever what you wanted to happen.
  • CaptionExtensions are now in CaptionTools and are no longer extension methods on object.
  • ReflectionExtensions are now in ReflectionTools and are also no longer extension methods on object.


  • Redeclared Operation<> with new method signature


Some Windows-specific functionality has been moved to new assemblies. These assemblies are automatically included for Winform and WPF applications (as before). Applications that want to use the Windows-specific functionality will have to reference the following packages:

  • For WindowsIdentity-based code, use the Encodo.Connections.Windows package and call UseWindowsConnectionServices()
  • For ApplicationSettingsBase support, use the Encodo.Application.Windows package and call UseWindowsApplication()
  • For Directory Services support, use the Encodo.Security.Windows package and call UseWindowsSecurityServices().
C# Handbook 7.0

I announced almost exactly one year ago that I was rewriting the Encodo C# Handbook. The original was published almost exactly nine years ago. There were a few more releases as well as a few unpublished chapters.

I finally finished a version that I think I can once again recommend to my employees at Encodo. The major changes are:

  • The entire book is now a Git Repository. All content is now in Markdown. Pull requests are welcome.
  • I've rewritten pretty much everything. I removed a lot of redundancies, standardized formulations and used a much more economical writing style than in previous versions.
  • Recommendations now include all versions of C# up to 7
  • There is a clearer distinction between general and C#-specific recommendations
  • There are now four main sections: Naming, Formatting, Usage and Best Practices. The last is broken into Design, Safe Programming, Error-handling, Documentation and a handful of other, smaller topics.

Here's the introduction:

The focus of this document is on providing a reference for writing C#. It includes naming, structural and formatting conventions as well as best practices for writing clean, safe and maintainable code. Many of the best practices and conventions apply equally well to other languages.

Check out the whole thing! Or download the PDF that I included in the repository.

Adventures in .NET Standard 2.0-preview1

.NET Standard 2.0 is finally publicly available as a preview release. I couldn't help myself and took a crack at converting parts of Quino to .NET Standard just to see where we stand. To keep me honest, I did all of my investigations on my MacBook Pro in MacOS.

IDEs and Tools

I installed Visual Studio for Mac, the latest JetBrains Rider EAP and .NET Standard 2.0-preview1. I already had Visual Studio Code with the C#/OmniSharp extensions installed. Everything installed easily and quickly and I was up-and-running in no time.

Armed with 3 IDEs and a powerful command line, I waded into the task.

Porting Quino to .NET Standard

Quino is an almost decade-old .NET Framework solution that has seen continuous development and improvement. It's quite modern and well-modularized, but we still ran into considerable trouble when experimenting with .NET Core 1.1 almost a year ago. At the time, we dropped our attempts to work with .NET Core, but were encouraged when Microsoft shifted gears from the extremely low--surface-area API of .NET Core to the more inclusive though still considerably cleaned-up API of .NET Standard.

Since it's an older solution, Quino projects use the older csproj file-format: the one where you have to whitelist the files to include. Instead of re-using these projects, I figured a good first step would be to use the dotnet command-line tool to create a new solution and projects and then copy files over. That way, I could be sure that I was really only including the code I wanted -- instead of random cruft generated into the project files by previous versions of Visual Studio.

The dotnet Command

The dotnet command is really very nice and I was able to quickly build up a list of core projects in a new solution using the following commands:

  • dotnet new sln
  • dotnet new classlib -n {name}
  • dotnet add reference {../otherproject/otherproject.csproj}
  • dotnet add package {nuget-package-name}
  • dotnet clean
  • dotnet build

That's all I've used so far, but it was enough to investigate this brave new world without needing an IDE. Spoiler alert: I like it very much. The API is so straightforward that I don't even need to include descriptions for the commands above. (Right?)

Everything really seems to be coming together: even the documentation is clean, easy-to-navigate and has very quick and accurate search results.

Initial Results

  • Encodo.Core compiles (almost) without change. The only change required was to move project-description attributes that used to be in the AssemblyInfo.cs file to the project file instead (where they admittedly make much more sense). If you don't do this, the compiler complains about "[CS0579] Duplicate 'System.Reflection.AssemblyCompanyAttribute' attribute" and so on.
  • Encodo.Expressions references Windows.System.Media for Color and the Colors constants. I changed those references to System.Drawing and Color, respectively -- something I knew I would have to do.
  • Encodo.Connections references the .NET-Framework--only WindowsIdentity. I will have to move these references to a Encodo.Core.Windows project and move creation of the CurrentCredentials, AnonymousCredentials and UserCredentials to a factory in the IOC.
  • Quino.Meta references the .NET-Framework--only WeakEventManager. There are only two references and these are used to implement a CollectionChanged feature that is nearly unused. I will probably have to copy/implement the WeakEventManager for now until we can deprecate those events permanently.
  • Quino.Data depends on Quino.Meta.Standard, which references System.Windows.Media (again) as well as a few other things. The Quino.Meta.Standard potpourri will have to be split up.

I discovered all of these things using just VS Code and the command-line build. It was pretty easy and straightforward.

So far, porting to .NET Standard is a much more rewarding process than our previous attempt at porting to .NET Core.

The Game Plan

At this point, I had a shadow copy of a bunch of the core Quino projects with new project files as well as a handful of ad-hoc changes and commented code in the source files. While OK for investigation, this was not a viable strategy for moving forward on a port for Quino.

I want to be able to work in a branch of Quino while I further investigate the viability of:

  • Targeting parts of Quino to .Net Standard 2.0 while keeping other parts targeting the lowest version of .NET Framework that is compatible with .NET Standard 2.0 (4.6.1). This will, eventually, be only the Winform and WPF projects, which will never be supported under .NET Standard.
  • Using the new project-file format for all projects, regardless of target (which IDEs can I still use? Certainly the latest versions of Visual Studio et al.)

To test things out, I copied the new Encodo.Core project file back to the main Quino workspace and opened the old solution in Visual Studio for Mac and JetBrains Rider.

IDE Pros and Cons

Visual Studio for Mac

Visual Studio for Mac says it's a production release, but it stumbled right out of the gate: it failed to compile Encodo.Core even though dotnet build had compiled it without complaint from the get-go. Visual Studio for Mac claimed that OperatingSystem was not available. However, according to the documentation, OperatingSystem is available for .NET Standard -- but not in .NET Core. My theory is that Visual Studio for Mac was somehow misinterpreting my project file.

Update: After closing and re-opening the IDE, though, this problem went away and I was able to build Encodo.Core as well. Shaky, but at least it works now.

Unfortunately, working with this IDE remained difficult. It stumbled again on the second project that I changed to .NET Standard. Encodo.Core and Encodo.Expressions both have the same framework property in their project files -- <TargetFramework>netstandard2.0</TargetFramework> -- but, as you can see in the screenshot to the left, both are identified as .NETStandard.Library, yet one has version 2.0.0-preview1-25301-01 and the other has version 1.6.1. I have no idea where the second version number is coming from -- it looks like this IDE is mashing up the .NET Framework and .NET Standard versions. Not quite ready for primetime.

Also, the application icon is mysteriously the bog-standard MacOS-app icon instead of something more...Visual Studio-y.

JetBrains Rider EAP (April 27th)

JetBrains Rider built the assembly without complaint, just as dotnet build did on the command line. Rider didn't stumble as hard as Visual Studio for Mac: it had no problems building projects after the framework had changed. However, it wasn't always so easy to figure out what to do to get the framework downloaded and installed. Rider still has a bit of a way to go before I would make it my main IDE.

I also noticed that, while Rider's project/dependencies view accurately reflects .NET Standard projects, the "project properties" dialog shows the framework version as just "2.0". The list of version numbers makes this look like I'm targeting .NET Framework 2.0.

Additionally, Rider's error messages in the build console are almost always truncated. The image to the right is of the IDE trying to inform me that Encodo.Logging (which was still targeting .NET Framework 4.5) cannot reference Encodo.Core (which targets .NET Standard 2.0). If you copy/paste the message into an editor, you can see that's what it says.1

Visual Studio Code

I don't really know how to get Visual Studio Code to do much more than syntax-highlight my code and expose a terminal from which I can manually call dotnet build. They write about Roslyn integration where "[o]n startup the best matching projects are loaded automatically but you can also choose your projects manually". While I saw that the solution was loaded and recognized, I never saw any error-highlighting in VS Code. The documentation does say that it's "optimized for cross-platform .NET Core development" and my projects targeted .NET Standard so maybe that was the problem. At any rate, I didn't put much time into VS Code yet.

Next Steps

  1. Convert all Quino projects to use the new project-file format and target .NET Framework. Once that's all running with the new project-file format, it will be much easier to start targeting .NET Standard with certain parts of the framework
  2. Change the target for all projects to .NET Framework 4.6.1 to ensure compatibility with .NET Standard once I start converting projects.
  3. Convert projects to .NET Standard wherever possible. As stated above, Encodo.Core already works and there are only minor adjustments needed to be able to compile Encodo.Expressions and Quino.Meta.
  4. Continue with conversion until I can compile Quino.Schema, Quino.Data.PostgreSql, Encodo.Parsers.Antlr and Quino.Web. With this core, we'd be able to run the WebAPI server we're building for a big customer on a Mac or a Linux box.
  5. Given this proof-of-concept, a next step would be to deploy as an OWIN server to Linux on Amazon and finally see a Quino-based application running on a much leaner OS/Web-server stack than the current Windows/IIS one.

I'll keep you posted.2

  1. Encodo.Expressions.AssemblyInfo.cs(14, 12): [CS0579] Duplicate 'System.Reflection.AssemblyCompanyAttribute' attribute Microsoft.NET.Sdk.Common.targets(77, 5): [null] Project '/Users/marco/Projects/Encodo/quino/src/libraries/Encodo.Core/Encodo.Core.csproj' targets '.NETStandard,Version=v2.0'. It cannot be referenced by a project that targets '.NETFramework,Version=v4.5'.

  2. Update: I investigated a bit farther and I'm having trouble using NETStandard2.0 from NETFramework462 (the Mono version on Mac). I was pretty sure that's how it's supposed to work, but NETFramework (any version) doesn't seem to want to play with NETStandard right now. Visual Studio for Mac tells me that Encodo.Core (NETStandard2.0) cannot be used from Encodo.Expressions (Net462), which doesn't seem right, but I'm not going to fight with it on this machine anymore. I'm going to try it on a fully updated Windows box next -- just to remove the Mono/Mac/NETCore/Visual Studio for Mac factors from the equation. Once I've got things running on Windows, I'll prepare a NETStandard project-only solution that I'll try on the Mac.

A tuple-inference bug in the Swift 3.0.1 compiler

I encountered some curious behavior while writing a service-locator interface (protocol) in Swift. I've reproduced the issue in a stripped-down playground1 and am almost certain I've found a bug in the Swift 3.0.1 compiler included in Xcode 8.2.1.

A Simple, Generic Function

We'll start off with a very basic example, shown below.


The example above shows a very simple function, generic in its single parameter with a required argument label a:. As expected, the compiler determines the generic type T to be Int.

I'm not a big fan of argument labels for such simple functions, so I like to use the _ to free the caller from writing the label, as shown below.


As you can see, the result of calling the function is unchanged.

Or Maybe Not So Simple?

Let's try calling the function with some other combinations of parameters and see what happens.


If you're coming from another programming language, it might be quite surprising that the Swift compiler happily compiles every single one of these examples. Let's take them one at a time.

  • int: This works as expected
  • odd: This is the call that I experienced in my original code. At the time, I was utterly mystified how Swift -- a supposedly very strictly typed language -- allowed me to call a function with a single parameter with two parameters. This example's output makes it more obvious what's going on here: Swift interpreted the two parameters as a Tuple. Is that correct, though? Are the parentheses allowed to serve double-duty both as part of the function-call expression and as part of the tuple expression?
  • tuple: With two sets of parentheses, it's clear that the compiler interprets T as tuple (Int, Int).
  • labels: The issue with double-duty parentheses isn't limited to anonymous tuples. The compiler treats what looks like two labeled function-call parameters as a tuple with two Ints labeled a: and b:.
  • nestedTuple: The compiler seems to be playing fast and loose with parentheses inside of a function call. The compiler sees the same type for the parameter with one, two and three sets of parentheses.2 I would have expected the type to be ((Int, Int)) instead.
  • complexTuple: As with tuple, the compiler interprets the type for this call correctly.

Narrowing Down the Issue

The issue with double-duty parentheses seems to be limited to function calls without argument labels. When I changed the function definition to require a label, the compiler choked on all of the calls, as expected. To fix the problem, I added the argument label for each call and you can see the results below.


  • int: This works as expected
  • odd: With an argument label, instead of inferring the tuple type (Int, Int), the compiler correctly binds the label to the first parameter 1. The second parameter 2 is marked as an error.
  • tuple: With two sets of parentheses, it's clear that the compiler interprets T as tuple (Int, Int).
  • labels: This example behaves the same as odd, with the second parameter b: 2 flagged as an error.
  • nestedTuple: This example works the same as tuple, with the compiler ignoring the extra set of parentheses, as it did without an argument label.
  • complexTuple: As with tuple, the compiler interprets the type for this call correctly.

Swift Grammar

I claimed above that I was pretty sure that we're looking at a compiler bug here. I took a closer look at the productions for tuples and functions defined in The Swift Programming Language (Swift 3.0.1) manual available from Apple.

First, let's look at tuples:


As expected, a tuple expression is created by surrounding zero or more comma-separated expressions (with optional identifiers) in parentheses. I don't see anything about folding parentheses in the grammar, so it's unclear why (((1))) produces the same type as (1). Using parentheses makes it a bit difficult to see what's going on with the types, so I'm going to translate to C# notation.

  • () => empty tuple3
  • (1) => Tuple<int>
  • ((1)) => Tuple<Tuple<int>>
  • ...and so on.

This seems to be a separate issue from the second, but opposite, problem: instead of ignoring parentheses, the compiler allows one set of parentheses to simultaneously denote the argument clause of a single-arity function call and an argument of type Tuple encompassing all parameters.

A look at the grammar of a function call shows that the parentheses are required.


Nowhere did I find anything in the grammar that would allow the kind of folding I observed in the compiler, as shown in the examples above. I'm honestly not sure how that would be indicated in grammar notation.


Given how surprising the result is, I can't imagine this is anything but a bug. Even if it can be shown that the Swift compiler is correctly interpreting these cases, it's confusing that the type-inference is different with and without labels.

func test<T>(_ a: T) -> String {
  return String(describing: type(of: T.self))
}

var int = test(1)
var odd = test(1, 2)
var tuple = test((1, 2))
var labels = test(a: 1, b: 2)
var nestedTuple = test((((((1, 2))))))
var complexTuple = test((1, (2, 3)))

  1. The X-Code playground is a very decent REPL for this kind of example. Here's the code I used, if you want to play around on your own.

  2. I didn't include the examples, but the type is unchanged with four, five and six sets of parentheses. The compiler treats them as semantically irrelevant, though the Swift grammar doesn't allow for this, as far as I could tell from the BNF in the official manual.

  3. This is apparently legal in Swift, but I can't divine its purpose in an actual program.

Two more presentations: Web tools & Quino upgrade

Check out two new talks on our web site:

Networking Event: How Encodo builds web applications

At our last networking event, Urs presented our latest tech stack. We've been working productively with this stack for most of this year and feel we've finally stabilized on something we can use for a while. Urs discusses the technologies and libraries (TypeScript, Less, React, MobX) as well as tools (Visual Studio Code, WebStorm).

Quino: from 1.13 to 4.x

Since Quino 1.13 came out in December of 2014, we've come a long way. This presentation shows just how far we've come and provides customers with information about the many, many improvements as well as a migration path.

Thoughts on .NET Standard 2.0

Microsoft recently published a long blog article Introducing .NET Standard. The author Immo Landwerth appeared on a weekly videocast called The week in .NET to discuss and elaborate. I distilled all of this information into a presentation for Encodo's programmers and published it to our web site, TechTalk: .NET Standard 2.0. I hope it helps!

Also, Sebastian has taken a Tech Talk that he did for a networking event earlier this year, Code Review Best Practices, on the road to Germany as Die Wahrheit über Code Reviews: So klappt's!

v3.1: New metadata builder API

The summary below describes major new features, items of note and breaking changes. The full list of issues is also available for those with access to the Encodo issue tracker.


This release is a "bridge" release that has the entire new Metadata API as well as the older version, which is marked as obsolete. It is intended that projects upgrade to this version only temporarily in order to more easily migrate to the 4.0 Metadata API. At that point, projects should immediately upgrade to Quino 4.0, from which all obsolete methods have been removed. Once 4.0 is available, there will be no more bug-fix releases for this release.

Metadata construction

  • Removed MetaId/Guid parameters from all metadata-building APIs.
  • Removed all dependencies on and requirements for the MetaBuilder; made MetaBuilder obsolete.
  • Drastically reduced the surface of the MetadataBuilder and base classes and improved dependency resolution.
  • Replaced AddClassWithDefaultPrimaryKey("A") with AddClass("A").AddId().
  • Added an AddForeignKeys() step to builders and AddForeignKey() APIs.
  • Moved many of the builder-related extension methods from Quino.Meta to Quino.Builders.
  • Added an API to make it easier to add paths: AddPath(classA.FromOne(), classTwo.ToMany("OneId")).
  • Fixed generation of extension classes (e.g. PunchclockUser).
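To make the change concrete, here is a short sketch of the new builder style. It uses only the calls named in the list above; the model and property names are hypothetical, and the chained return values are assumptions, not a definitive rendering of the Quino API:

```csharp
// Sketch of the new metadata-builder style (hypothetical model names).
// Previously: builder.AddClassWithDefaultPrimaryKey("Company")

var company = builder.AddClass("Company").AddId();
var person = builder.AddClass("Person").AddId();

// Relations are now expressed as a path between the two endpoints:
builder.AddPath(company.FromOne(), person.ToMany("CompanyId"));
```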

Data & Schema

  • Made all GenericObject constructors without an IDataSession obsolete.
  • Removed the last reference to the GlobalContext from the data driver.
  • Improved the flexibility of the caching subsystem and added a default global/immutable-data cache (in addition to the session-based cache).
  • Fixed schema migration for SQL Server: not-null constraints are now properly added, and trigger migration works with pure SQL statements and has been drastically simplified.


  • Added BootstrapServices to the application to clearly delineate which objects are needed during the bootstrap phase of configuration and startup.
  • Re-added all satellite assemblies for all Quino NuGet packages (de-CH translations).


  • Replaced all usages of the DevExpress TreeList with GridControl/GridView.
  • Added many WPF controls and an application implementation with a Sandbox.WPF project.

Breaking changes

  • PersistentEventHandlerAspect has been renamed to PersistentEventHandlerAspectBase


The next step is to bring out the 4.0 release, which will include the following features:

  • Remove all currently obsolete code.
  • Reshape the metadata API to be more task-based and to significantly reduce the surface revealed to the application developer. This includes a drastic reduction of the extension methods in Quino.Meta.
  • Finish the implementation of and support for layouts everywhere (specifically as they are used for titles).
  • Split Quino.Data.Backend out of Quino.Data to reduce the surface exposed to application developers.
  • Deprecate IMetaReadable, IMetaWritable and IPersistable and replace them with a more appropriate, slimmer API.

Why you shouldn't use Bootstrap when you have a styleguide

A customer asked us to apply a visual style guide (VSG) to a Bootstrap-based application. Since we have a lot of experience applying style guides to web applications, and with styling in general, we accepted the job and began evaluating the details.

Which version of Bootstrap to use

The most recent stable version of Bootstrap is 3.3.6. However, the Bootstrap website announces that Bootstrap 4 "is coming". Bootstrap 4 is currently in alpha, and the last blog post is from December 2015, almost half a year ago. It is also not clear when version 4 will finally be available and stable, so we had to use the older Bootstrap 3 for this project.

But even here, there is some obscurity: Bootstrap was initially developed with LESS, but for some reason the team decided to switch to SASS. Even though we prefer LESS at Encodo, we decided to use SASS for this project so that we could upgrade to Bootstrap 4 more easily once it's available. There is an official SASS version of Bootstrap, which we used as the base for this project.

How to customize Bootstrap

Bootstrap is a GUI library that is intended to be as simple as possible for the consuming developer to use. Unfortunately, this does not mean that it is also simple to create a theme for it or to modify existing components.

There is a customization section on the Bootstrap website that allows you to select the components you need and change some basic things like colors and a few other options. This might be fine if you just want to use Bootstrap with your own colors, but since we had a style guide with a layout quite different from Bootstrap's, we could not use this option.

So we decided to clone the entire Bootstrap library, make our changes and then build our custom Bootstrap version. This makes it possible to add some custom components and change the appearance of existing elements.
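For comparison, the less invasive route with the SASS port is to override Bootstrap's variables before importing the library. A minimal sketch, assuming the standard bootstrap-sass setup and Bootstrap 3 variable names (the color value is hypothetical); this only goes as far as the variables allow, which is why we cloned the library instead:

```scss
// Sketch: override Bootstrap 3 variables before importing the library
// (assumes the bootstrap-sass package; names follow _variables.scss).
$brand-primary:      #0a6e8a;       // hypothetical VSG primary color
$btn-primary-bg:     $brand-primary;
$border-radius-base: 0;             // e.g. a VSG that uses square corners

@import "bootstrap";                // pulls in the (now customized) framework
```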

Problems we ran into

Bootstrap provides support for all kinds of browsers, including Internet Explorer down to version 8. While this is nice for developing an application that runs anywhere, it makes the SASS styling code very hard to read and edit. It also means you cannot use modern technologies such as Flexbox, which makes styling a lot easier and is the basis of every layout we have created in the recent past.
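As an illustration of what we were missing, here is the kind of Flexbox row we normally build. This is a sketch with hypothetical class names, not Bootstrap code, and it is not usable in the old IE versions that Bootstrap 3 still supports:

```scss
// Sketch: an equal-width three-column row with Flexbox, with none of
// the float/clearfix machinery a Bootstrap 3 grid needs.
.vsg-row {
  display: flex;

  > .vsg-col {
    flex: 1 1 0;      // every column takes an equal share of the width
    min-width: 0;     // let content shrink instead of overflowing the row
  }
}
```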

Another important point is that the components are not really modular. For example, the styles for the button are defined in one file, but there are many other places with styles that modify the button's appearance based on its container.

Also, the styles are defined "inside-out", which means that the size of a container is determined by its content. Style guides normally work the other way around. All of these points make it hard to change the structure of the page without affecting everything else, especially when the original Bootstrap HTML markup does not match the needs of the desired layout.
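The "outside-in" approach that a style guide typically prescribes looks more like the following sketch (hypothetical class name and dimensions):

```scss
// Sketch: outside-in sizing. The container fixes the dimensions and the
// content adapts, instead of the content growing the container.
.vsg-sidebar {
  width: 280px;       // the width comes from the style guide, not the content
  height: 100%;
  overflow-y: auto;   // long menus scroll rather than stretch the page
}
```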

To add to our struggles, there is also the complex build and documentation system used in the Bootstrap project. It may be fitting that Bootstrap itself is used for its own documentation, but I cannot understand why there is another CSS file with 1600 lines that changes things specifically for the documentation. Of course, this messes up our painstakingly crafted Bootstrap styles again. In the end, we had to remove this file from our demo site, which broke styling for some documentation-specific features (like the sidebar menu).

Another point of concern is that Bootstrap uses jQuery plugins for controls that require JavaScript interaction. This may be fine for simple websites that just need some basic interaction, but it is counterproductive for real web applications, because jQuery's event handling can interfere with web-application frameworks such as React or Angular.

When to use Bootstrap

I do not think that Bootstrap is a bad library, but it is not really suitable for projects like this one. The main use case of Bootstrap is to provide a good-looking layout for a website with little effort and little prior knowledge required. If you just want to put some information on the web and do not care exactly how it looks, as long as it looks good, then Bootstrap is a good option for you.

If you'd like more information about this, then please feel free to contact us!