The summary below describes major new features, items of note and breaking changes. The full list of issues is in the release notes and is available to those with access to the Encodo issue tracker.
tl;dr: Applications might have to include the System.Tuple NuGet package in some assemblies.
This release adds an overload for creating delegate expressions that returns a Tuple (object, bool). This improvement allows applications to more easily specify a lambda that returns a value and a flag indicating whether the value is valid.
There are several overloads available for creating a DelegateExpression. The simplest of these assumes that a value can be calculated and is appropriate for constant values or values whose calculation does not depend on the IExpressionContext passed to the expression.
However, many (if not most) delegates should indicate whether a value can be calculated by returning true or false and setting an out object value parameter instead. This is still the standard API, but 4.1.5 introduces an overload that supports tuples, which makes it easier to call.
In 4.1.4, an application had the following two choices for using the "tryGetValue" variant of the API.
Elements.Classes.A.AddCalculatedProperty("FullText", f => f.CreateDelegate(GetFullText));
private bool GetFullText(IExpressionContext context, out object value)
{
var obj = context.GetInstance<IMetaObject>();
if (obj != null)
{
value = obj.ToString();
return true;
}
value = null;
return false;
}
If the application wanted to inline the lambda, the types had to be explicitly specified:
Elements.Classes.A.AddCalculatedProperty("FullText", f => f.CreateDelegate((IExpressionContext context, out object value) => {
var obj = context.GetInstance<IMetaObject>();
if (obj != null)
{
value = obj.ToString();
return true;
}
value = null;
return false;
}));
The overload that expects a tuple makes this a bit simpler:
Elements.Classes.A.AddCalculatedProperty("FullText", f => f.CreateDelegate(context => {
var obj = context.GetInstance<IMetaObject>();
return obj != null ? (obj.ToString(), true) : (null, false);
}));
Previously, a DelegateExpression would always return false for a call to TryGetValue() made with an empty context. This has been changed so that the DelegateExpression no longer has any logic of its own. This means, though, that an application must be a little more careful to properly return false when it is missing the information that it needs to calculate the value of an expression.
All but the lowest-level overloads and helper methods are unaffected by this change. An application that uses factory.CreateDelegate<T>() will not be affected. Only calls to new DelegateExpression() need to be examined on upgrade. It is strongly urged to convert direct calls to the constructor to use the IMetaExpressionFactory instead.
Imagine if an application used a delegate for a calculated expression as shown below.
Elements.Classes.Person.AddCalculatedProperty("ActiveUserCount", MetaType.Boolean, new DelegateExpression(GetActiveUserCount));
// ...
private object GetActiveUserCount(IExpressionContext context)
{
return context.GetInstance<Person>().Company.ActiveUsers;
}
When Quino processes the model during startup, expressions are evaluated in order to determine whether they can be executed without a context (caching, optimization, etc.). This application was safe in 4.1.4 because Quino automatically ignored an empty context and never called this code.
However, as of 4.1.5, an application that calls this low-level version will have to handle the case of an empty or insufficient context on its own.
It is highly recommended to move to using the expression factory and typed arguments instead, as shown below:
Elements.Classes.Person.AddCalculatedProperty<Person>("ActiveUserCount", MetaType.Boolean, f => f.CreateDelegate(GetActiveUserCount));
// ...
private object GetActiveUserCount(Person person)
{
return person.Company.ActiveUsers;
}
If you just want to add your own handling for empty contexts, you can do the following (note that you have to change the signature of GetActiveUserCount):
Elements.Classes.Person.AddCalculatedProperty("ActiveUserCount", MetaType.Boolean, new DelegateExpression(GetActiveUserCount));
// ...
private bool GetActiveUserCount(IExpressionContext context, out object value)
{
var person = context.GetInstanceOrDefault<Person>();
if (person == null)
{
value = null;
return false;
}
value = person.Company.ActiveUsers;
return true;
}
The summary below describes major new features, items of note and breaking changes. The full list of issues is in the release notes and is available to those with access to the Encodo issue tracker.
ConfigureWhen() to apply application configuration based on configuration data (QNO-5688)
The following is a complete list of all Quino release notes, from newest to oldest. See the roadmap for future releases.
At Encodo, we're much more cautious about installing massive Windows updates. Since a couple of us (including me) have started experiencing memory leaks in the previous version, we installed it on select machines.
The memory leak we were experiencing was only on a couple of machines. It manifested as Task Manager reporting a very high RAM-usage percentage and, occasionally, Windows popping up a message box asking to close applications. Also, Win + S no longer responded on the first try (i.e. the Windows shell became only partially responsive).
Investigating with the RAMMap tool from Microsoft revealed a large amount (8GB) of "Process Private" RAM that couldn't all be accounted for in Task Manager or the Resource Monitor.
Initial results are better and seem to indicate normal behavior: if an application that uses a lot of RAM (e.g. Visual Studio) is closed, the reported RAM usage drops correspondingly.
NB: this is not at all a scientific conclusion. We applied the update, and memory management on a previously misbehaving machine is better. That's all.
The Task Manager has two immediately obvious improvements:
One drawback, though, is that you can no longer see which solution is open in which instance of Visual Studio.
Microsoft, as usual, has re-enabled settings that you may have turned off. They did this with the mind-boggling feature called "Aero Shake": when you grab a window's title bar and shake it with the mouse, all other windows are minimized. At first, just the feature was bizarre; now, it's Microsoft's fixation with re-enabling it that is truly worrying.
We've disabled it in the group policies on our domain controller so our users never have to suffer again.
We have not found any drawbacks to this update with our software and development tools and will roll it out to the rest of our users immediately.
The summary below describes major new features, items of note and breaking changes. The full list of issues is in the release notes and is available to those with access to the Encodo issue tracker.
ExpressionConstants.Now (QNO-5720)
Here at Encodo, we host our services in our own infrastructure which, after 12 years, has grown quite large. But this article is about our migration away from VMware.
So, here's how we proceeded:
We set up a test environment as close as possible to the new one before buying the new server, in order to test everything. This was our first contact with software RAIDs and their monitoring capabilities.
Installation time, here it goes:
We have our three disks for our RAID5. We do not have a lot of files to store, so we use 1TB disks, which should still be OK (see Why RAID 5 stops working in 2009 as to why you shouldn't do RAID5 with larger disks anymore).
We set up Proxmox on a 256GB SSD. Our production server will have 4x 1TB SSDs, one of which is a spare. Note down the serial numbers of all your disks. I don't care how you do it -- take pictures or whatever -- but if you ever need to know which slot contains which disk, or whether the failing disk is actually in that slot, having solid documentation really helps a ton.
You should check your disks for errors beforehand! Do a full smartctl check and find out which disk is which. This is key: we even took pictures prior to inserting them into the server (and put them in our wiki) so we have the serial number available for each slot.
See which disk is which:
for x in {a..e}; do smartctl -a /dev/sd$x | grep 'Serial' | xargs echo "/dev/sd$x: "; done
Start a long test for each disk:
for x in {a..e}; do smartctl -t long /dev/sd$x; done
See SMART tests with smartctl for more detailed information.
We'll assume the following hard disk layout:
/dev/sda = System Disk (Proxmox installation)
/dev/sdb = RAID 5, 1
/dev/sdc = RAID 5, 2
/dev/sdd = RAID 5, 3
/dev/sde = RAID 5 Spare disk
/dev/sdf = RAID 1, 1
/dev/sdg = RAID 1, 2
/dev/sdh = Temporary disk for migration
When the check is done (it usually takes a few hours), you can verify the test result with
smartctl -a /dev/sdX
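Once the long tests have finished, scanning each disk's report for the usual status line saves reading five full outputs. A minimal sketch; the exact "Completed without error" wording is an assumption based on smartctl's typical self-test log output:

```shell
# Summarize SMART self-test results: selftest_status reads `smartctl -a`
# output on stdin and prints PASS if the last self-test completed cleanly.
selftest_status() {
  if grep -q 'Completed without error'; then
    echo "PASS"
  else
    echo "CHECK MANUALLY"
  fi
}

# Example: check all five disks in one go.
# for x in {a..e}; do
#   smartctl -a /dev/sd$x | selftest_status | xargs echo "/dev/sd$x:"
# done
```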
Now that we know our disks are OK, we can proceed creating the software RAID. Make sure you get the correct disks:
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
The RAID5 will start building immediately, but you can also start using it right away. Since I had other things on my plate, I waited for it to finish.
Add the spare disk (if you have one) and export the configuration to the config:
mdadm --add /dev/md0 /dev/sde
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
Edit the email address in /etc/mdadm/mdadm.conf to a valid mail address within your network and test it via
mdadm --monitor --scan --test -1
Once you know that your monitoring mails come through, add active monitoring for the raid device:
mdadm --monitor --daemonise --mail=valid@domain.com --delay=1800 /dev/md0
To finish up monitoring, it's important to read mismatch_cnt from /sys/block/md0/md/mismatch_cnt periodically to make sure the hardware is OK. We use our very old Nagios installation for this and got a working script for the check from the article Mdadm checkarray by Thomas Krenn.
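The check boils down to reading that counter and mapping it to a Nagios exit code. A minimal sketch of the idea (the real script linked above is more thorough; the counter path is passed as a parameter so it can point at /sys/block/md0/md/mismatch_cnt):

```shell
# Nagios-style check: a non-zero mismatch_cnt on a RAID device is worth
# an alert. Prints a status line and returns 0 (OK) or 2 (CRITICAL),
# following the Nagios plugin exit-code convention.
check_mismatch_cnt() {
  cnt=$(cat "$1")
  if [ "$cnt" -eq 0 ]; then
    echo "OK - mismatch_cnt is 0"
    return 0
  else
    echo "CRITICAL - mismatch_cnt is $cnt"
    return 2
  fi
}

# Usage: check_mismatch_cnt /sys/block/md0/md/mismatch_cnt
```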
Back to building! We now need to make the created storage available to Proxmox. To do this, we create a PV, a VG and an LV thin pool. We use 90% of the storage for the thin pool since we need to migrate other devices as well, and 10% is enough for us to migrate 2 VMs at a time. We format the migration volume with XFS:
pvcreate /dev/md0
vgcreate raid5vg /dev/md0
lvcreate -l 90%FREE -T raid5vg/raid5lv
lvcreate -n migrationlv -l +100%FREE raid5vg
mkfs.xfs /dev/mapper/raid5vg-migrationlv
Mount the formatted migration logical volume (if you want to reboot, add it to fstab obviously):
mkdir /mnt/migration
mount /dev/mapper/raid5vg-migrationlv /mnt/migration
If you don't have the disk space to migrate the VMs like this, add an additional disk (/dev/sdh in our case). Create a new partition on it with
fdisk /dev/sdh
n
Accept all the defaults for maximum size. Then format the partition with XFS and mount it:
mkfs.xfs /dev/sdh1
mkdir /mnt/largemigration
mount /dev/sdh1 /mnt/largemigration
Now you can go to your Proxmox installation and add the thin pool (and your largemigration partition if you have it) under Datacenter -> Storage -> Add. Give it an ID (I called it raid5 because I'm very creative), Volume Group: raid5vg, Thin Pool: raid5lv.
Now that the storage is in place, we are all set to create our VMs and do the migration. Here's the process we used - there are probably more elegant and efficient ways to do it, but this way works for both our Ubuntu installations and our Windows VMs:
vdiskmanager-windows.exe -r vmname.vmdk -t 0 vmname-pve.vmdk
qemu-img convert -f vmdk /mnt/migration/vmname-pve.vmdk -O qcow2 /mnt/migration/vmname-pve.qcow2
mv /mnt/migration/vmname-pve.qcow2 /mnt/migration/images/<vm-id>/vm-<vm-id>-disk-1.qcow2
That's it. Now repeat these last steps for all the VMs - in our case around 20, which is just barely manageable without any automation. If you have more VMs, you could automate more things, like copying the VMs directly from ESXi to Proxmox via scp and doing the initial conversion there.
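As a sketch of what such automation could look like: the per-VM steps above only differ in the VM name and the Proxmox VM id, so they fold into a small loop. The helper names and the id:name pairs are assumptions for illustration; the target path follows the vm-<vm-id>-disk-1.qcow2 convention shown above.

```shell
# Build the Proxmox target path for a converted disk image.
target_path() {
  # $1: migration mount point, $2: Proxmox VM id
  echo "$1/images/$2/vm-$2-disk-1.qcow2"
}

# Convert one VMware disk to qcow2 at the right location (sketch).
convert_vm() {
  # $1: source vmdk, $2: Proxmox VM id, $3: migration mount point
  qemu-img convert -f vmdk "$1" -O qcow2 "$(target_path "$3" "$2")"
}

# Hypothetical batch run over an id:name list (adjust to your VMs).
# for pair in 100:webserver 101:buildagent; do
#   id=${pair%%:*}; name=${pair##*:}
#   convert_vm "/mnt/migration/$name-pve.vmdk" "$id" /mnt/migration
# done
```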
We initially installed Proxmox 4.4, then upgraded to 5.0 during the migration.
You can get the vdiskmanager from Repairing a virtual disk in Fusion 3.1 and Workstation 7.1 (1023856), under "Attachments".
The summary below describes major new features, items of note and breaking changes. The full list of issues is in the release notes below and is available to those with access to the Encodo issue tracker.
Most of the existing metadata-building API has been deprecated and replaced with a fluent API that is consistent and highly extensible.
- IMetaProperty.Primary (QNO-5400)
- Made IMetaClass.Relations and IMetaClass.Actions auto-managed and read-only sequences (QNO-5402, QNO-5400)
- Renamed IMetaBase.Identifier to IMetaBase.Name (QNO-5412)
- Removed the MetaBuilder along with associated metadata builders and extension methods.
- Made GlobalContext obsolete (QNO-5313)
- Made IDataSession sticky, by default (QNO-5018)
- Replaced IMetaReadable and IMetaWritable with IDataObject (QNO-5429, QNO-5238, QNO-4737, QNO-3043)
- IDataSession.ActiveDataLanguages (QNO-5476)
- GenericObject (QNO-5583)
- The Encodo.Quino.Builders.Extensions namespace has been removed. All members were moved to Encodo.Quino.Meta or Encodo.Quino.Builders instead.
- Quino.Meta.Standard no longer exists and may have to be removed manually if NuGet does not remove it for you.
- Moved CreateModel() to MetaBuilderBasedModelBuilderBase
- MetaBuilderBasedModelBuilderBase: GetSubModules() and GetModules() now return IMetaModule instead of IModuleAspect
- AddSort(), AddSortOrderProperty(), AddEnumeratedClass() and AddValueListProperty() all expect a parameter of type IMetaExpressionFactory or IExpressionConstants now.
- IDataSessionAwareList is used instead of IMetaAwareList
- DataList members have been made private
- GenericObject.DoSetDedicatedSession() is no longer called or overridable
- Authenticators based on AuthenticatorBase no longer accept an IApplication as a constructor parameter. Instead, use the application or session to create the authenticator with GetInstance<TService>(). E.g. if before you created a TokenAuthenticator with the call new TokenAuthenticator(Application), you should now create the TokenAuthenticator with Application.GetInstance<TokenAuthenticator>(). You are free also to call the new constructor directly, but construction using the IOC is strongly recommended.
- The constructor of DataSession has changed; this shouldn't cause too many problems, as applications should be using the IDataSessionFactory to construct instances anyway.
- Use the IDataGenerator interface instead of the DataGenerator base class.
- The contents of ISchemaDifference have changed, so the output of a migration plan will also be different. Software that depended on scraping the plan to determine outcomes may no longer work.
- A NULL-constraint violation will be thrown by the database. Existing applications will have to be updated: either set a default value in the metadata or set the property value before saving objects.
- SetCodeGenerated() on the multi-language or value-list property
- LanguageTools.GetCaption() no longer defaults to GetDescription() because this is hardly ever what you wanted to happen.
- CaptionExtensions are now in CaptionTools and are no longer extension methods on object.
- ReflectionExtensions are now in ReflectionTools and are also no longer extension methods on object.
- Operation<> with new method signature

Some Windows-specific functionality has been moved to new assemblies. These assemblies are automatically included for Winform and WPF applications (as before). Applications that want to use the Windows-specific functionality will have to reference the following packages:

- For WindowsIdentity-based code, use the Encodo.Connections.Windows package and call UseWindowsConnectionServices().
- For ApplicationSettingsBase support, use the Encodo.Application.Windows package and call UseWindowsApplication().
- For Windows-specific security, use the Encodo.Security.Windows package and call UseWindowsSecurityServices().