Java 8

This article discusses and compares the initial version of Java 8 and C# (as of .NET 4.5.1). I have not used Java 8 and I have not tested that any of the examples -- Java or C# -- even compile, but they should be pretty close to valid.

Java 8 has finally been released and -- drum roll, please -- it has closures/lambdas, as promised! I would be greeting this as champagne-cork-popping news if I were still a Java programmer.1 As an ex-Java developer, I greet this news more with an ambivalent shrug than with any overarching joy. It's a sunny morning and I'm in a good mood, so I'm able to suppress what would be a more than appropriate comment: "it's about time".

Since I'm a C# programmer, I'm more interested in peering over the fence at the pile of goodies that Java just received for its eighth birthday to see whether it got something "what I ain't got". I found a concise list of new features in the article Will Java 8 Kill Scala? by Ahmed Soliman and was distraught/pleased2 to discover that Java had in fact gotten two presents that C# doesn't have.

As you'll see, these two features aren't huge and the lack of them doesn't significantly impact design or expressiveness, but you know how jealousy works:

Jealousy doesn't care.

Jealousy is.

I'm sure I'll get over it, but it will take time.3

Default methods and static interface methods

Java 8 introduces support for static methods on interfaces as well as default methods that, taken together, amount to functionality that is more or less what extension methods bring to C#.

In Java 8, you can define static methods on an interface, which is nice, but the feature becomes especially useful when combined with default methods, marked with the keyword default. As defined in Default Methods:

Default methods enable you to add new functionality to the interfaces of your libraries and ensure binary compatibility with code written for older versions of those interfaces.

In Java, you no longer have to worry that adding a method to an interface will break implementations of that interface in other JAR files that have not yet been recompiled against the new version of the interface. You can avoid that breakage by adding a default implementation for your method. This applies, of course, only to methods for which a default implementation is possible.

The page includes an example but it's relatively obvious what it looks like:

**public interface** ITransformer
{
  **string** Adjust(**string** value);
  **string** NewAdjust(**string** value)
  {
    **return** value.Replace(' ', '\t');
  }
}

How do these compare with extension methods in C#?

Extension methods are nice because they allow you to quasi-add methods to an interface without requiring an implementor to actually implement them. My rule of thumb is that any method that can be defined purely in terms of the public API of an interface should be defined as an extension method rather than added to the interface.

Java's default methods are a twist on this concept that addresses a limitation of extension methods. What is that limitation? That the method definition in the extension method can't be overridden by the actual implementation behind the interface. That is, the default implementation can be expressed purely in terms of the public interface, but perhaps a specific implementor of the interface would like to do that plus something more. Or would perhaps like to execute the extension method in a different way, but only for a specific implementation. There is no way to do this with extension methods.
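To make that limitation concrete, here's a minimal sketch of NewAdjust written as a C# extension method (TransformerExtensions is a hypothetical class name, not from any library). Dispatch is static, so no implementor of ITransformer can ever override it:

**public static class** TransformerExtensions
{
  // Defined purely in terms of the public API of ITransformer.
  **public static string** NewAdjust(**this** ITransformer transformer, **string** value)
  {
    **return** transformer.Adjust(value).Replace(' ', '\t');
  }
}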

Interface default methods in Java 8 allow you to provide a fallback implementation but also allow any class to actually implement that method and override the fallback.
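Continuing the C#-flavored example from above, a specific implementor overrides the fallback simply by providing its own implementation (TrimmingTransformer is hypothetical):

**public class** TrimmingTransformer : ITransformer
{
  **public string** Adjust(**string** value)
  {
    **return** value.Replace(' ', '\t');
  }

  // Replaces the interface's fallback, but only for this implementation;
  // every other implementor still inherits the default behavior.
  **public string** NewAdjust(**string** value)
  {
    **return** Adjust(value).Trim();
  }
}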

Functional Interfaces

Functional interfaces are a nice addition, too, and something I've wanted in C# for some time. Erik Meijer of Microsoft doesn't miss an opportunity to point out that this is a must for functional languages (he's exaggerating, but the point is taken).

Saying that a language supports functional interfaces simply means that a lambda defined in that language can be assigned to any interface with a single method whose signature matches that of the lambda.

An example in C# should make things clearer:

**public interface** ITransformer
{
  **string** Adjust(**string** value);
}

**public static class** Utility
{
  **public static void** WorkOnText(**string** text, ITransformer transformer)
  {
    // Do work
  }
}

In order to call WorkOnText() in C#, I am required to define a class that implements ITransformer. There is no way around it. However, in a language that supports functional interfaces, I could call the method with a lambda directly. The following code looks like C# but won't actually compile.

Utility.WorkOnText(
  "Hello world",
  s => s.Replace("Hello", "Goodbye cruel")
);

For completeness, let's also see how much extra code it takes to do this in C#, which has no functional interfaces.

**public class** PessimisticTransformer : ITransformer
{
  **public string** Adjust(**string** value)
  {
    **return** value.Replace("Hello", "Goodbye cruel");
  }
}

Utility.WorkOnText(
  "Hello world",
  **new** PessimisticTransformer()
);

That's quite a huge difference. It's surprising that C# hasn't gotten this functionality yet. It's hard to see what the downside is for this feature -- it doesn't seem to alter semantics.

While it is supported in Java, there are other restrictions. The signature has to match exactly. What happens if we add an optional parameter to the interface-method definition?

**public interface** ITransformer
{
  **string** Adjust(**string** value, ITransformer additional = **null**);
}

In the C# example, the class implementing the interface would have to be updated, of course, but the code at the calling location remains unchanged. With a functional interface, though, the lambda at the calling location is the implementation, so the change has to be made at every call site rather than in a single, more abstracted place.

**public class** PessimisticTransformer : ITransformer
{
  **public string** Adjust(**string** value, ITransformer additional = **null**)
  {
    **return** value.Replace("Hello", "Goodbye cruel");
  }
}

// Using a class
Utility.WorkOnText(
  "Hello world",
  **new** PessimisticTransformer()
);

// Using a functional interface
Utility.WorkOnText(
  "Hello world",
  (s, a) => s.Replace("Hello", "Goodbye cruel")
);

I would take the functional interface any day.

Java Closures

As a final note, Java 8 has finally acquired closures/lambdas4 but there is a limitation on which functions can be passed as lambdas. It turns out that the inclusion of functional interfaces is a workaround for not having first-class functions in the language.

Citing the article,

[...] you cannot pass any function as first-class to other functions, the function must be explicitly defined as lambda or using Functional Interfaces

While in C# you can assign any method with a matching signature to a lambda variable or parameter, Java requires that the method first be wrapped in a lambda or a functional interface before it can be used. This isn't a limitation on expressiveness, but it may lead to clutter.

In C# I can write the following:

**public static string** Twist(**string** value)
{
  // Enumerable.Reverse() yields an IEnumerable<char>, so we rebuild a string from it.
  **return new string**(value.Reverse().ToArray());
}

// Extension methods must be declared in a static class.
**public static string** Alter(**this string** value, Func<**string**, **string**> func)
{
  **return** func(value);
}

**public static string** ApplyTransformations(**string** value)
{
  **return** value.Alter(Twist).Alter(s => **new string**(s.Reverse().ToArray()));
}

This example shows how you can declare a Func to indicate that the parameter is a first-class function. I can pass the Twist function or I can pass an inline lambda, as shown in ApplyTransformations. However, in Java, I can't declare a Func: only functional interfaces. In order to replicate the C# example above in Java, I would do the following:

**public** String twist(String value)
{ 
  **return** new StringBuilder(value).reverse().toString();
}

**public** String alter(String value, ITransformer transformer)
{
  **return** transformer.adjust(value);
}

**public** String applyTransformations(String value)
{
  **return** alter(alter(value, s -> twist(s)), s -> new StringBuilder(s).reverse().toString());
}

Note that the Java example doesn't pass twist directly; instead, it wraps the call in a lambda so that it can be passed as a functional interface (Java 8's method-reference syntax, this::twist, would be a more concise way to do the same thing). Also, the C# example uses an extension method, which allows me to "add" methods to the class string, which is not really possible in Java.

Overall, though, while these things feel like deal-breakers to a programming-language snob5 -- especially one who has a choice as to which language to use -- Java developers can rejoice that their language has finally acquired features that both increase expressiveness and reduce clutter.6

As a bonus, as a C# developer, I find that I don't have to be so jealous after all.

Though I'd still really like me some functional interfaces.



  1. Even if I were still a Java programmer, the champagne might still stay in the bottle because adoption of the latest runtime in the Java world is extremely slow-paced. Many projects and products require a specific, older version of the JVM and preclude updating to take advantage of newer features. The .NET world naturally has similar limitations but the problem seems to be less extreme.

  2. Distraught because the features look quite interesting and useful and C# doesn't have them, and pleased because (A) I am not so immature that I can't be happy for others and (B) I know that innovation in other languages is an important driver of innovation in one's own.

  3. Totally kidding here. I'm not insane. Take my self-diagnosis with a grain of salt.

  4. I know that lambdas and closures are not by definition the same thing and that I'm not supposed to use them interchangeably. I'm trying to make sure that a C# developer who reads this article doesn't see "closure" (which is technically what a lambda in C# is, because it's capable of "closing over" or capturing variables) and fail to understand that it means "lambda".

  5. Like yours truly.

  6. Even if most of those developers won't be able to use those features for quite some time because they work on projects or products that are reluctant to upgrade.

Setting up the Lenovo T440p Laptop

This article originally appeared on earthli News and has been cross-posted here.


I recently got a new laptop and ran into a few issues while setting it up for work. There's a tl;dr at the end for the impatient.

Lenovo has finally spruced up their lineup of laptops with a series that features:

  • An actually usable and large touchpad
  • A decent and relatively sensibly laid-out keyboard
  • Very long battery life (between 6 and 9 hours, depending on use)
  • Low-power Haswell processor
  • 14-inch full-HD (1920x1080) display
  • Dual graphics cards
  • Relatively light at 2.1kg
  • Relatively small/thin form-factor
  • Solid-feeling, functional design w/latchless lid
  • Almost no stickers

I recently got one of these. Let's get it set up so that we can work.

Pop in the old SSD

Instead of setting up the hard drive that I ordered with the laptop, I'm going to transplant the SSD I have in my current laptop to the new machine. Though this maneuver no longer guarantees anguish as it would have in the old days, we'll see below that it doesn't work 100% smoothly.

As mentioned above, the case is well-designed and quite elegant. All I need is a Phillips screwdriver to take out two screws from the back and then a downward slide on the backing plate pulls off the entire bottom of the laptop.1

At any rate, I was able to easily remove the new/unwanted drive and replace it with my fully configured SSD. I replaced the backing plate, but didn't put the screws back in yet. I wasn't that confident that it would work.

My pessimism turned out to be well-founded. I booted up the machine and was greeted by the BIOS showing me a list of all of the various places that it had checked in order to find a bootable volume.

It failed to find a bootable volume anywhere.

Try again. Still nothing.

UEFI and BIOS usability

From dim memory, I recalled that there's something called UEFI for newer machines and that Windows 8 likes it and that it may have been enabled on the drive that shipped with the laptop but almost certainly isn't on my SSD.

Snooping about in the BIOS settings -- who doesn't like to do that? -- I find that UEFI is indeed enabled. I disable that setting as well as something called UEFI secure-boot and try again. I am rewarded within seconds with my Windows 8 lock screen.

I was happy to have been able to fix the problem, but was disappointed that the error messages thrown up by a very modern BIOS are still so useless. To be more precise, the utter lack of error messages or warnings or hints was disappointing.

I already have access to the BIOS, so it's not a security issue. There is nothing to be gained by hiding from me the fact that the BIOS checked a potential boot volume and failed to find a UEFI bootable sector but did find a non-UEFI one. Would it have killed them to show the list of bootable volumes with a little asterisk or warning telling me that a volume could boot were I to disable UEFI? Wouldn't that have been nice? I'm not even asking them to let me jump right to the setting, though that would be above and beyond the call of duty.

Detecting devices

At any rate, we can boot, and Windows 8, after "detecting devices" for a few seconds, was able to start up to the lock screen. Let's log in.

I have no network access.

Checking the Device Manager reveals that a good half-dozen devices could not be recognized and no drivers were installed for them.

This is pathetic. It is 2014, people. Most of the hardware in this machine is (A) very standard equipment to have on a laptop and (B) made by Intel. Is it too much to ask to have the 20GB Windows 8 default installation include generic drivers that will work with even newer devices?

The drivers don't have to be optimized; they just have to work well enough to let the user work on getting better ones. Windows is able to do this for the USB ports, for the display and for the mouse and keyboard because it would be utter failure for it not to be able to do so. It is an ongoing mystery how network access has not yet been promoted to this category of mandatory devices.

When Windows 8 is utterly incapable of setting up the network card, then there is a very big problem. A chicken-and-egg problem that can only be solved by having (A) a USB stick and (B) another computer already attached to the Internet.

Thank goodness Windows 8 was able to properly set up the drivers for the USB port or I'd have had a sense-less laptop utterly incapable of ever bootstrapping itself into usefulness.

On the bright side, the Intel network driver was only 1.8MB, it installed with a single click and it worked immediately for both the wireless and Ethernet cards. So that was very nice.

Update System

The obvious next step once I have connectivity is to run Windows Update. That works as expected and even finds some extra driver upgrades once it can actually get online.

Since this is a Lenovo laptop, there is also the Lenovo System Update, which updates more drivers, applies firmware upgrades and installs/updates some useful utilities.

At least it would do all of those things if I could start it.

That's not 100% fair. It kind of started. It's definitely running, there's an icon in the task-bar and the application is not using any CPU. When I hover the icon, it even shows me a thumbnail of a perfectly rendered main window.

Click. Nothing. The main window does not appear.

Fortunately, I am not alone. As recently as November of 2013, there were others with the same problem.2 Unfortunately, no one was able to figure out why it happens nor were there workarounds offered.

I had the sound enabled, though, and noticed that when I tried to execute a shortcut, it triggered an alert. And the System Update application seemed to be in the foreground -- somehow -- despite the missing main window.

Acting on a hunch, I pressed Alt + PrtSc to take a screenshot of the currently focused window. Paste into an image editor. Bingo.

[Screenshot: the Lenovo System Update main window, captured with Alt + PrtSc]

Now that I could read the text on the main window, I could figure out which keys to press. I didn't get a screenshot of the first screen, but it showed a list of available updates. I pressed the following keys to initiate the download:

  • Alt + S to "Select all"
  • Alt + N to move to the next page
  • Alt + D to "Download" (the screenshot above)

Hovering the mouse cursor over the taskbar icon revealed the following reassuring thumbnail of the main window:

[Screenshot: the taskbar thumbnail of the System Update main window]

Lucky for me, the System Update was able to get the "restart now" prompt onto the screen so that I could reboot when required. On reboot, the newest version of Lenovo System Update was able to make use of the main window once again.

Recommendations

  • If you can't boot off of a drive on a new machine, remember that UEFI might be getting in the way.
  • If you're going to replace the drive, make sure that you download the driver for your machine's network card to that hard drive so that you can at least establish connectivity and continue bootstrapping your machine back to usability.
  • Make sure you update the Lenovo System tools on the destination drive before transferring it to the new machine to avoid weird software bugs.

[Screenshot: Lenovo's "Hotkey Features Integration for Windows 8.1" download page]


  1. I'm making this sound easier than it was. I'm not so well-versed in cracking open cases anymore. I was forced to download the manual to look up how to remove the backing plate. The sliding motion would probably have been intuitive for someone more accustomed to these tasks.

  2. In my searches for help, manuals and other software, I came across the following download, offered on Lenovo's web site. You can download something called "Hotkey Features Integration for Windows 8.1" and it only needs 11.17GB of space.

Quino: efficiency, hinting and local sorting

In Quino: partially-mapped queries we took a look at how Quino seamlessly maps as much as possible to the database, while handling unmappable query components locally as efficiently as possible.

Correctness is more important than efficiency

"As efficiently as possible" can be a bit of a weasel statement. We saw that partial application of restrictions could significantly reduce the data returned. And we saw that efficient handling of that returned data could minimize the impact on both performance and memory, keeping in mind, of course, that the primary goal is correctness.

However, as we saw in the previous article, it's still entirely possible that even an optimally mapped query will result in an unacceptable memory-usage or performance penalty. In these cases, we need to be able to hint or warn the developer that something non-optimal is occurring. It would also be nice if the developer could indicate whether or not queries with such deficiencies should even be executed.

When do things slow down?

Why would this be necessary? Doesn't the developer have ultimate control over which queries are called? The developer has control over queries in business-logic code. But recall that the queries that we are using are somewhat contrived in order to keep things simple. Quino is a highly generic metadata framework: most of the queries are constructed by standard components from expressions defined in the metadata.

For example, the UI may piece together a query from various sources in order to retrieve the data for a particular view. In such cases, the developer has less direct control to "repair" queries with hand-tuning. Instead, the developer has to view the application holistically and make repairs in the metadata. This is one of many reasons why Quino has local evaluation and does not simply throw an exception for partially mapped queries, as EF does.

Debugging data queries

[Screenshot: the Statistics Viewer]

It is, in general, far better to continue working while executing a possibly sub-optimal and performance-damaging query than it is to simply crash out. Such behavior would increase the testing requirements for generated UIs considerably. Instead, the UI always works and the developer can focus on optimization and fine-tuning in the model, using tools like the Statistics Viewer, shown above.

[Screenshot: command details in the Statistics Viewer]

The statistics viewer shows all commands executed in an application, with a stack trace, messages (hints/warnings/info) and the original query and mapped SQL/remote statement for each command. The statistics are available for SQL-based data drivers, but also for remoting drivers for all payload types (including JSON).

The screenshot above shows the statistics viewer for Winforms applications; we've also integrated statistics into web applications using Glimpse, a plugin architecture for displaying extra information to web-site developers. That preview-release version will be released with Quino 1.11 at the end of March.

Sorting is all or nothing

One place where an application can run into efficiency problems is when the sort order for entities is too complex to map to the server.

If a single restriction cannot be mapped to the database, we can map all of the others and evaluate the unmappable ones locally. What happens if a single sort cannot be mapped to the database? Can we do the same thing? Again, to avoid being too abstract, let's start with an example.

var query = Session.GetQuery<Person>();
query
  .Where(Person.Fields.LastName, ExpressionOperator.StartsWith, "M")
  .OrderBy(Person.Fields.LastName)
  .OrderBy(Person.Fields.FirstName)
  .Join(Person.Relations.Company).WhereEqual(Company.Fields.Name, "IBM");

Assert.That(Session.GetList(query).Count, Is.InRange(100, 120));

Both of these sorts can be mapped to the server, so the performance and memory hit is very limited. The ORM will execute a single query and will return data for -- and create -- only about 100 objects.

Now, let's replace one of the mappable sorts with something unmappable:

var query = Session.GetQuery<Person>();
query
  .Where(Person.Fields.LastName, ExpressionOperator.StartsWith, "M")
  .OrderBy(new DelegateExpression(c => c.GetObject<Person>().FirstName))
  .OrderBy(Person.Fields.LastName)
  .Join(Person.Relations.Company).WhereEqual(Company.Fields.Name, "IBM");

Assert.That(Session.GetList(query).Count, Is.InRange(100, 120));

What's happening here? Instead of being able to map both sorts to the database, now only one can be mapped. Or can it? The primary sort can't be mapped, so there's obviously no point in mapping the secondary sort. Instead, all sorting must be applied locally.

What if we had been able to map the primary sort but not the secondary one? Then we could have the database apply the primary sort, returning the data partially ordered, and apply the remaining sort in memory...but that won't work, will it? If we only applied the secondary sort in memory, then the data would end up sorted only by that value. It turns out that, unlike restrictions, sorting is all-or-nothing: if we can't map all sorts to the database, then we have to apply them all locally.1
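To see why, here's a minimal sketch in plain LINQ-to-objects (not the Quino API), assuming rows is an IEnumerable<Person> that came back from the database already ordered by LastName, the primary sort:

// Wrong: applying only the unmappable secondary sort locally
// discards the primary ordering delivered by the database.
var wrong = rows.OrderBy(p => p.FirstName);

// Right: re-apply the entire sort chain locally, primary first.
var right = rows.OrderBy(p => p.LastName).ThenBy(p => p.FirstName);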

In this case, the damage is minimal because the restrictions can be mapped and guarantee that only about 100 objects are returned. Sorting 100 objects locally isn't likely to show up on the performance radar.

Still, sorting is a potential performance-killer: as soon as you stray from the path of standard sorting, you run the risk of either:

  • Choosing a sort that is mappable but not covered by an index on the database
  • Choosing a sort that is unmappable and losing out on index-optimized sorting on the database

In the next article, we'll discuss how we can extract slices from a result set -- using limit and offset -- and what sort of effect this can have on performance in partially mapped queries.



  1. The mapper also doesn't bother adding any ordering to the generated query if at least one ordering is unmappable. There's no point in wasting time on the database with a sort that will be re-applied locally.

Quino: partially-mapped queries

In Quino: an overview of query-mapping in the data driver we took a look at some of the basics of querying data with Quino while maintaining acceptable performance and memory usage.

Now we'll take a look at what happens with partially-mapped queries. Before explaining what those are, we need a more concrete example to work with. Here's the most-optimized query we ended up with in the previous article:

var query = Session.GetQuery<Person>();
query.Join(Person.Relations.Company).WhereEqual(Company.Fields.Name, "IBM");

Assert.That(Session.GetCount(query), Is.GreaterThanOrEqualTo(140000));

With so many entries, we'll want to trim down the list a bit more before we actually create objects. Let's choose only people whose last names start with the letter "M".1

var query = Session.GetQuery<Person>();
query
  .Where(Person.Fields.LastName, ExpressionOperator.StartsWith, "M")
  .Join(Person.Relations.Company).WhereEqual(Company.Fields.Name, "IBM");

Assert.That(Session.GetCount(query), Is.InRange(100, 120));

This is the kind of stuff that works just fine in other ORMs, like Entity Framework. Where Quino goes just a little farther is in being more forgiving when a query can be only partially mapped to the server. If you've used EF for anything beyond trivial queries, you've surely run into an exception telling you that some portion of your query could not be mapped.

Instead of throwing an exception, Quino sends what it can to the database and uses LINQ to post-process the data sent back by the database to complete the query.

Introducing unmappable expressions

Unmappable code can easily sneak in through aspects in the metadata that define filters or sorts using local methods or delegates that do not exist on the server. Instead of building a complex case, we're going to knowingly include an unmappable expression in the query.

var query = Session.GetQuery<Person>();
query
  .Where(new DelegateExpression(c => c.GetObject<Person>().LastName.StartsWith("M")))
  .Join(Person.Relations.Company).WhereEqual(Company.Fields.Name, "IBM");

Assert.That(Session.GetCount(query), Is.InRange(100, 120));

The new expression performs the same check as the previous example, but in a way that cannot be mapped to SQL.2 With our new example, we've provoked a situation where any of the following could happen:

  • The ORM could throw up its hands and declare the query unmappable, pushing the responsibility for separating mappable from unmappable onto the shoulders of the developers. As noted above, this is what EF does.
  • The ORM could determine that the query is unmappable and evaluate everything locally, retrieving only the initial set of Person objects from the server (all several million of them, if you'll recall from the previous post).
  • The ORM could map part of the query to the database, retrieving the minimal set of objects necessary in order to guarantee the correct result. This is what Quino does. The strategy works in many cases, but is not without its pitfalls.

What happens when we evaluate the query above? With partial mapping, we know that the restriction to "IBM" will be applied on the database. But we still have an additional restriction that must be applied locally. Instead of being able to get the count from the server without creating any objects, we're now forced to create objects in memory so that we can apply the local restrictions and only count the objects that match them all.

But as you'll recall from the previous article, the number of matches for "IBM" is 140,000 objects. The garbage collector just gave you a dirty look again.

Memory bubbles

There is no way to further optimize this query because of the local evaluation, but there is a way to avoid another particularly nasty issue: memory bubbles.

What is a memory bubble, you might ask? It describes what happens when your application is using n MB and then is suddenly using n + 100 MB because you created 140,000 objects all at once. Milliseconds later, the garbage collector is thrashing furiously to clean up all of these objects -- and all because you created them only in order to filter and count them. A few milliseconds after that, your application is back at n MB but the garbage collector's fur is ruffled and it's still trembling slightly from the shock.

The way to avoid this is to stream the objects through your analyzer one at a time rather than to create them all at once. Quino uses lazily-evaluated IEnumerable<T> sequences throughout the data driver specifically to prevent memory bubbles.

Streaming with IEnumerable<T> sequences

Before tackling how the Quino ORM handles the Count(), let's look at how it would return the actual objects from this query.

  • Map the query to create a SELECT statement
  • At this point, it doesn't matter whether the entire query could be mapped
  • Create an IEnumerable<T> sequence that represents the result of the mapped query
  • At this point, nothing has been executed and no objects have been returned
  • Wrap the sequence in another sequence that applies all of the "unhandled" parts of the query
  • Return that sequence as the result of executing the query
  • At this point, we still haven't actually executed anything on the database or created any objects

Right, now we have an IEnumerable<T> that represents the result set, but we haven't lit the fuse on it yet.

How do we light the fuse? Well, the most common way to do so is to call ToList() on it. What happens then?

  • The IEnumerator<T> requests an element
  • The query is executed against the database and returns an IDataReader
  • The reader requests a row and creates a Person object from that row's data
  • The wrapper that performs the local evaluation applies its filter(s) to this Person and yields it if it matches
  • If it matches the local filters, the Person is added to the list
  • Control returns to the IDataReader, which requests another row
  • Repeat until no more rows are returned from the database

Since the decision to add all objects to a list occurs all the way out at the outermost caller, it's the caller that's at fault for the memory bubble, not the driver.3 We'll see in the next section how to avoid creating a list when none is needed.
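As an illustration, here's a minimal sketch -- not the actual Quino driver code -- of what such a wrapper sequence boils down to: a lazily evaluated filter (using LINQ's All()) that never buffers anything:

**private static** IEnumerable<T> ApplyLocalFilters<T>(
  IEnumerable<T> mappedResults,
  IEnumerable<Func<T, **bool**>> localFilters)
{
  // Stream each object through the unmappable restrictions; matches are
  // yielded one at a time and nothing is buffered, so no memory bubble.
  **foreach** (**var** item **in** mappedResults)
  {
    **if** (localFilters.All(filter => filter(item)))
    {
      **yield return** item;
    }
  }
}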

Using cursors to control evaluation

If we wanted to process data but perhaps offer the user a chance to abort processing at any time, we could even get an IDataCursor<T> from the Quino ORM to control iteration ourselves.

**using** (**var** cursor = Session.CreateCursor(query))
{
  **foreach** (**var** obj **in** cursor)
  {
    // Do something with obj

    **if** (userAbortedOperation) { **break**; }
  }
}

And finally, the count query

But back to evaluating the query above. The Quino ORM handles it like this:

  • Try to map the query to create a COUNT statement
  • Notice that at least one restriction could not be mapped
  • Create a cursor to SELECT all of the objects for the same query (shown above) and count all the objects returned by that instead

So, if a count-query cannot be fully mapped to the database, the most efficient possible alternative is to execute a query that retrieves as few objects as possible (i.e. maps as much to the server as it can) and streams those objects to count them locally.
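In terms of the cursor API shown above, that streamed count amounts to something like the following sketch:

**var** count = 0;
**using** (**var** cursor = Session.CreateCursor(query))
{
  // Each iteration creates a single object, counts it and lets it go out
  // of scope again; the full result set is never held in memory at once.
  **foreach** (**var** obj **in** cursor)
  {
    count += 1;
  }
}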

Tune in next time for a look at how to exert more control with limit and offset and how those work together with partial mapping.



  1. If we were worried that the last names in our database might not necessarily be capitalized, we would use the ExpressionOperator.StartsWithCI to perform the check in a case-insensitive manner instead.

  2. If Quino had a LINQ-to-SQL provider, there's a chance that more of these delegates could be mapped, but we don't have one...and they can't.

  3. Did we still create 140,000 objects? Yes we did, but not all at once. Now, there are probably situations where it is better to create objects in batches rather than streaming them individually, but I'm confident that keeping this as the default is the right choice. If you find that your particular situation warrants different behavior, feel free to use Session.CreateCursor() to control evaluation yourself and create right-sized batches of objects to count. The ChangeAndSave() extension method does exactly that, loading objects in batches (with the size adjustable by an optional parameter) rather than one by one.

LESS vs. SASS: Variable semantics

This article originally appeared on earthli News and has been cross-posted here.


I've been using CSS since pretty much its inception. It's powerful but quite low-level and lacks support for DRY. So, I switched to generating CSS with LESS a while back. This has gone quite well and I've been pretty happy with it.

Recently, I was converting some older, theme stylesheets for earthli. A theme stylesheet provides no structural CSS, mostly setting text, background and border colors to let users choose the basic color set. This is a perfect candidate for LESS.

So I constructed a common stylesheet that referenced LESS variables that I would define in the theme stylesheet. Very basically, it looks like this:


| crimson.less |

@body_color: #800;
@import "theme-base";


| theme-base.less |

body
{
  background-color: @body_color;
}

This is just about the most basic use of LESS that even an amateur user could possibly imagine. I'm keeping it simple because I'd like to illustrate a subtlety to variables in LESS that tripped me up at first -- but for which I'm very thankful. I'll give you a hint: LESS treats variables as a stylesheet would, whereas SASS treats them as one would expect in a programming language.

Let's expand the theme-base.less file with some more default definitions. I'm going to define some other variables in terms of the body color so that themes don't have to explicitly set all values. Instead, a theme can set a base value and let the base stylesheet calculate derived values. If a calculated value isn't OK for a theme, the theme can set that value explicitly to override.

Let's see an example before we continue.


| theme-base.less |

@title_color: darken(@body_color, 25%);
@border_color: @title_color;

body
{
  background-color: @body_color;
}

h2
{
  color: @title_color;
  border: 1px solid @border_color;
}

You'll notice that I avoided setting a value for @body_color because I didn't want to override the value set previously in the theme. But then wouldn't it be impossible for the theme to override the values for @title_color and @border_color? We seem to have a problem here.1

I want to be able to set some values and just use defaults for everything that I don't want to override. There is a construct in SASS called !default that does exactly this. It indicates that an assignment should only take place if the variable has not yet been assigned.2 Searching around for an equivalent in LESS took me to this page, Add support for "default" variables (similar to !default in SASS) #1706. There, users suggested various solutions and the original poster became ever more adamant -- "Suffice it to say that we believe we need default variable setting as we've proposed here" -- until a LESS developer waded in to state that it would be "a pointless feature in less", which seemed harsh until an example showed that he was quite right.

The clue is further down in one of the answers:

If users define overrides after then it works as if it had a default on it. [T]hat's because even in the imported file it will take the last definition in the same way as css, even if defined after usage. (Emphasis added.)

It was at this point that the lightbulb went on for me. I was thinking like a programmer where a file is processed top-down and variable values can vary depending on location in the source text. That the output of the following C# code is 12 should amaze no one.

**var** a = 1;
Console.Write(a);
a = 2;
Console.Write(a);
a = 3;

In fact, we would totally expect our IDE to indicate that the value in the final assignment is never used and can be removed. Using LESS variable semantics, though, where variables are global in scope3 and assignments are treated as they are in CSS, we would get 33 as output. Why? Because the variable a has the value 3: that's the last value assigned to it. That is, LESS has a cascading approach to variable assignment.

This is exactly as the developer from LESS said: stop fighting it and just let LESS do what it does best. Do you want default values? Define the defaults first, then define your override values. The overridden value will be used even when used for setting the value of another default value that you didn't even override.
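To put that in programmer's terms: it's as if every LESS variable reference were lazy, evaluated only after the entire stylesheet has been read. Here's a rough C# analogy using a deferred Func, where Darken is a hypothetical stand-in for the LESS function of the same name:

**static string** Darken(**string** color, **int** percent)
{
  **return** "darken(" + color + ", " + percent + "%)"; // stand-in, for illustration only
}

**var** bodyColor = "white";
// Deferred, like a LESS variable reference: nothing is evaluated yet.
Func<**string**> titleColor = () => Darken(bodyColor, 25);
bodyColor = "#800"; // a later assignment overrides, as in LESS
Console.WriteLine(titleColor()); // evaluated only now, so it sees "#800", not "white"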

Now let's go fix our stylesheet to use these terse semantics of LESS. Here's a first cut at a setup that feels pretty right. I put the files in the order that you would read them so that you can see the overridden values and everything makes sense again.4


| theme-variables.less |

@body_color: white;
@title_color: darken(@body_color, 25%);
@border_color: @title_color;


| crimson.less |

@import "theme-variables";
@body_color: #800;
@import "theme-base";


| theme-base.less |

body
{
  background-color: @body_color;
}

h2
{
  color: @title_color;
  border: 1px solid @border_color;
}

You can see in the example above that the required variables are all declared, then overridden and then used. From what we learned above, we know that the value of @title_color in the file theme-variables.less will use a value of #800 for @body_color because that was the last value it was assigned.

We can do better though. The example above hasn't quite embraced the power of LESS fully. Let's try again.


| theme-base.less |

@body_color: white;
@title_color: darken(@body_color, 25%);
@border_color: @title_color;

body
{
  background-color: @body_color;
}

h2
{
  color: @title_color;
  border: 1px solid @border_color;
}


| crimson.less |

@import "theme-base";
@body_color: #800;

Boom! That's all you have to do. Set up everything in your base stylesheet file. Define all variables and define them in terms of each other in as convoluted a manner as you like. The final value of each variable is determined before any CSS is generated.

This final version also has the added advantage that a syntax-checking IDE like JetBrains WebStorm or PHPStorm will be able to provide perfect assistance and validity checking. That wasn't true at all for any of the previous versions, where variable declarations were in different files.

Although I was seriously considering moving away from LESS and over to SASS -- because at least they didn't leave out such a basic feature, as I had thought crossly to myself -- I'm quite happy to have learned this lesson and am more happy with LESS than ever.



  1. For those of you who already know how to fix this, stop smirking. I'm writing this post because it wasn't intuitive for me -- although now I see the utter elegance of it.

  2. I'd also seen the same concept in NAnt property tasks where you can use the now-deprecated overwrite="false" directive. For the curious, now you're supposed to use unless="${property::exists('property-name')}" instead, which is just hideous.

  3. There are exceptions, but "variables in LESS are global" is a good rule of thumb. One example is that if a parameter for a mixin has the same name as a globally assigned variable, the value within that mixin is taken from the parameter rather than the global.

  4. Seriously, LESS experts, stop smirking. I'm taking a long time to get there because a programmer's intuitive understanding of how variables work is a hard habit to break. Almost there.

How to configure Visual Studio 2013 with licenses from a multi-pack

If you're only interested in what we promised to show you in the title of the article, then you can jump to the tl;dr at the end.

Silver Partnership

Encodo is a member of the Microsoft Partner Program with a Silver Competency. We maintain this competency through a combination of the following:

  • A yearly fee
  • Registration of .NET products developed by Encodo (Punchclock and Quino in our case)
  • Customer endorsements for .NET products that Encodo has developed
  • Competency exams

This involves no small amount of work and means that the competency isn't that easy to get. You can also use Microsoft certifications (e.g. MCSE) but we don't have any of those (yet).

We've had this membership for a while in order to get partner benefits, which basically translates to having licenses for Microsoft software at our disposal in order to develop and test .NET software. This includes 10 licenses for all premium versions of Visual Studio, up to and including the latest and greatest.

The partner web site

In previous versions, we were able to go to the partner web site and, after a lot of effort, get license keys for our 10 licenses and distribute them among our developers.

I mention the effort only because the partner site(s) and page(s) and application(s) are so notoriously hard to navigate. Every year, Microsoft improves something on the site, generally to the style but not the usability. I pluralized the components above because it's hard to tell how many different teams, applications and technologies are actually behind the site.1

[Screenshot: the Microsoft partner web site]

  • You have to log in with your official LiveID but some pages mysteriously don't use the common login whereas others do use it and still others just log you out entirely, forcing you to log in again.
  • Some pages work in any browser whereas others highly recommend using Internet Explorer, some even recommending version 11. If you don't use IE, you'll always wonder whether the site failed to work because it's so buggy or because your browser is not properly supported.
  • The downloads page includes Windows operating systems and server software of all kinds as well as productivity software like Office and Visio but mentions nothing about Visual Studio.

It's basically always been a mess and still is a mess and our suspicion is that Microsoft deliberately makes it hard to redeem your licenses so that you'll either (A) just purchase more licenses via a channel that actually delivers them or (B) call their for-fee hotline and spend a bunch of money waiting on hold before you get forwarded from one person to another until finally ending up with someone who can answer your question.


The convoluted path to licenses

That question being, in case we've forgotten, "how can I please get your software to recognize the licenses that I purchased instead of threatening me that it will go dark in 90 days?"

The magical answer to that question is below.

First, what are we actually trying to accomplish? We have a multi-pack license and we want some way of letting our users/developers register products. In most cases, this still works as it always has: get a license key and enter it/register it with the local installation of the product (e.g. Office, Windows, etc.)

With Visual Studio 2013 (VS2013), things are slightly different. There are no multi-pack license keys anymore. Instead, users log in to their Visual Studio with a particular user. VS2013 checks that account for a license on the MSDN server and enables its functionality accordingly. If you have no license registered, then you get a 90-day trial license.

If the license is a multi-pack and the user accounts are individual...how does that work? Easy: just associate individual users with the partner account and assign them licenses. However, this all works via MSDN, so it's not enough to just have a Windows Live account. That user must also be registered as an MSDN subscriber.

So, for each user, you're going to have to do the following:

  1. Get them a Windows Live account if they don't already have one
  2. Add that account ID to the partner account
  3. Enable that user to get premium benefits (this can take up to 72 hours; see below for more detail)
  4. Register that Windows Live account as an MSDN subscriber
  5. Enter your credentials into VS2013 or refresh your license

The solution (with screenshots)

Sounds easy, right? Once you know what to do, it's not so bad. But we had a lot of trouble discovering this process on our own. So here are the exact steps you need to take, with screenshots and hints to help you along.

  1. Log in with the Windows LiveID that corresponds to the account under which the Silver Membership is registered.

  2. Navigate to the account settings where you can see the list of members registered with your account.

  3. Add the email address of the user to that list of members2

  4. Make sure that the "Premium" box is checked at the end of the list3

  5. A six-character Tech ID will be generated for that user. The site claims that it can take up to 72 hours for this number to be ready for use on the MSDN site. Our experience was that it took considerably less time.

  6. Give that user their Tech ID and have them register with MSDN to create a subscription

    1. Get the Tech ID for your user from the steps above;
    2. Browse to the MSDN home page and click "Downloads"4
    3. Click "MSDN Subscriptions" in the sub-menu under "Downloads" (totally intuitive, right?)5
    4. Ignore the gigantic blue button enticing you to check out "Access benefits" and click "Register a subscription"6
    5. You'll finally be on the page to "Activate your subscription". Use the exact same address as registered with the partner account and enter your Tech ID.7
  7. Once the user has a subscription, that user can log in to VS2013 from the registration dialog to enable that license8

Logging in has other benefits: you can store your VS2013 settings on the Live server and use them wherever you work with VS2013 and have logged in.


  1. You can confirm our impressions just by looking at the screenshots attached below.

  2. [Screenshot: the list of members registered with the partner account]

  3. [Screenshot: the "Premium" checkbox at the end of the member list]

  4. [Screenshot: the MSDN home page with the "Downloads" menu]

  5. [Screenshot: the "MSDN Subscriptions" entry in the sub-menu under "Downloads"]

  6. [Screenshot: the "Register a subscription" link]

  7. [Screenshot: the "Activate your subscription" page]

  8. [Screenshot: the VS2013 registration dialog]

Converting an existing web application from JavaScript to TypeScript

TypeScript is a new programming language developed by Microsoft with the goal of adding static typing to JavaScript. Its syntax is based on the ECMAScript 6 standard, which is currently being defined by a consortium. The language has features that most developers know well from other languages like C#: static types, generics, interfaces, inheritance and more.

With this new language, Microsoft tries to solve a problem that many web developers have faced while developing JavaScript: since the code is not compiled, an error is only detected when the browser actually executes the code (at run time). This is time-consuming, especially when developing for mobile devices, which are not that easy to debug. With TypeScript, the code passes through a compiler before actually being executed by a JavaScript interpreter. In this step, many errors can be detected and fixed by the developer before testing the code in the browser.

Another benefit of static typing is that the IDE is able to give much more precise hints to the developer as to which elements are available on a certain object. In plain JavaScript, pretty much everything is possible and you have to add type hints to your code to have at least some information available from the IDE.

Tools

For a developer, it is very important that the tools for writing code are as good as possible. I tried different IDEs for developing TypeScript and came to the conclusion that the best one currently available is Visual Studio 2013 with the TypeScript and Web Essentials plugins. Those plugins are also available for Visual Studio 2012, but the new version feels much quicker when writing TypeScript code: errors are detected almost immediately as you type. As a bonus, you can also install ReSharper. Its TypeScript support in the current version 8.0 is almost nonexistent, but JetBrains has announced that it will improve dramatically in the next version, 8.1, which is currently available in the JetBrains Early Access Program (EAP).

There is also WebStorm from JetBrains, which supports TypeScript as well, but this tool does not (currently) feel as natural to me as Visual Studio does. I hope (and am pretty sure) that JetBrains is working on this and that there will soon be more good tools available for TypeScript development than just Visual Studio.

Switch the project

The actual switching of the project is pretty straightforward. As a first step, change the file extension of every JavaScript file in your project to .ts. Then create a new TypeScript project in Visual Studio 2013 in which you include all your brand-new TypeScript files. Since the TypeScript files are later compiled into correspondingly named *.js files, you don't have to change the script tags in your HTML files.

When you try to compile your new TypeScript project for the first time, it most certainly won't work. Chances are that you use some external libraries like jQuery or something similar. Since the compiler doesn't know those types, it assumes you have some errors in the code. To fix this, go to the GitHub project DefinitelyTyped and search for the typed interfaces for all of your libraries. There are also NuGet packages for each of those interfaces, if you prefer that approach. Then, you have to add a reference to those interfaces in at least one code file of your project. To include a file, simply add a line like the following at the top of the file:

///<reference path="typings/jquery/jquery.d.ts" />

This should fix most of the errors in your project. If the compiler still has something to complain about, chances are that you've already gotten the first benefit out of TypeScript and found an actual error in your JavaScript code. Once you've fixed all of the remaining errors, your project has been successfully ported to TypeScript. I recommend also enabling source maps for a better debugging experience. And I recommend not including the generated *.js and *.js.map files in source control, because these files are generated by the compiler and would otherwise cause unnecessary merge conflicts.

Once your project successfully runs on TypeScript, you can start to add static types to it. The big advantage of TypeScript (versus something like Dart) is that you don't have to do this for the whole project in one rush: you can start where you think it brings the most benefit and convert other parts only when you have the time to do so. As a long-term goal, I recommend adding as many types and modules to your project as possible, since this makes future development easier for you and your team.

Conclusion

In my opinion, TypeScript is ready to be used in bigger web applications. There might be some worries because TypeScript has not yet reached version 1.0 and there are not many existing bigger projects implemented in TypeScript. But this is not a real argument, because if you decide you don't like TypeScript at any point in the future, you can simply compile the project into JavaScript and work with the generated code from then on.

The most important point for me is that TypeScript forces you and your co-developers to write correct code, which is not a given with JavaScript. If the compiler can't compile the code, you should not be able to push it to the repository. This holds even if you don't have a single class or module statement in the project. Of course, the compiler can't find all possible errors, but it can at least find the most obvious ones, which would otherwise have cost a lot of time in testing and debugging -- time that you could spend better on something more important.

We do not have real long-term experience with TypeScript (nobody has that), but we at Encodo have decided not to start another plain-JavaScript project as long as we have the choice (i.e. unless external factors force us to do so).

An introduction to query-mapping in the ORM

One of the most-used components of Quino is the ORM. An ORM is an Object-Relational Mapper, which accepts queries and returns data.

  • Applications formulate queries in Quino using application metadata
  • The ORM maps this query to the query language of the target database
  • The ORM transforms the results returned by the database to objects (the classes for which were also generated from application metadata).

This all sounds a bit abstract, so let's start with a concrete example. Let's say that we have millions of records in an employee database. We'd like to get some information about that data using our ORM. With millions of records, we have to be a bit careful about how that data is retrieved, but let's continue with concrete examples.

Attempt #1: Get your data and refine it locally

The following example returns the correct information, but does not satisfy performance or scalability requirements.1

var people = Session.GetList<Person>().Where(p => p.Company.Name == "IBM");

Assert.That(people.Count(), Is.GreaterThanOrEqualTo(140000));

What's wrong with the statement above? Since the call to Where occurs after the call to GetList<Person>(), the restriction cannot possibly have been passed on to the ORM.

The first line of code doesn't actually execute anything. It's in the call to Count() that the ORM and LINQ are called into action. Here's what happens, though:

  • For each row in the Person table, create a Person object
  • For each person object, create a corresponding Company object
  • Count all people where the Name of the person's company is equal to "IBM".

The code above benefits from almost no optimization, instantiating a tremendous number of objects in order to yield a scalar result. The only side-effect that can be considered an optimization is that most of the related Company objects will be retrieved from cache rather than from the database. So that's a plus.

Still, the garbage collector is going to be running pretty hot and the database is going to see far more queries than necessary.2

Attempt #2: Refine results on the database

Let's try again, using Quino's fluent querying API.3 The Quino ORM can map much of this API to SQL. Anything that is mapped to the database is not performed locally and is, by definition, more efficient.4

var people = Session.GetList<Person>();
people.Query.Join(Person.Relations.Company).WhereEqual(Company.Fields.Name, "IBM");

Assert.That(people.Count, Is.GreaterThanOrEqualTo(140000));

First, we get a list of people from the Session. As of the first line, we haven't actually gotten any data into memory yet -- we've only created a container for results of a certain type (Person in this case).

The default query for the list we created is to retrieve everything without restriction, as we saw in the first example. In this example, though, we restrict the Query to only the people that work for a company called "IBM". At this point, we still haven't called the database.

The final line is the first point at which data is requested, so that's where the database is called. We ask the list for the number of entries that match it and it returns an impressive number of employees.

At this point, things look pretty good. In older versions of Quino, this code would already have been sufficiently optimized. It results in a single call to the database that returns a single scalar value with everything calculated on the database. Perfect.

Attempt #3: Avoid creating objects at all

However, since v1.6.0 of Quino5, the call to the property IDataList.Count has automatically populated the list with all matching objects as well. We made this change because the following code pattern was pretty common:

var list = Session.GetList<Person>();
// Adjust query here
if (list.Count > 0)
{
  // do something with all of the objects here
}

That kind of code resulted in not one, but two calls to the database, which was killing performance, especially in high-latency environments.

That means, however, that the previous example is still going to pull 140,000 objects into memory, all just to count them and add them to a list that we're going to ignore. The garbage collector isn't a white-hot glowing mess anymore, but it's still throwing you a look of disapproval.

Since we know that we don't want the objects in this case, we can get the old behavior back by making the following adjustment.

var people = Session.GetList<Person>();
people.Query.Join(Person.Relations.Company).WhereEqual(Company.Fields.Name, "IBM");

Assert.That(Session.GetCount(people.Query), Is.GreaterThanOrEqualTo(140000));

It would be even clearer to just forget about creating a list at all and work only with the query instead.

var query = Session.GetQuery<Person>();
query.Join(Person.Relations.Company).WhereEqual(Company.Fields.Name, "IBM");

Assert.That(Session.GetCount(query), Is.GreaterThanOrEqualTo(140000));

Now that's a maximally efficient request for the number of matching people, in Quino 1.10 as well.

Tune in next time for a look at what happens when a query can only be partially mapped to the database.


There are different strategies for retrieving associated data. Quino does not yet support retrieving anything other than root objects. That is, the associated Company object is not retrieved in the same query as the Person object.

In the example in question, the first indication the ORM gets that a Company is required at all is when the lambda retrieves one for each person. Even if the original query had somehow indicated that the Company objects were also desired (e.g. using something like Include(Person.Relations.Company), as you would in EF), the optimal mapping strategy is still not clear.

Should the mapper join the company table and retrieve that highly redundant data with each person? Or should it execute a single query for all companies and prime a cache with those? The right answer depends on the latency and bandwidth between the ORM and the database as well as myriad other conditions. When dealing with a lot of data, it's not hard to find examples where the default behavior of even a clever ORM isn't maximally efficient -- or even very efficient at all.

As we already noted, though, the example in question does everything in memory. If we reasonably assume that the people belong to a relatively small number of companies -- say qc of them -- then the millions of calls to retrieve the company associated with each person will mostly hit the cache and generate "only" qc + 1 queries. For example, a million people spread over a hundred companies generate 101 queries rather than 1,000,001.


  1. I suppose it depends on what those requirements are, but if you think your application's performance requirements are so loose that it's OK to create millions of objects in memory just in order to count them, then you're probably not in the target audience for this article.

  2. Quino does not have LINQ to SQL support. I'm not even going to write "yet" at the end of that sentence because it's not at all clear that we're ever going to have it. Popular demand might convince us otherwise, but for now we're quite happy with our API (and soon-to-be-revealed query language QQL).

  3. That's an assumption I'm going to make for which counterexamples certainly exist, but none of which apply to the simple examples we'll address in this article.

  4. That was almost three years ago, in June of 2011.

TrueCrypt: yet another organically grown user interface

This article originally appeared on earthli News and has been cross-posted here.


I use TrueCrypt at work to encrypt/protect the volume where I store source code for various customers. It generally works pretty seamlessly and I don't even notice that I'm working on an encrypted volume.

The other day, Windows started complaining in the Action Center that my drive needed checking because errors had been discovered. At first, I thought that it was referring to my system drive -- which is not encrypted -- and I rebooted Windows to let it do its thing.

Windows was back up and running relatively quickly and I wondered whether it had even checked the drive at all. The little flag in the Action Center was gone, though, so all was well.

My TrueCrypt drive doesn't auto-mount, though. When I mounted it a while later to do some work, the little flag popped up immediately and I realized that Windows was complaining about that drive rather than my system drive.

Windows's advice to "reboot to fix the problem" wasn't going to work because there is no way that Windows can access the TrueCrypt-encrypted drive early in the BIOS/boot process. So I went to the properties for that volume and tried to scan it using the standard system tools.

No dice. Windows claims that it can't check that volume.

Weird.

If it can't even check that volume, then where does Windows get off telling me that the volume has errors? Had Windows noticed -- after several months -- that it was incapable of checking that drive and decided to nag me about it, even though it can't offer any solutions? As a longtime Windows user, this didn't strike me as especially unlikely.

I got advice from a more savvy TrueCrypt user that it offers its own file-system check-and-repair tools. So I fired up the main window for TrueCrypt, which appeared as shown below.

image

O-K. Now how do I check my volume? "Volume Tools" makes sense. Click.

image

Nope. My initial intuition was wrong. How about "Tools" in the menu? Click.

image

Strike two. Some commands are repeated from the "Volume Tools" popup and there are some other things, but "Check" and "Repair" aren't here either.

How about the "Volumes" menu? Click.

image

Strike three. Again, there are a few volume-related functions, but not the ones I'm looking for. Maybe my colleague was wrong when he said that there were check/repair tools? Maybe they were dropped from the TrueCrypt software? I'm losing faith here.

Wait, I have one more idea. How about if I right-click the volume in the list?

Click.

image

There it is.

Cue relief mixed with disappointment that this is yet another user interface that is wildly inconsistent and utterly unintuitive. It doesn't have to have a groundbreaking UI, but it could at least follow some basic guidelines. A few hours of work would suffice, I think.

I ran the check, which found no errors and repaired nothing. Windows has not complained about errors since. Very reassuring.

v1.10.0: .NET 4.5, MVC5, JSON remoting; improved NuGet and Windows services support

The summary below describes major new features, items of note and breaking changes. The full list of issues is also available for those with access to the Encodo issue tracker.

Highlights

Breaking changes

Utilities

  • PathTools.Canonicalize() has been replaced with FileTools.Canonicalize() (see the call-site sketch after this list).
  • ICredentials.Password no longer exists; instead, use ICredentials.PasswordHash.
  • The ReversibleEncryptionProvider no longer exists; instead, use the RijndaelEncryptionProvider.
  • UrlParts.Protocol no longer exists; instead, use UrlParts.Scheme.
  • DirectoryServicesSettings.ServerUri no longer exists; instead, use DirectoryServicesSettings.HostName.
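By way of illustration, here's what two of these renames look like at a call site. Only the type and member names come from the list above; the parameter lists and the locals somePath and urlParts are assumed for the sketch.

// Before (Quino 1.9):
var path = PathTools.Canonicalize(somePath);
var scheme = urlParts.Protocol;

// After (Quino 1.10):
var path = FileTools.Canonicalize(somePath);
var scheme = urlParts.Scheme;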

Configuration

  • The extension method CoreConfigurationTools.Integrate() no longer exists; instead, use ConfigurationTools.Add(). If you still need to clear existing packages, call StartupActions.Clear() and ShutdownActions.Clear() before adding the package.
  • MetaConfigurationTools.IntegrateMongoLoopbackDatabase(), MetaConfigurationTools.IntegrateLocalDatabase() and MetaConfigurationTools.IntegrateLoopbackDatabase() no longer exist. Instead, the MongoLoopbackDatabase is the default provider registered for the ILoopbackDatabase interface. If the configuration of an application requires a loopback database, the version provided by the ServiceLocator will provide it. Override that registration to change the type of database to use (see the sketch after this list).
  • The class Encodo.Quino.Tools.DataGenerator has been moved to the Encodo.Quino.Models.Tools.DataGenerators namespace.
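Several items in these notes say to "override the registration" in the ServiceLocator. As a rough sketch only -- the registration call and the MyLoopbackDatabase class below are hypothetical, not Quino's actual API -- overriding the loopback-database provider would look something like this:

// Hypothetical registration syntax; consult the Quino documentation for the
// real ServiceLocator call. The point is only that ILoopbackDatabase is a
// service registration, so replacing the default MongoLoopbackDatabase means
// registering your own implementation for that interface.
ServiceLocator.Register<ILoopbackDatabase, MyLoopbackDatabase>();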

Metadata/Builders/Modules

  • MetaBuilderBase.ApplyGenerators() no longer exists; instead, use MetaBuilderBase.Commit().
  • The attribute Encodo.Quino.Methods.AspectsBaseAttribute has been replaced with Encodo.Quino.Meta.Aspects.AspectProviderAttributeBase.
  • The interface Encodo.Quino.View.Aspects.IViewColorAspect and class ViewColorAspect have been replaced with the Encodo.Quino.View.Aspects.IViewAppearanceAspect and ViewAppearanceAspect, respectively.

Reporting

  • The class Encodo.Quino.Models.Reports.ReportsModuleGenerator has been moved to the Encodo.Quino.Models.Reports.Generators namespace.
  • The aspect Encodo.Quino.Models.Reports.ReportContextVisibleAspect has been moved to the Encodo.Quino.Models.Reports.Aspects namespace.

Security

  • The aspect Encodo.Quino.Models.Security.SecurityContextVisibleAspect has been moved to the Encodo.Quino.Models.Security.Aspects namespace.
  • The aspect Encodo.Quino.Models.Security.SecurityModuleSecurityClassAspect has been moved to the Encodo.Quino.Models.Security.Aspects namespace.
  • The class Encodo.Quino.Models.Security.StandardSecurityModuleAccessControl has been moved to the Encodo.Quino.Models.Security.Logic namespace.
  • The class Encodo.Quino.Security.LoginTokenBasedLoginAuthenticator has been replaced with the Encodo.Quino.Models.Security.Login.SecurityModuleTokenBasedAuthenticator.
  • The class Encodo.Quino.Security.UniqueIdentifierBasedAuthenticator has been replaced with the Encodo.Quino.Models.Security.Login.SecurityModuleUniqueIdentifierBasedAuthenticator.

Data providers

  • PostgreSqlMetaDatabase no longer has a constructor that accepts a string to name the database. The name is always taken from the ConnectionSettings.Name.
  • Database.RefreshState() and Database.DefaultConnectionSettings no longer exist; instead, use Database.RefreshDetails() and Database.ConnectionSettings, respectively.
  • Extension method MetaApplicationTools.GetQueryableDatabases(this IMetaApplication) has been replaced with extension method DataProviderTools.GetDatabaseHandlers(this IDataProvider).
  • Extension method MetaConfigurationTools.GetQueryableDatabases(this IMetaApplication) has been replaced with extension method DataProviderTools.GetDatabaseHandlers(this IDataProvider).
  • The IDataConnection.SyncRoot object no longer exists. Connections should never be shared between threads. Instead of explicit locking, applications should use an IPooledDataConnectionManager to maximize sharing of connections between threads.
  • The method IPersistable.SetStored() no longer exists; instead, use SetState().
  • The previous object-state property on IPersistentObjectContext no longer exists; instead, use IPersistentObjectContext.InitialObjectState.
  • The methods IMetaObjectHandler.GetObjectStored(), IMetaObjectHandler.GetObjectDelete(), IMetaObjectHandler.GetObjectChangedOrNew() and IMetaObjectHandler.SetObjectStored() no longer exist; instead, use some combination of IMetaObjectHandler.GetObjectState(), IMetaObjectHandler.GetObjectChanged() and IMetaObjectHandler.SetObjectState().
  • The NullCache no longer exists; instead, register an IDataCacheFactory with the ServiceLocator that returns a DataCache with no providers defined or with its Enabled property set to false.
  • The constant MetaActionNames.DetermineDatabaseState has been replaced with MetaActionNames.DetermineDataProviderConnectivity.

Remoting

  • The attribute Encodo.Quino.Methods.MetaMethodParameterInterceptorAspect has been replaced with Encodo.Quino.Methods.Aspects.MethodParameterInterceptorAspectBase.
  • The extension method MetaConfigurationTools.IntegrateRemoting() no longer exists. Remoting is now automatically configured to use the HttpRemoteClientFactory. To override this default behavior, use the ServiceLocator to register a different implementation for the IRemoteClientFactory.
  • MetaConfigurationTools.SetupForRemotingServer() no longer exists; instead, use MetaConfigurationTools.ConfigureAsRemotingServer().
  • The SetUpRemotingAction and MetaActionNames.SetUpRemoting constant no longer exist. Instead, the IRemoteClientFactory is configured in the ServiceLocator; override that registration to use a different implementation.
  • The constant RemoteServerDataHandler.DefaultName and method RemoteServerDataHandler.GetName() no longer exist; instead, use the name of the driver, which is derived from the name in the connection settings for that driver.
  • The interface Encodo.Quino.Remoting.Client.IRemoteMethodCaller has been moved to the Encodo.Quino.Remoting.Methods namespace.
  • The class RemoteClientFactoryAuto no longer exists. Remoting is now automatically configured to use the HttpRemoteClientFactory. To override this default behavior, use the ServiceLocator to register a different implementation for the IRemoteClientFactory.
  • HttpRemoteHost and RemoteHostHttp no longer exist; instead, use Encodo.Quino.Remoting.Http.MetaHttpRemoteHost.

Winform

  • WinformDxMetaConfigurationTools.IncludeSettings() has been replaced with the extension method WinformDxMetaConfigurationTools.IncludeWinformDxSettings().
  • WinformDxMetaConfigurationTools.ConfigureDefaults() has been replaced with the extension method WinformDxMetaConfigurationTools.IntegrateWinformDxPackages().
  • The method ControlToolsDX.FindProperty() no longer exists; instead, use the method ControlToolsDX.TryGetProperty().