Quino: partially-mapped queries

In the article Quino: an overview of query-mapping in the data driver, we took a look at some of the basics of querying data with Quino while maintaining acceptable performance and memory usage.

Now we'll take a look at what happens with partially-mapped queries. Before explaining what those are, we need a more concrete example to work with. Here's the most-optimized query we ended up with in the previous article:

var query = Session.GetQuery<Person>();
query.Join(Person.Relations.Company).WhereEqual(Company.Fields.Name, "IBM");

Assert.That(Session.GetCount(query), Is.GreaterThanEqual(140000));

With so many entries, we'll want to trim down the list a bit more before we actually create objects. Let's choose only people whose last names start with the letter "M".

var query = Session.GetQuery<Person>();
query
  .Where(Person.Fields.LastName, ExpressionOperator.StartsWith, "M")
  .Join(Person.Relations.Company).WhereEqual(Company.Fields.Name, "IBM");

Assert.That(Session.GetCount(query), Is.Between(100, 120));

This is the kind of stuff that works just fine in other ORMs, like Entity Framework. Where Quino goes just a little farther is in being more forgiving when a query can be only partially mapped to the server. If you've used EF for anything beyond trivial queries, you've surely run into an exception that tells you that some portion of your query could not be mapped.1

Instead of throwing an exception, Quino sends what it can to the database and uses LINQ to post-process the data sent back by the database to complete the query.
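
To make that idea concrete, here's a rough sketch of "map what you can, post-process the rest" in plain C#. This is not Quino's actual implementation; the Condition and PartialMapper types (and the simplified Person class) are invented for illustration. Conditions that can be expressed as SQL end up in the WHERE clause; the rest are applied lazily with LINQ to the objects streaming back from the server.

using System;
using System.Collections.Generic;
using System.Linq;

public class Person
{
  public string LastName { get; set; }
  public string CompanyName { get; set; }
}

public class Condition
{
  public string Sql { get; set; }               // non-null if the condition can be mapped to SQL
  public Func<Person, bool> Local { get; set; } // local fallback for unmappable conditions
}

public static class PartialMapper
{
  public static IEnumerable<Person> Execute(
    IReadOnlyList<Condition> conditions,
    Func<string, IEnumerable<Person>> executeSelect)
  {
    var mapped = conditions.Where(c => c.Sql != null).ToList();
    var unmapped = conditions.Where(c => c.Sql == null).ToList();

    var sql = "SELECT * FROM Person";
    if (mapped.Count > 0)
    {
      sql += " WHERE " + string.Join(" AND ", mapped.Select(c => c.Sql));
    }

    // The database applies everything that could be mapped...
    var results = executeSelect(sql);

    // ...and the rest is applied lazily, object by object, on the client.
    return unmapped.Aggregate(results, (sequence, c) => sequence.Where(c.Local));
  }
}

The important detail is that the local restrictions are composed with Where() rather than applied to a materialized list, which matters for the memory discussion below.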

Introducing unmappable expressions

Unmappable code can easily sneak in through aspects in the metadata that define filters or sorts using local methods or delegates that do not exist on the server. Instead of building a complex case, we're going to knowingly include an unmappable expression in the query.

var query = Session.GetQuery<Person>();
query
  .Where(new DelegateExpression(c => c.GetObject<Person>().LastName.StartsWith("M")))
  .Join(Person.Relations.Company).WhereEqual(Company.Fields.Name, "IBM");

Assert.That(Session.GetCount(query), Is.Between(100, 120));

The new expression performs the same check as the previous example, but in a way that cannot be mapped to SQL.2 With our new example, we've provoked a situation where any of the following could happen:

  • The ORM could throw up its hands and declare the query unmappable, pushing the responsibility for separating mappable from unmappable onto the shoulders of the developers. As noted above, this is what EF does.
  • The ORM could determine that the query is unmappable and evaluate everything locally, retrieving only the initial set of Person objects from the server (all several million of them, if you'll recall from the previous post).
  • The ORM could map part of the query to the database, retrieving the minimal set of objects necessary in order to guarantee the correct result. This is what Quino does. The strategy works in many cases, but is not without its pitfalls.

What happens when we evaluate the query above? With partial mapping, we know that the restriction to "IBM" will be applied on the database. But we still have an additional restriction that must be applied locally. Instead of being able to get the count from the server without creating any objects, we're now forced to create objects in memory so that we can apply the local restrictions and only count the objects that match them all.

But as you'll recall from the previous article, the number of matches for "IBM" is 140,000 objects. The garbage collector just gave you a dirty look again.

Memory bubbles

There is no way to optimize this query further because of the local evaluation, but there is a way to avoid another particularly nasty issue: memory bubbles.

What is a memory bubble you might ask? It describes what happens when your application is using nMB and then is suddenly using n + 100MB because you created 140,000 objects all at once. Milliseconds later, the garbage collector is thrashing furiously to clean up all of these objects -- and all because you created them only in order to filter and count them. A few milliseconds after that, your application is back at nMB but the garbage collector's fur is ruffled and it's still trembling slightly from the shock.

The way to avoid this is to stream the objects through your analyzer one at a time rather than to create them all at once. Quino uses lazily-evaluated IEnumerable<T> sequences throughout the data driver specifically to prevent memory bubbles.
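
In LINQ terms, the difference looks roughly like this. It's only a sketch: source stands in for whatever lazily yields objects from the data driver, and the names are made up.

using System;
using System.Collections.Generic;
using System.Linq;

public static class CountingSketch
{
  // Materializes every object before counting: the entire result set is alive at once.
  public static int CountWithBubble<T>(IEnumerable<T> source, Func<T, bool> localFilter)
  {
    return source.ToList().Count(localFilter);
  }

  // Streams: each object is created, tested and becomes collectible again before the
  // next row is even read from the database.
  public static int CountStreaming<T>(IEnumerable<T> source, Func<T, bool> localFilter)
  {
    return source.Count(localFilter);
  }
}

Both compute the same number; only the second keeps the working set flat.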

Streaming with IEnumerable<T> sequences

Before tackling how the Quino ORM handles the Count(), let's look at how it would return the actual objects from this query.

  • Map the query to create a SELECT statement
    • At this point, it doesn't matter whether the entire query could be mapped
  • Create an IEnumerable<T> sequence that represents the result of the mapped query
    • At this point, nothing has been executed and no objects have been returned
  • Wrap the sequence in another sequence that applies all of the "unhandled" parts of the query
  • Return that sequence as the result of executing the query
    • At this point, we still haven't actually executed anything on the database or created any objects

Right, now we have an IEnumerable<T> that represents the result set, but we haven't lit the fuse on it yet.
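
A stripped-down version of such a wrapper sequence might look like this -- purely an illustration of the deferred-execution pattern with invented names, not the actual driver code.

using System;
using System.Collections.Generic;

public static class LazyResultSketch
{
  // Wraps the mapped result sequence in a filter without executing anything: the SELECT
  // only runs when the caller starts enumerating the returned sequence.
  public static IEnumerable<T> ApplyLocalRestrictions<T>(
    IEnumerable<T> mappedResults,
    IReadOnlyList<Func<T, bool>> unhandledRestrictions)
  {
    foreach (var item in mappedResults) // the first MoveNext() triggers the database call
    {
      var matches = true;

      foreach (var restriction in unhandledRestrictions)
      {
        if (!restriction(item))
        {
          matches = false;
          break;
        }
      }

      if (matches)
      {
        yield return item; // streamed to the caller one object at a time
      }
    }
  }
}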

How do we light the fuse? Well, the most common way to do so is to call ToList() on it. What happens then?

  • The IEnumerator<T> requests an element
  • The query is executed against the database and returns an IDataReader
  • The reader requests a row and creates a Person object from that row's data
  • The wrapper that performs the local evaluation applies its filter(s) to this Person and yields it if it matches
  • If it matched the local filters, the Person is added to the list
  • Control returns to the IDataReader, which requests another row
  • Repeat until no more rows are returned from the database

Since the decision to add all objects to a list occurs all the way at the very outer caller, it's the caller that's at fault for the memory bubble, not the driver.3 We'll see in the next section how to avoid creating a list when none is needed.

Using cursors to control evaluation

If we wanted to process data but perhaps offer the user a chance to abort processing at any time, we could even get an IDataCursor<T> from the Quino ORM to control iteration ourselves.

using (var cursor = Session.CreateCursor(query))
{
  foreach (var obj in cursor)
  {
    // Do something with obj

    if (userAbortedOperation) { break; }
  }
}

And finally, the count query

But back to evaluating the query above. The Quino ORM handles it like this:

  • Try to map the query to create a COUNT statement
  • Notice that at least one restriction could not be mapped
  • Create a cursor to SELECT all of the objects for the same query (shown above) and count all the objects returned by that instead

So, if a count-query cannot be fully mapped to the database, the most efficient possible alternative is to execute a query that retrieves as few objects as possible (i.e. maps as much to the server as it can) and streams those objects to count them locally.
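
Expressed as a sketch -- with delegates standing in for the mapper and the streaming driver, since the real internals are more involved -- the fallback amounts to this:

using System;
using System.Collections.Generic;

public static class CountFallbackSketch
{
  // tryMapToCount returns a server-side count if the whole query could be mapped;
  // streamObjects yields the narrowest possible mapped result set otherwise.
  public static int GetCount<T>(Func<int?> tryMapToCount, Func<IEnumerable<T>> streamObjects)
  {
    var mappedCount = tryMapToCount();
    if (mappedCount.HasValue)
    {
      return mappedCount.Value; // fully mapped: a single scalar round trip
    }

    var count = 0;

    foreach (var unused in streamObjects()) // partially mapped: stream and count locally
    {
      count++;
    }

    return count;
  }
}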

Tune in next time for a look at how to exert more control with limit and offset and how those work together with partial mapping.



  1. If we were worried that the last names in our database might not necessarily be capitalized, we would use the ExpressionOperator.StartsWithCI to perform the check in a case-insensitive manner instead.

  2. If Quino had a LINQ-to-SQL provider, there's a chance that more of these delegates could be mapped, but we don't have one...and they can't.

  3. Did we still create 140,000 objects? Yes, we did, but not all at once. Now, there are probably situations where it is better to create several objects at a time rather than streaming them individually, but I'm confident that keeping this as the default is the right choice. If you find that your particular situation warrants different behavior, feel free to use Session.CreateCursor() to control evaluation yourself and create the right-sized batches of objects to count. The ChangeAndSave() extension method does exactly that to load objects in batches (size adjustable by an optional parameter) rather than one by one.

LESS vs. SASS: Variable semantics

This article originally appeared on earthli News and has been cross-posted here.


I've been using CSS since pretty much its inception. It's powerful but quite low-level and lacks support for DRY. So, I switched to generating CSS with LESS a while back. This has gone quite well and I've been pretty happy with it.

Recently, I was converting some older, theme stylesheets for earthli. A theme stylesheet provides no structural CSS, mostly setting text, background and border colors to let users choose the basic color set. This is a perfect candidate for LESS.

So I constructed a common stylesheet that referenced LESS variables that I would define in the theme stylesheet. Very basically, it looks like this:


| crimson.less |

@body_color: #800;
@import "theme-base";


| theme-base.less |

body
{
  background-color: @body_color;
}

This is just about the most basic use of LESS that even an amateur user could possibly imagine. I'm keeping it simple because I'd like to illustrate a subtlety to variables in LESS that tripped me up at first -- but for which I'm very thankful. I'll give you a hint: LESS treats variables as a stylesheet would, whereas SASS treats them as one would expect in a programming language.

Let's expand the theme-base.less file with some more default definitions. I'm going to define some other variables in terms of the body color so that themes don't have to explicitly set all values. Instead, a theme can set a base value and let the base stylesheet calculate derived values. If a calculated value isn't OK for a theme, the theme can set that value explicitly to override.

Let's see an example before we continue.


| theme-base.less |

@title_color: darken(@body_color, 25%);
@border_color: @title_color;

body
{
  background-color: @body_color;
}

h2
{
  color: @title_color;
  border: 1px solid @border_color;
}

You'll notice that I avoided setting a value for @body_color because I didn't want to override the value set previously in the theme. But then wouldn't it be impossible for the theme to override the values for @title_color and @border_color? We seem to have a problem here.1

I want to be able to set some values and just use defaults for everything that I don't want to override. There is a construct in SASS called !default that does exactly this. It indicates that an assignment should only take place if the variable has not yet been assigned.2 Searching around for an equivalent in LESS took me to this page, Add support for "default" variables (similar to !default in SASS) #1706. There users suggested various solutions and the original poster became ever more adamant -- "Suffice it to say that we believe we need default variable setting as we've proposed here" -- until a LESS developer waded in to state that it would be "a pointless feature in less", which seemed harsh until an example showed that he was quite right.

The clue is further down in one of the answers:

If users define overrides after then it works as if it had a default on it. [T]hat's because even in the imported file it will take the last definition in the same way as css, even if defined after usage. (Emphasis added.)

It was at this point that the lightbulb went on for me. I was thinking like a programmer where a file is processed top-down and variable values can vary depending on location in the source text. That the output of the following C# code is 12 should amaze no one.

var a = 1;
Console.Write(a);
a = 2;
Console.Write(a);
a = 3;

In fact, we would totally expect our IDE to indicate that the value in the final assignment is never used and can be removed. Using LESS variable semantics, though, where variables are global in scope3 and assignments are treated as they are in CSS, we would get 33 as output. Why? Because the variable a has the value 3 -- that's the last value assigned to it. That is, LESS has a cascading approach to variable assignment.

This is exactly as the developer from LESS said: stop fighting it and just let LESS do what it does best. Do you want default values? Define the defaults first, then define your override values. The overridden value will be used even when calculating another default value that you didn't override.

Now let's go fix our stylesheet to use these terse semantics of LESS. Here's a first cut at a setup that feels pretty right. I put the files in the order that you would read them so that you can see the overridden values and everything makes sense again.4


| theme-variables.less |

@body_color: white;
@title_color: darken(@body_color, 25%);
@border_color: @title_color;


| crimson.less |

@import "theme-variables";
@body_color: #800;
@import "theme-base";


| theme-base.less |

body
{
  background-color: @body_color;
}

h2
{
  color: @title_color;
  border: 1px solid @border_color;
}

You can see in the example above that the required variables are all declared, then overridden and then used. From what we learned above, we know that the value of @title_color in the file theme-variables.less will use a value of #800 for @body_color because that was the last value it was assigned.

We can do better though. The example above hasn't quite embraced the power of LESS fully. Let's try again.


| theme-base.less |

@body_color: white;
@title_color: darken(@body_color, 25%);
@border_color: @title_color;

body
{
  background-color: @body_color;
}

h2
{
  color: @title_color;
  border: 1px solid @border_color;
}


| crimson.less |

@import "theme-base";
@body_color: #800;

Boom! That's all you have to do. Set up everything in your base stylesheet file. Define all variables and define them in terms of each other in as convoluted a manner as you like. The final value of each variable is determined before any CSS is generated.

This final version also has the added advantage that a syntax-checking IDE like JetBrains WebStorm or PHPStorm will be able to provide perfect assistance and validity checking. That wasn't true at all for any of the previous versions, where variable declarations were in different files.

Although I was seriously considering moving away from LESS and over to SASS -- because at least they didn't leave out such a basic feature, as I had thought crossly to myself -- I'm quite happy to have learned this lesson and am more happy with LESS than ever.



  1. For those of you who already know how to fix this, stop smirking. I'm writing this post because it wasn't intuitive for me -- although now I see the utter elegance of it.

  2. I'd also seen the same concept in NAnt property tasks where you can use the now-deprecated overwrite="false" directive. For the curious, now you're supposed to use unless="${property::exists('property-name')}" instead, which is just hideous.

  3. There are exceptions, but "variables are global in LESS" is a good rule of thumb. One example is that if a parameter for a mixin has the same name as a globally assigned variable, the value within that mixin is taken from the parameter rather than the global.

  4. Seriously, LESS experts, stop smirking. I'm taking a long time to get there because a programmer's intuitive understanding of how variables work is a hard habit to break. Almost there.

How to configure Visual Studio 2013 with licenses from a multi-pack

If you're only interested in what we promised to show you in the title of the article, then you can jump to the tl;dr at the end.

Silver Partnership

Encodo is a member of the Microsoft Partner Program with a Silver Competency. We maintain this competency through a combination of the following:

  • A yearly fee
  • Registration of .NET products developed by Encodo (Punchclock and Quino in our case)
  • Customer endorsements for .NET products that Encodo has developed
  • Competency exams

This involves no small amount of work and means that the competency isn't that easy to get. You can also use Microsoft certifications (e.g. MCSE) but we don't have any of those (yet).

We've had this membership for a while in order to get partner benefits, which basically translates to having licenses for Microsoft software at our disposal in order to develop and test .NET software. This includes 10 licenses for all premium versions of Visual Studio, up to and including the latest and greatest.

The partner web site

In previous versions, we were able to go to the partner web site and, after a lot of effort, get license keys for our 10 licenses and distribute them among our developers.

I mention the effort only because the partner site(s) and page(s) and application(s) are so notoriously hard to navigate. Every year, Microsoft improves something on the site, generally to the style but not the usability. I pluralized the components above because it's hard to tell how many different teams, applications and technologies are actually behind the site.1

image

  • You have to log in with your official LiveID but some pages mysteriously don't use the common login whereas others do use it and still others just log you out entirely, forcing you to log in again.
  • Some pages work in any browser whereas others highly recommend using Internet Explorer, some even recommending version 11. If you don't use IE, you'll always wonder whether the site failed to work because it's so buggy or because your browser is not properly supported.
  • The downloads page includes Windows operating systems and server software of all kinds as well as productivity software like Office and Visio but mentions nothing about Visual Studio.

It's basically always been a mess and still is a mess and our suspicion is that Microsoft deliberately makes it hard to redeem your licenses so that you'll either (A) just purchase more licenses via a channel that actually delivers them or (B) call their for-fee hotline and spend a bunch of money waiting on hold before you get forwarded from one person to another until finally ending up with someone who can answer your question.


The convoluted path to licenses

That question being, in case we've forgotten, "how can I please get your software to recognize the licenses that I purchased instead of threatening me that it will go dark in 90 days?"

The magical answer to that question is below.

First, what are we actually trying to accomplish? We have a multi-pack license and we want some way of letting our users/developers register products. In most cases, this still works as it always has: get a license key and enter it/register it with the local installation of the product (e.g. Office, Windows, etc.)

With Visual Studio 2013 (VS2013), things are slightly different. There are no multi-pack license keys anymore. Instead, users log in to their Visual Studio with a particular user. VS2013 checks that account for a license on the MSDN server and enables its functionality accordingly. If you have no license registered, then you get a 90-day trial license.

If the license is a multi-pack and the user accounts are individual...how does that work? Easy: just associate individual users with the partner account and assign them licenses. However, this all works via MSDN, so it's not enough to just have a Windows Live account. That user must also be registered as an MSDN subscriber.

So, for each user, you're going to have to do the following:

  1. Get them a Windows Live account if they don't already have one
  2. Add that account ID to the partner account
  3. Enable that user to get premium benefits (this can take up to 72 hours; see below for more detail)
  4. Register that Windows Live account as an MSDN subscriber
  5. Enter your credentials into VS2013 or refresh your license

The solution (with screenshots)

Sounds easy, right? Once you know what to do, it's not so bad. But we had a lot of trouble discovering this process on our own. So here are the exact steps you need to take, with screenshots and hints to help you along.

  1. Log in with the Windows LiveID that corresponds to the account under which the Silver Membership is registered.

  2. Navigate to the account settings where you can see the list of members registered with your account.

  3. Add the email address of the user to that list of members2

  4. Make sure that the "Premium" box is checked at the end of the list3

  5. A six-character TechID will be generated for that user. The site claims that it can take up to 72 hours for this number to be ready for use on the MSDN site. Our experience was that it took considerably less time.

  6. Give that user their ID and have them register as an MSDN subscriber

    1. Get the Tech ID for your user from the steps above;
    2. Browse to the MSDN home page and click "Downloads"4
    3. Click "MSDN Subscriptions" in the sub-menu under "Downloads" (totally intuitive, right?)5
    4. Ignore the gigantic blue button enticing you to check out "Access benefits" and click "Register a subscription"6
    5. You'll finally be on the page to "Activate your subscription". Use the exact same address as registered with the partner account and enter your Tech ID.7
  7. Once the user is registered as an MSDN subscriber, that user can log in to VS2013 from the registration dialog to enable that license8

Logging in has other benefits: you can store your VS2013 settings on the Live server and use them wherever you work with VS2013 and have logged in.


  1. You can confirm our impressions just by looking at the screenshots attached below.

  2. image

  3. image

  4. image

  5. image

  6. image

  7. image

  8. image

Converting an existing web application from JavaScript to TypeScript

TypeScript is a new programming language developed by Microsoft with the goal of adding static typing to JavaScript. Its syntax is based on the ECMAScript 6 standard, which is currently being defined by a consortium. It has features that most developers know well from other languages like C#: static types, generics, interfaces, inheritance and more.

With this new language, Microsoft tries to solve a problem that many web developers have faced while developing JavaScript: Since the code is not compiled, an error is only detected when the browser actually executes the code (at run time). This is time-consuming, especially when developing for mobile devices which are not that easy to debug. With TypeScript, the code passes through a compiler before actually being executed by a JavaScript interpreter. In this step, many of the errors can be detected and fixed by the developer before testing the code in the browser.

Another benefit of static typing is that the IDE is able to give much more precise hints to the developer as to which elements are available on a certain object. In plain JavaScript, pretty much everything was possible and you had to add type hints to your code to have at least some information available from the IDE.

Tools

For a developer, it is very important that the tools for writing code are as good as possible. I tried different IDEs for developing TypeScript and came to the conclusion that the best currently available is VisualStudio 2013 with the TypeScript and WebEssentials plugins. Those plugins are also available for VisualStudio 2012 but the new version feels much quicker when writing TypeScript code: errors are detected almost immediately while typing. As a bonus, you can also install ReSharper. The TypeScript support of the current version 8.0 is almost nonexistent but JetBrains announced that it would improve this dramatically in the next version 8.1, which is currently available in the JetBrains Early Access Program (EAP).

There is also WebStorm from JetBrains which also has support for TypeScript but this tool does not feel as natural to me as Visual Studio does (currently). I hope (and am pretty sure), JetBrains is working on this and there will soon be more good tools available for TypeScript development than just VisualStudio.

Switch the project

The actual switching of the project is pretty straight-forward. As a first step, change the file extension of every JavaScript file in your project to .ts. Then create a new TypeScript project in VisualStudio 2013 in which you then include all your brand-new TypeScript files. Since the TypeScript files later are compiled into similar *.js files, you don't have to change the script tags in your HTML files.

When you try to compile your new TypeScript project for the first time, it most certainly won't work. Chances are that you use some external libraries like jQuery or something similar. Since the compiler doesn't know those types, it assumes you have some errors in the code. To fix this, go to the GitHub project DefinitelyTyped and search for the typed interface for each of your libraries. There are also NuGet packages for each of those interfaces, if you prefer that approach. Then, you have to add a reference to those interfaces in at least one code file of your project. To include a file, simply add a line like the following at the top of the file:

///<reference path="typings/jquery/jquery.d.ts" />

This should fix most of the errors in your project. If the compiler still has something to complain about, chances are that you've already gotten the first benefit out of TypeScript and found an actual error in your JavaScript code. Once you've fixed all of the remaining errors, your project has been successfully ported to TypeScript. I also recommend enabling source maps for a better debugging experience. Likewise, I recommend not including the generated *.js and *.js.map files in source control, because these files are generated by the compiler and would otherwise cause unnecessary merge conflicts.

When your project successfully runs on TypeScript, you can start to add static types to your project. The big advantage to TypeScript (versus something like Dart) is that you don't have to do this for the whole project in one rush but you can start where you think it brings the most benefit and then add other parts only when you have the time to do so. As a long-term goal, I recommend adding as many types and modules to your project as possible since this makes developing for you and your team easier in the future.

Conclusion

In my opinion, TypeScript is ready to be used in bigger web applications. There might be some worries because TypeScript is not yet at version 1.0 and there are not so many existing bigger projects implemented in TypeScript. But this is not a real argument because if you decide you don't like TypeScript at any point in the future, you can simply compile the project into JavaScript and work from then on with the generated code instead.

The most important point for me is that TypeScript forces you and your co-developers to write correct code, which is not a given with plain JavaScript. If the compiler can't compile the code, you should not be able to push your code to the repository. This is also the case when you don't have a single class or module statement in the project. Of course, the compiler can't find all possible errors, but it can at least find the most obvious ones, which would otherwise have cost a lot of time in testing and debugging -- time that you could spend on something more important.

We do not have real long-term experience with TypeScript (nobody has that) but we decided at Encodo to not start another plain JavaScript project, as long as we have the choice (i.e. unless external factors force us to do so).

An introduction to query-mapping in the ORM

One of the most-used components of Quino is the ORM. An ORM is an Object-Relational Mapper, which accepts queries and returns data.

  • Applications formulate queries in Quino using application metadata
  • The ORM maps this query to the query language of the target database
  • The ORM transforms the results returned by the database to objects (the classes for which were also generated from application metadata).

This all sounds a bit abstract, so let's start with a concrete example. Let's say that we have millions of records in an employee database. We'd like to get some information about that data using our ORM. With millions of records, we have to be a bit careful about how that data is retrieved, but let's continue with concrete examples.

Attempt #1: Get your data and refine it locally

The following example returns the correct information, but does not satisfy performance or scalability requirements.1

var people = Session.GetList<Person>().Where(p => p.Company.Name == "IBM");

Assert.That(people.Count(), Is.GreaterThanEqual(140000));

What's wrong with the statement above? Since the call to Where occurs after the call to GetList<Person>(), the restriction cannot possibly have been passed on to the ORM.

The first line of code doesn't actually execute anything. It's in the call to Count() that the ORM and LINQ are called into action. Here's what happens, though:

  • For each row in the Person table, create a Person object
  • For each person object, create a corresponding Company object
  • Count all people where the Name of the person's company is equal to "IBM".

The code above benefits from almost no optimization, instantiating a tremendous number of objects in order to yield a scalar result. The only side-effect that can be considered an optimization is that most of the related Company objects will be retrieved from cache rather than from the database. So that's a plus.

Still, the garbage collector is going to be running pretty hot and the database is going to see far more queries than necessary.2
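
Roughly speaking, the first attempt boils down to the following sketch. The PersonRow class and getCompanyName delegate are hypothetical stand-ins for illustration, not Quino APIs: every row becomes an object, and every object triggers a (possibly cached) company lookup before anything is counted.

using System;
using System.Collections.Generic;
using System.Linq;

public class PersonRow
{
  public string LastName { get; set; }
  public int CompanyId { get; set; }
}

public static class NaiveEvaluationSketch
{
  public static int CountIbmEmployees(
    IEnumerable<PersonRow> allPeople,  // one object per row in the Person table
    Func<int, string> getCompanyName)  // hits the cache or the database for each person
  {
    return allPeople.Count(p => getCompanyName(p.CompanyId) == "IBM");
  }
}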

Attempt #2: Refine results on the database

Let's try again, using Quino's fluent querying API.3 The Quino ORM can map much of this API to SQL. Anything that is mapped to the database is not performed locally and is, by definition, more efficient.4

var people = Session.GetList<Person>();
people.Query.Join(Person.Relations.Company).WhereEqual(Company.Fields.Name, "IBM");

Assert.That(people.Count, Is.GreaterThanEqual(140000));

First, we get a list of people from the Session. As of the first line, we haven't actually gotten any data into memory yet -- we've only created a container for results of a certain type (Person in this case).

The default query for the list we created is to retrieve everything without restriction, as we saw in the first example. In this example, though, we restrict the Query to only the people that work for a company called "IBM". At this point, we still haven't called the database.

The final line is the first point at which data is requested, so that's where the database is called. We ask the list for the number of entries that match it and it returns an impressive number of employees.

At this point, things look pretty good. In older versions of Quino, this code would already have been sufficiently optimized. It results in a single call to the database that returns a single scalar value with everything calculated on the database. Perfect.

Attempt #3: Avoid creating objects at all

However, since v1.6.0 of Quino5, the call to the property IDataList.Count has automatically populated the list with all matching objects as well. We made this change because the following code pattern was pretty common:

var list = Session.GetList<Person>();
// Adjust query here
if (list.Count > 0)
{
  // do something with all of the objects here
}

That kind of code resulted in not one, but two calls to the database, which was killing performance, especially in high-latency environments.

That means, however, that the previous example is still going to pull 140,000 objects into memory, all just to count them and add them to a list that we're going to ignore. The garbage collector isn't a white-hot glowing mess anymore, but it's still throwing you a look of disapproval.

Since we know that we don't want the objects in this case, we can get the old behavior back by making the following adjustment.

var people = Session.GetList<Person>();
people.Query.Join(Person.Relations.Company).WhereEqual(Company.Fields.Name, "IBM");

Assert.That(Session.GetCount(people.Query), Is.GreaterThanEqual(140000));

It would be even clearer to just forget about creating a list at all and work only with the query instead.

var query = Session.GetQuery<Person>();
query.Join(Person.Relations.Company).WhereEqual(Company.Fields.Name, "IBM");

Assert.That(Session.GetCount(query), Is.GreaterThanEqual(140000));

Now that's a maximally efficient request for a count of people in Quino 1.10 as well.

Tune in next time for a look at what happens when a query can only be partially mapped to the database.


There are different strategies for retrieving associated data. Quino does not yet support retrieving anything other than root objects. That is, the associated Company object is not retrieved in the same query as the Person object.

In the example in question, the first indication that the ORM has that a Company is required is when the lambda retrieves them individually. Even if the original query had somehow indicated that the Company objects were also desired (e.g. using something like Include(Person.Relations.Company) as you would in EF), the most optimal mapping strategy is still not clear.

Should the mapper join the company table and retrieve that highly redundant data with each person? Or should it execute a single query for all companies and prime a cache with those? The right answer depends on the latency and bandwidth between the ORM and the database as well as myriad other conditions. When dealing with a lot of data, it's not hard to find examples where the default behavior of even a clever ORM isn't maximally efficient -- or even very efficient at all.

As we already noted, though, the example in question does everything in memory. If we reasonably assume that the people belong to a relatively small number of companies -- say qc -- then the millions of calls to retrieve companies associated with people will result in a lot of cache hits and generate "only" qc + 1 queries.


  1. I suppose it depends on what those requirements are, but if you think your application's performance requirements are so loose that it's OK to create millions of objects in memory just in order to count them, then you're probably not in the target audience for this article.

  2. Quino does not have LINQ to SQL support. I'm not even going to write "yet" at the end of that sentence because it's not at all clear that we're ever going to have it. Popular demand might convince us otherwise, but for now we're quite happy with our API (and soon-to-be-revealed query language QQL).

  3. That's an assumption I'm going to make for which counterexamples certainly exist, but none of which apply to the simple examples we'll address in this article.

  4. That was almost three years ago, in June of 2011.

TrueCrypt: yet another organically grown user interface

This article originally appeared on earthli News and has been cross-posted here.


I use TrueCrypt at work to encrypt/protect the volume where I store source code for various customers. It generally works pretty seamlessly and I don't even notice that I'm working on an encrypted volume.

The other day, Windows started complaining in the Action Center that my drive needed checking because errors had been discovered. At first, I thought that it was referring to my system drive -- which is not encrypted -- and I rebooted Windows to let it do its thing.

Windows was back up and running relatively quickly and I wondered whether it had even checked the drive at all. The little flag in the Action Center was gone, though, so all was well.

My TrueCrypt drive doesn't auto-mount, though. When I mounted it a while later to do some work, the little flag popped up immediately and I realized that Windows was complaining about that drive rather than my system drive.

Windows's advice to "reboot to fix the problem" wasn't going to work because there is no way that Windows can access the TrueCrypt-encrypted drive early in the BIOS/boot process. So I went to the properties for that volume and tried to scan it using the standard system tools.

No dice. Windows claims that it can't check that volume.

Weird.

If it can't even check that volume, then where does Windows get off telling me that the volume has errors? Had Windows noticed -- after several months -- that it was incapable of checking that drive and decided to nag me about it, even though it can't offer any solutions? As a longtime Windows user, this didn't strike me as especially unlikely.

I got advice from a more savvy TrueCrypt user that it offers its own file-system check-and-repair tools. So I fired up the main window for TrueCrypt, which appeared as shown below.

image

O-K. Now how do I check my volume? Volume Tools makes sense. Click.

image

Nope. My initial intuition was wrong. How about "Tools" in the menu? Click.

image

Strike two. Some commands are repeated from the "Volume Tools" popup and there are some other things, but "Check" and "Repair" aren't here either.

How about the "Volumes" menu? Click.

image

Strike three. Again, there are a few volume-related functions, but not the ones I'm looking for. Maybe my colleague was wrong when he said that there were check/repair tools? Maybe they were dropped from the TrueCrypt software? I'm losing faith here.

Wait, I have one more idea. How about if I right-click the volume in the list?

Click.

image

There it is.

Cue relief mixed with disappointment that this is yet another user interface that is wildly inconsistent and utterly unintuitive. It doesn't have to have a groundbreaking UI, but it could at least follow some basic guidelines. A few hours of work would suffice, I think.

I ran the check, which found no errors and repaired nothing. Windows has not complained about errors since. Very reassuring.

v1.10.0: .NET 4.5, MVC5, JSON remoting; improved NuGet and Windows services support

The summary below describes major new features, items of note and breaking changes. The full list of issues is also available for those with access to the Encodo issue tracker.

Highlights

Breaking changes

Utilities

  • PathTools.Canonicalize() has been replaced with FileTools.Canonicalize().
  • ICredentials.Password no longer exists; instead, use ICredentials.PasswordHash.
  • The ReversibleEncryptionProvider no longer exists; instead, use the RijndaelEncryptionProvider.
  • UrlParts.Protocol no longer exists; instead, use UrlParts.Scheme.
  • DirectoryServicesSettings.ServerUri no longer exists; instead, use DirectoryServicesSettings.HostName.

Configuration

  • The extension method CoreConfigurationTools.Integrate() no longer exists; instead, use ConfigurationTools.Add(). If you still need to clear existing packages, call StartupActions.Clear() and ShutdownActions.Clear() before adding the package.
  • MetaConfigurationTools.IntegrateMongoLoopbackDatabase(), MetaConfigurationTools.IntegrateLocalDatabase and MetaConfigurationTools.IntegrateLoopbackDatabase() no longer exist. Instead, the MongoLoopbackDatabase is the default provider registered for the ILoopbackDatabase interface. If the configuration of an application requires a loopback database, the version provided by the ServiceLocator will provide it. Override that registration to change the type of database to use.
  • The class Encodo.Quino.Tools.DataGenerator has been moved to the Encodo.Quino.Models.Tools.DataGenerators namespace.

Metadata/Builders/Modules

  • MetaBuilderBase.ApplyGenerators() no longer exists; instead, use MetaBuilderBase.Commit().
  • The attribute Encodo.Quino.Methods.AspectsBaseAttribute has been replaced with Encodo.Quino.Meta.Aspects.AspectProviderAttributeBase.
  • The interface Encodo.Quino.View.Aspects.IViewColorAspect and class ViewColorAspect have been replaced with the Encodo.Quino.View.Aspects.IViewAppearanceAspect and ViewAppearanceAspect, respectively.

Reporting

  • The class Encodo.Quino.Models.Reports.ReportsModuleGenerator has been moved to the Encodo.Quino.Models.Reports.Generators namespace.
  • The aspect Encodo.Quino.Models.Reports.ReportContextVisibleAspect has been moved to the Encodo.Quino.Models.Reports.Aspects namespace.

Security

  • The aspect Encodo.Quino.Models.Security.SecurityContextVisibleAspect has been moved to the Encodo.Quino.Models.Security.Aspects namespace.
  • The aspect Encodo.Quino.Models.Security.SecurityModuleSecurityClassAspect has been moved to the Encodo.Quino.Models.Security.Aspects namespace.
  • The class Encodo.Quino.Models.Security.StandardSecurityModuleAccessControl has been moved to the Encodo.Quino.Models.Security.Logic namespace.
  • The class Encodo.Quino.Security.LoginTokenBasedLoginAuthenticator has been replaced with the Encodo.Quino.Models.Security.Login.SecurityModuleTokenBasedAuthenticator.
  • The class Encodo.Quino.Security.UniqueIdentifierBasedAuthenticator has been replaced with the Encodo.Quino.Models.Security.Login.SecurityModuleUniqueIdentifierBasedAuthenticator.

Data providers

  • PostgreSqlMetaDatabase no longer has a constructor that accepts a string to name the database. The name is always taken from the ConnectionSettings.Name.
  • Database.RefreshState() and Database.DefaultConnectionSettings no longer exist; instead, use Database.RefreshDetails() and Database.ConnectionSettings, respectively.
  • Extension method MetaApplicationTools.GetQueryableDatabases(this IMetaApplication) has been replaced with extension method DataProviderTools.GetDatabaseHandlers(this IDataProvider).
  • Extension method MetaConfigurationTools.GetQueryableDatabases(this IMetaApplication) has been replaced with extension method DataProviderTools.GetDatabaseHandlers(this IDataProvider).
  • The IDataConnection.SyncRoot object no longer exists. Connections should never be shared between threads. Instead of explicit locking, applications should use an IPooledDataConnectionManager to maximize sharing of connections between threads.
  • The method IPersistable.SetStored() no longer exists; instead, use SetState().
  • IPersistentObjectContext no longer exists; instead, use IPersistentObjectContext.InitialObjectState.
  • The methods IMetaObjectHandler.GetObjectStored(), IMetaObjectHandler.GetObjectDelete(), IMetaObjectHandler.GetObjectChangedOrNew() and IMetaObjectHandler.SetObjectStored() no longer exist; instead, use some combination of IMetaObjectHandler.GetObjectState(), IMetaObjectHandler.GetObjectChanged() and IMetaObjectHandler.SetObjectState().
  • The NullCache no longer exists; instead, register an IDataCacheFactory with the ServiceLocator that returns a DataCache without any providers defined or has the property Enabled set to false.
  • The constant MetaActionNames.DetermineDatabaseState has been replaced with MetaActionNames.DetermineDataProviderConnectivity.

Remoting

  • The attribute Encodo.Quino.Methods.MetaMethodParameterInterceptorAspect has been replaced with Encodo.Quino.Methods.Aspects.MethodParameterInterceptorAspectBase.
  • The extension method MetaConfigurationTools.IntegrateRemoting() no longer exists. Remoting is now automatically configured to use the HttpRemoteClientFactory. To override this default behavior, use the ServiceLocator to register a different implementation for the IRemoteClientFactory.
  • MetaConfigurationTools.SetupForRemotingServer() no longer exists; instead, use MetaConfigurationTools.ConfigureAsRemotingServer().
  • The SetUpRemotingAction and MetaActionNames.SetUpRemoting constant no longer exist. Instead, the IRemoteClientFactory is configured in the ServiceLocator; override that registration to use a different implementation.
  • The constant RemoteServerDataHandler.DefaultName and method RemoteServerDataHandler.GetName() no longer exist; instead, use the name of the driver, which is derived from the name in the connection settings for that driver.
  • The interface Encodo.Quino.Remoting.Client.IRemoteMethodCaller has been moved to the Encodo.Quino.Remoting.Methods namespace.
  • The class RemoteClientFactoryAuto no longer exists. Remoting is now automatically configured to use the HttpRemoteClientFactory. To override this default behavior, use the ServiceLocator to register a different implementation for the IRemoteClientFactory.
  • HttpRemoteHost and RemoteHostHttp no longer exist; instead, use Encodo.Quino.Remoting.Http.MetaHttpRemoteHost.

Winform

  • WinformDxMetaConfigurationTools.IncludeSettings() has been replaced with the extension method WinformDxMetaConfigurationTools.IncludeWinformDxSettings().
  • WinformDxMetaConfigurationTools.ConfigureDefaults() has been replaced with the extension method WinformDxMetaConfigurationTools.IntegrateWinformDxPackages().
  • The method ControlToolsDX.FindProperty() no longer exists; instead, use the method ControlsToolsDX.TryGetProperty().

Apple Developer Videos

This article originally appeared on earthli News and has been cross-posted here.


It's well-known that Apple runs a walled garden. Apple makes its developers pay a yearly fee to get access to that garden. In fairness, though, they do provide some seriously nice-looking APIs for their iOS and OS X platforms. They've been doing this for years, as discussed in the post iOS 7 only is the only sane thing to do by Tal Bereznitskey. It argues that the new stuff in iOS 7 is compelling enough to make developers consider dropping support for all older operating systems -- and this for pragmatic reasons, such as having far less of your own code to support and correspondingly making the product cost less to support. It's best to check your actual target market, but Apple users tend to upgrade very quickly and reliably, so an iOS 7-only strategy is a good option.

Among the improvements that Apple has brought in the recent past are blocks (lambdas), GCD (asynchronous execution management) and ARC (mostly automated memory management), all introduced in iOS 4 and OS X 10.6 Snow Leopard. OS X 10.9 Mavericks and iOS 7 introduced a slew of common UI improvements (e.g. AutoLayout and HTML strings for labels).1

To find the videos listed below, browse to WWDC 2013 Development Videos.

For the web, Apple has improved developer tools and support in Safari considerably. There are two pretty good videos demonstrating a lot of these improvements:

#601: Getting to Know Web Inspector

This video shows a lot of improvements to Safari 7 debugging, in the form of a much more fluid and intuitive Web Inspector and the ability to save changes made there directly back to local sources.

#603: Getting the Most Out of Web Inspector

This video shows how to use the performance monitoring and analysis tools in Safari 7. The demonstration of how to optimize rendering and compositing layers was really interesting.

For non-web development, Apple has been steadily introducing libraries to provide support for common application tasks, the most interesting of which are related to UI APIs like Core Image, Core Video, Core Animation, etc.

Building on top of these, Apple presents the Sprite Kit -- for building 2D animated user interfaces and games -- and the Scene Kit -- for building 3D animated user interfaces and games. There are some good videos demonstrating these APIs as well.

#500: What's New in Scene Kit

An excellent presentation content-wise; the heavily accented English is sometimes a bit difficult to follow, but the material is top-notch.

#502: Introduction to Sprite Kit

This is a good introduction to nodes, textures, actions, physics and the pretty nice game engine that Apple delivers for 2D games.

#503: Designing Games with Sprite Kit

The first half is coverage of tools and assets management along with more advanced techniques. The second half is with game designers Graeme Devine2 and Spencer Lindsay, who designed the full-fledged online multi-player game Adventure to showcase the Sprite Kit.



  1. Disclaimer: I work with C# for Windows and HTML5 applications of all stripes. I don't actually work with any of these technologies that I listed above. The stuff looks fascinating, though and, as a framework developer, I'm impressed by the apparent cohesiveness of their APIs. Take recommendations with a grain of salt; it could very well be that things are a good deal less rosy when you actually have to work with these technologies.

  2. Formerly of Trilobyte and then id Software, now at Apple.

ELI5 answer to: How and why do computer programs crash?

This article originally appeared on earthli News and has been cross-posted here.


ELI5 is the "Explain LIke I'm Five" forum at Reddit. I recently answered the question "How and why do computer programs crash?" and thought the answer might be worth cross-posting (even though the post itself never gained any traction).

What is a program?

Programs comprise a limited set of instructions that tell them what they should do when they encounter certain inputs under certain conditions.

Who writes programs?

People write computer programs. Therefore, programs only do what those people can anticipate. Unanticipated situations result in crashes.

Anatomy of a crash

A "crash" is when a program is no longer able to process further input.

Here's roughly how it works:

  • The environment in which the program runs applies input events to the program.
  • The program checks for an instruction that matches its current state plus the new input.
  • If one is found, it applies that instruction to create a new, current state.
  • A program "crashes" when it receives an input in a given state that it was not designed to handle.

Different kinds of crashes

This can happen either:

  • When the program enters an infinite loop and is no longer capable of responding to new input (sometimes called "hanging").
  • When the program terminates itself as a result of not being able to handle the input ("hard crash" or "unhandled exception" or "segfault", etc.).

This does not mean that the program behaves unpredictably. The crash is perfectly predictable.

Avoiding crashes

Crashes can be avoided with one or more of the following:

  • Good design
  • Good programmers
  • Good libraries & programming languages
  • Good testers
  • Time
  • Money

Hope that helps.

v1.9.4: Bug fixes for 1.9.3 -- Reporting fixes and improvements

The summary below describes major new features, items of note and breaking changes. The full list of issues is also available for those with access to the Encodo issue tracker.

Highlights

  • QNO-4436: Generated objects can now hook events directly, instead of being forced to use aspects
  • QNO-4435: MetaBindingList no longer implicitly saves deleted objects
  • QNO-4434: Reporting: import for multiple reports works again (only the first report was imported)

Breaking changes

  • None