iTunes: another tale of woe in UX

I know that pointing out errors in iTunes is a bit passé but Apple keeps releasing new versions of this thing without addressing the fundamental problems that it has as a synchronization client.

The software has to synchronize with hardware from only one manufacturer -- the same one that makes iTunes. I'll leave off complaints about the horrific, very old and utterly non-scaling UI and just regale you with a tale of a recent interaction in which I restored my phone from a backup. In that sense, it's a "user experience".

In this tale, we will see that two of the main features of the synchronization part of the iTunes software -- backup and sync -- seem to have been fundamentally misunderstood.

Spoiler alert: it all works out in the end, but it's mind-boggling that this is the state of Apple's main software after almost 15 years.1

10 million new iPhones were sold over the weekend. Their owners will all have the pleasure of working with this software.

Restore from backup

Me: *attaches phone*
iTunes: Restore from backup?
Me: Sure!
iTunes: *shows almost-full iPhone* There you go!
Me: Thanks! That was fast!
Me: Wait...my phone is empty (no apps, no music, no contacts)
iTunes: *blushes* Yeah, about that...
Me: *reconnects phone*
iTunes: *shows nearly empty iPhone* What's the problem?
Me: Seriously, RESTORE FROM BACKUP *selects EXACT SAME backup as before*
iTunes: On it! Sir, yes sir!
Me: OK. Apps are back; contacts are back. No music, iTunes? What part of the word "backup" is causing difficulties here?
iTunes: *blushes (again)* Ummm, dunno what happened there
Me: Fine. It was randomly selected anyway.
Me: Select random music from this playlist
iTunes: Here ya go!
Me: Sync
iTunes: Nothing to do
Me: Sync
iTunes: Seriously, dude, there's nothing to do
Me: SYNC
iTunes: Done
Me: No music on phone. Do you understand the word "sync" differently as well? You know, like how you have trouble with the word "backup"?
iTunes: ...
Me: *notices that the size of the playlist exceeds the capacity of the iPhone*
Me: That's 17GB of music. For a 16GB iPhone.
iTunes: Yep! Awesome, right?
Me: Is that why you won't sync?
iTunes: Error messages are gauche. I don't use them. Everything is intuitive.
Me: Fine. Reserve space when selecting music: 1GB (don't need more extra space than that)
iTunes: NP! Here's 15GB of music.
Me: Wait, what? You're supposed to leave 1GB of the available space empty, not 1GB of the device's total size
iTunes: Math is hard. ... You do it.
Me: Fine. Reserve 4.2GB?
iTunes: Done.
Me: Now I have a 28GB playlist.
iTunes: *pats self on back*
Me: Reserve 3.2GB ... and "delete all existing" and "replace"? Now does it work?
iTunes: 9GB for you
Me: *tweaks settings 2 or 3 more times*
iTunes: 10.5GB
Me: Perfect. That was totally easy.
Me: Sync
iTunes: On it! *hums to self*
Me: Why are you only syncing 850 songs when the playlist has 1700 of them?
iTunes: *continues humming*
Me: Fine. *wanders away*
iTunes: Done
Me: Sync
iTunes: *syncs 250 more songs*
Me: What the hell?
iTunes: Done.
Me: Sync
iTunes: *syncs remaining songs*
Me: This is ridiculous
iTunes: Done



  1. It has been pointed out to me that I am using this software in a somewhat archaic way: to wit, I am not allowing iTunes to synchronize all of my data to the cloud first. Had I done that, it is claimed, I would have had fewer problems. I am, however, skeptical. I think that a company that can't even get local sync working properly after 15 years has no business getting any of my data.

An introduction to PowerShell

On Wednesday, August 27th, Tymon gave the rest of Encodo1 a great introduction to PowerShell. I've attached the presentation but a lot of the content was in demonstrations on the command-line.

  1. Download the presentation
  2. Unzip to a local folder
  3. Open index.html in a modern web browser (Chrome/Opera/Firefox work the best; IE has some rendering issues)

We learned a few very interesting things:

  • PowerShell is pre-installed on every modern Windows computer
  • You can open PowerShell sessions on other machines (almost like ssh!)
  • Windows developers should definitely learn how to use PowerShell.
  • Unix administrators who have to work on Windows machines should definitely learn how to use PowerShell. The underlying functionality of the operating system is much more discoverable via the command line, get-command and get-member than via the GUI (see the snippet after this list).
  • You should definitely install ConEmu
  • When running ConEmu, make sure that you start a PowerShell session rather than the default Cmd session.
  • If you're writing scripts, you should definitely install and use the ISE, which is an IDE for PowerShell scripts with debugging, code-completion, lists of available commands and much better copy/paste than the standard console.
  • The PowerShell Language Reference v3 is a very useful and compact reference for beginners and even for more advanced users
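
Here's a small taste of that discoverability and remoting in action. This is just a sketch: the cmdlets are standard, but the machine name is a placeholder.

# Discover commands by noun, e.g. everything that manages services
Get-Command -Noun Service

# Inspect the properties and methods of the objects a command returns
Get-Service | Get-Member

# Open an interactive session on another machine (almost like ssh);
# 'server01' is a placeholder and remoting must be enabled on the target
Enter-PSSession -ComputerName server01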

ConEmu Setup

The easiest way to integrate PowerShell into your workflow is to make it eminently accessible by installing ConEmu. ConEmu is a Windows command-line with a tabbed interface and offers a tremendous number of power-user settings and features. You can tweak it to your heart's content.

I set mine up to look like the one that Tymon had in the demonstrations (shown on my desktop to the right).

  1. Download ConEmu; I installed version 140814, the most recent version marked as "beta". There is no official release yet, but the software is quite mature.
  2. Install it and run it. I didn't allow the Win + Num support because I know that I'd never use it. YMMV and you can always change your choice from the preferences.
  3. Show the settings to customize your installation. There are a ton of settings, so I listed the ones I changed below.
  4. Set the window size to something a bit larger than the standard settings, especially if you have a larger monitor. I use 120 x 40.
  5. Choose the color scheme you want to use. I'm using the standard PowerShell colors but a lot of popular, darker schemes are also available (e.g. Monokai).
  6. Check out the hotkeys and set them up accordingly. The only key I plan on using is the one to show ConEmu. On the Swiss-German keyboard, it's Ctrl + ¨.
  7. The default console is not transparent, but there are those of us who enjoy a bit of transparency. Again, YMMV. I turned it on and left the slider at the default setting.
  8. And, finally, you can turn on Quake-style console mode to make it drop down from the top of your primary monitor instead of appearing in a free-floating window.


  1. and one former Encodo employee -- hey Stephan!

ASP.Net MVC Areas

After some initial skepticism regarding Areas, I now use them more and more when building new web applications with ASP.Net MVC. I've therefore decided to write up some of my thoughts and experiences in a blog post, so that others may draw some inspiration from them.

Before we start, here's a link to a general introduction to the area feature of MVC. Check out this article if you are not yet familiar with Areas.

Furthermore, this topic is based on MVC 5 and C# 4, but it may apply to older versions as well: Areas are not really a new thing, having first been introduced with MVC 2.

Introduction

Areas are intended to structure an MVC web application. Let's say you're building a line-of-business application. You may want to separate your modules on one hand from each other and on the other hand from central pieces of your web application like layouts, HTML helpers, etc.

Therefore, an area should be pretty much self-contained and should have as little interaction with other areas as possible; otherwise, the dependencies between the modules and their implementations grow, undercutting the separation and resulting in less maintainability.

How one draws the borders of each area depends on the needs of the application and the company. For example, modules can be separated by functionality or by developer/developer-team. In our line-of-business application, we may have an area for "Customer Management", one for "Order Entry", one for "Bookkeeping" and one for "E-Banking", as they have largely separate functionality and will likely be built by different developers or even teams.

The nice thing about areas -- besides modularization/organization of the app -- is that they are built into MVC and are therefore supported by most tools, like Visual Studio and R#, and by most libraries. On the negative side, support in the standard HtmlHelpers is lacking: one needs to specify the area as a routing-object property, using the name of the area as a string; there is no dedicated parameter for the area. To put that downside into perspective, though, this is only needed when generating a URL for an area other than the current one (see the example below).
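
For example, a link from a global view into the "Bookkeeping" area might look like the line below; the controller and action names are illustrative, and the final null argument selects the ActionLink overload that accepts a routeValues object:

@Html.ActionLink("Open bookings", "Index", "Booking", new { area = "Bookkeeping" }, null)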

Application modularization using Areas

From my point of view, modularization using areas has two major advantages: the first is the separation from other parts of the application; the second is the fact that area-related files are closer together in the Solution Explorer.

The separation -- apart from the usual separation advantages -- is helpful for reviews, as the reviewer can easily see what has changed within the area and what has changed at the core level of the application and therefore needs an even closer look. Another point in favor of the separation is that, for larger applications and teams, it results in fewer merge conflicts when pushing to the central code repository, as each team has its own playground. Last but not least, it's nice for me as an application developer because I know that when I make changes only to my area, I will not break other developers' work.

As someone who uses the Solution Explorer a lot, I like the fact that, with areas, I normally have to scroll and search less and have a good overview of the folder and file tree of the feature set I'm currently working on. This is because I move all area-related stuff into the area itself and leave the general libraries, layouts, base classes and helpers outside. The result is a less-cluttered folder tree for my areas, where I normally spend the majority of my time developing new features.

Tips and tricks

  • Move all files related to the area into the area itself including style-sheets (CSS, LESS, SASS) and client-side scripts (Javascript, TypeScript).
  • Configure bundles for your area or even for single pages within your area in the area itself. I do this in an enhanced area-registration file.
  • Enhance the default area registration to configure more aspects of your area.
  • When generating links in global views/layouts, add the area="" routing attribute so that the URL always points to the central stuff instead of being area-relative.

For example: if your application uses '@Html.ActionLink()' in your global _layout.cshtml, use:

@Html.ActionLink("Go to home", "Index", "Home", new { area = "" });

Area Registration / Start Up

And here is a sample of one of my application's area registrations:

public class BookkeepingAreaRegistration : AreaRegistration
{
  public override string AreaName
  {
    get { return "Bookkeeping"; }
  }

  public override void RegisterArea(AreaRegistrationContext context)
  {
    RegisterRoutes(context);
    RegisterBundles(BundleTable.Bundles);
  }

  private void RegisterRoutes(AreaRegistrationContext context)
  {
    if (context == null) { throw new ArgumentNullException("context"); }

    context.MapRoute(
      "Bookkeeping_default",
      "Bookkeeping/{controller}/{action}/{id}",
      new { controller = "Home", action = "Index", id = UrlParameter.Optional }
    );
  }

  private void RegisterBundles(BundleCollection bundles)
  {
    if (bundles == null) { throw new ArgumentNullException("bundles"); }

    // Bookings Bundles
    bundles.Add(
      new ScriptBundle("~/bundles/bookkeeping/booking")
        .Include("~/Areas/bookkeeping/Scripts/booking.js"));

    bundles.Add(
      new StyleBundle("~/bookkeeping/css/booking")
        .Include("~/Areas/bookkeeping/Content/booking.css"));

    // Account Overview Bundle
    ...
  }
}

As you can see in this example, I enhanced the area registration a little so that area-specific bundles are registered in the area registration too. I try to place all area-specific start-up code here.

Folder Structure

As I wrote in one of the tips, I strongly recommend storing all area-related files within the area's folder. This includes style-sheets, client-side scripts (JS, TypeScript), content, controller, views, view-models, view-model builders, HTML Helpers, etc. My goal is to make an area self-contained so all work can be done within the area's folder.

So the folder structure of my MVC Apps look something like this:

  • App_Start
  • Areas
    • Bookkeeping (one folder per area)
      • Content
      • Controllers
      • Core
      • Models
        • Builders
      • Scripts
      • Views
  • bin
  • Content
  • Controllers
  • HtmlHelpers
  • Models
    • Builders
  • Scripts
  • Views
As you can see, each area is something like a mini-application within the main application.

Add-ons using Areas deployed as NuGet packages

Besides structuring an entire MVC application, another nice use of areas is hosting "add-ons" in their own area.

For example, I recently wrote a web user interface for the database-schema migration of our metadata framework Quino. Instead of pushing it all into old-school web resources and deploying them in binary form, I built it as an area. I packed this area into a NuGet package (.nupkg) and published it to our local NuGet repository.

Applications that want to use the web-based schema-migration UI can just install that package using the NuGet UI or console (see below). The package adds the area with all the sources I wrote and, because the area registration is called by MVC automatically, it's ready to go without any manual action required. If I publish an update to the NuGet package, applications can get it as usual with NuGet. A nice side effect of this deployment is that the web application contains all the sources, so developers can have a look at them if they like; it doesn't just include some binary files. Another nice thing is that the add-on can define its own bundles, which get hosted the same way as the MVC app's own bundles. No fancy web resources or custom bundling and minification are needed.
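
Installation from the NuGet package-manager console is then a one-liner. The package id below is a made-up placeholder, not the actual name of the Quino package:

PM> Install-Package Encodo.Quino.SchemaMigration.Web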

To keep conflicts to a minimum with such add-on areas, the name should be unique and the area should be self-contained as written above.

Is Encodo a .NET/C# company?

Encodo has never been about maintaining or establishing a monoculture in either operating system, programming language or IDE. Pragmatism drives our technology and environment choices.1

Choosing technology

Each project we work on has different requirements and we choose the tools and technologies that fit best. A good fit involves considering:

  • What exists in the project already?
  • How much work needs to be done?
  • What future directions could the project take?
  • How maintainable is the solution/are the technologies?
  • How appropriate are various technologies?
  • What do our developers know how to do best?
  • What do the developers who will maintain the project know best? What are they capable of?
  • Is there framework code available that would help?

History: Delphi and Java

When we started out in 2005, we'd also spent years writing frameworks and highly generic software. This kind of software is not really a product per se, but more of a highly configurable programmable "engine", in which other programmers would write their actual end-user applications.

A properly trained team can turn around products very quickly using this kind of approach. It is not without its issues, though: maintaining a framework involves a lot of work, especially producing documentation, examples and providing support. While this is very interesting work, it can be hard to make lucrative, so we decided to move away from this business and focus on creating individual products.

Still, we stuck to the programming environment and platform that we knew best2 (and that our customers were requesting): we developed software mostly in Delphi for projects that we already had.3 For new products, we chose Java.

Why did we choose Java as our "next" language? Simply because Java satisfied a lot of the requirements outlined above. We were moving into web development and found Delphi's offerings lacking, both in the IDE as well as the library support. So we moved on to using Eclipse with Jetty. We evaluated several common Java development libraries and settled on Hibernate for our ORM and Tapestry for our web framework (necessitating HiveMind as our IOC).

History: .NET

A few years later, we were faced with the stark reality that developing web applications on Java (at the time) was fraught with issues, the worst of which was extremely slow development-turnaround times. We found ourselves incapable of properly estimating how long it would take to develop a project. We accept that this may have been our fault, of course, but the reality was that (1) we were having trouble making money programming Java and (2) we weren't having any fun anymore.

We'd landed a big project that would be deployed on both the web and Windows desktops, with an emphasis on the Windows desktop clients. At this point, we needed to reëvaluate: such a large project required a development language, runtime and IDE strong on the Windows Desktop. It also, in our view, necessitated a return to highly generic programming, which we'd moved away from for a while.

Our evaluation at the time included Groovy/Grails/Gtk, Python/Django/Gtk, Java/Swing/SWT/web frameworks, etc. We made the decision based on various factors (tools, platform suitability, etc.) and moved to .NET/C# for developing our metadata framework Quino, upon which we would build the array of applications required for this big project.

Today (2014)

We're still developing a lot of software in C# and .NET but also have a project that's built entirely in Python.4 We're not at all opposed to a suggestion by a customer that we add services to their Java framework on another project, because that's what's best there.

We've had some projects that run on a Linux/Mono stack on dedicated hardware. For that project, we made a build-server infrastructure in Linux that created the embedded OS with our software in it.

Most of our infrastructure runs on Linux with a few Windows VMs where needed to host or test software. We use PostgreSQL wherever we can and MS-SQL when the customer requires it.5

We've been doing a lot of web projects lately, which means the usual client-side mix of technology (JS/CSS/HTML). We use jQuery, but prefer Knockout for data-binding. We've evaluated the big libraries -- Angular, Backbone, Ember -- and found them to be too all-encompassing for our needs.

We've evaluated both Dart and TypeScript to see if those are useful yet. We've since moved to TypeScript for all of our projects but are still keeping an eye on Dart.

We use LESS instead of pure CSS. We've used SCSS as well, but prefer LESS. We're using Bootstrap in some projects but find it to be too restrictive, especially where we can use Flexbox for layout on modern browsers.

And, with the web comes development, support and testing for iOS and other mobile devices, which to some degree necessitates a move from pure .NET/C# and toward a mix.

We constantly reëvaluate our tools, as well. We use JetBrains WebStorm instead of Visual Studio for some tasks: it's better at finding problems in JavaScript and LESS. We also use PhpStorm for our corporate web site, including these blogs. We used the Java-based Jenkins build server for years but moved to JetBrains TeamCity because it better supports the kind of projects we need to build.

Conclusion

The description above is meant to illustrate flexibility, not chaos. We are quite structured and, again, pragmatic in our approach.

Given the choice, we tend to work in .NET because we have the most experience and supporting frameworks and software for it. We use .NET/C# because it's the best choice for many of the projects we have, but we are most definitely not a pure Microsoft development shop.

I hope that gives you a better idea of Encodo's attitude toward software development.



  1. If it's not obvious, we employ the good kind of pragmatism, where we choose the best tool for the job and the situation, not the bad kind, founded in laziness and unwillingness to think about complex problems. Just so we're clear.

  2. Remo had spent most of his career working with Borland's offerings, whereas I had started out with Borland's Object Pascal before moving on to the first version of Delphi, then Microsoft C++ and MFC for many years. After that came the original version of ASP.NET with the "old" VB/VBScript and, finally, back to Delphi at Opus Software.

  3. We were actually developing on Windows using Delphi and then deploying on Linux, doing final debugging with Borland's Linux IDE, Kylix. The software to be deployed on Linux was headless, which made it much easier to write cross-platform code.

  4. For better or worse: we inherited a Windows GUI in Python, which is not very practical. But I digress.

  5. Which is almost always, unfortunately.

Should you return `null` or an empty list?

I've seen a bunch of articles addressing this topic of late, so I've decided to weigh in.

The reason we frown on returning null from a method that returns a list or sequence is that we want to be able to use these sequences or lists freely in a functional manner.

It seems to me that the proponents of "no nulls" are generally those who have a functional language at their disposal and the antagonists do not. In functional languages, we almost always return sequences instead of lists or arrays.

In C# and other languages with functional features, we want to be able to do this:

var names = GetOpenItems()
  .Where(i => i.OverdueByTwoWeeks)
  .SelectMany(i => i.GetHistoricalAssignees()
    .Select(a => new { a.FirstName, a.LastName })
  );

foreach (var name in names)
{
  Console.WriteLine("{1}, {0}", name.FirstName, name.LastName);
}

If either GetHistoricalAssignees() or GetOpenItems() might return null, then we'd have to write the code above as follows instead:

var openItems = GetOpenItems();
if (openItems != null)
{
  var names = openItems
    .Where(i => i.OverdueByTwoWeeks)
    .SelectMany(i => (i.GetHistoricalAssignees() ?? Enumerable.Empty<Person>())
      .Select(a => new { a.FirstName, a.LastName })
    );

  foreach (var name in names)
  {
    Console.WriteLine("{1}, {0}", name.FirstName, name.LastName);
  }
}

This seems like exactly the kind of code we'd like to avoid writing, if possible. It's also the kind of code that calling clients are unlikely to write, which will lead to crashes with NullReferenceExceptions. As we'll see below, there are people that seem to think that's perfectly OK. I am not one of those people, but I digress.

The post Is it Really Better to 'Return an Empty List Instead of null'? / Part 1 by Christian Neumanns serves as a good example of an article that seems to provide information but mostly just muddies the waters. He introduces his topic with the following vagueness:

If we read through related questions in Stackoverflow and other forums, we can see that not all people agree. There are many different, sometimes truly opposite opinions. For example, the top rated answer in the Stackoverflow question Should functions return null or an empty object? (related to objects in general, not specifically to lists) tells us exactly the opposite:

Returning null is usually the best idea ...

The statement "we can see that not all people agree" is a tautology. I would split the people into groups of those whose opinions we should care about and everyone else. The statement "There are many different, sometimes truly opposite opinions" is also tautological, given the nature of the matter under discussion -- namely, a question that can only be answered as "yes" or "no". Such questions generally result in two camps with diametrically opposed opinions.

As the extremely long-winded pair of articles writes: sometimes you can't be sure of what an external API will return. That's correct. You have to protect against those with ugly, defensive code. But don't use that as an excuse to produce even more methods that may return null. Otherwise, you're just part of the problem.

The second article Is it Really Better to 'Return an Empty List Instead of null'? - Part 2 by Christian Neumanns includes many more examples.

I just don't know what to say about people that write things like "Bugs that cause NullPointerExceptions are usually easy to debug because the cause and effect are short-distanced in space (i.e. location in source code) and time." While this is kind of true, it's also even more true that you can't tell the difference between such an exception being caused by a savvy programmer who's using it to his advantage and a non-savvy programmer whose code is buggy as hell.

He has a ton of examples that try to distinguish between a method that returns an empty sequence being different from a method that cannot properly answer a question. This is a concern and a very real distinction to make, but the answer is not to return null to indicate nonsensical input. The answer is to throw an exception.

The method providing the sequence should not be making decisions about whether an empty sequence is acceptable for the caller. For sequences that cannot logically be empty, the method should throw an exception instead of returning null to indicate "something went wrong".

A caller may impart semantic meaning to an empty result and also throw an exception (as in his example with a cycling team that has no members). If the display of such a sequence on a web page is incorrect, then that is the fault of the caller, not of the provider of the sequence.

  • If data is not yet available, but should be, throw an exception.
  • If data is not available but the provider isn't qualified to decide, return an empty sequence.
  • If the caller receives an empty sequence and knows that it should not be empty, then it is responsible for indicating an error.
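
The following sketch illustrates these rules with hypothetical types and names (a Team with a Members list, echoing his cycling-team example):

public IEnumerable<Person> GetMembers(Team team)
{
  // Nonsensical input: throw an exception; don't return null.
  if (team == null) { throw new ArgumentNullException("team"); }

  // The provider isn't qualified to decide whether an empty team is an error.
  return team.Members ?? Enumerable.Empty<Person>();
}

// The caller imparts the semantic meaning to an empty result:
var members = GetMembers(cyclingTeam).ToList();
if (members.Count == 0)
{
  throw new InvalidOperationException("A cycling team must have at least one member.");
}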

The fact that there exists calling code that makes incorrect assumptions about return values is no reason to start returning values that will make calling code crash with a NullPointerException.

All of his examples are similar: he tries to make the pure-data call to retrieve a sequence of elements simultaneously validate some business logic. That's not a good idea. If this is really necessary, then the validity check should go in another method.

The example he cites for getting the amount from a list of PriceComponents is exactly why most aggregation functions in .NET throw an exception when the input sequence is empty. But that's a much better way of handling it -- with a precise exception -- than by returning null to try to force an exception somewhere in the calling code.
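
To see the difference in behavior (the variable names are illustrative):

var prices = Enumerable.Empty<decimal>();

var total = prices.Sum();       // 0m: the sum of an empty sequence is well-defined
var average = prices.Average(); // throws InvalidOperationException: "Sequence contains no elements"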

But the upshot for me is: I am not going to write code that, when I call it, forces me to litter other code with null-checks. That's just ridiculous.

Optimizing data access for high-latency networks: part IV

In the previous three articles, we sped up the opening of the calendar in Encodo's time-tracking product Punchclock. We showed how we reduced the number of queries from one very slow query per person to a single very fast query for all people at once.

Because we're talking about latency in these articles, we'd also like to clear away a few other queries that aren't related to time entries but are still slowing things down.

Lazy-loading unneeded values

In particular, the queries that "Load values" for person objects look quite suspicious. These queries don't take a lot of time to execute but they will definitely degrade performance in high-latency networks.1

[screenshot: data-provider statistics showing the "Load values" queries]

As we did before, we can click on one of these queries to show the query that's being loaded. In the screenshot below, we see that the person's picture is being loaded for each person in the drop-down list.

[screenshot: the query that loads the person's picture]

We're not showing pictures in the drop-down list, though, so this is an extravagant waste of time. On a LAN, we hardly notice how wasteful we are with queries; on a WAN, the product will feel...sluggish.

What is a load-group?

In order to understand the cause of these queries, you must first know that Quino allows a developer to put metadata properties into different load-groups. A load-group has the following behavior: If the value for a property in a load-group is requested on an object, the values for all of the properties in the load-group are retrieved with a single query and set on that object.

The default load-group of an object's metadata determines the values that are initially retrieved and applied to objects materialized by the ORM.

The metadata for a person puts the "picture" property of a person into a separate load-group so that the value is not loaded by default when objects of type person are loaded from the data driver. With this setting, business logic avoids downloading a lot of unwanted picture data by default.

Business logic that needs the pictures can either explicitly include the picture in the query or let the value be lazy-loaded by the ORM when it is accessed. The proper solution depends on the situation.
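
The following conceptual sketch shows the behavior; the names are invented for illustration and are not the actual Quino API:

// Loads only the values in the default load-group (no picture data).
var person = Session.Get<Person>(id);

var name = person.FirstName;  // already loaded; no additional query

// First access to a value in the "picture" load-group triggers a single
// query that loads all values in that load-group for this object.
var picture = person.Picture;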

Lazy-loaded property values

As before, we can check the stack trace of the query to figure out which application component is triggering the call. In this case, the culprit is the binding list that we are using to attach the list of people to the drop-down control.

The binding list binds the values for all of the properties in a metaclass (e.g. "person"), triggering a lazy load when it accesses the "picture" property. To avoid the lazy-load, we can create a wrapper of the default metadata for a person and remove/hide the property so that the binding list will no longer access it.

This is quite easy2, as shown in the code below.

var personMetaClass = new WrapMetaClass(Person.Metadata);
personMetaClass.Properties.Remove(Person.MetaProperties.Picture);
var query = new Query(personMetaClass);

With this simple fix, the binding list no longer knows about the picture property, doesn't retrieve values for that property and therefore no longer triggers any queries to lazily load the pictures from the database for each person object.

The screenshot of the statistics window below shows us that we were successful. We have two main queries: one for the list of people to show in the drop-down control and one for the time entries to show in the calendar.

[screenshot: statistics window with the two remaining queries]

Final version

For completeness, here's the code that Punchclock is using in the current version of Quino (1.11).

var personMetaClass = new WrapMetaClass(Person.Metadata);
personMetaClass.Properties.Remove(Person.MetaProperties.Picture);

var accessToolkit = new PostgreSqlMetaDatabase().AccessToolkit;

var query = new Query(personMetaClass);
query.CustomCommandText = new CustomCommandText();
query.CustomCommandText.SetSection(
  CommandTextSections.Where, 
  CommandTextAction.Replace,
  string.Format(
    "EXISTS (SELECT id FROM {0} WHERE {1} = {2})", 
    accessToolkit.GetName(TimeEntry.Metadata), 
    accessToolkit.GetField(TimeEntry.MetaProperties.PersonId), 
    accessToolkit.GetField(Person.MetaProperties.Id)
  )
);
var people = Session.GetList<Person>(query);

Future, improved version

Once we fix the bug in the WhereExists join type mentioned in the previous article and add the fluent methods for constructing wrappers mentioned in the footnote below, the code will be as follows:

var personMetaClass = 
  Person.Metadata.
  Wrap().
  RemoveProperty(Person.MetaProperties.Picture);

var people = 
  Session.GetList<Person>(
    new Query(personMetaClass).
    Join(Person.MetaRelations.TimeEntries, JoinType.WhereExists).
    Query
  );

This concludes our investigation into performance issues with Quino and Punchclock.




  1. You may have noticed that these calls to "load values" are technically lazy-loaded but don't seem to be marked as such in the screenshots. This was a bug in the statistics viewer that I discovered and addressed while writing this article.

  2. This is a rather old API and hasn't been touched with the "fluent" wand that we've applied to other parts of the Quino API. A nicer way of writing it would be to create extension methods called Wrap() and RemoveProperty that return the wrapper class, like so:

     var personMetaClass = 
       Person.Metadata.
       Wrap().
       RemoveProperty(Person.MetaProperties.Picture);

     var query = new Query(personMetaClass);

     But you'll have to wait for Quino 1.12 for that.

v1.12.0: Improvements to data-provider statistics and Windows 8.1 fixes

The summary below describes major new features, items of note and breaking changes. The full list of issues is also available for those with access to the Encodo issue tracker.

Highlights


  • Windows 8.1: fixed culture-handling for en-US and de-CH that is broken in Windows 8.1 (QNO-4534, QNO-4553)
  • Data-provider statistics: improved the WinForm-based statistics form (QNO-4231, QNO-4545, QNO-4546)
  • Data driver: bug fixes and improvements (QNO-4538, QNO-4554, QNO-4551)
  • Image-handling: the Encodo and Quino libraries now use the Windows Imaging Components instead of System.Drawing (QNO-4536)
  • Standard forms: updated the standard WinForm about window and splash screen to use Encodo web-site CI (QNO-4529)

Breaking changes

  • No known breaking changes.

Optimizing data access for high-latency networks: part III

In the previous articles, we partially addressed a performance problem in the calendar of Encodo's time-tracking product, Punchclock. While we managed to drastically reduce the amount of time taken by each query (>95% time saved), we were still executing more queries than strictly necessary.

The query that we're trying to optimize further is shown below.

var people =
  Session.GetList<Person>().
  Where(p => Session.GetCount(p.TimeEntries.Query) > 0).
  ToList();

This code executes one query to get all the people and then one query per person to get the number of time entries. Each of these queries by itself is very fast, but high latency adds up over so many round trips and makes the whole operation slow. To optimize further, there's really nothing for it but to reduce the number of queries being executed.

Let's think back to what we're actually trying to accomplish: We want to get all people who have at least one time entry. Can't we get the database to do that for us? Some join or existence check or something? How about the code below?

var people = 
  Session.GetList<Person>(
    Session.CreateQuery<Person>().
    Join(Person.MetaRelations.TimeEntries, JoinType.WhereExists).
    Query
  );

What's happening in the code above? We're still getting a list of people but, instead of manipulating the related TimeEntries for each person locally, we're joining the TimeEntries relation with the Quino query Join() method and changing the join type from the default All to the restrictive WhereExists. This sounds like exactly what we want to happen! There is no local evaluation or manipulation with Linq and, with luck, Quino will be able to map this to a single query on the database.

This is the best possible query: it's purely declarative and will be executed as efficiently as the back-end knows how.

There's just one problem: the WhereExists join type is broken in Quino 1.11.

Never fear, though! We can still get it to work, but we'll have to do a bit of work ourselves until the bug is fixed in Quino 1.12. The code below builds on lessons learned in the earlier article, Mixing your own SQL into Quino queries: part 2 of 2, and uses custom query text to create the restriction instead of letting Quino do it.

var accessToolkit = new PostgreSqlMetaDatabase().AccessToolkit;

var query = Session.CreateQuery<Person>();
query.CustomCommandText = new CustomCommandText();
query.CustomCommandText.SetSection(
  CommandTextSections.Where, 
  CommandTextAction.Replace,
  string.Format(
    "EXISTS (SELECT id FROM {0} WHERE {1} = {2})", 
    accessToolkit.GetName(TimeEntry.Metadata), 
    accessToolkit.GetField(TimeEntry.MetaProperties.PersonId), 
    accessToolkit.GetField(Person.MetaProperties.Id)
  )
);
var people = Session.GetList<Person>(query);

A look at the statistics is very encouraging:

[screenshot: statistics window]

We're down to one 29ms query for the people and an even quicker query for all the relevant time entries.1 We can see that our query text appears embedded in the SQL generated by Quino, just as we expected.

There are a few other security-related queries that execute very quickly and hardly need optimization.

We've come much further in this article and we're almost done. In the next article, we'll quickly clean up a few other queries that are showing up in the statistics and that have been nagging us since the beginning.



  1. The time-entry query is not representative because my testing data set didn't include time entries for the current day and I was too lazy to page around to older data.

Optimizing data access for high-latency networks: part II

In the previous article, we discussed a performance problem in the calendar of Encodo's time-tracking product, Punchclock.

Instead of guessing at the problem, we profiled the application using the database-statistics window available to all Quino applications.1 We quickly discovered that most of the slowdown stems from the relatively innocuous line of code shown below.

var people = 
  Session.GetList<Person>().
  Where(p => p.TimeEntries.Any()).
  ToList();

First things first: what does the code do?

Before doing anything else, we should establish what the code does. Logically, it retrieves a list of people in the database who have recorded at least one time entry.

The first question we should ask at this point is: does the application even need to do this? The answer in this case is 'yes'. The calendar includes a drop-down control that lets the user switch between the calendars for different users. This query returns the people to show in this drop-down control.

With the intent and usefulness of the code established, let's dissect how it is accomplishing the task.

  1. The Session.GetList<Person>() portion retrieves a list of all people from the database
  2. The Where() method is applied locally for each object in the list2
  3. For a given person, the list of TimeEntries is accessed
  4. This access triggers a lazy load of the list
  5. The Any() method is applied to the full list of time entries
  6. The ToList() method creates a list of all people who match the condition

Though the line of code looks innocuous enough, it causes a huge number of objects to be retrieved, materialized and retained in memory -- simply in order to check whether there is at least one object.

This is a real-world example of a performance problem that can happen to any developer. Instead of blaming the developer who wrote this line of code, it's more important to stay vigilant for performance problems and to have tools available to find them quickly and easily.

Stop creating all of the objects

The first solution I came up with3 was to stop creating objects that I didn't need. A good way of doing this -- one that was covered in Quino: partially-mapped queries -- is to use cursors instead of lists. Instead of using the generated list TimeEntries, the following code retrieves a cursor on that list's query and materializes at most one object for the sub-query.

var people = Session.GetList<Person>().Where(p =>
{
  using (var cursor = Session.CreateCursor<TimeEntry>(p.TimeEntries.Query))
  {
    return cursor.Any();
  }
}).ToList();

A check of the database statistics shows improvement, as shown below.

[screenshot: statistics window]

Just by using cursors, we've managed to reduce the execution time for each query by about 75%.4 Since all we're interested in finding out is whether there is at least one time entry for a person, we could also ask the database to count objects rather than to return them. That should be even faster. The following code is very similar to the example above but, instead of getting a cursor based on the TimeEntries query, it gets the count.

var people =
  Session.GetList<Person>().
  Where(p => Session.GetCount(p.TimeEntries.Query) > 0).
  ToList();

How did we do? A check of the database statistics shows even more improvement, as shown below.

[screenshot: statistics window]

We're now down to a few dozen milliseconds for all of our queries, so we're done, right? A 95% reduction in query-execution time should be enough.

Unfortunately, we're still executing just as many queries as before, even though we're taking far less time to execute them. This is better, but still not optimal. In high-latency situations, the user is still likely to experience a significant delay when opening the calendar since each query's execution time is increased by the latency of the connection. In a local network, the latency is negligible; on a WAN, we still have a problem.
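
To put a number on the remaining problem: assuming a round-trip latency of 50ms (an illustrative figure, not a measurement), the roughly 50 queries we're still executing add about 2.5 seconds of pure waiting before the calendar can open, regardless of how quickly each individual query executes.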

In the next article, we'll see if we can't reduce the number of queries being executed.



  1. This series of articles shows the statistics window as it appears in Winforms applications. The data-provider statistics are also available in Quino web applications as a Glimpse plug-in.

  2. It is important for users of the Microsoft Entity Framework (EF) to point out that Quino does not have a Linq-to-Sql mapper. That means that any Linq expressions like Where() are evaluated locally instead of being mapped to the database. There are various reasons for this but the main one is that we ended up preferring a strict boundary between the mappable query API and the local evaluation API. Anything formulated with the query API is guaranteed to be executed by the data provider (even if it must be evaluated locally) and anything formulated with Linq is naturally evaluated locally. In this way, the code is clear about what is sent to the server and what is evaluated locally. Quino only very, very rarely issues an "unmappable query" exception, unlike EF, which occasionally requires contortions until you've figured out which C# formulation of a particular expression can be mapped by EF.

  3. Well, the first answer I'm going to pretend I came up with. I actually thought of another answer first, but then quickly discovered that Quino wasn't mapping that little-used feature correctly. I added an issue to tackle that problem at a later date and started looking for workarounds. That fix will be covered in the next article in this series.

  4. Please ignore the fact that we also dropped 13 person queries. This was not due to any fix that we made but rather that I executed the test slightly differently...and was too lazy to make a new screenshot. The 13 queries are still being executed and we'll tackle those in the last article in this series.

Optimizing data access for high-latency networks: part I

Punchclock is Encodo's time-tracking and invoicing tool. It includes a calendar to show time entries (shown to the left). Since the very first versions, it hasn't opened very quickly. It was fast enough for most users, but those who worked with Punchclock over the WAN through our VPN have reported that it often takes many seconds to open the calendar. So we have a very useful tool that is not often used because of how slowly it opens.

That the calendar opens slowly in a local network and even more slowly in a WAN indicates that there is not only a problem with executing many queries but also with retrieving too much data.

Looking at query statistics

This seemed like a solvable problem, so I fired up Punchclock in debug mode to have a look at the query-statistics window.

To set up the view shown below, I did the following:

  1. Start your Quino application (Punchclock in this case) in debug mode (so that the statistics window is available)
  2. Open the statistics window from the debug menu
  3. Reset the statistics to clear out anything logged during startup
  4. Group the grid by "Meta Class"
  5. Open the calendar to see what kind of queries are generated
  6. Expand the "TimeEntry" group in the grid to show details for individual queries

[screenshot: query-statistics window, grouped by meta-class]

I marked a few things on the screenshot. It's somewhat suspicious that there are 13 queries for data of type "Person", but we'll get to that later. Much more suspicious is that there are 52 queries for time entries, which seems like quite a lot considering we're showing a calendar for a single user. We would instead expect to have a single query. More queries would be OK if there were good reasons for them, but I feel comfortable in deciding that 52 queries is definitely too many.

A closer look at the details for the time-entry queries shows very high durations for some of them, ranging from a tenth of a second to nearly a second. These queries are definitely the reason the calendar window takes so long to load.

Why are these queries taking so long?

If I select one of the time-entry queries and show the "Query Text" tab (see screenshot below), I can see that it retrieves all time entries for a single person, one after another. There are almost six years of historical data in our Punchclock database and some of our employees have been around for all of them.1 That's a lot of time entries to load.

[screenshot: query text for a time-entry query]

I can also select the "Stack Trace" tab to see where the call originated in my source code. This feature lets me pinpoint the program component that is causing these slow queries to be executed.

[screenshot: stack trace for a time-entry query]

As with any UI-code stack, you have to be somewhat familiar with how events are handled and dispatched. In this stack, we can see how a MouseUp command bubbled up to create a new form, then a new control and finally, to trigger a call to the data provider during that control's initialization. We don't have line numbers but we see that the call originates in a lambda defined in the DynamicSchedulerControl constructor.

The line of code that I pinpoint as the culprit is shown below.

var people = Session.GetList<Person>().Where(p => p.TimeEntries.Any()).ToList();

This looks like a nicely declarative way of getting data, but to the trained eye of a Quino developer, it's clear what the problem is.

In the next couple of articles, we'll take a closer look at what exactly the problem is and how we can improve the speed of this query. We'll also take a look at how we can improve the Quino query API to make it harder for code like the line above to cause performance problems.



  1. Encodo just turned nine years old, but we used a different time-entry system for the first couple of years. If you're interested in our time-entry software history, here it is:

     1. 06.2005 -- Start off with Open Office spreadsheets
     2. 04.2007 -- Switch to a home-grown, very lightweight time tracker based on an older framework we'd written (Punchclock 1.0)
     3. 08.2008 -- Start development of Quino
     4. 04.2010 -- Initial version of Punchclock 2.0; start dogfooding Quino