Questions to consider when designing APIs: Part I

A big part of an agile programmer's job is API design. In an agile project, the architecture is defined from on high only in broad strokes, leaving the fine details of component design up to the implementer. Even in projects that are specified in much more detail, implementers will still find themselves in situations where they have to design something.

This means that programmers in an agile team have to be capable of weighing the pros and cons of various approaches in order to avoid causing performance, scalability, maintenance or other problems as the API is used and evolves.

When designing an API, we consider some of the following aspects. This is not meant to be a comprehensive list, but it should get you thinking about the code you're about to write.

Reusing Code

  • Will this code be re-used inside the project?
  • How about outside of the project?
  • If the code might be used elsewhere, when will that need arise?
  • Do other projects already exist that could use this code?
  • Are there already other implementations that could be used?
  • If there are implementations, then are they insufficient?
  • Or perhaps not sufficiently encapsulated for reuse as written?
  • How likely is it that there will be other projects that need to do the same thing?
  • If another use is likely, when would the other project or projects need your API?

Organizing Code

  • Where should the API live in the code?
  • Is your API local to this class?
  • Is it private?
  • Protected?
  • Are you making it public in an extension method?
  • Or internal?
  • Which namespace should it belong to?
  • Which assembly?

Testing Code

  • What about testability?
  • How can the functionality be tested?

Even if you don't have time to write tests right now, you should still build your code so that it can be tested. You may not be the one writing the tests; prepare the code so that others can test it.

It's also possible that a future you will be writing the tests and will hate you for having made it so hard to automate testing.
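One common way to keep code testable is to depend on abstractions instead of concrete services, so that tests can substitute fakes. Here's a minimal sketch in C# (all of the type and member names are hypothetical):

```csharp
using System;

// Hypothetical abstraction: depending on an interface instead of the system
// clock makes the calculator testable without waiting for real time to pass.
public interface IClock
{
  DateTime Now { get; }
}

public class SystemClock : IClock
{
  public DateTime Now { get { return DateTime.Now; } }
}

public class AgeCalculator
{
  private readonly IClock _clock;

  // The dependency is injected, so a test can pass a fixed, fake clock.
  public AgeCalculator(IClock clock)
  {
    if (clock == null) { throw new ArgumentNullException("clock"); }

    _clock = clock;
  }

  public int YearsSince(DateTime birthDate)
  {
    return _clock.Now.Year - birthDate.Year;
  }
}
```

A test can now pass in a stub IClock that returns a fixed date, making the result deterministic; production code passes a SystemClock.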

Managing Dependencies

  • Is multi-threading a consideration?
  • Does the API manage state?
  • What kind of dependencies does the API have?
  • Which dependencies does it really need?
  • Is the API perhaps composed of several aspects?
  • With a core aspect that is extended by others?
  • Can core functionality be extracted to avoid making an API that is too specific?

Documenting Code

  • How do callers use the API?
  • What are the expected values?
  • Are these expectations enforced?
  • What is the error mechanism?
  • What guarantees does the API make?
  • Is the behavior of the API enforced?
  • Is it at least documented?
  • Are known drawbacks documented?
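In C#, many of these answers can be recorded directly on the API as XML documentation comments, so that expected values, guarantees and the error mechanism travel with the code. A minimal sketch (the method and its rules are hypothetical):

```csharp
using System;

public static class PriceCalculator
{
  /// <summary>Calculates the discounted price.</summary>
  /// <param name="price">The original price; must be non-negative.</param>
  /// <param name="discount">The discount as a fraction in [0, 1].</param>
  /// <returns>The discounted price; never negative.</returns>
  /// <exception cref="ArgumentOutOfRangeException">
  /// If <paramref name="price"/> or <paramref name="discount"/> is out of range.
  /// </exception>
  public static decimal Apply(decimal price, decimal discount)
  {
    // The documented expectations are also enforced, not just documented.
    if (price < 0) { throw new ArgumentOutOfRangeException("price"); }
    if (discount < 0 || discount > 1) { throw new ArgumentOutOfRangeException("discount"); }

    return price * (1 - discount);
  }
}
```

Tools like IntelliSense and documentation generators surface these comments, so callers see the expectations at the call site.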


Handling Errors

This is a very important consideration: it involves how your application handles situations outside of its design.

  • If you handle externally provided data, then you have to handle errant cases
  • Are you going to log errors?
  • In which format?
  • Is there a standard logging mechanism?
  • How are you going to handle and fix persistent errors?
  • Are you even going to handle weird cases?
  • Or are you going to fail early and fail often?
  • For which errors should your code even be responsible?
  • How does your chosen philosophy (and you should be enforcing contracts) fit with the other code in the project?

Fail fast; enforce contracts

While we're on the subject of error-handling, I want to emphasize that this is one of the most important parts of API design, regardless of which language or environment you use.1

Add preconditions for all method parameters; verify them as non-null and verify ranges. Do not catch all exceptions and log them or -- even worse -- ignore them. This is even more important in environments -- I'm looking at you client-side web code in general and JavaScript in particular -- where the established philosophy is to run anything and to never rap a programmer on the knuckles for having written really knuckle-headed code.
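For example, a precondition-checked method might look like this in C# (the method and its ranges are hypothetical):

```csharp
using System;

public static class ReportGenerator
{
  // Fail fast: reject bad input at the boundary rather than letting it
  // cause a subtle failure deeper in the call chain.
  public static string BuildTitle(string userName, int year)
  {
    if (userName == null) { throw new ArgumentNullException("userName"); }
    if (userName.Length == 0) { throw new ArgumentException("Must not be empty.", "userName"); }
    if (year < 1900 || year > 2100) { throw new ArgumentOutOfRangeException("year"); }

    return string.Format("{0} ({1})", userName, year);
  }
}
```

A caller that violates the contract fails immediately with a precise exception instead of producing garbage output later.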

You haven't tested the code, so you don't know what kind of errors you're going to get. If you ignore everything, then you'll also ignore assertions, contract violations, null-reference exceptions and so on. The code will never be improved if it never makes a noise. It will just stay silently crappy until someone notices a subtle logical error somewhere and must painstakingly track it down to your untested code.

You might say that production code shouldn't throw exceptions. This is true, but we're explicitly not talking about production code here. We're talking about code that has few to no tests and is acknowledged to be incomplete. If you move code like this into production, then it's better to crash than to silently corrupt data or impair the user experience.

A crash will get attention and the code may even be fixed or improved. If you write code that will crash on all but the "happy path" and it never crashes? That's great. Do not preemptively program defensively in fresh code. If you have established code that interfaces with other (possibly external) components and you sometimes get errors that you can't work around in any other way, then it's OK to catch and log those exceptions rather than propagating them. At least you tried.

In the next article, we'll take a look at how all of these questions and considerations can be reconciled with YAGNI. Spoiler alert: we think that they can.

  1. I recently read Erlang and code style by Jesper L. Andersen, which seems to have less to do with programming Erlang and much more to do with programming properly. The advice is couched in terms of Erlang, but the idea of strictly enforcing APIs between software components is neither new nor language-specific.

REST API Status codes (400 vs. 500)

In a project that we're working on, we're consuming REST APIs delivered by services built by another team working for the same customer. We had a discussion about what were appropriate error codes to return for various situations. The discussion boiled down to: should a service return a 500 error code or a 400 error code when a request cannot be processed?

I took a quick look at the documentation for a couple of the larger REST API providers and they are using the 500 code only for catastrophic failure and using the 400 code for anything related to query-input validation errors.

Microsoft Azure Common REST API Error Codes

Code 400:

  • The requested URI does not represent any resource on the server.
  • One of the request inputs is out of range.
  • One of the request inputs is not valid.
  • A required query parameter was not specified for this request.
  • One of the query parameters specified in the request URI is not supported.
  • An invalid value was specified for one of the query parameters in the request URI.

Code 500:

  • The server encountered an internal error. Please retry the request.
  • The operation could not be completed within the permitted time.
  • The server is currently unable to receive requests. Please retry your request.

Twitter Error Codes & Responses

Code 400:

The request was invalid or cannot be otherwise served. An accompanying error message will explain further.

Code 500:

Something is broken. Please post to the group so the Twitter team can investigate.

REST API Tutorial HTTP Status Codes

Code 400:

General error when fulfilling the request would cause an invalid state. Domain validation errors, missing data, etc. are some examples.

Code 500:

A generic error message, given when no more specific message is suitable. The general catch-all error when the server-side throws an exception. Use this only for errors that the consumer cannot address from their end -- never return this intentionally.

REST HTTP status codes

For input validation failure: 400 Bad Request + your optional description. This is suggested in the book "RESTful Web Services".
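Translated into code, the consensus above might look like the following ASP.NET Web API-style sketch. The controller, repository and Order types are hypothetical; the framework is assumed to convert unhandled exceptions into a 500 response.

```csharp
using System.Net;
using System.Web.Http;

public class Order
{
  public int Id { get; set; }
}

public interface IOrderRepository
{
  Order Find(int id);
}

public class OrdersController : ApiController
{
  private readonly IOrderRepository _repository;

  public OrdersController(IOrderRepository repository)
  {
    _repository = repository;
  }

  public Order Get(int id)
  {
    if (id <= 0)
    {
      // Invalid input from the caller: reject it with 400 Bad Request.
      throw new HttpResponseException(HttpStatusCode.BadRequest);
    }

    // An unexpected exception thrown here is a server-side failure; the
    // framework reports it as 500 Internal Server Error.
    return _repository.Find(id);
  }
}
```

The dividing line mirrors the providers quoted above: the caller's mistakes map to 400; everything the caller cannot fix maps to 500.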

Dealing with improper disposal in WCF clients

There's an old problem in generated WCF clients in which the Dispose() method calls Close() on the client irrespective of whether there was a fault. If there was a fault, then the method should call Abort() instead. Failure to do so causes another exception, which masks the original exception. Client code will see the subsequent fault rather than the original one. A developer running the code in debug mode will be misled as to what really happened.

You can see WCF Clients and the "Broken" IDisposable Implementation by David Barrett for a more in-depth analysis, but that's the gist of it.

This issue is still present in the ClientBase implementation in .NET 4.5.1. The linked article shows how you can add your own implementation of the Dispose() method in each generated client. An alternative is to use a generic adaptor if you don't feel like adding a custom dispose to every client you create.1

public class SafeClient<T> : IDisposable
  where T : ICommunicationObject, IDisposable
{
  public SafeClient(T client)
  {
    if (client == null) { throw new ArgumentNullException("client"); }

    Client = client;
  }

  public T Client { get; private set; }

  public void Dispose()
  {
    Dispose(true);
    GC.SuppressFinalize(this);
  }

  protected virtual void Dispose(bool disposing)
  {
    if (disposing && Client != null)
    {
      // A faulted client must be aborted; Close() would throw and mask
      // the original exception.
      if (Client.State == CommunicationState.Faulted) { Client.Abort(); }
      else { Client.Close(); }

      Client = default(T);
    }
  }
}

To use your WCF client safely, you wrap it in the class defined above, as shown below.

using (var safeClient = new SafeClient<SystemLoginServiceClient>(new SystemLoginServiceClient(...)))
{
  var client = safeClient.Client;
  // Work with "client"
}

If you can figure out how to initialize your clients without passing parameters to the constructor, you could slim it down by adding a "new" generic constraint to the parameter T in SafeClient and then using the SafeClient as follows:

using (var safeClient = new SafeClient<SystemLoginServiceClient>())
{
  var client = safeClient.Client;
  // Work with "client"
}

  1. The code included in this article is a sketch of a solution and has not been tested. It does compile, though.

OpenBSD takes on OpenSSL

Much of the Internet has been affected by the Heartbleed vulnerability in the widely used OpenSSL server-side software. The bug effectively allows anyone to collect random data from the memory of machines running the affected software, which was about 60% of encrypted sites worldwide. A massive cleanup effort ensued, but the vulnerability had been in the software for two years, so there's no telling how much information was stolen in the interim.

The OpenSSL software is used not only to encrypt HTTPS connections to web servers but also to generate the certificates that undergird those connections as well as many PKIs. Since data could have been stolen over a period of two years, it should be assumed that certificates, usernames and passwords have been stolen as well. Pessimism is the only way to be sure.1

In fact, any data that was loaded into memory on a server running a pre-Heartbleed version of the OpenSSL software is potentially compromised.

How to respond

We should all generate new certificates, ensuring that the root certificate from which we generate has also been re-generated and is clean. We should also choose new passwords for all affected sites. I use LastPass to manage my passwords, which makes it much easier to use long, complicated and most importantly unique passwords. If you're not already using a password manager, now would be a good time to start.

And this goes especially for those who tend to reuse their password on different sites. If one of those sites is cracked, then the hacker can use that same username/password combination on other popular sites and get into your stuff everywhere instead of just on the compromised site.

Forking OpenSSL

Though there are those who are blaming open-source software, we should instead blame ourselves for using software of unknown quality to run our most trusted connections. That the software was designed and built without the required quality controls is a different issue. People are going to write bad software. If you use their free software and it ends up not being as secure as advertised, you have to take at least some of the blame on yourself.

Instead, the security experts and professionals who've written so many articles and done so many reviews over the years touting the benefits of OpenSSL should take more of the blame. They are the ones who misused their reputations by touting poorly written software to which they had source-code access, but were too lazy to perform a serious evaluation.

An advantage of open-source software is that we can at least pinpoint exactly when a bug appeared. Another is that the entire codebase is available to all, so others can jump in and try to fix it. Sure, it would have been nice if the expert security programmers of the world had jumped in earlier, but better late than never.

The site OpenSSL Rampage follows the efforts of the OpenBSD team to refactor and modernize the OpenSSL codebase. They are documenting their progress live on Tumblr, which collects commit messages, tweets, blog posts and official security warnings that result from their investigations and fixes.

They are working on a fork and are making radical changes, so it's unlikely that the changes will be taken up by the official OpenSSL project, but perhaps a new TLS/SSL tool will be available soon.2

VMS and custom memory managers

The messages tell tales of support for extinct operating systems like VMS, whose continued support makes the code for current OSs much more complicated. This complexity, in turn, hides further misuses of malloc as well as misuses of custom buffer-allocation schemes that the OpenSSL team came up with because "malloc is too slow". Sometimes memory is freed twice for good measure.

The article Today's bugs have BRANDS? Be still my bleeding heart [logo] by Verity Stob has a (partially) humorous take on the most recent software errors that have reared their ugly heads. As also mentioned in that article, the cartoon Heartbleed Explanation by Randall Munroe illustrates the issue well, even for non-technical people.

Lots o' cruft

This all sounds horrible and one wonders how the software runs at all. Don't worry: the code base contains a tremendous amount of cruft that is never used. It is compiled and still included, but it acts as a cozy nest of code that is wrapped around the actual code.

There are vast swaths of script files that haven't been used for years and that can build versions of the software under compilers and with options that haven't been seen on this planet since before... well, since before Tumblr or Facebook. For example, there's no need to retain a forest of macros at the top of many header files for the Metrowerks compiler for PowerPC on OS9. No reason at all.

There are also incompatibly licensed components in regular use, as well as some associated with components that no longer seem to be used.

Modes and options and platforms: oh my!

There are compiler options for increasing resiliency that seem to work. Turning these off, however, yields an application that crashes immediately. There are clearly no tests for any of these modes. OpenSSL sounds like a classically grown system that has little in the way of code conventions, patterns or architecture. There seems to be no one who regularly cleans out and decides which code to keep and which to make obsolete. And, even when code is deemed obsolete, it remains in the code base over a decade later.

Security professionals wrote this?

This is to say nothing of how their encryption algorithm actually works. There are tales on that web site of the OpenSSL developers desperately having tried to keep entropy high by mixing in the current time every once in a while. Or even mixing in bits of the private key for good measure.

A lack of discipline (or skill)

The current OpenSSL codebase seems to be a minefield for security reviewers or for reviewers of any kind. A codebase like this is also terrible for new developers, the onboarding of which you want to encourage in such a widely used, distributed, open-source project.

Instead, the current state of the code says: don't touch, you don't know what to change or remove because clearly the main developers don't know either. The last person who knew may have died or left the project years ago.

It's clear that the code has not been reviewed in the way that it should be. Code on this level and for this purpose needs good developers/reviewers who constantly consider most of the following points during each review:

  • Correctness (does the code do what it should? Does it do it in an acceptable way?)
  • Patterns (does this code invent its own way of doing things?)
  • Architecture (is this feature in the right module?)
  • Security implications
  • Performance
  • Memory leaks/management (as long as they're still using C, which they honestly shouldn't be)
  • Supported modes/options/platforms
  • Third-party library usage/licensing
  • Automated tests (are there tests for the new feature or fix? Do existing tests still run?)
  • Comments/documentation (is the new code clear in what it does? Any tips for those who come after?)
  • Syntax (using braces can be important)

Living with OpenSSL (for now)

It sounds like it is high time that someone did what the BSD team is doing. A spring cleaning can be very healthy for software, especially once it's reached a certain age. That goes double for software that was blindly used by 60% of the encrypted web sites in the world.

It's wonderful that OpenSSL exists. Without it, we wouldn't be as encrypted as we are. But the apparent state of this code bespeaks a failure of management at all levels. The developers of software this important must be of higher quality. They must be the best of the best, not just anyone who read about encryption on Wikipedia and "wants to help". Wanting to help is nice, but you have to know what you're doing.

OpenSSL will be with us for a while. It may be crap code and it may lack automated tests, but it has been manually (and possibly regression-) tested and used a lot, so it has earned a certain badge of reliability and predictability. The state of the code means only that future changes are riskier, not necessarily that the current software is not usable.

Knowing that the code is badly written should make everyone suspicious of patches -- which we now know are likely to break something in that vast pile of C code -- but not suspicious of the officially supported versions from Debian and Ubuntu (for example). Even if the developer team of OpenSSL doesn't test a lot (or not automatically for all options, at any rate -- they may just be testing the "happy path"), the major Linux distros do. So there's that comfort, at least.

  1. As Ripley so famously put it in the movie Aliens: "I say we take off and nuke the entire site from orbit. It's the only way to be sure."

  2. It will, however, be quite a while before the new fork is as battle-tested as OpenSSL.

The Internet of Things

This article originally appeared on earthli News and has been cross-posted here.

The article Smart TVs, smart fridges, smart washing machines? Disaster waiting to happen by Peter Bright discusses the potential downsides to having a smart home1: namely our inability to create smart software for our mediocre hardware. And once that software is written and spread throughout dozens of devices in your home, it will function poorly and quickly be taken over by hackers because "[h]ardware companies are generally bad at writing software -- and bad at updating it."

And, should hackers fail to crack your stove's firmware immediately, for the year or two where the software works as designed, it will, in all likelihood, "[...] be funneling sweet, sweet, consumer analytics back to the mothership as fast as it can", as one commentator on that article put it.

Manufacturers aren't in business to make you happy

Making you happy isn't even incidental to their business model now that monopolies have ensured that there is nowhere you can turn to get better service. Citing from the article above:

These devices will inevitably be abandoned by their manufacturers, and the result will be lots of "smart" functionality -- fridges that know what we buy and when, TVs that know what shows we watch -- all connected to the Internet 24/7, all completely insecure.

Manufacturers almost exclusively design hardware with extremely short lifetimes, hewing to planned obsolescence. While this is a great capitalist strategy, it is morally repugnant to waste so many resources and so much energy to create gadgets that will break in order to force consumers to buy new gadgets. Let's put that awful aspect of our civilization to the side for a moment and focus on other consequences.

These same manufacturers are going to take this bulletproof strategy to appliances that have historically had much longer lifetimes. They will also presumably take their extremely lackluster reputation for updating firmware and software into this market. The software will be terrible to begin with, it will be full of security holes and it will receive patches for only about 10% of its expected lifetime. What could possibly go wrong?

Either the consumer will throw away a perfectly good appliance in order to upgrade the software or the appliance will be an upstanding citizen of one, if not several, botnets. Or perhaps other, more malicious services will be funneling information about you and your household to others, all unbeknownst to you.

People are the problem2

These are not scare tactics; this is an inevitability. People have proven themselves to be wildly incapable of comprehending the devices that they already have. They have no idea how they work and have only vague ideas of what they're giving up. It might as well be magic to them. To paraphrase the classic Arthur C. Clarke citation: "Any sufficiently advanced technology is indistinguishable from magic" especially for a sufficiently technically oblivious audience.

Start up a new smart phone and try to create your account on it. Try to do so without accidentally giving away the keys to your data-kingdom. It is extremely difficult to do, even if you are technically savvy and vigilant.

Most people just accept any conditions, store everything everywhere, use the same terribly insecure password for everything and don't bother locking down privacy options, even if available. Their data is spread around the world in dozens of places and they've implicitly given away perpetual licenses to anything they've ever written or shot or created to all of the big providers.

They are sheep ready to be sheared by not only the companies they thought they could trust, but also by national spy agencies and technically adept hackers who've created an entire underground economy fueled by what can only be called deliberate ignorance, shocking gullibility and a surfeit of free time and disposable income.

The Internet of Things

The Internet of Things is a catch-phrase that describes a utopia where everything is connected to everything else via the Internet and a whole universe of new possibilities explode out of this singularity that will benefit not only mankind but the underlying effervescent glory that forms the strata of existence.

The article Ars readers react to Smart fridges and the sketchy geography of normals follows up the previous article and includes the following comment:

What I do want, is the ability to check what's in my fridge from my phone while I'm out in the grocery store to see if there's something I need.

That sounds so intriguing, doesn't it? How great would that be? The one time a year that you actually can't remember what you put in your refrigerator. On the other hand, how the hell can your fridge tell what you have? What are the odds that this technology will even come close to functioning as advertised? Would it not be more reasonable for your grocery purchases to go to a database and for you to tell that database when you've actually used or thrown out ingredients? Even if your fridge was smart, you'd have to wire up your dry-goods pantry in a similar way and commit to only storing food in areas that are under surveillance.

The commentator went on to write,

I do agree that security is a huge, huge issue, and one that needs to be addressed. But I really don't see how resisting the "Internet of things" is the longterm solution. The way technology seems to be trending, this is an inevitability, not a could be.

Resisting the "Internet of things" is not being proposed as the long-term solution. It is being proposed as a short- to medium-term solution because the purveyors of this shining vision of nirvana have proven themselves time and again to be utterly incapable of actually delivering the panaceas that they promise in a stream of consumption-inducing fraud. Instead, they consistently end up lining their own pockets while we all fritter away even more precious waking time ministering to the retarded digital children that they've birthed from their poisoned loins and foisted upon us.

Stay out of it, for now

Hand-waving away the almost-certain security catastrophe as if it can be easily solved is extremely disingenuous. This is not a world that anyone really wants to take part in until the security problems are solved. You do not want to be an early adopter here. And you most especially do not want to do so by buying the cheapest, most-discounted model available as people are also wont to do. Stay out of the fight until the later rounds: remove the SIM card, shut off Internet connectivity where it's not needed and shut down Bluetooth.

The best-case scenario is that early adopters will have their time wasted. Early rounds of software promise to be a tremendous time-suck for all involved. Managing a further herd of purportedly more efficient and optimized devices is a sucker's game. The more you buy, the less likely you are to be in charge of what you do with your free time.

As it stands, we already fight with our phones, begging them to connect to inadequate data networks and balky WLANs. We spend inordinate amounts of time trying to trick their garbage software into actually performing any of its core services. Failing that -- which is an inevitability -- we simply live with the mediocrity, wasting our time every day babysitting gadgets and devices and software that are supposed to be working for us.

Instead, it is we who end up performing the same monotonous and repetitive tasks dozens of times every day because the manufacturers have -- usually in a purely self-interested and quarterly revenue-report driven rush to market -- utterly failed to test the basic functions of their devices. Subsequent software updates do little to improve this situation, generally avoiding fixes for glaring issues in favor of adding social-network integration or some other marketing-driven hogwash.

Avoiding this almost-certain clusterf*#k does not make you a Luddite. It makes you a realist, an astute observer of reality. There has never been a time in history when so much content and games and media has been at the fingertips of anyone with a certain standard of living. At the same time, though, we seem to be so bedazzled by this wonder that we ignore the glaring and wholly incongruous dreadfulness of the tools that we are offered to navigate, watch and curate it.

If you just use what you're given without complaint, then things will never get better. Stay on the sidelines and demand better -- and be prepared to wait for it.

  1. Or a smart car or anything smart that works perfectly well without being smart.

  2. To be clear: the author is not necessarily excluding himself here. It's not easy to turn on, tune in and drop out, especially when your career is firmly in the tech world. It's also not easy to be absolutely aware of what you're giving up as you make use of the myriad of interlinked services offered to you every day.

Mixing your own SQL into Quino queries: part 2 of 2

In the first installment, we covered the basics of mixing custom SQL with ORM-generated queries. We also took a look at a solution that uses direct ADO database access to perform arbitrarily complex queries.

In this installment, we will see more elegant techniques that make use of the CustomCommandText property of Quino queries. We'll approach the desired solution in steps, proceeding from attempt #1 through attempt #5.

tl;dr: Skip to attempt #5 to see the final result without learning why it's correct.

Attempt #1: Replacing the entire query with custom SQL

An application can assign the CustomCommandText property of any Quino query to override some of the generated SQL. In the example below, we override all of the text, so that Quino doesn't generate any SQL at all. Instead, Quino is only responsible for sending the request to the database and materializing the objects based on the results.

public void TestExecuteCustomCommand()
{
  var people = Session.GetList<Person>();

  people.Query.CustomCommandText = new CustomCommandText
  {
    // The explicit field list expected by the object-materializer is
    // elided from this excerpt.
    Text = @"SELECT ALL ... FROM punchclock__person WHERE lastname = 'Rogers'"
  };

  Assert.That(people.Count, Is.EqualTo(9));
}

This example solves two of the three problems outlined above:

  • It uses only a single query.
  • It will work with a remote application server (although it makes assumptions about the kind of SQL expected by the backing database on that server).
  • But it is even more fragile than the previous example as far as hard-coded SQL goes. You'll note that the fields expected by the object-materializer have to be explicitly included in the correct order.

Let's see if we can address the third issue by getting Quino to format the SELECT clause for us.

Attempt #2: Generating the SELECT clause

The following example uses the AccessToolkit of the IQueryableDatabase to format the list of properties obtained from the metadata for a Person. The application no longer makes assumptions about which properties are included in the select statement, what order they should be in or how to format them for the SQL expected by the database.

public virtual void TestExecuteCustomCommandWithStandardSelect()
{
  var people = Session.GetList<Person>();

  var accessToolkit = DefaultDatabase.AccessToolkit;
  var properties = Person.Metadata.DefaultLoadGroup.Properties;
  var fields = properties.Select(accessToolkit.GetField);

  people.Query.CustomCommandText = new CustomCommandText
  {
    Text = string.Format(
      @"SELECT ALL {0} FROM punchclock__person WHERE lastname = 'Rogers'",
      string.Join(", ", fields)
    )
  };

  Assert.That(people.Count, Is.EqualTo(9));
}

This example fixes the problem with the previous one but introduces a new problem: it no longer works with a remote application because it assumes that the client-side driver is a database with an AccessToolkit. The next example addresses this problem.

Attempt #3: Using a hard-coded AccessToolkit

The version below uses a hard-coded AccessToolkit so that it doesn't rely on the external data driver being a direct ADO database. It still makes an assumption about the database on the server but that is usually quite acceptable because the backing database for most applications rarely changes.1

public void TestCustomCommandWithPostgreSqlSelect()
{
  var people = Session.GetList<Person>();

  var accessToolkit = new PostgreSqlMetaDatabase().AccessToolkit;
  var properties = Person.Metadata.DefaultLoadGroup.Properties;
  var fields = properties.Select(accessToolkit.GetField);

  people.Query.CustomCommandText = new CustomCommandText
  {
    Text = string.Format(
      @"SELECT ALL {0} FROM punchclock__person WHERE lastname = 'Rogers'",
      string.Join(", ", fields)
    )
  };

  Assert.That(people.Count, Is.EqualTo(9));
}

We now have a version that satisfies all three conditions to a large degree. The application uses only a single query and the query works with both local databases and remoting servers. It still makes some assumptions about database-schema names (e.g. "punchclock__person" and "lastname"). Let's see if we can clean up some of these as well.

Attempt #4: Replacing only the where clause

Instead of replacing the entire query text, an application can replace individual sections of the query, letting Quino fill in the rest of the query with its standard generated SQL. An application can append or prepend text to the generated SQL or replace it entirely. Because the condition for our query is so simple, the example below replaces the entire WHERE clause instead of adding to it.

public void TestCustomWhereExecution()
{
  var people = Session.GetList<Person>();

  people.Query.CustomCommandText = new CustomCommandText();
  people.Query.CustomCommandText.SetSection(
    CommandTextSections.Where,
    CommandTextAction.Replace,
    "lastname = 'Rogers'"
  );

  Assert.That(people.Count, Is.EqualTo(9));
}

That's much nicer -- still not perfect, but nice. The only remaining quibble is that the identifier lastname is still hard-coded. If the model changes in a way where that property is renamed or removed, this code will continue to compile but will fail at run-time. This is a not insignificant problem if your application ends up using these kinds of queries throughout its business logic.

Attempt #5: Replacing the where clause with generated field names

In order to fix this query and have a completely generic query that fails to compile should anything at all change in the model, we can mix in the technique that we used in attempts #2 and #3: using the AccessToolkit to format fields for SQL. To make the query 100% statically checked, we'll also use the generated metadata -- LastName -- to indicate which property we want to format as SQL.

public void TestCustomWhereExecution()
{
  var people = Session.GetList<Person>();

  var accessToolkit = new PostgreSqlMetaDatabase().AccessToolkit;
  var lastNameField = accessToolkit.GetField(Person.MetaProperties.LastName);

  people.Query.CustomCommandText = new CustomCommandText();
  people.Query.CustomCommandText.SetSection(
    CommandTextSections.Where,
    CommandTextAction.Replace,
    string.Format("{0} = 'Rogers'", lastNameField)
  );

  Assert.That(people.Count, Is.EqualTo(9));
}

The query above satisfies all of the conditions we outlined. It's clear that the condition is quite simple and that real-world business logic will likely be much more complex. For those situations, the best approach is to fall back to the direct ADO approach, mixed with Quino facilities like the AccessToolkit as much as possible, to create a fully customized SQL text.

Many thanks to Urs for proofreading and suggestions on overall structure.

  1. If an application needs to be totally database-agnostic, then it will need to do some extra legwork that we won't cover in this post.

Mixing your own SQL into Quino queries: part 1 of 2

The Quino ORM1 manages all CrUD -- Create, Update, Delete -- operations for your application. This basic behavior is generally more than enough for standard user interfaces. When a user works with a single object in a window and saves it, there really isn't that much to optimize.

Modeled methods

A more complex editing process may include several objects at once and perhaps trigger events that create additional auditing objects. Even in these cases, there are still only a handful of save operations to execute. To keep the architecture clean, an application is encouraged to model these higher-level operations with methods in the metadata (modeled methods).

The advantage to using modeled methods is that they can be executed in an application server as well as locally in the client. When an application uses a remote application server rather than a direct connection to a database, modeled methods are executed in the service layer and therefore have much less latency to the database.

When Quino's query language isn't enough

If an application needs even more optimization, then it may be necessary to write custom SQL -- or even to use stored procedures to move the query into the database. Mixing SQL with an ORM can be a tricky business. It's even more of a challenge with an ORM like that in Quino, which generates the database schema and shields the user from tables, fields and SQL syntax almost entirely.

What are the potential pitfalls when using custom query text (e.g. SQL) with Quino?

  • Schema element names: An application needs to figure out the names of database objects like tables and columns. It's best not to hard-code them, so that when the model changes, the custom code is automatically updated.

    • If the query is in a stored procedure, then the database may ensure that the code is updated or at least checked when the schema changes.2
    • If the query is in application code, then care can be taken to keep that query in sync with the model.
  • Materialization: In particular, the selected fields in a projection must match the expectations of the ORM exactly so that it can materialize the objects properly. We'll see how to ensure this in examples below.

There are two approaches to executing custom code:

  • ADO: Get a reference to the underlying ADO infrastructure to execute queries directly without using Quino at all. With this approach, Quino can still help an application retrieve properly configured connections and commands.
  • CustomCommandText: An application commonly adds restrictions and sorts to the IQuery object using expressions, but can also add text directly to enhance or replace sections of the generated query.

All of the examples below are taken directly from the Quino test suite. Some variables -- like DefaultDatabase -- are provided by the Quino base testing classes but their purpose, types and implementation should be relatively obvious.

Using ADO directly

You can use the AdoDataConnectionTools to get the underlying ADO connection for a given Session so that any commands you execute are guaranteed to be executed in the same transactions as are already active on that session. If you use these tools, your ADO code will also automatically use the same connection parameters as the rest of your application without having to use hard-coded connection strings.

The first example is a test from the Quino framework that demonstrates how easy it is to combine results returned from another method into a standard Quino query.

public virtual void TestExecuteAdoDirectly()
{
  var ids = GetIds().ToList();
  var people = Session.GetList<Person>();

  people.Query.Where(Person.MetaProperties.Id, ExpressionOperator.In, ids);

  Assert.That(people.Count, Is.EqualTo(9));
}

The ADO-access code is hidden inside the call to GetIds(), the implementation for which is shown below. Your application can get the connection for a session as described above and then create commands using the same helper class. If you call CreateCommand() directly on the ADO connection, you'll have a problem when running inside a transaction on SQL Server. The SQL Server ADO implementation requires that you assign the active transaction object to each command. Quino takes care of this bookkeeping for you if you use the helper method.

private IEnumerable<int> GetIds()
{
  using (var helper = AdoDataConnectionTools.GetAdoConnection(Session, "Name"))
  {
    using (var command = helper.CreateCommand())
    {
      command.AdoCommand.CommandText =
        @"SELECT id FROM punchclock__person WHERE lastname = 'Rogers'";

      using (var reader = command.AdoCommand.ExecuteReader())
      {
        while (reader.Read())
        {
          yield return reader.GetInt32(0);
        }
      }
    }
  }
}

There are a few drawbacks to this approach:

  • Your application will make two queries instead of one.
  • The hard-coded SQL will break if you make model changes that affect those tables and fields.
  • The ADO approach only works if the application has a direct connection to the database. An application that uses ADO will not be able to switch to an application-server driver without modification.

In the second part, we will improve on this approach by using the CustomCommandText property of a Quino query. This will allow us to use only a single query. We will also improve maintainability by reducing the amount of code that isn't checked by the compiler (e.g. the SQL text above).

Stay tuned for part 2, coming soon!

Many thanks to Urs for proofreading and suggestions on overall structure.

  1. This article uses features of Quino that will only become available in version 1.12. Almost all of the examples will also work in earlier versions but the AdoDataConnectionTools is not available until 1.12. The functionality of this class can, however, be back-ported if necessary.

  2. More likely, though, is that the Quino schema migration will be prevented from applying updates if there are custom stored procedures that use tables and columns that need to be changed.

v1.11.0: Improvements to local evaluation & remoting

The summary below describes major new features, items of note and breaking changes. The full list of issues is also available for those with access to the Encodo issue tracker.


  • Local evaluation: improved support for local evaluation in combination with remoting, sorting, limits and offsets (QNO-4330, QNO-3655, QNO-4224)
  • Many other bug fixes and minor improvements

Breaking changes

  • No known breaking changes.

Java 8

This article discusses and compares the initial version of Java 8 and C# 4.5.1. I have not used Java 8 and I have not tested that any of the examples -- Java or C# -- even compile, but they should be pretty close to valid.

Java 8 has finally been released and -- drum roll, please -- it has closures/lambdas, as promised! I would be greeting this as champagne-cork--popping news if I were still a Java programmer.1 As an ex-Java developer, I greet this news more with an ambivalent shrug than with any overarching joy. It's a sunny morning and I'm in a good mood, so I'm able to suppress what would be a more than appropriate comment: "it's about time".

Since I'm a C# programmer, I'm more interested in peering over the fence at the pile of goodies that Java just received for its eighth birthday and see if it got something "what I ain't got". I found a concise list of new features in the article Will Java 8 Kill Scala? by Ahmed Soliman and was distraught/pleased2 to discover that Java had in fact gotten two presents that C# doesn't already have.

As you'll see, these two features aren't huge and the lack of them doesn't significantly impact design or expressiveness, but you know how jealousy works:

Jealousy doesn't care.

Jealousy is.

I'm sure I'll get over it, but it will take time.3

Default methods and static interface methods

Java 8 introduces support for static methods on interfaces as well as default methods that, taken together, amount to functionality that is more or less what extension methods bring to C#.
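As a rough illustration, here's what a static interface method looks like in Java 8 (the interface and method here are invented for the example, not taken from any real library):

```java
// Hypothetical interface for illustration only.
interface StringTransformer {
    // A static method belongs to the interface itself and is invoked
    // through the interface name rather than on an instance.
    static String collapseWhitespace(String value) {
        return value.trim().replaceAll("\\s+", " ");
    }
}

public class StaticInterfaceDemo {
    public static void main(String[] args) {
        System.out.println(StringTransformer.collapseWhitespace("  hello   world  "));
    }
}
```

Before Java 8, helpers like this had to live in a separate utility class (e.g. the Collections companion to Collection); now they can sit next to the interface they serve.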

In Java 8, you can define static methods on an interface, which is nice, but it becomes especially useful when combined with the keyword default on those methods. As defined in Default Methods:

Default methods enable you to add new functionality to the interfaces of your libraries and ensure binary compatibility with code written for older versions of those interfaces.

In Java, you no longer have to worry that adding a method to an interface will break implementations of that interface in other jar files that have not yet been recompiled against the new version of the interface. You can avoid that by adding a default implementation for your method. This applies only to those methods where a default implementation is possible, of course.

The page includes an example but it's relatively obvious what it looks like:

public interface ITransformer
{
  string Adjust(string value);

  string NewAdjust(string value)
  {
    return value.Replace(' ', '\t');
  }
}

How do these compare with extension methods in C#?

Extension methods are nice because they allow you to quasi-add methods to an interface without requiring an implementor to actually implement them. My rule of thumb is that any method that can be defined purely in terms of the public API of an interface should be defined as an extension method rather than added to the interface.

Java's default methods are a twist on this concept that addresses a limitation of extension methods. What is that limitation? That the method definition in the extension method can't be overridden by the actual implementation behind the interface. That is, the default implementation can be expressed purely in terms of the public interface, but perhaps a specific implementor of the interface would like to do that plus something more. Or would perhaps like to execute the extension method in a different way, but only for a specific implementation. There is no way to do this with extension methods.

Interface default methods in Java 8 allow you to provide a fallback implementation but also allow any class to actually implement that method and override the fallback.
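To make the difference concrete, here is a small, self-contained Java sketch (all names invented for this example) in which one implementation accepts the fallback while another overrides it:

```java
// All types here are invented for illustration.
interface Greeter {
    String name();

    // Fallback implementation, expressed purely in terms of the
    // interface's own public API.
    default String greet() {
        return "Hello, " + name();
    }
}

// Accepts the fallback: greet() comes from the interface.
class PlainGreeter implements Greeter {
    public String name() { return "world"; }
}

// Overrides the fallback -- something a C# extension method
// could never let a specific implementation do.
class LoudGreeter implements Greeter {
    public String name() { return "world"; }

    @Override
    public String greet() {
        return "HELLO, " + name().toUpperCase() + "!";
    }
}

public class DefaultMethodDemo {
    public static void main(String[] args) {
        System.out.println(new PlainGreeter().greet()); // Hello, world
        System.out.println(new LoudGreeter().greet());  // HELLO, WORLD!
    }
}
```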

Functional Interfaces

Functional interfaces are a nice addition, too, and something I've wanted in C# for some time. Erik Meijer of Microsoft doesn't miss an opportunity to point out that this is a must for functional languages (he's exaggerating, but the point is taken).

Saying that a language supports functional interfaces simply means that a lambda defined in that language can be assigned to any interface with a single method that has the same signature as that lambda.

An example in C# should make things clearer:

public interface ITransformer
{
  string Adjust(string value);
}

public static class Utility
{
  public static void WorkOnText(string text, ITransformer transformer)
  {
    // Do work
  }
}

In order to call WorkOnText() in C#, I am required to define a class that implements ITransformer. There is no other way around it. However, in a language that allows functional interfaces, I could call the method with a lambda directly. The following code looks like C# but won't actually compile.

Utility.WorkOnText(
  "Hello world",
  s => s.Replace("Hello", "Goodbye cruel")
);

For completeness, let's also see how much extra code it takes to do this in C#, which has no functional interfaces.

public class PessimisticTransformer : ITransformer
{
  public string Adjust(string value)
  {
    return value.Replace("Hello", "Goodbye cruel");
  }
}

Utility.WorkOnText(
  "Hello world",
  new PessimisticTransformer()
);

That's quite a huge difference. It's surprising that C# hasn't gotten this functionality yet. It's hard to see what the downside is for this feature -- it doesn't seem to alter semantics.

While it is supported in Java, there are other restrictions. The signature has to match exactly. What happens if we add an optional parameter to the interface-method definition?

public interface ITransformer
{
  string Adjust(string value, ITransformer additional = null);
}

In the C# example, the class implementing the interface would have to be updated, of course, but the code at calling location remains unchanged. The functional interface's definition is the calling location, so the change would be closer to the implementation instead of more abstracted from it.

public class PessimisticTransformer : ITransformer
{
  public string Adjust(string value, ITransformer additional = null)
  {
    return value.Replace("Hello", "Goodbye cruel");
  }
}

// Using a class
Utility.WorkOnText(
  "Hello world",
  new PessimisticTransformer()
);

// Using a functional interface
Utility.WorkOnText(
  "Hello world",
  (s, a) => s.Replace("Hello", "Goodbye cruel")
);

I would take the functional interface any day.

Java Closures

As a final note, Java 8 has finally acquired closures/lambdas4 but there is a limitation on which functions can be passed as lambdas. It turns out that the inclusion of functional interfaces is a workaround for not having first-class functions in the language.

Citing the article,

[...] you cannot pass any function as first-class to other functions, the function must be explicitly defined as lambda or using Functional Interfaces

While in C# you can assign any method with a matching signature to a lambda variable or parameter, Java requires that the method first be assigned to a variable that is "explicitly assigned as lambda" in order to use it. This isn't a limitation on expressiveness but may lead to clutter.
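To illustrate, here is a minimal, runnable Java sketch (all names invented) of assigning lambdas to functional interfaces, both a hand-rolled one and a built-in java.util.function type:

```java
import java.util.function.Function;

// Invented single-method interface; the annotation is optional but
// documents the intent and lets the compiler enforce it.
@FunctionalInterface
interface Transformer {
    String adjust(String value);
}

public class FunctionalInterfaceDemo {
    // Any lambda whose signature matches adjust() can be passed here.
    public static String workOnText(String text, Transformer transformer) {
        return transformer.adjust(text);
    }

    public static void main(String[] args) {
        // Assign a lambda to the hand-rolled functional interface...
        Transformer t = s -> s.replace("Hello", "Goodbye cruel");
        System.out.println(workOnText("Hello world", t));

        // ...or use the built-in java.util.function types the same way.
        Function<String, String> reverse =
            s -> new StringBuilder(s).reverse().toString();
        System.out.println(reverse.apply("abc")); // cba
    }
}
```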

In C# I can write the following:

public static string Twist(string value)
{
  return new string(value.Reverse().ToArray());
}

public static string Alter(this string value, Func<string, string> func)
{
  return func(value);
}

public static string ApplyTransformations(string value)
{
  return value.Alter(Twist).Alter(s => new string(s.Reverse().ToArray()));
}

This example shows how you can declare a Func to indicate that the parameter is a first-class function. I can pass the Twist function or I can pass an inline lambda, as shown in ApplyTransformations. However, in Java, I can't declare a Func: only functional interfaces. In order to replicate the C# example above in Java, I would do the following:

public String twist(String value)
{
  return new StringBuilder(value).reverse().toString();
}

public String alter(String value, ITransformer transformer)
{
  return transformer.adjust(value);
}

public String applyTransformations(String value)
{
  return alter(alter(value, s -> twist(s)), s -> new StringBuilder(s).reverse().toString());
}

Note that the Java example cannot pass twist directly; instead, it wraps it in a lambda so that it can be passed as a functional interface. Also, the C# example uses an extension method, which allows me to "add" methods to the string class, which is not really possible in Java.

Overall, though, while these things feel like deal-breakers to a programming-language snob5 -- especially those who have a choice as to which language to use -- Java developers can rejoice that their language has finally acquired features that both increase expressiveness and reduce clutter.6

As a bonus, as a C# developer, I find that I don't have to be so jealous after all.

Though I'd still really like me some functional interfaces.

  1. Even if I were still a Java programmer, the champagne might still stay in the bottle because adoption of the latest runtime in the Java world is extremely slow-paced. Many projects and products require a specific, older version of the JVM and preclude updating to take advantage of newer features. The .NET world naturally has similar limitations but the problem seems to be less extreme.

  2. Distraught because the features look quite interesting and useful and C# doesn't have them and pleased because (A) I am not so immature that I can't be happy for others and (B) I know that innovation in other languages is an important driver in your own language.

  3. Totally kidding here. I'm not insane. Take my self-diagnosis with a grain of salt.

  4. I know that lambdas and closures are not by definition the same and that I'm not supposed to use them interchangeably. I'm trying to make sure that a C# developer who reads this article doesn't read "closure" (which is technically what a lambda in C# is, because it's capable of "closing over" or capturing variables) and fail to understand that it means "lambda".

  5. Like yours truly.

  6. Even if most of those developers won't be able to use those features for quite some time because they work on projects or products that are reluctant to upgrade.

Setting up the Lenovo T440p Laptop

This article originally appeared on earthli News and has been cross-posted here.

I recently got a new laptop and ran into a few issues while setting it up for work. There's a tl;dr at the end for the impatient.

Lenovo has finally spruced up their lineup of laptops with a series that features:

  • An actually usable and large touchpad
  • A decent and relatively sensibly laid-out keyboard
  • Very long battery life (between 6 and 9 hours, depending on use)
  • Low-power Haswell processor
  • 14-inch full-HD (1920x1080)
  • Dual graphics cards
  • Relatively light at 2.1kg
  • Relatively small/thin form-factor
  • Solid-feeling, functional design w/latchless lid
  • Almost no stickers

I recently got one of these. Let's get it set up so that we can work.

Pop in the old SSD

Instead of setting up the hard drive that I ordered with the laptop, I'm going to transplant the SSD I have in my current laptop to the new machine. Though this maneuver no longer guarantees anguish as it would have in the old days, we'll see below that it doesn't work 100% smoothly.

As mentioned above, the case is well-designed and quite elegant. All I need is a Phillips screwdriver to take out two screws from the back and then a downward slide on the backing plate pulls off the entire bottom of the laptop.1

At any rate, I was able to easily remove the new/unwanted drive and replace it with my fully configured SSD. I replaced the backing plate, but didn't put the screws back in yet. I wasn't that confident that it would work.

My pessimism turns out to have been well-founded. I boot up the machine and was greeted by the BIOS showing me a list of all of the various places that it had checked in order to find a bootable volume.

It failed to find a bootable volume anywhere.

Try again. Still nothing.

UEFI and BIOS usability

From dim memory, I recalled that there's something called UEFI for newer machines and that Windows 8 likes it and that it may have been enabled on the drive that shipped with the laptop but almost certainly isn't on my SSD.

Snooping about in the BIOS settings -- who doesn't like to do that? -- I find that UEFI is indeed enabled. I disable that setting as well as something called UEFI secure-boot and try again. I am rewarded within seconds with my Windows 8 lock screen.

I was happy to have been able to fix the problem, but was disappointed that the error messages thrown up by a very modern BIOS are still so useless. To be more precise, the utter lack of error messages or warnings or hints was disappointing.

I already have access to the BIOS, so it's not a security issue. There is nothing to be gained by hiding from me the fact that the BIOS checked a potential boot volume and failed to find a UEFI bootable sector but did find a non-UEFI one. Would it have killed them to show the list of bootable volumes with a little asterisk or warning telling me that a volume could boot were I to disable UEFI? Wouldn't that have been nice? I'm not even asking them to let me jump right to the setting, though that would be above and beyond the call of duty.

Detecting devices

At any rate, we can boot, and Windows 8, after "detecting devices" for a few seconds, was able to start up to the lock screen. Let's log in.

I have no network access.

Checking the Device Manager reveals that a good half-dozen devices could not be recognized and no drivers were installed for them.

This is pathetic. It is 2014, people. Most of the hardware in this machine is (A) very standard equipment to have on a laptop and (B) made by Intel. Is it too much to ask to have the 20GB Windows 8 default installation include generic drivers that will work with even newer devices?

The drivers don't have to be optimized; they just have to work well enough to let the user work on getting better ones. Windows is able to do this for the USB ports, for the display and for the mouse and keyboard because it would be utter failure for it not to be able to do so. It is an ongoing mystery how network access has not yet been promoted to this category of mandatory devices.

When Windows 8 is utterly incapable of setting up the network card, then there is a very big problem. A chicken-and-egg problem that can only be solved by having (A) a USB stick and (B) another computer already attached to the Internet.

Thank goodness Windows 8 was able to properly set up the drivers for the USB port or I'd have had a sense-less laptop utterly incapable of ever bootstrapping itself into usefulness.

On the bright side, the Intel network driver was only 1.8MB, it installed with a single click and it worked immediately for both the wireless and Ethernet cards. So that was very nice.

Update System

The obvious next step once I have connectivity is to run Windows Update. That works as expected and even finds some extra driver upgrades once it can actually get online.

Since this is a Lenovo laptop, there is also the Lenovo System Update, which updates more drivers, applies firmware upgrades and installs/updates some useful utilities.

At least it would do all of those things if I could start it.

That's not 100% fair. It kind of started. It's definitely running, there's an icon in the task-bar and the application is not using any CPU. When I hover the icon, it even shows me a thumbnail of a perfectly rendered main window.

Click. Nothing. The main window does not appear.

Fortunately, I am not alone. As recently as November of 2013, there were others with the same problem.2 Unfortunately, no one was able to figure out why it happens nor were there workarounds offered.

I had the sound enabled, though, and noticed that when I tried to execute a shortcut, it triggered an alert. And the System Update application seemed to be in the foreground -- somehow -- despite the missing main window.

Acting on a hunch, I pressed Alt + PrtSc to take a screenshot of the currently focused window. Paste into an image editor. Bingo.


Now that I could read the text on the main window, I could figure out which keys to press. I didn't get a screenshot of the first screen, but it showed a list of available updates. I pressed the following keys to initiate the download:

  • Alt + S to "Select all"
  • Alt + N to move to the next page
  • Alt + D to "Download" (the screenshot above)

Hovering the mouse cursor over the taskbar icon revealed the following reassuring thumbnail of the main window:


Lucky for me, the System Update was able to get the "restart now" onto the screen so that I could reboot when required. On reboot, the newest version of Lenovo System Update was able to make use of the main window once again.


tl;dr

  • If you can't boot off of a drive on a new machine, remember that UEFI might be getting in the way.
  • If you're going to replace the drive, make sure that you download the driver for your machine's network card to that hard drive so that you can at least establish connectivity and continue bootstrapping your machine back to usability.
  • Make sure you update the Lenovo System tools on the destination drive before transferring it to the new machine to avoid weird software bugs.


  1. I'm making this sound easier than it was. I'm not so well-versed in cracking open cases anymore. I was forced to download the manual to look up how to remove the backing plate. The sliding motion would probably have been intuitive for someone more accustomed to these tasks.

  2. In my searches for help, manuals and other software, I came across the following download, offered on Lenovo's web site. You can download something called "Hotkey Features Integration for Windows 8.1" and it only needs 11.17GB of space.